My project uses Docker Compose to create two services (app and postgres). Locally, app (a Clojure application using Compojure, JDBC, Korma, Ragtime, etc.) connects to postgres immediately and without issue. However, when I deploy the application to a Digital Ocean Droplet (1 GB RAM / 30 GB disk / Ubuntu 16.04.2 x64) for testing purposes, app seemingly takes minutes to connect to postgres; e.g., Korma inserts hang for many minutes and then eventually start working without issue. The Droplet is smallish, but it doesn't appear to be starved for resources (based on the output of htop).
Here are the relevant portions of my application:
;; project.clj
(defproject backend "0.1.0-SNAPSHOT"
  :min-lein-version "2.0.0"
  :dependencies [[com.grammarly/perseverance "0.1.2"]
                 [commons-codec/commons-codec "1.4"]
                 [compojure "1.4.0"]
                 [environ "1.0.3"]
                 [clj-http "2.3.0"]
                 [korma "0.4.3"]
                 [lock-key "1.4.1"]
                 [me.raynes/fs "1.4.6"]
                 [midje "1.6.3"]
                 [org.clojure/clojure "1.8.0"]
                 [org.clojure/core.async "0.3.441"]
                 [org.clojure/java.jdbc "0.7.0-alpha2"]
                 [postgresql "9.3-1102.jdbc41"]
                 [ragtime "0.6.0"]
                 [ring-cors "0.1.7"]
                 [ring-mock "0.1.5"]
                 [ring/ring-defaults "0.1.5"]
                 [ring/ring-json "0.4.0"]]
  :plugins [[lein-environ "1.0.3"]
            [lein-midje "3.1.3"]
            [lein-ring "0.9.7"]]
  :aliases {"migrate"  ["run" "-m" "backend.db/ragtime-migrate"]
            "rollback" ["run" "-m" "backend.db/ragtime-rollback"]}
  :ring {:handler backend.handler/app}
  :profiles
  {:dev {:dependencies [[javax.servlet/servlet-api "2.5"]
                        [ring/ring-mock "0.3.0"]]}})
;; db.clj
(ns backend.db
  (:use [korma.core]
        [korma.db])
  (:require [clojure.string :as string]
            [environ.core :as environ]
            [lock-key.core :refer [encrypt-as-base64 decrypt-from-base64]
                           :rename {encrypt-as-base64 encrypt
                                    decrypt-from-base64 decrypt}]
            [ragtime.jdbc :as jdbc]
            [ragtime.repl :as repl]))

(def database-host (environ/env :postgres-port-5432-tcp-addr)) ;; set by Docker
(def database-name (environ/env :database-name))
(def database-password (environ/env :database-password))
(def database-port (environ/env :postgres-port-5432-tcp-port)) ;; set by Docker
(def database-sslmode (environ/env :database-sslmode))
(def database-user (environ/env :database-user))

(def database-url (str "jdbc:postgresql://"
                       database-host
                       ":"
                       database-port
                       "/"
                       database-name
                       "?user="
                       database-user
                       "&password="
                       database-password))

(defn load-config []
  {:datastore  (jdbc/sql-database {:connection-uri database-url})
   :migrations (jdbc/load-resources "migrations")})

(defn ragtime-migrate []
  (repl/migrate (load-config)))

(defn ragtime-rollback []
  (repl/rollback (load-config)))

(defdb db (postgres {:db database-name
                     :host database-host
                     :password database-password
                     :port database-port
                     :user database-user
                     :sslmode database-sslmode}))

(defentity engagements)

(def lock (environ/env :lock))

(defn query-engagement [id]
  (let [engagement (first
                    (select
                     engagements
                     (where {:id (read-string id)})))
        decrypted-email (-> (:email_address engagement)
                            (decrypt lock))]
    (conj engagement {:email_address decrypted-email})))

(defn create-engagement [email-address image-path]
  (let [encrypted-email (encrypt email-address lock)]
    (insert engagements
            (values [{:email_address encrypted-email
                      :image_path image-path}]))))
# docker-compose.yml
app:
  build: .
  volumes:
    - .:/app
  ports:
    - "127.0.0.1:3000:3000"
  links:
    - postgres

postgres:
  build: .
  dockerfile: Dockerfile-postgres
  expose:
    - "5432"
Am I doing something incorrectly? Might this be a JDBC connection pool issue? Is there a convention for debugging this sort of issue?
UPDATE: I can confirm that the problem persists if I run the application directly on the Digital Ocean Droplet, as opposed to via Docker.
TL;DR:
Adding the following flag to project.clj solved my problem: :jvm-opts ["-Djava.security.egd=file:/dev/urandom"]
(HT to Redditor /u/fitzoh!)
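For context, the flag sits at the top level of defproject (a minimal sketch; everything else in my project.clj is unchanged from above):

(defproject backend "0.1.0-SNAPSHOT"
  ;; ... :dependencies, :plugins, etc. as above ...
  ;; Seed the JVM's SecureRandom from the non-blocking /dev/urandom
  ;; instead of the blocking /dev/random:
  :jvm-opts ["-Djava.security.egd=file:/dev/urandom"])

You can confirm the property took effect from a REPL inside the running app with (System/getProperty "java.security.egd").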
As far as I understand it, the issue I was seeing was caused by the JVM making a blocking request for random numbers from /dev/random. Because the Droplet isn't doing much of anything (I/O, network requests, etc.), it takes a long time (minutes, in my case) for the system to gather enough entropy for /dev/random to start producing random numbers.
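If you want to confirm that entropy starvation is the culprit, you can watch the kernel's entropy counter while the app hangs (the path below is Linux-specific; this is just a REPL sketch, not part of the app):

;; Read the kernel's estimate of available entropy (Linux only).
;; Values down in the low hundreds or below mean that reads from
;; /dev/random are likely to block.
(defn available-entropy []
  (Long/parseLong (.trim (slurp "/proc/sys/kernel/random/entropy_avail"))))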
One workaround is to use /dev/urandom, which does not wait for entropy to accumulate and will happily generate (low-quality) random numbers. From this excellent Digital Ocean tutorial:
... however, since it's a non-blocking device, it will continue producing “random” data, even when the entropy pool runs out. This can result in lower quality random data, as repeats of previous data are much more likely. Lots of bad things can happen when the available entropy runs low on a production server, especially when this server performs cryptographic functions.
The other, seemingly more robust workaround (again, from the excellent DO tutorial) is to use a software solution, like haveged.
Based on the HAVEGE principle, and previously based on its associated library, haveged allows generating randomness based on variations in code execution time on a processor. Since it's nearly impossible for one piece of code to take the same exact time to execute, even in the same environment on the same hardware, the timing of running a single or multiple programs should be suitable to seed a random source. The haveged implementation seeds your system's random source (usually /dev/random) using differences in your processor's time stamp counter (TSC) after executing a loop repeatedly.
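Either way, you can measure the blocking directly from a REPL by forcing a read through the explicitly blocking SecureRandom provider (available on Linux JDKs since Java 8; again, a sketch rather than application code):

(import 'java.security.SecureRandom)

;; Time a read from the provider that is hard-wired to /dev/random.
;; With a starved entropy pool this can hang for minutes; with
;; haveged keeping the pool topped up it returns almost immediately.
;; (The egd override above changes where the *default* SecureRandom
;; seeds from, so it won't speed up this particular provider.)
(time
 (let [buf (byte-array 16)]
   (.nextBytes (SecureRandom/getInstance "NativePRNGBlocking") buf)))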