Tags: akka, lagom

In Lagom: as concurrent HTTP calls increase, the thread count (akka.actor.default-dispatcher) keeps increasing. How do we control this behaviour?

We observe that as concurrent HTTP calls to our service increase, the thread count (akka.actor.default-dispatcher) keeps increasing (see the screenshot from VisualVM). After the requests stop, the thread count doesn't go down either, and most of these threads remain in the parked state. Is this proportional increase in threads expected behaviour? How do we control it and reuse the same actors, or kill the actors after a request has been served?

I’m running the shopping-cart example from lagom-samples.

akka.actor.default-dispatcher {
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 2
    parallelism-factor = 1.0
    parallelism-max = 6
  }
  throughput = 1
}
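For reference, the fork-join executor's target parallelism is derived from the CPU count. Assuming Akka's usual scaling formula (ceil(cores × parallelism-factor), clamped between parallelism-min and parallelism-max), the config above caps the pool's target parallelism at 6. A minimal sketch of that calculation:

```java
public class ForkJoinSizing {
    // Assumed scaling formula (mirrors Akka's ThreadPoolConfig.scaledPoolSize):
    // ceil(cores * factor), clamped between min and max.
    static int scaledPoolSize(int cores, int min, double factor, int max) {
        return Math.min(Math.max((int) Math.ceil(cores * factor), min), max);
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        // With parallelism-min = 2, parallelism-factor = 1.0, parallelism-max = 6:
        System.out.println("target parallelism: " + scaledPoolSize(cores, 2, 1.0, 6));
    }
}
```

Note that this is only the *target* parallelism; as discussed below, a fork-join pool may temporarily exceed it when its threads block.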

[Screenshot: VisualVM thread analysis of the Lagom application]

Edit: Using thread-pool-executor as akka.actor.default-dispatcher, the service stops serving any requests after multiple (20-30) concurrent requests. Even the console becomes unresponsive.

default-dispatcher {
  type = Dispatcher
  executor = "default-executor"
  throughput = 1
  default-executor {
    fallback = "thread-pool-executor"
  }
  thread-pool-executor {
    keep-alive-time = 60s
    core-pool-size-min = 8
    core-pool-size-factor = 3.0
    core-pool-size-max = 64
    max-pool-size-min = 8
    max-pool-size-factor = 3.0
    max-pool-size-max = 64
    task-queue-size = -1
    task-queue-type = "linked"
    allow-core-timeout = on
  }
}

The introduction to the Akka docs highlights that "Millions of actors can be efficiently scheduled on a dozen of threads". If that is the case, why would we need to create threads in proportion to the number of concurrent requests?
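For illustration only (plain JDK executors, not Akka's actor scheduler): a large number of small, non-blocking tasks can indeed be multiplexed over a tiny fixed pool, which is the behaviour that quote describes:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class ManyTasksFewThreads {
    static int run() throws Exception {
        // A tiny fixed pool, standing in for a small dispatcher.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        AtomicInteger completed = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(100_000);
        // 100,000 short, non-blocking tasks: all complete on just 2 threads.
        for (int i = 0; i < 100_000; i++) {
            pool.submit(() -> {
                completed.incrementAndGet();
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        return completed.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run() + " tasks completed on 2 threads");
    }
}
```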


Solution

  • Are you blocking in your calls? E.g., are you calling Thread.sleep, or using some synchronous IO? If so, then what you're seeing is entirely expected.

    Lagom is an asynchronous framework. All the IO and inter-service communication mechanisms it provides are non-blocking. Its thread pools are tuned for non-blocking use. If you only use non-blocking calls, you will see the thread pools behave with very low thread counts, and you won't find things going unresponsive.

    But the moment you start blocking, all bets are off. Blocking requires one thread per request.

    The default dispatcher that Akka uses is a fork join pool. It is designed for asynchronous use. If you block a thread in its pool, it will start another thread to ensure other tasks can continue. So that's why you see the thread pool grow. Don't block, and this won't happen.

    The thread pool executor, on the other hand, uses a fixed number of threads. If you block on this, you risk deadlocking the entire application. Don't block, and this won't happen.
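    That deadlock failure mode can be sketched with plain java.util.concurrent (not Lagom code; a hypothetical single-thread pool stands in for a fully saturated thread-pool-executor). Every worker ends up blocked waiting for work that no free thread can ever run:

    ```java
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    public class FixedPoolDeadlock {
        static String run() throws Exception {
            // Fixed pool with one thread: the degenerate case of a saturated pool.
            ExecutorService pool = Executors.newFixedThreadPool(1);

            Future<String> outer = pool.submit(() -> {
                // The only worker thread now blocks waiting on a task that
                // can never start, because no thread is free to run it.
                Future<String> inner = pool.submit(() -> "done");
                return inner.get();
            });

            try {
                return outer.get(2, TimeUnit.SECONDS);
            } catch (TimeoutException e) {
                return "deadlocked"; // the pool has starved itself
            } finally {
                pool.shutdownNow();
            }
        }

        public static void main(String[] args) throws Exception {
            System.out.println(run());
        }
    }
    ```

    With blocking requests, the same starvation happens to a larger fixed pool once enough requests arrive at the same time, which matches the 20-30 concurrent requests observed in the question.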