I've recently moved to Play framework 2.0, and I have some questions about how controllers actually work in Play.
The Play docs say:
Because of the way Play 2.0 works, the action code must be as fast as possible (ie. non blocking).
However in another part of the docs:
/actions {
  router = round-robin
  nr-of-instances = 24
}
and
actions-dispatcher = {
  fork-join-executor {
    parallelism-factor = 1.0
    parallelism-max = 24
  }
}
It seems that 24 actors are allocated for handling controller actions. I guess every request takes one of those actors for the lifetime of the request. Is that right?
Also, what does parallelism-factor mean, and how does fork-join-executor differ from thread-pool-executor?
Also, the docs say that Async should be used for long computations. What qualifies as a long computation? 100 ms? 300 ms? 5 seconds? 10 seconds? My guess would be anything over a second, but how do you determine that?
The reason I'm asking is that testing async controller calls is much harder than testing regular calls. You have to spin up a fake application and perform a full-fledged request instead of just calling a method and checking its return value.
Even if that weren't the case, I doubt that wrapping everything in Async and Akka.future is the way to go.
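For context, the pattern I mean is roughly Async { Akka.future { ... }.map(r => Ok(...)) } in Play 2.0 Scala. Since that needs a running Play application, here is a self-contained sketch of the same idea using plain scala.concurrent instead (all names here are mine, purely illustrative): the action hands the slow work to a Future and returns immediately, and a test can simply await the result, as suggested on IRC below.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Stand-in for a long computation that would otherwise block a request thread.
def expensiveSum(n: Int): Int = (1 to n).sum

// The "Async" idea: the action returns a Future immediately instead of a
// finished result; the actual work runs on another thread.
def asyncAction(): Future[String] = Future {
  val result = expensiveSum(100)
  s"result=$result"
}

// A test can block on the future until the data is ready.
val body = Await.result(asyncAction(), 5.seconds)
println(body) // prints "result=5050"
```

The point is that the request-handling thread is released as soon as the Future is created; only the awaiting test blocks.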
I asked about this in the #playframework IRC channel, but there was no answer, and it seems I'm not the only one unsure how things should be done.
Just to reiterate:

- Is it right that every request allocates one actor from the /actions pool?
- What does parallelism-factor mean, and why is it 1?
- How does fork-join-executor differ from thread-pool-executor?
- How long should a calculation be before it needs to be wrapped in Async?
- Is it possible to test an async controller method without spinning up a fake application?

Thanks in advance.
Edit: some stuff from IRC.
<imeredith> arturaz: i cant be boethered writing up a full reply but here are key points
<imeredith> arturaz: i believe that some type of CPS goes on with async stuff which frees up request threads
<arturaz> CPS?
<imeredith> continuations
<imeredith> when the future is finished, or timedout, it then resumes the request
<imeredith> and returns data
<imeredith> arturaz: as for testing, you can do .await on the future and it will block until the data is ready
<imeredith> (i believe)
<imeredith> arturaz: as for "long" and parallelism - the longer you hold a request thread, the more parrellism you need
<imeredith> arturaz: ie servlets typically need a lot of threads because you have to hold the request thread open for a longer time then if you are using play async
<imeredith> "Is it right that every request allocates one actor from /actions pool?" - yes i belive so
<imeredith> "What does parallelism-factor mean and why is it 1?" - im guessing this is how many actors there are in the pool?
<imeredith> or not
<imeredith> "How does fork-join-executor differ from thread-pool-executor?" -no idea
<imeredith> "How long should a calculation be to become wrapped in Async?" - i think that is the same as asking "how long is a piece of string"
<imeredith> "Is is not possible to test async controller method without spinning up fake applications?" i think you should be able to get the result
<viktorklang> imeredith: A good idea is to read the documentation: http://doc.akka.io/docs/akka/2.0.3/general/configuration.html ( which says parallelism-factor is: # Parallelism (threads) ... ceil(available processors * factor))
<arturaz> viktorklang, don't get me wrong, but that's the problem - this is not documentation, it's a reminder to yourself.
<arturaz> I have absolutely no idea what that should mean
<viktorklang> arturaz: It's the number of processors available multiplied with the factor you give, and then rounded up using "ceil". I don't know how it could be more clear.
<arturaz> viktorklang, how about: This factor is used in calculation `ceil(number of processors * factor)` which describes how big is a thread pool given for your actors.
<viktorklang> arturaz: But that is not strictly true since the size is also guarded by your min and max values
<arturaz> then why is it there? :)
<viktorklang> arturaz: Parallelism (threads) ... ceil(available processors * factor) could be expanded by adding a big of conversational fluff: Parallelism ( in other words: number of threads), it is calculated using the given factor as: ceil(available processors * factor)
<viktorklang> arturaz: Because your program might not work with a parallelism less than X and you don't want to use more threads than X (i.e if you have a 48 core box and you have 4.0 as factor that'll be a crapload of threads)
<viktorklang> arturaz: I.e. scheduling overhead gives diminishing returns, especially if ctz switching is across physical slots.
<viktorklang> arturaz: Changing thread pool sizes will always require you to have at least basic understanding on Threads and thread scheduling
<viktorklang> arturaz: makes sense?
<arturaz> yes
<arturaz> and thank you
<arturaz> I'll add this to my question, but this kind of knowledge would be awesome docs ;)
When a message arrives at an actor, the request holds on to that actor for as long as it takes to process the message. If you process the request synchronously (calculating the entire response while processing that message), that actor cannot service other requests until the response is done. If instead, upon receiving the request, you send the work off to another actor, then the actor that received the request can start on the next request while the first one is being handled by other actors.
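A self-contained sketch of that hand-off, using a plain thread pool in place of Akka actors (all names are mine, purely illustrative): the request handler only enqueues the slow work, so it is immediately free to accept the next request.

```scala
import java.util.concurrent.{Executors, TimeUnit}
import java.util.concurrent.atomic.AtomicInteger

// A separate worker pool produces the responses.
val workers = Executors.newFixedThreadPool(2)
val completed = new AtomicInteger(0)

// The "request-handling actor": it only hands the slow part to another
// thread and returns right away, instead of computing the response itself.
def handleRequest(id: Int): Unit = {
  workers.submit(new Runnable {
    def run(): Unit = {
      Thread.sleep(10) // simulate slow work for request `id`
      completed.incrementAndGet()
    }
  })
}

(1 to 5).foreach(handleRequest) // all 5 "requests" are accepted instantly
workers.shutdown()
workers.awaitTermination(5, TimeUnit.SECONDS)
println(completed.get) // prints 5
```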
The number of threads used for actors is "num cpus * parallelism-factor" (you can, however, specify min and max).
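As viktorklang explained in the IRC log, the actual pool size is ceil(available processors * factor), clamped by the min and max settings. A small sketch of that rule (the helper function is mine, not Akka API):

```scala
// Fork-join pool sizing as described in the Akka configuration docs:
// ceil(processors * factor), then clamped to [min, max].
def poolSize(processors: Int, factor: Double, min: Int, max: Int): Int =
  math.min(max, math.max(min, math.ceil(processors * factor).toInt))

// With the config from the question (factor = 1.0, parallelism-max = 24):
println(poolSize(processors = 8,  factor = 1.0, min = 1, max = 24)) // prints 8
println(poolSize(processors = 48, factor = 1.0, min = 1, max = 24)) // capped at 24
```

This also shows why the max matters: on a 48-core box with a large factor you would otherwise get far more threads than is useful.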
Dunno
Unless there are real calculations going on, I'd tend to make asynchronous anything that talks to another system, such as doing I/O with a database or filesystem, and certainly anything that might block the thread. However, since there is so little overhead in passing messages, I don't think there would be a problem with just sending all the work off to other actors.
See the Play Documentation on functional tests about how to test your controllers.