In the ZeroMQ guide, there is this:

> If you use `inproc` and socket pairs, you are building a tightly-bound application, i.e., one where your threads are structurally interdependent. Do this when low latency is really vital.
I care a lot about latency for my application.
Question: Is it the "`inproc`-ness" alone that makes it low-latency? Or is there something special about "`inproc` + `PAIR`" that is faster than `inproc` + "WHATEVER"?
> Q : Is it the "`inproc`-ness" alone that makes it low-latency?
Yes . . . and, as bazza has already explained in general terms yesterday, let me add a few cents:
1) the `inproc://` transport class is the stack-less, protocol-free and pure (thus fast & almost zero-latency) RAM memory-region mapping vehicle, and also (as was asked in the second question):
> Q : Or is there something special about "`inproc` + `PAIR`" that is faster than `inproc` + "WHATEVER"?
2) the `PAIR/PAIR` Scalable Formal Communication Pattern archetype adds no extra, archetype-related, multi-(many-)party behavioural handshaking (unlike some of the other archetypes, whose distributed Finite-State-Automata express the pattern's behaviour states and transitions among all the distributed peers, rather than over an exclusive 1-on-1 digital fire-hose as `PAIR/PAIR` does). So nothing gets added here beyond thread-safe local pointer mechanics on either side, plus some `Context()`-instance signalling; see the sketch below.
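To make points 1) and 2) concrete, here is a minimal sketch (in Python, assuming pyzmq) of two threads wired by `PAIR/PAIR` over `inproc://`. The endpoint name `inproc://pipe` and the payloads are illustrative only, not anything mandated by ZeroMQ:

```python
import threading
import zmq

ctx = zmq.Context()

def worker():
    # inproc:// endpoints resolve inside a Context() instance, so the
    # peer thread must use the very same ctx object, not a new one
    s = ctx.socket(zmq.PAIR)
    s.connect("inproc://pipe")        # hypothetical endpoint name
    s.send(b"pong: " + s.recv())      # echo back over the 1-on-1 link
    s.close()

# create the bind() side first (required for inproc:// on older libzmq)
main = ctx.socket(zmq.PAIR)
main.bind("inproc://pipe")

t = threading.Thread(target=worker)
t.start()

main.send(b"ping")
print(main.recv())                    # b'pong: ping'

t.join()
main.close()
ctx.term()
```

Note that each thread owns its own socket (sockets are not thread-safe), while the message "transfer" itself is just the local-RAM pointer mechanics described above.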
Incidentally, you may have noticed that for a pure-`inproc://` transport-class application, you may instantiate the context as `Context( 0 )`, having zero I/O threads, as in these cases the `Context()` signalling does not need them at all: it just manages pointer tricks over local-RAM memory regions (so cute, isn't it?).
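A minimal sketch of that zero-I/O-threads trick, again in Python with pyzmq (the endpoint name is illustrative only):

```python
import zmq

# zero I/O threads: fine for an inproc://-only application, since
# inproc:// signalling never touches the I/O-thread machinery
ctx = zmq.Context(io_threads=0)

a = ctx.socket(zmq.PAIR)
b = ctx.socket(zmq.PAIR)
a.bind("inproc://no-io-threads")      # hypothetical endpoint name
b.connect("inproc://no-io-threads")

b.send(b"hello")
print(a.recv())                       # b'hello'

for s in (a, b):
    s.close()
ctx.term()
```

The same `Context(0)` would fail to move messages over `tcp://` or `ipc://`, which do need at least one I/O thread.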