python-2.7, while-loop

Running multiple while true loops independently in python


Essentially I have two "while True:" loops in my code, both right at the end. However, when I run the code, only the first "while True:" loop runs, and the second one is never reached.

For example:

while True:
    print "hi"

while True:
    print "bye"

Here it will continuously print hi, but will never print bye (in the actual code, one loop runs a tracer.execute() and the other listens on a port; each works fine on its own).


Is there any way to get both loops to work at the same time independently?


Solution

  • Yes, there is a way to get both loops to run at the same time independently:

    The behaviour that surprised you follows directly from how a sequential machine (a finite-state automaton) executes a program:

    [0]: any processing always <START>s here
    [1]: read the next instruction
    [2]: execute the instruction
    [3]: go to [1]
    

    The stream of instructions is executed in a pure [SERIAL] manner, one after another; a single control flow has worked no other way since Turing. Because your first loop never finishes, control never reaches the second one.

    Running several streams of instructions at the same time, independently, is called [CONCURRENT] process-scheduling.


    You have several tools for achieving the wanted modus operandi:

    The lighter-weight option is thread-based concurrency (the standard threading module). Because of the Python-specific GIL lock, threads on the physical hardware still execute as [CONCURRENT] interleaved streams rather than truly in parallel: the GIL (knowingly implemented as a very cheap form of collision avoidance) lets only one thread execute Python bytecode at a time, stepping the threads in round-robin order so that no two streams can touch the same Python object at the same moment. If you are fine with this execute-just-one-stream-fragment-at-a-time model, you live in a safe, collision-free world, and both of your loops will make progress.
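    As a minimal sketch of the thread-based option, assuming the standard threading module: a threading.Event is added here only so the demo can terminate; your real loops would keep their plain while True: (the function and variable names are illustrative).

    ```python
    import threading
    import time

    stop = threading.Event()      # added so this demo ends; drop it to run forever

    def say(word):
        # stands in for one of the original "while True:" loop bodies
        while not stop.is_set():
            print(word)
            time.sleep(0.1)

    t_hi = threading.Thread(target=say, args=("hi",))
    t_bye = threading.Thread(target=say, args=("bye",))
    t_hi.start()                  # both loops now run, interleaved under the GIL
    t_bye.start()
    time.sleep(0.5)               # let them print for a while
    stop.set()                    # ask both loops to exit
    t_hi.join()
    t_bye.join()
    ```

    Both "hi" and "bye" now appear interleaved, because the scheduler switches between the two threads instead of letting the first loop monopolise the single instruction stream.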

    Another tool Python may use is process-based parallelism, e.g. joblib.Parallel() combined with joblib.delayed(). Here you have a bit more to master: you now get a set of fully spawned subprocesses, each (yes, each) holding a full copy of the Python state plus all variables (read: a lot of time and memory is needed to spawn them), and with no implicit mutual coordination.
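    Since joblib is a third-party package, the same process-based model can be sketched with the standard library's multiprocessing module instead (a stand-in for the joblib approach; the bounded loop and the names are illustrative assumptions so the demo terminates):

    ```python
    import multiprocessing
    import time

    def loop(word, n):
        # bounded stand-in for one of the original "while True:" bodies
        for _ in range(n):
            print(word)
            time.sleep(0.01)

    if __name__ == "__main__":
        # each Process is a separate interpreter with its own copy of the
        # Python state: no shared GIL, so the loops can run truly in parallel
        p_hi = multiprocessing.Process(target=loop, args=("hi", 5))
        p_bye = multiprocessing.Process(target=loop, args=("bye", 5))
        p_hi.start()
        p_bye.start()
        p_hi.join()
        p_bye.join()
    ```

    Note the spawning cost the answer warns about: each Process pays for a full interpreter start-up and state copy before its loop runs a single step.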

    So decide which form is just-enough for your use-case, and check the overhead-aware re-formulation of Amdahl's Law carefully (it spells out the implications of the costs of going distributed or parallel).
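    The classical formula, extended with a flat overhead term standing in for the spawning costs mentioned above, can be evaluated directly (the function name and the single-number overhead model are illustrative assumptions, not the full re-formulation):

    ```python
    def amdahl_speedup(p, n, overhead=0.0):
        # p        : fraction of the run-time that can be parallelised
        # n        : number of workers
        # overhead : extra cost (as a fraction of the serial run-time)
        #            paid for spawning and coordinating the workers
        return 1.0 / ((1.0 - p) + p / n + overhead)

    print(amdahl_speedup(0.95, 8))        # ideal: ~5.93x
    print(amdahl_speedup(0.95, 8, 0.10))  # with 10% overhead: ~3.72x
    ```

    Even a modest overhead visibly eats into the theoretical speedup, which is why the spawn-heavy process-based route only pays off when each worker has enough work to amortise its start-up cost.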