Tags: java, performance, parallel-processing, single-threaded

Can having multiple threads on a single core system still improve performance?


I just learned the basics of parallel processing in Java. I read this question: Multiple threads and performance on a single CPU, and wondered whether there is another reason why multiple threads might be faster than a single thread on a single-core system. I was thinking about how every thread has its own piece of memory that it uses. Imagine in Java that FXML was part of the main thread. This would likely increase the size of the main thread's memory, which in turn might slow the thread down because more of its data spills to swap, or worse, it has to make more trips to main memory (I believe the current thread's values are copied into the cache).

To sum it up, can making multiple threads on a single-core system increase performance due to the separated memory?


Solution

  • Having multiple threads on a single-core CPU can improve performance in most cases, because most of the time a thread is not busy doing computations; it is waiting for things to happen.

    This includes I/O, such as waiting for a disk operation to complete, waiting for the user to press a key on the keyboard or move a mouse, etc., and even some non-I/O situations, such as waiting for a different thread to signal that an event has occurred, waiting for a timer to fire, etc.

    So, since threads spend the vast majority of their time doing nothing but waiting, they compete against each other for the CPU far less frequently than you might think.

    That's why if you look at the number of active threads in a modern desktop computer you are likely to see hundreds of threads, and if you look at a server, you are likely to see thousands of threads. That's clearly a lot more than the number of cores that the computer has, and obviously, it would not be done if there was no benefit from it.

    The only situation where multiple threads on a single core will not improve performance is when the threads are busy doing non-stop computations. This tends to only happen in specialized situations, like scientific computing, cryptocurrency mining, etc.

    So, multiple threads on a single-core system do usually increase performance, but this has very little to do with memory, and to the extent that it does, it has nothing to do with any notion of "separated" memory, whatever you mean by that term.

    As a matter of fact, running multiple threads on the same core, or even on different cores of the same chip, that mostly access different areas of memory tends to hurt performance. Each time the CPU switches from one thread to another, it begins to access a different set of memory locations, which are unlikely to be in the CPU's cache, so each context switch tends to be followed by a barrage of cache misses, which represent overhead. But usually, it is still worth it.
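To make the main point concrete, here is a minimal sketch (my own illustration, not from the answer) that simulates blocking I/O with `Thread.sleep`. Run sequentially, two 200 ms "I/O" waits take about 400 ms; run on two threads, the waits overlap and take about 200 ms in total, even though no extra CPU work can happen in parallel on a single core:

```java
import java.time.Duration;
import java.time.Instant;

public class IoBoundDemo {

    // Stand-in for a blocking I/O call (e.g. a disk read or network request).
    // While a thread sleeps here, the scheduler is free to run another thread.
    static void fakeIo() {
        try {
            Thread.sleep(200);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Sequential: two "I/O" operations back to back (~400 ms total).
        Instant t0 = Instant.now();
        fakeIo();
        fakeIo();
        long sequentialMs = Duration.between(t0, Instant.now()).toMillis();

        // Two threads: while one is blocked waiting, the other gets the CPU,
        // so the two waits overlap (~200 ms total) even on one core.
        Instant t1 = Instant.now();
        Thread a = new Thread(IoBoundDemo::fakeIo);
        Thread b = new Thread(IoBoundDemo::fakeIo);
        a.start();
        b.start();
        a.join();
        b.join();
        long threadedMs = Duration.between(t1, Instant.now()).toMillis();

        System.out.println("sequential ~" + sequentialMs + " ms, threaded ~" + threadedMs + " ms");
        System.out.println(threadedMs < sequentialMs ? "threads overlapped the waits" : "no overlap");
    }
}
```

If you replace `fakeIo` with a busy loop that spins the CPU instead of sleeping, the threaded version stops winning on a single core, which is exactly the "non-stop computations" exception described above.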