java, multithreading, locking, cpu, synchronized

How does the CPU switch execution from one thread to another to access a locked resource in Java?


I'm learning about multi-threading in Java and I have a short question. I have a synchronized block or method and two (or more) threads, and only one CPU with a single core, so the two threads will run sequentially, with the CPU deciding when to switch from one thread to the other.

public synchronized void doSomething() {
    // do something
}

So only one thread can execute this method at a time because it's synchronized. The other thread is blocked and cannot execute the method until the first thread finishes.
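For context, I imagine both threads sharing one instance and calling the method, something like this (the class name MyClass and the thread names are just made up for the example):

    MyClass shared = new MyClass();   // both threads use the same object, so they contend for the same lock
    Thread thread1 = new Thread(shared::doSomething, "thread1");
    Thread thread2 = new Thread(shared::doSomething, "thread2");
    thread1.start();
    thread2.start();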

I think there are two possible situations. In the first, the CPU could switch execution from thread1 to thread2 before thread1 has finished executing doSomething(). If the CPU does that, thread1 stops partway through doSomething() and leaves it unfinished, and thread2 sees that doSomething() is locked because thread1 is executing it. After that, I think the CPU switches back to thread1 and continues executing doSomething(), and by the time the CPU switches back to thread2, thread1 may even have started executing doSomething() another time.

In the second situation, thread1 executes doSomething() and thread2 is blocked for the entire period in which thread1 executes the method. After thread1 finishes executing doSomething(), the CPU switches to thread2, and that thread starts executing doSomething() because at that moment the method is not locked.

Can someone explain to me which situation is correct?


Solution

  • You're missing part of the picture. It's what people call "the scheduler." It's the component of the operating system that is responsible for deciding which threads run, when, and for how long.

    How does the CPU switch execution from one thread to another to access a locked resource in Java?

    It doesn't. That's not the CPU's job. The CPU just executes simple instructions, one after the other, until it gets interrupted by some external hardware event. Then it switches, not to another thread, but to an interrupt handler routine in the operating system.

    One of the external interrupt sources is a (heartbeat) timer that interrupts on a regular schedule, somewhere between tens and thousands of times per second, depending on the OS. Each time the heart beats/the timer ticks, the interrupt handler calls the scheduler, and the scheduler decides which program thread should be the next to run upon return from the interrupt.

    The scheduler has a collection of so-called "queues," which may or may not be actual FIFO queues, and every live thread in the system belongs to exactly one queue. The most important queue is the run queue, which contains all of the threads that are "ready to run." If some thread is in the run queue, the only reason it is not actually running is that it's waiting for the scheduler to put it on a CPU.

    Before the OS returns from the timer interrupt, the scheduler picks a thread from the run queue to be the one that will run next. The queue is never empty because even when no application thread is ready-to-run, the system's idle thread always is ready.
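    In Java you can see a rough reflection of this through Thread.getState(): a thread that is running or sitting on the run queue reports RUNNABLE, while a thread parked on some wait queue reports one of the waiting states. A minimal sketch (the class and thread names are just for illustration):

        public class RunQueueDemo {
            public static void main(String[] args) throws InterruptedException {
                // A thread that spins forever is always ready to run.
                Thread busy = new Thread(() -> { while (true) { } }, "busy");
                // A thread that sleeps has been moved off the run queue onto a timer wait.
                Thread asleep = new Thread(() -> {
                    try { Thread.sleep(60_000); } catch (InterruptedException ignored) { }
                }, "asleep");
                busy.setDaemon(true);
                asleep.setDaemon(true);
                busy.start();
                asleep.start();
                Thread.sleep(100); // give both threads time to settle
                System.out.println(busy.getName() + ": " + busy.getState());     // RUNNABLE
                System.out.println(asleep.getName() + ": " + asleep.getState()); // TIMED_WAITING
            }
        }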

    only one thread can execute this method at a time because it's synchronized. The other thread is blocked and cannot execute the method until the first thread finishes.

    Your program's threads must make a system call to lock the object before they can enter the synchronized block, and if your thread B tries that when the object is already locked, the OS will move it to a "wait queue" associated with that object. The thread will no longer be runnable. If the heartbeat subsequently ticks while your thread A is actually running, the scheduler won't even try to restart thread B, because thread B is not in the run queue. If thread A is the only runnable thread (no other running programs), the heartbeat interrupt will simply return to thread A.
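    You can observe exactly this from Java: while thread B is sitting in that wait queue for the monitor, Thread.getState() reports it as BLOCKED, which is Java's way of saying "not runnable, waiting for a monitor lock." A minimal sketch (the class and thread names are just for illustration):

        public class BlockedStateDemo {
            static final Object monitor = new Object();

            public static void main(String[] args) throws InterruptedException {
                Thread b = new Thread(() -> {
                    synchronized (monitor) {
                        // not reached while main still holds the lock
                    }
                }, "thread-B");

                synchronized (monitor) {            // "thread A" (main) holds the lock
                    b.start();
                    Thread.sleep(100);              // give B time to hit the contended monitor
                    System.out.println(b.getName() + " is " + b.getState()); // BLOCKED
                }
                b.join();
            }
        }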

    When thread A unlocks the object, that's another system call. And when any object is unlocked, the scheduler will move at least one thread (maybe all of the threads, depending on the scheduler's policy) from the corresponding wait queue to the run queue. Your thread B is now eligible to run again, though whether the system call returns immediately to thread B or goes back to thread A again depends (once more) on the scheduler's policy.
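    The practical effect is what matters for your question: thread B can only get into the method after thread A has released the lock. A small sketch that makes the hand-off visible (timestamps and names are just for illustration):

        public class UnlockHandoffDemo {
            public static void main(String[] args) throws InterruptedException {
                Object monitor = new Object();

                Thread b = new Thread(() -> {
                    synchronized (monitor) {
                        System.out.println("thread-B entered at " + System.nanoTime());
                    }
                }, "thread-B");

                synchronized (monitor) {
                    b.start();
                    Thread.sleep(200);  // hold the lock; B waits in the monitor's queue
                    System.out.println("thread-A releasing at " + System.nanoTime());
                } // the unlock happens here; only now can the scheduler run B inside the block
                b.join();
            }
        }

    However the scheduler interleaves the two threads, thread-B's "entered at" timestamp always comes after thread-A's "releasing at" timestamp.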