This code demonstrates that the mutex is being shared between two threads, but something strange is going on with the scoping block around thread_lock.
(I have a variation of this code in another question, but this seems like a second mystery.)
#include <thread>
#include <mutex>
#include <iostream>
#include <unistd.h>

int main ()
{
    std::mutex m;
    std::thread t ([&] ()
    {
        while (true)
        {
            {
                std::lock_guard <std::mutex> thread_lock (m);
                usleep (10*1000); // or whatever
            }
            std::cerr << "#";
            std::cerr.flush ();
        }
    });
    while (true)
    {
        std::lock_guard <std::mutex> main_lock (m);
        std::cerr << ".";
        std::cerr.flush ();
    }
}
This basically works as it is, but the scoping block around thread_lock should theoretically not be necessary. However, if you comment it out...
#include <thread>
#include <mutex>
#include <iostream>
#include <unistd.h>

int main ()
{
    std::mutex m;
    std::thread t ([&] ()
    {
        while (true)
        {
            // {
            std::lock_guard <std::mutex> thread_lock (m);
            usleep (10*1000); // or whatever
            // }
            std::cerr << "#";
            std::cerr.flush ();
        }
    });
    while (true)
    {
        std::lock_guard <std::mutex> main_lock (m);
        std::cerr << ".";
        std::cerr.flush ();
    }
}
The output is like this:
........########################################################################################################################################################################################################################################################################################################################################################################################################################################################################################
i.e., it seems like thread_lock NEVER yields to main_lock.
Why does thread_lock always gain the lock and main_lock always wait, if the redundant scoping block is removed?
I tested your code (with the scoping block removed) on Linux with GCC (7.3.0) using pthreads and got results similar to yours: the main thread is starved, although if I waited long enough, I would occasionally see it do some work.
However, I ran the same code on Windows with MSVC (19.15) and no thread was starved.
It looks like you're on POSIX, so I'd guess your standard library uses pthreads on the back end (I have to link against pthreads even with C++11). Pthread mutexes don't guarantee fairness, but that's only half the story. Your output seems to be related to the usleep call.
If I take out the usleep, I see fairness (on Linux):
// fair again
while (true)
{
    std::lock_guard <std::mutex> thread_lock (m);
    std::cerr << "#";
    std::cerr.flush ();
}
My guess is that, because the auxiliary thread sleeps for so long while holding the mutex, the main thread is virtually guaranteed to end up fully blocked on it. At first the main thread might spin in the hope that the mutex becomes available soon; after a while, it is likely put on the mutex's wait list.
In the auxiliary thread, the lock_guard object is destroyed at the end of each loop iteration, so the mutex is released. That release wakes the main thread, but the auxiliary thread immediately constructs a new lock_guard at the top of the next iteration, which locks the mutex again. It's unlikely that the main thread grabs the mutex in between, because it has only just been woken and may not even have been scheduled yet. So unless a context switch happens in that tiny window, the auxiliary thread will probably get the mutex again.
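To make that window explicit, here is a rough sketch of what the no-scoping-block loop boils down to, with the lock_guard replaced by explicit lock()/unlock() calls (for illustration only, not a suggestion to drop RAII):
while (true)
{
    m.lock ();        // where the lock_guard is constructed
    usleep (10*1000); // sleep while holding the mutex
    std::cerr << "#"; // the I/O also happens while holding the mutex
    std::cerr.flush ();
    m.unlock ();      // where the lock_guard is destroyed, at the end of the loop body
    // The mutex is only free between this unlock() and the lock() at the top of
    // the next iteration -- a window so short that the just-woken main thread
    // rarely wins the race for it.
}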
In the code with the scoping block, the auxiliary thread releases the mutex before the I/O call. Printing to the screen takes a long time, so the main thread has plenty of opportunity to grab the mutex.
As @Ted Lyngmo said in his answer, adding a sleep before the lock_guard is created makes starvation much less likely:
while (true)
{
    usleep (1);
    std::lock_guard <std::mutex> thread_lock (m);
    usleep (10*1000);
    std::cerr << "#";
    std::cerr.flush ();
}
I also tried this with std::this_thread::yield() instead of the sleep, but I needed around five or more consecutive yields to make it reasonably fair, which leads me to believe there are other nuances in the library implementation, the OS scheduler, and caching and memory-subsystem effects.
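For reference, the yield variant looks something like this; the count of five is a ballpark from my experiments, not a precise threshold:
while (true)
{
    // Yield a handful of times before re-acquiring the mutex; a single
    // yield was not enough to stop the main thread from being starved.
    for (int i = 0; i < 5; ++i)
        std::this_thread::yield ();
    std::lock_guard <std::mutex> thread_lock (m);
    usleep (10*1000);
    std::cerr << "#";
    std::cerr.flush ();
}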
By the way, thanks for a great question. It was really easy to test and play around with.