Tags: c++, mutex, memory-barriers, stdatomic, lock-guard

c++11: thread with mutex sees atomic variable's value changing despite this being the only code that can change it


An atomic variable (128-bit structure in this case) is being updated, to the surprise of the only thread that would have the ability to update it. How so?

This is a minimal example, so it doesn't do anything useful in itself, but: an alloc() function returns a malloc'd buffer 100 times, then allocates a new buffer that it will return 100 times, and so on, even when called from multiple threads.

I have an atomic variable, which is a structure with a pointer, a 32-bit int, and another 32-bit counter meant to avoid ABA problems.

I have a function with two sections. The first section will, if the return count is non-zero, CAS the struct to decrement the return count (and increment the ABA counter), then return the pointer. Otherwise, the second section acquires a mutex, allocates memory for a new pointer, and CASes the whole little struct with the new pointer, a new non-zero return count, and again an increment to the ABA counter.

In short, every thread can update this struct when the counter is above zero. But once it's zero, the first thread to acquire the mutex will, I think, be the only thread that can again CAS-update this struct.

Except sometimes this CAS fails! "How can it fail" is my question.

Here is a running example. It can be compiled with g++ lockchange.cxx -o lockchange -latomic -pthread . It runs on gcc version 9.2.1 20190827 (Red Hat 9.2.1-1) (GCC) on Fedora 31.

#include <algorithm>
#include <atomic>
#include <chrono>
#include <cassert>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <mutex>
#include <thread>
#include <vector>

using namespace std;


struct MyPair { /* Hungarian: pair */

    char*    pc;         /* a buffer to be used n times */
    int32_t  iRemaining; /* number of times left to use pc */
    uint32_t iUpdates;   /* to avoid ABA problem */
};



const int iThreads{ 200 };
const int iThreadIterations{ 1000000 };
const int iSizeItem{ 128 };

mutex mux;

atomic<MyPair> pairNext;
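// NB: MyPair is 16 bytes (8-byte pointer + two 32-bit counters), so loads and
// CAS operations on pairNext are 16-byte atomics; hence the -latomic link flag.
// On x86-64 these typically go through libatomic using lock cmpxchg16b.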



char* alloc() {

 TRY_AGAIN:
  MyPair pairCur = pairNext.load();

  // CASE 1: We can use the existing buffer?

  while ( pairCur.iRemaining ) {
      char* pcRV = pairCur.pc;

      MyPair pairNew = { pairCur.pc,
                         pairCur.iRemaining - 1,
                         pairCur.iUpdates + 1 };

      if ( pairNext.compare_exchange_weak( pairCur, pairNew ) )
          return pcRV;

      // Otherwise, pairNext was changed out from under us and pairCur
      // will have been updated.  Try again, as long as iRemaining
      // non-zero.
  }



  // CASE 2: We've used pc as many times as allowed, so allocate a new pc.

  // Get a mutex as we'll be changing too many fields to do atomically.
  lock_guard<mutex> guard( mux );

  // If multiple threads saw iRemaining = 0, they all will
  // have tried for the mutex; only one will have gotten it, so
  // there's a good chance that by the time we get the mutex, a
  // sibling thread will have allocated a new pc and placed it at
  // pairNext, so we don't need to allocate after all.

  if ( pairNext.load().iRemaining ) // <=============================== it's as if this line isn't seeing the update made by the line below in real time.
      goto TRY_AGAIN;

  // Get a new buffer.
  char* pcNew = (char*) malloc( iSizeItem );

  MyPair pairNew = { pcNew, 100, pairCur.iUpdates + 1 };

  if ( pairNext.compare_exchange_strong( pairCur, pairNew ) ) { //<===== the update that's not being seen above in real time
      // *** other stuff with pcNew that needs mutex protection ***;
      return pcNew;

  } else {

      // CASE 2c: after allocating a new page, we find that
      // another thread has beaten us to it.  I CAN'T FIGURE OUT
      // HOW THAT'S POSSIBLE THOUGH.  Our response should be safe
      // enough: put our allocation back, and start all over again
      // because who knows what else we missed.  I see this error
      // like 813 times out of 40 BILLION allocations in the
      // hammer test, ranging from 1 to 200 threads.

      printf( "unexpected: had lock but pairNext changed when iRemaining=0\n" );
      // In fact the following free and goto should and seem to
      // recover fine, but to be clear my question is how we can
      // possibly end up here in the first place.
      abort();
      free( pcNew );
      goto TRY_AGAIN;
  }
}



void Test( int iThreadNumber ) {

  for ( int i = 0; i < iThreadIterations; i++ )
      alloc();
}



int main( int nArg, char* apszArg[] ) {

  vector<thread> athr;

  for ( int i = 0; i < iThreads; i++ )
      athr.emplace_back( Test, i );

  for ( auto& thr: athr )
      thr.join();
}

Solution

  • Note that goto TRY_AGAIN; unlocks the mutex because you're jumping back to before lock_guard<mutex> was constructed. Usually people put {} around a scope with the lock-taking at the top to make this clear (and to control when the unlock happens). I didn't check ISO C++ rules to see if this is required behaviour, but at least the way G++ and clang++ implement it, goto does unlock. (Mixing RAII locking with goto seems like poor design).
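
    As a minimal sketch of that convention (only the scoping idiom, not the fix itself): wrap the locked region in its own block so the lock_guard's lifetime, and therefore the unlock point, is obvious, and loop rather than jumping out with goto.

      {
          lock_guard<mutex> guard( mux );
          // ... critical section: reload pairNext, maybe allocate ...
      }   // guard destroyed here; the mutex is released
      // code here runs without the mutex held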

    Also note that you do reload pairNext once while holding the mutex, but discard that value and keep pairCur as the "expected" value for your CAS attempt.

    For the CAS inside the critical section to be reached, pairNext.iRemaining has to be zero at that reload. That can happen because nothing changed since this thread left the retry loop (the normal case, and the CAS succeeds), or because, while this thread was waiting for the mutex, another thread installed a new buffer and other threads then used it all up again. In the second case pairCur is stale: iRemaining is zero in both values, but pc and iUpdates differ, so the compare_exchange_strong with pairCur as the expected value fails.

    I added new debug code which makes this clear:

      lock_guard<mutex> guard( mux );    // existing code

      if ( pairNext.load().iRemaining )
          goto TRY_AGAIN;

      // new debugging code
      MyPair tmp = pairNext.load();
      if (memcmp(&tmp, &pairCur, sizeof(tmp)) != 0)
          printf("pairNext changed between retry loop and taking the mutex\n"
                 "cur  = %p, %d, %u\n"
                 "next = %p, %d, %u\n",
                 pairCur.pc, pairCur.iRemaining, pairCur.iUpdates,
                 tmp.pc, tmp.iRemaining, tmp.iUpdates);
    
    $ clang++ -g -O2 lc.cpp -o lockchange -latomic -pthread && ./lockchange 
    pairNext changed between retry loop and taking the mutex
    cur  = 0x7f594c000e30, 0, 808
    next =  0x7f5940000b60, 0, 909
    unexpected: had lock but pairNext changed when iRemaining=0
    Aborted (core dumped)
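
    (Note the iUpdates values: 909 - 808 = 101, i.e. exactly one buffer installation plus 100 decrements, one complete allocate-and-use-up cycle, happened between this thread's last look at pairNext in the retry loop and its acquisition of the mutex.)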
    

    Fixing this:

    Since you're reloading pairNext with the mutex held, just use that value as your "expected" for the CAS. Compilers unfortunately won't optimize foo.load().member into loading just that member: they still load the whole 16-byte object with a lock cmpxchg16b on x86-64, or whatever on other ISAs. So you're paying the whole cost anyway.

      lock_guard<mutex> guard( mux );
    
      pairCur = pairNext.load();   // may have been changed by other threads
      if ( pairCur.iRemaining )
          goto TRY_AGAIN;
    
      // then same as before, use it for CAS
      // no other thread can be in the critical section, 
      // and the code outside won't do anything while pairNext.iRemaining == 0
    

    A 16-byte atomic load costs the same as a CAS anyway, but the failure path would have to either free the malloc buffer or spin until the CAS succeeded before leaving the critical section. The latter could actually work if you can avoid wasting too much CPU time and causing contention, e.g. with _mm_pause().
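
    Putting it together, here is a minimal sketch of alloc() with that change folded in. It is a drop-in replacement using the same globals declared above (MyPair, pairNext, mux, iSizeItem), with a loop replacing the goto and an explicit block marking the locked region:

      char* alloc() {
          MyPair pairCur = pairNext.load();

          while ( true ) {
              // CASE 1: lock-free fast path: hand out the current buffer
              // while iRemaining is non-zero.
              while ( pairCur.iRemaining ) {
                  MyPair pairNew = { pairCur.pc,
                                     pairCur.iRemaining - 1,
                                     pairCur.iUpdates + 1 };
                  if ( pairNext.compare_exchange_weak( pairCur, pairNew ) )
                      return pairCur.pc;
                  // CAS failure refreshed pairCur; just retry.
              }

              // CASE 2: the buffer looks used up; serialize allocation.
              {
                  lock_guard<mutex> guard( mux );

                  pairCur = pairNext.load();   // reload under the mutex
                  if ( pairCur.iRemaining )    // a sibling already installed a new pc
                      continue;                // back to the fast path; mutex released here

                  char* pcNew = (char*) malloc( iSizeItem );
                  MyPair pairNew = { pcNew, 100, pairCur.iUpdates + 1 };

                  // pairCur is the value we just loaded and we hold the mutex,
                  // so no other thread can change pairNext while iRemaining == 0;
                  // this CAS is expected to succeed.
                  pairNext.compare_exchange_strong( pairCur, pairNew );
                  return pcNew;
              }
          }
      }

    (If you instead kept the stale pairCur as the expected value, the failure path inside the critical section would have to free( pcNew ) or spin retrying the CAS, e.g. with _mm_pause(), as noted above.)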