I am wondering how std::atomic_ref can be implemented efficiently (one std::mutex per object) for non-atomic objects, as the following property seems rather hard to enforce:
Atomic operations applied to an object through an atomic_ref are atomic with respect to atomic operations applied through any other atomic_ref referencing the same object.
In particular, the following code:
void set(std::vector<Big> &objs, size_t i, const Big &val) {
    std::atomic_ref RefI{objs[i]};
    RefI.store(val);
}
seems quite difficult to implement, as the std::atomic_ref would need to somehow pick the same std::mutex every time (unless it is a big master lock shared by all objects of the same type).
Am I missing something? Or is each object responsible for implementing std::atomic_ref, and therefore must either be atomic or carry a std::mutex?
The implementation is pretty much exactly the same as std::atomic<T>
itself. This is not a new problem.
See Where is the lock for a std::atomic? A typical implementation of std::atomic / std::atomic_ref uses a static hash table of locks, indexed by address, for non-lock-free objects. Hash collisions only lead to extra contention, not a correctness problem. (Deadlocks are still impossible; the locks are only used by atomic functions, which never try to take 2 at a time.)
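Roughly, the idea looks like this. This is only a hypothetical sketch of the technique, not any real libatomic's code; names like lock_for and kNumLocks are made up:

#include <cstdint>
#include <cstring>
#include <mutex>

namespace sketch {
    // A fixed-size static table of locks; the lock for an object is picked by
    // hashing its address, so every atomic_ref to the same object agrees on
    // which mutex to take.
    constexpr std::size_t kNumLocks = 64;          // some power of two
    inline std::mutex lock_table[kNumLocks];

    inline std::mutex &lock_for(const void *addr) {
        auto a = reinterpret_cast<std::uintptr_t>(addr);
        return lock_table[(a / 16) % kNumLocks];   // drop low bits, then index
    }

    // Non-lock-free store/load: unrelated addresses that hash to the same slot
    // only cause extra contention, never wrong results, and no operation ever
    // holds two of these locks, so deadlock is impossible.
    template <class T>
    void atomic_store(T *obj, const T &val) {
        std::scoped_lock guard(lock_for(obj));
        std::memcpy(obj, &val, sizeof(T));
    }

    template <class T>
    T atomic_load(const T *obj) {
        std::scoped_lock guard(lock_for(obj));
        T tmp;
        std::memcpy(&tmp, obj, sizeof(T));
        return tmp;
    }
}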
On GCC, for example, std::atomic_ref is just another way to invoke __atomic_store on an object. (See the GCC manual: atomic builtins.)
The compiler knows whether T is small enough to be lock-free or not. If not, it calls the libatomic library function, which will use the lock.
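You can observe that split yourself with is_always_lock_free. A small sketch, where Big is just a stand-in for a type too large to be lock-free:

#include <atomic>
#include <cstdint>
#include <iostream>

struct Big { std::uint64_t words[4]; };   // 32 bytes: not lock-free on mainstream targets

int main() {
    std::cout << std::boolalpha
              << "uint32_t always lock-free: "
              << std::atomic_ref<std::uint32_t>::is_always_lock_free << '\n'
              << "Big always lock-free:      "
              << std::atomic_ref<Big>::is_always_lock_free << '\n';

    alignas(std::atomic_ref<Big>::required_alignment) Big dst{}, src{{1, 2, 3, 4}};
    std::atomic_ref<Big> ref{dst};
    ref.store(src);   // on GCC this lowers to a libatomic call that takes the per-address lock
}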
Fun fact: that means it only works if the object has sufficient alignment for atomic<T>
. But on many 32-bit platforms including x86, uint64_t
might only have 4-byte alignment. atomic_ref
on such an object will compile and run, but not actually be atomic if the compiler uses an SSE 8-byte load/store in 32-bit mode to implement it. Fortunately there's no danger for objects that have alignof(T) == sizeof(T)
, like most primitive types on 64-bit architectures.
This is why you need to allocate the underlying non-atomic object with the required alignment, e.g.
alignas(std::atomic_ref<T>::required_alignment) T foo;
or check that it is guaranteed to be sufficiently aligned already, e.g.
static_assert( std::atomic_ref<T>::required_alignment == alignof(T), "T isn't *guaranteed* aligned enough for atomic_ref" );
See https://en.cppreference.com/w/cpp/atomic/atomic_ref/required_alignment
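Putting the alignas and the static_assert together, a minimal complete sketch (counter and bump are just illustrative names):

#include <atomic>
#include <cstdint>

// Over-align the underlying object so atomic_ref<uint64_t> is genuinely atomic
// even on targets where a plain uint64_t might only get 4-byte alignment.
alignas(std::atomic_ref<std::uint64_t>::required_alignment) std::uint64_t counter = 0;

// Or assert that the natural alignment is already sufficient (true on typical
// 64-bit targets, but can fail on 32-bit x86).
static_assert(std::atomic_ref<std::uint64_t>::required_alignment == alignof(std::uint64_t),
              "uint64_t isn't *guaranteed* aligned enough for atomic_ref");

void bump() {
    std::atomic_ref<std::uint64_t> ref{counter};
    ref.fetch_add(1, std::memory_order_relaxed);   // lock-free RMW on a sufficiently-aligned object
}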