Many programming languages today have a happens-before relation and release+acquire synchronization operations; examples of such languages are C++ and Java.
I would like to know if release+acquire can violate happens-before.

release+acquire and happens-before
Release/acquire establishes a happens-before relation between different threads: in other words, everything before the release in Thread 1 is guaranteed to be visible in Thread 2 after the acquire:
┌─────────────────────────────┐
│ Thread 1                    │
│ --------                    │
│ x = 1                       │  Everything
│ y = 2                       │  here...
│ write-release(ready = true) │
└─────────────────────────────┘
              │
              │ (happens-before)
              V
                  ┌─────────────────────────────┐
                  │ Thread 2                    │
                  │ --------                    │
 ...is visible to │ read-acquire(ready == true) │
 everything       │ assert(x == 1)              │
 here             │ assert(y == 2)              │
                  └─────────────────────────────┘
More than that, happens-before is a strict partial order. This means it is:
- transitive: Thread 2 is guaranteed to see not only writes made by Thread 1, but also all writes by other threads that Thread 1 saw
- asymmetric: either a happens-before b, or b happens-before a; both at the same time isn't allowed

release/acquire might break happens-before
As we know from the IRIW litmus test, release/acquire could cause two threads to see writes from different threads in different orders (for C++ see also the last example here, and these two examples from the gcc wiki):
std::atomic<int> x{0}, y{0}; // both initially 0

// Thread 1
x.store(1, memory_order_release);
// Thread 2
y.store(1, memory_order_release);
// Thread 3
assert(x.load(memory_order_acquire) == 1 && y.load(memory_order_acquire) == 0);
// Thread 4
assert(y.load(memory_order_acquire) == 1 && x.load(memory_order_acquire) == 0);
Here both asserts can pass, which means that Thread 3 and Thread 4 see the writes to x and y in different orders.
As I understand it, if these were ordinary variables, then this would violate the asymmetry property of happens-before. But because x and y are atomics, it's OK. (BTW, I am not sure about that.)
Nate Eldredge demonstrated in his answer that this IRIW example is OK. But I still have a sneaking suspicion that there might be something similar to IRIW which would cause Thread 3 and Thread 4 to see regular, happens-before-ordered writes in different orders; this would break happens-before (it wouldn't be transitive anymore).
In cppreference there is also this quote:
The implementation is required to ensure that the happens-before relation is acyclic, by introducing additional synchronization if necessary (it can only be necessary if a consume operation is involved, see Batty et al)
The quote hints that there might be cases when happens-before is violated and which require additional synchronization ("acyclic" means that happens-before forms a directed acyclic graph, which is equivalent to saying "strict partial order").
If it's possible, I would like to know what these cases are. Since Java allows data races, I'm also interested in cases when happens-before is violated only in the presence of data races.
To give you an example, here is an explanation of why sequentially consistent (SC) atomics cannot violate happens-before. (A similar explanation for release/acquire atomics would be an answer to my question.)
By "violate happens-before" I mean "violate the axioms of happens-before, which is a strict partial order".
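Spelled out, the axioms of a strict partial order < (here: happens-before) are:

```latex
\begin{align*}
\text{irreflexivity:} \quad & \forall a\colon\ \neg(a < a) \\
\text{asymmetry:}     \quad & \forall a, b\colon\ a < b \implies \neg(b < a) \\
\text{transitivity:}  \quad & \forall a, b, c\colon\ (a < b) \land (b < c) \implies a < c
\end{align*}
```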
Strict partial orders correspond directly to directed acyclic graphs (DAGs).
Here is an example of a DAG from wiki (note that it has no cycles):
Let's show that with SC atomics the happens-before graph stays acyclic.
Remember that SC atomics happen in a single global order (the same for all threads), and an SC read always sees the latest SC write that precedes it in this order.
Look at this happens-before graph:
Thread1         Thread2         Thread3
=======         =======         =======
   │               │               │
  W(x)             │               │
   ↓               │               │
  Sw(a)──┐         │              W(y)
   │     │         │               ↓
   │     └─────> Sr(a)     ┌──── Sw(b)
   │               ↓       │       │
   │            Sr(b) <───┘        │
   │               ↓               │
   │              R(x)             │
   │               ↓               │
   │              R(y)             │
   │               │               │
   V               V               V
On the graph:
- W(x) and R(x) are regular actions: write and read of x
- Sw(a) and Sr(a) are SC atomic actions: write and read of a
- arrows within a thread are happens-before in program order (sequenced-before order in C++): actions happen-before each other in the order they go in the code
- arrows between threads are happens-before established by SC atomics

Note that arrows on the graph always go downwards => the graph cannot have cycles => it's always a DAG => happens-before axioms cannot be violated.
The same proof doesn't work for release/acquire atomics because (as far as I understand) they don't happen in a global order => an HB arrow between Sw(a) and Sr(a) might go upwards => a cycle is possible. (I'm not sure about that.)
Answer: no, release/acquire cannot break happens-before.
The proof is given by Nate Eldredge in this comment:
Oh, indeed. And actually, I might see how to prove it. The HB relation is transitive by its construction. The sequencing relation is acyclic, so if there is a cycle in HB, it must contain at least one synchronizes-with step, which is, roughly, an acquire load L that takes its value from a release store S. But because of the cycle, L happens before S. Therefore by p14, L must take its value from some store other than S, contradiction. Not quite a proof because I have oversimplified the definition of synchronizes-with from atomics.order p2, but it's a start.
I just would like to put it in a separate answer to make it easier for people to notice.
Plus here are some additional explanations from me (maybe they will make it easier for some people to understand).
First of all, if we use only release/acquire atomics and non-atomic memory accesses, then happens-before is transitive (and acyclic) by construction.
Similarly to the SC graph, in the case of release/acquire the edges also always point downwards:
Thread1         Thread2         Thread3         Thread4
=======         =======         =======         =======
   │               │               │               │
   │        ┌── Wrel(a) ──┐        │               │
Racq(a) <───┘      │      │ ┌── Wrel(b) ──┐        │
   ↓               │      │ │      │      └───> Racq(b)
Racq(b) <──────────│──────│─┘      │               ↓
   │               │      └────────│──────────> Racq(a)
   │               │               │               │
   V               V               V               V
(BTW notice that this graph is different from the SC one: Thread 1 and Thread 4 see Wrel(a) and Wrel(b) in different orders. But the edges point downwards nonetheless.)
Edges from Wrel(x) to Racq(x) always point downwards, because Racq(x) sees Wrel(x) and everything before Wrel(x) as completed (see the Notes at the end of the answer).
(In the C++ spec this is called synchronizes-with; you can learn more about it here.)
As a result, similarly to the SC graph, all edges always point downwards => the graph cannot have cycles => it's always a DAG => happens-before axioms cannot be violated.
Actually, the happens-before produced by release/acquire atomics is basically the original happens-before introduced by Leslie Lamport in Time, Clocks and the Ordering of Events in a Distributed System.
I really recommend the paper to everyone interested in HB: Lamport's explanations are short and clear, and the idea is really cool.
Let's also demonstrate with a picture why cycles aren't possible.
This is what a cycle would look like:
Thread1         Thread2         Thread3
=======         =======         =======
   │               │               │
Racq(a) <──────────│───────────────│─────────┐
   ↓               │               │         │
Wrel(b)──┐         │               │         │
   │     │         │               │         │
   │     └────> Racq(b)            │         │
   │               ↓               │         │
   │            Wrel(c)──┐         │         │
   │               │     │         │         │
   │               │     └────> Racq(c)      │
   │               │               ↓         │
   │               │            Wrel(a)─────┘
   │               │               │
   V               V               V
Within each thread, happens-before is the order of actions in the source code (it's called sequenced-before in C++ and program order in Java). Clearly, no HB cycles are possible within a single thread.
It means that the edge which "goes back" and closes the cycle must be a synchronizes-with edge, like the Wrel(a)->Racq(a) edge in the graph above.
Notice a contradiction:
- Wrel(a) must complete before Racq(a), because Racq(a) reads the value written by Wrel(a)
- Racq(a) must complete before Wrel(a), because there is a chain Racq(a)->Wrel(b)->Racq(b)->Wrel(c)->Racq(c)->Wrel(a) where every Wrel(x) (and everything before it) completes before Racq(x) reads it

This means the Wrel(a)->Racq(a) edge is not allowed => cycles aren't possible.
In terms of the C++ memory model, such a cycle would violate the coherence requirements:
The value of an atomic object M, as determined by evaluation B, shall be the value stored by some side effect A that modifies M, where B does not happen before A.
Notes.
I stated that:
Racq(x) sees Wrel(x) and everything before Wrel(x) as completed
but in the C++ standard that isn't stated directly.
Instead it has this:
- happens-before is what defines what a read sees
- the relation between Racq(x) and Wrel(x) is called synchronizes-with
- happens-before is built from synchronizes-with and a huge number of other rules and orders

It's possible to deduce my statement from the C++ standard, although it might not be easy. (That is why I recommend reading this article instead.)
I used the statement because it concisely describes what memory barrier instructions do, and this is how happens-before can be (and probably is) easily implemented.
So often a memory barrier instruction is all we need to implement happens-before with all its mathematical properties.
For an overview of these instructions on different CPUs I would recommend the section on that in Memory Barriers: a Hardware View for Software Hackers by Paul E. McKenney.
(For example, memory barriers on PowerPC basically work the same as release/acquire atomics in C++.)