I want to create a memoized version of a given Supplier such that multiple threads can use it concurrently, with the guarantee that the original supplier's get() is called at most once and that all of the threads see that same result. Double-checked locking seems like a good fit.
import java.util.Objects;
import java.util.function.Supplier;

class CachingSupplier<T> implements Supplier<T> {
    private T result = null;
    private boolean initialized = false;
    private volatile Supplier<? extends T> delegate;

    CachingSupplier(Supplier<? extends T> delegate) {
        this.delegate = Objects.requireNonNull(delegate);
    }

    @Override
    public T get() {
        if (!this.initialized && this.delegate != null) {
            synchronized (this) {
                Supplier<? extends T> supplier = this.delegate;
                if (supplier != null) {
                    this.result = supplier.get();
                    this.initialized = true;
                    this.delegate = null;
                }
            }
        }
        return this.result;
    }
}
My understanding is that in this case delegate needs to be volatile because otherwise the code in the synchronized block might be reordered: the write to delegate could happen before the write to result, possibly exposing result to other threads before it has been fully initialized. Is that correct?
So, normally this would entail a volatile read of delegate outside of the synchronized block on each invocation, only entering the synchronized block at most once per contending thread while result is uninitialized and then never again.

But once result has been initialized, is it possible to also avoid the cost, however negligible, of the unsynchronized volatile read of delegate on subsequent invocations by first checking the non-volatile flag initialized and short-circuiting? Or does this buy me absolutely nothing over normal double-checked locking? Or does it somehow hurt performance more than it helps? Or is it actually broken?
Don’t implement double-checked locking; use an existing tool that does the work for you:
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

class CachingSupplier<T> implements Supplier<T> {
    private final Supplier<? extends T> delegate;
    private final ConcurrentHashMap<Supplier<? extends T>, T> map = new ConcurrentHashMap<>();

    CachingSupplier(Supplier<? extends T> delegate) {
        this.delegate = Objects.requireNonNull(delegate);
    }

    @Override
    public T get() {
        // computeIfAbsent evaluates the delegate at most once and blocks
        // concurrent callers until the mapping is established
        return map.computeIfAbsent(delegate, Supplier::get);
    }
}
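For illustration, a small usage sketch (the demo class and the call counter are hypothetical, not part of the answer, and it assumes the CachingSupplier defined above): several threads call get() concurrently, yet computeIfAbsent ensures the delegate runs at most once and every caller sees the same cached value.

import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;
import java.util.stream.IntStream;

class CachingSupplierDemo {
    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        Supplier<String> expensive = () -> {
            calls.incrementAndGet();          // count how often the delegate actually runs
            return "computed";
        };
        Supplier<String> cached = new CachingSupplier<>(expensive);

        // hammer the supplier from many threads of the common pool
        IntStream.range(0, 100).parallel().forEach(i -> cached.get());

        System.out.println(cached.get());     // "computed"
        System.out.println(calls.get());      // 1 -- the delegate was evaluated exactly once
    }
}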
Note that, more often than not, simply doing an eager first-time evaluation and replacing the supplier with a constant-returning one before publishing it to other threads is even simpler and sufficient. Or just use a volatile variable and accept that there might be a few concurrent evaluations if multiple threads encounter a supplier that has not been evaluated yet.
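As a rough sketch of that last variant (the class name is illustrative, and it assumes the delegate never returns null): a single volatile field caches the result, the fast path is one volatile read, and a race merely means the delegate may be evaluated more than once.

import java.util.Objects;
import java.util.function.Supplier;

class RacyCachingSupplier<T> implements Supplier<T> {
    private final Supplier<? extends T> delegate;
    private volatile T cached;                 // null means "not computed yet"

    RacyCachingSupplier(Supplier<? extends T> delegate) {
        this.delegate = Objects.requireNonNull(delegate);
    }

    @Override
    public T get() {
        T value = cached;                      // single volatile read on the fast path
        if (value == null) {
            value = delegate.get();            // may run in several threads concurrently
            cached = value;                    // benign race: the last write wins
        }
        return value;
    }
}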
The implementations below are just for informational (academic) purposes; the simpler implementation above is strongly recommended.

You can use the publication guarantees of immutable objects instead:
import java.util.Objects;
import java.util.function.Supplier;

class CachingSupplier<T> implements Supplier<T> {
    private Supplier<? extends T> delegate;
    private boolean initialized;

    CachingSupplier(Supplier<? extends T> delegate) {
        Objects.requireNonNull(delegate);
        this.delegate = () -> {
            synchronized (this) {
                if (!initialized) {
                    T value = delegate.get();
                    // replace the delegate with an immutable, constant-returning supplier
                    this.delegate = () -> value;
                    initialized = true;
                    return value;
                }
                // another thread won the race; read the replacement under the lock
                return this.delegate.get();
            }
        };
    }

    @Override
    public T get() {
        return this.delegate.get();
    }
}
Here, initialized is written and read under the synchronized(this) guard, but at the first evaluation, the delegate is replaced by a new Supplier that invariably returns the evaluated value without the need for any check. Since the new supplier is immutable, it is safe even if read by a thread which never executed the synchronized block.
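To make that argument concrete, here is roughly what the replacement lambda () -> value amounts to (the class is a hypothetical illustration): a supplier whose only state lives in a final field, so the Java memory model's final-field guarantees let a thread that never synchronized still see the captured value.

import java.util.function.Supplier;

final class ConstantSupplier<T> implements Supplier<T> {
    private final T value;    // final field: safely published together with the object

    ConstantSupplier(T value) {
        this.value = value;
    }

    @Override
    public T get() {
        return value;         // no check, no lock, no volatile read
    }
}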
As igaz correctly pointed out, the class above is not immune to data races if the CachingSupplier instance itself is not safely published. An implementation that is completely immune to data races, even when improperly published, while still working without memory barriers in the ordinary access case, is even more involved:
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.function.Supplier;

class CachingSupplier<T> implements Supplier<T> {
    // the final field holding the list provides safe publication even when
    // the CachingSupplier instance itself is published via a data race
    private final List<Supplier<? extends T>> delegate;
    private boolean initialized;

    CachingSupplier(Supplier<? extends T> delegate) {
        Objects.requireNonNull(delegate);
        this.delegate = Arrays.asList(() -> {
            synchronized (this) {
                if (!initialized) {
                    T value = delegate.get();
                    setSupplier(() -> value);
                    initialized = true;
                    return value;
                }
                return getSupplier().get();
            }
        });
    }

    private void setSupplier(Supplier<? extends T> s) {
        delegate.set(0, s);
    }

    private Supplier<? extends T> getSupplier() {
        return delegate.get(0);
    }

    @Override
    public T get() {
        return getSupplier().get();
    }
}
I think that emphasizes even more the beauty of the first solution…