I came across the following LRU cache implementation in a legacy project, and I have a question about why SoftReference is used for the value object but not for the key object.
Here is the implementation:
import java.lang.ref.SoftReference;
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCacheImpl<K, V> implements LRUCache<K, V> {

    // SoftReference is used for a memory friendly cache.
    // The value will be removed under memory shortage situations and
    // the keys of the values will be removed from the cache map.
    private final Map<K, SoftReference<V>> cache;

    public LRUCacheImpl(final int cacheSize) {
        // 'true' uses the access order instead of the insertion order.
        this.cache = new LinkedHashMap<K, SoftReference<V>>(cacheSize, 0.75f, true) {
            private static final long serialVersionUID = 1L;

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, SoftReference<V>> eldest) {
                // Remove the eldest, i.e. least recently used (LRU), entry
                // once the size exceeds the maximum allowed.
                return size() > cacheSize;
            }
        };
    }

    @Override
    public V put(K key, V value) {
        SoftReference<V> previousValueReference = cache.put(key, new SoftReference<V>(value));
        return previousValueReference != null ? previousValueReference.get() : null;
    }

    @Override
    public V get(K key) {
        SoftReference<V> valueReference = cache.get(key);
        return valueReference != null ? valueReference.get() : null;
    }
}
The GC reclaims memory for softly reachable objects when the application is about to run out of memory (OOM).
By that logic, only the memory for the value should be reclaimed, since the SoftReference wraps the value object only.
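Here is a quick standalone sketch of what I understand, using clear() to stand in for the GC clearing the soft reference under memory pressure:

import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

public class SoftValueDemo {
    public static void main(String[] args) {
        Map<String, SoftReference<byte[]>> map = new HashMap<>();
        map.put("key", new SoftReference<>(new byte[1024]));

        // Simulate the GC clearing the softly reachable value under memory pressure.
        map.get("key").clear();

        // The value is gone, but the key and the (now empty) SoftReference
        // still sit in the map as a stale entry.
        System.out.println(map.containsKey("key")); // true
        System.out.println(map.get("key").get());   // null
    }
}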
But here is the comment at the beginning of the file:
// SoftReference is used for a memory friendly cache.
// The value will be removed under memory shortage situations and
// the keys of the values will be removed from the cache map.
My question is: how will the corresponding key object be removed from the map once the application approaches OOM? Shouldn't the key be wrapped in a SoftReference as well, like this?
cache.put(new SoftReference<K>(key), new SoftReference<V>(value));
I wasn't aware that LinkedHashMap supports limiting the size of the map. By overriding removeEldestEntry, you prevent the map from holding more than cacheSize keys.
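Here is a minimal sketch of that behaviour in isolation (the map, keys and limit are made up for illustration):

import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedMapDemo {
    public static void main(String[] args) {
        final int maxSize = 3; // illustrative limit
        Map<String, String> bounded = new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > maxSize; // evict once the limit is exceeded
            }
        };

        bounded.put("a", "1");
        bounded.put("b", "2");
        bounded.put("c", "3");
        bounded.put("d", "4"); // "a", the eldest entry, is evicted here

        System.out.println(bounded.keySet()); // [b, c, d]
    }
}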
As you can see in the LinkedHashMap implementation, adding a new entry causes the removal of the eldest entry if removeEldestEntry returns true:
void addEntry(int hash, K key, V value, int bucketIndex) {
    createEntry(hash, key, value, bucketIndex);

    // Remove eldest entry if instructed, else grow capacity if appropriate
    Entry<K,V> eldest = header.after;
    if (removeEldestEntry(eldest)) {
        removeEntryForKey(eldest.key);
    } else {
        if (size >= threshold)
            resize(2 * table.length);
    }
}
Therefore, the least recently used keys would be removed from the map.
Using this constructor:
this.cache = new LinkedHashMap<K, SoftReference<V>>(cacheSize, 0.75f, true)
means that the entries are ordered according to their access order. Therefore, each time you get or put a key, the corresponding entry moves to the back of the linked list, which means the start of the list contains the least recently used entry.
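For example, a small sketch (with made-up keys) showing how a get moves an entry to the end of the iteration order when accessOrder is true:

import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        Map<String, String> accessOrdered = new LinkedHashMap<>(16, 0.75f, true);
        accessOrdered.put("a", "1");
        accessOrdered.put("b", "2");
        accessOrdered.put("c", "3");

        accessOrdered.get("a"); // touching "a" moves it to the back of the linked list

        // "b" is now the eldest (least recently used) entry.
        System.out.println(accessOrdered.keySet()); // [b, c, a]
    }
}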
Therefore, the comment // the keys of the values will be removed from the cache map probably refers to the fact that the least recently used keys will eventually be removed from the map, independently of the removal of values under memory shortage.
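To illustrate the distinction, here is a hypothetical usage of the LRUCacheImpl above (assuming a cache size of 2), where a key disappears because of the LRU policy rather than because of the GC:

LRUCacheImpl<String, String> cache = new LRUCacheImpl<>(2);
cache.put("a", "1");
cache.get("a");                     // "a" becomes the most recently used entry
cache.put("b", "2");
cache.put("c", "3");                // evicts "a"? no: "a" was touched, so "b" is the eldest and is evicted
System.out.println(cache.get("b")); // null: the key was removed by the LRU policy, not by the GC
System.out.println(cache.get("a")); // "1" (unless the GC has cleared its SoftReference)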