I checked out the official Android documentation for LruCache, which says: "Each time a value is accessed, it is moved to the head of a queue. When a value is added to a full cache, the value at the end of that queue is evicted and may become eligible for garbage collection." I suppose this queue is the doubly linked list maintained by the LinkedHashMap that the cache uses. To check this behavior, I checked out the source code for LruCache and looked at the get(K key) method. It in turn calls the map's get method, which fetches the value from the underlying hash map and calls the recordAccess method:
public V get(Object key) {
    LinkedHashMapEntry<K,V> e = (LinkedHashMapEntry<K,V>)getEntry(key);
    if (e == null)
        return null;
    e.recordAccess(this);
    return e.value;
}
The recordAccess method in turn moves the accessed entry to the end of the list if accessOrder is set to true (for my problem, let's assume it is); otherwise it does nothing:
/**
 * This method is invoked by the superclass whenever the value
 * of a pre-existing entry is read by Map.get or modified by Map.set.
 * If the enclosing Map is access-ordered, it moves the entry
 * to the end of the list; otherwise, it does nothing.
 */
void recordAccess(HashMap<K,V> m) {
    LinkedHashMap<K,V> lm = (LinkedHashMap<K,V>)m;
    if (lm.accessOrder) {
        lm.modCount++;
        remove();
        addBefore(lm.header);
    }
}
This sounds contradictory to the statement above, which says the element is moved to the head of the queue. Instead, it is moved to the last position of the list (via addBefore(lm.header), i.e., inserted just before the header). Surely I'm missing something here; any help?
You are not missing anything; you are just reading LruCache documentation while looking at LinkedHashMap behavior. LinkedHashMap has its own documentation, in particular about its accessOrder parameter. (Same in the Java docs.)
[...when accessOrder=true...] order of iteration is the order in which its entries were last accessed, from least-recently accessed to most-recently (access-order)
So LinkedHashMap puts the most recently used entries at the end, and this behavior is documented.
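For example, this small self-contained demo (my own illustration, not code from either class) makes the ordering visible when accessOrder is set to true:

import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // accessOrder = true: iteration goes from least- to most-recently accessed
        Map<String, Integer> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);

        map.get("a"); // touching "a" moves it to the end of the internal list

        // Prints: b c a  ("a" is now the most recently used, so it comes last)
        for (String key : map.keySet()) {
            System.out.print(key + " ");
        }
    }
}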
Practically, LruCache describes how such a cache works in theory, while LinkedHashMap shows how to implement it without adding a separate backward-moving iterator: by putting the recent elements at the end, trimming can use the already available (forward-moving) iterator to access (and remove) old elements efficiently.
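A rough sketch of that trimming idea (an illustration of the technique only, not the actual LruCache.trimToSize source):

import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

class EvictionSketch {
    // Shrink the map down to maxSize entries.
    // With accessOrder = true, the forward iterator starts at the
    // least-recently-used entry, so no backward traversal is needed.
    static <K, V> void evictOldest(LinkedHashMap<K, V> map, int maxSize) {
        Iterator<Map.Entry<K, V>> it = map.entrySet().iterator();
        while (map.size() > maxSize && it.hasNext()) {
            it.next();   // the first entry returned is the eldest one
            it.remove(); // remove it through the iterator
        }
    }
}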
Though here and now I could not tell what was wrong with removeEldestEntry (i.e., why LruCache does not simply use it). Perhaps it did not exist in the past.
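For reference, removeEldestEntry is the hook LinkedHashMap itself offers for exactly this purpose; a minimal sketch of an LRU cache built on it (the class name and fixed capacity are made up for the example):

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache built directly on LinkedHashMap's own hook.
class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    SimpleLruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Returning true tells LinkedHashMap to drop the eldest entry
        // right after a new one has been inserted.
        return size() > maxEntries;
    }
}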