Tags: caching, linkedhashmap, lru

What is the difference between LRU and LFU


What is the difference between LRU and LFU cache implementations?

I know that an LRU cache can be implemented using LinkedHashMap. But how can an LFU cache be implemented?


Solution

  • Consider a steady stream of cache requests against a cache with a capacity of 3:

    A, B, C, A, A, A, A, A, A, A, A, A, A, A, B, C, D
    

    Take a Least Recently Used (LRU) cache backed by a HashMap plus a doubly linked list, which gives O(1) eviction and O(1) lookup. Processing the request stream above, the cached elements evolve as follows:

    [A]
    [A, B]
    [A, B, C]
    [B, C, A] <- the run of As keeps A at the most-recently-used end of the list.
    [C, A, B]
    [A, B, C]
    [B, C, D] <- here we evict A; we can do better!
    
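For reference, the LRU half of the question is almost free in Java: a LinkedHashMap constructed in access order, with removeEldestEntry overridden, reproduces exactly the trace above. A minimal sketch (the class name LruCache is my own, not a JDK type):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Capacity-bounded LRU cache. The third constructor argument
// (accessOrder = true) makes every get/put move the entry to the
// most-recently-used end; removeEldestEntry drops the LRU entry
// once capacity is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

Replaying the request stream from the example against a capacity of 3 ends with B, C, D cached and A evicted, matching the last line of the trace.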

    Looking at this example, it is easy to see that we can do better: given the much higher likelihood that A will be requested again, we should not evict it even though it was the least recently used entry. The request counts at this point are:

    A - 12
    B - 2
    C - 2
    D - 1
    

    A Least Frequently Used (LFU) cache takes advantage of exactly this information: its eviction algorithm tracks how many times each cached entry has been requested and evicts the entry with the lowest count.