Understanding LRU Cache

Mohit Kalra
7 min read · Jan 23, 2021


LRU Cache is one of the most popular design questions on LeetCode and is frequently asked in coding interviews. It combines API design, critical thinking, and creative use of data structures to solve a given problem. This article provides a step-by-step tutorial for understanding the LRU Cache.

Problem Background

A cache is a software or hardware component that speeds up processing by exploiting the principle of locality of reference. It is used to look up pre-computed or recently used data that is helpful for the current computation. Since cache size is limited, there is usually an eviction policy in place to remove stale elements.

This problem revolves around building a cache API of a given capacity that stores (key, value) pairs, looks them up in constant O(1) time, and evicts stale pairs using the least recently used (LRU) policy. Under this policy, the data that was least recently read or updated is thrown out of the cache to make room for newly introduced data.

Problem Understanding

The API for building this cache consists of the following methods:

  1. LRUCache(capacity) — initializes a cache of the given capacity with the least recently used eviction policy.
  2. get(key) — fetches the most recent value of the given key. If the key isn’t present in the cache, it’s a cache miss and we return -1. Remember: when we issue a get on a key, that key becomes the most recently used. This is an important detail and should not be missed.
  3. put(key, value) — introduces a new (key, value) pair into the cache, or, if the key already exists, updates its value. In both cases, the key becomes the most recently used.

Once we have the API figured out, it’s time to find the fastest strategy to implement it. Ideally, we want both the get and put operations to run in O(1) time.
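To pin down these semantics before building anything by hand, here is a minimal reference sketch using Python’s collections.OrderedDict (the class name SimpleLRUCache is my own); the hand-rolled hashmap + linked-list version is what the rest of the post builds up to.

```python
from collections import OrderedDict

class SimpleLRUCache:
    """Reference behavior only: OrderedDict remembers insertion order,
    so moving a key to the end marks it as most recently used."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return -1                      # cache miss
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used

cache = SimpleLRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
cache.get(1)     # returns 1 and makes key 1 most recently used
cache.put(3, 3)  # evicts key 2, the least recently used
cache.get(2)     # returns -1 (miss)
```

Note how get(1) before put(3, 3) changes which key gets evicted: without it, key 1 would have been thrown out instead.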

Thinking our way through

What is the first thing that comes to mind for having a fast lookup for a given key?

Hashmap

If it were just about holding (key, value) pairs, a plain hashmap would suffice. But we also need a sense of ordering, so that we can tell which data is the most or least recently used.

This pushes us towards an auxiliary data structure that tracks the ordering of the pairs. Since we duplicate the data, we use O(2N) ~ O(N) space for an LRU cache of capacity N when it is full.

We want a data structure that doesn’t raise our lookup complexity from constant to O(N) or more. The candidates that come to mind might be the following:

Arrays

An array might seem like a great choice as a supporting data structure, since the cache capacity is known in advance: we can allocate an array of that capacity and store and remove elements in it. Storing a new element can be fast, but deletion is costly because the conventional approach requires shifting elements around. We could instead maintain a set of free indices, which avoids that unnecessary shifting. But that leaves the main question: how do we track the order of the data? One idea is to keep the most recently used data at the front of the array and the least recently used at the end. But every time we update the order, we have to move an element to the front and shift everything behind it, which increases our time complexity. Can we do better?

Linked List

From arrays, we learned that maintaining order forces us to shuffle data around.

Can a linked list avoid that shuffling? We can add data to a linked list in constant time, and, provided we store a node reference for each key somewhere, we can look a node up in constant time too. But we cannot delete a node in constant time: deleting from a singly linked list takes O(n), because we don’t have access to the previous node to update its pointer. How can we solve this?

Can we maintain a pointer to the previous node? Yes, we can. This leads us to a doubly linked list, which gives us access to each node’s predecessor, so we can perform deletion in O(1) time. This proves by far the best data structure for maintaining the order of our data.
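To see why the extra prev pointer makes deletion O(1), here is a tiny sketch (the Node class and the unlink helper name are my own): deletion becomes pure pointer surgery, with no traversal from the head.

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.prev = None
        self.next = None

def unlink(node):
    # With both prev and next available, we bypass the node
    # in each direction in O(1) -- no scan needed.
    node.prev.next = node.next
    node.next.prev = node.prev

# Build a tiny list: a <-> b <-> c
a, b, c = Node('a'), Node('b'), Node('c')
a.next, b.prev = b, a
b.next, c.prev = c, b

unlink(b)   # b is removed; a and c now point at each other
```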

Heaps

Heaps let us assign a priority to each element, with the highest-priority element at the top. But heaps don’t maintain a relative ordering among all elements. For example, if the element at the top is the least recently used and we pop it, can we be sure of the relative ordering of the remaining elements? The answer is no. Heaps are not totally ordered, so we can’t use them here.

Binary Search Trees

Among trees, the one that helps with ordering is the binary search tree: an inorder traversal yields the elements in sorted order. If we keep the tree balanced using a self-balancing variant such as a red-black tree or an AVL tree, deletion is effectively O(log n). Adding data is also O(log n), and if we keep a direct reference to each node in a hashmap, lookup is O(1). But keeping a tree balanced is complex and out of scope for an interview coding session, and we’ve already seen better time complexity with a doubly linked list.

We thus settle on implementing our cache with a hashmap for fast lookup and an auxiliary data structure for ordering. We choose a doubly linked list, as it provides constant-time lookup, insertion, and deletion. Please refer to the image below.

An overall view of the LRU Cache implementation

Breaking the code down

This post uses Python for the implementation; details about implementations in other languages may be added later.

As we are using a doubly linked list, we need head and tail sentinel nodes in place. If you need more information on how the use of sentinels eases operations on a doubly linked list, refer here.

We first define the structure of a node in the doubly linked list.

We store the key, the value, and pointers to the previous and next nodes. We need the key so that, on eviction, we can also remove the least recently used entry from the hashmap. Once we have the Node class in place, we can move on to initializing our LRUCache class.
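A minimal sketch of such a Node class might look like this (the field names are my own choice):

```python
class Node:
    """A doubly linked list node. The key is stored alongside the value
    so that, when this node is evicted from the tail of the list, we can
    also delete its entry from the hashmap."""

    def __init__(self, key=0, value=0):
        self.key = key
        self.val = value
        self.prev = None
        self.next = None
```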

  • map is initialized to a defaultdict. A defaultdict helps prevent a KeyError by initializing missing keys using the factory passed to its constructor; in our implementation, every new entry is initialized to an empty Node object. We track the number of stored pairs in a size variable.
  • head and tail are sentinels; the next pointer of head points to tail, while the prev pointer of tail points to head. For more on why this is done, check the references.
  • The next part of the code is the get method of the LRU Cache API.
  • If the linked list has no elements or the key is not found in the map, we return -1.
  • If the key exists, we first delete the node from its current position, then move it to the front to mark it as the most recently used (key, value) pair.
  • For the put method, we first check whether the key exists in the map. If it does, this is an update.
  • For an update, we set the val attribute on the node reference and move the node to the front, just as we did in get.
  • For a new entry, we first check whether the list is full. If it is, we delete the last entry, which is the LRU pair; we can access it through the prev pointer of the tail sentinel.
  • To add a new node, we create one and move it to the front, introducing the new data entry into the cache.
  • To move a node to the front, we make use of the head sentinel.
  • We first store a reference to the current first node in head_next and use it to update the pointers of node.
  • To delete a node, we use its prev and next pointers to reach the neighbouring nodes and update their respective pointers.
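Putting the steps above together, one possible sketch of the full class looks like this (helper names such as _move_to_front and _delete are my own; the overall structure follows the official LeetCode solution’s hashmap + doubly-linked-list approach):

```python
from collections import defaultdict

class Node:
    def __init__(self, key=0, value=0):
        self.key = key
        self.val = value
        self.prev = None
        self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.size = 0
        self.map = defaultdict(Node)  # key -> node reference
        # head and tail are sentinels; real nodes live between them
        self.head = Node()
        self.tail = Node()
        self.head.next = self.tail
        self.tail.prev = self.head

    def _move_to_front(self, node):
        # Insert node right after the head sentinel (most recently used).
        head_next = self.head.next
        self.head.next = node
        node.prev = self.head
        node.next = head_next
        head_next.prev = node

    def _delete(self, node):
        # Unlink node via its prev/next pointers in O(1).
        node.prev.next = node.next
        node.next.prev = node.prev

    def get(self, key):
        if key not in self.map:
            return -1                 # cache miss
        node = self.map[key]
        self._delete(node)
        self._move_to_front(node)     # now the most recently used
        return node.val

    def put(self, key, value):
        if key in self.map:
            node = self.map[key]      # update of an existing entry
            node.val = value
            self._delete(node)
            self._move_to_front(node)
            return
        if self.size == self.capacity:
            lru = self.tail.prev      # least recently used node
            self._delete(lru)
            del self.map[lru.key]     # this is why Node stores the key
            self.size -= 1
        node = self.map[key]          # defaultdict creates an empty Node
        node.key, node.val = key, value
        self._move_to_front(node)
        self.size += 1
```

Both get and put touch only a constant number of pointers and one hashmap entry, giving the O(1) operations we set out to achieve.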

Complete Code

This code is heavily derived from the official LeetCode solution. The motive of this post was to provide more intuition into coming up with such solutions, the trade-offs involved in the design, and how to break the code up so as to understand it better.

Official leetcode solution : https://leetcode.com/problems/lru-cache/solution/

Official leetcode discussion : https://leetcode.com/problems/lru-cache/discuss/45911/Java-Hashtable-%2B-Double-linked-list-(with-a-touch-of-pseudo-nodes)

Disclaimer!

I’m a beginner in programming and I’m still learning. There is a lot of improvement that can be done and I’ll be happy to receive comments in the section below.

