
Design and implement a data structure for a Least Recently Used (LRU) cache. It should support two operations: get and put.

  1. get(key) - Return the value of the key if the key exists in the cache (values are always positive); otherwise return -1.
  2. put(key, value) - Update the value if the key exists; otherwise insert the key-value pair. When the cache reaches its capacity, it should invalidate the least recently used item before inserting the new item.

The cache is initialized with a positive capacity.

Clarifying Questions

  1. Capacity: Will the capacity of the cache always be a positive integer?
    • Yes.
  2. Thread Safety: Do we need to consider thread safety for the cache operations?
    • No, you can assume single-threaded access.
  3. Data Types: Will the keys and values be integers only?
    • Yes.

Strategy

We can solve this problem with a combination of a hashmap (a dictionary in Python) and a doubly linked list: the hashmap gives O(1) lookup of each key's node, while the linked list keeps the entries ordered from most to least recently used, so both get and put run in constant time (see the sketch after the steps below).

Steps:

  1. When a key is accessed using get(), move its node to the front of the list to mark it as the most recently used.
  2. When a new key-value pair is added using put(), insert it at the front of the list. If the cache is over capacity, remove the least recently used item from the end of the list.
  3. When put() is called with an existing key, update its value and move its node to the front of the list.
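
With dummy head and tail sentinels, the list always has the shape

head <-> most recently used <-> ... <-> least recently used <-> tail

so inserting right after head and removing right before tail never require special-casing an empty or single-element list. The hashmap maps each key to its node in this list.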

Code

class DLinkedNode:
    def __init__(self, key=0, value=0):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None

class LRUCache:

    def __init__(self, capacity: int):
        self.cache = {}
        self.capacity = capacity
        self.size = 0
        # Initialize the dummy head and tail nodes
        self.head, self.tail = DLinkedNode(), DLinkedNode()
        self.head.next = self.tail
        self.tail.prev = self.head

    def _add_node(self, node):
        # Insert node right after the dummy head (the most recently used position).
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    def _remove_node(self, node):
        # Unlink node from the list by connecting its two neighbors.
        prev_node = node.prev
        next_node = node.next
        prev_node.next = next_node
        next_node.prev = prev_node

    def _move_to_head(self, node):
        # Re-insert node at the head to mark it as most recently used.
        self._remove_node(node)
        self._add_node(node)

    def _pop_tail(self):
        # Detach and return the least recently used node (just before the dummy tail).
        res = self.tail.prev
        self._remove_node(res)
        return res

    def get(self, key: int) -> int:
        node = self.cache.get(key)
        if node is None:
            return -1
        # Accessing the key makes it the most recently used.
        self._move_to_head(node)
        return node.value

    def put(self, key: int, value: int) -> None:
        node = self.cache.get(key)
        if node is None:
            # Insert a new node at the head (most recently used position).
            new_node = DLinkedNode(key, value)
            self.cache[key] = new_node
            self._add_node(new_node)
            self.size += 1
            if self.size > self.capacity:
                # Evict the least recently used entry.
                tail = self._pop_tail()
                del self.cache[tail.key]
                self.size -= 1
        else:
            # Key exists: update its value and mark it as recently used.
            node.value = value
            self._move_to_head(node)
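
As a quick sanity check, here is the classic operation sequence on a cache of capacity 2, with the expected results in comments:

cache = LRUCache(2)
cache.put(1, 1)        # cache: {1=1}
cache.put(2, 2)        # cache: {1=1, 2=2}
print(cache.get(1))    # returns 1; key 1 is now most recently used
cache.put(3, 3)        # evicts key 2 (least recently used)
print(cache.get(2))    # returns -1 (not found)
cache.put(4, 4)        # evicts key 1
print(cache.get(1))    # returns -1 (not found)
print(cache.get(3))    # returns 3
print(cache.get(4))    # returns 4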

Time Complexity

Both get and put run in O(1) time: the hashmap provides constant-time lookup of a key's node, and the doubly linked list supports constant-time insertion and removal. Space complexity is O(capacity) for the hashmap and the list. This design ensures that the LRU cache operations are efficient and meet the requirements of the problem.
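
As an aside, the same behavior can be obtained with far less code using collections.OrderedDict from Python's standard library, which keeps keys in insertion order and supports O(1) reordering via move_to_end() and eviction via popitem(last=False). A minimal sketch of this variant (the class name LRUCacheOD is ours):

from collections import OrderedDict

class LRUCacheOD:
    def __init__(self, capacity: int):
        self.cache = OrderedDict()
        self.capacity = capacity

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)  # mark as most recently used
        return self.cache[key]

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used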
