LeetCode Problem 146: LRU Cache
Word count: 3279
Updated: 2025-10-25 17:21:16

Alright, let's dive into a detailed explanation of LeetCode Problem 146: LRU Cache.


1. Problem Description

Design and implement a data structure that satisfies the constraints of an LRU (Least Recently Used) cache.

Implement the LRUCache class:

  • LRUCache(int capacity): Initializes the LRU cache with a positive integer capacity.
  • int get(int key): Returns the value of the key if it exists in the cache, otherwise returns -1.
  • void put(int key, int value): Updates the value of the key if it already exists. Otherwise, inserts the key-value pair into the cache.
    If the insertion causes the number of keys to exceed the capacity, the least recently used key should be evicted.

The get and put operations must run in O(1) average time complexity.

Example:

Input
["LRUCache", "put", "put", "get", "put", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [4, 4], [1], [3], [4]]
Output
[null, null, null, 1, null, -1, null, -1, 3, 4]

Explanation
LRUCache lRUCache = new LRUCache(2);
lRUCache.put(1, 1); // cache is {1=1}
lRUCache.put(2, 2); // cache is {1=1, 2=2}
lRUCache.get(1);    // returns 1 (key=1 accessed, becomes most recent)
lRUCache.put(3, 3); // evicts key=2, cache is {1=1, 3=3}
lRUCache.get(2);    // returns -1 (not found)
lRUCache.put(4, 4); // evicts key=1, cache is {4=4, 3=3}
lRUCache.get(1);    // returns -1
lRUCache.get(3);    // returns 3
lRUCache.get(4);    // returns 4

2. Understanding the Problem

The core of the LRU caching mechanism is:

  • The cache capacity is limited. When full, the least recently accessed item is removed.
  • Access includes both get and put: a put on an existing key also counts as an access and must mark that key as most recently used.
  • O(1) time complexity is required for both get and put.

3. Thought Process

3.1 Why not just a hash table?

A hash table provides O(1) lookups but carries no notion of access order.
With a hash table alone, finding the oldest entry to evict would require scanning every entry, which is O(n) and violates the requirement.
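To make the bottleneck concrete, here is a hypothetical hash-table-only sketch (names like NaiveLRUCache are illustrative, not part of the solution): each entry carries a logical timestamp, and eviction must scan the whole map for the smallest one.

```cpp
#include <climits>
#include <unordered_map>
#include <utility>

// Naive cache: key -> (value, last-access tick). get/put touch in O(1),
// but eviction scans every entry to find the oldest tick -- O(n).
class NaiveLRUCache {
    std::unordered_map<int, std::pair<int, long long>> map;
    long long tick = 0;
    int capacity;
public:
    explicit NaiveLRUCache(int cap) : capacity(cap) {}

    int get(int key) {
        auto it = map.find(key);
        if (it == map.end()) return -1;
        it->second.second = ++tick;   // O(1) lookup and touch
        return it->second.first;
    }

    void put(int key, int value) {
        auto it = map.find(key);
        if (it != map.end()) { it->second = {value, ++tick}; return; }
        if ((int)map.size() == capacity) {
            // The O(n) scan for the least recently used key -- the bottleneck
            long long oldest = LLONG_MAX;
            int victim = -1;
            for (auto& [k, vt] : map)
                if (vt.second < oldest) { oldest = vt.second; victim = k; }
            map.erase(victim);
        }
        map[key] = {value, ++tick};
    }
};
```

This behaves correctly, but put degrades to O(n) once the cache is full, which is exactly what the doubly linked list below eliminates.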

3.2 How to track access order?

We need a data structure that allows quick reordering of elements. A common choice is a doubly linked list:

  • Fix a convention: the most recently accessed node sits at the head of the list, and the least recently accessed node at the tail.
  • When a key is accessed, its corresponding node is moved to the head (O(1) time, if the node's location is known).
  • When removing the least recently accessed node, simply delete the tail node (O(1)).

3.3 How to quickly locate a list node?

The hash table's value does not directly store the data value, but rather a pointer (or reference) to the corresponding node in the linked list.
This way:

  • A key can be used to find the linked list node in O(1) via the hash table.
  • Nodes can be moved or deleted in O(1) within the list.

Data Structure Design:

  • Hash table: unordered_map<int, Node*>
  • Doubly linked list: Node { key, value, prev, next }
  • Maintain dummy head and dummy tail nodes for easier boundary handling.

4. Detailed Steps

4.1 Node Definition

struct DLinkedNode {
    int key, value;
    DLinkedNode* prev;
    DLinkedNode* next;
    DLinkedNode() : key(0), value(0), prev(nullptr), next(nullptr) {}
    DLinkedNode(int k, int v) : key(k), value(v), prev(nullptr), next(nullptr) {}
};

4.2 Class Structure

class LRUCache {
private:
    unordered_map<int, DLinkedNode*> cache;
    DLinkedNode* head; // dummy head; head->next is the most recently used node
    DLinkedNode* tail; // dummy tail; tail->prev is the least recently used node
    int size;
    int capacity;

    // Move a node to the head (indicating recent access)
    void moveToHead(DLinkedNode* node) {
        removeNode(node);
        addToHead(node);
    }

    // Add a node to the head
    void addToHead(DLinkedNode* node) {
        node->prev = head;
        node->next = head->next;
        head->next->prev = node;
        head->next = node;
    }

    // Remove a node (does not free memory, as it might be reused)
    void removeNode(DLinkedNode* node) {
        node->prev->next = node->next;
        node->next->prev = node->prev;
    }

    // Remove the tail node and return it (for eviction)
    DLinkedNode* removeTail() {
        DLinkedNode* node = tail->prev;
        removeNode(node);
        return node;
    }

public:
    LRUCache(int capacity) : size(0), capacity(capacity) {
        // Create dummy head and tail nodes
        head = new DLinkedNode();
        tail = new DLinkedNode();
        head->next = tail;
        tail->prev = head;
    }

    ~LRUCache() {
        // Free every node, including the two dummies, to avoid a leak
        DLinkedNode* cur = head;
        while (cur) {
            DLinkedNode* next = cur->next;
            delete cur;
            cur = next;
        }
    }
    
    int get(int key) {
        if (!cache.count(key)) return -1;
        DLinkedNode* node = cache[key];
        moveToHead(node); // Move to head as recently accessed
        return node->value;
    }
    
    void put(int key, int value) {
        if (cache.count(key)) {
            // Key exists, update value and move to head
            DLinkedNode* node = cache[key];
            node->value = value;
            moveToHead(node);
        } else {
            // Create new node
            DLinkedNode* node = new DLinkedNode(key, value);
            cache[key] = node;
            addToHead(node);
            size++;
            // If capacity is exceeded, remove the least recently used
            if (size > capacity) {
                DLinkedNode* removed = removeTail();
                cache.erase(removed->key);
                delete removed;
                size--;
            }
        }
    }
};

5. Complexity Analysis

  • Time Complexity: Both get and put are O(1).
    Because hash table operations are O(1), and linked list insertion, deletion, and moving (with a known node pointer) are also O(1).
  • Space Complexity: O(capacity), as the hash table and linked list store at most capacity elements.

6. Key Points Summary

  1. Doubly Linked List maintains access order (most recent at the head, least recent at the tail).
  2. Hash Table provides O(1) mapping from key to list node.
  3. Dummy head and tail nodes simplify list operations (no need for null checks).
  4. On access (get or put for an existing key), move the node to the head.
  5. When inserting a new key, if capacity is exceeded, delete the tail node and remove its entry from the hash table.

This completes the implementation of a fully functional LRU cache data structure that meets all requirements.
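For reference, the same hash-map-plus-list design can be sketched on top of the standard library: std::list is a doubly linked list, and list::splice relinks a known node in O(1), playing the role of moveToHead above. (The class and member names here are illustrative, not from the solution code.)

```cpp
#include <list>
#include <unordered_map>
#include <utility>

class ListLRUCache {
    int capacity;
    std::list<std::pair<int, int>> items;  // front = most recent, back = least recent
    std::unordered_map<int, std::list<std::pair<int, int>>::iterator> index;
public:
    explicit ListLRUCache(int cap) : capacity(cap) {}

    int get(int key) {
        auto it = index.find(key);
        if (it == index.end()) return -1;
        items.splice(items.begin(), items, it->second);  // move to front, O(1)
        return it->second->second;
    }

    void put(int key, int value) {
        auto it = index.find(key);
        if (it != index.end()) {
            it->second->second = value;
            items.splice(items.begin(), items, it->second);
            return;
        }
        if ((int)items.size() == capacity) {
            index.erase(items.back().first);             // evict least recent
            items.pop_back();
        }
        items.emplace_front(key, value);
        index[key] = items.begin();
    }
};
```

This variant trades the hand-rolled node management (and the explicit dummy sentinels) for library-managed allocation; the asymptotic behavior is identical.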
