Low-Level Design · Intermediate

LLD Case Study: Thread-Safe LRU Cache

Design an O(1) LRU cache using HashMap + doubly linked list. Covers the exact data structure invariants, thread-safe implementation with per-key segmented locking, TTL eviction, singleflight to prevent thundering herd, and the step-by-step path from a single-server cache to a distributed Memcached-style tier. The canonical LLD problem that separates engineers who understand data structure composition from those who just memorize it.

45 min read · 10 sections · 7 interview questions
Tags: LRU Cache, Doubly Linked List, Hash Map, Thread Safety, O(1), LLD, Concurrency, Lock Granularity, Cache Eviction, Singleflight, TTL, Segmented Locking, Cache Invalidation, Distributed Cache, Loading Cache

Why LRU Cache Is the Canonical LLD Interview Problem

LRU cache appears in more FAANG coding and LLD rounds than almost any other problem — not because it's intrinsically hard, but because it simultaneously tests data structure composition (why doubly linked list + hash map?), invariant reasoning (what must always be true after every operation?), concurrency design (how do you avoid serializing all access?), and extensibility (how do you add TTL or a loading function without rewriting the core?).

The naive answer — "use an OrderedDict" — fails the invariant and extensibility questions immediately. Interviewers want to see you derive the data structure from first principles: O(1) get requires O(1) lookup (hash map), O(1) eviction requires knowing the least-recently-used node without scanning (tail pointer), and O(1) move-to-head requires O(1) pointer surgery (doubly linked list, not singly linked). Each constraint forces the next design decision.

The candidate gap: A mid-level engineer gets a working LRU. A senior engineer adds thread safety and explains why per-segment locking beats a global lock. A staff engineer adds singleflight for thundering herd prevention, TTL with background eviction, and maps the path to a distributed tier without changing the public interface.

IMPORTANT

What Earns Each Level

6/10: Implements LRU with OrderedDict or a simple hash map with a linked list that has bugs on edge cases (single-element cache, duplicate puts). Cannot explain why doubly-linked vs singly-linked.

8/10: Clean HashMap + doubly linked list. Correct invariants (map size equals list node count, head is most recent, tail is least recent, capacity never exceeded). Handles put() for existing keys (update value + move to head). Adds a global lock for thread safety and knows it's a bottleneck.

10/10: Segmented LRU with N segments (hash key → segment). Each segment has its own map, sentinel head/tail, and lock — O(1) amortized with bounded contention. Explains singleflight for thundering herd prevention. Adds TTL via a priority queue (heap on expiry timestamp) with a lazy eviction strategy. Maps the distributed extension to consistent hashing across nodes with a write-through or write-behind policy. States ALL invariants explicitly at the start.

SEDIE Walkthrough for LRU Cache

01

Scope

O(1) get and put with predictable LRU eviction. Thread-safe under concurrent read/write. Capacity-bounded: oldest entry evicted when full. Optional: TTL per entry, loader function for cache-aside, singleflight for duplicate misses.

02

Extract Entities

Core entities: LRUCache (orchestrator), Node (doubly linked list node owning key, value, prev, next, and optional expiry). Service objects: EvictionPolicy (interface for LRU, LFU swap), Loader (interface for cache-aside pattern), Singleflight (dedup in-flight requests per key).
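The article names the EvictionPolicy service object but does not spell out its interface. One way to sketch it, with LRU as the default strategy behind the interface (the method names record_access and select_victim are illustrative, not from this article):

```python
from collections import OrderedDict
from typing import Protocol, TypeVar, runtime_checkable

K = TypeVar("K")


@runtime_checkable
class EvictionPolicy(Protocol[K]):
    """Hypothetical interface for swappable eviction strategies (LRU, LFU)."""
    def record_access(self, key: K) -> None: ...
    def select_victim(self) -> K: ...


class LRUPolicy:
    """LRU strategy behind the interface; an LFU variant could be swapped in."""
    def __init__(self) -> None:
        self._order: "OrderedDict[str, None]" = OrderedDict()

    def record_access(self, key: str) -> None:
        self._order.pop(key, None)
        self._order[key] = None          # most recently used moves to the end

    def select_victim(self) -> str:
        return next(iter(self._order))   # least recently used sits at the front
```

The point of the abstraction is that the cache orchestrator depends only on the two-method protocol, so swapping LRU for LFU touches no cache code.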

03

Design Classes

LRUCache owns: capacity, HashMap[key → Node], sentinel head Node (most recent), sentinel tail Node (least recent), and lock(s). Sentinel nodes eliminate edge cases — head.next is MRU, tail.prev is LRU. Never null-check nodes during pointer surgery.

04

Implement Critical Methods

get(key): map lookup + move-to-head if found, all under lock. put(key, value): if key exists, update + move-to-head; else insert at head + evict tail if over capacity. All under lock. move_to_head(node) and evict_tail() are private helpers — they must never be called without the lock held.

05

Extensibility

Add TTL by storing expiry on Node and lazy-checking on get(). Add singleflight by keying in-flight Futures per key. Add segmented locking by partitioning keys across N independent (map, list, lock) triples. None of these require touching the Node or the core pointer-surgery logic.

Data Structure Composition — Why Doubly Linked List + HashMap

[Diagram: hash-map entries point directly to doubly-linked-list nodes; head.next is the MRU node, tail.prev is the LRU node.]

Invariants — State Them Before Coding

The invariant question is the most reliable signal that separates strong candidates. State these at the start, before writing any code, and refer back to them when explaining each operation:

  1. Size invariant: len(map) == count of non-sentinel nodes in list. A node exists in the list if and only if its key exists in the map. They are always updated together.

  2. Recency invariant: head.next is the most recently used node; tail.prev is the least recently used node.

  3. Capacity invariant: After every put(), len(map) <= capacity. If capacity was reached, exactly one node (the tail.prev) was evicted before the new node was inserted at head.

  4. Bidirectionality invariant: For any non-sentinel node N: N.prev.next == N and N.next.prev == N. Pointer surgery that violates this corrupts the list silently.

  5. Thread safety invariant: Invariants 1–4 hold as observed by any thread. No thread can observe the map and list in an inconsistent intermediate state (e.g., key in map but node not yet in list).

Full Thread-Safe LRU Cache Implementation

lru_cache.py (Python)
from __future__ import annotations
import threading
from typing import Optional, TypeVar, Generic, Callable
from datetime import datetime, timedelta

K = TypeVar("K")
V = TypeVar("V")


class Node(Generic[K, V]):
    __slots__ = ("key", "value", "prev", "next", "expires_at")

    def __init__(self, key: K, value: V, ttl_seconds: Optional[float] = None):
        self.key = key
        self.value = value
        self.prev: Optional[Node[K, V]] = None
        self.next: Optional[Node[K, V]] = None
        self.expires_at: Optional[datetime] = (
            datetime.utcnow() + timedelta(seconds=ttl_seconds)
            if ttl_seconds else None
        )

    def is_expired(self) -> bool:
        return self.expires_at is not None and datetime.utcnow() > self.expires_at


class LRUCache(Generic[K, V]):
    """
    Thread-safe LRU cache with optional TTL and loader support.

    Invariants:
      1. len(map) == node count between sentinels
      2. head.next is MRU; tail.prev is LRU
      3. len(map) <= capacity after every put()
      4. N.prev.next == N and N.next.prev == N for all nodes
    """

    def __init__(
        self,
        capacity: int,
        loader: Optional[Callable[[K], V]] = None,
        ttl_seconds: Optional[float] = None,
    ):
        if capacity < 1:
            raise ValueError("Capacity must be at least 1")
        self._capacity = capacity
        self._loader = loader
        self._ttl = ttl_seconds
        self._map: dict[K, Node[K, V]] = {}
        self._lock = threading.Lock()

        # Sentinel nodes eliminate null-check edge cases.
        # Node.__new__ bypasses __init__, so sentinels carry no key/value/expiry.
        self._head: Node = Node.__new__(Node)  # type: ignore[assignment]
        self._tail: Node = Node.__new__(Node)  # type: ignore[assignment]
        self._head.prev = None
        self._head.next = self._tail
        self._tail.prev = self._head
        self._tail.next = None

    # ── Public API ──────────────────────────────────────────────────────────

    def get(self, key: K) -> Optional[V]:
        with self._lock:
            node = self._map.get(key)
            if node is None:
                return self._load(key)   # cache miss
            if node.is_expired():
                self._remove_node(node)
                del self._map[key]
                return self._load(key)
            self._move_to_head(node)
            return node.value

    def put(self, key: K, value: V) -> None:
        with self._lock:
            node = self._map.get(key)
            if node:
                node.value = value
                if self._ttl:
                    node.expires_at = datetime.utcnow() + timedelta(seconds=self._ttl)
                self._move_to_head(node)
            else:
                new_node = Node(key, value, self._ttl)
                self._map[key] = new_node
                self._insert_at_head(new_node)
                if len(self._map) > self._capacity:
                    evicted = self._evict_tail()
                    del self._map[evicted.key]

    def delete(self, key: K) -> bool:
        with self._lock:
            node = self._map.pop(key, None)
            if node:
                self._remove_node(node)
                return True
            return False

    def __len__(self) -> int:
        with self._lock:
            return len(self._map)

    # ── Private helpers (must always be called with _lock held) ─────────────

    def _insert_at_head(self, node: Node) -> None:
        """O(1): insert node immediately after sentinel head."""
        node.prev = self._head
        node.next = self._head.next
        self._head.next.prev = node   # invariant 4
        self._head.next = node

    def _remove_node(self, node: Node) -> None:
        """O(1): unlink node from list. Caller must delete from map."""
        node.prev.next = node.next
        node.next.prev = node.prev

    def _move_to_head(self, node: Node) -> None:
        """O(1): move existing node to MRU position."""
        self._remove_node(node)
        self._insert_at_head(node)

    def _evict_tail(self) -> Node:
        """O(1): remove and return the LRU node (tail.prev)."""
        lru = self._tail.prev
        self._remove_node(lru)
        return lru

    def _load(self, key: K) -> Optional[V]:
        """Call loader if configured. Lock is held by caller."""
        if self._loader is None:
            return None
        value = self._loader(key)   # NOTE: lock is held during load — use singleflight in prod
        if value is not None:
            new_node = Node(key, value, self._ttl)
            self._map[key] = new_node
            self._insert_at_head(new_node)
            if len(self._map) > self._capacity:
                evicted = self._evict_tail()
                del self._map[evicted.key]
        return value


# ── Segmented LRU — eliminates single-lock bottleneck ──────────────────────

class SegmentedLRUCache(Generic[K, V]):
    """
    Partitions keys across N independent LRU segments.
    Concurrent gets/puts on different segments never block each other.
    """

    def __init__(self, capacity: int, num_segments: int = 16,
                 ttl_seconds: Optional[float] = None):
        segment_capacity = max(1, capacity // num_segments)
        self._segments = [
            LRUCache(segment_capacity, ttl_seconds=ttl_seconds)
            for _ in range(num_segments)
        ]
        self._num_segments = num_segments

    def _segment(self, key: K) -> LRUCache[K, V]:
        return self._segments[hash(key) % self._num_segments]

    def get(self, key: K) -> Optional[V]:
        return self._segment(key).get(key)

    def put(self, key: K, value: V) -> None:
        self._segment(key).put(key, value)

    def delete(self, key: K) -> bool:
        return self._segment(key).delete(key)

Failure Modes — What Breaks and Why

| Failure Mode | Root Cause | Symptom | Fix |
| --- | --- | --- | --- |
| Race on check-then-set (two threads insert the same key) | Non-atomic check-then-set outside the lock | Corrupted map/list invariant; node in map but not in list | All check-and-set inside a single lock acquisition |
| Memory leak: detached nodes | Node removed from list but key still in map (or vice versa) | len(map) grows unboundedly past capacity | Always update map and list together; never one without the other |
| Thundering herd on cold cache | N concurrent misses all call the loader | N × loader latency, N × backend requests | Singleflight: deduplicate in-flight loads per key so only one thread calls the loader |
| Stale TTL entries served | Expiry checked on put but not on get | Expired value returned to caller | Check node.is_expired() inside get() and evict lazily |
| Global lock bottleneck | Single threading.Lock() on entire cache | All threads serialize at peak throughput | Segmented LRU: N segments, each with its own lock; contention bounded per segment |
| Singly-linked eviction is O(n) | Singly linked list has no back-pointer, so removing a node requires scanning for its predecessor | Eviction degrades to O(n) scans | Doubly linked list: O(1) _remove_node via prev/next pointers |

Singleflight — Preventing Thundering Herd

The most common production failure mode for an LRU cache is the thundering herd: 1,000 concurrent requests for the same uncached key all miss, all call the database, and the database falls over. The LRU cache is cold; every thread independently checks, misses, and fires a backend request.

Singleflight solves this by tracking in-flight requests per key. When the first thread misses and starts loading, it registers a Future under that key. All subsequent threads that miss for the same key receive the same Future. When the first thread's load completes, all waiting threads get the result simultaneously. The database sees exactly 1 request instead of 1,000.

Implementation in the _load() method: before calling self._loader(key), check a shared in_flight: dict[K, Future]. If an entry exists, wait for it. If not, create a Future, register it, release the cache lock (!), call the backend, then reacquire the cache lock to insert the result. The key insight: you must release the cache lock before calling an external system, or every cache operation stalls behind the backend call and a slow backend takes the whole cache down with it.

Python's concurrent.futures.Future, or a threading.Event per key, is enough to implement singleflight in-process. For distributed systems, a Redis lock taken with SETNX plus a TTL achieves the same deduplication at the cluster level.
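A minimal in-process singleflight can be sketched as follows. The class name and the do() API are borrowed from Go's singleflight package and are not part of this article's implementation; the sketch assumes the loader is called with no cache lock held:

```python
import threading
from concurrent.futures import Future
from typing import Callable, Dict, Generic, TypeVar

K = TypeVar("K")
V = TypeVar("V")


class Singleflight(Generic[K, V]):
    """Deduplicate concurrent loads: only one caller per key runs the loader."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._in_flight: Dict[K, "Future[V]"] = {}

    def do(self, key: K, load: Callable[[], V]) -> V:
        with self._lock:
            fut = self._in_flight.get(key)
            if fut is not None:
                leader = False           # someone else is already loading this key
            else:
                fut = Future()
                self._in_flight[key] = fut
                leader = True
        if not leader:
            return fut.result()          # block until the leader finishes
        try:
            value = load()               # backend call, made with no lock held
            fut.set_result(value)        # wake every waiting follower at once
            return value
        except Exception as exc:
            fut.set_exception(exc)       # followers see the same failure
            raise
        finally:
            with self._lock:
                self._in_flight.pop(key, None)
```

The cache's _load() would wrap its loader call in sf.do(key, ...) so that N concurrent misses for the same key produce exactly one backend request.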

Extension Path — Local to Distributed

The local LRU cache works for a single process. With multiple servers, two problems emerge: (1) each server has a different view of what's cached, so cache hit rates fall dramatically, and (2) a write on server A doesn't invalidate the cached copy on server B.

Level 1 — Sticky routing: Route requests for key K always to the same server using consistent hashing. Each server's local LRU cache becomes the authoritative cache for its key range. Simple, but a server failure causes a cold-start storm for that key range.
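Sticky routing can be sketched with a minimal consistent-hash ring (the class and method names are illustrative, not from this article; virtual nodes smooth the key distribution across servers):

```python
import bisect
import hashlib
from typing import List, Tuple


class ConsistentHashRing:
    """Map each key to a server; removing a server remaps only that server's keys."""

    def __init__(self, servers: List[str], vnodes: int = 100) -> None:
        self._ring: List[Tuple[int, str]] = []
        for server in servers:
            for i in range(vnodes):        # virtual nodes smooth the distribution
                self._ring.append((self._hash(f"{server}#{i}"), server))
        self._ring.sort()

    @staticmethod
    def _hash(s: str) -> int:
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def server_for(self, key: str) -> str:
        h = self._hash(key)
        # first virtual node clockwise from the key's hash, wrapping at the end
        idx = bisect.bisect_right(self._ring, (h, "")) % len(self._ring)
        return self._ring[idx][1]
```

Because keys route deterministically, each server's local LRU stays authoritative for its key range; the cold-start storm on server failure is exactly the keys that were mapped to the failed server.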

Level 2 — Shared external cache (Memcached/Redis): Replace the local cache with a client that talks to a shared Memcached cluster. get(key) and put(key, value) become network calls. The LRU eviction policy lives inside Memcached. The local LRUCache class becomes a thin client. Advantage: all servers share the same cache. Disadvantage: every access pays a network round trip (an in-process hit is sub-microsecond; a Memcached round trip is typically around 0.5–1 ms), and the shared cache becomes a dependency that can fail or saturate.

Level 3 — Two-tier caching: L1 = local LRU cache (very hot keys, sub-millisecond), L2 = Memcached (shared, millisecond), L3 = database. get() tries L1, then L2, then L3. Invalidation propagates via a pub/sub channel — when L3 is written, a message fires, L2 deletes the entry, L1 entries expire via TTL.

The interface stays the same: the get(key) and put(key, value) signatures don't change from local to distributed. The implementation behind the interface grows in complexity. This is the extensibility argument for keeping the storage layer abstracted behind a Store interface from day one.
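The two-tier read path described above can be sketched against a Store abstraction like this (Store, DictStore, and TwoTierCache are illustrative names; DictStore stands in for the local LRU or a Memcached client):

```python
from typing import Callable, Dict, Optional, Protocol


class Store(Protocol):
    def get(self, key: str) -> Optional[str]: ...
    def put(self, key: str, value: str) -> None: ...


class DictStore:
    """Stand-in for a local LRU cache or a Memcached client in this sketch."""
    def __init__(self) -> None:
        self._d: Dict[str, str] = {}
    def get(self, key: str) -> Optional[str]:
        return self._d.get(key)
    def put(self, key: str, value: str) -> None:
        self._d[key] = value


class TwoTierCache:
    """L1 = local in-process cache, L2 = shared cache, L3 = database loader."""

    def __init__(self, l1: Store, l2: Store, load_l3: Callable[[str], Optional[str]]):
        self._l1, self._l2, self._load_l3 = l1, l2, load_l3

    def get(self, key: str) -> Optional[str]:
        value = self._l1.get(key)
        if value is not None:
            return value                  # L1 hit: no network round trip
        value = self._l2.get(key)
        if value is None:
            value = self._load_l3(key)    # L3: the system of record
            if value is not None:
                self._l2.put(key, value)  # populate the shared tier
        if value is not None:
            self._l1.put(key, value)      # populate the local tier
        return value
```

The caller still sees only get(key): swapping DictStore for a real Memcached client changes nothing above this interface, which is the extensibility argument in the paragraph above.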

TIP

Interview Delivery — What to Say and When

Before coding: State the 5 invariants explicitly. "Let me state the invariants first: (1) map size equals node count, (2) head.next is MRU, tail.prev is LRU, (3) after every put, size ≤ capacity, (4) N.prev.next == N for all N, (5) all four hold as observed by any thread." Interviewers at senior+ level are watching for this.

On data structure choice: "Doubly linked list because we need O(1) remove of any node. Singly linked would require scanning for the predecessor — O(n). Hash map because get() must be O(1). The map stores node pointers directly, so move-to-head requires no search."

On concurrency: "Global lock is correct but serializes everything. For a hot cache, I'd use segmented locking: hash the key to one of 16 segments, each with its own lock. Concurrent accesses for different segments never block each other. Under a uniform key distribution that's up to 16× the throughput."

On TTL: "I add an expires_at field to Node. In get(), after the map lookup, I check node.is_expired() and evict lazily if so. A background thread can do eager eviction via a min-heap ordered by expiry, but lazy eviction is simpler and correct for most use cases."
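The eager variant mentioned above can be sketched with a min-heap keyed on expiry (TTLEvictor and its method names are illustrative; the delete callable has the semantics of LRUCache.delete, and a background thread would call evict_due periodically):

```python
import heapq
import threading
from typing import Callable, Hashable, List, Tuple


class TTLEvictor:
    """Eager TTL eviction: a min-heap ordered by expiry timestamp."""

    def __init__(self, delete_if_expired: Callable[[Hashable], bool]) -> None:
        # The callable must re-check the entry's current expiry before deleting:
        # a key re-put with a longer TTL leaves a stale entry in this heap.
        self._delete_if_expired = delete_if_expired
        self._heap: List[Tuple[float, Hashable]] = []
        self._lock = threading.Lock()

    def track(self, key: Hashable, expires_at: float) -> None:
        with self._lock:
            heapq.heappush(self._heap, (expires_at, key))

    def evict_due(self, now: float) -> int:
        """Pop every entry whose expiry is <= now; return how many were deleted."""
        evicted = 0
        with self._lock:
            while self._heap and self._heap[0][0] <= now:
                _, key = heapq.heappop(self._heap)
                if self._delete_if_expired(key):
                    evicted += 1
        return evicted
```

Lazy eviction alone is correct but lets expired entries occupy capacity until their next get(); the heap bounds that waste at the cost of a background thread.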

Staff signal: Mention singleflight. "Without singleflight, a cold cache under load produces N backend requests per unique key — thundering herd. With singleflight, the first miss fires one backend request; all other threads in flight for the same key wait for that result. This is how production caches like Caffeine (Java) and groupcache (Go) are built."

Interview Questions
