How to Approach a Low Level Design Interview
The mindset and communication playbook for low level design interviews. Covers what interviewers grade on (SOLID adherence, design patterns, extensibility), time budgeting, recovery patterns, and the failure modes that catch senior candidates in object-oriented design and LLD rounds.
What This Page Is (and Isn't)
This page is the pre-game for an LLD interview: how to think about the problem before drawing a single class. The companion page, How to Design at LLD, covers the execution: turning a blank whiteboard into a defensible class diagram with concrete patterns and code.
The split exists because strong engineers fail LLD rounds not because they don't know SOLID or design patterns — they ship object-oriented code every day — but because they make the wrong moves at the meta level: they jump to a class diagram before clarifying scope, name patterns without naming the requirement that forced the choice, or run out of time on the wrong method (writing a toString() instead of park()).
Read this page to understand the signals interviewers grade on, the traps that catch senior candidates, and the recovery patterns when you realize at minute 25 that your inheritance hierarchy is wrong. Then read the design page for the mechanical playbook.
The asymmetric truth: in a 45-minute LLD interview, the interviewer cannot evaluate your full OOP knowledge. They sample your judgment: which entity becomes a class, which becomes an attribute, which becomes a service. A candidate who designs a partial system but justifies every decision with SRP, OCP, or a named pattern outperforms a candidate who produces a complete UML but cannot say why PricingStrategy is an interface and not a concrete class.
Why LLD Differs From HLD (and What That Means for Your Approach)
LLD and HLD are graded on different rubrics, and conflating them is a fast way to fail. HLD asks "can you architect a distributed system?" — sharding, replication, caching, queue choice, fan-out at scale. LLD asks "can you decompose a domain into objects that respect SOLID and accommodate change?" — class boundaries, inheritance vs composition, pattern selection, concurrency primitives at the object level.
The practical implication: in HLD you move from numbers to architecture (140K reads/sec → cache layer required). In LLD you move from requirements to entities (the noun "Vehicle" with two variation axes → composition with FormFactor and FuelType fields). HLD candidates who default to "add a load balancer and Cassandra" fail; LLD candidates who default to "use Strategy and Factory" fail. The mode of reasoning is requirement signal → SOLID principle → pattern — not pattern-first.
Concretely, this changes what the interviewer probes. HLD probes: "what's the throughput?", "what fails when Redis goes down?", "how does this shard?". LLD probes: "what would break if we changed the pricing model?" (SRP), "how do you add a new vehicle type?" (OCP), "can MotorcycleSpot replace ParkingSpot wherever it's used?" (LSP), "how do you unit-test the lot without instantiating real pricing?" (DIP via constructor injection). Memorize these probe shapes — they map directly to the SOLID principle the interviewer wants you to demonstrate.
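The LSP probe above ("can MotorcycleSpot replace ParkingSpot wherever it's used?") can be sketched in code. This is a hypothetical illustration, not a canonical design: instead of a subtype that rejects vehicles its parent accepts (which strengthens the precondition and breaks LSP), the fit rule lives in data that one base class checks, so every spot honors the same contract. All names (`SpotSize`, `canFit`) are illustrative.

```java
// LSP-friendly sketch: no MotorcycleSpot subclass. The size rule is data,
// so any code written against ParkingSpot behaves identically for every spot.
enum SpotSize { MOTORCYCLE, COMPACT, LARGE }

class Vehicle {
    final SpotSize requiredSize;
    Vehicle(SpotSize requiredSize) { this.requiredSize = requiredSize; }
}

class ParkingSpot {
    private final SpotSize size;
    private Vehicle occupant;

    ParkingSpot(SpotSize size) { this.size = size; }

    // Same contract for every spot: true iff the spot is free and big enough.
    boolean canFit(Vehicle v) {
        return occupant == null && v.requiredSize.ordinal() <= size.ordinal();
    }

    boolean park(Vehicle v) {
        if (!canFit(v)) return false;
        occupant = v;
        return true;
    }
}

class Demo {
    public static void main(String[] args) {
        ParkingSpot small = new ParkingSpot(SpotSize.MOTORCYCLE);
        System.out.println(small.canFit(new Vehicle(SpotSize.MOTORCYCLE))); // true
        System.out.println(small.canFit(new Vehicle(SpotSize.LARGE)));      // false
    }
}
```

The design choice to defend out loud: a `MotorcycleSpot` subclass that throws on cars would not be substitutable for `ParkingSpot`; encoding the constraint as state keeps substitutability intact.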
One last difference: in HLD you draw boxes that represent services. In LLD you draw classes that represent domain concepts. A service in HLD wraps multiple classes; a class in LLD lives inside one service. Don't conflate the boundaries — drawing a "service-style" box in an LLD interview signals you're solving the wrong problem.
The Five Signals LLD Interviewers Actually Score
Every FAANG-style LLD rubric reduces to these five signals. Memorize them — they tell you what to optimize at every moment:
- Requirements decomposition — do you extract entities (nouns) and behaviors (verbs) systematically, or do you guess a class structure and patch it later?
- SOLID adherence — every design choice should be defensible against SRP, OCP, LSP, ISP, or DIP. Interviewers explicitly probe: "what would break if we changed X?" (SRP), "how do you add Y?" (OCP).
- Pattern recognition with justification — naming Strategy or Observer is not enough; you must name the requirement that forced the pattern ("pricing has 3 algorithms that swap at runtime → Strategy").
- Extensibility reasoning — proactively show how the next requirement (a new vehicle type, a new pricing tier, a new payment method) drops in without modifying existing code.
- Communication and ownership — narrate decisions out loud, propose the agenda, drive the conversation. Silence reads as uncertainty even if you're thinking correctly.
Junior candidates miss signals 1 and 4. Senior candidates lose on 2 and 3 — they apply patterns intuitively but never articulate the principle. Staff+ candidates win on 4 — they make every design choice in service of a future change the interviewer hasn't asked about yet.
The 7 Mindset Rules for LLD Interviews
Rule 1 — Treat the prompt as deliberately ambiguous
'Design a parking lot' or 'design Splitwise' is intentionally vague. The interviewer is testing whether you ask about actors, scale, persistence, concurrency, and pricing rules — or fill in assumptions silently. Silent assumptions are the single most common L5+ failure. Always say: 'Before I draw anything — is this one lot or a network? What vehicle types? Is the system multi-threaded? Is pricing flat-rate or time-based?'
Rule 2 — Nouns become candidates, not commitments
When you scan requirements for nouns, every noun is a candidate class. Some survive (ParkingLot, Vehicle, Ticket); others collapse into attributes (color, license_plate). The exercise is to enumerate candidates, then defend which ones become classes via the 'does it have identity and behavior?' test. Skipping this scan is how candidates miss BookItem-vs-Book or Account-vs-Transaction distinctions.
Rule 3 — Drive the conversation; do not wait to be led
Interviewers expect senior candidates to propose the agenda. At minute 5 say: 'Now that requirements are clear, I'll spend 5 minutes extracting entities, 10 minutes on the class diagram with key relationships, 15 minutes implementing the 2-3 most signal-rich methods, then 5 minutes on extensions — does that work?' This signals ownership and lets the interviewer redirect early if they have a specific deep-dive in mind.
Rule 4 — Every class decision = 1 SOLID principle + 1 pattern (when applicable)
Anchor each design choice to a principle: 'PricingStrategy is an interface because OCP — adding peak-hour pricing must not modify ParkingLot' or 'Ticket and Lending are separate because SRP — Ticket holds state, Lending coordinates the transaction.' Pattern naming follows: 'so this is the Strategy pattern.' Never name a pattern without naming the SOLID principle it serves.
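Rule 4 in code, as a minimal sketch (class names and rates are illustrative, not a canonical answer): `PricingStrategy` is an interface because of OCP, so the peak-hour requirement lands as a new class with zero edits to `ParkingLot`, and constructor injection keeps the lot testable.

```java
// Strategy + OCP sketch: new pricing drops in as a new class;
// ParkingLot never changes.
interface PricingStrategy {
    double fee(long minutes);
}

class HourlyPricing implements PricingStrategy {
    public double fee(long minutes) { return Math.ceil(minutes / 60.0) * 4.0; }
}

// The "next requirement": added without touching any existing class.
class PeakHourPricing implements PricingStrategy {
    public double fee(long minutes) { return Math.ceil(minutes / 60.0) * 7.0; }
}

class ParkingLot {
    private final PricingStrategy pricing;         // DIP: depend on the abstraction

    ParkingLot(PricingStrategy pricing) {          // constructor injection
        this.pricing = pricing;
    }

    double checkoutFee(long minutes) { return pricing.fee(minutes); }
}

class Demo {
    public static void main(String[] args) {
        System.out.println(new ParkingLot(new HourlyPricing()).checkoutFee(90));   // 8.0
        System.out.println(new ParkingLot(new PeakHourPricing()).checkoutFee(90)); // 14.0
    }
}
```

Narrated in the interview: "OCP forces the interface; the pattern that falls out is Strategy."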
Rule 5 — Pick 2-3 methods to implement, not all of them
In 15 minutes of coding time, you can implement 2-3 methods well or 8 methods badly. Pick the signal-rich ones: methods with concurrency (park()), methods with strategy selection (calculateFee()), methods with state transitions (checkout()). Skip getters, setters, and toString(). Decline the trap explicitly: 'I'll implement park() and calculateFee() — getters are boilerplate I'd auto-generate in real code.'
Rule 6 — Surface concurrency, persistence, and testability unprompted
Don't wait for 'how do you make this thread-safe?' — proactively say 'park() needs a lock on the spot because two threads could see the same available spot.' Same for persistence ('in production this state lives in Postgres; here I'm modeling in-memory') and testability ('PricingStrategy as an interface lets me inject a MockPricing in unit tests'). These are L5+ signals that distinguish you from candidates who only design the happy path.
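The lock-granularity point in Rule 6 can be sketched as follows. This is an assumed, simplified model (`Spot`, `tryOccupy`, and the `IllegalStateException` stand-in for a typed lot-full exception are all illustrative): the lock protects one spot, not the whole lot, so two threads racing for the same available spot cannot both claim it, while threads targeting different spots proceed in parallel.

```java
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

class Spot {
    private final ReentrantLock lock = new ReentrantLock(); // per-spot lock, not per-lot
    private boolean occupied;

    // Atomic check-and-set: returns true iff this caller won the spot.
    boolean tryOccupy() {
        lock.lock();
        try {
            if (occupied) return false;
            occupied = true;
            return true;
        } finally {
            lock.unlock();
        }
    }
}

class Lot {
    private final List<Spot> spots;
    Lot(List<Spot> spots) { this.spots = spots; }

    // Scans for a free spot; because each spot's check-and-set is atomic,
    // concurrent callers can park in parallel on different spots.
    Spot park() {
        for (Spot s : spots) {
            if (s.tryOccupy()) return s;
        }
        throw new IllegalStateException("lot full"); // stand-in for a typed exception
    }
}
```

The sentence to say out loud is the comment: "park() locks the spot, not the lot."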
Rule 7 — Watch the clock; finish with extensions
At minute 35 of a 45-minute interview, stop adding classes. Spend the last 5-7 minutes on extensions: 'To add electric-vehicle charging spots, I'd subclass ParkingSpot — zero changes to ParkingLot. To add a reservation system, I'd add a Reservation entity and a ReservationService — Ticket and ParkingSpot are unchanged.' This is where OCP signal is strongest, and most candidates skip it because they've over-invested in the implementation.
The 45-Minute LLD Interview Timeline
A workable budget, matching the agenda Rule 3 proposes out loud: roughly 5 minutes clarifying requirements, 5 minutes extracting entities, 10 minutes on the class diagram and key relationships, 15 minutes implementing the 2-3 signal-rich methods, and the final 5-7 minutes on extensions.
LLD Anti-Patterns That Lose Points (and the Senior Fix)
| Anti-pattern | Why it costs points | Senior fix |
|---|---|---|
| Drawing a class diagram in the first 90 seconds | Signals you skip requirements gathering — explicit L5+ red flag | Spend 5 min asking actors, scale, persistence, concurrency before a single box |
| Conflating Book and BookItem (or Order and OrderLine) | Catalog-vs-instance confusion produces a schema that cannot model multiple copies or partial fulfillment | Run the noun scan and ask: 'is this the catalog entry or a physical instance?' for every domain object |
| Embedding pricing logic inside ParkingSpot.calculateFee() | Violates SRP — pricing changes force ParkingSpot changes; mixes domain entity with service logic | Extract PricingStrategy as an interface; ParkingLot or PricingService composes it |
| Using inheritance for everything (Order extends Cart extends Item) | Deep hierarchies break LSP and create rigid coupling | Apply the is-a vs has-a test; default to composition unless the subtype is a true specialization |
| Naming patterns without naming the requirement | 'Let me add a Factory here' without 'because creation logic varies by vehicle type' reads as pattern-matching | Always pair: requirement → SOLID principle violated → pattern that fixes it → name |
| Returning null or -1 for error cases | Forces every caller to remember error checks; loses information about what failed | Throw typed exceptions: ParkingLotFullException, InvalidVehicleTypeException; show one try/catch at the caller |
| Ignoring concurrency in multi-actor systems | Parking lots, ticketing, banking — every realistic LLD has concurrent access; ignoring it is a junior signal | Surface lock granularity proactively: 'park() locks the spot, not the whole lot, so 100 threads can park in parallel' |
| Writing getters/setters/toString first | Burns coding time on boilerplate; signals you don't know which methods carry signal | Pick 2-3 methods that show design thinking; explicitly say 'I'd auto-generate getters in real code' |
| Treating the diagram as final once drawn | When a new requirement surfaces a gap, candidates patch instead of restructuring | Be willing to erase: 'this requirement breaks my hierarchy — let me revise the inheritance to use composition with a Strategy' |
| Over-engineering with patterns the requirements don't justify | Adding Visitor or Chain-of-Responsibility for a 3-class system reads as junior cargo-culting | Apply the 'rule of three': only introduce a pattern when the third instance of the same shape appears, or when extensibility is explicitly required |
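The typed-exception row in the table above can be sketched as a short example. This is a hedged illustration (the lot model is deliberately minimal; only the two exception names come from the table): each failure mode gets its own exception class, so the caller branches on what failed instead of decoding a null or -1.

```java
// Typed exceptions: the failure carries its own meaning.
class ParkingLotFullException extends Exception {
    ParkingLotFullException(String msg) { super(msg); }
}

class InvalidVehicleTypeException extends Exception {
    InvalidVehicleTypeException(String msg) { super(msg); }
}

class Lot {
    private int freeSpots;
    Lot(int freeSpots) { this.freeSpots = freeSpots; }

    String park(String vehicleType)
            throws ParkingLotFullException, InvalidVehicleTypeException {
        if (!vehicleType.equals("car") && !vehicleType.equals("motorcycle")) {
            throw new InvalidVehicleTypeException("unsupported type: " + vehicleType);
        }
        if (freeSpots == 0) {
            throw new ParkingLotFullException("no spots left");
        }
        freeSpots--;
        return "TICKET-" + freeSpots; // stand-in for a real Ticket object
    }
}

class Demo {
    public static void main(String[] args) {
        Lot lot = new Lot(1);
        // The one try/catch at the caller that the table recommends showing.
        try {
            System.out.println(lot.park("car"));
            lot.park("car"); // second call: lot is now full
        } catch (ParkingLotFullException e) {
            System.out.println("full: " + e.getMessage());
        } catch (InvalidVehicleTypeException e) {
            System.out.println("bad type: " + e.getMessage());
        }
    }
}
```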
The Most Expensive Mistake — Hierarchy Lock-In
The single highest-leverage failure mode in LLD interviews: committing to an inheritance hierarchy in the first 10 minutes and refusing to revise it when requirements surface a gap.
Example: the interviewer says "design a vehicle rental system." You draw Vehicle <- Car, Truck, Motorcycle. At minute 25, the interviewer adds: "actually, we also rent boats and snowmobiles, and pricing depends on whether the vehicle is electric, gas, or human-powered." Your hierarchy now needs two orthogonal axes (form-factor × power-source). Inheritance can encode one axis; the other becomes a tangle of subclasses (ElectricCar, GasCar, ElectricTruck, GasTruck...).
Cost: 5-10 minutes of recovery, plus the perception that you committed without anticipating multi-axis variation.
Fix: when a class has more than one variation axis, switch to composition with strategies. Vehicle has a PowerSource field (Electric, Gas, Human) and a FormFactor field (Car, Truck, Boat) — both as enums or composed objects. This generalizes to any number of axes. Recognize this signal in requirements: "vehicle type AND fuel type AND size class" → composition. "Just vehicle type" → inheritance is fine.
The meta-rule: inheritance is for single-axis specialization. Composition is for multi-dimensional variation. Locking into inheritance early is the most common LLD design debt.
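The composition fix can be sketched concretely. Assumed names throughout (`FormFactor`, `PowerSource`, `dailyRate`, and the rates are illustrative): the two variation axes become composed fields, so adding Boat or Snowmobile is a new enum constant rather than a new branch of a subclass explosion like ElectricTruck.

```java
// Multi-axis variation via composition: no ElectricCar/GasTruck hierarchy.
enum FormFactor { CAR, TRUCK, MOTORCYCLE, BOAT, SNOWMOBILE }
enum PowerSource { ELECTRIC, GAS, HUMAN }

class Vehicle {
    final FormFactor formFactor;
    final PowerSource powerSource;

    Vehicle(FormFactor formFactor, PowerSource powerSource) {
        this.formFactor = formFactor;
        this.powerSource = powerSource;
    }

    // Pricing reads both axes directly; neither axis needs a subclass.
    double dailyRate() {
        double base =
            (formFactor == FormFactor.CAR || formFactor == FormFactor.MOTORCYCLE)
                ? 40.0 : 90.0;
        double multiplier =
            powerSource == PowerSource.ELECTRIC ? 1.2
          : powerSource == PowerSource.GAS      ? 1.0
          : 0.5;
        return base * multiplier;
    }
}

class Demo {
    public static void main(String[] args) {
        // What would have been "new ElectricTruck()" in the inheritance tangle:
        Vehicle v = new Vehicle(FormFactor.TRUCK, PowerSource.ELECTRIC);
        System.out.println(v.dailyRate()); // ~108.0
    }
}
```

If either axis later needs real behavior (not just a rate), the enum upgrades to a composed strategy object with the same zero-impact extension property.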
Recovery Patterns When the LLD Goes Wrong
When you realize at minute 25 that your hierarchy is wrong
Acknowledge cleanly: 'Stepping back — the requirement you just added means my Vehicle hierarchy can't represent two variation axes. Let me revise to composition: Vehicle has a FuelType strategy and a SizeClass.' Erasing and redrawing earns more credit than patching with conditionals. Interviewers explicitly grade for the willingness to revise under pressure.
When the interviewer asks 'what pattern is this?' and you've used one without naming it
Don't deflect. Name the pattern, then justify the requirement: 'This is Strategy — pricing has multiple algorithms (hourly, daily, peak-hour) that swap at runtime, and ParkingLot shouldn't change when we add tiered pricing — that's OCP.' Linking pattern → SOLID is the L5+ signal; naming alone is the L4 signal.
When the interviewer pushes back on a class boundary you drew
Do not defend; reason. 'Fair point — let me reconsider.' Walk through the SRP test: 'Ticket holds state (entry time, vehicle, spot). Lending coordinates the checkout transaction. They have different reasons to change — Ticket changes if we add metadata; Lending changes if checkout rules change. So I'd keep them separate.' If the boundary is genuinely wrong, revise: 'You're right — these have the same lifecycle and only one reason to change; merging them.'
When you're asked to deep dive on a method you barely sketched
60 seconds to establish the contract first: 'park() takes a Vehicle, returns a Ticket or throws ParkingLotFullException. The constraint is two threads cannot park in the same spot. Now the implementation...' This shows you don't dive into code without contract clarity.
When you're 5 minutes from the end with major gaps
Don't try to cover everything — explicit prioritization is itself the signal: 'In the remaining 5 minutes, I'll do extensions since OCP is the highest-signal topic; persistence and concurrency edge cases I'd cover next if we had more time.' Naming what you're skipping demonstrates time-management judgment.
When you give a wrong answer and realize it 30 seconds later
Self-correct out loud: 'Actually, what I said about Singleton being thread-safe via new — that's only safe with the GIL in CPython. In Java I'd need double-checked locking or an enum-based Singleton.' Self-correction is a positive signal; it shows reflection. Pretending it didn't happen is far worse.
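The self-correction above can be backed with code. A minimal sketch of the two thread-safe Singleton forms in Java (`Config` and `Registry` are illustrative names): double-checked locking, which is only correct with `volatile`, and the simpler enum-based form.

```java
// Double-checked locking: volatile prevents a reordered, half-constructed
// instance from being published to another thread.
class Config {
    private static volatile Config instance;
    private Config() { }

    static Config getInstance() {
        if (instance == null) {                  // first check, no lock
            synchronized (Config.class) {
                if (instance == null) {          // second check, under the lock
                    instance = new Config();
                }
            }
        }
        return instance;
    }
}

// Enum-based Singleton: the JVM guarantees exactly one instance,
// created thread-safely, with no locking code to get wrong.
enum Registry {
    INSTANCE;
    private int counter;
    synchronized int next() { return ++counter; }
}
```

In the interview, naming the enum form as the default and double-checked locking as the "if you must lazy-init a class" fallback is the cleanest version of the self-correction.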
The Decision Loop You Should Run on Every Class
For every candidate class, run the same questions: Does this noun have identity and behavior, or does it collapse into an attribute? Which SOLID principle justifies its boundary? Does a requirement signal force a pattern you can name alongside that principle? And does the next likely requirement land as a new class, or as a modification to this one?
What LLD Interview Levels Actually Test
| Level | Primary signal | How to demonstrate |
|---|---|---|
| L4 / Mid (E4) | Can you produce a working class diagram from a clear spec? | Correct entity extraction, basic SOLID adherence, one or two named patterns where they fit |
| L5 / Senior (E5) | Can you defend every design choice against SOLID and predict extension impact? | Pattern + SOLID linkage on every choice; concurrency surfaced unprompted; extensibility walkthrough at the end; willingness to revise |
| L6 / Staff (E6) | Can you identify the design choice that constrains future evolution and reason about it? | Names the *one* class boundary the rest of the design hinges on; debates inheritance-vs-composition explicitly; discusses testability and DI as first-class concerns |
| L7 / Senior Staff (E7) | Can you connect class-level decisions to system-level architecture? | Discusses how LLD decisions interact with HLD (sharding implications of an aggregate boundary; cache invalidation implications of an entity's lifecycle); debates domain-driven design tradeoffs |
How to Practice (and What to Practice)
The wrong practice: memorizing the canonical Parking Lot, Elevator, and Vending Machine designs. Interviewers know these are memorized — they ask variants ("parking lot with EV charging and reservations" or "elevator with VIP override") that punish recall and reward derivation.
The right practice: drill the 6 canonical LLD problems (Parking Lot, Elevator, Vending Machine, Library, Splitwise, ATM) deeply enough to re-derive them from requirements in 30 minutes. The drill:
- Entity-extraction reflexes: pick a one-paragraph requirement and produce the noun list, verb list, and survive/collapse decisions in under 5 minutes. Speed of extraction creates space for design thinking.
- The SOLID self-check loop: practice running it audibly for every class. After the design, walk through SRP, OCP, LSP, ISP, DIP and name where each applies (or where you traded it off).
- Pattern → requirement linkage: for each of the 8 patterns commonly tested (Strategy, Observer, Factory, Builder, Singleton, State, Decorator, Template Method), write the requirement signal that triggers each one. "Multiple algorithms swap at runtime" → Strategy. "Multiple listeners react to events" → Observer.
- Recovery drills: have a friend ambush your design at minute 25 with a new requirement that breaks your hierarchy. Practice revising without panic.
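One of the trigger linkages from the drill, "multiple listeners react to events" → Observer, can be sketched as follows. This is an assumed example (`SpotFreedListener`, `LotEvents`, and the listeners are illustrative): the publisher fires an event without knowing who subscribes, so new reactions are added without modifying it.

```java
import java.util.ArrayList;
import java.util.List;

// Observer sketch: the lot publishes "spot freed" events; any number of
// listeners react independently.
interface SpotFreedListener {
    void onSpotFreed(int spotId);
}

class LotEvents {
    private final List<SpotFreedListener> listeners = new ArrayList<>();

    void subscribe(SpotFreedListener l) { listeners.add(l); }

    // Called when a vehicle leaves; the publisher has no knowledge of
    // what each listener does with the event.
    void spotFreed(int spotId) {
        for (SpotFreedListener l : listeners) l.onSpotFreed(spotId);
    }
}

class Demo {
    public static void main(String[] args) {
        LotEvents events = new LotEvents();
        events.subscribe(id -> System.out.println("display board: spot " + id + " open"));
        events.subscribe(id -> System.out.println("waitlist: notify next driver for " + id));
        events.spotFreed(42);
    }
}
```

Writing one such sketch per trigger cements the requirement-to-pattern reflex better than re-reading pattern catalogs.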
What NOT to over-practice: writing complete UML for every class. In 45 minutes you'll draw 8-15 classes; complete UML for each is impossible. Prioritize the 3-4 classes that carry signal (the entity hierarchy and the strategy interfaces) and abbreviate the rest with names + key fields.