How to Approach Data & Product Scenario Questions
Scenario questions — DAU drops, model degradation, A/B test decisions, incident triage — are the dominant interview format for senior data scientists, product analysts, and ML engineers. This guide gives you the DIAGNOSE framework: a structured 5-step approach that transforms vague 'what would you do?' prompts into clear, systematic answers that impress interviewers at Meta, Google, Airbnb, and Stripe.
Scenario Walkthrough: Why Is DAU Dropping?
End-to-end DAU triage the way strong DS/PM orgs run it: lock the metric definition, validate instrumentation before narrative, decompose the composite into new vs returning and session depth, segment to localize a sharp break, and ship actions with impact sizing and explicit uncertainty. Includes worked timing, SQL-shaped diagnostics, and the Simpson / mix-shift traps that sink mid-level answers.
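A minimal sketch of the decomposition step, assuming a hypothetical daily snapshot with segment and user-type columns (the schema and numbers are illustrative, not from the walkthrough):

```python
import pandas as pd

# Hypothetical snapshot: DAU by segment and user type on the baseline
# day vs the day the drop was reported.
daily = pd.DataFrame({
    "date": ["2024-05-01"] * 4 + ["2024-05-08"] * 4,
    "segment": ["iOS", "Android", "iOS", "Android"] * 2,
    "user_type": ["new", "new", "returning", "returning"] * 2,
    "dau": [120, 300, 480, 900, 115, 310, 350, 905],
})

# Decompose the composite delta by segment x user_type so the headline
# drop localizes to a cell instead of staying a narrative.
pivot = daily.pivot_table(index=["segment", "user_type"],
                          columns="date", values="dau")
pivot["delta"] = pivot["2024-05-08"] - pivot["2024-05-01"]
print(pivot.sort_values("delta"))  # iOS returning carries the drop
```

The cell deltas must sum back to the headline delta; that reconciliation is the quick proof the decomposition is exhaustive before you segment further.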
Scenario Walkthrough: Engagement vs Revenue — Guardrails & Horizon
The highest-signal PM/DS tradeoff: a surface, ranking, or growth lever lifts a leading engagement input while threatening RPM, ARPU, chargebacks, or long-horizon retention. Learn to express a *constrained* objective, pre-register guardrails, separate short-window novelty from LTV, and run readouts the way strong experimentation orgs do—not as a single p-value on one chart.
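As a shape for that readout, here is a toy pre-registered check with made-up numbers: ship only if the engagement CI clears zero and the revenue guardrail CI excludes a pre-declared harm bound.

```python
# Toy ship/no-ship readout, illustrative numbers only: the engagement
# lift must clear zero AND the ARPU guardrail CI must exclude the
# pre-registered -1% relative harm bound.
def diff_ci(mean_t, mean_c, se_diff, z=1.96):
    d = mean_t - mean_c
    return d - z * se_diff, d + z * se_diff

eng_lo, _ = diff_ci(5.30, 5.10, 0.05)   # sessions per user
rev_lo, _ = diff_ci(2.48, 2.50, 0.02)   # ARPU, dollars
guardrail_ok = rev_lo / 2.50 > -0.01    # worst case vs the -1% bound
print(f"engagement lift CI lower = {eng_lo:.3f}")
print(f"ARPU worst case = {rev_lo / 2.50:+.2%} -> "
      f"ship = {eng_lo > 0 and guardrail_ok}")
```

Note the asymmetry: the guardrail is a noninferiority bound on the worst case, not a significance test on the point estimate, which is why this example ships the engagement win nowhere.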
Scenario Walkthrough: Marketplace Supply–Demand Imbalance — Liquidity First
Interview-prep walkthrough for two-sided marketplaces: liquidity before GMV, cell-level supply and demand, search-to-fill, zero results, and dual-sided pain (riders and drivers can disagree in the same city). Covers levers, incentives, repositioning, throttles, and why network effects break naive A/B designs.
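A sketch of the cell-level liquidity readout, assuming a hypothetical table of searches, fills, and zero-result counts per (city, hour) cell:

```python
import pandas as pd

# Illustrative cell-level liquidity table; column names are assumptions.
cells = pd.DataFrame({
    "city": ["SF", "SF", "NYC", "NYC"],
    "hour": [8, 22, 8, 22],
    "searches": [1000, 400, 2500, 900],
    "fills": [920, 210, 2300, 860],
    "zero_results": [30, 140, 90, 25],
})
cells["fill_rate"] = cells["fills"] / cells["searches"]
cells["zero_rate"] = cells["zero_results"] / cells["searches"]

# Flag starved cells before reaching for GMV-level narratives.
print(cells[cells["fill_rate"] < 0.8])  # SF @ 22h: supply-starved cell
```

Levers then target the starved cells (incentives, repositioning, throttles) rather than blanket spend across a city that is healthy on average.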
Scenario Walkthrough: Post-Launch — Was This Feature a Success?
Structured readout for 'we shipped' moments: pre-registered success criteria vs ad hoc happy charts, leading vs lagging metrics, holdout and cannibalization, seasonality, and the governance of when to call a win. Mirrors how strong DS orgs run launch reviews (Meta, Google, Amazon) so you sound like you have owned a launch, not like you cherry-picked a green dashboard tile.
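A sketch of the pre-registration discipline the walkthrough argues for, with hypothetical metrics and bounds declared before launch so the readout is a lookup, not an argument:

```python
# Criteria written down before launch; the readout only checks them.
criteria = {
    "primary":   ("d28_retention_pp", +0.5),          # must gain >= 0.5 pp
    "guardrail": ("adjacent_surface_dau_pct", -1.0),  # cannibalization cap
}
observed = {"d28_retention_pp": +0.8, "adjacent_surface_dau_pct": -2.3}

for role, (metric, bound) in criteria.items():
    ok = observed[metric] >= bound
    print(f"{role}: {metric} observed={observed[metric]:+.1f} "
          f"bound={bound:+.1f} -> {'pass' if ok else 'FAIL'}")
# Primary passes but the cannibalization guardrail fails: not yet a win.
```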
Scenario Walkthrough: The A/B Test Went Wrong — SRM, Peeking, and Interference
When the experiment is lying but the slide deck is green. Walk the failure modes that dominate production: sample ratio mismatch, peeking and early stopping, novelty and learning effects, network interference, wrong randomization unit, and thin-event metrics read too early. Teaches the remediation playbook: invalidate, debug assignment, or redesign — and how to say that without panicking the room. Grounded in standard OCE practice, chi-square SRM, and the Kohavi, Tang, and Xu body of work on trustworthy experiments.
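The SRM check itself is one chi-square goodness-of-fit test of observed assignment counts against the intended split (counts here are illustrative):

```python
from scipy.stats import chisquare

observed = [50_912, 49_088]          # users in treatment / control
expected = [sum(observed) / 2] * 2   # intended 50/50 split
stat, p = chisquare(observed, f_exp=expected)
print(f"chi2 = {stat:.1f}, p = {p:.2e}")
# p << 0.001: assignment is broken; invalidate before reading any metric.
```

Even a ~1.8% imbalance on 100k users yields an astronomically small p-value, which is why SRM is checked before, not after, the metric readout.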
Scenario Walkthrough: Recommendation Model CTR Dropped 15% Overnight
A structured incident walkthrough for sudden ML metric degradation. Learn how to separate feature drift, label drift, training-serving skew, serving regressions, and upstream schema failures with fast falsification checks and executive-safe communication.
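One fast falsification check for the feature-drift branch is a population stability index between a training-time snapshot and today's serving traffic. A sketch on synthetic data (the ~0.1 watch and ~0.25 alarm thresholds are common rules of thumb, not from this guide):

```python
import numpy as np

def psi(expected, actual, bins=10):
    # Equal-mass bins from the training (expected) distribution.
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e = np.histogram(expected, cuts)[0] / len(expected)
    a = np.histogram(np.clip(actual, cuts[0], cuts[-1]), cuts)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 50_000)  # feature snapshot at training time
serve = rng.normal(0.6, 1.0, 50_000)  # same feature in serving today
print(f"PSI = {psi(train, serve):.2f}")  # well past the ~0.25 alarm level
```

Running this per feature ranks suspects in minutes and separates the drift branch from serving regressions before anyone opens a retraining ticket.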
Scenario Walkthrough: Payment Service Returning 500s in Production
A step-by-step incident response walkthrough for severe production outages. Covers triage order, dependency isolation, rollback decisions, cascading failure containment, and stakeholder communication under time pressure.
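For the cascading-failure point, the standard containment pattern is a circuit breaker that fails fast instead of letting a sick dependency consume your own capacity. A minimal sketch with illustrative thresholds:

```python
import time

class CircuitBreaker:
    def __init__(self, max_failures=5, reset_after_s=30.0):
        self.max_failures, self.reset_after_s = max_failures, reset_after_s
        self.failures, self.opened_at = 0, None

    def call(self, fn):
        # Fail fast while open; allow one probe after the cooldown.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after_s:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None            # half-open: try one request
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0                    # success closes the circuit
        return result
```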
Scenario Walkthrough: Trust & Safety Escalation — Abuse Signals & Response
When bad actors spike, reports flood, or a model misfires, strong answers triage the threat class, validate measurement, use velocity and graph signals to detect coordination, respect human review capacity, and govern the precision-versus-harm tradeoff, not generic retrain-the-model talk. Maps signals to tiered actions, appeals, and guard metrics the way integrity orgs actually run operations.
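A toy version of the velocity-signal-to-tiered-action mapping, with hypothetical accounts and thresholds; the shape to notice is that high scores route to humans, not straight to bans:

```python
# Illustrative velocity signal: flag accounts whose report rate jumps far
# above their own baseline, then route by tier to respect review capacity.
baseline_reports_per_hr = {"acct_1": 0.2, "acct_2": 0.1, "acct_3": 1.5}
last_hour_reports = {"acct_1": 9, "acct_2": 0, "acct_3": 2}

for acct, hits in last_hour_reports.items():
    ratio = hits / max(baseline_reports_per_hr[acct], 0.1)
    if ratio > 20:
        action = "restrict + queue for human review"  # high-harm tier
    elif ratio > 5:
        action = "rate-limit + collect more signal"
    else:
        action = "no action"
    print(f"{acct}: x{ratio:.0f} over baseline -> {action}")
```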