Scenario Walkthrough: Post-Launch — Was This Feature a Success?
Structured readout for 'we shipped' moments: pre-registered success criteria vs ad hoc happy charts, leading vs lagging metrics, holdout and cannibalization, seasonality, and the governance of when to call a win. Mirrors how strong DS orgs run launch reviews (Meta, Google, Amazon) so you sound like you have owned a launch, not like you cherry-picked a green dashboard tile.
What Interviewers Are Actually Testing
Post-launch questions are a trap for people who only know how to describe a dashboard. The interviewer wants to see that you can separate a genuine launch effect from selection, mix shift, seasonality, and metric gaming, judged against a pre-specified contract rather than whatever turned green in week one.
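Mix shift is the sneakiest of these, because the topline can move in the opposite direction from every segment. A minimal Simpson's-paradox check you can describe in an interview, with hypothetical segments and made-up counts:

```python
# Mix-shift sanity check: overall conversion can rise while every segment
# falls, if the launch changed who shows up (Simpson's paradox).
# Segment names and all numbers here are illustrative.
import pandas as pd

pre = pd.DataFrame({"segment": ["new", "power"], "users": [8000, 2000],
                    "conversions": [400, 400]})
post = pd.DataFrame({"segment": ["new", "power"], "users": [4000, 6000],
                     "conversions": [180, 1080]})

for name, df in [("pre", pre), ("post", post)]:
    overall = df["conversions"].sum() / df["users"].sum()
    per_seg = (df["conversions"] / df["users"]).round(3).tolist()
    print(f"{name:4s} overall={overall:.3f} per-segment={per_seg}")
# pre  overall=0.080 per-segment=[0.05, 0.2]
# post overall=0.126 per-segment=[0.045, 0.18]  <- every segment got worse
```

Here the launch pulled in more power users, so the blended rate improved even though both segments converted worse. That is exactly the "relative to what and for whom" question.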
Strong candidates anchor on what was written before the PR merged: the primary metric, the guardrails, the minimum duration, and the unit of analysis (user, session, marketplace, geo cell). They reach for counterfactuals: a randomized holdout, staggered rollouts, or at minimum a clean pre-period with synthetic-control or difference-in-differences (DID) thinking when full randomization is missing. They flag when the launch population is not the general population: power users, specific geos, or opt-in betas. Weak candidates announce victory because "conversion is up" without asking relative to what, and for whom.
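When full randomization is missing but a geo holdout exists, the DID read is mechanical: the holdout's pre-to-post delta nets out seasonality and background trend the launch did not cause. A minimal sketch, assuming a hypothetical tidy frame of per-geo metrics (column names and numbers are illustrative):

```python
# Minimal difference-in-differences sketch for a geo-staggered launch.
# Assumes a hypothetical tidy frame with columns: geo, treated (did this
# geo get the feature), period ("pre"/"post"), and metric (e.g. conversion).
import pandas as pd

def did_estimate(df: pd.DataFrame) -> float:
    """DID = (post - pre) in treated geos minus (post - pre) in holdout geos."""
    means = df.groupby(["treated", "period"])["metric"].mean()
    treated_delta = means[(True, "post")] - means[(True, "pre")]
    holdout_delta = means[(False, "post")] - means[(False, "pre")]
    return treated_delta - holdout_delta

df = pd.DataFrame({
    "geo":     ["A", "A", "B", "B", "C", "C", "D", "D"],
    "treated": [True, True, True, True, False, False, False, False],
    "period":  ["pre", "post"] * 4,
    "metric":  [0.10, 0.14, 0.12, 0.15, 0.10, 0.11, 0.11, 0.12],
})
# Naive post-minus-pre in treated geos is +0.035; the holdout also drifted
# up +0.010 (seasonality), so the launch effect is closer to +0.025.
print(f"DID lift: {did_estimate(df):+.3f}")
```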
6/10 vs 9/10
6/10: a single time series after launch with a highlight reel; "metrics look good."
8/10: pre-registered primary and guardrails (a contract sketch follows this list); leading vs lagging split; at least one cannibalization or mix check; a stated uncertainty about long-run LTV.
9–10/10: holdout or equivalent counterfactual story; geo/temporal seasonality; explicit org governance (who can override, what counts as invalid data); a falsification plan in case the good news is driven by an acquisition spike, not the feature.
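To make the pre-registered contract concrete: it should be written before launch and evaluable mechanically at the review, so nobody re-litigates success criteria after seeing the data. A minimal sketch; the metric names, thresholds, and minimum window are all hypothetical:

```python
# Sketch of a pre-registered launch contract: what "success" means is
# written down before the rollout, then evaluated mechanically at readout.
# All metric names and thresholds are hypothetical.
from dataclasses import dataclass, field

@dataclass
class LaunchContract:
    primary_metric: str
    min_lift: float                # required relative lift on the primary
    guardrails: dict = field(default_factory=dict)  # metric -> lowest acceptable lift
    min_days: int = 14             # minimum observation window (no peeking)

    def verdict(self, lifts: dict, days_observed: int) -> str:
        if days_observed < self.min_days:
            return "HOLD: minimum duration not reached; no early calls."
        breaches = [m for m, floor in self.guardrails.items()
                    if lifts.get(m, 0.0) < floor]
        if breaches:
            return f"NO-SHIP: guardrail breach on {breaches}."
        if lifts.get(self.primary_metric, 0.0) >= self.min_lift:
            return "SHIP: primary met with guardrails intact."
        return "NO-SHIP: primary lift below the pre-registered bar."

contract = LaunchContract(
    primary_metric="checkout_conversion", min_lift=0.01,
    guardrails={"7d_retention": -0.005, "crash_free_sessions": -0.002},
)
print(contract.verdict(
    lifts={"checkout_conversion": 0.015, "7d_retention": -0.001,
           "crash_free_sessions": 0.0},
    days_observed=21,
))  # SHIP: primary met with guardrails intact.
```

The design point is that the verdict function takes no judgment at readout time: the thresholds, the guardrail floors, and the minimum window were all fixed before anyone saw a green tile, which is the governance story the 9/10 answer tells.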