
Funnel Analysis: Conversion Optimization, Drop-off Attribution, and Funnel SQL

Funnel analysis measures how users progress through a sequence of steps — signup → activation → first purchase → retention. It is the most frequently tested analytics framework in product data science interviews at Meta, Airbnb, Stripe, and Lyft. This guide covers funnel construction, drop-off rate calculation, multi-touch attribution, and the SQL patterns for session-level and user-level funnels.


Why Funnel Analysis Is the Core Product Analytics Skill

A funnel is a sequence of steps users must complete to reach a desired outcome. Every product has funnels: signup → email verify → first session, or search → click → add to cart → checkout → purchase. Funnel analysis answers: where are users dropping off, and why?

Funnels appear in ~60% of product data science interviews because they are:

  • Universal: every consumer product has at least one funnel
  • Actionable: drop-off analysis directly drives product prioritization
  • Analytically rich: they require cohort thinking, attribution, SQL, and statistical testing

The three funnel questions every interviewer will ask:

  1. "Walk me through how you'd analyze our signup funnel" (structure + SQL)
  2. "Step 3's conversion rate dropped 15% after the redesign. What do you investigate?" (diagnosis)
  3. "We want to improve funnel conversion by 10%. Where do you start?" (prioritization)

Mastering funnel analysis means being able to answer all three precisely.

IMPORTANT

What the Interviewer Is Actually Testing

Three signals in funnel analysis interviews:

  1. Do you know the difference between user-level and session-level funnels? User-level: how many users who landed on the signup page eventually verified email? Session-level: of all sessions that started checkout, how many completed it? Mixing these produces misleading results — the interviewer will test whether you know which one to use.

  2. Can you write the funnel SQL? Converting a funnel question to SQL using GROUP BY, CASE statements, and window functions is a standard test. Know both the date-windowed approach and the sequential event detection approach.

  3. Do you know how to diagnose a funnel drop? Saying "the drop rate went up" without a hypothesis tree (segment breakdown, UX change, technical bug, seasonality) is a data analyst answer, not a senior DS answer.

Funnel Structure and Drop-off Analysis

[Diagram: funnel structure with step-level drop-off — not rendered]

Funnel Metrics — What to Track and Report

Step-over-step conversion rate: users who completed step N / users who completed step N-1. This is the primary funnel metric. Report it for each step.

Overall conversion rate: users who completed the last step / users who entered the first step. The headline metric for the funnel's efficiency.

Drop-off rate: 1 - step-over-step conversion rate. The proportion of users who left at each step.

Time-to-convert: the time between entering the funnel (step 1) and completing it (last step). P50 and P90 time-to-convert are useful — long P90 suggests friction at some step.
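
Time-to-convert percentiles can be computed directly from first-event timestamps. A minimal sketch, assuming Postgres syntax and the same events table (user_id, event_type, occurred_at) used in the funnel SQL below; step names are illustrative:

```sql
-- P50/P90 time-to-convert for users who completed the funnel in order.
WITH user_funnel AS (
  SELECT
    user_id,
    MIN(CASE WHEN event_type = 'product_view'    THEN occurred_at END) AS step1_at,
    MIN(CASE WHEN event_type = 'order_confirmed' THEN occurred_at END) AS step4_at
  FROM events
  GROUP BY user_id
)
SELECT
  PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY step4_at - step1_at) AS p50_time_to_convert,
  PERCENTILE_CONT(0.9) WITHIN GROUP (ORDER BY step4_at - step1_at) AS p90_time_to_convert
FROM user_funnel
WHERE step4_at > step1_at;  -- completed the funnel, after entering it
```

A long tail (P90 much larger than P50) usually points at friction in a specific intermediate step rather than uniform slowness.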

Funnel volume vs conversion: high volume with low conversion is often better to optimize than low volume with high conversion — the absolute user impact of a 10% improvement is larger at higher volume. Prioritize where (absolute users lost) × (potential improvement) is highest.

User-level vs session-level:

  • User-level funnel: tracks whether a unique user ever completed each step (across multiple sessions). Answers: "how many users eventually buy after seeing the product page?" Window: typically 7, 14, or 30 days to allow for multi-session purchase journeys.
  • Session-level funnel: tracks whether a specific session progressed through each step. Answers: "in a single session, how many users complete checkout?" Best for: single-session flows (checkout, signup), where cross-session attribution doesn't make sense.

Funnel SQL — User-Level and Session-Level

funnel-analysis.sql
-- User-level funnel: did each user complete each step within 14 days of their first step?
-- Events table: user_id, event_type, occurred_at

WITH user_funnel AS (
  SELECT
    user_id,
    -- Step 1: product view (entry point)
    MIN(CASE WHEN event_type = 'product_view' THEN occurred_at END)          AS step1_at,
    -- Step 2: add to cart (14-day window enforced in funnel_steps below)
    MIN(CASE WHEN event_type = 'add_to_cart' THEN occurred_at END)           AS step2_at,
    -- Step 3: checkout started
    MIN(CASE WHEN event_type = 'checkout_start' THEN occurred_at END)        AS step3_at,
    -- Step 4: order confirmed
    MIN(CASE WHEN event_type = 'order_confirmed' THEN occurred_at END)       AS step4_at
  FROM events
  WHERE occurred_at >= '2024-01-01'
    AND occurred_at < '2024-02-01'
  GROUP BY user_id
),

funnel_steps AS (
  SELECT
    COUNT(DISTINCT user_id)                                           AS step1_users,
    COUNT(DISTINCT CASE
      WHEN step2_at > step1_at
       AND step2_at <= step1_at + INTERVAL '14 days'
      THEN user_id END)                                               AS step2_users,
    COUNT(DISTINCT CASE
      WHEN step3_at > step2_at
       AND step3_at <= step1_at + INTERVAL '14 days'
      THEN user_id END)                                               AS step3_users,
    COUNT(DISTINCT CASE
      WHEN step4_at > step3_at
       AND step4_at <= step1_at + INTERVAL '14 days'
      THEN user_id END)                                               AS step4_users
  FROM user_funnel
  WHERE step1_at IS NOT NULL  -- only include users who entered the funnel
)

SELECT
  step1_users,
  step2_users,
  step3_users,
  step4_users,
  ROUND(100.0 * step2_users / NULLIF(step1_users, 0), 1) AS s1_s2_cvr,
  ROUND(100.0 * step3_users / NULLIF(step2_users, 0), 1) AS s2_s3_cvr,
  ROUND(100.0 * step4_users / NULLIF(step3_users, 0), 1) AS s3_s4_cvr,
  ROUND(100.0 * step4_users / NULLIF(step1_users, 0), 1) AS overall_cvr
FROM funnel_steps;

-- Session-level funnel: within a single session, how far did the user progress?
-- session_events table: session_id, user_id, event_type, occurred_at

SELECT
  COUNT(DISTINCT session_id)                                  AS sessions_entered,
  COUNT(DISTINCT CASE WHEN step2_in_session THEN session_id END) AS reached_step2,
  COUNT(DISTINCT CASE WHEN step3_in_session THEN session_id END) AS reached_step3,
  COUNT(DISTINCT CASE WHEN step4_in_session THEN session_id END) AS completed
FROM (
  SELECT
    session_id,
    BOOL_OR(event_type = 'product_view')    AS step1_in_session,
    BOOL_OR(event_type = 'add_to_cart')     AS step2_in_session,
    BOOL_OR(event_type = 'checkout_start')  AS step3_in_session,
    BOOL_OR(event_type = 'order_confirmed') AS step4_in_session
  FROM session_events
  GROUP BY session_id
) session_funnel
WHERE step1_in_session;  -- only sessions that started the funnel
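
Note that BOOL_OR only checks event presence, not order — a session where order_confirmed somehow preceded add_to_cart would still count at every step. A variant that enforces the sequence by comparing first-occurrence timestamps, sketched against the same session_events schema:

```sql
-- Ordered session-level funnel (sketch): require each step's first occurrence
-- to come after the previous step's first occurrence.
SELECT
  COUNT(*) AS sessions_entered,
  COUNT(CASE WHEN step2_at > step1_at THEN 1 END) AS reached_step2,
  COUNT(CASE WHEN step3_at > step2_at AND step2_at > step1_at THEN 1 END) AS reached_step3,
  COUNT(CASE WHEN step4_at > step3_at AND step3_at > step2_at
              AND step2_at > step1_at THEN 1 END) AS completed
FROM (
  SELECT
    session_id,
    MIN(CASE WHEN event_type = 'product_view'    THEN occurred_at END) AS step1_at,
    MIN(CASE WHEN event_type = 'add_to_cart'     THEN occurred_at END) AS step2_at,
    MIN(CASE WHEN event_type = 'checkout_start'  THEN occurred_at END) AS step3_at,
    MIN(CASE WHEN event_type = 'order_confirmed' THEN occurred_at END) AS step4_at
  FROM session_events
  GROUP BY session_id
) session_funnel
WHERE step1_at IS NOT NULL;  -- only sessions that started the funnel
```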

Funnel Drop-off Diagnosis — The Analyst Framework

When a funnel step's conversion drops, the investigation follows a structured path:

Step 1 — Confirm the drop is real: Is it an instrumentation failure (logging bug)? Is it seasonality (weekend vs weekday)? Compare to the same period last week/month, and confirm the drop across independent data sources.

Step 2 — Segment to isolate:

  • Platform: did iOS conversion drop while Android stayed flat? → iOS-specific change
  • User type: new vs returning — if the new-user funnel dropped, check acquisition
  • Device type: mobile vs desktop — mobile checkout friction is a classic conversion killer
  • Geography: a US-only drop → US-specific event or regulatory change
  • Acquisition channel: paid search vs organic vs email — a channel mix shift can look like a funnel change (lower-intent paid traffic converts worse)
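
The segmentation step is one GROUP BY away. A sketch for the checkout → confirm step, assuming session_events has (or can be joined to) a platform column — that column is hypothetical here:

```sql
-- Step conversion by platform: a large divergence isolates the drop
-- to one client, pointing at a platform-specific release or bug.
SELECT
  platform,
  COUNT(DISTINCT CASE WHEN event_type = 'checkout_start'  THEN session_id END) AS started,
  COUNT(DISTINCT CASE WHEN event_type = 'order_confirmed' THEN session_id END) AS completed,
  ROUND(100.0 * COUNT(DISTINCT CASE WHEN event_type = 'order_confirmed' THEN session_id END)
      / NULLIF(COUNT(DISTINCT CASE WHEN event_type = 'checkout_start' THEN session_id END), 0),
    1) AS checkout_cvr
FROM session_events
GROUP BY platform
ORDER BY checkout_cvr;
```

The same pattern applies to user type, geography, and acquisition channel — swap the grouping column.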

Step 3 — Correlate with product changes: Was there a UI change at this step? A new form field? A price/shipping change? Payment method change? Load time regression? Check the deployment log.

Step 4 — Build a hypothesis and test it: "Mobile checkout conversion dropped because the new address form added two required fields that are harder to fill on mobile. Hypothesis: remove those fields or add address autocomplete." → Run an A/B test or a 1-week quick experiment.

Funnel Drop-off — Hypothesis Lookup Table

| Where the Drop Is | Most Likely Cause | Diagnostic Query | Potential Fix |
|---|---|---|---|
| Landing → Signup start | Poor value prop, slow load, irrelevant traffic | Segment by acquisition channel, device, page load time | Improve landing page copy, fix page speed, filter low-intent traffic |
| Signup start → Email verify | Email deliverability, verification friction | Check email open rate, resend rate, time-to-verify P90 | In-app OTP instead of email, resend prompt, SMS option |
| Cart → Checkout start | Shipping cost shock, trust concerns | Compare abandonment by cart value, new vs returning | Show shipping cost early, add trust badges, offer free-shipping threshold |
| Checkout start → Payment | Form friction, missing payment method | Device type breakdown, error rate on form submit | Fewer fields, autofill, add PayPal/Apple Pay |
| Payment → Confirm | Payment failure, gateway timeout | Error code breakdown, p99 payment API latency | Better error messages, retry logic, alternative payment fallback |
⚠ WARNING

Funnel Analysis Mistakes

Mistake 1: Not ordering steps by time. A user can complete step 3 before step 2 if you don't enforce sequential ordering in your SQL. Always require step2_at > step1_at (and so on for later steps) in the CASE conditions. Without this, users who did steps out of order inflate your conversion numbers.

Mistake 2: Not setting a conversion window. A user who sees a product page in January and buys in March is not part of the same funnel journey — that's a new journey. Always define a conversion window (7, 14, or 30 days) and enforce it in the step conditions, as the user-level SQL above does with the 14-day interval check.

Mistake 3: Reporting only the overall conversion rate. "Our checkout funnel converts at 16%" tells you nothing about where to optimize. Always report step-over-step rates and absolute user counts at each step.

Mistake 4: Ignoring channel mix as a confounder. If paid marketing spend shifted toward less-qualified audiences, overall funnel conversion will drop — not because the funnel got worse, but because the input population changed. Always segment by acquisition channel when investigating conversion drops.

Mistake 5: Not checking for instrumentation gaps. If a new step was added to the funnel (e.g., a new "confirm address" screen) and it wasn't instrumented with an event, it looks like a massive drop at that step. Always cross-check funnel steps with the product changelog.

TIP

Funnel Interview Template

When asked to analyze or design a funnel:

  1. Define the funnel steps explicitly: "The checkout funnel I'd define is: Product Page → Add to Cart → Checkout Start → Payment Entry → Order Confirmed"

  2. Choose user-level or session-level: "This is a multi-session purchase journey for a high-consideration product, so I'd use a 14-day user-level funnel"

  3. Report step-over-step AND overall: always report both rates, not just the overall

  4. Prioritize by impact: (users entering step N) × (1 - conversion rate) = absolute users lost. Optimize the step with the highest absolute user loss, not the worst conversion rate.

  5. Segment before concluding: any change in a funnel step requires segmentation by platform, user type, and acquisition channel before drawing conclusions.
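
The prioritization arithmetic in step 4 can be read straight off the funnel counts. A sketch, reusing the funnel_steps CTE from the user-level query earlier in this guide:

```sql
-- Absolute users lost at each transition: optimize where this number is
-- largest, not where the percentage conversion rate is worst.
SELECT
  step1_users - step2_users AS lost_at_step2,
  step2_users - step3_users AS lost_at_step3,
  step3_users - step4_users AS lost_at_step4
FROM funnel_steps;
```

For example, a step converting at 90% but losing 50,000 users beats a step converting at 40% but losing 2,000 users as an optimization target.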

Interview Questions
