
MLSD Case Study: Search Ranking System

Design web/ecommerce search ranking with lexical + vector retrieval, multi-stage ranking, and freshness-aware indexing. Covers query understanding, relevance labels, and online experimentation.

50 min read · 2 sections · 1 interview question

Tags: Search Ranking, Learning to Rank, BM25, ANN Retrieval, Query Understanding, NDCG, MRR, Freshness, Search ML

Problem Framing: Relevance Under Millisecond Constraints

Search ranking balances four competing demands simultaneously: lexical precision (exact keyword matches must surface correctly), semantic recall (paraphrases and synonyms must retrieve relevant results), business rules (availability, promotions, policy compliance), and latency (sub-150ms P99 for user-facing search results pages).

The foundational insight that separates strong answers: recall is a retrieval responsibility, precision is a ranking responsibility. No amount of ranking sophistication can compensate for relevant results that were never retrieved. Conversely, precise retrieval is insufficient if the top results are not ordered by relevance — the user still has to find what they need.
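The retrieve-then-rank split can be sketched as a two-stage pipeline. The corpus, BM25 parameters, and the stage-2 scoring proxy below are illustrative assumptions; in production, stage 2 is a learned model over far richer features.

```python
import math
from collections import Counter

# Toy corpus — document IDs and text are illustrative.
DOCS = {
    "d1": "red running shoes for men",
    "d2": "blue trail running shoes",
    "d3": "leather office shoes",
    "d4": "wireless running headphones",
}

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Stage 1 (recall): score every document with BM25."""
    tokenized = {d: text.split() for d, text in docs.items()}
    avgdl = sum(len(t) for t in tokenized.values()) / len(tokenized)
    N = len(tokenized)
    df = Counter()
    for toks in tokenized.values():
        df.update(set(toks))
    scores = {}
    for d, toks in tokenized.items():
        tf = Counter(toks)
        s = 0.0
        for term in query.split():
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(toks) / avgdl))
        scores[d] = s
    return scores

def rerank(query, candidates, docs):
    """Stage 2 (precision): reorder a small candidate set with a richer
    score — here a toy term-coverage proxy standing in for a learned model."""
    q_terms = set(query.split())
    def coverage(d):
        return len(q_terms & set(docs[d].split())) / len(q_terms)
    return sorted(candidates, key=coverage, reverse=True)

query = "running shoes"
scores = bm25_scores(query, DOCS)
stage1 = sorted(scores, key=scores.get, reverse=True)[:3]   # cheap, wide
final = rerank(query, stage1, DOCS)                         # expensive, narrow
```

A document missing from `stage1` can never appear in `final`, which is the point: ranking sophistication in stage 2 cannot rescue a retrieval miss in stage 1.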

Strong design answers begin with query understanding: clarify the query class distribution before designing the pipeline.

  • Navigational queries (~10–20%): user wants a specific known destination. Precision matters overwhelmingly; BM25 with query normalization is often sufficient.
  • Informational queries (~50–60%): user wants to learn. Semantic recall and diversity matter; neural models add significant value.
  • Transactional queries (~20–30%): user intends to take an action (buy, sign up, download). Business metrics (conversion, margin) are primary; ranking must integrate business signals alongside relevance.
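A first-pass classifier over these three classes can be as simple as cue matching. The cue lists below are assumptions for illustration, not a production taxonomy — real systems learn intent from click entropy, query logs, and session features.

```python
# Illustrative rule-based query-class classifier. Cue vocabularies are
# assumed; production systems replace this with a learned intent model.
TRANSACTIONAL_CUES = {"buy", "order", "price", "coupon", "download", "signup"}
NAVIGATIONAL_CUES = {"login", "homepage", "www"}

def classify_query(query: str) -> str:
    q = query.lower()
    tokens = set(q.split())
    # Navigational: user wants a specific known destination.
    if tokens & NAVIGATIONAL_CUES or ".com" in q:
        return "navigational"
    # Transactional: user intends to take an action.
    if tokens & TRANSACTIONAL_CUES:
        return "transactional"
    # Default to informational — the largest class of traffic.
    return "informational"
```

Even a crude classifier like this lets the pipeline route navigational queries to a cheap lexical path while reserving neural reranking budget for informational queries.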

The label strategy depends on query class. Informational queries use editorial judgments (NDCG-optimized). Transactional queries use behavioral labels (click, add-to-cart, purchase) with position-bias correction. Using the wrong label type for a query class yields a model trained and evaluated against the wrong objective.

Clarify these dimensions in the first 3 minutes of an interview before diving into model architecture.
