
Continual & Online Learning: Catastrophic Forgetting, EWC, Replay Buffers, and Streaming ML Tradeoffs

Production models face drifting data — ads, fraud, search — yet naive fine-tuning forgets old tasks. This guide covers catastrophic forgetting, elastic weight consolidation (EWC), experience replay, dark knowledge retention, warm-start vs cold-start, and when Netflix-style batch retraining beats true online gradients.

40 min read · 2 sections · 1 interview question

Continual Learning · Online Learning · Catastrophic Forgetting · Experience Replay · Elastic Weight Consolidation · Streaming ML · River ML · Fine-Tuning · Concept Drift · Kafka ML · Warm Start · FOMAML

The Continual Learning Problem in Production

**Batch ML** assumes a stationary data distribution. **Non-stationary** worlds (adversaries, seasonality, product surface changes) demand models that **update without erasing** past competence. **Catastrophic forgetting**, a phenomenon studied since the McClelland and Rumelhart research lineage and exhibited sharply by modern deep nets, means that after fine-tuning on March fraud patterns, the **April model** may lose recall of January attacks unless you architect for retention. Interviews often blur **online learning** (a single model updated on each example or mini-batch from a stream) with **continual learning** (a sequence of distinct tasks learned one after another); clarify the definitions before answering.
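One standard retention mechanism named in the title, elastic weight consolidation (EWC), can be sketched in a few lines. The idea: when fine-tuning on a new task, add a quadratic penalty that anchors each parameter to its old-task value, weighted by a (diagonal) Fisher-information estimate of how much that parameter mattered for the old task. The toy problem below (scalar targets, hand-picked Fisher values, plain gradient descent) is an illustrative sketch, not a production recipe; all names and numbers are invented for the example.

```python
import numpy as np

def ewc_penalty(theta, theta_old, fisher, lam=1.0):
    """EWC penalty: (lam / 2) * sum_i F_i * (theta_i - theta_old_i)^2."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)

def ewc_grad(theta, theta_old, fisher, lam=1.0):
    """Gradient of the EWC penalty w.r.t. theta."""
    return lam * fisher * (theta - theta_old)

# Toy setup: the old-task optimum is theta = [0, 0]; the new task pulls
# both parameters toward 2.0. Parameter 0 was important for the old task
# (high Fisher), parameter 1 was not (low Fisher).
theta_old = np.array([0.0, 0.0])
fisher = np.array([10.0, 0.1])
target = np.array([2.0, 2.0])     # hypothetical new-task optimum
theta = theta_old.copy()
lr, lam = 0.1, 1.0

for _ in range(500):
    g_task = theta - target       # grad of 0.5 * ||theta - target||^2
    theta -= lr * (g_task + ewc_grad(theta, theta_old, fisher, lam))

# The high-Fisher coordinate stays near its old value (~0.18), while the
# low-Fisher coordinate is free to move toward the new task (~1.82).
```

For this quadratic objective the minimizer is available in closed form, `theta_i = 2 / (1 + lam * F_i)`, which makes the asymmetry easy to check: importance weighting, not the penalty alone, is what prevents forgetting while still permitting adaptation.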
