Multi-Task & Transfer Learning: Shared Representations, Negative Transfer, and Fine-Tuning Strategy
Sharing encoders across tasks can improve data efficiency, or hurt performance when tasks conflict. This guide covers hard parameter sharing, soft sharing (cross-stitch and sluice networks), adapter layers (LoRA-style intuition), negative transfer diagnostics, and when a Google-style pretrain-finetune pipeline beats training from scratch on tabular, vision, and NLP tasks.
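As a taste of the hard-parameter-sharing setup the guide opens with, here is a minimal PyTorch sketch (PyTorch and all names here are illustrative assumptions, not the guide's code): one shared encoder feeds two task-specific heads, and the task losses are combined with fixed weights.

```python
import torch
import torch.nn as nn

class HardSharingModel(nn.Module):
    """Hard parameter sharing: one shared encoder, one head per task (illustrative)."""
    def __init__(self, in_dim=32, hidden=64, n_classes=3):
        super().__init__()
        # Shared encoder: its weights receive gradients from every task.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Task-specific heads: a classifier and a regressor as example tasks.
        self.cls_head = nn.Linear(hidden, n_classes)
        self.reg_head = nn.Linear(hidden, 1)

    def forward(self, x):
        z = self.encoder(x)
        return self.cls_head(z), self.reg_head(z)

model = HardSharingModel()
x = torch.randn(8, 32)
y_cls = torch.randint(0, 3, (8,))
y_reg = torch.randn(8, 1)

logits, preds = model(x)
# Weighted sum of per-task losses; poorly chosen weights are one common
# source of the negative transfer the guide diagnoses.
loss = 1.0 * nn.functional.cross_entropy(logits, y_cls) \
     + 0.5 * nn.functional.mse_loss(preds, y_reg)
loss.backward()
```

Because both losses backpropagate into the same encoder, conflicting task gradients can degrade either task; the task weights (1.0 and 0.5 above, chosen arbitrarily) are the simplest knob for trading the tasks off.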
42 min read · 2 sections · 1 interview question
Tags: Multi-Task Learning · Transfer Learning · Fine-Tuning · Negative Transfer · Adapter Layers · LoRA · Hard Parameter Sharing · Representation Learning · Domain Adaptation · Pretraining · Hugging Face · Task Weighting