A2A Protocol: Agent-to-Agent Communication and Interoperability
Master Google's Agent-to-Agent (A2A) open protocol for inter-agent communication in generative AI systems. Learn Agent Cards, task lifecycle management, async execution with SSE streaming, enterprise authentication, and how A2A complements MCP to build production multi-agent architectures.
Why Inter-Agent Communication Needs a Protocol, Not Ad-Hoc JSON
Most multi-agent demos wire agents together with direct HTTP calls and custom JSON payloads. This works for a two-agent prototype but collapses at scale, because every new agent integration requires bespoke serialization, discovery logic, error handling, and auth plumbing. Point-to-point integrations grow quadratically: three agents need three pairwise integrations, ten need forty-five, and each one must be built and maintained by hand.
The core problem is four-fold. First, discovery: how does agent A know agent B exists and what it can do? Without a standard, you hard-code endpoint URLs and capability lists. Second, lifecycle: when agent A delegates a task to agent B, who tracks whether it is running, completed, or failed? Custom solutions reinvent this state machine per integration. Third, communication format: text, files, and structured data all need to flow between agents — ad-hoc JSON schemas fragment immediately across teams and vendors. Fourth, security: cross-organization agent collaboration requires standardized authentication and authorization, not per-pair API key exchanges.
Google's Agent-to-Agent (A2A) protocol — released April 2025, donated to the Linux Foundation, and reaching v1.0.0 in March 2026 — addresses all four. It defines a standard over HTTP + JSON-RPC 2.0 with Agent Cards for discovery, a task state machine for lifecycle, typed message Parts for communication, and OAuth 2.0/bearer token support for enterprise auth. Over 150 organizations (Salesforce, SAP, ServiceNow, Atlassian, Deloitte) back it.
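To make the discovery and communication pieces concrete, here is a minimal sketch of an Agent Card (the JSON document an A2A server publishes, conventionally at a well-known URL, so callers can find its endpoint and skills) and a JSON-RPC 2.0 request carrying typed message Parts. Field names follow the A2A spec's published schema but should be treated as illustrative; the agent name, URL, and skill are hypothetical.

```python
import json

# Illustrative Agent Card: advertises identity, endpoint, capabilities,
# and skills to would-be caller agents. Exact schema may vary by spec version.
agent_card = {
    "name": "invoice-processor",
    "description": "Extracts and validates fields from invoice documents.",
    "url": "https://agents.example.com/a2a",  # the agent's JSON-RPC endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "defaultInputModes": ["text/plain", "application/pdf"],
    "defaultOutputModes": ["application/json"],
    "skills": [
        {
            "id": "extract-invoice",
            "name": "Invoice extraction",
            "description": "Returns structured line items from an invoice.",
        }
    ],
}

# A caller that has fetched the card can delegate work with a JSON-RPC 2.0
# request; the message body is a list of typed Parts (text, file, or data).
request = {
    "jsonrpc": "2.0",
    "id": "req-1",
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "Process the attached invoice"}],
        }
    },
}
print(json.dumps(request, indent=2))
```

Note what the card does not contain: no tool list, no prompts, no model details. The caller sees only capabilities and skills, which is the opacity boundary discussed next.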
The non-obvious insight most resources miss: A2A treats agents as opaque peers. Unlike MCP, which exposes an agent's internal tools to a caller, A2A deliberately hides internal implementation. The caller sees capabilities and outputs, never the agent's reasoning chain, tool set, or prompt. This opacity is a feature, not a limitation — it enables cross-vendor collaboration without leaking proprietary architecture. Interviewers reward candidates who articulate this distinction cleanly.
What Interviewers Test on Agent Interoperability
Strong answers distinguish the protocol layer from the framework layer. Interviewers want to hear: (1) why opaque agent boundaries matter for enterprise trust; (2) the A2A task lifecycle states and when each transition fires; (3) how Agent Cards enable decentralized discovery without a central registry; (4) how A2A and MCP complement each other (MCP = vertical tool access, A2A = horizontal agent coordination). A 6/10 answer says 'agents talk via APIs.' A 9/10 answer explains the task state machine, streaming semantics, and why opacity enables vendor interoperability.
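The task lifecycle mentioned above can be sketched as a small state machine. The state names follow the A2A spec; the transition map is a simplified illustration of the typical flow (including the pause-and-resume loop through input-required), not the normative state machine.

```python
from enum import Enum

class TaskState(str, Enum):
    # Core A2A task lifecycle states. completed, failed, and canceled
    # are terminal; input-required pauses the task until the caller replies.
    SUBMITTED = "submitted"
    WORKING = "working"
    INPUT_REQUIRED = "input-required"
    COMPLETED = "completed"
    FAILED = "failed"
    CANCELED = "canceled"

# Simplified allowed-transition map; terminal states have no successors.
TRANSITIONS = {
    TaskState.SUBMITTED: {TaskState.WORKING, TaskState.CANCELED},
    TaskState.WORKING: {
        TaskState.INPUT_REQUIRED,
        TaskState.COMPLETED,
        TaskState.FAILED,
        TaskState.CANCELED,
    },
    TaskState.INPUT_REQUIRED: {TaskState.WORKING, TaskState.CANCELED},
    TaskState.COMPLETED: set(),
    TaskState.FAILED: set(),
    TaskState.CANCELED: set(),
}

def transition(current: TaskState, nxt: TaskState) -> TaskState:
    """Validate a lifecycle transition before recording it."""
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.value} -> {nxt.value}")
    return nxt

# Typical flow: delegate, work, pause for user input, resume, finish.
state = TaskState.SUBMITTED
state = transition(state, TaskState.WORKING)
state = transition(state, TaskState.INPUT_REQUIRED)
state = transition(state, TaskState.WORKING)
state = transition(state, TaskState.COMPLETED)
```

In an interview, being able to name which states are terminal and where the input-required pause fits is what separates the state-machine answer from the generic "agents talk via APIs" one.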