a zine for agent-builders · v1

LangGraph &
Google ADK

two ways to build agents, side by side. primitives, data models, and the stuff the docs don't tell you — prepared for a forward deployed engineer interview.

read time: ~25 min
interactive: super-step demo, event stream, quiz
vibe: julia evans, but technical

the big idea

Both frameworks solve the same problem (orchestrating stateful LLM workflows) from opposite directions.

LangGraph is a graph runtime. You define nodes, edges, and shared state. You build the agent.

ADK is an agent framework. You pick agent types (LlmAgent, SequentialAgent, LoopAgent) and compose them. You configure the agent.

Keep that split in your back pocket. Everything fits around it.

· · ·
Part One

LangGraph 🕸️

LangGraph models an agent as a directed graph with shared state, inspired by Google's Pregel system. Every node reads state → does something → returns a state update. Edges decide who runs next. That's really the whole thing.

three primitives, that's it

01

State

A shared TypedDict. Every node reads and writes to it.

02

Nodes

Plain functions: (state) → state_update. Do the work.

03

Edges

Rules for what runs next. Fixed or conditional.

interview memory hook: State · Nodes · Edges. Lead with those three when asked "what are LangGraph's primitives?" Everything else — reducers, Send, Command, checkpointers — is variations on these.

defining state, with a twist

from typing import TypedDict, Annotated
from langgraph.graph.message import add_messages
from langchain_core.messages import AnyMessage

class AgentState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]  # ← reducer!
    classification: str  # written by the classify node below
    user_name: str
    turn_count: int

That Annotated[list, add_messages] thing is a reducer. It tells LangGraph how to merge updates to a key. Without one, new values overwrite old ones. With it, new messages append. We'll see that live in a minute.

a node is just a function

def classify(state: AgentState) -> dict:
    # read state
    last_msg = state["messages"][-1].content
    # do work (could be an LLM call, a tool, whatever)
    label = "urgent" if "help" in last_msg else "normal"
    # return ONLY the fields you want to update
    return {"classification": label}

edges: three flavors

from langgraph.graph import StateGraph, START, END

graph = StateGraph(AgentState)
graph.add_node("classify", classify)
graph.add_node("respond", respond)
graph.add_node("escalate", escalate)

# 1. fixed edge: always go from START to classify
graph.add_edge(START, "classify")

# 2. conditional edge: pick next based on state
def route(state):
    return "escalate" if state["classification"] == "urgent" else "respond"

graph.add_conditional_edges("classify", route, ["respond", "escalate"])

# 3. terminal edges
graph.add_edge("respond", END)
graph.add_edge("escalate", END)

app = graph.compile()  # ← don't forget this!

the thing you'll forget: .compile(). The graph object is a builder, not a runnable, until you compile it. Trips up every newcomer at least once.

how it actually runs: the super-step model

LangGraph doesn't traverse the graph step-by-step like a flowchart. It runs in super-steps: in each step, all active nodes run in parallel, their outputs merge via reducers, and the next set of active nodes is computed. Inspired by Pregel. Step through the demo below to see it live.

interactive demo: super-step executor

Press step to begin executing the graph on the user message "please help me urgently". The panel tracks the current super-step, the shared state (messages, classification, response), and which nodes are active, wave by wave, as execution flows START → classify → escalate → END.
Notice how it's not stepping edge by edge — it's in waves. All the active nodes in super-step N run together, their updates merge, and the waves keep propagating until every node goes quiet.
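The wave semantics fit in a short plain-Python sketch. This is a toy executor, not LangGraph's actual Pregel engine: every active node runs in the current wave, their updates merge into state, and edges compute the next wave until no nodes remain active.

```python
# Toy super-step loop: nodes maps name -> fn(state) -> partial update,
# edges maps name -> fn(state) -> list of successor names.
def run_superstep(nodes, edges, state, start="START"):
    active = edges[start](state)          # wave 0: successors of START
    steps = 0
    while active:
        updates = {}
        for name in active:               # every active node runs in this wave
            updates.update(nodes[name](state))
        state = {**state, **updates}      # merge (overwrite semantics; reducers would customize this)
        next_wave = []
        for name in active:               # edges compute the next wave
            next_wave += [n for n in edges.get(name, lambda s: [])(state) if n != "END"]
        active = next_wave
        steps += 1
    return state, steps

nodes = {
    "classify": lambda s: {"classification": "urgent" if "help" in s["msg"] else "normal"},
    "respond":  lambda s: {"response": "on it"},
    "escalate": lambda s: {"response": "paging a human"},
}
edges = {
    "START":    lambda s: ["classify"],
    "classify": lambda s: ["escalate" if s["classification"] == "urgent" else "respond"],
    "respond":  lambda s: ["END"],
    "escalate": lambda s: ["END"],
}

final, steps = run_superstep(nodes, edges, {"msg": "please help me urgently"})
# two waves total: {classify} then {escalate}
```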

reducers: the trap nobody explains

When two parallel nodes update the same state key, or when you call the same node multiple times, LangGraph needs to know how to merge the new value with the old. That's what a reducer is.

Default behavior: overwrite. But for lists of messages, you usually want append. Watch the difference:

reducer playground

🚫 no reducer (overwrites): each write replaces the messages list. Old messages vanish. 😱

✓ with add_messages reducer: each write appends. History accumulates. 🙌
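The same mechanic as a plain-Python sketch (not LangGraph internals): a reducer is just a per-key merge function, and when no reducer is registered the default is last-write-wins.

```python
# Apply a state update, consulting a per-key reducer table when one exists.
def apply_update(state, update, reducers=None):
    reducers = reducers or {}
    new = dict(state)
    for key, value in update.items():
        if key in reducers:
            new[key] = reducers[key](new.get(key), value)  # custom merge
        else:
            new[key] = value                               # default: overwrite
    return new

append = lambda old, new: (old or []) + new  # stand-in for add_messages

s = {"messages": ["hi"]}
no_reducer   = apply_update(s, {"messages": ["there"]})                        # old list vanishes
with_reducer = apply_update(s, {"messages": ["there"]}, {"messages": append})  # history accumulates
```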

interview gold: if they ask "why do messages accumulate but other fields get overwritten?" — it's the add_messages reducer. Nothing more. Being able to explain this cleanly signals real understanding.

two advanced primitives worth knowing

Send is for dynamic fan-out: spawn N parallel invocations of a node when you don't know N at graph-definition time (map-reduce-ish).

from langgraph.types import Send

def dispatch(state):
    return [Send("make_joke", {"subject": s}) for s in state["subjects"]]

graph.add_conditional_edges("pick_subjects", dispatch)

Command lets a node update state AND route in one return — skipping the usual edge resolution. Great for multi-agent handoffs.

from langgraph.types import Command

def review(state) -> Command:
    return Command(
        update={"status": "approved"},
        goto="deploy"   # jump, don't use edges
    )

persistence = superpower

Attach a checkpointer and LangGraph saves state after every super-step. This is wild. It unlocks: fault tolerance, human-in-the-loop, time travel, and long-running agents.

from langgraph.checkpoint.memory import InMemorySaver
# or: from langgraph.checkpoint.postgres import PostgresSaver

app = graph.compile(checkpointer=InMemorySaver())

# every run needs a thread_id; same thread = same conversation history
cfg = {"configurable": {"thread_id": "conv-42"}}
app.invoke({"messages": [...]}, config=cfg)
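The thread_id mechanic also fits in a toy sketch: a plain dict standing in for a real checkpointer, showing why the same thread resumes its history while a new thread starts fresh.

```python
# Toy checkpointer: thread_id -> latest state snapshot (not a real langgraph saver).
class ToyCheckpointer:
    def __init__(self):
        self._store = {}

    def load(self, thread_id):
        return dict(self._store.get(thread_id, {"messages": []}))

    def save(self, thread_id, state):
        self._store[thread_id] = dict(state)

def invoke(cp, thread_id, user_msg):
    state = cp.load(thread_id)                       # resume from last checkpoint
    state["messages"] = state["messages"] + [user_msg]
    cp.save(thread_id, state)                        # checkpoint after the step
    return state

cp = ToyCheckpointer()
invoke(cp, "conv-42", "hello")
a = invoke(cp, "conv-42", "again")   # same thread: history accumulates
b = invoke(cp, "conv-99", "fresh")   # new thread: empty history
```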
· · ·
Part Two

Google ADK 🤖

Where LangGraph gives you graph primitives, ADK gives you agent primitives. It's higher-level and more opinionated. Released at Google Cloud NEXT 2025, it's the same framework powering Google's own products (Agentspace, Customer Engagement Suite).

agent types, in order of importance

📝

LlmAgent

An LLM + instructions + tools + (optionally) sub-agents. The workhorse.

⚙️

Workflow Agents

SequentialAgent, ParallelAgent, LoopAgent. Deterministic orchestrators.

🔧

Custom Agents

Extend BaseAgent. For logic that doesn't fit the built-ins.

LlmAgent: the workhorse

from google.adk.agents import LlmAgent

capital_agent = LlmAgent(
    name="capital_agent",               # required, unique
    model="gemini-2.5-flash",
    description="Answers capital questions",  # for OTHER agents to route to this one
    instruction="Respond with the capital of the country asked.",  # system prompt
    tools=[get_capital_city],          # plain functions work!
    output_key="last_answer",          # auto-save response to state
)

don't confuse these: description is what OTHER agents see when deciding whether to delegate to this one. instruction is the system prompt for THIS agent's LLM. Both matter. They do different things.

workflow agents (the "no-LLM" orchestrators)

These are deterministic. They don't use an LLM to decide control flow — they just run their children in a fixed pattern. This is how ADK replaces the edge-routing you'd write by hand in LangGraph.

from google.adk.agents import SequentialAgent, ParallelAgent, LoopAgent

# assembly line: run in order, pass via state
pipeline = SequentialAgent(
    name="pipeline",
    sub_agents=[fetcher, analyst, summarizer],
)

# fan-out: all run concurrently
swarm = ParallelAgent(
    name="code_review_swarm",
    sub_agents=[security_checker, style_checker, performance_analyst],
)

# iterate until exit_loop tool called or max iterations hit
refiner = LoopAgent(
    name="refiner",
    sub_agents=[generator, critic],
    max_iterations=3,
)

the data model: session ▸ state ▸ events

Memorize this cold for the interview. Every ADK interaction lives inside a session, which holds its identifiers (id, app_name, user_id), a state dict for working data, and the chronological list of events.

state prefix conventions

prefix    scope                                when to use
(none)    this session only                    conversation-specific draft, plan, etc.
user:     all sessions, same user              user preferences, saved settings
app:      all sessions, all users              feature flags, shared config
temp:     one invocation only, not persisted   intermediate scratch between sub-agents
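A minimal sketch of how those prefixes partition a state delta into scopes. This is plain Python with scope names mirroring the table, not ADK's actual SessionService code:

```python
# Route each state key to its scope based on the prefix conventions above.
def scope_for(key):
    if key.startswith("user:"):
        return "user"       # shared across this user's sessions
    if key.startswith("app:"):
        return "app"        # shared across all users
    if key.startswith("temp:"):
        return "temp"       # discarded after the invocation
    return "session"        # default: this session only

delta = {"draft": "v1", "user:tone": "formal", "app:flag": True, "temp:raw": "x"}
routed = {}
for k, v in delta.items():
    routed.setdefault(scope_for(k), {})[k] = v
```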

events are the fundamental unit of flow

Every interaction produces events. State never changes directly — it changes because an event with a state_delta was emitted. Watch a full turn play out below.

adk event stream: one session, one turn

User asks: "What's the capital of Peru?" Watch events flow into the session as the LlmAgent (capital_agent) processes it. The panel tracks session.state and the cumulative event count as each event lands.
what you just saw: events are immutable, chronological, and every state change flows through one. The SessionService applies state_delta from events into session.state. That's why you use output_key or tool_context.state instead of mutating session.state directly — those helpers generate proper events.
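That loop fits in a few lines of plain Python (a toy stand-in, not ADK's SessionService): events are frozen, appended in order, and the only way state changes is by folding in each event's state_delta.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Event:
    author: str
    content: str
    state_delta: dict = field(default_factory=dict)

def append_event(session, event):
    session["events"].append(event)             # immutable, chronological log
    session["state"].update(event.state_delta)  # the ONLY path to a state change
    return session

session = {"state": {}, "events": []}
append_event(session, Event("user", "What's the capital of Peru?"))
append_event(session, Event("capital_agent", "Lima", {"last_answer": "Lima"}))
```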

sub-agents: delegation vs orchestration

There are two different ways an agent can have children. This distinction gets people confused:

# 1. LLM-DRIVEN delegation (non-deterministic)
coordinator = LlmAgent(
    name="coordinator",
    model="gemini-2.5-flash",
    instruction="Delegate to the right specialist.",
    sub_agents=[greeter_agent, weather_agent],   # LLM picks one
)

# 2. DETERMINISTIC orchestration (fixed order)
pipeline = SequentialAgent(
    name="pipeline",
    sub_agents=[fetcher, analyst],   # runs in this exact order
)

When an LlmAgent has sub_agents, its LLM dynamically routes to one of them using a built-in transfer_to_agent tool (it reads each sub-agent's description to decide). When a SequentialAgent has sub_agents, they just run in order. Know which you want.
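The contrast can be sketched in plain Python. The keyword match below is a toy stand-in for the LLM's transfer_to_agent decision; real routing is model-driven and non-deterministic:

```python
# Delegation: pick ONE child whose description matches the query
# (toy heuristic standing in for the LLM's choice).
def delegate(query, sub_agents):
    for agent in sub_agents:
        if any(word in query.lower() for word in agent["description"].split()):
            return [agent["name"]]
    return [sub_agents[0]["name"]]  # fallback: first child

# Orchestration: SequentialAgent-style — ALL children, fixed order, no model involved.
def orchestrate(sub_agents):
    return [a["name"] for a in sub_agents]

children = [
    {"name": "greeter_agent", "description": "handles greetings"},
    {"name": "weather_agent", "description": "answers weather questions"},
]
```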

the Runner and contexts

You don't run agents directly — you run them through a Runner, which creates an InvocationContext that travels with execution. For most code you only touch the specialized context types:

context             where you see it                             gives you
ToolContext         tool function params                         state + artifact helpers + auth
CallbackContext     before/after-agent callbacks                 state + artifacts
ReadonlyContext     read-only spots (e.g. dynamic instruction)   just read state
InvocationContext   inside BaseAgent._run_async_impl             everything (services, session, etc.)
· · ·

same problem, both frameworks

The fastest way to internalize the difference is to see the same solution written in both. Flip the tab.

side by side
Task: Build a content pipeline that (1) fetches a topic summary, (2) writes a draft, and (3) revises it for clarity. Three steps, fixed order, each step's output feeds the next.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    topic: str
    summary: str
    draft: str
    final: str

def fetch_summary(state):
    return {"summary": llm_summarize(state["topic"])}

def write_draft(state):
    return {"draft": llm_draft(state["summary"])}

def revise(state):
    return {"final": llm_revise(state["draft"])}

g = StateGraph(State)
g.add_node("fetch", fetch_summary)
g.add_node("draft", write_draft)
g.add_node("revise", revise)
g.add_edge(START, "fetch")
g.add_edge("fetch", "draft")
g.add_edge("draft", "revise")
g.add_edge("revise", END)

app = g.compile()
result = app.invoke({"topic": "octopi"})
primitives used
· State (TypedDict): shape of shared data
· Nodes (3 functions): each returns a partial state update
· Edges (4 fixed): linear path through the graph
· compile() + invoke(): you run the graph yourself
observation: ~20 lines of scaffolding. Total control. You wire every edge.
from google.adk.agents import LlmAgent, SequentialAgent

fetcher = LlmAgent(
    name="fetcher",
    model="gemini-2.5-flash",
    instruction="Summarize the topic from the query.",
    output_key="summary",
)

drafter = LlmAgent(
    name="drafter",
    model="gemini-2.5-flash",
    instruction="Write a draft from this summary: {summary}",
    output_key="draft",
)

reviser = LlmAgent(
    name="reviser",
    model="gemini-2.5-flash",
    instruction="Revise for clarity: {draft}",
    output_key="final",
)

pipeline = SequentialAgent(
    name="content_pipeline",
    sub_agents=[fetcher, drafter, reviser],
)
# run via Runner + SessionService
primitives used
· LlmAgent × 3: each is an LLM + instruction
· output_key: auto-saves the response to session.state
· {key} template injection: reads state into instructions
· SequentialAgent: deterministic order, no LLM routing
observation: no state class, no edges. Agents + one workflow agent. More concise, less control.

the cheatsheet

concept              LangGraph                    ADK
unit of work         Node (function)              Agent (LlmAgent)
shared data          State (TypedDict)            session.state (dict)
routing              edges (explicit)             workflow agents OR LLM delegation
history              state["messages"]            session.events
persistence          Checkpointers                SessionService
merging updates      Reducers (per-key)           event-based state_delta
parallelism          super-steps, Send()          ParallelAgent
loops                cycles + conditional edges   LoopAgent
HITL                 interrupt() + checkpointer   long-running tool
compilation needed   yes (.compile())             no
long-term memory     BYO vectorstore              MemoryService (Memory Bank)
· · ·

gotchas

Click any card to expand. These are the things that don't show up in the tutorials but will trip you up in the interview.

!
Forgetting .compile()
langgraph

Your StateGraph is a builder, not a runnable. Until you call .compile(), you can't .invoke() it. Classic first-timer mistake.

Reducers silently overwrite
langgraph

No Annotated[..., reducer] = last-write-wins. If two parallel nodes both write to state["log"] without a reducer, only one survives. Non-deterministic. Always annotate list-like fields.

Cycles with no END condition
langgraph

LangGraph happily lets you build cycles. Great for agent loops, dangerous if you forget a conditional edge routing to END. Always have an exit.

🧵
thread_id is the unit of persistence
langgraph

Two invocations with the same thread_id share history. Different thread = fresh conversation. Forgetting to pass it on resume = lost state.

description ≠ instruction
adk

description is what OTHER agents see when deciding to delegate. instruction is the system prompt for THIS agent's LLM. Mixing them up breaks multi-agent routing in weird ways.

Don't mutate session.state directly
adk

Use output_key, tool_context.state, or EventActions(state_delta=...). Direct mutation skips the event log and can desync on persistence. The event IS the state change.

InMemorySessionService in prod
adk

Default for adk web. Fine for dev, disastrous for prod — scale to 2+ instances and sessions stop being shared. Use DatabaseSessionService or VertexAiSessionService for anything real.

🤔
Sub-agents under LlmAgent vs Workflow
adk

Under LlmAgent: LLM decides (non-deterministic routing). Under SequentialAgent: fixed order. Under ParallelAgent: all at once. Exact same parameter name, radically different behavior.

📝
Tool docstrings are prompts
adk

The LLM sees the function name, docstring, and param types to decide when to call your tool. A bad docstring = a tool that never gets called (or gets called wrong). Treat them like API docs.
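A docstring written with that in mind might look like this. The tool and its lookup table are hypothetical; the pattern (purpose, when to use it, typed args, return contract) is the point:

```python
def get_capital_city(country: str) -> str:
    """Return the capital city of a country.

    Use this when the user asks for a country's capital.

    Args:
        country: Full English country name, e.g. "Peru".

    Returns:
        The capital city name, or "unknown" if the country isn't recognized.
    """
    capitals = {"peru": "Lima", "france": "Paris", "japan": "Tokyo"}  # toy lookup
    return capitals.get(country.strip().lower(), "unknown")
```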

· · ·

quiz yourself

Ten questions. Instant feedback. The ones you miss are the ones to study.

knowledge check

good luck! 🍀

The deepest interview signal isn't knowing every API — it's being able to say "here's what the framework gives me, here's what I'd have to build, here's the trade-off." You've got this.

end of zine · v1