Aibytec

LangGraph for Beginners: Build a Multi-Step AI Workflow with State Management

🐍 Python AI ⏱ 15 min read 📅 March 23, 2026 BEGINNER FRIENDLY


LangGraph gives your AI agents a memory, a map, and decision-making power. In this tutorial you will build a real customer support ticket handler — step by step — from scratch.

📬Ticket Classifier
🔀Smart Routing
Quality Check Node
🧠Shared State

What Is LangGraph — and Why Should You Care?

Most AI workflows look like a straight line: user sends a message → LLM responds. That works for simple chatbots. But real-world tasks — processing a support ticket, running a research pipeline, executing a multi-step agent — need branching logic, shared memory, and the ability to loop.

LangGraph is a Python library from LangChain that lets you build AI workflows as a stateful directed graph. Each step is a node. State flows between nodes. Edges can be conditional — meaning the workflow can make decisions and take different paths based on what the AI discovers at runtime.

📦 State — A shared Python dictionary that all nodes can read and write. Every piece of information lives here — nothing is lost between steps.

Nodes — Python functions that receive the state, do something (call an LLM, run code, make a decision), and return an updated state.

➡️ Edges — Connections between nodes. Can be fixed (always go here next) or conditional (go to node A or B based on state).
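Before touching the library, it can help to see those three primitives as plain Python. The sketch below is a toy, hand-rolled runner, not the LangGraph API, and every name in it is invented for illustration: state is a dict, nodes are functions, and a conditional "edge" is a function that returns the name of the next node.

```python
# Toy illustration of state/nodes/edges — NOT the LangGraph API.
# State is a dict; nodes are functions; an edge picks what runs next.

def greet(state):        # a node: read state, return updated state
    return {**state, "reply": f"Hello, {state['name']}!"}

def shout(state):        # an alternative node
    return {**state, "reply": f"HELLO, {state['name'].upper()}!"}

def pick_tone(state):    # a conditional "edge": returns the next node's name
    return "shout" if state["urgent"] else "greet"

nodes = {"greet": greet, "shout": shout}

def run(state):
    # One decision point, then the chosen node, then done.
    next_node = pick_tone(state)
    return nodes[next_node](state)

print(run({"name": "Ada", "urgent": True})["reply"])   # HELLO, ADA!
print(run({"name": "Ada", "urgent": False})["reply"])  # Hello, Ada!
```

LangGraph formalizes exactly this idea, and adds compilation, type checking, persistence, and loops on top of it.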

"LangGraph is to AI workflows what Django is to web apps — it gives you structure, state, and sanity at scale."

OVERVIEW

What We're Building

We will build a Customer Support Ticket Handler — an AI agent that processes incoming support tickets through a 6-node LangGraph pipeline:

📬 CLASSIFY (billing / tech / general) → 🏢 ROUTE (assign department) → ✍️ DRAFT (write AI response) → 🔍 QUALITY CHECK (score 0-100) → 🚨 APPROVE / ESCALATE (conditional route)

The key feature: After the quality check, LangGraph uses a conditional edge to automatically decide — approve the response (score ≥ 60) or escalate to a senior agent (score < 60, or when the LLM flags it). This branching logic is painful to bolt onto a simple chain — and trivial in LangGraph.

DIAGRAM

The Workflow Flowchart

Here is the complete graph we will build. Every box is a node. The diamond is a conditional edge — the decision point LangGraph evaluates at runtime.

START
  → classify_ticket()       billing | technical | general
  → route_to_department()   Finance | Tech | Success
  → draft_response()        LLM writes reply
  → quality_check()         score 0-100 + escalate?
  → should_escalate()       ← conditional edge
        score ≥ 60 → approve()   send response → END
        score < 60 → escalate()  senior review → END

LEGEND: boxes are nodes; the diamond is the conditional edge.
Figure 1 — LangGraph Customer Support Workflow (6 nodes, 1 conditional edge)
STEP 0

Setup & Installation

🐍 Python 3.9+ — required
📦 langgraph — core library
📦 langchain-anthropic — Claude API wrapper
🔑 Anthropic API Key — console.anthropic.com
terminal
pip install langgraph langchain-anthropic langchain-core
export ANTHROPIC_API_KEY='your-key-here'
project structure
support-ticket-agent/
├── support_graph.py   ← main file (we build this)
└── .env               ← API key
STEP 1

Define the State

The state is the single source of truth for your entire graph. Every node reads from it and writes back to it. We define it as a TypedDict — a typed Python dictionary that gives us autocomplete and error checking.

🔑 Key insight: Every field starts empty. As the ticket flows through each node, fields get filled in. By the time it reaches END, the state has a complete record of everything that happened — classification, routing, draft, score, and final response.

support_graph.py — State Definition
from typing import TypedDict, Literal
from langgraph.graph import StateGraph, END
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage
import os

# ── State: shared dictionary for the entire graph ──────────────
class TicketState(TypedDict):
    ticket_id: str          # e.g. "TKT-2026-001"
    ticket_text: str        # raw customer message
    classification: str     # billing | technical | general
    priority: str           # low | medium | high | urgent
    department: str         # which team handles this
    draft_response: str     # AI-written reply
    quality_score: int      # 0-100 quality rating
    final_response: str     # approved or escalated reply
    needs_escalation: bool  # flag from quality check

# ── LLM client (reused by all nodes) ───────────────────────────
llm = ChatAnthropic(
    model="claude-sonnet-4-20250514",
    max_tokens=512,
    api_key=os.environ.get("ANTHROPIC_API_KEY")
)
Field                           | Set By Node         | Purpose
ticket_id, ticket_text          | You (input)         | Starting data — the ticket to process
classification, priority        | classify_ticket     | Category and urgency from LLM analysis
department                      | route_to_department | Which human team would handle this
draft_response                  | draft_response      | Initial AI-written reply to the customer
quality_score, needs_escalation | quality_check       | Quality rating and escalation flag
final_response                  | approve / escalate  | What actually gets sent to the customer
STEP 2

Build the Nodes

Each node is a regular Python function that receives the current state and returns an updated version. The signature is always the same: def node_name(state: TicketState) -> TicketState.

Node 1 — classify_ticket()

Sends the ticket text to Claude and asks it to return a structured classification. We parse the output line by line, which is simpler and more forgiving than parsing JSON.

Node 1: classify_ticket
def classify_ticket(state: TicketState) -> TicketState:
    """Node 1: Classify ticket into category and set priority."""
    prompt = f"""Classify this customer support ticket.

Ticket: {state['ticket_text']}

Respond in EXACTLY this format (no extra text):
CATEGORY: billing|technical|general
PRIORITY: low|medium|high|urgent
"""
    response = llm.invoke([HumanMessage(content=prompt)])
    text = response.content.strip()

    # Parse the structured response
    category = "general"  # safe defaults
    priority = "medium"

    for line in text.split("\n"):
        if line.startswith("CATEGORY:"):
            category = line.split(":")[1].strip().lower()
        elif line.startswith("PRIORITY:"):
            priority = line.split(":")[1].strip().lower()

    print(f"[classify] Category: {category} | Priority: {priority}")

    # Return updated state — always spread with {**state, ...}
    return {**state, "classification": category, "priority": priority}

Node 2 — route_to_department()

A lightweight routing node — no LLM needed here. Just a dictionary lookup that maps the classification to a human department name. Simple, fast, zero API cost.

Node 2: route_to_department
def route_to_department(state: TicketState) -> TicketState:
    """Node 2: Map classification to department. No LLM needed."""
    department_map = {
        "billing": "Finance & Billing Team",
        "technical": "Technical Support Team",
        "general": "Customer Success Team"
    }
    dept = department_map.get(state["classification"], "Customer Success Team")
    print(f"[route] Assigned to: {dept}")
    return {**state, "department": dept}

Node 3 — draft_response()

Uses the department and classification from state to write a contextually appropriate response. Notice how the prompt dynamically uses values from state — this is why shared state is so powerful.

Node 3: draft_response
def draft_response(state: TicketState) -> TicketState:
    """Node 3: Write a department-specific reply using the LLM."""
    prompt = f"""You are an agent from the {state['department']}.
Write a professional, empathetic response to this {state['classification']} support ticket.

Original ticket: {state['ticket_text']}
Priority level: {state['priority']}

Requirements:
- Acknowledge the customer's concern directly
- Provide a concrete next step or solution
- Keep it 2-3 paragraphs
- End with your name: 'Support Team, AiBytec'
"""
    response = llm.invoke([HumanMessage(content=prompt)])
    print(f"[draft] Response drafted ({len(response.content)} chars)")
    return {**state, "draft_response": response.content}

Node 4 — quality_check()

The AI reviews its own draft, scores it 0–100, and flags whether the ticket needs human escalation. This self-review loop is one of the most powerful patterns in agentic AI.

Node 4: quality_check
def quality_check(state: TicketState) -> TicketState:
    """Node 4: Self-review — score the draft and flag for escalation."""
    prompt = f"""Review this customer support response for quality.

Original ticket: {state['ticket_text']}
Drafted response: {state['draft_response']}

Evaluate on: accuracy, empathy, actionability, professionalism.

Respond in EXACTLY this format:
SCORE: [0-100]
ESCALATE: yes|no
REASON: [one sentence]
"""
    response = llm.invoke([HumanMessage(content=prompt)])
    text = response.content.strip()

    score = 75  # safe defaults
    escalate = False

    for line in text.split("\n"):
        if line.startswith("SCORE:"):
            try:
                score = int(line.split(":")[1].strip())
            except ValueError:
                pass
        elif line.startswith("ESCALATE:"):
            escalate = "yes" in line.lower()

    print(f"[quality] Score: {score}/100 | Escalate: {escalate}")
    return {**state, "quality_score": score, "needs_escalation": escalate}

Nodes 5a & 5b — approve() and escalate()

Two terminal nodes — only one will ever run per ticket, chosen by the conditional edge we define next. They write the final_response field and hand off to END.

Nodes 5a & 5b: approve and escalate
def approve_response(state: TicketState) -> TicketState:
    """Node 5a: Quality passed — finalise and send."""
    print(f"[approve] Response approved (score: {state['quality_score']})")
    return {**state, "final_response": state["draft_response"]}

def escalate_ticket(state: TicketState) -> TicketState:
    """Node 5b: Quality failed — wrap for senior human review."""
    escalation_header = (
        f"[ESCALATED — Quality Score: {state['quality_score']}/100]\n"
        f"Department: {state['department']}\n"
        f"Priority: {state['priority']}\n"
        f"{'='*50}\n\n"
    )
    print("[escalate] Ticket escalated to senior agent")
    return {
        **state,
        "final_response": escalation_header + state["draft_response"]
    }
STEP 3

Add the Conditional Edge

This is the magic of LangGraph. A conditional edge is a function that inspects the current state and returns a string telling LangGraph which node to run next. Annotating the return type with Literal documents the allowed options and lets your type checker verify them.

💡 Beginner tip: The function name does not matter to LangGraph. What matters is what it returns — the returned string must match a key in the dictionary you pass to add_conditional_edges().

Conditional Edge Function
def should_escalate(state: TicketState) -> Literal["escalate", "approve"]:
    """
    Conditional edge: inspects state and decides the next node.

    Rules:
    - Escalate if LLM flagged it OR score is below 60
    - Approve otherwise
    """
    if state["needs_escalation"] or state["quality_score"] < 60:
        return "escalate"
    return "approve"

# The return value maps to the dict keys in add_conditional_edges()
#   "escalate" → escalate_ticket node
#   "approve"  → approve_response node
STEP 4

Compile & Run the Graph

Now we wire everything together. StateGraph accepts our state class. We add nodes, define edges, set the entry point, and call .compile() to get a runnable app.

Build + Compile + Run
def build_support_graph():
    """Assemble and compile the full LangGraph workflow."""
    graph = StateGraph(TicketState)

    # ── 1. Register all nodes ──────────────────────────────────
    graph.add_node("classify", classify_ticket)
    graph.add_node("route", route_to_department)
    graph.add_node("draft", draft_response)
    graph.add_node("quality_check", quality_check)
    graph.add_node("approve", approve_response)
    graph.add_node("escalate", escalate_ticket)

    # ── 2. Set entry point ─────────────────────────────────────
    graph.set_entry_point("classify")

    # ── 3. Add fixed edges ─────────────────────────────────────
    graph.add_edge("classify", "route")
    graph.add_edge("route", "draft")
    graph.add_edge("draft", "quality_check")

    # ── 4. Add conditional edge (the decision point) ───────────
    graph.add_conditional_edges(
        source="quality_check",   # from this node
        path=should_escalate,     # call this function
        path_map={                # map return values to nodes
            "approve": "approve",
            "escalate": "escalate"
        }
    )

    # ── 5. Both terminal nodes go to END ───────────────────────
    graph.add_edge("approve", END)
    graph.add_edge("escalate", END)

    return graph.compile()  # returns a runnable Pregel object

# ── Run it ─────────────────────────────────────────────────────
if __name__ == "__main__":
    support_app = build_support_graph()

    # Test with a billing ticket
    initial_state = {
        "ticket_id": "TKT-2026-001",
        "ticket_text": (
            "I was charged twice for my subscription this month and I need "
            "a refund immediately. This has happened before and I'm very frustrated."
        ),
        "classification": "",
        "priority": "",
        "department": "",
        "draft_response": "",
        "quality_score": 0,
        "final_response": "",
        "needs_escalation": False,
    }

    result = support_app.invoke(initial_state)

    print("\n" + "="*50)
    print(f"Ticket ID: {result['ticket_id']}")
    print(f"Classification: {result['classification']}")
    print(f"Priority: {result['priority']}")
    print(f"Department: {result['department']}")
    print(f"Quality Score: {result['quality_score']}/100")
    print(f"Escalated: {result['needs_escalation']}")
    print(f"\nFinal Response:\n{result['final_response']}")
$ python support_graph.py
[classify] Category: billing | Priority: high
[route] Assigned to: Finance & Billing Team
[draft] Response drafted (842 chars)
[quality] Score: 87/100 | Escalate: False
[approve] Response approved (score: 87)

==================================================
Ticket ID: TKT-2026-001
Classification: billing
Priority: high
Department: Finance & Billing Team
Quality Score: 87/100
Escalated: False

Final Response:
Dear Valued Customer,

Thank you for reaching out and we sincerely apologise for the double charge
on your account. We understand how frustrating this must be, especially as
it has occurred previously... [response continues]

Full Code — Complete File

Copy the complete support_graph.py below. This is the entire working agent — nothing omitted.

support_graph.py — COMPLETE FILE FULL CODE
""" LangGraph Customer Support Ticket Handler AiBytec.com — Python AI Tutorial github.com/MuhammadRustamShomi/langgraph-support-agent """from typing import TypedDict, Literal from langgraph.graph import StateGraph, END from langchain_anthropic import ChatAnthropic from langchain_core.messages import HumanMessage import os# ── State ────────────────────────────────────────────────────── class TicketState(TypedDict): ticket_id: str ticket_text: str classification: str priority: str department: str draft_response: str quality_score: int final_response: str needs_escalation: bool# ── LLM ──────────────────────────────────────────────────────── llm = ChatAnthropic( model="claude-sonnet-4-20250514", max_tokens=512, api_key=os.environ.get("ANTHROPIC_API_KEY") )# ── Node 1: Classify ─────────────────────────────────────────── def classify_ticket(state: TicketState) -> TicketState: prompt = f"""Classify this support ticket.Ticket: {state['ticket_text']}Respond EXACTLY in this format: CATEGORY: billing|technical|general PRIORITY: low|medium|high|urgent """ response = llm.invoke([HumanMessage(content=prompt)]) text = response.content.strip() category, priority = "general", "medium" for line in text.split("\n"): if line.startswith("CATEGORY:"): category = line.split(":")[1].strip().lower() elif line.startswith("PRIORITY:"): priority = line.split(":")[1].strip().lower() print(f"[classify] {category} | {priority}") return {**state, "classification": category, "priority": priority}# ── Node 2: Route ────────────────────────────────────────────── def route_to_department(state: TicketState) -> TicketState: dept_map = { "billing": "Finance & Billing Team", "technical": "Technical Support Team", "general": "Customer Success Team" } dept = dept_map.get(state["classification"], "Customer Success Team") print(f"[route] {dept}") return {**state, "department": dept}# ── Node 3: Draft Response ───────────────────────────────────── def draft_response(state: TicketState) -> TicketState: 
prompt = f"""You are from the {state['department']}. Write a professional, empathetic response to this {state['classification']} ticket.Ticket: {state['ticket_text']} Priority: {state['priority']}Write 2-3 paragraphs. End with: 'Support Team, AiBytec' """ response = llm.invoke([HumanMessage(content=prompt)]) print(f"[draft] {len(response.content)} chars drafted") return {**state, "draft_response": response.content}# ── Node 4: Quality Check ────────────────────────────────────── def quality_check(state: TicketState) -> TicketState: prompt = f"""Review this support response for quality.Ticket: {state['ticket_text']} Response: {state['draft_response']}Respond EXACTLY in this format: SCORE: [0-100] ESCALATE: yes|no REASON: [one sentence] """ response = llm.invoke([HumanMessage(content=prompt)]) text = response.content.strip() score, escalate = 75, False for line in text.split("\n"): if line.startswith("SCORE:"): try: score = int(line.split(":")[1].strip()) except: pass elif line.startswith("ESCALATE:"): escalate = "yes" in line.lower() print(f"[quality] Score: {score} | Escalate: {escalate}") return {**state, "quality_score": score, "needs_escalation": escalate}# ── Node 5a: Approve ─────────────────────────────────────────── def approve_response(state: TicketState) -> TicketState: print(f"[approve] Approved (score: {state['quality_score']})") return {**state, "final_response": state["draft_response"]}# ── Node 5b: Escalate ────────────────────────────────────────── def escalate_ticket(state: TicketState) -> TicketState: header = ( f"[ESCALATED — Score: {state['quality_score']}/100]\n" f"Dept: {state['department']} | Priority: {state['priority']}\n" f"{'='*50}\n\n" ) print(f"[escalate] Escalated to senior agent") return {**state, "final_response": header + state["draft_response"]}# ── Conditional Edge Function ────────────────────────────────── def should_escalate(state: TicketState) -> Literal["escalate", "approve"]: if state["needs_escalation"] or 
state["quality_score"] < 60: return "escalate" return "approve"# ── Build Graph ──────────────────────────────────────────────── def build_support_graph(): graph = StateGraph(TicketState) graph.add_node("classify", classify_ticket) graph.add_node("route", route_to_department) graph.add_node("draft", draft_response) graph.add_node("quality_check", quality_check) graph.add_node("approve", approve_response) graph.add_node("escalate", escalate_ticket) graph.set_entry_point("classify") graph.add_edge("classify", "route") graph.add_edge("route", "draft") graph.add_edge("draft", "quality_check") graph.add_conditional_edges( "quality_check", should_escalate, {"approve": "approve", "escalate": "escalate"} ) graph.add_edge("approve", END) graph.add_edge("escalate", END) return graph.compile()# ── Main ─────────────────────────────────────────────────────── if __name__ == "__main__": app = build_support_graph()tickets = [ { "ticket_id": "TKT-001", "ticket_text": "I was charged twice this month. Need refund ASAP.", "classification": "", "priority": "", "department": "", "draft_response": "", "quality_score": 0, "final_response": "", "needs_escalation": False, }, { "ticket_id": "TKT-002", "ticket_text": "My API integration returns 403 on every request.", "classification": "", "priority": "", "department": "", "draft_response": "", "quality_score": 0, "final_response": "", "needs_escalation": False, }, ]for ticket in tickets: print(f"\n{'='*50}\nProcessing {ticket['ticket_id']}...") result = app.invoke(ticket) print(f"Classification: {result['classification']}") print(f"Priority: {result['priority']}") print(f"Quality Score: {result['quality_score']}/100") print(f"Escalated: {result['needs_escalation']}") print(f"\n--- Final Response ---\n{result['final_response'][:300]}...")

Next Steps & Extensions

You now have a working 6-node LangGraph agent. Here is how to extend it for production:

🔁

Add Retry Logic

Add a loop back from escalate to draft_response — Claude rewrites until quality score is above threshold.
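A minimal sketch of what that retry edge could look like, assuming you add a retry_count field to TicketState and increment it inside draft_response(). The names MAX_RETRIES, PASS_SCORE, and should_retry are illustrative, not part of the tutorial code:

```python
# Sketch: three-way conditional edge enabling a retry loop.
# Assumes TicketState gains a `retry_count: int` field that
# draft_response() increments on each pass.

MAX_RETRIES = 2
PASS_SCORE = 60

def should_retry(state) -> str:
    """Return the next node's name: approve, draft (retry), or escalate."""
    if state["quality_score"] >= PASS_SCORE and not state["needs_escalation"]:
        return "approve"
    if state.get("retry_count", 0) < MAX_RETRIES:
        return "draft"        # loop back: Claude rewrites the response
    return "escalate"         # retries exhausted — hand to a human

# Wired in with:
# graph.add_conditional_edges("quality_check", should_retry,
#     {"approve": "approve", "draft": "draft", "escalate": "escalate"})
```

Because edges can point backwards, the same add_conditional_edges() call that did approve/escalate branching also gives you loops for free.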

💾

Add Memory (Checkpointing)

Use a LangGraph checkpointer (MemorySaver for in-memory checkpoints, or SqliteSaver for on-disk persistence) to save state after every node and resume interrupted workflows.

🖥️

Streamlit Dashboard

Wrap in Streamlit — ticket input form, live node progress display, and response output panel.

🗄️

Database Integration

Add a node that pulls customer history from PostgreSQL — so Claude sees past tickets before drafting.

🌐

FastAPI Endpoint

Deploy as a REST API with FastAPI — receive tickets as POST requests, return structured JSON responses.

👥

Multi-Agent Teams

Add specialist sub-agents per department — each with their own tools, prompts, and decision logic.

🎓 What You Learned Today

How to define a TypedDict state as the shared memory of a workflow
How to write nodes that receive and return state
How to add conditional edges that branch based on AI output
How to use the self-review pattern to catch low-quality AI outputs
How to compile and invoke the graph with a starting state
Why structured output parsing beats raw JSON for robustness
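On that last point, here is the line-based parser from classify_ticket as a standalone function (parse_classification is a name invented for this illustration). Malformed LLM output never raises an exception; it simply falls through to the safe defaults:

```python
def parse_classification(text: str) -> tuple[str, str]:
    """Line-based parse with safe defaults — same pattern as classify_ticket."""
    category, priority = "general", "medium"   # defaults survive malformed output
    for line in text.split("\n"):
        if line.startswith("CATEGORY:"):
            category = line.split(":")[1].strip().lower()
        elif line.startswith("PRIORITY:"):
            priority = line.split(":")[1].strip().lower()
    return category, priority

print(parse_classification("CATEGORY: billing\nPRIORITY: urgent"))  # ('billing', 'urgent')
print(parse_classification("Sorry, I can't classify that."))        # ('general', 'medium')
```

A JSON parser given the second input would raise and crash the node; this parser degrades gracefully, which is exactly what you want in a pipeline that must process every ticket.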

You Just Built a Real AI Agent

Most developers think AI agents are complicated. They are not — once you understand the three primitives: state, nodes, and edges. Everything else — multi-agent teams, loops, memory, tool use — is just combining these building blocks in different ways.

LangGraph is the cleanest way to build these workflows in Python. And now you know exactly how it works.

🎓

Build 10 More Agents Like This

In Certificate 2: Agentic AI Developer at AiBytec, we go from LangGraph basics to full multi-agent production systems with LangFuse observability, FastAPI deployment, and real client projects.

Enroll at AiBytec.com →

🐍 Share this with every Python developer you know who hasn't tried LangGraph yet.

#LangGraph #PythonAI #AgenticAI #StateManagement #LangChain #ClaudeAPI #AITutorial #AiBytec
