
How to Build an AI Agent with LangChain: A Complete 2026 Tutorial

Zarif

If you want to build AI agents that actually do things — browse the web, run code, query databases — LangChain is still the fastest path from zero to working prototype in 2026.

Definition

An AI agent built with LangChain is a Python program that connects a large language model to a set of tools, allowing it to reason through multi-step tasks, call external APIs, and take actions autonomously until a goal is completed.

TL;DR

  • LangChain + LangGraph is the dominant stack for building AI agents in Python in 2026 — LangChain for simple agents, LangGraph for production-grade stateful workflows
  • The core pattern is ReAct: the agent Reasons about what to do, takes an Action with a tool, Observes the result, and repeats until done
  • You need three things to build a working agent: an LLM, at least one tool, and a runtime loop (LangGraph's create_react_agent is the recommended approach)
  • Memory is handled via message state — short-term by default, persistent with a checkpointer
  • Building an agent takes less than 50 lines of Python once your environment is set up

What LangChain Actually Is (and Why It Still Matters in 2026)

LangChain started as a utility library for chaining LLM calls. In 2026, it's evolved into a full agent framework — and it runs on top of LangGraph, a lower-level library for building stateful, graph-based workflows.

The distinction matters:

  • LangChain gives you pre-built agent templates, tool integrations, and model connectors. Best for getting started fast.
  • LangGraph gives you fine-grained control over agent state, branching logic, and human-in-the-loop interrupts. Best for production systems.

For this tutorial, you'll use LangChain's high-level API to build your first agent, then understand where LangGraph fits in when you need more control.

LangChain supports over 1,000 integrations — covering every major LLM provider (OpenAI, Anthropic, Google, Mistral), vector databases, search APIs, and custom tools. This means you're not locked into any single vendor, and you can swap models without rewriting your agent logic.

Step 1: Set Up Your Environment

Before writing any agent code, get your environment ready.

pip install langchain langchain-openai langgraph

Set your API key as an environment variable — never hardcode it in your script:

export OPENAI_API_KEY="your-key-here"

If you're using Anthropic's Claude instead of OpenAI:

pip install langchain-anthropic
export ANTHROPIC_API_KEY="your-key-here"

Tip

Use a .env file and the python-dotenv library to manage API keys locally. Add .env to your .gitignore immediately — this is the most common way developers accidentally leak credentials.

Step 2: Understand the ReAct Pattern

Before writing code, understand what your agent is actually doing under the hood.

LangChain agents use the ReAct pattern (Reasoning + Acting). On every turn, the agent:

  1. Reasons — The LLM thinks through what it needs to do next and which tool to call
  2. Acts — It calls the selected tool with the appropriate inputs
  3. Observes — It reads the tool's output
  4. Repeats — It reasons again based on the observation, until it has a final answer

This loop is what separates an agent from a simple LLM call. An LLM call is one shot. An agent keeps going until the task is done — or until it hits your configured iteration limit.

The ReAct pattern is transparent: you can see every reasoning step and every tool call in the agent's output. This makes debugging much easier than black-box approaches.
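To make the loop concrete, here's a toy version in plain Python. run_react_loop and fake_llm are illustrative stand-ins (not LangChain APIs) — the real framework does this with LLM tool-calling under the hood:

```python
def run_react_loop(llm_step, tools, task, max_steps=5):
    """Toy ReAct loop. llm_step returns ('final', answer) to stop,
    or ('tool', name, args) to call a tool; tools maps name -> callable."""
    observations = []
    for _ in range(max_steps):
        decision = llm_step(task, observations)        # 1. Reason
        if decision[0] == "final":
            return decision[1]
        _, name, args = decision
        result = tools[name](args)                     # 2. Act
        observations.append((name, result))            # 3. Observe
    return "Stopped: iteration limit reached"          # 4. Repeat, bounded

# Scripted stand-in for the LLM: one calculator call, then a final answer
def fake_llm(task, observations):
    if not observations:
        return ("tool", "calculator", "847 * 0.15")
    return ("final", f"15% of 847 is {observations[-1][1]:.2f}")

tools = {"calculator": lambda expr: eval(expr, {"__builtins__": {}}, {})}
answer = run_react_loop(fake_llm, tools, "What is 15% of 847?")
```

The iteration cap in the loop is the same idea as the recursion limit you'll configure on a real agent later in this tutorial.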

Step 3: Define Your Tools

Tools are what give your agent capabilities beyond text generation. A tool is any Python function your agent can call during its reasoning loop.

Here's how to define two basic tools — a calculator and a web search:

from langchain_core.tools import tool

@tool
def calculator(expression: str) -> str:
    """Evaluates a mathematical expression. Input should be a valid Python math expression like '2 + 2' or '100 * 0.15'."""
    try:
        # Restrict builtins so the expression can't call arbitrary functions;
        # for untrusted input, prefer a real math parser over eval
        result = eval(expression, {"__builtins__": {}}, {})
        return str(result)
    except Exception as e:
        return f"Error: {e}"

@tool
def get_current_date() -> str:
    """Returns the current date in ISO format (YYYY-MM-DD). Use this when you need to know today's date."""
    from datetime import date
    return str(date.today())

The docstring on each tool is critical — it's what the LLM reads to decide when and how to use the tool. Write docstrings that are specific, describe the input format, and explain exactly what the tool returns.
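Under the hood, the decorator converts your function into a schema the model actually sees. Here's a hand-built sketch of roughly what gets sent for the calculator tool — the field layout follows the OpenAI function-calling format, which LangChain targets for OpenAI models; the exact wire shape may differ:

```python
calculator_schema = {
    "type": "function",
    "function": {
        "name": "calculator",                      # from the function name
        "description": (                           # from the docstring
            "Evaluates a mathematical expression. Input should be a valid "
            "Python math expression like '2 + 2' or '100 * 0.15'."
        ),
        "parameters": {                            # from the type hints
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
}
```

This is why the docstring matters so much: it's essentially the only guidance the model gets about when and how the tool applies.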

For real agents, you'll typically include tools like:

  • Web search (Tavily, SerpAPI, Brave Search)
  • Code execution
  • Database queries
  • File reading/writing
  • API calls to external services

LangChain ships with built-in integrations for many of these, so you don't always need to write custom tools from scratch.

Step 4: Initialize Your LLM and Build the Agent

With your tools defined, connect them to an LLM and create the agent:

from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

# Initialize the LLM
llm = ChatOpenAI(model="gpt-4o", temperature=0)

# Combine your tools
tools = [calculator, get_current_date]

# Create the agent
agent = create_react_agent(llm, tools)

The create_react_agent function from LangGraph is the current recommended approach — it replaced the older create_react_agent from langchain.agents which was deprecated in v1.0. LangGraph's version adds built-in state management, checkpointing, and better support for multi-turn conversations.

Setting temperature=0 makes your agent's outputs as close to deterministic as the API allows. For most task-oriented agents, you want consistency, not creativity.


Step 5: Run Your Agent

Now actually invoke the agent with a task:

# Run the agent
result = agent.invoke({
    "messages": [("human", "What is 15% of 847, and what is today's date?")]
})

# Print the final response
print(result["messages"][-1].content)

The agent will reason through the task, call the calculator with 847 * 0.15, call the date tool, and combine the results into a final answer — all automatically.

To see every step in the agent's reasoning loop, print all messages:

for message in result["messages"]:
    print(f"{message.type}: {message.content}")

This shows you the full chain: human input → AI reasoning → tool calls → tool outputs → final AI response.
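A toy transcript makes that shape concrete. The contents below are illustrative — real entries are LangChain message objects, but they expose the same .type and .content attributes used by the print loop above:

```python
from types import SimpleNamespace as Msg

# Illustrative stand-ins for LangChain message objects
messages = [
    Msg(type="human", content="What is 15% of 847, and what is today's date?"),
    Msg(type="ai", content="I'll call the calculator, then the date tool."),
    Msg(type="tool", content="127.05"),
    Msg(type="tool", content="2026-01-15"),
    Msg(type="ai", content="15% of 847 is 127.05, and today is 2026-01-15."),
]

transcript = [f"{m.type}: {m.content}" for m in messages]
```

Inspecting this trace is usually the fastest way to debug an agent that picks the wrong tool or stops early.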

Step 6: Add Memory for Multi-Turn Conversations

By default, each agent invocation starts fresh. To build an agent that remembers previous turns, add a checkpointer:

from langgraph.checkpoint.memory import MemorySaver

# Create agent with memory
memory = MemorySaver()
agent_with_memory = create_react_agent(llm, tools, checkpointer=memory)

# Use a thread_id to maintain conversation state
config = {"configurable": {"thread_id": "user-123"}}

# First message
result1 = agent_with_memory.invoke(
    {"messages": [("human", "My name is Zarif.")]},
    config=config
)

# Second message — agent remembers the first
result2 = agent_with_memory.invoke(
    {"messages": [("human", "What's my name?")]},
    config=config
)

print(result2["messages"][-1].content)  # "Your name is Zarif."

The thread_id is how the checkpointer knows which conversation to load. Use a unique ID per user session in production. For persistent memory across restarts, swap MemorySaver for a database-backed checkpointer like PostgresSaver or RedisSaver.
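Conceptually, a checkpointer is just a store keyed by thread_id: load the prior messages, append the new turn, save. A toy in-memory version shows the idea — ToyCheckpointer is illustrative, not the MemorySaver API:

```python
class ToyCheckpointer:
    """Toy stand-in for a checkpointer: message history per thread_id."""
    def __init__(self):
        self._threads = {}

    def load(self, thread_id):
        return list(self._threads.get(thread_id, []))

    def save(self, thread_id, messages):
        self._threads[thread_id] = list(messages)

ckpt = ToyCheckpointer()

# Turn 1 on thread "user-123"
history = ckpt.load("user-123")                  # [] on the first turn
history.append(("human", "My name is Zarif."))
ckpt.save("user-123", history)

# Turn 2 on the same thread sees earlier messages; other threads start fresh
same_thread = ckpt.load("user-123")
other_thread = ckpt.load("user-456")
```

A database-backed checkpointer does the same thing, just with the store surviving process restarts.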

Step 7: Add a System Prompt

System prompts let you give your agent a persona, specific instructions, or domain constraints:

from langchain_core.messages import SystemMessage

system_prompt = """You are a helpful financial assistant.
You have access to a calculator and can look up today's date.
Always show your calculations step by step.
If you're asked about investments or specific financial advice, remind the user to consult a licensed advisor."""

agent = create_react_agent(
    llm,
    tools,
    prompt=system_prompt
)

In current LangGraph releases the parameter is prompt; older versions called it state_modifier, which still works but is deprecated.

A good system prompt dramatically improves agent reliability. Define the agent's role, what it should and shouldn't do, and any output format requirements upfront.

Warning

Don't put sensitive business rules or API keys in your system prompt — it's accessible to anyone who can read your LLM's input. System prompts are not a security layer.

Step 8: Handle Errors and Set Iteration Limits

Production agents need guardrails. Two essential configurations:

Iteration limit — prevents infinite loops if the agent can't find a solution. With LangGraph's create_react_agent, you set this per invocation via recursion_limit in the config (it isn't a constructor argument):

agent = create_react_agent(llm, tools)

# Stop after 10 graph steps if the agent hasn't finished
result = agent.invoke(
    {"messages": [("human", "your task here")]},
    config={"recursion_limit": 10}
)

When the limit is hit, LangGraph raises a GraphRecursionError that you can catch and turn into a graceful failure message.

Error handling — wrap your tool functions with try/except blocks so a failing tool doesn't crash the whole agent:

@tool
def safe_web_search(query: str) -> str:
    """Search the web for current information on a topic."""
    try:
        # your search implementation
        results = search_api.search(query)
        return results
    except Exception as e:
        return f"Search failed: {str(e)}. Try rephrasing your query."

When a tool returns an error message instead of crashing, the agent can reason about the failure and try a different approach.
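If you have many tools, you can apply the same pattern once with a decorator. tool_safety_net is a hypothetical helper for illustration, not a LangChain utility:

```python
import functools

def tool_safety_net(fn):
    """Wrap a tool function so any exception comes back as text the
    agent can read and reason about, instead of crashing the run."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except Exception as e:
            return f"{fn.__name__} failed: {e}. Try a different approach."
    return wrapper

@tool_safety_net
def flaky_lookup(query: str) -> str:
    """A tool that always fails, to show the wrapper in action."""
    raise TimeoutError("upstream API timed out")
```

Stack it beneath @tool (with @tool on the outside) so LangChain sees the already-wrapped function.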

Deploying Your Agent

Once your agent works locally, the standard production path is:

  1. Wrap the agent in a FastAPI endpoint to expose it as an HTTP API
  2. Run it on a cloud platform (AWS Lambda, Google Cloud Run, Railway, Sevalla)
  3. Add authentication to your API before exposing it publicly
  4. Monitor tool call volumes and LLM token usage — these are your main cost drivers

LangGraph also offers LangGraph Cloud, a managed hosting platform that handles scaling and persistence for production agents. It's worth evaluating if you don't want to manage infrastructure yourself.

Where to Go From Here

Once you've built your first LangChain agent, the logical next steps are:

  • Multi-agent systems — build a supervisor agent that orchestrates multiple specialized sub-agents
  • RAG integration — add a vector database so your agent can search over private documents
  • Human-in-the-loop — use LangGraph's interrupt system to pause the agent and ask for human approval before critical actions
  • Streaming — stream agent responses token-by-token for better UX in chat interfaces

The skills compound fast. Once you can build one tool and one agent loop, each new capability is mostly just another tool definition and a sharper system prompt — the hard part is behind you.

What is LangChain used for in 2026?

LangChain is used to build AI-powered applications that connect large language models to tools, databases, and external APIs. The most common use cases in 2026 are AI agents (autonomous task-completing systems), RAG systems (AI that can search and reason over private documents), and chatbots with persistent memory. LangChain provides the plumbing — model connectors, tool integrations, and agent templates — so you don't have to build these from scratch.

Do I need LangGraph or LangChain to build an agent?

For a simple agent, LangChain's high-level API is sufficient. For production systems that need stateful workflows, human-in-the-loop checkpoints, or complex branching logic, use LangGraph. In practice, LangChain runs on top of LangGraph — so you're using both. Think of LangChain as the easy on-ramp and LangGraph as the full highway once you need more control.

How much does it cost to run a LangChain agent?

The main cost is LLM API calls. An agent typically makes 2–6 LLM calls per task (one per reasoning step). With GPT-4o at roughly $0.005 per 1K output tokens, a simple task might cost $0.01–0.05. For high-volume production agents, use GPT-4o-mini or Claude Haiku for reasoning steps where top-model quality isn't needed — this can cut costs by 10–20x. Add tool costs (search API subscriptions, database queries) on top.
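The arithmetic behind that estimate, using the article's figures (the prices and token counts here are assumptions for illustration, not current list prices):

```python
def task_cost(llm_calls: int, avg_output_tokens: int, price_per_1k: float) -> float:
    """Rough per-task cost; output tokens usually dominate agent spend."""
    return llm_calls * avg_output_tokens / 1000 * price_per_1k

# 4 reasoning steps, ~500 output tokens each, at $0.005 per 1K tokens
cost = task_cost(4, 500, 0.005)

# Routing easy steps to a model ~15x cheaper per token
cheap = task_cost(4, 500, 0.005 / 15)
```

Multiply by your daily task volume to see why model routing matters at scale.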

Is LangChain still worth learning in 2026?

Yes — LangChain remains the most widely adopted Python framework for building AI agents, with over 1,000 integrations and strong community support. The ecosystem has matured significantly: the core API stabilized with v1.0, LangGraph handles production complexity, and LangSmith provides observability. The skills you build with LangChain transfer directly to LangGraph and to understanding agentic AI architecture more broadly.

What is the ReAct pattern in AI agents?

ReAct stands for Reasoning and Acting. It's the loop that powers most LangChain agents: the LLM reasons about what tool to use next, calls that tool, observes the result, then reasons again based on the new information. This continues until the agent reaches a final answer. The ReAct pattern makes agents transparent and debuggable — you can see every reasoning step — which is why it's the default for most agent frameworks.

Zarif is an AI automation educator helping thousands of professionals and businesses leverage AI tools and workflows to save time, cut costs, and scale operations.