To Data & Beyond

Building LLM Agents with LangGraph Course #3: LangGraph Components & Building LangGraph Search Agent
Step-by-Step Guide to Building LLM Agents with LangGraph

Youssef Hosni
Apr 15, 2025

In our previous article, we built a ReAct agent from scratch using basic components. Now, we’ll implement a similar agent using LangGraph, a powerful extension of LangChain specifically designed for creating agents and multi-agent flows.

We will start by defining the core components of LangGraph and their roles, then build a web search agent, and finish the tutorial by testing and visualizing it.

This article is the third in the ongoing series Building LLM Agents with LangGraph:

  • Introduction to Agents & LangGraph (Published!)

  • Building Simple ReAct Agent from Scratch (Published!)

  • Main Building Units of LangGraph (You are Here!)

  • Agentic Search Tools in LangGraph (Coming Soon!)

  • Persistence and Streaming in LangGraph (Coming Soon!)

  • Human in the Loop in LLM Agents (Coming Soon!)

  • Putting it All Together! Building Essay Writer Agent (Coming Soon!)

This series is designed to take readers from foundational knowledge to advanced practices in building LLM agents with LangGraph.

Each article delves into essential components, such as constructing simple ReAct agents from scratch, leveraging LangGraph’s building units, utilizing agentic search tools, implementing persistence and streaming capabilities, integrating human-in-the-loop interactions, and culminating in the creation of a fully functional essay-writing agent.

By the end of this series, you will have a comprehensive understanding of LangGraph, practical skills to design and deploy LLM agents, and the confidence to build customized AI-driven workflows tailored to diverse applications.

Table of Contents:

  1. LangGraph Core Components

  2. Building an Agent with LangGraph

  3. Testing and Visualizing Our Agent



1. LangGraph Core Components

LangGraph allows us to describe and orchestrate control flow efficiently, particularly when creating cyclic graphs — exactly what we need for our agent implementation.

One of LangGraph’s key features is its built-in persistence, making it easier to manage multiple conversations simultaneously and remember previous iterations and actions. This persistence also enables powerful human-in-the-loop features, enhancing the flexibility of our agent systems.

Before diving into LangGraph, let’s understand some fundamental LangChain components that form the building blocks of our agent:

Prompt templates are reusable frameworks that allow us to create standardized prompts with formatted variables that can be replaced dynamically based on user content. The LangChain Hub hosts numerous examples of these templates, including agent prompts similar to what we’ve been using.
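The idea behind prompt templates can be shown without any framework. The sketch below uses Python's standard-library `string.Template` as a stand-in; LangChain's actual `ChatPromptTemplate` follows the same pattern with richer features (message roles, partial variables), and the template text here is invented for illustration.

```python
from string import Template

# A reusable prompt with named placeholders that are filled in
# dynamically at call time -- the core idea of a prompt template.
react_template = Template(
    "You are a helpful research agent.\n"
    "Question: $question\n"
    "Available tools: $tools"
)

# Substitute user-specific content into the standardized frame.
prompt = react_template.substitute(
    question="Who won the 2022 World Cup?",
    tools="tavily_search",
)
print(prompt)
```

The same template can be reused for any question/tool combination, which is what makes templates on the LangChain Hub shareable.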

Tools are another essential component. In our implementation, we’ll be using the Tavily search tool, which is available through the LangChain community package that contains hundreds of similar tools. These tools give our agents the ability to interact with external systems and data sources.

LangGraph operates on three core concepts: nodes, edges, and conditional edges. Nodes represent agents or functions, edges connect these nodes, and conditional edges determine which node to proceed to next based on specific conditions.
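These three concepts can be illustrated with a toy graph walker in plain Python (this is not the LangGraph API; node names and the `route` function are invented). Nodes are functions over a shared state, a fixed edge would always move to the same node, and a conditional edge picks the next node from the current state:

```python
def increment(state):
    # A node: a function that reads and updates the shared state.
    state["n"] += 1
    return state

def done(state):
    return state

def route(state):
    # A conditional edge: choose the next node based on the state,
    # looping back to "increment" until n reaches 3.
    return "increment" if state["n"] < 3 else "done"

nodes = {"increment": increment, "done": done}

def run(entry, state):
    current = entry
    while True:
        state = nodes[current](state)
        if current == "done":
            return state
        current = route(state)  # conditional edge decides the next node

print(run("increment", {"n": 0}))  # → {'n': 3}
```

Note the cycle: the conditional edge routes back to an earlier node, which is exactly the cyclic control flow LangGraph is designed to express.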

Figure: LangGraph components

Let’s examine how an agent would map to a LangGraph object:

  • We’ll have a node called “call OpenAI” that calls the language model

  • A conditional edge will check for the existence of an action to take

  • Another node called “take action” will execute any actions identified

The state we’ll track is relatively simple — just a list of messages that grows over time. This state is accessible at all parts of the graph, at each node and edge, and can be stored in a persistence layer for later use.
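A state like this is commonly declared as a `TypedDict` whose `messages` field is annotated with a reducer (`operator.add`), so updates are appended rather than overwritten. The sketch below uses only the standard library; the `apply_update` helper is a hypothetical stand-in for the merging LangGraph performs internally:

```python
import operator
from typing import Annotated, TypedDict

class AgentState(TypedDict):
    # The annotation attaches a reducer: new message lists are
    # concatenated onto the existing ones, so the list grows over time.
    messages: Annotated[list, operator.add]

def apply_update(state: AgentState, update: dict) -> AgentState:
    # Hypothetical helper mimicking reducer behaviour: merge each
    # updated key via list concatenation instead of replacement.
    merged = dict(state)
    for key, value in update.items():
        merged[key] = operator.add(merged[key], value)
    return merged  # type: ignore[return-value]

state: AgentState = {"messages": ["user: hello"]}
state = apply_update(state, {"messages": ["assistant: hi there"]})
print(state["messages"])  # both messages survive the update
```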

2. Building an Agent with LangGraph

To implement our agent, we’ll create three key methods:

  1. A method to call the language model (LLM node)

  2. A method to check whether there’s an action present (conditional edge)

  3. A method to execute identified actions (action node)

The LLM node takes the current state, adds the system message, calls the model, and returns the result. The conditional edge examines the last message to determine if any tool calls are present. If tool calls exist, the action node executes them by finding the relevant tool, invoking it with the appropriate arguments, and appending the results as tool messages.
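The three methods and the flow between them can be sketched in plain Python with a stubbed model and tool standing in for real LLM and search calls. The function names mirror the description above but are not LangGraph APIs, and `fake_llm` is an invented stand-in:

```python
def fake_llm(messages):
    # Stub model: request a search on the first turn, then answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "assistant",
                "tool_calls": [{"name": "search",
                                "args": {"query": "LangGraph"}}]}
    return {"role": "assistant", "content": "LangGraph builds agent graphs."}

TOOLS = {"search": lambda query: f"results for {query!r}"}

def call_llm(state):       # LLM node: add system message, call the model
    system = {"role": "system", "content": "You are a research agent."}
    return state + [fake_llm([system] + state)]

def exists_action(state):  # conditional edge: any tool calls pending?
    return bool(state[-1].get("tool_calls"))

def take_action(state):    # action node: run each requested tool
    results = []
    for call in state[-1]["tool_calls"]:
        tool = TOOLS[call["name"]]          # find the relevant tool
        results.append({"role": "tool",
                        "content": tool(**call["args"])})
    return state + results                  # append results as tool messages

state = [{"role": "user", "content": "What is LangGraph?"}]
state = call_llm(state)
while exists_action(state):
    state = take_action(state)
    state = call_llm(state)
print(state[-1]["content"])  # → LangGraph builds agent graphs.
```

In the real implementation, LangGraph's conditional edge replaces the `while` loop, routing between the LLM node and the action node until no tool calls remain.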

One of the powerful features of modern language models is parallel tool calling, which allows for multiple tools to be called simultaneously. Our implementation supports this capability, making our agent more efficient when handling complex queries.
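On the execution side, handling several tool calls from one model response can be sketched with the standard library's thread pool (the tools and calls below are invented stand-ins; parallel tool calling itself refers to the model emitting multiple `tool_calls` in a single response):

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in tools; real ones would hit a search API, weather API, etc.
def search(query):
    return f"search: {query}"

def weather(city):
    return f"weather: {city}"

TOOLS = {"search": search, "weather": weather}

# Two tool calls, as if emitted together in one model response.
tool_calls = [
    {"name": "search", "args": {"query": "LangGraph docs"}},
    {"name": "weather", "args": {"city": "Cairo"}},
]

# Execute the requested tools concurrently rather than one by one.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(TOOLS[c["name"]], **c["args"]) for c in tool_calls]
    results = [f.result() for f in futures]

print(results)  # → ['search: LangGraph docs', 'weather: Cairo']
```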

We will start by importing all the necessary components from LangGraph, LangChain, and Python’s typing module, and then create a Tavily search tool that will retrieve up to 4 results per query.
