Building Agents with LangGraph Course #6: A Guide to Human-in-the-Loop Interactions

Youssef Hosni
Aug 18, 2025

Welcome back to our course on Building Agents with LangGraph! In the previous lessons, we’ve constructed a capable agent that can use tools to answer questions.

However, in many real-world scenarios, we need more control. We might want to approve an agent’s actions before it executes them, correct its course if it misunderstands, or even explore alternative paths.

This is where the concept of “Human in the Loop” (HITL) becomes essential. LangGraph is designed with this interactivity in mind, providing powerful tools for pausing, inspecting, and manipulating the state of your agent.

In this lesson, we will explore these advanced human-in-the-loop patterns:

  • Manual Approval: How to interrupt the graph’s execution before a critical step, like a tool call, to allow for human review.

  • State Modification: How to directly edit the agent’s state to correct its actions or steer its behavior.

  • Time Travel: A fascinating feature that lets you rewind to any previous state in the conversation and branch off to explore a new path.

  • Injecting Tool Outputs: How to manually provide tool results to the agent, bypassing the actual tool execution.

Let’s dive in and see how to give our agent a human supervisor.

Table of Contents:

  1. Project Setup and Dependencies

  2. Interrupting Execution for Manual Approval

  3. Understanding State Memory and Time Travel

  • Modifying State on the Fly

  • Time Travel: Rewinding and Branching

  • Go Back in Time and Edit

  4. Manually Adding Tool Results

This article is the sixth in the ongoing Building LLM Agents with LangGraph series:

  • Introduction to Agents & LangGraph (Published!)

  • Building Simple ReAct Agent from Scratch (Published!)

  • Main Building Units of LangGraph (Published!)

  • Agentic Search Tools in LangGraph (Published!)

  • Persistence and Streaming in LangGraph (Published!)

  • Human in the Loop in LLM Agents (You are here!)

  • Putting it All Together! Building Essay Writer Agent (Coming Soon!)

This series is designed to take readers from foundational knowledge to advanced practices in building LLM agents with LangGraph.

Each article delves into essential components, such as constructing simple ReAct agents from scratch, leveraging LangGraph’s building units, utilizing agentic search tools, implementing persistence and streaming capabilities, integrating human-in-the-loop interactions, and culminating in the creation of a fully functional essay-writing agent.

By the end of this series, you will have a comprehensive understanding of LangGraph, practical skills to design and deploy LLM agents, and the confidence to build customized AI-driven workflows tailored to diverse applications.


1. Project Setup and Dependencies

We’ll begin with the familiar setup from our last lesson. First, we configure our environment variables and import the necessary libraries. We’ll use SqliteSaver to enable checkpointing, which is the mechanism that saves our graph’s state and makes all these interactive patterns possible.

from dotenv import load_dotenv
from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated
import operator
from langchain_core.messages import AnyMessage, SystemMessage, HumanMessage, ToolMessage
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults
from langgraph.checkpoint.sqlite import SqliteSaver

_ = load_dotenv()


memory = SqliteSaver.from_conn_string(":memory:")

For human-in-the-loop interactions, we often need to modify or replace messages in our agent’s state. The standard operator.add reducer simply appends new messages to the list. To gain more control, we’ll create a custom reducer function, reduce_messages.

This function will inspect incoming messages. If a new message has the same unique ID as an existing one, it will replace it. Otherwise, it will append the new message. This allows us to overwrite previous steps if needed.

from uuid import uuid4
from langchain_core.messages import AnyMessage, SystemMessage, HumanMessage, AIMessage

"""
In previous examples we've annotated the `messages` state key
with the default `operator.add` or `+` reducer, which always
appends new messages to the end of the existing messages array.

Now, to support replacing existing messages, we annotate the
`messages` key with a custom reducer function, which replaces
messages with the same `id`, and appends them otherwise.
"""
def reduce_messages(left: list[AnyMessage], right: list[AnyMessage]) -> list[AnyMessage]:
    # assign ids to messages that don't have them
    for message in right:
        if not message.id:
            message.id = str(uuid4())
    # merge the new messages with the existing messages
    merged = left.copy()
    for message in right:
        for i, existing in enumerate(merged):
            # replace any existing messages with the same id
            if existing.id == message.id:
                merged[i] = message
                break
        else:
            # append any new messages to the end
            merged.append(message)
    return merged

class AgentState(TypedDict):
    messages: Annotated[list[AnyMessage], reduce_messages]
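
Before wiring this into the graph, it helps to see the replace-or-append behavior in isolation. The following is a minimal, self-contained sketch; the Msg dataclass is a stand-in invented purely for illustration (it is not a LangChain type), since the reducer only cares about the id attribute:

```python
from dataclasses import dataclass
from uuid import uuid4

@dataclass
class Msg:
    # Stand-in for AnyMessage: the reducer only reads and writes `id`.
    content: str
    id: str = ""

def reduce_messages(left: list, right: list) -> list:
    # Assign ids to messages that don't have them.
    for message in right:
        if not message.id:
            message.id = str(uuid4())
    merged = left.copy()
    for message in right:
        for i, existing in enumerate(merged):
            if existing.id == message.id:
                merged[i] = message  # same id: replace the existing message
                break
        else:
            merged.append(message)  # unseen id: append to the end
    return merged

state = [Msg("Whats the weather in SF?", id="q1")]
state = reduce_messages(state, [Msg("AI: calling search tool", id="a1")])   # appended
state = reduce_messages(state, [Msg("Whats the weather in LA?", id="q1")])  # replaces q1
print([m.content for m in state])  # ['Whats the weather in LA?', 'AI: calling search tool']
```

This replace-by-id behavior is exactly what makes state modification possible later in the lesson: to edit a past message, we submit a new message carrying the same id.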

We’ll use the same TavilySearchResults tool as before.

tool = TavilySearchResults(max_results=2)

2. Interrupting Execution for Manual Approval

Now, let’s build our agent. The core logic remains the same, but we’ll introduce one crucial change during the compilation step. By adding interrupt_before=["action"], we instruct LangGraph to pause the execution right before calling the action node. Since our action node is responsible for executing tools, this effectively creates an approval gate.

class Agent:
    def __init__(self, model, tools, system="", checkpointer=None):
        self.system = system
        graph = StateGraph(AgentState)
        graph.add_node("llm", self.call_openai)
        graph.add_node("action", self.take_action)
        graph.add_conditional_edges("llm", self.exists_action, {True: "action", False: END})
        graph.add_edge("action", "llm")
        graph.set_entry_point("llm")
        self.graph = graph.compile(
            checkpointer=checkpointer,
            # This is the new parameter we're adding
            interrupt_before=["action"]
        )
        self.tools = {t.name: t for t in tools}
        self.model = model.bind_tools(tools)

    def call_openai(self, state: AgentState):
        messages = state['messages']
        if self.system:
            messages = [SystemMessage(content=self.system)] + messages
        message = self.model.invoke(messages)
        return {'messages': [message]}

    def exists_action(self, state: AgentState):
        print(state)
        result = state['messages'][-1]
        return len(result.tool_calls) > 0

    def take_action(self, state: AgentState):
        tool_calls = state['messages'][-1].tool_calls
        results = []
        for t in tool_calls:
            print(f"Calling: {t}")
            result = self.tools[t['name']].invoke(t['args'])
            results.append(ToolMessage(tool_call_id=t['id'], name=t['name'], content=str(result)))
        print("Back to the model!")
        return {'messages': results}

Let’s initialize the agent and make a request.

prompt = """You are a smart research assistant. Use the search engine to look up information. \
You are allowed to make multiple calls (either together or in sequence). \
Only look up information when you are sure of what you want. \
If you need to look up some information before asking a follow up question, you are allowed to do that!
"""
model = ChatOpenAI(model="gpt-3.5-turbo")
abot = Agent(model, [tool], system=prompt, checkpointer=memory)
messages = [HumanMessage(content="Whats the weather in SF?")]
thread = {"configurable": {"thread_id": "1"}}
for event in abot.graph.stream({"messages": messages}, thread):
    for v in event.values():
        print(v)

{‘messages’: [HumanMessage(content=’Whats the weather in SF?’, id=’0d8ac19e-6fc1–406f-924d-9ddcc87b8e29'), AIMessage(content=’’, additional_kwargs={‘tool_calls’: [{‘id’: ‘call_i7rGhnzgZf5hW3bsDcqdTrH0’, ‘function’: {‘arguments’: ‘{“query”:”weather in San Francisco”}’, ‘name’: ‘tavily_search_results_json’}, ‘type’: ‘function’}]}, response_metadata={‘token_usage’: {‘completion_tokens’: 22, ‘prompt_tokens’: 152, ‘total_tokens’: 174, ‘prompt_tokens_details’: {‘cached_tokens’: 0, ‘audio_tokens’: 0}, ‘completion_tokens_details’: {‘reasoning_tokens’: 0, ‘audio_tokens’: 0, ‘accepted_prediction_tokens’: 0, ‘rejected_prediction_tokens’: 0}}, ‘model_name’: ‘gpt-3.5-turbo’, ‘system_fingerprint’: None, ‘finish_reason’: ‘tool_calls’, ‘logprobs’: None}, id=’run-ef442ef3–7e27–4d37-b0f4-ba29d1161590–0', tool_calls=[{‘name’: ‘tavily_search_results_json’, ‘args’: {‘query’: ‘weather in San Francisco’}, ‘id’: ‘call_i7rGhnzgZf5hW3bsDcqdTrH0’}])]}
{‘messages’: [AIMessage(content=’’, additional_kwargs={‘tool_calls’: [{‘id’: ‘call_i7rGhnzgZf5hW3bsDcqdTrH0’, ‘function’: {‘arguments’: ‘{“query”:”weather in San Francisco”}’, ‘name’: ‘tavily_search_results_json’}, ‘type’: ‘function’}]}, response_metadata={‘token_usage’: {‘completion_tokens’: 22, ‘prompt_tokens’: 152, ‘total_tokens’: 174, ‘prompt_tokens_details’: {‘cached_tokens’: 0, ‘audio_tokens’: 0}, ‘completion_tokens_details’: {‘reasoning_tokens’: 0, ‘audio_tokens’: 0, ‘accepted_prediction_tokens’: 0, ‘rejected_prediction_tokens’: 0}}, ‘model_name’: ‘gpt-3.5-turbo’, ‘system_fingerprint’: None, ‘finish_reason’: ‘tool_calls’, ‘logprobs’: None}, id=’run-ef442ef3–7e27–4d37-b0f4-ba29d1161590–0', tool_calls=[{‘name’: ‘tavily_search_results_json’, ‘args’: {‘query’: ‘weather in San Francisco’}, ‘id’: ‘call_i7rGhnzgZf5hW3bsDcqdTrH0’}])]}

Notice the output stops after the AIMessage containing the tool_calls. The graph has paused because of our interrupt_before configuration. We can inspect the current state to confirm this.

abot.graph.get_state(thread)

StateSnapshot(values={‘messages’: [HumanMessage(content=’Whats the weather in SF?’, id=’0d8ac19e-6fc1–406f-924d-9ddcc87b8e29'), AIMessage(content=’’, additional_kwargs={‘tool_calls’: [{‘function’: {‘arguments’: ‘{“query”:”weather in San Francisco”}’, ‘name’: ‘tavily_search_results_json’}, ‘id’: ‘call_i7rGhnzgZf5hW3bsDcqdTrH0’, ‘type’: ‘function’}]}, response_metadata={‘finish_reason’: ‘tool_calls’, ‘logprobs’: None, ‘model_name’: ‘gpt-3.5-turbo’, ‘system_fingerprint’: None, ‘token_usage’: {‘completion_tokens’: 22, ‘completion_tokens_details’: {‘accepted_prediction_tokens’: 0, ‘audio_tokens’: 0, ‘reasoning_tokens’: 0, ‘rejected_prediction_tokens’: 0}, ‘prompt_tokens’: 152, ‘prompt_tokens_details’: {‘audio_tokens’: 0, ‘cached_tokens’: 0}, ‘total_tokens’: 174}}, id=’run-ef442ef3–7e27–4d37-b0f4-ba29d1161590–0', tool_calls=[{‘name’: ‘tavily_search_results_json’, ‘args’: {‘query’: ‘weather in San Francisco’}, ‘id’: ‘call_i7rGhnzgZf5hW3bsDcqdTrH0’}])]}, next=(‘action’,), config={‘configurable’: {‘thread_id’: ‘1’, ‘thread_ts’: ‘1f07adc9–50ca-6c39–8001–98c519bd55d6’}}, metadata={‘source’: ‘loop’, ‘step’: 1, ‘writes’: {‘llm’: {‘messages’: [AIMessage(content=’’, additional_kwargs={‘tool_calls’: [{‘function’: {‘arguments’: ‘{“query”:”weather in San Francisco”}’, ‘name’: ‘tavily_search_results_json’}, ‘id’: ‘call_i7rGhnzgZf5hW3bsDcqdTrH0’, ‘type’: ‘function’}]}, response_metadata={‘finish_reason’: ‘tool_calls’, ‘logprobs’: None, ‘model_name’: ‘gpt-3.5-turbo’, ‘system_fingerprint’: None, ‘token_usage’: {‘completion_tokens’: 22, ‘completion_tokens_details’: {‘accepted_prediction_tokens’: 0, ‘audio_tokens’: 0, ‘reasoning_tokens’: 0, ‘rejected_prediction_tokens’: 0}, ‘prompt_tokens’: 152, ‘prompt_tokens_details’: {‘audio_tokens’: 0, ‘cached_tokens’: 0}, ‘total_tokens’: 174}}, id=’run-ef442ef3–7e27–4d37-b0f4-ba29d1161590–0', tool_calls=[{‘name’: ‘tavily_search_results_json’, ‘args’: {‘query’: ‘weather in San Francisco’}, ‘id’: ‘call_i7rGhnzgZf5hW3bsDcqdTrH0’}])]}}}, 
created_at=’2025–08–16T20:07:06.051464+00:00', parent_config={‘configurable’: {‘thread_id’: ‘1’, ‘thread_ts’: ‘1f07adc9–505b-6a71–8000–30a74372b034’}})

The output of get_state is a StateSnapshot object. Look for the next attribute.

abot.graph.get_state(thread).next

('action',)

The output ('action',) confirms that the graph is paused and waiting to execute the action node.

To continue, we simply stream again, passing None as the input but using the same thread configuration. This tells LangGraph to pick up where it left off.

for event in abot.graph.stream(None, thread):
    for v in event.values():
        print(v)

Calling: {'name': 'tavily_search_results_json', 'args': {'query': 'weather in San Francisco'}, 'id': 'call_i7rGhnzgZf5hW3bsDcqdTrH0'}
Back to the model!
{‘messages’: [ToolMessage(content=”[{‘url’: ‘https://en.climate-data.org/north-america/united-states-of-america/california/san-francisco-385/t/august-8/', ‘content’: ‘| 18. August | 17 °C | 62 °F | 22 °C | 72 °F | 13 °C | 56 °F | 14 °C | 57 °F | 0.0 mm | 0.0 inch. |\n| 19. August | 17 °C | 62 °F | 22 °C | 72 °F | 13 °C | 56 °F | 14 °C | 57 °F | 0.1 mm | 0.0 inch. |\n| 20. August | 17 °C | 62 °F | 21 °C | 71 °F | 13 °C | 56 °F | 14 °C | 57 °F | 1.0 mm | 0.0 inch. |\n| 21. August | 17 °C | 62 °F | 22 °C | 72 °F | 13 °C | 56 °F | 14 °C | 57 °F | 0.1 mm | 0.0 inch. |\n| 22. August | 17 °C | 63 °F | 23 °C | 73 °F | 14 °C | 57 °F | 14 °C | 57 °F | 0.0 mm | 0.0 inch. | […] | Max. Temperature °C (°F) | 14 °C (57.3) °F | 14.9 °C (58.7) °F | 16.2 °C (61.2) °F | 17.4 °C (63.3) °F | 19.2 °C (66.5) °F | 21.5 °C (70.8) °F | 21.8 °C (71.2) °F | 22.2 °C (71.9) °F | 23.1 °C (73.6) °F | 21.3 °C (70.3) °F | 17.1 °C (62.8) °F | 13.9 °C (57.1) °F |\n| Precipitation / Rainfall mm (in) | 113 (4) | 118 (4) | 83 (3) | 40 (1) | 21 (0) | 6 (0) | 2 (0) | 2 (0) | 3 (0) | 25 (0) | 57 (2) | 111 (4) | […] | Min. Temperature °C (°F) | 6.2 °C (43.2) °F | 7.1 °C (44.8) °F | 8.2 °C (46.8) °F | 8.9 °C (48.1) °F | 10.3 °C (50.6) °F | 11.8 °C (53.3) °F | 12.7 °C (54.9) °F | 13.3 °C (55.9) °F | 13.1 °C (55.6) °F | 11.9 °C (53.4) °F | 9 °C (48.2) °F | 6.8 °C (44.2) °F |’}, {‘url’: ‘https://www.weather2travel.com/california/san-francisco/august/', ‘content’: ‘weather2travel.com — travel deals for your holiday in the sun\nClick to search\n\n# San Francisco weather in August 2025\n\nExpect daytime maximum temperatures of 20°C in San Francisco, California in August based on long-term weather averages. There are 10 hours of sunshine per day on average.’}]”, name=’tavily_search_results_json’, id=’bd1cad08–729b-46af-a8db-6fbb66d2b7c1', tool_call_id=’call_i7rGhnzgZf5hW3bsDcqdTrH0')]}
{‘messages’: [HumanMessage(content=’Whats the weather in SF?’, id=’d3cafb23–9573–443e-b530–512bb6e6b4f1'), AIMessage(content=’’, additional_kwargs={‘tool_calls’: [{‘function’: {‘arguments’: ‘{“query”:”weather in San Francisco”}’, ‘name’: ‘tavily_search_results_json’}, ‘id’: ‘call_i7rGhnzgZf5hW3bsDcqdTrH0’, ‘type’: ‘function’}]}, response_metadata={‘finish_reason’: ‘tool_calls’, ‘logprobs’: None, ‘model_name’: ‘gpt-3.5-turbo’, ‘system_fingerprint’: None, ‘token_usage’: {‘completion_tokens’: 22, ‘completion_tokens_details’: {‘accepted_prediction_tokens’: 0, ‘audio_tokens’: 0, ‘reasoning_tokens’: 0, ‘rejected_prediction_tokens’: 0}, ‘prompt_tokens’: 152, ‘prompt_tokens_details’: {‘audio_tokens’: 0, ‘cached_tokens’: 0}, ‘total_tokens’: 174}}, id=’run-01d73196-ce30–4068-b5a6–484bafd0aeee-0', tool_calls=[{‘name’: ‘tavily_search_results_json’, ‘args’: {‘query’: ‘weather in San Francisco’}, ‘id’: ‘call_i7rGhnzgZf5hW3bsDcqdTrH0’}]), ToolMessage(content=”[{‘url’: ‘https://en.climate-data.org/north-america/united-states-of-america/california/san-francisco-385/t/august-8/', ‘content’: ‘| 18. August | 17 °C | 62 °F | 22 °C | 72 °F | 13 °C | 56 °F | 14 °C | 57 °F | 0.0 mm | 0.0 inch. |\n| 19. August | 17 °C | 62 °F | 22 °C | 72 °F | 13 °C | 56 °F | 14 °C | 57 °F | 0.1 mm | 0.0 inch. |\n| 20. August | 17 °C | 62 °F | 21 °C | 71 °F | 13 °C | 56 °F | 14 °C | 57 °F | 1.0 mm | 0.0 inch. |\n| 21. August | 17 °C | 62 °F | 22 °C | 72 °F | 13 °C | 56 °F | 14 °C | 57 °F | 0.1 mm | 0.0 inch. |\n| 22. August | 17 °C | 63 °F | 23 °C | 73 °F | 14 °C | 57 °F | 14 °C | 57 °F | 0.0 mm | 0.0 inch. | […] | Max. 
Temperature °C (°F) | 14 °C (57.3) °F | 14.9 °C (58.7) °F | 16.2 °C (61.2) °F | 17.4 °C (63.3) °F | 19.2 °C (66.5) °F | 21.5 °C (70.8) °F | 21.8 °C (71.2) °F | 22.2 °C (71.9) °F | 23.1 °C (73.6) °F | 21.3 °C (70.3) °F | 17.1 °C (62.8) °F | 13.9 °C (57.1) °F |\n| Precipitation / Rainfall mm (in) | 113 (4) | 118 (4) | 83 (3) | 40 (1) | 21 (0) | 6 (0) | 2 (0) | 2 (0) | 3 (0) | 25 (0) | 57 (2) | 111 (4) | […] | Min. Temperature °C (°F) | 6.2 °C (43.2) °F | 7.1 °C (44.8) °F | 8.2 °C (46.8) °F | 8.9 °C (48.1) °F | 10.3 °C (50.6) °F | 11.8 °C (53.3) °F | 12.7 °C (54.9) °F | 13.3 °C (55.9) °F | 13.1 °C (55.6) °F | 11.9 °C (53.4) °F | 9 °C (48.2) °F | 6.8 °C (44.2) °F |’}, {‘url’: ‘https://www.weather2travel.com/california/san-francisco/august/', ‘content’: ‘weather2travel.com — travel deals for your holiday in the sun\nClick to search\n\n# San Francisco weather in August 2025\n\nExpect daytime maximum temperatures of 20°C in San Francisco, California in August based on long-term weather averages. There are 10 hours of sunshine per day on average.’}]”, name=’tavily_search_results_json’, id=’bd1cad08–729b-46af-a8db-6fbb66d2b7c1', tool_call_id=’call_i7rGhnzgZf5hW3bsDcqdTrH0'), AIMessage(content=’The weather in San Francisco, California in August typically has daytime maximum temperatures around 20°C (68°F) based on long-term weather averages. There are about 10 hours of sunshine per day on average.’, response_metadata={‘token_usage’: {‘completion_tokens’: 43, ‘prompt_tokens’: 1097, ‘total_tokens’: 1140, ‘prompt_tokens_details’: {‘cached_tokens’: 0, ‘audio_tokens’: 0}, ‘completion_tokens_details’: {‘reasoning_tokens’: 0, ‘audio_tokens’: 0, ‘accepted_prediction_tokens’: 0, ‘rejected_prediction_tokens’: 0}}, ‘model_name’: ‘gpt-3.5-turbo’, ‘system_fingerprint’: None, ‘finish_reason’: ‘stop’, ‘logprobs’: None}, id=’run-db559cf6–03b6–4849-ac39-c6db47fece82–0')]}
{‘messages’: [AIMessage(content=’The weather in San Francisco, California in August typically has daytime maximum temperatures around 20°C (68°F) based on long-term weather averages. There are about 10 hours of sunshine per day on average.’, response_metadata={‘token_usage’: {‘completion_tokens’: 43, ‘prompt_tokens’: 1097, ‘total_tokens’: 1140, ‘prompt_tokens_details’: {‘cached_tokens’: 0, ‘audio_tokens’: 0}, ‘completion_tokens_details’: {‘reasoning_tokens’: 0, ‘audio_tokens’: 0, ‘accepted_prediction_tokens’: 0, ‘rejected_prediction_tokens’: 0}}, ‘model_name’: ‘gpt-3.5-turbo’, ‘system_fingerprint’: None, ‘finish_reason’: ‘stop’, ‘logprobs’: None}, id=’run-db559cf6–03b6–4849-ac39-c6db47fece82–0')]}

This time, the execution completes. The agent calls the tool and generates the final response. If we check the next node now, it will be empty, indicating the graph has finished.

abot.graph.get_state(thread).next

()

We can even wrap this in a simple loop to create an interactive command-line approval process.

messages = [HumanMessage("Whats the weather in LA?")]
thread = {"configurable": {"thread_id": "2"}}
for event in abot.graph.stream({"messages": messages}, thread):
    for v in event.values():
        print(v)
while abot.graph.get_state(thread).next:
    print("\n", abot.graph.get_state(thread),"\n")
    _input = input("proceed?")
    if _input != "y":
        print("aborting")
        break
    for event in abot.graph.stream(None, thread):
        for v in event.values():
            print(v)

{‘messages’: [HumanMessage(content=’Whats the weather in LA?’, id=’64073328–247e-4d53–8f67–908b59f20a62'), AIMessage(content=’’, additional_kwargs={‘tool_calls’: [{‘id’: ‘call_6ED1ZQ8nrjYIOY14yqInLPZc’, ‘function’: {‘arguments’: ‘{“query”:”weather in Los Angeles”}’, ‘name’: ‘tavily_search_results_json’}, ‘type’: ‘function’}]}, response_metadata={‘token_usage’: {‘completion_tokens’: 22, ‘prompt_tokens’: 152, ‘total_tokens’: 174, ‘prompt_tokens_details’: {‘cached_tokens’: 0, ‘audio_tokens’: 0}, ‘completion_tokens_details’: {‘reasoning_tokens’: 0, ‘audio_tokens’: 0, ‘accepted_prediction_tokens’: 0, ‘rejected_prediction_tokens’: 0}}, ‘model_name’: ‘gpt-3.5-turbo’, ‘system_fingerprint’: None, ‘finish_reason’: ‘tool_calls’, ‘logprobs’: None}, id=’run-95861c64-f69d-444f-8b1e-bc2da097f7ec-0', tool_calls=[{‘name’: ‘tavily_search_results_json’, ‘args’: {‘query’: ‘weather in Los Angeles’}, ‘id’: ‘call_6ED1ZQ8nrjYIOY14yqInLPZc’}])]}
{‘messages’: [AIMessage(content=’’, additional_kwargs={‘tool_calls’: [{‘id’: ‘call_6ED1ZQ8nrjYIOY14yqInLPZc’, ‘function’: {‘arguments’: ‘{“query”:”weather in Los Angeles”}’, ‘name’: ‘tavily_search_results_json’}, ‘type’: ‘function’}]}, response_metadata={‘token_usage’: {‘completion_tokens’: 22, ‘prompt_tokens’: 152, ‘total_tokens’: 174, ‘prompt_tokens_details’: {‘cached_tokens’: 0, ‘audio_tokens’: 0}, ‘completion_tokens_details’: {‘reasoning_tokens’: 0, ‘audio_tokens’: 0, ‘accepted_prediction_tokens’: 0, ‘rejected_prediction_tokens’: 0}}, ‘model_name’: ‘gpt-3.5-turbo’, ‘system_fingerprint’: None, ‘finish_reason’: ‘tool_calls’, ‘logprobs’: None}, id=’run-95861c64-f69d-444f-8b1e-bc2da097f7ec-0', tool_calls=[{‘name’: ‘tavily_search_results_json’, ‘args’: {‘query’: ‘weather in Los Angeles’}, ‘id’: ‘call_6ED1ZQ8nrjYIOY14yqInLPZc’}])]}

StateSnapshot(values={‘messages’: [HumanMessage(content=’Whats the weather in LA?’, id=’64073328–247e-4d53–8f67–908b59f20a62'), AIMessage(content=’’, additional_kwargs={‘tool_calls’: [{‘function’: {‘arguments’: ‘{“query”:”weather in Los Angeles”}’, ‘name’: ‘tavily_search_results_json’}, ‘id’: ‘call_6ED1ZQ8nrjYIOY14yqInLPZc’, ‘type’: ‘function’}]}, response_metadata={‘finish_reason’: ‘tool_calls’, ‘logprobs’: None, ‘model_name’: ‘gpt-3.5-turbo’, ‘system_fingerprint’: None, ‘token_usage’: {‘completion_tokens’: 22, ‘completion_tokens_details’: {‘accepted_prediction_tokens’: 0, ‘audio_tokens’: 0, ‘reasoning_tokens’: 0, ‘rejected_prediction_tokens’: 0}, ‘prompt_tokens’: 152, ‘prompt_tokens_details’: {‘audio_tokens’: 0, ‘cached_tokens’: 0}, ‘total_tokens’: 174}}, id=’run-95861c64-f69d-444f-8b1e-bc2da097f7ec-0', tool_calls=[{‘name’: ‘tavily_search_results_json’, ‘args’: {‘query’: ‘weather in Los Angeles’}, ‘id’: ‘call_6ED1ZQ8nrjYIOY14yqInLPZc’}])]}, next=(‘action’,), config={‘configurable’: {‘thread_id’: ‘2’, ‘thread_ts’: ‘1f07be88-b98c-6ebc-8001-dcc406d8f0f0’}}, metadata={‘source’: ‘loop’, ‘step’: 1, ‘writes’: {‘llm’: {‘messages’: [AIMessage(content=’’, additional_kwargs={‘tool_calls’: [{‘function’: {‘arguments’: ‘{“query”:”weather in Los Angeles”}’, ‘name’: ‘tavily_search_results_json’}, ‘id’: ‘call_6ED1ZQ8nrjYIOY14yqInLPZc’, ‘type’: ‘function’}]}, response_metadata={‘finish_reason’: ‘tool_calls’, ‘logprobs’: None, ‘model_name’: ‘gpt-3.5-turbo’, ‘system_fingerprint’: None, ‘token_usage’: {‘completion_tokens’: 22, ‘completion_tokens_details’: {‘accepted_prediction_tokens’: 0, ‘audio_tokens’: 0, ‘reasoning_tokens’: 0, ‘rejected_prediction_tokens’: 0}, ‘prompt_tokens’: 152, ‘prompt_tokens_details’: {‘audio_tokens’: 0, ‘cached_tokens’: 0}, ‘total_tokens’: 174}}, id=’run-95861c64-f69d-444f-8b1e-bc2da097f7ec-0', tool_calls=[{‘name’: ‘tavily_search_results_json’, ‘args’: {‘query’: ‘weather in Los Angeles’}, ‘id’: ‘call_6ED1ZQ8nrjYIOY14yqInLPZc’}])]}}}, 
created_at=’2025–08–18T04:05:15.316172+00:00', parent_config={‘configurable’: {‘thread_id’: ‘2’, ‘thread_ts’: ‘1f07be88-b941–6e49–8000-ad365b4fd777’}})

proceed?y
Calling: {'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_6ED1ZQ8nrjYIOY14yqInLPZc'}
Back to the model!
{‘messages’: [ToolMessage(content=”[{‘url’: ‘https://weathershogun.com/weather/usa/ca/los-angeles/451/august/2025-08-18', ‘content’: ‘Monday, August 18, 2025. Los Angeles, CA — Weather Forecast \n\n===============\n\n☰\n\nLos Angeles, CA\n\nImage 1: WeatherShogun.com\n\nHomeContactBrowse StatesPrivacy PolicyTerms and Conditions\n\n°F)°C)\n\n❮\n\nTodayTomorrowHourly7 days30 daysAugust\n\n❯\n\nLos Angeles, California Weather: \n\nMonday, August 18, 2025\n\nDay 84°\n\nNight 64°\n\nPrecipitation 0 %\n\nWind 8 mph\n\nUV Index (0–11+)11\n\nTuesday\n\n Hourly\n Today\n Current Air Quality\n Hourly Air Quality Forecast\n 7 days\n 30 days’}, {‘url’: ‘https://en.climate-data.org/north-america/united-states-of-america/california/los-angeles-714829/t/august-8/', ‘content’: ‘| Max. Temperature °C (°F) | 19.5 °C (67.2) °F | 19.4 °C (66.9) °F | 21.4 °C (70.5) °F | 23.3 °C (73.9) °F | 25.2 °C (77.4) °F | 28.1 °C (82.6) °F | 31.3 °C (88.3) °F | 31.9 °C (89.5) °F | 31 °C (87.7) °F | 27.2 °C (81) °F | 23.1 °C (73.5) °F | 18.8 °C (65.9) °F |\n| Precipitation / Rainfall mm (in) | 84 (3) | 89 (3) | 54 (2) | 19 (0) | 11 (0) | 3 (0) | 2 (0) | 0 (0) | 4 (0) | 17 (0) | 21 (0) | 53 (2) | […] | Humidity(%) | 52% | 57% | 59% | 55% | 56% | 55% | 52% | 49% | 49% | 49% | 46% | 53% |\n| Rainy days (d) | 4 | 5 | 4 | 2 | 1 | 0 | 0 | 0 | 1 | 2 | 2 | 4 |\n| avg. Sun hours (hours) | 7.6 | 7.6 | 8.3 | 9.0 | 9.1 | 10.2 | 11.3 | 10.8 | 9.5 | 8.3 | 7.9 | 7.4 | […] ## \n\n## \n\n# Los Angeles Weather in August\n\nAre you planning a holiday with hopefully nice weather in Los Angeles in August 2025? Here you can find all information about the weather in Los Angeles in August:\n\n## Los Angeles weather in August\n\n| | | | | | |\n| — — | — — | — — | — — | — — | — — |\n| | Temperature August | 24.5°C | 76.1°F | | Precipitation / Rainfall August | 0mm | 0 inches |\n| | Temperature August max. | 31.9°C | 89.5°F |\n| | Temperature August min. 
| 18.2°C | 64.8°F |’}]”, name=’tavily_search_results_json’, id=’a1f1c6b0–198a-42b8–9323–065007924151', tool_call_id=’call_6ED1ZQ8nrjYIOY14yqInLPZc’)]}
{‘messages’: [HumanMessage(content=’Whats the weather in LA?’, id=’64073328–247e-4d53–8f67–908b59f20a62'), AIMessage(content=’’, additional_kwargs={‘tool_calls’: [{‘function’: {‘arguments’: ‘{“query”:”weather in Los Angeles”}’, ‘name’: ‘tavily_search_results_json’}, ‘id’: ‘call_6ED1ZQ8nrjYIOY14yqInLPZc’, ‘type’: ‘function’}]}, response_metadata={‘finish_reason’: ‘tool_calls’, ‘logprobs’: None, ‘model_name’: ‘gpt-3.5-turbo’, ‘system_fingerprint’: None, ‘token_usage’: {‘completion_tokens’: 22, ‘completion_tokens_details’: {‘accepted_prediction_tokens’: 0, ‘audio_tokens’: 0, ‘reasoning_tokens’: 0, ‘rejected_prediction_tokens’: 0}, ‘prompt_tokens’: 152, ‘prompt_tokens_details’: {‘audio_tokens’: 0, ‘cached_tokens’: 0}, ‘total_tokens’: 174}}, id=’run-95861c64-f69d-444f-8b1e-bc2da097f7ec-0', tool_calls=[{‘name’: ‘tavily_search_results_json’, ‘args’: {‘query’: ‘weather in Los Angeles’}, ‘id’: ‘call_6ED1ZQ8nrjYIOY14yqInLPZc’}]), ToolMessage(content=”[{‘url’: ‘https://weathershogun.com/weather/usa/ca/los-angeles/451/august/2025-08-18', ‘content’: ‘Monday, August 18, 2025. Los Angeles, CA — Weather Forecast \n\n===============\n\n☰\n\nLos Angeles, CA\n\nImage 1: WeatherShogun.com\n\nHomeContactBrowse StatesPrivacy PolicyTerms and Conditions\n\n°F)°C)\n\n❮\n\nTodayTomorrowHourly7 days30 daysAugust\n\n❯\n\nLos Angeles, California Weather: \n\nMonday, August 18, 2025\n\nDay 84°\n\nNight 64°\n\nPrecipitation 0 %\n\nWind 8 mph\n\nUV Index (0–11+)11\n\nTuesday\n\n Hourly\n Today\n Current Air Quality\n Hourly Air Quality Forecast\n 7 days\n 30 days’}, {‘url’: ‘https://en.climate-data.org/north-america/united-states-of-america/california/los-angeles-714829/t/august-8/', ‘content’: ‘| Max. 
Temperature °C (°F) | 19.5 °C (67.2) °F | 19.4 °C (66.9) °F | 21.4 °C (70.5) °F | 23.3 °C (73.9) °F | 25.2 °C (77.4) °F | 28.1 °C (82.6) °F | 31.3 °C (88.3) °F | 31.9 °C (89.5) °F | 31 °C (87.7) °F | 27.2 °C (81) °F | 23.1 °C (73.5) °F | 18.8 °C (65.9) °F |\n| Precipitation / Rainfall mm (in) | 84 (3) | 89 (3) | 54 (2) | 19 (0) | 11 (0) | 3 (0) | 2 (0) | 0 (0) | 4 (0) | 17 (0) | 21 (0) | 53 (2) | […] | Humidity(%) | 52% | 57% | 59% | 55% | 56% | 55% | 52% | 49% | 49% | 49% | 46% | 53% |\n| Rainy days (d) | 4 | 5 | 4 | 2 | 1 | 0 | 0 | 0 | 1 | 2 | 2 | 4 |\n| avg. Sun hours (hours) | 7.6 | 7.6 | 8.3 | 9.0 | 9.1 | 10.2 | 11.3 | 10.8 | 9.5 | 8.3 | 7.9 | 7.4 | […] ## \n\n## \n\n# Los Angeles Weather in August\n\nAre you planning a holiday with hopefully nice weather in Los Angeles in August 2025? Here you can find all information about the weather in Los Angeles in August:\n\n## Los Angeles weather in August\n\n| | | | | | |\n| — — | — — | — — | — — | — — | — — |\n| | Temperature August | 24.5°C | 76.1°F | | Precipitation / Rainfall August | 0mm | 0 inches |\n| | Temperature August max. | 31.9°C | 89.5°F |\n| | Temperature August min. | 18.2°C | 64.8°F |’}]”, name=’tavily_search_results_json’, id=’a1f1c6b0–198a-42b8–9323–065007924151', tool_call_id=’call_6ED1ZQ8nrjYIOY14yqInLPZc’), AIMessage(content=’The weather in Los Angeles today is 84°F during the day and 64°F at night. There is no precipitation expected with a wind speed of 8 mph and a UV Index of 11.’, response_metadata={‘token_usage’: {‘completion_tokens’: 42, ‘prompt_tokens’: 1065, ‘total_tokens’: 1107, ‘prompt_tokens_details’: {‘cached_tokens’: 0, ‘audio_tokens’: 0}, ‘completion_tokens_details’: {‘reasoning_tokens’: 0, ‘audio_tokens’: 0, ‘accepted_prediction_tokens’: 0, ‘rejected_prediction_tokens’: 0}}, ‘model_name’: ‘gpt-3.5-turbo’, ‘system_fingerprint’: None, ‘finish_reason’: ‘stop’, ‘logprobs’: None}, id=’run-ed2612c2–8e62–4664-b4c1-d6572317d3bc-0')]}
{‘messages’: [AIMessage(content=’The weather in Los Angeles today is 84°F during the day and 64°F at night. There is no precipitation expected with a wind speed of 8 mph and a UV Index of 11.’, response_metadata={‘token_usage’: {‘completion_tokens’: 42, ‘prompt_tokens’: 1065, ‘total_tokens’: 1107, ‘prompt_tokens_details’: {‘cached_tokens’: 0, ‘audio_tokens’: 0}, ‘completion_tokens_details’: {‘reasoning_tokens’: 0, ‘audio_tokens’: 0, ‘accepted_prediction_tokens’: 0, ‘rejected_prediction_tokens’: 0}}, ‘model_name’: ‘gpt-3.5-turbo’, ‘system_fingerprint’: None, ‘finish_reason’: ‘stop’, ‘logprobs’: None}, id=’run-ed2612c2–8e62–4664-b4c1-d6572317d3bc-0')]}


3. Understanding State Memory and Time Travel

Before we start modifying state, let’s understand how LangGraph’s memory works. As a graph executes, the checkpointer saves a snapshot of the state at every step. Each snapshot contains:

  1. values: The actual AgentState (e.g., our list of messages).

  2. config: A configuration dictionary containing the thread_id and a unique timestamp, thread_ts. This thread_ts is the key to time travel, as it uniquely identifies a specific state snapshot.

You can interact with this memory using several methods:

  • get_state(config): Retrieves a specific snapshot. If you only provide the thread_id, it returns the latest state.

  • get_state_history(config): Returns an iterator over all snapshots for a given thread, from newest to oldest.

  • update_state(config, values): Creates a new state snapshot by modifying an existing one.
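
The checkpointing mechanics are easier to internalize with a toy model. The sketch below is not the LangGraph API, just a simplified imitation built for this explanation: every save appends a snapshot tagged with an increasing thread_ts-style id, get_state returns the latest snapshot unless a specific timestamp is requested, and “time travel” is nothing more than starting a new snapshot from an older checkpoint’s values:

```python
from itertools import count

# Toy imitation of LangGraph checkpointing (illustrative only, not the real API).
_ts = count(1)
history = []  # snapshots, oldest to newest

def save(values):
    """Append a new snapshot with a unique, increasing thread_ts."""
    snapshot = {"thread_ts": next(_ts), "values": list(values)}
    history.append(snapshot)
    return snapshot

def get_state(thread_ts=None):
    """Latest snapshot by default, or the one matching thread_ts."""
    if thread_ts is None:
        return history[-1]
    return next(s for s in history if s["thread_ts"] == thread_ts)

def get_state_history():
    """All snapshots for the thread, newest to oldest."""
    return list(reversed(history))

save(["Human: Whats the weather in SF?"])
fork = save(["Human: Whats the weather in SF?", "AI: tool call (SF weather)"])
save(["Human: Whats the weather in SF?", "AI: tool call (SF weather)", "Tool: results"])

# "Time travel": branch off the second checkpoint instead of the latest one.
save(fork["values"] + ["AI: revised tool call"])

print(len(get_state_history()))   # 4 snapshots recorded
print(get_state()["values"][-1])  # the branched step is now the latest state
```

In LangGraph itself, these correspond to graph.get_state(thread) and graph.get_state_history(thread), with a thread_ts in the config selecting a specific checkpoint to resume or branch from.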

Keep reading with a 7-day free trial
