Tags: python, langchain-agents, langgraph

Langgraph: Conditional 'END' State Fails to Stop Execution


I am using Langgraph to create a state graph, but I'm encountering an issue where the END condition in my conditional edge does not terminate execution as expected. Instead of stopping at the END node, the graph keeps looping through the generate node even when should_continue returns "END".

Here is my reproducible code:

import operator
import logging
import random
from typing import TypedDict, Annotated, Sequence, List
from langgraph.graph import StateGraph, END
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage, SystemMessage

logging.basicConfig(level=logging.INFO)


class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]


class Agent:
    def __init__(self):
        # graph init
        graph = StateGraph(AgentState)
        graph.add_node("init", self.init_node)
        graph.add_node("generate", self.generate_node)
        graph.add_node("eval", self.eval_node)
        graph.add_node("search", self.search_node)

        graph.set_entry_point("init")
        graph.add_conditional_edges(
            "eval",
            self.should_continue,
            {
                "END": END,
                "CONTINUE": "generate",
            },
        )
        graph.add_edge("init", "search")
        graph.add_edge("search", "eval")
        graph.add_edge("eval", "generate")
        graph.add_edge("generate", "search")

        self.graph = graph.compile()

    def init_node(self, state: AgentState):

        content = "some keywords"
        logging.info(content)
        return {"messages": [AIMessage(content=content)]}

    def search_node(self, state: AgentState):

        content = "some results...."
        logging.info(content)
        return {"messages": [AIMessage(content=content)]}

    def eval_node(self, state: AgentState):

        content = "some evaluation...."
        logging.info(content)
        return {"messages": [AIMessage(content=content)]}

    def generate_node(self, state: AgentState):

        content = "some new keywords...."
        logging.info(content)
        return {"messages": [AIMessage(content=content)]}

    def should_continue(self, state: AgentState):

        choice = random.choice(["END", "CONTINUE"])
        logging.info(choice)
        return choice

Then I call it like this:

from reproducible_agent import Agent
from langchain_core.messages import HumanMessage

bot = Agent()
messages = [HumanMessage(content="hello world")]
result = bot.graph.invoke({"messages": messages})

logs:

INFO:root:some keywords
INFO:root:some results....
INFO:root:some evaluation....
INFO:root:END
INFO:root:some new keywords....
INFO:root:some results....
INFO:root:some evaluation....
INFO:root:END
INFO:root:some new keywords....
INFO:root:some results....
INFO:root:some evaluation....
INFO:root:CONTINUE
INFO:root:some new keywords....
INFO:root:some results....
INFO:root:some evaluation....
INFO:root:CONTINUE
INFO:root:some new keywords....
INFO:root:some results....
INFO:root:some evaluation....
INFO:root:CONTINUE
INFO:root:some new keywords....
INFO:root:some results....
INFO:root:some evaluation....
INFO:root:CONTINUE
INFO:root:some new keywords....
...
INFO:root:some results....
INFO:root:some evaluation....
INFO:root:CONTINUE
INFO:root:some new keywords....

Eventually, it breaks with the following error:

GraphRecursionError: Recursion limit of 25 reached without hitting a stop condition. You can increase the limit by setting the recursion_limit config key.

I also tried the following, with no luck:


...
graph.add_conditional_edges(
    "eval",
    self.should_continue,
)

...

    def should_continue(self, state: AgentState):

        choice = random.choice(["END", "CONTINUE"])
        logging.info(choice)

        if choice == "END":
            return END
        else:
            return "generate"


Solution

  • To be exact, the culprit is the unconditional edge you added out of eval. You already route out of eval with add_conditional_edges, but graph.add_edge("eval", "generate") fires on every pass as well, so generate gets scheduled even when should_continue returns "END".

    By removing the unconditional edge from eval to generate, the agent relies solely on the should_continue function to decide whether to continue processing ("CONTINUE") or terminate ("END").

    So here is what I suggest: remove

    # remove this
    graph.add_edge("eval", "generate")
    
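    To see why that edge overrides the router's "END", here is a toy model of the scheduling step. It is a deliberate simplification for illustration, not langgraph's actual internals: after a node finishes, every unconditional edge fires, and the conditional router adds exactly one more target on top of that.

    ```python
    def next_nodes(node, router_choice, edges, conditional):
        """Set of nodes scheduled once `node` finishes (toy semantics)."""
        targets = set(edges.get(node, []))       # unconditional edges always fire
        if node in conditional:
            targets.add(conditional[node][router_choice])  # router adds one target
        return targets

    edges = {"init": ["search"], "search": ["eval"], "generate": ["search"]}
    conditional = {"eval": {"END": "__end__", "CONTINUE": "generate"}}

    # With the stray unconditional edge eval -> generate still in place,
    # "generate" is scheduled alongside "__end__", so the loop never stops:
    buggy = dict(edges, eval=["generate"])
    print(next_nodes("eval", "END", buggy, conditional))

    # With the edge removed, the router alone decides, and "END" really ends:
    print(next_nodes("eval", "END", edges, conditional))
    ```

    In the buggy wiring, reaching END and continuing to generate are not mutually exclusive, which matches the logs above: "END" is chosen, yet "some new keywords...." is printed right after.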

    If I may also suggest: why not use crew-ai? It would help with orchestrating agents that can talk to each other, or you could hand the evaluation responsibility to a separate agent rather than the same one.

    See the crew-ai docs: crew-ai
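    One last note: random.choice makes the reproduction nondeterministic, so even a correctly wired graph can look flaky. If you want to verify the fix, a deterministic should_continue is easier to test. Here is a minimal sketch; the max_messages budget is my own invention, not something from the question:

    ```python
    def should_continue(state, max_messages=10):
        """Return "END" once the message list reaches the budget, else "CONTINUE"."""
        return "END" if len(state["messages"]) >= max_messages else "CONTINUE"

    print(should_continue({"messages": ["msg"] * 3}))   # CONTINUE
    print(should_continue({"messages": ["msg"] * 10}))  # END
    ```

    With this router in place of the random one, a run of graph.invoke(...) should terminate in a predictable number of steps, well under the default recursion limit of 25.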