When I invoke a graph that includes interrupts in one of its nodes, it seems to get into an invalid/unrecoverable state, with the following error:
return chatGeneration.message;
^
TypeError: Cannot read properties of undefined (reading 'message')
at ChatOpenAI.invoke (file:///Users/user/code/lgdemo//node_modules/@langchain/core/dist/language_models/chat_models.js:64:31)
The first encounter of the interrupt appears to go well, but after the second encounter this error occurs. (I include minimal code to reproduce this in full below the question text.)
The main logic is in approveNode, which contains the interrupt.

- On 'y', it proceeds to the toolsNode which the agentNode requested.
- On 'n', it proceeds to END.

The issue is that once it proceeds to END, the subsequent call to graph.invoke results in this error.
Another thing that I have tried is to change the logic in approveNode such that:

- On 'y', it proceeds to the toolsNode which the agentNode requested. (same as before)
- On 'n', it proceeds back to agentNode. (this has changed, and I change the main graph accordingly to reflect this updated flow)

However, this results in the same error as above, except that it happens after the first interrupt instead of the second.
Questions: Is the workflow that I have defined valid? Is there a better way to structure it?
Main graph:
const workflow = new StateGraph(MessagesAnnotation)
.addNode('agent', agentNode)
.addNode('tools', toolsNode)
.addNode('approve', approveNode, {
ends: ['tools', END],
})
.addEdge(START, 'agent')
.addEdge('tools', 'agent')
.addConditionalEdges('agent', agentRouter, ['approve', END]);
const checkpointer = new MemorySaver();
const graph = workflow.compile({
checkpointer,
});
const graphConfig = {
configurable: { thread_id: '0x0004' },
};
Tools, nodes, and routers:
const cmdFooTool = tool(async function(inputs) {
console.log('===TOOL CMD_FOO===');
return inputs.name;
}, {
name: 'CMD_FOO',
description: 'Invoke when you want to do a Foo.',
schema: z.object({
name: z.string('Any string'),
}),
});
const cmdBarTool = tool(async function(inputs) {
console.log('===TOOL QRY_BAR===');
return inputs.name;
}, {
name: 'QRY_BAR',
description: 'Invoke when you want to query a Bar.',
schema: z.object({
name: z.string('Any string'),
}),
});
const tools = [cmdFooTool, cmdBarTool];
const llmWithTools = llm.bindTools(tools);
const toolsNode = new ToolNode(tools);
async function agentNode(state) {
console.log('===AGENT NODE===');
const response = await llmWithTools.invoke(state.messages);
console.log('=RESPONSE=',
'\ncontent:', response.content,
'\ntool_calls:', response.tool_calls.map((toolCall) => (toolCall.name)));
return { messages: [response] };
}
async function approveNode (state) {
console.log('===APPROVE NODE===');
const lastMsg = state.messages.at(-1);
const toolCall = lastMsg.tool_calls.at(-1);
const interruptMessage = `Please review the following tool invocation:
${toolCall.name} with inputs ${JSON.stringify(toolCall.args, undefined, 2)}
Do you approve (y/N)`;
console.log('=INTERRUPT PRE=');
const interruptResponse = interrupt(interruptMessage);
console.log('=INTERRUPT POST=');
const isApproved = (interruptResponse.trim().charAt(0).toLowerCase() === 'y');
const goto = (isApproved) ? 'tools' : END;
console.log('=RESULT=\n', { isApproved, goto });
return new Command({ goto });
}
function hasToolCalls(message) {
return message?.tool_calls?.length > 0;
}
async function agentRouter (state) {
const lastMsg = state.messages.at(-1);
if (hasToolCalls(lastMsg)) {
return 'approve';
}
return END;
}
Simulate a run:
let state;
let agentResult;
let inputText;
let invokeWith;
// step 1: prompt
inputText = 'Pls perform a Foo with name "ASDF".';
console.log('===HUMAN PROMPT===\n', inputText);
invokeWith = { messages: [new HumanMessage(inputText)] };
agentResult = await graph.invoke(invokeWith, graphConfig);
state = await graph.getState(graphConfig);
console.log('===STATE NEXT===\n', state.next);
console.log('=LAST MSG=\n', agentResult.messages.at(-1).content);
console.log('=LAST TOOL CALLS=\n', agentResult.messages.at(-1).tool_calls);
// step 2: interrupted in the 'approve' node, human in the loop authorises
inputText = 'yes';
console.log('===HUMAN INTERRUPT RESPONSE===\n', inputText);
invokeWith = new Command({ resume: inputText });
agentResult = await graph.invoke(invokeWith, graphConfig);
state = await graph.getState(graphConfig);
console.log('===STATE NEXT===\n', state.next);
console.log('=LAST MSG=\n', agentResult.messages.at(-1).content);
console.log('=LAST TOOL CALLS=\n', agentResult.messages.at(-1).tool_calls);
// step 3: prompt
inputText = 'Pls perform a Foo with name "ZXCV".';
console.log('===HUMAN PROMPT===\n', inputText);
invokeWith = { messages: [new HumanMessage(inputText)] };
agentResult = await graph.invoke(invokeWith, graphConfig);
state = await graph.getState(graphConfig);
console.log('===STATE NEXT===\n', state.next);
console.log('=LAST MSG=\n', agentResult.messages.at(-1).content);
console.log('=LAST TOOL CALLS=\n', agentResult.messages.at(-1).tool_calls);
// step 4: interrupted in the 'approve' node, human in the loop does not authorise
inputText = 'no';
console.log('===HUMAN INTERRUPT RESPONSE===\n', inputText);
invokeWith = new Command({ resume: inputText });
agentResult = await graph.invoke(invokeWith, graphConfig);
state = await graph.getState(graphConfig);
console.log('===STATE NEXT===\n', state.next);
console.log('=LAST MSG=\n', agentResult.messages.at(-1).content);
console.log('=LAST TOOL CALLS=\n', agentResult.messages.at(-1).tool_calls);
// step 5: prompt
inputText = 'Pls perform a Foo with name "GHJK".';
console.log('===HUMAN PROMPT===\n', inputText);
invokeWith = { messages: [new HumanMessage(inputText)] };
agentResult = await graph.invoke(invokeWith, graphConfig);
state = await graph.getState(graphConfig);
console.log('===STATE NEXT===\n', state.next);
console.log('=LAST MSG=\n', agentResult.messages.at(-1).content);
console.log('=LAST TOOL CALLS=\n', agentResult.messages.at(-1).tool_calls);
Full output:
===HUMAN PROMPT===
Pls perform a Foo with name "ASDF".
===AGENT NODE===
(node:58990) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
=RESPONSE=
content:
tool_calls: [ 'CMD_FOO' ]
===APPROVE NODE===
=INTERRUPT PRE=
===STATE NEXT===
[ 'approve' ]
=LAST MSG=
=LAST TOOL CALLS=
[
{
name: 'CMD_FOO',
args: { name: 'ASDF' },
type: 'tool_call',
id: 'call_u7CIyWdTesFATZ5bGG2uaVUZ'
}
]
===HUMAN INTERRUPT RESPONSE===
yes
===APPROVE NODE===
=INTERRUPT PRE=
=INTERRUPT POST=
=RESULT=
{ isApproved: true, goto: 'tools' }
===TOOL CMD_FOO===
===AGENT NODE===
=RESPONSE=
content: The Foo operation has been performed with the name "ASDF".
tool_calls: []
===STATE NEXT===
[]
=LAST MSG=
The Foo operation has been performed with the name "ASDF".
=LAST TOOL CALLS=
[]
===HUMAN PROMPT===
Pls perform a Foo with name "ZXCV".
===AGENT NODE===
=RESPONSE=
content:
tool_calls: [ 'CMD_FOO' ]
===APPROVE NODE===
=INTERRUPT PRE=
===STATE NEXT===
[ 'approve' ]
=LAST MSG=
=LAST TOOL CALLS=
[
{
name: 'CMD_FOO',
args: { name: 'ZXCV' },
type: 'tool_call',
id: 'call_kKF91c8G6enWwlrLFON8TYLJ'
}
]
===HUMAN INTERRUPT RESPONSE===
no
===APPROVE NODE===
=INTERRUPT PRE=
=INTERRUPT POST=
=RESULT=
{ isApproved: false, goto: '__end__' }
===STATE NEXT===
[]
=LAST MSG=
=LAST TOOL CALLS=
[
{
name: 'CMD_FOO',
args: { name: 'ZXCV' },
type: 'tool_call',
id: 'call_kKF91c8G6enWwlrLFON8TYLJ'
}
]
===HUMAN PROMPT===
Pls perform a Foo with name "GHJK".
===AGENT NODE===
file:///Users/user/code/lgdemo/node_modules/@langchain/core/dist/language_models/chat_models.js:64
return chatGeneration.message;
^
TypeError: Cannot read properties of undefined (reading 'message')
at ChatOpenAI.invoke (file:///Users/user/code/lgdemo//node_modules/@langchain/core/dist/language_models/chat_models.js:64:31)
at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
at async RunnableCallable.agentNode [as func] (file:///Users/user/code/lgdemo//test.js:51:20)
at async RunnableCallable.invoke (file:///Users/user/code/lgdemo//node_modules/@langchain/langgraph/dist/utils.js:79:27)
at async RunnableSequence.invoke (file:///Users/user/code/lgdemo//node_modules/@langchain/core/dist/runnables/base.js:1274:33)
at async _runWithRetry (file:///Users/user/code/lgdemo//node_modules/@langchain/langgraph/dist/pregel/retry.js:67:22)
at async PregelRunner._executeTasksWithRetry (file:///Users/user/code/lgdemo//node_modules/@langchain/langgraph/dist/pregel/runner.js:217:33)
at async PregelRunner.tick (file:///Users/user/code/lgdemo//node_modules/@langchain/langgraph/dist/pregel/runner.js:45:40)
at async CompiledStateGraph._runLoop (file:///Users/user/code/lgdemo//node_modules/@langchain/langgraph/dist/pregel/index.js:1296:17)
at async createAndRunLoop (file:///Users/user/code/lgdemo//node_modules/@langchain/langgraph/dist/pregel/index.js:1195:17) {
pregelTaskId: '7bd60c12-4beb-54b7-85a7-9bc1461600f5'
}
Node.js v23.3.0
I'm an engineer on the LangChain team, and what follows is a copy & paste of my response to the same question posed as a GitHub issue on the LangGraphJS repository.
I haven't executed your code, but I think that the issue could be that on refusal you're not inserting a ToolMessage into the messages state. There are some docs on this in the LangGraph human-in-the-loop documentation.
You can handle this on refusal by returning a command with an update: field that contains a tool message. For example:
async function approveNode (state) {
console.log('===APPROVE NODE===');
const lastMsg = state.messages.at(-1);
const toolCall = lastMsg.tool_calls.at(-1);
const interruptMessage = `Please review the following tool invocation:
${toolCall.name} with inputs ${JSON.stringify(toolCall.args, undefined, 2)}
Do you approve (y/N)`;
console.log('=INTERRUPT PRE=');
const interruptResponse = interrupt(interruptMessage);
console.log('=INTERRUPT POST=');
const isApproved = (interruptResponse.trim().charAt(0).toLowerCase() === 'y');
if (isApproved) {
return new Command({ goto: 'tools' });
}
// rejection case
return new Command({
goto: END,
update: {
messages: [
new ToolMessage({
status: "error",
content: `The user declined your request to execute the ${toolCall.name} tool, with arguments ${JSON.stringify(toolCall.args)}`,
tool_call_id: toolCall.id
})
]
}
});
}
Also bear in mind that this implementation is not handling parallel tool calls. To handle parallel tool calls you have a few options:

- Create one ToolMessage per tool call, as shown above.
- interrupt once for the whole batch of calls, then route to tools if any calls are approved (or END if no calls are approved). Use a Send object in the goto field and send a copy of the AIMessage with the tool calls filtered down to just the approved list. Create a ToolMessage in the update field of the command as above - one for each declined call.
- Use Send in your conditional edge to fan out the tool calls to the tools node (by sending a filtered copy of the AIMessage, as mentioned above) and do the interrupt in the tool handler.

Note that if you don't use Send here you would wind up processing all tool calls in the same node, which would cause the approved tool calls to be reprocessed every time the graph is interrupted after that particular tool call is approved.

Here's a hastily-written example of how you could write a wrapper that requires approval for individual tool handlers, for use with the "Option 2" approach mentioned in the last bullet above:
function requiresApproval<ToolHandlerT extends (...args: unknown[]) => unknown>(toolHandler: ToolHandlerT) {
return (...args: unknown[]) => {
const interruptMessage = `Please review the following tool invocation: ${toolHandler.name}(${args.map((arg) => JSON.stringify(arg)).join(", ")})`;
const interruptResponse = interrupt(interruptMessage);
const isApproved = (interruptResponse.trim().charAt(0).toLowerCase() === 'y');
if (isApproved) {
return toolHandler(...args);
}
throw new Error(`The user declined your request to execute the ${toolHandler.name} tool, with arguments ${JSON.stringify(args)}`);
};
}
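To sanity-check the wrapper's control flow outside of a graph, here is a plain-JavaScript version with interrupt stubbed out to auto-approve (an assumption for testing only; in a real graph, interrupt pauses the run and later resumes with the human's reply):

```javascript
// Stub: auto-approve every request. In LangGraph, interrupt() would
// pause the graph and later resume with the human's response.
const interrupt = (message) => 'y';

function requiresApproval(toolHandler) {
  return (...args) => {
    const interruptMessage = `Please review the following tool invocation: ` +
      `${toolHandler.name}(${args.map((a) => JSON.stringify(a)).join(', ')})`;
    const interruptResponse = interrupt(interruptMessage);
    const isApproved = interruptResponse.trim().charAt(0).toLowerCase() === 'y';
    if (isApproved) {
      return toolHandler(...args); // approved: run the real handler
    }
    // declined: surface the refusal as an error the graph can convert
    // into a ToolMessage for the declined call
    throw new Error(`The user declined your request to execute the ${toolHandler.name} tool.`);
  };
}

function doFoo(name) { return `did Foo with ${name}`; }
const guardedFoo = requiresApproval(doFoo);
console.log(guardedFoo('ASDF')); // runs doFoo, since the stub approves
```
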