I have the following code block that uses LangChain's create_react_agent to create an agent:
movie_chat = prompt | llm | StrOutputParser()

tools = [
    Tool.from_function(
        name="Movie Chat",
        description="For when you need to chat about movies. The question will be a string. Return a string.",
        func=movie_chat.invoke,
    )
]
agent_prompt = hub.pull("hwchase17/react-chat")
agent = create_react_agent(llm, tools, agent_prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
In that code, the llm is the input for create_react_agent(), but it is also a component in the chain prompt | llm | StrOutputParser() that creates the movie_chat chain, which is later used to create a tool. To me this is a duplicate usage of the llm, and it raises the question of whether I can use one specific language model for the agent and several different language models when creating the tools. If yes, can you give one simple example, possibly with a diagram, to illustrate how those models interact with each other?
The complete code looks like this:
import os
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_react_agent
from langchain.tools import Tool
from langchain import hub
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain.schema import StrOutputParser
from langchain_neo4j import Neo4jChatMessageHistory, Neo4jGraph
from uuid import uuid4

SESSION_ID = str(uuid4())
print(f"Session ID: {SESSION_ID}")

llm = ChatOpenAI(
    openai_api_key=os.getenv("OPENAI_API_KEY")
)

graph = Neo4jGraph(
    url=os.getenv("NEO4J_URI"),
    username=os.getenv("NEO4J_USERNAME"),
    password=os.getenv("NEO4J_PASSWORD")
)

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a movie expert. You find movies from a genre or plot.",
        ),
        ("human", "{input}"),
    ]
)

movie_chat = prompt | llm | StrOutputParser()

def get_memory(session_id):
    return Neo4jChatMessageHistory(session_id=session_id, graph=graph)

tools = [
    Tool.from_function(
        name="Movie Chat",
        description="For when you need to chat about movies. The question will be a string. Return a string.",
        func=movie_chat.invoke,
    )
]

## [TODO]: Create a LangSmith Personal Access Token API Key
## source: https://graphacademy.neo4j.com/courses/llm-fundamentals/3-intro-to-langchain/4-agents/
agent_prompt = hub.pull("hwchase17/react-chat")

agent = create_react_agent(llm, tools, agent_prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

chat_agent = RunnableWithMessageHistory(
    agent_executor,
    get_memory,
    input_messages_key="input",
    history_messages_key="chat_history",
)

while (q := input("> ")) != "exit":
    response = chat_agent.invoke(
        {"input": q},
        {"configurable": {"session_id": SESSION_ID}},
    )
    print(response["output"])
Yes, you can absolutely use one language model (LLM) for the create_react_agent() logic and different models (or model instances) for the tools, including different configurations of the same base model (e.g., temperature, model version) or even completely different providers (like OpenAI for one tool and Anthropic Claude or a local model for another).
Things to note:
- The agent's LLM (the one passed to create_react_agent()): this is the "brain" that interprets the user's input, reasons, and chooses which tool to call.
- The tool's LLM: this is used only inside the tool's func. It's called by the agent, not shared with it.
- There is no restriction that says a tool must use the same LLM instance as the agent. Each tool is an independent callable, and it can use any LLM or logic internally (see the sketch right after this list).
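For instance, here's a minimal sketch of a tool backed by a completely different provider. This assumes the langchain_anthropic package is installed and ANTHROPIC_API_KEY is set in the environment; the model name and the review-summarizer tool are just illustrative:

from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser
from langchain.tools import Tool

# The agent can keep using an OpenAI model, while this tool runs on Anthropic
claude_llm = ChatAnthropic(model="claude-3-haiku-20240307")  # example model name

review_prompt = ChatPromptTemplate.from_messages([
    ("system", "You summarize movie reviews in one short paragraph."),
    ("human", "{input}"),
])
review_chain = review_prompt | claude_llm | StrOutputParser()

review_tool = Tool.from_function(
    name="Review Summarizer",
    description="Summarizes movie reviews. The input and output are strings.",
    func=review_chain.invoke,
)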
A complete example using different LLMs for the agent and the tool:
from langchain_openai import ChatOpenAI
from langchain.tools import Tool
from langchain.agents import create_react_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser
from langchain import hub

# Agent LLM
agent_llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# Tool-specific LLM (e.g., more creative)
tool_llm = ChatOpenAI(model="gpt-4", temperature=0.9)

# Tool logic using the tool-specific LLM
tool_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a creative movie expert."),
    ("human", "{input}"),
])
movie_chat = tool_prompt | tool_llm | StrOutputParser()

# Define the tool
movie_tool = Tool.from_function(
    name="Movie Chat",
    description="Get creative movie suggestions.",
    func=movie_chat.invoke,
)

# Create the agent with its own (separate) LLM
agent_prompt = hub.pull("hwchase17/react-chat")
agent = create_react_agent(agent_llm, [movie_tool], agent_prompt)
agent_executor = AgentExecutor(agent=agent, tools=[movie_tool])

# Run the agent
response = agent_executor.invoke({"input": "Suggest a sci-fi movie with time travel"})
print(response["output"])
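Applied to your original code, the only change needed is to give the movie_chat chain its own model instance while the agent keeps the original llm. A minimal sketch of that change (tool_llm is a name I'm introducing here; the model and temperature are examples):

# Separate model instance just for the tool chain
tool_llm = ChatOpenAI(
    openai_api_key=os.getenv("OPENAI_API_KEY"),
    model="gpt-4",      # any model/config you like
    temperature=0.9,
)

movie_chat = prompt | tool_llm | StrOutputParser()  # tool now uses tool_llm

# ... tools and agent_prompt unchanged ...
agent = create_react_agent(llm, tools, agent_prompt)  # agent still uses llm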
Here's a very basic diagram:
┌────────────┐
│   Human    │
└─────┬──────┘
      │
      ▼
┌──────────────┐
│ Agent (LLM A)│ <── uses gpt-3.5-turbo
└─────┬────────┘
      │
      │  decides which tool to call
      │
      ▼
┌────────────────────────┐
│   Tool: Movie Chat     │
│  uses Prompt + LLM B   │ <── uses gpt-4
└───────────┬────────────┘
            │
            ▼
      ┌──────────┐
      │  Output  │
      └──────────┘
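If you want to watch this interaction happen, AgentExecutor accepts a verbose=True flag that prints the agent's intermediate reasoning, including which tool it calls and what the tool returns:

# Prints the ReAct trace: thought, chosen tool, tool output, final answer
agent_executor = AgentExecutor(agent=agent, tools=[movie_tool], verbose=True)
response = agent_executor.invoke({"input": "Suggest a sci-fi movie with time travel"})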