I was following an old tutorial about chaining in LangChain and writing some demo chains of my own, such as:
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableMap
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI
from langchain.chains import SequentialChain
from langchain.chains.llm import LLMChain
api_key = "sk-YOUR_OPENAI_API_KEY"
llm = ChatOpenAI(
    model="gpt-4o",
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
    seed=42,
    api_key=api_key,
)
output_parser = StrOutputParser()
prompt_candidates = ChatPromptTemplate.from_template(
    """A trivia game has asked to which country the town of '{town}' belongs, and the options are:
{country_options}
Only return the correct option chosen based on your knowledge, nothing more"""
)
prompt_finalists = ChatPromptTemplate.from_template(
    """Your task is to build OUTPUTWORD, follow these instructions:
1. Get CAPITAL CITY: It is the capital city of {country}
2. Get INITIAL LETTER: It is the initial letter of the CAPITAL CITY
3. Get OUTPUTWORD: Make a word starting with INITIAL LETTER and related to {subject}
Return the result in a JSON object with key `output` and OUTPUTWORD as its value"""
)
# -------------------- CURRENT FUNCTIONAL SOLUTION --------------------
# Chains definition
candidates_chain = LLMChain(llm=llm, prompt=prompt_candidates, output_key="country")
finalists_chain = LLMChain(
    llm=llm.bind(response_format={"type": "json_object"}),
    prompt=prompt_finalists,
    output_key="finalists",
)
# Chaining
final_chain = SequentialChain(
    chains=[candidates_chain, finalists_chain],
    input_variables=["town", "country_options", "subject"],
    output_variables=["finalists"],
    verbose=False,
)
result = final_chain.invoke(
    {
        "town": "Puembo",
        "country_options": ["Ukraine", "Ecuador", "Uzbekistan"],
        "subject": "Biology",
    }
)["finalists"]
print(result)
However, I got the following warning:
C:\Users\david\Desktop\dummy\test.py:44: LangChainDeprecationWarning: The class `LLMChain` was deprecated in LangChain 0.1.17 and will be removed in 1.0. Use :meth:`~RunnableSequence, e.g., `prompt | llm`` instead.
candidates_chain = LLMChain(llm=llm, prompt=prompt_candidates, output_key="country")
Indeed, I was reading the docs, which ask you to use the pipe operator ("|"). However, the examples provided there are very simple: they usually involve just a prompt and an LLM, which is straightforward (and is even the pattern shown in the warning message itself). I could not figure out how to adapt the pipe operator to my own chain.
I was thinking of something like:
from langchain_core.output_parsers import StrOutputParser
chain_a = prompt_candidates | llm | StrOutputParser()
chain_b = prompt_finalists | llm | StrOutputParser()
composed_chain = chain_a | chain_b
output_chain = composed_chain.invoke(
    {
        "town": "Puembo",
        "country_options": ["Ukraine", "Ecuador", "Uzbekistan"],
        "subject": "Biology",
    }
)
But this gets me:
TypeError: Expected mapping type as input to ChatPromptTemplate. Received <class 'str'>.
I have tried several things, but nothing worked. What am I doing wrong?
It's been a long time. After taking this short course plus reading the official documentation, I arrived at functional, updated code:
# All original code above "CURRENT FUNCTIONAL SOLUTION"...
# -------------------- UPDATED FUNCTIONAL SOLUTION --------------------
candidates_chain = prompt_candidates | llm | output_parser
finalists_chain = RunnableMap({
    "country": candidates_chain,
    "subject": lambda x: x["subject"],
}) | prompt_finalists | llm.bind(response_format={"type": "json_object"}) | output_parser
result = finalists_chain.invoke(
    {
        "town": "Puembo",
        "country_options": ["Ukraine", "Ecuador", "Uzbekistan"],
        "subject": "Biology",
    }
)
print(result)
The output is:
{
"output": "Quorum"
}
With no warnings at all.
Hope it helps.