I'm trying to learn LangChain and stumbled over their Getting Started section, because it doesn't work for me and I'm curious whether I'm the only person for whom the LangChain examples don't work.
This is the tutorial I am talking about: https://python.langchain.com/docs/get_started/quickstart/
Let's use the very first example:
llm = ChatOpenAI(openai_api_key=api_key)
llm.invoke("how can langsmith help with testing?")
I also wrote some initialization code to make ChatOpenAI work:
import os
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")
llm = ChatOpenAI(openai_api_key=api_key)
llm.invoke("how can langsmith help with testing?")
The invoke function seems to execute, as I don't see any error message. But I also don't see any output. Nothing happens.
They even wrote "We can also guide its response with a prompt template." However, there is no response.
Can anyone explain to me what is happening here? And can you perhaps recommend a better tutorial than the one from LangChain?
As mentioned in the comments, the documentation assumes that the code is being run in a Jupyter notebook, where the value of the last expression in a cell is displayed automatically. The return type of the invoke method is a BaseMessage. If you run the code as a plain script and want to see the response object, first assign the return value of invoke to a variable:
response = llm.invoke("how can langsmith help with testing?")
and then print its value:
print(response)
If you're only interested in the text of the response, use this instead:
print(response.content)
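Putting it all together, a minimal standalone script (just a sketch, assuming a .env file with OPENAI_API_KEY in the working directory, as in your setup) would look like this:

import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

# load OPENAI_API_KEY from the local .env file
load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")

llm = ChatOpenAI(openai_api_key=api_key)

# invoke() returns a message object; in a script you have to print it
# yourself, since there is no notebook echoing the last expression
response = llm.invoke("how can langsmith help with testing?")
print(response.content)

Run as a script, this prints only the text of the model's answer rather than the full message object.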