I am using LangChain's create_pandas_dataframe_agent
to analyse a dataframe. The code looks like this:
from langchain_experimental.agents import create_pandas_dataframe_agent
import pandas as pd
from langchain_openai import AzureOpenAI

df = pd.read_csv("file_path")
llm = AzureOpenAI(
    deployment_name=name,  # I have a variable 'name'
    temperature=0.0,
)
agent_executor = create_pandas_dataframe_agent(
    llm,
    df,
    # Few other params
)
prompt = """ Some Text """
agent_executor.invoke(prompt)
Now, as per my understanding, when the agent's invoke
is called, both the prompt and the df
are passed to the LLM.
Note: I am using gpt-3.5-turbo-instruct
as my LLM.
Now I want to check how many tokens are consumed when I run this code. Any idea how to find that out, preferably with code?
I tried checking the Azure dashboard, but it's difficult to isolate the tokens from a single request there.
The number of tokens can be counted with the help of LangChain's OpenAI callback:
from langchain_experimental.agents import create_pandas_dataframe_agent
import pandas as pd
from langchain_openai import ChatOpenAI
from langchain_community.callbacks.manager import get_openai_callback

df = pd.read_csv("titanic.csv")
df = df[['PassengerId', 'Survived', 'Pclass']]

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0, api_key="")
agent_executor = create_pandas_dataframe_agent(
    llm,
    df,
    agent_type="tool-calling",
    verbose=True,
    allow_dangerous_code=True,  # Opt-in to allow execution of arbitrary code
)

with get_openai_callback() as cb:
    # Streaming bypasses the callback's token accounting, so disable it
    agent_executor.agent.stream_runnable = False
    openai_response = agent_executor.invoke("How many survivors in the Titanic?")
    print(openai_response)

# The callback accumulates usage across every LLM call the agent made
print(f"Total Tokens: {cb.total_tokens}")
print(f"Prompt Tokens: {cb.prompt_tokens}")
print(f"Completion Tokens: {cb.completion_tokens}")
print(f"Total Cost (USD): ${cb.total_cost}")