When I use function calling, 'openai' does not execute the function; it just prints the function call with its parameters. Please see below:
I am using a local LLM (LLaMA 2) in Colaboratory.
'openai' version installed: 0.27.8
!pip install openai==0.27.8

import os
import openai

os.environ['OPENAI_API_KEY'] = 'NULL'
os.environ['OPENAI_API_BASE'] = "http://localhost:8000/v1"

openai.api_base = os.getenv('OPENAI_API_BASE')
openai.api_key = os.getenv("OPENAI_API_KEY")
openai.api_version = "2023-07-01-preview"
Here is the function:
import requests
import json

def get_current_weather(location):
    """Get the current weather in a given location"""
    url = "https://weatherapi-com.p.rapidapi.com/forecast.json"
    querystring = {"q": location, "days": "1"}
    headers = {
        "X-RapidAPI-Key": "xxxxx",
        "X-RapidAPI-Host": "weatherapi-com.p.rapidapi.com"
    }
    response = requests.get(url, headers=headers, params=querystring)
    data = response.json()
    weather_info = {
        "location": data['location']['name'],
        "temperature": data['current']['temp_c'],
    }
    print(json.dumps(weather_info))
    return json.dumps(weather_info)
It returns the following result:
response = get_current_weather('Naples, Italy')
{"location": "Naples", "temperature": 17.0}
Here is the schema for the function:
function_descriptions = [{
    "name": "get_current_weather",
    "description": "Get the current weather",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
            },
        },
        "required": ["location"],
    },
}]
Here is the chat completion:
completion = openai.ChatCompletion.create(
    model="gpt-35-turbo-0613",
    messages=[{"role": "user", "content": user_prompt}],
    functions=function_descriptions,
    function_call={"name": "get_current_weather"},
    # function_call="auto"
)
output = completion.choices[0].message
print(output)
{
  "role": "assistant",
  "content": "Ah, thank you for asking! The current temperature in Naples, Italy is approximately {get_current_weather({location: \"Naples, Italy\"})}. Would you like me to tell you more about the weather there?<|END_OF_ASSISTANT|></s>"
}
Why doesn't it execute the function, instead of just printing the call {get_current_weather({location: \"Naples, Italy\"})}?
Large Language Models (LLMs) cannot run code directly. Their expertise lies in processing and generating text, which includes code snippets but does not extend to actual execution. If the goal is to execute code, there are two alternatives:
Server-Side Execution:
In this approach, the code would be executed on a server owned by the provider (e.g. OpenAI or Google). This is not offered, however, because running arbitrary user code poses serious security risks. Also note that you are not passing the actual function get_current_weather to the model; you are only providing its description in function_descriptions.
Client-Side Processing:
Many companies opt for client-side processing instead: the function name and parameters identified by the model are returned to the client, and the caller executes the function itself (this selection of the function and extraction of its parameters is the actual benefit of the feature). To implement it, the model's output needs to be parsed and handled carefully on the client side.
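For comparison, a model or server that implements the OpenAI function-calling protocol does not embed the call in prose; it returns an assistant message with a structured function_call field, roughly of this shape (illustrative, not your actual output):

{
  "role": "assistant",
  "content": null,
  "function_call": {
    "name": "get_current_weather",
    "arguments": "{\"location\": \"Naples, Italy\"}"
  }
}

If the backend does not implement this part of the API, the call may show up only as plain text inside content, as it does in your output.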
After receiving the model's response, parse it by examining response.choices[0].finish_reason. If you use function_call="auto" and the model decides a function call is needed, it returns finish_reason='function_call' together with a structured function_call object in the message, as shown above. You can use this to detect the call, run the function in your own code, and keep full control over the execution process.
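Putting this together, here is a minimal sketch of the client-side loop, assuming the openai 0.27.x interface you are already using and a backend that actually implements function calling. get_current_weather and function_descriptions are the ones defined above; the available_functions mapping is my own naming:

import json

# Map the function names declared in the schema to real Python callables.
available_functions = {"get_current_weather": get_current_weather}

completion = openai.ChatCompletion.create(
    model="gpt-35-turbo-0613",
    messages=[{"role": "user", "content": user_prompt}],
    functions=function_descriptions,
    function_call="auto",
)
choice = completion.choices[0]

if choice.finish_reason == "function_call":
    call = choice.message["function_call"]
    func = available_functions[call["name"]]
    args = json.loads(call["arguments"])  # the model returns arguments as a JSON string
    result = func(**args)                 # the function runs in YOUR code, not in the model

    # Send the result back so the model can phrase the final answer.
    followup = openai.ChatCompletion.create(
        model="gpt-35-turbo-0613",
        messages=[
            {"role": "user", "content": user_prompt},
            choice.message,  # the assistant message containing the function_call
            {"role": "function", "name": call["name"], "content": result},
        ],
    )
    print(followup.choices[0].message["content"])
else:
    # The model answered directly without requesting a function call.
    print(choice.message["content"])

The role "function" message is what closes the loop: the model never executes anything itself, it only tells you which function to call and with which arguments, and you decide whether and how to run it.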