Tags: openai-api, function-call, large-language-model

'openai' function calling does not execute the function


When using function calling, 'openai' does not execute the function; instead, it prints the function call with its parameters. Please see below:

I am using a local LLM (LLaMA 2) in Colaboratory.

'openai' version installed: 0.27.8

!pip install openai==0.27.8

import os
import openai

os.environ['OPENAI_API_KEY'] = 'NULL'
os.environ['OPENAI_API_BASE'] = "http://localhost:8000/v1"

openai.api_base = os.getenv('OPENAI_API_BASE')
openai.api_key = os.getenv("OPENAI_API_KEY")
openai.api_version = "2023-07-01-preview"

Here is the function:

import json
import requests

def get_current_weather(location):
    """Get the current weather in a given location."""
    url = "https://weatherapi-com.p.rapidapi.com/forecast.json"
    querystring = {"q": location, "days": "1"}
    headers = {
        "X-RapidAPI-Key": "xxxxx",
        "X-RapidAPI-Host": "weatherapi-com.p.rapidapi.com"
    }

    response = requests.get(url, headers=headers, params=querystring)
    data = response.json()

    weather_info = {
        "location": data['location']['name'],
        "temperature": data['current']['temp_c'],
    }

    print(json.dumps(weather_info))
    return json.dumps(weather_info)

It returns the following result:

response = get_current_weather('Naples, Italy')
{"location": "Naples", "temperature": 17.0}

Here is the schema for the function:

function_descriptions = [{
    "name": "get_current_weather",
    "description": "Get the current weather",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
            },
        },
        "required": ["location"],
    },
}]

Here is the chat completion:

completion = openai.ChatCompletion.create(
    model="gpt-35-turbo-0613",
    messages=[{"role": "user", "content": user_prompt}],
    functions=function_descriptions,
    function_call={"name": "get_current_weather"}
    # function_call="auto"
)
output = completion.choices[0].message
print(output)
{
   "role": "assistant",
   "content": "Ah, thank you for asking! The current temperature in Naples, Italy is approximately {get_current_weather({location: \"Naples, Italy\"})}. Would you like me to tell you more about the weather there?<|END_OF_ASSISTANT|></s>"
 }

Why does it not execute the function, but instead just prints the call {get_current_weather({location: \"Naples, Italy\"})}?


Solution

  • Large Language Models (LLMs) are not able to directly run code. Their expertise lies in processing and generating text, which can include code snippets, but does not extend to actual execution. If the goal is to execute code, there are two alternatives:
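In either case, the execution happens in your application, not in the model: with a model that actually supports function calling, the assistant message carries a machine-readable `function_call` payload, which your code parses, runs, and sends back in a follow-up request. A minimal sketch of that client-side dispatch (the sample message and the stubbed `get_current_weather` below are illustrative, not output from your setup):

```python
import json

# Sample assistant message, as returned by a model that supports function
# calling: no answer text, just a request to call a function.
message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"location": "Naples, Italy"}',
    },
}

# Stub standing in for the real get_current_weather() defined earlier.
def get_current_weather(location):
    return json.dumps({"location": location.split(",")[0], "temperature": 17.0})

available_functions = {"get_current_weather": get_current_weather}

if "function_call" in message:
    name = message["function_call"]["name"]
    # The arguments arrive as a JSON-encoded string, so they must be parsed.
    args = json.loads(message["function_call"]["arguments"])
    result = available_functions[name](**args)
    # The result is then appended to the conversation as a
    # {"role": "function", "name": name, "content": result} message
    # and sent back in a second ChatCompletion.create() call.
    print(result)
```

The key point is that the model only selects the function and fills in its arguments; your code is responsible for the actual call.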