Tags: large-language-model, autogen

I am getting an error while trying to run autogen using a local LLM


Below is the code I am running. Note that I am using LM Studio (LLM: Llama 2), and I have double-checked that the local server's port number is correct.

from autogen import AssistantAgent, UserProxyAgent

config_list = [
    {
        "api_type": "open_ai",
        "api_base": "http://localhost:1234/v1",
        "api_key": "NULL"
    }
]

llm_config = {'config_list': config_list}

assistant = AssistantAgent(
    name="assistant",
    llm_config = llm_config
)

user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=100,
)

task = """write a python method to output numbers 1 to 100"""

user_proxy.initiate_chat(
    assistant,
    message=task
)

This is the exact output I get after running python app.py (the name of the program):

user_proxy (to assistant):

write a python method to output numbers 1 to 100

--------------------------------------------------------------------------------
Traceback (most recent call last):
  File "app.py", line 26, in <module>
    user_proxy.initiate_chat(
  File "C:\Users\Rohun\Development\AutoGen\env\lib\site-packages\autogen\agentchat\conversable_agent.py", line 550, in initiate_chat
    self.send(self.generate_init_message(**context), recipient, silent=silent)
  File "C:\Users\Rohun\Development\AutoGen\env\lib\site-packages\autogen\agentchat\conversable_agent.py", line 348, in send  
    recipient.receive(message, self, request_reply, silent)
  File "C:\Users\Rohun\Development\AutoGen\env\lib\site-packages\autogen\agentchat\conversable_agent.py", line 481, in receive
    reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
  File "C:\Users\Rohun\Development\AutoGen\env\lib\site-packages\autogen\agentchat\conversable_agent.py", line 906, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
  File "C:\Users\Rohun\Development\AutoGen\env\lib\site-packages\autogen\agentchat\conversable_agent.py", line 625, in generate_oai_reply
    response = client.create(
  File "C:\Users\Rohun\Development\AutoGen\env\lib\site-packages\autogen\oai\client.py", line 247, in create
    response = self._completions_create(client, params)
  File "C:\Users\Rohun\Development\AutoGen\env\lib\site-packages\autogen\oai\client.py", line 327, in _completions_create    
    response = completions.create(**params)
  File "C:\Users\Rohun\Development\AutoGen\env\lib\site-packages\openai\_utils\_utils.py", line 299, in wrapper
    return func(*args, **kwargs)
TypeError: create() got an unexpected keyword argument 'api_type'

I am unsure what the error above is referring to, since I looked at the autogen GitHub repository and it shows 'api_type' being used multiple times in the config list. How can I resolve this?


Solution

  • The api_type field was recently removed from autogen to retain compatibility with the OpenAI API; see the autogen roadmap.
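Following that note, removing the api_type entry from the config should resolve the TypeError. A minimal sketch of the corrected config, reusing the same values from the question (note: depending on your autogen/openai versions, the base-URL key may need to be named base_url rather than api_base):

```python
# Corrected config for a local LM Studio server: the "api_type" entry is
# dropped, since newer autogen versions pass every config key straight
# through to the OpenAI client, which rejects unknown keyword arguments.
config_list = [
    {
        "api_base": "http://localhost:1234/v1",  # LM Studio's local endpoint
        "api_key": "NULL",  # placeholder; LM Studio does not validate the key
    }
]

llm_config = {"config_list": config_list}
```

The rest of the script (the AssistantAgent, UserProxyAgent, and initiate_chat calls) can stay exactly as written in the question.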