langchain, large-language-model, gpt-3, py-langchain

Issue with LangChain Misclassifying gpt-3.5-turbo-instruct as Chat Model


OpenAI deprecated its text-davinci-003 completion model, so I updated my code to use gpt-3.5-turbo-instruct. However, LangChain incorrectly classifies gpt-3.5-turbo-instruct as a chat model, which causes initialization problems in my code.

Environment:

python = "^3.10"
langchain = "^0.0.130"

OS: Ubuntu

Expected Behavior:

The gpt-3.5-turbo-instruct model should be recognized as a completion model by LangChain and initialized without warnings or errors.

Actual Behavior:

When I initialize the gpt-3.5-turbo-instruct model, I receive warnings indicating that it is being misclassified as a chat model:

/home/mahdi/.cache/pypoetry/virtualenvs/backend-bRqVKcMN-py3.11/lib/python3.11/site-packages/langchain/llms/openai.py:169: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
/home/mahdi/.cache/pypoetry/virtualenvs/backend-bRqVKcMN-py3.11/lib/python3.11/site-packages/langchain/llms/openai.py:608: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI
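The warnings suggest that this LangChain version decides chat vs. completion by a simple model-name prefix check, which gpt-3.5-turbo-instruct also matches. A minimal sketch of that kind of check (illustrative only, not LangChain's actual code; the function name is made up):

```python
def looks_like_chat_model(model_name: str) -> bool:
    # Hypothetical prefix check: "gpt-3.5-turbo-instruct" starts with
    # "gpt-3.5-turbo", so it would be routed to the chat-model path
    # even though it is a completion model.
    return model_name.startswith("gpt-3.5-turbo")

print(looks_like_chat_model("gpt-3.5-turbo"))           # True (correct)
print(looks_like_chat_model("gpt-3.5-turbo-instruct"))  # True (misclassified)
print(looks_like_chat_model("text-davinci-003"))        # False
```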

My simplified code:

from langchain import OpenAI

# Parameters must be passed as keyword arguments, not a dict
llm = OpenAI(
    model_name="gpt-3.5-turbo-instruct",
    temperature=0.0,
    top_p=1,
    openai_api_key="API_KEY",
)

print(llm)

Output:

OpenAIChat
Params: {'model_name': 'gpt-3.5-turbo-instruct', 'temperature': 0.0, 'top_p': 1}


Solution

  • I solved the problem by upgrading LangChain to 0.1.0 (the latest version at the time of writing), which recognizes gpt-3.5-turbo-instruct as a completion model.
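For reference, the upgrade is a one-liner (shown for pip; since this project uses Poetry, the equivalent would be `poetry add langchain@0.1.0`):

```shell
# Upgrade LangChain so gpt-3.5-turbo-instruct is treated as a completion model
pip install --upgrade "langchain==0.1.0"
```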