I want to upgrade from GPT-3 to GPT-3.5 Turbo in Node.js, but I'm running into a problem.
My code:
const askAi = async (message) => {
  try {
    const openAIInstance = await _createOpenAIInstance()
    const response = await openAIInstance.createCompletion({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: message }]
    })
    const repliedMessage = response.data.choices[0].message.content
    return repliedMessage
  } catch (err) {
    logger.error('', '', 'Ask AI Error: ' + err.message)
    return sendInternalError(err)
  }
}
But when I call it with a message:
askAi('Suggest me a job position for Auto CAD user')
it returns the following error:
Request failed with status code 400
You used the wrong function to get a completion for the given model. The function createCompletion works with the Completions API; in other words, it works with GPT-3 models (e.g., text-davinci-003) or GPT base models (e.g., davinci-002).
Note: OpenAI NodeJS SDK v4 was released on August 16, 2023, and is a complete rewrite of the SDK. Among other things, there are changes in method names. See the v3 to v4 migration guide.
| Model | NodeJS function (SDK v3) | NodeJS function (SDK v4) |
|---|---|---|
| GPT-3.5 and GPT-4 | openai.createChatCompletion | openai.chat.completions.create |
| GPT base and GPT-3 | openai.createCompletion | openai.completions.create |
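Since the two SDK majors expose different method names, you can tell them apart at runtime by checking which methods the client object carries. This is only an illustrative sketch: detectSdkMajor is a hypothetical helper (not part of the SDK), and the mock objects merely stand in for real client instances.

```javascript
// Hypothetical helper: guess the SDK major from the client's method names.
const detectSdkMajor = (client) => {
  // v4 exposes openai.chat.completions.create
  if (client && typeof client.chat?.completions?.create === 'function') return 4
  // v3 exposes openai.createChatCompletion
  if (client && typeof client.createChatCompletion === 'function') return 3
  return null // unknown shape
}

// Mock shapes standing in for real client instances:
console.log(detectSdkMajor({ chat: { completions: { create: () => {} } } })) // 4
console.log(detectSdkMajor({ createChatCompletion: () => {} }))              // 3
```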
• If you have the OpenAI NodeJS SDK v3: change createCompletion to createChatCompletion
const askAi = async (message) => {
  try {
    const openAIInstance = await _createOpenAIInstance()
    const response = await openAIInstance.createChatCompletion({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: message }]
    })
    const repliedMessage = response.data.choices[0].message.content
    return repliedMessage
  } catch (err) {
    logger.error('', '', 'Ask AI Error: ' + err.message)
    return sendInternalError(err)
  }
}
• If you have the OpenAI NodeJS SDK v4: change createCompletion to chat.completions.create
Note: there are also changes in how the message content is extracted in v4.
const askAi = async (message) => {
  try {
    const openAIInstance = await _createOpenAIInstance()
    const response = await openAIInstance.chat.completions.create({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: message }]
    })
    const repliedMessage = response.choices[0].message.content
    return repliedMessage
  } catch (err) {
    logger.error('', '', 'Ask AI Error: ' + err.message)
    return sendInternalError(err)
  }
}
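To make the response-shape change concrete, here is a minimal sketch using mock objects (not real API calls) showing where the reply text lives in each SDK major:

```javascript
// v3 (axios-based): the API payload is wrapped under `.data`.
const v3Response = {
  data: { choices: [{ message: { role: 'assistant', content: 'Hello!' } }] }
}
console.log(v3Response.data.choices[0].message.content) // 'Hello!'

// v4: the SDK returns the parsed payload directly, no `.data` wrapper.
const v4Response = {
  choices: [{ message: { role: 'assistant', content: 'Hello!' } }]
}
console.log(v4Response.choices[0].message.content) // 'Hello!'
```

This is why the v4 version of askAi above reads `response.choices[0]` instead of `response.data.choices[0]`.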