typescript, next.js, openai-api, azure-openai

Azure OpenAI Phi-4-multimodal-instruct: 'auto' tool choice error when using the runTools() method that worked with GPT-4o


I recently switched from GPT-4o to Phi-4-multimodal-instruct in my Next.js application (using Azure AI services), and I'm now getting the following error:

BadRequestError: 400 {"object":"error","message":"\"auto\" tool choice requires --enable-auto-tool-choice and --tool-call-parser to be set","type":"BadRequestError","param":null,"code":400}

The error occurs when calling the runTools() method, which was working perfectly with GPT-4o. Here's my implementation:

OpenAI Instance Configuration:

import { AzureOpenAI } from "openai";
export const OpenAIInstance = () => {
  try {
    // Fail fast if any required Azure environment variable is missing.
    if (
      !process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_API_KEY ||
      !process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_API_VERSION ||
      !process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_INSTANCE_NAME
    ) {
      throw new Error(
        "Missing required environment variables for OpenAI instance."
      );
    }
    
    // Client configured for the Phi-4 deployment endpoint on this Azure resource.
    const azureOpenAI = new AzureOpenAI({
      apiKey: process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_API_KEY,
      apiVersion: process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_API_VERSION,
      baseURL: `https://${process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_INSTANCE_NAME}.openai.azure.com/models/chat/completions?api-version=${process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_API_VERSION}`
    });

    return azureOpenAI;
  } catch (error) {
    console.error(
      "Error initializing OpenAI instance:",
      (error as Error).message
    );
    throw error;
  }
};
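
For comparison, a plain streamed completion without tools through the same client would look roughly like this (a sketch only; it reuses the AZURE_SERVICE_PHI_4_MULTIMODEL_MODEL_NAME deployment-name variable from the extension code below):

// Sketch: a tool-free streamed completion through the same client.
const openAI = OpenAIInstance();

const stream = await openAI.chat.completions.create({
  model: process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_MODEL_NAME!,
  stream: true,
  messages: [{ role: "user", content: "Hello" }],
});

for await (const chunk of stream) {
  // Each chunk carries a partial delta of the assistant message.
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}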

Chat API Extension Implementation:

import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";
import type { RunnableToolFunction } from "openai/lib/RunnableFunction";
import type { ChatCompletionStreamingRunner } from "openai/lib/ChatCompletionStreamingRunner";
// ChatThreadModel, extensionsSystemMessage, and OpenAIInstance come from this app's own modules.

export const ChatApiExtensions = async (props: {
  chatThread: ChatThreadModel;
  userMessage: string;
  history: ChatCompletionMessageParam[];
  extensions: RunnableToolFunction<any>[];
  signal: AbortSignal;
}): Promise<ChatCompletionStreamingRunner> => {
  const { userMessage, history, signal, chatThread, extensions } = props;
  const openAI = OpenAIInstance();
  
  const model = process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_MODEL_NAME;
  if (!model) {
    throw new Error("Model deployment name is not configured");
  }

  const systemMessage = await extensionsSystemMessage(chatThread);
  try {
    // runTools() streams the completion and automatically invokes the matching tool functions.
    return await openAI.beta.chat.completions.runTools(
      {
        model: model,
        stream: true,
        messages: [
          {
            role: "system",
            content: chatThread.personaMessage + "\n" + systemMessage,
          },
          ...history,
          {
            role: "user",
            content: userMessage,
          },
        ],
        tools: extensions,
        temperature: 0.7,
        max_tokens: 4000,
      },
      { 
        signal: signal,
      }
    );
  } catch (error) {
    console.error("Error in ChatApiExtensions:", error);
    throw error;
  }
};
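
For context, the returned streaming runner is consumed by the calling route roughly like this (a sketch; chatThread, userMessage, history, and extensions are supplied by the app's own chat API route):

// Sketch: consuming the ChatCompletionStreamingRunner returned above.
const controller = new AbortController();

const runner = await ChatApiExtensions({
  chatThread,
  userMessage,
  history,
  extensions,
  signal: controller.signal,
});

// Emit partial assistant text as it streams in.
runner.on("content", (delta) => {
  process.stdout.write(delta);
});

// Resolves once the run (including any tool round-trips) has finished.
const finalContent = await runner.finalContent();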

Based on the error message, it seems Phi-4-multimodal-instruct requires additional configuration for tool usage that wasn't needed with GPT-4o. I've searched the Azure documentation but haven't found any specifics about these flags (--enable-auto-tool-choice and --tool-call-parser).

Has anyone successfully used tools with Phi-4-multimodal-instruct on Azure? How can I modify my code to make this work?

Environment:


Solution

  • You cannot find these options because, as of now, Phi-4-multimodal-instruct does not support tool calling. See Details

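  • Until tool calling is supported for that model, a practical workaround is to route tool-enabled requests to a deployment that does support it (for example, the GPT-4o deployment you used before) and keep Phi-4-multimodal-instruct for plain chat. A rough sketch only, assuming a hypothetical AZURE_SERVICE_GPT_4O_MODEL_NAME variable for the GPT-4o deployment:

import { AzureOpenAI } from "openai";
import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";
import type { RunnableToolFunction } from "openai/lib/RunnableFunction";

// Sketch: route tool-enabled chats to a GPT-4o deployment and keep
// Phi-4-multimodal-instruct for plain streamed chat.
// AZURE_SERVICE_GPT_4O_MODEL_NAME is a hypothetical env var for the GPT-4o deployment.
export const runChat = (
  openAI: AzureOpenAI,
  messages: ChatCompletionMessageParam[],
  extensions: RunnableToolFunction<any>[],
  signal: AbortSignal
) => {
  if (extensions.length > 0) {
    // Tool calling: use a deployment that supports it.
    return openAI.beta.chat.completions.runTools(
      {
        model: process.env.AZURE_SERVICE_GPT_4O_MODEL_NAME!,
        stream: true,
        messages,
        tools: extensions,
        temperature: 0.7,
        max_tokens: 4000,
      },
      { signal }
    );
  }

  // No tools needed: Phi-4-multimodal-instruct handles a plain streamed completion.
  return openAI.beta.chat.completions.stream(
    {
      model: process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_MODEL_NAME!,
      messages,
      temperature: 0.7,
      max_tokens: 4000,
    },
    { signal }
  );
};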