I’ve set up a tool function to capture important bits of information from user text.
import OpenAI from "openai";
import { Tool } from "openai/resources/responses/responses";

console.log(process.env.OPENAI_API_KEY);

const client = new OpenAI();
const model = "gpt-5";

const tools: Tool[] = [{
  strict: true,
  type: "function",
  name: "log_point",
  description: "Extract one salient item from the user content. Call once per item. Salient items include: tasks (e.g., buy X), identities (name/nationality), locations, deadlines/dates, quantities, constraints, commitments, preferences. Ignore greetings/small talk. Do not call if nothing salient.",
  parameters: {
    type: "object",
    additionalProperties: false,
    properties: {
      message: {
        type: "string",
        description: "A summary of the salient points. This should be short and efficiently phrased."
      }
    },
    required: ["message"]
  }
}];

client.responses.create({
  model,
  tools,
  input: [
    {
      role: "system",
      content: "You are a helpful assistant. Call the tool once per salient item. After calling tools, also produce a brief 1–2 sentence message summarizing the extracted items."
    },
    {
      role: "user",
      content:
        "Hello. How are you? I am fine. " +
        "What do you want to do? I want to go to the store. " +
        "My name is Alice Gumballs and I'm an alien from the Wibbley-Wobbley galaxy. " +
        "We need to remember to buy a chicken for dinner, this is important. " +
        "Maria is coming too, she will provide dessert for us. " +
        "What can I cook?",
    }
  ]
})
  .then(response => {
    console.log('output_text', response.output_text);
    response.output.forEach((item, index) => {
      console.log(`Item (${index}):`, item.type);
      if (item.type === "function_call") {
        console.log(`Highlight (${index}):`, JSON.parse(item.arguments).message);
      }
    });
  })
  .catch(error => {
    console.error('error', error);
  });
The function calls seem to be working correctly; however, I don't get any text response. I would like the user to be able to chat as if there weren't any tool calls, i.e. a back-and-forth with the agent.
The response I get is this:
{
  "id": "resp_68d1c90ca3288198a597554f7d16f29f04ae0c8444524f94",
  "object": "response",
  "created_at": 1758578956,
  "status": "completed",
  "background": false,
  "billing": {
    "payer": "developer"
  },
  "error": null,
  "incomplete_details": null,
  "instructions": null,
  "max_output_tokens": null,
  "max_tool_calls": null,
  "model": "gpt-5-2025-08-07",
  "output": [
    {
      "id": "rs_68d1c90e3f048198a88b4d9a495236ae04ae0c8444524f94",
      "type": "reasoning",
      "summary": []
    },
    {
      "id": "fc_68d1c92157848198a925cb1f656e918c04ae0c8444524f94",
      "type": "function_call",
      "status": "completed",
      "arguments": "{\"message\":\"Name: Alice Gumballs\"}",
      "call_id": "call_ZYQ3cbyS7xbuedfMVkll7RoX",
      "name": "log_point"
    },
    {
      "id": "fc_68d1c92192848198bb7e6b1126f4667304ae0c8444524f94",
      "type": "function_call",
      "status": "completed",
      "arguments": "{\"message\":\"Identity: alien from the Wibbley-Wobbley galaxy\"}",
      "call_id": "call_eIs2DDQC3P580x4kpp6PWdwD",
      "name": "log_point"
    },
    // …other function calls
  ],
  "parallel_tool_calls": true,
  "previous_response_id": null,
  "prompt_cache_key": null,
  "reasoning": {
    "effort": "medium",
    "summary": null
  },
  "safety_identifier": null,
  "service_tier": "default",
  "store": true,
  "temperature": 1,
  "text": {
    "format": {
      "type": "text"
    },
    "verbosity": "medium"
  },
  "tool_choice": "auto",
  "tools": [
    {
      "type": "function",
      "description": "Extract one salient item from the user content. Call once per item. Salient items include: tasks (e.g., buy X), identities (name/nationality), locations, deadlines/dates, quantities, constraints, commitments, preferences. Ignore greetings/small talk. Do not call if nothing salient.",
      "name": "log_point",
      "parameters": {
        "type": "object",
        "additionalProperties": false,
        "properties": {
          "message": {
            "type": "string",
            "description": "A summary of the salient points. This should be short and efficiently phrased."
          }
        },
        "required": [
          "message"
        ]
      },
      "strict": true
    }
  ],
  "top_logprobs": 0,
  "top_p": 1,
  "truncation": "disabled",
  "usage": {
    "input_tokens": 231,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 1377,
    "output_tokens_details": {
      "reasoning_tokens": 1216
    },
    "total_tokens": 1608
  },
  "user": null,
  "metadata": {},
  "output_text": ""
}
If I remove the tooling (or disable it), then I get a text response.
I understand that it is possible to use tool functions and still get a text response.
How can I get a text response and use tooling at the same time?
One thing maybe worth highlighting: the first output item says reasoning, but contains no content. I don't know if there should be content in there, or whether it should end up in the output_text field.
To have "chat" experience the code must reply to "function_call" by resending original request along with a result of a function.
- Make a request to the model with tools it could call
- Receive a tool call from the model
- Execute code on the application side with input from the tool call
- Make a second request to the model with the tool output
- Receive a final response from the model (or more tool calls)
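A minimal sketch of that loop, reusing the question's client, model, and tools (the user message and the "logged" payload are placeholders for your application's actual data; depending on your SDK version you may need to adjust the types):

import OpenAI from "openai";
import { ResponseInput, Tool } from "openai/resources/responses/responses";

const client = new OpenAI();
const model = "gpt-5";
const tools: Tool[] = [/* the log_point tool from the question */];

async function chatTurn() {
  const history: ResponseInput = [
    { role: "user", content: "My name is Alice Gumballs. We need to buy a chicken for dinner." },
  ];

  const first = await client.responses.create({ model, tools, input: history });

  // Replay everything the model produced (the reasoning and function_call
  // items), then answer each call with a function_call_output that carries
  // the same call_id.
  history.push(...first.output);
  for (const item of first.output) {
    if (item.type === "function_call") {
      console.log("logging:", JSON.parse(item.arguments).message);
      history.push({
        type: "function_call_output",
        call_id: item.call_id,
        output: "logged", // placeholder result of your log_point handler
      });
    }
  }

  // Second request: the model now sees its own calls plus their results
  // and produces the normal assistant text for the user.
  const second = await client.responses.create({ model, tools, input: history });
  console.log(second.output_text);
}

chatTurn().catch(console.error);

Since your dump shows "store": true, it should also work to pass previous_response_id: first.id and send only the function_call_output items as the second request's input, instead of replaying the whole history.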
I think you expect the LLM to somehow suspend processing of the original request, execute the function, and then continue with the request. That is not how LLM-based systems (that I know of) work: the LLM finishes processing the original request with a list of next steps, and then needs to start over with the new data added. There is no state kept in the LLM itself between those two tries.
You can think of a function call as the equivalent of a user chat turn where your prompt asks the user for more information, e.g.:

If the input provides a user name, return "Hello, {user name}";
otherwise, ask for the name.

Here you would send the prompt plus the user's initial input first, display the result (which could be "please provide your name"), and then send both user inputs, resulting in two calls: the first with "user: Hi", and the second with "user: Hi, user: My name is Alice". The same happens with a function call in the prompt, except that instead of asking the user, your code provides the answer itself, giving an interaction like "user: Hi" (result: "func: user_name"), followed by a second call with "user: Hi, func: user_name=Alice". A sketch of those two inputs follows.
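Expressed as Responses API input items, the two requests in the function-call variant might look like this (the user_name tool and the call_1 id are hypothetical, purely to mirror the analogy):

import type { ResponseInput } from "openai/resources/responses/responses";

// first request: just the user's greeting
const firstInput: ResponseInput = [{ role: "user", content: "Hi" }];

// the model replies with a function_call instead of text; the second
// request replays that call and supplies the answer your code produced
const secondInput: ResponseInput = [
  { role: "user", content: "Hi" },
  { type: "function_call", call_id: "call_1", name: "user_name", arguments: "{}" },
  { type: "function_call_output", call_id: "call_1", output: "Alice" },
];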
One more note: since there is no memory between such calls, it is your code's responsibility to indicate that the function was called. There are really no "void" results; the fact that the function was called has to show up in the input of the next call, even if the output you send back is just an acknowledgement.
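Concretely, even a fire-and-forget tool like log_point needs a stub result per call. Something like this, inside the question's .then handler where response is in scope (the "ok" payload is arbitrary; only the item's presence matters):

// build one function_call_output per tool call, to be appended to the
// next request's input alongside the original messages and tool calls
const stubOutputs = response.output.flatMap((item) =>
  item.type === "function_call"
    ? [{ type: "function_call_output" as const, call_id: item.call_id, output: "ok" }]
    : []
);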