
Creates a text-to-text response

client.responses.create(body: ResponseCreateParams, options?: RequestOptions): CreateResponseResponse { id, created, model, 23 more } | Stream<CreateResponseStreamResponse { id, choices, created, 3 more }>
POST /responses

Generate text responses from text prompts. This endpoint supports both streaming and non-streaming responses and is available for VLLM models only.

Parameters
ResponseCreateParams = ResponseCreateParamsNonStreaming { stream } | ResponseCreateParamsStreaming { stream }
ResponseCreateParamsBase { input, model, instructions, 13 more }
input: string | Array<UnionMember0 { type, id, arguments, 7 more } | UnionMember1 { content, role, tool_call_id, 2 more } >

The input text prompt or conversation history. Can be a string or an array of message objects for conversation context. A short sketch of both forms follows the field descriptions below.

Accepts one of the following:
string
Array<UnionMember0 { type, id, arguments, 7 more } | UnionMember1 { content, role, tool_call_id, 2 more } >
UnionMember0 { type, id, arguments, 7 more }
type: "function_call" | "function_call_output" | "reasoning"

The type of input item (must be function_call, function_call_output, or reasoning)

Accepts one of the following:
"function_call"
"function_call_output"
"reasoning"
id?: string

The unique ID of the reasoning item (optional for reasoning)

arguments?: string

JSON string of function arguments (required for function_call)

call_id?: string

The call ID (required for function_call and function_call_output)

content?: Array<Content> | null

Array of reasoning content parts (optional for reasoning, can be null)

text?: string

The reasoning text content

type?: "reasoning_text"

The type of content

Accepts one of the following:
"reasoning_text"
encrypted_content?: string | null

Encrypted content (optional)

name?: string

The function name (required for function_call)

output?: string

JSON string of function output (required for function_call_output)

status?: string | null

Status of the item (optional, can be null)

summary?: Array<unknown>

Summary of the reasoning (optional for reasoning)

UnionMember1 { content, role, tool_call_id, 2 more }
content: string | Array<UnionMember0 { text, type } | Record<string, unknown>>

The content of the message (string or content parts array)

Accepts one of the following:
string
Array<UnionMember0 { text, type } | Record<string, unknown>>
UnionMember0 { text, type }
text: string

The text content

type: "input_text"

The type of content part

Accepts one of the following:
"input_text"
Record<string, unknown>
role?: "user" | "assistant" | "system" | 2 more

The role of the message author

Accepts one of the following:
"user"
"assistant"
"system"
"tool"
"developer"
tool_call_id?: string

Tool call ID that this message is responding to (required for tool role)

tool_calls?: Array<ToolCall>

Tool calls made by the assistant (for assistant role messages)

id: string

The ID of the tool call.

function: Function { arguments, name }

The function that the model called.

arguments: string

The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.

name: string

The name of the function to call.

type: "function"

The type of the tool. Currently, only function is supported.

Accepts one of the following:
"function"
type?: "message"

Optional type identifier for message items (used by some clients like Codex)

Accepts one of the following:
"message"
model: string

Model ID used to generate the response. Must be a VLLM model.

instructions?: string | null

System-level instructions for the model. This sets the behavior and context for the response generation.

max_output_tokens?: number | null

Maximum number of tokens to generate in the response. If not specified, the model will use a default value.

minimum: 1
max_tokens?: number | null

The maximum number of tokens that can be generated in the completion. Alias for max_output_tokens for compatibility.

minimum: 0
metadata?: Record<string, string> | null

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

modalities?: Array<"text"> | null

Specifies the output types the model should generate. For text-to-text, this should be ["text"].

Accepts one of the following:
"text"
parallel_tool_calls?: boolean | null

Whether to enable parallel tool calls. When true, the model can make multiple tool calls in parallel.

stop?: string | null | Array<string>

Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

Accepts one of the following:
string | null
Array<string>
stream?: false | null

If set to true, the model response data will be streamed to the client as it is generated using server-sent events.

Accepts one of the following:
false
stream_options?: StreamOptions | null

Options for streaming response. Only set this when you set stream: true.

include_usage?: boolean

If set, an additional chunk will be streamed before the data: [DONE] message. The usage field on this chunk shows the token usage statistics for the entire request, and the choices field will always be an empty array.

All other chunks will also include a usage field, but with a null value. NOTE: If the stream is interrupted, you may not receive the final usage chunk which contains the total token usage for the request.
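As a hedged sketch of how these options combine (client setup as in the example at the end of this page; the model ID is a placeholder), a streaming request with a final usage chunk might look like:

const stream = await client.responses.create({
  model: 'llama3-8b-instruct',
  input: 'Write a haiku about the ocean.',
  stream: true,
  stream_options: { include_usage: true }, // ask for a final usage chunk
});

for await (const chunk of stream) {
  // Per the note above, the final chunk carries usage and an empty choices array;
  // all earlier chunks have usage set to null.
  if (chunk.usage) {
    console.log('total tokens:', chunk.usage.total_tokens);
  }
}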

temperature?: number | null

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

minimum: 0
maximum: 2
tool_choice?: "none" | "auto" | "required" | ChatCompletionNamedToolChoice { function, type }

Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.

none is the default when no tools are present. auto is the default if tools are present.

Accepts one of the following:
"none" | "auto" | "required"
"none"
"auto"
"required"
ChatCompletionNamedToolChoice { function, type }

Specifies a tool the model should use. Use to force the model to call a specific function.

function: Function { name }
name: string

The name of the function to call.

type: "function"

The type of the tool. Currently, only function is supported.

Accepts one of the following:
"function"
tools?: Array<Tool>

A list of tools the model may call. Currently, only functions are supported as a tool. Uses Responses API format (with name, description, parameters at top level).

type: "function" | "web_search" | "web_search_2025_08_26"

The type of the tool. Supported values are function (custom tools), web_search, and web_search_2025_08_26 (built-in web search).

Accepts one of the following:
"function"
"web_search"
"web_search_2025_08_26"
description?: string

A description of what the function does, used by the model to choose when and how to call the function.

name?: string

The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

parameters?: Record<string, unknown>

The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.

Omitting parameters defines a function with an empty parameter list.
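A minimal sketch of defining a function tool and forcing the model to call it via tool_choice (get_weather is a hypothetical function, not part of this API):

const response = await client.responses.create({
  model: 'llama3-8b-instruct',
  input: 'What is the weather in Berlin?',
  tools: [
    {
      type: 'function',
      name: 'get_weather', // hypothetical function name
      description: 'Look up the current weather for a city',
      parameters: {
        type: 'object',
        properties: { city: { type: 'string' } },
        required: ['city'],
      },
    },
  ],
  tool_choice: { type: 'function', function: { name: 'get_weather' } },
});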

top_p?: number | null

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

minimum: 0
maximum: 1
user?: string

A unique identifier representing your end-user, which can help DigitalOcean to monitor and detect abuse.

ResponseCreateParamsNonStreaming extends ResponseCreateParamsBase { input, model, instructions, 13 more } { stream }
stream?: false | null

If set to true, the model response data will be streamed to the client as it is generated using server-sent events.

Accepts one of the following:
false
ResponseCreateParamsStreaming extends ResponseCreateParamsBase { input, model, instructions, 13 more } { stream }
stream: true

If set to true, the model response data will be streamed to the client as it is generated using server-sent events.

Accepts one of the following:
true
Returns
CreateResponseResponse { id, created, model, 23 more }

Represents a text-to-text response returned by the model, based on the provided input. VLLM models only.

id: string

A unique identifier for the response.

created: number

The Unix timestamp (in seconds) of when the response was created.

model: string

The model used to generate the response.

object: "response"

The object type, which is always response.

Accepts one of the following:
"response"
usage: Usage { input_tokens, input_tokens_details, output_tokens, 2 more }

Detailed token usage statistics for the request, including input/output token counts and detailed breakdowns.

input_tokens: number

The number of input tokens.

input_tokens_details: InputTokensDetails { cached_tokens }

A detailed breakdown of the input tokens.

cached_tokens: number

The number of tokens that were retrieved from the cache. More on prompt caching.

output_tokens: number

The number of output tokens.

output_tokens_details: OutputTokensDetails { reasoning_tokens, tool_output_tokens }

A detailed breakdown of the output tokens.

reasoning_tokens: number

The number of reasoning tokens.

tool_output_tokens: number

The number of tool output tokens.

total_tokens: number

The total number of tokens used.
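For reference, a small sketch of reading these usage fields (using the createResponseResponse variable from the example at the end of this page):

const { usage } = createResponseResponse;
console.log(`input: ${usage.input_tokens} (cached: ${usage.input_tokens_details.cached_tokens})`);
console.log(`output: ${usage.output_tokens} (reasoning: ${usage.output_tokens_details.reasoning_tokens})`);
console.log(`total: ${usage.total_tokens}`); // input tokens plus output tokens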

background?: boolean | null

Whether the request was processed in the background

choices?: Array<Choice> | null

A list of response choices. Can be more than one if n is greater than 1. Optional - Responses API primarily uses the output array.

finish_reason: "stop" | "length" | "tool_calls" | "content_filter"

The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, tool_calls if the model called a tool, or content_filter if content was omitted due to a content filter.

Accepts one of the following:
"stop"
"length"
"tool_calls"
"content_filter"
index: number

The index of the choice in the list of choices.

message: Message { content, role, tool_calls }

The generated message response.

content?: string

The generated text content.

role?: "assistant"

The role of the message author, which is always assistant.

Accepts one of the following:
"assistant"
tool_calls?: Array<ToolCall>

The tool calls generated by the model, such as function calls.

id: string

The ID of the tool call.

function: Function { arguments, name }

The function that the model called.

arguments: string

The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.

name: string

The name of the function to call.

type: "function"

The type of the tool. Currently, only function is supported.

Accepts one of the following:
"function"
logprobs?: Logprobs | null

Log probability information for the choice. Only present if logprobs was requested in the request.

content: Array<ChatCompletionTokenLogprob { token, bytes, logprob, top_logprobs } > | null

A list of message content tokens with log probability information.

token: string

The token.

bytes: Array<number> | null

A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.

logprob: number

The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.

top_logprobs: Array<TopLogprob>

List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned.

token: string

The token.

bytes: Array<number> | null

A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.

logprob: number

The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.

input_messages?: Array<unknown> | null

Input messages (if applicable)

max_output_tokens?: number | null

Maximum output tokens setting

max_tool_calls?: number | null

Maximum tool calls setting

output?: Array<UnionMember0 { arguments, call_id, name, 4 more } | UnionMember1 { text, type } | UnionMember2 { id, content, summary, 3 more } >

An array of content items generated by the model. This includes text content, function calls, reasoning items, and other output types. Use this field for Responses API compatibility. A short iteration sketch follows the item descriptions below.

Accepts one of the following:
UnionMember0 { arguments, call_id, name, 4 more }
arguments: string

JSON string of function arguments

call_id: string

The unique ID of the function tool call

name: string

The name of the function to call

type: "function_call"

The type of output item

Accepts one of the following:
"function_call"
id?: string

The unique ID of the function tool call (same as call_id)

encrypted_content?: string | null

Encrypted content (optional)

status?: string | null

Status of the item (optional, can be null)

UnionMember1 { text, type }
text: string

The text content

type: "text"

The type of output item

Accepts one of the following:
"text"
UnionMember2 { id, content, summary, 3 more }
id: string

The unique ID of the reasoning item

content: Array<Content>

Array of reasoning content parts

text: string

The reasoning text content

type: "reasoning_text"

The type of content

Accepts one of the following:
"reasoning_text"
summary: Array<unknown>

Summary of the reasoning (usually empty)

type: "reasoning"

The type of output item

Accepts one of the following:
"reasoning"
encrypted_content?: string | null

Encrypted content (optional)

status?: string | null

Status of the item (optional, can be null)
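A minimal sketch of walking the output array by item type, per the union described above:

for (const item of createResponseResponse.output ?? []) {
  switch (item.type) {
    case 'text':
      console.log('text:', item.text);
      break;
    case 'function_call':
      console.log(`function call: ${item.name}(${item.arguments})`);
      break;
    case 'reasoning':
      for (const part of item.content) console.log('reasoning:', part.text);
      break;
  }
}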

output_messages?: Array<unknown> | null

Output messages (if applicable)

parallel_tool_calls?: boolean | null

Whether parallel tool calls are enabled

previous_response_id?: string | null

Previous response ID (for multi-turn conversations)

prompt?: string | null

Prompt used for the response

reasoning?: string | null

Reasoning content

service_tier?: string | null

Service tier used

status?: string | null

Status of the response

temperature?: number | null

Temperature setting used for the response

text?: string | null

Text content

tool_choice?: string | null

Tool choice setting used for the response

tools?: Array<Tool> | null

Tools available for the response

type: "function" | "web_search" | "web_search_2025_08_26"

The type of the tool. Supported values are function (custom tools), web_search, and web_search_2025_08_26 (built-in web search).

Accepts one of the following:
"function"
"web_search"
"web_search_2025_08_26"
description?: string

A description of what the function does, used by the model to choose when and how to call the function.

name?: string

The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

parameters?: Record<string, unknown>

The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.

Omitting parameters defines a function with an empty parameter list.

top_logprobs?: number | null

Top logprobs setting

top_p?: number | null

Top-p setting used for the response

truncation?: string | null

Truncation setting

user?: string | null

User identifier

Creates a text-to-text response
import Gradient from '@digitalocean/gradient';

// Authenticate with a model access key.
const client = new Gradient({
  modelAccessKey: 'My Model Access Key',
});

// Non-streaming text-to-text request.
const createResponseResponse = await client.responses.create({
  input: 'Tell me a three-sentence bedtime story about a unicorn.',
  model: 'llama3-8b-instruct',
});

console.log(createResponseResponse.id);
Returns Examples
{
  "id": "response-abc123def456",
  "created": 1721596428,
  "model": "llama3-8b-instruct",
  "object": "response",
  "usage": {
    "input_tokens": 133,
    "input_tokens_details": {
      "cached_tokens": 128
    },
    "output_tokens": 41,
    "output_tokens_details": {
      "reasoning_tokens": 24,
      "tool_output_tokens": 0
    },
    "total_tokens": 174
  },
  "background": true,
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "Once upon a time, a unicorn named Sparkle...",
        "role": "assistant",
        "tool_calls": [
          {
            "id": "id",
            "function": {
              "arguments": "arguments",
              "name": "name"
            },
            "type": "function"
          }
        ]
      },
      "logprobs": {
        "content": [
          {
            "token": "token",
            "bytes": [
              0
            ],
            "logprob": 0,
            "top_logprobs": [
              {
                "token": "token",
                "bytes": [
                  0
                ],
                "logprob": 0
              }
            ]
          }
        ]
      }
    }
  ],
  "input_messages": [
    {}
  ],
  "max_output_tokens": 0,
  "max_tool_calls": 0,
  "output": [
    {
      "arguments": "arguments",
      "call_id": "call_id",
      "name": "name",
      "type": "function_call",
      "id": "id",
      "encrypted_content": "encrypted_content",
      "status": "status"
    }
  ],
  "output_messages": [
    {}
  ],
  "parallel_tool_calls": true,
  "previous_response_id": "previous_response_id",
  "prompt": "prompt",
  "reasoning": "reasoning",
  "service_tier": "service_tier",
  "status": "status",
  "temperature": 0,
  "text": "text",
  "tool_choice": "tool_choice",
  "tools": [
    {
      "type": "function",
      "description": "description",
      "name": "name",
      "parameters": {
        "foo": "bar"
      }
    }
  ],
  "top_logprobs": 0,
  "top_p": 0,
  "truncation": "truncation",
  "user": "user"
}