
Creates a text-to-text response

responses.create(**kwargs: Unpack[ResponseCreateParams]) -> CreateResponseResponse
POST /responses

Generate text responses from text prompts. The endpoint supports both streaming and non-streaming responses, and is available for VLLM models only.

Parameters
input: Union[str, Iterable[InputUnionMember1]]

The input text prompt or conversation history. Can be a string or an array of message objects for conversation context.

Accepts one of the following:
InputUnionMember0 = str

Simple text input prompt

InputUnionMember1 = Iterable[Union[InputUnionMember1UnionMember0, InputUnionMember1UnionMember1]]

Array of message objects or Responses API items for conversation context

Accepts one of the following:
class InputUnionMember1UnionMember0:
type: Literal["function_call", "function_call_output", "reasoning"]

The type of input item (must be function_call, function_call_output, or reasoning)

Accepts one of the following:
"function_call"
"function_call_output"
"reasoning"
id: Optional[str]

The unique ID of the reasoning item (optional for reasoning)

arguments: Optional[str]

JSON string of function arguments (required for function_call)

call_id: Optional[str]

The call ID (required for function_call and function_call_output)

content: Optional[Iterable[InputUnionMember1UnionMember0Content]]

Array of reasoning content parts (optional for reasoning, can be null)

text: Optional[str]

The reasoning text content

type: Optional[Literal["reasoning_text"]]

The type of content

Accepts one of the following:
"reasoning_text"
encrypted_content: Optional[str]

Encrypted content (optional)

name: Optional[str]

The function name (required for function_call)

output: Optional[str]

JSON string of function output (required for function_call_output)

status: Optional[str]

Status of the item (optional, can be null)

summary: Optional[Iterable[object]]

Summary of the reasoning (optional for reasoning)

class InputUnionMember1UnionMember1:
content: Union[str, Iterable[InputUnionMember1UnionMember1ContentUnionMember1]]

The content of the message (string or content parts array)

Accepts one of the following:
InputUnionMember1UnionMember1ContentUnionMember0 = str
InputUnionMember1UnionMember1ContentUnionMember1 = Iterable[Union[InputUnionMember1UnionMember1ContentUnionMember1UnionMember0, InputUnionMember1UnionMember1ContentUnionMember1UnionMember1]]
Accepts one of the following:
class InputUnionMember1UnionMember1ContentUnionMember1UnionMember0:
text: str

The text content

type: Literal["input_text"]

The type of content part

Accepts one of the following:
"input_text"
InputUnionMember1UnionMember1ContentUnionMember1UnionMember1 = Dict[str, object]
role: Optional[Literal["user", "assistant", "system", "tool", "developer"]]

The role of the message author

Accepts one of the following:
"user"
"assistant"
"system"
"tool"
"developer"
tool_call_id: Optional[str]

Tool call ID that this message is responding to (required for tool role)

tool_calls: Optional[Iterable[InputUnionMember1UnionMember1ToolCall]]

Tool calls made by the assistant (for assistant role messages)

id: str

The ID of the tool call.

function: InputUnionMember1UnionMember1ToolCallFunction

The function that the model called.

arguments: str

The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.

name: str

The name of the function to call.

type: Literal["function"]

The type of the tool. Currently, only function is supported.

Accepts one of the following:
"function"
type: Optional[Literal["message"]]

Optional type identifier for message items (used by some clients like Codex)

Accepts one of the following:
"message"
model: str

Model ID used to generate the response. Must be a VLLM model.

instructions: Optional[str]

System-level instructions for the model. This sets the behavior and context for the response generation.

max_output_tokens: Optional[int]

Maximum number of tokens to generate in the response. If not specified, the model will use a default value.

minimum: 1
max_tokens: Optional[int]

The maximum number of tokens that can be generated in the completion. Alias for max_output_tokens for compatibility.

minimum: 0
metadata: Optional[Dict[str, str]]

Set of up to 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

modalities: Optional[List[Literal["text"]]]

Specifies the output types the model should generate. For text-to-text, this should be ["text"].

Accepts one of the following:
"text"
parallel_tool_calls: Optional[bool]

Whether to enable parallel tool calls. When true, the model can make multiple tool calls in parallel.

stop: Optional[Union[str, SequenceNotStr[str]]]

Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

Accepts one of the following:
StopUnionMember0 = Optional[str]
StopUnionMember1 = SequenceNotStr[str]
stream: Optional[Literal[False]]

When set to true, the model response data is streamed to the client as it is generated, using server-sent events. This non-streaming method accepts only false (or omitted); see the streaming sketch after stream_options below.

Accepts one of the following:
False
stream_options: Optional[StreamOptions]

Options for streaming response. Only set this when you set stream: true.

include_usage: Optional[bool]

If set, an additional chunk will be streamed before the data: [DONE] message. The usage field on this chunk shows the token usage statistics for the entire request, and the choices field will always be an empty array.

All other chunks will also include a usage field, but with a null value. NOTE: If the stream is interrupted, you may not receive the final usage chunk which contains the total token usage for the request.
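
A hedged sketch of the streaming form. This page documents the non-streaming overload (stream pinned to false); the sketch assumes the SDK's streaming overload returns an iterable of event chunks:

from gradient import Gradient

client = Gradient(model_access_key="My Model Access Key")

# Assumption: with stream=True the SDK returns an iterator of chunks;
# with include_usage, a final chunk before data: [DONE] carries usage.
stream = client.responses.create(
    input="Write a haiku about the sea.",
    model="llama3-8b-instruct",
    stream=True,
    stream_options={"include_usage": True},
)
for chunk in stream:
    # Chunk shape is an assumption; consult the streaming reference.
    print(chunk)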

temperature: Optional[float]

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

minimum: 0
maximum: 2
tool_choice: Optional[ToolChoice]

Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.

none is the default when no tools are present. auto is the default if tools are present.

Accepts one of the following:
ToolChoiceUnionMember0 = Literal["none", "auto", "required"]

none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools.

Accepts one of the following:
"none"
"auto"
"required"
class ToolChoiceChatCompletionNamedToolChoice:

Specifies a tool the model should use. Use to force the model to call a specific function.

function: ToolChoiceChatCompletionNamedToolChoiceFunction
name: str

The name of the function to call.

type: Literal["function"]

The type of the tool. Currently, only function is supported.

Accepts one of the following:
"function"
tools: Optional[Iterable[Tool]]

A list of tools the model may call. Currently, only functions are supported as a tool. Uses Responses API format (with name, description, parameters at top level).

type: Literal["function", "web_search", "web_search_2025_08_26"]

The type of the tool. Supported values are function (custom tools), web_search, and web_search_2025_08_26 (built-in web search).

Accepts one of the following:
"function"
"web_search"
"web_search_2025_08_26"
description: Optional[str]

A description of what the function does, used by the model to choose when and how to call the function.

name: Optional[str]

The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

parameters: Optional[Dict[str, object]]

The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.

Omitting parameters defines a function with an empty parameter list.
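
As a sketch of the two parameters above, a function tool in the Responses API shape (name, description, and parameters at the top level), with tool_choice forcing that function; the get_weather schema is illustrative:

from gradient import Gradient

client = Gradient(model_access_key="My Model Access Key")

response = client.responses.create(
    input="What's the weather in Paris?",
    model="llama3-8b-instruct",
    tools=[
        {
            "type": "function",
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    # Force the model to call get_weather rather than answer directly.
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)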

top_p: Optional[float]

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

minimum: 0
maximum: 1
user: Optional[str]

A unique identifier representing your end-user, which can help DigitalOcean monitor and detect abuse.

Returns
class CreateResponseResponse:

Represents a text-to-text response returned by the model, based on the provided input. VLLM models only.

id: str

A unique identifier for the response.

created: int

The Unix timestamp (in seconds) of when the response was created.

model: str

The model used to generate the response.

object: Literal["response"]

The object type, which is always response.

Accepts one of the following:
"response"
usage: Usage

Detailed token usage statistics for the request, including input/output token counts and detailed breakdowns.

input_tokens: int

The number of input tokens.

input_tokens_details: UsageInputTokensDetails

A detailed breakdown of the input tokens.

cached_tokens: int

The number of tokens that were retrieved from the cache.

output_tokens: int

The number of output tokens.

output_tokens_details: UsageOutputTokensDetails

A detailed breakdown of the output tokens.

reasoning_tokens: int

The number of reasoning tokens.

tool_output_tokens: int

The number of tool output tokens.

total_tokens: int

The total number of tokens used.
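
A brief sketch of how the counts relate, assuming (as in OpenAI-style usage objects) that the detail fields are subsets of their parent counts; the numbers match the example payload at the bottom of this page (133 input + 41 output = 174 total, 24 of the 41 output tokens being reasoning):

from gradient import Gradient

client = Gradient(model_access_key="My Model Access Key")
response = client.responses.create(
    input="Tell me a three-sentence bedtime story about a unicorn.",
    model="llama3-8b-instruct",
)
usage = response.usage

# input_tokens + output_tokens == total_tokens (133 + 41 == 174 in the
# example payload). cached_tokens counts within input_tokens, and
# reasoning/tool output tokens count within output_tokens (assumed
# subset semantics).
assert usage.input_tokens + usage.output_tokens == usage.total_tokens
fresh_input = usage.input_tokens - usage.input_tokens_details.cached_tokens
visible_output = usage.output_tokens - usage.output_tokens_details.reasoning_tokens
print(fresh_input, visible_output)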

background: Optional[bool]

Whether the request was processed in the background

choices: Optional[List[Choice]]

A list of response choices. There can be more than one if n is greater than 1. Optional; the Responses API primarily uses the output array.

finish_reason: Literal["stop", "length", "tool_calls", "content_filter"]

The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, tool_calls if the model called a tool, or content_filter if content was omitted by a content filter.

Accepts one of the following:
"stop"
"length"
"tool_calls"
"content_filter"
index: int

The index of the choice in the list of choices.

message: ChoiceMessage

The generated message response.

content: Optional[str]

The generated text content.

role: Optional[Literal["assistant"]]

The role of the message author, which is always assistant.

Accepts one of the following:
"assistant"
tool_calls: Optional[List[ChoiceMessageToolCall]]

The tool calls generated by the model, such as function calls.

id: str

The ID of the tool call.

function: ChoiceMessageToolCallFunction

The function that the model called.

arguments: str

The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.

name: str

The name of the function to call.

type: Literal["function"]

The type of the tool. Currently, only function is supported.

Accepts one of the following:
"function"
logprobs: Optional[ChoiceLogprobs]

Log probability information for the choice. Only present if logprobs was requested in the request.

content: Optional[List[ChatCompletionTokenLogprob]]

A list of message content tokens with log probability information.

token: str

The token.

bytes: Optional[List[int]]

A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.

logprob: float

The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.

top_logprobs: List[TopLogprob]

List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned.

token: str

The token.

bytes: Optional[List[int]]

A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.

logprob: float

The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.
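
A sketch of consuming a choice, assuming attribute access on the SDK's typed models: check finish_reason, validate tool-call arguments before dispatch (the model may emit invalid JSON), and reassemble text from logprob byte arrays:

import json

from gradient import Gradient

client = Gradient(model_access_key="My Model Access Key")
response = client.responses.create(
    input="What's the weather in Paris?",
    model="llama3-8b-instruct",
)

for choice in response.choices or []:
    if choice.finish_reason == "tool_calls":
        for call in choice.message.tool_calls or []:
            # Arguments are model-generated JSON and may be malformed or
            # contain hallucinated fields: validate before dispatching.
            try:
                args = json.loads(call.function.arguments)
            except json.JSONDecodeError:
                continue
            print(call.function.name, args)

    # logprobs is only present if requested. Multi-byte characters can
    # span tokens, so concatenate bytes before decoding.
    if choice.logprobs and choice.logprobs.content:
        raw = b"".join(bytes(t.bytes) for t in choice.logprobs.content if t.bytes)
        print(raw.decode("utf-8", errors="replace"))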

input_messages: Optional[List[object]]

Input messages (if applicable)

max_output_tokens: Optional[int]

Maximum output tokens setting

max_tool_calls: Optional[int]

Maximum tool calls setting

output: Optional[List[Output]]

An array of content items generated by the model. This includes text content, function calls, reasoning items, and other output types. Use this field for Responses API compatibility.

Accepts one of the following:
class OutputUnionMember0:
arguments: str

JSON string of function arguments

call_id: str

The unique ID of the function tool call

name: str

The name of the function to call

type: Literal["function_call"]

The type of output item

Accepts one of the following:
"function_call"
id: Optional[str]

The unique ID of the function tool call (same as call_id)

encrypted_content: Optional[str]

Encrypted content (optional)

status: Optional[str]

Status of the item (optional, can be null)

class OutputUnionMember1:
text: str

The text content

type: Literal["text"]

The type of output item

Accepts one of the following:
"text"
class OutputUnionMember2:
id: str

The unique ID of the reasoning item

content: List[OutputUnionMember2Content]

Array of reasoning content parts

text: str

The reasoning text content

type: Literal["reasoning_text"]

The type of content

Accepts one of the following:
"reasoning_text"
summary: List[object]

Summary of the reasoning (usually empty)

type: Literal["reasoning"]

The type of output item

Accepts one of the following:
"reasoning"
encrypted_content: Optional[str]

Encrypted content (optional)

status: Optional[str]

Status of the item (optional, can be null)
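
A sketch of walking the Responses-style output array, assuming attribute access on the SDK's typed items; the three branches mirror the union documented above:

from gradient import Gradient

client = Gradient(model_access_key="My Model Access Key")
response = client.responses.create(
    input="Tell me a three-sentence bedtime story about a unicorn.",
    model="llama3-8b-instruct",
)

# Dispatch on the documented item types: text, function_call, reasoning.
for item in response.output or []:
    if item.type == "text":
        print("text:", item.text)
    elif item.type == "function_call":
        print("call:", item.name, item.arguments, f"(call_id={item.call_id})")
    elif item.type == "reasoning":
        for part in item.content:
            print("reasoning:", part.text)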

output_messages: Optional[List[object]]

Output messages (if applicable)

parallel_tool_calls: Optional[bool]

Whether parallel tool calls are enabled

previous_response_id: Optional[str]

Previous response ID (for multi-turn conversations)

prompt: Optional[str]

Prompt used for the response

reasoning: Optional[str]

Reasoning content

service_tier: Optional[str]

Service tier used

status: Optional[str]

Status of the response

temperature: Optional[float]

Temperature setting used for the response

text: Optional[str]

Text content

tool_choice: Optional[str]

Tool choice setting used for the response

tools: Optional[List[Tool]]

Tools available for the response

type: Literal["function", "web_search", "web_search_2025_08_26"]

The type of the tool. Supported values are function (custom tools), web_search, and web_search_2025_08_26 (built-in web search).

Accepts one of the following:
"function"
"web_search"
"web_search_2025_08_26"
description: Optional[str]

A description of what the function does, used by the model to choose when and how to call the function.

name: Optional[str]

The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

parameters: Optional[Dict[str, object]]

The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.

Omitting parameters defines a function with an empty parameter list.

top_logprobs: Optional[int]

Top logprobs setting

top_p: Optional[float]

Top-p setting used for the response

truncation: Optional[str]

Truncation setting

user: Optional[str]

User identifier

Request example
from gradient import Gradient

# The client authenticates with a model access key.
client = Gradient(
    model_access_key="My Model Access Key",
)

# A minimal non-streaming request; the response id is printed below.
create_response_response = client.responses.create(
    input="Tell me a three-sentence bedtime story about a unicorn.",
    model="llama3-8b-instruct",
)
print(create_response_response.id)
Returns example
{
  "id": "response-abc123def456",
  "created": 1721596428,
  "model": "llama3-8b-instruct",
  "object": "response",
  "usage": {
    "input_tokens": 133,
    "input_tokens_details": {
      "cached_tokens": 128
    },
    "output_tokens": 41,
    "output_tokens_details": {
      "reasoning_tokens": 24,
      "tool_output_tokens": 0
    },
    "total_tokens": 174
  },
  "background": true,
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "Once upon a time, a unicorn named Sparkle...",
        "role": "assistant",
        "tool_calls": [
          {
            "id": "id",
            "function": {
              "arguments": "arguments",
              "name": "name"
            },
            "type": "function"
          }
        ]
      },
      "logprobs": {
        "content": [
          {
            "token": "token",
            "bytes": [
              0
            ],
            "logprob": 0,
            "top_logprobs": [
              {
                "token": "token",
                "bytes": [
                  0
                ],
                "logprob": 0
              }
            ]
          }
        ]
      }
    }
  ],
  "input_messages": [
    {}
  ],
  "max_output_tokens": 0,
  "max_tool_calls": 0,
  "output": [
    {
      "arguments": "arguments",
      "call_id": "call_id",
      "name": "name",
      "type": "function_call",
      "id": "id",
      "encrypted_content": "encrypted_content",
      "status": "status"
    }
  ],
  "output_messages": [
    {}
  ],
  "parallel_tool_calls": true,
  "previous_response_id": "previous_response_id",
  "prompt": "prompt",
  "reasoning": "reasoning",
  "service_tier": "service_tier",
  "status": "status",
  "temperature": 0,
  "text": "text",
  "tool_choice": "tool_choice",
  "tools": [
    {
      "type": "function",
      "description": "description",
      "name": "name",
      "parameters": {
        "foo": "bar"
      }
    }
  ],
  "top_logprobs": 0,
  "top_p": 0,
  "truncation": "truncation",
  "user": "user"
}