
Evaluation Runs

Run an Evaluation Test Case
client.Agents.EvaluationRuns.New(ctx, body) (*AgentEvaluationRunNewResponse, error)
POST /v2/gen-ai/evaluation_runs
Retrieve Information About an Existing Evaluation Run
client.Agents.EvaluationRuns.Get(ctx, evaluationRunUuid) (*AgentEvaluationRunGetResponse, error)
GET /v2/gen-ai/evaluation_runs/{evaluation_run_uuid}
Retrieve Results of an Evaluation Run
client.Agents.EvaluationRuns.ListResults(ctx, evaluationRunUuid, query) (*AgentEvaluationRunListResultsResponse, error)
GET /v2/gen-ai/evaluation_runs/{evaluation_run_uuid}/results
Retrieve Results of an Evaluation Run Prompt
client.Agents.EvaluationRuns.GetResults(ctx, evaluationRunUuid, promptID) (*AgentEvaluationRunGetResultsResponse, error)
GET /v2/gen-ai/evaluation_runs/{evaluation_run_uuid}/results/{prompt_id}
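Taken together, these endpoints support a create → poll → fetch-results workflow. The sketch below is written against a small hypothetical interface (not the real client type) so it is self-contained; the method names mirror the signatures above, but the parameter and return shapes are deliberately simplified stand-ins.

```go
package main

import (
	"errors"
	"fmt"
)

// evaluationRuns is a hypothetical, trimmed-down stand-in for the
// client.Agents.EvaluationRuns service shown above.
type evaluationRuns interface {
	New() (runUUID string, err error)              // POST /v2/gen-ai/evaluation_runs
	Get(runUUID string) (status string, err error) // GET  .../{evaluation_run_uuid}
	ListResults(runUUID string) ([]string, error)  // GET  .../{evaluation_run_uuid}/results
}

// runAndCollect creates a run, polls until it reaches a terminal status,
// then fetches its per-prompt results.
func runAndCollect(svc evaluationRuns, maxPolls int) ([]string, error) {
	uuid, err := svc.New()
	if err != nil {
		return nil, err
	}
	for i := 0; i < maxPolls; i++ {
		status, err := svc.Get(uuid)
		if err != nil {
			return nil, err
		}
		switch status {
		case "EVALUATION_RUN_SUCCESSFUL", "EVALUATION_RUN_PARTIALLY_SUCCESSFUL":
			return svc.ListResults(uuid)
		case "EVALUATION_RUN_FAILED", "EVALUATION_RUN_CANCELLED":
			return nil, fmt.Errorf("run ended with status %s", status)
		}
	}
	return nil, errors.New("run did not reach a terminal status in time")
}

// fakeService completes on the third poll; it stands in for the real client.
type fakeService struct{ polls int }

func (f *fakeService) New() (string, error) { return "run-123", nil }
func (f *fakeService) Get(string) (string, error) {
	f.polls++
	if f.polls < 3 {
		return "EVALUATION_RUN_RUNNING_DATASET", nil
	}
	return "EVALUATION_RUN_SUCCESSFUL", nil
}
func (f *fakeService) ListResults(string) ([]string, error) {
	return []string{"prompt-1: pass", "prompt-2: fail"}, nil
}

func main() {
	results, err := runAndCollect(&fakeService{}, 10)
	fmt.Println(results, err)
}
```

A real caller would add a delay between polls (e.g. `time.Sleep`) and pass the context through each call.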
Models
type APIEvaluationMetric struct{…}
Category APIEvaluationMetricCategory optional
Accepts one of the following:
const APIEvaluationMetricCategoryMetricCategoryUnspecified APIEvaluationMetricCategory = "METRIC_CATEGORY_UNSPECIFIED"
const APIEvaluationMetricCategoryMetricCategoryCorrectness APIEvaluationMetricCategory = "METRIC_CATEGORY_CORRECTNESS"
const APIEvaluationMetricCategoryMetricCategoryUserOutcomes APIEvaluationMetricCategory = "METRIC_CATEGORY_USER_OUTCOMES"
const APIEvaluationMetricCategoryMetricCategorySafetyAndSecurity APIEvaluationMetricCategory = "METRIC_CATEGORY_SAFETY_AND_SECURITY"
const APIEvaluationMetricCategoryMetricCategoryContextQuality APIEvaluationMetricCategory = "METRIC_CATEGORY_CONTEXT_QUALITY"
const APIEvaluationMetricCategoryMetricCategoryModelFit APIEvaluationMetricCategory = "METRIC_CATEGORY_MODEL_FIT"
Description string optional
Inverted bool optional

If true, the metric is inverted, meaning that a lower value is better.

IsMetricGoal bool optional
MetricName string optional
MetricRank int64 optional
format: int64
MetricType APIEvaluationMetricMetricType optional
Accepts one of the following:
const APIEvaluationMetricMetricTypeMetricTypeUnspecified APIEvaluationMetricMetricType = "METRIC_TYPE_UNSPECIFIED"
const APIEvaluationMetricMetricTypeMetricTypeGeneralQuality APIEvaluationMetricMetricType = "METRIC_TYPE_GENERAL_QUALITY"
const APIEvaluationMetricMetricTypeMetricTypeRagAndTool APIEvaluationMetricMetricType = "METRIC_TYPE_RAG_AND_TOOL"
MetricUuid string optional
MetricValueType APIEvaluationMetricMetricValueType optional
Accepts one of the following:
const APIEvaluationMetricMetricValueTypeMetricValueTypeUnspecified APIEvaluationMetricMetricValueType = "METRIC_VALUE_TYPE_UNSPECIFIED"
const APIEvaluationMetricMetricValueTypeMetricValueTypeNumber APIEvaluationMetricMetricValueType = "METRIC_VALUE_TYPE_NUMBER"
const APIEvaluationMetricMetricValueTypeMetricValueTypeString APIEvaluationMetricMetricValueType = "METRIC_VALUE_TYPE_STRING"
const APIEvaluationMetricMetricValueTypeMetricValueTypePercentage APIEvaluationMetricMetricValueType = "METRIC_VALUE_TYPE_PERCENTAGE"
RangeMax float64 optional

The maximum value for the metric.

format: float
RangeMin float64 optional

The minimum value for the metric.

format: float
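RangeMin, RangeMax, and Inverted together determine how a raw metric value should be read. A minimal helper sketch, assuming a simplified local struct (the field names follow the listing above, but the struct here is a stand-in, not the SDK type):

```go
package main

import "fmt"

// metric mirrors the relevant subset of APIEvaluationMetric.
type metric struct {
	RangeMin float64
	RangeMax float64
	Inverted bool
}

// normalizedScore maps a raw value into [0, 1], where 1 is always "best":
// values are scaled across [RangeMin, RangeMax] and flipped when the
// metric is inverted (lower raw values are better).
func normalizedScore(m metric, value float64) float64 {
	if m.RangeMax == m.RangeMin {
		return 0 // degenerate range; nothing to scale
	}
	s := (value - m.RangeMin) / (m.RangeMax - m.RangeMin)
	if m.Inverted {
		s = 1 - s
	}
	return s
}

func main() {
	// A hypothetical latency-style metric: lower raw values score higher.
	latency := metric{RangeMin: 0, RangeMax: 10, Inverted: true}
	fmt.Println(normalizedScore(latency, 2))
}
```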
type APIEvaluationMetricResult struct{…}
ErrorDescription string optional

Error description if the metric could not be calculated.

MetricName string optional

Metric name.

MetricValueType APIEvaluationMetricResultMetricValueType optional
Accepts one of the following:
const APIEvaluationMetricResultMetricValueTypeMetricValueTypeUnspecified APIEvaluationMetricResultMetricValueType = "METRIC_VALUE_TYPE_UNSPECIFIED"
const APIEvaluationMetricResultMetricValueTypeMetricValueTypeNumber APIEvaluationMetricResultMetricValueType = "METRIC_VALUE_TYPE_NUMBER"
const APIEvaluationMetricResultMetricValueTypeMetricValueTypeString APIEvaluationMetricResultMetricValueType = "METRIC_VALUE_TYPE_STRING"
const APIEvaluationMetricResultMetricValueTypeMetricValueTypePercentage APIEvaluationMetricResultMetricValueType = "METRIC_VALUE_TYPE_PERCENTAGE"
NumberValue float64 optional

The value of the metric as a number.

format: double
Reasoning string optional

Reasoning of the metric result.

StringValue string optional

The value of the metric as a string.
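Because a result carries either a number or a string depending on MetricValueType, consumers typically switch on the value type before reading a field. A hedged sketch using a simplified local struct (a stand-in for APIEvaluationMetricResult, not the SDK type):

```go
package main

import "fmt"

// result mirrors the relevant subset of APIEvaluationMetricResult.
type result struct {
	MetricValueType  string
	NumberValue      float64
	StringValue      string
	ErrorDescription string
}

// displayValue renders a metric result according to its value type,
// surfacing the error description when the metric could not be calculated.
func displayValue(r result) string {
	if r.ErrorDescription != "" {
		return "error: " + r.ErrorDescription
	}
	switch r.MetricValueType {
	case "METRIC_VALUE_TYPE_NUMBER":
		return fmt.Sprintf("%g", r.NumberValue)
	case "METRIC_VALUE_TYPE_PERCENTAGE":
		return fmt.Sprintf("%g%%", r.NumberValue)
	case "METRIC_VALUE_TYPE_STRING":
		return r.StringValue
	default:
		return "unspecified"
	}
}

func main() {
	fmt.Println(displayValue(result{MetricValueType: "METRIC_VALUE_TYPE_PERCENTAGE", NumberValue: 87.5}))
}
```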

type APIEvaluationPrompt struct{…}
EvaluationTraceSpans []APIEvaluationPromptEvaluationTraceSpan optional

The evaluated trace spans.

CreatedAt Time optional

When the span was created.

format: date-time
Input unknown optional

Input data for the span (flexible structure; can be a messages array, a string, etc.).

Name string optional

Name/identifier for the span.

Output unknown optional

Output data from the span (flexible structure; can be a message, a string, etc.).

RetrieverChunks []APIEvaluationPromptEvaluationTraceSpansRetrieverChunk optional

Any retriever span chunks that were included as part of the span.

ChunkUsagePct float64 optional

The usage percentage of the chunk.

format: double
ChunkUsed bool optional

Indicates if the chunk was used in the prompt.

IndexUuid string optional

The index UUID (Knowledge Base) of the chunk.

SourceName string optional

The source name for the chunk, e.g., the file name or document title.

Text string optional

Text content of the chunk.

SpanLevelMetricResults []APIEvaluationMetricResult optional

The span-level metric results.

ErrorDescription string optional

Error description if the metric could not be calculated.

MetricName string optional

Metric name.

MetricValueType APIEvaluationMetricResultMetricValueType optional
Accepts one of the following:
const APIEvaluationMetricResultMetricValueTypeMetricValueTypeUnspecified APIEvaluationMetricResultMetricValueType = "METRIC_VALUE_TYPE_UNSPECIFIED"
const APIEvaluationMetricResultMetricValueTypeMetricValueTypeNumber APIEvaluationMetricResultMetricValueType = "METRIC_VALUE_TYPE_NUMBER"
const APIEvaluationMetricResultMetricValueTypeMetricValueTypeString APIEvaluationMetricResultMetricValueType = "METRIC_VALUE_TYPE_STRING"
const APIEvaluationMetricResultMetricValueTypeMetricValueTypePercentage APIEvaluationMetricResultMetricValueType = "METRIC_VALUE_TYPE_PERCENTAGE"
NumberValue float64 optional

The value of the metric as a number.

format: double
Reasoning string optional

Reasoning of the metric result.

StringValue string optional

The value of the metric as a string.

Type APIEvaluationPromptEvaluationTraceSpansType optional

Types of spans in a trace.

Accepts one of the following:
const APIEvaluationPromptEvaluationTraceSpansTypeTraceSpanTypeUnknown APIEvaluationPromptEvaluationTraceSpansType = "TRACE_SPAN_TYPE_UNKNOWN"
const APIEvaluationPromptEvaluationTraceSpansTypeTraceSpanTypeLlm APIEvaluationPromptEvaluationTraceSpansType = "TRACE_SPAN_TYPE_LLM"
const APIEvaluationPromptEvaluationTraceSpansTypeTraceSpanTypeRetriever APIEvaluationPromptEvaluationTraceSpansType = "TRACE_SPAN_TYPE_RETRIEVER"
const APIEvaluationPromptEvaluationTraceSpansTypeTraceSpanTypeTool APIEvaluationPromptEvaluationTraceSpansType = "TRACE_SPAN_TYPE_TOOL"
GroundTruth string optional

The ground truth for the prompt.

Input string optional
InputTokens string optional

The number of input tokens used in the prompt.

format: uint64
Output string optional
OutputTokens string optional

The number of output tokens used in the prompt.

format: uint64
PromptChunks []APIEvaluationPromptPromptChunk optional

The list of prompt chunks.

ChunkUsagePct float64 optional

The usage percentage of the chunk.

format: double
ChunkUsed bool optional

Indicates if the chunk was used in the prompt.

IndexUuid string optional

The index UUID (Knowledge Base) of the chunk.

SourceName string optional

The source name for the chunk, e.g., the file name or document title.

Text string optional

Text content of the chunk.

PromptID int64 optional

Prompt ID.

format: int64
PromptLevelMetricResults []APIEvaluationMetricResult optional

The metric results for the prompt.

ErrorDescription string optional

Error description if the metric could not be calculated.

MetricName string optional

Metric name.

MetricValueType APIEvaluationMetricResultMetricValueType optional
Accepts one of the following:
const APIEvaluationMetricResultMetricValueTypeMetricValueTypeUnspecified APIEvaluationMetricResultMetricValueType = "METRIC_VALUE_TYPE_UNSPECIFIED"
const APIEvaluationMetricResultMetricValueTypeMetricValueTypeNumber APIEvaluationMetricResultMetricValueType = "METRIC_VALUE_TYPE_NUMBER"
const APIEvaluationMetricResultMetricValueTypeMetricValueTypeString APIEvaluationMetricResultMetricValueType = "METRIC_VALUE_TYPE_STRING"
const APIEvaluationMetricResultMetricValueTypeMetricValueTypePercentage APIEvaluationMetricResultMetricValueType = "METRIC_VALUE_TYPE_PERCENTAGE"
NumberValue float64 optional

The value of the metric as a number.

format: double
Reasoning string optional

Reasoning of the metric result.

StringValue string optional

The value of the metric as a string.

TraceID string optional

The trace ID for the prompt.
type APIEvaluationRun struct{…}
AgentDeleted bool optional

Whether the agent is deleted.

AgentDeploymentName string optional

The agent deployment name.

AgentName string optional

Agent name.

AgentUuid string optional

Agent UUID.

AgentVersionHash string optional

The agent version hash.

AgentWorkspaceUuid string optional

Agent workspace UUID.

CreatedByUserEmail string optional
CreatedByUserID string optional
format: uint64
ErrorDescription string optional

The error description.

EvaluationRunUuid string optional

Evaluation run UUID.

EvaluationTestCaseWorkspaceUuid string optional

Evaluation test case workspace UUID.

FinishedAt Time optional

Run end time.

format: date-time
PassStatus bool optional

The pass status of the evaluation run based on the star metric.

QueuedAt Time optional

Run queued time.

format: date-time
RunLevelMetricResults []APIEvaluationMetricResult optional
ErrorDescription string optional

Error description if the metric could not be calculated.

MetricName string optional

Metric name.

MetricValueType APIEvaluationMetricResultMetricValueType optional
Accepts one of the following:
const APIEvaluationMetricResultMetricValueTypeMetricValueTypeUnspecified APIEvaluationMetricResultMetricValueType = "METRIC_VALUE_TYPE_UNSPECIFIED"
const APIEvaluationMetricResultMetricValueTypeMetricValueTypeNumber APIEvaluationMetricResultMetricValueType = "METRIC_VALUE_TYPE_NUMBER"
const APIEvaluationMetricResultMetricValueTypeMetricValueTypeString APIEvaluationMetricResultMetricValueType = "METRIC_VALUE_TYPE_STRING"
const APIEvaluationMetricResultMetricValueTypeMetricValueTypePercentage APIEvaluationMetricResultMetricValueType = "METRIC_VALUE_TYPE_PERCENTAGE"
NumberValue float64 optional

The value of the metric as a number.

format: double
Reasoning string optional

Reasoning of the metric result.

StringValue string optional

The value of the metric as a string.

RunName string optional

Run name.

StarMetricResult APIEvaluationMetricResult optional
ErrorDescription string optional

Error description if the metric could not be calculated.

MetricName string optional

Metric name.

MetricValueType APIEvaluationMetricResultMetricValueType optional
Accepts one of the following:
const APIEvaluationMetricResultMetricValueTypeMetricValueTypeUnspecified APIEvaluationMetricResultMetricValueType = "METRIC_VALUE_TYPE_UNSPECIFIED"
const APIEvaluationMetricResultMetricValueTypeMetricValueTypeNumber APIEvaluationMetricResultMetricValueType = "METRIC_VALUE_TYPE_NUMBER"
const APIEvaluationMetricResultMetricValueTypeMetricValueTypeString APIEvaluationMetricResultMetricValueType = "METRIC_VALUE_TYPE_STRING"
const APIEvaluationMetricResultMetricValueTypeMetricValueTypePercentage APIEvaluationMetricResultMetricValueType = "METRIC_VALUE_TYPE_PERCENTAGE"
NumberValue float64 optional

The value of the metric as a number.

format: double
Reasoning string optional

Reasoning of the metric result.

StringValue string optional

The value of the metric as a string.

StartedAt Time optional

Run start time.

format: date-time
Status APIEvaluationRunStatus optional

Evaluation run statuses.

Accepts one of the following:
const APIEvaluationRunStatusEvaluationRunStatusUnspecified APIEvaluationRunStatus = "EVALUATION_RUN_STATUS_UNSPECIFIED"
const APIEvaluationRunStatusEvaluationRunQueued APIEvaluationRunStatus = "EVALUATION_RUN_QUEUED"
const APIEvaluationRunStatusEvaluationRunRunningDataset APIEvaluationRunStatus = "EVALUATION_RUN_RUNNING_DATASET"
const APIEvaluationRunStatusEvaluationRunEvaluatingResults APIEvaluationRunStatus = "EVALUATION_RUN_EVALUATING_RESULTS"
const APIEvaluationRunStatusEvaluationRunCancelling APIEvaluationRunStatus = "EVALUATION_RUN_CANCELLING"
const APIEvaluationRunStatusEvaluationRunCancelled APIEvaluationRunStatus = "EVALUATION_RUN_CANCELLED"
const APIEvaluationRunStatusEvaluationRunSuccessful APIEvaluationRunStatus = "EVALUATION_RUN_SUCCESSFUL"
const APIEvaluationRunStatusEvaluationRunPartiallySuccessful APIEvaluationRunStatus = "EVALUATION_RUN_PARTIALLY_SUCCESSFUL"
const APIEvaluationRunStatusEvaluationRunFailed APIEvaluationRunStatus = "EVALUATION_RUN_FAILED"
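The listing does not mark which statuses are final, so the grouping below is an interpretation: it assumes the cancelled, successful, partially successful, and failed states are terminal, while queued, running, evaluating, and cancelling are transitional. A small helper sketch under that assumption:

```go
package main

import "fmt"

// isTerminal reports whether an APIEvaluationRunStatus value is assumed to
// be a final state, i.e. the run will not progress further and results (or
// an error description) can be read.
func isTerminal(status string) bool {
	switch status {
	case "EVALUATION_RUN_SUCCESSFUL",
		"EVALUATION_RUN_PARTIALLY_SUCCESSFUL",
		"EVALUATION_RUN_FAILED",
		"EVALUATION_RUN_CANCELLED":
		return true
	}
	return false
}

func main() {
	fmt.Println(isTerminal("EVALUATION_RUN_QUEUED"), isTerminal("EVALUATION_RUN_FAILED"))
}
```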
TestCaseDescription string optional

Test case description.

TestCaseName string optional

Test case name.

TestCaseUuid string optional

Test case UUID.

TestCaseVersion int64 optional

Test case version.

format: int64