Retrieve Results
Retrieve Results of an Evaluation Run Prompt
agents.evaluation_runs.retrieve_results(prompt_id: int, **kwargs) -> EvaluationRunRetrieveResultsResponse
GET /v2/gen-ai/evaluation_runs/{evaluation_run_uuid}/results/{prompt_id}
To retrieve results of an evaluation run, send a GET request to /v2/gen-ai/evaluation_runs/{evaluation_run_uuid}/results/{prompt_id}.
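For reference, here is a minimal sketch of calling the endpoint directly over HTTP with the requests library rather than the SDK. It assumes a DigitalOcean API token in a DIGITALOCEAN_ACCESS_TOKEN environment variable and the standard https://api.digitalocean.com base URL; the SDK call shown further below is the supported path.

import os
import requests

# Direct call to the endpoint described above; bearer-token auth and the
# base URL follow standard DigitalOcean API conventions (assumed here).
API_BASE = "https://api.digitalocean.com"
evaluation_run_uuid = "123e4567-e89b-12d3-a456-426614174000"
prompt_id = 1

resp = requests.get(
    f"{API_BASE}/v2/gen-ai/evaluation_runs/{evaluation_run_uuid}/results/{prompt_id}",
    headers={"Authorization": f"Bearer {os.environ['DIGITALOCEAN_ACCESS_TOKEN']}"},
)
resp.raise_for_status()
print(resp.json()["prompt"]["prompt_id"])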
Parameters
evaluation_run_uuid: str
prompt_id: int
Returns
EvaluationRunRetrieveResultsResponse (class)
from do_gradientai import GradientAI

client = GradientAI()

# Retrieve the per-prompt results for a specific evaluation run.
response = client.agents.evaluation_runs.retrieve_results(
    prompt_id=1,
    evaluation_run_uuid="123e4567-e89b-12d3-a456-426614174000",
)
print(response.prompt)
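Beyond the prompt text itself, the response lists the retrieval chunks considered for the prompt. A short sketch of inspecting them, assuming the response object's attributes mirror the JSON field names shown in the 200 example below:

# Inspect which knowledge-base chunks were actually used for this prompt.
# Attribute names are assumed to mirror the JSON fields in the 200 example.
for chunk in response.prompt.prompt_chunks or []:
    if chunk.chunk_used:
        print(f"{chunk.source_name} ({chunk.index_uuid}): {chunk.chunk_usage_pct}% used")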
200 Example
{
  "prompt": {
    "ground_truth": "example string",
    "input": "example string",
    "input_tokens": "12345",
    "output": "example string",
    "output_tokens": "12345",
    "prompt_chunks": [
      {
        "chunk_usage_pct": 123,
        "chunk_used": true,
        "index_uuid": "123e4567-e89b-12d3-a456-426614174000",
        "source_name": "example name",
        "text": "example string"
      }
    ],
    "prompt_id": 123,
    "prompt_level_metric_results": [
      {
        "error_description": "example string",
        "metric_name": "example name",
        "metric_value_type": "METRIC_VALUE_TYPE_UNSPECIFIED",
        "number_value": 123,
        "reasoning": "example string",
        "string_value": "example string"
      }
    ]
  }
}
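Each entry in prompt_level_metric_results carries its value in either number_value or string_value, with metric_value_type indicating how to interpret it. A minimal sketch of reading the values, assuming attribute names mirror the JSON fields above:

# Print each prompt-level metric, preferring the numeric value when present.
# Field names are assumed to mirror the JSON shown in the 200 example.
for result in response.prompt.prompt_level_metric_results or []:
    if result.error_description:
        print(f"{result.metric_name}: error - {result.error_description}")
    elif result.number_value is not None:
        print(f"{result.metric_name} = {result.number_value}")
    else:
        print(f"{result.metric_name} = {result.string_value}")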