## List Results

`agents.evaluation_runs.list_results(str evaluation_run_uuid, EvaluationRunListResultsParams **kwargs) -> EvaluationRunListResultsResponse`

**get** `/v2/gen-ai/evaluation_runs/{evaluation_run_uuid}/results`

To retrieve the results of an evaluation run, send a GET request to `/v2/gen-ai/evaluation_runs/{evaluation_run_uuid}/results`.

### Parameters

- **evaluation\_run\_uuid:** `str`
- **page:** `int`

  Page number.

- **per\_page:** `int`

  Items per page.

### Returns

- `class EvaluationRunListResultsResponse`

  Gets the full results of an evaluation run with all prompts.

  - **evaluation\_run:** `Optional[APIEvaluationRun]`
  - **links:** `Optional[APILinks]`

    Links to other pages.

  - **meta:** `Optional[APIMeta]`

    Meta information about the data set.

  - **prompts:** `Optional[List[APIEvaluationPrompt]]`

    The prompt-level results.

    - **ground\_truth:** `Optional[str]`

      The ground truth for the prompt.

    - **input:** `Optional[str]`
    - **input\_tokens:** `Optional[str]`

      The number of input tokens used in the prompt.

    - **output:** `Optional[str]`
    - **output\_tokens:** `Optional[str]`

      The number of output tokens used in the prompt.

    - **prompt\_chunks:** `Optional[List[PromptChunk]]`

      The list of prompt chunks.

      - **chunk\_usage\_pct:** `Optional[float]`

        The usage percentage of the chunk.

      - **chunk\_used:** `Optional[bool]`

        Indicates if the chunk was used in the prompt.

      - **index\_uuid:** `Optional[str]`

        The index UUID (knowledge base) of the chunk.

      - **source\_name:** `Optional[str]`

        The source name for the chunk, e.g., the file name or document title.

      - **text:** `Optional[str]`

        Text content of the chunk.

    - **prompt\_id:** `Optional[int]`

      Prompt ID.

    - **prompt\_level\_metric\_results:** `Optional[List[APIEvaluationMetricResult]]`

      The metric results for the prompt.

      - **error\_description:** `Optional[str]`

        Error description if the metric could not be calculated.

      - **metric\_name:** `Optional[str]`

        Metric name.

      - **metric\_value\_type:** `Optional[Literal["METRIC_VALUE_TYPE_UNSPECIFIED", "METRIC_VALUE_TYPE_NUMBER", "METRIC_VALUE_TYPE_STRING", "METRIC_VALUE_TYPE_PERCENTAGE"]]`
        - `"METRIC_VALUE_TYPE_UNSPECIFIED"`
        - `"METRIC_VALUE_TYPE_NUMBER"`
        - `"METRIC_VALUE_TYPE_STRING"`
        - `"METRIC_VALUE_TYPE_PERCENTAGE"`
      - **number\_value:** `Optional[float]`

        The value of the metric as a number.

      - **reasoning:** `Optional[str]`

        Reasoning of the metric result.

      - **string\_value:** `Optional[str]`

        The value of the metric as a string.

### Example

```python
from do_gradientai import GradientAI

client = GradientAI()

response = client.agents.evaluation_runs.list_results(
    evaluation_run_uuid="123e4567-e89b-12d3-a456-426614174000",
)
print(response.evaluation_run)
```
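
Building on the example above, the following is a minimal sketch of paging through results and reading the prompt-level metrics using the response fields documented in this section. The UUID and the `page`/`per_page` values are placeholder assumptions, and because every response field is typed `Optional`, the loops guard against `None`.

```python
from do_gradientai import GradientAI

client = GradientAI()

# Placeholder run UUID and pagination values for illustration.
response = client.agents.evaluation_runs.list_results(
    evaluation_run_uuid="123e4567-e89b-12d3-a456-426614174000",
    page=1,
    per_page=10,
)

# Walk the prompt-level results; all fields are Optional, so default to empty lists.
for prompt in response.prompts or []:
    print(f"Prompt {prompt.prompt_id}: {prompt.output_tokens} output tokens")
    for result in prompt.prompt_level_metric_results or []:
        # A metric carries either a numeric or a string value depending on its type.
        value = result.number_value if result.number_value is not None else result.string_value
        print(f"  {result.metric_name}: {value}")
```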