## List Evaluation Runs

`agents.evaluation_test_cases.list_evaluation_runs(evaluation_test_case_uuid: str, **kwargs: EvaluationTestCaseListEvaluationRunsParams) -> EvaluationTestCaseListEvaluationRunsResponse`

**get** `/v2/gen-ai/evaluation_test_cases/{evaluation_test_case_uuid}/evaluation_runs`

To list all evaluation runs by test case, send a GET request to `/v2/gen-ai/evaluation_test_cases/{evaluation_test_case_uuid}/evaluation_runs`.

### Parameters

- **evaluation\_test\_case\_uuid:** `str`

- **evaluation\_test\_case\_version:** `int`

  Version of the test case.

### Returns

- `class EvaluationTestCaseListEvaluationRunsResponse`

  - **evaluation\_runs:** `Optional[List[APIEvaluationRun]]`

    List of evaluation runs.

    - **agent\_deleted:** `Optional[bool]`

      Whether the agent is deleted.

    - **agent\_name:** `Optional[str]`

      Agent name.

    - **agent\_uuid:** `Optional[str]`

      Agent UUID.

    - **agent\_version\_hash:** `Optional[str]`

      Agent version hash.

    - **agent\_workspace\_uuid:** `Optional[str]`

      Agent workspace UUID.

    - **created\_by\_user\_email:** `Optional[str]`

    - **created\_by\_user\_id:** `Optional[str]`

    - **error\_description:** `Optional[str]`

      The error description.

    - **evaluation\_run\_uuid:** `Optional[str]`

      Evaluation run UUID.

    - **evaluation\_test\_case\_workspace\_uuid:** `Optional[str]`

      Evaluation test case workspace UUID.

    - **finished\_at:** `Optional[datetime]`

      Run end time.

    - **pass\_status:** `Optional[bool]`

      The pass status of the evaluation run based on the star metric.

    - **queued\_at:** `Optional[datetime]`

      Run queued time.

    - **run\_level\_metric\_results:** `Optional[List[APIEvaluationMetricResult]]`

      - **error\_description:** `Optional[str]`

        Error description if the metric could not be calculated.

      - **metric\_name:** `Optional[str]`

        Metric name.

      - **metric\_value\_type:** `Optional[Literal["METRIC_VALUE_TYPE_UNSPECIFIED", "METRIC_VALUE_TYPE_NUMBER", "METRIC_VALUE_TYPE_STRING", "METRIC_VALUE_TYPE_PERCENTAGE"]]`

        - `"METRIC_VALUE_TYPE_UNSPECIFIED"`
        - `"METRIC_VALUE_TYPE_NUMBER"`
        - `"METRIC_VALUE_TYPE_STRING"`
        - `"METRIC_VALUE_TYPE_PERCENTAGE"`

      - **number\_value:** `Optional[float]`

        The value of the metric as a number.

      - **reasoning:** `Optional[str]`

        Reasoning of the metric result.

      - **string\_value:** `Optional[str]`

        The value of the metric as a string.

    - **run\_name:** `Optional[str]`

      Run name.

    - **star\_metric\_result:** `Optional[APIEvaluationMetricResult]`

    - **started\_at:** `Optional[datetime]`

      Run start time.

    - **status:** `Optional[Literal["EVALUATION_RUN_STATUS_UNSPECIFIED", "EVALUATION_RUN_QUEUED", "EVALUATION_RUN_RUNNING_DATASET", 6 more]]`

      Evaluation run statuses.

      - `"EVALUATION_RUN_STATUS_UNSPECIFIED"`
      - `"EVALUATION_RUN_QUEUED"`
      - `"EVALUATION_RUN_RUNNING_DATASET"`
      - `"EVALUATION_RUN_EVALUATING_RESULTS"`
      - `"EVALUATION_RUN_CANCELLING"`
      - `"EVALUATION_RUN_CANCELLED"`
      - `"EVALUATION_RUN_SUCCESSFUL"`
      - `"EVALUATION_RUN_PARTIALLY_SUCCESSFUL"`
      - `"EVALUATION_RUN_FAILED"`

    - **test\_case\_description:** `Optional[str]`

      Test case description.

    - **test\_case\_name:** `Optional[str]`

      Test case name.

    - **test\_case\_uuid:** `Optional[str]`

      Test case UUID.

    - **test\_case\_version:** `Optional[int]`

      Test case version.

### Example

```python
from do_gradientai import GradientAI

client = GradientAI()

response = client.agents.evaluation_test_cases.list_evaluation_runs(
    evaluation_test_case_uuid="123e4567-e89b-12d3-a456-426614174000",
)
print(response.evaluation_runs)
```
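
Beyond printing the raw response, a common follow-up is to pin a specific test-case version and inspect each run's status and star-metric outcome. The sketch below uses only the parameters and response fields documented above; the UUID and version number are placeholders, not real identifiers.

```python
from do_gradientai import GradientAI

client = GradientAI()

# Placeholder identifiers -- substitute your own test case UUID and version.
response = client.agents.evaluation_test_cases.list_evaluation_runs(
    evaluation_test_case_uuid="123e4567-e89b-12d3-a456-426614174000",
    evaluation_test_case_version=1,
)

# Every response field is Optional, so guard against None before iterating.
for run in response.evaluation_runs or []:
    # `status` is one of the EVALUATION_RUN_* literals listed above;
    # `pass_status` reflects the star metric once the run has finished.
    print(run.run_name, run.status, run.pass_status)
```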