## List

`agents.evaluation_metrics.list() -> EvaluationMetricListResponse`

**get** `/v2/gen-ai/evaluation_metrics`

To list all evaluation metrics, send a GET request to `/v2/gen-ai/evaluation_metrics`.

### Returns

- `class EvaluationMetricListResponse`
  - **metrics:** `Optional[List[APIEvaluationMetric]]`
    - **description:** `Optional[str]`
    - **inverted:** `Optional[bool]`

      If true, the metric is inverted, meaning that a lower value is better.

    - **metric\_name:** `Optional[str]`
    - **metric\_type:** `Optional[Literal["METRIC_TYPE_UNSPECIFIED", "METRIC_TYPE_GENERAL_QUALITY", "METRIC_TYPE_RAG_AND_TOOL"]]`
      - `"METRIC_TYPE_UNSPECIFIED"`
      - `"METRIC_TYPE_GENERAL_QUALITY"`
      - `"METRIC_TYPE_RAG_AND_TOOL"`
    - **metric\_uuid:** `Optional[str]`
    - **metric\_value\_type:** `Optional[Literal["METRIC_VALUE_TYPE_UNSPECIFIED", "METRIC_VALUE_TYPE_NUMBER", "METRIC_VALUE_TYPE_STRING", "METRIC_VALUE_TYPE_PERCENTAGE"]]`
      - `"METRIC_VALUE_TYPE_UNSPECIFIED"`
      - `"METRIC_VALUE_TYPE_NUMBER"`
      - `"METRIC_VALUE_TYPE_STRING"`
      - `"METRIC_VALUE_TYPE_PERCENTAGE"`
    - **range\_max:** `Optional[float]`

      The maximum value for the metric.

    - **range\_min:** `Optional[float]`

      The minimum value for the metric.

### Example

```python
from do_gradientai import GradientAI

client = GradientAI()

evaluation_metrics = client.agents.evaluation_metrics.list()
print(evaluation_metrics.metrics)
```
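
Each entry in `metrics` carries the fields documented above. As a minimal sketch of working with the response (assuming, as in the example above, that the client resolves credentials from the environment), you can guard against the optional `metrics` field and report each metric's range and scoring direction:

```python
from do_gradientai import GradientAI

client = GradientAI()  # credentials resolved from the environment, as above

response = client.agents.evaluation_metrics.list()

# `metrics` is Optional, so fall back to an empty list before iterating.
for metric in response.metrics or []:
    # An inverted metric scores better at lower values.
    direction = "lower is better" if metric.inverted else "higher is better"
    print(
        f"{metric.metric_name} ({metric.metric_type}): "
        f"range [{metric.range_min}, {metric.range_max}], {direction}"
    )
```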