# Autoscale

## Create

`gpu_droplets.autoscale.create(AutoscaleCreateParams **kwargs) -> AutoscaleCreateResponse`

**post** `/v2/droplets/autoscale`

To create a new autoscale pool, send a POST request to `/v2/droplets/autoscale` setting the required attributes.

The response body will contain a JSON object with a key called `autoscale_pool` containing the standard attributes for the new autoscale pool.

### Parameters

- **config:** `Config` The scaling configuration for an autoscale pool, which is how the pool scales up and down (either by resource utilization or static configuration).
  - `AutoscalePoolStaticConfigParam`
  - `AutoscalePoolDynamicConfigParam`
- **droplet\_template:** `AutoscalePoolDropletTemplateParam`
- **name:** `str` The human-readable name of the autoscale pool. This field cannot be updated.

### Returns

- `class AutoscaleCreateResponse`
  - **autoscale\_pool:** `Optional[AutoscalePool]`

### Example

```python
from gradient import Gradient

client = Gradient()

autoscale = client.gpu_droplets.autoscale.create(
    config={
        "min_instances": 1,
        "max_instances": 5,
        "target_cpu_utilization": 0.5,
        "cooldown_minutes": 10,
    },
    droplet_template={
        "name": "example.com",
        "region": "nyc3",
        "size": "c-2",
        "image": "ubuntu-20-04-x64",
        "ssh_keys": ["3b:16:e4:bf:8b:00:8b:b8:59:8c:a9:d3:f0:19:fa:45"],
        "backups": True,
        "ipv6": True,
        "monitoring": True,
        "tags": ["env:prod", "web"],
        "user_data": "#cloud-config\nruncmd:\n - touch /test.txt\n",
        "vpc_uuid": "760e09ef-dc84-11e8-981e-3cfdfeaae000",
    },
    name="my-autoscale-pool",
)
print(autoscale.autoscale_pool)
```
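The `config` object accepts either the dynamic shape used above or a static configuration that pins the pool to a fixed Droplet count. As a minimal sketch (the pool name and template values here are illustrative placeholders, not values required by this reference), the same call with a static configuration could look like this:

```python
from gradient import Gradient

client = Gradient()

# Static configuration: the pool keeps a fixed number of Droplets instead of
# scaling on CPU or memory utilization (AutoscalePoolStaticConfigParam).
autoscale = client.gpu_droplets.autoscale.create(
    config={"target_number_instances": 3},
    droplet_template={
        "name": "example.com",  # placeholder template values
        "region": "nyc3",
        "size": "c-2",
        "image": "ubuntu-20-04-x64",
        "ssh_keys": ["3b:16:e4:bf:8b:00:8b:b8:59:8c:a9:d3:f0:19:fa:45"],
    },
    name="my-static-pool",
)
print(autoscale.autoscale_pool)
```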
## Retrieve

`gpu_droplets.autoscale.retrieve(str autoscale_pool_id) -> AutoscaleRetrieveResponse`

**get** `/v2/droplets/autoscale/{autoscale_pool_id}`

To show information about an individual autoscale pool, send a GET request to `/v2/droplets/autoscale/$AUTOSCALE_POOL_ID`.

### Parameters

- **autoscale\_pool\_id:** `str`

### Returns

- `class AutoscaleRetrieveResponse`
  - **autoscale\_pool:** `Optional[AutoscalePool]`

### Example

```python
from gradient import Gradient

client = Gradient()

autoscale = client.gpu_droplets.autoscale.retrieve(
    "autoscale_pool_id",
)
print(autoscale.autoscale_pool)
```

## Update

`gpu_droplets.autoscale.update(str autoscale_pool_id, AutoscaleUpdateParams **kwargs) -> AutoscaleUpdateResponse`

**put** `/v2/droplets/autoscale/{autoscale_pool_id}`

To update the configuration of an existing autoscale pool, send a PUT request to `/v2/droplets/autoscale/$AUTOSCALE_POOL_ID`. The request must contain a full representation of the autoscale pool, including existing attributes.

### Parameters

- **autoscale\_pool\_id:** `str`
- **config:** `Config` The scaling configuration for an autoscale pool, which is how the pool scales up and down (either by resource utilization or static configuration).
  - `AutoscalePoolStaticConfigParam`
  - `AutoscalePoolDynamicConfigParam`
- **droplet\_template:** `AutoscalePoolDropletTemplateParam`
- **name:** `str` The human-readable name of the autoscale pool. This field cannot be updated.

### Returns

- `class AutoscaleUpdateResponse`
  - **autoscale\_pool:** `Optional[AutoscalePool]`

### Example

```python
from gradient import Gradient

client = Gradient()

autoscale = client.gpu_droplets.autoscale.update(
    autoscale_pool_id="0d3db13e-a604-4944-9827-7ec2642d32ac",
    config={"target_number_instances": 2},
    droplet_template={
        "name": "example.com",
        "region": "nyc3",
        "size": "c-2",
        "image": "ubuntu-20-04-x64",
        "ssh_keys": ["3b:16:e4:bf:8b:00:8b:b8:59:8c:a9:d3:f0:19:fa:45"],
        "backups": True,
        "ipv6": True,
        "monitoring": True,
        "tags": ["env:prod", "web"],
        "user_data": "#cloud-config\nruncmd:\n - touch /test.txt\n",
        "vpc_uuid": "760e09ef-dc84-11e8-981e-3cfdfeaae000",
    },
    name="my-autoscale-pool",
)
print(autoscale.autoscale_pool)
```

## List

`gpu_droplets.autoscale.list(AutoscaleListParams **kwargs) -> AutoscaleListResponse`

**get** `/v2/droplets/autoscale`

To list all autoscale pools in your team, send a GET request to `/v2/droplets/autoscale`.

The response body will be a JSON object with a key of `autoscale_pools` containing an array of autoscale pool objects. Each of these contains the standard autoscale pool attributes.

### Parameters

- **name:** `str` The name of the autoscale pool.
- **page:** `int` Which 'page' of paginated results to return.
- **per\_page:** `int` Number of items returned per page.

### Returns

- `class AutoscaleListResponse`
  - **meta:** `MetaProperties` Information about the response itself.
  - **autoscale\_pools:** `Optional[List[AutoscalePool]]`
    - **id:** `str` A unique identifier for each autoscale pool instance. This is automatically generated upon autoscale pool creation.
    - **active\_resources\_count:** `int` The number of active Droplets in the autoscale pool.
    - **config:** `Config` The scaling configuration for an autoscale pool, which is how the pool scales up and down (either by resource utilization or static configuration).
      - `AutoscalePoolStaticConfig`
      - `AutoscalePoolDynamicConfig`
    - **created\_at:** `datetime` A time value given in ISO8601 combined date and time format that represents when the autoscale pool was created.
    - **droplet\_template:** `AutoscalePoolDropletTemplate`
    - **name:** `str` The human-readable name set for the autoscale pool.
    - **status:** `Literal["active", "deleting", "error"]` The current status of the autoscale pool.
      - `"active"`
      - `"deleting"`
      - `"error"`
    - **updated\_at:** `datetime` A time value given in ISO8601 combined date and time format that represents when the autoscale pool was last updated.
    - **current\_utilization:** `Optional[CurrentUtilization]`
  - **links:** `Optional[PageLinks]`

### Example

```python
from gradient import Gradient

client = Gradient()

autoscales = client.gpu_droplets.autoscale.list()
print(autoscales.meta)
```
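The `page` and `per_page` parameters paginate the listing. Below is a minimal sketch that collects every pool by walking pages until a short page signals the end; it relies only on the parameters and response fields documented above, and the page size of 50 is an arbitrary choice:

```python
from gradient import Gradient

client = Gradient()

page = 1
per_page = 50
pools = []

# Fetch pages until a short (or empty) page signals the end of the listing.
while True:
    resp = client.gpu_droplets.autoscale.list(page=page, per_page=per_page)
    batch = resp.autoscale_pools or []
    pools.extend(batch)
    if len(batch) < per_page:
        break
    page += 1

for pool in pools:
    print(pool.id, pool.name, pool.status)
```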
## Delete

`gpu_droplets.autoscale.delete(str autoscale_pool_id)`

**delete** `/v2/droplets/autoscale/{autoscale_pool_id}`

To destroy an autoscale pool, send a DELETE request to the `/v2/droplets/autoscale/$AUTOSCALE_POOL_ID` endpoint.

A successful response will include a 202 response code and no content.

### Parameters

- **autoscale\_pool\_id:** `str`

### Example

```python
from gradient import Gradient

client = Gradient()

client.gpu_droplets.autoscale.delete(
    "autoscale_pool_id",
)
```

## Delete Dangerous

`gpu_droplets.autoscale.delete_dangerous(str autoscale_pool_id, AutoscaleDeleteDangerousParams **kwargs)`

**delete** `/v2/droplets/autoscale/{autoscale_pool_id}/dangerous`

To destroy an autoscale pool and its associated resources (Droplets), send a DELETE request to the `/v2/droplets/autoscale/$AUTOSCALE_POOL_ID/dangerous` endpoint.

### Parameters

- **autoscale\_pool\_id:** `str`
- **x\_dangerous:** `bool`

### Example

```python
from gradient import Gradient

client = Gradient()

client.gpu_droplets.autoscale.delete_dangerous(
    autoscale_pool_id="0d3db13e-a604-4944-9827-7ec2642d32ac",
    x_dangerous=True,
)
```

## List History

`gpu_droplets.autoscale.list_history(str autoscale_pool_id, AutoscaleListHistoryParams **kwargs) -> AutoscaleListHistoryResponse`

**get** `/v2/droplets/autoscale/{autoscale_pool_id}/history`

To list all of the scaling history events of an autoscale pool, send a GET request to `/v2/droplets/autoscale/$AUTOSCALE_POOL_ID/history`.

The response body will be a JSON object with a key of `history`. This will be set to an array containing objects, each representing a history event.

### Parameters

- **autoscale\_pool\_id:** `str`
- **page:** `int` Which 'page' of paginated results to return.
- **per\_page:** `int` Number of items returned per page.

### Returns

- `class AutoscaleListHistoryResponse`
  - **meta:** `MetaProperties` Information about the response itself.
  - **history:** `Optional[List[History]]`
    - **created\_at:** `datetime` The creation time of the history event in ISO8601 combined date and time format.
    - **current\_instance\_count:** `int` The current number of Droplets in the autoscale pool.
    - **desired\_instance\_count:** `int` The target number of Droplets for the autoscale pool after the scaling event.
    - **history\_event\_id:** `str` The unique identifier of the history event.
    - **reason:** `Literal["CONFIGURATION_CHANGE", "SCALE_UP", "SCALE_DOWN"]` The reason for the scaling event.
      - `"CONFIGURATION_CHANGE"`
      - `"SCALE_UP"`
      - `"SCALE_DOWN"`
    - **status:** `Literal["in_progress", "success", "error"]` The status of the scaling event.
      - `"in_progress"`
      - `"success"`
      - `"error"`
    - **updated\_at:** `datetime` The last updated time of the history event in ISO8601 combined date and time format.
  - **links:** `Optional[PageLinks]`

### Example

```python
from gradient import Gradient

client = Gradient()

response = client.gpu_droplets.autoscale.list_history(
    autoscale_pool_id="0d3db13e-a604-4944-9827-7ec2642d32ac",
)
print(response.meta)
```
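Each history entry records when the pool changed size, why, and what the target count was, so a quick audit of scaling activity can be assembled from the documented fields alone. A minimal sketch (the pool ID is the same placeholder used in the example above):

```python
from gradient import Gradient

client = Gradient()

response = client.gpu_droplets.autoscale.list_history(
    autoscale_pool_id="0d3db13e-a604-4944-9827-7ec2642d32ac",
)

# Print one line per event: when it happened, why, and the resulting instance counts.
for event in response.history or []:
    print(
        f"{event.created_at} {event.reason:>20} "
        f"{event.current_instance_count} -> {event.desired_instance_count} "
        f"({event.status})"
    )
```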
## List Members

`gpu_droplets.autoscale.list_members(str autoscale_pool_id, AutoscaleListMembersParams **kwargs) -> AutoscaleListMembersResponse`

**get** `/v2/droplets/autoscale/{autoscale_pool_id}/members`

To list the Droplets in an autoscale pool, send a GET request to `/v2/droplets/autoscale/$AUTOSCALE_POOL_ID/members`.

The response body will be a JSON object with a key of `droplets`. This will be set to an array containing information about each of the Droplets in the autoscale pool.

### Parameters

- **autoscale\_pool\_id:** `str`
- **page:** `int` Which 'page' of paginated results to return.
- **per\_page:** `int` Number of items returned per page.

### Returns

- `class AutoscaleListMembersResponse`
  - **meta:** `MetaProperties` Information about the response itself.
  - **droplets:** `Optional[List[Droplet]]`
    - **created\_at:** `datetime` The creation time of the Droplet in ISO8601 combined date and time format.
    - **current\_utilization:** `DropletCurrentUtilization`
      - **cpu:** `Optional[float]` The CPU utilization average of the individual Droplet.
      - **memory:** `Optional[float]` The memory utilization average of the individual Droplet.
    - **droplet\_id:** `int` The unique identifier of the Droplet.
    - **health\_status:** `str` The health status of the Droplet.
    - **status:** `Literal["provisioning", "active", "deleting", "off"]` The power status of the Droplet.
      - `"provisioning"`
      - `"active"`
      - `"deleting"`
      - `"off"`
    - **updated\_at:** `datetime` The last updated time of the Droplet in ISO8601 combined date and time format.
  - **links:** `Optional[PageLinks]`

### Example

```python
from gradient import Gradient

client = Gradient()

response = client.gpu_droplets.autoscale.list_members(
    autoscale_pool_id="0d3db13e-a604-4944-9827-7ec2642d32ac",
)
print(response.meta)
```
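Because every member reports its own CPU and memory utilization, the listing can be used to spot-check how load is spread across the pool. A minimal sketch that prints each member and averages the reported CPU values, skipping members without a reading:

```python
from gradient import Gradient

client = Gradient()

response = client.gpu_droplets.autoscale.list_members(
    autoscale_pool_id="0d3db13e-a604-4944-9827-7ec2642d32ac",
)

droplets = response.droplets or []

# Collect per-Droplet CPU readings, skipping members that have not reported yet.
cpu_readings = [
    d.current_utilization.cpu
    for d in droplets
    if d.current_utilization and d.current_utilization.cpu is not None
]

for d in droplets:
    print(d.droplet_id, d.status, d.health_status)

if cpu_readings:
    print(f"average CPU utilization: {sum(cpu_readings) / len(cpu_readings):.2f}")
```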
## Domain Types

### Autoscale Pool

- `class AutoscalePool`
  - **id:** `str` A unique identifier for each autoscale pool instance. This is automatically generated upon autoscale pool creation.
  - **active\_resources\_count:** `int` The number of active Droplets in the autoscale pool.
  - **config:** `Config` The scaling configuration for an autoscale pool, which is how the pool scales up and down (either by resource utilization or static configuration).
    - `AutoscalePoolStaticConfig`
    - `AutoscalePoolDynamicConfig`
  - **created\_at:** `datetime` A time value given in ISO8601 combined date and time format that represents when the autoscale pool was created.
  - **droplet\_template:** `AutoscalePoolDropletTemplate`
  - **name:** `str` The human-readable name set for the autoscale pool.
  - **status:** `Literal["active", "deleting", "error"]` The current status of the autoscale pool.
    - `"active"`
    - `"deleting"`
    - `"error"`
  - **updated\_at:** `datetime` A time value given in ISO8601 combined date and time format that represents when the autoscale pool was last updated.
  - **current\_utilization:** `Optional[CurrentUtilization]`

### Autoscale Pool Droplet Template

- `class AutoscalePoolDropletTemplate`
  - **image:** `str` The Droplet image to be used for all Droplets in the autoscale pool. You may specify the slug or the image ID.
  - **region:** `Literal["nyc1", "nyc2", "nyc3", 11 more]` The datacenter in which all of the Droplets will be created.
    - `"nyc1"`
    - `"nyc2"`
    - `"nyc3"`
    - `"ams2"`
    - `"ams3"`
    - `"sfo1"`
    - `"sfo2"`
    - `"sfo3"`
    - `"sgp1"`
    - `"lon1"`
    - `"fra1"`
    - `"tor1"`
    - `"blr1"`
    - `"syd1"`
  - **size:** `str` The Droplet size to be used for all Droplets in the autoscale pool.
  - **ssh\_keys:** `List[str]` The SSH keys to be installed on the Droplets in the autoscale pool. You can either specify the key ID or the fingerprint. Requires `ssh_key:read` scope.
  - **ipv6:** `Optional[bool]` Assigns a unique IPv6 address to each of the Droplets in the autoscale pool.
  - **name:** `Optional[str]` The name(s) to be applied to all Droplets in the autoscale pool.
  - **project\_id:** `Optional[str]` The project that the Droplets in the autoscale pool will belong to. Requires `project:read` scope.
  - **tags:** `Optional[List[str]]` The tags to apply to each of the Droplets in the autoscale pool. Requires `tag:read` scope.
  - **user\_data:** `Optional[str]` A string containing user data that cloud-init consumes to configure a Droplet on first boot. User data is often a cloud-config file or Bash script. It must be plain text and may not exceed 64 KiB in size.
  - **vpc\_uuid:** `Optional[str]` The VPC where the Droplets in the autoscale pool will be created. The VPC must be in the region where you want to create the Droplets. Requires `vpc:read` scope.
  - **with\_droplet\_agent:** `Optional[bool]` Installs the Droplet agent. This must be set to true to monitor Droplets for resource utilization scaling.

### Autoscale Pool Dynamic Config

- `class AutoscalePoolDynamicConfig`
  - **max\_instances:** `int` The maximum number of Droplets in an autoscale pool.
  - **min\_instances:** `int` The minimum number of Droplets in an autoscale pool.
  - **cooldown\_minutes:** `Optional[int]` The number of minutes to wait between scaling events in an autoscale pool. Defaults to 10 minutes.
  - **target\_cpu\_utilization:** `Optional[float]` Target CPU utilization as a decimal.
  - **target\_memory\_utilization:** `Optional[float]` Target memory utilization as a decimal.

### Autoscale Pool Static Config

- `class AutoscalePoolStaticConfig`
  - **target\_number\_instances:** `int` Fixed number of instances in an autoscale pool.

### Current Utilization

- `class CurrentUtilization`
  - **cpu:** `Optional[float]` The average CPU utilization of the autoscale pool.
  - **memory:** `Optional[float]` The average memory utilization of the autoscale pool.
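`CurrentUtilization` pairs naturally with a dynamic configuration: the observed averages can be compared against the configured targets. A minimal sketch, assuming the pool uses a dynamic configuration and using a placeholder pool ID (the `getattr` calls simply return `None` if the pool happens to use a static configuration):

```python
from gradient import Gradient

client = Gradient()

# Placeholder pool ID; retrieve returns the pool along with its config
# and, when available, its current utilization.
pool = client.gpu_droplets.autoscale.retrieve(
    "0d3db13e-a604-4944-9827-7ec2642d32ac",
).autoscale_pool

if pool and pool.current_utilization:
    util = pool.current_utilization
    print(f"cpu:    {util.cpu}")
    print(f"memory: {util.memory}")
    # For a dynamic configuration, the targets live on the config object.
    print(f"target cpu:    {getattr(pool.config, 'target_cpu_utilization', None)}")
    print(f"target memory: {getattr(pool.config, 'target_memory_utilization', None)}")
```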