Merged
Changes from 16 commits
93 changes: 93 additions & 0 deletions doc/source/serve/advanced-guides/advanced-autoscaling.md
@@ -723,3 +723,96 @@ When your custom autoscaling policy has complex dependencies or you want better
- **Contribute to Ray Serve**: If your policy is general-purpose and might benefit others, consider contributing it to Ray Serve as a built-in policy by opening a feature request or pull request on the [Ray GitHub repository](https://github.com/ray-project/ray/issues). The recommended location for the implementation is `python/ray/serve/autoscaling_policy.py`.
- **Ensure dependencies in your environment**: Make sure that the external dependencies are installed in your Docker image or environment.
:::


(serve-external-scale-api)=

### External scaling API

:::{warning}
This API is in alpha and may change before becoming stable.
:::

The external scaling API provides programmatic control over the number of replicas for any deployment in your Ray Serve application. Unlike Ray Serve's built-in autoscaling, which scales based on queue depth and ongoing requests, this API allows you to scale based on any external criteria you define.

#### Example: Predictive scaling

This example shows how to implement predictive scaling based on historical patterns or forecasts: an external script scales the deployment up before anticipated traffic spikes by adjusting replica counts based on the time of day.

##### Define the deployment

The following example creates a simple text processing deployment that you can scale externally. Save this code to a file named `external_scaler_predictive.py`:

```{literalinclude} ../doc_code/external_scaler_predictive.py
:language: python
:start-after: __serve_example_begin__
:end-before: __serve_example_end__
```

##### Configure external scaling

Before using the external scaling API, enable it in your application configuration by setting `external_scaler_enabled: true`. Save this configuration to a file named `external_scaler_config.yaml`:

```{literalinclude} ../doc_code/external_scaler_config.yaml
:language: yaml
:start-after: __external_scaler_config_begin__
:end-before: __external_scaler_config_end__
```

:::{warning}
External scaling and built-in autoscaling are mutually exclusive. You can't use both for the same application. If you set `external_scaler_enabled: true`, you **must not** configure `autoscaling_config` on any deployment in that application. Attempting to use both results in an error.
:::

##### Implement the scaling logic

The following script implements predictive scaling based on time of day and historical traffic patterns. Save this script to a file named `external_scaler_predictive_client.py`:

```{literalinclude} ../doc_code/external_scaler_predictive_client.py
:language: python
:start-after: __client_script_begin__
:end-before: __client_script_end__
```

The script uses the external scaling API endpoint to scale deployments:
- **API endpoint**: `POST http://localhost:8265/api/v1/applications/{application_name}/deployments/{deployment_name}/scale`
- **Request body**: `{"target_num_replicas": <number>}` (must conform to the [`ScaleDeploymentRequest`](../api/doc/ray.serve.schema.ScaleDeploymentRequest.rst) schema)
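As a concrete sketch of the endpoint and request-body shapes above, the following illustrative helper builds a scale request for the names used in this guide (`my-app`, `TextProcessor`). The helper itself is not part of the Serve API:

```python
import json

SERVE_ENDPOINT = "http://localhost:8265"  # Ray dashboard address.


def build_scale_request(app_name: str, deployment_name: str, target: int):
    """Build the URL and JSON body for an external scale request."""
    url = (
        f"{SERVE_ENDPOINT}/api/v1/applications/{app_name}"
        f"/deployments/{deployment_name}/scale"
    )
    body = json.dumps({"target_num_replicas": target})
    return url, body


url, body = build_scale_request("my-app", "TextProcessor", 5)
# Send it with, for example:
# requests.post(url, data=body, headers={"Content-Type": "application/json"})
```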

The scaling client continuously adjusts the number of replicas based on the time of day:
- Business hours (9 AM - 5 PM): 10 replicas
- Off-peak hours: 3 replicas
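The schedule above reduces to a one-line decision function. This is a sketch of the same logic the client script uses:

```python
def target_replicas(hour: int) -> int:
    """Return the target replica count for a given hour (0-23)."""
    # Business hours (9 AM-5 PM): 10 replicas; off-peak: 3 replicas.
    return 10 if 9 <= hour < 17 else 3
```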

##### Run the example

Follow these steps to run the complete example:

1. Start the Ray Serve application with the configuration:

```bash
serve run external_scaler_config.yaml
```

2. Run the predictive scaling client in a separate terminal:

```bash
python external_scaler_predictive_client.py
```

The client adjusts replica counts automatically based on the time of day. You can monitor the scaling behavior in the Ray dashboard or by checking the application logs.

#### Important considerations

Understanding how the external scaler interacts with your deployments helps you build reliable scaling logic:

- **Idempotent API calls**: The scaling API is idempotent. You can safely call it multiple times with the same `target_num_replicas` value without side effects. This makes it safe to run your scaling logic on a schedule or in response to repeated metric updates.

- **Interaction with serve deploy**: When you upgrade your service with `serve deploy`, the number of replicas you set through the external scaler API stays intact. This behavior matches what you'd expect from Ray Serve's built-in autoscaler—deployment updates don't reset replica counts.

- **Query current replica count**: You can get the current number of replicas for any deployment by querying the `GET /api/serve/applications/` endpoint:

```bash
curl -X GET http://localhost:8265/api/serve/applications/ \
```

The response follows the [`ServeInstanceDetails`](../api/doc/ray.serve.schema.ServeInstanceDetails.rst) schema, which includes an `applications` field containing a dictionary with application names as keys. Each application includes detailed information about all its deployments, including current replica counts. Use this information to make informed scaling decisions. For example, you might scale up gradually by adding a percentage of existing replicas rather than jumping to a fixed number.

- **Initial replica count**: When you deploy an application for the first time, Ray Serve creates the number of replicas specified in the `num_replicas` field of your deployment configuration. The external scaler can then adjust this count dynamically based on your scaling logic.
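As an illustration of the gradual scale-up strategy mentioned above, the following hypothetical helper (not part of Ray Serve; the function name and parameters are assumptions) computes a target by growing the current replica count by a percentage, up to a cap:

```python
import math


def gradual_target(current: int, factor: float = 0.25, max_replicas: int = 20) -> int:
    """Grow by a fraction of the current replica count instead of
    jumping straight to a fixed number."""
    step = max(1, math.ceil(current * factor))
    return min(max_replicas, current + step)
```

You would feed `current` from the `GET /api/serve/applications/` response and pass the result as `target_num_replicas` to the scale endpoint.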
10 changes: 10 additions & 0 deletions doc/source/serve/doc_code/external_scaler_config.yaml
@@ -0,0 +1,10 @@
# __external_scaler_config_begin__
applications:
- name: my-app
  import_path: my_module:app
  external_scaler_enabled: true
  deployments:
  - name: my-deployment
    num_replicas: 1
# __external_scaler_config_end__

35 changes: 35 additions & 0 deletions doc/source/serve/doc_code/external_scaler_predictive.py
@@ -0,0 +1,35 @@
# __serve_example_begin__
import time
from typing import Any

from ray import serve


@serve.deployment(num_replicas=3)
class TextProcessor:
    """A simple text processing deployment that can be scaled externally."""

    def __init__(self):
        self.request_count = 0

    def __call__(self, text: Any) -> dict:
        # Simulate text processing work.
        time.sleep(0.1)
        self.request_count += 1
        return {
            "request_count": self.request_count,
        }


app = TextProcessor.bind()
# __serve_example_end__

if __name__ == "__main__":
    import requests

    serve.run(app)

    # Test the deployment.
    resp = requests.post(
        "http://localhost:8000/",
        json="hello world",
    )
    print(f"Response: {resp.json()}")

82 changes: 82 additions & 0 deletions doc/source/serve/doc_code/external_scaler_predictive_client.py
@@ -0,0 +1,82 @@
# __client_script_begin__
import logging
import time
from datetime import datetime

import requests

APPLICATION_NAME = "my-app"
DEPLOYMENT_NAME = "TextProcessor"
SERVE_ENDPOINT = "http://localhost:8265"
SCALING_INTERVAL = 300  # Check every 5 minutes.

logger = logging.getLogger(__name__)


def get_current_replicas(app_name: str, deployment_name: str) -> int:
    """Get current replica count. Returns -1 on error."""
    try:
        resp = requests.get(
            f"{SERVE_ENDPOINT}/api/serve/applications/",
            timeout=10,
        )
        if resp.status_code != 200:
            logger.error(f"Failed to get applications: {resp.status_code}")
            return -1

        apps = resp.json().get("applications", {})
        if app_name not in apps:
            logger.error(f"Application {app_name} not found")
            return -1

        deployments = apps[app_name].get("deployments", {})
        if deployment_name in deployments:
            return deployments[deployment_name]["target_num_replicas"]

        logger.error(f"Deployment {deployment_name} not found")
        return -1
    except requests.exceptions.RequestException as e:
        logger.error(f"Request failed: {e}")
        return -1


def scale_deployment(app_name: str, deployment_name: str):
    """Scale deployment based on time of day."""
    hour = datetime.now().hour
    current = get_current_replicas(app_name, deployment_name)

    # Check if we successfully retrieved the current replica count.
    if current == -1:
        logger.error("Failed to get current replicas, skipping scaling decision")
        return

    target = 10 if 9 <= hour < 17 else 3  # Peak hours: 9am-5pm

    delta = target - current
    if delta == 0:
        logger.info(f"Already at target ({current} replicas)")
        return

    action = "Adding" if delta > 0 else "Removing"
    logger.info(f"{action} {abs(delta)} replicas ({current} -> {target})")

    try:
        resp = requests.post(
            f"{SERVE_ENDPOINT}/api/v1/applications/{app_name}/deployments/{deployment_name}/scale",
            headers={"Content-Type": "application/json"},
            json={"target_num_replicas": target},
            timeout=10,
        )
        if resp.status_code == 200:
            logger.info("Successfully scaled deployment")
        else:
            logger.error(f"Scale failed: {resp.status_code} - {resp.text}")
    except requests.exceptions.RequestException as e:
        logger.error(f"Request failed: {e}")


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    logger.info(f"Starting predictive scaling for {APPLICATION_NAME}/{DEPLOYMENT_NAME}")
    while True:
        scale_deployment(APPLICATION_NAME, DEPLOYMENT_NAME)
        time.sleep(SCALING_INTERVAL)
# __client_script_end__
4 changes: 3 additions & 1 deletion doc/source/serve/production-guide/config.md
@@ -40,7 +40,8 @@ applications:
- name: ...
  route_prefix: ...
  import_path: ...
  runtime_env: ...
  external_scaler_enabled: ...
  deployments:
  - name: ...
    num_replicas: ...
@@ -99,6 +100,7 @@ These are the fields per `application`:
- **`route_prefix`**: An application can be called via HTTP at the specified route prefix. It defaults to `/`. The route prefix for each application must be unique.
- **`import_path`**: The path to your top-level Serve deployment (or the same path passed to `serve run`). The most minimal config file consists of only an `import_path`.
- **`runtime_env`**: Defines the environment that the application runs in. Use this parameter to package application dependencies such as `pip` packages (see {ref}`Runtime Environments <runtime-environments>` for supported fields). The `import_path` must be available _within_ the `runtime_env` if it's specified. The Serve config's `runtime_env` can only use [remote URIs](remote-uris) in its `working_dir` and `py_modules`; it can't use local zip files or directories. [More details on runtime env](serve-runtime-env).
- **`external_scaler_enabled`**: Enables the external scaling API, which lets you scale deployments from outside the Ray cluster using a REST API. When enabled, you can't use built-in autoscaling (`autoscaling_config`) for any deployment in this application. Defaults to `False`. See [External Scaling API](serve-external-scale-api) for details.
- **`deployments (optional)`**: A list of deployment options that allows you to override the `@serve.deployment` settings specified in the deployment graph code. Each entry in this list must include the deployment `name`, which must match one in the code. If this section is omitted, Serve launches all deployments in the graph with the parameters specified in the code. See how to [configure serve deployment options](serve-configure-deployment).
- **`args`**: Arguments that are passed to the [application builder](serve-app-builder-guide).
