diff --git a/doc/source/serve/llm/architecture/core.md b/doc/source/serve/llm/architecture/core.md new file mode 100644 index 000000000000..9e1b1d37fe73 --- /dev/null +++ b/doc/source/serve/llm/architecture/core.md @@ -0,0 +1,507 @@ +(serve-llm-architecture-core)= +# Core components + +This guide explains the technical implementation details of Ray Serve LLM's core components. You'll learn about the abstractions, protocols, and patterns that enable extensibility and modularity. + +## Core abstractions + +Beyond `LLMServer` and `OpenAiIngress`, Ray Serve LLM defines several core abstractions that enable extensibility and modularity: + +### LLMEngine protocol + +The `LLMEngine` abstract base class defines the contract for all inference engines. This abstraction allows Ray Serve LLM to support multiple engine implementations (vLLM, SGLang, TensorRT-LLM, etc.) with a consistent interface. + +The engine operates at the **OpenAI API level**, not at the raw prompt level. This means: +- It accepts OpenAI-formatted requests (`ChatCompletionRequest`, `CompletionRequest`, etc.). +- It returns OpenAI-formatted responses. +- Engine-specific details (such as tokenization, sampling) are hidden behind this interface. + +#### Key methods + +```python + +class LLMEngine(ABC): + """Base protocol for all LLM engines.""" + + @abstractmethod + async def chat( + self, + request: ChatCompletionRequest + ) -> AsyncGenerator[Union[str, ChatCompletionResponse, ErrorResponse], None]: + """Run a chat completion. + + Yields: + - Streaming: yield "data: \\n\\n" for each chunk. + - Non-streaming: yield single ChatCompletionResponse. + - Error: yield ErrorResponse. + - In all cases, it's still a generator to unify the upper-level logic. + """ + + @abstractmethod + async def completions( + self, + request: CompletionRequest + ) -> AsyncGenerator[Union[str, CompletionResponse, ErrorResponse], None]: + """Run a text completion.""" + + @abstractmethod + async def embeddings( + self, + request: EmbeddingRequest + ) -> AsyncGenerator[Union[EmbeddingResponse, ErrorResponse], None]: + """Generate embeddings.""" + + @abstractmethod + async def start(self): + """Start the engine (async initialization).""" + + @abstractmethod + async def check_health(self) -> bool: + """Check if engine is healthy.""" + + @abstractmethod + async def shutdown(self): + """Gracefully shutdown the engine.""" +``` + +#### Engine implementations + +Ray Serve LLM provides: + +- **VLLMEngine**: Production-ready implementation using vLLM. + - Supports continuous batching and paged attention. + - Supports all kinds of parallelism. + - KV cache transfer for prefill-decode disaggregation. + - Automatic prefix caching (APC). + - LoRA adapter support. + +Future implementations could include: +- **TensorRT-LLM**: NVIDIA's optimized inference engine. +- **SGLang**: Fast serving with RadixAttention. + +Ray Serve LLM deeply integrates with vLLM since it has end-to-end Ray support in the engine, which gives benefits in fine-grained placement of workers and other optimizations. The engine abstraction makes it straightforward to add new implementations without changing the core serving logic. 
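The unified generator contract keeps caller logic simple: streaming and non-streaming requests flow through the same loop. The following is a minimal caller-side sketch (not part of Ray Serve LLM); the request and chunk types are those defined in the protocol above:

```python
# Minimal sketch (assumed helper, not part of Ray Serve LLM): because chat()
# always returns an async generator, callers handle streaming and non-streaming
# results with the same loop and branch only on the chunk type.
async def collect_chat_output(engine, request):
    sse_chunks, responses = [], []
    async for chunk in engine.chat(request):
        if isinstance(chunk, str):   # streaming: "data: ...\n\n" SSE strings
            sse_chunks.append(chunk)
        else:                        # non-streaming response or ErrorResponse
            responses.append(chunk)
    return sse_chunks, responses
```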
+ +### LLMConfig + +`LLMConfig` is the central configuration object that specifies everything needed to deploy an LLM: + +```python +@dataclass +class LLMConfig: + """Configuration for LLM deployment.""" + + # Model loading + model_loading_config: Union[dict, ModelLoadingConfig] + + # Hardware requirements + accelerator_type: Optional[str] = None # For example, "A10G", "L4", "H100" + + # Placement group configuration + placement_group_config: Optional[dict] = None + + # Engine-specific arguments + engine_kwargs: Optional[dict] = None + + # Ray Serve deployment configuration + deployment_config: Optional[dict] = None + + # LoRA adapter configuration + lora_config: Optional[Union[dict, LoraConfig]] = None + + # Runtime environment (env vars, pip packages) + runtime_env: Optional[dict] = None + +``` + +#### Model loading configuration + +The `ModelLoadingConfig` specifies where and how to load the model. The following code shows the configuration structure: + +```python +@dataclass +class ModelLoadingConfig: + """Configuration for model loading.""" + + # Model identifier (used for API requests) + model_id: str + + # Model source (HuggingFace or cloud storage) + model_source: Union[str, dict] + # Examples: + # - "Qwen/Qwen2.5-7B-Instruct" (HuggingFace) + # - {"bucket_uri": "s3://my-bucket/models/qwen-7b"} (S3) +``` + +#### LoRA configuration + +The following code shows the configuration structure for serving multiple LoRA adapters with a shared base model: + +```python +@dataclass +class LoraConfig: + """Configuration for LoRA multiplexing.""" + + # Path to LoRA weights (local or S3/GCS) + dynamic_lora_loading_path: Optional[str] = None + + # Maximum number of adapters per replica + max_num_adapters_per_replica: int = 1 +``` + +Ray Serve's multiplexing feature automatically routes requests to replicas that have the requested LoRA adapter loaded, using an LRU cache for adapter management. + +### Deployment protocols + +Ray Serve LLM defines two key protocols that components must implement: + +#### DeploymentProtocol + +The base protocol for all deployments: + +```python +class DeploymentProtocol(Protocol): + """Base protocol for Ray Serve LLM deployments.""" + + @classmethod + def get_deployment_options(cls, *args, **kwargs) -> dict: + """Return Ray Serve deployment options. + + Returns: + dict: Options including: + - placement_strategy: PlacementGroup configuration + - num_replicas: Initial replica count + - autoscaling_config: Autoscaling parameters + - ray_actor_options: Ray actor options + """ +``` + +This protocol ensures that all deployments can provide their own configuration for placement, scaling, and resources. 
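As an illustration, the following hypothetical deployment satisfies `DeploymentProtocol`; the class name and the returned option values are assumptions, not part of Ray Serve LLM:

```python
# Hypothetical deployment (illustrative only) that satisfies DeploymentProtocol
# by deriving its Serve options from an LLMConfig.
class MyCustomDeployment:
    def __init__(self, llm_config):
        self._llm_config = llm_config

    @classmethod
    def get_deployment_options(cls, llm_config) -> dict:
        # Values are illustrative; a real implementation would derive placement
        # and autoscaling settings from llm_config.
        return {
            "num_replicas": 1,
            "ray_actor_options": {"num_cpus": 1},
        }
```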
+ +#### LLMServerProtocol + +Extended protocol for LLM server deployments: + +```python +class LLMServerProtocol(DeploymentProtocol): + """Protocol for LLM server deployments.""" + + @abstractmethod + async def chat( + self, + request: ChatCompletionRequest, + raw_request: Optional[Request] = None + ) -> AsyncGenerator[Union[str, ChatCompletionResponse, ErrorResponse], None]: + """Handle chat completion request.""" + + @abstractmethod + async def completions( + self, + request: CompletionRequest, + raw_request: Optional[Request] = None + ) -> AsyncGenerator[Union[str, CompletionResponse, ErrorResponse], None]: + """Handle text completion request.""" + + @abstractmethod + async def embeddings( + self, + request: EmbeddingRequest, + raw_request: Optional[Request] = None + ) -> AsyncGenerator[Union[EmbeddingResponse, ErrorResponse], None]: + """Handle embedding request.""" +``` + +This protocol ensures that all LLM server implementations (`LLMServer`, `DPServer`, `PDProxyServer`) provide consistent methods for handling requests. + +## Builder pattern + +Ray Serve LLM uses the builder pattern to separate class definition from deployment decoration. This provides flexibility and testability. + +**Key principle**: Classes aren't decorated with `@serve.deployment`. Decoration happens in builder functions. + +### Why use builders? + +Builders provide two key benefits: + +1. **Flexibility**: Different deployment configurations for the same class. +2. **Production readiness**: You can use builders in YAML files and run `serve run config.yaml` with the target builder module. + +### Builder example + +```python +def my_build_function( + llm_config: LLMConfig, +) -> Deployment: + # Get default options from the class + serve_options = LLMServer.get_deployment_options(llm_config) + + # Merge with user-provided options + serve_options.update(kwargs) + + # Decorate and bind + return serve.deployment(deployment_cls).options( + **serve_options + ).bind(llm_config) +``` + +You can use the builder function in two ways: + +::::{tab-set} + +:::{tab-item} Python +:sync: python + +```python +# serve.py +from ray import serve +from ray.serve.llm import LLMConfig +from my_module import my_build_function + +llm_config = LLMConfig( + model_loading_config=dict( + model_id="qwen-0.5b", + model_source="Qwen/Qwen2.5-0.5B-Instruct", + ), + accelerator_type="A10G", + deployment_config=dict( + autoscaling_config=dict( + min_replicas=1, + max_replicas=2, + ) + ), +) + +app = my_build_function(llm_config) +serve.run(app) +``` + +Run the deployment: + +```bash +python serve.py +``` +::: + +:::{tab-item} YAML +:sync: yaml + +```yaml +# config.yaml +applications: +- args: + llm_config: + model_loading_config: + model_id: qwen-0.5b + model_source: Qwen/Qwen2.5-0.5B-Instruct + accelerator_type: A10G + deployment_config: + autoscaling_config: + min_replicas: 1 + max_replicas: 2 + import_path: my_module:my_build_function + name: custom_llm_deployment + route_prefix: / +``` + +Run the deployment: + +```bash +serve run config.yaml +``` +::: + +:::: + +## Async constructor pattern + +`LLMServer` uses an async constructor to handle engine initialization. This pattern ensures the engine is fully started before the deployment begins serving requests. + +```python +class LLMServer(LLMServerProtocol): + """LLM server deployment.""" + + async def __init__(self, llm_config: LLMConfig, **kwargs): + """Async constructor - returns fully started instance. + + Ray Serve calls this constructor when creating replicas. 
+ By the time this returns, the engine is ready to serve. + """ + super().__init__() + self._init_shared(llm_config, **kwargs) + await self.start() # Start engine immediately + + def _init_shared(self, llm_config: LLMConfig, **kwargs): + """Shared initialization logic.""" + self._llm_config = llm_config + self._engine_cls = self._get_engine_class() + # ... other initialization + + async def start(self): + """Start the underlying engine.""" + self.engine = self._engine_cls(self._llm_config) + await asyncio.wait_for( + self._start_engine(), + timeout=600 + ) + + @classmethod + def sync_init(cls, llm_config: LLMConfig, **kwargs) -> "LLMServer": + """Sync constructor for testing. + + Returns unstarted instance. Caller must call await start(). + """ + instance = cls.__new__(cls) + LLMServerProtocol.__init__(instance) + instance._init_shared(llm_config, **kwargs) + return instance # Not started yet! +``` + +### Why use async constructors? + +Async constructors provide several benefits: + +1. **Engine initialization is async**: Loading models and allocating GPU memory takes time. +2. **Failure detection**: If the engine fails to start, the replica fails immediately. +3. **Explicit control**: Clear distinction between when the server is ready versus initializing. +4. **Testing flexibility**: `sync_init` allows testing without engine startup. + +## Component relationships + +The following diagram shows how core components relate to each other: + +``` +┌─────────────────────────────────────────────────────────┐ +│ RAY SERVE (Foundation) │ +│ @serve.deployment | DeploymentHandle | Routing │ +└────────────────────────┬────────────────────────────────┘ + │ + ┌──────────────────┼──────────────────┐ + │ │ │ + ▼ ▼ ▼ +┌──────────┐ ┌──────────┐ ┌──────────┐ +│ Protocol │ │ Ingress │ │ Config │ +│ │ │ │ │ │ +│ • Deploy │ │ • OpenAI │ │ • LLM │ +│ Proto │ │ API │ │ Config │ +│ • Server │ │ • Model │ │ • Model │ +│ Proto │ │ Routing│ │ Loading│ +└─────┬────┘ └────┬─────┘ └────┬─────┘ + │ │ │ + └────────┬───────┴────────────────────┘ + │ + ▼ + ┌─────────────┐ + │ LLMServer │ + │ │ + │ Implements: │ + │ • Protocol │ + │ │ + │ Uses: │ + │ • Config │ + │ • Engine │ + └──────┬──────┘ + │ + ▼ + ┌─────────────┐ + │ LLMEngine │ + │ (Protocol) │ + │ │ + │ Implemented │ + │ by: │ + │ • VLLMEngine│ + │ • Future... │ + └─────────────┘ +``` + +## Extension points + +The core architecture provides several extension points: + +### Custom engines + +Implement `LLMEngine` protocol to support new inference backends: + +```python +class MyCustomEngine(LLMEngine): + """Custom engine implementation.""" + + async def chat(self, request): + # Your implementation + pass + + # ... implement other methods +``` + +### Custom server implementations + +Extend `LLMServer` or implement `LLMServerProtocol` directly: + +```python +class CustomLLMServer(LLMServer): + """Custom server with additional features.""" + + async def chat(self, request, raw_request=None): + # Add custom preprocessing + modified_request = self.preprocess(request) + + # Call parent implementation + async for chunk in super().chat(modified_request, raw_request): + yield chunk +``` + +### Custom ingress + +Implement your own ingress for custom API formats: + +```python +from typing import List +from ray import serve +from ray.serve import DeploymentHandle + +# Define your FastAPI app or Ray Serve application. 
+# For example: app = Application() + +@serve.ingress(app) +class CustomIngress: + """Custom ingress with non-OpenAI API.""" + + def __init__(self, server_handles: List[DeploymentHandle]): + self.handles = server_handles + + @app.post("/custom/endpoint") + async def custom_endpoint(self, request: "CustomRequest"): + # CustomRequest is a user-defined request model. + # Your custom logic + pass +``` + +### Custom builders + +Create domain-specific builders for common patterns: + +```python +def build_multimodal_deployment( + model_config: dict, + **kwargs +) -> Deployment: + """Builder for multimodal models.""" + llm_config = LLMConfig( + model_loading_config={ + "input_modality": InputModality.MULTIMODAL, + **model_config + }, + engine_kwargs={ + "task": "multimodal", + } + ) + return build_llm_deployment(llm_config, **kwargs) +``` + +These extension points allow you to customize Ray Serve LLM for specific use cases without modifying core code. + +## See also + +- {doc}`overview` - High-level architecture overview +- {doc}`serving-patterns/index` - Detailed serving pattern documentation +- {doc}`routing-policies` - Request routing architecture +- {doc}`../user-guides/index` - Practical deployment guides + diff --git a/doc/source/serve/llm/architecture/index.md b/doc/source/serve/llm/architecture/index.md index 0016682599c4..5481316dd9bf 100644 --- a/doc/source/serve/llm/architecture/index.md +++ b/doc/source/serve/llm/architecture/index.md @@ -1,10 +1,13 @@ # Architecture -Technical details and design documentation for Ray Serve LLM. +Technical documentation for Ray Serve LLM architecture, components, and patterns. ```{toctree} :maxdepth: 1 +Architecture overview +Core components +Serving patterns Request routing ``` diff --git a/doc/source/serve/llm/architecture/overview.md b/doc/source/serve/llm/architecture/overview.md new file mode 100644 index 000000000000..313260d77349 --- /dev/null +++ b/doc/source/serve/llm/architecture/overview.md @@ -0,0 +1,218 @@ +(serve-llm-architecture-overview)= +# Architecture overview + +Ray Serve LLM is a framework that specializes Ray Serve primitives for distributed LLM serving workloads. This guide explains the core components, serving patterns, and routing policies that enable scalable and efficient LLM inference. + +## What Ray Serve LLM provides + +Ray Serve LLM takes the performance of a single inference engine (such as vLLM) and extends it to support: + +- **Horizontal scaling**: Replicate inference across multiple GPUs on the same node or across nodes. +- **Advanced distributed strategies**: Coordinate multiple engine instances for prefill-decode disaggregation, data parallel attention, and expert parallelism. +- **Modular deployment**: Separate infrastructure logic from application logic for clean, maintainable deployments. + +Ray Serve LLM excels at highly distributed multi-node inference workloads where the unit of scale spans multiple nodes: + +- **Pipeline parallelism across nodes**: Serve large models that don't fit on a single node. +- **Disaggregated prefill and decode**: Scale prefill and decode phases independently for better resource utilization. +- **Cluster-wide parallelism**: Combine data parallel attention with expert parallelism for serving large-scale sparse MoE architectures such as Deepseek-v3, GPT OSS, etc. + + +## Ray Serve primitives + +Before diving into the architecture, you should understand these Ray Serve primitives: + +- **Deployment**: A class that defines the unit of scale. 
+- **Replica**: An instance of a deployment which corresponds to a Ray actor. Multiple replicas can be distributed across a cluster. +- **Deployment handle**: An object that allows one replica to call into replicas of other deployments. + +For more details, see the {ref}`Ray Serve core concepts `. + +## Core components + +Ray Serve LLM provides two primary components that work together to serve LLM workloads: + +### LLMServer + +`LLMServer` is a Ray Serve _deployment_ that manages a single inference engine instance. _Replicas_ of this _deployment_ can operate in three modes: + +- **Isolated**: Each _replica_ handles requests independently (horizontal scaling). +- **Coordinated within deployment**: Multiple _replicas_ work together (data parallel attention). +- **Coordinated across deployments**: Replicas coordinate with different deployments (prefill-decode disaggregation). + + +The following example demonstrates the sketch of how to use `LLMServer` standalone: + +```python +from ray import serve +from ray.serve.llm import LLMConfig +from ray.serve.llm.deployment import LLMServer + +llm_config = LLMConfig(...) + +# Get deployment options (placement groups, etc.) +serve_options = LLMServer.get_deployment_options(llm_config) + +# Decorate with serve options +server_cls = serve.deployment(LLMServer).options( + stream=True, **serve_options) + +# Bind the decorated class to its constructor parameters +server_app = server_cls.bind(llm_config) + +# Run the application +serve_handle = serve.run(server_app) + +# Use the deployment handle +result = serve_handle.chat.remote(request=...).result() +``` + +#### Physical placement + +`LLMServer` controls physical placement of its constituent actors through placement groups. By default, it uses: + +- `{CPU: 1}` for the replica actor itself (no GPU resources). +- `world_size` number of `{GPU: 1}` bundles for the GPU workers. + +The `world_size` is computed as `tensor_parallel_size × pipeline_parallel_size`. The vLLM engine allocates TP and PP ranks based on bundle proximity, prioritizing TP ranks on the same node. + +The PACK strategy tries to place all resources on a single node, but provisions different nodes when necessary. This works well for most deployments, though heterogeneous model deployments might occasionally run TP across nodes. + +```{figure} ../images/placement.png +--- +width: 600px +name: placement +--- +Physical placement strategy for GPU workers +``` + +#### Engine management + +When `LLMServer` starts, it: + +1. Creates a vLLM engine client. +2. Spawns a background process that uses Ray's distributed executor backend. +3. Uses the parent actor's placement group to instantiate child GPU worker actors. +4. Executes the model's forward pass on these GPU workers. + +```{figure} ../images/llmserver.png +--- +width: 600px +name: llmserver +--- +Illustration of `LLMServer` managing vLLM engine instance. +``` + +### OpenAiIngress + +`OpenAiIngress` provides an OpenAI-compatible FastAPI ingress that routes traffic to the appropriate model. It handles: + +- **Standard endpoint definitions**: `/v1/chat/completions`, `/v1/completions`, `/v1/embeddings`, etc. +- **Request routing logic**: The execution of custom router logic (for example, prefix-aware or session-aware routing). +- **Model multiplexing**: LoRA adapter management and routing. 
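Because the ingress exposes standard OpenAI endpoints, any OpenAI-compatible client can call the deployed application. The following sketch uses the official `openai` Python client and assumes an application like the one in the next example is running locally on port 8000 with model ID `qwen-0.5b`:

```python
# Assumes a deployment like the one shown next is running locally on port 8000
# and serves the model ID "qwen-0.5b".
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="fake-key")
response = client.chat.completions.create(
    model="qwen-0.5b",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```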
+ +The following example shows a complete deployment with `OpenAiIngress`: + +```python +from ray import serve +from ray.serve.llm import LLMConfig +from ray.serve.llm.deployment import LLMServer +from ray.serve.llm.ingress import OpenAiIngress, make_fastapi_ingress + +llm_config = LLMConfig(...) + +# Construct the LLMServer deployment +serve_options = LLMServer.get_deployment_options(llm_config) +server_cls = serve.deployment(LLMServer).options(**serve_options) +llm_server = server_cls.bind(llm_config) + +# Get ingress default options +ingress_options = OpenAiIngress.get_deployment_options([llm_config]) + +# Decorate with FastAPI app +ingress_cls = make_fastapi_ingress(OpenAiIngress) + +# Make it a serve deployment with the right options +ingress_cls = serve.deployment(ingress_cls, **ingress_options) + +# Bind with llm_server deployment handle +ingress_app = ingress_cls.bind([llm_server]) + +# Run the application +serve.run(ingress_app) +``` + +:::{note} +You can create your own ingress deployments and connect them to existing LLMServer deployments. This is useful when you want to customize request tracing, authentication layers, etc. +::: + +#### Network topology and RPC patterns + +When the ingress makes an RPC call to `LLMServer` through the deployment handle, it can reach any replica across any node. However, the default request router prioritizes replicas on the same node to minimize cross-node RPC overhead, which is insignificant in LLM serving applications (only a few milliseconds impact on TTFT at high concurrency). + +The following figure illustrates the data flow: + +```{figure} ../images/llmserver-ingress-rpc.png +--- +width: 600px +name: llmserver-ingress-rpc +--- +Request routing from ingress to LLMServer replicas. Solid lines represent preferred local RPC calls; dashed lines represent potential cross-node RPC calls when local replicas are busy. +``` + +#### Scaling considerations + +**Ingress-to-LLMServer ratio**: The ingress event loop can become the bottleneck at high concurrency. In such situations, upscaling the number of ingress replicas can mitigate CPU contention. We recommend keeping at least a 2:1 ratio between the number of ingress replicas and LLMServer replicas. This architecture allows the system to dynamically scale the component that is the bottleneck. + +**Autoscaling coordination**: To maintain proper ratios during autoscaling, configure `target_ongoing_requests` proportionally: + +- Profile your vLLM configuration to find the maximum concurrent requests (for example, 64 requests). +- Choose an ingress-to-LLMServer ratio (for example, 2:1). +- Set LLMServer's `target_ongoing_requests` to say 75% of max capacity (for example, 48). +- Set ingress's `target_ongoing_requests` to maintain the ratio (for example, 24). + +## Architecture patterns + +Ray Serve LLM supports several deployment patterns for different scaling scenarios: + +### Data parallel attention pattern + +Create multiple inference engine instances that process requests in parallel while coordinating across expert layers and sharding requests across attention layers. Useful for serving sparse MoE models for high-throughput workloads. + +**When to use**: High request volume, kv-cache limited, need to maximize throughput. + +See: {doc}`serving-patterns/data-parallel` + +### Prefill-decode disaggregation + +Separate prefill and decode phases to optimize resource utilization and scale each phase independently. 
+ +**When to use**: Prefill-heavy workloads where there's tension between prefill and decode, cost optimization with different GPU types. + +See: {doc}`serving-patterns/prefill-decode` + +### Custom request routing + +Implement custom routing logic for specific optimization goals such as cache locality or session affinity. + +**When to use**: Workloads with repeated prompts, session-based interactions, or specific routing requirements. + +See: {doc}`routing-policies` + +## Design principles + +Ray Serve LLM follows these key design principles: + +1. **Engine-agnostic**: Support multiple inference engines (vLLM, SGLang, etc.) through the `LLMEngine` protocol. +2. **Composable patterns**: Combine serving patterns (data parallel attention, prefill-decode, custom routing) for complex deployments. +3. **Builder pattern**: Use builders to construct complex deployment graphs declaratively. +4. **Separation of concerns**: Keep infrastructure logic (placement, scaling) separate from application logic (routing, processing). +5. **Protocol-based extensibility**: Define clear protocols for engines, servers, and ingress to enable custom implementations. + +## See also + +- {doc}`core` - Technical implementation details and extension points +- {doc}`serving-patterns/index` - Detailed serving pattern documentation +- {doc}`routing-policies` - Request routing architecture and patterns +- {doc}`../user-guides/index` - Practical deployment guides + diff --git a/doc/source/serve/llm/architecture/routing-policies.md b/doc/source/serve/llm/architecture/routing-policies.md index af49b05481c6..8d70063e0010 100644 --- a/doc/source/serve/llm/architecture/routing-policies.md +++ b/doc/source/serve/llm/architecture/routing-policies.md @@ -54,8 +54,9 @@ Ray Serve LLM provides multiple request routing policies to optimize for differe ### Default routing: Power of Two Choices The default router uses the Power of Two Choices algorithm to: -1. Randomly sample two replicas -2. Route to the replica with fewer ongoing requests + +1. Randomly sample two replicas. +2. Route to the replica with fewer ongoing requests. This provides good load balancing with minimal coordination overhead. @@ -64,10 +65,11 @@ This provides good load balancing with minimal coordination overhead. The `PrefixCacheAffinityRouter` optimizes for workloads with shared prefixes by routing requests with similar prefixes to the same replicas. This improves KV cache hit rates in vLLM's Automatic Prefix Caching (APC). The routing strategy: -1. **Check load balance**: If replicas are balanced (queue difference < threshold), use prefix matching -2. **High match rate (≥10%)**: Route to replicas with highest prefix match -3. **Low match rate (<10%)**: Route to replicas with lowest cache utilization -4. **Fallback**: Use Power of Two Choices when load is imbalanced + +1. **Check load balance**: If replicas are balanced (queue difference < threshold), use prefix matching. +2. **High match rate (≥10%)**: Route to replicas with highest prefix match. +3. **Low match rate (<10%)**: Route to replicas with lowest cache utilization. +4. **Fallback**: Use Power of Two Choices when load is imbalanced. For more details, see {ref}`prefix-aware-routing-guide`. 
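The strategy above amounts to a small decision function. The following sketch is illustrative only, not the actual `PrefixCacheAffinityRouter` implementation; the replica statistics, helper structure, and threshold values are assumptions:

```python
import random

def choose_replica(replica_stats, prompt, imbalance_threshold=8, match_threshold=0.10):
    """Illustrative decision flow for prefix-aware routing (inputs assumed).

    replica_stats: list of dicts such as
        {"queue_len": 3, "cached_prefixes": ["You are a helpful", ...],
         "cache_utilization": 0.4}
    """
    queue_lens = [r["queue_len"] for r in replica_stats]

    # Step 4: fall back to Power of Two Choices when load is imbalanced.
    if len(replica_stats) > 1 and max(queue_lens) - min(queue_lens) >= imbalance_threshold:
        a, b = random.sample(replica_stats, 2)
        return a if a["queue_len"] <= b["queue_len"] else b

    # Step 1: load is balanced, so score replicas by prefix match against the prompt.
    def match_rate(replica):
        matched = max(
            (len(p) for p in replica["cached_prefixes"] if prompt.startswith(p)),
            default=0,
        )
        return matched / max(len(prompt), 1)

    best = max(replica_stats, key=match_rate)
    if match_rate(best) >= match_threshold:
        return best  # Step 2: high match rate, reuse the warm KV cache.

    # Step 3: low match rate, send cold prefixes to the least-utilized cache.
    return min(replica_stats, key=lambda r: r["cache_utilization"])
```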
@@ -111,13 +113,15 @@ Centralized metric store pattern for custom routing ``` **Pros:** -- Simple implementation - no need to modify deployment logic for recording replica statistics -- Request metrics are immediately available -- Strong consistency guarantees + +- Simple implementation - no need to modify deployment logic for recording replica statistics. +- Request metrics are immediately available. +- Strong consistency guarantees. **Cons:** -- A single actor can become a bottleneck in high-throughput applications where TTFT is impacted by the RPC call (~1000s of requests/s) -- Requires an additional network hop for every routing decision + +- A single actor can become a bottleneck in high-throughput applications where TTFT is impacted by the RPC call (~1000s of requests/s). +- Requires an additional network hop for every routing decision. ### Pattern 2: Metrics broadcasted from Serve controller @@ -156,35 +160,38 @@ Broadcast metrics pattern for custom routing ``` **Pros:** -- Scalable to higher throughput -- No additional RPC overhead per routing decision -- Distributed routing decision making + +- Scalable to higher throughput. +- No additional RPC overhead per routing decision. +- Distributed routing decision making. **Cons:** -- Time lag between the request router's view of statistics and the ground truth state of the replicas -- Eventual consistency - routers may base decisions on slightly stale data -- More complex implementation requiring coordination with the Serve controller + +- Time lag between the request router's view of statistics and the ground truth state of the replicas. +- Eventual consistency - routers may base decisions on slightly stale data. +- More complex implementation requiring coordination with the Serve controller. -- **Use Pattern 1 (Centralized store)** when you need strong consistency, have moderate throughput requirements, or want simpler implementation -- **Use Pattern 2 (Broadcast metrics)** when you need very high throughput, can tolerate eventual consistency, or want to minimize per-request overhead +- **Use Pattern 1 (Centralized store)** when you need strong consistency, have moderate throughput requirements, or want simpler implementation. +- **Use Pattern 2 (Broadcast metrics)** when you need very high throughput, can tolerate eventual consistency, or want to minimize per-request overhead. ## Custom routing policies You can implement custom routing policies by extending Ray Serve's [`RequestRouter`](../../api/doc/ray.serve.request_router.RequestRouter.rst) base class. For detailed examples and step-by-step guides on implementing custom routers, see {ref}`custom-request-router-guide`. Key methods to implement: -- [`choose_replicas()`](../../api/doc/ray.serve.request_router.RequestRouter.choose_replicas.rst): Select which replicas should handle a request -- [`on_request_routed()`](../../api/doc/ray.serve.request_router.RequestRouter.on_request_routed.rst): Update the router state after a request is routed -- [`on_replica_actor_died()`](../../api/doc/ray.serve.request_router.RequestRouter.on_replica_actor_died.rst): Clean up the state when a replica dies + +- [`choose_replicas()`](../../api/doc/ray.serve.request_router.RequestRouter.choose_replicas.rst): Select which replicas should handle a request. +- [`on_request_routed()`](../../api/doc/ray.serve.request_router.RequestRouter.on_request_routed.rst): Update the router state after a request is routed. 
+- [`on_replica_actor_died()`](../../api/doc/ray.serve.request_router.RequestRouter.on_replica_actor_died.rst): Clean up the state when a replica dies. ### Utility mixins Ray Serve provides mixin classes that add common functionality to routers. See the {ref}`custom-request-router-guide` for examples: -- [`LocalityMixin`](../../api/doc/ray.serve.request_router.LocalityMixin.rst): Prefers replicas on the same node to reduce network latency -- [`MultiplexMixin`](../../api/doc/ray.serve.request_router.MultiplexMixin.rst): Tracks which models are loaded on each replica for LoRA deployments -- [`FIFOMixin`](../../api/doc/ray.serve.request_router.FIFOMixin.rst): Ensures FIFO ordering of requests +- [`LocalityMixin`](../../api/doc/ray.serve.request_router.LocalityMixin.rst): Prefers replicas on the same node to reduce network latency. +- [`MultiplexMixin`](../../api/doc/ray.serve.request_router.MultiplexMixin.rst): Tracks which models are loaded on each replica for LoRA deployments. +- [`FIFOMixin`](../../api/doc/ray.serve.request_router.FIFOMixin.rst): Ensures FIFO ordering of requests. @@ -192,15 +199,15 @@ Ray Serve provides mixin classes that add common functionality to routers. See t The typical lifecycle of request routers includes the following stages: -1. **Initialization**: Router created with list of replicas -2. **Request routing**: `choose_replicas()` called for each request -3. **Callback**: `on_request_routed()` called after successful routing -4. **Replica failure**: `on_replica_actor_died()` called when replica dies -5. **Cleanup**: Router cleaned up when deployment is deleted +1. **Initialization**: Router created with list of replicas. +2. **Request routing**: `choose_replicas()` called for each request. +3. **Callback**: `on_request_routed()` called after successful routing. +4. **Replica failure**: `on_replica_actor_died()` called when replica dies. +5. **Cleanup**: Router cleaned up when deployment is deleted. #### Async operations -Routers should use async operations for best performance, for example: +Routers should use async operations for best performance. The following example demonstrates the recommended pattern: ```python # Recommended pattern: Async operation @@ -216,7 +223,7 @@ async def choose_replicas(self, ...): #### State management -For routers with state, use appropriate synchronization, for example: +For routers with state, use appropriate synchronization. The following example shows the recommended pattern: ```python class StatefulRouter(RequestRouter): diff --git a/doc/source/serve/llm/architecture/serving-patterns/data-parallel.md b/doc/source/serve/llm/architecture/serving-patterns/data-parallel.md new file mode 100644 index 000000000000..2909ae45936a --- /dev/null +++ b/doc/source/serve/llm/architecture/serving-patterns/data-parallel.md @@ -0,0 +1,205 @@ +(serve-llm-architecture-data-parallel)= +# Data parallel attention + +Data parallel attention (DP) is a serving pattern that creates multiple inference engine instances to process requests in parallel. This pattern is most useful when you combine it with expert parallelism for sparse MoE models. In this case, the experts are parallelized across multiple machines and attention (QKV) layers are replicated across GPUs, providing an opportunity to shard across requests. + +In this serving pattern, engine replicas aren't isolated. In fact, they need to run in sync with each other to serve a large number of requests concurrently. 
+ +## Architecture overview + +```{figure} ../../images/dp.png +--- +width: 700px +name: dp-architecture +--- +Data parallel attention architecture showing DPRankAssigner coordinating multiple LLMServer replicas. +``` + +In data parallel attention serving: + +- The system creates `dp_size` replicas of the LLM server. +- Each replica runs an independent inference engine with the same model. +- Requests are distributed across replicas through Ray Serve's routing. +- All replicas work together as a cohesive unit. + + +### When to use DP + +Data parallel attention serving works best when: + +- **Large sparse MoE with MLA**: Allows reaching larger batch sizes by utilizing the sparsity of the experts more efficiently. MLA (Multi-head Latent Attention) reduces KV cache memory requirements. +- **High throughput required**: You need to serve many concurrent requests. +- **KV-cache limited**: Adding more KV cache capacity increases throughput, so that parallelization of experts could effectively increase the capacity of KV-cache for handling concurrent requests. + +### When not to use DP + +Consider alternatives when: + +- **Low to medium throughput**: If you can't saturate the MoE layers, don't use DP. +- **Non-MLA Attention with sufficient TP**: DP is most beneficial with MLA (Multi-head Latent Attention), where KV cache can't be sharded along the head dimension. For models with GQA (Grouped Query Attention), you can use TP to shard the KV cache up to the degree where `TP_size <= num_kv_heads`. Beyond that point, TP requires KV cache replication, which wastes memory—DP becomes a better choice to avoid duplication. For example, for Qwen-235b, using `DP=2, TP=4, EP=8` makes more sense than `DP=8, EP=8` because you can still shard the KV cache with TP=4 before needing to replicate it. Benchmark these configurations with your workload to determine the optimal setup. +- **Non-MoE models**: The main reason for using DP at the cost of this complexity is to lift the effective batch size during decoding for saturating the experts. + +## Components + +The following are the main components of DP deployments: + +### DPServer + +`DPServer` extends `LLMServer` with data parallel attention coordination. The following pseudocode shows the structure: + +```python +from ray import serve + +class DPServer(LLMServer): + """LLM server with data parallel attention coordination.""" + + async def __init__( + self, + llm_config: LLMConfig, + rank_assigner_handle: DeploymentHandle, + dp_size: int, + **kwargs + ): + self.rank_assigner = rank_assigner_handle + self.dp_size = dp_size + + # Get assigned rank from coordinator and pass it to engine. + replica_id = serve.get_replica_context().replica_id + llm_config.rank = await self.rank_assigner.assign_rank.remote(replica_id) + + # Call parent initialization + await super().__init__(llm_config, **kwargs) +``` + +Key responsibilities: + +- Register with the rank assigner coordinator. +- Obtain a unique rank (0 to `dp_size-1`). +- Coordinate with other replicas for collective operations. +- Handle replica failures and re-registration. + +### DPRankAssigner + +`DPRankAssigner` is a singleton coordinator that manages rank assignment for data parallel attention replicas. 
The following pseudocode shows the structure: + +```python +class DPRankAssigner: + """Coordinator for data parallel attention rank assignment.""" + + def __init__(self, dp_size: int): + self.dp_size = dp_size + self.assigned_ranks: Set[int] = set() + self.rank_to_replica: Dict[int, str] = {} + self.lock = asyncio.Lock() + + async def assign_rank(self, replica_id: str) -> int: + """Assign a rank to a replica. + + Returns: + int: Assigned rank (0 to dp_size-1) + """ + async with self.lock: + # Find first available rank + for rank in range(self.dp_size): + if rank not in self.assigned_ranks: + self.assigned_ranks.add(rank) + self.rank_to_replica[rank] = replica_id + return rank + + async def release_rank(self, rank: int): + """Release a rank when replica dies.""" + async with self.lock: + self.assigned_ranks.discard(rank) + self.rank_to_replica.pop(rank, None) +``` + +Key responsibilities: + +- Assign unique ranks to replicas. +- Ensure exactly `dp_size` replicas are serving. + +## Request flow + +```{figure} ../../images/dp_flow.png +--- +width: 700px +name: dp-flow +--- +Data parallel attention request flow from client to distributed replicas. +``` + +The following is the request flow through a data parallel attention deployment: + +1. **Client request**: HTTP request arrives at ingress. +2. **Ingress routing**: Ingress uses deployment handle to call DPServer. +3. **Ray Serve routing**: Ray Serve's request router selects a replica. + - Default: Power of Two Choices (load balancing). + - Custom: Prefix-aware, session-aware, etc. +4. **Replica processing**: Selected DPServer replica processes request. +5. **Engine inference**: vLLM engine generates response. +6. **Streaming response**: Tokens stream back to client. + +The key difference from basic serving is that all the `dp_size` replicas are working in coordination with each other rather than in isolation. + +## Scaling + +### Scaling behavior + +Data parallel attention deployments require a fixed number of replicas equal to `dp_size`, as autoscaling isn't supported for this pattern. You must set `num_replicas` to `dp_size`, or if using `autoscaling_config`, both `min_replicas` and `max_replicas` must equal `dp_size`. + + +## Design considerations + +### Coordination overhead + +The `DPRankAssigner` introduces minimal coordination overhead: + +- **Startup**: Each replica makes one RPC to get its rank. +- **Runtime**: No coordination overhead during request processing. + +The singleton actor pattern ensures consistency during startup time. + +### Placement strategy + +The PACK strategy places each replica's resources together: + +- Tensor parallel workers for one replica pack on the same node when possible. +- Different replicas can be on different nodes. +- This minimizes inter-node communication within each replica. 
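The fixed-replica requirement from the scaling section above maps directly onto the deployment configuration. The following is a minimal sketch, assuming a `dp_size` of 4 and the configuration field names used elsewhere in this guide:

```python
# Sketch: pin the replica count to dp_size (4 assumed). Autoscaling beyond
# dp_size isn't supported for the data parallel attention pattern.
dp_size = 4
deployment_config = dict(
    autoscaling_config=dict(
        min_replicas=dp_size,
        max_replicas=dp_size,
    ),
)
```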
+ +## Combining with other patterns + +### DP + Prefill-decode disaggregation + +You can run data parallel attention on both prefill and decode phases: + +``` +┌─────────────────────────────────────────────┐ +│ OpenAiIngress │ +└─────────────┬───────────────────────────────┘ + │ + ▼ + ┌─────────────┐ + │PDProxyServer│ + └──┬───────┬──┘ + │ │ + ┌─────┘ └─────┐ + ▼ ▼ +┌──────────┐ ┌──────────┐ +│ Prefill │ │ Decode │ +│ DP-2 │ │ DP-4 │ +│ │ │ │ +│ Replica0 │ │ Replica0 │ +│ Replica1 │ │ Replica1 │ +└──────────┘ │ Replica2 │ + │ Replica3 │ + └──────────┘ +``` + +## See also + +- {doc}`../overview` - High-level architecture overview +- {doc}`../core` - Core components and protocols +- {doc}`prefill-decode` - Prefill-decode disaggregation architecture +- {doc}`../routing-policies` - Request routing architecture + diff --git a/doc/source/serve/llm/architecture/serving-patterns/index.md b/doc/source/serve/llm/architecture/serving-patterns/index.md new file mode 100644 index 000000000000..9dbc81e4a780 --- /dev/null +++ b/doc/source/serve/llm/architecture/serving-patterns/index.md @@ -0,0 +1,20 @@ +# Serving patterns + +Architecture documentation for distributed LLM serving patterns. + +```{toctree} +:maxdepth: 1 + +Data parallel attention +Prefill-decode disaggregation +``` + +## Overview + +Ray Serve LLM supports several serving patterns that can be combined for complex deployment scenarios: + +- **Data parallel attention**: Scale throughput by running multiple coordinated engine instances that shard requests across attention layers. +- **Prefill-decode disaggregation**: Optimize resource utilization by separating prompt processing from token generation. + +These patterns are composable and can be mixed to meet specific requirements for throughput, latency, and cost optimization. + diff --git a/doc/source/serve/llm/architecture/serving-patterns/prefill-decode.md b/doc/source/serve/llm/architecture/serving-patterns/prefill-decode.md new file mode 100644 index 000000000000..49a51523be24 --- /dev/null +++ b/doc/source/serve/llm/architecture/serving-patterns/prefill-decode.md @@ -0,0 +1,208 @@ +(serve-llm-architecture-prefill-decode)= +# Prefill-decode disaggregation + +Prefill-decode (PD) disaggregation is a serving pattern that separates the prefill phase (processing input prompts) from the decode phase (generating tokens). This pattern optimizes resource utilization by scaling each phase independently based on its specific requirements. + +## Architecture overview + +```{figure} ../../images/pd.png +--- +width: 700px +name: pd-architecture +--- +Prefill-decode disaggregation architecture with PDProxyServer coordinating prefill and decode deployments. +``` + +In prefill-decode disaggregation: + +- **Prefill deployment**: Processes input prompts and generates initial KV cache. +- **Decode deployment**: Uses transferred KV cache to generate output tokens. +- **Independent scaling**: Each phase scales based on its own load. +- **Resource optimization**: Different engine configurations for different phases. + +## Why disaggregate? 
+ +### Resource characteristics + +Prefill and decode have different computational patterns: + +| Phase | Characteristics | Resource Needs | +|-------|----------------|----------------| +| Prefill | Processes the entire prompt at once | High compute, lower memory | +| | Parallel token processing | Benefits from high FLOPS | +| | Short duration per request | Can use fewer replicas when decode-limited | +| Decode | Generates one token at a time | Lower compute, high memory | +| | Auto-regressive generation | Benefits from large batch sizes | +| | Long duration (many tokens) | Needs more replicas | + +### Scaling benefits + +Disaggregation enables: + +- **Cost optimization**: The correct ratio of prefill to decode instances improves overall throughput per node. +- **Dynamic traffic adjustment**: Scale prefill and decode independently depending on workloads (prefill-heavy versus decode-heavy) and traffic volume. +- **Efficiency**: Prefill serves multiple requests while decode generates, allowing one prefill instance to feed multiple decode instances. + +## Components + +### PDProxyServer + +`PDProxyServer` orchestrates the disaggregated serving: + +```python +class PDProxyServer: + """Proxy server for prefill-decode disaggregation.""" + + def __init__( + self, + prefill_handle: DeploymentHandle, + decode_handle: DeploymentHandle, + ): + self.prefill_handle = prefill_handle + self.decode_handle = decode_handle + + async def chat( + self, + request: ChatCompletionRequest, + ) -> AsyncGenerator[str, None]: + """Handle chat completion with PD flow. + + Flow: + 1. Send request to prefill deployment + 2. Prefill processes prompt, transfers KV to decode + 3. Decode generates tokens, streams to client + """ + # Prefill phase + prefill_result = await self.prefill_handle.chat.remote(request) + + # Extract KV cache metadata + kv_metadata = prefill_result["kv_metadata"] + + # Decode phase with KV reference + async for chunk in self.decode_handle.chat.remote( + request, + kv_metadata=kv_metadata + ): + yield chunk +``` + +Key responsibilities: + +- Route requests between prefill and decode. +- Handle KV cache metadata transfer. +- Stream responses from decode to client. +- Manage errors in either phase per request. + +### Prefill LLMServer + +Standard `LLMServer` configured for prefill: + +```python +prefill_config = LLMConfig( + model_loading_config=dict( + model_id="llama-3.1-8b", + model_source="meta-llama/Llama-3.1-8B-Instruct" + ), + engine_kwargs=dict( + # Prefill-specific configuration + kv_transfer_config={ + "kv_connector": "NixlConnector", + "kv_role": "kv_both", + }, + ), +) +``` + +### Decode LLMServer + +Standard `LLMServer` configured for decode: + +```python +decode_config = LLMConfig( + model_loading_config=dict( + model_id="llama-3.1-8b", + model_source="meta-llama/Llama-3.1-8B-Instruct" + ), + engine_kwargs=dict( + # Decode-specific configuration + kv_transfer_config={ + "kv_connector": "NixlConnector", + "kv_role": "kv_both", + }, + ), +) +``` + + +### Request flow + +```{figure} ../../images/pd.png +--- +width: 700px +name: pd-flow +--- +Prefill-decode request flow showing KV cache transfer between phases. +``` + +Detailed request flow: + +1. **Client request**: HTTP POST to `/v1/chat/completions`. +2. **Ingress**: Routes to `PDProxyServer`. +3. **Proxy → Prefill**: `PDProxyServer` calls prefill deployment. + - Prefill server processes prompt. + - Generates KV cache. + - Transfers KV to storage backend. + - Returns KV metadata (location, size, etc.). +4. 
**Proxy → Decode**: `PDProxyServer` calls decode deployment with KV metadata. + - Decode server loads KV cache from storage. + - Begins token generation. + - Streams tokens back through proxy. +5. **Response streaming**: Client receives generated tokens. + +:::{note} +The KV cache transfer is transparent to the client. From the client's perspective, it's a standard OpenAI API call. +::: + +## Performance characteristics + +### When to use PD disaggregation + +Prefill-decode disaggregation works best when: + +- **Long generations**: Decode phase dominates total end-to-end latency. +- **Imbalanced phases**: Prefill and decode need different resources. +- **Cost optimization**: Use different GPU types for each phase. +- **High decode load**: Many requests are in decode phase simultaneously. +- **Batch efficiency**: Prefill can batch multiple requests efficiently. + +### When not to use PD + +Consider alternatives when: + +- **Short outputs**: Decode latency minimal, overhead not worth it. +- **Network limitations**: KV transfer overhead too high. +- **Small models**: Both phases fit comfortably on same resources. + + +## Design considerations + +### KV cache transfer latency + +The latency of KV cache transfer between prefill and decode affects overall request latency and it's mostly determined by network bandwidth. NIXL has different backend plugins, but its performance on different network stacks isn't mature yet. You should inspect your deployment to verify NIXL uses the right network backend for your environment. + +### Phase load balancing + +The system must balance load between prefill and decode phases. Mismatched scaling can lead to: + +- **Prefill bottleneck**: Requests queue at prefill, decode replicas idle. +- **Decode bottleneck**: Prefill completes quickly, decode can't keep up. + +Monitor both phases and adjust replica counts and autoscaling policies accordingly. 
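If transfers are slower than expected, the transport layer is usually the first thing to check. The following is an illustrative sketch only, assuming NIXL's UCX backend; the environment variable values are examples to adapt for your cluster, not recommendations from Ray Serve LLM:

```python
# Illustrative only: when NIXL runs on its UCX backend, UCX environment
# variables can steer transport and NIC selection. Verify variable names and
# values against the NIXL/UCX documentation for your environment before use.
runtime_env = {
    "env_vars": {
        "UCX_TLS": "rc,cuda_copy,cuda_ipc",  # example transport list
        "UCX_NET_DEVICES": "mlx5_0:1",       # example NIC selection
    }
}
```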
+ +## See also + +- {doc}`../overview` - High-level architecture overview +- {doc}`../core` - Core components and protocols +- {doc}`data-parallel` - Data parallel attention architecture +- {doc}`../../user-guides/prefill-decode` - Practical deployment guide + diff --git a/doc/source/serve/llm/images/dp.png b/doc/source/serve/llm/images/dp.png new file mode 100644 index 000000000000..369fc6e74b6c Binary files /dev/null and b/doc/source/serve/llm/images/dp.png differ diff --git a/doc/source/serve/llm/images/dp_flow.png b/doc/source/serve/llm/images/dp_flow.png new file mode 100644 index 000000000000..544e85dba0f4 Binary files /dev/null and b/doc/source/serve/llm/images/dp_flow.png differ diff --git a/doc/source/serve/llm/images/llmserver-ingress-rpc.png b/doc/source/serve/llm/images/llmserver-ingress-rpc.png new file mode 100644 index 000000000000..f8064018dd91 Binary files /dev/null and b/doc/source/serve/llm/images/llmserver-ingress-rpc.png differ diff --git a/doc/source/serve/llm/images/llmserver.png b/doc/source/serve/llm/images/llmserver.png new file mode 100644 index 000000000000..a4313530701c Binary files /dev/null and b/doc/source/serve/llm/images/llmserver.png differ diff --git a/doc/source/serve/llm/images/pd.png b/doc/source/serve/llm/images/pd.png new file mode 100644 index 000000000000..5e46878845e3 Binary files /dev/null and b/doc/source/serve/llm/images/pd.png differ diff --git a/doc/source/serve/llm/images/placement.png b/doc/source/serve/llm/images/placement.png new file mode 100644 index 000000000000..0c1e49e6a1e2 Binary files /dev/null and b/doc/source/serve/llm/images/placement.png differ diff --git a/doc/source/serve/llm/index.md b/doc/source/serve/llm/index.md index 35232220b75d..fc8520fbff34 100644 --- a/doc/source/serve/llm/index.md +++ b/doc/source/serve/llm/index.md @@ -8,7 +8,7 @@ Ray Serve LLM provides a high-performance, scalable framework for deploying Larg Ray Serve LLM excels at highly distributed multi-node inference workloads: -- **Advanced parallelism strategies**: Seamlessly combine pipeline parallelism, tensor parallelism, expert parallelism, and data parallelism for models of any size. +- **Advanced parallelism strategies**: Seamlessly combine pipeline parallelism, tensor parallelism, expert parallelism, and data parallel attention for models of any size. - **Prefill-decode disaggregation**: Separates and optimizes prefill and decode phases independently for better resource utilization and cost efficiency. - **Custom request routing**: Implements prefix-aware, session-aware, or custom routing logic to maximize cache hits and reduce latency. - **Multi-node deployments**: Serves massive models that span multiple nodes with automatic placement and coordination. @@ -22,7 +22,7 @@ Ray Serve LLM excels at highly distributed multi-node inference workloads: - 🔄 Multi-LoRA support with shared base models - 🚀 Engine-agnostic architecture (vLLM, SGLang, etc.) - 📊 Built-in metrics and Grafana dashboards -- 🎯 Advanced serving patterns (PD disaggregation, data parallelism) +- 🎯 Advanced serving patterns (PD disaggregation, data parallel attention) ## Requirements diff --git a/doc/source/serve/llm/quick-start.md b/doc/source/serve/llm/quick-start.md index 5762978842c1..e211701f11a4 100644 --- a/doc/source/serve/llm/quick-start.md +++ b/doc/source/serve/llm/quick-start.md @@ -206,8 +206,6 @@ serve.run(ingress_deployment, blocking=True) :::: -See also {ref}`serve-deepseek-tutorial` for an example of deploying DeepSeek models. 
- ## Production deployment For production deployments, Ray Serve LLM provides utilities for config-driven deployments. You can specify your deployment configuration with YAML files: