diff --git a/blog/2025-11-07-sglang-diffusion.md b/blog/2025-11-07-sglang-diffusion.md
index 7755dec34..539d3115a 100644
--- a/blog/2025-11-07-sglang-diffusion.md
+++ b/blog/2025-11-07-sglang-diffusion.md
@@ -7,7 +7,7 @@ previewImg: /images/blog/sgl-diffusion/sgl-diffusion-banner-16-9.png
 We are excited to introduce SGLang Diffusion, which brings SGLang's state-of-the-art performance to accelerate image and video generation for diffusion models. SGLang Diffusion supports major open-source video and image generation models (Wan, Hunyuan, Qwen-Image, Qwen-Image-Edit, Flux) while providing fast inference speeds and ease of use via multiple API entry points (OpenAI-compatible API, CLI, Python interface). SGLang Diffusion delivers 1.2x - 5.9x speedup across diverse workloads.
-In collaboration with the FastVideo team, we provide a complete ecosystem for diffusion models, from post-training to production serving. The code is available [here](https://github.com/sgl-project/sglang/tree/main/python/sglang/multimodal_gen).
+In collaboration with the FastVideo team, we provide a complete ecosystem for diffusion models, from post-training to production serving. The code is available [here](https://github.com/sgl-project/sglang/tree/main/python/sglang/multimodal_gen). Documentation is available [here](https://docs.sglang.io/diffusion/index.html).