From 059506a63600cae3522872b884d55b8e6a5aec53 Mon Sep 17 00:00:00 2001
From: Samyam Rajbhandari
Date: Wed, 20 Jul 2022 17:31:25 -0700
Subject: [PATCH] Update index.md

edits
---
 docs/index.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/docs/index.md b/docs/index.md
index 3804eb191c36..daa35f0eb7ce 100755
--- a/docs/index.md
+++ b/docs/index.md
@@ -24,13 +24,13 @@ title: "Latest News"
 
 # DeepSpeed: Extreme Speed and Scale for DL Training and Inference
 
-DeepSpeed is an easy-to-use deep learning optimization suite that enables unprecedented scale and speed for Deep Learning Training and Inference.
+DeepSpeed is an easy-to-use deep learning optimization suite that enables unprecedented scale and speed for Deep Learning Training and Inference. With DeepSpeed, you can:
 
-- DeepSpeed empowers data scientists to Train/Inference dense or sparse models with billions or trillions of parameters
-- Achieve excellent system throughput and efficiently scale to thousands of GPUs
-- Train/Inference on resource constrained GPU systems
-- Achieve unprecedented low latency and high thoughput for inference
-- Achieve extreme compression for an unparalleled inference latency and model size reduction with low costs
+- Train/Inference dense or sparse models with billions or trillions of parameters
+- Achieve excellent system throughput and efficiently scale to thousands of GPUs
+- Train/Inference on resource-constrained GPU systems
+- Achieve unprecedented low latency and high throughput for inference
+- Achieve extreme compression for unparalleled reduction in inference latency and model size, at low cost
 
 ## Three main innovation pillars
 
@@ -50,7 +50,7 @@ DeepSpeed brings together innovations in parallelism technology such as tensor,
 
 ### DeepSpeed-Compression:
 
-To further increase the infrence efficency, DeepSpeed provides a new feature, that offers an easy-to-use and flexible-to-compose compression library for researchers and practitioners to compress their models while delivering faster speed, smaller model size, and significantly reduced compression cost. Meanwhile, new innovations, like ZeroQuant and XTC, are included under the DeepSpeed-Compression pillar. (See [here](https://www.deepspeed.ai/tutorials/model-compression/) for more details)
+To further increase inference efficiency, DeepSpeed offers easy-to-use and flexible-to-compose compression techniques that let researchers and practitioners compress their models while delivering faster speed, smaller model size, and significantly reduced compression cost. Moreover, SoTA compression innovations such as ZeroQuant and XTC are included under the DeepSpeed-Compression pillar. (See [here](https://www.deepspeed.ai/tutorials/model-compression/) for more details.)
 
 ## DeepSpeed Software Suite