From 7bf8cd117b1d86fa340f3bbf26f316dd94fe60a9 Mon Sep 17 00:00:00 2001
From: Zhihong Chen
Date: Sun, 14 Aug 2022 16:28:55 +0800
Subject: [PATCH] fix typos in readme.

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 05af94f32ca6..d961eb884f58 100755
--- a/README.md
+++ b/README.md
@@ -84,7 +84,7 @@ DeepSpeed has been used to train many different large-scale models, below is a l
 * [YaLM (100B)](https://github.com/yandex/YaLM-100B)
 * [GPT-NeoX (20B)](https://github.com/EleutherAI/gpt-neox)
 * [AlexaTM (20B)](https://www.amazon.science/blog/20b-parameter-alexa-model-sets-new-marks-in-few-shot-learning)
-* [Turing NLG (17B](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/)
+* [Turing NLG (17B)](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/)
 * [METRO-LM (5.4B)](https://arxiv.org/pdf/2204.06644.pdf)

 DeepSpeed has been integrated with several different popular open-source DL frameworks such as: