diff --git a/docs/source/advanced/tpu.rst b/docs/source/advanced/tpu.rst
index b9688ce425b5f..09a614f31c854 100644
--- a/docs/source/advanced/tpu.rst
+++ b/docs/source/advanced/tpu.rst
@@ -64,8 +64,7 @@ To get a TPU on colab, follow these steps:

 .. code-block::

-    !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
-    !python pytorch-xla-env-setup.py --version 1.7 --apt-packages libomp5 libopenblas-dev
+    !pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8-cp37-cp37m-linux_x86_64.whl

 5. Once the above is done, install PyTorch Lightning (v 0.7.0+).

diff --git a/docs/source/starter/introduction_guide.rst b/docs/source/starter/introduction_guide.rst
index 551b8182caa7d..c399047034fc0 100644
--- a/docs/source/starter/introduction_guide.rst
+++ b/docs/source/starter/introduction_guide.rst
@@ -572,9 +572,7 @@ Next, install the required xla library (adds support for PyTorch on TPUs)

 .. code-block:: shell

-    !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
-
-    !python pytorch-xla-env-setup.py --version nightly --apt-packages libomp5 libopenblas-dev
+    !pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8-cp37-cp37m-linux_x86_64.whl

 In distributed training (multiple GPUs and multiple TPU cores) each GPU or TPU core will run a copy of this program.
 This means that without taking any care you will download the dataset N times which
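
After installing the wheel on a Colab TPU runtime, a quick sanity check (a minimal sketch, not part of the patch above) is to ask torch_xla for an XLA device; if the install succeeded and the TPU runtime is reachable, this should print a device such as xla:1 rather than raising an ImportError.

    # Sketch: verify the torch_xla 1.8 wheel installed in the steps above is importable
    # and that a TPU-backed XLA device is visible from this runtime.
    import torch_xla.core.xla_model as xm

    device = xm.xla_device()  # resolves to an XLA device (e.g. xla:1) when the TPU runtime is up
    print(device)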