
Commit d3a31bc

fix docs links (#6057)
1 parent fc9bb53 commit d3a31bc

File tree: 7 files changed (+11, -11 lines changed)


.github/stale.yml

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ pulls:
   markComment: >
     This pull request has been automatically marked as stale because it has not had recent activity.
     It will be closed in 7 days if no further activity occurs. If you need further help see our docs:
-    https://pytorch-lightning.readthedocs.io/en/latest/CONTRIBUTING.html#pull-request
+    https://pytorch-lightning.readthedocs.io/en/latest/generated/CONTRIBUTING.html#pull-request
     or ask the assistance of a core contributor here or on Slack.
     Thank you for your contributions.
   # Comment to post when closing a stale issue. Set to `false` to disable

README.md

Lines changed: 3 additions & 3 deletions
@@ -63,7 +63,7 @@ Lightning forces the following structure to your code which makes it reusable an
 
 Once you do this, you can train on multiple-GPUs, TPUs, CPUs and even in 16-bit precision without changing your code!
 
-Get started with our [2 step guide](https://pytorch-lightning.readthedocs.io/en/stable/new-project.html)
+Get started with our [2 step guide](https://pytorch-lightning.readthedocs.io/en/latest/starter/new-project.html)
 
 ---
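A note on the claim in this hunk: hardware and precision in Lightning are selected via `Trainer` flags, not model-code changes. A minimal sketch, assuming the flag names of the Lightning release this commit targets (none of these lines are part of the diff):

```python
# Sketch: the same LightningModule trains on different hardware by
# changing only Trainer flags (flag names assumed from this era's API).
from pytorch_lightning import Trainer

trainer = Trainer(max_epochs=5)                        # CPU
trainer = Trainer(gpus=2, max_epochs=5)                # 2 GPUs
trainer = Trainer(tpu_cores=8, max_epochs=5)           # 8 TPU cores
trainer = Trainer(gpus=1, precision=16, max_epochs=5)  # 16-bit precision
# trainer.fit(model, train_loader, val_loader)         # model code unchanged
```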

@@ -219,7 +219,7 @@ trainer.fit(autoencoder, DataLoader(train), DataLoader(val))
 ```
 
 ## Advanced features
-Lightning has over [40+ advanced features](https://pytorch-lightning.readthedocs.io/en/stable/trainer.html#trainer-flags) designed for professional AI research at scale.
+Lightning has over [40+ advanced features](https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html#trainer-flags) designed for professional AI research at scale.
 
 Here are some examples:
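The `#trainer-flags` anchor in this hunk points at the `Trainer` constructor arguments. A hedged sketch of a few of them, with illustrative values (the flag names existed around this release but are assumptions here, not part of the diff):

```python
# Sketch: a handful of the documented trainer flags (values illustrative).
from pytorch_lightning import Trainer

trainer = Trainer(
    max_epochs=100,
    gradient_clip_val=0.5,       # clip gradient norm at 0.5
    accumulate_grad_batches=4,   # effective batch size = 4x the loader's
    limit_train_batches=0.25,    # train on 25% of batches per epoch
    deterministic=True,          # trade speed for reproducibility
)
```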

@@ -379,7 +379,7 @@ class LitAutoEncoder(pl.LightningModule):
 ## Community
 
 The lightning community is maintained by
-- [16 core contributors](https://pytorch-lightning.readthedocs.io/en/latest/governance.html) who are all a mix of professional engineers, Research Scientists, and Ph.D. students from top AI labs.
+- [10+ core contributors](https://pytorch-lightning.readthedocs.io/en/latest/governance.html) who are all a mix of professional engineers, Research Scientists, and Ph.D. students from top AI labs.
 - 400+ community contributors.
 
 Lightning is also part of the [PyTorch ecosystem](https://pytorch.org/ecosystem/) which requires projects to have solid testing, documentation and support.

notebooks/01-mnist-hello-world.ipynb

Lines changed: 2 additions & 2 deletions
@@ -176,13 +176,13 @@
 " - This is where we can download the dataset. We point to our desired dataset and ask torchvision's `MNIST` dataset class to download if the dataset isn't found there.\n",
 " - **Note we do not make any state assignments in this function** (i.e. `self.something = ...`)\n",
 "\n",
-"2. [setup(stage)](https://pytorch-lightning.readthedocs.io/en/latest/lightning-module.html#setup) ⚙️\n",
+"2. [setup(stage)](https://pytorch-lightning.readthedocs.io/en/latest/common/lightning-module.html#setup) ⚙️\n",
 " - Loads in data from file and prepares PyTorch tensor datasets for each split (train, val, test). \n",
 " - Setup expects a 'stage' arg which is used to separate logic for 'fit' and 'test'.\n",
 " - If you don't mind loading all your datasets at once, you can set up a condition to allow for both 'fit' related setup and 'test' related setup to run whenever `None` is passed to `stage` (or ignore it altogether and exclude any conditionals).\n",
 " - **Note this runs across all GPUs and it *is* safe to make state assignments here**\n",
 "\n",
-"3. [x_dataloader()](https://pytorch-lightning.readthedocs.io/en/latest/lightning-module.html#data-hooks) ♻️\n",
+"3. [x_dataloader()](https://pytorch-lightning.readthedocs.io/en/latest/common/lightning-module.html#data-hooks) ♻️\n",
 " - `train_dataloader()`, `val_dataloader()`, and `test_dataloader()` all return PyTorch `DataLoader` instances that are created by wrapping their respective datasets that we prepared in `setup()`"
 ]
 },
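The cell patched above spells out the contract for the three data hooks: `prepare_data()` (download only, no state), `setup(stage)` (runs on every process, state assignment is safe), and the `x_dataloader()` methods. A minimal sketch of that contract, written as a `LightningDataModule` for compactness (the same hooks also exist on `LightningModule`, which is where the notebook's links point); paths, batch size, and split sizes are illustrative, not the notebook's:

```python
import pytorch_lightning as pl
from torch.utils.data import DataLoader, random_split
from torchvision import transforms
from torchvision.datasets import MNIST


class MNISTDataModule(pl.LightningDataModule):
    def prepare_data(self):
        # Runs once on a single process: download only, assign no state.
        MNIST("./data", train=True, download=True)
        MNIST("./data", train=False, download=True)

    def setup(self, stage=None):
        # Runs on every GPU/process: safe to assign state here.
        # With stage=None, both 'fit' and 'test' branches run.
        tfm = transforms.ToTensor()
        if stage == "fit" or stage is None:
            full = MNIST("./data", train=True, transform=tfm)
            self.train_set, self.val_set = random_split(full, [55000, 5000])
        if stage == "test" or stage is None:
            self.test_set = MNIST("./data", train=False, transform=tfm)

    def train_dataloader(self):
        return DataLoader(self.train_set, batch_size=32)

    def val_dataloader(self):
        return DataLoader(self.val_set, batch_size=32)

    def test_dataloader(self):
        return DataLoader(self.test_set, batch_size=32)
```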

notebooks/02-datamodules.ipynb

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@
 "\n",
 "This notebook will walk you through how to start using Datamodules.\n",
 "\n",
-"The most up to date documentation on datamodules can be found [here](https://pytorch-lightning.readthedocs.io/en/latest/datamodules.html).\n",
+"The most up to date documentation on datamodules can be found [here](https://pytorch-lightning.readthedocs.io/en/latest/extensions/datamodules.html).\n",
 "\n",
 "---\n",
 "\n",

notebooks/03-basic-gan.ipynb

Lines changed: 1 addition & 1 deletion
@@ -91,7 +91,7 @@
 "source": [
 "### MNIST DataModule\n",
 "\n",
-"Below, we define a DataModule for the MNIST Dataset. To learn more about DataModules, check out our tutorial on them or see the [latest docs](https://pytorch-lightning.readthedocs.io/en/latest/datamodules.html)."
+"Below, we define a DataModule for the MNIST Dataset. To learn more about DataModules, check out our tutorial on them or see the [latest docs](https://pytorch-lightning.readthedocs.io/en/latest/extensions/datamodules.html)."
 ]
 },
 {

notebooks/06-mnist-tpu-training.ipynb

Lines changed: 2 additions & 2 deletions
@@ -33,7 +33,7 @@
 "\n",
 "In this notebook, we'll train a model on TPUs. Changing one line of code is all you need to that.\n",
 "\n",
-"The most up to documentation related to TPU training can be found [here](https://pytorch-lightning.readthedocs.io/en/latest/tpu.html).\n",
+"The most up to documentation related to TPU training can be found [here](https://pytorch-lightning.readthedocs.io/en/latest/advanced/tpu.html).\n",
 "\n",
 "---\n",
 "\n",
@@ -114,7 +114,7 @@
 "source": [
 "### Defining The `MNISTDataModule`\n",
 "\n",
-"Below we define `MNISTDataModule`. You can learn more about datamodules in [docs](https://pytorch-lightning.readthedocs.io/en/latest/datamodules.html) and [datamodule notebook](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/notebooks/02-datamodules.ipynb)."
+"Below we define `MNISTDataModule`. You can learn more about datamodules in [docs](https://pytorch-lightning.readthedocs.io/en/latest/extensions/datamodules.html) and [datamodule notebook](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/notebooks/02-datamodules.ipynb)."
 ]
 },
 {
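On the first hunk's claim that TPU training needs only one changed line: in this era's API, that line is the `tpu_cores` flag on the `Trainer`. A hedged sketch (`model` and `dm` are placeholder names, not from the notebook):

```python
import pytorch_lightning as pl

trainer = pl.Trainer(tpu_cores=8, max_epochs=3)  # the single TPU-specific line
# trainer.fit(model, datamodule=dm)              # everything else is unchanged
```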

notebooks/07-cifar10-baseline.ipynb

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -185,7 +185,7 @@
185185
},
186186
"source": [
187187
"### Lightning Module\n",
188-
"Check out the [`configure_optimizers`](https://pytorch-lightning.readthedocs.io/en/stable/lightning_module.html#configure-optimizers) method to use custom Learning Rate schedulers. The OneCycleLR with SGD will get you to around 92-93% accuracy in 20-30 epochs and 93-94% accuracy in 40-50 epochs. Feel free to experiment with different LR schedules from https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate"
188+
"Check out the [`configure_optimizers`](https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_module.html#configure-optimizers) method to use custom Learning Rate schedulers. The OneCycleLR with SGD will get you to around 92-93% accuracy in 20-30 epochs and 93-94% accuracy in 40-50 epochs. Feel free to experiment with different LR schedules from https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate"
189189
]
190190
},
191191
{
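The cell patched above recommends overriding `configure_optimizers` to pair SGD with `OneCycleLR`. A hedged sketch of that pattern, showing only the one method (the class name, hyperparameters, and step count are illustrative, not the notebook's exact values):

```python
import torch
import pytorch_lightning as pl


class LitResnet(pl.LightningModule):
    # Only configure_optimizers is sketched here; the rest of the module
    # (model definition, training_step, ...) is omitted.
    def configure_optimizers(self):
        optimizer = torch.optim.SGD(
            self.parameters(), lr=0.05, momentum=0.9, weight_decay=5e-4
        )
        scheduler = torch.optim.lr_scheduler.OneCycleLR(
            optimizer,
            max_lr=0.1,
            total_steps=2000,  # illustrative: epochs * batches per epoch
        )
        # "interval": "step" makes Lightning step the scheduler every batch,
        # which is what OneCycleLR expects.
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": scheduler, "interval": "step"},
        }
```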
