
Commit 760690e

Authored by ibro45, surajpaib, and coderabbitai[bot]
Add docs with auto API reference generation when pushed to main. (#110)
* Add initial mkdocs
* Add mkdocs CI
* Update CI
* Change lighter color scheme
* Set md-nav to correct color in slate
* Update docs [WIP]
* Update projects
* Add docs publishing workflow
* Make docs-publish triggered by pushes on `main`
* Fix typo in docs (MyXRayDataset)
* Add info on docs contributions
* Add auto API reference generation
* Add list of submodules and py to each module, move version and logging
* Update README
* Update README
* Update lighter/utils/dynamic_imports.py
  Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Update lighter/utils/runner.py
  Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Update docs/overrides/components/navigation-links3.css
  Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Update docstrings and reference generation
* Fix module type hint
* Update docs publish
* Update version

Co-authored-by: Suraj Pai <[email protected]>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
1 parent: d978e73 · commit: 760690e


64 files changed: +3,629 −110 lines

.github/workflows/docs-publish.yml

+28
@@ -0,0 +1,28 @@
name: Docs Publish
on:
  push:
    branches:
      - main
permissions:
  contents: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure Git Credentials
        run: |
          git config user.name github-actions[bot]
          git config user.email 41898282+github-actions[bot]@users.noreply.github.com
      - uses: actions/setup-python@v5
        with:
          python-version: 3.x
      - run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV
      - uses: actions/cache@v4
        with:
          key: mkdocs-material-${{ env.cache_id }}
          path: .cache
          restore-keys: |
            mkdocs-material-
      - run: pip install mkdocs-material mkdocs-autorefs mkdocstrings mkdocs-gen-files mkdocs-literate-nav mkdocs-section-index
      - run: mkdocs gh-deploy --force
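
The repository's `mkdocs.yml` is not part of this excerpt. For orientation only, a minimal sketch of a configuration consistent with the plugins installed by this workflow might look like the following; the `docs/gen_ref_pages.py` script name and the `SUMMARY.md` nav file are assumptions, not taken from this commit:

```yaml
# Hypothetical mkdocs.yml sketch — plugin names match the packages installed
# in docs-publish.yml; the script and nav file names are assumed, not from this commit.
site_name: lighter
theme:
  name: material
plugins:
  - search
  - autorefs
  - gen-files:
      scripts:
        - docs/gen_ref_pages.py   # script that auto-generates the API reference pages
  - literate-nav:
      nav_file: SUMMARY.md
  - section-index
  - mkdocstrings:
      handlers:
        python:
          options:
            docstring_style: google   # CONTRIBUTING.md states docstrings follow the google style
extra_css:
  - assets/extra.css
```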

.gitignore

+2
@@ -150,3 +150,5 @@ projects/*
!projects/README.md
!projects/cifar10
**/predictions/
+*/.DS_Store
+.DS_Store

CONTRIBUTING.md

+39 −1
@@ -1,4 +1,17 @@
-# How to contribute
+1. [Code contribution](#code-contribution)
+    - [Dependencies](#dependencies)
+    - [Codestyle](#codestyle)
+    - [Checks](#checks)
+    - [Before submitting](#before-submitting)
+    - [Other help](#other-help)
+
+2. [Documentation contribution](#documentation-contribution)
+    - [Dependencies](#dependencies-1)
+    - [Serving the documentation locally](#serving-the-documentation-locally)
+    - [Deploying the documentation to GitHub Pages](#deploying-the-documentation-to-github-pages)
+
+
+# Code contribution

## Dependencies

@@ -45,3 +58,28 @@ You can contribute by spreading a word about this library.
It would also be a huge contribution to write
a short article on how you are using this project.
You can also share your best practices with us.
+
+
+# Documentation contribution
+Our documentation is located in the `docs/` folder and is built using `mkdocs` and `mkdocs-material`.
+
+The API reference is generated automatically from the docstrings in the code using `mkdocstrings`. Our docstrings follow the `google` style.
+
+## Dependencies
+To install `mkdocs-material` together with the required dependencies, run:
+
+```bash
+pip install mkdocs-material mkdocs-autorefs mkdocstrings mkdocs-gen-files mkdocs-literate-nav mkdocs-section-index
+```
+
+## Serving the documentation locally
+While working on the documentation, you can serve it locally to see your changes in real time.
+
+```bash
+cd docs/
+mkdocs serve
+```
+
+## Deploying the documentation to GitHub Pages
+
+The documentation is automatically deployed once the changes are merged into the `main` branch.

README.md

+26 −11
@@ -1,8 +1,11 @@
<div align="center">
<picture>
+<!-- old code that allows different pics for light/dark mode -->
+<!--
<source media="(prefers-color-scheme: dark)" srcset="./assets/images/lighter_dark.png">
<source media="(prefers-color-scheme: light)" srcset="./assets/images/lighter_light.png">
-<img align="center" alt="Lighter logo" src="h/assets/images/lighter_dark.png">
+-->
+<img align="center" alt="Lighter logo" src="./assets/images/lighter.png">
</picture>
</div>
<br/>
@@ -12,35 +15,47 @@
</div>


-Welcome to `lighter`, an elegant and powerful wrapper for [PyTorch Lightning](https://github.com/Lightning-AI/lightning) that simplifies the way you build and manage your deep learning experiments. Unleash your model's potential through a unified, **configuration-based** approach that streamlines the experimentation process, empowering both beginners and experts in the field.
+With `lighter`, focus on your deep learning experiments and forget about boilerplate through:
+1. **Task-agnostic** training logic already implemented for you (classification, segmentation, self-supervised, etc.)
+2. **Configuration-based** approach that ensures you can always reproduce your experiments and know which hyperparameters you used.
+3. Extremely **simple integration of custom** models, datasets, transforms, or any other components into your experiments.
+
+&nbsp;
+
+`lighter` stands on the shoulders of these two giants:
+- [MONAI Bundle](https://docs.monai.io/en/stable/bundle_intro.html) - Configuration system. Similar to [Hydra](https://github.com/facebookresearch/hydra), but with additional features.
+- [PyTorch Lightning](https://github.com/Lightning-AI/lightning) - Our [`LighterSystem`](https://project-lighter.github.io/lighter/reference/system/) is based on the PyTorch Lightning [`LightningModule`](https://lightning.ai/docs/pytorch/stable/common/lightning_module.html) and implements all the necessary training logic for you. Couple it with the PyTorch Lightning [Trainer](https://lightning.ai/docs/pytorch/stable/common/trainer.html) and you're good to go.
+
+Simply put, `lighter = config(trainer + system)` 😇


## 📖 Usage

-- [Documentation](https://project-lighter.github.io/lighter/)
+- [📚 Documentation](https://project-lighter.github.io/lighter/)
+- [🎥 YouTube Channel](https://www.youtube.com/channel/UCef1oTpv2QEBrD2pZtrdk1Q)

## 🚀 Install

Current release:
-````
+```bash
pip install project-lighter
-````
+```

Pre-release (up-to-date with the main branch):
-````
+```bash
pip install project-lighter --pre
-````
+```

For development:
-````
+```bash
make setup
make install # Install lighter via Poetry
make pre-commit-install # Set up the pre-commit hook for code formatting
poetry shell # Once installed, activate the poetry shell
-````
+```

## 💡 Projects
-List of projects that use `lighter`:
+Projects that use `lighter`:

| Project | Description |
| --- | --- |
@@ -49,7 +64,7 @@ List of projects that use `lighter`:

## 📄 Cite:

-If you find `lighter` useful in your research or project, please consider citing it. Here's an example BibTeX citation entry:
+If you find `lighter` useful in your research or project, please consider citing it:

```bibtex
@software{lighter,

assets/images/lighter.png — added (84.8 KB)

assets/images/lighter_dark.png — removed (139 KB, binary file not shown)

assets/images/lighter_light.png — removed (139 KB, binary file not shown)

docs/advanced/callbacks.md

+1
@@ -0,0 +1 @@
🚧 Under construction 🚧

docs/advanced/inferer.md

+1
@@ -0,0 +1 @@
🚧 Under construction 🚧

docs/advanced/postprocessing.md

+1
@@ -0,0 +1 @@
🚧 Under construction 🚧

docs/assets/extra.css

+29
@@ -0,0 +1,29 @@
[data-md-color-scheme="default"] {
  --md-primary-fg-color: #459AA3;
  --md-accent-fg-color: #459AA3;
  color: #14181e;
}

[data-md-color-scheme="slate"] {
  --md-primary-fg-color: #459AA3;
  --md-accent-fg-color: #459AA3;
  --md-default-bg-color: #14181e;
}
[data-md-color-scheme="slate"] .md-nav {
  --md-typeset-a-color: #459AA3;
}

@media only screen and (min-width: 75.25em) {
  .md-main__inner {
    max-width: 1000em;
  }
  .md-sidebar--primary {
    left: 0;
  }
  .md-sidebar--secondary {
    right: 0;
    margin-left: 0em;
    -webkit-transform: none;
    transform: none;
  }
}

docs/assets/images/Lighter_demo.png — added (690 KB)

docs/assets/images/PL_demo.png — added (1.46 MB)

docs/assets/images/lighter_banner.png — added (87.7 KB)

(unnamed image) — added (53.7 KB)

docs/assets/images/lighter_logo.png — added (47.4 KB)

docs/basics/config.md

+201
@@ -0,0 +1,201 @@
# Configuration System

Lighter is a configuration-centric framework: the config sets up the entire machine learning workflow, from choosing the model architecture, loss function, and optimizer to preparing the datasets and running training, evaluation, or inference.

Our configuration system is heavily based on the MONAI bundle parser, but with a standardized structure: every configuration must define a few mandatory items.

Let us take a simple example config to dig deeper into Lighter's configuration system. You can go through the config and click on the + signs for more information about specific concepts.

<div class="annotate" markdown>

```yaml title="cifar10.yaml"
trainer:
  _target_: pytorch_lightning.Trainer # (1)!
  max_epochs: 100 # (2)!

system:
  _target_: lighter.LighterSystem
  batch_size: 512

  model:
    _target_: torchvision.models.resnet18
    num_classes: 10

  criterion:
    _target_: torch.nn.CrossEntropyLoss

  optimizer:
    _target_: torch.optim.Adam
    params: "$@system#model.parameters()" # (3)!
    lr: 0.001

  datasets:
    train:
      _target_: torchvision.datasets.CIFAR10
      download: True
      root: .datasets
      train: True
      transform:
        _target_: torchvision.transforms.Compose
        transforms: # (4)!
          - _target_: torchvision.transforms.ToTensor
          - _target_: torchvision.transforms.Normalize
            mean: [0.5, 0.5, 0.5]
            std: [0.5, 0.5, 0.5]
```
</div>

1. `_target_` is a special reserved keyword that initializes a Python object from the provided text. In this case, a `Trainer` object from the `pytorch_lightning` library is initialized.
2. `max_epochs` is an argument of the `Trainer` class and is passed through this format. Any argument of the class can be passed similarly.
3. `$@` is a combination of `$`, which evaluates a Python expression, and `@`, which references a Python object. In this case we first reference the model with `@system#model`, which is the `torchvision.models.resnet18` defined earlier, and then access its parameters with `$@system#model.parameters()`.
4. YAML allows passing a list in the format shown, where each `_target_` specifies a transform that is added to the list of transforms in `Compose`. `torchvision.datasets.CIFAR10` accepts these through its `transform` argument and applies them to each item.

## Configuration Concepts
As seen in the [Quickstart](./quickstart.md), Lighter has two main components:

### Trainer Setup
```yaml
trainer:
  _target_: pytorch_lightning.Trainer # (1)!
  max_epochs: 100
```

The trainer object (`pytorch_lightning.Trainer`) is initialized through the `_target_` key. For more info on `_target_` and other special keys, see [Special Syntax and Keywords](#special-syntax-and-keywords).

`max_epochs` is an argument provided to the `pytorch_lightning.Trainer` object during its instantiation. Any argument accepted at instantiation can be provided in the same way.

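For example, other `Trainer` arguments can be set the same way. A brief sketch (the values below are illustrative, not taken from this commit):

```yaml
trainer:
  _target_: pytorch_lightning.Trainer
  max_epochs: 100
  accelerator: gpu  # standard PyTorch Lightning Trainer arguments,
  devices: 2        # forwarded to the Trainer exactly like max_epochs
  precision: 16
```
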
### LighterSystem Configuration
While Lighter borrows the Trainer from PyTorch Lightning, LighterSystem is a component unique to Lighter that draws on several PyTorch Lightning concepts, such as the LightningModule, to provide a simple way to capture all the integral elements of a deep learning system.

Concepts encapsulated by LighterSystem include:

#### Model definition
The `torchvision` library is installed with Lighter by default, so you can choose among its models here. We also package `monai` with Lighter, so if you want to use one of its ResNets instead, all you need to change in your config is:

=== "Torchvision ResNet18"

    ```yaml
    LighterSystem:
      ...
      model:
        _target_: torchvision.models.resnet18
        num_classes: 10
      ...
    ```

=== "MONAI ResNet50"

    ```yaml
    LighterSystem:
      ...
      model:
        _target_: monai.networks.nets.resnet50
        num_classes: 10
        spatial_dims: 2
      ...
    ```

=== "MONAI 3DResNet50"

    ```yaml
    LighterSystem:
      ...
      model:
        _target_: monai.networks.nets.resnet50
        num_classes: 10
        spatial_dims: 3
      ...
    ```

<br/>
#### Criterion/Loss

Similar to overriding models, you can easily switch between the various loss functions provided by libraries such as `torch` and `monai`. This flexibility lets you experiment with different optimization objectives without changing any code! Below are some examples of how to modify the criterion section of your configuration file to use different loss functions.

=== "CrossEntropyLoss"
    ```yaml
    LighterSystem:
      ...
      criterion:
        _target_: torch.nn.CrossEntropyLoss
      ...
    ```

=== "MONAI's Dice Loss"
    ```yaml
    LighterSystem:
      ...
      criterion:
        _target_: monai.losses.DiceLoss
      ...
    ```

<br/>
#### Optimizer

Likewise, you can experiment with different optimizers and their parameters. The model parameters are passed directly to the optimizer through the `params` argument.
```yaml hl_lines="5"
LighterSystem:
  ...
  optimizer:
    _target_: torch.optim.Adam
    params: "$@system#model.parameters()"
    lr: 0.001
  ...
```

You can also define a scheduler for the optimizer, as shown below:
```yaml hl_lines="10"
LighterSystem:
  ...
  optimizer:
    _target_: torch.optim.Adam
    params: "$@system#model.parameters()"
    lr: 0.001

  scheduler:
    _target_: torch.optim.lr_scheduler.CosineAnnealingLR
    optimizer: "@system#optimizer"
    eta_min: 1.0e-06
    T_max: "%trainer#max_epochs"

  ...
```
Here, the optimizer is passed to the scheduler through the `optimizer` argument, and `%trainer#max_epochs` fetches `max_epochs` from the Trainer definition.

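The scheduler is just as swappable as the model or criterion. As a sketch (values are illustrative, not from this commit), a step-decay schedule could replace the cosine one like so:

```yaml
LighterSystem:
  ...
  scheduler:
    _target_: torch.optim.lr_scheduler.StepLR
    optimizer: "@system#optimizer"  # same reference to the optimizer defined above
    step_size: 30                   # illustrative values
    gamma: 0.1
  ...
```
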
<br/>
#### Datasets

The datasets are usually the most frequently changed part of the config, since typical workflows involve training or inferring on your own data. The `datasets` key accepts `train`, `val`, `test` and `predict` sub-keys, which generate the dataloaders for the corresponding PyTorch Lightning workflows; these are described in detail [here](./workflows.md). A sketch of adding a `val` dataset follows the annotated example below.

<div class="annotate" markdown>

```yaml
LighterSystem:
  ...
  datasets:
    train:
      _target_: torchvision.datasets.CIFAR10 # (1)!
      download: True
      root: .datasets
      train: True
      transform: # (2)!
        _target_: torchvision.transforms.Compose
        transforms:
          - _target_: torchvision.transforms.ToTensor
          - _target_: torchvision.transforms.Normalize
            mean: [0.5, 0.5, 0.5]
            std: [0.5, 0.5, 0.5]
  ...
```

</div>

1. Define your own dataset class here or use one of the many existing dataset classes. Read more about this in [Projects](./projects.md).
2. Transforms can be applied to each element of the dataset by initializing a `Compose` object and providing it a list of transforms. This is often the easiest way to adapt the preprocessing to your data.

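As a minimal sketch (not part of this commit), a validation set can be added alongside `train` by reusing CIFAR-10 with its test split:

```yaml
LighterSystem:
  ...
  datasets:
    train:
      _target_: torchvision.datasets.CIFAR10
      download: True
      root: .datasets
      train: True
    val:
      _target_: torchvision.datasets.CIFAR10  # sketch: CIFAR-10 test split used for validation
      download: True
      root: .datasets
      train: False  # torchvision flag selecting the test split
  ...
```
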
### Special Syntax and Keywords
- `_target_`: Indicates the Python class to instantiate. If a function is provided, a partial function is created. Any configuration key set with `_target_` will map to a Python object.
- **@**: References another configuration item. Using this syntax, keys mapped to Python objects can be accessed. For instance, the learning rate of the optimizer instantiated to `torch.optim.Adam` via `_target_` can be referenced as `@system#optimizer#lr`, where `lr` is the key defined under `optimizer`.
- **$**: Used for evaluating Python expressions.
- **%**: Macro for textual replacement in the configuration.
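
Putting these keywords together, the snippet below (drawn from the example config above) shows how each one is typically used:

```yaml
system:
  _target_: lighter.LighterSystem  # _target_: class to instantiate
  ...
  optimizer:
    _target_: torch.optim.Adam
    params: "$@system#model.parameters()"  # $ evaluates the expression; @system#model references the instantiated model
    lr: 0.001
  scheduler:
    _target_: torch.optim.lr_scheduler.CosineAnnealingLR
    optimizer: "@system#optimizer"  # @ references another configuration item
    T_max: "%trainer#max_epochs"    # % textually copies trainer#max_epochs
```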
