# Callbacks
Callbacks in Lighter allow you to customize and extend the training process. You can define custom actions to be executed at various stages of the training loop.
## Freezer Callback
The `LighterFreezer` callback allows you to freeze certain layers of the model during training. This can be useful for transfer learning or fine-tuning.
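For example, a freezer can be added to the trainer's callbacks. The sketch below assumes `LighterFreezer` accepts `name_starts_with` and `until_step` arguments; check the Lighter API reference for the exact signature:

```yaml
trainer:
  _target_: pytorch_lightning.Trainer
  callbacks:
    - _target_: lighter.callbacks.LighterFreezer
      # Assumed arguments: freeze parameters whose names start with
      # "layer1" for the first 1000 training steps.
      name_starts_with: ["layer1"]
      until_step: 1000
```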
## Writer Callbacks
Lighter provides writer callbacks to save predictions in different formats. The `LighterFileWriter` and `LighterTableWriter` are examples of such callbacks.
- **LighterFileWriter**: Writes predictions to files, supporting formats like images, videos, and ITK images.
- **LighterTableWriter**: Saves predictions in a table format, such as CSV.
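For example, a file writer can be attached to the trainer as a callback. The sketch below assumes `path` and `writer` arguments; consult the Lighter API reference for the exact signature:

```yaml
trainer:
  _target_: pytorch_lightning.Trainer
  callbacks:
    - _target_: lighter.callbacks.LighterFileWriter
      # Assumed arguments: output directory and the format to write.
      path: ./predictions
      writer: image
```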
For more details on how to implement and use callbacks, refer to the [PyTorch Lightning Callback documentation](https://pytorch-lightning.readthedocs.io/en/stable/extensions/callbacks.html).
# Inferers

The inferer in Lighter is used for making predictions on data. It is typically used in validation, testing, and prediction workflows.
## Using Inferers
Inferers must be classes with a `__call__` method that accepts two arguments: the input to infer over and the model itself. They are used to handle complex inference scenarios, such as patch-based or sliding window inference.
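A minimal custom inferer following this interface might look like the sketch below (the class name is illustrative):

```python
import torch


class SimpleInferer:
    """A callable that takes the input and the model, as Lighter expects."""

    def __call__(self, inputs: torch.Tensor, model: torch.nn.Module) -> torch.Tensor:
        # A trivial pass-through; a real inferer might split the input into
        # patches or slide a window over it and aggregate the outputs.
        return model(inputs)
```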
## MONAI Inferers
Lighter integrates with MONAI inferers, which cover most common inference scenarios. You can use MONAI's sliding window or patch-based inferers directly in your Lighter configuration.
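For example, a sliding window setup might look like the sketch below. `SlidingWindowInferer` and its arguments are part of the MONAI API; the placement under an `inferer` key is assumed here:

```yaml
system:
  ...
  inferer:
    _target_: monai.inferers.SlidingWindowInferer
    roi_size: [96, 96, 96] # size of each window passed to the model
    sw_batch_size: 4       # number of windows processed at once
    overlap: 0.25          # fractional overlap between adjacent windows
```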
For more information on MONAI inferers, visit the [MONAI documentation](https://docs.monai.io/en/stable/inferers.html).
# Postprocessing

Postprocessing in Lighter allows you to apply custom transformations to data at various stages of the workflow. This can include modifying inputs, targets, predictions, or entire batches.
## Defining Postprocessing Functions
Postprocessing functions can be defined in the configuration file under the `postprocessing` key. They can be applied to:
- **Batch**: Modify the entire batch before it is passed to the model.
- **Criterion**: Modify inputs, targets, or predictions before loss calculation.
- **Metrics**: Modify inputs, targets, or predictions before metric calculation.
- **Logging**: Modify inputs, targets, or predictions before logging.
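A configuration sketch is shown below. The exact sub-keys are assumptions based on the stages listed above; `$` evaluates a Python expression, so each entry resolves to a callable:

```yaml
system:
  ...
  postprocessing:
    batch:
      # Rescale the inputs of every training batch before the model sees them
      train: "$lambda batch: {'input': batch['input'] / 255.0, 'target': batch['target']}"
    criterion:
      # Remove a singleton channel dimension before computing the loss
      pred: "$lambda pred: pred.squeeze(1)"
    metrics:
      # Convert logits to class indices before computing metrics
      pred: "$lambda pred: pred.argmax(dim=1)"
    logging:
      # Log only the first element of each batch
      input: "$lambda x: x[0]"
```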
# Configuration System
Lighter is a configuration-centric framework that uses YAML files to set up machine learning workflows. These configurations cover everything from model architecture selection, loss functions, and optimizers to dataset preparation and the execution of training, evaluation, and inference processes.
Our configuration system is inspired by the MONAI bundle parser, offering a standardized structure. Each configuration requires several mandatory components to be defined.
The configuration is divided into two main components:
- **Trainer**: Handles the training process, including epochs, devices, etc.
- **LighterSystem**: Encapsulates the model, optimizer, datasets, and other components.
Let's explore a simple example configuration to understand Lighter's configuration system better. You can expand each section for more details on specific concepts.
<div class="annotate" markdown>
```yaml
trainer:
  _target_: pytorch_lightning.Trainer # (1)
  max_epochs: 100 # (2)

system:
  _target_: lighter.LighterSystem
  ...

  model:
    _target_: torchvision.models.resnet18
    num_classes: 10

  optimizer:
    _target_: torch.optim.Adam
    params: "$@model.parameters()" # (3)
    lr: 0.001

  datasets: # (5)
    train:
      _target_: torchvision.datasets.CIFAR10
      ...
      transform: # (4)
        _target_: torchvision.transforms.Compose
        transforms:
          ...
```

</div>
1. `_target_` is a special reserved keyword that initializes a Python object from the provided text. In this case, a `Trainer` object from the `pytorch_lightning` library is initialized.
2. `max_epochs` is an argument of the `Trainer` class, passed through this format. Any argument of the class can be passed similarly.
3. `$@` is a combination of `$`, which evaluates a Python expression, and `@`, which references a Python object. In this case, we first reference the model with `@model` (the `torchvision.models.resnet18` defined earlier) and then access its parameters using `$@model.parameters()`.
4. YAML allows passing a list in the format below where each `_target_` specifies a transform that is added to the list of transforms in `Compose`. The `torchvision.datasets.CIFAR10` accepts these with a `transform` argument and applies them to each item.
5. Datasets are defined for different modes: train, val, test, and predict. Each dataset can have its own transforms and configurations.
## Configuration Concepts
As seen in the [Quickstart](./quickstart.md), Lighter has two main components:
- **Trainer**: Handles the training process.
- **LighterSystem**: Encapsulates the model, optimizer, datasets, and other components.

### Trainer Configuration

```yaml
trainer:
  _target_: pytorch_lightning.Trainer
  max_epochs: 100
```
The trainer object (`pytorch_lightning.Trainer`) is initialized using the `_target_` key. For more information on `_target_` and other special keys, see [Special Syntax and Keywords](#special-syntax-and-keywords).
The `max_epochs` parameter is passed to the `pytorch_lightning.Trainer` object during instantiation. You can provide any argument accepted by the class in this manner.
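For instance, standard `pytorch_lightning.Trainer` arguments such as `accelerator` and `devices` can be added directly:

```yaml
trainer:
  _target_: pytorch_lightning.Trainer
  max_epochs: 100
  accelerator: gpu # any other Trainer argument is passed the same way
  devices: 1
```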
### LighterSystem Configuration
While Lighter utilizes the Trainer from PyTorch Lightning, LighterSystem is a unique component that incorporates concepts from PL, such as LightningModule, to encapsulate all essential elements of a deep learning system in a straightforward manner.
Concepts encapsulated by LighterSystem include:
#### Model definition
The `torchvision` library is included by default in Lighter, allowing you to select various torchvision models. Additionally, Lighter includes `monai`, enabling you to easily switch to a ResNet model by adjusting your configuration as follows:
=== "Torchvision ResNet18"
    ```yaml
    LighterSystem:
      model:
        _target_: torchvision.models.resnet18
        num_classes: 10
    ```
<br/>
#### Criterion/Loss
Just as you can override models, Lighter allows you to switch between various loss functions from libraries like `torch` and `monai`. This flexibility lets you experiment with different optimization strategies without altering your code. Here are examples of how to modify the criterion section in your configuration to use different loss functions:
=== "CrossEntropyLoss"
    ```yaml
    LighterSystem:
      ...
      criterion:
        _target_: torch.nn.CrossEntropyLoss
      ...
    ```
<br/>
#### Optimizer
Similarly, you can experiment with different optimizer parameters. Model parameters are passed directly to the optimizer via the `params` argument.
```yaml hl_lines="5"
LighterSystem:
  ...
  optimizer:
    _target_: torch.optim.Adam
    params: "$@model.parameters()"
    lr: 0.001
  ...
```
You can also define a scheduler for the optimizer as shown below:
```yaml hl_lines="10"
LighterSystem:
  ...
  optimizer:
    _target_: torch.optim.Adam
    params: "$@model.parameters()"
    lr: 0.001
  scheduler:
    _target_: torch.optim.lr_scheduler.CosineAnnealingLR
    optimizer: "@optimizer"
    T_max: "%trainer#max_epochs"
  ...
```
In this example, the optimizer is passed to the scheduler using the `optimizer` argument. The `%trainer#max_epochs` syntax retrieves the `max_epochs` value from the Trainer class.
<br/>
#### Datasets
Datasets are often the most frequently modified part of the configuration, as workflows typically involve training or inferring on custom datasets. The `datasets` key includes `train`, `val`, `test`, and `predict` sub-keys, which generate dataloaders for each workflow supported by PyTorch Lightning. Detailed information is available [here](./workflows.md).
<div class="annotate" markdown>
```yaml
LighterSystem:
  ...
  datasets:
    train:
      _target_: torchvision.datasets.CIFAR10 # (1)
      ...
      transform: # (2)
        _target_: torchvision.transforms.Compose
        transforms:
          ...
```
</div>
1. Define your own dataset class here or use existing dataset classes. Learn more about this [here](./projects.md).
2. Transforms can be applied to each dataset element by initializing a `Compose` object and providing a list of transforms. This is often the best way to adapt preprocessing to your data.
### Special Syntax and Keywords
- `_target_`: Specifies the Python class to instantiate. If a function is provided, a partial function is created. Any configuration key with `_target_` maps to a Python object.
- **@**: References another configuration value. This syntax allows access to keys mapped to Python objects. For example, the learning rate of the optimizer instantiated as `torch.optim.Adam` can be accessed using `@optimizer#lr`, where `lr` is an argument of the `torch.optim.Adam` class.
- **$**: Evaluates Python expressions.
- **%**: Acts as a macro for textual replacement in the configuration.
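Putting these together, the optimizer and scheduler fragment from earlier shows all three in action:

```yaml
optimizer:
  _target_: torch.optim.Adam
  params: "$@model.parameters()" # $ evaluates the expression; @model references the model object
  lr: 0.001

scheduler:
  _target_: torch.optim.lr_scheduler.CosineAnnealingLR
  optimizer: "@optimizer"       # @ references the optimizer defined above
  T_max: "%trainer#max_epochs"  # % substitutes the Trainer's max_epochs value
```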
# Using Lighter in your own projects
Lighter offers a flexible framework for integrating deep learning workflows into your projects. Whether you're starting with a pre-defined configuration or building a custom setup, Lighter adapts to your needs. Here’s how you can leverage Lighter:
- [x] Train on your own dataset
- [x] Train on your data + Add a custom model architecture
- [x] Train on your data + Add a custom model architecture + Add a complex loss function
- [x] Customization per your imagination!
Let's start by looking at each of these one by one. At the end of this, you will hopefully have a better idea of how best you can leverage Lighter.
### Training on your own dataset
When reproducing a study or adapting a model to new data, you often start with a pre-defined configuration. For instance, consider the `cifar10.yaml` example from our [Quickstart](./quickstart.md). Suppose you have a dataset of Chest X-rays and wish to replicate the training process used for CIFAR10. With Lighter, you only need to modify specific sections of the configuration.
```yaml title="cifar10.yaml" hl_lines="18-29"
system:
  ...
  datasets:
    train:
      _target_: torchvision.datasets.CIFAR10
      ...
      transform:
        _target_: torchvision.transforms.Compose
        transforms:
          ...
          - _target_: torchvision.transforms.Normalize
            mean: [0.5, 0.5, 0.5]
            std: [0.5, 0.5, 0.5]
```
To integrate your dataset, create a PyTorch dataset class that outputs a dictionary with `input`, `target`, and optionally `id` keys. This ensures compatibility with Lighter's configuration system.
> **Note:** Lighter requires the dataset to return a dictionary with `input`, `target`, and optionally `id` keys. This format allows for complex input/target structures, such as multiple images or labels.
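A minimal sketch of such a dataset (the class name and file layout are hypothetical):

```python
from pathlib import Path

from torch.utils.data import Dataset
from torchvision.io import read_image


class MyXRayDataset(Dataset):
    """Returns items in the dict format Lighter expects."""

    def __init__(self, image_dir, labels):
        self.image_paths = sorted(Path(image_dir).glob("*.png"))
        self.labels = labels

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image = read_image(str(self.image_paths[idx])).float()
        # "input" and "target" are required; "id" is optional.
        return {"input": image, "target": self.labels[idx], "id": idx}
```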
Once your dataset is ready, integrate it into the Lighter configuration. The `project` key in the config specifies the path to your Python code, allowing Lighter to locate and utilize your dataset. Simply reference your dataset class, and Lighter will handle the rest.
In the above example, the path of the dataset is `/home/user/project/my_xray_dataset.py`. Copy the config shown above, make the following changes, and run it in the terminal: