Proposed refactoring or deprecation
Call on_load_checkpoint for LightningDataModule. Currently it is only called when fitting, but since the datamodule is part of the training/evaluation process, I think we should consider calling this hook on validate/test/predict calls as well.
Motivation
If I implement the complete data logic inside LightningModule and want to reload any data-related state, it will work, but if I use a datamodule it won't. Is there any reason why this was ignored?
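For illustration, here is a minimal sketch of the situation (the dataset and the split_seed state are made up for this example, and it assumes the datamodule checkpoint hooks available in current Lightning versions): the datamodule persists data-related state in the checkpoint, but its on_load_checkpoint is only reached when the checkpoint is restored through fit.

```python
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset, random_split


class RandomSplitDataModule(pl.LightningDataModule):
    """Hypothetical datamodule that persists its train/val split seed in the checkpoint."""

    def __init__(self, num_samples: int = 1000):
        super().__init__()
        self.num_samples = num_samples
        # data-related state we want to restore from a checkpoint
        self.split_seed = 42

    def setup(self, stage=None):
        dataset = TensorDataset(
            torch.randn(self.num_samples, 32),
            torch.randint(0, 2, (self.num_samples,)),
        )
        generator = torch.Generator().manual_seed(self.split_seed)
        self.train_set, self.val_set = random_split(dataset, [800, 200], generator=generator)

    def train_dataloader(self):
        return DataLoader(self.train_set, batch_size=32)

    def val_dataloader(self):
        return DataLoader(self.val_set, batch_size=32)

    def on_save_checkpoint(self, checkpoint):
        checkpoint["datamodule_split_seed"] = self.split_seed

    def on_load_checkpoint(self, checkpoint):
        # Only invoked when restoring via trainer.fit() today; the proposal is to
        # call this for validate/test/predict as well so the same split is reproduced.
        self.split_seed = checkpoint["datamodule_split_seed"]
```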
Pitch
Simple fix: move the call up so it also runs for validate/test/predict, and update/add tests.
https://github.com/PyTorchLightning/pytorch-lightning/blob/dbe1662dc38d5217328ab459743b1113869a628c/pytorch_lightning/trainer/trainer.py#L1017-L1018
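With the change, the expected behavior would be that restoring a checkpoint for evaluation also routes it through the datamodule hook, roughly as below (the checkpoint path and model class are placeholders, and RandomSplitDataModule is the sketch from the Motivation section):

```python
# After the proposed change, passing ckpt_path to the evaluation entry points
# would also trigger dm.on_load_checkpoint(checkpoint), not only trainer.fit().
dm = RandomSplitDataModule()
model = MyLitModel()  # hypothetical LightningModule

trainer = pl.Trainer()
trainer.validate(model, datamodule=dm, ckpt_path="path/to/example.ckpt")
trainer.test(model, datamodule=dm, ckpt_path="path/to/example.ckpt")
```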
Additional context
If you enjoy Lightning, check out our other projects! ⚡
Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, finetuning and solving problems with deep learning
Bolts: Pretrained SOTA Deep Learning models, callbacks and more for research and production with PyTorch Lightning and PyTorch
Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.