Showing 7 changed files with 151 additions and 131 deletions.
@@ -0,0 +1,123 @@
"""
Profiling your training run can help you understand if there are any bottlenecks in your code.

Built-in checks
---------------
PyTorch Lightning supports profiling standard actions in the training loop out of the box, including:

- on_epoch_start
- on_epoch_end
- on_batch_start
- tbptt_split_batch
- model_forward
- model_backward
- on_after_backward
- optimizer_step
- on_batch_end
- training_step_end
- on_training_end

Enable simple profiling
-----------------------
If you only wish to profile the standard actions, you can set `profiler=True` when constructing
your `Trainer` object.

.. code-block:: python

    trainer = Trainer(..., profiler=True)

The profiler's results will be printed at the completion of a training `fit()`.

.. code-block:: python

    Profiler Report

    Action             |  Mean duration (s)  |  Total time (s)
    -----------------------------------------------------------------
    on_epoch_start     |  5.993e-06          |  5.993e-06
    get_train_batch    |  0.0087412          |  16.398
    on_batch_start     |  5.0865e-06         |  0.0095372
    model_forward      |  0.0017818          |  3.3408
    model_backward     |  0.0018283          |  3.4282
    on_after_backward  |  4.2862e-06         |  0.0080366
    optimizer_step     |  0.0011072          |  2.0759
    on_batch_end       |  4.5202e-06         |  0.0084753
    on_epoch_end       |  3.919e-06          |  3.919e-06
    on_train_end       |  5.449e-06          |  5.449e-06
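Passing `profiler=True` should be equivalent to constructing the basic profiler yourself and
handing it to the `Trainer` (a minimal sketch; the explicit form is useful when you want to keep
a reference to the profiler object):

.. code-block:: python

    from pytorch_lightning.profiler import Profiler

    profiler = Profiler()
    trainer = Trainer(..., profiler=profiler)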
Advanced Profiling
------------------
If you want more information on the functions called during each event, you can use the `AdvancedProfiler`.
This option uses Python's cProfiler_ to provide a report of time spent on *each* function called within your code.

.. _cProfiler: https://docs.python.org/3/library/profile.html#module-cProfile

.. code-block:: python

    profiler = AdvancedProfiler()
    trainer = Trainer(..., profiler=profiler)

The profiler's results will be printed at the completion of a training `fit()`. This profiler
report can be quite long, so you can also specify an `output_filename` to save the report instead
of logging it to the output in your terminal. The output below shows the profiling for the action
`get_train_batch`.

.. code-block:: python

    Profiler Report

    Profile stats for: get_train_batch
            4869394 function calls (4863767 primitive calls) in 18.893 seconds
    Ordered by: cumulative time
    List reduced from 76 to 10 due to restriction <10>

    ncalls     tottime  percall  cumtime  percall  filename:lineno(function)
    3752/1876  0.011    0.000    18.887   0.010    {built-in method builtins.next}
    1876       0.008    0.000    18.877   0.010    dataloader.py:344(__next__)
    1876       0.074    0.000    18.869   0.010    dataloader.py:383(_next_data)
    1875       0.012    0.000    18.721   0.010    fetch.py:42(fetch)
    1875       0.084    0.000    18.290   0.010    fetch.py:44(<listcomp>)
    60000      1.759    0.000    18.206   0.000    mnist.py:80(__getitem__)
    60000      0.267    0.000    13.022   0.000    transforms.py:68(__call__)
    60000      0.182    0.000    7.020    0.000    transforms.py:93(__call__)
    60000      1.651    0.000    6.839    0.000    functional.py:42(to_tensor)
    60000      0.260    0.000    5.734    0.000    transforms.py:167(__call__)
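To keep the report instead of scrolling past it, the `output_filename` argument mentioned above
points the same report at a file (a sketch; the file name is an arbitrary example):

.. code-block:: python

    # save the (potentially very long) cProfile report to disk
    profiler = AdvancedProfiler(output_filename="profiler_report.txt")
    trainer = Trainer(..., profiler=profiler)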
You can also reference this profiler in your LightningModule to profile specific actions of interest.
If you don't want to always have the profiler turned on, you can optionally pass a `PassThroughProfiler`,
which allows you to skip profiling without having to make any code changes. Each profiler has a
method `profile()` which returns a context manager. Simply pass in the name of the action you want
to track, and the profiler will record performance for code executed within this context.

.. code-block:: python

    from pytorch_lightning.profiler import Profiler, PassThroughProfiler

    class MyModel(LightningModule):
        def __init__(self, hparams, profiler=None):
            super().__init__()
            self.hparams = hparams
            # fall back to a no-op profiler when none is supplied
            self.profiler = profiler or PassThroughProfiler()

        def custom_processing_step(self, data):
            with self.profiler.profile('my_custom_action'):
                # custom processing step
                return data

    profiler = Profiler()
    model = MyModel(hparams, profiler)
    trainer = Trainer(profiler=profiler, max_epochs=1)
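If you need the report outside of `fit()`, a profiler should also let you trigger it manually
(a sketch, assuming the profilers' `describe()` method logs the summaries shown above):

.. code-block:: python

    profiler.describe()  # log recorded durations for all profiled actions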
""" | ||
|
||
from pytorch_lightning.debugging.profilers import BaseProfiler, Profiler, AdvancedProfiler, PassThroughProfiler

__all__ = [
    'BaseProfiler',
    'Profiler',
    'AdvancedProfiler',
    'PassThroughProfiler',
]
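With this layout, downstream code imports the profilers from the new package; a minimal usage
sketch (the old `pytorch_lightning.profiler` path continues to work via the shim in the diff below):

.. code-block:: python

    from pytorch_lightning.debugging import AdvancedProfiler

    trainer = Trainer(..., profiler=AdvancedProfiler())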
File renamed without changes.
@@ -1,122 +1,13 @@
(removed: the module docstring reproduced in full above, now relocated to `pytorch_lightning/debugging/__init__.py`)
"""
.. warning:: `profiler` package has been renamed to `debugging` since v0.7.2.
    The deprecated module name will be removed in v0.9.0.
"""

(removed: the old direct re-export and its `__all__`)

from pytorch_lightning.profiler.profiler import Profiler, AdvancedProfiler, PassThroughProfiler

__all__ = [
    'Profiler',
    'AdvancedProfiler',
    'PassThroughProfiler',
]

(added: the deprecation shim)

import warnings

warnings.warn("`profiler` package has been renamed to `debugging` since v0.7.2."
              " The deprecated module name will be removed in v0.9.0.", DeprecationWarning)

from pytorch_lightning.debugging import (  # noqa: E402
    BaseProfiler, Profiler, AdvancedProfiler, PassThroughProfiler
)
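As a quick sanity check of the shim (a sketch, not part of the commit; it assumes
`pytorch_lightning.profiler` has not been imported earlier in the process, since module-level
warnings fire only on first import):

.. code-block:: python

    import warnings

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        from pytorch_lightning.profiler import AdvancedProfiler  # old, deprecated path

    assert any(issubclass(w.category, DeprecationWarning) for w in caught)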