Commit f93ea37

Merge branch 'master' into usort

2 parents: ce4d6b9 + 4c2b62b

2 files changed: +16 -7 lines

Diff for: docs/source/metrics.rst (+11 -2)

@@ -4,8 +4,17 @@ ignite.metrics
 Metrics provide a way to compute various quantities of interest in an online
 fashion without having to store the entire output history of a model.
 
-In practice a user needs to attach the metric instance to an engine. The metric
-value is then computed using the output of the engine's ``process_function``:
+Attach Engine API
+------------------
+
+As stated above, metrics are computed in an online fashion: the metric instance accumulates internal counters on
+each iteration, and the metric value is computed once the epoch ends. Internal counters are reset after every epoch. In practice, this is done with the
+help of three methods: :meth:`~ignite.metrics.metric.Metric.reset()`, :meth:`~ignite.metrics.metric.Metric.update()` and :meth:`~ignite.metrics.metric.Metric.compute()`.
+
+Therefore, a user needs to attach the metric instance to the engine so that these three methods can be triggered on certain :class:`~ignite.engine.events.Events`.
+The :meth:`~ignite.metrics.metric.Metric.reset()` method is triggered on the ``EPOCH_STARTED`` event and resets the metric to its initial state. The :meth:`~ignite.metrics.metric.Metric.update()` method is triggered
+on the ``ITERATION_COMPLETED`` event and updates the state of the metric using the passed batch output. :meth:`~ignite.metrics.metric.Metric.compute()` is triggered on the ``EPOCH_COMPLETED``
+event and computes the metric value based on the accumulated state. The metric value is computed using the output of the engine's ``process_function``:
 
 .. code-block:: python
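The reset/update/compute cycle described in the added documentation can be sketched without ignite at all. The `AverageMetric` class and `run_epoch` driver below are hypothetical stand-ins for a Metric attached to an Engine; the comments mark which engine event would trigger each call:

```python
class AverageMetric:
    """Hypothetical metric following the reset/update/compute protocol."""

    def reset(self):
        # EPOCH_STARTED: return the internal counters to their initial state
        self._sum = 0.0
        self._count = 0

    def update(self, output):
        # ITERATION_COMPLETED: fold one batch output into the counters
        self._sum += output
        self._count += 1

    def compute(self):
        # EPOCH_COMPLETED: derive the metric value from the accumulated state
        if self._count == 0:
            raise ValueError("compute() called before any update()")
        return self._sum / self._count


def run_epoch(metric, batch_outputs):
    # Simulates an engine triggering the three methods on its events
    metric.reset()              # EPOCH_STARTED
    for out in batch_outputs:
        metric.update(out)      # ITERATION_COMPLETED
    return metric.compute()     # EPOCH_COMPLETED


m = AverageMetric()
print(run_epoch(m, [1.0, 2.0, 3.0]))  # 2.0
print(run_epoch(m, [10.0, 20.0]))     # 15.0 -- counters were reset between epochs
```

The second epoch returning 15.0 rather than a blend of both epochs is exactly the effect of `reset()` firing on every `EPOCH_STARTED`.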

Diff for: ignite/handlers/param_scheduler.py (+5 -5)

@@ -804,22 +804,22 @@ class LRScheduler(ParamScheduler):
 
 default_trainer = get_default_trainer()
 
-@default_trainer.on(Events.ITERATION_STARTED)
-def print_lr():
-    print(default_optimizer.param_groups[0]["lr"])
-
 from torch.optim.lr_scheduler import StepLR
 
 torch_lr_scheduler = StepLR(default_optimizer, step_size=3, gamma=0.1)
 
 scheduler = LRScheduler(torch_lr_scheduler)
 
+@default_trainer.on(Events.ITERATION_COMPLETED)
+def print_lr():
+    print(default_optimizer.param_groups[0]["lr"])
+
 # In this example, we assume to have installed PyTorch>=1.1.0
 # (with new `torch.optim.lr_scheduler` behaviour) and
 # we attach the scheduler to Events.ITERATION_COMPLETED
 # instead of Events.ITERATION_STARTED to make sure to use
 # the first lr value from the optimizer, otherwise it will be skipped:
-default_trainer.add_event_handler(Events.ITERATION_STARTED, scheduler)
+default_trainer.add_event_handler(Events.ITERATION_COMPLETED, scheduler)
 
 default_trainer.run([0] * 8, max_epochs=1)
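The comment in the diff explains why the handler moved from `ITERATION_STARTED` to `ITERATION_COMPLETED`: with PyTorch>=1.1 semantics, the scheduler steps after the lr has been used, so stepping at iteration start would skip the optimizer's first lr value. A small stand-alone simulation makes the difference visible; the `run` helper and its StepLR-like decay rule are illustrative, not ignite's actual implementation:

```python
def run(attach_event, num_iters=8, step_size=3, gamma=0.1, lr0=0.1):
    """Simulate a StepLR-like schedule: every `step_size` scheduler
    steps, lr is multiplied by `gamma`. Returns the lr each iteration
    actually trains with, depending on where the scheduler is attached."""
    lr = lr0
    steps = 0
    seen = []

    def scheduler_step():
        nonlocal lr, steps
        steps += 1
        if steps % step_size == 0:
            lr *= gamma

    for _ in range(num_iters):
        if attach_event == "ITERATION_STARTED":
            scheduler_step()        # steps *before* the lr is used
        seen.append(round(lr, 6))   # the lr this iteration trains with
        if attach_event == "ITERATION_COMPLETED":
            scheduler_step()        # steps *after* the lr is used
    return seen


print(run("ITERATION_COMPLETED"))
# [0.1, 0.1, 0.1, 0.01, 0.01, 0.01, 0.001, 0.001]
print(run("ITERATION_STARTED"))
# [0.1, 0.1, 0.01, 0.01, 0.01, 0.001, 0.001, 0.001]
```

Attached at `ITERATION_COMPLETED`, the initial lr of 0.1 is used for all three of its scheduled iterations; attached at `ITERATION_STARTED`, it is used only twice, which is the "first lr value is skipped" behaviour the diff's comment warns about.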

0 commit comments