Metrics docs #2184
Conversation
docs/source/metrics.rst (Outdated)
Example:

.. code-block:: python
Let's make this a test:

-.. code-block:: python
+.. testcode::
1. A Metric class you can use to implement metrics with built-in distributed (ddp) support and device-agnostic behaviour.
2. A collection of popular metrics already implemented for you.
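To illustrate point 1, here is a minimal, framework-free sketch of the accumulate-then-compute pattern such a Metric class supports. The class name and methods below are illustrative only; the real `pytorch_lightning.metrics.Metric` additionally handles ddp reduction and device placement:

```python
class AccuracyMetric:
    """Toy accuracy metric illustrating the update/compute accumulation pattern."""

    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, preds, targets):
        # Accumulate statistics batch by batch.
        self.correct += sum(p == t for p, t in zip(preds, targets))
        self.total += len(preds)

    def compute(self):
        # Reduce the accumulated state to the final metric value.
        return self.correct / self.total
```

In use, `update` is called once per batch and `compute` once per epoch; in a distributed setting the accumulated state would be synced across processes before `compute`.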
Should we mention that the package also comes with an interface to sklearn metrics (with a warning that this is slow due to casting the tensors back and forth)?
We should. We should also mention that scikit-learn has to be installed separately.
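Since scikit-learn would be an optional dependency, the docs could show a guarded availability check. A stdlib-only sketch (the constant name is hypothetical):

```python
import importlib.util

# True only if scikit-learn is importable; the sklearn-backed metrics
# should raise a helpful error (or be skipped) otherwise.
HAS_SKLEARN = importlib.util.find_spec("sklearn") is not None
```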
We should make it clear how to import the different backends. Maybe something like:

```python
# Use native backend
import pytorch_lightning.metrics.native as plm

# Use sklearn backend
import pytorch_lightning.metrics.sklearn as plm

# Use default (native if available, else sklearn)
import pytorch_lightning.metrics as plm
```
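The "default" behaviour proposed above (native if available, else sklearn) amounts to an import fallback. A generic sketch of that pattern, using stdlib modules as stand-ins since the proposed backend module paths are not final:

```python
import importlib

def load_backend(preferred, fallback):
    """Return the preferred backend module, falling back if it is missing."""
    try:
        return importlib.import_module(preferred)
    except ImportError:
        return importlib.import_module(fallback)

# Stand-in modules: a nonexistent "native" backend falls back to math.
backend = load_backend("nonexistent_native_backend", "math")
```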
@SkafteNicki can you add the sklearn details in here?
Maybe we should also show an example of how to parametrize ddp reduction, and an example where you use a functional metric (maybe also with ddp reduction/autocasting).
I do not think it is a good idea to mix `.. testcode::` into Python files:
- it forces every contributor to run a test on the docs instead of a simple pytest
- it is not consistent with what we already have
- moreover, there is no simple way to run the Sphinx test on a particular file; you always have to run them all

So let's stay with doctest in Python files and Sphinx tests in RST files.
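For context, the plain-doctest style argued for here keeps examples in Python files runnable with `python -m doctest file.py` or `pytest --doctest-modules`. A minimal sketch (the function is hypothetical):

```python
def accuracy(preds, targets):
    """Fraction of matching predictions.

    >>> accuracy([0, 1, 2, 2], [0, 1, 2, 1])
    0.75
    """
    correct = sum(p == t for p, t in zip(preds, targets))
    return correct / len(preds)
```

Running `pytest --doctest-modules` collects and checks the docstring example above without any Sphinx involvement.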
@@ -1,30 +1,15 @@
"""
I would keep a short version here...
"""
super().__init__(name='precision',
                 reduce_group=reduce_group,
                 reduce_op=reduce_op)

# FIXME: bug with result value
@SkafteNicki mind checking this one?
Co-authored-by: Jirka Borovec <[email protected]>
Out::

    tensor(0.7500)
this does not make sense
>>> target = torch.tensor([0, 1, 2, 2])
>>> metric = Precision()
>>> metric(pred, target)
tensor(1.)
@williamFalcon is this correct?
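One way to sanity-check the doctest output: compute macro-averaged precision in plain Python. This is a sketch, not the library's implementation; the handling of never-predicted classes is an assumption, and `pred` is not shown in the snippet above, so the example inputs are illustrative:

```python
def macro_precision(pred, target, num_classes):
    """Per-class precision (TP / predicted positives), averaged over classes.

    Classes that are never predicted contribute 0.0 here (a convention,
    not necessarily what the library does).
    """
    per_class = []
    for c in range(num_classes):
        predicted = sum(1 for p in pred if p == c)
        tp = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        per_class.append(tp / predicted if predicted else 0.0)
    return sum(per_class) / num_classes

# If pred matched target exactly, every class has precision 1.0,
# which would explain the tensor(1.) above:
macro_precision([0, 1, 2, 2], [0, 1, 2, 2], num_classes=3)  # -> 1.0
```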
Fixes #2142