remove deprecated compute_on_step from torchmetrics/image (#979)
* remove deprecated compute_on_step from torchmetrics/image

* flake8

* update changelog

* fix overwrite in changelog

Co-authored-by: Jirka <[email protected]>
Co-authored-by: Nicki Skafte Detlefsen <[email protected]>
3 people authored Apr 25, 2022
1 parent f4afaf5 commit ee60f67
Showing 8 changed files with 13 additions and 80 deletions.
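
The user-visible effect of this commit is simply that image metrics no longer accept the deprecated `compute_on_step` constructor argument. A minimal before/after sketch (hedged: it assumes torchmetrics at this commit or later; `PeakSignalNoiseRatio` is just a representative metric and the tensors are toy data):

```python
import torch
from torchmetrics import PeakSignalNoiseRatio

# Before (deprecated since v0.8):
#   psnr = PeakSignalNoiseRatio(compute_on_step=False)
# After this commit the argument is removed, so it is simply omitted:
psnr = PeakSignalNoiseRatio()

preds = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
target = torch.tensor([[3.0, 2.0], [1.0, 0.0]])

psnr.update(preds, target)  # accumulate state batch by batch
print(psnr.compute())       # aggregate PSNR over everything accumulated (~2.55 here)
```

The accumulate-then-compute pattern is unchanged; only the constructor argument disappears.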
3 changes: 3 additions & 0 deletions CHANGELOG.md
@@ -41,6 +41,9 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Removed deprecated `compute_on_step` argument in Regression ([#967](https://github.com/PyTorchLightning/metrics/pull/967))


- Removed deprecated `compute_on_step` argument in Image ([#979](https://github.com/PyTorchLightning/metrics/pull/979))


### Fixed

- Fixed "Sort currently does not support bool dtype on CUDA" error in MAP for empty preds ([#983](https://github.com/PyTorchLightning/metrics/pull/983))
14 changes: 1 addition & 13 deletions torchmetrics/image/fid.py
@@ -148,10 +148,6 @@ class FrechetInceptionDistance(Metric):
is installed. Either install as ``pip install torchmetrics[image]`` or
``pip install torch-fidelity``
.. note:: the ``forward`` method can be used, but ``compute_on_step`` is disabled by default (opposite of
all other metrics), as this metric does not really make sense to calculate on a single batch. This
means that by default ``forward`` will just call ``update`` underneath.
Args:
feature:
Either an integer or ``nn.Module``:
@@ -164,13 +160,6 @@ class FrechetInceptionDistance(Metric):
reset_real_features: Whether to also reset the real features. Since in many cases the real dataset does not
change, the features can be cached to avoid recomputing them, which is costly. Set this to ``False`` if
your dataset does not change.
compute_on_step:
Forward only calls ``update()`` and returns None if this is set to False.
.. deprecated:: v0.8
Argument has no use anymore and will be removed v0.9.
kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
References:
@@ -215,10 +204,9 @@ def __init__(
self,
feature: Union[int, torch.nn.Module] = 2048,
reset_real_features: bool = True,
compute_on_step: Optional[bool] = None,
**kwargs: Dict[str, Any],
) -> None:
super().__init__(compute_on_step=compute_on_step, **kwargs)
super().__init__(**kwargs)

rank_zero_warn(
"Metric `FrechetInceptionDistance` will save all extracted features in buffer."
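
Below is a hedged usage sketch of the trimmed `FrechetInceptionDistance` constructor; it mirrors the library's documented example and assumes `torch-fidelity` is installed (see the note above), with random uint8 tensors standing in for real and generated images:

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=64)  # compute_on_step no longer accepted

# uint8 images in [0, 255] with shape [N, 3, H, W]
real_imgs = torch.randint(0, 200, (10, 3, 299, 299), dtype=torch.uint8)
fake_imgs = torch.randint(100, 255, (10, 3, 299, 299), dtype=torch.uint8)

fid.update(real_imgs, real=True)   # accumulate features of real images
fid.update(fake_imgs, real=False)  # accumulate features of generated images
score = fid.compute()              # FID over everything accumulated so far
```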
16 changes: 2 additions & 14 deletions torchmetrics/image/inception.py
@@ -11,7 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any, Dict, List, Optional, Tuple, Union
from typing import Any, Dict, List, Tuple, Union

import torch
from torch import Tensor
@@ -46,10 +46,6 @@ class InceptionScore(Metric):
is installed. Either install as ``pip install torchmetrics[image]`` or
``pip install torch-fidelity``
.. note:: the ``forward`` method can be used, but ``compute_on_step`` is disabled by default (opposite of
all other metrics), as this metric does not really make sense to calculate on a single batch. This
means that by default ``forward`` will just call ``update`` underneath.
Args:
feature:
Either an str, integer or ``nn.Module``:
@@ -60,13 +56,6 @@ class InceptionScore(Metric):
an ``[N,d]`` matrix where ``N`` is the batch size and ``d`` is the feature size.
splits: integer determining how many splits the inception score calculation should be split among
compute_on_step:
Forward only calls ``update()`` and returns None if this is set to False.
.. deprecated:: v0.8
Argument has no use anymore and will be removed v0.9.
kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
References:
@@ -105,10 +94,9 @@ def __init__(
self,
feature: Union[str, int, torch.nn.Module] = "logits_unbiased",
splits: int = 10,
compute_on_step: Optional[bool] = None,
**kwargs: Dict[str, Any],
) -> None:
super().__init__(compute_on_step=compute_on_step, **kwargs)
super().__init__(**kwargs)

rank_zero_warn(
"Metric `InceptionScore` will save all extracted features in buffer."
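
A corresponding sketch for `InceptionScore` under the same assumptions (random images as placeholders; `torch-fidelity` required):

```python
import torch
from torchmetrics.image.inception import InceptionScore

inception = InceptionScore(splits=10)  # compute_on_step removed from the signature

imgs = torch.randint(0, 255, (100, 3, 299, 299), dtype=torch.uint8)
inception.update(imgs)

is_mean, is_std = inception.compute()  # mean and std of the score over the splits
```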
13 changes: 1 addition & 12 deletions torchmetrics/image/kid.py
@@ -89,10 +89,6 @@ class KernelInceptionDistance(Metric):
is installed. Either install as ``pip install torchmetrics[image]`` or
``pip install torch-fidelity``
.. note:: the ``forward`` method can be used, but ``compute_on_step`` is disabled by default (opposite of
all other metrics), as this metric does not really make sense to calculate on a single batch. This
means that by default ``forward`` will just call ``update`` underneath.
Args:
feature: Either an str, integer or ``nn.Module``:
@@ -109,12 +105,6 @@ class KernelInceptionDistance(Metric):
reset_real_features: Whether to also reset the real features. Since in many cases the real dataset does not
change, the features can be cached to avoid recomputing them, which is costly. Set this to ``False`` if
your dataset does not change.
compute_on_step:
Forward only calls ``update()`` and returns None if this is set to False.
.. deprecated:: v0.8
Argument has no use anymore and will be removed v0.9.
kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
References:
@@ -173,10 +163,9 @@ def __init__(
gamma: Optional[float] = None, # type: ignore
coef: float = 1.0,
reset_real_features: bool = True,
compute_on_step: Optional[bool] = None,
**kwargs: Dict[str, Any],
) -> None:
super().__init__(compute_on_step=compute_on_step, **kwargs)
super().__init__(**kwargs)

rank_zero_warn(
"Metric `Kernel Inception Distance` will save all extracted features in buffer."
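
And for `KernelInceptionDistance`; note that `subset_size` must not exceed the number of accumulated samples per set (again a hedged sketch with random placeholder images):

```python
import torch
from torchmetrics.image.kid import KernelInceptionDistance

kid = KernelInceptionDistance(subset_size=50)  # compute_on_step removed

real_imgs = torch.randint(0, 200, (100, 3, 299, 299), dtype=torch.uint8)
fake_imgs = torch.randint(100, 255, (100, 3, 299, 299), dtype=torch.uint8)

kid.update(real_imgs, real=True)
kid.update(fake_imgs, real=False)
kid_mean, kid_std = kid.compute()  # mean and std of KID over the sampled subsets
```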
11 changes: 2 additions & 9 deletions torchmetrics/image/lpip.py
@@ -11,7 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any, Dict, List, Optional
from typing import Any, Dict, List

import torch
from torch import Tensor
@@ -59,12 +59,6 @@ class LearnedPerceptualImagePatchSimilarity(Metric):
Args:
net_type: str indicating backbone network type to use. Choose between `'alex'`, `'vgg'` or `'squeeze'`
reduction: str indicating how to reduce over the batch dimension. Choose between `'sum'` or `'mean'`.
compute_on_step:
Forward only calls ``update()`` and returns None if this is set to False.
.. deprecated:: v0.8
Argument has no use anymore and will be removed v0.9.
kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
Raises:
@@ -98,10 +92,9 @@ def __init__(
self,
net_type: str = "alex",
reduction: Literal["sum", "mean"] = "mean",
compute_on_step: Optional[bool] = None,
**kwargs: Dict[str, Any],
) -> None:
super().__init__(compute_on_step=compute_on_step, **kwargs)
super().__init__(**kwargs)

if not _LPIPS_AVAILABLE:
raise ModuleNotFoundError(
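
A sketch for the updated `LearnedPerceptualImagePatchSimilarity`, assuming the optional `lpips` package is installed (see the availability check below) and that inputs are float images scaled to [-1, 1]:

```python
import torch
from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity

lpips = LearnedPerceptualImagePatchSimilarity(net_type="vgg")  # no compute_on_step

# float images in [-1, 1] with shape [N, 3, H, W]
img1 = (torch.rand(10, 3, 100, 100) * 2) - 1
img2 = (torch.rand(10, 3, 100, 100) * 2) - 1

score = lpips(img1, img2)  # LPIPS distance for this batch (also updates the state)
```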
9 changes: 1 addition & 8 deletions torchmetrics/image/psnr.py
@@ -44,12 +44,6 @@ class PeakSignalNoiseRatio(Metric):
dim:
Dimensions to reduce PSNR scores over, provided as either an integer or a list of integers. Default is
None meaning scores will be reduced across all dimensions and all batches.
compute_on_step:
Forward only calls ``update()`` and returns None if this is set to False.
.. deprecated:: v0.8
Argument has no use anymore and will be removed v0.9.
kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
Raises:
@@ -78,10 +72,9 @@ def __init__(
base: float = 10.0,
reduction: Literal["elementwise_mean", "sum", "none", None] = "elementwise_mean",
dim: Optional[Union[int, Tuple[int, ...]]] = None,
compute_on_step: Optional[bool] = None,
**kwargs: Dict[str, Any],
) -> None:
super().__init__(compute_on_step=compute_on_step, **kwargs)
super().__init__(**kwargs)

if dim is None and reduction != "elementwise_mean":
rank_zero_warn(f"The `reduction={reduction}` will not have any effect when `dim` is None.")
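
Beyond the plain usage shown earlier, the `dim` argument documented above lets PSNR be reduced per sample; a hedged sketch (note that `data_range` has to be given explicitly when `dim` is set):

```python
import torch
from torchmetrics import PeakSignalNoiseRatio

# One PSNR value per image: reduce over channel/height/width only.
psnr = PeakSignalNoiseRatio(data_range=1.0, reduction="none", dim=(1, 2, 3))

preds = torch.rand(4, 3, 32, 32)
target = torch.rand(4, 3, 32, 32)

per_image = psnr(preds, target)  # expected: one score per image in the batch
```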
18 changes: 2 additions & 16 deletions torchmetrics/image/ssim.py
@@ -47,12 +47,6 @@ class StructuralSimilarityIndexMeasure(Metric):
return_contrast_sensitivity: If true, the contrast term is returned as a second argument.
The luminance term can be obtained with ``luminance = ssim/contrast``.
Mutually exclusive with ``return_full_image``.
compute_on_step:
Forward only calls ``update()`` and returns None if this is set to False.
.. deprecated:: v0.8
Argument has no use anymore and will be removed v0.9.
kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
Return:
@@ -81,12 +75,11 @@ def __init__(
data_range: Optional[float] = None,
k1: float = 0.01,
k2: float = 0.03,
compute_on_step: Optional[bool] = None,
return_full_image: bool = False,
return_contrast_sensitivity: bool = False,
**kwargs: Dict[str, Any],
) -> None:
super().__init__(compute_on_step=compute_on_step, **kwargs)
super().__init__(**kwargs)
rank_zero_warn(
"Metric `SSIM` will save all targets and"
" predictions in buffer. For large datasets this may lead"
@@ -157,12 +150,6 @@ class MultiScaleStructuralSimilarityIndexMeasure(Metric):
normalize: When MultiScaleStructuralSimilarityIndexMeasure loss is used for training, it is desirable to use
normalization to improve the training stability. This `normalize` argument is out of scope of the original
implementation [1], and it is adapted from https://github.com/jorge-pessoa/pytorch-msssim instead.
compute_on_step:
Forward only calls ``update()`` and returns None if this is set to False.
.. deprecated:: v0.8
Argument has no use anymore and will be removed v0.9.
kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
Return:
@@ -206,10 +193,9 @@ def __init__(
k2: float = 0.03,
betas: Tuple[float, ...] = (0.0448, 0.2856, 0.3001, 0.2363, 0.1333),
normalize: Literal["relu", "simple", None] = None,
compute_on_step: Optional[bool] = None,
**kwargs: Dict[str, Any],
) -> None:
super().__init__(compute_on_step=compute_on_step, **kwargs)
super().__init__(**kwargs)
rank_zero_warn(
"Metric `MS_SSIM` will save all targets and"
" predictions in buffer. For large datasets this may lead"
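
For completeness, a sketch of both SSIM variants after the change; it assumes inputs with `data_range=1.0` and images large enough for the default multi-scale setting:

```python
import torch
from torchmetrics import (
    MultiScaleStructuralSimilarityIndexMeasure,
    StructuralSimilarityIndexMeasure,
)

preds = torch.rand(3, 3, 256, 256)
target = preds * 0.75  # correlated target so the scores are non-trivial

ssim = StructuralSimilarityIndexMeasure(data_range=1.0)              # no compute_on_step
ms_ssim = MultiScaleStructuralSimilarityIndexMeasure(data_range=1.0)

print(ssim(preds, target))     # single-scale SSIM for the batch
print(ms_ssim(preds, target))  # multi-scale SSIM over the default scales
```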
9 changes: 1 addition & 8 deletions torchmetrics/image/uqi.py
@@ -35,12 +35,6 @@ class UniversalImageQualityIndex(Metric):
- ``'none'`` or ``None``: no reduction will be applied
data_range: Range of the image. If ``None``, it is determined from the image (max - min)
compute_on_step:
Forward only calls ``update()`` and returns None if this is set to False.
.. deprecated:: v0.8
Argument has no use anymore and will be removed v0.9.
kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
@@ -67,10 +61,9 @@ def __init__(
sigma: Sequence[float] = (1.5, 1.5),
reduction: Literal["elementwise_mean", "sum", "none", None] = "elementwise_mean",
data_range: Optional[float] = None,
compute_on_step: Optional[bool] = None,
**kwargs: Dict[str, Any],
) -> None:
super().__init__(compute_on_step=compute_on_step, **kwargs)
super().__init__(**kwargs)
rank_zero_warn(
"Metric `UniversalImageQualityIndex` will save all targets and"
" predictions in buffer. For large datasets this may lead"
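
Finally, a sketch for `UniversalImageQualityIndex` with the trimmed constructor (toy tensors; default `elementwise_mean` reduction):

```python
import torch
from torchmetrics.image.uqi import UniversalImageQualityIndex

uqi = UniversalImageQualityIndex()  # compute_on_step removed here as well

preds = torch.rand(2, 1, 16, 16)
target = preds * 0.75

print(uqi(preds, target))  # single aggregated UQI value for the batch
```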
