All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
Note: we move fast, but we still preserve backwards compatibility one minor version (one feature release) back.
- Added NLP metrics:
- Added a default VSCode devcontainer configuration (#621)
- Added Signal to Distortion Ratio (`SDR`) to audio package (#565)
- Added `MinMaxMetric` to wrappers (#556)
- Added `ignore_index` to retrieval metrics (#676)
- Added support for multi references in `ROUGEScore` (#680)
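The `MinMaxMetric` wrapper mentioned above tracks the running minimum and maximum of a wrapped metric's value across `compute` calls. A minimal pure-Python sketch of that idea (illustrative only — `MinMaxTracker` and the zero-argument `base_metric` callable are assumptions, not the torchmetrics API):

```python
class MinMaxTracker:
    """Sketch of the MinMaxMetric idea: wrap a metric and remember
    the smallest and largest value it has ever returned."""

    def __init__(self, base_metric):
        self.base_metric = base_metric  # any callable returning a float
        self.min_val = float("inf")
        self.max_val = float("-inf")

    def compute(self):
        raw = self.base_metric()
        self.min_val = min(self.min_val, raw)
        self.max_val = max(self.max_val, raw)
        return {"raw": raw, "min": self.min_val, "max": self.max_val}


# usage: wrap an "accuracy" that fluctuates over epochs
scores = iter([0.5, 0.7, 0.6])
tracker = MinMaxTracker(lambda: next(scores))
print(tracker.compute())  # {'raw': 0.5, 'min': 0.5, 'max': 0.5}
print(tracker.compute())  # {'raw': 0.7, 'min': 0.5, 'max': 0.7}
print(tracker.compute())  # {'raw': 0.6, 'min': 0.5, 'max': 0.7}
```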
- Scalar metrics will now consistently have additional dimensions squeezed (#622)
- Metrics having third-party dependencies removed from global import (#463)
- `BLEUScore` now expects untokenized input to stay consistent with all the other text metrics (#640)
- Renamed `torchmetrics.collections` to `torchmetrics.metrics_collections` to avoid clashing with the system's `collections` package (#695)
- Renamed IoU -> Jaccard Index (#662)
- Removed `embedding_similarity` metric (#638)
- Removed argument `concatenate_texts` from `wer` metric (#638)
- Removed arguments `newline_sep` and `decimal_places` from `rouge` metric (#638)
- Fixed use of `torch.sort`, which currently does not support `bool` dtype on CUDA (#665)
- Fixed `mAP` so that it properly checks whether ground truths are empty (#684)
- Fixed initialization of tensors to be on the correct device for the `MAP` metric (#673)
- Migrated `MAP` metrics from pycocotools to PyTorch (#632)
- Use `torch.topk` instead of `torch.argsort` in retrieval precision for a speedup (#627)
- Fixed empty predictions in `MAP` metric (#594, #610, #624)
- Fixed edge case of AUROC with `average=weighted` on GPU (#606)
- Fixed `forward` in compositional metrics (#645)
- Added audio metrics:
- Added Information retrieval metrics:
- Added NLP metrics:
- Added other metrics:
- Added `MAP` (mean average precision) metric to new detection package (#467)
- Added support for float targets in `nDCG` metric (#437)
- Added `average` argument to `AveragePrecision` metric for reducing multi-label and multi-class problems (#477)
- Added `MultioutputWrapper` (#510)
- Added metric sweeping:
- Added simple aggregation metrics: `SumMetric`, `MeanMetric`, `CatMetric`, `MinMetric`, `MaxMetric` (#506)
- Added pairwise submodule with metrics (#553):
  - `pairwise_cosine_similarity`
  - `pairwise_euclidean_distance`
  - `pairwise_linear_similarity`
  - `pairwise_manhatten_distance`
- `AveragePrecision` will now by default output the `macro` average for multilabel and multiclass problems (#477)
- `half`, `double`, `float` will no longer change the dtype of the metric states. Use `metric.set_dtype` instead (#493)
- Renamed `AverageMeter` to `MeanMetric` (#506)
- Changed `is_differentiable` from property to a constant attribute (#551)
- `ROC` and `AUROC` will no longer throw an error when either the positive or negative class is missing. Instead, a score of 0 is returned together with a warning
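The new missing-class behaviour of `ROC`/`AUROC` can be sketched as a guard around any scoring function — `safe_auroc` is a hypothetical helper for illustration, not the torchmetrics implementation:

```python
import warnings

def safe_auroc(preds, target, auroc_fn=None):
    """If target contains only one class, AUROC is undefined:
    warn and return 0.0 instead of raising, mirroring the new behaviour."""
    if len(set(target)) < 2:
        warnings.warn("Only one class present in target; returning 0 for AUROC.")
        return 0.0
    return auroc_fn(preds, target)  # delegate to a real AUROC implementation

print(safe_auroc([0.2, 0.9], [1, 1]))  # 0.0 (with a warning)
```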
- Deprecated `torchmetrics.functional.self_supervised.embedding_similarity` in favour of the new pairwise submodule
- Removed `dtype` property (#493)
- Fixed bug in `F1` with `average='macro'` and `ignore_index != None` (#495)
- Fixed bug in `pit` by using the returned first result to initialize device and type (#533)
- Fixed `SSIM` metric using too much memory (#539)
- Fixed bug where the `device` property was not properly updated when the metric was a child of a module (#542)
- Added `device` and `dtype` properties (#462)
- Added `TextTester` class for robustly testing text metrics (#450)
- Added support for float targets in `nDCG` metric (#437)
- Removed `rouge-score` as dependency for text package (#443)
- Removed `jiwer` as dependency for text package (#446)
- Removed `bert-score` as dependency for text package (#473)
- Fixed ranking of samples in `SpearmanCorrCoef` metric (#448)
- Fixed bug where compositional metrics were unable to sync because of a type mismatch (#454)
- Fixed metric hashing (#478)
- Fixed `BootStrapper` metrics not working on GPU (#462)
- Fixed the semantic ordering of kernel height and width in `SSIM` metric (#474)
- Added Text-related (NLP) metrics:
- Added `MetricTracker` wrapper metric for keeping track of the same metric over multiple epochs (#238)
- Added other metrics:
- Added support in `nDCG` metric for targets with values larger than 1 (#349)
- Added support for negative targets in `nDCG` metric (#378)
- Added `None` as reduction option in `CosineSimilarity` metric (#400)
- Allowed passing labels in (n_samples, n_classes) to `AveragePrecision` (#386)
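As background for the graded-target `nDCG` entries above, one common form of (normalized) discounted cumulative gain works directly on float relevances — a small illustrative sketch, not the library code:

```python
import math

def dcg(relevances):
    """DCG = sum(rel_i / log2(i + 1)) over 1-indexed ranks i."""
    return sum(rel / math.log2(i + 1) for i, rel in enumerate(relevances, start=1))

def ndcg(relevances):
    """Normalize by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

print(ndcg([3.0, 2.0, 1.0]))  # 1.0 -- already ideally ordered
```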
- Moved `psnr` and `ssim` from `functional.regression.*` to `functional.image.*` (#382)
- Moved `image_gradient` from `functional.image_gradients` to `functional.image.gradients` (#381)
- Moved `R2Score` from `regression.r2score` to `regression.r2` (#371)
- Pearson metric now only stores 6 statistics instead of all predictions and targets (#380)
- Use `torch.argmax` instead of `torch.topk` when `k=1` for better performance (#419)
- Moved check for number of samples in R2 score to support single-sample updating (#426)
- Renamed `r2score` >> `r2_score` and `kldivergence` >> `kl_divergence` in `functional` (#371)
- Moved `bleu_score` from `functional.nlp` to `functional.text.bleu` (#360)
- Removed restriction that `threshold` has to be in the (0,1) range to support logit input (#351, #401)
- Removed restriction that `preds` could not be bigger than `num_classes` to support logit input (#357)
- Removed modules `regression.psnr` and `regression.ssim` (#382)
- Removed (#379):
  - function `functional.mean_relative_error`
  - `num_thresholds` argument in `BinnedPrecisionRecallCurve`
- Fixed bug where classification metrics with `average='macro'` would lead to a wrong result if a class was missing (#303)
- Fixed `weighted`, `multi-class` AUROC computation to allow for 0 observations of some class, as its contribution to the final AUROC is 0 (#376)
- Fixed that `_forward_cache` and `_computed` attributes are also moved to the correct device if the metric is moved (#413)
- Fixed calculation in `IoU` metric when using the `ignore_index` argument (#328)
- Fixed DDP by adding `is_sync` logic to `Metric` (#339)
- Added Image-related metrics:
- Added Audio metrics: SNR, SI_SDR, SI_SNR (#292)
- Added other metrics:
- Added `add_metrics` method to `MetricCollection` for adding additional metrics after initialization (#221)
- Added pre-gather reduction in the case of `dist_reduce_fx="cat"` to reduce communication cost (#217)
- Added better error message for `AUROC` when `num_classes` is not provided for multiclass input (#244)
- Added support for unnormalized scores (e.g. logits) in `Accuracy`, `Precision`, `Recall`, `FBeta`, `F1`, `StatScore`, `Hamming`, `ConfusionMatrix` metrics (#200)
- Added `squared` argument to `MeanSquaredError` for computing `RMSE` (#249)
- Added `is_differentiable` property to `ConfusionMatrix`, `F1`, `FBeta`, `Hamming`, `Hinge`, `IOU`, `MatthewsCorrcoef`, `Precision`, `Recall`, `PrecisionRecallCurve`, `ROC`, `StatScores` (#253)
- Added `sync` and `sync_context` methods for manually controlling when metric states are synced (#302)
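The `squared` argument on `MeanSquaredError` toggles between MSE and its square root (RMSE). A minimal functional sketch of that contract (illustrative only, not the torchmetrics implementation, which works on tensors):

```python
import math

def mean_squared_error(preds, target, squared=True):
    """Return MSE when squared=True, and RMSE (its square root) otherwise."""
    mse = sum((p - t) ** 2 for p, t in zip(preds, target)) / len(preds)
    return mse if squared else math.sqrt(mse)

print(mean_squared_error([1.0, 2.0], [3.0, 2.0]))                 # 2.0
print(mean_squared_error([1.0, 2.0], [3.0, 2.0], squared=False))  # 1.414...
```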
- Forward cache is reset when the `reset` method is called (#260)
- Improved per-class metric handling for imbalanced datasets for `precision`, `recall`, `precision_recall`, `fbeta`, `f1`, `accuracy`, and `specificity` (#204)
- Applied the `torch.jit.unused` decorator to `MetricCollection` forward (#307)
- Renamed `thresholds` argument to binned metrics for manually controlling the thresholds (#322)
- Extended typing (#324, #326, #327)
- Deprecated `functional.mean_relative_error`; use `functional.mean_absolute_percentage_error` (#248)
- Deprecated `num_thresholds` argument in `BinnedPrecisionRecallCurve` (#322)
- Removed argument `is_multiclass` (#319)
- AUC can also support more dimensional inputs when all but one dimension are of size 1 (#242)
- Fixed `dtype` of modular metrics after `reset` has been called (#243)
- Fixed calculation in `matthews_corrcoef` to correctly match the formula (#321)
- Added `is_differentiable` property:
- `MetricCollection` should return metrics with prefix on `items()`, `keys()` (#209)
- Calling `compute` before `update` will now give a warning (#164)
- Removed `numpy` as a direct dependency (#212)
- Fixed AUC calculation and added tests (#197)
- Fixed loading persisted metric states using `load_state_dict()` (#202)
- Fixed `PSNR` not working with `DDP` (#214)
- Fixed metric calculation with unequal batch sizes (#220)
- Fixed metric concatenation for list states for zero-dim input (#229)
- Fixed numerical instability in `AUROC` metric for large input (#230)
- Added `BootStrapper` to easily calculate confidence intervals for metrics (#101)
- Added Binned metrics (#128)
- Added metrics for Information Retrieval (PL^5032):
- Added other metrics:
- Added `average='micro'` as an option in AUROC for multilabel problems (#110)
- Added multilabel support to `ROC` metric (#114)
- Added testing for `half` precision (#77, #135)
- Added `AverageMeter` for ad-hoc averages of values (#138)
- Added `prefix` argument to `MetricCollection` (#70)
- Added `__getitem__` as metric arithmetic operation (#142)
- Added property `is_differentiable` to metrics and tests for differentiability (#154)
- Added support for `average`, `ignore_index` and `mdmc_average` in `Accuracy` metric (#166)
- Added `postfix` arg to `MetricCollection` (#188)
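`BootStrapper`'s confidence intervals come from re-evaluating the wrapped metric on resampled data. A hedged pure-Python sketch of that bootstrap idea (`bootstrap_interval` and `metric_fn` are hypothetical names for illustration, not the torchmetrics API):

```python
import random

def bootstrap_interval(metric_fn, preds, target, n_resamples=200, alpha=0.05, seed=0):
    """Resample (preds, target) pairs with replacement, recompute the
    metric each time, and return the (alpha/2, 1 - alpha/2) quantiles."""
    rng = random.Random(seed)
    n = len(preds)
    stats = []
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(metric_fn([preds[i] for i in idx], [target[i] for i in idx]))
    stats.sort()
    lo = stats[int((alpha / 2) * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# usage: a 95% interval around a simple accuracy score
accuracy = lambda p, t: sum(x == y for x, y in zip(p, t)) / len(p)
low, high = bootstrap_interval(accuracy, [1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 0, 1])
print(low, high)  # an interval around the 5/6 point accuracy
```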
- Changed `ExplainedVariance` from storing all preds/targets to tracking 5 statistics (#68)
- Changed behaviour of `confusionmatrix` for multilabel data to better match `multilabel_confusion_matrix` from sklearn (#134)
- Updated FBeta arguments (#111)
- Changed `reset` method to use `detach.clone()` instead of `deepcopy` when resetting to default (#163)
- Metrics passed as a dict to `MetricCollection` will now always be in deterministic order (#173)
- Allowed `MetricCollection` to pass metrics as arguments (#176)
- Renamed argument `is_multiclass` -> `multiclass` (#162)
- Pruned remaining deprecated code (#92)
- Fixed `_stable_1d_sort` to work when `n >= N` (PL^6177)
- Fixed `_computed` attribute not being correctly reset (#147)
- Fixed BLEU score (#165)
- Fixed backwards compatibility for logging with older versions of pytorch-lightning (#182)
- Decoupled PL dependency (#13)
- Refactored functional metrics to mimic the module-like structure: classification, regression, etc. (#16)
- Refactored utilities - split into topics/submodules (#14)
- Refactored `MetricCollection` (#19)
- Added `Accuracy` metric now generalizing to top-k accuracy for (multi-dimensional) multi-class inputs using the `top_k` parameter (PL^4838)
- Added `Accuracy` metric now enabling the computation of subset accuracy for multi-label or multi-dimensional multi-class inputs with the `subset_accuracy` parameter (PL^4838)
- Added `HammingDistance` metric to compute the hamming distance (loss) (PL^4838)
- Added `StatScores` metric to compute the number of true positives, false positives, true negatives and false negatives (PL^4839)
- Added `R2Score` metric (PL^5241)
- Added `MetricCollection` (PL^4318)
- Added `.clone()` method to metrics (PL^4318)
- Added `IoU` class interface (PL^4704)
- The `Recall` and `Precision` metrics (and their functional counterparts `recall` and `precision`) can now be generalized to Recall@K and Precision@K with the use of the `top_k` parameter (PL^4842)
- Added compositional metrics (PL^5464)
- Added AUC/AUROC class interface (PL^5479)
- Added `QuantizationAwareTraining` callback (PL^5706)
- Added `ConfusionMatrix` class interface (PL^4348)
- Added multiclass AUROC metric (PL^4236)
- Added `PrecisionRecallCurve`, `ROC`, `AveragePrecision` class metrics (PL^4549)
- Classification metrics overhaul (PL^4837)
- Added `F1` class metric (PL^4656)
- Added metrics aggregation in Horovod and fixed early stopping (PL^3775)
- Added `persistent(mode)` method to metrics, to enable and disable metric states being added to `state_dict` (PL^4482)
- Added unification of regression metrics (PL^4166)
- Added persistent flag to `Metric.add_state` (PL^4195)
- Added classification metrics (PL^4043)
- Added new Metrics API (PL^3868, PL^3921)
- Added EMB similarity (PL^3349)
- Added SSIM metrics (PL^2671)
- Added BLEU metrics (PL^2535)