We are excited to announce that TorchMetrics v0.8 is now available. The release includes several new metrics in the classification and image domains and some performance improvements for those working with metric collections.
Metric collections just got faster
Common wisdom dictates that you should never evaluate the performance of your models using only a single metric, but instead a collection of metrics. For example, in classification it is common to simultaneously evaluate accuracy, precision, recall, and F1 score. TorchMetrics has long provided the MetricCollection object for chaining such metrics together, giving an easy interface for calculating them all at once. However, the metrics in such a collection often share some of the underlying computations, which until now were repeated for every metric in the collection. TorchMetrics v0.8 introduces the concept of compute_groups in MetricCollection: by default, metrics that share some of the same computations are auto-detected and grouped, so the shared work is only done once.
Thus, if you are using MetricCollections in your code, upgrading to TorchMetrics v0.8 should automatically make your code run faster without any code changes.
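Here is a minimal sketch of what this looks like in practice for a hypothetical three-class problem (the tensors, num_classes=3, and averaging choices are made up for illustration):

```python
import torch
from torchmetrics import Accuracy, F1Score, MetricCollection, Precision, Recall

# compute_groups=True is the default: metrics that rely on the same underlying
# statistics are detected during the first update and share their state.
collection = MetricCollection(
    Accuracy(num_classes=3),
    Precision(num_classes=3, average="macro"),
    Recall(num_classes=3, average="macro"),
    F1Score(num_classes=3, average="macro"),
)

preds = torch.randn(100, 3).softmax(dim=-1)  # dummy multiclass predictions
target = torch.randint(3, (100,))            # dummy integer targets

collection.update(preds, target)
print(collection.compute())        # one value per metric, computed from the shared state
print(collection.compute_groups)   # shows which metrics ended up grouped together
```

If the grouping ever causes problems, it can be turned off again by passing compute_groups=False to MetricCollection.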
Many exciting new metrics
TorchMetrics v0.8 includes several new metrics within the classification and image domains, available in both the functional and modular APIs. We refer to the documentation for a full description of each metric if you want to learn more.
SpectralAngleMapper, or SAM, was added to the image package. This metric computes the spectral similarity between estimated and reference image spectra.
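A small sketch of how the modular version might be called, assuming dummy 8-band images of shape (batch, channels, height, width):

```python
import torch
from torchmetrics import SpectralAngleMapper

preds = torch.rand(4, 8, 16, 16)   # estimated spectral image
target = torch.rand(4, 8, 16, 16)  # reference spectral image

sam = SpectralAngleMapper()
print(sam(preds, target))  # mean spectral angle between the two, in radians
```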
CoverageError was added to the classification package. This metric can be used when you are working with multi-label data. It works similarly to its sklearn counterpart and computes how far you need to go down the ranked scores before all true labels are covered.
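For example, with dummy scores for a 5-label problem (the shapes below are purely illustrative):

```python
import torch
from torchmetrics import CoverageError

preds = torch.rand(10, 5)           # real-valued scores per label
target = torch.randint(2, (10, 5))  # binary indicator matrix of true labels

coverage = CoverageError()
print(coverage(preds, target))  # average ranking depth needed to cover every true label
```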
LabelRankingAveragePrecision and LabelRankingLoss were added to the classification package. Both metrics are used in multi-label ranking problems, where the goal is to rank the labels associated with each sample higher than the rest. Each metric measures how well your model achieves this.
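A quick sketch with the same kind of dummy multi-label inputs as above:

```python
import torch
from torchmetrics import LabelRankingAveragePrecision, LabelRankingLoss

preds = torch.rand(10, 5)           # per-label scores for 10 samples, 5 labels
target = torch.randint(2, (10, 5))  # binary indicators of the true labels

lrap = LabelRankingAveragePrecision()
lrl = LabelRankingLoss()
print(lrap(preds, target))  # higher is better, 1.0 for a perfect ranking
print(lrl(preds, target))   # lower is better, 0.0 for a perfect ranking
```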
ErrorRelativeGlobalDimensionlessSynthesis, or ERGAS, was added to the image package. This metric measures the quality of pan-sharpened images based on the normalized average error of each band of the resulting image.
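A minimal sketch, using a distorted copy of the prediction as a stand-in reference image:

```python
import torch
from torchmetrics import ErrorRelativeGlobalDimensionlessSynthesis

preds = torch.rand(4, 3, 16, 16)  # pan-sharpened images (batch, bands, height, width)
target = preds * 0.75             # illustrative reference images

ergas = ErrorRelativeGlobalDimensionlessSynthesis()
print(ergas(preds, target))  # lower values indicate better spectral fidelity
```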
UniversalImageQualityIndex was added to the image package. This metric assesses the difference between two images by combining three factors: loss of correlation, luminance distortion, and contrast distortion.
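Usage mirrors the other image metrics; the random images below are purely illustrative:

```python
import torch
from torchmetrics import UniversalImageQualityIndex

preds = torch.rand(4, 3, 16, 16)
target = torch.rand(4, 3, 16, 16)

uqi = UniversalImageQualityIndex()
print(uqi(preds, target))  # 1.0 only when the two images are identical
```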
ClasswiseWrapper was added to the wrapper package. This wrapper can be used in combination with metrics that return multiple values (such as classification metrics with the average=None argument). The wrapper unwraps the result into a dict with a label for each value.
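A short sketch, assuming a three-class problem with made-up label names:

```python
import torch
from torchmetrics import Accuracy, ClasswiseWrapper

# Wraps a metric that returns one value per class and labels each value
metric = ClasswiseWrapper(
    Accuracy(num_classes=3, average=None),
    labels=["cat", "dog", "bird"],
)

preds = torch.randn(10, 3).softmax(dim=-1)
target = torch.randint(3, (10,))
print(metric(preds, target))  # e.g. {'accuracy_cat': ..., 'accuracy_dog': ..., 'accuracy_bird': ...}
```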
[0.8.0] - 2022-04-14
Added
- WeightedMeanAbsolutePercentageError to regression package (New metric: WMAPE #948)
- CoverageError (Multilabel Ranking metrics #787)
- LabelRankingAveragePrecision and LabelRankingLoss (Multilabel Ranking metrics #787)
- SpectralAngleMapper (Add new metrics: SAM #885)
- ErrorRelativeGlobalDimensionlessSynthesis (Adds new image metric - ERGAS #894)
- UniversalImageQualityIndex (Added new image metric - UQI #824)
- SpectralDistortionIndex (Adds new image metric - d_lambda #873)
- MetricCollection in MetricTracker (Support for collection in Tracker #718)
- StructuralSimilarityIndexMeasure (3D extension for SSIM #818)
- MetricCollection (Smart update of metric collection #709)
- ClasswiseWrapper for better logging of classification metrics with multiple output values (Better support for classwise logging #832)
- **kwargs argument for passing additional arguments to base class (Refactor/move args to kwargs #833)
- ignore_index for the Accuracy metric (Negative ignore_index for the Accuracy metric #362)
- adaptive_k for the RetrievalPrecision metric (Added adaptive_k argument to IR Precision metric #910)
- reset_real_features argument for image quality assessment metrics (Optionally Avoid recomputing features #722)
- compute_on_cpu to all metrics (New argument compute_on_cpu #867)

Changed
- Made num_classes in jaccard_index a required argument (Update num_classes in jaccard score to be a required argument #853, Remove get_num_classes in jaccard_index #914)
- permutation_invariant_training (Improved shape checking of permutation_invariant_training #864)
- Allowed reduction None (Refactor: allow reduction None #891)
- MetricTracker.best_metric will now give a warning when computing on metrics that do not have a best value (Make best_metric in MetricTracker more robust #913)

Deprecated
- compute_on_step (Deprecate/compute on step #792)
- dist_sync_on_step, process_group, and dist_sync_fn as direct arguments (Refactor/move args to kwargs #833)

Removed
- WER and functional.wer
- SSIM and functional.ssim
- PSNR and functional.psnr
- FBeta and functional.fbeta
- F1 and functional.f1
- Hinge and functional.hinge
- IoU and functional.iou
- MatthewsCorrcoef
- PearsonCorrcoef
- SpearmanCorrcoef
- MAP and functional.pairwise.manhatten
- PESQ and functional.audio.pesq
- PIT and functional.audio.pit
- SDR and functional.audio.sdr and functional.audio.si_sdr
- SNR and functional.audio.snr and functional.audio.si_snr
- STOI and functional.audio.stoi

Fixed
- MAP metric in specific cases (Fix MAP device placement #950)
- ClasswiseWrapper with the prefix argument of MetricCollection (Fix compatibility between ClasswiseWrapper and prefix/postfix arg in MetricCollection #843)
- BertScore on GPU (Fix BertScore on GPU #912)
- ROUGEScore (Fix RougeL/RougeLSum implementation #944)

Contributors
@ankitaS11, @ashutoshml, @Borda, @hookSSi, @justusschock, @lucadiliello, @quancs, @rusty1s, @SkafteNicki, @stancld, @vumichien, @weningerleon, @yassersouri
If we forgot someone due to not matching commit email with GitHub account, let us know :]