Classification: option to disable input formatting [wip] #1676
base: master
Conversation
@SkafteNicki how is it going here? 🐰
Hi @SkafteNicki, do you guys have any update on this one, please?
We need to address the failing checks.
@@ -329,6 +349,16 @@ def binary_precision_recall_curve(
        Specifies a target value that is ignored and does not contribute to the metric calculation
    validate_args: bool indicating if input arguments and tensors should be validated for correctness.
        Set to ``False`` for faster computations.
    input_format: str or bool specifying the format of the input preds tensor. Can be one of:
yes, this looks good to me...
cc: @Lightning-AI/core-metrics @awaelchli
for more information, see https://pre-commit.ci
Codecov Report
Additional details and impacted files

@@ Coverage Diff @@
##           master   #1676   +/- ##
======================================
- Coverage      69%     69%    -0%
======================================
  Files         307     307
  Lines       17352   17406   +54
======================================
+ Hits        11961   11992   +31
- Misses       5391    5414   +23
Thanks for working on the bug in #1604! I suspect this bug is silently affecting people currently using torchmetrics. It looks like the current solution is a new argument; I see two options.
I personally see no reason to include the `auto` option.
@idc9 thanks for giving your opinions on this issue.
@SkafteNicki In cases where GPU memory limits batch sizes to be very small, there will be no "reasonably-sized input", so "auto" will be buggy. What about not having "auto", but rather having only "probs" and "logits", with "probs" being the default and verifying that its input is in [0, 1]? This would be backwards compatible, except for raising an error in the case where the old version was doing the wrong thing. The error message can suggest the user might mean logits.
Sorry, I should have been more careful with my language! Let me rephrase "The auto option will always be buggy if the user inputs logits" to "The 'auto' option leaves open the possibility for the bug to happen when the user inputs logits". Since the auto option leaves open the possibility of this bug occurring, I suspect we don't want to preserve backwards compatibility. This is a subtle, unexpected issue; many users won't immediately realize it can affect them. Given that it's straightforward for the user to specify logits/probs, I'm curious whether there is any case where 'auto' is useful (i.e. should it just be removed)? As @SkafteNicki pointed out, the logits bug will only occur if all of the predicted probabilities lie in [0.5, 0.73]. It's true this is likely not the usual scenario, but it probably does come up. A few examples where it may be more common: small batches, early phases of model training, difficult applications where hold-out probabilities are (sadly) close-ish to 0.5, unbalanced classes, and situations where your label prediction rule thresholds the probability at some value other than 0.5.
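The [0.5, 0.73] window mentioned above comes from the sigmoid mapping: logits in [0, 1] land exactly on probabilities in [0.5, ~0.731], so a range check on [0, 1] cannot distinguish the two. A quick check:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Logits at the ends of [0, 1] map to the ends of the ambiguous window:
lo = sigmoid(0.0)  # 0.5
hi = sigmoid(1.0)  # ~0.7311
# Any batch of logits inside [0, 1] is therefore indistinguishable,
# by a pure range check, from a batch of probabilities in [0.5, 0.7311].
```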
@NoahAtKintsugi and @idc9 Alright, I am willing to give in on this, but we are then going to break backwards compatibility, which I do not take as a light issue. If we do this change, we need to do it right. You are probably right that it is better to break stuff now and fix this issue that may be a hidden problem for some users. @Borda and @justusschock, please give some opinions on this. Do we
I think, due to stability, we may add a warning now but have a hard switch from 2.0.
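One common pattern for the "warn now, hard switch later" route suggested here; this is a sketch, not the actual torchmetrics code, and the helper name `resolve_input_format` and message text are assumptions:

```python
import warnings


def resolve_input_format(input_format=None):
    # Hypothetical deprecation shim: warn when the caller relies on the
    # old auto-detection, ahead of a hard behaviour switch in a future
    # major release, while explicit choices pass through unchanged.
    if input_format is None:
        warnings.warn(
            "Automatic detection of probabilities vs. logits is deprecated; "
            "pass input_format='probs' or input_format='logits' explicitly.",
            DeprecationWarning,
        )
        return "auto"
    return input_format
```

This keeps old code running (with a nudge) during the transition window, and the `None` sentinel can later be repointed at the new default.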
Defaulting to
@SkafteNicki, how is this one going? :)
Force-pushed from d0a5568 to 9fc79ae
What does this PR do?
Fixes #1526
Fixes #1604
Fixes #1989
Fixes #2195
Fixes #2329
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃