Accuracy metric for preds at half precision is zero with pl=1.0.8 #5013
Comments
Hi! Thanks for your contribution, great first issue!

It is indeed fixed in …

It is fixed in master but would be broken again in #4838, so thanks for catching this :)

@luzuku And one small note: if you create tensors with …
Fix commit: "Make sure half-precision inputs are transformed" (#5013)
🐛 Bug
The accuracy metric is wrong if `preds` are given with half precision. See the example below.

To Reproduce
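A minimal sketch of the reported behavior (the tensor values are illustrative, and the `pytorch_lightning.metrics.Accuracy` call reflects the pl==1.0.8 API):

```python
import torch
from pytorch_lightning.metrics import Accuracy

target = torch.tensor([1, 1, 0, 0])
preds = torch.tensor([0.9, 0.8, 0.2, 0.1])  # probabilities for the positive class

print(Accuracy()(preds, target))         # tensor(1.) -- full precision is correct
print(Accuracy()(preds.half(), target))  # tensor(0.) -- half precision is silently zero
```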
Expected behavior
The accuracy metric should not fail silently: either an error should be raised when `preds` are half precision, or the metric should compute correctly.
Environment

- How you installed PyTorch (`conda`, `pip`, source): conda
- PyTorch Lightning version: 1.0.8

Additional context
This might already be fixed in master. I filed the issue regardless because I don't have time to check.
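A plausible mechanism, purely as an assumption about the internals suggested by the fix commit above: if input formatting compares dtypes exactly against `torch.float`, half-precision probabilities are never thresholded into labels and are compared to integer targets as raw floats, so nothing matches. Checking for any floating-point dtype covers `float16` as well:

```python
import torch

preds = torch.tensor([0.9, 0.8, 0.2, 0.1]).half()

# An exact dtype comparison silently skips half precision:
print(preds.dtype == torch.float)  # False -- dtype is torch.float16

# A floating-point check covers float16/float32/float64 alike,
# so the probabilities still get thresholded into labels:
print(preds.is_floating_point())   # True
print((preds >= 0.5).long())       # tensor([1, 1, 0, 0])
```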