Add Mean Absolute Percentage Error metric #235
Comments
Hi! Thanks for your contribution — great first issue!
@pranjaldatta looks like a great addition. Would you be up for sending a PR?
@SkafteNicki Thanks! I'll send a PR right away in 2/3 days. I'll make sure to reflect some of the limitations in the associated documentation.
Is there a reason why MAPE in torchmetrics doesn't multiply by 100 to actually get the percentage?
I agree, I think this should be mentioned explicitly in the docs to avoid confusion. It's a "mean absolute relative error". @pranjaldatta
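To make the behavior under discussion concrete, here is a minimal sketch of a MAPE-style computation in plain Python. It is a hypothetical illustration, not the actual torchmetrics implementation: the epsilon value and function name are assumptions, and, as noted above, the result is returned as a fraction rather than multiplied by 100.

```python
def mape(preds, target, eps=1.17e-06):
    """Mean absolute percentage error, returned as a fraction (not x100).

    Hypothetical sketch: the denominator is clamped to `eps` to avoid
    division by zero when a target value is 0, mirroring the epsilon
    safeguard discussed in this issue.
    """
    return sum(
        abs(p - t) / max(abs(t), eps) for p, t in zip(preds, target)
    ) / len(preds)


# Example: predictions off by 10% on average yield 0.1, not 10.
print(mape([90.0, 110.0], [100.0, 100.0]))
```

Multiplying the returned value by 100 would give the conventional "percentage" reading; keeping it as a fraction matches the "mean absolute relative error" interpretation mentioned in the comment above.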
🚀 Feature
Add Mean Absolute Percentage Error to the list of regression metrics.
Motivation
It's a neat metric that is included in the scikit-learn package, and it is often considered more intuitive than MAE because it is sensitive to relative rather than absolute errors. More can be found in its Wikipedia article.
Pitch
Mean Absolute Percentage Error to be included among the regression metrics.
Alternatives
Additional context
I would be really grateful if someone could judge whether such a metric is necessary in the first place (considering it has some documented issues, as noted in the wiki). For what it's worth, scikit-learn deals with the division-by-zero issue by clamping the denominator with a small epsilon constant.
If the community finds that such a metric would be a helpful addition to this awesome package, I would love to contribute this!