[Proposal] Evaluating regression with metric operators #5234
base: develop
Conversation
@@ -293,12 +309,27 @@ def evaluate_samples(
        _confs = confs
        _ids = ids

        # Metric operators.
Tested with `AbsoluteError` and `Mean` operators: https://github.com/manushreegangwar/custom_metrics/blob/main/eval_metrics/__init__.py
Sample showing the absolute error fields `eval` and `eval_absolute_error`, both computed using a metric operator:
<Sample: {
'id': '674e42f0567b7e1228157c7a',
'media_type': 'image',
'filepath': '/home/manushree/fiftyone/quickstart/data/000880.jpg',
'tags': ['validation'],
'metadata': None,
'created_at': datetime.datetime(2024, 12, 9, 16, 33, 19, 703000),
'last_modified_at': datetime.datetime(2024, 12, 9, 16, 33, 20, 27000),
'ground_truth': <Regression: {
'id': '67571bcfaf69d19398e8e084',
'tags': [],
'value': 0.11694350486842164,
'confidence': None,
}>,
'predictions': <Regression: {
'id': '67571bcfaf69d19398e8e085',
'tags': [],
'value': 0.6432772871473967,
'confidence': 0.8470792054938999,
}>,
'weather': 'rainy',
'eval': 0.5263337822789751,
'eval_absolute_error': 0.5263337822789751,
}>
mean absolute error: 5.07
mean eval absolute error: 5.07
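For reference, the per-sample and aggregate values in the printout above can be reproduced with plain Python. This is only an illustrative sketch of the arithmetic a metric operator performs, not the PR's implementation; the function names are hypothetical:

```python
# Illustrative sketch (not the PR's code): per-sample absolute error and
# its mean aggregate, mirroring the 'eval' / 'eval_absolute_error' fields.

def absolute_error(ypred, ytrue):
    """Per-sample absolute error between prediction and ground truth."""
    return abs(ypred - ytrue)

def mean(values):
    """Aggregate a list of per-sample metric values."""
    return sum(values) / len(values) if values else 0.0

# Values taken from the sample printout above
ytrue = 0.11694350486842164
ypred = 0.6432772871473967

err = absolute_error(ypred, ytrue)  # matches 'eval_absolute_error' above
```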
ed789cb to d5ca574
@@ -69,6 +70,8 @@ def evaluate_regressions(
        gt_field ("ground_truth"): the name of the field containing the
            ground truth :class:`fiftyone.core.labels.Regression` instances
        eval_key (None): a string key to use to refer to this evaluation
        eval_metrics (None): a list of tuples of ``fiftyone.operators.Operator``
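The proposed `eval_metrics` argument is a list of operator tuples. A minimal, framework-free sketch of how such a list of (per-sample metric, aggregate metric) pairs might be consumed follows; all class and function names here are stand-ins, not `fiftyone.operators.Operator` or the PR's actual code:

```python
# Hypothetical sketch of consuming an eval_metrics-style list of
# (per-sample operator, aggregate operator) pairs.

class AbsoluteError:
    """Per-sample metric: |prediction - ground truth| for each sample."""

    def compute(self, ytrue, ypred):
        return [abs(p - t) for t, p in zip(ytrue, ypred)]

class Mean:
    """Aggregate metric: mean of the per-sample values."""

    def compute(self, values):
        return sum(values) / len(values)

def evaluate_with_metrics(ytrue, ypred, eval_metrics=None):
    """Apply each (per-sample, aggregate) operator pair to the data."""
    results = {}
    for per_sample_op, aggregate_op in eval_metrics or []:
        per_sample = per_sample_op.compute(ytrue, ypred)
        results[type(per_sample_op).__name__] = (
            per_sample,                          # one value per sample
            aggregate_op.compute(per_sample),    # single aggregate value
        )
    return results

results = evaluate_with_metrics(
    [0.1, 0.2], [0.3, 0.1], eval_metrics=[(AbsoluteError(), Mean())]
)
```

In the PR, the per-sample values would be written back to an eval field on each sample (as in the sample printout below), while the aggregate would be reported for the whole collection.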
I don't have full context, but if you're thinking of this as a custom plugin, calling it outside the plugin framework here will likely cause circular dependencies.
What changes are proposed in this pull request?
This PR proposes how to use metric operators for computing per-sample and aggregate metrics (not to be merged).
How is this patch tested? If it is not, please explain why.
Release Notes
Is this a user-facing change that should be mentioned in the release notes?
What areas of FiftyOne does this PR affect?
fiftyone
Python library changes