[BUG] Model Evaluation panel: load_evaluation runtime error #5254
Comments
Having a look at `fiftyone/operators/builtins/panels/model_evaluation/__init__.py`, lines 111 to 128 at 0948c7c:

Isn't it an issue to load the TP, FP and FN values via `ctx.dataset.values(...)` when the evaluation was computed on a view? (And these fields seem specific to the detections eval, which could explain why classification works fine.)
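To illustrate the suspected failure mode, a minimal sketch (assuming an `eval_key` of `"eval"` and an evaluation computed on a view):

```python
# Samples outside the evaluated view never get the "eval_tp" field
# populated, so reading it across the whole dataset yields Nones:
counts = ctx.dataset.values("eval_tp")  # e.g. [2, None, 5, None, ...]
total = sum(counts)  # TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'
```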
Not sure it's correct, but a patch could be:

```diff
diff --git a/fiftyone/operators/builtins/panels/model_evaluation/__init__.py b/fiftyone/operators/builtins/panels/model_evaluation/__init__.py
index cb33082f9..fa638198d 100644
--- a/fiftyone/operators/builtins/panels/model_evaluation/__init__.py
+++ b/fiftyone/operators/builtins/panels/model_evaluation/__init__.py
@@ -104,7 +104,7 @@ class EvaluationPanel(Panel):
                 total += metrics["confidence"]
         return total / count if count > 0 else None
 
-    def get_tp_fp_fn(self, ctx):
+    def get_tp_fp_fn(self, ctx, results):
         view_state = ctx.panel.get_state("view") or {}
         key = view_state.get("key")
         dataset = ctx.dataset
@@ -112,17 +112,17 @@ class EvaluationPanel(Panel):
         fp_key = f"{key}_fp"
         fn_key = f"{key}_fn"
         tp_total = (
-            sum(ctx.dataset.values(tp_key))
+            sum(results.samples.values(tp_key))
             if dataset.has_field(tp_key)
             else None
         )
         fp_total = (
-            sum(ctx.dataset.values(fp_key))
+            sum(results.samples.values(fp_key))
             if dataset.has_field(fp_key)
             else None
         )
         fn_total = (
-            sum(ctx.dataset.values(fn_key))
+            sum(results.samples.values(fn_key))
             if dataset.has_field(fn_key)
             else None
         )
@@ -298,7 +298,7 @@ class EvaluationPanel(Panel):
             per_class_metrics
         )
         metrics["tp"], metrics["fp"], metrics["fn"] = self.get_tp_fp_fn(
-            ctx
+            ctx, results
         )
         metrics["mAP"] = self.get_map(results)
         evaluation_data = {
```
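Assuming `results.samples` resolves to the collection the evaluation was actually run on, the sums would then only cover samples whose eval fields are populated. A quick sanity check of that assumption:

```python
# With the patch, the counts come from the evaluated samples only,
# so no Nones should appear in the sum
tp_counts = results.samples.values("eval_tp")
assert all(v is not None for v in tp_counts)
print(sum(tp_counts))
```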
cc @imanjra
@brimoor I can't find the time for a more detailed report so close to Christmas, but I have other (maybe related?) issues with the Model Eval Panel; for me, I saw two issues:
Generally speaking, great feature though! Love the 'generate view with 1 click' feature. If 1.2.0 doesn't fix it, I might write a proper report in January. Keep up the good work! And looking forward to comparing more than 2 models :D
Describe the problem
When using the new Model Evaluation panel from FiftyOne v1.1.0 on the vanilla tutorial, things work like a charm. But it fails with an `unsupported operand type(s) for +: 'int' and 'NoneType'` runtime error when evaluating detections via a slightly more elaborate example.

Code to reproduce issue
The vanilla example works fine:
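A minimal sketch of that setup, assuming the quickstart zoo dataset from the tutorial (the original snippet may have differed):

```python
import fiftyone as fo
import fiftyone.zoo as foz

# evaluate predictions on the full dataset, as in the tutorial
dataset = foz.load_zoo_dataset("quickstart")
dataset.evaluate_detections(
    "predictions",
    gt_field="ground_truth",
    eval_key="eval",
)

session = fo.launch_app(dataset)
```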
But this example fails:
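A sketch of the failing variant, where the evaluation is computed on a view (the `take(100)` filter here is illustrative, not the original code):

```python
import fiftyone as fo
import fiftyone.zoo as foz

dataset = foz.load_zoo_dataset("quickstart")

# compute the evaluation on a view rather than on the full dataset
view = dataset.take(100, seed=51)
view.evaluate_detections(
    "predictions",
    gt_field="ground_truth",
    eval_key="eval",
)

session = fo.launch_app(dataset)
```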
Then open the App > Model Evaluation tab > select the `"eval"` evaluation. A stack trace ending with the `unsupported operand type(s) for +: 'int' and 'NoneType'` error appears in the console.
This is happening during the call to the `@voxel51/operators/model_evaluation_panel_builtin` operator with the `load_evaluation` method. Given the line which failed, and since this looks like a `sum([1, None, 3])` kind of problem, I checked that all corresponding values are not `None`:
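A sketch of that check, assuming the view-based example above:

```python
# none of the per-sample eval counts on the evaluated view are None
for field in ("eval_tp", "eval_fp", "eval_fn"):
    assert all(v is not None for v in view.values(field))
```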
=> Is it a bug or a misuse? If the latter, please let me know what I am missing here.
System information

- Python version (`python --version`): Python 3.12.7
- FiftyOne version (`fiftyone --version`): FiftyOne v1.1.0, Voxel51, Inc.

Other info/logs
One difference with the vanilla tutorial is that the evaluation is computed from a view. (FWIW, `evaluate_classifications` seems to work fine with views.)
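For reference, a sketch of that classification case, assuming `Classification` fields named `predictions` and `ground_truth` (hypothetical names, not the original code):

```python
# evaluating classifications on a view reportedly works fine in the panel
results = view.evaluate_classifications(
    "predictions",
    gt_field="ground_truth",
    eval_key="eval_cls",
)
```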
Willingness to contribute

The FiftyOne Community encourages bug fix contributions. Would you or another member of your organization be willing to contribute a fix for this bug to the FiftyOne codebase?

Yes. I would be willing to contribute a fix for this bug with guidance from the FiftyOne community.