Merge release/v1.2.0 to develop #5276

Merged
merged 58 commits into develop from release/v1.2.0 on Dec 18, 2024

Changes below are shown from 4 of the 58 commits.

Commits (58)
285145b
Fixed Hugging Face Transformers not using GPU
danielgural Oct 24, 2024
3d6084f
Fixed some classes not having device on them
danielgural Oct 24, 2024
038138e
optimize main thread to worker transfer when recoloring
sashankaryal Dec 9, 2024
ece501a
fix typo
sashankaryal Dec 9, 2024
89fea41
check if array buffer detached
sashankaryal Dec 10, 2024
d3bf174
add clarifying comments
sashankaryal Dec 10, 2024
4f76488
cleanup overlays in the worker listener callback instead
sashankaryal Dec 12, 2024
1510b47
remove bitmap = null (obj closed for modification)
sashankaryal Dec 12, 2024
e22b56e
remove unnecessary sample null check guard
sashankaryal Dec 12, 2024
686be45
fix #5254
brimoor Dec 13, 2024
d41fffc
TP/FP/FN support for binary classification model evaluation
imanjra Dec 13, 2024
f627b82
Merge pull request #5267 from voxel51/model-eval-fixes
brimoor Dec 13, 2024
c702bb2
Merge branch 'release/v1.2.0' of https://github.com/voxel51/fiftyone …
voxel51-bot Dec 13, 2024
981c42d
include all labels in views
brimoor Dec 13, 2024
a9ea1c3
filtering comparison field as well
brimoor Dec 15, 2024
35f15b2
Merge pull request #5247 from voxel51/fix/transfer-bitmaps-back
sashankaryal Dec 17, 2024
a8e5a3f
Merge branch 'release/v1.2.0' of https://github.com/voxel51/fiftyone …
voxel51-bot Dec 17, 2024
8997a39
consistent table heading row text style
imanjra Dec 16, 2024
7c51825
fix evaluation timestamp
imanjra Dec 16, 2024
4cdeff0
hide unsupported metrics in model eval panel
imanjra Dec 16, 2024
09bb793
gracefully handle unsupported model evaluation
imanjra Dec 16, 2024
a78dd41
use mask targets in model evaluation panel
imanjra Dec 6, 2024
0db34b4
Merge branch 'release/v1.2.0' of https://github.com/voxel51/fiftyone …
voxel51-bot Dec 17, 2024
1ddd2bf
do not add valid list field filters to collapsed paths (#5280)
benjaminpkane Dec 17, 2024
a7ab1d9
Merge branch 'release/v1.2.0' of https://github.com/voxel51/fiftyone …
voxel51-bot Dec 17, 2024
f704079
add extended selection (#5286)
benjaminpkane Dec 17, 2024
8ee8647
Merge branch 'release/v1.2.0' of https://github.com/voxel51/fiftyone …
voxel51-bot Dec 17, 2024
9c3cb74
add detections fields to additional media fields
sashankaryal Dec 17, 2024
bd7d2de
use pydash.get instead of _deep_get
sashankaryal Dec 17, 2024
d0462b9
use detections fields in media urls creation, too
sashankaryal Dec 17, 2024
89d1853
add support for collection overlay types in disk decoder
sashankaryal Dec 17, 2024
d2038b0
return if no path
sashankaryal Dec 17, 2024
cd8eaee
fix src bug
sashankaryal Dec 17, 2024
5d6f683
add clarification comment for sources
sashankaryal Dec 17, 2024
bc44e3e
don't get rid of query params if source is defined
sashankaryal Dec 17, 2024
d9e5f38
bump brain version
brimoor Dec 17, 2024
3d5bd81
Merge pull request #5290 from voxel51/bump-brain
brimoor Dec 18, 2024
da08841
Merge branch 'release/v1.2.0' of https://github.com/voxel51/fiftyone …
voxel51-bot Dec 18, 2024
d872a3b
similarity race condition patch
ehofesmann Dec 13, 2024
4fdcc9c
lint
brimoor Dec 18, 2024
ec2c2e8
Merge pull request #5273 from voxel51/bugfix/sim-sort-patch
brimoor Dec 18, 2024
4aa7000
Merge branch 'release/v1.2.0' of https://github.com/voxel51/fiftyone …
voxel51-bot Dec 18, 2024
2115658
Merge pull request #5289 from voxel51/fix/detections-sources
sashankaryal Dec 18, 2024
6f49543
Merge branch 'release/v1.2.0' of https://github.com/voxel51/fiftyone …
voxel51-bot Dec 18, 2024
5603258
model evaluation load_view bug fixes
imanjra Dec 18, 2024
ae494a5
Merge pull request #5268 from voxel51/model-eval-fixes2
brimoor Dec 18, 2024
4d83471
Merge branch 'release/v1.2.0' of https://github.com/voxel51/fiftyone …
voxel51-bot Dec 18, 2024
ff0c2e7
Merge pull request #4987 from voxel51/zoo_gpu
danielgural Dec 18, 2024
01807ac
decode png header to know num channels
sashankaryal Dec 18, 2024
b86906a
use const png signature
sashankaryal Dec 18, 2024
b4acf0f
remove invalid tests
sashankaryal Dec 18, 2024
d9649c3
Merge pull request #5294 from voxel51/fix/png-decoding
sashankaryal Dec 18, 2024
480780f
Merge branch 'merge/release/v1.2.0' of https://github.com/voxel51/fif…
voxel51-bot Dec 18, 2024
b81c1e7
Merge branch 'release/v1.2.0' of https://github.com/voxel51/fiftyone …
voxel51-bot Dec 18, 2024
94c3fe9
Reset to initial buffers in video looker (#5293)
benjaminpkane Dec 18, 2024
888ad9d
Merge branch 'release/v1.2.0' of https://github.com/voxel51/fiftyone …
voxel51-bot Dec 18, 2024
c82bb81
Fix cached looker font sizes (#5287)
benjaminpkane Dec 18, 2024
d8b5a19
Merge branch 'release/v1.2.0' of https://github.com/voxel51/fiftyone …
voxel51-bot Dec 18, 2024
@@ -166,6 +166,7 @@ export default function Evaluation(props: EvaluationProps) {
const evaluationConfig = evaluationInfo.config;
const evaluationMetrics = evaluation.metrics;
const evaluationType = evaluationConfig.type;
const evaluationMethod = evaluationConfig.method;
const compareEvaluationInfo = compareEvaluation?.info || {};
const compareEvaluationKey = compareEvaluationInfo?.key;
const compareEvaluationTimestamp = compareEvaluationInfo?.timestamp;
@@ -174,6 +175,9 @@ export default function Evaluation(props: EvaluationProps) {
const compareEvaluationType = compareEvaluationConfig.type;
const isObjectDetection = evaluationType === "detection";
const isSegmentation = evaluationType === "segmentation";
const isBinaryClassification =
evaluationType === "classification" && evaluationMethod === "binary";
const showTpFpFn = isObjectDetection || isBinaryClassification;
const infoRows = [
{
id: "evaluation_key",
@@ -385,7 +389,7 @@
? "compare"
: "selected"
: false,
hide: !isObjectDetection,
hide: !showTpFpFn,
},
{
id: "fp",
@@ -400,7 +404,7 @@
? "compare"
: "selected"
: false,
hide: !isObjectDetection,
hide: !showTpFpFn,
},
{
id: "fn",
@@ -415,7 +419,7 @@
? "compare"
: "selected"
: false,
hide: !isObjectDetection,
hide: !showTpFpFn,
},
];

fiftyone/operators/builtins/panels/model_evaluation/__init__.py (73 changes: 43 additions, 30 deletions)
@@ -6,15 +6,17 @@
|
"""

from collections import defaultdict, Counter
import os
import traceback
import fiftyone.operators.types as types

from collections import defaultdict, Counter
import numpy as np

from fiftyone import ViewField as F
from fiftyone.operators.categories import Categories
from fiftyone.operators.panel import Panel, PanelConfig
from fiftyone.core.plots.plotly import _to_log_colorscale
import fiftyone.operators.types as types


STORE_NAME = "model_evaluation_panel_builtin"
@@ -95,6 +97,12 @@ def on_load(self, ctx):
ctx.panel.set_data("permissions", permissions)
self.load_pending_evaluations(ctx)

def is_binary_classification(self, info):
return (
info.config.type == "classification"
and info.config.method == "binary"
)

def get_avg_confidence(self, per_class_metrics):
count = 0
total = 0
@@ -104,29 +112,29 @@ def get_avg_confidence(self, per_class_metrics):
total += metrics["confidence"]
return total / count if count > 0 else None

def get_tp_fp_fn(self, ctx):
view_state = ctx.panel.get_state("view") or {}
key = view_state.get("key")
dataset = ctx.dataset
tp_key = f"{key}_tp"
fp_key = f"{key}_fp"
fn_key = f"{key}_fn"
tp_total = (
sum(ctx.dataset.values(tp_key))
if dataset.has_field(tp_key)
else None
)
fp_total = (
sum(ctx.dataset.values(fp_key))
if dataset.has_field(fp_key)
else None
)
fn_total = (
sum(ctx.dataset.values(fn_key))
if dataset.has_field(fn_key)
else None
)
return tp_total, fp_total, fn_total
def get_tp_fp_fn(self, info, results):
# Binary classification
if self.is_binary_classification(info):
neg_label, pos_label = results.classes
tp_count = np.count_nonzero(
(results.ytrue == pos_label) & (results.ypred == pos_label)
)
fp_count = np.count_nonzero(
(results.ytrue != pos_label) & (results.ypred == pos_label)
)
fn_count = np.count_nonzero(
(results.ytrue == pos_label) & (results.ypred != pos_label)
)
return tp_count, fp_count, fn_count
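
As a quick sanity check, here is a short sketch of what the binary branch above computes, assuming results.classes lists the negative label first and results.ytrue / results.ypred hold per-sample label strings (the label values below are hypothetical):

    import numpy as np

    classes = ["negative", "positive"]  # assumption: negative label first
    neg_label, pos_label = classes
    ytrue = np.array(["positive", "positive", "negative", "negative"])
    ypred = np.array(["positive", "negative", "positive", "negative"])

    tp = np.count_nonzero((ytrue == pos_label) & (ypred == pos_label))  # -> 1
    fp = np.count_nonzero((ytrue != pos_label) & (ypred == pos_label))  # -> 1
    fn = np.count_nonzero((ytrue == pos_label) & (ypred != pos_label))  # -> 1
    # the remaining sample is a true negative

These are the standard confusion-matrix counts for the positive class, so the binary branch needs no special handling beyond picking pos_label.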

Review comment from a Contributor (🛠️ Refactor suggestion / ⚠️ Potential issue):

Review calculations for false positives and false negatives in object detection

In the get_tp_fp_fn method, the calculations for false positives (fp_count) and false negatives (fn_count) in the object detection section may not align with standard definitions.

  • Currently, fp_count is calculated as np.count_nonzero(results.ytrue == results.missing), which counts instances where the ground truth is missing but predictions exist. This may actually represent false negatives.
  • Similarly, fn_count is calculated as np.count_nonzero(results.ypred == results.missing), which counts instances where predictions are missing but ground truths exist. This may represent false positives.

Consider revising the calculations to ensure they correctly represent false positives and false negatives according to standard object detection metrics.

# Object detection
if info.config.type == "detection":
tp_count = np.count_nonzero(results.ytrue == results.ypred)
fp_count = np.count_nonzero(results.ytrue == results.missing)
fn_count = np.count_nonzero(results.ypred == results.missing)
return tp_count, fp_count, fn_count

return None, None, None

def get_map(self, results):
try:
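
For context on the review comment above, the following sketch shows how the detection branch's expressions behave on paired match arrays. It assumes the convention that results.ytrue and results.ypred are aligned one entry per match, with results.missing filling the slot of the unmatched side; the missing token and label values below are hypothetical:

    import numpy as np

    missing = "(none)"  # hypothetical missing-label token
    ytrue = np.array(["cat", "dog", missing, "cat"])  # ground truth side of each match
    ypred = np.array(["cat", "cat", "dog", missing])  # prediction side of each match

    tp = np.count_nonzero(ytrue == ypred)    # matched pair, same label -> 1
    fp = np.count_nonzero(ytrue == missing)  # prediction with no ground truth -> 1
    fn = np.count_nonzero(ypred == missing)  # ground truth with no prediction -> 1

    # Note: the matched-but-mislabeled pair at index 1 ("dog" vs "cat") is
    # counted by none of the three expressions above.

Under this pairing convention, ytrue == missing marks an unmatched prediction, which standard metrics count as a false positive, so the mapping in the diff may already be correct; whether the review's proposed swap applies depends on how the evaluation backend fills these arrays and is worth verifying against it rather than assuming either direction.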
@@ -298,7 +306,7 @@ def load_evaluation(self, ctx):
per_class_metrics
)
metrics["tp"], metrics["fp"], metrics["fn"] = self.get_tp_fp_fn(
ctx
info, results
)
metrics["mAP"] = self.get_map(results)
evaluation_data = {
@@ -418,10 +426,15 @@ def load_view(self, ctx):
gt_field, F("label") == y
).filter_labels(pred_field, F("label") == x)
elif view_type == "field":
view = ctx.dataset.filter_labels(
pred_field, F(computed_eval_key) == field
)

if self.is_binary_classification(info):
uppercase_field = field.upper()
view = ctx.dataset.match(
{computed_eval_key: {"$eq": uppercase_field}}
)
else:
view = ctx.dataset.filter_labels(
pred_field, F(computed_eval_key) == field
)
if view is not None:
ctx.ops.set_view(view)
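
A brief usage sketch of the two load_view branches above, assuming a binary classification evaluation whose computed eval key stores per-sample values "TP" / "FP" / "FN" / "TN" in uppercase, which is why the panel uppercases the UI value before matching; the dataset name, eval key, and "predictions" field below are hypothetical:

    import fiftyone as fo
    from fiftyone import ViewField as F

    dataset = fo.load_dataset("my-dataset")  # hypothetical dataset
    eval_key = "eval"                        # hypothetical computed eval key
    field = "tp"                             # value selected in the panel UI

    # Binary classification: match on the sample-level eval field, uppercased
    view = dataset.match({eval_key: {"$eq": field.upper()}})

    # The same match written with ViewField instead of a raw MongoDB expression
    view = dataset.match(F(eval_key) == field.upper())

    # Other evaluation types filter labels on the prediction field instead
    view = dataset.filter_labels("predictions", F(eval_key) == field)

Either match spelling yields the same view; the label-level filter_labels branch is unchanged from the previous behavior.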
