
Adding label IDs in Segmentation Evaluation results #5323

Closed
wants to merge 11 commits

Conversation


@prernadh (Contributor) commented Dec 27, 2024

What changes are proposed in this pull request?

Adding segmentation label IDs to the segmentation evaluation results fixes the click-through behavior when you attach plots to sessions in FiftyOne notebooks.

How is this patch tested? If it is not, please explain why.

Manually tested

Release Notes

Is this a user-facing change that should be mentioned in the release notes?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release
    notes for FiftyOne users.

Adds click-through capabilities for segmentation evaluation confusion matrices.


What areas of FiftyOne does this PR affect?

  • App: FiftyOne application changes
  • Build: Build and test infrastructure changes
  • Core: Core fiftyone Python library changes
  • Documentation: FiftyOne documentation changes
  • Other

Code to test:

```python
import numpy as np

import fiftyone as fo
import fiftyone.zoo as foz

dataset = foz.load_zoo_dataset("quickstart", max_samples=3)

# Generate some toy semantic segmentations
for idx, sample in enumerate(dataset.iter_samples(autosave=True)):
    mask = idx * np.ones((10, 10), dtype=np.uint8)
    sample["segmentation"] = fo.Segmentation(mask=mask)

    mask = (idx + 1) * np.ones((10, 10), dtype=np.uint8)
    sample["segmentation2"] = fo.Segmentation(mask=mask)

res = dataset.evaluate_segmentations(
    "segmentation2",
    gt_field="segmentation",
    eval_key="eval_seg",
)

plot = res.plot_confusion_matrix()
plot.show()

session = fo.launch_app(dataset)
session.plots.attach(plot)

# Click on dark cells of the confusion matrix. Clicking on, e.g.,
# (r, c) = (0, 1) is expected to select the first sample in the view
```

## Summary by CodeRabbit

- **New Features**
  - Enhanced segmentation evaluation with more detailed performance insights.
  - Added capability to track pixel-level classification matches across samples.

- **Improvements**
  - Expanded evaluation results to provide more granular information about segmentation performance.
  - Updated evaluation process to include detailed pixel-level match data in results reporting.

coderabbitai bot commented Dec 27, 2024

Walkthrough

The pull request modifies the segmentation evaluation functionality in the fiftyone/utils/eval/segmentation.py file. A new matches list is introduced to capture detailed information about pixel-level matches during evaluation. The evaluate_samples method in the SimpleEvaluation class is updated to populate this list with data from the confusion matrix, including class names, counts, and identifiers. The constructor of the SegmentationResults class is also modified to accept the matches list, enhancing the granularity of segmentation performance evaluation.
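The matches list described above can be sketched as follows. This is an illustrative sketch only: the function name `collect_matches` and the exact tuple layout (true class, predicted class, pixel count, ground-truth ID, prediction ID) are assumptions based on this walkthrough, not the actual FiftyOne implementation.

```python
import numpy as np

def collect_matches(conf_mat, classes, gt_id, pred_id):
    """Collect (true_class, pred_class, pixel_count, gt_id, pred_id)
    tuples for every non-zero cell of a pixel confusion matrix.

    Note: this sketch assumes rows index ground-truth classes and
    columns index predicted classes.
    """
    matches = []
    for i, j in zip(*np.nonzero(conf_mat)):
        matches.append(
            (classes[i], classes[j], int(conf_mat[i, j]), gt_id, pred_id)
        )
    return matches

# Toy 2x2 confusion matrix: rows = ground truth, cols = predictions
conf_mat = np.array([[5, 1], [0, 3]])
classes = ["background", "object"]
matches = collect_matches(conf_mat, classes, "gt-id-1", "pred-id-1")
```

With the toy matrix above, `matches` holds one tuple per non-zero cell, so clicking cell (0, 1) in a confusion matrix plot can be mapped back to the segmentation IDs that contributed those pixels.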

Changes

File: fiftyone/utils/eval/segmentation.py
  • Added a matches list to store detailed segmentation match information
  • Updated the evaluate_samples method to populate the matches list
  • Modified the SegmentationResults constructor to accept a matches parameter

Possibly related PRs

  • various model evaluation fixes and enhancements #5123: This PR includes enhancements to the evaluation metrics display, which relates to the changes made in the main PR regarding the evaluation process and confusion matrix handling.
  • fix confusion matrix in model evaluation panel #5186: This PR addresses fixes to the confusion matrix in the model evaluation panel, which is directly relevant to the changes made in the main PR that enhance segmentation evaluation functionality.
  • Show "missing" counts in confusion matrices #5187: This PR adds "missing" counts to confusion matrices, which aligns with the main PR's focus on improving evaluation metrics and detailed reporting in segmentation evaluation.
  • Fixing #5254 #5267: This PR involves modifications to the evaluation process, which is closely related to the enhancements made in the main PR regarding segmentation evaluation metrics.

Suggested labels

enhancement

Suggested reviewers

  • brimoor
  • imanjra



@prernadh marked this pull request as ready for review December 30, 2024 16:16
@prernadh requested a review from allenleetc December 30, 2024 16:17
@coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
fiftyone/utils/eval/segmentation.py (2)

352-352: Potential memory overhead from accumulating matches
Storing all matches in a single global list (matches = []) for the entire dataset could become memory-intensive for large collections. Consider storing matches on a per-sample or streaming basis if memory usage becomes an issue.


462-470: Conditional parsing of matches
These lines correctly parse the custom matches (with IDs) if provided, or fall back to self._parse_confusion_matrix(...) otherwise. This design offers flexibility for capturing more granular information when needed. However, if your dataset is large, consider using a more memory-friendly approach (e.g., storing aggregated metrics or streaming).

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 18dde78 and c14d7b2.

📒 Files selected for processing (1)
  • fiftyone/utils/eval/segmentation.py (5 hunks)
🔇 Additional comments (3)
fiftyone/utils/eval/segmentation.py (3)

431-431: Passing matches to SegmentationResults
Keeping track of matches via the constructor parameter is good for enabling more interactive confusion matrix features, especially if you plan to link confusion matrix cells to their respective segmentation IDs.


457-457: New parameter for matches
Introducing matches=None as a default parameter is a clean way to preserve backward compatibility and only store segmentation matches when available.


479-480: Injecting IDs into the parent constructor
By passing ytrue_ids and ypred_ids, you ensure that the segmentation results can reference the original segmentation objects for more interactive features, such as clickable confusion matrices.

fiftyone/utils/eval/segmentation.py: outdated review comment (resolved)
@coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
fiftyone/utils/eval/segmentation.py (1)

459-459: Add type hints and validation for matches parameter

The matches parameter should be documented and validated to ensure correct tuple structure.

     def __init__(
         self,
         samples,
         config,
         eval_key,
         pixel_confusion_matrix,
         classes,
-        matches=None,
+        matches: Optional[List[Tuple[str, str, int, str, str]]] = None,
         missing=None,
         backend=None,
     ):
         """
         Args:
             ...
             matches: Optional list of tuples containing (true_class, pred_class,
                 count, ground_truth_id, prediction_id) for each non-zero entry
                 in the confusion matrix
             ...
         """
         pixel_confusion_matrix = np.asarray(pixel_confusion_matrix)
         ytrue_ids = None
         ypred_ids = None

         if matches is not None:
+            # Validate matches structure
+            if not all(len(m) == 5 for m in matches):
+                raise ValueError("Each match must be a 5-tuple")
             ytrue, ypred, weights, ytrue_ids, ypred_ids = zip(*matches)

Also applies to: 464-469

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c14d7b2 and 7c56ce7.

📒 Files selected for processing (1)
  • fiftyone/utils/eval/segmentation.py (5 hunks)
🔇 Additional comments (2)
fiftyone/utils/eval/segmentation.py (2)

481-482: LGTM: IDs properly passed to parent class

The implementation correctly passes the segmentation IDs to the parent class, enabling the click-through functionality in confusion matrices.


393-396: 🛠️ Refactor suggestion

Consider optimizing matches collection for multi-frame samples

The current implementation appends matches for each image within the sample loop, which could lead to redundant entries for multi-frame samples. Consider aggregating matches at the sample level instead.

-                non_zero_indexes = np.nonzero(image_conf_mat)
-                for index in zip(*non_zero_indexes):
-                    matches.append((classes[index[0]], classes[index[1]], image_conf_mat[index[0], index[1]], gt_seg.id, pred_seg.id))
+                sample_conf_mat += image_conf_mat
+
+            # Collect matches after processing all frames in the sample
+            non_zero_indexes = np.nonzero(sample_conf_mat)
+            for index in zip(*non_zero_indexes):
+                matches.append((classes[index[0]], classes[index[1]], sample_conf_mat[index[0], index[1]], gt_seg.id, pred_seg.id))

Likely invalid or redundant comment.

@coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
fiftyone/utils/eval/segmentation.py (1)

393-396: ⚠️ Potential issue

Move matches collection outside the image loop to prevent duplicates

The matches are currently being collected for each image within a sample, which could lead to duplicate entries for multi-frame samples. This issue was previously identified and marked as fixed, but appears to still be present.

Apply this diff to fix the issue:

-                non_zero_indexes = np.nonzero(image_conf_mat)
-                for index in zip(*non_zero_indexes):
-                    matches.append((classes[index[0]], classes[index[1]], image_conf_mat[index[0], index[1]], gt_seg.id, pred_seg.id))
+            # After processing all images in the sample
+            non_zero_indexes = np.nonzero(sample_conf_mat)
+            for index in zip(*non_zero_indexes):
+                matches.append((classes[index[0]], classes[index[1]], sample_conf_mat[index[0], index[1]], gt_seg.id, pred_seg.id))
🧹 Nitpick comments (2)
fiftyone/utils/eval/segmentation.py (2)

465-473: Add type checking for matches unpacking

The matches unpacking logic could be more robust with type checking to prevent runtime errors if the matches data is malformed.

Apply this diff to add type checking:

         if matches is not None:
+            if not isinstance(matches, list) or not all(isinstance(m, tuple) and len(m) == 5 for m in matches):
+                raise ValueError("Expected matches to be a list of 5-element tuples")
             ytrue, ypred, weights, ytrue_ids, ypred_ids = zip(*matches)
         else:
             ytrue, ypred, weights = self._parse_confusion_matrix(
                 pixel_confusion_matrix, classes
             )

460-460: Update docstring to document the new matches parameter

The matches parameter should be documented in the class docstring to maintain API documentation completeness.

Add this to the docstring:

         classes: a list of class labels corresponding to the confusion matrix
+        matches (None): optional list of tuples containing (true_label, pred_label, count, true_id, pred_id)
         missing (None): a missing (background) class
         backend (None): a :class:`SegmentationEvaluation` backend
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7c56ce7 and 2043bdc.

📒 Files selected for processing (1)
  • fiftyone/utils/eval/segmentation.py (5 hunks)
🔇 Additional comments (1)
fiftyone/utils/eval/segmentation.py (1)

426-427: LGTM: Clean integration with existing functionality

The handling of empty matches and the integration with the parent class constructor is well implemented.

Also applies to: 482-483

@prernadh requested a review from kaixi-wang December 30, 2024 17:58
@coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (4)
fiftyone/utils/eval/segmentation.py (4)

353-353: Consider memory optimization for large datasets

The unbounded growth of the matches list could lead to memory issues when processing large datasets. Consider implementing batch processing or streaming the results.

Also applies to: 394-397


465-473: Add input validation for matches parameter

Consider adding validation to ensure that when matches is provided, all tuples have the expected length and types.

Add this validation before the unpacking:

 if matches is not None:
+    if not all(len(m) == 5 for m in matches):
+        raise ValueError("All matches must be 5-tuples")
     ytrue, ypred, weights, ytrue_ids, ypred_ids = zip(*matches)

501-514: Improve code readability with helper function

Consider extracting the ID handling logic into a helper function to improve readability and maintainability.

+    @staticmethod
+    def _get_ids_or_none(ids):
+        return ids if ids is not None else itertools.repeat(None)

     @classmethod
     def _from_dict(cls, d, samples, config, eval_key, **kwargs):
         ytrue = d["ytrue"]
         ypred = d["ypred"]
         weights = d["weights"]
 
-        ytrue_ids = d.get("ytrue_ids", None)
-        if ytrue_ids is None:
-            ytrue_ids = itertools.repeat(None)
-
-        ypred_ids = d.get("ypred_ids", None)
-        if ypred_ids is None:
-            ypred_ids = itertools.repeat(None)
+        ytrue_ids = cls._get_ids_or_none(d.get("ytrue_ids"))
+        ypred_ids = cls._get_ids_or_none(d.get("ypred_ids"))
 
         matches = list(zip(ytrue, ypred, weights, ytrue_ids, ypred_ids))

434-434: Add docstring for the matches parameter

Since this is a user-facing change that enables click-through behavior, consider adding documentation for the matches parameter in the class docstring.

     """Class that stores the results of a segmentation evaluation.

     Args:
         samples: the :class:`fiftyone.core.collections.SampleCollection` used
         config: the :class:`SegmentationEvaluationConfig` used
         eval_key: the evaluation key
         pixel_confusion_matrix: a pixel value confusion matrix
         classes: a list of class labels corresponding to the confusion matrix
+        matches: a list of tuples containing (true_class, pred_class, count, gt_id, pred_id)
+            that enables click-through behavior in confusion matrices
         missing (None): a missing (background) class
         backend (None): a :class:`SegmentationEvaluation` backend
     """
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2043bdc and 7ce8217.

📒 Files selected for processing (1)
  • fiftyone/utils/eval/segmentation.py (7 hunks)
🔇 Additional comments (1)
fiftyone/utils/eval/segmentation.py (1)

394-400: ⚠️ Potential issue

Risk of duplicate entries in multi-frame samples

The matches are being appended inside the image loop, which could lead to duplicate entries when processing multi-frame samples. Consider aggregating matches at the sample level instead.

Apply this diff to fix the issue:

 sample_conf_mat = np.zeros((nc, nc), dtype=int)
 for image in images:
     image_conf_mat = _compute_pixel_confusion_matrix(...)
     sample_conf_mat += image_conf_mat
-    non_zero_indexes = np.nonzero(image_conf_mat)
-    for index in zip(*non_zero_indexes):
-        matches.append((classes[index[0]], classes[index[1]], image_conf_mat[index[0], index[1]], gt_seg.id, pred_seg.id))

 # After processing all frames
+non_zero_indexes = np.nonzero(sample_conf_mat)
+for index in zip(*non_zero_indexes):
+    matches.append((classes[index[0]], classes[index[1]], sample_conf_mat[index[0], index[1]], gt_seg.id, pred_seg.id))

Likely invalid or redundant comment.

@coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (3)
fiftyone/utils/eval/segmentation.py (3)

394-397: Consider batch processing for large datasets

While the logic is correct, appending to the matches list for each non-zero element in every image's confusion matrix could lead to memory growth with large datasets. Consider batch processing or streaming the results.

-                non_zero_indexes = np.nonzero(image_conf_mat)
-                for index in zip(*non_zero_indexes):
-                    matches.append((classes[index[0]], classes[index[1]], image_conf_mat[index[0], index[1]], gt_seg.id, pred_seg.id))
+                # Process matches in batches
+                non_zero_indexes = np.nonzero(image_conf_mat)
+                batch_matches = [(classes[i], classes[j], image_conf_mat[i, j], gt_seg.id, pred_seg.id)
+                               for i, j in zip(*non_zero_indexes)]
+                matches.extend(batch_matches)

466-474: Handle empty matches list edge case

The code handles both cases (with and without matches) but could be more explicit about empty list handling. Consider adding a specific condition for empty matches.

-        if matches is not None:
+        if matches:  # None and empty list both fall through to the parser
             ytrue, ypred, weights, ytrue_ids, ypred_ids = zip(*matches)
         else:
             ytrue, ypred, weights = self._parse_confusion_matrix(

Also applies to: 483-484


502-514: Consider memory-efficient deserialization for large datasets

The deserialization logic is correct but could be memory-intensive for large datasets as it creates a list of all matches in memory. Consider implementing a generator-based approach for large datasets.

-        matches = list(zip(ytrue, ypred, weights, ytrue_ids, ypred_ids))
+        # For large datasets, consider using a generator
+        matches = zip(ytrue, ypred, weights, ytrue_ids, ypred_ids)
+        if not config.stream_matches:  # Add a config option for streaming
+            matches = list(matches)
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7ce8217 and 7c0799b.

📒 Files selected for processing (1)
  • fiftyone/utils/eval/segmentation.py (8 hunks)
🔇 Additional comments (2)
fiftyone/utils/eval/segmentation.py (2)

353-353: LGTM: Matches list initialization

The initialization of the matches list is clean and appropriate for storing the segmentation match details.


449-449: LGTM: Constructor parameter addition

The new matches parameter is well-documented with clear type hints in the docstring. The optional nature of the parameter maintains backward compatibility.

Also applies to: 461-463

@prernadh force-pushed the prerna/fix-plot-conf-mat-clicking branch from 7c0799b to 56eb28c on December 31, 2024 at 15:54
@coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
fiftyone/utils/eval/segmentation.py (1)

502-514: Consider adding type validation for IDs.

While the implementation handles missing IDs well, consider adding type validation for the cases where IDs are provided to ensure they match the expected format.

 ytrue_ids = d.get("ytrue_ids", None)
 if ytrue_ids is None:
     ytrue_ids = itertools.repeat(None)
+else:
+    if not isinstance(ytrue_ids, (list, tuple)):
+        raise ValueError("ytrue_ids must be a list or tuple")

 ypred_ids = d.get("ypred_ids", None)
 if ypred_ids is None:
     ypred_ids = itertools.repeat(None)
+else:
+    if not isinstance(ypred_ids, (list, tuple)):
+        raise ValueError("ypred_ids must be a list or tuple")
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7c0799b and 56eb28c.

📒 Files selected for processing (1)
  • fiftyone/utils/eval/segmentation.py (8 hunks)
🔇 Additional comments (2)
fiftyone/utils/eval/segmentation.py (2)

Line range hint 449-463: LGTM! Constructor changes look good.

The changes to the constructor properly handle both cases (with and without matches) and maintain backward compatibility.


395-397: Verify the match collection logic to prevent double-counting.

The match collection happens inside the image loop. Let's verify that this doesn't lead to duplicate entries when processing multiple frames.

✅ Verification successful

No double-counting issue in match collection logic

Based on the code analysis:

  1. matches list is initialized once at the beginning of the evaluation process
  2. Matches are collected within the confusion matrix computation loop, where each match represents a unique intersection between ground truth and predicted segmentation masks
  3. The collection happens after computing the pixel confusion matrix for each image, ensuring each match is recorded exactly once
  4. The matches are appended with unique IDs from both ground truth and predicted segmentations
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for potential duplicate entries in match collection
# Look for patterns where matches are collected within frame loops
ast-grep --pattern 'for $frame in $frames:
  $$$
  matches.append($$$)
  $$$'

Length of output: 79


Script:

#!/bin/bash
# Let's check the context around where matches are collected
rg -B 10 -A 5 "matches\.append.*segmentation" fiftyone/utils/eval/segmentation.py

Length of output: 83


Script:

#!/bin/bash
# Let's look at the broader context of matches collection
rg -B 10 -A 5 "matches\.append.*id" fiftyone/utils/eval/segmentation.py

Length of output: 764


Script:

#!/bin/bash
# Let's check how matches are initialized and where they're used
rg -B 5 -A 5 "matches\s*=\s*\[\]" fiftyone/utils/eval/segmentation.py

Length of output: 480

On this hunk in fiftyone/utils/eval/segmentation.py:

```python
ytrue_ids = None
ypred_ids = None

if matches is not None:
```

Contributor commented:

@prernadh mmm this is an appealing way to store the data because it means _parse_confusion_matrix doesn't need to be called when loading results 🧠

Prior to this refactor and before you created #5330, I had forked your initial implementation with dicts and implemented Model Evaluation panel callbacks in #5331.

As I mentioned on that PR, there's a few implementation details we probably ought to benchmark to make sure the panel is as performant as possible, but I could definitely see us using the data model here 👍

@prernadh (Author) replied:

Sounds good, what were you thinking in terms of benchmarking, @brimoor?


prernadh commented Jan 3, 2025

Subsumed by #5332. Closing this PR.

@prernadh closed this Jan 3, 2025