
build(deps): update faster-coco-eval requirement from ==1.5.* to ==1.6.* in /requirements #2750

Merged
merged 12 commits into master from dependabot-pip-requirements-faster-coco-eval-eq-1.6.star
Oct 11, 2024

Conversation

dependabot[bot]
Contributor

@dependabot dependabot bot commented on behalf of github Sep 16, 2024

Updates the requirements on faster-coco-eval to permit the latest version.

Release notes

Sourced from faster-coco-eval's releases.

v1.6.0

  • Reworked mask_api with pybind11 C++.
  • Reworked RLE support.
  • Created test files for all components.
  • The math_matches function has been reworked, with an emphasis on using C++ code.
  • Added more documentation for functions and optimized existing ones.
  • Added the rleToBoundary function with two backends: ["mask_api", "opencv"].
  • Added support for the boundary IoU type (further testing is needed).
  • Created a discussion on async RLE and boundary computation.
Changelog

Sourced from faster-coco-eval's changelog.

v1.6.0

  • Reworked mask_api with pybind11 C++.
  • Reworked RLE support.
  • Created test files for all components.
  • The math_matches function has been reworked, with an emphasis on using C++ code.
  • Added more documentation for functions and optimized existing ones.
  • Added the rleToBoundary function with two backends: ["mask_api", "opencv"].
  • Added support for the boundary IoU type (further testing is needed).
  • Created a discussion on async RLE and boundary computation.

v1.5.7

  • COCOEval comparison bug fix.

v1.5.6

  • Replaced the CED MSE curve with an MAE (px) curve
  • Added CED examples
  • Display IoU and MAE for keypoints
  • Reworked eval._prepare to clear up the return flow
  • Reworked the C++ part of COCOevalEvaluateImages and COCOevalAccumulate
    • Added a new COCOevalEvaluateAccumulate to combine these two calls; the old style is available via separate_eval=True (default: False)
    • COCOevalAccumulate & COCOevalEvaluateAccumulate: COCOeval_faster.eval is now correctly created as numpy arrays
  • Added LVIS dataset support via lvis_style=True in COCOeval_faster (see the sketch after this changelog excerpt):
cocoEval = COCOeval_faster(cocoGt, cocoDt, iouType, lvis_style=True, print_function=print)
cocoEval.params.maxDets = [300]

v1.5.5

  • Add CED MSE curve
  • Review tests
  • Review COCOeval_faster.math_matches function and COCOeval_faster.compute_mIoU function

v1.5.3 - v1.5.4

  • Added the ability to work with skeletons and various keypoints
  • eval.state_as_dict now works for keypoints

v1.5.2

  • Changed comparison to colab_example
  • Appended utils with OpenCV conver_mask_to_poly (extra)
  • Appended drop_cocodt_by_score for extra eval

v1.5.1

... (truncated)
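
To make the lvis_style snippet in the v1.5.6 entry above concrete, here is a minimal end-to-end sketch. It assumes the usual pycocotools-style evaluate/accumulate/summarize flow and made-up file names; consult the faster-coco-eval documentation for the exact API.

from faster_coco_eval import COCO, COCOeval_faster

# Hypothetical COCO-format files; any ground-truth/detection pair works.
cocoGt = COCO("instances_val.json")
cocoDt = cocoGt.loadRes("detections.json")

# LVIS-style evaluation, as in the changelog snippet above.
cocoEval = COCOeval_faster(cocoGt, cocoDt, "bbox", lvis_style=True, print_function=print)
cocoEval.params.maxDets = [300]

cocoEval.evaluate()    # per-image matching
cocoEval.accumulate()  # aggregate over images and categories
cocoEval.summarize()   # print the standard metric table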

Commits

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

📚 Documentation preview 📚: https://torchmetrics--2750.org.readthedocs.build/en/2750/

@dependabot dependabot bot requested a review from SkafteNicki as a code owner September 16, 2024 23:59
@dependabot dependabot bot added the test / CI label Sep 16, 2024
@dependabot dependabot bot requested a review from a team September 16, 2024 23:59
@dependabot dependabot bot force-pushed the dependabot-pip-requirements-faster-coco-eval-eq-1.6.star branch from d7ed1fa to e9d1f6f on September 17, 2024 14:05
@Borda
Member

Borda commented Sep 17, 2024

@SkafteNicki seems we have some compatibility issues, mind checking:

FAILED unittests/detection/test_map.py::TestMapProperties::test_segm_iou_empty_gt_mask[faster_coco_eval] - KeyError: 0
FAILED unittests/detection/test_map.py::TestMapProperties::test_average_argument[True-faster_coco_eval] - assert False
 +  where False = <built-in method allclose of type object at 0x7fcd9ddc45c0>(tensor([ 0.7228,  0.8000,  0.4545, -1.0000,  0.6505,  0.5660]), tensor([ 0.7228,  0.8000,  0.4545, -1.0000,  0.6505,  0.5505]))
 +    where <built-in method allclose of type object at 0x7fcd9ddc45c0> = torch.allclose

@dependabot dependabot bot force-pushed the dependabot-pip-requirements-faster-coco-eval-eq-1.6.star branch 3 times, most recently from c1015b6 to f660f41 on September 24, 2024 17:15
@dependabot dependabot bot force-pushed the dependabot-pip-requirements-faster-coco-eval-eq-1.6.star branch 2 times, most recently from 5e923b6 to 1afa179 on October 3, 2024 19:00
@dependabot dependabot bot force-pushed the dependabot-pip-requirements-faster-coco-eval-eq-1.6.star branch from 1afa179 to 1b3fb01 on October 8, 2024 17:30
Updates the requirements on [faster-coco-eval](https://github.com/MiXaiLL76/faster_coco_eval) to permit the latest version.
- [Release notes](https://github.com/MiXaiLL76/faster_coco_eval/releases)
- [Changelog](https://github.com/MiXaiLL76/faster_coco_eval/blob/main/history.md)
- [Commits](MiXaiLL76/faster_coco_eval@1.5.2...1.6.0)

---
updated-dependencies:
- dependency-name: faster-coco-eval
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <[email protected]>
@dependabot dependabot bot force-pushed the dependabot-pip-requirements-faster-coco-eval-eq-1.6.star branch from 1b3fb01 to 8e18015 on October 8, 2024 17:30
@SkafteNicki
Member

For my own sake, to remember what I did:

  • Needed to add special code for dealing with the cases when either no ground truths or no predictions are provided. In new versions of faster-coco-eval this just leads to an error, whereas older versions (and pycocotools) simply return -1 for all scores.
  • Needed to reinitialize the dataset and eval objects when calculating per class, because the new version of faster-coco-eval injects some attributes that mess with repeated evaluations (see the sketch after this list).
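
A rough sketch of how both workarounds could look (illustrative only; the helper name and the -1 sentinel handling are assumptions, not the actual torchmetrics code):

from faster_coco_eval import COCO, COCOeval_faster

NUM_COCO_STATS = 12  # summarize() produces 12 standard COCO statistics

def safe_coco_eval(gt: COCO, dt: COCO, iou_type: str) -> list:
    # Workaround 1: newer faster-coco-eval errors out when either side is
    # empty; older versions (and pycocotools) returned -1 for all scores,
    # so emulate that behaviour here.
    if not gt.getAnnIds() or not dt.getAnnIds():
        return [-1.0] * NUM_COCO_STATS
    # Workaround 2: build the eval object fresh on every call, so attributes
    # injected by a previous (e.g. per-class) evaluation cannot leak into
    # repeated evaluations.
    coco_eval = COCOeval_faster(gt, dt, iou_type)
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()
    return list(coco_eval.stats)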

@SkafteNicki SkafteNicki self-assigned this Oct 9, 2024
@SkafteNicki SkafteNicki added this to the v1.5.0 milestone Oct 9, 2024

codecov bot commented Oct 9, 2024

Codecov Report

Attention: Patch coverage is 93.93939% with 2 lines in your changes missing coverage. Please review.

Project coverage is 36%. Comparing base (151fef1) to head (e022592).
Report is 1 commit behind head on master.

❗ There is a different number of reports uploaded between BASE (151fef1) and HEAD (e022592).

HEAD has 80 fewer uploads than BASE

Flag               BASE (151fef1)   HEAD (e022592)
Windows            6                3
python3.8          6                3
cpu                40               20
torch1.13.1+cpu    6                3
macOS              8                4
python3.10         10               5
torch2.0.1         4                2
torch2.4.0+cpu     2                1
python3.11         6                3
torch2.4.0         2                1
torch1.12.1+cpu    2                1
Linux              26               13
python3.9          18               9
torch2.0.1+cpu     6                3
torch1.10.2+cpu    2                1
torch1.11.0+cpu    2                1
torch2.3.1+cpu     4                2
torch2.2.2+cpu     4                2
torch2.1.2+cpu     2                1
torch2.4.0+cu121   2                1
torch1.13.1        2                1
Additional details and impacted files
@@           Coverage Diff            @@
##           master   #2750     +/-   ##
========================================
- Coverage      69%     36%    -33%     
========================================
  Files         329     329             
  Lines       18085   18086      +1     
========================================
- Hits        12506    6567   -5939     
- Misses       5579   11519   +5940     

@SkafteNicki
Member

Installing the recent version of faster-coco-eval does not seem to work on Windows and Mac.
I have opened issue MiXaiLL76/faster_coco_eval#39 for help.

MiXaiLL76 added a commit to MiXaiLL76/faster_coco_eval that referenced this pull request Oct 9, 2024
MiXaiLL76 added a commit to MiXaiLL76/faster_coco_eval that referenced this pull request Oct 9, 2024
@MiXaiLL76

In my tests of version 1.6.2 I solved the test_segm_iou_empty_gt_mask problem, but I couldn't solve the test_average_argument problem in a simple way.

It lies in how scores are sorted on the Python/C++ side; I couldn't figure it out quickly.

https://github.com/Lightning-AI/torchmetrics/blob/master/tests/unittests/detection/test_map.py#L315
Here in the test, the score values for elements 1 and 7 are identical, which causes confusion at the sorting stage. After adding 1e-8 to element 7 (to get 0.204 + 1e-8) the error went away (see the sketch below).
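
A minimal sketch of the tie problem (the scores other than the tied 0.204 pair are made up):

import numpy as np

# Detections 1 and 7 share the same score, mirroring the test data above.
scores = np.array([0.536, 0.204, 0.782, 0.290, 0.651, 0.417, 0.898, 0.204])

# An unstable sort may order the tied pair either way, so the Python and C++
# sides can disagree about which detection gets matched first.
print(np.argsort(-scores, kind="quicksort"))  # tie order is unspecified
print(np.argsort(-scores, kind="stable"))     # ties keep input order: 1 before 7

# Nudging one tied score by a tiny epsilon makes the order deterministic:
scores[7] += 1e-8
print(np.argsort(-scores, kind="quicksort"))  # 7 now sorts ahead of 1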

I will try to figure it out, but I am not sure if it is possible yet.

@SkafteNicki
Member

@MiXaiLL76 thanks for the new release of faster-coco-eval. I had already found workarounds for both failing tests on our side, so do not worry about that. Hopefully this PR will now pass without further problems.

@Borda Borda requested review from MiXaiLL76 and Borda October 10, 2024 19:03
@Borda Borda enabled auto-merge (squash) October 10, 2024 19:05
@Borda Borda requested a review from MiXaiLL76 October 11, 2024 06:16
@Borda Borda merged commit c2c68b6 into master Oct 11, 2024
69 of 70 checks passed
@Borda Borda deleted the dependabot-pip-requirements-faster-coco-eval-eq-1.6.star branch October 11, 2024 06:38
@mergify mergify bot added the ready label Oct 11, 2024
@SkafteNicki
Member

Thanks @MiXaiLL76 for the fast response!

@MiXaiLL76

@SkafteNicki
I have additionally fixed support for Windows and macOS. I think it should not cause problems, but it would be good if you could check it on your side as well. The new version 1.6.4 now ships Windows and macOS wheel (whl) packages (earlier I only built Ubuntu).

@SkafteNicki
Member

Hi @MiXaiLL76 thanks for letting me know. I can see that we set our requirement file to

faster-coco-eval >=1.6.3, <1.7.0

which means the new version should be auto-tested on new commits. The latest commit to master seems to have passed without problems, so everything should be good. But thanks for letting us know!
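
For reference, the equivalent pip constraint (illustrative; torchmetrics pins this in its requirements files rather than on the command line):

pip install "faster-coco-eval>=1.6.3,<1.7.0"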
