
Match HOTA with official result #186

Merged: 11 commits into cheind:develop on May 27, 2024

Conversation

@Justin900429 (Contributor) commented Dec 16, 2023

Following up on #183, this adds support for the HOTA metrics and matches the official results.

Currently, only computing the metrics at specified thresholds is supported. To compute the whole HOTA metric, try the following sample:

import os

import numpy as np

import motmetrics as mm

"""Tests metrics for sequences TUD-Campus and TUD-Stadtmitte."""
dnames = [
    "TUD-Campus",
    "TUD-Stadtmitte",
]

DATA_DIR = "motmetrics/data"


def compute_motchallenge(dname):
    df_gt = mm.io.loadtxt(os.path.join(dname, "gt.txt"))
    df_test = mm.io.loadtxt(os.path.join(dname, "test.txt"))
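    # evaluate at the 19 HOTA alpha thresholds 0.05, 0.10, ..., 0.95;
    # compare_to_groundtruth_reweighting returns one accumulator per threshold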
    th_list = np.arange(0.05, 0.99, 0.05)
    res_list = mm.utils.compare_to_groundtruth_reweighting(df_gt, df_test, "iou", distth=th_list)
    return res_list


accs = [compute_motchallenge(os.path.join(DATA_DIR, d)) for d in dnames]
mh = mm.metrics.create()

for dataset_idx, dname in enumerate(dnames):
    deta = []
    assa = []
    hota = []
    for alpha_idx in range(len(accs[dataset_idx])):
        summary = mh.compute_many(
            [accs[dataset_idx][alpha_idx]],
            metrics=[
                "deta_alpha",
                "assa_alpha",
                "hota_alpha",
            ],
            names=[dname],
            generate_overall=False,
        )
        deta.append(float(summary["deta_alpha"].iloc[0]))
        assa.append(float(summary["assa_alpha"].iloc[0]))
        hota.append(float(summary["hota_alpha"].iloc[0]))

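    # the final DetA/AssA/HOTA are the means of the per-alpha scores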
    deta = sum(deta) / len(deta)
    assa = sum(assa) / len(assa)
    hota = sum(hota) / len(hota)
    print(f"{dname}: HOTA: {hota * 100:.3f} | AssA: {assa * 100:.3f} | DetA: {deta * 100:.3f}")

The results will be (these match the official ones):

TUD-Campus: HOTA: 39.140 | AssA: 36.912 | DetA: 41.805
TUD-Stadtmitte: HOTA: 39.785 | AssA: 40.884 | DetA: 39.227

Checklist

  • Add docstrings for the added functions
  • A more efficient implementation
  • An appropriate unit test for HOTA

Note

The current implementation is NOT efficient; help from the community to improve it is welcome.

@cheind (Owner) commented Dec 17, 2023

@Justin900429 thanks a lot! py-motmetrics can cache intermediate results. If you look here https://github.com/cheind/py-motmetrics/blob/develop/motmetrics/metrics.py#L204, a cache is set up for all intermediate metrics that are computed, which is then populated here https://github.com/cheind/py-motmetrics/blob/develop/motmetrics/metrics.py#L354. So as long as your intermediate values become metrics themselves, you should be able to cache their results.
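
For context, the caching pattern looks roughly like this: a metric function declares its inputs by argument name, and when registered with deps='auto' those inputs are resolved from the cache of already-computed metrics. A minimal sketch (the metric names twice_frames and quadruple_frames are made up for illustration, not part of the library):

import motmetrics as mm

# With deps='auto', dependencies are inferred from argument names; every
# registered metric's value is cached, so dependants reuse it instead of
# recomputing it.
def twice_frames(df, num_frames):
    # builds on the built-in 'num_frames' metric
    return 2 * num_frames

def quadruple_frames(df, twice_frames):
    # 'twice_frames' is looked up in the cache, not recomputed
    return 2 * twice_frames

mh = mm.metrics.create()
mh.register(twice_frames, deps='auto')
mh.register(quadruple_frames, deps='auto')
# mh.compute(acc, metrics=['quadruple_frames'])  # 'twice_frames' computed once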

@Justin900429 (Contributor, Author)

@cheind thanks for pointing that out, but we need to compute $\text{HOTA}_\alpha$ at different thresholds before computing the final $\text{HOTA}$. Is this possible with the cache?

@cheind (Owner) commented Dec 18, 2023

@Justin900429 that is a good question. I haven't looked at HOTA yet, but if the computation is similar to the Average Precision (AP) metric, one could instead pre-compute a 'tensor' of values for which the individual AP thresholds become simple sums over individual axes.
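
To make that concrete, the idea would be to fill per-threshold count arrays during a single matching pass, so each threshold's score becomes an axis-wise reduction rather than a re-run of the matching. A rough numpy sketch (the counts below are placeholders, not real data, and the shapes are assumptions):

import numpy as np

alphas = np.arange(0.05, 0.99, 0.05)          # 19 alpha thresholds
n_frames = 3                                  # toy sequence length
# counts per (alpha, frame), filled once during matching in a real implementation
tp = np.ones((len(alphas), n_frames))
fp = np.zeros((len(alphas), n_frames))
fn = np.ones((len(alphas), n_frames))

# DetA(alpha) = TP / (TP + FN + FP), reduced over the frame axis per threshold
det_a = tp.sum(axis=1) / (tp.sum(axis=1) + fn.sum(axis=1) + fp.sum(axis=1))
print(det_a.mean())                           # averaged over alphas, as HOTA does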

@Justin900429 (Contributor, Author) commented Dec 18, 2023

Thresholds are fixed and should be [0.05, 0.1, ..., 0.95]. The current PR supports all the needed predefined functions (added to metrics.py). However, the current implementation is inefficient and not user-friendly; here are some issues:

  1. Users have to create a list of accumulators and compute the average manually.
  2. Updating an accumulator for HOTA is not compatible with "online" tracking, because all predictions and ground truths are needed in advance to compute the global alignment score.
  3. Following 2., a post-processing mechanism should be designed and implemented.

@cheind (Owner) commented Dec 18, 2023

@Justin900429 thanks for your update! Would it be feasible to implement HOTA like AP is implemented for the COCO datasets? See here https://github.com/ppwwyyxx/cocoapi/blob/4670067b35e7b65d618c9746c456fe2c1128049c/PythonAPI/pycocotools/cocoeval.py#L315 where, in accumulate, a tensor with multiple axes is defined that carries all the information to compute AP(th) quickly, as done in summarize.

One question regarding the current state: does it make sense to merge now and optimize later? Should we update the docs?

@Justin900429 (Contributor, Author) commented Dec 18, 2023

@cheind I tried using the idea from COCO, but it will take more time to make sure it doesn't break anything else. It is OK to merge now, but the docs should be well prepared (I'll finish this later). The usage will be totally different from every other metric: at the very least, it should only be used after all the predictions are done and is not compatible with update.

@johnnynunez

any news?

@mikel-brostrom

👀

@Justin900429 (Contributor, Author)

Sorry for the late reply. I have updated the README for HOTA. The "for-loop" is still a little bit annoying; I will merge it into mh.compute_many in a future commit.

@mikel-brostrom

Thanks for the update @Justin900429! Really looking forward to this getting merged 🚀

@Justin900429 (Contributor, Author)

I figured out how to remove the for loop :) It looks simple now. Efficiency is the next step.

@Justin900429 (Contributor, Author)

Hi @cheind, it can be merged now if the workflow passes. I will come back to this in a new PR to improve efficiency. Thanks a lot 😊.

cheind merged commit b687013 into cheind:develop on May 27, 2024
3 checks passed
@cheind (Owner) commented May 27, 2024

@Justin900429 amazing work - thanks a lot for your time and effort! Just merged into develop :)

@mikel-brostrom commented May 27, 2024

I have been waiting for this PR for years @Justin900429. We can all now finally ditch the trackeval repo in favor of this package 😄. Thanks to everybody involved in the process of getting this merged.

@cheind (Owner) commented May 27, 2024

As I'm not actively researching MOT topics anymore, I wonder if we should let any MOT challenge sites know about this particular addition?
