Hi,
I'm interested in the "Talking to Me" challenge, so I followed the README and downloaded the
"annotations" and "clips" datasets. According to the docs, the data and annotations
for the TTM challenge are shared with the other benchmark, AV Diarization. I do find "av_train.json"
and "av_val.json", which seem to contain the necessary annotations for the TTM task, but "av_test_unannotated.json"
doesn't seem to include those annotations (tracking paths, target IDs, ...). How can I find the
test subset for this challenge?
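For what it's worth, a quick way to check which fields each split actually contains is to inspect the top-level structure of the JSON files. This is a generic sketch, not tied to any particular Ego4D schema; the file names are the ones mentioned above, and the paths are assumed to be in the current directory:

```python
import json


def annotation_summary(path):
    """Return a short description of an annotation JSON file's top-level structure."""
    with open(path) as f:
        data = json.load(f)
    if isinstance(data, dict):
        return {"type": "dict", "keys": sorted(data)}
    if isinstance(data, list):
        summary = {"type": "list", "length": len(data)}
        if data and isinstance(data[0], dict):
            # Show the keys of the first entry as a proxy for the record schema.
            summary["entry_keys"] = sorted(data[0])
        return summary
    return {"type": type(data).__name__}


if __name__ == "__main__":
    # Compare the splits to see which annotation fields the test file is missing.
    for name in ("av_train.json", "av_val.json", "av_test_unannotated.json"):
        try:
            print(name, "->", annotation_summary(name))
        except FileNotFoundError:
            print(name, "-> not found")
```

Diffing the key sets printed for the train/val files against the test file makes it obvious which annotations (tracking paths, target IDs, ...) were withheld.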
I didn't know at the time that the test set of the "Talking to Me" challenge is separate from the other AV Diarization tasks (I had downloaded it by following the instructions in this repo). However, the test data for this task only contains resized face crops of the target person. Some information that is available at training time, such as the original size of the bounding box or the face crops of other people in the scene, could be helpful, and it could be released without jeopardizing the other challenges.