
About Widerface #8

Closed
StetchPlane opened this issue Nov 12, 2021 · 7 comments

Comments

@StetchPlane

How are the anchors set when DDOD is applied to WiderFace (including base_sizes, ratios, scales_per_octave, and octave_base_scale)?

@zehuichen123
Owner

We simply adopt the same settings as TinaFace, as well as its test-time augmentation. Note that you may need to add clip_gradient to stabilize the training process.
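For readers unfamiliar with how such anchor settings look in an mmdetection config, here is a minimal sketch in the `AnchorGenerator` format. The specific numbers below are assumptions for illustration only; the authoritative TinaFace values should be taken from the vedadet repository linked below.

```python
# Hypothetical anchor settings in mmdetection's AnchorGenerator dict format.
# All numeric values here are assumptions, not the confirmed TinaFace config.
anchor_generator = dict(
    type='AnchorGenerator',
    octave_base_scale=2 ** (4 / 3),   # assumed base scale
    scales_per_octave=3,              # assumed: 3 anchor scales per level
    ratios=[1.3],                     # assumed: one tall aspect ratio for faces
    strides=[4, 8, 16, 32, 64, 128],  # assumed: six FPN levels starting at P2
)
```

Starting the strides at 4 (rather than the common 8) is the kind of choice WiderFace tends to require, since most ground-truth faces are very small.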

@zehuichen123
Owner

And you may need to borrow some code from https://github.com/Media-Smart/vedadet for things mmdet does not implement, such as the data augmentation.

@StetchPlane
Author

I previously added the three innovations from DDOD to TinaFace, leaving the other settings unchanged, but the experimental accuracy decreased, especially with the "atss_cost_assigner". However, I did not use clip_gradient in this process.

@zehuichen123
Owner

zehuichen123 commented Nov 18, 2021

Yes, if clip_gradient is not added, the result is quite bad, perhaps 89/90? I don't remember exactly. We infer that the reason mainly stems from the vast number of small ground-truth boxes in WiderFace.

@StetchPlane
Author

[screenshot attached: Snipaste_2021-11-18_16-40-03]
What was the experimental setting for "clip_gradient"? Do you still remember it?

@zehuichen123
Owner

Yes. If I remember correctly, max_norm was 10. At the very beginning of training, the loss is quite different from the run without clip_gradient.
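In an mmdetection config, gradient clipping is enabled through the optimizer hook. A minimal sketch, using the max_norm of 10 mentioned above; norm_type=2 (L2 norm) is an assumption here, as it is the usual default:

```python
# Sketch of gradient clipping in an mmdetection-style config.
# max_norm=10 follows the value given in this thread;
# norm_type=2 is assumed (the common L2-norm default).
optimizer_config = dict(
    grad_clip=dict(max_norm=10, norm_type=2),
)
```

Under the hood this corresponds to calling PyTorch's `torch.nn.utils.clip_grad_norm_` on the model parameters each step, which caps the total gradient norm and prevents the loss spikes that the many tiny WiderFace boxes can otherwise cause early in training.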

@StetchPlane
Author

Thank you very much for your patience.
