
COCO Caption benchmark #129

Open · HoinJung opened this issue Apr 16, 2024 · 0 comments

HoinJung commented Apr 16, 2024

Hi,
Thanks for sharing this awesome work.
I have a question about the COCO image-captioning benchmark. The BLEU-4 score in Table 6 is around 40, but my implementation gets a BLEU-4 score of around 4.0. The evaluation code itself should be fine, because it shows the expected performance for CLIP-cap and BLIP.
Following the demo's default settings, I'm using scale=7.5 and ddim_eta=0, but the score doesn't improve when I change to scale=2.0 and ddim_eta=1 as in Table 12. Is there anything I can do to produce a comparable BLEU-4 score on the COCO dataset?
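
For reference, here is a minimal sketch of the BLEU evaluation I'm running, using pycocoevalcap (the annotation and result file paths are placeholders, and the result file is assumed to follow the standard COCO result format):

```python
from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap

# Placeholder paths: ground-truth COCO captions, and the generated
# captions as a JSON list of {"image_id": int, "caption": str}.
annotation_file = "annotations/captions_val2014.json"
results_file = "results/generated_captions.json"

coco = COCO(annotation_file)
coco_result = coco.loadRes(results_file)

# Score only the images that actually have generated captions.
coco_eval = COCOEvalCap(coco, coco_result)
coco_eval.params["image_id"] = coco_result.getImgIds()
coco_eval.evaluate()

# coco_eval.eval maps metric names (Bleu_1..Bleu_4, METEOR, ROUGE_L,
# CIDEr, SPICE) to scores in [0, 1]; I multiply by 100 to compare
# against the values reported in the paper's tables.
for metric, score in coco_eval.eval.items():
    print(f"{metric}: {100 * score:.2f}")
```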

Here's an example of the results I'm getting:

[image: example generated captions]
