About the test results of charades_sta. #8
Comments
Yeah, the problem is that the model always predicts "Throughout the entire video." at the first turn, which terminates the recursive grounding process. Why is there a CUDA out-of-memory traceback in your log at lines 102-119? Maybe the model is not loaded correctly?
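For illustration, here is a minimal sketch of the recursive grounding loop as described in this comment. The `ask` callable, the termination phrase check, and the crude middle-third refinement rule are simplified placeholders, not the actual HawkEye implementation.

```python
# Minimal sketch of the recursive grounding loop described above; `ask` is any
# callable returning the model's answer for the current candidate segment.
def recursive_grounding(ask, question, video_span, max_turns=4):
    start, end = video_span  # current candidate segment (seconds)
    for _ in range(max_turns):
        answer = ask(question, (start, end))
        if answer == "Throughout the entire video.":
            # The model claims the event covers the whole current segment, so
            # recursion stops. If this happens at the first turn, the prediction
            # degenerates to the full video, which matches the flat results in
            # this thread (mean IoU around 0.27, near-zero R@0.5/R@0.7).
            break
        # Otherwise zoom into the part the answer points at. The exact
        # refinement rule is HawkEye-specific; a middle-third split is used
        # here purely for illustration.
        third = (end - start) / 3
        if "beginning" in answer:
            end = start + third
        elif "end" in answer:
            start = end - third
        else:  # "middle"
            start, end = start + third, end - third
    return start, end

# Example: a model that always answers "Throughout the entire video."
pred = recursive_grounding(lambda q, span: "Throughout the entire video.",
                           "When does the person open the door?", (0.0, 30.0))
print(pred)  # (0.0, 30.0) -- the full video, as in the failure case above
```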
I ran the code again and there was no "out of memory" error, but the result was the same. After debugging the program, I found that the prediction for every video is `llm_message = '('`.
Here is my recursive grounding log:
I compared my recursive grounding log (which works as expected) with yours. The main difference seems to come from the special token IDs: for the HawkEye model we use 0 as the BOS token ID (line 112 in commit 635b1c5):

    llama_config.bos_token_id = 0
    llama_config.eos_token_id = 1

As for how this problem arises: all special tokens of HawkEye were inherited from vicuna-v0, which in turn inherited them from LLaMA (v1). I vaguely recall that when LLaMA was first released in early 2023, its special tokens were inconsistent across different released versions, which caused some confusion at the time. Please let me know whether this solves your problem, so I can fix this bug.
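For context, a minimal sketch of what that override looks like when loading the language model with Hugging Face transformers. The checkpoint path is a placeholder and this is not the exact HawkEye loading code, only an illustration of the token-ID fix under those assumptions.

```python
from transformers import LlamaConfig, LlamaForCausalLM

# Placeholder path; not the actual HawkEye checkpoint location.
checkpoint = "path/to/vicuna-v0-7b"

# Force the LLaMA-v1/vicuna-v0 special token IDs, matching the repo's
# `llama_config.bos_token_id = 0` / `eos_token_id = 1` override. If the
# checkpoint's config ships different IDs (e.g. bos=1, eos=2), generation
# can terminate almost immediately and produce degenerate outputs such as
# the '(' reported above.
llama_config = LlamaConfig.from_pretrained(checkpoint)
llama_config.bos_token_id = 0
llama_config.eos_token_id = 1

model = LlamaForCausalLM.from_pretrained(checkpoint, config=llama_config)
```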
I tried, but it didn't work.
Great work!
I got strange results. Could you help me?
../outputs/charades_sta-recursive_grounding-4_turns.jsonl num examples: 3720
turns: 1
mean iou: 0.2698
R@0.3/0.5/0.7: 0.3430/0.0022/0.0000
turns: 2
mean iou: 0.2698
R@0.3/0.5/0.7: 0.3430/0.0022/0.0000
turns: 3
mean iou: 0.2698
R@0.3/0.5/0.7: 0.3430/0.0022/0.0000
turns: 4
mean iou: 0.2698
R@0.3/0.5/0.7: 0.3430/0.0022/0.0000
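For reference, a small self-contained sketch of how these metrics are typically computed for temporal grounding (mean IoU and recall at IoU thresholds). This is an illustration, not the repository's evaluation code.

```python
# Illustrative computation of the metrics printed above: mean IoU and
# R@{0.3, 0.5, 0.7} over predicted vs. ground-truth segments.
def temporal_iou(pred, gt):
    """IoU of two (start, end) segments in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def evaluate(preds, gts, thresholds=(0.3, 0.5, 0.7)):
    ious = [temporal_iou(p, g) for p, g in zip(preds, gts)]
    mean_iou = sum(ious) / len(ious)
    recalls = {t: sum(iou >= t for iou in ious) / len(ious) for t in thresholds}
    return mean_iou, recalls

# Example: a prediction covering the whole video gets some R@0.3 credit but
# almost nothing at the stricter thresholds, similar to the numbers above.
mean_iou, recalls = evaluate(preds=[(0.0, 30.0)], gts=[(10.0, 20.0)])
print(mean_iou, recalls)  # 0.333..., {0.3: 1.0, 0.5: 0.0, 0.7: 0.0}
```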
The log file is:
charades_sta-recursive_grounding-4_turns.log
The only change to the current code is in
HawkEye/models/blip2/blip2.py
(line 28 in 635b1c5):
local_files_only=True -> False, so that the necessary files can be downloaded.
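For illustration, this is the kind of change described: the model/tokenizer name below is hypothetical (the actual call at blip2.py line 28 may load something else); the point is only what the `local_files_only` flag does.

```python
from transformers import BertTokenizer

# Hypothetical example of the edit described above. With
# local_files_only=True the files must already be in the local cache;
# setting it to False lets huggingface_hub download them on first run.
tokenizer = BertTokenizer.from_pretrained(
    "bert-base-uncased",
    local_files_only=False,  # was True in the original code
)
```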