
Failed to reproduce the w2a16 result with LLaMA-2 7B #102

Open
stackByStack opened this issue Dec 16, 2024 · 1 comment
@stackByStack

Command

CUDA_VISIBLE_DEVICES=0 python main.py \
--model /PATH/TO/LLaMA/llama-7b  \
--epochs 40 --output_dir ./log/llama-7b-w2a16 \
--eval_ppl --wbits 2 --abits 16 --lwc --lwc_lr 5e-3 --multigpu

lwc_lr is set to 5e-3, following the settings in Section 4.1 of the paper.

Log

[2024-12-14 11:33:14 root] (main.py 258): INFO Namespace(model='/PATH/TO/LLaMA/llama2/Llama-2-7b-hf', cache_dir='./cache', output_dir='./log/llama-7b-w2a16', save_dir=None, resume=None, real_quant=False, calib_dataset='wikitext2', nsamples=128, batch_size=1, seed=2, tasks='', eval_ppl=True, num_fewshot=0, wbits=2, abits=16, group_size=None, alpha=0.5, let_lr=0.005, lwc_lr=0.005, wd=0, epochs=40, let=False, lwc=True, aug_loss=False, symmetric=False, disable_zero_point=False, a_dynamic_method='per_token', w_dynamic_method='per_channel', limit=-1, multigpu=True, deactive_amp=False, attn_implementation='eager', net=None, act_scales=None, act_shifts=None)

...

[2024-12-14 16:34:27 root] (omniquant.py 274): INFO layer 31 iter 39 loss:6.287716865539551 norm:0.018731053918600082 max memory_allocated 10873.7470703125 

[2024-12-14 16:34:30 root] (main.py 353): INFO 18074.52874803543

[2024-12-14 16:34:43 root] (main.py 100): INFO load calibration from ./cache/testloader_Llama_wikitext2_all.cache

[2024-12-14 16:36:28 root] (main.py 144): INFO wikitext2 : 5469.2900390625

[2024-12-14 16:36:28 root] (main.py 100): INFO load calibration from ./cache/testloader_Llama_c4_all.cache

[2024-12-14 16:39:12 root] (main.py 144): INFO c4 : 17540.67578125
@stackByStack
Author

I used the Llama-2-7b-w2a16.pth checkpoint from the repo and successfully reproduced the result.
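
For reference, the evaluation with the released checkpoint was run with a command roughly like the one below. This is a sketch, not an exact invocation: the paths are placeholders, and I am assuming that the --resume argument shown in the Namespace above loads the saved quantization parameters and that --epochs 0 skips the block-wise training so only evaluation runs.

CUDA_VISIBLE_DEVICES=0 python main.py \
--model /PATH/TO/LLaMA/llama2/Llama-2-7b-hf \
--resume /PATH/TO/Llama-2-7b-w2a16.pth \
--epochs 0 --output_dir ./log/llama-2-7b-w2a16-eval \
--eval_ppl --wbits 2 --abits 16 --lwc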
