[Low-bit optim] Add Llama2-7B finetune benchmarks #746
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/746
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure, 5 Unrelated Failures as of commit 2de6df0 with merge base ba2d3b1.
NEW FAILURE - The following job has failed:
FLAKY - The following jobs failed but were likely due to flakiness present on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
cc @mlazos who was looking at large compile times
@msaroufim Any blockers to merge this? The failing CPU test is unrelated, though I'm probably in charge of it since it's FP6-LLM 🌚. Seems like something changed with CPU inductor.

Some thoughts on reducing compile time. There are 2 approaches to compile the optimizer step in low-bit optim: (1) compile the step for a single parameter and call it in a Python loop over parameters, or (2) compile the step over all parameters at once.

Currently Adam8bit and AdamFp8 use approach (2) (with static shape) since it is faster (but much slower to compile), while Adam4bit uses approach (1) (with dynamic shape) since "Adam4bit + approach (2)" uses excessive memory. Approach (1) requires dynamic shape to avoid hitting the recompile limit.

Now looking back, perhaps we can do approach (1) with static shape + temporarily remove the recompile limit? I have seen FlexAttention doing this. It's probably safe to do so, since for a given model, the number of recompiles is bounded.
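(A minimal sketch of the trade-off discussed above, not torchao's actual implementation: the `single_param_adam_step` function and the cache-limit value below are illustrative assumptions.)

```python
import torch

# Hypothetical plain-precision Adam step for one parameter; the real low-bit
# optimizers would additionally dequantize/requantize the optimizer states.
def single_param_adam_step(p, grad, exp_avg, exp_avg_sq, step, lr, beta1, beta2, eps):
    exp_avg.lerp_(grad, 1 - beta1)                                # m = b1*m + (1-b1)*g
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)  # v = b2*v + (1-b2)*g^2
    denom = (exp_avg_sq / (1 - beta2 ** step)).sqrt_().add_(eps)
    p.addcdiv_(exp_avg / (1 - beta1 ** step), denom, value=-lr)

# Approach (1) with dynamic shape: one graph is reused for every parameter,
# keeping compile time low at some cost in kernel specialization.
step_dynamic = torch.compile(single_param_adam_step, fullgraph=True, dynamic=True)

# Approach (1) with static shapes + a temporarily raised recompile limit:
# each distinct parameter shape gets its own specialized graph.
torch._dynamo.config.cache_size_limit = 1024  # assumption: larger than the number of distinct param shapes
step_static = torch.compile(single_param_adam_step, fullgraph=True, dynamic=False)
```

Either compiled function would then be called once per parameter inside the optimizer's `step()` loop.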
I'm gonna add some of your comments here to the README since they're helpful
* add Llama3.1-8B finetune bench
* update doc
* Update README.md

Co-authored-by: Mark Saroufim <[email protected]>
* run quantization on MPS
* add quantization and secrets
* version check
Update: change Llama3.1-8B-instruct to Llama2-7B
Fine-tune Llama2-7B on the Alpaca dataset. Full BF16, 1 epoch, A100, fixed random seed. Benchmarks are done with torchtune.
Summary

[Results table (truthfulqa_mc2 acc) and accompanying note not reproduced here.]
Observations
Command used (change `optimizer` and `checkpointer.output_dir` across runs):
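(The command block itself is not reproduced above; the invocation was roughly of the following form. The config name, optimizer path, and values shown here are assumptions, not the exact settings from this PR.)

```bash
# Sketch only: config name, optimizer component path, and values are assumptions.
tune run full_finetune_single_device --config llama2/7B_full_low_memory \
    optimizer._component_=torchao.prototype.low_bit_optim.AdamW8bit \
    checkpointer.output_dir=out/llama2-7B-adamw8bit \
    seed=42
```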
Fancy graphs!
Compare across different n-bit optimizers
Compare 8-bit AdamW between ao and bnb. The fact that the two graphs overlap shows that our implementation is correct and competitive in speed (except compile time 😭)!