Refactored benchmark tests #196
Conversation
can we add a readme in benchmark/ to elaborate how to run these?
Commits:
- add'l stuff
- Not working
- Somewhat working
- Working benchmarking script
- Fixed script for overwriting values
- Updated swiglu
- Updated benchmark_rope
- Updated benchmark_rms_norm
- Updated flce
- Updated geglu
- Updated embedding
- Updated layernorm
- working notebook
- Cleared outputs
- Reran for latest liger version, switched to quantiles
- Fixed checkstyle
- Added instructions
- fixed typo
Force-pushed from 79ee25b to a1a56a8
Added instructions to contributing.md
This is awesome! A follow-up task could be making the decimal precision of the speed benchmark configurable; the Triton implementation hard-codes it. cc @austin362667
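A minimal sketch of the follow-up idea above: making the decimal precision of reported speed numbers configurable instead of hard-coding it. The function name `format_timing` is hypothetical, not from the PR.

```python
# Hypothetical helper: round a timing only when a precision is requested,
# otherwise keep the raw, unrounded value.
def format_timing(value_ms, decimals=None):
    """Format a timing in ms; round only when `decimals` is given."""
    if decimals is None:
        return repr(value_ms)          # keep the raw, unrounded value
    return f"{value_ms:.{decimals}f}"  # round to the requested precision

print(format_timing(0.1234567))      # raw value: 0.1234567
print(format_timing(0.1234567, 3))   # rounded: 0.123
```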
Taking the raw results from triton.testing.do_bench here, we already get the unrounded values. For example, you can see in the CSV:
ah i see! you are writing the csv on your own
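A hedged sketch of what writing raw benchmark quantiles to a CSV by hand might look like, as described above. The column names and the helper function are illustrative, not the PR's actual schema.

```python
import csv
import io

# Illustrative helper: write one kernel's (p20, p50, p80) timings in ms to a
# CSV row without rounding, as you might do with raw do_bench quantiles.
def write_benchmark_row(writer, kernel_name, quantiles):
    p20, p50, p80 = quantiles
    writer.writerow([kernel_name, p20, p50, p80])

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["kernel", "p20_ms", "p50_ms", "p80_ms"])
write_benchmark_row(writer, "rms_norm", (0.123456, 0.130012, 0.141789))
print(buf.getvalue())
```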
Summary
This PR adds a CSV file (benchmark/data/all_benchmark_data.csv) that contains more complete information on how the benchmarking test was set up. The benchmarks can be run with:
make run-benchmarks
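A hedged sketch of how such a benchmark CSV could be inspected after running the benchmarks. The column names and sample rows below are illustrative, not the actual contents of benchmark/data/all_benchmark_data.csv.

```python
import csv
import io

# Illustrative sample standing in for a benchmark CSV; the real file's
# schema may differ.
sample = """kernel,mode,x_value,y_value_50
rms_norm,forward,1024,0.13
rms_norm,forward,2048,0.27
"""

# Parse the rows and find the configuration with the lowest median timing.
rows = list(csv.DictReader(io.StringIO(sample)))
fastest = min(rows, key=lambda r: float(r["y_value_50"]))
print(fastest["x_value"], fastest["y_value_50"])  # prints: 1024 0.13
```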
Testing Done
- make test to ensure correctness
- make checkstyle to ensure code style
- make test-convergence to ensure convergence