Benchmarking #1353
Conversation
Helpful Links: See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1353
Note: Links to docs will display an error until the docs builds have been completed.
No failures as of commit 2a4a964 with merge base 5eb6339. This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D66512859
Summary: Add benchmarks for experimental torchao kernels. Differential Revision: D66512859
de00766 to 75a0c6d (Compare)
This pull request was exported from Phabricator. Differential Revision: D66512859
torchao/_models/llama/generate.py
Outdated
import os
import subprocess
import tempfile

def cmake_build_torchao_ops(temp_build_dir):
Is this required? Why not add it to the torchao default build?
If this is required, I'd suggest putting these in a util and probably calling it in int8_dynamic_activation_intx_weight.
Happy to add it to a default build for ARM CPU if you have a code pointer.
That would be great. Here is our build code for the kernels, I think:
Line 53 in 5eb6339:
def get_extensions():
It doesn't look like cmake is being used there, and it would require more refactoring to use it. I factored out the install script into a utility, though.
cc @msaroufim and @atalman: if either of you can provide some guidance on how we can add a cmake build script to torchao/setup.py, that would be great.
OK, I'll land this for now. As for creating build scripts in torchao/setup.py, I'll leave that up to you all. We already have build scripts for these kernels in ExecuTorch and torchchat, so they are not difficult to use on those surfaces.
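For reference, one common pattern for hooking a CMake build into setup.py is an Extension subclass paired with a custom build_ext command. This is a generic sketch under the assumption that a plain configure-and-build step is enough; the class names, flags, and output-directory handling are illustrative, not torchao's actual setup:

```python
import os
import subprocess
from setuptools import Extension
from setuptools.command.build_ext import build_ext


class CMakeExtension(Extension):
    # An Extension whose compilation is delegated to CMake, so the
    # setuptools sources list is intentionally empty.
    def __init__(self, name, sourcedir=""):
        super().__init__(name, sources=[])
        self.sourcedir = os.path.abspath(sourcedir)


class CMakeBuild(build_ext):
    def build_extension(self, ext):
        # Place the built library where setuptools expects the extension.
        extdir = os.path.abspath(
            os.path.dirname(self.get_ext_fullpath(ext.name))
        )
        subprocess.check_call(
            ["cmake", "-S", ext.sourcedir, "-B", self.build_temp,
             f"-DCMAKE_LIBRARY_OUTPUT_DIRECTORY={extdir}"]
        )
        subprocess.check_call(["cmake", "--build", self.build_temp, "-j"])
```

With this pattern, `setup()` would be called with `ext_modules=[CMakeExtension(...)]` and `cmdclass={"build_ext": CMakeBuild}`.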
cc @kimishpatel
But if you add build scripts to torchao, will you still need to maintain these in torchchat and executorch?
Yes, because both ExecuTorch and torchchat need to build the ExecuTorch kernels, but if we added build scripts to torchao, they'd likely only build the ATen kernels (unless torchao takes a dependency on ExecuTorch).
OK, so it looks like it's the same kernel implementation, but a different build script for each of torchao/executorch/torchchat in this case.
LG, thanks; I had one question/comment.
Summary: Add benchmarks for experimental torchao kernels. Differential Revision: D66512859
75a0c6d to 2a4a964 (Compare)
This pull request was exported from Phabricator. Differential Revision: D66512859
Summary: Add benchmarks for experimental torchao kernels.
Differential Revision: D66512859