Benchmarking #1353 (Merged)

metascroy merged 1 commit into pytorch:main on Nov 27, 2024

Conversation

metascroy (Contributor) commented:

Summary: Add benchmarks for experimental torchao kernels.

Differential Revision: D66512859

@facebook-github-bot added the CLA Signed label on Nov 26, 2024.
pytorch-bot commented on Nov 26, 2024:

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1353

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 2a4a964 with merge base 5eb6339:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot commented:

This pull request was exported from Phabricator. Differential Revision: D66512859

metascroy added a commit to metascroy/ao that referenced this pull request Nov 26, 2024
Summary:

Add benchmarks for experimental torchao kernels.

Differential Revision: D66512859
@facebook-github-bot commented:

This pull request was exported from Phabricator. Differential Revision: D66512859

import os
import subprocess
import tempfile
def cmake_build_torchao_ops(temp_build_dir):
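
The excerpt above shows only the start of this helper. As a rough, hypothetical sketch of what a cmake-based build helper like this typically does (not the PR's actual implementation; the source directory derivation and the cmake flags below are assumptions):

import os
import subprocess
import tempfile

def cmake_build_torchao_ops(temp_build_dir: str) -> None:
    # Hypothetical sketch: configure and build the kernels with cmake into
    # temp_build_dir. The source directory and flags are assumptions, not
    # the options used by the PR.
    source_dir = os.path.dirname(os.path.abspath(__file__))
    subprocess.run(
        ["cmake", "-S", source_dir, "-B", temp_build_dir, "-DCMAKE_BUILD_TYPE=Release"],
        check=True,
    )
    subprocess.run(["cmake", "--build", temp_build_dir, "-j"], check=True)

# Usage sketch: build into a temporary directory that is cleaned up afterwards.
if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as build_dir:
        cmake_build_torchao_ops(build_dir)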
Contributor:

Is this required? Why not add it to the torchao default build?

jerryzh168 (Contributor), Nov 26, 2024:

If this is required, I'd suggest putting these in a util and probably calling it in int8_dynamic_activation_intx_weight.

metascroy (Author):

Happy to add it to a default build for ARM CPU if you have a code pointer?

Contributor:

That would be great. Here is our build code for kernels, I think:

ao/setup.py, line 53 (at 5eb6339):

def get_extensions():

metascroy (Author):

It doesn't look like cmake is being used there, and it would require more refactoring to use it. I factored out the install script into a utility, though.

Contributor:

cc @msaroufim and @atalman: if any of you can provide some guidance on how we can add a cmake build script to torchao/setup.py, that would be great.

metascroy (Author), Nov 26, 2024:

OK, I'll land this for now. On creating build scripts in torchao/setup.py, I'll leave it up to you all. We already have build scripts for these kernels in ExecuTorch and torchchat, so they are not difficult to use on those surfaces.

cc @kimishpatel

Contributor:

But if you add build scripts to torchao, will you still need to maintain these in torchchat and executorch?

metascroy (Author):

Yes, because both executorch/torchchat need to build executorch kernels, but if we added build scripts to torchao, they'd likely only build aten kernels (unless torchao takes a dependency on executorch).

Contributor:

OK, so it looks like it's the same kernel implementation, but a different build script for each of torchao/executorch/torchchat in this case.
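
For readers following the setup.py question above: one generic way to trigger a cmake build from a setuptools-based setup.py is to run it inside a custom build_ext command. The sketch below shows that general pattern only; it is not torchao's actual get_extensions() code, and the class name, paths, and flags are illustrative assumptions.

import subprocess
from pathlib import Path
from setuptools.command.build_ext import build_ext

class CMakeBuildExt(build_ext):
    # Hypothetical sketch: run cmake before the normal extension build so the
    # compiled kernel libraries land in the package's temporary build directory.
    def run(self):
        source_dir = Path(__file__).parent
        build_dir = Path(self.build_temp)
        build_dir.mkdir(parents=True, exist_ok=True)
        subprocess.run(["cmake", "-S", str(source_dir), "-B", str(build_dir)], check=True)
        subprocess.run(["cmake", "--build", str(build_dir), "-j"], check=True)
        super().run()

# Wiring it up in setup.py would look roughly like:
#     setup(..., cmdclass={"build_ext": CMakeBuildExt})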

jerryzh168 (Contributor) left a review:

LG, thanks. Had one question/comment.

@facebook-github-bot commented:

This pull request was exported from Phabricator. Differential Revision: D66512859

@jerryzh168 added the topic: improvement label on Nov 26, 2024.
@metascroy merged commit 04a25e7 into pytorch:main on Nov 27, 2024. 18 of 21 checks passed.
Labels: CLA Signed, fb-exported, topic: improvement