
Benchmarking updates for semi-structured sparse training #398

Merged (3 commits into main, Jun 20, 2024)

Conversation

jcaip (Contributor) commented on Jun 18, 2024

Summary:

This PR does the following:

  • adds e2e ViT benchmarks for semi-structured sparse training
  • adds nn.Linear microbenchmarks
  • removes extra xformers benchmarking utils I copied over
  • removes MLP block benchmarks
  • updates README.md with new benchmarks + accuracy benchmarks

Given that we have both nn.Linear microbenchmarks and e2e benchmarks, I felt
that the MLP block benchmarks were unnecessary.

As a sanity check, I ran the MLP benchmarks with the new benchmarking
suite and the old one, and got the same results:

NEW:
[screenshot, 2024-06-18: output of the new benchmarking suite, matching the old results below]

OLD:

                               dense     w24
f16 (44160,1024,4096,1024)     11534.3   9204.7
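For reference, the old-format row compares a dense f16 pass against its 2:4 ("w24") semi-structured sparse counterpart: 11534.3 vs 9204.7 works out to roughly a 25% speedup. A small stdlib-only sketch of that comparison logic (for illustration only; this is not the PR's actual benchmarking harness):

```python
import timeit

def bench_us(fn, iters=100, repeat=5):
    """Per-call latency in microseconds: best of `repeat` runs of `iters` calls."""
    best = min(timeit.repeat(fn, number=iters, repeat=repeat))
    return best / iters * 1e6

def speedup_pct(dense_us, sparse_us):
    """Percent speedup of the sparse kernel over the dense baseline."""
    return (dense_us / sparse_us - 1.0) * 100.0

# Plugging in the old-suite numbers above: dense 11534.3 vs 2:4 sparse 9204.7
print(round(speedup_pct(11534.3, 9204.7), 1))  # ~25.3
```

Note that kernel-level speedups like this one are larger than the e2e number, since a full training step also spends time in attention, optimizer, and data movement.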

Test Plan:

Reviewers:

Subscribers:

Tasks:

Tags:

jcaip added 2 commits June 18, 2024 16:17

pytorch-bot bot commented Jun 18, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/398

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 881ae2c with merge base 6b0ca2d:

BROKEN TRUNK - The following job failed but was also present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Jun 18, 2024
@jcaip jcaip requested a review from msaroufim June 18, 2024 23:40
msaroufim (Member) left a comment:

Cool, thank you! This is significantly clearer. I do want us to think a bit harder about the top-line metric, since 6% might not be super compelling to people who aren't familiar with the limitations of sparsity.

jcaip (Contributor, Author) commented on Jun 20, 2024

@msaroufim We could compare against masking-based approaches (which are slower than dense training) to get a larger number, but I think that would be a bit confusing, since I'm assuming most users are coming with a dense model rather than an existing sparse training script they want to accelerate.
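For context on the comparison jcaip mentions: masking-based 2:4 sparse training zeroes all but the 2 largest-magnitude weights in every group of 4 (the pattern NVIDIA sparse tensor cores accelerate), reapplying the mask as training proceeds, which adds overhead relative to dense training. A toy illustration of the pruning rule in plain Python (hypothetical helper, not code from this PR):

```python
def mask_2to4(row):
    """Zero all but the 2 largest-magnitude values in each group of 4 (2:4 sparsity)."""
    out = []
    for i in range(0, len(row), 4):
        group = row[i:i + 4]
        # indices of the two largest-magnitude entries in this group of 4
        keep = sorted(range(len(group)), key=lambda j: abs(group[j]), reverse=True)[:2]
        out.extend(v if j in keep else 0.0 for j, v in enumerate(group))
    return out

print(mask_2to4([0.1, -0.9, 0.5, 0.2]))  # [0.0, -0.9, 0.5, 0.0]
```

The point of the fast path benchmarked in this PR is to hand the resulting 2:4 pattern to sparse kernels directly, rather than paying for masked-dense compute.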

@jcaip jcaip merged commit 5559405 into main Jun 20, 2024
12 of 13 checks passed
dbyoung18 pushed a commit to dbyoung18/ao that referenced this pull request Jul 31, 2024
* Benchmarking updates for semi-structured sparse training
* update

* add units
yanbing-j pushed a commit to yanbing-j/ao that referenced this pull request Dec 9, 2024
* cli

* typos