
Improving hf_eval.py #342

Merged · 1 commit merged into main on Jun 11, 2024
Conversation

HDCharles
Contributor

Summary:

Made it so quantization can run on CPU rather than only CUDA. Added options to change batch_size and max_length, and added a -q flag to select the quantization technique. (A sketch of how these flags might be wired up follows the description below.)

Test Plan:

python hf_eval.py --limit 8 -q int8wo --batch_size 8 --max_length 20 --compile
python hf_eval.py --limit 8 -q int8wo --batch_size 8 --max_length 200 --compile

Reviewers:

Subscribers:

Tasks:

Tags:
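
The new options are exercised by the test plan above. As a rough illustration of how they could be wired up, here is a minimal argparse sketch: the flag names --limit, -q, --batch_size, --max_length, and --compile come from this PR, while the parser structure, the run_eval stub, and the --device fallback are assumptions for illustration and not necessarily how hf_eval.py implements them.

import argparse
import torch

def run_eval(args):
    # Placeholder for the actual evaluation loop; printed here only to make
    # the argument flow concrete.
    print(f"device={args.device} quant={args.quantization} "
          f"batch_size={args.batch_size} max_length={args.max_length} "
          f"limit={args.limit} compile={args.compile}")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Evaluate a HF model, optionally quantized.")
    parser.add_argument("--limit", type=int, default=None, help="Number of eval samples to run.")
    parser.add_argument("-q", "--quantization", type=str, default=None,
                        help="Quantization technique to apply, e.g. int8wo.")
    parser.add_argument("--batch_size", type=int, default=1, help="Evaluation batch size.")
    parser.add_argument("--max_length", type=int, default=None, help="Maximum sequence length.")
    parser.add_argument("--compile", action="store_true", help="Run the model through torch.compile.")
    parser.add_argument("--device", type=str,
                        default="cuda" if torch.cuda.is_available() else "cpu",
                        help="Device to quantize and evaluate on; CPU works as well as CUDA.")
    args = parser.parse_args()
    run_eval(args)

With a layout like this, the two test-plan commands above parse cleanly, and the same invocation can run on a CUDA machine or a CPU-only machine by switching --device.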


pytorch-bot bot commented Jun 11, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/342

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit afdc0b9 with merge base 79f2c7f:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot added the CLA Signed label on Jun 11, 2024
HDCharles merged commit 71cf0a5 into main on Jun 11, 2024
13 checks passed
dbyoung18 pushed a commit to dbyoung18/ao that referenced this pull request Jul 31, 2024