
Add AQT tensor parallel for float8_dynamic_quant #1078

Merged: 3 commits merged into main on Oct 16, 2024
Conversation

@jainapurva (Contributor) commented Oct 15, 2024

Added support for tensor parallel for float8_dynamic_activation_float8_weight

  • Supports PerTensor scaling
  • Supports PerRow scaling

Added op implementations:

  • aten.slice.Tensor
  • aten.view.default

pytorch-bot commented Oct 15, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/1078

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 3dfc799 with merge base e7b33bc:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the "CLA Signed" label Oct 15, 2024
A Contributor commented on the slice implementation:

```python
    return return_and_correct_aliasing(
        func, args, kwargs, args[0]._apply_fn_to_data(lambda x: aten.slice.Tensor(x, dim, start, end, step))
    )
else:
```

what is this case btw, is this ndim==0 or some other case?

@jainapurva (Contributor, Author) replied:

Yes, it's ndim == 0. I've updated the condition to check for ndim == 0 instead of the generic condition.
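The fix being discussed can be sketched as follows (illustrative, not the exact torchao code): when slicing a float8 AQT, a PerTensor scale is a 0-dim tensor and must pass through unchanged, while a PerRow scale is sliced alongside the data.

```python
import torch

def slice_scale(scale: torch.Tensor, dim: int, start: int, end: int) -> torch.Tensor:
    # PerTensor scaling: the scale is 0-dim, so there is nothing to slice.
    if scale.ndim == 0:
        return scale
    # PerRow scaling: one scale entry per row, slice it with the data.
    return torch.ops.aten.slice.Tensor(scale, dim, start, end)

per_tensor = torch.tensor(0.5)
per_row = torch.ones(8, 1)
assert slice_scale(per_tensor, 0, 0, 4).ndim == 0
assert slice_scale(per_row, 0, 0, 4).shape == (4, 1)
```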

@jerryzh168 (Contributor) left an approving review:

looks good, thanks!

@jainapurva jainapurva marked this pull request as ready for review October 16, 2024 04:22
@jainapurva jainapurva merged commit 7a35695 into main Oct 16, 2024
17 checks passed
yanbing-j pushed a commit to yanbing-j/ao that referenced this pull request Dec 9, 2024
…at/ folder (pytorch#1076)

* [Hackability Refactor] Move known_model_params under torchchat (pytorch#1073)

* [Hackability Refactor] Migrate CLI call sites to explicitly go through torchchat.py (pytorch#1075)

* [Hackability Refactor] Move model.py underneath torchchat/ (pytorch#1077)

* Move model.py

* Clear out init to avoid package circular import

* [Hackability Refactor] Move select top level docs into folders within torchchat (pytorch#1080)

* [Hackability Refactor] Move the top level util folder into torchchat/utils (pytorch#1079)

* [Hackability Refactor] Move the top level util file into torchchat/utils/

* Cleared out init to avoid packing

* [Hackability Refactor] Collapse gguf_util into gguf_loader (pytorch#1078)

* [Hackability Refactor] Collapse gguf_util into gguf_loader

* Update bad import

* [Hackability Refactor] Move model_config into torchchat/model_config (pytorch#1082)

* [Hackability Refactor] Move cli related files under torchchat/cli (pytorch#1083)

* [Hackability Refactor] Move build/util into torchchat/utils (pytorch#1084)

* [Hackability Refactor] Easy Moves: eval, gguf_loader, quantize, model_dist (pytorch#1085)

* [Hackability Refactor] Easy Cheap Moves: eval, gguf_loader, quantize, model_dist

* Update eval.py call sites that slipped through the initial pass

* [Hackability Refactor] Update missed direct file calls to use torchchat.py (pytorch#1088)

* [Hackability Refactor] Move export and generate under torchchat/ (pytorch#1089)

* [Hackability Refactor] Move scripts under torchchat/utils (pytorch#1090)

* [Hackability Refactor] Move scripts under torchchat/utils

* Fix install script for AOTI

* Update referenced path in build_android

* Adding missing utils path

* Add another layer for torchchat

* Move the source command depending on if TC root is defined

* [Hackability Refactor] Move installation related files into install/ (pytorch#1081)

* [Hackability Refactor] Move installation related files into install/

* Fix install req path

* Test fix with install path for bash

* Debug messages

* Remove changes to install in et_python_libs

* Remove debug echo

* Fix pin path for et

* [Hackability Refactor] Restricted Lint (pytorch#1091)

* [Hackability Refactor] Removing __main__ from export/generate/eval (pytorch#1092)