[TorchRec][IR] Add IR serializer for KTRegroupAsDict Module #1900
Conversation
This pull request was exported from Phabricator. Differential Revision: D56282744 |
Summary: Pull Request resolved: #1900

# context

* Previously, `KTRegroupAsDict` couldn't really be supported by torch.export (IR), because this module performs an initialization step while running the first batch.
* During export, the `KTRegroupAsDict` module would be initialized with a fake_tensor, which is wrong.
* If we initialize the module before torch.export, the device becomes an issue.
* Another issue is that torch.export currently [can't support conditional logic in training](https://pytorch.org/docs/stable/cond.html), while the initialization step is exactly such a branch that only runs once.

> torch.cond is a prototype feature in PyTorch. It has limited support for input and output types and doesn’t support training currently. Please look forward to a more stable implementation in a future version of PyTorch.

NOTE: this is more of a workaround; a real solution needs support from PyTorch compile for conditional logic.

# details

* We treat `KTRegroupAsDict` as another sparse_arch and do the model swap before and after torch.export.
* More context: D59019375

Reviewed By: PaulZhang12

Differential Revision: D56282744

fbshipit-source-id: b86f6eafa3d453735df6c9d00b33b16f70279dea
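The swap-before-and-after-export workaround can be sketched in plain Python. This is a minimal illustration of the pattern only: the class and function names (`LazyRegroup`, `ExportRegroup`, `swap_for_export`) are hypothetical stand-ins, not the actual TorchRec `KTRegroupAsDict` or its IR serializer API, and real code would operate on `torch.nn.Module` submodules.

```python
class LazyRegroup:
    """Stands in for KTRegroupAsDict: it finalizes its regrouping plan
    on the first batch, which torch.export cannot trace (the `if` below
    is the conditional logic the PR summary refers to)."""

    def __init__(self, groups):
        self.groups = groups
        self._order = None  # computed lazily on the first call

    def __call__(self, features):
        if self._order is None:  # one-shot init: not export-friendly
            self._order = sorted(self.groups)
        return {g: features[g] for g in self._order}


class ExportRegroup:
    """Export-friendly twin: the plan is fixed at construction time,
    so the forward path contains no branching."""

    def __init__(self, groups):
        self.groups = groups
        self._order = sorted(groups)

    def __call__(self, features):
        return {g: features[g] for g in self._order}


def swap_for_export(model, attr="regroup"):
    """Replace the lazy module with its export-friendly twin and return
    the original, so the caller can restore it after torch.export runs."""
    original = getattr(model, attr)
    setattr(model, attr, ExportRegroup(original.groups))
    return original


class Model:
    """Toy model holding the regroup module as a named submodule."""

    def __init__(self):
        self.regroup = LazyRegroup(["user", "item"])

    def __call__(self, features):
        return self.regroup(features)
```

Usage follows the PR's description: swap in the export-friendly module, run torch.export on the model, then swap the original back so eager-mode behavior is unchanged.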