Hi, I found that PyTorch has started to support fbgemm operators: https://github.com/pytorch/pytorch/blob/2cc01cc6d3ad2aff47e8460667ba654b2e4c9f21/aten/src/ATen/native/native_functions.yaml#L14784 . The kernel implementations in PyTorch are identical to the FBGEMM ones. Would you consider switching all of the torch.ops.fbgemm.xxxx usages in the model to torch.ops.aten.xxxx (or torch.xxx()) in the future?
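For illustration, here is a minimal sketch of the kind of switch being proposed, using the jagged-to-padded-dense kernel as an example. The aten operator name and signature shown (torch.ops.aten._jagged_to_padded_dense_forward, mirroring fbgemm's jagged_to_padded_dense) are an assumption based on the linked native_functions.yaml and may differ by PyTorch version, so treat this as a sketch rather than a confirmed mapping.

```python
import torch

# Example jagged batch: 3 sequences with lengths 2, 0, and 3,
# stored as a flat values tensor plus an offsets tensor.
values = torch.randn(5, 8)
offsets = torch.tensor([0, 2, 2, 5], dtype=torch.int64)
max_seq_len = 3

# Today: fbgemm_gpu operator (requires `import fbgemm_gpu` to register the op).
padded_fbgemm = torch.ops.fbgemm.jagged_to_padded_dense(
    values, [offsets], [max_seq_len], 0.0
)

# Proposed: the aten counterpart from the linked commit, assumed to keep the
# same (values, offsets, max_lengths, padding_value) signature; availability
# depends on the installed PyTorch version.
padded_aten = torch.ops.aten._jagged_to_padded_dense_forward(
    values, [offsets], [max_seq_len], 0.0
)

torch.testing.assert_close(padded_fbgemm, padded_aten)
```

If the two operators are indeed the same kernels, the switch would mostly be a rename plus dropping the fbgemm_gpu dependency for those call sites.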
Hi - thanks for your interest in our work!
We do use many jagged operators that are not in fbgemm (see e.g., https://github.com/facebookresearch/generative-recommenders/blob/main/generative_recommenders/ops/triton/triton_jagged.py), so this may not be a high priority for us yet. We can consolidate as we further refactor the codebase in the future (Xing/Linjian can probably comment more on this).
Thanks for your reply. I will follow up on the codebase changes.