
Support int32_t indices/offsets for caching handling logics #811

Closed
jianyuh wants to merge 1 commit

Conversation

jianyuh
Member

@jianyuh jianyuh commented Dec 12, 2021

Summary:
For table-batched embedding (TBE), the indices/offsets are assumed to be int64_t in training, but int32_t in inference.

This diff adds support for both int32_t and int64_t to the caching logic, so the same functions can be reused for training and inference while avoiding the extra overhead of converting indices/offsets from int to long or vice versa.

Differential Revision: D33045589
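
A minimal sketch of the kind of change this enables, assuming a PyTorch-style dispatch (the kernel name `lookup_cache_kernel`, the cache-slot mapping, and the `lookup_cache` wrapper are hypothetical illustrations, not FBGEMM's actual API): the caching logic is templated over `index_t`, and ATen's `AT_DISPATCH_INDEX_TYPES` macro instantiates it for both int32 and int64 dtypes, so one implementation serves both callers with no int/long tensor conversion.

```cpp
#include <ATen/ATen.h>
#include <ATen/Dispatch.h>

namespace {

// Hypothetical kernel: walk the offsets array and map each index to a cache
// slot. Because it is templated over index_t, the same body is compiled for
// both int32_t and int64_t indices/offsets.
template <typename index_t>
void lookup_cache_kernel(
    const index_t* indices,
    const index_t* offsets,
    int64_t num_bags,
    int32_t* cache_locations) {
  constexpr int64_t kCacheSets = 1024; // placeholder cache size
  for (int64_t b = 0; b < num_bags; ++b) {
    for (index_t i = offsets[b]; i < offsets[b + 1]; ++i) {
      // Placeholder mapping; the real logic would probe an LRU/LFU cache.
      cache_locations[i] = static_cast<int32_t>(indices[i] % kCacheSets);
    }
  }
}

} // anonymous namespace

// Dispatch on the runtime dtype of `indices`. AT_DISPATCH_INDEX_TYPES
// instantiates the lambda for both at::kInt and at::kLong, binding the
// element type to `index_t`, so neither caller pays a conversion cost.
at::Tensor lookup_cache(const at::Tensor& indices, const at::Tensor& offsets) {
  TORCH_CHECK(indices.scalar_type() == offsets.scalar_type(),
              "indices and offsets must have the same integer dtype");
  auto cache_locations =
      at::empty_like(indices, indices.options().dtype(at::kInt));
  AT_DISPATCH_INDEX_TYPES(indices.scalar_type(), "lookup_cache", [&] {
    lookup_cache_kernel<index_t>(
        indices.data_ptr<index_t>(),
        offsets.data_ptr<index_t>(),
        offsets.numel() - 1,
        cache_locations.data_ptr<int32_t>());
  });
  return cache_locations;
}
```

With this shape of code, an inference caller can pass int32 indices directly and a training caller can pass int64, and both hit the same compiled logic, which is the reuse the summary describes.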

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D33045589

jianyuh added a commit to jianyuh/FBGEMM that referenced this pull request Dec 13, 2021, and again on Dec 28, Dec 29, and Dec 30, 2021, as the diff was re-exported from Phabricator. Each commit carried the same summary as the pull request description above ("Pull Request resolved: pytorch#811", Differential Revision: D33045589), each with a distinct fbshipit-source-id; the later exports add "Reviewed By: jspark1105".
