This repository was archived by the owner on Nov 22, 2022. It is now read-only.

Commit fb64c42

Debojeet Chatterjee authored and facebook-github-bot committed
Start, End index calculations fix for unicode characters. (#1171)
Summary:
Pull Request resolved: #1171

The existing GPT2BPETokenizer incorrectly calculates the start and end indices for unicode characters. This is because, for multi-byte characters, we additionally need to apply the byte decoder to the decoded token string to recover the original token that was encoded.

Reviewed By: chenyangyu1988

Differential Revision: D18697646

fbshipit-source-id: 8f4d32a1caa40d8d06e7be31dfd4a6846692531a
1 parent e261466 commit fb64c42
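
For context on the fix in the diff below: GPT-2's byte-level BPE first maps every raw byte of the input to a printable stand-in character, so self.bpe.decoder[id] returns a string of those stand-ins rather than the original text. For ASCII the two coincide, but a multi-byte UTF-8 character comes back as one stand-in per byte, so lengths (and therefore start/end indices) computed from that string overcount. Below is a minimal standalone sketch of the effect, assuming the standard bytes_to_unicode table from OpenAI's GPT-2 encoder.py; byte_encoder and byte_decoder here are local illustrations of the GPT2BPEEncoder attributes referenced in the patch, not PyText code.

def bytes_to_unicode():
    # Same construction as OpenAI's GPT-2 encoder.py: assign every one of
    # the 256 byte values a printable unicode code point.
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("\xa1"), ord("\xac") + 1))
          + list(range(ord("\xae"), ord("\xff") + 1)))
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, [chr(c) for c in cs]))

byte_encoder = bytes_to_unicode()
byte_decoder = {v: k for k, v in byte_encoder.items()}

token = "é"  # one character, but two UTF-8 bytes (0xC3 0xA9)
mapped = "".join(byte_encoder[b] for b in token.encode("utf-8"))
print(len(mapped))  # 2 -- offsets computed from the stand-in string overcount

# The round-trip added by this commit: map each stand-in back to its byte
# and decode the byte sequence as UTF-8 to recover the original token.
restored = bytearray([byte_decoder[c] for c in mapped]).decode("utf-8")
print(restored, len(restored))  # é 1 -- length now matches the input text

Without that round-trip, lengths = [len(token) for token in char_tokens] counts bytes instead of characters for non-ASCII tokens, so every subsequent start/end offset drifts.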

File tree

1 file changed: +7 −0 lines changed


pytext/data/tokenizers/tokenizer.py (+7)
@@ -210,6 +210,13 @@ def __init__(self, bpe: GPT2BPEEncoder):
     def tokenize(self, input_str: str) -> List[Token]:
         bpe_ids = self.bpe.encode(input_str)
         char_tokens = [self.bpe.decoder[id].lstrip(u"\u0120") for id in bpe_ids]
+        # fix for incorrect decoding of utf-8 chars
+        char_tokens = [
+            bytearray([self.bpe.byte_decoder[char] for char in char_token]).decode(
+                "utf-8"
+            )
+            for char_token in char_tokens
+        ]
         lengths = [len(token) for token in char_tokens]
         tokens = []
         end = 0
