Introducing Tiktoken Tokenizer #6981
Conversation
Codecov Report

```
@@            Coverage Diff             @@
##             main    #6981      +/-   ##
==========================================
+ Coverage   68.80%   68.81%   +0.01%
==========================================
  Files        1249     1256       +7
  Lines      249686   250425     +739
  Branches    25485    25569      +84
==========================================
+ Hits       171795   172335     +540
- Misses      71294    71466     +172
- Partials    6597     6624      +27
==========================================
```
```csharp
int[] encodedIds = BytePairEncoder.BytePairEncode(Encoding.UTF8.GetBytes(sequence), _encoder);
```
It'd be really nice to reduce the overheads here. It can be done separately, but this is a lot of allocation.
Tracked in issue #6989.
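As a sketch of the kind of allocation reduction the reviewer is asking about (illustrative only, not the code this PR ships — the follow-up work is tracked in #6989): instead of `Encoding.UTF8.GetBytes(sequence)` allocating a fresh `byte[]` per call, the UTF-8 bytes can be written into a buffer rented from `ArrayPool<byte>` and returned afterwards. The `WithUtf8Bytes` helper name below is hypothetical.

```csharp
using System;
using System.Buffers;
using System.Text;

class Utf8PoolingDemo
{
    // Encode a string to UTF-8 into a pooled buffer, hand the (buffer, count)
    // pair to a consumer, then return the buffer to the pool -- so no new
    // byte[] is allocated on each call.
    static TResult WithUtf8Bytes<TResult>(string text, Func<byte[], int, TResult> consumer)
    {
        byte[] rented = ArrayPool<byte>.Shared.Rent(Encoding.UTF8.GetMaxByteCount(text.Length));
        try
        {
            int count = Encoding.UTF8.GetBytes(text, 0, text.Length, rented, 0);
            return consumer(rented, count);
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(rented);
        }
    }

    static void Main()
    {
        // An encoder would consume (buffer, count) here; we just report the byte count.
        int byteCount = WithUtf8Bytes("hello tiktoken", (buf, n) => n);
        Console.WriteLine(byteCount); // 14
    }
}
```

The catch is that the downstream consumer (here, the byte-pair encoder) must accept a `(buffer, count)` or span rather than assuming the array is exactly sized.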
```csharp
return utf8Bytes.Count > 0 ? Encoding.UTF8.GetString(utf8Bytes.ToArray()) : string.Empty;
```
Do we only target netstandard2.0, or do we multitarget and build this for netcoreapp as well? There are newer APIs that make this cheaper.
Tracked in issue #6989.
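For context on the reviewer's question: on netstandard2.0 the only `Encoding.GetString` overloads take arrays, forcing the `ToArray()` copy, while .NET Core 2.1+ adds a `ReadOnlySpan<byte>` overload that decodes directly from existing memory. A sketch of the two shapes (the multi-targeting decision itself is what #6989 tracks; the helper names are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Text;

class GetStringDemo
{
    // netstandard2.0 shape: the List<byte> must be materialized into a new
    // array before Encoding.UTF8.GetString can consume it.
    static string DecodeCopying(List<byte> utf8Bytes) =>
        utf8Bytes.Count > 0 ? Encoding.UTF8.GetString(utf8Bytes.ToArray()) : string.Empty;

    // Modern-target shape: the span overload decodes straight from the
    // caller's memory (e.g. a pooled buffer), skipping the ToArray copy.
    static string DecodeSpan(ReadOnlySpan<byte> utf8Bytes) =>
        utf8Bytes.IsEmpty ? string.Empty : Encoding.UTF8.GetString(utf8Bytes);

    static void Main()
    {
        byte[] raw = Encoding.UTF8.GetBytes("abc");
        Console.WriteLine(DecodeCopying(new List<byte>(raw))); // abc
        Console.WriteLine(DecodeSpan(raw));                    // abc
    }
}
```

If the library multi-targets, the span path can be selected with an `#if` on the target framework while the array path remains for netstandard2.0.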
```csharp
private static T[] Slice<T>(this T[] array, int start, int end)
```
There looks to be a fair amount of allocation being incurred from all this slicing. That can't be reduced?
I tracked this in issue #6989.
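A sketch of the alternative the reviewer is hinting at (illustrative; the actual fix is deferred to #6989): a copying `Slice` helper allocates a new array per call, whereas `AsSpan`/`AsMemory` produce views over the original array with no allocation.

```csharp
using System;

class SliceDemo
{
    // A copying slice like the one under review: allocates a new array
    // and copies elements [start, end) into it.
    static T[] SliceCopy<T>(T[] array, int start, int end)
    {
        var result = new T[end - start];
        Array.Copy(array, start, result, 0, end - start);
        return result;
    }

    static void Main()
    {
        int[] data = { 1, 2, 3, 4, 5 };

        // Allocation-free alternative: a span view over the same memory.
        ReadOnlySpan<int> view = data.AsSpan(1, 3); // elements 2, 3, 4

        Console.WriteLine(view.Length);              // 3
        Console.WriteLine(view[0]);                  // 2
        Console.WriteLine(SliceCopy(data, 1, 4)[0]); // 2
    }
}
```

The trade-off is that span-based APIs ripple through the call chain (every consumer must accept `ReadOnlySpan<T>`), which is why this is reasonable to do as a separate change.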
LGTM. Just that one question about the empty dispose (I saw it in a couple of other places too, so my question applies there as well).
This modification introduces support for the Tiktoken tokenizer into the Microsoft ML tokenizers library. The logic is largely derived from the Microsoft Tokenizers Library, and the update includes optimizations and adjustments to the public APIs. Further refinements to the APIs are pending and are tracked in issue #6982.
Usage
API changes
```diff
namespace Microsoft.ML.Tokenizers
{
    public class Tokenizer
    {
+       /// <summary>
+       /// Encodes input text into an object containing the tokens list, token Ids, and token offset mappings.
+       /// </summary>
+       /// <param name="sequence">The text to tokenize.</param>
+       /// <param name="skipSpecialTokens">Indicates whether to skip the special tokens during encoding.</param>
+       /// <returns>The tokenization result, including the tokens list, token Ids, and token offset mappings.</returns>
+       public TokenizerResult Encode(string sequence, bool skipSpecialTokens); // Overload adding the skipSpecialTokens parameter.

+       /// <summary>
+       /// Encodes input text to token Ids.
+       /// </summary>
+       /// <param name="sequence">The text to tokenize.</param>
+       /// <param name="skipSpecialTokens">Indicates whether to skip the special tokens during encoding.</param>
+       /// <returns>The list of encoded token Ids.</returns>
+       public IReadOnlyList<int> EncodeToIds(string sequence, bool skipSpecialTokens = false);

+       /// <summary>
+       /// Creates a tokenizer based on the model name.
+       /// </summary>
+       /// <param name="modelName">The model name.</param>
+       /// <param name="extraSpecialTokens">Extra special tokens other than the built-in ones for the model.</param>
+       /// <param name="normalizer">Normalizer to apply to the text before tokenization.</param>
+       /// <returns>The tokenizer.</returns>
+       public static async Task<Tokenizer> CreateByModelNameAsync(
+           string modelName,
+           IReadOnlyDictionary<string, int>? extraSpecialTokens = null,
+           Normalizer? normalizer = null);
    }

-   public class Split : IEquatable<Split>
+   public readonly struct Split : IEquatable<Split>
    {
-       public Split(string token, (int Index, int End) offset)
+       public Split(string token, (int Index, int End) offset, bool isSpecialToken = false)

+       /// <summary>
+       /// Gets a value indicating whether the current Split is a special token.
+       /// </summary>
+       public bool IsSpecialToken { get; }
    }

    public abstract class PreTokenizer
    {
+       // Primarily focused on minimizing memory allocations and enabling enumeration
+       // of one item at a time, rather than holding a large list in a collection.
+       // This change is reflected in all public classes implementing this interface.
-       public abstract IReadOnlyList<Split> PreTokenize(string sentence);
+       public abstract IEnumerable<Split> PreTokenize(string sentence, bool skipSpecialTokens = false);
    }

    public sealed class TokenizerResult
    {
-       public TokenizerResult(string originalString, string normalizedString, IReadOnlyList<Split> splits, bool offsetsMappedToOriginalString);
+       public TokenizerResult(string originalString, string normalizedString, IEnumerable<Split> splits, bool offsetsMappedToOriginalString);
    }

    public abstract class Model
    {
+       public virtual IReadOnlyList<Token> Tokenize(string sequence, bool isSpecialToken); // Overload adding the isSpecialToken parameter.
+       public virtual bool TokenizeToIds(string sequence, bool isSpecialToken, List<int> accumulatedIds); // Consumed by Tokenizer.EncodeToIds.
+       public virtual int? TokenToId(string token, bool skipSpecialTokens); // Overload adding the skipSpecialTokens parameter.
    }

+   public sealed class Tiktoken : Model
+   {
+       public Tiktoken(string tikTokenBpeFile, IReadOnlyDictionary<string, int>? specialTokensEncoder = null, int cacheSize = DefaultCacheSize);
+       public Tiktoken(Stream tikTokenBpeFileStream, IReadOnlyDictionary<string, int>? specialTokensEncoder = null, int cacheSize = DefaultCacheSize);
+       public IReadOnlyDictionary<string, int>? SpecialTokens { get; }
+       // Implements the Model abstract methods.
+   }

+   public sealed class TikTokenPreTokenizer : PreTokenizer
+   {
+       public TikTokenPreTokenizer(string regexPattern, IReadOnlyDictionary<string, int>? specialTokensEncoder);
+       // Implements the PreTokenizer abstract methods.
+   }
}
```