refactor(metrics): Replace tiktoken with gpt-tokenizer #1245
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Closed
Changes from all commits (8 commits)
- 6ba3d74 refactor(metrics): Replace tiktoken with gpt-tokenizer for token coun… (yamadashy)
- a7fb3fa perf(metrics): Share pre-built encoding data across worker threads (yamadashy)
- 278849f chore(metrics): Remove tokenizer benchmark scripts and tiktoken devDe… (yamadashy)
- a0e83df fix(metrics): Fix workerData access for Tinypool and update website r… (yamadashy)
- 7c1eb0c refactor(metrics): Remove pre-built encoding sharing and add encoding… (yamadashy)
- cbb2091 docs(metrics): Clarify free() is retained for public API compatibility (yamadashy)
- b810e22 perf(metrics): Use resolveEncodingAsync to load only needed BPE data (yamadashy)
- 37f632b fix(metrics): Guard against TOCTOU race in async getTokenCounter (yamadashy)
```diff
@@ -1,31 +1,33 @@
-import type { TiktokenEncoding } from 'tiktoken';
 import { logger } from '../../shared/logger.js';
 import { TokenCounter } from './TokenCounter.js';
+import type { TokenEncoding } from './tokenEncoding.js';

 // Worker-level cache for TokenCounter instances by encoding
-const tokenCounters = new Map<TiktokenEncoding, TokenCounter>();
+const tokenCounters = new Map<TokenEncoding, TokenCounter>();

 /**
  * Get or create a TokenCounter instance for the given encoding.
  * This ensures only one TokenCounter exists per encoding per worker thread to optimize memory usage.
  */
-export const getTokenCounter = (encoding: TiktokenEncoding): TokenCounter => {
+export const getTokenCounter = async (encoding: TokenEncoding): Promise<TokenCounter> => {
   let tokenCounter = tokenCounters.get(encoding);
   if (!tokenCounter) {
-    tokenCounter = new TokenCounter(encoding);
-    tokenCounters.set(encoding, tokenCounter);
+    tokenCounter = await TokenCounter.create(encoding);
+    // Guard against concurrent calls: only set if no other call populated the cache
+    if (!tokenCounters.has(encoding)) {
+      tokenCounters.set(encoding, tokenCounter);
+    } else {
+      tokenCounter = tokenCounters.get(encoding)!;
+    }
   }
   return tokenCounter;
 };

 /**
- * Free all TokenCounter resources and clear the cache.
+ * Clear all TokenCounter instances from the cache.
  * This should be called when the worker is terminating.
  */
 export const freeTokenCounters = (): void => {
-  for (const [encoding, tokenCounter] of tokenCounters.entries()) {
-    tokenCounter.free();
-    logger.debug(`Freed TokenCounter resources for encoding: ${encoding}`);
-  }
   tokenCounters.clear();
+  logger.debug('Cleared TokenCounter cache');
 };
```
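The post-await re-check in `getTokenCounter` exists because two callers can both miss the cache before either one finishes awaiting the factory. The following self-contained sketch (with hypothetical `Counter` and `createCounter` names, not the PR's actual classes) reproduces that first-writer-wins pattern so concurrent callers always end up sharing one instance:

```typescript
type Encoding = 'o200k_base' | 'cl100k_base';

class Counter {
  constructor(public readonly encoding: Encoding) {}
}

const cache = new Map<Encoding, Counter>();

// Stand-in for an async factory like TokenCounter.create: the await
// gives other callers a chance to interleave, creating the TOCTOU window.
const createCounter = async (encoding: Encoding): Promise<Counter> => {
  await new Promise((resolve) => setTimeout(resolve, 10));
  return new Counter(encoding);
};

export const getCounter = async (encoding: Encoding): Promise<Counter> => {
  let counter = cache.get(encoding);
  if (!counter) {
    counter = await createCounter(encoding);
    // Re-check after the await: another caller may have populated the
    // cache while this call was suspended. First writer wins; later
    // callers discard their instance and adopt the cached one.
    if (!cache.has(encoding)) {
      cache.set(encoding, counter);
    } else {
      counter = cache.get(encoding)!;
    }
  }
  return counter;
};

// Two concurrent calls resolve to the same cached instance.
Promise.all([getCounter('o200k_base'), getCounter('o200k_base')]).then(([a, b]) => {
  console.log(a === b); // true
});
```

Without the `cache.has` re-check, the second caller would overwrite the first entry, and code holding references from both calls would see two distinct instances for the same encoding.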
```diff
@@ -0,0 +1,14 @@
+/**
+ * Supported token encoding names.
+ * These match the encoding names supported by gpt-tokenizer.
+ */
+export const tokenEncodings = [
+  'o200k_base',
+  'o200k_harmony',
+  'cl100k_base',
+  'p50k_base',
+  'p50k_edit',
+  'r50k_base',
+] as const;
+
+export type TokenEncoding = (typeof tokenEncodings)[number];
```
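A `const`-asserted tuple like `tokenEncodings` also makes runtime validation cheap, e.g. for an encoding name arriving as a plain string from CLI flags or config. The sketch below adds a hypothetical `isTokenEncoding` type guard (not part of this PR) on top of the new module's exports:

```typescript
// Same tuple as the new tokenEncoding.ts module above.
export const tokenEncodings = [
  'o200k_base',
  'o200k_harmony',
  'cl100k_base',
  'p50k_base',
  'p50k_edit',
  'r50k_base',
] as const;

// The union type is derived from the tuple, so the two can never drift apart.
export type TokenEncoding = (typeof tokenEncodings)[number];

// Hypothetical guard: narrows an arbitrary string to TokenEncoding.
// The cast to readonly string[] lets includes() accept any string.
export const isTokenEncoding = (value: string): value is TokenEncoding =>
  (tokenEncodings as readonly string[]).includes(value);

console.log(isTokenEncoding('o200k_base')); // true
console.log(isTokenEncoding('gpt2')); // false
```

After the guard returns true, the compiler treats the value as `TokenEncoding`, so it can be passed straight to an API typed against the union without a cast.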
🔴 README.md not updated after tiktoken → gpt-tokenizer migration (CONTRIBUTING.md violation)

CONTRIBUTING.md requires: "You have updated relevant documentation (especially README.md) if you've added or changed functionality." This PR replaces `tiktoken` with `gpt-tokenizer` but does not update README.md, which still contains two now-incorrect references to tiktoken:

- README.md:1360 describes `tokenCount.encoding` as using "OpenAI's tiktoken tokenizer" and links to tiktoken's GitHub/model.py.
- README.md:1791 lists `tiktoken` as an external bundling dependency that "Loads WASM files dynamically at runtime"; however, `gpt-tokenizer` is pure JavaScript and does not use WASM.

Both references are factually incorrect after this change and will mislead users.