Hey there, for some reason it seems like we're hitting a memory leak somewhere in the tokenizer. We're trying to process 100k pieces of text across multiple threads and count the number of tokens. However, it seems like the token data is not freed unless the entire tokenizer is destructed. We assume it's happening here:

```cpp
auto tokenizer = Tokenizer::FromBlobJSON(*blob);
int i = 0;
while (q->try_pop(next)) {
  if (next.empty())
    return;
  i++;
  std::vector<int32_t> tokens = tokenizer->Encode(next);
  std::cout << count_processed++ << ", tokens count " << tokens.size() << std::endl;
  // Workaround: rebuilding the tokenizer every 1000 samples is the only
  // thing we've found that releases the accumulated memory.
  if (i >= 1000) {
    tokenizer = Tokenizer::FromBlobJSON(*blob);
    i = 0;
  }
}
```

Is there a way to individually free the "encode ids" returned from `tokenizer->Encode`, so that we can use a single tokenizer instance per thread? Right now, we are re-creating the tokenizer after every 1000 samples.
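If it helps, here is a minimal sketch of the pattern we'd like to end up with: one long-lived tokenizer per thread, with no periodic recreation. The `TextQueue` type and the `main` driver are simplified stand-ins for our real pipeline, and we're assuming the `tokenizers::Tokenizer` class with `FromBlobJSON` / `Encode` from the tokenizers-cpp header:

```cpp
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

#include "tokenizers_cpp.h"  // tokenizers-cpp header (assumed include path)

using tokenizers::Tokenizer;

// Simplified thread-safe queue standing in for our concurrent queue.
struct TextQueue {
  std::queue<std::string> items;
  std::mutex m;
  bool try_pop(std::string& out) {
    std::lock_guard<std::mutex> lock(m);
    if (items.empty()) return false;
    out = std::move(items.front());
    items.pop();
    return true;
  }
};

// Desired worker: one tokenizer for the whole lifetime of the thread.
// With the behavior described above, resident memory keeps growing
// until the tokenizer itself is destroyed.
void Worker(TextQueue* q, const std::string& blob) {
  auto tokenizer = Tokenizer::FromBlobJSON(blob);
  std::string next;
  while (q->try_pop(next)) {
    if (next.empty()) return;  // empty string acts as a sentinel
    std::vector<int32_t> tokens = tokenizer->Encode(next);
    std::cout << "tokens count " << tokens.size() << std::endl;
    // `tokens` goes out of scope here; we would expect all memory
    // associated with this Encode call to be reclaimed at this point.
  }
}

int main() {
  // Loading of the tokenizer.json contents into `blob` elided for brevity.
  std::string blob = "";
  TextQueue q;
  q.items.push("hello world");
  q.items.push("");  // sentinel to stop the worker
  std::thread t(Worker, &q, blob);
  t.join();
}
```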