
Newcomer asking for help: If the same training corpus is used, is there a way to save the pre-tokenized data and load it directly next time? #5851

Open
Wiselnn570 opened this issue Oct 29, 2024 · 1 comment
Labels
pending This problem is yet to be addressed

Comments

@Wiselnn570

Reminder

  • I have read the README and searched the existing issues.

System Info

Currently, every time I run fine-tuning, pre-tokenization takes 6 hours, which is too time-consuming.
(P.S. I have tried launching in streaming mode, but GPU utilization is almost zero and the total training time ends up even longer than with offline preprocessing, which is not acceptable.)
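For context, the streaming attempt behaved roughly like the sketch below (a minimal illustration using the Hugging Face `datasets` streaming API; the file name is a placeholder, not the actual corpus). With `streaming=True`, examples are produced and tokenized on the fly on the CPU, so nothing reusable is written to disk and the GPUs starve waiting for data:

```python
# Hypothetical illustration of the streaming attempt; "corpus.json" is a
# placeholder. streaming=True yields an IterableDataset with sequential
# access only -- no on-disk cache is built that a later run could reuse.
from itertools import islice
from datasets import load_dataset

stream = load_dataset("json", data_files="corpus.json", split="train", streaming=True)
for example in islice(stream, 2):  # peek at the first two streamed examples
    print(example)
```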

Reproduction

[None]

Expected behavior

If the pre-tokenized data were saved locally, it could be loaded directly the next time the same corpus is used, speeding up this process.
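Something along the lines of the following sketch would cover this use case (a minimal sketch using the Hugging Face `datasets` save/load primitives, assuming the corpus is tokenized with `Dataset.map`; the cache path, data file, model name, and `tokenize_fn` are illustrative assumptions, not LLaMA-Factory internals):

```python
# Minimal caching sketch: tokenize once, save to disk, reload on later runs.
# CACHE_DIR, "corpus.json", the model name, and tokenize_fn are placeholders.
import os
from datasets import load_dataset, load_from_disk
from transformers import AutoTokenizer

CACHE_DIR = "./tokenized_cache"  # assumed local path for the tokenized dataset

tokenizer = AutoTokenizer.from_pretrained("your-base-model")  # placeholder

def tokenize_fn(batch):
    # max_length is an assumption; match the training cutoff length.
    return tokenizer(batch["text"], truncation=True, max_length=2048)

if os.path.isdir(CACHE_DIR):
    # Reuse the pre-tokenized data saved by an earlier run: near-instant load.
    dataset = load_from_disk(CACHE_DIR)
else:
    dataset = load_dataset("json", data_files="corpus.json", split="train")
    dataset = dataset.map(tokenize_fn, batched=True, remove_columns=["text"])
    dataset.save_to_disk(CACHE_DIR)  # Arrow files; the slow step runs once
```

Since `save_to_disk` writes memory-mapped Arrow files, reloading is close to instant, and the 6-hour tokenization step would only run once per corpus/tokenizer combination.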

Others

No response

@github-actions github-actions bot added the pending This problem is yet to be addressed label Oct 29, 2024