WIP: add oscar slurm script for preprocess_data_dist #4
I can't run on JZ, but for a concrete example, I think a script like the one in this PR could be used with the new `preprocess_data_dist.py` script. This requires the JSON support added in PR bigscience-workshop/Megatron-DeepSpeed#60.

To process a JSON file, the script first generates an "index" that records the starting byte offset and length of each line in the source JSON file. That index file is stored beside the source JSON file so it can be reused in future runs. The index enables quick random access to the (variable-length) lines in the JSON file.
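To illustrate the idea, here is a minimal sketch of such an offset/length index. The function names and the on-disk format (two little-endian 8-byte integers per line) are hypothetical, not necessarily what `preprocess_data_dist.py` actually writes:

```python
import struct

ENTRY = struct.Struct("<qq")  # (offset, length) as two little-endian int64s

def build_index(json_path, index_path):
    """Record (byte offset, byte length) for every line of a JSONL file.

    Sketch only: the real index format used by preprocess_data_dist.py
    may differ.
    """
    entries = []
    offset = 0
    with open(json_path, "rb") as f:
        for line in f:
            entries.append((offset, len(line)))
            offset += len(line)
    with open(index_path, "wb") as f:
        for off, length in entries:
            f.write(ENTRY.pack(off, length))
    return entries

def read_line(json_path, index_path, i):
    """Random access: seek straight to line i using the index."""
    with open(index_path, "rb") as f:
        f.seek(i * ENTRY.size)
        off, length = ENTRY.unpack(f.read(ENTRY.size))
    with open(json_path, "rb") as f:
        f.seek(off)
        return f.read(length)
```

Because every index entry is fixed-size, any rank in a distributed run can seek to entry `i` and fetch its line without scanning the file, which is what makes parallel processing of a line-delimited JSON file cheap.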
In the example SLURM script in this PR, for the source file:
the `preprocess_data_dist.py` script will create the following files as a result of indexing the source JSON file: