# Data Selection for Language Models via Importance Resampling (DSIR)

[License: MIT](https://opensource.org/licenses/MIT)
[arXiv](https://arxiv.org/abs/2302.03169)

This repository contains the [DSIR](https://arxiv.org/abs/2302.03169) data selection tool for selecting relevant language model training data from any raw data source, given a target dataset, as well as pre-filtered datasets and some pretrained models.

DSIR is built for:
- fast, large-scale (trillion-token scale) data selection from large raw text datasets (Pile, RefinedWeb, RedPajama, ...)
- selecting data that is distributed like a given target dataset (domain-specific data, Wikipedia, ...). Relevance and diversity are balanced automatically.

Compute needed:
- 1 CPU node
- a large amount of RAM (at least a few hundred GB)
- many CPU cores (parallelism is at the file level; for best performance, use as many cores as there are data files). One way to set this is sketched below.
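
For example, a minimal sketch of one way to choose `num_proc` (the parallelism argument shown in the Quickstart below); the file count and variable names here are illustrative, not part of the API:
```
import os

# Hypothetical example: use roughly one worker per raw data file,
# without exceeding the number of available cores.
num_files = 128  # replace with the number of raw data files you have
num_proc = min(num_files, os.cpu_count() or 1)
print(f"Using num_proc={num_proc}")
```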

Code related to the DSIR paper's experiments is in the `experimental/` directory.

## Quickstart

Install with pip:
```
pip install data-selection
```

Install from source by cloning this repo and installing via pip:
```
git clone git@github.com:p-lambda/dsir
pip install ./dsir
```

To select data, initialize a `HashedNgramDSIR` object and call the following functions:
```
from data_selection import HashedNgramDSIR

raw_datasets = [<list of paths>]
target_datasets = [<list of paths>]

dsir = HashedNgramDSIR(raw_datasets, num_proc=30)
dsir.fit_importance_estimator(target_datasets)
dsir.compute_importance_weights()
dsir.resample(out_dir='resampled', num_to_sample=1000000, cache_dir='/scr/resampled_cache')
```
This will save 1M examples in `jsonl` files inside an output directory named `resampled`. The files will first be written to `cache_dir` and moved to `out_dir` upon completion.
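
To sanity-check the output, a minimal sketch that counts the selected examples, assuming only that the output files are newline-delimited JSON with a `.jsonl` extension, as described above:
```
import glob
import json

# Count the resampled examples and peek at the first one.
num_examples = 0
first_example = None
for path in sorted(glob.glob("resampled/*.jsonl")):
    with open(path) as f:
        for line in f:
            if first_example is None:
                first_example = json.loads(line)
            num_examples += 1
print(num_examples, "examples selected")
print(first_example)
```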

The `dsir` intermediate results (after `fit_importance_estimator` and `compute_importance_weights`) can be saved and loaded for later use, for example to resample a different number of examples:
```
dsir.save('dsir_params')

# later on
dsir.load('dsir_params')
dsir.resample(out_dir='resampled', num_to_sample=10000000, cache_dir='/scr/resampled_cache')
```

## Pre-filtered datasets

Note: previous versions of the datasets had small validation and test splits (50000 examples each), but we concatenated these onto the end of the train set (in the order validation, then test) to better align with the paper. The datasets should be further shuffled during preprocessing before training; one way to do this is sketched below.
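
For example, a minimal sketch of loading one of the pre-filtered datasets with HuggingFace `datasets` and shuffling it (the dataset ID is a placeholder to fill in; the seed is arbitrary):
```
from datasets import load_dataset

# "<hf-dataset-id>" is a placeholder: substitute the HuggingFace ID of the
# pre-filtered dataset you want to train on.
ds = load_dataset("<hf-dataset-id>", split="train")
ds = ds.shuffle(seed=42)  # shuffle before training, as recommended above
```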

## Pretrained models

In the table below, `{dataset}` can be replaced with one of `{ag, amazon, citati...}`:

| heuristiccls-roberta-continuedpretrain-{dataset} | Link format: `https://huggingface.co/sangmichaelxie/dsir-roberta-continuedpretrain-{dataset}` | 6.4B tokens (25M examples) | 256 | 25000 | roberta-base | roberta-base | RoBERTa model with continued pretraining on data selected by heuristic classification with target={dataset} |
| randomselect-roberta-continuedpretrain | [Link](https://huggingface.co/sangmichaelxie/randomselect-roberta-continuedpretrain) | 6.4B tokens (25M examples) | 256 | 25000 | roberta-base | roberta-base | RoBERTa model with continued pretraining on random subset of The Pile |
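
The checkpoints can be loaded with HuggingFace `transformers`; below is a sketch for the random-selection baseline listed above, assuming the standard auto classes work for these `roberta-base`-derived checkpoints:
```
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Random-selection baseline from the table above; the other checkpoints
# can be loaded the same way by swapping in their model IDs.
name = "sangmichaelxie/randomselect-roberta-continuedpretrain"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)
```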

## Code for data selection

To select your own subset of The Pile, all you need is a small set of target examples representing the kind of data you want to select.
This target dataset should be in jsonl format -- it can also be a dataset from HuggingFace Datasets. Note that our current workflow requires about 2TB of storage space --- we're working on reducing this! All the code should be run from the outer `dsir` directory.
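
For illustration, a minimal sketch of a tiny jsonl target dataset (the `text` field and file name are assumptions for this example; the fields actually used are configured via `dsname_to_args` in step 5 below):
```
import json

# Two toy target examples; each line of the jsonl file is one JSON object.
examples = [
    {"text": "An example document from the target domain."},
    {"text": "Another example document from the target domain."},
]
with open("my_target_dataset.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```
With a target dataset like this in hand, the steps of the pipeline are: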
1. Create a virtualenv using `requirements.txt`: `virtualenv .venv; source .venv/bin/activate; pip install -r requirements.txt`
2. Download The Pile to `PILE_PATH` and change the corresponding variables in `config.sh`.
3. Run preprocessing on The Pile: run `bash preprocessing/run_slurm.sh`. You can also run `bash preprocessing/run.sh` directly using the arguments in `preprocessing/run_slurm.sh`. This only needs to be run once.
4. Precompute quality filter stats: run `bash preprocessing/quality_scores/run_slurm_quality_stats.sh`, then run `bash preprocessing/quality_scores/run_merge_quality_scores.sh`. This only needs to be run once. (We're working on streamlining steps 3 and 4. Stay tuned!)
5. Run DSIR: for an example, run `bash data_selection/run_cmds.sh`. For new target datasets, add information about which dataset fields to use to the `dsname_to_args` dictionary at the top of `data_selection/dsir_pipeline.py`. If you wish to retrieve from custom subsets of The Pile (for example, only one chunk of The Pile), you will need to tweak one part of the main script (an example of how to do so is provided as a comment). Many of the steps in DSIR are cached and only run the first time; for example, resampling a different number of examples with the same target dataset uses cached importance weights.

## Code for pretraining and GLUE evaluation

We provide scripts for training BERT-style masked language models on the selected data and evaluating them on GLUE in the `train` and `glue_eval` directories, respectively. All code should be run from the outer `dsir` directory.
1. Install further dependencies using `train/requirements.txt`: `pip install -r train/requirements.txt`
2. Change the `PRETRAIN_OUTPUT_DIR` variable in `config.sh`.
3. Write a job command in `train/run_slurm.sh`; an example command is provided in this file. You will need to change the path to the training data. To skip preprocessing (if it is already done), set the first of the two boolean variables to `false`; setting both to `true` launches two jobs, one for preprocessing and one for pretraining. The pretraining job should take about 50 hours on 4 RTX 3090 GPUs. Kick off the jobs by running `bash train/run_slurm.sh`.
4. Evaluate the trained model by editing the evaluation job command in `glue_eval/run_eval_exps.sh` with the path to the model checkpoint. This script runs 5 seeds for each GLUE dataset. The results and finetuned models will be saved in a new `finetune_runs` directory inside the pretrained model checkpoint directory. Kick off the jobs by running `bash glue_eval/run_eval_exps.sh`.
5. Read the GLUE results by running `python read_glue_results.py --results_dir </path/to/checkpoint>/finetune_runs` in the `glue_eval` directory.

## Citation Information

Paper: <https://arxiv.org/abs/2302.03169>
```
@article{xie2023data,
  author = {Sang Michael Xie and Shibani Santurkar and Tengyu Ma and Percy Liang},
  journal = {Advances in Neural Information Processing Systems (NeurIPS)},
  title = {Data Selection for Language Models via Importance Resampling},
  year = {2023},
}
```