Tests #1
Conversation
```diff
     )
-    return (tokens, position_ids, attention_mask), (labels, loss_mask)
+    return (tokens, position_ids, attention_mask), (labels, loss_mask), prefix_indices
```
This is okay, as all values beyond the 3rd index are ignored, which makes this a nice way to expose variables needed for testing.
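A small illustration of why the extra return value is harmless to existing callers. The values here are hypothetical stand-ins, not the real Megatron objects:

```python
def get_batch():
    # Hypothetical stand-in for the dataset's return value: the real code
    # returns model inputs, targets, and (now) prefix indices.
    inputs = ("tokens", "position_ids", "attention_mask")
    targets = ("labels", "loss_mask")
    prefix_indices = [3, 7]  # extra value, exposed for testing
    return inputs, targets, prefix_indices

batch = get_batch()
# Callers that only index the first two slots ignore the new third value...
inputs, targets = batch[0], batch[1]
# ...while tests can still inspect it.
print(batch[2])  # [3, 7]
```

Note this only holds because callers index into the returned tuple; a caller that unpacks with `a, b = get_batch()` would break on the extra element.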
Glad to see you started working on it, @thomasw21. FYI, there will be more tests coming from bigscience-workshop#47. Well, a lot more, actually; I guess we can just drop the helper library in.

Let me know if you prefer that I set things up first and then you can drop in all your awesome additions.
Cool! So I don't have much experience with pytest (none, actually) or with how it's done in transformers. Maybe you could set it up and then I can migrate to the new framework? Either way, I think I can still merge this (on my prefix-lm branch but also on master), since these are runnable tests (see commands in the description), which might help engineers at least check they haven't broken anything on their setup. Once we set up CI it'll be much smoother (and costlier). The tests here don't guarantee you haven't broken anything, but IMO they are a good attempt at catching some basic things (like my attention issue).
Sounds good, @thomasw21. You can merge, and I will set up the test suite tomorrow; then you can move/adapt it, if you prefer that order.
* ICT zeroshot evaluation code
* made more generic, aligned with other tasks
* Fixed based on review recoemmendation
* fixed another issue
* implementing DPR
* implementation dpr
* adding dpr code
* removed commnets
* removed commnets
* removed commnets
* DPR evaluation debugging
* DPR ongoing
* DPR finetune and evaluation
* fixing model evaluation of retriver
* added pre ad post process
* added pre ad post process
* evaluation works!
* debugging DPR
* fix copy-n-paste error remove erroneous arg.
* Typo fix in readme
* t5 fixes
* before cleaning the comments
* vit pipeline fixes
* cleaning the code
* additional cleaning
* renaming the folders
* Add temporary assert to finetuning until it can be fixed.
* Fixed issues with ICT pretraining
* updated the evaluation script for retriver
* updated the evaluation script for retriver
* updated the evaluation script for retriver
* updated the evaluation script for retriver
* added exit interval for finetuning
* updating the scripts
* updating no load rng
* updating script
* Update T5 scripts
* resolved hang issue
* fixed the tensor size miss-mass issue
* fixed the evaluation hangs
* Adding readme
* Adding readme
* Adding readme
* Adding readme
* Adding readme
* Adding readme
* Adding readme
* Adding readme
* Clean up README.md a bit
* addressed comments
* updated readme
* updated readme
* updated readme
* updated readme
* Basic handling of prefix lm by updating the mask
* Add prefix option to gpt temporarily and prevent it to use custom kernel
* Add argument for prefix lm, in order to configure masking strategy
* Woops
* loss_on_targets_only flag, assert that current prefix implementation only works with reset_attention_mask set to True and attempt to fix empty slice issue
* Format
* Reverse renaming
* Allow prefix on partial document at the end
* WIP: add prefix per row feature
* Document the use of None
* Woops
* Handle empty document better
* We might not be able to concat empty tensors
* Handle empty tensor seperately
* Debug
* Test
* Add loss masking as script argument
* Turns out deepspeed integration of attention matrices prevented dynamic masks
* Add more asserts
* Prefix can only see the prefix, it cannot see target
* Remove prefix-lm argument as we split the pretrain script
* Iz PR review
* Make masking row dependent when using prefix
* Revert "Merge remote-tracking branch 'origin/master' into prefix_lm" This reverts commit d49d6e5, reversing changes made to 28a712d.
* Tests (#1)
  * WIP: test
  * Still trying to figure out deepspeed
  * WIP
  * Test test
  * Test how to setup deepspeed in unit tests
  * Test something else
  * Empty strings might be problematic
  * Remove unecessary arguments
  * Woops
  * Remove global variables at the end of each test and init deepspeed
  * Woops
  * Maybe adding classmethod
  * Woops
  * Add debug print to check that tear down happends
  * Reset global variables before
  * Let's test this
  * Try something else
  * WIP
  * More fix
  * More fix
  * More stuff to fix
  * We really want to compare vectors and not coordinates
  * Reformat
  * check something out
  * fix test
  * Remove prefix-lm flag as it's integrated
  * Woops
  * Add test for without reset attention mask
  * Fix test for non reset attention mask
  * Fix test
* Update code for prefix lm

Co-authored-by: Mostofa Patwary <mostofa.patwary@gmail.com>
Co-authored-by: Mostofa Patwary <mpatwary@nvidia.com>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: Devrim <46989091+devrimcavusoglu@users.noreply.github.com>
Co-authored-by: Stas Bekman <stas@stason.org>
Co-authored-by: Vijay Korthikanti <vkorthikanti@nvidia.com>
Co-authored-by: Jared Casper <jcasper@nvidia.com>
Co-authored-by: Mohammad Shoeybi <mshoeybi@nvidia.com>
Co-authored-by: Deepak Narayanan <dnarayanan@nvidia.com>
Context
I started experimenting with tests after running into an issue with prefix LM, where the DeepSpeed version of GPT has some hacks that change behaviour around the attention mechanism: https://github.com/bigscience-workshop/Megatron-DeepSpeed/blob/main/pretrain_gpt.py#L56-L71

Basically, the issue is that DeepSpeed would override some of the arguments passed down, forcing the model to ignore whatever mask I fed to the rest of the stack. Everything ran smoothly, but using a causal mask instead of a prefix one.
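To make the silent failure concrete, here is a minimal sketch (plain Python, entry `1` meaning position `i` may attend to position `j`) of the two mask patterns involved; with the override in place, the causal pattern quietly replaces the prefix one:

```python
def causal_mask(seq_len):
    # Causal LM: position i may attend only to positions 0..i.
    return [[1 if j <= i else 0 for j in range(seq_len)]
            for i in range(seq_len)]

def prefix_lm_mask(seq_len, prefix_len):
    # Prefix LM: tokens inside the prefix attend bidirectionally to the
    # whole prefix; target tokens attend causally (prefix + earlier targets).
    return [
        [1 if (i < prefix_len and j < prefix_len) or j <= i else 0
         for j in range(seq_len)]
        for i in range(seq_len)
    ]
```

For example, with `seq_len=4` and `prefix_len=2`, row 0 of the causal mask is `[1, 0, 0, 0]` while the prefix-LM mask gives `[1, 1, 0, 0]`: prefix tokens can see later prefix tokens, which a causal mask forbids.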
What I want to test
Ideally I would want to test `deepspeed --num_gpus 1 pretrain_gpt.py ${ARGS}`. However, that prevents us from easily handling different types of inputs, because data is fed via a file. Instead I did the following: it's not ideal, as it feels like a hacky way to initialise the model correctly, but it allows me to use the model "interactively", i.e. I can process an input and then test the outputs.
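The "interactive" initialisation can be approximated by faking the command line with `unittest.mock.patch` instead of going through the launcher. A sketch under hypothetical flags (the real Megatron parser takes many more arguments):

```python
import argparse
import sys
from unittest.mock import patch

def parse_args():
    # Hypothetical stand-in for Megatron's argument parsing.
    parser = argparse.ArgumentParser()
    parser.add_argument("--seq-length", type=int, default=1024)
    parser.add_argument("--loss-on-targets-only", action="store_true")
    return parser.parse_args()

# Simulate `pretrain_gpt.py --seq-length 8 --loss-on-targets-only`
# without spawning a real deepspeed process.
fake_argv = ["pretrain_gpt.py", "--seq-length", "8", "--loss-on-targets-only"]
with patch.object(sys, "argv", fake_argv):
    args = parse_args()

print(args.seq_length)  # 8
```

Once the arguments are in place, the model can be built once and then called on arbitrary inputs in the test body, rather than reading data from a file.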
We use `patch` to simulate different arguments.

How I ran the tests
TODO