Conversation
The documentation is not available anymore as the PR was closed or merged.
LysandreJik left a comment
Thanks for your PR! Did you launch it as a trial to see if it works? I see the following should be completed on line 303:
[setup, run_tests_gpu, run_examples_gpu, run_pipelines_tf_gpu, run_pipelines_torch_gpu, run_all_tests_torch_cuda_extensions_gpu]
You are right, that line should be changed. I haven't launched it (just tried with a dummy example). I will launch it now.
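For context, a completed `needs` list like the one quoted above belongs on the job that reports the CI results, so it only runs once every test job has finished. A rough sketch of how that could look in the workflow file (the `send_results` job name and surrounding keys are assumptions, not the exact contents of the repository's workflow):

```yaml
# Sketch of the result-reporting job: it must list every test job in
# `needs` so it waits for all of them before sending the report.
send_results:
  name: Send results to webhook
  if: always()          # run even when some of the test jobs failed
  needs:
    [
      setup,
      run_tests_gpu,
      run_examples_gpu,
      run_pipelines_tf_gpu,
      run_pipelines_torch_gpu,
      run_all_tests_torch_cuda_extensions_gpu,
    ]
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v2
```

If a job is missing from `needs`, the report can be sent before that job completes, which is why the list on line 303 has to be kept in sync with the jobs defined in the workflow.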
You can launch it with only 1-2 models in each run, for example by updating this line. This way you'll test the full behavior without having 12-hour-long iterations.
It took some time, but the run looks good: https://github.com/huggingface/transformers/actions/runs/2276209307
Force-pushed from a9c25d1 to 493b384
Looks good, thanks @ydshieh!
* split single_gpu and multi_gpu
* update needs in send_result

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
What does this PR do?
Fix the scheduled CI failure caused by the 256-job limit on jobs generated from a single matrix.
Note that the graph on the workflow run page no longer shows single-gpu and multi-gpu as separate nodes, but the job names in the list on the left side do mention the matrix values.
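The fix described in the commit message — splitting the single-GPU and multi-GPU runs into separate jobs — can be sketched as below. Each job carries its own matrix, so each matrix generates at most half as many jobs and stays under GitHub Actions' 256-jobs-per-matrix limit. The job names, matrix key, and runner labels here are illustrative assumptions, not the exact contents of the workflow:

```yaml
# Illustrative sketch: one matrix per machine type instead of one combined
# matrix, so neither matrix exceeds the 256-job limit.
run_tests_single_gpu:
  strategy:
    fail-fast: false
    matrix:
      folders: ${{ fromJson(needs.setup.outputs.matrix) }}
  runs-on: [self-hosted, single-gpu]
  needs: setup
  steps:
    - run: python -m pytest -v tests/${{ matrix.folders }}

run_tests_multi_gpu:
  strategy:
    fail-fast: false
    matrix:
      folders: ${{ fromJson(needs.setup.outputs.matrix) }}
  runs-on: [self-hosted, multi-gpu]
  needs: setup
  steps:
    - run: python -m pytest -v tests/${{ matrix.folders }}
```

The trade-off is some duplication in the workflow file, but the alternative — a single job whose matrix crosses machine type with model folders — is what hit the 256-job cap in the first place.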