Support XPU for auto-parallel LLaMa #9796

Merged

Fix CI errors (commit bbbeb62)
Codecov / codecov/project failed Feb 6, 2025 in 0s: 52.19% coverage (target 58.00%)

Codecov Report

Attention: Patch coverage is 6.45161% with 29 lines in your changes missing coverage (only 2 of the 31 changed lines are covered). Please review.

Project coverage is 52.19%. Comparing base (13053a7) to head (bbbeb62).
Report is 13 commits behind head on develop.

Files with missing lines                        Patch %   Missing
paddlenlp/transformers/llama/modeling_auto.py     7.14%        26
paddlenlp/trainer/auto_trainer.py                  0.00%         3

❌ Your project check has failed because the head coverage (52.19%) is below the target coverage (58.00%). You can increase the head coverage or adjust the target coverage.
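For context, this project status check compares head coverage against a target configured in the repository's codecov.yml. A minimal sketch of the relevant section, assuming the default project status check is used (the 58% value mirrors the target reported above; the threshold key is illustrative and may not be set in this repository):

coverage:
  status:
    project:
      default:
        target: 58%      # check fails if head coverage falls below this
        threshold: 1%    # optional: tolerate up to a 1% drop before failing

Lowering target (or adding tests for the files listed above) would turn this check green; the patch-coverage check is configured separately under coverage.status.patch.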

Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #9796      +/-   ##
===========================================
+ Coverage    52.06%   52.19%   +0.13%     
===========================================
  Files          734      730       -4     
  Lines       116591   115871     -720     
===========================================
- Hits         60703    60480     -223     
+ Misses       55888    55391     -497     
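As a consistency check on the diff above: head coverage = Hits / Lines = 60480 / 115871 ≈ 52.19%, and base coverage = 60703 / 116591 ≈ 52.06%, matching the reported +0.13% change.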
