cp: Onboarding LLAMA3 70B LoRa to B300 and B200 chips (2397) into r0.3.0 #2581
svcnvidia-nemo-ci wants to merge 1 commit into r0.3.0
Signed-off-by: Raghav Hrishikeshan Mukundan <rmukundan@nvidia.com>
Signed-off-by: NeMo Bot <nemo-bot@nvidia.com>
/ok to test 84e8d6f
No actionable comments were generated in the recent review.
Files selected for processing: 3
📝 Walkthrough
The PR adds LLAMA3 70B LoRA configuration support for B200 and B300 GPUs. New configuration functions are introduced in llama3_llm_finetune.py, base configurations and public aliases are defined in llama3_workload_base_configs.py, and these configurations are exported through the module's public API in `__init__.py`.
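The walkthrough above describes per-GPU configuration functions layered on a shared base config. As a rough sketch only (the actual NeMo API is not shown in this PR; the function names, dataclass fields, and parallelism values below are all assumptions for illustration), such a layout might look like:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class FinetuneConfig:
    """Illustrative finetuning recipe parameters (field names are assumptions)."""
    model: str
    gpu: str
    num_gpus: int
    peft_scheme: str
    tensor_parallel_size: int
    pipeline_parallel_size: int

# Hypothetical base config, mirroring what llama3_workload_base_configs.py
# might define for the LLAMA3 70B LoRA workload.
_LLAMA3_70B_LORA_BASE = FinetuneConfig(
    model="llama3-70b",
    gpu="b200",
    num_gpus=8,
    peft_scheme="lora",
    tensor_parallel_size=4,
    pipeline_parallel_size=2,
)

def llama3_70b_lora_b200() -> FinetuneConfig:
    """Return the B200 recipe (in this sketch, the base config unchanged)."""
    return _LLAMA3_70B_LORA_BASE

def llama3_70b_lora_b300() -> FinetuneConfig:
    """Derive the B300 recipe from the base, overriding only the GPU name."""
    return replace(_LLAMA3_70B_LORA_BASE, gpu="b300")

# Public aliases, matching the walkthrough's note that the new
# configurations are exported through the module's __init__.py.
__all__ = ["llama3_70b_lora_b200", "llama3_70b_lora_b300"]
```

Deriving the per-chip variants from one frozen base keeps the two recipes in sync: a change to the shared hyperparameters propagates to both B200 and B300 automatically.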
Estimated code review effort: 3 (Moderate), ~20 minutes.
Pre-merge checks: 3 passed, 1 failed (warning).
Merge via #2509
beep boop [🤖]: Hi @rhmukundan 👋,