
Recipe changes for performance #11763

Open · wants to merge 24 commits into base: main

Conversation

@guyueh1 (Contributor) commented Jan 6, 2025

What does this PR do?

Recipe changes for performance in the 25.01 release.

Collection: [Note which collection this PR will affect]

Changelog

  • Add specific line-by-line info of the high-level changes in this PR.

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this 

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and re-add the label.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre-checks:

  • Make sure you have read and followed the Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (e.g., Numba, Pynini, Apex)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs in various areas.

Additional Information

  • Related to # (issue)

guyueh1 and others added 3 commits January 6, 2025 09:33
Signed-off-by: Guyue Huang <[email protected]>

Conflicts:
	nemo/lightning/run/plugins.py
Signed-off-by: Guyue Huang <[email protected]>
Review threads on nemo/lightning/run/plugins.py: outdated, resolved.
@guyueh1 guyueh1 marked this pull request as ready for review January 8, 2025 19:06
guyueh1 and others added 2 commits January 8, 2025 11:07
Conflicts:
	nemo/lightning/run/plugins.py
@erhoo82 (Collaborator) previously approved these changes Jan 8, 2025 and left a comment:

LGTM

@erhoo82 erhoo82 self-requested a review January 14, 2025 19:22
@erhoo82 previously approved these changes Jan 14, 2025
@@ -168,3 +182,17 @@ class TransformerLayerTPOverlapCfg:
    proj_fprop=PipelineOverlapCfg(num_sm=24, cga_size=2, num_splits=4, set_sm_margin=True, fp8_buf=True),
    fc2_fprop=RingExchangeOverlapCfg(num_sm=1, set_sm_margin=True),
)

# Nemotron 340B
userbuffers_bf16_h100_h18432_tp8_mbs1_seqlen4096 = TransformerLayerTPOverlapCfg(
Collaborator:

Is this an overlap config for Hopper or Blackwell?

@erhoo82 erhoo82 enabled auto-merge (squash) January 14, 2025 19:23
if tp_size > 1 or cp_size > 1:
    executor.env_vars["CUDA_DEVICE_MAX_CONNECTIONS"] = "1"
if torch.cuda.is_available():
    major, _ = torch.cuda.get_device_capability()
@guyueh1 (Contributor, Author):

@erhoo82 This method won't work because it runs on the cluster frontend node, not after the Slurm allocation. We need to find another way.
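A possible workaround (a minimal sketch, not necessarily the fix adopted in this PR; the helper name is hypothetical) is to defer the capability check into the Slurm-launched process, where the allocated GPU is actually visible:

import os

import torch

def set_cuda_device_max_connections(tp_size: int, cp_size: int) -> None:
    # Hypothetical helper: meant to run inside the srun-launched training
    # process rather than on the cluster frontend node, so torch.cuda can
    # see the allocated GPU.
    if not torch.cuda.is_available():
        return  # still on a GPU-less node; nothing to decide here
    major, _ = torch.cuda.get_device_capability()
    if major <= 9 and (tp_size > 1 or cp_size > 1):
        # Hopper and earlier: serialize connections so communication kernels
        # are scheduled ahead of the overlapping persistent GEMM kernels.
        os.environ["CUDA_DEVICE_MAX_CONNECTIONS"] = "1"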

@erhoo82 erhoo82 added Run CICD and removed Run CICD labels Jan 14, 2025
auto-merge was automatically disabled January 15, 2025 17:58

Head branch was pushed to by a user without write access

@guyueh1 (Contributor, Author) commented Jan 24, 2025

This PR is ready; @erhoo82, please review.

@guyueh1 (Contributor, Author) commented Jan 28, 2025

@erhoo82, this PR is ready and is needed for 25.02; let's review and merge it.

    os.environ.pop('CUDA_DEVICE_MAX_CONNECTIONS')
else:
    if tp_size > 1 or cp_size > 1:
        os.environ['CUDA_DEVICE_MAX_CONNECTIONS'] = "1"
Collaborator:
It could also be good to add a docstring for this condition:
Set the device connections to 1 to enforce that the kernel queuing order from the host matches the execution order on the GPU. This is needed to schedule a communication kernel before the overlapping persistent GEMM kernel; otherwise the communication kernel is pushed to the end of the GEMM kernel and the two fail to overlap.
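Applied to the snippet above, the suggested comment could read like this (a sketch of the reviewer's wording, not the committed code):

else:
    if tp_size > 1 or cp_size > 1:
        # Set the device connections to 1 to enforce that the kernel queuing
        # order from the host matches the execution order on the GPU. This is
        # needed to schedule a communication kernel before the overlapping
        # persistent GEMM kernel; otherwise the communication kernel is pushed
        # to the end of the GEMM kernel and the two fail to overlap.
        os.environ['CUDA_DEVICE_MAX_CONNECTIONS'] = "1"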

if major > 9:
    if (tp_size > 1 or cp_size > 1) and (dp_size > 1 or pp_size > 1):
        # Default is 8, but for this case, we need extra connections
        # to avoid serialization of streams
Collaborator:
Change "Default is 8, but for this case, we need extra connections to avoid serialization of streams" to "We need extra connections to avoid serialization of streams, so we use the maximum of 32 connections instead of the default of 8."
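In context, the reworded comment would sit roughly like this (a sketch; the value 32 follows the reviewer's description, not a line visible in this diff):

if major > 9:
    if (tp_size > 1 or cp_size > 1) and (dp_size > 1 or pp_size > 1):
        # We need extra connections to avoid serialization of streams, so we
        # use the maximum of 32 connections instead of the default of 8.
        os.environ["CUDA_DEVICE_MAX_CONNECTIONS"] = "32"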

Signed-off-by: Guyue Huang <[email protected]>
@erhoo82 (Collaborator) previously approved these changes Jan 29, 2025 and left a comment:

LGTM

@erhoo82 erhoo82 added Run CICD and removed Run CICD labels Jan 29, 2025
Signed-off-by: Guyue Huang <[email protected]>

beep boop 🤖: 🙏 The following files have warnings. In case you are familiar with these, please try helping us to improve the code base.


Your code was analyzed with PyLint. The following annotations have been identified:

************* Module nemo.collections.llm.recipes.tp_overlap_configs.userbuffers
nemo/collections/llm/recipes/tp_overlap_configs/userbuffers.py:19:0: C0115: Missing class docstring (missing-class-docstring)
nemo/collections/llm/recipes/tp_overlap_configs/userbuffers.py:24:0: C0115: Missing class docstring (missing-class-docstring)
nemo/collections/llm/recipes/tp_overlap_configs/userbuffers.py:34:0: C0115: Missing class docstring (missing-class-docstring)
nemo/collections/llm/recipes/tp_overlap_configs/userbuffers.py:42:0: C0115: Missing class docstring (missing-class-docstring)
nemo/collections/llm/recipes/tp_overlap_configs/userbuffers.py:50:0: C0115: Missing class docstring (missing-class-docstring)
************* Module nemo.lightning.pytorch.callbacks.megatron_comm_overlap
nemo/lightning/pytorch/callbacks/megatron_comm_overlap.py:81:0: C0301: Line too long (121/119) (line-too-long)
nemo/lightning/pytorch/callbacks/megatron_comm_overlap.py:287:0: C0301: Line too long (124/119) (line-too-long)
nemo/lightning/pytorch/callbacks/megatron_comm_overlap.py:251:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/pytorch/callbacks/megatron_comm_overlap.py:318:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/pytorch/callbacks/megatron_comm_overlap.py:322:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/pytorch/callbacks/megatron_comm_overlap.py:326:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/lightning/pytorch/callbacks/megatron_comm_overlap.py:330:4: C0116: Missing function or method docstring (missing-function-docstring)

-----------------------------------
Your code has been rated at 9.44/10

Mitigation guide:

  • Add sensible and useful docstrings to functions and methods
  • For trivial methods like getter/setters, consider adding # pylint: disable=C0116 inside the function itself
  • To disable multiple functions/methods at once, put a # pylint: disable=C0116 before the first and a # pylint: enable=C0116 after the last.

By applying these rules, we reduce the occurrence of this message in the future.

Thank you for improving NeMo's documentation!
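As an illustration of the disable/enable pattern from the mitigation guide (a minimal sketch; the class and method names are hypothetical):

class OverlapSettings:
    # Hypothetical example class, not part of the NeMo code base.
    def __init__(self, tp_size, cp_size):
        self._tp_size = tp_size
        self._cp_size = cp_size

    # Trivial getters: suppress C0116 for this span instead of adding
    # a docstring to each method.
    # pylint: disable=C0116
    def tp_size(self):
        return self._tp_size

    def cp_size(self):
        return self._cp_size
    # pylint: enable=C0116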

@guyueh1 guyueh1 requested a review from erhoo82 January 30, 2025 23:45