
Bugfix for output_generation_logits in tensorrtllm #11820

Merged
2 commits merged into main from athitten/fix_output_generation_logits, Jan 11, 2025

Conversation

athitten
Collaborator

@athitten athitten commented Jan 10, 2025

What does this PR do ?

Gets the value of generation_logits_available from inputs["output_generation_logits"][0][0] so that it is a bool, as opposed to inputs["output_generation_logits"], which is a NumPy array and can raise an error when its length is greater than 1.
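As a sketch of the failure mode this fix addresses (the dict contents below are hypothetical stand-ins for the real request inputs, which come from the serving runtime):

```python
import numpy as np

# Hypothetical inputs dict mimicking the PR's scenario: the flag arrives as a
# NumPy array (e.g. shape [batch, 1]) rather than a plain Python bool.
inputs = {"output_generation_logits": np.array([[True], [True]])}

# Before the fix: truth-testing the whole array raises a ValueError
# whenever the array has more than one element.
try:
    generation_logits_available = bool(inputs["output_generation_logits"])
except ValueError as err:
    error_message = str(err)  # "The truth value of an array with more than one element is ambiguous..."

# After the fix: index into the array so the flag is a single boolean value.
generation_logits_available = inputs["output_generation_logits"][0][0]
print(bool(generation_logits_available))
```

With a single-element array both forms happen to work, which is why the bug only surfaces once the array length exceeds 1.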

Collection: [Note which collection this PR will affect]

Changelog

  • Add specific line by line info of high level changes in this PR.

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this 

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and re-add the label.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (e.g., Numba, Pynini, Apex)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs to various areas.

Additional Information

  • Related to # (issue)

Contributor

[🤖]: Hi @athitten 👋,

We wanted to let you know that a CICD pipeline for this PR just finished successfully.

So it might be time to merge this PR or get some approvals.

I'm just a bot, so I'll leave it to you what to do next.

//cc @pablo-garay @ko3n1g

@oyilmaz-nvidia oyilmaz-nvidia merged commit 7f3ac6b into main Jan 11, 2025
199 of 202 checks passed
@oyilmaz-nvidia oyilmaz-nvidia deleted the athitten/fix_output_generation_logits branch January 11, 2025 18:48
janekl pushed a commit that referenced this pull request Jan 13, 2025
janekl added a commit that referenced this pull request Jan 13, 2025
Signed-off-by: Abhishree <[email protected]>
Signed-off-by: Jan Lasek <[email protected]>
Co-authored-by: Abhishree Thittenamane <[email protected]>
pablo-garay pushed a commit that referenced this pull request Jan 16, 2025

* Revert "Revert Mcore update since it caused regression (#11791)"

This reverts commit 84b2bf0.

* Fix Gemma2 Attention init args (#11792)

* Use _get_mlp_module_spec from Megatron Core rather than redefine locally (#11834)

* Use _get_mlp_module_spec from MCore rather than redefine

Signed-off-by: Jan Lasek <[email protected]>

* Apply isort and black reformatting

Signed-off-by: janekl <[email protected]>

* Update nemo/collections/nlp/models/language_modeling/megatron/gpt_layer_modelopt_spec.py

Co-authored-by: oliver könig <[email protected]>
Signed-off-by: Jan Lasek <[email protected]>

---------

Signed-off-by: Jan Lasek <[email protected]>
Signed-off-by: janekl <[email protected]>
Co-authored-by: janekl <[email protected]>
Co-authored-by: oliver könig <[email protected]>

* Bugfix for output_generation_logits in tensorrtllm (#11820) (#11833)

Signed-off-by: Abhishree <[email protected]>
Signed-off-by: Jan Lasek <[email protected]>
Co-authored-by: Abhishree Thittenamane <[email protected]>

---------

Signed-off-by: Jan Lasek <[email protected]>
Signed-off-by: janekl <[email protected]>
Signed-off-by: Abhishree <[email protected]>
Co-authored-by: Ao Tang <[email protected]>
Co-authored-by: Jan Lasek <[email protected]>
Co-authored-by: janekl <[email protected]>
Co-authored-by: Abhishree Thittenamane <[email protected]>
abhinavg4 pushed a commit that referenced this pull request Jan 30, 2025
youngeunkwon0405 pushed a commit to youngeunkwon0405/NeMo that referenced this pull request Feb 10, 2025