[misc] fix: fix device #1671
Merged
vermouth1992 merged 2 commits into verl-project:main on May 24, 2025
Conversation
sunyi0505 approved these changes · May 24, 2025
sunyi0505 (Collaborator) reviewed · May 24, 2025 · left a comment:
I think it is necessary to modify the test cases in `verl.tests.npu` by adding `trainer.device=npu`.
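The reviewer's suggestion amounts to passing the new explicit device setting as a command-line override in the NPU test scripts. A minimal sketch of what that could look like; the entry-point module and the other flags shown here are assumptions for illustration, not taken from this PR:

```shell
# Hypothetical invocation: only trainer.device=npu comes from the review
# comment; the entry point and other overrides are illustrative placeholders.
python3 -m verl.trainer.main_ppo \
    trainer.device=npu \
    data.train_files=$TRAIN_FILES
```

With the device set explicitly in the test configuration, the NPU tests no longer depend on what hardware the driver process happens to see.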
sunyi0505 approved these changes · May 24, 2025
tongyx361 approved these changes · May 24, 2025
ETOgaosion pushed a commit to Jianbing-D/verl that referenced this pull request · Jun 8, 2025
### Checklist Before Starting

- [X] Search for similar PR(s).

### What does this PR do?

Currently, the device to run on depends on whether `is_cuda_available` is True on the driver process. However, the driver process may be a CPU process that can't see CUDA devices even when CUDA devices are available. Thus, it's not appropriate to use `is_cuda_available` to set the device; instead, we should set the device explicitly.

In the future, we may have a Ray cluster with both NPUs and GPUs, and we can use different devices for different workloads. Thus, setting the device explicitly is the better choice in the long run.

Why CI can't trigger this problem: we directly run `python3 xxx` on the CI machine instead of using a standard Ray cluster with dedicated CPUs for the head node, and CI machines all have GPUs.

### High-Level Design

> Demonstrate the high-level design if this PR is complex.

### Specific Changes

> List the specific changes.

### API

> Demonstrate how the API changes if any.

### Usage Example

> Provide usage example(s) for easier usage.

```python
# Add code snippet or script demonstrating how to use this
```

### Test

> For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

### Additional Info.

- **Issue Number**: Fixes issue # or discussion # if any.
- **Training**: [Note which backend this PR will affect: FSDP, Megatron, both, or none]
- **Inference**: [Note which backend this PR will affect: vLLM, SGLang, both, or none]

### Checklist Before Submitting

- [ ] Read the [Contribute Guide](https://github.com/volcengine/verl?tab=readme-ov-file#contribution-guide).
- [ ] Apply [pre-commit checks](https://github.com/volcengine/verl?tab=readme-ov-file#code-linting-and-formatting).
- [ ] Add `[BREAKING]` to the PR title if it breaks any API.
- [ ] Update the documentation about your changes in the [docs](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add CI test(s) if necessary.
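The failure mode described above can be sketched in a few lines. This is illustrative code, not verl's actual implementation; the function name and parameters are hypothetical, and it only models the decision logic the PR argues about: an explicit config value should win over the driver's view of its own hardware.

```python
# Hypothetical sketch (not verl's code) of why explicit device config beats
# driver-side autodetection. A CPU-only Ray head node reports "no CUDA"
# even when the worker nodes do have GPUs, so autodetection picks wrong.
from typing import Optional


def resolve_device(config_device: Optional[str], cuda_visible_on_driver: bool) -> str:
    """Resolve the device workers should use.

    config_device: explicit setting (e.g. "cuda", "npu"), or None if unset.
    cuda_visible_on_driver: what is_cuda_available-style detection would
    return on the driver process, which may be a GPU-less head node.
    """
    if config_device is not None:
        # Explicit setting wins: the driver's own hardware is irrelevant.
        return config_device
    # Fallback: the fragile autodetection behavior the PR moves away from.
    return "cuda" if cuda_visible_on_driver else "cpu"


# On a CPU-only head node, autodetection wrongly falls back to "cpu":
assert resolve_device(None, cuda_visible_on_driver=False) == "cpu"
# An explicit device setting gives the right answer regardless of the driver:
assert resolve_device("cuda", cuda_visible_on_driver=False) == "cuda"
assert resolve_device("npu", cuda_visible_on_driver=False) == "npu"
```

This also shows why the bug escaped CI: on CI the driver runs on a GPU machine, so `cuda_visible_on_driver` is True and the fallback path happens to pick the correct device.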
wwwjn pushed a commit to wwwjn/verl that referenced this pull request · Jun 10, 2025
chenjiaoAngel added a commit to chenjiaoAngel/verl that referenced this pull request · Nov 14, 2025
TimurTaepov pushed a commit to giorgossideris/verl that referenced this pull request · Dec 20, 2025
vyomakesh0728 added a commit to vyomakesh0728/verl that referenced this pull request · Jan 22, 2026