
[misc] fix: fix device #1671

Merged
vermouth1992 merged 2 commits into verl-project:main from vermouth1992:chi/fix/device
May 24, 2025

Conversation

vermouth1992 (Collaborator) commented May 24, 2025

Checklist Before Starting

  • Search for similar PR(s).

What does this PR do?

Currently, the device to run on depends on whether is_cuda_available is True on the driver process. However, the driver may be a CPU-only process that can't see CUDA devices even when CUDA devices are available on the workers. Thus, it's not appropriate to use is_cuda_available to set the device. Instead, we should set the device explicitly.

In the future, we may have a Ray cluster with both NPUs and GPUs, and we may want to use different devices for different workloads. Thus, setting the device explicitly is the better choice in the long run.

Why CI can't trigger this problem: CI runs python3 xxx directly on the CI machine instead of using a standard Ray cluster whose head node has dedicated CPUs, and CI machines all have GPUs.
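
To make the contrast concrete, here is a minimal sketch, assuming a hypothetical get_device helper and a configured device string (illustrative only, not the actual diff in this PR):

```python
import torch

# Fragile pattern this PR moves away from: probing CUDA on the driver,
# which may be a CPU-only process even when the workers do have GPUs.
# device = "cuda" if torch.cuda.is_available() else "cpu"

def get_device(configured_device: str) -> torch.device:
    """Hypothetical helper: trust the explicit setting (e.g. "cuda", "cpu",
    or "npu" with torch_npu installed) instead of the driver's local view."""
    return torch.device(configured_device)

device = get_device("cuda")  # e.g. read from the trainer config
```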

High-Level Design

Demonstrate the high-level design if this PR is complex.

Specific Changes

List the specific changes.

API

Demonstrate how the API changes if any.

Usage Example

Provide usage example(s) for easier usage.

# Add code snippet or script demonstrating how to use this 
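
A hedged usage sketch (the trainer.device override follows the reviewer's suggestion below; the verl.trainer.main_ppo entry point is an assumption, not taken from this PR):

```python
# Illustrative launch that sets the device explicitly instead of relying on
# autodetection on the driver. Both the module path and the override key
# are assumptions for illustration.
import subprocess

subprocess.run(
    ["python3", "-m", "verl.trainer.main_ppo", "trainer.device=cuda"],
    check=True,
)
```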

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

Additional Info.

  • Issue Number: Fixes issue # or discussion # if any.
  • Training: [Note which backend this PR will affect: FSDP, Megatron, both, or none]
  • Inference: [Note which backend this PR will affect: vLLM, SGLang, both, or none]

Checklist Before Submitting

  • Read the Contribute Guide.
  • Apply pre-commit checks.
  • Add [BREAKING] to the PR title if it breaks any API.
  • Update the documentation about your changes in the docs.
  • Add CI test(s) if necessary.

sunyi0505 (Collaborator) left a comment

I think it is necessary to modify the test cases in verl.tests.npu by adding trainer.device=npu.
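
A minimal sketch of that suggestion, assuming the NPU tests assemble Hydra-style override lists (everything except trainer.device=npu is hypothetical):

```python
# Hypothetical override list for a test in verl.tests.npu; only the
# trainer.device=npu entry reflects the reviewer's suggestion.
overrides = [
    "trainer.device=npu",
    "trainer.total_epochs=1",  # illustrative existing test setting
]
```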

@sunyi0505 sunyi0505 self-requested a review May 24, 2025 12:17
@vermouth1992 vermouth1992 enabled auto-merge (squash) May 24, 2025 12:31
@vermouth1992 vermouth1992 merged commit c60546d into verl-project:main May 24, 2025
24 checks passed
ETOgaosion pushed a commit to Jianbing-D/verl that referenced this pull request Jun 8, 2025
wwwjn pushed a commit to wwwjn/verl that referenced this pull request Jun 10, 2025
chenjiaoAngel added a commit to chenjiaoAngel/verl that referenced this pull request Nov 14, 2025
TimurTaepov pushed a commit to giorgossideris/verl that referenced this pull request Dec 20, 2025
vyomakesh0728 added a commit to vyomakesh0728/verl that referenced this pull request Jan 22, 2026