Multi-Stage Awake: Support tag-based Resume and Pause #20
Conversation
btw I see CUDA Graph in your figure, thus #21
fzyzcjy left a comment:
Good job! Some nits
fzyzcjy left a comment:
LGTM, only a few nits!
Only some tiny nits (if you have time, maybe spend 3 minutes to make the changes and I am ready to merge; if you are in a hurry, I am also OK with the current code).
Co-authored with: MrAta ([email protected])

### Checklist Before Starting

- [x] Search for similar PR(s).

### What does this PR do?

### Motivation

In RL ecosystems that use a colocated design, such as [verl](https://github.com/volcengine/verl/tree/main), we need to offload the training model and load the serving model & KV cache frequently.

#### Background

- SGLang currently uses [torch_memory_saver](https://github.com/fzyzcjy/torch_memory_saver) to pause and resume GPU memory.
- [torch_memory_saver](https://github.com/fzyzcjy/torch_memory_saver) is an open-source repo that provides an easy-to-use API to hook **cudaMalloc** and **cudaFree**, so that virtual addresses stay consistent across pause and resume, which is critical for CUDA Graph to keep working.
- CUDA Graph is critical for making SGLang fast in the decoding phase.

#### Current behavior of verl + SGLang

1. During training, the training model and optimizer state live in GPU memory. Once training is done, we offload the optimizer state to CPU and keep the model weights on GPU, since they are needed for the weight update.
2. During the weight update, we awake the SGLang engine, so the paused memory of the model weights and KV cache comes back. We then update the serving model from the training model on the fly using the `update_weights_in_tensor` API.
3. After the model is updated, we delete the training model from GPU memory.

This design works pretty well so far; however, it wastes a big chunk of GPU memory during rollout, which can cause a few issues we have seen:

- **Small KV cache**: We need to use a relatively low memory fraction ratio (e.g. 0.6), so the KV cache holds fewer tokens. With a smaller KV cache, we hit `RuntimeError: Prefill out of memory. Try to lower your batch size.` when we prefill a large number of requests.
- **Out of memory**: With a memory fraction ratio of 0.8, running RL for a 32B model on 8x H100 OOMs during the weight update.

#### Challenge

- `torch_memory_saver` currently only supports a singleton, so SGLang pauses and resumes the KV cache and weights together; they are treated as one group of memory controlled by the singleton `torch_memory_saver` instance.

#### Proposal

1. During training, we do the same as before.
2. During weight update stage 1, we awake the model weights in SGLang and then update the weights.
3. During weight update stage 2, we delete the training model weights from GPU memory.
4. We awake SGLang's KV cache.

### Benefit

With the above feature, we can train larger models on the same GPUs, and we can also make training/rollout more efficient because we can allocate a larger KV cache.

### Solution: Keep using the singleton and provide tag-based pause/resume

- [x] Support tag-based resume/pause: fzyzcjy/torch_memory_saver#20
- [x] Support multi-stage awake in SGLang: sgl-project/sglang#7099
- [ ] Support multi-stage awake in verl: #1911

### High-Level Design

> Demonstrate the high-level design if this PR is complex.

### Specific Changes

> List the specific changes.

### API

> Demonstrate how the API changes if any.

### Usage Example

> Provide usage example(s) for easier usage.

```python
# Add code snippet or script demonstrating how to use this
```

### Test

### Additional Info.

- **Issue Number**: Fixes issue # or discussion # if any.
- **Training**: [Note which backend this PR will affect: FSDP, Megatron, both, or none]
- **Inference**: [Note which backend this PR will affect: vLLM, SGLang, both, or none]

### Checklist Before Submitting

- [ ] Read the [Contribute Guide](https://github.com/volcengine/verl?tab=readme-ov-file#contribution-guide).
- [ ] Apply [pre-commit checks](https://github.com/volcengine/verl?tab=readme-ov-file#code-linting-and-formatting).
- [ ] Add `[BREAKING]` to the PR title if it breaks any API.
- [ ] Update the documentation about your changes in the [docs](https://github.com/volcengine/verl/tree/main/docs).
- [ ] New CI unit test(s) are added to cover the code path.
- [ ] Rely on existing unit tests on CI that cover the code path.

---------

Co-authored-by: Chayenne <[email protected]>
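For illustration, the proposal above maps onto the trainer-side flow roughly as follows. This is a minimal sketch: the method names and tag strings (`resume_memory_occupation(tags=...)`, `"weights"`, `"kv_cache"`) are assumptions for this example, not the exact interfaces finalized in sgl-project/sglang#7099 and the verl PR.

```python
import torch

# Hedged sketch of the proposed two-stage awake flow on the trainer side.
# Method names and tag strings are illustrative assumptions only.

def update_weights_two_stage(rollout_engine, training_model: torch.nn.Module) -> None:
    # Stage 1: resume only the serving model weights (not the KV cache), so the
    # training weights and serving weights only briefly coexist on the GPU.
    rollout_engine.resume_memory_occupation(tags=["weights"])

    # Push the freshly trained parameters into the serving engine
    # (the step the PR text refers to as `update_weights_in_tensor`).
    rollout_engine.update_weights_in_tensor(list(training_model.named_parameters()))

    # Stage 2: drop this function's reference to the training weights (the caller
    # must drop its references too) so the training copy can be freed ...
    del training_model
    torch.cuda.empty_cache()

    # ... and only then resume the KV cache, so its large allocation never has
    # to coexist with two copies of the model weights.
    rollout_engine.resume_memory_occupation(tags=["kv_cache"])
```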
I realize that we may need one separate mem pool per tag in this mode. This is because, suppose: ...
I will also make a pluggable-allocator version, which may not have this issue, though.
Ah, I think I do understand the mechanism you mentioned in your comment (I just read this blog a few days ago: https://zhuanlan.zhihu.com/p/493646010). But I wonder how you realized this problem, and what kind of issues are you foreseeing right now? Will it lead to the illegal-memory issue we recently hit? Context:
Because today I am refactoring and adding features to torch_memory_saver, and this came to my mind.
I expect it to have issues like tensors being wrongly tagged.
I'm happy to fix the pool-arrangement issue, but tbh I am not very clear about the next step. Should I create a separate pool when we init those tensors (e.g. KV cache and weights) and make sure we malloc memory using the pool passed in?
My personal guess is that we change ... to sth like ...
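As a rough, purely illustrative sketch of the per-tag pool bookkeeping being discussed here (hypothetical names such as `TagAllocator` and `AllocationRecord`; the real implementation lives in torch_memory_saver's C++/CUDA layer, where the actual cudaMalloc/cudaFree hooks are):

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class AllocationRecord:
    """Metadata for one allocation made inside a tagged region."""
    ptr: int          # virtual address handed back to the caller (fake here)
    size: int         # allocation size in bytes
    paused: bool = False


@dataclass
class TagAllocator:
    """One logical pool of allocation records per tag, so pause/resume can act per tag."""
    records_by_tag: Dict[str, List[AllocationRecord]] = field(default_factory=dict)

    def malloc(self, size: int, tag: str) -> int:
        # In the real library this is where cudaMalloc would be intercepted;
        # here we only track the bookkeeping with a fake address.
        ptr = id(object())
        self.records_by_tag.setdefault(tag, []).append(AllocationRecord(ptr, size))
        return ptr

    def pause(self, tag: str) -> None:
        # Only touch allocations that belong to the requested tag,
        # leaving other tags (e.g. weights vs. KV cache) untouched.
        for rec in self.records_by_tag.get(tag, []):
            rec.paused = True  # real impl: unmap physical memory, keep the virtual address

    def resume(self, tag: str) -> None:
        for rec in self.records_by_tag.get(tag, []):
            rec.paused = False  # real impl: re-map physical memory at the same address


if __name__ == "__main__":
    alloc = TagAllocator()
    alloc.malloc(1 << 20, tag="weights")
    alloc.malloc(1 << 30, tag="kv_cache")
    alloc.pause("kv_cache")   # KV cache released, weights untouched
    alloc.resume("kv_cache")  # KV cache back at the same (fake) address
```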
Motivation

Closes: sgl-project/sglang#7009

In RL ecosystems that use a colocated design, such as verl, we need to offload the training model and load the serving model & KV cache frequently.

Background

Here is the current behavior of verl + SGLang:

1. During training, the training model and optimizer state live in GPU memory. Once training is done, we offload the optimizer state to CPU and keep the model weights on GPU, since they are needed for the weight update.
2. During the weight update, we awake the SGLang engine, so the paused memory of the model weights and KV cache comes back. We then update the serving model from the training model on the fly using `update_weights_in_tensor`.
3. After the model is updated, we delete the training model from GPU memory.

The above design works pretty well so far; however, it wastes a big chunk of GPU memory during rollout, which can cause a few issues we've seen:

- Small KV cache: with a relatively low memory fraction ratio (e.g. 0.6), the KV cache holds fewer tokens, and we hit `RuntimeError: Prefill out of memory. Try to lower your batch size.` when we prefill a large number of requests.
- Out of memory: with a memory fraction ratio of 0.8, RL for a 32B model on 8x H100 OOMs during the weight update.

Proposal

1. During training, we do the same as before.
2. During weight update stage 1, we awake the model weights in SGLang and then update the weights.
3. During weight update stage 2, we delete the training model weights from GPU memory.
4. We awake SGLang's KV cache.

Benefit

With the above feature, we can train larger models on the same GPUs, and we can also make training/rollout more efficient because we can allocate a larger KV cache.

Execution Plan: Keep using the singleton and provide tag-based pause/resume
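To make the plan concrete, here is a minimal usage sketch of what the tag-based API could look like from Python. The existing `region()` context manager and `pause()`/`resume()` calls come from torch_memory_saver; the `tag=` arguments are what this PR proposes, and their exact spelling here is an assumption for illustration rather than the confirmed final interface.

```python
import torch
from torch_memory_saver import torch_memory_saver  # singleton instance of the library

# Allocate the two groups of memory under different tags, so they can be
# paused/resumed independently instead of as one singleton-wide group.
with torch_memory_saver.region(tag="weights"):
    weights = torch.empty(1 * 1024**3, dtype=torch.uint8, device="cuda")   # serving model weights

with torch_memory_saver.region(tag="kv_cache"):
    kv_cache = torch.empty(4 * 1024**3, dtype=torch.uint8, device="cuda")  # KV cache pool

# Before training: release everything owned by the inference engine.
torch_memory_saver.pause(tag="kv_cache")
torch_memory_saver.pause(tag="weights")

# ... training happens here ...

# Multi-stage awake:
torch_memory_saver.resume(tag="weights")   # stage 1: bring back weights, then update them
# ... update_weights_in_tensor, then free the training copy of the weights ...
torch_memory_saver.resume(tag="kv_cache")  # stage 2: bring back the KV cache for rollout
```

Because pause/resume preserves virtual addresses, both tags can be captured in the same CUDA Graph and replayed after any number of pause/resume cycles.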