[Bug] RuntimeError: CUDA error: operation not permitted when stream is capturing #2544
Comments
It cannot be reproduced with the latest main branch.
I started getting this error with the PyTorch engine in the latest release for the Qwen2-VL model. I get the error with batch size >= 6; when batch size is 1, everything runs fine.
I still cannot reproduce the error. Since there is an sgemm cuBLAS error in the report, try replacing lmdeploy/lmdeploy/pytorch/backends/default/rotary_embedding.py, lines 33 to 34 (at 2e49fc3), with
> When batch size is 1, everything runs fine.

I have asked an expert; the error might come from the vision model running on the default stream, which would corrupt the capture of the language model in the other stream. I will try to fix it ASAP.
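For illustration only (this is not lmdeploy code), the interaction described above looks roughly like this: under the default `global` capture error mode, CUDA work issued on the default stream by another thread while a graph is being captured can abort the capture with exactly this error. Whether and where the error surfaces depends on allocator state and timing.

```python
import threading

import torch

g = torch.cuda.CUDAGraph()
x = torch.zeros(16, device="cuda")

def vision_forward():
    # stands in for the vision model running on the worker thread's default stream
    torch.randn(1024, 1024, device="cuda").sum()

with torch.cuda.graph(g):  # capture_error_mode defaults to "global"
    t = threading.Thread(target=vision_forward)
    t.start()              # default-stream work while the capture is active
    y = x * 2              # either thread may now hit
    t.join()               # "operation not permitted when stream is capturing"
```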
Another question: when I deploy with the Triton Python backend and enable dynamic batching, is it also easy to trigger exceptions because of CUDA graph captures for different batch sizes?
We capture multiple graphs with different input sizes, and the input is padded to the capture size before the forward pass, so it is safe to use dynamic batching.
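A rough sketch of that padding step, assuming a hypothetical `replay_captured_graph` helper (the real logic lives inside the PyTorch engine):

```python
import torch

def padded_forward(inputs: torch.Tensor, capture_size: int) -> torch.Tensor:
    """Pad a dynamic batch up to the size the graph was captured with."""
    batch = inputs.shape[0]
    if batch < capture_size:
        pad = inputs.new_zeros(capture_size - batch, *inputs.shape[1:])
        inputs = torch.cat([inputs, pad])
    out = replay_captured_graph(inputs)  # hypothetical: replay the pre-captured CUDA graph
    return out[:batch]                   # drop the outputs of the padded rows
```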
What is the specific capture strategy? For example, are the default capture batch sizes 1, 2, 4, 8, etc.? That way I can set the corresponding preferred batch size to get the best inference performance.
Another question out of curiosity: why does TurboMind support the 2B–76B InternVL2 models but not the 1B model? Are there any plans to support it in the future? @grimoire
I have set the capture mode to thread_local in https://github.com/grimoire/lmdeploy/tree/fix-vl-graphcapture, which might fix the bug.
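A minimal sketch, assuming PyTorch >= 2.1, of what `thread_local` capture mode looks like; with it, only the capturing thread is checked for illegal CUDA calls, so default-stream work issued by other threads (e.g. the vision model) no longer invalidates the capture:

```python
import torch

g = torch.cuda.CUDAGraph()
static_in = torch.zeros(4, 1024, device="cuda")

# only the capturing thread is checked for unsafe CUDA calls during capture
with torch.cuda.graph(g, capture_error_mode="thread_local"):
    static_out = static_in @ static_in.T  # placeholder for the language-model forward

# replay with new data written into the static input buffer
static_in.copy_(torch.randn(4, 1024, device="cuda"))
g.replay()
```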
The engine generates graphs for token numbers [1, 2, 4, ..., 256]. You don't have to worry much about that, since the PyTorch engine schedules requests onto the best batch size.
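Illustrative only (the real schedule lives inside the engine): with power-of-two capture sizes up to 256, a batch of 6 requests would be padded and replayed on the size-8 graph.

```python
def capture_size_for(num_tokens: int, max_size: int = 256) -> int:
    # smallest power-of-two capture size that can hold the request
    size = 1
    while size < num_tokens and size < max_size:
        size *= 2
    return size

assert capture_size_for(6) == 8
assert capture_size_for(1) == 1
assert capture_size_for(300) == 256  # clamped to the largest captured graph in this sketch
```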
InternVL2-1B uses Qwen2-0.5B as its language model, which has
It seems the branch is based not on the latest version but on 0.4.2. When installing, it downgraded PyTorch, and I get this error for Qwen2-VL: "Unrecognized configuration class <class 'transformers.models.qwen2_vl.configuration_qwen2_vl.Qwen2VLConfig'> for this kind of AutoModel".
Are you using the main branch of my repo? I have created a draft PR #2560; please try it.
Sorry, I forgot to switch branches! Yes, the issue doesn't occur when using the correct branch.
This method works well for me. Also, I'm curious whether there's any plan to bring TurboMind support to smaller models like InternVL2-1B, or is the workload too heavy to make it happen in the near future?
Hi! I encountered the same problem. When using Qwen2-VL-7B, it works fine when batch_size is 1, but the same error is reported when it is set to 4. I used the latest version of lmdeploy. Has this problem been fixed now? |
Checklist
Describe the bug
When loading InternVL2-1B with lmdeploy v0.6.0 and running inference in a loop, it raises "RuntimeError: CUDA error: operation not permitted when stream is capturing". I suspect this is related to the CUDA graph support added in v0.6.0.
Reproduction
```python
import os
import time

import torch

from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

device = "cuda"
pwd = os.path.abspath(os.path.dirname(__file__))
model_path = os.path.join(pwd, 'InternVL2-1B')
pipe = pipeline(model_path,
                backend_config=TurbomindEngineConfig(cache_max_entry_count=0.6))

BATCH_SIZE = 8
querys = [
    '图片中有海吗',  # "Is there a sea in the picture?"
] * BATCH_SIZE
image_paths = [os.path.join(pwd, "warmup/flag.jpg")] * BATCH_SIZE

# warm up with a single request
image = load_image(image_paths[1])
response = pipe((querys[1], image))

# batched inference
prompts = [(query, load_image(img_url)) for img_url, query in zip(image_paths, querys)]
response = pipe(prompts)
print(response)

_REPEAT = 100
torch.cuda.synchronize()
tic = time.time()
for _ in range(_REPEAT):
    response = pipe(prompts)
torch.cuda.synchronize()
toc = time.time()
print(response)
print(f'seconds per image: {(toc - tic) / BATCH_SIZE / _REPEAT}')
```
Environment
Error traceback