
[Bug] Error during conversation in the gradio interface #508

Open · 1 of 3 tasks
tuncha opened this issue Aug 16, 2024 · 1 comment

Comments

tuncha commented Aug 16, 2024

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the issue you submit lacks environment info and a minimal reproducible demo, it will be hard for us to reproduce and resolve it, which reduces the likelihood of receiving feedback.

Describe the bug

Hello, I am testing a gradio service deployed on a server with InternVL2-40B. After one round of conversation with an uploaded image, if I click Reset to clear the interface and upload a new image, I get RuntimeError: Current event loop is different from the one bound to loop task! The same error also occurs when holding a multi-round conversation about a single image. Is this caused by accumulated history, or am I operating it incorrectly?
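
For readers hitting the same error: the message points to an asyncio event-loop mismatch. An object bound to one loop (a task, queue, or streaming generator) is later driven from a different loop, which can happen when the gradio Reset callback or a new request runs on a different loop/thread than the one the inference session was started on. Below is a minimal, self-contained sketch of that pattern; the Engine class and its guard are illustrative stand-ins, not lmdeploy's actual implementation.

import asyncio

class Engine:
    """Illustrative stand-in for an async inference engine."""

    def __init__(self):
        self.bound_loop = None

    async def start(self):
        # Remember the event loop the engine was started on.
        self.bound_loop = asyncio.get_running_loop()

    async def infer(self):
        # Simplified guard that mimics the reported message.
        if asyncio.get_running_loop() is not self.bound_loop:
            raise RuntimeError(
                "Current event loop is different from the one bound to loop task!")
        return "ok"

engine = Engine()

loop_a = asyncio.new_event_loop()
loop_a.run_until_complete(engine.start())  # first chat round binds loop_a

loop_b = asyncio.new_event_loop()          # e.g. a fresh loop after Reset
loop_b.run_until_complete(engine.infer())  # raises the RuntimeError above

If this is what happens in the demo, it would explain why both Reset and multi-round chat can trigger it: either path may resume the session from a loop other than the one it was created on.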

Reproduction

CUDA_VISIBLE_DEVICES=1,2 lmdeploy serve gradio /OpenGVLab/InternVL2-40B --model-name InternVL2-40B --backend turbomind --server-port 23333 --tp 2 --chat-template /data/personal/fengtianyi/code/MLLM/InternVL-main/chat.json

Environment

sys.platform: linux
Python: 3.9.19 | packaged by conda-forge | (main, Mar 20 2024, 12:50:21) [GCC 12.3.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0,1,2,3,4,5,6,7: NVIDIA A800 80GB PCIe
CUDA_HOME: /usr/local/cuda-11.7
NVCC: Cuda compilation tools, release 11.7, V11.7.64
GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
PyTorch: 2.0.1
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2022.1-Product Build 20220311 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.7
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
  - CuDNN 8.5
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, 

TorchVision: 0.15.2
LMDeploy: 0.5.3+
transformers: 4.37.2
gradio: 4.41.0
fastapi: 0.112.0
pydantic: 2.8.2
triton: 2.3.1
NVIDIA Topology: 
        GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    NIC0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV8     PXB     PXB     PXB     PXB     PXB     PXB     SYS     0-31,64-95      0               N/A
GPU1    NV8      X      PXB     PXB     PXB     PXB     PXB     PXB     SYS     0-31,64-95      0               N/A
GPU2    PXB     PXB      X      PXB     PXB     PXB     PXB     NV8     SYS     0-31,64-95      0               N/A
GPU3    PXB     PXB     PXB      X      NV8     PXB     PXB     PXB     SYS     0-31,64-95      0               N/A
GPU4    PXB     PXB     PXB     NV8      X      PXB     PXB     PXB     SYS     0-31,64-95      0               N/A
GPU5    PXB     PXB     PXB     PXB     PXB      X      NV8     PXB     SYS     0-31,64-95      0               N/A
GPU6    PXB     PXB     PXB     PXB     PXB     NV8      X      PXB     SYS     0-31,64-95      0               N/A
GPU7    PXB     PXB     NV8     PXB     PXB     PXB     PXB      X      SYS     0-31,64-95      0               N/A
NIC0    SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS      X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_bond_0

Error traceback

No response

tuncha changed the title from [Bug] to [Bug] Error during conversation in the gradio interface on Aug 16, 2024
G-z-w (Collaborator) commented Aug 26, 2024

This may be a problem with the lmdeploy server. You can try upgrading lmdeploy. If the problem persists, please refer to this issue: link
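
As a concrete follow-up to the suggestion above (assuming a pip-managed environment), the quickest check is:

pip install -U lmdeploy

then relaunch the server with the same command as in the Reproduction section, to rule out a server-side bug that a newer release may already have fixed.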
