
[Bug] chat with converted DeepSeek-V2-Lite-Chat model, raise RuntimeError #2840

Open
zhulinJulia24 opened this issue Dec 2, 2024 · 0 comments
Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the issue you submit lacks corresponding environment info and a minimal reproducible demo, it will be hard for us to reproduce and resolve it, which reduces the likelihood of a response.

Describe the bug

Chatting with a converted DeepSeek-V2-Lite-Chat model fails with a RuntimeError raised from /lmdeploy/src/turbomind/models/llama/LlamaDecoderLayerWeight.cc:275.

Similar to #2689.

Reproduction

  1. convert
    lmdeploy convert deepseek /nvme/qa_test_models/deepseek-ai/DeepSeek-V2-Lite-Chat --dst-path /nvme/qa_test_models/autotest_model/workspace_deepseek-ai/DeepSeek-V2-Lite-Chat --tp 1
  2. chat
    lmdeploy chat /nvme/qa_test_models/autotest_model/workspace_deepseek-ai/DeepSeek-V2-Lite-Chat
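For reference, the same weight-loading failure can presumably also be triggered from the Python API, since loading a turbomind workspace goes through TurboMind.from_pretrained (see the traceback below). A minimal sketch, assuming the workspace produced in step 1 and the public pipeline API:

    # Minimal sketch of the same reproduction through the Python API.
    # Assumptions: the workspace path from step 1 above, and that loading a
    # turbomind workspace via the pipeline API takes the same weight-loading
    # path as `lmdeploy chat` (see the traceback below).
    from lmdeploy import pipeline, TurbomindEngineConfig

    workspace = ('/nvme/qa_test_models/autotest_model/workspace_deepseek-ai/'
                 'DeepSeek-V2-Lite-Chat')
    # Creating the pipeline raises the RuntimeError while the shared weights
    # are being created, before any prompt is processed.
    pipe = pipeline(workspace, backend_config=TurbomindEngineConfig(tp=1))

Output of the lmdeploy chat command: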
2024-12-02 11:08:31,364 - lmdeploy - WARNING - supported_models.py:104 - /nvme/qa_test_models/autotest_model/workspace_deepseek-ai/DeepSeek-V2-Lite-Chat seems to be a turbomind workspace, which can only be ran with turbomind engine.
chat_template_config:
ChatTemplateConfig(model_name='deepseek', system=None, meta_instruction=None, eosys=None, user=None, eoh=None, assistant=None, eoa=None, separator=None, capability='chat', stop_words=None)
engine_cfg:
TurbomindEngineConfig(dtype='auto', model_format=None, tp=1, session_len=163840, max_batch_size=1, cache_max_entry_count=0.8, cache_chunk_size=-1, cache_block_seq_len=64, enable_prefix_caching=False, quant_policy=0, rope_scaling_factor=0.0, use_logn_attn=False, download_dir=None, revision=None, max_prefill_token_num=8192, num_tokens_per_iter=0, max_prefill_iters=1)
 [TM][ERROR] /nvme/qa_test_models/autotest_model/workspace_deepseek-ai/DeepSeek-V2-Lite-Chat/triton_models/weights/layers.0.attention.w_qkv.0.weight and /nvme/qa_test_models/autotest_model/workspace_deepseek-ai/DeepSeek-V2-Lite-Chat/triton_models/weights/layers.0.attention.w_qkv.0.qweight does not exist
Traceback (most recent call last):
  File "/home/zhulin1/miniconda3/envs/v62new/bin/lmdeploy", line 8, in <module>
    sys.exit(run())
  File "/home/zhulin1/miniconda3/envs/v62new/lib/python3.10/site-packages/lmdeploy/cli/entrypoint.py", line 42, in run
    args.run(args)
  File "/home/zhulin1/miniconda3/envs/v62new/lib/python3.10/site-packages/lmdeploy/cli/cli.py", line 282, in chat
    run_chat(**kwargs)
  File "/home/zhulin1/miniconda3/envs/v62new/lib/python3.10/site-packages/lmdeploy/turbomind/chat.py", line 116, in main
    tm_model = tm.TurboMind.from_pretrained(model_path,
  File "/home/zhulin1/miniconda3/envs/v62new/lib/python3.10/site-packages/lmdeploy/turbomind/turbomind.py", line 302, in from_pretrained
    return cls(model_path=pretrained_model_name_or_path,
  File "/home/zhulin1/miniconda3/envs/v62new/lib/python3.10/site-packages/lmdeploy/turbomind/turbomind.py", line 105, in __init__
    self.model_comm = self._from_workspace(
  File "/home/zhulin1/miniconda3/envs/v62new/lib/python3.10/site-packages/lmdeploy/turbomind/turbomind.py", line 271, in _from_workspace
    self._create_weight(model_comm)
  File "/home/zhulin1/miniconda3/envs/v62new/lib/python3.10/site-packages/lmdeploy/turbomind/turbomind.py", line 152, in _create_weight
    future.result()
  File "/home/zhulin1/miniconda3/envs/v62new/lib/python3.10/concurrent/futures/_base.py", line 458, in result
    return self.__get_result()
  File "/home/zhulin1/miniconda3/envs/v62new/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/home/zhulin1/miniconda3/envs/v62new/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/zhulin1/miniconda3/envs/v62new/lib/python3.10/site-packages/lmdeploy/turbomind/turbomind.py", line 145, in _create_weight_func
    model_comm.create_shared_weights(device_id, rank)
RuntimeError: [TM][ERROR]  Assertion fail: /lmdeploy/src/turbomind/models/llama/LlamaDecoderLayerWeight.cc:231
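To narrow the problem down, one can check which attention weight files the convert step actually produced. A minimal diagnostic sketch, assuming the workspace layout shown in the [TM][ERROR] message above:

    # Sketch: list the layer-0 attention weight files the converter wrote.
    # The directory layout (triton_models/weights/...) is taken from the
    # [TM][ERROR] message above; adjust the workspace path to your own setup.
    from pathlib import Path

    workspace = Path('/nvme/qa_test_models/autotest_model/workspace_deepseek-ai/'
                     'DeepSeek-V2-Lite-Chat')
    weights_dir = workspace / 'triton_models' / 'weights'

    for f in sorted(weights_dir.glob('layers.0.attention.*')):
        print(f.name)

    # If neither layers.0.attention.w_qkv.0.weight nor
    # layers.0.attention.w_qkv.0.qweight is listed, the converter did not
    # export the fused QKV tensor that the turbomind engine looks for when
    # loading the workspace, which matches the assertion failure above.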

Environment

sys.platform: linux
Python: 3.10.15 (main, Oct  3 2024, 07:27:34) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0,1,2,3,4,5,6,7: NVIDIA A100-SXM4-80GB
CUDA_HOME: /usr/local/cuda-11.7
NVCC: Cuda compilation tools, release 11.7, V11.7.64
GCC: gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44)
PyTorch: 2.3.0+cu118
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.3.6 (Git Hash 86e6af5974177e513fd3fee58425e1063e7f1361)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 11.8
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_90,code=sm_90
  - CuDNN 8.7
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.8, CUDNN_VERSION=8.7.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.3.0, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, 

TorchVision: 0.18.0+cu118
LMDeploy: 0.6.3+
transformers: 4.46.2
gradio: 5.5.0
fastapi: 0.115.4
pydantic: 2.9.2
triton: 2.3.0
NVIDIA Topology: 
        GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    CPU Affinity    NUMA Affinity
GPU0     X      NV12    NV12    NV12    NV12    NV12    NV12    NV12    0-27,56-83      0
GPU1    NV12     X      NV12    NV12    NV12    NV12    NV12    NV12    0-27,56-83      0
GPU2    NV12    NV12     X      NV12    NV12    NV12    NV12    NV12    0-27,56-83      0
GPU3    NV12    NV12    NV12     X      NV12    NV12    NV12    NV12    0-27,56-83      0
GPU4    NV12    NV12    NV12    NV12     X      NV12    NV12    NV12    28-55,84-111    1
GPU5    NV12    NV12    NV12    NV12    NV12     X      NV12    NV12    28-55,84-111    1
GPU6    NV12    NV12    NV12    NV12    NV12    NV12     X      NV12    28-55,84-111    1
GPU7    NV12    NV12    NV12    NV12    NV12    NV12    NV12     X      28-55,84-111    1

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

Error traceback

No response

lvhan028 self-assigned this Dec 2, 2024