
Conversation

@BBuf BBuf commented Aug 31, 2023

This PR supports inference for the rwkv-4-world model by attaching an external tokenizer:

(image)

During inference, however, the sampler crashes when picking an index because the inference results are NaN. I traced it: the tokenizer output matches expectations, but the probabilities produced by the model are all NaN. How should I go about localizing this?

The error is as follows:

[09:25:43] /bbuf/relax/src/runtime/relax_vm/lm_support.cc:421: InternalError: Check failed: sampled_index >= 0 (-1 vs. 0) : 
Stack trace:
  [bt] (0) /bbuf/mlc-llm/build/tvm/libtvm_runtime.so(tvm::runtime::Backtrace[abi:cxx11]()+0x2c) [0x7f4db9c4861c]
  [bt] (1) ./mlc-llm/build/mlc_chat_cli(tvm::runtime::detail::LogFatal::Entry::Finalize()+0x3d) [0x56012b9da50d]
  [bt] (2) /bbuf/mlc-llm/build/tvm/libtvm_runtime.so(tvm::runtime::relax_vm::SampleTopPFromProb(tvm::runtime::NDArray, double, double)+0x2ec) [0x7f4db9cadd6c]
  [bt] (3) /mlc-llm/build/tvm/libtvm_runtime.so(tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<int (tvm::runtime::NDArray, double, double)>::AssignTypedLambda<int (*)(tvm::runtime::NDArray, double, double)>(int (*)(tvm::runtime::NDArray, double, double), std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)+0x229) [0x7f4db9cb7a29]
  [bt] (4) /bbuf/mlc-llm/build/libmlc_llm.so(mlc::llm::LLMChat::SampleTokenFromLogits(tvm::runtime::NDArray, float, float)+0x66c) [0x7f4db9ffdebc]
  [bt] (5) /bbuf/mlc-llm/build/libmlc_llm.so(mlc::llm::LLMChat::PrefillStep(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool, bool, mlc::llm::PlaceInPrompt)+0x1d0) [0x7f4dba001040]
  [bt] (6) /bbuf/mlc-llm/build/libmlc_llm.so(mlc::llm::LLMChatModule::GetFunction(tvm::runtime::String const&, tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#5}::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const+0x1f6) [0x7f4dba001466]
  [bt] (7) ./mlc-llm/build/mlc_chat_cli(+0xa858) [0x56012b9d0858]
  [bt] (8) ./mlc-llm/build/mlc_chat_cli(+0xeb80) [0x56012b9d4b80]
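The `sampled_index >= 0 (-1 vs. 0)` check fails because top-p sampling over an all-NaN probability vector can never accumulate past the sampling threshold. The following is a minimal illustrative sketch (not the actual TVM `SampleTopPFromProb` implementation; the function name and fixed draw are assumptions for demonstration) of why NaN probabilities fall through to `-1`:

```python
import numpy as np

def sample_top_p(prob, top_p=0.95):
    """Toy top-p sampler: walk probabilities in descending order and
    return the first index whose cumulative mass crosses a draw.
    Returns -1 if nothing is ever selected."""
    order = np.argsort(prob)[::-1]  # indices sorted by descending prob
    draw = 0.5 * top_p              # fixed draw, for determinism here
    cum = 0.0
    for idx in order:
        cum += prob[idx]
        if cum >= draw:       # any comparison with NaN is False,
            return int(idx)   # so all-NaN probs never take this branch
    return -1

print(sample_top_p(np.array([0.7, 0.2, 0.1])))  # prints 0
print(sample_top_p(np.full(8, np.nan)))         # prints -1
```

So the sampler assert is only a downstream symptom; the NaNs are introduced earlier, in the model forward pass.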

To reproduce: quantize this model following the official build steps: https://huggingface.co/StarRing2022/RWKV-4-World-1.5B. Then place the external tokenizer_model file into the params directory of the output model; the file is at https://github.com/daquexian/faster-rwkv/blob/master/tokenizer_model. The directory structure looks like:

(image)

That reproduces the issue.
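As a debugging aid (a sketch of my own, not code from this PR), a fail-fast check on the model output surfaces the problem with a clear message before it reaches the sampler as `sampled_index == -1`:

```python
import numpy as np

def check_logits(logits):
    """Raise immediately if the model output contains NaN/inf, so the
    failure is caught at the forward pass rather than in the sampler."""
    if not np.all(np.isfinite(logits)):
        bad = int(np.count_nonzero(~np.isfinite(logits)))
        raise ValueError(f"{bad} non-finite logits (NaN/inf)")
    return logits

check_logits(np.array([1.0, 2.0, 3.0]))  # finite logits pass through
```

Applying such a check layer by layer is essentially how the culprit token was localized in the follow-up below.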


BBuf commented Aug 31, 2023

Debugging layer by layer, I found that encoding the system prompt with the tokenizer produces the token 45; whenever token 45 (which corresponds to a comma) appears, precision blows up. The rwkv team's feedback is that this is most likely the "K explosion" phenomenon, i.e. for small models the first layer needs to run in fp32. I am switching to the 3b and 7b world models to verify whether the problem disappears.
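A minimal numpy sketch of the suspected failure mode (the activation value 300 is a hypothetical stand-in, not measured from the model): fp16 has a maximum finite value of 65504, so a large key activation overflows to inf, and arithmetic on inf then yields NaN that propagates through the rest of the network.

```python
import numpy as np

# fp16 tops out at 65504; squaring a moderately large activation overflows.
k = np.array([300.0], dtype=np.float16)
scores = k * k            # 90000 > 65504 -> inf in fp16
diff = scores - scores    # inf - inf -> NaN, which then propagates

# Keeping the first layer in fp32 avoids the overflow entirely:
k32 = k.astype(np.float32)
print(scores)             # [inf]
print(diff)               # [nan]
print(k32 * k32)          # [90000.]
```

This is consistent with the observation that only some tokens trigger the NaN: only prompts whose activations grow large enough hit the fp16 ceiling.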


BBuf commented Sep 1, 2023

After switching to the 3b and 7b models, they run successfully and the NaN no longer appears.

@BBuf BBuf marked this pull request as ready for review September 1, 2023 13:48

BBuf commented Sep 1, 2023

(image)

@Hzfengsy Hzfengsy merged commit 9434126 into mlc-ai:main Sep 2, 2023