🐛 Bug
In `AutoModel`'s `inference` function, the `deep_update` call at line 255 causes `kwargs['cache']` to no longer be the `cache` dict passed in by the caller.
To Reproduce
The following code produces garbled output:
```python
import os

import soundfile
from funasr.auto.auto_model import AutoModel

chunk_size = [0, 10, 5]  # [0, 10, 5] is 600ms, [0, 8, 4] is 480ms
encoder_chunk_look_back = 4  # number of chunks to look back for encoder self-attention
decoder_chunk_look_back = 1  # number of encoder chunks to look back for decoder cross-attention

model = AutoModel(model="paraformer-zh-streaming")

wav_file = os.path.join(model.model_path, "example/asr_example.wav")
speech, sample_rate = soundfile.read(wav_file)
chunk_stride = chunk_size[1] * 960  # 600ms

# Two independent caches: each should accumulate its own streaming state.
cache = {}
cache1 = {}
print(id(cache), id(cache1))
result = ''
result1 = ''

total_chunk_num = int((len(speech) - 1) / chunk_stride + 1)
for i in range(total_chunk_num):
    speech_chunk = speech[i * chunk_stride:(i + 1) * chunk_stride]
    is_final = i == total_chunk_num - 1
    res = model.generate(input=speech_chunk, cache=cache, is_final=is_final,
                         chunk_size=chunk_size,
                         encoder_chunk_look_back=encoder_chunk_look_back,
                         decoder_chunk_look_back=decoder_chunk_look_back)
    res1 = model.generate(input=speech_chunk, cache=cache1, is_final=is_final,
                          chunk_size=chunk_size,
                          encoder_chunk_look_back=encoder_chunk_look_back,
                          decoder_chunk_look_back=decoder_chunk_look_back)
    if res:
        print(res[0]['text'])
        result += res[0]['text']
    if res1:
        print(res1[0]['text'])
        result1 += res1[0]['text']

print(result)
print(result1)
```
Expected behavior
Changing line 255 to `kwargs.update(cfg)` makes the output correct again.
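For context, a minimal sketch of why a deep merge can break this. The `deep_update` below is a hypothetical stand-in mirroring a common deep-merge pattern (the actual FunASR implementation may differ): because it copies values from the overriding dict, `kwargs['cache']` ends up as a copy, so streaming state written into it never reaches the caller's `cache` dict across chunks, whereas a plain `dict.update` keeps the reference.

```python
import copy


def deep_update(d, u):
    # Hypothetical deep merge: nested dicts are merged recursively,
    # and other values from `u` are deep-copied into `d`.
    for k, v in u.items():
        if isinstance(v, dict) and isinstance(d.get(k), dict):
            deep_update(d[k], v)
        else:
            d[k] = copy.deepcopy(v)
    return d


cache = {}  # the caller's streaming cache

kwargs = {}
deep_update(kwargs, {"cache": cache})
print(kwargs["cache"] is cache)   # False: identity lost, caller's cache stays empty

kwargs2 = {}
kwargs2.update({"cache": cache})  # plain dict.update
print(kwargs2["cache"] is cache)  # True: the reference is preserved
```

This is why the streaming example above misbehaves: the model mutates its own copy of the cache, and each `generate` call effectively starts from empty state.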
OK, we will check it.