Where should I get the decoder_model_merged file from?
#917
I think it could be related to: xenova/whisper-web#24

Can you try with the Transformers.js v3 conversion script?

```sh
git clone -b v3 https://github.com/xenova/transformers.js.git
cd transformers.js
pip install -q -r scripts/requirements.txt
python -m scripts.convert --quantize --model_id MODEL_ID_GOES_HERE
```
@xenova one last update (in the meantime): I would love to hear any further feedback from you, as we really want to integrate it. Thank you very much for your work!
I am also curious to learn more about the conversion flow — specifically, I'd like to know how the timestamped models like this were trained. I have also run into issues with many quantization variants simply not working.
Can you please help resolve this issue? The Python inference code (I just changed the model path):

```python
from transformers import AutoProcessor, pipeline
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
from datasets import load_dataset

processor = AutoProcessor.from_pretrained("optimum/whisper-tiny.en")
model = ORTModelForSpeechSeq2Seq.from_pretrained("optimum/whisper-tiny.en")

speech_recognition = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
)

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
pred = speech_recognition(ds[0]["audio"]["array"])
```
Question

Hey,

I'm trying to use the whisper-web demo with my finetuned model. After I managed to connect my model to the demo application, I'm getting errors related to this:
https://github.com/xenova/transformers.js/blob/7f5081da29c3f77ee830269ab801344776e61bcb/src/models.js#L771
Basically, when transformers.js tries to load a Whisper model, it looks for files called decoder_model_merged.onnx / decoder_model_merged_quantized.onnx / decoder_model_merged_fp16.onnx. The thing is, the conversion script didn't create any of these files.

This is what the conversion script output looks like:

Please help me figure out what I'm missing here.
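As a quick sanity check, here is a minimal sketch that lists which of the expected merged-decoder files the conversion actually produced. The file names are the ones transformers.js reportedly looks for (as described in this thread); the output directory path in the usage comment is a placeholder you should adjust.

```python
import os

# File names that transformers.js looks for when loading a Whisper decoder
# (taken from the error described in this thread).
EXPECTED_DECODER_FILES = [
    "decoder_model_merged.onnx",
    "decoder_model_merged_quantized.onnx",
    "decoder_model_merged_fp16.onnx",
]

def missing_decoder_files(onnx_dir: str) -> list:
    """Return the expected merged-decoder files that are absent from onnx_dir."""
    present = set(os.listdir(onnx_dir)) if os.path.isdir(onnx_dir) else set()
    return [name for name in EXPECTED_DECODER_FILES if name not in present]

# Usage (placeholder path): missing_decoder_files("models/my-finetuned-whisper/onnx")
```

If this returns a non-empty list for your conversion output directory, the script never wrote the merged decoder, which matches the symptom above.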
P.S. Once I get it to work, I'll be happy to open a PR on the whisper-web repository to enable using local models together with remote (HF Hub) models. Thanks!