Unsupported model type: whisper and CORS error #314
static async getInstance(progress_callback = null) {
  if (this.instance === null) {
    // pipeline() returns a Promise; the (still unresolved) promise itself is cached
    this.instance = pipeline(this.task, this.model, {
      quantized: this.quantized,
      progress_callback,
    });
  }
  console.log("inside", this.instance);
  return this.instance;
}

When I log this.instance here, it shows Promise {<pending>}; expanding it in the console reveals:
[[Prototype]]: Promise
[[PromiseState]]: "rejected"
[[PromiseResult]]: Error: Unsupported model type: whisper
    at AutoModelForCTC.from_pretrained (webpack-internal:///./node_modules/.pnpm/@xenova+transformers@2.6.0/node_modules/@xenova/transformers/src/models.js:3550:19)
    at async eval (webpack-internal:///./node_modules/.pnpm/@xenova+transformers@2.6.0/node_modules/@xenova/transformers/src/pipelines.js:2087:33)
Hi there. I believe this is due to an issue we just fixed in v2.6.1 (related to minification). Could you please upgrade to v2.6.1 and try again? Thanks!
I just upgraded to v2.6.1, and the same error persists.
Could you please post information about your environment, e.g., OS, browser, build tools? I am aware of a similar issue affecting users of create-react-app; if that is the case here, please switch to a more up-to-date build tool like Vite.
OS: Windows 11
We are using Next.js. There is no Vite support for Next.js applications.
Oh my apologies, I misread "create-next-app" as "create-react-app". Sorry about that! Could you post any information about your build process, such as any minification taking place?
I am facing this locally, on the development server, without minification.
Do you perhaps have a repo where I can try to reproduce this? Or could you post your next.config.js? Thanks!
We are currently working in a private repo; we can share it later if required, but it needs some preparation. For now, here's the next.config.js:

/** @type {import('next').NextConfig} */
const nextConfig = {
reactStrictMode: true,
compress: false,
images: {
loader: "akamai",
path: "",
},
compiler: {
// Enables the styled-components SWC transform
styledComponents: true,
},
// lessLoaderOptions: {
// lessOptions: {
// javascriptEnabled: true,
// },
// },
webpack(config) {
config.module.rules.push({
test: /\.svg$/,
use: ["@svgr/webpack"],
});
return config;
},
};
module.exports = nextConfig;
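(As an aside: the Transformers.js Next.js example config, as I recall it, aliases out Node-only packages for client-side bundles. If relevant, something like the following could be merged into the webpack(config) function above; this is a sketch, not part of the poster's config.)

webpack(config) {
  // Sketch: skip Node-only packages when bundling for the browser
  config.resolve.alias = {
    ...config.resolve.alias,
    "sharp$": false,
    "onnxruntime-node$": false,
  };
  return config;
},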
And which versions of Node / Next.js / npm are you using?
Next version: 13.4.13
This might be the issue. In the docs, we recommend a minimum Node version of 18; 16.x has reached EOL. Could you try upgrading?
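(A quick, hypothetical runtime guard to surface this; it is not part of Transformers.js:)

// Warn when running on a Node version below the documented minimum.
const major = Number(process.versions.node.split(".")[0]);
if (major < 18) {
  console.warn(`Node ${process.version} detected; Transformers.js recommends Node >= 18.`);
}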
I tried to run a whisper model via …
@szprytny Could you provide more information about your environment? Are you using the latest version of Transformers.js?
I have … The pipeline went further, but I got the error: …
And which bundler are you using? I am aware of issues with create-react-app; I haven't had any problems with Vite, for example.
Yes, this is because you exported with …
I did not run it as a web app; I just tried to do inference using a plain Node script run with …
@szprytny Can you provide some sample code which resulted in this error? |
It seems that error … is misleading, as the real problem was that my model has a newer IR version. Here is the script I used to run it:

import { WaveFile } from "wavefile";
import path from "path";
import { readFileSync } from "fs";
import { pipeline, env } from "@xenova/transformers";

// Resolve models from a local directory instead of the Hugging Face Hub
env.localModelPath = "c:/model/onnx/";

// Decode a WAV file into 32-bit float samples at 16 kHz, the format
// expected by the ASR pipeline
const prepareAudio = (filePath: string): Float64Array => {
  const wav = new WaveFile(readFileSync(path.normalize(filePath)));
  wav.toBitDepth("32f");
  wav.toSampleRate(16000);
  let audioData = wav.getSamples();
  return audioData;
};

const test = async () => {
  // "shmisper" is a locally converted Whisper model under env.localModelPath
  let pipe = await pipeline("automatic-speech-recognition", "shmisper", {
    local_files_only: true,
  });
  let out = await pipe(prepareAudio("c:/content/01_0.wav"));
  console.log(out);
};

test();
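(One caveat on the script above: for stereo files, wav.getSamples() returns one array per channel, so the channels may need to be merged into mono first — a sketch along the lines of the Transformers.js Node audio examples:)

// Sketch: collapse stereo channels to a single mono signal before
// passing the samples to the pipeline.
let audioData = wav.getSamples();
if (Array.isArray(audioData) && audioData.length > 1) {
  const SCALING_FACTOR = Math.sqrt(2);
  for (let i = 0; i < audioData[0].length; ++i) {
    audioData[0][i] = (SCALING_FACTOR * (audioData[0][i] + audioData[1][i])) / 2;
  }
  audioData = audioData[0];
}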
I see... Indeed, that error message would be quite misleading. Could you try downgrading to …
I have the exact same problem. I changed the onnx version to 1.13.1; the small model works, but the medium and large-v2 models do not.
Having the same issue as the main thread: …
You mentioned here that we should use …
Yes, we use optimum behind the scenes. The purpose of the conversion script is to also perform quantization afterwards, but if this is not necessary for your use case, you can use optimum directly and just structure the repo like the other Transformers.js models on the HF Hub.
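(For illustration, converted repos on the Hub are typically laid out roughly as below; file names follow the existing Xenova Whisper repos, and the exact set of files may vary:)

shmisper/
├── config.json
├── generation_config.json
├── preprocessor_config.json
├── tokenizer.json
├── tokenizer_config.json
└── onnx/
    ├── encoder_model.onnx
    └── decoder_model_merged.onnx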
I converted the …
@xenova I could reproduce this error on the v3 branch with the example …