🔥 Transformers.js v3.2 – Moonshine for real-time speech recognition, Phi-3.5 Vision for multi-frame image understanding and reasoning, and more!
Table of contents:
- 🤖 New models: Moonshine, Phi-3.5 Vision, EXAONE
- 🐛 Bug fixes
- 🛠️ Other improvements
🤖 New models: Moonshine, Phi-3.5 Vision, EXAONE
Moonshine for real-time speech recognition
Moonshine is a family of speech-to-text models optimized for fast and accurate automatic speech recognition (ASR) on resource-constrained devices. They are well-suited to real-time, on-device applications like live transcription and voice command recognition, and are perfect for in-browser usage (demo coming soon). See #1099 for more information and here for the list of supported models.
Example: Automatic speech recognition w/ Moonshine tiny.
import { pipeline } from "@huggingface/transformers";
const transcriber = await pipeline("automatic-speech-recognition", "onnx-community/moonshine-tiny-ONNX");
const output = await transcriber("https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav");
console.log(output);
// { text: 'And so my fellow Americans ask not what your country can do for you as what you can do for your country.' }
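Since Moonshine is aimed at real-time, on-device use, here is a minimal sketch (not the upcoming demo) of live transcription from the microphone in the browser. It buffers roughly 5-second chunks at 16 kHz with the Web Audio API and transcribes each chunk; the chunk length, the deprecated ScriptProcessorNode, and the lack of overlap between chunks are simplifications you would replace (e.g. with an AudioWorklet and smarter chunking) in a real application.
import { pipeline } from "@huggingface/transformers";
const transcriber = await pipeline("automatic-speech-recognition", "onnx-community/moonshine-tiny-ONNX");
// Capture microphone audio at 16 kHz (the sampling rate Moonshine expects)
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
const context = new AudioContext({ sampleRate: 16000 });
const source = context.createMediaStreamSource(stream);
const recorder = context.createScriptProcessor(4096, 1, 1);
let chunks = [];
let busy = false;
recorder.onaudioprocess = async (event) => {
  // Copy the samples, since the browser reuses the underlying buffer
  chunks.push(new Float32Array(event.inputBuffer.getChannelData(0)));
  const numSamples = chunks.reduce((sum, c) => sum + c.length, 0);
  if (numSamples >= 5 * 16000 && !busy) { // ~5 seconds of audio
    const audio = new Float32Array(numSamples);
    let offset = 0;
    for (const c of chunks) { audio.set(c, offset); offset += c.length; }
    chunks = [];
    busy = true;
    const { text } = await transcriber(audio); // Float32Array input is also supported
    busy = false;
    console.log(text);
  }
};
source.connect(recorder);
recorder.connect(context.destination);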
See example using the MoonshineForConditionalGeneration API
import { MoonshineForConditionalGeneration, AutoProcessor, read_audio } from "@huggingface/transformers";
// Load model and processor
const model_id = "onnx-community/moonshine-tiny-ONNX";
const model = await MoonshineForConditionalGeneration.from_pretrained(model_id, {
dtype: "q4",
});
const processor = await AutoProcessor.from_pretrained(model_id);
// Load audio and prepare inputs
const audio = await read_audio("https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav", 16000);
const inputs = await processor(audio);
// Generate outputs
const outputs = await model.generate({ ...inputs, max_new_tokens: 100 });
// Decode outputs
const decoded = processor.batch_decode(outputs, { skip_special_tokens: true });
console.log(decoded[0]);
// And so my fellow Americans ask not what your country can do for you, ask what you can do for your country.
Phi-3.5 Vision for multi-frame image understanding and reasoning
Phi-3.5 Vision is a lightweight, state-of-the-art, open multimodal model that can be used for multi-frame image understanding and reasoning. See #1094 for more information and here for the list of supported models.
Examples:
See example code
Example: Single-frame (critique an image)
import {
AutoProcessor,
AutoModelForCausalLM,
TextStreamer,
load_image,
} from "@huggingface/transformers";
// Load processor and model
const model_id = "onnx-community/Phi-3.5-vision-instruct";
const processor = await AutoProcessor.from_pretrained(model_id, {
legacy: true, // Use legacy to match python version
});
const model = await AutoModelForCausalLM.from_pretrained(model_id, {
dtype: {
vision_encoder: "q4", // 'q4' or 'q4f16'
prepare_inputs_embeds: "q4", // 'q4' or 'q4f16'
model: "q4f16", // 'q4f16'
},
});
// Load image
const image = await load_image("https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/meme.png");
// Prepare inputs
const messages = [
{ role: "user", content: "<|image_1|>What's funny about this image?" },
];
const prompt = processor.tokenizer.apply_chat_template(messages, {
tokenize: false,
add_generation_prompt: true,
});
const inputs = await processor(prompt, image, { num_crops: 4 }); // num_crops controls how many crops the image is split into for higher-detail processing
// (Optional) Set up text streamer
const streamer = new TextStreamer(processor.tokenizer, {
skip_prompt: true,
skip_special_tokens: true,
});
// Generate response
const output = await model.generate({
...inputs,
streamer,
max_new_tokens: 256,
});
Or, decode the output at the end:
// Decode and display the answer
const generated_ids = output.slice(null, [inputs.input_ids.dims[1], null]);
const answer = processor.batch_decode(generated_ids, {
skip_special_tokens: true,
});
console.log(answer[0]);
Example: Multi-frame (summarize slides)
import {
AutoProcessor,
AutoModelForCausalLM,
TextStreamer,
load_image,
} from "@huggingface/transformers";
// Load processor and model
const model_id = "onnx-community/Phi-3.5-vision-instruct";
const processor = await AutoProcessor.from_pretrained(model_id, {
legacy: true, // Use legacy to match python version
});
const model = await AutoModelForCausalLM.from_pretrained(model_id, {
dtype: {
vision_encoder: "q4", // 'q4' or 'q4f16'
prepare_inputs_embeds: "q4", // 'q4' or 'q4f16'
model: "q4f16", // 'q4f16'
},
});
// Load images
const urls = [
"https://image.slidesharecdn.com/azureintroduction-191206101932/75/Introduction-to-Microsoft-Azure-Cloud-1-2048.jpg",
"https://image.slidesharecdn.com/azureintroduction-191206101932/75/Introduction-to-Microsoft-Azure-Cloud-2-2048.jpg",
"https://image.slidesharecdn.com/azureintroduction-191206101932/75/Introduction-to-Microsoft-Azure-Cloud-3-2048.jpg",
];
const images = await Promise.all(urls.map(load_image));
// Prepare inputs
const placeholder = images.map((_, i) => `<|image_${i + 1}|>\n`).join("");
const messages = [
{ role: "user", content: placeholder + "Summarize the deck of slides." },
];
const prompt = processor.tokenizer.apply_chat_template(messages, {
tokenize: false,
add_generation_prompt: true,
});
const inputs = await processor(prompt, images, { num_crops: 4 });
// (Optional) Set up text streamer
const streamer = new TextStreamer(processor.tokenizer, {
skip_prompt: true,
skip_special_tokens: true,
});
// Generate response
const output = await model.generate({
...inputs,
streamer,
max_new_tokens: 256,
});
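As in the single-frame example, you can also decode the full response at the end instead of (or in addition to) streaming:
// Decode and display the answer
const generated_ids = output.slice(null, [inputs.input_ids.dims[1], null]);
const answer = processor.batch_decode(generated_ids, {
  skip_special_tokens: true,
});
console.log(answer[0]);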
EXAONE 3.5 for bilingual (English and Korean) text generation
EXAONE 3.5 is a collection of instruction-tuned bilingual (English and Korean) generative models, developed and released by LG AI Research. See #1084 for more information and here for the list of supported models.
Example: Text-generation w/ EXAONE-3.5-2.4B-Instruct:
import { pipeline } from "@huggingface/transformers";
// Create a text generation pipeline
const generator = await pipeline(
"text-generation",
"onnx-community/EXAONE-3.5-2.4B-Instruct",
{ dtype: "q4f16" },
);
// Define the list of messages
const messages = [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: "Tell me a joke." },
];
// Generate a response
const output = await generator(messages, { max_new_tokens: 128 });
console.log(output[0].generated_text.at(-1).content);
See example output
Sure! Here's a light joke for you:
Why don't scientists trust atoms?
Because they make up everything!
I hope you found that amusing! If you want another one, feel free to ask!
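To print tokens as they are generated, you can also pass a TextStreamer to the pipeline call, using the same streamer options shown in the Phi-3.5 Vision examples above. This is a minimal sketch; it assumes the generation options given to the pipeline (including the streamer) are forwarded to generate.
import { pipeline, TextStreamer } from "@huggingface/transformers";
// Create a text generation pipeline
const generator = await pipeline(
  "text-generation",
  "onnx-community/EXAONE-3.5-2.4B-Instruct",
  { dtype: "q4f16" },
);
// Stream the response to the console as it is generated
const streamer = new TextStreamer(generator.tokenizer, {
  skip_prompt: true,
  skip_special_tokens: true,
});
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Tell me a joke." },
];
const output = await generator(messages, { max_new_tokens: 128, streamer });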
🐛 Bug fixes
- Fix pyannote processor post_process_speaker_diarization in #1082. Thanks to @patrick-ve for reporting the issue!
🛠️ Other improvements
Full Changelog: 3.1.2...3.2.0