
3.2.0

@xenova xenova released this 15 Dec 18:11

πŸ”₯ Transformers.js v3.2 β€” Moonshine for real-time speech recognition, Phi-3.5 Vision for multi-frame image understanding and reasoning, and more!

Table of contents:

  • πŸ€– New models: Moonshine, Phi-3.5 Vision, EXAONE
  • πŸ› Bug fixes
  • πŸ› οΈ Other improvements

πŸ€– New models: Moonshine, Phi-3.5 Vision, EXAONE

Moonshine for real-time speech recognition

Moonshine is a family of speech-to-text models optimized for fast and accurate automatic speech recognition (ASR) on resource-constrained devices. They are well-suited to real-time, on-device applications like live transcription and voice command recognition, and are perfect for in-browser usage (demo coming soon). See #1099 for more information and here for the list of supported models.

Example: Automatic speech recognition w/ Moonshine tiny.

import { pipeline } from "@huggingface/transformers";

const transcriber = await pipeline("automatic-speech-recognition", "onnx-community/moonshine-tiny-ONNX");
const output = await transcriber("https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav");
console.log(output);
// { text: 'And so my fellow Americans ask not what your country can do for you as what you can do for your country.' }
See example using the MoonshineForConditionalGeneration API
import { MoonshineForConditionalGeneration, AutoProcessor, read_audio } from "@huggingface/transformers";

// Load model and processor
const model_id = "onnx-community/moonshine-tiny-ONNX";
const model = await MoonshineForConditionalGeneration.from_pretrained(model_id, {
    dtype: "q4",
});
const processor = await AutoProcessor.from_pretrained(model_id);

// Load audio and prepare inputs
const audio = await read_audio("https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav", 16000);
const inputs = await processor(audio);

// Generate outputs
const outputs = await model.generate({ ...inputs, max_new_tokens: 100 });

// Decode outputs
const decoded = processor.batch_decode(outputs, { skip_special_tokens: true });
console.log(decoded[0]);
// And so my fellow Americans ask not what your country can do for you, ask what you can do for your country.

Phi-3.5 Vision for multi-frame image understanding and reasoning

Phi-3.5 Vision is a lightweight, state-of-the-art, open multimodal model that can be used for multi-frame image understanding and reasoning. See #1094 for more information and here for the list of supported models.

Examples:

Input: "What's funny about this image?"

Output: The humor in this image stems from the exaggerated depiction of human evolution, using the Shiba Inu dog breed to represent both ancient and modern humans. The left side shows a muscular, hunter-like figure labeled as 'Humans 100,000 years ago' with the caption 'me hungry me hunt mammoth,' suggesting a time when humans were physically robust and actively hunting. The right side contrasts this with a modern, slim Shiba Inu labeled as 'Humans today' with the caption 'why food delivery slow,' humorously commenting on the modern human's reliance on convenience and technology, such as food delivery services, rather than hunting for sustenance. The use of a dog, which is often associated with loyalty and companionship, adds a layer of irony and humor as it portrays humans in a more diminished, dependent state.

Input: "Summarize the deck of slides."

Output:

To summarize, the slides are composed of these sections:

  • Introduction to Azure:

    The presentation introduces Microsoft Azure, a cloud computing platform. It highlights Azure's three service tiers: Hyper-scale, Enterprise, and Hybrid. The presenter is Dinesh Kumar Wickramasinghe, a Senior Software Engineer from CMS Private Limited in Sri Lanka.

  • Azure Overview:

    Azure is described as Microsoft's cloud computing platform, continuously expanding to meet current and future business challenges. It offers freedom to build, manage, and deploy applications on a global network using preferred tools and frameworks.

  • Cloud Computing Services:

    The presentation outlines three types of cloud computing services provided by Azure: Infrastructure-as-a-Service (IaaS) with a 'host' component, Platform-as-a-Service (PaaS) with a 'build' component, and Software-as-a-Service (SaaS) with a 'consume' component.

See example code

Example: Single-frame (critique an image)

import {
  AutoProcessor,
  AutoModelForCausalLM,
  TextStreamer,
  load_image,
} from "@huggingface/transformers";

// Load processor and model
const model_id = "onnx-community/Phi-3.5-vision-instruct";
const processor = await AutoProcessor.from_pretrained(model_id, {
  legacy: true, // Use legacy processing to match the Python implementation
});
const model = await AutoModelForCausalLM.from_pretrained(model_id, {
  dtype: {
    vision_encoder: "q4", // 'q4' or 'q4f16'
    prepare_inputs_embeds: "q4", // 'q4' or 'q4f16'
    model: "q4f16", // 'q4f16'
  },
});

// Load image
const image = await load_image("https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/meme.png");

// Prepare inputs
const messages = [
  { role: "user", content: "<|image_1|>What's funny about this image?" },
];
const prompt = processor.tokenizer.apply_chat_template(messages, {
  tokenize: false,
  add_generation_prompt: true,
});
const inputs = await processor(prompt, image, { num_crops: 4 });

// (Optional) Set up text streamer
const streamer = new TextStreamer(processor.tokenizer, {
  skip_prompt: true,
  skip_special_tokens: true,
});

// Generate response
const output = await model.generate({
  ...inputs,
  streamer,
  max_new_tokens: 256,
});

Or, decode the output at the end:

// Decode and display the answer
const generated_ids = output.slice(null, [inputs.input_ids.dims[1], null]);
const answer = processor.batch_decode(generated_ids, {
  skip_special_tokens: true,
});
console.log(answer[0]);

Example: Multi-frame (summarize slides)

import {
  AutoProcessor,
  AutoModelForCausalLM,
  TextStreamer,
  load_image,
} from "@huggingface/transformers";

// Load processor and model
const model_id = "onnx-community/Phi-3.5-vision-instruct";
const processor = await AutoProcessor.from_pretrained(model_id, {
  legacy: true, // Use legacy processing to match the Python implementation
});
const model = await AutoModelForCausalLM.from_pretrained(model_id, {
  dtype: {
    vision_encoder: "q4", // 'q4' or 'q4f16'
    prepare_inputs_embeds: "q4", // 'q4' or 'q4f16'
    model: "q4f16", // 'q4f16'
  },
});

// Load images
const urls = [
  "https://image.slidesharecdn.com/azureintroduction-191206101932/75/Introduction-to-Microsoft-Azure-Cloud-1-2048.jpg",
  "https://image.slidesharecdn.com/azureintroduction-191206101932/75/Introduction-to-Microsoft-Azure-Cloud-2-2048.jpg",
  "https://image.slidesharecdn.com/azureintroduction-191206101932/75/Introduction-to-Microsoft-Azure-Cloud-3-2048.jpg",
];
const images = await Promise.all(urls.map(load_image));

// Prepare inputs
const placeholder = images.map((_, i) => `<|image_${i + 1}|>\n`).join("");
const messages = [
  { role: "user", content: placeholder + "Summarize the deck of slides." },
];
const prompt = processor.tokenizer.apply_chat_template(messages, {
  tokenize: false,
  add_generation_prompt: true,
});
const inputs = await processor(prompt, images, { num_crops: 4 });

// (Optional) Set up text streamer
const streamer = new TextStreamer(processor.tokenizer, {
  skip_prompt: true,
  skip_special_tokens: true,
});

// Generate response
const output = await model.generate({
  ...inputs,
  streamer,
  max_new_tokens: 256,
});
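
As in the single-frame example, you can also decode the full response once generation is complete:

// Decode and display the answer
const generated_ids = output.slice(null, [inputs.input_ids.dims[1], null]);
const answer = processor.batch_decode(generated_ids, {
  skip_special_tokens: true,
});
console.log(answer[0]);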

EXAONE 3.5 for bilingual (English and Korean) text generation

EXAONE 3.5 is a collection of instruction-tuned bilingual (English and Korean) generative models, developed and released by LG AI Research. See #1084 for more information and here for the list of supported models.

Example: Text generation w/ EXAONE-3.5-2.4B-Instruct.

import { pipeline } from "@huggingface/transformers";

// Create a text generation pipeline
const generator = await pipeline(
  "text-generation",
  "onnx-community/EXAONE-3.5-2.4B-Instruct",
  { dtype: "q4f16" },
);

// Define the list of messages
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Tell me a joke." },
];

// Generate a response
const output = await generator(messages, { max_new_tokens: 128 });
console.log(output[0].generated_text.at(-1).content);
See example output
Sure! Here's a light joke for you:

Why don't scientists trust atoms?

Because they make up everything! 

I hope you found that amusing! If you want another one, feel free to ask!
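
To watch tokens appear as they are generated (as in the Phi-3.5 Vision examples above), you can pass a TextStreamer via the generation options. A minimal sketch, assuming the text-generation pipeline forwards the streamer option to model.generate:

import { pipeline, TextStreamer } from "@huggingface/transformers";

// Create a text generation pipeline (same model as above)
const generator = await pipeline(
  "text-generation",
  "onnx-community/EXAONE-3.5-2.4B-Instruct",
  { dtype: "q4f16" },
);

// Define the list of messages
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Tell me a joke." },
];

// Print tokens to the console as they are generated
const streamer = new TextStreamer(generator.tokenizer, {
  skip_prompt: true,
  skip_special_tokens: true,
});

// Generate a response, streaming intermediate output
const output = await generator(messages, { max_new_tokens: 128, streamer });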

πŸ› Bug fixes

  • Fix pyannote processor post_process_speaker_diarization in #1082. Thanks to @patrick-ve for reporting the issue!
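
    For context, this method is the final step of the speaker-diarization workflow. A minimal sketch based on the pyannote-segmentation usage pattern (model ID and audio URL are illustrative):

    import { AutoProcessor, AutoModelForAudioFrameClassification, read_audio } from "@huggingface/transformers";

    // Load model and processor
    const model_id = "onnx-community/pyannote-segmentation-3.0";
    const model = await AutoModelForAudioFrameClassification.from_pretrained(model_id);
    const processor = await AutoProcessor.from_pretrained(model_id);

    // Read and preprocess audio
    const audio = await read_audio(
      "https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/mlk.wav",
      processor.feature_extractor.config.sampling_rate,
    );
    const inputs = await processor(audio);

    // Run the segmentation model
    const { logits } = await model(inputs);

    // Post-process frame-level logits into speaker turns (the method fixed in #1082)
    const result = processor.post_process_speaker_diarization(logits, audio.length);
    console.log(result);
    // e.g. [[{ id, start, end, confidence }, ...]]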

πŸ› οΈ Other improvements

  • Improve the unit testing framework in #1083 and #1095, bringing coverage up to 91% (from 84%).

Full Changelog: 3.1.2...3.2.0