wav2vec 2.0 fa Fine-tuned

[Figure: Wav2vec 2.0 model architecture]

The base model is pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition. Check out this blog for more information.
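If your recordings are stored at a different sampling rate, resample them to 16 kHz before passing them to the model. A minimal sketch using torchaudio (the file path is a placeholder):

import torchaudio

# load an audio file at whatever rate it was recorded; "speech.wav" is a placeholder
waveform, sample_rate = torchaudio.load("speech.wav")

# resample to the 16 kHz rate the model expects
if sample_rate != 16_000:
    resampler = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=16_000)
    waveform = resampler(waveform)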

Paper

The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.

Fine-tuned facebook/wav2vec2-large-xlsr-53 on Persian (Farsi) using Common Voice plus our own created dataset (one third of a high-quality dataset). When using this model, make sure that your speech input is sampled at 16 kHz.

Evaluation: 🌡️

We have evaluated the model on a private dataset containing different types of audio (unfortunately, the test and validation data are not publicly available, but a sample of the dataset can be seen at this link):

Name | Test dataset (WER)
m3hrdadfi/wav2vec2-large-xlsr-persian-v3 | 0.56754
This new model | 0.40815
Base multilingual model | 0.69746

  • This table shows that adding more data yields a much better result.

How to use❓

Use the Fine-Tuned Model

This model is fine-tuned from m3hrdadfi/wav2vec2-large-xlsr-persian-v3, so the training and evaluation process is the same.

# required packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
!pip install jiwer
!pip install parsivar
!pip install num2fawords

Normalizer

# Normalizer
!wget -O dictionary.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v3/raw/main/dictionary.py
!wget -O normalizer.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v3/raw/main/normalizer.py

If you are not sure whether your transcriptions are clean (they may contain stray characters or letters from other alphabets), use this code provided by m3hrdadfi/wav2vec2-large-xlsr-persian-v3.

Cleaning (fill in the data_dir variable with your own data directory)

import os

import pandas as pd

from normalizer import normalizer


def cleaning(text):
    if not isinstance(text, str):
        return None

    return normalizer({"sentence": text}, return_dict=False)


# edit these parts with your own data directory
data_dir = "data"

test = pd.read_csv(f"{data_dir}/yourtest.tsv", sep="\t")
test["path"] = data_dir + "/clips/" + test["path"]
print(f"Step 0: {len(test)}")

# keep only rows whose audio file actually exists on disk
test["status"] = test["path"].apply(lambda path: True if os.path.exists(path) else None)
test = test.dropna(subset=["status"])
test = test.drop(columns=["status"])
print(f"Step 1: {len(test)}")

# normalize the transcriptions and drop rows that could not be cleaned
test["sentence"] = test["sentence"].apply(lambda t: cleaning(t))
test = test.dropna(subset=["sentence"])
print(f"Step 2: {len(test)}")

test = test.reset_index(drop=True)
print(test.head())

test = test[["path", "sentence"]]
test.to_csv("/content/test.csv", sep="\t", encoding="utf-8", index=False)

Prediction

import numpy as np
import pandas as pd

import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset, load_metric

import IPython.display as ipd

model_name_or_path = "masoudmzb/wav2vec2-xlsr-multilingual-53-fa"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(model_name_or_path, device)

processor = Wav2Vec2Processor.from_pretrained(model_name_or_path)
model = Wav2Vec2ForCTC.from_pretrained(model_name_or_path).to(device)


def speech_file_to_array_fn(batch):
    # load the audio file and resample it to the model's expected sampling rate
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    speech_array = speech_array.squeeze().numpy()
    speech_array = librosa.resample(
        np.asarray(speech_array),
        orig_sr=sampling_rate,
        target_sr=processor.feature_extractor.sampling_rate,
    )

    batch["speech"] = speech_array
    return batch


def predict(batch):
    features = processor(
        batch["speech"], 
        sampling_rate=processor.feature_extractor.sampling_rate, 
        return_tensors="pt", 
        padding=True
    )

    input_values = features.input_values.to(device)
    attention_mask = features.attention_mask.to(device)

    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits 

    pred_ids = torch.argmax(logits, dim=-1)

    batch["predicted"] = processor.batch_decode(pred_ids)
    return batch

# edit these parts with your own data directory
dataset = load_dataset("csv", data_files={"test": "/path_to/your_test.csv"}, delimiter="\t")["test"]
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict, batched=True, batch_size=4)

WER Score

wer = load_metric("wer")
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"])))

Output

max_items = np.random.randint(0, len(result), 20).tolist()
for i in max_items:
    reference, predicted =  result["sentence"][i], result["predicted"][i]
    print("reference:", reference)
    print("predicted:", predicted)
    print('---')

Training details: 🔭

A model had already been trained on the Persian Mozilla Common Voice dataset, so we decided to continue from that one. The model is warm-started from m3hrdadfi's checkpoint.
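Warm starting here simply means loading the already fine-tuned Persian checkpoint as the initialization for further training, instead of the multilingual base. A minimal sketch (the checkpoint name is the one referenced above):

from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# load the processor and the existing Persian checkpoint as the starting point
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-v3")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-persian-v3")

# continue training this model on your own Persian data rather than starting
# again from facebook/wav2vec2-large-xlsr-53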

Fine Tuning Recommendations: 🐤

For fine-tuning you can check the links below, but be aware of a few tips: you may need gradient accumulation if a large enough batch size does not fit in memory, and there are many hyperparameters that you should set properly (a minimal configuration sketch follows this list):

  • learning_rate
  • attention_dropout
  • hidden_dropout
  • feat_proj_dropout
  • mask_time_prob
  • layer_drop
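A minimal configuration sketch with transformers showing where these knobs live; the values are illustrative only, not the ones used to train this model (note that the Wav2Vec2 config attribute for layer drop is spelled layerdrop):

from transformers import TrainingArguments, Wav2Vec2ForCTC

# illustrative hyperparameter values only; tune them on your own data
model = Wav2Vec2ForCTC.from_pretrained(
    "m3hrdadfi/wav2vec2-large-xlsr-persian-v3",
    attention_dropout=0.1,
    hidden_dropout=0.1,
    feat_proj_dropout=0.05,
    mask_time_prob=0.05,
    layerdrop=0.05,
)

training_args = TrainingArguments(
    output_dir="./wav2vec2-fa-finetuned",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,  # effective batch size of 32 when memory is tight
    learning_rate=3e-4,
    num_train_epochs=10,
    fp16=True,                      # requires a GPU with fp16 support
    save_steps=500,
    logging_steps=100,
)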

Fine Tuning Examples 👷‍♂️👷‍♀️

Dataset | Fine-tuning example
Mozilla Turkish dataset | Fine Tune on Mozilla Turkish Dataset
Other datasets and other languages | Sample Code (github_link)

Contact us: 🤝

If you have a technical question regarding the model, pretraining, code or publication, please create an issue in the repository. This is the fastest way to reach us.

Citation: ↩️

We have not published a paper on this work. However, if you use it in your own work, please cite us with an entry like the one below.

@misc{wav2vec2-xlsr-multilingual-53-fa,
  author          = {Paparnchi, Seyyed Mohammad Masoud},
  title           = {wav2vec2-xlsr-multilingual-53-fa},
  year            = 2021,
  publisher       = {GitHub},
  journal         = {GitHub repository},
  howpublished    = {\url{https://github.com/Hamtech-ai/wav2vec2-fa}},
}