Releases: coqui-ai/STT-models
Persian STT v0.1.0
Jump to section:
- Model details
- Intended use
- Performance Factors
- Metrics
- Training data
- Evaluation data
- Ethical considerations
- Caveats and recommendations
- Person or organization developing model: Maintained by oct4pie.
- Model language: Persian / Farsi / `fa`, `fa-IR`
- Model date: June 21, 2022
- Model type: Speech-to-Text
- Model version: `v0.1.0`
- Compatible with 🐸 STT version: `v1.3.0`
- License: GNU Lesser General Public License v3.0
- Citation details:

```bibtex
@techreport{persian-stt,
  author = {Mehdi Hajmollaahmad Naraghi},
  title = {Persian STT v0.1.0},
  institution = {Coqui},
  address = {\url{https://coqui.ai/models}},
  year = {2022},
  month = {June},
  number = {STT-FA-0.1.0}
}
```
- persian-tts GitHub Repo
- Where to send questions or comments about the model: You can leave an issue on STT issues, open a new discussion on STT discussions, or chat with us on Gitter.
Intended use
Speech-to-Text for the Persian Language on 16kHz, mono-channel audio.
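Before running inference, it is worth verifying that your audio matches this format. The snippet below is a minimal sketch using only Python's standard library; the file name is a placeholder:

```python
import wave

def check_stt_input(path: str) -> None:
    """Verify a WAV file is 16kHz, mono, 16-bit PCM, as the model expects."""
    with wave.open(path, "rb") as wav:
        assert wav.getframerate() == 16000, f"expected 16 kHz, got {wav.getframerate()} Hz"
        assert wav.getnchannels() == 1, f"expected mono, got {wav.getnchannels()} channels"
        assert wav.getsampwidth() == 2, f"expected 16-bit PCM, got {8 * wav.getsampwidth()}-bit"

check_stt_input("recording.wav")  # hypothetical input file
```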
Performance Factors
Factors relevant to Speech-to-Text performance include but are not limited to speaker demographics, recording quality, and background noise. Read more about STT performance factors here.
Metrics
STT models are usually evaluated in terms of their transcription accuracy, deployment Real-Time Factor, and model size on disk.
Transcription Accuracy
Using the language model with settings `lm_alpha=0.36669178512950323` and `lm_beta=0.3457913671678824` (found via `lm_optimizer.py`):
- Common-Voice clean: WER: 10.81%, CER: 2.506%
- More about the model at persian-tts repo
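To apply these scorer weights at inference time, they can be set explicitly through the `stt` Python bindings. A minimal sketch, assuming the released acoustic model and scorer are saved locally under the file names shown below:

```python
import wave

import numpy as np
from stt import Model  # Coqui STT Python bindings: pip install stt

# Load the acoustic model and attach the external KenLM scorer.
model = Model("model.tflite")
model.enableExternalScorer("kenlm.scorer")

# Apply the alpha/beta weights found via lm_optimizer.py.
model.setScorerAlphaBeta(0.36669178512950323, 0.3457913671678824)

# Read 16 kHz, mono, 16-bit PCM audio into an int16 buffer and transcribe.
with wave.open("recording.wav", "rb") as wav:
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

print(model.stt(audio))
```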
Real-Time Factor
Real-Time Factor (RTF) is defined as `processing-time / length-of-audio`. The exact real-time factor of an STT model will depend on the hardware setup, so you may experience a different RTF.

Recorded average RTF on laptop CPU: 0.65
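To measure RTF on your own hardware, time the inference call and divide by the audio duration. A minimal sketch, reusing the `model` object and audio loading from the example above:

```python
import time
import wave

import numpy as np

def real_time_factor(model, wav_path: str) -> float:
    """Return processing-time / length-of-audio for one transcription."""
    with wave.open(wav_path, "rb") as wav:
        duration = wav.getnframes() / wav.getframerate()
        audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)
    start = time.perf_counter()
    model.stt(audio)
    return (time.perf_counter() - start) / duration
```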
Model Size
For STT, you must always deploy an acoustic model, and you will often also want to deploy an application-specific language model.
| Model type | Filename | Size |
|---|---|---|
| Acoustic model (tflite) | `model.tflite` | 45.3M |
| Language model | `kenlm.scorer` | 1.63GB |
Approaches to uncertainty and variability
Confidence scores and multiple paths from the decoding beam can be used to measure model uncertainty and provide multiple, variable transcripts for any processed audio.
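With the `stt` Python bindings, for example, `sttWithMetadata` returns several candidate transcripts along with per-transcript confidence scores. A minimal sketch, assuming `model` and `audio` as in the earlier example:

```python
# Ask the decoder for the top 3 candidate transcripts.
metadata = model.sttWithMetadata(audio, num_results=3)

for transcript in metadata.transcripts:
    text = "".join(token.text for token in transcript.tokens)
    print(f"confidence={transcript.confidence:.2f}  {text}")
```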
Training data
This model was trained on the following corpus: Common Voice 9.0 Persian (cleaned, with custom train/dev/test splits). In total, approximately 271 hours of data.
Evaluation data
The validation ("dev") sets were cleaned and generated from Common Voice 9.0 Persian.
Ethical considerations
Deploying a Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.
Demographic Bias
You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.
Surveillance
Speech-to-Text may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.
Caveats and recommendations
Machine learning models (like this STT model) perform best on data that is similar to the data on which they were trained. Read about what to expect from an STT model with regard to your data here.
In most applications, it is recommended that you train your own language model to improve transcription accuracy on your speech data.
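The STT repository ships tooling for this: `data/lm/generate_lm.py` builds a pruned KenLM model from a text corpus, and the `generate_scorer_package` binary packages it into a `.scorer` file. The sketch below shows the rough flow; the paths, corpus file, and alpha/beta values are placeholders, and the exact flags may differ between STT versions:

```python
import subprocess

# 1. Build a pruned KenLM language model from domain-specific text
#    (one sentence per line). Assumes KenLM is built under kenlm/build/bin.
subprocess.run([
    "python", "data/lm/generate_lm.py",
    "--input_txt", "my_corpus.txt",
    "--output_dir", "lm_out",
    "--top_k", "500000",
    "--kenlm_bins", "kenlm/build/bin",
    "--arpa_order", "5",
    "--max_arpa_memory", "85%",
    "--arpa_prune", "0|0|1",
    "--binary_a_bits", "255",
    "--binary_q_bits", "8",
    "--binary_type", "trie",
], check=True)

# 2. Package the LM and its vocabulary into a scorer usable at inference time.
subprocess.run([
    "./generate_scorer_package",
    "--alphabet", "alphabet.txt",
    "--lm", "lm_out/lm.binary",
    "--vocab", "lm_out/vocab-500000.txt",
    "--package", "my_domain.scorer",
    "--default_alpha", "0.36669178512950323",  # placeholder; re-optimize for your LM
    "--default_beta", "0.3457913671678824",
], check=True)
```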
Hindi STT v0.8.99
Jump to section:
- Model details
- Intended use
- Performance Factors
- Metrics
- Training data
- Evaluation data
- Ethical considerations
- Caveats and recommendations
Model details
- Person or organization developing model: Trained and released by Bülent Özden, a member of Common Voice Türkçe, for the 3D Voice Chess project by Harikalar Kutusu.
- Model language: Hindi / हिन्दी / `hi`
- Model date: March 13, 2022
- Model type: Speech-to-Text
- Model version: `v0.8.99`
- Compatible with 🐸 STT version: `v1.0.0`
- License: CC-BY-SA 4.0
- Citation details:

```bibtex
@misc{hindi-stt,
  author = {Bülent Özden},
  title = {Hindi STT v0.8.99},
  institution = {Harikalar Kutusu},
  address = {\url{https://coqui.ai/models}},
  year = {2022},
  month = {March},
  number = {STT-HI-0.8.99}
}
```
- Where to send questions or comments about the model: You can leave an issue on STT issues, open a new discussion on STT discussions, or chat with us on Gitter.
Intended use
Speech-to-Text for the Hindi Language on 16kHz, mono-channel audio.
Performance Factors
Factors relevant to Speech-to-Text performance include but are not limited to speaker demographics, recording quality, and background noise. Read more about STT performance factors here.
Metrics
STT models are usually evaluated in terms of their transcription accuracy, deployment Real-Time Factor, and model size on disk.
Transcription Accuracy
This release includes only an acoustic model, as it was developed for a special-purpose, low-vocabulary application. The following are the results from acoustic model training.
| Test Corpus | WER | CER |
|---|---|---|
| Common Voice | 82.2% | 34.6% |
Model Size
`model.tflite`: 46M
Approaches to uncertainty and variability
Confidence scores and multiple paths from the decoding beam can be used to measure model uncertainty and provide multiple, variable transcripts for any processed audio.
Training data
This model was trained on the following corpus: Common Voice Corpus 8.0 for Hindi. Custom train/dev/test splits were created with the Common Voice Corpora Creator using the `--duplicate-sentence-count 99` parameter, which allowed us to use the whole dataset. The dataset contains approximately 11 hours of voice data (276 distinct voices; 65% male, 4% female).
Note: Our model numbering for Common Voice-only data reflects the Common Voice corpus version and the Corpora Creator duplicate-sentence-count (dsc) setting (e.g. "v0.corpus.dsc").
Evaluation data
The validation ("dev") and test ("test") sets also came from CV as specified above.
Ethical considerations
Deploying a Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.
Demographic Bias
You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.
Surveillance
Speech-to-Text may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.
Caveats and recommendations
Machine learning models (like this STT model) perform best on data that is similar to the data on which they were trained. Read about what to expect from an STT model with regard to your data here.
In most applications, it is recommended that you train your own language model to improve transcription accuracy on your speech data.
French STT v0.9
Jump to section:
- Model details
- Intended use
- Performance Factors
- Metrics
- Training data
- Evaluation data
- Ethical considerations
- Caveats and recommendations
Model details
- Person or organization developing model: Originally trained and released by the commonvoice-fr project, revived by Waser Technologies
- Model date: Accessed from GitHub on June 10, 2022
- Model type: Speech-to-Text
- Model version: `v0.9`
- Compatible with 🐸 STT version: `v1.4.0`
- Code: commonvoice-fr
- License: MPL 2.0
- Citation details:

```bibtex
@misc{commonvoice-fr,
  author = {commonvoice-fr Contributors},
  title = {Common Voice Fr STT Model},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/wasertech/commonvoice-fr/releases/tag/v0.9.0-fr-0.1}},
  commit = {0a2d028b124691bbee656f43aa02251169dce69b}
}
```
- Where to send questions or comments about the model: You can leave an issue on STT-model issues, open a new discussion on STT-model discussions, or chat with us on Gitter.
Intended use
Speech-to-Text for the French Language on 16kHz, mono-channel audio.
Performance Factors
Factors relevant to Speech-to-Text performance include but are not limited to speaker demographics, recording quality, and background noise. Read more about STT performance factors here.
Metrics
STT models are usually evaluated in terms of their transcription accuracy, deployment Real-Time Factor, and model size on disk.
Transcription Accuracy
The following Word Error Rates (WER) and Character Error Rates (CER) are reported on GitHub.
| Test Corpus | WER | CER |
|---|---|---|
| African_Accented_French_test.csv | 47.7% | 6.6% |
| Att-HACK | 12.9% | 7.1% |
| M-AILABS | 9.9% | 3.3% |
| trainingspeech | 10.9% | 4.1% |
| Common Voice | 31.5% | 15.2% |
| LinguaLibre | 67.6% | 21.6% |
| MLS | 22.6% | 9.7% |
Real-Time Factor
Real-Time Factor (RTF) is defined as `processing-time / length-of-audio`. The exact real-time factor of an STT model will depend on the hardware setup, so you may experience a different RTF.

Recorded average RTF on laptop CPU: ~0.3
Model Size
`model.tflite`: 46M
`kenlm.scorer`: 689M
Approaches to uncertainty and variability
Confidence scores and multiple paths from the decoding beam can be used to measure model uncertainty and provide multiple, variable transcripts for any processed audio.
Training data
This French STT model was trained on the following corpora:
- Lingua Libre (~40h)
- Common Voice FR (v8) (~850h, by allowing up to 32 duplicates)
- Training Speech (~180h)
- African Accented French (~15h)
- M-AILABS French (~315h)
- Multilingual LibriSpeech (~1,100h)
- Att-HACK (~75h)
Total: ~2,573h (~1,925h by default)
Evaluation data
The model was tested on the following corpora.
- Lingua Libre
- Common Voice FR (v9)
- Training Speech
- African Accented French
- M-AILABS French
- Multilingual LibriSpeech
- Att-HACK
Data was augmented with the following parameters.
```
Parsed augmentations: [
  Reverb(p=0.1, delay=ValueRange(start=50.0, end=50.0, r=30.0), decay=ValueRange(start=10.0, end=2.0, r=1.0)),
  Resample(p=0.1, rate=ValueRange(start=12000, end=8000, r=4000)),
  Codec(p=0.1, bitrate=ValueRange(start=48000, end=16000, r=0)),
  Volume(p=0.1, dbfs=ValueRange(start=-10.0, end=-40.0, r=0.0)),
  Pitch(p=0.1, pitch=ValueRange(start=1.0, end=1.0, r=0.2)),
  Tempo(p=0.1, factor=ValueRange(start=1.0, end=1.0, r=0.5), max_time=-1.0),
  FrequencyMask(p=0.1, n=ValueRange(start=1, end=3, r=0), size=ValueRange(start=1, end=5, r=0)),
  TimeMask(p=0.1, domain='signal', n=ValueRange(start=3, end=10, r=2), size=ValueRange(start=50.0, end=100.0, r=40.0)),
  Dropout(p=0.1, domain='spectrogram', rate=ValueRange(start=0.05, end=0.05, r=0.0)),
  Add(p=0.1, domain='signal', stddev=ValueRange(start=0.0, end=0.0, r=0.5)),
  Multiply(p=0.1, domain='features', stddev=ValueRange(start=0.0, end=0.0, r=0.5))
]
```
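Each `ValueRange(start, end, r)` above corresponds to STT's `start:end~r` command-line range syntax. As a hedged sketch of how such augmentations are passed to training (the module name and flag grammar follow the 🐸 STT training documentation as I understand them; the CSV paths are placeholders):

```python
import subprocess

# Launch STT training with two of the augmentations listed above,
# expressed in the start:end~r range syntax.
subprocess.run([
    "python", "-m", "coqui_stt_training.train",
    "--train_files", "train.csv",
    "--dev_files", "dev.csv",
    "--augment", "reverb[p=0.1,delay=50.0~30.0,decay=10.0:2.0~1.0]",
    "--augment", "resample[p=0.1,rate=12000:8000~4000]",
], check=True)
```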
Ethical considerations
Deploying a Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.
Demographic Bias
You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.
Surveillance
Speech-to-Text may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.
Caveats and recommendations
Machine learning models (like this STT model) perform best on data that is similar to the data on which they were trained. Read about what to expect from an STT model with regard to your data here.
In most applications, it is recommended that you train your own language model to improve transcription accuracy on your speech data.
Czech STT v0.3.0
Jump to section:
- Model details
- Intended use
- Performance Factors
- Metrics
- Training data
- Evaluation data
- Ethical considerations
- Caveats and recommendations
Model details
- Person or organization developing model: Trained by Vojtěch Drábek.
- Model language: Czech / čeština / `cs`
- Model date: May 31, 2022
- Model type: Speech-to-Text
- Model version: `v0.3.0`
- Compatible with 🐸 STT version: `v0.9.3`
- License: CC-BY-NC 4.0
- Citation details:

```bibtex
@misc{czech-stt,
  author = {Drábek, Vojtěch},
  title = {Czech STT 0.3},
  publisher = {comodoro},
  journal = {deepspeech-cs},
  howpublished = {\url{https://github.com/comodoro/deepspeech-cs}}
}
```
- Where to send questions or comments about the model: You can leave an issue on the model release page or STT-model issues, open a new discussion on STT-model discussions, or chat with us on Gitter or the Matrix channel coqui-ai/STT.
Intended use
Speech-to-Text for the Czech Language on 16kHz, mono-channel audio.
Performance Factors
Factors relevant to Speech-to-Text performance include but are not limited to speaker demographics, recording quality, and background noise. Read more about STT performance factors here.
Metrics
STT models are usually evaluated in terms of their transcription accuracy, deployment Real-Time Factor, and model size on disk.
Transcription Accuracy
More information is reported on GitHub.
| Test Corpus | WER | CER |
|---|---|---|
| *Acoustic model* | | |
| Czech Common Voice 6.1 | 40.6% | 10.7% |
| Vystadial 2016 | 50.6% | 19.6% |
| Parliament Plenary Hearings | 21.3% | 5.3% |
| ParCzech 3.0 | 21.0% | 6.2% |
| *With the attached scorer* | | |
| Czech Common Voice 6.1 | 15.3% | 6.8% |
| Vystadial 2016 | 35.7% | 20.1% |
| Parliament Plenary Hearings | 9.7% | 3.7% |
| ParCzech 3.0 | 10.1% | 4.5% |
Real-Time Factor
Real-Time Factor (RTF) is defined as `processing-time / length-of-audio`. The exact real-time factor of an STT model will depend on the hardware setup, so you may experience a different RTF.

Recorded average RTF on laptop CPU: 0.73
Model Size
`model.tflite`: 46M
`scorer`: 461M
Approaches to uncertainty and variability
Confidence scores and multiple paths from the decoding beam can be used to measure model uncertainty and provide multiple, variable transcripts for any processed audio.
Training data
This model was trained on the following corpora:
- Vystadial 2016 – Czech data
- OVM – Otázky Václava Moravce
- Czech Parliament Meetings
- Large Corpus of Czech Parliament Plenary Hearings
- Common Voice Czech
- Some private recordings and parts of audiobooks
Evaluation data
The model was evaluated on the Common Voice Czech, Large Corpus of Czech Parliament Plenary Hearings, Vystadial 2016 – Czech data, and ParCzech 3.0 test sets.
Ethical considerations
Deploying a Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.
Demographic Bias
You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.
Surveillance
Speech-to-Text may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.
Caveats and recommendations
Machine learning models (like this STT model) perform best on data that is similar to the data on which they were trained. Read about what to expect from an STT model with regard to your data here.
In most applications, it is recommended that you train your own language model to improve transcription accuracy on your speech data.
Yoloxóchitl Mixtec STT
Jump to section:
- Model details
- Intended use
- Performance Factors
- Metrics
- Training data
- Evaluation data
- Ethical considerations
- Caveats and recommendations
Model details
- Person or organization developing model: Originally trained by Joe Meyer.
- Model language: Yoloxóchitl Mixtec / `xty`
- Model date: April 17, 2022
- Model type: Speech-to-Text
- Model version: `v0.1.0`
- Compatible with 🐸 STT version: `v1.0.0`
- License: CC BY-NC-SA 3.0
- Citation details:

```bibtex
@techreport{xty-stt,
  author = {Meyer, Joe},
  title = {Yoloxóchitl Mixtec STT 0.1},
  institution = {Coqui},
  address = {\url{https://github.com/coqui-ai/STT-models}},
  year = {2022},
  month = {April},
  number = {STT-SLR89-XTY-0.1}
}
```
- Where to send questions or comments about the model: You can leave an issue on STT-model issues, open a new discussion on STT-model discussions, or chat with us on Gitter.
Intended use
Speech-to-Text for the Yoloxóchitl Mixtec Language on 16kHz, mono-channel audio.
Performance Factors
Factors relevant to Speech-to-Text performance include but are not limited to speaker demographics, recording quality, and background noise. Read more about STT performance factors here.
Metrics
STT models are usually evaluated in terms of their transcription accuracy, deployment Real-Time Factor, and model size on disk.
Transcription Accuracy
The following Word Error Rates and Character Error Rates are reported for a modified data set from OpenSLR SLR89. Rows that failed processing were removed from the official `validated.tsv`, and the data was re-processed with Common Voice Utils to convert it to 16kHz, mono-channel PCM `.wav` files.
| Test Corpus | WER | CER |
|---|---|---|
| OpenSLR | 48.85% | 18.04% |
Model Size
`model.tflite`: 46M
Approaches to uncertainty and variability
Confidence scores and multiple paths from the decoding beam can be used to measure model uncertainty and provide multiple, variable transcripts for any processed audio.
Training data
This model was trained on a modified data set from OpenSLR SLR89. Rows that failed processing were removed from the official `validated.tsv`, and the data was re-processed with Common Voice Utils to convert it to 16kHz, mono-channel PCM `.wav` files.
Evaluation data
This model was evaluated on a modified data set from OpenSLR SLR89. Rows that failed processing were removed from the official `validated.tsv`, and the data was re-processed with Common Voice Utils to convert it to 16kHz, mono-channel PCM `.wav` files.
Ethical considerations
Deploying a Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.
Demographic Bias
You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.
Surveillance
Speech-to-Text may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.
Caveats and recommendations
Machine learning models (like this STT model) perform best on data that is similar to the data on which they were trained. Read about what to expect from an STT model with regard to your data here.
In most applications, it is recommended that you train your own language model to improve transcription accuracy on your speech data.
Sierra Totonac STT
Jump to section:
- Model details
- Intended use
- Performance Factors
- Metrics
- Training data
- Evaluation data
- Ethical considerations
- Caveats and recommendations
Model details
- Person or organization developing model: Originally trained by Bülent Özden, a member of Common Voice Türkçe.
- Model language: Totonac / Sierra Totonac / `tos`
- Model date: April 12, 2022
- Model type: Speech-to-Text
- Model version: `v1.0.0`
- Compatible with 🐸 STT version: `v1.3.0`
- License: CC BY-NC-SA 3.0
- Citation details:

```bibtex
@techreport{totonac-stt,
  author = {Bülent Özden},
  title = {Totonac STT 1.0},
  institution = {Coqui},
  address = {\url{https://github.com/coqui-ai/STT-models}},
  year = {2022},
  month = {April},
  number = {STT-TOS-1.0}
}
```
- Where to send questions or comments about the model: You can leave an issue on STT-model issues, open a new discussion on STT-model discussions, or chat with us on Gitter.
Intended use
Speech-to-Text for the Sierra Totonac Language on 16kHz, mono-channel audio.
Performance Factors
Factors relevant to Speech-to-Text performance include but are not limited to speaker demographics, recording quality, and background noise. Read more about STT performance factors here.
Metrics
STT models are usually evaluated in terms of their transcription accuracy, deployment Real-Time Factor, and model size on disk.
Transcription Accuracy
| Test Corpus | WER | CER |
|---|---|---|
| OpenSLR 107 | 87.5% | 25.8% |
Model Size
`model.tflite`: 46M
Approaches to uncertainty and variability
Confidence scores and multiple paths from the decoding beam can be used to measure model uncertainty and provide multiple, variable transcripts for any processed audio.
Training data
This model was trained on the Totonac speech-with-transcription corpus (OpenSLR 107).
Evaluation data
This model was evaluated on the Totonac speech-with-transcription corpus (OpenSLR 107).
Ethical considerations
Deploying a Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.
Demographic Bias
You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.
Surveillance
Speech-to-Text may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.
Caveats and recommendations
Machine learning models (like this STT model) perform best on data that is similar to the data on which they were trained. Read about what to expect from an STT model with regard to your data here.
In most applications, it is recommended that you train your own language model to improve transcription accuracy on your speech data.
Western Highland Chatino STT
Jump to section:
- Model details
- Intended use
- Performance Factors
- Metrics
- Training data
- Evaluation data
- Ethical considerations
- Caveats and recommendations
Model details
- Person or organization developing model: Originally trained by Bülent Özden, a member of Common Voice Türkçe.
- Model language: Western Highland Chatino /
ctp
- Model date: April 12, 2022
- Model type: Speech-to-Text
- Model version: `v1.0.0`
- Compatible with 🐸 STT version: `v1.3.0`
- License: CC-BY-SA 4.0
- Where to send questions or comments about the model: You can leave an issue on STT-model issues, open a new discussion on STT-model discussions, or chat with us on Gitter.
Intended use
Speech-to-Text for Western Highland Chatino on 16kHz, mono-channel audio.
Performance Factors
Factors relevant to Speech-to-Text performance include but are not limited to speaker demographics, recording quality, and background noise. Read more about STT performance factors here.
Metrics
STT models are usually evaluated in terms of their transcription accuracy, deployment Real-Time Factor, and model size on disk.
Transcription Accuracy
| Test Corpus | WER | CER |
|---|---|---|
| GORILLA | 77.2% | 30.9% |
Model Size
`model.tflite`: 46M
Approaches to uncertainty and variability
Confidence scores and multiple paths from the decoding beam can be used to measure model uncertainty and provide multiple, variable transcripts for any processed audio.
Training data
This model was trained on GORILLA `ctp`.
Citation
- Malgorzata E. Cavar, Damir Cavar, Hilaria Cruz (2016). "Endangered Language Documentation: Bootstrapping a Chatino Speech Corpus, Forced Aligner, ASR". In N. Calzolari et al. (eds.), Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 4004-4011, Portorož, Slovenia. European Language Resources Association (ELRA), Paris, France.
Evaluation data
This model was evaluated on GORILLA `ctp`.
Ethical considerations
Deploying a Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.
Demographic Bias
You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.
Surveillance
Speech-to-Text may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.
Caveats and recommendations
Machine learning models (like this STT model) perform best on data that is similar to the data on which they were trained. Read about what to expect from an STT model with regard to your data here.
In most applications, it is recommended that you train your own language model to improve transcription accuracy on your speech data.
Swahili STT v8.0 (Coqui)
Jump to section:
- Model details
- Intended use
- Performance Factors
- Metrics
- Training data
- Evaluation data
- Ethical considerations
- Caveats and recommendations
Model details
- Person or organization developing model: Maintained by Coqui.
- Model language: Swahili / Kiswahili / `sw`
- Model date: March 8, 2022
- Model type: Speech-to-Text
- Model version: `v8.0`
- Compatible with 🐸 STT version: `v1.3.0`
- License: Apache 2.0
- Citation details:

```bibtex
@techreport{swahili-stt,
  author = {Coqui},
  title = {Swahili STT v8.0},
  institution = {Coqui},
  address = {\url{https://coqui.ai/models}},
  year = {2022},
  month = {March},
  number = {STT-SW-8.0}
}
```
- Where to send questions or comments about the model: You can leave an issue on STT issues, open a new discussion on STT discussions, or chat with us on Gitter.
Intended use
Speech-to-Text for the Swahili Language on 16kHz, mono-channel audio.
Performance Factors
Factors relevant to Speech-to-Text performance include but are not limited to speaker demographics, recording quality, and background noise. Read more about STT performance factors here.
Metrics
STT models are usually evaluated in terms of their transcription accuracy, deployment Real-Time Factor, and model size on disk.
Transcription Accuracy
Using the language model with settings `lm_alpha=0.898202045251655` and `lm_beta=2.2684674938753755` (found via `lm_optimizer.py`):
- Swahili Common Voice 8.0 Test: WER: 15.8%, CER: 6.6%
Model Size
For STT, you must always deploy an acoustic model, and you will often also want to deploy an application-specific language model.
| Model type | Vocabulary | Filename | Size |
|---|---|---|---|
| Acoustic model | open | `model.tflite` | 45M |
| Language model | large | `large-vocabulary.scorer` | 321M |
Approaches to uncertainty and variability
Confidence scores and multiple paths from the decoding beam can be used to measure model uncertainty and provide multiple, variable transcripts for any processed audio.
Training data
This model was trained on the following corpus: Common Voice 8.0 Swahili.
Evaluation data
The validation ("dev") sets came from Common Voice 8.0.
Ethical considerations
Deploying a Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.
Demographic Bias
You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.
Surveillance
Speech-to-Text may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.
Caveats and recommendations
Machine learning models (like this STT model) perform best on data that is similar to the data on which they were trained. Read about what to expect from an STT model with regard to your data here.
In most applications, it is recommended that you train your own language model to improve transcription accuracy on your speech data.
French STT v0.8 (commonvoice-fr)
Jump to section:
- Model details
- Intended use
- Performance Factors
- Metrics
- Training data
- Evaluation data
- Ethical considerations
- Caveats and recommendations
Model details
- Person or organization developing model: Originally trained and released by the commonvoice-fr project, revived by Waser Technologies
- Model date: Accessed from GitHub on February 9, 2022
- Model type: Speech-to-Text
- Model version: `v0.8`
- Compatible with 🐸 STT version: `v1.2.0`
- Code: commonvoice-fr
- License: MPL 2.0
- Citation details:

```bibtex
@misc{commonvoice-fr,
  author = {commonvoice-fr Contributors},
  title = {Common Voice STT Model},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/wasertech/commonvoice-fr/releases/tag/v0.8.0-fr-0.3}},
  commit = {0a2d028b124691bbee656f43aa02251169dce69b}
}
```
- Where to send questions or comments about the model: You can leave an issue on STT-model issues, open a new discussion on STT-model discussions, or chat with us on Gitter.
Intended use
Speech-to-Text for the French Language on 16kHz, mono-channel audio.
Performance Factors
Factors relevant to Speech-to-Text performance include but are not limited to speaker demographics, recording quality, and background noise. Read more about STT performance factors here.
Metrics
STT models are usually evaluated in terms of their transcription accuracy, deployment Real-Time Factor, and model size on disk.
Transcription Accuracy
The following Word Error Rates (WER) and Character Error Rates (CER) are reported on GitHub.
| Test Corpus | WER | CER |
|---|---|---|
| African_Accented_French_test.csv | 43.6% | 24.8% |
| Att-HACK | 12.8% | 6.0% |
| M-AILABS | 12.2% | 3.7% |
| trainingspeech | 12.1% | 4.0% |
| Common Voice | 37.0% | 19.4% |
| LinguaLibre | 59.3% | 21.3% |
| MLS | 26.8% | 12.2% |
Real-Time Factor
Real-Time Factor (RTF) is defined as `processing-time / length-of-audio`. The exact real-time factor of an STT model will depend on the hardware setup, so you may experience a different RTF.
Recorded average RTF on laptop CPU: ``
Model Size
`model.tflite`: 46M
`kenlm.scorer`: 689M
Approaches to uncertainty and variability
Confidence scores and multiple paths from the decoding beam can be used to measure model uncertainty and provide multiple, variable transcripts for any processed audio.
Training data
This French STT model was trained on the following corpora:
- Lingua Libre (~40h)
- Common Voice FR (v8) (~826h, by allowing up to 32 duplicates)
- Training Speech (~180h)
- African Accented French (~15h)
- M-AILABS French (~315h)
- Multilingual LibriSpeech (~1,100h)
- Att-HACK (~75h)
Total: ~2,551h (~1,903h by default)
Evaluation data
The model was tested on the following corpora.
- Lingua Libre
- Common Voice FR (v8)
- Training Speech
- African Accented French
- M-AILABS French
- Multilingual LibriSpeech
- Att-HACK
Ethical considerations
Deploying a Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.
Demographic Bias
You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.
Surveillance
Speech-to-Text may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.
Caveats and recommendations
Machine learning models (like this STT model) perform best on data that is similar to the data on which they were trained. Read about what to expect from an STT model with regard to your data here.
In most applications, it is recommended that you train your own language model to improve transcription accuracy on your speech data.
English STT v1.0.0 (Huge Vocabulary)
Jump to section:
- Model details
- Intended use
- Performance Factors
- Metrics
- Training data
- Evaluation data
- Ethical considerations
- Caveats and recommendations
Model details
- Person or organization developing model: Maintained by Coqui.
- Model language: English / English / `en`
- Model date: October 3, 2021
- Model type: Speech-to-Text
- Model version: `v1.0.0`
- Compatible with 🐸 STT version: `v1.0.0`
- License: Apache 2.0
- Citation details:

```bibtex
@techreport{english-stt,
  author = {Coqui},
  title = {English STT v1.0.0},
  institution = {Coqui},
  address = {\url{https://coqui.ai/models}},
  year = {2021},
  month = {October},
  number = {STT-EN-1.0.0}
}
```
- Where to send questions or comments about the model: You can leave an issue on STT issues, open a new discussion on STT discussions, or chat with us on Gitter.
Intended use
Speech-to-Text for the English Language on 16kHz, mono-channel audio.
Performance Factors
Factors relevant to Speech-to-Text performance include but are not limited to speaker demographics, recording quality, and background noise. Read more about STT performance factors here.
Metrics
STT models are usually evaluated in terms of their transcription accuracy, deployment Real-Time Factor, and model size on disk.
Transcription Accuracy
Using the `huge-vocabulary.scorer` language model:
- Librispeech clean: WER: 4.5%, CER: 1.6%
- Librispeech other: WER: 13.6%, CER: 6.4%
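To reproduce this kind of evaluation on your own recordings, WER and CER can be computed from reference/hypothesis pairs. A minimal sketch using the third-party `jiwer` package; the transcripts below are placeholders:

```python
from jiwer import cer, wer  # pip install jiwer

references = ["the quick brown fox", "speech to text"]   # ground-truth transcripts
hypotheses = ["the quick brown fox", "speech two text"]  # model outputs

print(f"WER: {wer(references, hypotheses):.1%}")
print(f"CER: {cer(references, hypotheses):.1%}")
```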
Model Size
For STT, you must always deploy an acoustic model, and you will often also want to deploy an application-specific language model.
| Model type | Vocabulary | Filename | Size |
|---|---|---|---|
| Acoustic model | open | `model.tflite` | 181M |
| Language model | large | `huge-vocabulary.scorer` | 923M |
Approaches to uncertainty and variability
Confidence scores and multiple paths from the decoding beam can be used to measure model uncertainty and provide multiple, variable transcripts for any processed audio.
Training data
This model was trained on the following corpora: Common Voice 7.0 English (custom Coqui train/dev/test splits), LibriSpeech, and Multilingual LibriSpeech. In total, approximately 47,000 hours of data.
Evaluation data
The validation ("dev") sets came from CV, Librispeech, and MLS.
Ethical considerations
Deploying a Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.
Demographic Bias
You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.
Surveillance
Speech-to-Text may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.
Caveats and recommendations
Machine learning models (like this STT model) perform best on data that is similar to the data on which they were trained. Read about what to expect from an STT model with regard to your data here.
In most applications, it is recommended that you train your own language model to improve transcription accuracy on your speech data.