❓ VAD robustness to noise-only signals in ONNX v3 vs. v4 models #369
Hi! This is definitely an interesting area to cover. This poses the question of separating speech from extremely noisy backgrounds, if I understand correctly, or cases where there is always noise and only sometimes speech.
We simply did not optimize for this metric, so performance there is more or less random.
We have observed that our VAD does not behave very well with very loud noise.
Yes, this makes sense. The good news is that we have received some support for our project, so it will get some attention in the near future with regard to customization, generalization, and flexibility.
Hello! Thank you for your response, @snakers4.
Yes, it is not exactly "detecting speech" but rather "not triggering on non-speech", and what I had in mind is slightly related to the latter. Something like the idle periods of an ASR-based dictation application in which the VAD is always on: to my mind, v4 would trigger, say, twice as often as v3 on background noises (such as a dog barking), which in turn might leave the ASR exposed. For IoT applications, on the other hand, it also means unnecessarily calling a more power-hungry system more frequently.
Ok, got it!
In fact, I only used the windowing and the forward call from the repo's utility code. In any case, while waiting for, and looking forward to, v5: if you would be so kind as to report any attempts to replicate the numbers in that table, I'll be happy to hear about them!
This sounds related to my experience as well. After using v4 for a while, I had to go back to v3. While overall speech detection seemed a bit better in v4, and more precise near word boundaries, it exhibits a consistent tendency towards false positives: long stretches of non-speech (1-2 minutes) at the beginning and end of audio files are mistakenly flagged as containing speech. For my use cases this isn't worth a minor accuracy increase; I can simply increase the padding between speech segments (see the sketch below). I'm not ruling out a mistake in my code, and I have never tested this formally, but subjectively it seems like it might be related to this issue.
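Not my actual pipeline, just a minimal sketch of what I mean by "increase padding", using the `speech_pad_ms` knob exposed by the repo's `get_speech_timestamps()`; the file name and values here are illustrative:

```python
import torch

# load the model and the bundled utilities from torch.hub
model, utils = torch.hub.load('snakers4/silero-vad', 'silero_vad')
get_speech_timestamps, _, read_audio, *_ = utils

wav = read_audio('example.wav', sampling_rate=16000)  # illustrative file name
speech = get_speech_timestamps(
    wav, model,
    threshold=0.5,       # default decision threshold
    speech_pad_ms=100,   # widen the padding around each detected segment
    sampling_rate=16000,
)
print(speech)  # list of {'start': ..., 'end': ...} dicts, in samples
```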
@IntendedConsequence, just a quick novice question: how does one invoke the v3 model? Thanks.
@dgoryeo I'm not sure what to tell you. I don't use Python for Silero v3/v4 anymore, just the onnxruntime C API. If I were you, I guess I would start by checking out an older repository revision from before the v4 update, e.g. https://github.com/snakers4/silero-vad/tree/v3.1 (a sketch follows).
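For illustration, a hedged sketch, untested against the old tag: `torch.hub` accepts a `repo:tag` reference, so the load can be pinned to the pre-v4 release (this assumes the v3.1 hubconf already exposed the `onnx` flag):

```python
import torch

# the 'repo:tag' syntax pins torch.hub to the v3.1 revision of the repo,
# so the pre-v4 model is fetched instead of the latest one
model, utils = torch.hub.load(
    'snakers4/silero-vad:v3.1',
    'silero_vad',
    onnx=True,  # assumption: the v3.1 hubconf already accepted this flag
)
```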
We have finally been able to start work on V5 using this data, among others.
That's great news, glad to know that V5 is being worked on.
To be solved with a V5 release.
@snakers4 Can we fine-tune the VAD on our own data? We have in-house segmented data, and we would just like to ask whether it is possible to fine-tune this model or not.
The new VAD version was released just now - #2 (comment). It was designed with this issue in mind, and performance on noise-only data was significantly improved - https://github.com/snakers4/silero-vad/wiki/Quality-Metrics. When designing for this task we used your conclusions and ideas, so many thanks for this ticket. Can you please re-run your tests, and if the issue persists, please open a new issue referring to this one. Many thanks!
Hello!
First of all, thanks for the VAD model; it is great and really helpful!
I've been doing some experiments with the 16 kHz ONNX models in order to establish a baseline on noisy speech as well as on data with no speech at all. Results on the former for both the AVA-Speech and LibriParty datasets seem to be in accordance with the quality-metrics section of Silero's wiki: v4 is indeed better than v3.
However, for noise-only signals, I've been getting consistently 2-3x worse results from v4 w.r.t. v3 on ESC-50, UrbanSound8K and FSD50K. This is especially concerning in an always-on scenario (let's say an "in-the-wild" one) where the VAD is used as a pre-processing front-end to avoid calling a more power-hungry system, which is often the case.
The following table shows the values for the error-rate metric, namely `1 - acc`, where `acc` is sklearn's `accuracy_score`, so lower means better; the best results are highlighted in bold. The numbers being measured are the sigmoid'ed outputs of both models' forward method (returned early from the `get_speech_timestamps()` utility), with a threshold of 0.5 and a window size of 1536 samples.

I'm sharing the uttids of the files I've been using in my experiments. It is not exactly ready to go, because I resegmented and dumped resampled versions of the datasets to disk, but I believe it should be useful and even reproducible if necessary. The format is `uttid,bos,eos,label`, where BOS and EOS are the start and end of a speech segment; a value of -1 in those fields means there is no speech segment at all 😄 test_files.tar.gz

A sketch of the windowing and scoring procedure is given after this paragraph.
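For reference, a minimal sketch of that procedure (not my exact code). Tensor names differ between the v3 and v4 ONNX exports (e.g. `h0`/`c0` vs. `h`/`c`, and v4 additionally takes an `sr` input), so the feed dict is built from `session.get_inputs()`; the `(2, 1, 64)` state shape and the ordering of the state outputs are assumptions to verify against your export:

```python
import numpy as np
import onnxruntime as ort
from sklearn.metrics import accuracy_score

WINDOW = 1536      # window size in samples at 16 kHz
THRESHOLD = 0.5    # decision threshold on the speech probability

def frame_probs(session, audio, sr=16000):
    """One speech probability per non-overlapping 1536-sample window."""
    in_names = [i.name for i in session.get_inputs()]
    # recurrent-state input names differ across exports (h0/c0 in v3, h/c in v4)
    state_names = [n for n in in_names if n not in ('input', 'sr')]
    state = {n: np.zeros((2, 1, 64), dtype=np.float32) for n in state_names}
    probs = []
    for start in range(0, len(audio) - WINDOW + 1, WINDOW):
        feed = {'input': audio[start:start + WINDOW][None, :], **state}
        if 'sr' in in_names:  # the v4 export also takes the sample rate
            feed['sr'] = np.array(sr, dtype=np.int64)
        # assumes the state outputs follow the probability output, in the
        # same order as the corresponding state inputs
        out, *new_state = session.run(None, feed)
        state = dict(zip(state_names, new_state))
        # the exports already return probabilities; apply a sigmoid here
        # instead if your export returns raw logits
        probs.append(float(np.asarray(out).squeeze()))
    return np.array(probs)

# error rate on a noise-only clip: the true label of every window is 0
session = ort.InferenceSession('silero_vad.onnx')      # path is illustrative
audio = np.random.randn(16000 * 5).astype(np.float32)  # stand-in for a real file
preds = (frame_probs(session, audio) > THRESHOLD).astype(int)
print('1 - acc =', 1.0 - accuracy_score(np.zeros_like(preds), preds))
```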
My environment:
Finally, some questions:
Thanks!