98 changes: 98 additions & 0 deletions egs/gop/README.md
@@ -0,0 +1,98 @@
There is a copy of this document on Google Docs, which renders the equations better:
[link](https://docs.google.com/document/d/1pie-PU6u2NZZC_FzocBGGm6mpfBJMiCft9UoG0uA1kA/edit?usp=sharing)

* * *

# GOP on Kaldi

The Goodness of Pronunciation (GOP) is a variation of the posterior probability used for phone-level pronunciation scoring.
GOP is widely used in pronunciation evaluation and mispronunciation detection tasks.

This implementation is mainly based on the following paper:

Hu, W., Qian, Y., Soong, F. K., & Wang, Y. (2015). Improved mispronunciation detection with deep neural network trained acoustic models and transfer learning based logistic regression classifiers. Speech Communication, 67(January), 154-166.

## GOP-GMM

In conventional GMM-HMM based systems, GOP was first proposed in (Witt et al., 2000). It was defined as the duration-normalised log posterior:

$$
GOP(p)=\frac{1}{t_e-t_s+1} \log p(p|\mathbf o)
$$

where $\mathbf o$ is the sequence of input observations, $p$ is the canonical phone, and $t_s, t_e$ are the start and end frame indexes.

Assuming equal phone priors, i.e. $p(q_i)\approx p(q_j)$ for any $q_i, q_j \in Q$, we have:

$$
\log p(p|\mathbf o)=\log\frac{p(\mathbf o|p)p(p)}{\sum_{q\in Q} p(\mathbf o|q)p(q)}
\approx\log\frac{p(\mathbf o|p)}{\sum_{q\in Q} p(\mathbf o|q)}
$$

where $Q$ is the whole phone set.

The numerator of the equation is calculated from the forced-alignment result, and the denominator from a Viterbi decoding with an unconstrained phone loop.

We do not implement GOP-GMM for Kaldi, as GOP-NN performs much better than GOP-GMM.

## GOP-NN

The definition of GOP-NN differs somewhat from that of GOP-GMM. GOP-NN is defined as the log phone posterior ratio between the canonical phone and the phone with the highest score (Hu et al., 2015).

First we define the Log Phone Posterior (LPP):

$$
LPP(p)=\log p(p|\mathbf o; t_s,t_e)
$$

Then we define the GOP-NN using LPP:

$$
GOP(p)=\log \frac{p(p|\mathbf o; t_s,t_e)}{\max_{q\in Q} p(q|\mathbf o; t_s,t_e)}
=LPP(p)-\max_{q\in Q} LPP(q)
$$

LPP can be calculated as:

$$
LPP(p) \approx \frac{1}{t_e-t_s+1} \sum_{t=t_s}^{t_e}\log p(p|o_t)
$$

$$
p(p|o_t) = \sum_{s \in p} p(s|o_t)
$$

where $s$ is a senone label and $\{s \mid s \in p\}$ is the set of senones belonging to those triphones whose current phone is $p$.

## Phone-level Feature

Normally the classifier-based approach achieves better performance than the GOP-based approach.

Unlike the GOP-based method, it requires an extra supervised training process, whose inputs are phone-level segmental features. The phone-level feature is defined as:

$$
{[LPP(p_1),\cdots,LPP(p_M), LPR(p_1|p_i), \cdots, LPR(p_j|p_i),\cdots]}^T
$$

where the Log Posterior Ratio (LPR) between phone $p_j$ and $p_i$ is defined as:

$$
LPR(p_j|p_i) = \log p(p_j|\mathbf o; t_s, t_e) - \log p(p_i|\mathbf o; t_s, t_e)
$$

## Implementation

This implementation consists of an executable binary, `bin/compute-gop`, and some scripts.

`compute-gop` computes GOP and extracts phone-level features using nnet output probabilities.
The output probabilities are assumed to be from a log-softmax layer.

The script `run.sh` shows a typical pipeline based on the Librispeech model and data.

In Hu's paper, GOP was computed with a feed-forward DNN. We tried using the output-xent branch of a chain model to compute GOP, but the results were not good; we suspect the HMM topology of the chain model may not be suitable for GOP.

A plain nnet3 TDNN (non-chain) model performs well for GOP computation, so this recipe uses one.

## Acknowledgement
The author of this recipe would like to thank Xingyu Na for his work on model tuning and his helpful suggestions.
13 changes: 13 additions & 0 deletions egs/gop/s5/cmd.sh
@@ -0,0 +1,13 @@
# you can change cmd.sh depending on what type of queue you are using.
# If you have no queueing system and want to run on a local machine, you
# can change all instances of 'queue.pl' to 'run.pl' (but be careful and run
# commands one by one: most recipes will exhaust the memory on your
# machine). queue.pl works with GridEngine (qsub). slurm.pl works
# with slurm. Different queues are configured differently, with different
# queue names and different ways of specifying things like memory;
# to account for these differences you can create and edit the file
# conf/queue.conf to match your queue's configuration. Search for
# conf/queue.conf in http://kaldi-asr.org/doc/queue.html for more information,
# or search for the string 'default_config' in utils/queue.pl or utils/slurm.pl.

export cmd="run.pl"
12 changes: 12 additions & 0 deletions egs/gop/s5/local/make_testcase.sh
@@ -0,0 +1,12 @@
#!/bin/bash

src=$1
dst=$2

# Select a very small set for testing
utils/subset_data_dir.sh --shortest $src 10 $dst

# Make fake transcripts to serve as negative examples
cp $dst/text $dst/text.ori
sed -i "s/ THERE / THOSE /" $dst/text
sed -i "s/ IN / ON /" $dst/text
27 changes: 27 additions & 0 deletions egs/gop/s5/path.sh
@@ -0,0 +1,27 @@
export KALDI_ROOT=`pwd`/../../..
export PATH=$PWD/utils/:$KALDI_ROOT/tools/openfst/bin:$PWD:$PATH
[ ! -f $KALDI_ROOT/tools/config/common_path.sh ] && echo >&2 "The standard file $KALDI_ROOT/tools/config/common_path.sh is not present -> Exit!" && exit 1
. $KALDI_ROOT/tools/config/common_path.sh
export LC_ALL=C

# we use this both in the (optional) LM training and the G2P-related scripts
PYTHON='python2.7'

### Below are the paths used by the optional parts of the recipe

# We only need the Festival stuff below for the optional text normalization (for LM training) step
FEST_ROOT=tools/festival
NSW_PATH=${FEST_ROOT}/festival/bin:${FEST_ROOT}/nsw/bin
export PATH=$PATH:$NSW_PATH

# SRILM is needed for LM model building
SRILM_ROOT=$KALDI_ROOT/tools/srilm
SRILM_PATH=$SRILM_ROOT/bin:$SRILM_ROOT/bin/i686-m64
export PATH=$PATH:$SRILM_PATH

# Sequitur G2P executable
sequitur=$KALDI_ROOT/tools/sequitur/g2p.py
sequitur_path="$(dirname $sequitur)/lib/$PYTHON/site-packages"

# Directory under which the LM training corpus should be extracted
LM_CORPUS_ROOT=./lm-corpus
81 changes: 81 additions & 0 deletions egs/gop/s5/run.sh
@@ -0,0 +1,81 @@
#!/bin/bash

# Copyright 2019 Junbo Zhang
# Apache 2.0

# This script shows how to calculate Goodness of Pronunciation (GOP) and
# extract phone-level pronunciation feature for mispronunciations detection
# tasks. Read ../README.md or the following paper for details:
#
# "Hu et al., Improved mispronunciation detection with deep neural network
# trained acoustic models and transfer learning based logistic regression
# classifiers, 2015."

# You might not want to do this for interactive shells.
set -e

# Before running this recipe, you have to run the librispeech recipe first.
# This script assumes the following paths exist.
librispeech_eg=../../librispeech/s5
model=$librispeech_eg/exp/nnet3_cleaned/tdnn_sp
ivector=$librispeech_eg/exp/nnet3_cleaned/ivectors_test_clean_hires
lang=$librispeech_eg/data/lang
test_data=$librispeech_eg/data/test_clean_hires

for d in $model $ivector $lang $test_data; do
[ ! -d $d ] && echo "$0: no such path $d" && exit 1;
done

# Global configurations
stage=10
nj=4

data=test_10short
dir=exp/gop_$data

. ./cmd.sh
. ./path.sh
. parse_options.sh

if [ $stage -le 10 ]; then
# Prepare test data
mkdir -p data/$data
local/make_testcase.sh $test_data data/$data
fi

if [ $stage -le 20 ]; then
# Compute Log-likelihoods
steps/nnet3/compute_output.sh --cmd "$cmd" --nj $nj \
--online-ivector-dir $ivector data/$data $model exp/probs_$data
fi

if [ $stage -le 30 ]; then
steps/nnet3/align.sh --cmd "$cmd" --nj $nj --use_gpu false \
--online_ivector_dir $ivector data/$data $lang $model $dir
fi

if [ $stage -le 40 ]; then
# Make a map which converts phones to "pure-phones".
# A "pure-phone" is a phone whose stress and position-in-word markers are removed,
# e.g. AE1_B --> AE, EH2_S --> EH, SIL --> SIL
utils/remove_symbols_from_phones.pl $lang/phones.txt $dir/phones-pure.txt \
$dir/phone-to-pure-phone.int

# Convert transition-id to pure-phone id
$cmd JOB=1:$nj $dir/log/ali_to_phones.JOB.log \
ali-to-phones --per-frame=true $model/final.mdl "ark,t:gunzip -c $dir/ali.JOB.gz|" \
"ark,t:-" \| utils/apply_map.pl -f 2- $dir/phone-to-pure-phone.int \| \
gzip -c \>$dir/ali-pure-phone.JOB.gz || exit 1;
fi

if [ $stage -le 50 ]; then
# Compute GOP and phone-level feature
$cmd JOB=1:$nj $dir/log/compute_gop.JOB.log \
compute-gop --phone-map=$dir/phone-to-pure-phone.int $model/final.mdl \
"ark,t:gunzip -c $dir/ali-pure-phone.JOB.gz|" \
"ark:exp/probs_$data/output.JOB.ark" \
"ark,t:$dir/gop.JOB.txt" "ark,t:$dir/phonefeat.JOB.txt" || exit 1;

echo "Done compute-gop, the results: \"$dir/gop.<JOB>.txt\" in posterior format."
echo "The phones whose gop values less than -5 could be treated as mispronunciations."
fi
1 change: 1 addition & 0 deletions egs/gop/s5/steps
1 change: 1 addition & 0 deletions egs/gop/s5/utils
127 changes: 0 additions & 127 deletions egs/librispeech/s5/local/nnet3/run_tdnn.sh

This file was deleted.

1 change: 1 addition & 0 deletions egs/librispeech/s5/local/nnet3/run_tdnn.sh