Closed
159 commits
ac873d0
sequence: Basic outline of header files for sequence training of nnet3
vimalmanohar Dec 19, 2015
a146504
sequence: Further work on sequence training code
vimalmanohar Dec 21, 2015
774d934
sequence: Added functions to split and merge supervisions
vimalmanohar Dec 22, 2015
43d1b22
sequence: Further additions to sequence training
vimalmanohar Dec 24, 2015
1fc4aa2
sequence: Added discriminative training functions
vimalmanohar Dec 26, 2015
a68f426
sequence: Added binary files and modified lattice functions
vimalmanohar Dec 27, 2015
d4fd87a
sequence: Minor fix
vimalmanohar Dec 27, 2015
332dbd2
sequence: nnet3-discriminative-train program added
vimalmanohar Dec 28, 2015
81229bb
sequence: Modified some scripts to support new additions related to s…
vimalmanohar Dec 30, 2015
79f3e8e
sequence: Added scripts for discriminative training in nnet3
vimalmanohar Dec 30, 2015
3c9680e
align_fix: Added --use-gpu option to nnet3-align-compiled
vimalmanohar Dec 30, 2015
7ebeacb
changes regarding determinization of chain egs, merging from another …
danpovey Jan 7, 2016
6848f31
Merged ApplySignum from snr_clean
vimalmanohar Nov 19, 2015
fed777f
jesus branch: adding debugged version of determinization changes, to …
danpovey Jan 7, 2016
e210ccb
improvement to steps/nnet3/chain/get_egs.sh script for speed and memo…
danpovey Jan 7, 2016
8d342eb
nnet3_to_dot.py : Added support for round descriptors and removed bug…
vijayaditya Jan 7, 2016
b5fb3f9
Merge pull request #439 from vijayaditya/new_feature
danpovey Jan 7, 2016
63ce5f3
sequence: Fixed bug in Mpe variants
vimalmanohar Jan 7, 2016
116a77d
sequence: Fixed bugs in discriminative egs creation and added acousti…
vimalmanohar Jan 7, 2016
081d2fe
sequence: Added OtherStlVectorComparator in stl-utils.h
vimalmanohar Jan 7, 2016
1989a70
sequence: steps/nnet3: Modifications to discriminative scripts
vimalmanohar Jan 7, 2016
e97c558
sequence: swbd/s5c/local: run_discriminative script minor modification
vimalmanohar Jan 7, 2016
2f31251
sequence: Adding extract-column, vector-to-feat from snr
vimalmanohar Jan 7, 2016
7e5b1aa
jesus branch: merging changes from chain branch (mostly merged in tur…
danpovey Jan 8, 2016
29258f2
Optimizations to cudamatrix library
danpovey Jan 7, 2016
75b88b6
some fixes to previous commit regarding cudamatrix optimizations
danpovey Jan 8, 2016
8df46b8
master: merging most code changes from chain branch (but not the actu…
danpovey Jan 8, 2016
460da6f
minor documentation clarification
danpovey Jan 6, 2016
bc13196
cudamatrix: fixes to kernels that were using __syncthreads incorrectl…
danpovey Jan 8, 2016
343a712
minor documentation change: change recommended directory name.
danpovey Jan 8, 2016
8d1dcbd
documentation change: add entry to glossary
danpovey Jan 9, 2016
1cc28da
Add speed test for cudamatrix ApplyHeaviside that was missing
danpovey Jan 9, 2016
93edd3c
sequence: Major bug fix
vimalmanohar Jan 9, 2016
6a04924
sequence: Support averaging in vector-sum because sometimes it is pai…
vimalmanohar Jan 9, 2016
3033536
Fixed typo in nnet3/decode.sh
vimalmanohar Jan 9, 2016
c225be1
sequence: Minor changes to discriminative scripts
vimalmanohar Jan 9, 2016
8ae4275
sequence: Added preliminary version of script to adjust priors
vimalmanohar Jan 9, 2016
d3c7778
Modification to configure script to touch cudamatrix/cu-common.h when…
danpovey Jan 9, 2016
ab26eca
jesus branch: various new example scripts, and a couple new script op…
danpovey Jan 10, 2016
f9653dc
jesus branch: --extra-left-context-initial and --extra-right-context-…
danpovey Jan 10, 2016
c32b235
Tabs to spaces
Jan 11, 2016
cc50db7
Add note about patch failing due to line endings
Jan 11, 2016
1aac4b2
Don't convert patch file line endings to LF
Jan 11, 2016
40136da
Removed note about git archive mangling patch line endings
Jan 11, 2016
50d8431
Don't mangle patch file line endings in all directories
Jan 11, 2016
57e7f78
Change line endings in Windows patch file to CRLF
Jan 11, 2016
7f00d46
bug-fix to previous commit about --extra-right-context-final, error i…
danpovey Jan 11, 2016
bf1cdfb
jesus branch: some example-script changes
danpovey Jan 11, 2016
623e05f
fixes to chain derivative computation to prevent case when alphas bec…
danpovey Jan 11, 2016
6d3e499
fixes to chain derivative computation to prevent case when alphas bec…
danpovey Jan 11, 2016
b478057
modify extract_ivectors_online.sh to properly respect --online-ivecto…
danpovey Jan 12, 2016
ebb98fe
chain branch: merging changes from master
danpovey Jan 12, 2016
155dbc0
fix to how CUDA block and grid sizes are computed for common operatio…
danpovey Jan 12, 2016
723275c
chain branch: merging upstream/master
danpovey Jan 12, 2016
22207d4
Fix an overflow-related failure of chain training, identified by Xian…
danpovey Jan 13, 2016
ffa3146
script change [jesus branch: tuning-related]
danpovey Jan 13, 2016
e2a01c0
Fix an overflow-related failure of chain training, identified by Xian…
danpovey Jan 13, 2016
2fa82dc
hardening the chain computation a little more against numerical overf…
danpovey Jan 13, 2016
deb5dbe
jesus branch: new example scripts
danpovey Jan 13, 2016
d56c870
jesus branch: adding support for new-style derivative weights
danpovey Jan 13, 2016
4c938a8
fix regarding how return status from denominator computation is handled
danpovey Jan 13, 2016
1017986
Merge pull request #444 from Timmmm/windows_line_endings
jtrmal Jan 13, 2016
ad004c4
Merge pull request #445 from Timmmm/windows_docs
jtrmal Jan 13, 2016
138a9c8
Clarify the generate_solution.pl command in the Windows INSTALL
Jan 13, 2016
98e45e2
Formatting clean up and convert INSTALL to markdown
Jan 13, 2016
1ee5616
Rename INSTALL to INSTALL.md so github renders it.
Jan 13, 2016
4456ab0
Github markdown requires more spaces
Timmmm Jan 13, 2016
c976566
Adding code to support l2 regularization in training chain models
danpovey Jan 14, 2016
0dc60fb
Merging changes from jesus branch into jesus-cut-zero branch, RE l2 r…
danpovey Jan 14, 2016
a729d9c
Bug-fixes to previous commit, RE l2 regularization
danpovey Jan 14, 2016
f955354
jesus-cut-zero: merging changes from jesus branch
danpovey Jan 14, 2016
d3239d0
jesus branch: merging changes from jesus-cut-zero: example script cha…
danpovey Jan 14, 2016
242ec8c
jesus branch: merging changes from jesus-cut-zero branch... although …
danpovey Jan 15, 2016
1fce153
Implement --convert-repeated-to-block option in nnet3-am-copy.
galv Jan 13, 2016
c903384
jesus branch: merging changes from chain branch
danpovey Jan 15, 2016
1698b39
minor documentation change
danpovey Jan 14, 2016
83b94ae
bug fix to a rather old script: get_lda_block.sh (in fact this revert…
danpovey Jan 15, 2016
7c68d0c
Style fixes.
galv Jan 15, 2016
4fa93e0
some script cleanup for jesus-layer related scripts
danpovey Jan 15, 2016
f8be514
removing deprecated jesus-layer-related script
danpovey Jan 15, 2016
7e0cc88
chain branch: some more recent tuning scripts, testing l2-regularizat…
danpovey Jan 16, 2016
2bdc3a5
Merge pull request #451 from galv/repeatedtoblockaffine
danpovey Jan 17, 2016
43e7f58
sequence: Some cleanup of discriminative training
vimalmanohar Jan 18, 2016
8a7f43f
sequence: Normalizing the initial and final scores added while splitt…
vimalmanohar Jan 18, 2016
d73bab8
sequence: Modified nnet-discriminative-diagnostics to support more op…
vimalmanohar Jan 18, 2016
4555567
sequence: Added nnet3-modify-learning-rates.cc
vimalmanohar Jan 18, 2016
f845099
sequence: Minor cosmetic change to nnet-discriminative-example.cc
vimalmanohar Jan 18, 2016
55ea7c9
sequence: Added some testing code to nnet3 discriminative training
vimalmanohar Jan 18, 2016
e0cbd32
Separated ARPA parsing from const LM construction
Jan 19, 2016
9abd21c
Clarify that MKL and OpenBLAS are alternatives.
Jan 19, 2016
354d7c2
Added PerDimWeightedAverage component for chain models in SWBD, works…
vijayaditya Jan 21, 2016
763676a
Merge pull request #462 from vijayaditya/chain_feature2
danpovey Jan 21, 2016
a629bc8
minor bug correction in steps/nnet3/tdnn/make_configs.py
vijayaditya Jan 21, 2016
7878aae
Merge pull request #463 from vijayaditya/chain_feature2
danpovey Jan 22, 2016
6265183
Improved error messages for ARPA file parsing
Jan 22, 2016
a140bdc
Minor bug removal in tdnn config generator
vijayaditya Jan 22, 2016
c2129e5
Merge pull request #465 from vijayaditya/chain_feature2
danpovey Jan 22, 2016
981db1e
minor bug removal in local/chain/run_tdnn_4q.sh
vijayaditya Jan 22, 2016
1c56724
Merge pull request #466 from vijayaditya/chain_feature2
danpovey Jan 22, 2016
1aab0b6
Changes per @danpovey's review in #458
Jan 22, 2016
61a551d
Merge pull request #458 from kkm000/arpa-1
danpovey Jan 22, 2016
bd0208f
More fixes for bugs introduced due to PR #462
vijayaditya Jan 23, 2016
50f2083
Merge pull request #467 from vijayaditya/chain_feature2
danpovey Jan 23, 2016
549af84
Merge pull request #448 from Timmmm/windows_docs
jtrmal Jan 23, 2016
8f2aa7f
chain branch: adding 4r example script with slightly better jesus-lay…
danpovey Jan 23, 2016
54960bc
implementing leaky-hmm idea
danpovey Jan 22, 2016
5ecc1de
chain-leaky-hmm branch: adding leaky-hmm options to script, and vario…
danpovey Jan 23, 2016
4efcde7
Adding code and script options for cross-entropy regularization of ch…
danpovey Jan 24, 2016
9ef4a3a
merging chain-xent changes in to leaky-hmm branch for convenient test…
danpovey Jan 24, 2016
e37091e
Various bug-fixes for the chain-xent (cross-entropy smoothing of chai…
danpovey Jan 24, 2016
6d1da0d
chain branch: various tuning scripts for swbd/s5c, demonstrating leak…
danpovey Jan 24, 2016
c3fedd1
chain branch: Adding results for 4v (regarding cross-entropy regulari…
danpovey Jan 24, 2016
ffcb552
Code simplification and cleanup that was enabled by the implementatio…
danpovey Jan 25, 2016
4d42ea2
chain branch: add sorting on num-transitions (for very tiny speedup).
danpovey Jan 25, 2016
ca2772b
chain branch: adding results to a script; a couple of new scripts.
danpovey Jan 25, 2016
3342530
chain branch: script edits with new results shown.
danpovey Jan 26, 2016
2f585de
Merge branch 'master' into chain
danpovey Jan 26, 2016
e46406b
change to arpa-file-parser.cc to suppress spurious compiler warning
danpovey Jan 26, 2016
a422890
Fix compilation issues when using KALDI_DOUBLEPRECISION=1 caused by h…
funous Jan 26, 2016
3431b74
chain branch: various new tuning-scripts for chain model.
danpovey Jan 27, 2016
8676636
chain branch: more Switchboard tuning scripts, with results.
danpovey Jan 27, 2016
f587473
remove redundant type hints
funous Jan 28, 2016
a125eb4
Added some scripts for parsing nnet3 logs
vijayaditya Jan 28, 2016
37261b5
chain models: add results from tuning scripts
danpovey Jan 28, 2016
e418aa5
Merge pull request #471 from vijayaditya/new_feature2
danpovey Jan 28, 2016
75a8334
Merge pull request #470 from speechmatics/bugfix-doubleprecision
danpovey Jan 28, 2016
2cf78a7
confidence calibration: the lattice depths in data preparation are no…
KarelVesely84 Jan 29, 2016
b515e03
Copy/paste bug in nnet3/get_egs.sh when using fMLLR
alumae Jan 29, 2016
5b98994
Merge pull request #472 from alumae/patch-1
danpovey Jan 29, 2016
d165318
minor bug-fix in cudamatrix code (forgot to check error status)
danpovey Jan 12, 2016
eb13407
selectively merging from xiaohui-zhang's branch, some code for writin…
danpovey Jan 29, 2016
ad7951d
Merge remote-tracking branch 'upstream/master' into chain
danpovey Jan 29, 2016
6294bf3
sequence: Minor changes to nnet3 sequence get working code
vimalmanohar Jan 31, 2016
9bfb6fa
sequence: Added a version of lattice-determinize that saves the acous…
vimalmanohar Jan 31, 2016
d3ce34a
sequence: Added test functions and also converting nnet2 to nnet3 degs
vimalmanohar Jan 31, 2016
e8d0b75
sequence: nnet3 sequence scripts
vimalmanohar Jan 31, 2016
accfc59
sequence: Minor fix in nnet3/decode.sh
vimalmanohar Jan 31, 2016
22e4df2
sequence: Fixed bug in nnet2 sequence training that did not take cont…
vimalmanohar Jan 31, 2016
4630e94
sequence: nnet3 sequence scripts in swbd and wsj
vimalmanohar Jan 31, 2016
7d9c43b
sequence: minor modifications to discriminative nnet2 scripts
vimalmanohar Feb 1, 2016
a1e930c
sequence: Bug fix to write frame_subsampling_factor to nnet3 info dir
vimalmanohar Feb 1, 2016
7da9a8c
sequence: Bug fix to scoring scripts
vimalmanohar Feb 1, 2016
1bcb8e1
snr: Added length tolerance to nnet3-get-egs
vimalmanohar Feb 1, 2016
b3f05c8
chain branch: minor bugfix (remove duplicately registered option)
danpovey Feb 3, 2016
9280600
sequence: Minor fixes
vimalmanohar Feb 3, 2016
55ba492
sequence: Fixed sequence script
vimalmanohar Feb 3, 2016
506c3e3
sequence: Added some options to top level scripts
vimalmanohar Feb 3, 2016
b7dea22
sequence: restoring old behavior of smbr for now
vimalmanohar Feb 3, 2016
9bf0acf
sequence: Fixed some bugs in nnet3/nnet-utils.cc
vimalmanohar Feb 3, 2016
3f20fa6
snr: Added weights option to online2bin/ivector-extract-online2.cc
vimalmanohar Feb 12, 2016
34f2ffe
sequence: Fixed bug to directly read from normal Lattice type and add…
vimalmanohar Feb 14, 2016
b5c2412
sequence: Made a bunch of script modifications to support chain model…
vimalmanohar Feb 14, 2016
e2e676a
sequence: Moved around some functions common to chain and discriminat…
vimalmanohar Feb 14, 2016
707ea5e
sequence: Support reading subset in lattice-copy
vimalmanohar Feb 14, 2016
fc62a1c
sequence: swbd discriminative scripts top level
vimalmanohar Feb 14, 2016
502d25e
sequence: Added python explicitly for calling python scripts
vimalmanohar Feb 14, 2016
25acfaa
sequence: nnet3 adjust priors script
vimalmanohar Feb 14, 2016
05b21dc
semisup: Modified scoring script to get confidences
vimalmanohar Feb 14, 2016
3d2129a
sequence: Added some programs to work with degs
vimalmanohar Feb 14, 2016
2 changes: 2 additions & 0 deletions .gitattributes
Original file line number Diff line number Diff line change
Expand Up @@ -15,4 +15,6 @@ windows/INSTALL* eol=native
windows/NewGuidCmd.exe.config text eol=crlf
windows/NewGuidCmd.exe binary

# Prevent git changing CR-LF to LF when archiving (patch requires CR-LF on Windows).
**/*.patch -text
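The `-text` attribute above tells git not to apply end-of-line conversion to matching paths. As a quick sanity check (not part of the PR; the repo and file name below are made up), `git check-attr` reports the effective attribute for a path:

```shell
# Minimal sketch, assuming git is installed; "demo" and "windows/fix.patch"
# are hypothetical names used only to illustrate the attribute lookup.
cd "$(mktemp -d)"
git init -q demo && cd demo
printf '**/*.patch -text\n' > .gitattributes
# "text: unset" means git will leave line endings in this file alone.
git check-attr text -- windows/fix.patch
```

Here "unset" (as opposed to "set" or "auto") is exactly what `-text` produces, which is why CRLF patch files survive checkout and `git archive` on Windows.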

12 changes: 12 additions & 0 deletions egs/swbd/s5c/local/chain/README.txt
@@ -0,0 +1,12 @@

there are a lot of tuning experiments here.

ones to look at right now:
2y is a TDNN baseline
4f is a good jesus-layer system
4q is an improved TDNN with various bells and whistles from Vijay.
4r is a slightly-better jesus-layer system than 4f, with one more layer.
5e is the best configuration run so far.



224 changes: 224 additions & 0 deletions egs/swbd/s5c/local/chain/run_discriminative.sh
@@ -0,0 +1,224 @@
#!/bin/bash

set -e
set -o pipefail

# this is run_discriminative.sh

# This script does discriminative training on top of an nnet3 system.
# Note: this relies on having a cluster with plenty of CPUs as well as GPUs,
# since the lattice generation runs in about real time and so takes on the
# order of 1000 hours of CPU time.
#
# Note: rather than using any features we have dumped on disk, this script
# regenerates them from the wav data three times: when we do lattice
# generation, numerator alignment and discriminative training. This made the
# script easier to write and more generic, because we don't have to know where
# the features and the iVectors are, but of course it's a little inefficient.
# The time taken is dominated by the lattice generation anyway, so this isn't
# a huge deal.

. cmd.sh


stage=0
get_egs_stage=-10
use_gpu=true
srcdir=exp/chain/tdnn_5e_sp
criterion=smbr
drop_frames=false # only matters for MMI.
frames_per_eg=150
frames_overlap_per_eg=30
effective_learning_rate=0.0000125
max_param_change=1
num_jobs_nnet=4
train_stage=-10 # can be used to start training in the middle.
decode_start_epoch=1 # can be used to avoid decoding all epochs, e.g. if we decided to run more.
num_epochs=4
degs_dir=
cleanup=false # run with --cleanup true --stage 6 to clean up (remove large things like denlats,
# alignments and degs).
regularization_opts=
lats_dir=
train_data_dir=data/train_nodup_sp_hires
online_ivector_dir=exp/nnet3/ivectors_train_nodup_sp
one_silence_class=true
truncate_deriv_weights=10
minibatch_size=64
has_fisher=true # set false if the Fisher LM rescoring data is unavailable (referenced at decode time below).

adjust_priors=true

determinize=true
minimize=true
remove_output_symbols=true
remove_epsilons=true
collapse_transition_ids=true

modify_learning_rates=true
last_layer_factor=1.0

. ./path.sh
. ./utils/parse_options.sh


if $use_gpu; then
  if ! cuda-compiled; then
    cat <<EOF && exit 1
This script is intended to be used with GPUs but you have not compiled Kaldi with CUDA
If you want to use GPUs (and have them), go to src/, and configure and make on a machine
where "nvcc" is installed. Otherwise, call this script with --use-gpu false
EOF
  fi
  num_threads=1
else
  # Use 4 nnet jobs just like run_4d_gpu.sh so the results should be
  # almost the same, but this may be a little bit slow.
  num_threads=16
fi

if [ ! -f ${srcdir}/final.mdl ]; then
  echo "$0: expected ${srcdir}/final.mdl to exist; first run run_tdnn.sh or run_lstm.sh"
  exit 1;
fi

lang=data/lang

if [ $stage -le 1 ]; then
  # hardcode no-GPU for alignment, although you could use GPU [you wouldn't
  # get excellent GPU utilization though.]
  nj=350 # have a high number of jobs because this could take a while, and we
         # might have some stragglers.
  use_gpu=no
  gpu_opts=

  steps/nnet3/align.sh --cmd "$decode_cmd $gpu_opts" --use-gpu "$use_gpu" \
    --online-ivector-dir $online_ivector_dir \
    --scale-opts "--transition-scale=1.0 --acoustic-scale=1.0 --self-loop-scale=1.0" \
    --nj $nj $train_data_dir $lang $srcdir ${srcdir}_ali || exit 1;
fi

if [ -z "$lats_dir" ]; then
  lats_dir=${srcdir}_denlats
  if [ $stage -le 2 ]; then
    nj=50 # this doesn't really affect anything strongly, except the num-jobs for one of
          # the phases of get_egs_discriminative2.sh below.
    num_threads_denlats=6
    subsplit=40 # number of jobs that run per job (but 2 run at a time, so total
                # jobs is 80, giving total slots = 80 * 6 = 480).
    steps/nnet3/make_denlats.sh --cmd "$decode_cmd --mem 1G --num-threads $num_threads_denlats" \
      --self-loop-scale 1.0 --acwt 1.0 --extra-left-context 20 \
      --online-ivector-dir $online_ivector_dir --determinize $determinize \
      --nj $nj --sub-split $subsplit --num-threads "$num_threads_denlats" --config conf/decode.config \
      $train_data_dir $lang $srcdir ${lats_dir} || exit 1;
  fi
fi

left_context=`nnet3-am-info $srcdir/final.mdl | grep "left-context:" | awk '{print $2}'` || exit 1
right_context=`nnet3-am-info $srcdir/final.mdl | grep "right-context:" | awk '{print $2}'` || exit 1
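The two lines above scrape integer fields out of `nnet3-am-info` output with a `grep`/`awk` pipeline. A stand-alone illustration of the same pattern, using fabricated model-info text (the values and field list are hypothetical, only the parsing is being demonstrated):

```shell
# Fake nnet3-am-info output; real output has many more fields.
info="input-dim: 40
left-context: 16
right-context: 12"
# grep picks the matching line; awk prints the second whitespace-separated field.
left_context=$(echo "$info" | grep "left-context:" | awk '{print $2}')
right_context=$(echo "$info" | grep "right-context:" | awk '{print $2}')
echo "$left_context $right_context"   # → 16 12
```

Note that `grep "left-context:"` does not match the `right-context:` line, since the pattern is a plain substring match against each whole line.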

frame_subsampling_opt=
if [ -f $srcdir/frame_subsampling_factor ]; then
  frame_subsampling_opt="--frame-subsampling-factor $(cat $srcdir/frame_subsampling_factor)"
fi

cmvn_opts=`cat $srcdir/cmvn_opts` || exit 1

if [ -z "$degs_dir" ]; then
  degs_dir=${srcdir}_degs_n${frames_per_eg}_o${frames_overlap_per_eg}_f
  if $determinize; then
    degs_dir=${degs_dir}d
  fi
  if $minimize; then
    degs_dir=${degs_dir}m
  fi
  if $remove_output_symbols; then
    degs_dir=${degs_dir}r
  fi
  if $remove_epsilons; then
    degs_dir=${degs_dir}e
  fi
  if $collapse_transition_ids; then
    degs_dir=${degs_dir}c
  fi

  if [ $stage -le 3 ]; then
    if [[ $(hostname -f) == *.clsp.jhu.edu ]] && [ ! -d ${srcdir}_degs/storage ]; then
      utils/create_split_dir.pl \
        /export/b0{1,2,5,6}/$USER/kaldi-data/egs/swbd-$(date +'%m_%d_%H_%M')/s5/${srcdir}_degs/storage ${srcdir}_degs/storage
    fi
    # have a higher maximum num-jobs if the distributed storage directory exists.
    if [ -d ${srcdir}_degs/storage ]; then max_jobs=10; else max_jobs=5; fi

    degs_opts="--determinize $determinize --minimize $minimize --remove-output-symbols $remove_output_symbols --remove-epsilons $remove_epsilons --collapse-transition-ids $collapse_transition_ids"

    steps/nnet3/get_egs_discriminative.sh \
      --cmd "$decode_cmd --max-jobs-run $max_jobs --mem 20G" --stage $get_egs_stage --cmvn-opts "$cmvn_opts" \
      --adjust-priors $adjust_priors --acwt 1.0 \
      --online-ivector-dir $online_ivector_dir --left-context $left_context --right-context $right_context $frame_subsampling_opt \
      --criterion $criterion --frames-per-eg $frames_per_eg --frames-overlap-per-eg $frames_overlap_per_eg ${degs_opts} \
      $train_data_dir $lang ${srcdir}_ali $lats_dir $srcdir/final.mdl $degs_dir || exit 1;
  fi
fi

d=`basename $degs_dir`
dir=${srcdir}_${criterion}_${effective_learning_rate}_degs${d##*degs}_ms${minibatch_size}

if $one_silence_class; then
  dir=${dir}_onesil
fi

if $modify_learning_rates; then
  dir=${dir}_modify
fi

if [ "$last_layer_factor" != "1.0" ]; then
  dir=${dir}_llf$last_layer_factor
fi
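The `${d##*degs}` expansion used in the `dir=` assignment above strips the longest prefix matching `*degs`, i.e. everything up to and including the last occurrence of `degs`, so only the option-suffix of the degs directory name is carried over into the training directory name. A small sketch with a made-up directory name:

```shell
# Hypothetical degs dir basename, shaped like the ones this script builds.
d=tdnn_5e_sp_degs_n150_o30_fdm
# ##*degs removes the longest prefix ending in "degs", leaving the suffix.
echo "degs${d##*degs}"   # → degs_n150_o30_fdm
```

Using `##` (longest match) rather than `#` (shortest match) matters only if the name contained `degs` more than once; here they would behave the same.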

if [ $stage -le 4 ]; then
  bash -x steps/nnet3/train_discriminative.sh --cmd "$decode_cmd" \
    --stage $train_stage \
    --effective-lrate $effective_learning_rate --max-param-change $max_param_change \
    --criterion $criterion --drop-frames $drop_frames --acoustic-scale 1.0 \
    --num-epochs $num_epochs --one-silence-class $one_silence_class --minibatch-size $minibatch_size \
    --num-jobs-nnet $num_jobs_nnet --num-threads $num_threads \
    --regularization-opts "$regularization_opts" \
    --truncate-deriv-weights $truncate_deriv_weights --adjust-priors $adjust_priors \
    --modify-learning-rates $modify_learning_rates --last-layer-factor $last_layer_factor \
    ${degs_dir} $dir || exit 1;
fi

decode_suff=sw1_tg
graph_dir=$srcdir/graph_sw1_tg
if [ $stage -le 14 ]; then
  for decode_set in train_dev eval2000; do
    (
      steps/nnet3/decode.sh --acwt 1.0 --post-decode-acwt 10.0 \
        --extra-left-context 20 \
        --nj 50 --cmd "$decode_cmd" \
        --online-ivector-dir exp/nnet3/ivectors_${decode_set} \
        $graph_dir data/${decode_set}_hires $dir/decode_${decode_set}_${decode_suff} || exit 1;
      if $has_fisher; then
        steps/lmrescore_const_arpa.sh --cmd "$decode_cmd" \
          data/lang_sw1_{tg,fsh_fg} data/${decode_set}_hires \
          $dir/decode_${decode_set}_sw1_{tg,fsh_fg} || exit 1;
      fi
    ) &
  done
fi
wait;

if [ $stage -le 6 ] && $cleanup; then
  # if you run with "--cleanup true --stage 6" you can clean up.
  rm ${lats_dir}/lat.*.gz || true
  rm ${srcdir}_ali/ali.*.gz || true
  steps/nnet2/remove_egs.sh ${srcdir}_degs || true
fi


exit 0;


2 changes: 1 addition & 1 deletion egs/swbd/s5c/local/chain/run_tdnn_2e.sh
Expand Up @@ -276,4 +276,4 @@ b01:s5c: for l in y 2b 2e; do grep WER exp/chain/tdnn_${l}_sp/decode_train_dev_s
b01:s5c: for l in y 2b 2e; do grep WER exp/chain/tdnn_${l}_sp/decode_train_dev_sw1_fsh_fg/wer_* | utils/best_wer.sh ; done
%WER 16.57 [ 8155 / 49204, 1144 ins, 1988 del, 5023 sub ] exp/chain/tdnn_y_sp/decode_train_dev_sw1_fsh_fg/wer_11_0.0
%WER 16.83 [ 8282 / 49204, 1106 ins, 2115 del, 5061 sub ] exp/chain/tdnn_2b_sp/decode_train_dev_sw1_fsh_fg/wer_12_0.0
%WER 16.79 [ 8260 / 49204, 1090 ins, 2138 del, 5032 sub ] exp/chain/tdnn_2e_sp/decode_train_dev_sw1_fsh_fg/wer_12_0.0
2 changes: 1 addition & 1 deletion egs/swbd/s5c/local/chain/run_tdnn_2r.sh
Expand Up @@ -301,4 +301,4 @@ LOG (lattice-best-path:main():lattice-best-path.cc:99) For utterance sp1.0-sw028
LOG (lattice-best-path:main():lattice-best-path.cc:124) Overall score per frame is 46.9461 = 0.0637047 [graph] + 46.8824 [acoustic] over 843 frames.
LOG (lattice-best-path:main():lattice-best-path.cc:128) Done 1 lattices, failed for 0
LOG (ali-to-phones:main():ali-to-phones.cc:134) Done 1 utterances.
sp1.0-sw02859-B_050239-051084 sil ow_S ay_B k_I m_I ax_I n_E hh_B ih_I m_I s_I eh_I l_I f_E ih_B f_E hh_B iy_E hh_B ae_I d_E s_B ah_I m_E t_B ae_I l_I ih_I n_I t_E ax_B r_I aw_I n_I d_E ay_S th_B ih_I ng_I k_E dh_B ey_I d_E b_B iy_E ax_S s_B uw_I p_I er_E t_B iy_I m_E b_B ah_I t_E hh_B iy_E k_B ae_I n_I t_E d_B uw_E ih_B t_E b_B ay_E hh_B ih_I m_I s_I eh_I l_I f_E hh_B iy_I z_E g_B aa_I t_E t_B ax_E hh_B ae_I v_E ax_S l_B ay_I n_E ih_B n_E f_B r_I ah_I n_I t_E ah_B v_E hh_B ih_I m_E dh_B ae_I t_E n_B ow_I z_E hh_B aw_E t_B ax_E b_B l_I aa_I k_E sil