I have reproduced the AISHELL result, but when I check the weight files, I find that model-avg10 is about one fifth the size of the other checkpoints.
I only see a mean being computed in average_checkpoints, so how can this process shrink the model, as quantization does?
For comparison I also ran the distiller, and model-avg10 is almost the same size as the quantized model.
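For reference, the core of checkpoint averaging is just an element-wise mean over the saved parameters; nothing in that operation changes the parameter count or dtype. The sketch below is a simplified, hypothetical stand-in for what average_checkpoints does: a real implementation would load PyTorch state dicts with torch.load, but plain floats are used here to keep the example self-contained.

```python
# Hedged sketch of checkpoint averaging (not the repo's actual script).
# Each "checkpoint" here is a dict mapping parameter names to values;
# in practice these would be tensors from torch.load(path)["model_state_dict"].

def average_checkpoints(checkpoints):
    """Return the element-wise mean of several model state dicts."""
    n = len(checkpoints)
    averaged = {}
    for key in checkpoints[0]:
        averaged[key] = sum(ckpt[key] for ckpt in checkpoints) / n
    return averaged

# Three toy "state dicts" standing in for the last three epoch checkpoints.
ckpts = [
    {"w": 1.0, "b": 0.0},
    {"w": 2.0, "b": 3.0},
    {"w": 3.0, "b": 6.0},
]
print(average_checkpoints(ckpts))  # {'w': 2.0, 'b': 3.0}
```

Since the output has exactly the same keys and shapes as each input, averaging by itself cannot reduce file size; a smaller averaged file usually means the per-epoch checkpoints store extra state (e.g. optimizer buffers) that the averaged file omits.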
@hirofumi0810 Why compute the model average? Is it for speeding up inference? I tested both, and it doesn't seem to help much: on my test set of 30 thousand utterances, the original model.epoch-25 took 15+ hours, while the averaged model took 12 hours. Do you have any suggestions for speeding up the process, hiro? Thank you very much.
@lfgogogo If you have a lot of training data, checkpoint averaging might be ineffective. Please try changing n_average in score.sh.
But averaging is not related to speed performance at all.