Releases · bytedance/neurst
v0.1.1
Added
- PyTorch versions of the Transformer & SpeechTransformer models.
- Audio extraction for CommonVoice/IWSLT.
- Data sampler and dataset for multilingual machine translation (a temperature-based sampling sketch follows this list).
- Mixed training dataset with data sampler.
- Multilingual translation task.
- Instructions for:
  - training Transformer models on WMT14 EN->DE
  - weight pruning
  - quantization-aware training for Transformer models
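
The data sampler above mixes corpora of very different sizes. A common recipe for this in multilingual NMT (not necessarily NeurST's exact implementation) is temperature-based sampling: pair `i` is drawn with probability proportional to `n_i**(1/T)`, so larger `T` flattens the distribution toward low-resource pairs. A minimal sketch; the function names, temperature value, and corpus sizes are illustrative:

```python
import random

def sampling_weights(corpus_sizes, temperature=5.0):
    # p_i proportional to n_i**(1/T); T=1 is size-proportional sampling,
    # larger T up-weights low-resource language pairs.
    scaled = {pair: n ** (1.0 / temperature) for pair, n in corpus_sizes.items()}
    total = sum(scaled.values())
    return {pair: w / total for pair, w in scaled.items()}

def sample_pair(weights):
    # Draw one language pair according to the mixing weights.
    pairs, probs = zip(*weights.items())
    return random.choices(pairs, weights=probs, k=1)[0]

sizes = {"en-de": 4_500_000, "en-fr": 40_000_000, "en-tr": 200_000}
weights = sampling_weights(sizes, temperature=5.0)
print(weights)              # en-tr gets far more than its natural ~0.4% share
print(sample_pair(weights))
```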
Fixed
- Compatibility with TensorFlow v2.4.
v0.1.0
- Basic code structure for Encoder, Decoder, Model, DataPipeline, Tokenizer, Experiment, Metric, and Dataset.
- (Model) Adds implementation of pre-norm/post-norm Transformer, Speech Transformer, BERT, GPT-2, and Wav2Vec2.0.
- (Task) Adds implementation of sequence-to-sequence task and speech-to-text task (ASR, ST).
- (DataPipeline, Tokenizer) Adds wrappers for commonly used tokenizers: moses, bpe, jieba, character, sentencepiece, etc. (see the SentencePiece example after this list).
- (Dataset) Adds support for reading parallel corpora, speech corpora (libri-trans, MuST-C, and LibriSpeech), and TFRecords.
- (Experiment) Adds implementation of a common training procedure with mixed-precision training and various distributed strategies (MirroredStrategy, Horovod, BytePS).
- (Metric) Adds implementation of BLEU and WER metrics (a minimal WER sketch follows this list).
- (Converter) Adds implementation of converting checkpoints from Google BERT, OpenAI GPT-2, fairseq Transformer, and fairseq Wav2Vec2.0.
- Beam search decoding and top-k/p sampling (a top-k/top-p filtering sketch follows this list).
- Supports averaging checkpoints, TFRecord generation, and model restoring (see cli/README.md; a checkpoint-averaging sketch follows this list).
- Step-by-step recipes for training an end-to-end speech translation model (see examples/speech_to_text).
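
For the tokenizer wrappers listed above, the underlying libraries have small, stable APIs of their own. As one example, round-tripping text through SentencePiece directly; this uses the `sentencepiece` package's public API rather than NeurST's wrapper, and the model path is a placeholder:

```python
import sentencepiece as spm

# Load a trained SentencePiece model (path is a placeholder).
sp = spm.SentencePieceProcessor(model_file="spm.model")

pieces = sp.encode("Hello world", out_type=str)  # subword pieces, e.g. ['▁Hello', '▁world']
ids = sp.encode("Hello world", out_type=int)     # corresponding vocabulary ids
text = sp.decode(pieces)                         # back to "Hello world"
```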
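The WER metric has a standard definition: word-level edit distance divided by reference length. A minimal pure-Python sketch of that definition, not NeurST's implementation:

```python
def word_error_rate(reference, hypothesis):
    # WER = (substitutions + insertions + deletions) / len(reference),
    # computed with Levenshtein distance over whitespace-split words.
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("the cat sat on the mat", "the cat sit on mat"))  # 2/6 ≈ 0.333
```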
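Top-k/p sampling, listed alongside beam search, filters the next-token distribution before drawing from it. A framework-agnostic NumPy sketch of the filtering step (illustrative, not NeurST's decoder):

```python
import numpy as np

def top_k_top_p_sample(logits, k=0, p=1.0, temperature=1.0, rng=None):
    # Sample one token id from `logits`; k=0 disables top-k filtering,
    # p=1.0 disables nucleus (top-p) filtering.
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / temperature
    if k > 0:
        k = min(k, logits.size)
        kth = np.sort(logits)[-k]                 # k-th largest logit
        logits = np.where(logits < kth, -np.inf, logits)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    if p < 1.0:
        order = np.argsort(-probs)                # indices, most probable first
        keep = np.cumsum(probs[order]) <= p       # smallest set with mass <= p
        keep[0] = True                            # always keep the top token
        mask = np.zeros_like(probs, dtype=bool)
        mask[order[keep]] = True
        probs = np.where(mask, probs, 0.0)
        probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```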
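Checkpoint averaging (the cli item above) is simple in principle: take the element-wise mean of each weight across the last N checkpoints. A sketch using TensorFlow's public checkpoint reader; it stops at the averaged arrays, since writing them back depends on how the model object is constructed:

```python
import numpy as np
import tensorflow as tf

def average_checkpoints(ckpt_paths):
    # Element-wise mean of every variable across checkpoints of the
    # same model (variable names and shapes must match).
    readers = [tf.train.load_checkpoint(p) for p in ckpt_paths]
    averaged = {}
    for name in readers[0].get_variable_to_shape_map():
        if name == "_CHECKPOINTABLE_OBJECT_GRAPH":
            continue  # object-graph bookkeeping entry, not a weight
        averaged[name] = np.mean([r.get_tensor(name) for r in readers], axis=0)
    return averaged

# Usage (paths are placeholders): assign the returned arrays onto a
# freshly built model's variables, then save that model as usual.
# avg = average_checkpoints(["ckpt-8", "ckpt-9", "ckpt-10"])
```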