We already statistically generate word alignment information, so it should be possible to go through the parallel datasets and extract word pairs for the most commonly aligned words. Since the alignments are over subword units, the algorithm would need to match up the left side with the right side at word-like units. This could then be used to generate a dataset of single-word translations drawn from a variety of data domains. The statistical distribution of those word pairs could also be computed.
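A rough sketch of that extraction step, assuming SentencePiece-style subword tokens (new words marked with "▁") and Pharaoh-format alignments over subword positions (both are assumptions here, not confirmed details of the pipeline):

```python
from collections import Counter

def subword_to_word_index(tokens, marker="▁"):
    """Map each subword position to the index of the word it belongs to.
    Assumes SentencePiece-style tokens where a word boundary is marked
    by a leading "▁" on the first subword of the word."""
    word_idx = -1
    mapping = []
    for tok in tokens:
        if tok.startswith(marker):
            word_idx += 1
        # max() guards against a first token that lacks the marker
        mapping.append(max(word_idx, 0))
    return mapping

def count_word_pairs(src_tokens, trg_tokens, alignment, counter):
    """Accumulate aligned word pairs into `counter`.
    `alignment` is Pharaoh format ("0-0 1-2 ...") over subword positions;
    subword-level links are collapsed to the enclosing word pair."""
    src_map = subword_to_word_index(src_tokens)
    trg_map = subword_to_word_index(trg_tokens)
    src_words = "".join(src_tokens).replace("▁", " ").split()
    trg_words = "".join(trg_tokens).replace("▁", " ").split()
    for pair in alignment.split():
        s, t = (int(i) for i in pair.split("-"))
        counter[(src_words[src_map[s]], trg_words[trg_map[t]])] += 1

# e.g. "the house" / "das Haus", where "house" splits into two subwords
counts = Counter()
count_word_pairs(["▁the", "▁ho", "use"], ["▁das", "▁Haus"], "0-0 1-1 2-1", counts)
```

Running this over a corpus and keeping the highest-count pairs would give both the single-word pair list and the frequency distribution mentioned above.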
At this point each aligned word pair would be equally likely to be sampled. However, the decoder should produce a statistical distribution, so we should consider strategies for presenting multiple examples of the same words. The dataset could duplicate words according to the statistical distribution (after the deduplication step), or maybe OpusTrainer could emit the words on a given distribution as an augmentation filter of some kind.
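The first option (duplicating by frequency) could be as simple as weighted sampling from the pair counts. A minimal sketch; `sample_word_pairs` is a hypothetical helper, not an existing OpusTrainer API:

```python
import random
from collections import Counter

def sample_word_pairs(pair_counts, n, seed=0):
    """Draw n word pairs with probability proportional to their aligned
    frequency, so common translations appear more often in the dataset.
    `pair_counts` maps (src_word, trg_word) -> count."""
    rng = random.Random(seed)
    pairs = list(pair_counts.keys())
    weights = list(pair_counts.values())
    return rng.choices(pairs, weights=weights, k=n)

# A 90/10 split in counts should yield roughly a 90/10 split of samples.
counts = Counter({("the", "das"): 90, ("house", "Haus"): 10})
sampled = sample_word_pairs(counts, 1000)
```

The OpusTrainer alternative would do the same thing at training time instead of baking duplicates into the dataset, which keeps the dataset small and makes the distribution tunable per run.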