Replies: 3 comments
>>> Had
[April 14, 2020, 2:05pm]
I have found it here:
https://github.com/mozilla/TTS/blob/master/layers/losses.py#L86
```python
                x * mask, target * mask, reduction='none')
            loss = loss.mul(out_weights.to(loss.device)).sum()
        else:
            mask = mask.expand_as(x)
            loss = functional.mse_loss(
                x * mask, target * mask, reduction='sum')
            loss = loss / mask.sum()
        return loss


class AttentionEntropyLoss(nn.Module):
    # pylint: disable=R0201
    def forward(self, align):
        '''
        Forces attention to be more decisive by penalizing
        soft attention weights

        TODO: arguments
        TODO: unit_test
        '''
        entropy = torch.distributions.Categorical(probs=align).entropy()
```
What is the expected tensor size?
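For reference, here is a minimal sketch of what `torch.distributions.Categorical` does with the `align` tensor, under the assumption (not confirmed in the thread) that `align` is the decoder-to-encoder attention matrix of shape `(batch, decoder_steps, encoder_steps)`, with each decoder step normalized to a probability distribution over encoder steps:

```python
# Minimal usage sketch for AttentionEntropyLoss. Assumption (not stated in the
# thread): `align` holds attention weights of shape
# [batch, decoder_steps, encoder_steps], each decoder step being a probability
# distribution over encoder steps (typical Tacotron-style alignment).
import torch

batch, decoder_steps, encoder_steps = 8, 50, 120
logits = torch.randn(batch, decoder_steps, encoder_steps)
align = torch.softmax(logits, dim=-1)  # rows sum to 1 over the encoder axis

# Categorical treats the last dimension as the probability axis, so the
# entropy comes out with shape [batch, decoder_steps]: one value per decoder step.
entropy = torch.distributions.Categorical(probs=align).entropy()
print(entropy.shape)  # torch.Size([8, 50])

# Reducing to a scalar (e.g. a mean) gives a penalty that grows as the
# attention becomes more diffuse; the exact reduction used in the repository
# is not shown in the excerpt above.
loss = entropy.mean()
print(loss.item())
```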
[This is an archived TTS discussion thread from discourse.mozilla.org/t/how-to-use-attentionentropyloss]