Added Support for Returning Attention Scores in TransformerEncoder call (keras-team#1879)
* Added: Return attention scores argument to transformer encoder
* Added: docstring for return_attention_scores and a test to check that the argument works
* Fixed: Test case by removing print statements and using self.assertAllEqual
* Fixed: Linting
training: a boolean indicating whether the layer should behave in
    training mode or in inference mode.
return_attention_scores: a boolean indicating whether the output should be
    `(attention_output, attention_scores)` if `True` or `attention_output`
    if `False`. Defaults to `False`.
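
For context, a minimal usage sketch of the new argument, assuming the `keras_nlp` package and a version that includes this change; the score shapes follow the Keras `MultiHeadAttention` convention of `(batch, num_heads, query_len, key_len)`:

```python
import numpy as np
import keras_nlp

# A small encoder: 4 attention heads, 64-unit feedforward sublayer.
encoder = keras_nlp.layers.TransformerEncoder(
    intermediate_dim=64,
    num_heads=4,
)

# A toy batch: 2 sequences of length 10 with feature size 16.
inputs = np.random.uniform(size=(2, 10, 16)).astype("float32")

# Default behavior: only the encoded sequence is returned.
outputs = encoder(inputs)

# With return_attention_scores=True, the call also returns the
# per-head attention scores from the self-attention sublayer.
outputs, scores = encoder(inputs, return_attention_scores=True)
print(outputs.shape)  # (2, 10, 16)
print(scores.shape)   # (2, 4, 10, 10)
```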