This repository was archived by the owner on Jul 7, 2023. It is now read-only.

t2t notebook attention visualization broken #917

@tomsbergmanis

Description

When running https://colab.research.google.com/github/tensorflow/tensor2tensor/blob/master/tensor2tensor/notebooks/hello_t2t.ipynb for en2de translation and visualizing all attention heads in the input-output view, the attentions are nonsensical: for each output word, all 8 attention heads point to the first input word "The". Clearly, that is not right.
The prerendered visualization shows very different attentions that do not match those produced by the code.

I got this issue by simply running the notebook.
[Screenshot from 2018-07-05 18:02:55 attached]
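
For reference, here is a minimal sketch (not the notebook's actual visualization code) of how one could sanity-check the raw attention array once it has been pulled out of the model. The shape `[num_heads, target_len, source_len]`, the token lists, and the toy weights below are assumptions purely for illustration; with the buggy output, every head's argmax lands on the first source token, matching the screenshot.

```python
import numpy as np

def check_attention(att_weights, source_tokens, target_tokens):
    """For each head and each target token, print the source token that
    receives the most attention and its weight."""
    num_heads, tgt_len, src_len = att_weights.shape
    assert tgt_len == len(target_tokens) and src_len == len(source_tokens)
    for h in range(num_heads):
        argmax_src = att_weights[h].argmax(axis=-1)  # shape: [target_len]
        print("head {}:".format(h))
        for t, s in enumerate(argmax_src):
            print("  {:<12} -> {} ({:.2f})".format(
                target_tokens[t], source_tokens[s], att_weights[h, t, s]))

# Toy tensor reproducing the broken pattern from the screenshot:
# every one of the 8 heads puts almost all of its mass on the first source token.
src = ["The", "animal", "did", "not", "cross", "."]
tgt = ["Das", "Tier", "hat", "nicht", "überquert", "."]
att = np.full((8, len(tgt), len(src)), 0.01)
att[:, :, 0] = 1.0
att /= att.sum(axis=-1, keepdims=True)  # rows sum to 1, like softmax output
check_attention(att, src, tgt)
```

If the real attention tensors from the notebook show this same degenerate argmax pattern, the bug is in the weights being fed to the visualization rather than in the rendering itself.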
