Attention in Tensorflow

A plug-and-play attention layer implemented in TensorFlow (1.2.1).

Installation

Install TensorFlow for your system.
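Assuming a pip-based setup, one way to pin the version this code was written against is:

```shell
# install the TensorFlow release noted in this README (CPU build)
pip install tensorflow==1.2.1
```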

Usage

You can use this file with your code as follows:

    from attention import Attention
    ...
    # output_vectors: the per-timestep outputs of your LSTM
    output_vectors = lstm()
    # wrap the LSTM outputs and compute the attention-weighted vector
    atta = Attention(output_vectors)
    att_vec = atta.applyAttention()
    # project the attended vector onto the output classes
    # (Wh and b are your own projection weights and bias)
    output = tf.nn.softmax(tf.matmul(Wh, att_vec) + b)
    ...
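The README does not show what applyAttention computes internally. A common formulation scores each timestep's output, normalizes the scores with a softmax, and returns the weighted sum of the outputs. The sketch below illustrates that idea in plain NumPy; the scoring vector w and the dot-product scoring rule are assumptions for illustration, and the repo's Attention class may parameterize the scores differently.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / np.sum(e, axis=-1, keepdims=True)

def apply_attention(output_vectors, w):
    # output_vectors: (timesteps, hidden) LSTM outputs
    # w: (hidden,) scoring vector (an assumption for this sketch)
    scores = output_vectors @ w        # one score per timestep
    alphas = softmax(scores)           # attention weights, sum to 1
    return alphas @ output_vectors     # (hidden,) weighted sum

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))   # 5 timesteps, hidden size 8
w = rng.normal(size=8)
att_vec = apply_attention(H, w)
print(att_vec.shape)  # (8,)
```

The attended vector has the same dimensionality as a single LSTM output, so it can feed the softmax projection shown in the usage snippet above.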

Current Status

Not yet tested; I plan to run it on a real task soon.