[seq2seq translate] - cannot unsqueeze empty tensor #108
Does it have something to do with my Python version? (I'm using Miniconda3.)
Hi, were you able to find a solution for the unsqueeze error during evaluation?
Hi @aayushee, I modified my code based on https://github.com/yanwii/seq2seq; please take a look.
Hey, I need help here. I'm facing the same issue.
The problem is that when you initialize your data it is not sorted correctly; torchtext can do this for you quite easily. I have done it (with the SST dataset) in the LSTM RNN model in my repo here: look at how I initialize the datasets and at my forward pass. GitHub repo (the project is ongoing, so there may be frequent updates and a few small bugs that will be fixed over the next couple of days):
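The sorting the comment above refers to can be sketched without torchtext. This is a minimal, library-free illustration (the toy token IDs and the PAD id 0 are assumptions, not from the thread): batches are sorted by length in descending order before padding, which is what `pack_padded_sequence` in older PyTorch versions expects.

```python
# Toy batch of token-id sequences (hypothetical data for illustration).
batch = [[4, 9], [1, 2, 3, 5], [7]]

# Sort longest-first, as pack_padded_sequence traditionally requires.
batch.sort(key=len, reverse=True)
lengths = [len(seq) for seq in batch]

# Pad every sequence to the length of the longest one (PAD id = 0).
max_len = lengths[0]
padded = [seq + [0] * (max_len - len(seq)) for seq in batch]
```

With torchtext, a `BucketIterator` with a suitable `sort_key` achieves the same effect while also grouping similar-length examples together.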
Hi all,
I'm trying to run the seq2seq model (seq2seq-translation-batched.ipynb).
My environment is Python 3.6.4, PyTorch 0.4.0.
I made some modifications:
changed
return F.softmax(attn_energies).unsqueeze(1)
to
return F.softmax(attn_energies, dim=1).unsqueeze(1)
(the code does not run without adding the dim parameter)
changed all
energy = hidden.dot(encoder_output)
(the dot function) to
energy = hidden.mm(encoder_output.t())
(matrix multiplication; again, the code does not run without this change)
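The two changes above can be sketched together. This is a minimal sketch, not the tutorial's exact code: the hidden size (256) and sequence length (10) are assumed for illustration. `Tensor.dot` requires two 1-D tensors of the same length, so it fails on the 2-D `hidden` and `encoder_output`; `mm` with a transpose instead yields one score per encoder time step, and `F.softmax` then needs an explicit `dim` in PyTorch 0.4+.

```python
import torch
import torch.nn.functional as F

hidden = torch.randn(1, 256)           # decoder hidden state (batch of 1)
encoder_output = torch.randn(10, 256)  # 10 encoder time steps

# hidden.dot(encoder_output) would fail: dot() only accepts 1-D tensors.
# (1, 256) x (256, 10) -> one attention score per encoder step:
attn_energies = hidden.mm(encoder_output.t())  # shape (1, 10)

# F.softmax requires an explicit dim here; normalize over the time axis,
# then add the extra dimension the decoder's bmm expects:
attn_weights = F.softmax(attn_energies, dim=1).unsqueeze(1)  # shape (1, 1, 10)
```

If `attn_energies` ends up empty (for example, a zero-length input sentence), `unsqueeze` has nothing to operate on, which matches the "cannot unsqueeze empty tensor" error in the issue title.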
Then I failed in the "Putting it all together" part.
The log follows:
Please give me some suggestions, thanks.