It is dangerous to feed the output directly back in as the next input: the network can zero its output buffers before it reads the input. I suggest using a deep copy of the output as the next input.
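For example, a minimal sketch of that workaround (the network and tensors here are hypothetical, assuming the output is a plain tensor):

```lua
require 'nn'

-- hypothetical network whose output is fed back in as its next input
local net = nn.Linear(5, 5)

local input = torch.randn(5)
for t = 1, 3 do
   local out = net:forward(input)
   -- deep-copy the output before reusing it: the module may zero or
   -- overwrite its output buffer at the start of the next forward
   input = out:clone()
end
```

Without the `:clone()`, `input` would alias the module's internal output tensor, which the next `forward` is free to clobber.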
Hi, I ran into a similar problem to the above when testing an RNN in evaluate mode.
The second time I call mRNN:forward, it just crashes, but in training mode everything works fine.
Can you help me?
require 'rnn'
require 'nngraph'
th = torch
inputSize,hiddenSize,outputSize = 5,5,5
local mX = nn.Identity()()
local mS = nn.Identity()()
local mH,mA = (mS):split(2)
local mAN = mA - nn.Sigmoid()
local mHN = {
   mH,
   mX - nn.Linear(inputSize, hiddenSize),
}
-- (the code that combines mHN/mAN and constructs mRNN is missing from
-- the original report)
mRNN:evaluate() ------------------------------------
out = mRNN:forward(th.randn(inputSize))
out = mRNN:forward(th.randn(inputSize)) ------- this will crash
out = mRNN:forward(th.randn(inputSize))
Hi guys,
This crashes:
The error:
Basically, the issue occurs when the outputs of a previous forward are fed back as inputs to the next forward of a gModule that uses split.
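A minimal sketch of that feedback pattern (the graph, sizes, and names here are illustrative assumptions, not the exact code from the report): a gModule whose state input is split in two, driven in a loop, with deep copies guarding the fed-back outputs.

```lua
require 'nngraph'

-- hypothetical two-input gModule whose second input is split in two
local x = nn.Identity()()
local s = nn.Identity()()
local h, a = s:split(2)
local g = nn.gModule({x, s}, {h - nn.Tanh(), a - nn.Sigmoid()})

local state = {torch.randn(5), torch.randn(5)}
for t = 1, 3 do
   local out = g:forward{torch.randn(5), state}
   -- without the :clone() calls, the next forward may zero these output
   -- tensors before reading them, giving wrong results or a crash
   state = {out[1]:clone(), out[2]:clone()}
end
```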