What is the purpose of seq_output = tf.concat(self.lstm_outputs, 1)? #11
Comments
I believe the shape of self.lstm_inputs after embedding_lookup should be (num_seqs, num_steps, embedding_size). That is, each input is represented by a vector of size embedding_size.
Right, your embedding_size is my num_classes.
I printed the shapes, and it looks like seq_output = tf.concat(self.lstm_outputs, 1) does nothing: lstm_outputs and seq_output have exactly the same shape before and after the concat.
Yes, exactly, so I just reshaped it directly =D
This step is indeed useless.
Indeed useless...
tf.concat(values, 1)  # values must be a sequence; it has no effect here
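The point made above can be checked without TensorFlow: the concat pattern is only meaningful when the RNN returns a Python list of per-step outputs (as the older static tf.nn.rnn did), whereas dynamic_rnn already returns a single stacked tensor, so concatenating it along axis 1 is a no-op. A minimal numpy sketch (sizes made up for illustration; np.concatenate mirrors tf.concat's semantics here):

```python
import numpy as np

num_seqs, num_steps, lstm_size = 2, 3, 4

# Case 1: static-RNN-style output, a Python list of num_steps arrays,
# each of shape (num_seqs, lstm_size). Concat along axis 1 matters here:
step_outputs = [np.zeros((num_seqs, lstm_size)) for _ in range(num_steps)]
seq_output = np.concatenate(step_outputs, axis=1)
print(seq_output.shape)  # (2, 12): all steps flattened into one axis

# Case 2: dynamic_rnn-style output, a single array of shape
# (num_seqs, num_steps, lstm_size). Concatenating a "list" of one
# array returns an identically shaped array, i.e. a no-op:
lstm_outputs = np.zeros((num_seqs, num_steps, lstm_size))
seq_output = np.concatenate([lstm_outputs], axis=1)
print(seq_output.shape)  # (2, 3, 4): unchanged
```

This is why reshaping lstm_outputs directly, skipping the concat, gives the same result.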
Hello, I'd like to ask a question.
My run fails at
y_reshaped = tf.reshape(y_one_hot, self.logits.get_shape())
because y_one_hot and self.logits have different total numbers of elements, so the reshape is impossible.
I traced the shapes:
inputs has shape (num_seqs, num_steps); after tf.one_hot, lstm_inputs has shape (num_seqs, num_steps, num_classes).
I use a single-layer LSTM cell; after tf.nn.dynamic_rnn(cell, lstm_inputs, initial_state=self.initial_state), lstm_outputs has shape (num_seqs, num_steps, lstm_size).
After tf.concat(lstm_outputs, 1), the shape is completely unchanged, and after a series of further operations the shapes no longer line up.
So I'd like to ask: what is the tf.concat(lstm_outputs, 1) step for?
Thanks~
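The shape walk-through above can be reproduced with a short numpy sketch. The sizes below are made up for illustration, and dynamic_rnn's output is simulated as a zero tensor; the sketch only demonstrates the shapes, including that reshaping lstm_outputs directly (as the other commenters suggest) is equivalent to concat-then-reshape:

```python
import numpy as np

# Hypothetical sizes, for illustration only
num_seqs, num_steps, num_classes, lstm_size = 2, 5, 10, 8

inputs = np.zeros((num_seqs, num_steps), dtype=np.int64)

# tf.one_hot appends a trailing num_classes axis:
lstm_inputs = np.eye(num_classes)[inputs]
print(lstm_inputs.shape)   # (2, 5, 10)

# Stand-in for dynamic_rnn's output: one tensor over all time steps
lstm_outputs = np.zeros((num_seqs, num_steps, lstm_size))
print(lstm_outputs.shape)  # (2, 5, 8)

# tf.concat(lstm_outputs, 1) on this single tensor changes nothing,
# so the reshape can be applied to lstm_outputs directly:
x = lstm_outputs.reshape(-1, lstm_size)
print(x.shape)             # (10, 8), i.e. (num_seqs * num_steps, lstm_size)
```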