Bug in CPU training (related to TensorFlow)? #2
Did you obtain the TFRecord training file via data_helper? Did you use line 149?
I used: python train.py. BTW, may I know the corpus size of your char-rnn model? It seems to be quite RAM-consuming. It would be nicer if I knew your original corpus size ><"
After changing the line: ^^
I used <1 GB of Apple Daily news text to train the char-rnn LM.
Um... maybe one of the reasons is that my vocabulary size is smaller than yours, i.e. 6790.
As I observed, an out-of-vocabulary error is thrown when using embedding_lookup.
Error looks like:
InvalidArgumentError (see above for traceback): indices[0,1,3] = 6501 is not in [0, 6342)
[[Node: model_1/embedding_lookup = Gather[Tindices=DT_INT64, Tparams=DT_FLOAT, _class=["loc:@model/embedding"], validate_indices=true, _device="/job:localhost/replica:0/task:0/cpu:0"](model/embedding/read, _recv_model_1/inputs_0)]]
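
For reference, the ids can be guarded before the lookup. A minimal sketch (assuming TF 1.x; `unk_id` is hypothetical and the table size 6342 is taken from the trace above) that remaps anything outside [0, vocab_size) to an UNK index:

```python
import tensorflow as tf

vocab_size = 6342   # embedding table size, from the error message above
unk_id = 0          # hypothetical: id reserved for unknown/OOV characters

inputs = tf.placeholder(tf.int64, shape=[None, None], name="inputs")
embedding = tf.get_variable("embedding", shape=[vocab_size, 128], dtype=tf.float32)

# Remap any id outside [0, vocab_size) to unk_id so the Gather op never
# sees an out-of-range index (which raises InvalidArgumentError on CPU).
safe_inputs = tf.where(inputs < vocab_size,
                       inputs,
                       tf.fill(tf.shape(inputs), tf.constant(unk_id, dtype=tf.int64)))
embedded = tf.nn.embedding_lookup(embedding, safe_inputs)
```

This only papers over the symptom, though; the cleaner fix is to build the TFRecords with the same vocabulary used by the model, so no id can exceed vocab_size in the first place.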
See geek-ai/irgan#9
Nothing is wrong with your code; it sounds like a known issue in TensorFlow (CPU version).
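
For completeness, the behaviour is easy to reproduce in isolation (a sketch assuming TF 1.x; the table size and the offending id come from the traceback above). As far as I know, Gather validates indices on CPU but returns zeros for out-of-range ids on GPU, which is why the same data can appear to train fine on GPU yet crash on CPU:

```python
import tensorflow as tf

# 6342-row embedding table, looked up with id 6501 (out of range), as in the trace.
embedding = tf.get_variable("embedding", shape=[6342, 128], dtype=tf.float32)
ids = tf.constant([[6501]], dtype=tf.int64)
lookup = tf.nn.embedding_lookup(embedding, ids)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # On CPU this raises an InvalidArgumentError like the one quoted above.
    sess.run(lookup)
```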