Failed to run language_model in GPU other than device 0 #3

Open
byzhang opened this issue May 20, 2016 · 0 comments
byzhang commented May 20, 2016

The server has multiple Titan X cards. language_model can run on device 0, but not on the other devices.
The error on the NVIDIA side is

GPU 0000:04:00.0: Detected Critical Xid Error

On the language_model side it is

    Vocabulary size = 10002 (occuring more than 1)
Max training epochs = 2000
    Training cutoff = -1
  Number of threads = 1
     minibatch size = 100
       max_patience = 5
             device = gpu
Load location         = N/A
Constructed Stacked LSTMs
Vocabulary size       = 10002
Input size            = 100
Output size           = 10002
Stack size            = 4
Shortcut connections  = true
Memory feeds gates    = true
an illegal memory access was encountered
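
To separate a language_model/Dali problem from a driver or hardware problem, a minimal CUDA runtime check along these lines (a hypothetical standalone sketch, not part of the repository) could be run against the non-zero device to see whether it fails the same way outside the model:

// sanity_check.cu -- hypothetical standalone test, not part of language_model.
// Selects a non-default device, allocates and copies a buffer, and reports
// the first CUDA error encountered.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

int main() {
    int device = 1;  // any index other than 0
    cudaError_t err = cudaSetDevice(device);
    if (err != cudaSuccess) {
        std::printf("cudaSetDevice(%d): %s\n", device, cudaGetErrorString(err));
        return 1;
    }
    const size_t n = 1 << 20;
    std::vector<float> host(n, 1.0f);
    float* dev_ptr = nullptr;
    err = cudaMalloc(&dev_ptr, n * sizeof(float));
    if (err == cudaSuccess) {
        err = cudaMemcpy(dev_ptr, host.data(), n * sizeof(float),
                         cudaMemcpyHostToDevice);
    }
    if (err == cudaSuccess) {
        err = cudaDeviceSynchronize();
    }
    std::printf("device %d: %s\n", device, cudaGetErrorString(err));
    cudaFree(dev_ptr);
    return err == cudaSuccess ? 0 : 1;
}

If this also reports an illegal memory access (or the Xid error reappears in the driver log), the issue is likely below language_model, in the driver or the card itself.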