How could I run GPU docker? #361
Comments
I was running into this same issue. I believe I've found a work-around.

First, check that your container can see your GPU:

```
(host) $ docker run --runtime=nvidia --rm -it kaggle/python-gpu-build bash
(container) $ nvidia-smi
```

You may get an error. This appears to be due to a path entry on `LD_LIBRARY_PATH`; we simply need to remove this entry:

```
(container) $ export LD_LIBRARY_PATH=/usr/local/cuda/lib64
```

After that, TensorFlow can see the GPU:

```
>>> import tensorflow as tf
>>> tf.test.is_gpu_available()
[...]
True
```
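The `export` above resets the search path by hand, dropping whatever entry was causing the problem. As a minimal sketch of that idea (the helper name and the `/bad/entry` placeholder are mine, not from the thread; the actual offending path was not captured here):

```python
def drop_entry(search_path: str, unwanted: str) -> str:
    """Return a colon-separated search path with one entry removed.

    This mirrors what the manual `export LD_LIBRARY_PATH=...` fix does:
    the suspect directory is dropped so the dynamic loader falls back to
    the real driver libraries.
    """
    return ":".join(e for e in search_path.split(":") if e != unwanted)

# Example with a hypothetical bad entry; the real CUDA libs are kept.
print(drop_entry("/usr/local/cuda/lib64:/bad/entry", "/bad/entry"))
# → /usr/local/cuda/lib64
```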
Thank you @pricebenjamin for sharing your detailed instructions. I linked to them from the "Running the image" section in the repo's README.
The question is: how can I use the Kaggle Docker image with a GPU?

I haven't found any examples of how to use the already-built kaggle docker-python image with a GPU, so I decided to build it myself. I cloned the current repository and built the GPU image from it (`build --gpu`). After that, I ran the container to test whether GPUs are visible there (this worked for me with the official TensorFlow Dockerfile image `tensorflow/tensorflow:latest-gpu-py3` from here: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/dockerfiles).

Script:

For `tensorflow/tensorflow:latest-gpu-py3` I received `['/device:GPU:0']`. But in `kaggle/python-gpu-build` it won't work; the response was:

and I've found errors in the logs:

Side note: I'm using nvidia-docker2 via `--runtime=nvidia`.

Does `kaggle/python-gpu-build` require extra work to tune it before running? And where can I find more information on how to use it? Thanks!