Issue faced while training the model on my macbook air #92
Comments
@Zulqurnain24 The Intel HD Graphics 6000 in your MacBook doesn't support CUDA, so TensorFlow falls back to the CPU, which is far slower than running on a GPU. I'm not surprised that after 9 hours you haven't seen anything; even on a Titan X it took a while before the first 1000 iterations appeared, and training took almost 5 hours in total. There is no way to upgrade your machine, but you can rent GPU instances from AWS, Google Cloud, or Azure to run the training. You can then do inference on your machine on the CPU (but don't expect it to work on large image sizes, considering you have just 4 GB of RAM).
You probably shouldn't train this on the CPU unless you are prepared for training times of several months.
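A back-of-envelope calculation makes the CPU-vs-GPU gap concrete. The sketch below is purely illustrative: the per-iteration rates and the iteration count are assumptions for the arithmetic, not measurements from this repository.

```python
# Hypothetical estimate of total training time from seconds per iteration.
# The rates and iteration count below are illustrative assumptions only.
def estimate_hours(sec_per_iter, total_iters):
    """Convert a per-iteration cost into total wall-clock hours."""
    return sec_per_iter * total_iters / 3600.0

# Assume roughly 40,000 iterations for a full training run.
print(estimate_hours(0.45, 40_000))  # GPU-class pace: about 5 hours
print(estimate_hours(18.0, 40_000))  # CPU-class pace: about 200 hours
```

Even with generous assumptions, a CPU that is ~40x slower per iteration turns a 5-hour job into more than a week of continuous compute, which matches the warnings above.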
Hi @Zulqurnain24, have a look at AMD's ROCm platform. I'm not sure if the Radeon 555 is supported, but it's worth exploring: https://rocm.github.io/dl.html
Hello,
I have cloned https://github.com/lengstrom/fast-style-transfer and set it up to run on my MacBook Air (1.6 GHz Intel Core i5, 4 GB RAM, Intel HD Graphics 6000 1536 MB). It has now been about 9 hours of training and the screen has not changed, so I have no clue what is happening or how long it will take. I am copying the terminal log here:
"(dataweekends) MacBook-Air:fast-style-transfer-master mohammadzulqurnain$ python style.py --style images/udnie.jpg --test images/ww1.jpg --test-dir test_dir --content-weight 1.5e1 --checkpoint-dir checkpoints --checkpoint-iterations 300 --batch-size 3
Train set has been trimmed slightly..
(1, 600, 800, 3)
2017-09-01 17:58:17.302360: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-01 17:58:17.302398: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-09-01 17:58:17.302408: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-01 17:58:17.302417: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
WARNING:tensorflow:From /anaconda/envs/dataweekends/lib/python2.7/site-packages/tensorflow/python/util/tf_should_use.py:175: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use tf.global_variables_initializer instead.
UID: 3"
I need to know how long it will take, and if my machine is too weak for this, is there any way I can speed it up?
Thanks in advance,
best regards
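As an aside on the log itself: the SSE4.2 / AVX / AVX2 / FMA lines are informational warnings, not errors. They only mean the prebuilt TensorFlow binary wasn't compiled with those CPU instruction sets. If the noise is distracting, they can be suppressed with TensorFlow's TF_CPP_MIN_LOG_LEVEL environment variable, which must be set before TensorFlow is imported:

```python
# Silence TensorFlow's C++ INFO and WARNING messages (such as the
# SSE/AVX/FMA compile-flag warnings) via TF_CPP_MIN_LOG_LEVEL.
# "0" shows everything, "1" hides INFO, "2" also hides WARNING.
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
# import tensorflow as tf  # import TensorFlow only after setting the variable
```

This only hides the messages; it does not make the CPU build any faster.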