-
e.g. a GPU will compute the same results in less time than a CPU. You might get this error because your training data and your model are on different devices (GPU/CPU), so you have to put them on the same device for the computation to run.
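A minimal sketch of the fix described above, using a generic model and batch (names here are illustrative, not from the course code):

```python
import torch
from torch import nn

# Device-agnostic setup: use the GPU if one is available, else the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Linear(4, 2).to(device)  # model parameters now live on `device`
X = torch.rand(8, 4).to(device)     # the data must be on the SAME device

y_pred = model(X)  # works; mixing devices here raises a RuntimeError instead
```

The key point is that both `.to(device)` calls target the same device string.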
-
Thank you for this class - I am learning a lot!!
I am running all your code on an Intel Mac Pro (mps GPU) and everything runs well, nearly as fast as Colab in your video. I replaced your device-agnostic code block with the following:
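(The replacement block referenced above did not survive extraction. A sketch of the usual PyTorch device-agnostic pattern extended with Apple's `mps` backend, which is presumably what it looked like, not necessarily the poster's exact code:)

```python
import torch

# Pick the best available device: CUDA GPU, then Apple Metal (mps), else CPU.
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

print(f"Using device: {device}")
```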
I find that each time before calling the going_modular `engine.train(...)` function, I need to add one line of code to move the model with `model.to(device)`, or I get an error.
I tried adding the `model.to(device)` call to the engine.py file, just before the `model.train()` call, but that does not seem to make a difference.
What am I missing here??
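The one-line fix the poster describes can be sketched like this; `train()` here is a self-contained stand-in for going_modular's `engine.train()`, whose real signature is assumed from the course code:

```python
import torch
from torch import nn

device = "mps" if torch.backends.mps.is_available() else "cpu"

model = nn.Linear(4, 2)

# Stand-in for going_modular's engine.train(); illustrative only.
def train(model, device):
    X = torch.rand(8, 4, device=device)  # batches created/moved on `device`
    model.train()
    return model(X)

model = model.to(device)  # the extra line: move the model BEFORE training
out = train(model, device)
```

Without the `model.to(device)` line, the model's parameters stay on the CPU while the batches are on `mps`, which is exactly the cross-device error described in the first reply.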