Question about low GPU utilisation #11

Open
hakunin opened this issue Feb 4, 2022 · 1 comment

hakunin commented Feb 4, 2022

Hi,
I'm using gym_trading to learn about RL for algorithmic trading.

Compared to some simple RNN models I've built before, training here takes significantly longer, so I switched to running it on my 2080 Ti. It runs a bit faster than on the CPU, but then I noticed that total GPU utilisation is only about 7-10%, and that's on a desktop machine.
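
For context, the training loop has roughly this shape (a simplified sketch with a stand-in environment; the names are illustrative, not the exact gym_trading setup). I suspect the batch-size-1 forward pass per environment step is why the GPU mostly sits idle:

```python
import gym
import numpy as np
import tensorflow as tf

# Stand-in environment; in my case it's a gym_trading env.
env = gym.make("CartPole-v1")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu",
                          input_shape=env.observation_space.shape),
    tf.keras.layers.Dense(env.action_space.n),
])
model.compile(optimizer="adam", loss="mse")

obs = env.reset()
for step in range(1000):
    # One forward pass per environment step, i.e. batch size 1,
    # so the GPU is mostly idle between tiny kernel launches.
    q_values = model.predict(obs[None, :], verbose=0)
    action = int(np.argmax(q_values[0]))
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```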

Is there a way I can speed up the learning and use more GPU power?

hakunin changed the title from "Question about GPU utilisation" to "Question about low GPU utilisation" on Feb 4, 2022

hakunin commented Feb 4, 2022

If I make the neural net really big, it reaches higher utilisation (~30%):

```
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 Dense_1 (Dense)             (None, 1024)              11264

 Dense_2 (Dense)             (None, 1024)              1049600

 dropout (Dropout)           (None, 1024)              0

 Output (Dense)              (None, 3)                 3075

=================================================================
Total params: 1,063,939
Trainable params: 1,063,939
Non-trainable params: 0
_________________________________________________________________
```
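
For completeness, the summary above corresponds to code roughly like this (the activations and dropout rate are my choices here, not shown in the summary; the input dimension of 10 follows from the first layer's 11,264 parameters, i.e. 1024 × (10 + 1)):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    # 10 inputs -> 1024 units: 10 * 1024 + 1024 = 11,264 params
    tf.keras.layers.Dense(1024, activation="relu", input_shape=(10,),
                          name="Dense_1"),
    # 1024 -> 1024: 1024 * 1024 + 1024 = 1,049,600 params
    tf.keras.layers.Dense(1024, activation="relu", name="Dense_2"),
    # Dropout has no trainable parameters
    tf.keras.layers.Dropout(0.5, name="dropout"),
    # 1024 -> 3 actions: 1024 * 3 + 3 = 3,075 params
    tf.keras.layers.Dense(3, name="Output"),
])
model.summary()  # prints the table above (1,063,939 total params)
```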

When it says the model is "sequential", does that basically mean it can't get any faster even with a small network, because of the sheer number of iterations and the lack of parallelism?
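
Related to that, here is a quick experiment I could run to check whether batching is the bottleneck (purely illustrative; it times many batch-size-1 forward passes against a single large batch over the same data):

```python
import time
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1024, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dense(3),
])

x = np.random.rand(4096, 10).astype("float32")

start = time.time()
for row in x:                # 4096 forward passes of batch size 1
    model(row[None, :])
print("one at a time:", time.time() - start)

start = time.time()
model(x)                     # a single forward pass of batch size 4096
print("batched:", time.time() - start)
```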
