GPU Usage #159
Comments
Milan Cvitkovic: My best guess is that it has to do with saving the episode, but I'm not at all sure.
Huh, interesting. I haven't seen the save issue with smaller models. Try setting the save episode frequency in the spec to null for now; we don't need to save checkpoints, and one will still be saved at the very end.
Without a debug log it's hard to tell where it hangs, but I'll try to reproduce and investigate later as well.
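As a concrete illustration of the suggestion above, here is a minimal sketch of setting the save frequency to null by editing the spec file with a short Python script. The spec path, the trial name, and the exact key name and location (`save_epi_frequency` under `meta`) are assumptions based on the comment; check your local spec file for the real field before applying this.

```python
import json

# Assumed path and names -- verify against your own spec file.
SPEC_PATH = "slm_lab/spec/my_dqn_beamrider.json"
TRIAL_NAME = "dqn_BeamRider-v4"

with open(SPEC_PATH) as f:
    spec = json.load(f)

# Disable periodic checkpointing; per the comment above, a final save
# still happens at the end of the run. The key name is an assumption.
spec[TRIAL_NAME]["meta"]["save_epi_frequency"] = None  # serialized as null

with open(SPEC_PATH, "w") as f:
    json.dump(spec, f, indent=2)
```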
A later merged pull request identified the root cause.
Original issue description:
I'm running the dqn_BeamRider-v4 trial in the attached spec file in train mode. (It's basically identical to the dqn_boltzmann_breakout trial in the dqn.json spec file, but with the BeamRider env.) I'm running 4 sessions at once on a 4-GPU machine.
For the first ~5 episodes in each session, all 4 GPUs are used correctly, one per session (GPU 0 has a couple of extra processes on it, which I'm assuming is normal). But gradually, by around ~100 episodes, all the GPU processes disappear and nothing is running on the GPUs anymore. The training process never crashes or finishes during this; its CPU usage goes to 0 and it just sits there.
Any ideas what's going on?
Attached spec file: openai_baseline_reproduction_dqn copy.txt
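Since the reply above notes that it is hard to tell where the run hangs without a debug log, here is a generic, standard-library sketch for getting a stack trace out of a stuck Python process. It is not part of SLM-Lab, and adding it to the training entry point is an assumption on my part.

```python
import faulthandler
import signal

# Dump the traceback of every thread to stderr whenever the process
# receives SIGUSR1 (Unix only). Add this near the top of the training
# entry point, before the sessions are launched.
faulthandler.register(signal.SIGUSR1, all_threads=True)

# When the run appears to hang, find the stuck PID and send the signal:
#   kill -USR1 <pid>
# then inspect the process's stderr/log for the per-thread stack dump.
```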