AttributeError: 'Leaky' object has no attribute 'reset' #339
Description
- snntorch version: 0.8.0
- Python version: 3.7
- Operating System: Linux

I used two GPUs to run my code and ran into this error (screenshot: https://github.com/user-attachments/assets/f046aa0c-7dba-4d7d-be15-db8931f23244). But the code runs with no error if I use one GPU. The error comes from the two devices: the tensors are run on different devices.

Comments

Share a minimal working example of your code.
Take tutorial 5 (https://snntorch.readthedocs.io/en/latest/tutorials/tutorial_5.html) as an example. I just add
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
net = Net()
net = nn.DataParallel(net).cuda()
to make it work on multiple GPUs.
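Since the thread does not include a full example, here is a minimal sketch of the setup described above, assuming the network from tutorial 5; the layer sizes, beta, number of steps, and batch size are assumptions, not the poster's exact values:

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

import torch
import torch.nn as nn
import snntorch as snn

beta = 0.95
num_steps = 25

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 1000)
        self.lif1 = snn.Leaky(beta=beta)
        self.fc2 = nn.Linear(1000, 10)
        self.lif2 = snn.Leaky(beta=beta)

    def forward(self, x):
        mem1 = self.lif1.init_leaky()   # fresh membrane potentials per call
        mem2 = self.lif2.init_leaky()
        spk2_rec, mem2_rec = [], []
        for _ in range(num_steps):
            cur1 = self.fc1(x)
            spk1, mem1 = self.lif1(cur1, mem1)
            cur2 = self.fc2(spk1)
            spk2, mem2 = self.lif2(cur2, mem2)
            spk2_rec.append(spk2)
            mem2_rec.append(mem2)
        return torch.stack(spk2_rec), torch.stack(mem2_rec)

net = Net()
net = nn.DataParallel(net).cuda()        # replicas on GPU 0 and GPU 1

data = torch.randn(128, 28 * 28).cuda()  # DataParallel scatters the batch
spk_rec, mem_rec = net(data)             # the error reportedly appears on two GPUs
```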
I think the mistake comes from the variable self.reset that is defined in class Leaky. self.reset is created when the Leaky object is instantiated, so it sits on one fixed GPU, while the inputs to the network end up on different GPUs.
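To illustrate that hypothesis (a generic PyTorch sketch, not snntorch's actual internals): a tensor stored as a plain attribute in __init__ stays on the device it was created on, while nn.DataParallel scatters the input batch across GPUs. A common workaround is to move or recreate such state on the input's device inside forward:

```python
import torch
import torch.nn as nn

class StatefulLayer(nn.Module):
    def __init__(self):
        super().__init__()
        # Hypothetical state tensor, standing in for the `self.reset`
        # attribute discussed above; it stays on the device it was created on.
        self.state = torch.zeros(1)

    def forward(self, x):
        # Workaround: align the state with the input's device so each
        # DataParallel replica works entirely on its own GPU.
        if self.state.device != x.device:
            self.state = self.state.to(x.device)
        return x + self.state

if torch.cuda.device_count() >= 2:
    net = nn.DataParallel(StatefulLayer()).cuda()
    out = net(torch.randn(8, 4).cuda())  # batch is split across GPU 0 and GPU 1
```

Whether this matches what Leaky does with self.reset internally would need to be checked against the snntorch source.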