Fix device and dtype bugs in WgtScaleBatchNorm.std
#116
Issue Number: 115
Objective of pull request: fix the device and dtype errors raised by `WgtScaleBatchNorm.std` when the module runs on a CUDA device.
Pull request checklist
Your PR fulfills the following requirements:
- Lint (`flakeheaven lint src/lava tests/`) and (`bandit -r src/lava/.`) pass locally
- Build tests (`pytest`) pass locally

Pull request type
Please check your PR type:

- Bugfix
What is the current behavior?
Run the following code:
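A minimal reproduction sketch (the exact snippet is not preserved above; `num_features`, the input shape, and the constructor call are assumptions based on standard `torch.nn.BatchNorm` usage):

```python
import torch
from lava.lib.dl.slayer.neuron.norm import WgtScaleBatchNorm

# Hypothetical reproduction: num_features and the input shape are assumptions.
# Moving the module to CUDA exposes the bug, because WgtScaleBatchNorm.std
# creates torch.ones(1) on the CPU while the batch statistics live on cuda:0.
norm = WgtScaleBatchNorm(num_features=4).to('cuda')
x = torch.rand(2, 4, 16, device='cuda')  # (batch, channel, time)
y = norm(x)  # fails inside norm.std(...)
```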
We will get the error:

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
What is the new behavior?
We can run the code without any error.
Does this introduce a breaking change?

No.
Supplemental information
The error

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

raised at

File "/home/wfang/anaconda3/envs/lava-env/lib/python3.10/site-packages/lava/lib/dl/slayer/neuron/norm.py", line 170, in std

can be solved by changing `torch.ones(1)` to `torch.ones(1, device=std.device)`. But this will raise a new error, because `<<` (bitwise left shift) is only implemented for integer dtypes. We can solve that error by casting both `torch.ones(1, device=std.device)` and `torch.ceil(torch.log2(std)).clamp( ...` to `torch.int`. However, considering that the return value `std` is used in a float computation, `... / std.view(1, -1)`, I think using float directly is better than using `<<` with `torch.int`.
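A sketch of the preferred float fix under these assumptions: the standalone `wgt_scale_std` helper, the clamp bounds, and the `weight_exp_bits` name are hypothetical, and only the changed expression follows the description above.

```python
import torch


def wgt_scale_std(var: torch.Tensor, weight_exp_bits: int = 3) -> torch.Tensor:
    # Hypothetical standalone version of WgtScaleBatchNorm.std after the fix;
    # the clamp bounds and the weight_exp_bits default are assumptions.
    std = torch.sqrt(var)
    exponent = torch.ceil(torch.log2(std)).clamp(-weight_exp_bits, weight_exp_bits)
    # Buggy original (per the description): torch.ones(1) << exponent
    #   - torch.ones(1) is created on the CPU -> device mismatch on CUDA inputs
    #   - '<<' is only implemented for integer dtypes -> dtype error once the
    #     device is fixed
    # Casting both operands to torch.int (as described above) also removes the
    # error, but the result feeds a float division (... / std.view(1, -1)),
    # so computing the power of two directly in floating point is simpler:
    return torch.pow(2.0, exponent)


# Works on CPU and CUDA alike; the result keeps var's device and a float dtype.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
var = torch.rand(4, device=device) + 0.1
print(wgt_scale_std(var))
```

This keeps the power-of-two quantization of the standard deviation while avoiding both the device mismatch and the integer-only shift.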