[Feature] Contiguous stacking of matching specs #960
Merged
Description
This PR changes the behaviour of `torch.stack` when the arguments are `TensorSpec` or `CompositeSpec` variants. It checks for equality among the specs being stacked and, if they all match, actually performs the stack (rather than returning a lazy stacked spec).

To facilitate this, I have added `squeeze` and `unsqueeze` methods to all specs. Stacking is done by computing the new shape and then calling something like `spec.clone().unsqueeze(stack_dim).expand(new_shape)` (see the sketch below). I added `squeeze` since you can't have `unsqueeze` without `squeeze` 😄

Some of the tests probably need rethinking. I've made small changes to get them to pass, but there is new behaviour here which ideally would be tested for explicitly.
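For illustration, here is a minimal, self-contained sketch of the stack-if-equal logic described above. Everything in it is hypothetical: `ToySpec` is a toy stand-in for a real `TensorSpec`, and `stack_specs` is a free function introduced only for this example, not the API the PR adds (the PR wires this behaviour into `torch.stack` and the spec classes themselves).

```python
import torch
from dataclasses import dataclass


@dataclass
class ToySpec:
    # Toy stand-in for a TensorSpec: just a shape plus the handful of
    # methods the stacking logic relies on.
    shape: torch.Size

    def clone(self):
        return ToySpec(torch.Size(self.shape))

    def unsqueeze(self, dim):
        shape = list(self.shape)
        shape.insert(dim, 1)
        return ToySpec(torch.Size(shape))

    def expand(self, shape):
        return ToySpec(torch.Size(shape))


def stack_specs(specs, dim=0):
    # Hypothetical sketch: if every spec equals the first, build the stacked
    # spec by unsqueezing a clone along `dim` and expanding it to the new
    # shape; otherwise fall back to lazy stacking (not reproduced here).
    first = specs[0]
    if all(spec == first for spec in specs[1:]):
        new_shape = list(first.shape)
        new_shape.insert(dim, len(specs))  # negative dims not handled in this sketch
        return first.clone().unsqueeze(dim).expand(new_shape)
    raise NotImplementedError("non-matching specs would return a lazy stacked spec")


specs = [ToySpec(torch.Size([3, 4])) for _ in range(5)]
print(stack_specs(specs, dim=0).shape)  # torch.Size([5, 3, 4])
```

The dataclass `==` here simply stands in for the equality check the PR performs on the specs before taking the contiguous stack.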