
[Feature] Indexing specs #1105

Merged
merged 7 commits into from
Apr 28, 2023
Conversation

Contributor

@remidomingues remidomingues commented Apr 28, 2023

Description

Add indexing support to remaining specs:

  • BinaryDiscreteTensorSpec
  • BoundedTensorSpec
  • CompositeSpec
  • MultiDiscreteTensorSpec
  • MultiOneHotDiscreteTensorSpec
  • UnboundedContinuousTensorSpec
  • UnboundedDiscreteTensorSpec

Already supported by previous PR #1081:

  • DiscreteTensorSpec
  • OneHotDiscreteTensorSpec

Note: although indexing and tests have been implemented for BoundedTensorSpec and MultiDiscreteTensorSpec, a NotImplementedError is currently raised to prevent their behavior from diverging from the other specs until pytorch/pytorch#100080 is addressed.
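To illustrate the intended semantics, here is a toy sketch (ToySpec is hypothetical, not the torchrl implementation): indexing a spec is expected to produce a new spec whose shape is what the same index would leave on a tensor of that shape.

```python
class ToySpec:
    """Hypothetical stand-in for a TensorSpec: indexing returns a new
    spec whose shape is what tensor indexing would produce."""

    def __init__(self, *shape):
        self.shape = shape

    def __getitem__(self, idx):
        if isinstance(idx, int):
            size = self.shape[0]
            if not -size <= idx < size:
                raise IndexError(
                    f"index {idx} is out of bounds for axis 0 with size {size}"
                )
            # Integer indexing drops the first dimension.
            return ToySpec(*self.shape[1:])
        if isinstance(idx, slice):
            # A slice keeps the trailing dims and resizes the first one.
            n_items = len(range(self.shape[0])[idx])
            return ToySpec(n_items, *self.shape[1:])
        raise TypeError(f"unsupported index type: {type(idx)}")


print(ToySpec(3, 2)[0].shape)   # (2,)
print(ToySpec(3, 2)[1:].shape)  # (2, 2)
```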

Motivation and Context

Address feature request of adding indexing to specs: #1051.

Types of changes

What types of changes does your code introduce? Remove all that do not apply:

  • New feature (non-breaking change which adds core functionality)

Checklist

Go over all the following points, and put an x in all the boxes that apply.
If you are unsure about any of these, don't hesitate to ask. We are here to help!

  • I have read the CONTRIBUTION guide (required)
  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have updated the documentation accordingly.

cc @matteobettini

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Apr 28, 2023
@vmoens vmoens changed the title Indexing specs [Feature] Indexing specs Apr 28, 2023

@vmoens vmoens left a comment


Fantastic work, thanks a mil for this!

@@ -135,7 +135,12 @@ def _slice_indexing(shape: list[int], idx: slice):
     return [n_items] + shape[1:]
 
 
-def _shape_indexing(shape: list[int], idx: SHAPE_INDEX_TYPING):
+def _shape_indexing(

this seems pretty crucial and we may refactor that at some point (e.g. using fake tensors)
Can we add a short docstring to say what it is about?


we have a similar function in tensordict called _getitem_batch_size, is it inspired by that?


@remidomingues remidomingues Apr 28, 2023


Docstring added! _shape_indexing is definitely similar to _getitem_batch_size. I couldn't use the latter, though, since it doesn't perform some indexing checks, which are only executed when actually indexing the tensor. I assumed we wanted those checks, hence the reimplementation:

import torch
from tensordict.utils import _getitem_batch_size
from torchrl.data.tensor_specs import _shape_indexing

_getitem_batch_size(torch.Size((3, 2)), 5)  # torch.Size([2])
_shape_indexing(torch.Size((3, 2)), 5)  # IndexError: index 5 is out of bounds for axis 0 with size 3

If we can work with fake_mode to have both fast shape indexing and such checks, that would definitely be the best of both worlds!
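The difference above boils down to bounds checking. A minimal pure-Python sketch of that check (shape_index_int is a hypothetical helper for illustration, not the torchrl function):

```python
def shape_index_int(shape, idx):
    """Return the shape left after indexing axis 0 with an integer,
    raising the same IndexError a real tensor would (hypothetical helper)."""
    size = shape[0]
    if not -size <= idx < size:
        raise IndexError(
            f"index {idx} is out of bounds for axis 0 with size {size}"
        )
    # Integer indexing drops the first dimension.
    return list(shape[1:])


print(shape_index_int([3, 2], 1))  # [2]
# shape_index_int([3, 2], 5) raises IndexError, whereas _getitem_batch_size
# silently returns the post-indexing shape, per the snippet above.
```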

@vmoens vmoens merged commit e80930d into pytorch:main Apr 28, 2023
Labels: CLA Signed, enhancement (New feature or request)