
[Feature Request / Proposal] Pixel shuffle layer #13548

Closed
kohr-h opened this issue Dec 5, 2018 · 9 comments · Fixed by #13571
kohr-h (Contributor) commented Dec 5, 2018

Upsampling based on pixel shuffling has been proposed in the paper Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network (2016). The complete method combines a convolution layer, the pixel shuffle operation and a specific initialization to get rid of block artifacts. For the initialization details, see Checkerboard artifact free sub-pixel convolution: A note on sub-pixel convolution, resize convolution and convolution resize (2017).

Pixel shuffling in 2D means reshaping a tensor of shape (N, f1*f2*C, H, W) to (N, C, f1*H, f2*W), thereby effectively upscaling the images by factors (f1, f2).
In MXNet, pixel shuffling could be implemented in Python like this:

def hybrid_forward(self, F, x):
    f1, f2 = self._factors
                                                  # (N, f1*f2*C, H, W)
    x = F.reshape(x, (0, -4, -1, f1 * f2, 0, 0))  # (N, C, f1*f2, H, W)
    x = F.reshape(x, (0, 0, -3, 0))               # (N, C, f1*f2*H, W)
    x = F.reshape(x, (0, 0, -4, -1, f2, 0))       # (N, C, f1*H, f2, W)
    x = F.reshape(x, (0, 0, 0, -3))               # (N, C, f1*H, f2*W)
    return x
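As a quick sanity check on the shapes, the same reshapes can be run imperatively (a minimal sketch using mx.nd; it verifies shapes only):

import mxnet as mx

def pixel_shuffle_2d(x, factors):
    # Same chain of reshapes as above, written imperatively.
    f1, f2 = factors
    x = mx.nd.reshape(x, (0, -4, -1, f1 * f2, 0, 0))
    x = mx.nd.reshape(x, (0, 0, -3, 0))
    x = mx.nd.reshape(x, (0, 0, -4, -1, f2, 0))
    return mx.nd.reshape(x, (0, 0, 0, -3))

x = mx.nd.random.uniform(shape=(1, 2 * 3 * 4, 5, 6))  # N=1, f1=2, f2=3, C=4
print(pixel_shuffle_2d(x, (2, 3)).shape)               # (1, 4, 10, 18)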

Would this be an interesting addition?

lanking520 (Member) commented:
Hi @kohr-h, please feel free to raise a PR contributing this operator/example in Python.
@szha @zhreshold WDYT?

zhreshold (Member) commented:

If it can be achieved simply using a combination of reshapes, we can put it in the mxnet.gluon.nn.contrib module.

kohr-h (Contributor, Author) commented Dec 5, 2018

True, it's just reshaping in the end, although it's not entirely trivial to make it work as a hybrid block. Regarding placement, gluon.nn.contrib looks okay to me, given that the core collection of layers seems rather slim.

kohr-h mentioned this issue Dec 7, 2018
Mut1nyJD commented:
Nice, I was just looking for this.

Mut1nyJD commented:
Hmm, but somehow I get strange results. I replaced all my Conv2DTranspose layers with a Conv2D(channels=2 * 2 * num_channel(*)) followed by a PixelShuffle2D((2, 2)), as sketched below.
I took the code from your PR, but I get pretty horrible results.

(*) num_channel equals the number of channels that used to be in the Conv2DTranspose
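In code, the substitution described above would look roughly like this (a sketch; the PixelShuffle2D import path is assumed from the linked PR, and the kernel size, padding, and channel count are placeholder values):

from mxnet import nd
from mxnet.gluon import nn
from mxnet.gluon.contrib.nn import PixelShuffle2D  # import path assumed from the PR

num_channel = 64  # example value: channels the replaced Conv2DTranspose produced

net = nn.HybridSequential()
# Regular convolution emitting f1*f2*num_channel feature maps ...
net.add(nn.Conv2D(channels=2 * 2 * num_channel, kernel_size=3, padding=1))
# ... followed by a (2, 2) pixel shuffle back down to num_channel maps.
net.add(PixelShuffle2D((2, 2)))
net.initialize()
print(net(nd.zeros((1, 32, 8, 8))).shape)  # (1, 64, 16, 16)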

kohr-h (Contributor, Author) commented Dec 12, 2018

Possible; I have to do more testing myself. I think you need to swap axes before reshaping.
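For reference, a variant that also reorders the elements would interleave a transpose between the reshapes, roughly like this (a sketch, not necessarily the final PR code):

def hybrid_forward(self, F, x):
    f1, f2 = self._factors
                                                  # (N, f1*f2*C, H, W)
    x = F.reshape(x, (0, -4, -1, f1 * f2, 0, 0))  # (N, C, f1*f2, H, W)
    x = F.reshape(x, (0, 0, -4, f1, f2, 0, 0))    # (N, C, f1, f2, H, W)
    x = F.transpose(x, (0, 1, 4, 2, 5, 3))        # (N, C, H, f1, W, f2)
    x = F.reshape(x, (0, 0, -3, -3))              # (N, C, f1*H, f2*W)
    return x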

Mut1nyJD commented:

@kohr-h

Btw, you might want to have a look at example/gluon/Superpixel; it looks to me like someone already wrote a PixelShuffle op in there. I haven't tried it yet to see whether it works correctly, though.

kohr-h (Contributor, Author) commented Dec 24, 2018

@Mut1nyJD I've updated the code in the PR and cross-checked it against the PyTorch implementation.
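Such a cross-check could look roughly like this (a sketch; it assumes PixelShuffle2D accepts a single integer factor):

import numpy as np
import torch
import mxnet as mx
from mxnet.gluon.contrib.nn import PixelShuffle2D  # import path assumed

r = 3
x = np.random.rand(2, 4 * r * r, 5, 7).astype(np.float32)  # (N, C*r*r, H, W)

out_pt = torch.nn.PixelShuffle(r)(torch.from_numpy(x)).numpy()
out_mx = PixelShuffle2D(r)(mx.nd.array(x)).asnumpy()
# Pure data movement, so the two outputs should agree elementwise.
np.testing.assert_allclose(out_mx, out_pt, rtol=1e-6)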

kohr-h (Contributor, Author) commented Dec 24, 2018

> Btw, you might want to have a look at example/gluon/Superpixel; it looks to me like someone already wrote a PixelShuffle op in there. I haven't tried it yet to see whether it works correctly, though.

Thanks for the pointer. Indeed, that's the same operation. It would make sense to drop the new layer in there.

ThomasDelteil pushed a commit that referenced this issue Feb 14, 2019

* Add pixelshuffle layers, closes #13548
* Remove fmt comments
* Use explicit class in super()
* Add axis swapping to pixel shuffling, add tests
* Add documentation to pixel shuffle layers
* Use pixelshuffle layer and fix download in superres example
* Add pixelshuffle layers to API doc page
stephenrawls, jessr92, drivanov, vdantu, and haohuanw later pushed commits to their forks that referenced this issue with the same changes (Feb–Jun 2019).