
Conversation

@Deivanayaki-S (Contributor)

This PR introduces support for the upsample_bicubic2d operation in the PyTorch frontend, expanding the existing upsampling functionality to include bicubic interpolation. The implementation matches the default behavior of PyTorch's torch.nn.functional.interpolate, which uses a cubic convolution kernel with an alpha value of -0.75. Notably, the default alpha value has been changed from -0.5 to -0.75 so that the results exactly match those produced by PyTorch. With this support in place, the following models can now run successfully (a short kernel sketch follows the list).

  1. hustvl/yolos-tiny
  2. depth-anything/Depth-Anything-V2-Small-hf
  3. yolo_wgisd
  4. facebook/dinov2-base
  5. OpenGVLab/InternViT-300M-448px

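For reference, bicubic interpolation weights come from Keys' cubic convolution kernel; the sketch below is not code from this PR, but shows the kernel with alpha = -0.75 (the value PyTorch uses and which the frontend now matches) together with the torch.nn.functional.interpolate call that lowers to upsample_bicubic2d.

```python
import torch
import torch.nn.functional as F

def cubic_kernel(x: float, alpha: float = -0.75) -> float:
    """Keys cubic convolution kernel; alpha=-0.75 matches PyTorch's bicubic default."""
    x = abs(x)
    if x <= 1.0:
        return (alpha + 2.0) * x**3 - (alpha + 3.0) * x**2 + 1.0
    if x < 2.0:
        return alpha * x**3 - 5.0 * alpha * x**2 + 8.0 * alpha * x - 4.0 * alpha
    return 0.0

# Bicubic upsampling in PyTorch; this is the call that lowers to upsample_bicubic2d.
x = torch.randn(1, 3, 8, 8)
y = F.interpolate(x, scale_factor=2, mode="bicubic", align_corners=False)
print(y.shape)  # torch.Size([1, 3, 16, 16])
```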
@Deivanayaki-S marked this pull request as ready for review on May 9, 2025, 15:40
@Hzfengsy (Member) left a comment:

Overall LGTM, but a minor question

@Hzfengsy merged commit dcb5a3a into apache:main on May 10, 2025
11 checks passed
ShiboXing pushed a commit to ShiboXing/tvm that referenced this pull request Aug 10, 2025
… and FX graph (apache#17932)

* add upsample bicubic op support into torch frontend

* fix cubic alpha value for all interpolate func

* fix cubic alpha values in all test script

* update the mapping code in frontend

* fix lint issue

---------

Co-authored-by: deivanayakisankaralingam <deiva@Deivanayaki>
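
As an illustrative usage sketch (not part of this PR's diff), a module containing bicubic interpolate can now be exported and imported through the frontend. This assumes the Relax PyTorch frontend's from_exported_program entry point; the exact import path may differ by TVM version.

```python
import torch
import torch.nn.functional as F
from torch.export import export
from tvm.relax.frontend.torch import from_exported_program  # API name assumed; check your TVM version

class Upsample(torch.nn.Module):
    def forward(self, x):
        # Lowers to upsample_bicubic2d, which the frontend can now convert.
        return F.interpolate(x, scale_factor=2, mode="bicubic", align_corners=False)

example = (torch.randn(1, 3, 32, 32),)
exported = export(Upsample(), example)   # torch.export ExportedProgram
mod = from_exported_program(exported)    # Relax IRModule containing the resize op
mod.show()
```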