Fail to fall back when sparse arrays are passed to MKLDNN-enabled operators #11448
Currently, MKLDNN-enabled operators such as convolution and pooling can't handle sparse arrays correctly: instead of falling back to the default dense implementation, they fail. The root cause is that the storage-type inference of these operators doesn't return the right dispatch mode.

The MKLDNN-enabled operators include:

We may also need to test the operators below:

@haojin2 @azai91 @pengzhao-intel @TaoLv

Comments

@zheng-da Could you help provide a test case?

Thank you for submitting the issue! @sandeep-krishnamurthy, requesting this be labeled.

@eric-haibin-lin @haojin2 Could you please provide a test case?

There's an existing batch norm training test that was disabled due to flakiness.

Concat is tested and fixed.

I'll take FullyConnected if no one's working on it right now.

I created a PR for BatchNorm.

What else needs to be covered? @luobao-intel can help.

@pengzhao-intel @zouluobao Everything I listed needs to be covered. The fix for BatchNorm has been merged, and @haojin2 is working on FullyConnected. We need to fix the other operators.

@luobao-intel will handle the other ops. :)

@pengzhao-intel @luobao-intel We need to fix them and merge the PRs into the v1.3 release, which is probably before the end of next week.

@haojin2 I thought you had fixed its backward storage-type inference.

@zheng-da The related PRs are merged. Could you verify that all issues are fixed and close the issue?
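To illustrate the fallback logic the issue is asking for, here is a minimal, self-contained Python sketch of how an operator's storage-type inference should pick a dispatch mode. This is a hypothetical illustration, not actual MXNet source: the string constants and the `infer_storage_type` function are invented for the example, and only mirror the general idea that a dense-only (MKLDNN) kernel must request fallback whenever any input is sparse, rather than claiming it can compute on sparse storage directly.

```python
# Hypothetical sketch of dispatch-mode selection for a dense-only operator.
# These names are illustrative, not MXNet's real identifiers.
DEFAULT, CSR, ROW_SPARSE = "default", "csr", "row_sparse"
FCOMPUTE_EX = "fcompute_ex"      # operator kernel handles the inputs natively
FALLBACK = "fallback_compute"    # executor densifies inputs, uses default impl


def infer_storage_type(in_stypes):
    """Return (out_stypes, dispatch_mode) for a dense-only operator."""
    if all(s == DEFAULT for s in in_stypes):
        # All inputs dense: the specialized (e.g. MKLDNN) kernel can run.
        return [DEFAULT], FCOMPUTE_EX
    # At least one sparse input: request fallback so the inputs are
    # converted to dense and the default CPU implementation is called,
    # instead of failing as described in this issue.
    return [DEFAULT], FALLBACK


print(infer_storage_type([DEFAULT, DEFAULT]))  # dense path
print(infer_storage_type([CSR, DEFAULT]))      # must fall back
```

The bug described above corresponds to the second branch being wrong (or absent) in the affected operators' inference functions, so sparse inputs never trigger the fallback path.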