
Does MKLDNN support dilate for 3d convolutions? #676

Closed

ChaiBapchya opened this issue Mar 26, 2020 · 7 comments
Labels: integration (Issues with integrating the library into applications)

Comments

@ChaiBapchya commented Mar 26, 2020

MXNet has mkldnn as a submodule (currently pointing to v1.0).
Dilated 3D convolution is not supported on the MXNet side, so I wanted to confirm: do later versions of MKLDNN support it? If not, can it be extended?

Refer:

https://github.com/apache/incubator-mxnet/blob/56e79853ad5cf98baf84454eb595c7658bef6ee6/src/operator/nn/mkldnn/mkldnn_convolution.cc#L145

@emfomenk

Yes, DNNL does support dilated 3D convolutions:

# 3D and 2D conv w/o dilation (note dd=0 and dh=0)
$ ./benchdnn --conv --mode=P mb16ic16oc16_id14kd3dd0pd0 mb16ic16oc16_ih14kh3dh0ph0
Output template: perf,%engine%,%name%,%prb%,%Gops%,%Gfreq%,%-time%,%-Gflops%,%0time%,%0Gflops%
perf,cpu,,--conv mb16ic16id14oc16od12kd3pd0,0.382206,0,0.147461,2591.91,0.151043,2530.44
perf,cpu,,--conv mb16ic16ih14oc16oh12kh3ph0,0.0106168,0,0.0065918,1610.61,0.00735836,1442.83
tests:2 passed:0 skipped:0 mistrusted:0 unimplemented:0 failed:0 listed:0
total perf: min(ms):0.154053 avg(ms):0.158401

# 3D and 2D convs w/ dilation (note dd=2 and dh=2)
$ ./benchdnn --conv --mode=P mb16ic16oc16_id14kd3dd2pd0 mb16ic16oc16_ih14kh3dh2ph0
Output template: perf,%engine%,%name%,%prb%,%Gops%,%Gfreq%,%-time%,%-Gflops%,%0time%,%0Gflops%
perf,cpu,,--conv mb16ic16id14oc16od8kd3pd0dd2,0.113246,0,0.0444336,2548.66,0.0461835,2452.09
perf,cpu,,--conv mb16ic16ih14oc16oh8kh3ph0dh2,0.00471859,0,0.00463867,1017.23,0.00560726,841.514
tests:2 passed:0 skipped:0 mistrusted:0 unimplemented:0 failed:0 listed:0
total perf: min(ms):0.0490723 avg(ms):0.0517908
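
For reference, here is a minimal, hypothetical sketch of how an application could set up the same dilated 3D convolution through the DNNL C++ API (v1.1+ naming; v1.0 uses the mkldnn namespace and mkldnn.hpp header). The shapes mirror the benchdnn case mb16ic16oc16_id14kd3dd2pd0 above; note that DNNL counts dilation from zero, so dilates = {2, 2, 2} corresponds to dd2.

#include <dnnl.hpp>

int main() {
    using namespace dnnl;
    engine eng(engine::kind::cpu, 0);

    // Shapes as in the benchdnn problem: N=16, IC=OC=16, ID=IH=IW=14,
    // KD=KH=KW=3, dilates=2, no padding, stride 1, so the output depth is
    // OD = 14 - ((3 - 1) * (2 + 1) + 1) + 1 = 8.
    memory::dims src_dims = {16, 16, 14, 14, 14};
    memory::dims wei_dims = {16, 16, 3, 3, 3};
    memory::dims dst_dims = {16, 16, 8, 8, 8};
    memory::dims strides = {1, 1, 1}, dilates = {2, 2, 2}, pads = {0, 0, 0};

    memory::desc src_md(src_dims, memory::data_type::f32, memory::format_tag::any);
    memory::desc wei_md(wei_dims, memory::data_type::f32, memory::format_tag::any);
    memory::desc dst_md(dst_dims, memory::data_type::f32, memory::format_tag::any);

    // The dilates argument is what requests the dilated 3D case; primitive
    // descriptor creation throws if no implementation supports the combination.
    convolution_forward::desc conv_d(prop_kind::forward_inference,
            algorithm::convolution_direct, src_md, wei_md, dst_md,
            strides, dilates, pads, pads);
    convolution_forward::primitive_desc conv_pd(conv_d, eng);

    // From here, allocate memories with conv_pd.src_desc() etc. and execute
    // the primitive as usual.
    return 0;
}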

@emfomenk

The limitations can also be found in the dev guide.

@ChaiBapchya (Author)

Which version of mkldnn started supporting dilated 3D convolution?

@emfomenk

A long, long time ago, in v0.10 or so. I am not sure why there is such a comment in MXNet. Maybe there are some limitations in the integration code (but this should be checked with the MXNet team).

I just ran a dilated convolution with Intel MKL-DNN v1.0 and it seems to work fine:

$ MKLDNN_VERBOSE=1 ./tests/benchdnn/benchdnn --conv --mode=C mb16ic16oc16_id14kd3dd2pd0 mb16ic16oc16_ih14kh3dh2ph0
mkldnn_verbose,info,Intel MKL-DNN v1.0.4 (commit 883133c3b97d27fc6eb976b2cea2a25252f0d911)
mkldnn_verbose,info,Detected ISA is Intel AVX-512 with AVX512BW, AVX512VL, and AVX512DQ extensions
mkldnn_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:aBcde16b:f0,num:1,16x16x14x14x14,0.568848
mkldnn_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:ABcde16b16a:f0,num:1,16x16x3x3x3,0.032959
mkldnn_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:abcde:f0 dst_f32::blocked:aBcde16b:f0,num:1,16x16x8x8x8,0.0888672
mkldnn_verbose,exec,cpu,reorder,simple:any,undef,src_f32::blocked:a:f0 dst_f32::blocked:a:f0,num:1,16,4.3291
mkldnn_verbose,exec,cpu,convolution,jit:avx512_common,forward_training,src_f32::blocked:aBcde16b:f0 wei_f32::blocked:ABcde16b16a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcde16b:f0,alg:convolution_direct,mb16_ic16oc16_id14od8kd3sd1dd2pd0_ih14oh8kh3sh1dh2ph0_iw14ow8kw3sw1dw2pw0,0.224121
mkldnn_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:aBcde16b:f0 dst_f32::blocked:abcde:f0,num:1,16x16x8x8x8,0.0449219
0:PASSED __REPRO: mb16ic16id14ih14iw14oc16od8oh8ow8kd3kh3kw3dd2dh2dw2n"wip"
mkldnn_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:aBcd16b:f0,num:1,16x16x14x14,0.0529785
mkldnn_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:ABcd16b16a:f0,num:1,16x16x3x3,0.0141602
mkldnn_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:aBcd16b:f0,num:1,16x16x8x8,0.0290527
mkldnn_verbose,exec,cpu,reorder,simple:any,undef,src_f32::blocked:a:f0 dst_f32::blocked:a:f0,num:1,16,0.0109863
mkldnn_verbose,exec,cpu,convolution,jit:avx512_common,forward_training,src_f32::blocked:aBcd16b:f0 wei_f32::blocked:ABcd16b16a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:aBcd16b:f0,alg:convolution_direct,mb16_ic16oc16_ih14oh8kh3sh1dh2ph0_iw14ow8kw3sw1dw2pw0,0.0319824
mkldnn_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:aBcd16b:f0 dst_f32::blocked:abcd:f0,num:1,16x16x8x8,0.0109863
1:PASSED __REPRO: mb16ic16ih14oc16oh8kh3dh2n"wip"
tests:2 passed:2 skipped:0 mistrusted:0 unimplemented:0 failed:0
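
As a cross-check on the shapes in the verbose lines above, oneDNN's output-size formula (with the convention that a stored dilation of 0 means no dilation) gives, for the depth dimension:

    OD = (ID + pd_l + pd_r - ((KD - 1) * (DD + 1) + 1)) / SD + 1
       = (14 + 0 + 0 - ((3 - 1) * (2 + 1) + 1)) / 1 + 1
       = 8

which matches the od8 reported for the dilated 3D case.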

@emfomenk

Needless to say, there were some bugs that we fixed in newer versions. For instance, if you look at the release notes for DNNL v1.1.1, there was an issue with backward by weights. Maybe the MXNet team decided to disable some cases because of a particular issue, I don't know. But I think the latest version should work fine, and the restriction could be removed.

Also, summoning @tprimak -- maybe she remembers something about dilated convolutions and bugs from MxNet...
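
Regarding the v1.1.1 fix mentioned above: a hypothetical way for an integration to gate the feature on the library it is actually linked against is to query the runtime version (the sketch below assumes DNNL v1.1+ naming; older MKL-DNN releases expose the same data via mkldnn_version()):

#include <dnnl.h>
#include <cstdio>

int main() {
    // dnnl_version() reports the version of the DNNL library linked at runtime,
    // so an integration could enable dilated 3D convolution only on releases
    // that contain the backward-by-weights fix (v1.1.1 or later).
    const dnnl_version_t *v = dnnl_version();
    bool has_fix = (v->major > 1)
            || (v->major == 1 && v->minor > 1)
            || (v->major == 1 && v->minor == 1 && v->patch >= 1);
    std::printf("DNNL %d.%d.%d, backward-by-weights fix present: %s\n",
            v->major, v->minor, v->patch, has_fix ? "yes" : "no");
    return 0;
}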

@pengzhao-intel

We're following up on this case, thanks @ChaiBapchya @emfomenk.

@vpirogov added the integration label and removed the question label on Apr 2, 2020
@vpirogov (Member) commented Apr 2, 2020

Closing as the issue is related to MXNet integration.

@vpirogov closed this as completed on Apr 2, 2020