Model Quantization with CUDNN #15796

Closed
ou525 opened this issue Aug 8, 2019 · 12 comments
Labels
Quantization Issues/Feature Requests related to Quantization

Comments


ou525 commented Aug 8, 2019

Hi, I am trying the quantization example with cuDNN on the master branch of MXNet. However, when I run the quantized model imagenet1k-inception-bn-quantized-0000 through the C++ interface, the following error occurs:

Description

[11:01:57] src/executor/attach_op_execs_pass.cc:355: Neither FCompute nor FComputeEx registered _contrib_quantized_act
[... the line above is repeated 17 times ...]
Segmentation fault (core dumped)

Environment info (Required)

----------Python Info----------
('Version :', '2.7.12')
('Compiler :', 'GCC 5.4.0 20160609')
('Build :', ('default', 'Nov 12 2018 14:36:49'))
('Arch :', ('64bit', 'ELF'))
------------Pip Info-----------
('Version :', '19.0.3')
('Directory :', '/usr/local/lib/python2.7/dist-packages/pip')
----------MXNet Info-----------

('Version :', '1.5.0')
('Directory :', '/usr/local/lib/python2.7/dist-packages/mxnet')
('Commit Hash :', '75a9e187d00a8b7ebc71412a02ed0e3ae489d91f')
('Library :', ['/usr/local/lib/python2.7/dist-packages/mxnet/libmxnet.so'])
Build features:
✔ CUDA
✔ CUDNN
✔ NCCL
✔ CUDA_RTC
✖ TENSORRT
✔ CPU_SSE
✔ CPU_SSE2
✔ CPU_SSE3
✔ CPU_SSE4_1
✔ CPU_SSE4_2
✖ CPU_SSE4A
✔ CPU_AVX
✖ CPU_AVX2
✖ OPENMP
✖ SSE
✔ F16C
✖ JEMALLOC
✖ BLAS_OPEN
✖ BLAS_ATLAS
✖ BLAS_MKL
✖ BLAS_APPLE
✔ LAPACK
✖ MKLDNN
✔ OPENCV
✖ CAFFE
✖ PROFILER
✔ DIST_KVSTORE
✖ CXX14
✖ INT64_TENSOR_SIZE
✔ SIGNAL_HANDLER
✖ DEBUG
----------System Info----------
('Platform :', 'Linux-4.15.0-55-generic-x86_64-with-Ubuntu-16.04-xenial')
('system :', 'Linux')
('node :', 'ou-OptiPlex-7050')
('release :', '4.15.0-55-generic')
('version :', '#60~16.04.2-Ubuntu SMP Thu Jul 4 09:03:09 UTC 2019')
----------Hardware Info----------
('machine :', 'x86_64')
('processor :', 'x86_64')
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz
Stepping: 9
CPU MHz: 2811.659
CPU max MHz: 4200.0000
CPU min MHz: 800.0000
BogoMIPS: 7200.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 8192K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d

GPU: GTX 1050 Ti
MXNet: mxnet-cu100
CUDA 10.0, cuDNN 7.3.1

@xinyu-intel
Contributor

@ou525 Hi, you can try excluding all of your model's activation layers from quantization by setting excluded_sym_names, to check whether the error still occurs.
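
For reference, a minimal sketch of what that could look like with mxnet.contrib.quantization.quantize_model (the checkpoint prefix Inception-BN and the JSON file name are placeholders for your own model files, and calib_mode='none' skips calibration just to keep the example short):

import json
import mxnet as mx
from mxnet.contrib.quantization import quantize_model

# Gather the names of all Activation nodes from the symbol JSON
# ('Inception-BN-symbol.json' is a placeholder for your model file).
with open('Inception-BN-symbol.json') as f:
    nodes = json.load(f)['nodes']
act_names = [n['name'] for n in nodes if n['op'] == 'Activation']

sym, arg_params, aux_params = mx.model.load_checkpoint('Inception-BN', 0)

# Excluding every activation layer keeps _contrib_quantized_act out of the graph.
qsym, qarg_params, aux_params = quantize_model(
    sym=sym, arg_params=arg_params, aux_params=aux_params,
    ctx=mx.gpu(0), excluded_sym_names=act_names,
    calib_mode='none')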

pengzhao-intel added the Quantization Issues/Feature Requests related to Quantization label Aug 8, 2019

ou525 commented Aug 8, 2019

@xinyu-intel Thanks for the reply. The excluded_sym_names parameters are as follows; I found them all in the JSON file, with none missing. Can you explain in more detail?
excluded_sym_names += ['ch_concat_3a_chconcat',
'ch_concat_3b_chconcat',
'ch_concat_3c_chconcat',
'ch_concat_4a_chconcat',
'ch_concat_4b_chconcat',
'ch_concat_4c_chconcat',
'ch_concat_4d_chconcat',
'ch_concat_4e_chconcat',
'ch_concat_5a_chconcat',
'ch_concat_5b_chconcat']

@xinyu-intel
Contributor

@ou525 Yes, you may need to exclude both the concat and the activation layers, since the GPU backend currently doesn't support quantized_concat or quantized_activation. If you want a fully quantized model, you can install mxnet-mkl and quantize your model on the CPU instead.
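
A sketch of that combined exclusion, scanning the symbol JSON for both op types instead of listing each layer name by hand (the file name is a placeholder; 'Concat' and 'Activation' are the op names as stored in the symbol JSON):

import json

# Collect every Concat and Activation node name from the symbol JSON;
# pass the resulting list as excluded_sym_names to quantize_model.
with open('Inception-BN-symbol.json') as f:
    nodes = json.load(f)['nodes']
excluded_sym_names = [n['name'] for n in nodes
                      if n['op'] in ('Concat', 'Activation')]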

@pengzhao-intel
Contributor

@xinyu-intel could we make the GPU backend exclude these layers by default?


xnnxax527 commented Aug 13, 2019

Hi @xinyu-intel, I see you released the calibration and quantization code for Intel MKL-DNN, thanks.
I am very interested in quantizing the GluonCV SSD model on GPU, but the MKL-DNN-quantized SSD model can't run with ctx=mx.gpu(). What's the problem?
Should the model be quantized with cuDNN if I want to run it on GPU?

When I run net(x), the error log is:

ipdb> net(x)
*** mxnet.base.MXNetError: [00:17:29] src/imperative/./imperative_utils.h:558: Check failed: fcompute != nullptr: One of FStatefulCompute and FStatefulComputeEx must be registered for stateful operator _sg_mkldnn_conv
Stack trace:
[bt] (0) /home/jiashen.hjs/anaconda2/envs/py35/lib/python3.5/site-packages/mxnet/libmxnet.so(+0x4f710b) [0x7f8a1582b10b]
[bt] (1) /home/jiashen.hjs/anaconda2/envs/py35/lib/python3.5/site-packages/mxnet/libmxnet.so(mxnet::imperative::PushOperator(mxnet::OpStatePtr const&, nnvm::Op const*, nnvm::NodeAttrs const&, mxnet::Context const&, std::vector<mxnet::engine::Var*, std::allocator<mxnet::engine::Var*> > const&, std::vector<mxnet::engine::Var*, std::allocator<mxnet::engine::Var*> > const&, std::vector<mxnet::Resource, std::allocator<mxnet::Resource> > const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<unsigned int, std::allocator<unsigned int> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, mxnet::DispatchMode)+0x9cf) [0x7f8a17e3447f]
[bt] (2) /home/jiashen.hjs/anaconda2/envs/py35/lib/python3.5/site-packages/mxnet/libmxnet.so(mxnet::Imperative::InvokeOp(mxnet::Context const&, nnvm::NodeAttrs const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, mxnet::DispatchMode, mxnet::OpStatePtr)+0xa71) [0x7f8a17e368d1]
[bt] (3) /home/jiashen.hjs/anaconda2/envs/py35/lib/python3.5/site-packages/mxnet/libmxnet.so(+0x2b0c990) [0x7f8a17e40990]
[bt] (4) /home/jiashen.hjs/anaconda2/envs/py35/lib/python3.5/site-packages/mxnet/libmxnet.so(+0x2b0e5c4) [0x7f8a17e425c4]
[bt] (5) /home/jiashen.hjs/anaconda2/envs/py35/lib/python3.5/site-packages/mxnet/libmxnet.so(mxnet::imperative::RunGraph(bool, nnvm::IndexedGraph const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, unsigned long, unsigned long, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> >&&, std::vector<unsigned int, std::allocator<unsigned int> >&&, std::vector<mxnet::OpStatePtr, std::allocator<mxnet::OpStatePtr> >*, std::vector<mxnet::DispatchMode, std::allocator<mxnet::DispatchMode> > const&, bool, std::vector<mxnet::TShape, std::allocator<mxnet::TShape> >*)+0x208) [0x7f8a17e42a58]
[bt] (6) /home/jiashen.hjs/anaconda2/envs/py35/lib/python3.5/site-packages/mxnet/libmxnet.so(mxnet::CachedOp::DynamicForward(mxnet::Context const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, bool)+0x124e) [0x7f8a17e10e6e]
[bt] (7) /home/jiashen.hjs/anaconda2/envs/py35/lib/python3.5/site-packages/mxnet/libmxnet.so(mxnet::CachedOp::Forward(std::shared_ptr<mxnet::CachedOp> const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&)+0xb55) [0x7f8a17e16825]
[bt] (8) /home/jiashen.hjs/anaconda2/envs/py35/lib/python3.5/site-packages/mxnet/libmxnet.so(MXInvokeCachedOp+0x4ab) [0x7f8a17d2ac9b]

@pengzhao-intel
Contributor

@DickJC123 @ptrendx is there any plan on GPU quantization?

@xinyu-intel
Contributor

@xnnxax527 Yes, you can quantize SSD with ctx=mx.gpu(0). Currently, MXNet only supports a limited set of quantized operators for GPU with cuDNN, including quantized_conv, quantized_pool, quantized_flatten and quantized_fullyconnected. Also, I'm not sure whether your GeForce graphics card supports the DP4A instruction.
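
Roughly, the GPU path would look like the sketch below (the checkpoint prefix 'ssd' is a placeholder; a GluonCV network would first need to be exported to a symbol/params checkpoint, and int8 GPU inference relies on DP4A, i.e. compute capability 6.1 or newer):

import mxnet as mx
from mxnet.contrib.quantization import quantize_model

# Placeholder prefix; export your GluonCV SSD model to symbol/params first.
sym, arg_params, aux_params = mx.model.load_checkpoint('ssd', 0)

# Every op without a cuDNN quantized kernel must be listed here by name.
excluded = []

qsym, qarg_params, aux_params = quantize_model(
    sym=sym, arg_params=arg_params, aux_params=aux_params,
    ctx=mx.gpu(0),               # quantize for GPU with cuDNN kernels
    excluded_sym_names=excluded,
    calib_mode='none')           # calibration skipped for brevity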

@pengzhao-intel
Contributor

Fixed and closing.

@pengzhao-intel
Contributor

Feel free to reopen if you encounter the problem again.

@jackchinor

@pengzhao-intel when I run the quantization example 1. Model Quantization with Intel® MKL-DNN, there is an error: "quantize_model_mkldnn only support Intel cpu platform with MKL-DNN Backend". I didn't change the example code, and I installed mxnet-mkl via "pip install mxnet-mkl --pre". Could you tell me how to fix it?

@pengzhao-intel
Contributor

@jackchinor could you paste your cmd and output? @xinyu-intel

@xinyu-intel
Contributor

@jackchinor try to set ctx to cpu
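
In code, that would be roughly the following (a sketch only; quantize_model_mkldnn raises the quoted error whenever the context passed in is not a CPU context, and the checkpoint prefix 'model' is a placeholder):

import mxnet as mx
from mxnet.contrib.quantization import quantize_model_mkldnn

# Placeholder checkpoint prefix from the MKL-DNN quantization example.
sym, arg_params, aux_params = mx.model.load_checkpoint('model', 0)

qsym, qarg_params, aux_params = quantize_model_mkldnn(
    sym=sym, arg_params=arg_params, aux_params=aux_params,
    ctx=mx.cpu(),        # the MKL-DNN path only accepts a CPU context
    calib_mode='none')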
