Conversation
This upgrades the 3rdparty dependency to DNNL v1.2.1, and the changes are ready for review. I hope someone can help retrigger the failing case. Thanks.
@TaoLv you can retrigger it yourself.
Thanks for the reminder, @leezu. Yes, I just got the privilege to re-trigger CI jobs last week.
@@ -100,7 +100,7 @@ static void VerifyDefMem(const mkldnn::memory &mem) {

 TEST(MKLDNN_UTIL_FUNC, MemFormat) {
   // Check whether the number of formats is correct.
-  CHECK_EQ(mkldnn_format_tag_last, 131);
+  CHECK_EQ(mkldnn_format_tag_last, 152);
Where can we get this number? Or do we need to calculate it manually every time we upgrade to a new version of DNNL?
We need to print this flag in C++ to get the value. This test was designed to notify us when new formats are added to DNNL, so that we can carefully check whether the new formats break any integration code, or whether the integration code needs to be improved to handle them.
I noticed that DNNL released the 1.2.2 patch release today. I'm going to update this PR to the latest release of the library.
Note that the master branch is now used for 2.x development. If this needs to be backported to 1.7, it needs to be done manually.
* 'master' of https://github.com/apache/incubator-mxnet: (192 commits)
  * impl - FFI for np einsum (apache#17869)
  * [Numpy] FFI for diag/diagonal/diag_indices_from (apache#17789)
  * [Numpy] Kron operator (apache#17323)
  * cmake: Set DMLC_LOG_FATAL_THROW only for building mxnet and not for tvm (apache#17878)
  * Add simplified HybridBlock.forward without F (apache#17530)
  * Use FP32 copy of weights for norm (multitensor LAMB optimizer) (apache#17700)
  * Use multi-tensor sumSQ in clip_global_norm (apache#17652)
  * [Numpy] Add op fmax, fmin, fmod (apache#17567)
  * Adding sparse support to MXTensor for custom operators (apache#17569)
  * Update 3rdparty/mkldnn to v1.2.2 (apache#17313)
  * Dynamic subgraph compile support (apache#17623)
  * Refactor cpp-package CMakeLists.txt & add missing inference/imagenet_inference (apache#17835)
  * staticbuild: Fix potential user-assisted execution of arbitrary code (apache#17860)
  * FFI for np.argmax and np.argmin (apache#17843)
  * ffi for roll/rot90 (apache#17861)
  * Skip test_multi_worker_dataloader_release_pool on OS X (apache#17797)
  * add ffi for full_like, binary (apache#17811)
  * HybridBlock.export() to return created filenames (apache#17758)
  * Fix SoftReLU fused operator numerical stability (apache#17849)
  * CI: Test clang10 cpu & gpu builds with -WError (apache#17830)
  * ...
* fix cpp test
* update to dnnl v1.2-rc
* pin rls-v1.2
* build dnnl with DNNL_ENABLE_CONCURRENT_EXEC=ON
* update rls-v1.2
* update to formal 1.2 release
* try patch
* fix rnn
* pin rls-v1.2
* dnnl v1.2.1
* dnnl v1.2.2
Description
Currently this targets the release candidate for v1.2.0. It will be changed to the formal release once that is published.
Checklist
Essentials
Please feel free to remove inapplicable items for your PR.
Changes
Comments