This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[MKLDNN]Refactor requantize to speed up execution #14608

Merged
merged 7 commits into from
Apr 28, 2019

Conversation

ZhennanQin
Contributor

@ZhennanQin ZhennanQin commented Apr 3, 2019

Description

Currently, requantize is too slow to be usable. This PR refactors its MKLDNN implementation to make it practical. A second change adds an out_type option to align requantize with quantizeV2, so requantize can now output uint8 when possible.

@pengzhao-intel @TaoLv @reminisce @anirudh2290
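The int32-to-int8/uint8 narrowing that requantize performs can be sketched as below. This is a hypothetical, self-contained illustration, not MXNet's actual MKLDNN code: the names `Requantize` and `RequantizeResult` and the exact scale handling are assumptions; only the general idea (rescale int32 data quantized against a calibrated float range into int8, or into uint8 when the calibrated minimum shows the data is non-negative) follows the PR description.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <limits>
#include <vector>

// Illustrative sketch only; names and layout are hypothetical.
struct RequantizeResult {
  std::vector<int> out;    // values fit the chosen 8-bit type
  float out_min, out_max;  // float range represented by the output
  bool is_uint8;           // true when uint8 output was chosen
};

inline RequantizeResult Requantize(const std::vector<int32_t>& data,
                                   float min_range, float max_range) {
  // Float magnitude represented by the int32 data.
  const float real_range = std::max(std::fabs(min_range), std::fabs(max_range));
  const float scale_in =
      real_range / static_cast<float>(std::numeric_limits<int32_t>::max());

  RequantizeResult r;
  // uint8 is only valid when the calibrated minimum rules out negatives.
  r.is_uint8 = (min_range >= 0.f);
  const float quantized_range = r.is_uint8 ? 255.f : 127.f;
  r.out_min = r.is_uint8 ? 0.f : -real_range;
  r.out_max = real_range;
  const float scale_out = real_range / quantized_range;
  // Fuse both scales so each element needs a single multiply.
  const float scale = scale_in / scale_out;

  r.out.reserve(data.size());
  for (int32_t v : data)
    r.out.push_back(static_cast<int>(std::lround(v * scale)));
  return r;
}
```

The fused single-multiply rescale is the kind of simplification that makes a refactored requantize fast; the real kernel additionally runs through MKLDNN primitives.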

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

  • Feature1, tests, (and when applicable, API doc)
  • Feature2, tests, (and when applicable, API doc)

Comments

  • If this change is a backward incompatible change, why must this change be made.
  • Interesting edge cases to note here

@piyushghai
Contributor

@ZhennanQin Thanks for your contributions. Can you also look into the CI failures?

@mxnet-label-bot Add [pr-work-in-progress, Quantization, MKLDNN]

@marcoabreu marcoabreu added MKLDNN pr-work-in-progress PR is still work in progress Quantization Issues/Feature Requests related to Quantization labels Apr 3, 2019
@glingyan
Contributor

@ZhennanQin any further action planned for this PR?

@glingyan
Contributor

@ZhennanQin the patch could not be applied together with #14641:

diff a/src/operator/quantization/mkldnn/mkldnn_quantize_v2-inl.h b/src/operator/quantization/mkldnn/mkldnn_quantize_v2-inl.h (rejected hunks)
@@ -70,7 +70,7 @@ static void MKLDNNQuantizeComputeKer(const std::vector<NDArray>& inputs,
       }
     }
-    auto out_type = GetOutputType(param);
+    auto out_type = GetQuantizeOutputType(param);
     if (out_type == mshadow::kUint8) {
       real_range = std::max(0.f, data_max);
       quantized_range = MaxValue<uint8_t>();
@@ -141,7 +141,7 @@ static void MKLDNNQuantizeV2Compute(const nnvm::NodeAttrs& attrs, const OpContex
       MKLDNNStream::Get()->Submit();
     }
   } else {
-    auto out_type = GetOutputType(param);
+    auto out_type = GetQuantizeOutputType(param);
     if (out_type == mshadow::kUint8) {
       MKLDNNQuantizeComputeKer<float, uint8_t>(inputs, outputs, param, req);
     } else if (out_type == mshadow::kInt8) {

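For readers following the rejected hunks above, here is a minimal sketch of what the renamed helper and the uint8 branch do. Everything below is illustrative: the real `GetQuantizeOutputType` reads an MXNet parameter struct, and `MaxValue<uint8_t>()` is an MXNet utility; only the logic visible in the diff is reproduced, under assumed names.

```cpp
#include <string>

// Hypothetical stand-ins for mshadow's type enum values.
enum OutType { kUint8, kInt8 };

// Sketch of the renamed helper: honor an explicit out_type request,
// otherwise fall back to int8, the choice that is always representable.
inline OutType GetQuantizeOutputTypeSketch(const std::string& out_type_param) {
  if (out_type_param == "uint8") return kUint8;
  return kInt8;
}

// Mirrors the uint8 branch visible in the hunk: the real range is
// clamped at zero and paired with the uint8 quantized range of 255.
inline void Uint8Range(float data_max, float* real_range,
                       float* quantized_range) {
  *real_range = data_max > 0.f ? data_max : 0.f;  // std::max(0.f, data_max)
  *quantized_range = 255.f;                       // MaxValue<uint8_t>()
}
```

The clamp matters because a uint8 output cannot represent negative values, so a calibrated maximum below zero collapses the range to zero rather than going negative.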
@TaoLv
Member

TaoLv commented Apr 21, 2019

@ZhennanQin please take a look at the conflicts and CI failures. Thank you.

@ZhennanQin ZhennanQin changed the title [WIP]Refactor requantize [MKLDNN]Refactor requantize to speed up execution Apr 24, 2019
Contributor

@pengzhao-intel pengzhao-intel left a comment

LGTM

Contributor

@xinyu-intel xinyu-intel left a comment

.

@pengzhao-intel
Contributor

Thanks for all of the contributions. Merging now :)

@pengzhao-intel pengzhao-intel merged commit 6c60025 into apache:master Apr 28, 2019
access2rohit pushed a commit to access2rohit/incubator-mxnet that referenced this pull request May 14, 2019
* Refactor requantize

* fix ci

* Fix CI

* Fix ci
haohuanw pushed a commit to haohuanw/incubator-mxnet that referenced this pull request Jun 23, 2019
* Refactor requantize

* fix ci

* Fix CI

* Fix ci
@ZhennanQin ZhennanQin deleted the fix_requantize branch October 28, 2019 02:41