
[Numpy] FFI for einsum, dstack, unique #17869

Merged: 1 commit merged into apache:master on Mar 25, 2020

Conversation

@DwwWxx (Contributor) commented Mar 18, 2020

Description

FFI for einsum, dstack, unique

| Numpy operator | Old FFI (ctypes), µs | New FFI (cython), µs |
| -------------- | -------------------- | -------------------- |
| einsum         | 82.55                | 23.08                |
| dstack         | 65.53                | 36.88                |
| unique         | 121.73               | 40.83                |
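
The table compares the per-call cost of the old ctypes-based front end with the new cython FFI path. Below is a minimal, illustrative timing sketch for the three operators; the shapes, repeat counts, and the `bench` helper are assumptions made for this example, not the repository's actual benchmark script, so absolute numbers will differ from the table.

```python
# Illustrative micro-benchmark for the FFI-backed mxnet.np operators.
# Shapes, repeat counts, and bench() are assumptions for this sketch.
import time

import mxnet as mx
from mxnet import np, npx

npx.set_np()  # enable NumPy-compatible semantics

def bench(fn, warmup=10, repeat=1000):
    """Average wall-clock time per call, in microseconds."""
    for _ in range(warmup):
        fn()
    mx.nd.waitall()              # finish warmup work first
    start = time.perf_counter()
    for _ in range(repeat):
        fn()
    mx.nd.waitall()              # wait for async execution, so this measures
                                 # end-to-end time, not just dispatch overhead
    return (time.perf_counter() - start) / repeat * 1e6

a = np.ones((8, 8))
b = np.ones((8, 8))
c = np.array([0, 1, 1, 2, 2, 2])

print("einsum: %.2f us" % bench(lambda: np.einsum('ij,jk->ik', a, b)))
print("dstack: %.2f us" % bench(lambda: np.dstack((a, b))))
print("unique: %.2f us" % bench(lambda: np.unique(c, return_counts=True)))
```

To compare the two paths, run once against a build with the cython modules compiled and once with them disabled (e.g. by toggling the MXNET_ENABLE_CYTHON environment variable, if your build supports it); the operators behave identically on both paths, only the per-call overhead changes.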

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

  • Feature1, tests, (and when applicable, API doc)
  • Feature2, tests, (and when applicable, API doc)

Comments

  • If this change is a backward incompatible change, why must this change be made.
  • Interesting edge cases to note here

@DwwWxx requested a review from szha as a code owner on March 18, 2020 14:34
@DwwWxx force-pushed the FFI_for_np_einsum_np_dstack branch from bc12f62 to 8bbb3e1 on March 18, 2020 16:08
@haojin2 added the Numpy label on Mar 18, 2020
@haojin2 self-assigned this on Mar 18, 2020
@DwwWxx force-pushed the FFI_for_np_einsum_np_dstack branch from 8bbb3e1 to a7792db on March 19, 2020 04:15
@hzfan (Contributor) left a comment

LGTM

* impl - FFI for np dstack

* impl - benchmark np_einsum np_dstack

* impl - FFI for np_unique

* impl - benchmark np_unique
@DwwWxx force-pushed the FFI_for_np_einsum_np_dstack branch from a7792db to 7f2766f on March 21, 2020 14:04
@haojin2 (Contributor) commented Mar 25, 2020

@mxnet-bot run ci [unix-cpu, unix-gpu]

@mxnet-bot

Jenkins CI successfully triggered: [unix-cpu, unix-gpu]

@haojin2 merged commit 56e7985 into apache:master on Mar 25, 2020
anirudh2290 added a commit to anirudh2290/mxnet that referenced this pull request Mar 27, 2020
* 'master' of https://github.com/apache/incubator-mxnet: (192 commits)
  * impl - FFI for np einsum (apache#17869)
  [Numpy] FFI for diag/diagonal/diag_indices_from (apache#17789)
  [Numpy] Kron operator (apache#17323)
  cmake: Set DMLC_LOG_FATAL_THROW only for building mxnet and not for tvm (apache#17878)
  Add simplified HybridBlock.forward without F (apache#17530)
  Use FP32 copy of weights for norm (multitensor LAMB optimizer) (apache#17700)
  Use multi-tensor sumSQ in clip_global_norm (apache#17652)
  [Numpy] Add op fmax, fmin, fmod (apache#17567)
  Adding sparse support to MXTensor for custom operators (apache#17569)
  Update 3rdparty/mkldnn to v1.2.2 (apache#17313)
  Dynamic subgraph compile support (apache#17623)
  Refactor cpp-package CMakeLists.txt & add missing inference/imagenet_inference (apache#17835)
  staticbuild: Fix potential user-assisted execution of arbitrary code  (apache#17860)
  * FFI for np.argmax and np.argmin (apache#17843)
  ffi for roll/rot90 (apache#17861)
  Skip test_multi_worker_dataloader_release_pool on OS X (apache#17797)
  add ffi for full_like, binary (apache#17811)
  HybridBlock.export() to return created filenames (apache#17758)
  Fix SoftReLU fused operator numerical stability (apache#17849)
  CI: Test clang10 cpu & gpu builds with -WError (apache#17830)
  ...
MoisesHer pushed a commit to MoisesHer/incubator-mxnet that referenced this pull request Apr 10, 2020
* impl - FFI for np dstack

* impl - benchmark np_einsum np_dstack

* impl - FFI for np_unique

* impl - benchmark np_unique

Co-authored-by: Ubuntu <[email protected]>
anirudh2290 pushed a commit to anirudh2290/mxnet that referenced this pull request May 29, 2020
sxjscience pushed a commit to sxjscience/mxnet that referenced this pull request Jul 1, 2020
shuo-ouyang pushed a commit to shuo-ouyang/incubator-mxnet that referenced this pull request Aug 9, 2020