This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[OpPerf] Add norm, cast ops, remaining optimizer ops #17542

Merged · 11 commits · Feb 13, 2020

Conversation

ChaiBapchya
Contributor

@ChaiBapchya ChaiBapchya commented Feb 7, 2020

Description

Adds the following ops to OpPerf. Once all the OpPerf PRs are merged, OpPerf will cover all MXNet ops in the NDArray namespace (excluding deprecated ops, _contrib ops, and a few ops with known issues):

  • norm op to reduction category
  • Following Optimizer ops
    • mp_nag_mom
    • nag_mom
    • lamb_update_phase_1 and lamb_update_phase_2
    • preloaded_multi_*
    • multi_*
  • Cast ops
    • cast
    • amp_cast
    • amp_multicast

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Code is well-documented:
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Comments

Ops in the multi_* and preloaded_multi_* categories were tricky to handle:

  • They don't take standard keyword arguments (data=data).
  • They expect variable positional args (*data).

Previously, only one value could be passed for the variable positional argument. Now you can pass as many positional values as you want. However, run_performance_test requires inputs as key:value pairs, so positional args are handled as args0, args1, args2, etc. As long as a key name starts with "args", its value is treated as a positional input to the operator.

Operators that don't take keyword args are a rare case; most ops in the MXNet NDArray namespace do take them.
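As a sketch of this convention (the helper name and structure are illustrative, not the actual opperf implementation), keys named args0, args1, ... can be peeled off into a positional list before the operator is invoked:

```python
def split_positional_inputs(inputs):
    """Split a benchmark input dict into positional and keyword args.

    Keys starting with "args" (args0, args1, args2, ...) are collected,
    in order, as positional inputs for ops like multi_* that only accept
    variable positional args (*data); all other keys remain keyword args.
    NOTE: illustrative helper, not the exact opperf code.
    """
    positional = [inputs[k] for k in sorted(inputs) if k.startswith("args")]
    kwargs = {k: v for k, v in inputs.items() if not k.startswith("args")}
    return positional, kwargs
```

With inputs like `{"args0": w0, "args1": w1, "lr": 0.1}`, the operator would then be invoked as `op(*positional, **kwargs)`.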

Review comments on benchmark/opperf/utils/benchmark_utils.py and benchmark/opperf/utils/ndarray_utils.py (resolved).
@ChaiBapchya
Contributor Author

ChaiBapchya commented Feb 10, 2020

Category specific operators gist : https://gist.github.com/ChaiBapchya/b4b49632d845abd9a451ab37809c575b

These gists are the result of running:

```python
from benchmark.opperf.nd_operations.nn_optimizer_operators import run_optimizer_operators_benchmarks
run_optimizer_operators_benchmarks()
from benchmark.opperf.nd_operations.reduction_operators import run_mx_reduction_operators_benchmarks
run_mx_reduction_operators_benchmarks()
from benchmark.opperf.nd_operations.unary_operators import run_mx_unary_operators_benchmarks
run_mx_unary_operators_benchmarks()
```

@ChaiBapchya
Contributor Author

ChaiBapchya commented Feb 11, 2020

Entire OpPerf Suite
CPU results : https://gist.github.com/ChaiBapchya/1c26e5a904d9ce9342d61b29f195c5cf [old]

Contributor

@connorgoggins connorgoggins left a comment


Great implementation! Just one small edit.

Review comments on benchmark/opperf/rules/default_params.py and benchmark/opperf/utils/benchmark_utils.py (resolved).
@connorgoggins
Contributor

Also, can we see updated perf results?

@ChaiBapchya
Contributor Author

Previously, the entire input NDArray was printed, which rendered poorly in Markdown (the values spilled across new lines).
With this fix, only the shape of the NDArray is stored, as a string, which is much more readable and renders correctly in Markdown.
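A minimal sketch of the idea (the function name and format string are illustrative, not the actual opperf code): any input with a .shape attribute is summarized by its shape rather than its full contents.

```python
def summarize_input(value):
    """Return a Markdown-friendly description of a benchmark input.

    Values with a .shape attribute (e.g. NDArrays) are reduced to a
    short shape string so result tables don't embed full array dumps.
    NOTE: illustrative sketch, not the exact opperf implementation.
    """
    if hasattr(value, "shape"):
        return "<NDArray {}>".format("x".join(str(d) for d in value.shape))
    return str(value)
```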

CPU : https://gist.github.com/ChaiBapchya/12d7fdd4ac15703e537aefbc8055c981
GPU : https://gist.github.com/ChaiBapchya/c62c7fff4deb115b64ec69b5c4aa6a7d

@ChaiBapchya
Contributor Author

@mxnet-label-bot add [pr-awaiting-review]

@lanking520 lanking520 added the pr-awaiting-review PR is waiting for code review label Feb 12, 2020
Contributor

@connorgoggins connorgoggins left a comment


Great job!

Contributor

@apeforest apeforest left a comment


Good work!

@apeforest apeforest merged commit 93c123d into apache:master Feb 13, 2020
@ChaiBapchya ChaiBapchya deleted the remainder_op_opperf branch February 14, 2020 23:25
zheyuye pushed a commit to zheyuye/incubator-mxnet that referenced this pull request Feb 19, 2020
* add mp_nag_mom, nag_mom, lamb_update_phase_1&2 op

* add norm to reduction op

* add preloaded_*, multi_* optimizer ops

* add cast ops to unary op opperf

* change API to handle args in profiler_util instead of benchmark_util

* clean up positional args

* fix amp_cast,cast and lamb_update_* issue

* fix markdown readability issue

* add 3 types of dtype vars as inputs for 3 diff category of ops
anirudh2290 pushed a commit to anirudh2290/mxnet that referenced this pull request May 29, 2020