This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[OpPerf] Implement remaining nn_conv ops in opperf #17500

Merged: 1 commit merged into apache:master on Feb 12, 2020

Conversation

connorgoggins (Contributor)

Description

This PR implements the remaining operators from the nn_conv category in opperf. To achieve this, I added a call to run_performance_test for the ROIPooling op within the run_pooling_operators_benchmarks function and added the rois argument to the NDArray list in default_params.py. Since ROIPooling was the only remaining op in the convolutional ops category, these were the only changes required. A rough sketch of the new call appears below.
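For illustration, the new benchmark call looks roughly like the following (a minimal sketch, not the exact diff from this PR; the shapes and parameter values are assumptions chosen for illustration):

import mxnet as mx
from mxnet import nd

from benchmark.opperf.utils.benchmark_utils import run_performance_test

# Benchmark ROIPooling: `data` is a (batch, channels, height, width) feature
# map; `rois` is a (num_rois, 5) array of [batch_index, x1, y1, x2, y2] boxes.
roi_pooling_res = run_performance_test(
    nd.ROIPooling, run_backward=True, dtype='float32', ctx=mx.cpu(),
    inputs=[{"data": (32, 3, 256, 256),
             "rois": (32, 5),
             "pooled_size": (2, 2),
             "spatial_scale": 0.5}],
    warmup=10, runs=25)
print(roi_pooling_res)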

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage
  • Code is well-documented
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

  • M benchmark/opperf/nd_operations/nn_conv_operators.py
  • M benchmark/opperf/rules/default_params.py

Comments

Tested on a p2.16xlarge instance running Ubuntu 16.04 and on macOS with:

  1. Checked out the branch and called run_pooling_operators_benchmarks, which runs all pooling ops (Pooling and ROIPooling) on relevant data (see the sketch after this list)
  2. Checked out the branch and ran opperf.py (a full run of all ops)
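For step 1, the call is a one-liner once the repo is on PYTHONPATH (a minimal sketch; the keyword arguments shown are assumptions based on the opperf conventions used elsewhere in this thread):

import mxnet as mx

from benchmark.opperf.nd_operations.nn_conv_operators import run_pooling_operators_benchmarks

# Runs every pooling benchmark in the nn_conv category (Pooling and
# ROIPooling) on the given context and prints the collected results.
pooling_results = run_pooling_operators_benchmarks(ctx=mx.cpu(), dtype='float32')
print(pooling_results)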

Performance Results

  • Group of operator test - all Pooling ops (CPU)
  • Full OpPerf test (CPU)
  • Group of operator test - all Pooling ops (GPU)
  • Full OpPerf test (GPU)

@apeforest @access2rohit @ChaiBapchya

@ChaiBapchya (Contributor) left a comment:


LGTM! Thanks!

@access2rohit (Contributor) left a comment:


LGTM!

@apeforest (Contributor) left a comment:


Thanks for your contribution.

@TaoLv (Member) commented on Feb 2, 2020:

@apeforest Sorry to pick a random PR to comment on: how do I run the benchmark test for a single operator? I tried the code snippet in https://github.com/apache/incubator-mxnet/blob/master/benchmark/opperf/README.md.

import mxnet as mx
from mxnet import nd

from benchmark.opperf.utils.benchmark_utils import run_performance_test

add_res = run_performance_test(nd.add, run_backward=True, dtype='float32', ctx=mx.cpu(),
                               inputs=[{"lhs": (1024, 1024),
                                        "rhs": (1024, 1024)}],
                               warmup=10, runs=25)

And I just get the log lines below, with no performance report:

$ python op_benchmark.py
INFO:root:Begin Benchmark - add
INFO:root:Complete Benchmark - add

Another question: the script can only be executed under the root folder of MXNet, right? So users always need to clone the MXNet source code to do benchmarking?

@ChaiBapchya (Contributor) commented:

@TaoLv Thanks for giving opperf a try. Jumping in quickly to answer your two questions:

The result is stored in add_res, so if you print add_res you will see the details.
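Concretely, one extra line at the end of your snippet surfaces the report:

# add_res holds the measurements collected by run_performance_test;
# printing it displays the forward/backward timings for the add benchmark.
print(add_res)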

Yes, as it currently stands, the opperf utility can only be used after cloning the repo and adding the repo to PYTHONPATH.

@TaoLv (Member) commented on Feb 3, 2020:

Thank you @ChaiBapchya

> So if you print add_res you will see the details.

It would be better to add print(add_res) explicitly to the code snippet in the README.md.

> Yes, as it currently stands, the opperf utility can only be used after cloning the repo and adding the repo to PYTHONPATH.

Either explicitly document this requirement in the README or remove it, so users can run benchmarks directly after mxnet is pip installed.

@ChaiBapchya (Contributor) commented:

> Either explicitly document this requirement in the README or remove it, so users can run benchmarks directly after mxnet is pip installed.

https://github.com/apache/incubator-mxnet/tree/master/benchmark/opperf#prerequisites

Doesn't this already say to "add path to your cloned MXNet repository to the PYTHONPATH"?

I'll add a note that explicitly asks users to clone the repo.
I'll also look into whether this requirement can be removed so users can run benchmarks directly with a pip-installed mxnet, but that would involve a broader (possibly time-consuming) discussion, so I'll do it later. In the meantime, see the workaround sketch below.
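A possible workaround sketch (the clone path below is a placeholder, not a real location) that makes the benchmark package importable from any working directory without exporting PYTHONPATH:

import sys

# Point Python at a local clone of incubator-mxnet so that the
# benchmark.opperf package resolves; substitute your own checkout path.
sys.path.insert(0, "/path/to/incubator-mxnet")

from benchmark.opperf.utils.benchmark_utils import run_performance_test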

> It would be better to add print(add_res) explicitly to the code snippet in the README.md.

I will add that print() statement explicitly to the README too.

Thanks for pointing these out.

CMakeLists.txt (outdated diff):
@@ -314,6 +314,10 @@ if(USE_MKLDNN)
set(INSTALL_MKLDNN ON)
endif()

if(USE_CPP_PACKAGE)
A reviewer (Contributor) commented on this diff:


This change seems unrelated to this PR?

@connorgoggins (Contributor, Author) replied:


Yes, you're right: I accidentally brought in external commits due to an incorrect rebase. Will fix ASAP.

@connorgoggins (Contributor, Author) replied:


Fixed.

@apeforest (Contributor) left a comment:


Please remove the unrelated change from this PR. Thanks!

@ChaiBapchya (Contributor) commented:

> Thank you @ChaiBapchya
>
> > So if you print add_res you will see the details.
>
> It would be better to add print(add_res) explicitly to the code snippet in the README.md.
>
> > Yes, as it currently stands, the opperf utility can only be used after cloning the repo and adding the repo to PYTHONPATH.
>
> Either explicitly document this requirement in the README or remove it, so users can run benchmarks directly after mxnet is pip installed.

@TaoLv As requested, I added the two changes to my existing PR, specifically in commit 4228baa. Does this help?

@connorgoggins force-pushed the implement_nn_conv_ops branch 2 times, most recently from 7fdcb89 to 5bb778b, on February 4, 2020 at 23:49.
@TaoLv (Member) commented on Feb 5, 2020:

@ChaiBapchya Thank you for the quick turnaround. It looks good to me.

@apeforest apeforest merged commit c9c5a13 into apache:master Feb 12, 2020
zheyuye pushed a commit to zheyuye/incubator-mxnet that referenced this pull request Feb 19, 2020
anirudh2290 pushed a commit to anirudh2290/mxnet that referenced this pull request May 29, 2020