Add a utility for operator benchmarks (apache#14977)
* Initial end to end working skeleton

* Add skeleton for all other operator benchmarks

* Add Gluon Conv2D benchmarks

* Add readme and user guide, example result

* Add license headers to all files

* Fix RAT license check issues

* Add ability to group list of operators with same inputs to benchmark. Update README

* Add comparison operator tests and more arithmetic operators

* Remove Gluon block and instead use only low level NDArray operators

* Add GEMM operators

* Add logical operations

* Add support to export results as markdown

* Add ability to query MXNet operator registry for operators and run benchmarks

* Delete duplicate arithmetic, logical, comparison operator benchmarks. Update ReadMe and main driver

* Add binary elementwise operator benchmarks

* Adding basic logging mechanisms

* Address review comments

* Resolve a few formatting issues

* Add unary operators. Remove stale todo files

* Fix sanity tests

* Remove mention of hypothesis

* Add random sampling operator benchmarks

* Add all activation operator benchmarks

* Add Pooling operator benchmarks

* Add Convolution operator benchmarks

* Add Reduction operator benchmarks

* Add a utility to get the list of operators not benchmarked

* Autogenerate list of operators to cover

* Add basic nn operators - FC, dropout, batchnorm

* Add CPU result file
sandeep-krishnamurthy authored and haohuw committed Jun 23, 2019
1 parent 9eb7e4e commit d0bcc78
Showing 25 changed files with 2,613 additions and 0 deletions.
16 changes: 16 additions & 0 deletions benchmark/__init__.py
@@ -0,0 +1,16 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
182 changes: 182 additions & 0 deletions benchmark/opperf/README.md
@@ -0,0 +1,182 @@
<!--- Licensed to the Apache Software Foundation (ASF) under one -->
<!--- or more contributor license agreements. See the NOTICE file -->
<!--- distributed with this work for additional information -->
<!--- regarding copyright ownership. The ASF licenses this file -->
<!--- to you under the Apache License, Version 2.0 (the -->
<!--- "License"); you may not use this file except in compliance -->
<!--- with the License. You may obtain a copy of the License at -->

<!--- http://www.apache.org/licenses/LICENSE-2.0 -->

<!--- Unless required by applicable law or agreed to in writing, -->
<!--- software distributed under the License is distributed on an -->
<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
<!--- KIND, either express or implied. See the License for the -->
<!--- specific language governing permissions and limitations -->
<!--- under the License. -->

# MXNet Operator Performance Benchmarks

A Python utility for benchmarking and profiling individual MXNet operator execution.

With this utility, for each MXNet operator you can get the following details:

**Timing**
1. Forward execution time
2. Backward execution time
3. Time spent on memory management

**Memory**
1. Total memory allocated

# Motivation

Benchmarks are usually done end-to-end for a given network architecture. For example: ResNet-50 benchmarks on ImageNet data. This is a good measure of the overall performance and health of a deep learning framework. However, it is important to note the following factors:
1. Users use many operators that are not part of a standard network like ResNet, for example tensor manipulation operators such as mean, max, topk, argmax, and sort.
2. A standard network architecture like ResNet-50 is made up of many operators, e.g., Convolution2D, Softmax, and Dense. Consider the following scenarios:
    1. We improve the performance of the Convolution2D operator, but due to a bug, Softmax performance goes down. End-to-end benchmarks may still look fine overall, so we can miss the performance degradation of a single operator, which can accumulate and become untraceable.
    2. You need to see which operator in a given network takes the most time, in order to plan optimization work. With end-to-end benchmarks, it is hard to get fine-grained numbers at the operator level.
3. We need to know how different operators perform on different hardware infrastructure (e.g., CPU with MKL-DNN, GPU with NVIDIA CUDA and cuDNN). With these details, we can plan optimization work at the operator level, which could significantly boost end-to-end performance.
4. You want to have nightly performance tests across all operators in a deep learning framework to catch regressions early.
5. We can integrate this framework with a CI/CD system to run per-operator performance tests for PRs. Example: when a PR modifies the kernel of TransposeConv2D, we can run benchmarks for the TransposeConv2D operator to verify performance.

Hence, this utility provides the functionality to allow users and developers of deep learning frameworks to easily run benchmarks for individual operators.

# How to use

## Prerequisites

This utility uses the MXNet profiler under the hood to fetch compute and memory metrics, so you need to build MXNet with the `USE_PROFILER=1` flag.
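
For example, a typical make-based build might look like the following (a sketch only; flags other than `USE_PROFILER` depend on the MXNet flavor you want to measure):

```
make -j $(nproc) USE_PROFILER=1 USE_BLAS=openblas
```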

Make sure to build the flavor of MXNet on which you would like to measure operator performance, for example with/without MKL, or with CUDA 9 or 10.1. Finally, add the path to your cloned MXNet repository to `PYTHONPATH`:

```
export PYTHONPATH=$PYTHONPATH:/path/to/incubator-mxnet/
```

## Usecase 1 - Run benchmarks for all the operators

The command below runs benchmarks for all MXNet (NDArray) operators with default inputs and saves the final results as JSON in the given file.

```
python incubator-mxnet/benchmark/opperf/opperf.py --output-format json --output-file mxnet_operator_benchmark_results.json
```

**Other Supported Options:**

1. **output-format** : `json` or `md` for markdown file output.

2. **ctx** : `cpu` or `gpu`. By default, `cpu` on a CPU machine and `gpu(0)` on a GPU machine. You can override this and set the global context for all operator benchmarks. Example: `--ctx gpu(2)`.

3. **dtype** : By default, `float32`. You can override this and set the global dtype for all operator benchmarks. Example: `--dtype float64`.
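
For example, combining these options to run all benchmarks on a GPU machine with `float64` precision and save the results as markdown:

```
python incubator-mxnet/benchmark/opperf/opperf.py --output-format md --output-file mxnet_operator_benchmark_results.md --ctx gpu --dtype float64
```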

## Usecase 2 - Run benchmarks for all the operators in a specific category

For example, to run benchmarks for all NDArray broadcast binary operators (e.g., `broadcast_add`, `broadcast_mod`, `broadcast_pow`), run the following Python script:

```
#!/usr/bin/python
from benchmark.opperf.tensor_operations.binary_broadcast_operators import run_mx_binary_broadcast_operators_benchmarks
# Run all Binary Broadcast operations benchmarks with default input values
print(run_mx_binary_broadcast_operators_benchmarks())
```

Output for the above benchmark run, on a CPU machine, would look something like this:

```
{'broadcast_mod': [{'avg_time_forward_broadcast_mod': 28.7063, 'avg_time_mem_alloc_cpu/0': 4194.3042,
'avg_time_backward_broadcast_mod': 12.0954, 'inputs': {'lhs': (1024, 1024), 'rhs': (1024, 1024)}},
{'avg_time_forward_broadcast_mod': 2.7332, 'avg_time_mem_alloc_cpu/0': 400.0,
'avg_time_backward_broadcast_mod': 1.1288, 'inputs': {'lhs': (10000, 10), 'rhs': (10000, 10)}},
{'avg_time_forward_broadcast_mod': 30.5322, 'avg_time_mem_alloc_cpu/0': 4000.0,
'avg_time_backward_broadcast_mod': 225.0255, 'inputs': {'lhs': (10000, 1), 'rhs': (10000, 100)}}],
'broadcast_power': [{'avg_time_backward_broadcast_power': 49.5871, 'avg_time_forward_broadcast_power': 18.0954,
'avg_time_mem_alloc_cpu/0': 4194.3042, 'inputs': {'lhs': (1024, 1024), 'rhs': (1024, 1024)}},
{'avg_time_backward_broadcast_power': 4.6623, 'avg_time_forward_broadcast_power': 1.8283,
'avg_time_mem_alloc_cpu/0': 400.0, 'inputs': {'lhs': (10000, 10), 'rhs': (10000, 10)}},
{'avg_time_backward_broadcast_power': 279.922, 'avg_time_forward_broadcast_power': 24.4621,
'avg_time_mem_alloc_cpu/0': 4000.0, 'inputs': {'lhs': (10000, 1), 'rhs': (10000, 100)}}],
.....
.....
```
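
The benchmark functions return these results as a Python dictionary, so you can persist them yourself. Below is a minimal sketch using only the standard library (the output file name is arbitrary, and shape tuples are serialized as JSON lists):

```
#!/usr/bin/python
import json

from benchmark.opperf.tensor_operations.binary_broadcast_operators import run_mx_binary_broadcast_operators_benchmarks

# Run all binary broadcast operator benchmarks and dump the result dict to disk.
results = run_mx_binary_broadcast_operators_benchmarks()
with open("broadcast_binary_results.json", "w") as f:
    json.dump(results, f, indent=2)
```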

## Usecase 3 - Run benchmarks for specific operator
For example, to run benchmarks for the `nd.add` operator in MXNet, run the following Python script:

```
#!/usr/bin/python
import mxnet as mx
from mxnet import nd
from benchmark.opperf.utils.benchmark_utils import run_performance_test
add_res = run_performance_test(nd.add, run_backward=True, dtype='float32', ctx=mx.cpu(),
                               inputs=[{"lhs": (1024, 1024),
                                        "rhs": (1024, 1024)}],
                               warmup=10, runs=25)
```

Output for the above benchmark run, on a CPU machine, would look something like this:

```
{'add': [{'avg_time_mem_alloc_cpu/0': 102760.4453,
'avg_time_forward_broadcast_add': 4.0372,
'avg_time_backward_broadcast_add': 5.3841,
'inputs': {'lhs': (1024, 1024), 'rhs': (1024, 1024)}}]}
```
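
The returned `add_res` mirrors the structure shown above, so it can be post-processed directly. A small sketch (the key prefixes are inferred from the sample output, not a stable API):

```
#!/usr/bin/python
# Print forward/backward timings per operator and input combination.
for op_name, runs in add_res.items():
    for run in runs:
        fwd = {k: v for k, v in run.items() if k.startswith('avg_time_forward')}
        bwd = {k: v for k, v in run.items() if k.startswith('avg_time_backward')}
        print(op_name, run['inputs'], fwd, bwd)
```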

## Usecase 3.1 - Run benchmarks for a group of operators with the same input
For example, to run benchmarks for the `nd.add` and `nd.sub` operators in MXNet with the same set of inputs, run the following Python script:

```
#!/usr/bin/python
import mxnet as mx
from mxnet import nd
from benchmark.opperf.utils.benchmark_utils import run_performance_test
add_res = run_performance_test([nd.add, nd.sub], run_backward=True, dtype='float32', ctx=mx.cpu(),
                               inputs=[{"lhs": (1024, 1024),
                                        "rhs": (1024, 1024)}],
                               warmup=10, runs=25)
```

Output for the above benchmark run, on a CPU machine, would look something like this:

```
{'add': [{'avg_time_mem_alloc_cpu/0': 102760.4453,
'avg_time_forward_broadcast_add': 4.0372,
'avg_time_backward_broadcast_add': 5.3841,
'inputs': {'lhs': (1024, 1024), 'rhs': (1024, 1024)}}],
'subtract': [{'avg_time_forward_broadcast_sub': 5.5137,
'avg_time_mem_alloc_cpu/0': 207618.0469,
'avg_time_backward_broadcast_sub': 7.2976,
'inputs': {'lhs': (1024, 1024), 'rhs': (1024, 1024)}}
]}
```
# How does it work under the hood?

Under the hood, this utility executes each NDArray operator using randomly generated data, then uses the MXNet profiler to get a summary of the operator execution:
1. Memory
2. Computation time (forward, backward)

See the design proposal document for more details - https://cwiki.apache.org/confluence/display/MXNET/MXNet+Operator+Benchmarks
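
For illustration, here is a minimal hand-rolled version of that pattern, assuming the standard `mx.profiler` API (the utility adds input generation, warmup runs, and parsing of the profiler output on top of this):

```
#!/usr/bin/python
import mxnet as mx

# Profile a single operator run on randomly generated data.
mx.profiler.set_config(profile_all=True, aggregate_stats=True)
data = mx.nd.random.uniform(shape=(1024, 1024))

mx.profiler.set_state('run')
res = mx.nd.add(data, data)
res.wait_to_read()  # force execution before stopping the profiler
mx.profiler.set_state('stop')

print(mx.profiler.dumps())  # aggregated per-operator time/memory summary
```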

**NOTE:**

This utility queries the MXNet operator registry to fetch all operators registered with MXNet, generates inputs, and runs benchmarks.
However, fully automated tests are enabled only for simpler operators such as broadcast and element-wise operators. For readability, and to give users more control, complex operators such as Convolution (2D, 3D), Pooling, and Recurrent are not fully automated but are expressed as default rules.
See `utils/op_registry_utils.py` for more details.
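
As a rough illustration of that registry query (an approximation only; the utility uses the actual MXNet operator registry rather than this `dir()`-based listing):

```
#!/usr/bin/python
import mxnet as mx

# Enumerate callable names exposed on mx.nd as candidate operators.
candidate_ops = [name for name in dir(mx.nd)
                 if not name.startswith('_') and callable(getattr(mx.nd, name))]
print(len(candidate_ops), candidate_ops[:10])
```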

# TODO

All contributions are welcome. Below is the list of desired features:

1. Cover all MXNet operators.
2. Enhance MXNet profiler with additional APIs to programmatically fetch and process profiler data.
3. Integration with CI/CD system to run operator benchmarks for PR builds, nightly builds.
4. Dashboards and other ways of presenting results, for analyzing and planning tasks such as operator performance improvements.
5. Randomized Tensor Shape generation for profiling to identify bottlenecks in the operators.
16 changes: 16 additions & 0 deletions benchmark/opperf/__init__.py
@@ -0,0 +1,16 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
16 changes: 16 additions & 0 deletions benchmark/opperf/custom_operations/__init__.py
@@ -0,0 +1,16 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
67 changes: 67 additions & 0 deletions benchmark/opperf/custom_operations/custom_operations.py
@@ -0,0 +1,67 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

import mxnet as mx

"""
MXNet's Custom Operator Benchmark Tests.
It does a simple element wise addition to make sure computation
is not too much and we can observe custom operator logistics overhead.
"""


# 1. Define Custom Operator - Element-wise Add One
class CustomAddOne(mx.operator.CustomOp):
    def forward(self, is_train, req, in_data, out_data, aux):
        self.assign(out_data[0], req[0], in_data[0] + 1)

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        self.assign(in_grad[0], req[0], out_grad[0])


@mx.operator.register("CustomAddOne")
class CustomAddOneProp(mx.operator.CustomOpProp):
    def __init__(self):
        super(CustomAddOneProp, self).__init__(need_top_grad=True)

    def list_arguments(self):
        return ['in']

    def list_outputs(self):
        return ['output']

    def infer_shape(self, in_shape):
        # inputs, outputs, aux
        return [in_shape[0]], [in_shape[0]], []

    def create_operator(self, ctx, shapes, dtypes):
        return CustomAddOne()


"""Helps to benchmark MXNet's Custom Op for Element wise addition on a (1000, 1) tensor.
Performs both forward and backward operation.
This test mainly uncovers core custom op overhead in MXNet.
Benchmark will be done on the following operation:
native_add -> native_add -> native_add -> CUSTOM_ADD -> native_add -> native_add -> native_add
By default run on 'float32' precision.
"""

# TODO
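
# A minimal, hypothetical usage sketch of the op described above (one possible
# shape of the TODO): run the registered custom op forward and backward once
# via mx.nd.Custom. The input shape is illustrative, not the final benchmark.
def _sketch_run_custom_add_one():
    data = mx.nd.random.uniform(shape=(1000, 1))
    data.attach_grad()
    with mx.autograd.record():
        out = mx.nd.Custom(data, op_type="CustomAddOne")
    out.backward()
    return out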