diff --git a/benchmark/opperf/README.md b/benchmark/opperf/README.md
index 7e708d3cbe1d..6e628dfe40a8 100644
--- a/benchmark/opperf/README.md
+++ b/benchmark/opperf/README.md
@@ -47,7 +47,10 @@ Hence, in this utility, we will build the functionality to allow users and devel
 
 ## Prerequisites
 
-Make sure to build the flavor of MXNet, for example - with/without MKL, with CUDA 9 or 10.1 etc., on which you would like to measure operator performance. Finally, you need to add path to your cloned MXNet repository to the PYTHONPATH.
+Provided you have MXNet installed (any version >= 1.5.1), all you need to use the opperf utility is to add the path to your cloned MXNet repository to the PYTHONPATH.
+
+Note:
+To install MXNet, refer to the [Installing MXNet page](https://mxnet.incubator.apache.org/versions/master/install/index.html)
 
 ```
 export PYTHONPATH=$PYTHONPATH:/path/to/incubator-mxnet/
@@ -75,7 +78,7 @@ For example, you want to run benchmarks for all NDArray Broadcast Binary Operato
 
 ```
 #!/usr/bin/python
-from benchmark.opperf.nd_operations.binary_broadcast_operators import run_mx_binary_broadcast_operators_benchmarks
+from benchmark.opperf.nd_operations.binary_operators import run_mx_binary_broadcast_operators_benchmarks
 
 # Run all Binary Broadcast operations benchmarks with default input values
 print(run_mx_binary_broadcast_operators_benchmarks())
@@ -136,7 +139,7 @@ from mxnet import nd
 
 from benchmark.opperf.utils.benchmark_utils import run_performance_test
 
-add_res = run_performance_test([nd.add, nd.sub], run_backward=True, dtype='float32', ctx=mx.cpu(),
+add_res = run_performance_test([nd.add, nd.subtract], run_backward=True, dtype='float32', ctx=mx.cpu(),
                                inputs=[{"lhs": (1024, 1024), "rhs": (1024, 1024)}],
                                warmup=10, runs=25)
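The `warmup`/`runs` pattern that `run_performance_test` exposes in the last hunk can be sketched in plain Python, without MXNet. This is an illustrative stand-in, not opperf's actual implementation; the function name `time_operator` and its return format are assumptions made for this sketch:

```python
import time

def time_operator(fn, warmup=10, runs=25):
    """Time fn(), discarding warmup iterations and averaging over runs.

    Hypothetical helper mirroring the warmup/runs parameters of
    run_performance_test; not part of the MXNet opperf API.
    """
    # Warmup iterations are discarded so one-time costs (cache fills,
    # lazy initialization) do not skew the measurement.
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    total = time.perf_counter() - start
    return {"avg_time_ms": (total / runs) * 1000.0}

# Example: time a simple list computation as a stand-in for an NDArray op.
res = time_operator(lambda: [i + i for i in range(1000)], warmup=10, runs=25)
print(res)
```

Averaging only the post-warmup runs is the same design choice the README's `warmup=10, runs=25` arguments express: the first iterations of an operator are rarely representative of steady-state performance.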