From 9f49d22fbdf90d1f31c8c37614d01d16c6946f21 Mon Sep 17 00:00:00 2001
From: waytrue17 <52505574+waytrue17@users.noreply.github.com>
Date: Mon, 9 Aug 2021 17:48:51 -0700
Subject: [PATCH 1/2] Port #20496

---
 3rdparty/mshadow/README.md                             |  2 +-
 NEWS.md                                                |  8 ++++----
 README.md                                              |  4 ++--
 docker/Dockerfiles/Dockerfile.in.lib.cpu               |  2 +-
 docker/Dockerfiles/Dockerfile.in.lib.gpu               |  2 +-
 docs/static_site/src/pages/api/faq/cloud.md            |  2 +-
 docs/static_site/src/pages/api/faq/new_op.md           |  6 +++---
 docs/static_site/src/pages/api/faq/perf.md             | 10 +++++-----
 docs/static_site/src/pages/api/faq/recordio.md         |  4 ++--
 docs/static_site/src/pages/api/faq/s3_integration.md   |  2 +-
 .../src/pages/api/r/docs/tutorials/custom_iterator.md  |  2 +-
 example/README.md                                      |  4 ++--
 src/engine/naive_engine.cc                             |  2 +-
 src/engine/threaded_engine.h                           |  2 +-
 src/operator/svm_output.cc                             |  2 +-
 15 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/3rdparty/mshadow/README.md b/3rdparty/mshadow/README.md
index 6ff6cad06d0d..a645ef661bd7 100644
--- a/3rdparty/mshadow/README.md
+++ b/3rdparty/mshadow/README.md
@@ -50,5 +50,5 @@ Version
 
 Projects Using MShadow
 ----------------------
-* [MXNet: Efficient and Flexible Distributed Deep Learning Framework](https://github.com/dmlc/mxnet)
+* [MXNet: Efficient and Flexible Distributed Deep Learning Framework](https://github.com/apache/mxnet)
 * [CXXNet: A lightweight C++ based deep learnig framework](https://github.com/dmlc/cxxnet)
diff --git a/NEWS.md b/NEWS.md
index a847db0fb351..c0448aa7e703 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -3572,7 +3572,7 @@ For more information and examples, see [full release notes](https://cwiki.apache
   - ImageRecordIter now stores data in pinned memory to improve GPU memcopy speed.
 ### Bugfixes
   - Cython interface is fixed. `make cython` and `python setup.py install --with-cython` should install the cython interface and reduce overhead in applications that use imperative/bucketing.
-  - Fixed various bugs in Faster-RCNN example: https://github.com/dmlc/mxnet/pull/6486
+  - Fixed various bugs in Faster-RCNN example: https://github.com/apache/mxnet/pull/6486
   - Fixed various bugs in SSD example.
   - Fixed `out` argument not working for `zeros`, `ones`, `full`, etc.
   - `expand_dims` now supports backward shape inference.
@@ -3648,9 +3648,9 @@ This is the last release before the NNVM refactor.
 - Support CuDNN v5 by @antinucleon
 - More applications
   - Speech recognition by @yzhang87
-  - [Neural art](https://github.com/dmlc/mxnet/tree/master/example/neural-style) by @antinucleon
-  - [Detection](https://github.com/dmlc/mxnet/tree/master/example/rcnn), RCNN bt @precedenceguo
-  - [Segmentation](https://github.com/dmlc/mxnet/tree/master/example/fcn-xs), FCN by @tornadomeet
+  - [Neural art](https://github.com/apache/mxnet/tree/v0.7.0/example/neural-style) by @antinucleon
+  - [Detection](https://github.com/apache/mxnet/tree/v0.7.0/example/rcnn), RCNN bt @precedenceguo
+  - [Segmentation](https://github.com/apache/mxnet/tree/v0.7.0/example/fcn-xs), FCN by @tornadomeet
   - [Face identification](https://github.com/tornadomeet/mxnet-face) by @tornadomeet
 - More on the example
 
diff --git a/README.md b/README.md
index a18d6e2795a7..18b70791ac39 100644
--- a/README.md
+++ b/README.md
@@ -81,10 +81,10 @@ What's New
 * [0.12.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/0.12.0) - MXNet 0.12.0 Release.
 * [0.11.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/0.11.0) - MXNet 0.11.0 Release.
 * [Apache Incubator](http://incubator.apache.org/projects/mxnet.html) - We are now an Apache Incubator project.
-* [0.10.0 Release](https://github.com/dmlc/mxnet/releases/tag/v0.10.0) - MXNet 0.10.0 Release.
+* [0.10.0 Release](https://github.com/apache/mxnet/releases/tag/v0.10.0) - MXNet 0.10.0 Release.
 * [0.9.3 Release](./docs/architecture/release_note_0_9.md) - First 0.9 official release.
 * [0.9.1 Release (NNVM refactor)](./docs/architecture/release_note_0_9.md) - NNVM branch is merged into master now. An official release will be made soon.
-* [0.8.0 Release](https://github.com/dmlc/mxnet/releases/tag/v0.8.0)
+* [0.8.0 Release](https://github.com/apache/mxnet/releases/tag/v0.8.0)
 
 ### Ecosystem News
 
diff --git a/docker/Dockerfiles/Dockerfile.in.lib.cpu b/docker/Dockerfiles/Dockerfile.in.lib.cpu
index c6de40c5cea3..38f47db3aa6d 100644
--- a/docker/Dockerfiles/Dockerfile.in.lib.cpu
+++ b/docker/Dockerfiles/Dockerfile.in.lib.cpu
@@ -24,6 +24,6 @@ FROM ubuntu:14.04
 
 COPY install/cpp.sh install/
 RUN install/cpp.sh
-RUN git clone --recursive https://github.com/dmlc/mxnet && cd mxnet && \
+RUN git clone --recursive https://github.com/apache/mxnet && cd mxnet && \
     make -j$(nproc) && \
     rm -r build
diff --git a/docker/Dockerfiles/Dockerfile.in.lib.gpu b/docker/Dockerfiles/Dockerfile.in.lib.gpu
index 03b920a685ff..a6eb80f9d428 100644
--- a/docker/Dockerfiles/Dockerfile.in.lib.gpu
+++ b/docker/Dockerfiles/Dockerfile.in.lib.gpu
@@ -25,5 +25,5 @@ COPY install/cpp.sh install/
 RUN install/cpp.sh
 
 ENV BUILD_OPTS "USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1"
-RUN git clone --recursive https://github.com/dmlc/mxnet && cd mxnet && \
+RUN git clone --recursive https://github.com/apache/mxnet && cd mxnet && \
     make -j$(nproc) $BUILD_OPTS
diff --git a/docs/static_site/src/pages/api/faq/cloud.md b/docs/static_site/src/pages/api/faq/cloud.md
index 894b83ebdc48..1ccf7c15a5b2 100644
--- a/docs/static_site/src/pages/api/faq/cloud.md
+++ b/docs/static_site/src/pages/api/faq/cloud.md
@@ -112,7 +112,7 @@ cat hosts | xargs -I{} ssh -o StrictHostKeyChecking=no {} 'uname -a; pgrep pytho
 ```
 
 ***Note:*** The preceding example is very simple to train and therefore isn't a good
-benchmark for distributed training. Consider using other [examples](https://github.com/dmlc/mxnet/tree/master/example/image-classification).
+benchmark for distributed training. Consider using other [examples](https://github.com/apache/mxnet/tree/v1.x/example/image-classification).
 
 ### More Options
 #### Use Multiple Data Shards
diff --git a/docs/static_site/src/pages/api/faq/new_op.md b/docs/static_site/src/pages/api/faq/new_op.md
index 787b4038dbf4..fe861831f24e 100644
--- a/docs/static_site/src/pages/api/faq/new_op.md
+++ b/docs/static_site/src/pages/api/faq/new_op.md
@@ -144,12 +144,12 @@ To use the custom operator, create a mx.sym.Custom symbol with op_type as the re
 mlp = mx.symbol.Custom(data=fc3, name='softmax', op_type='softmax')
 ```
 
-Please see the full code for this example [here](https://github.com/dmlc/mxnet/blob/master/example/numpy-ops/custom_softmax.py).
+Please see the full code for this example [here](https://github.com/apache/mxnet/blob/v1.x/example/numpy-ops/custom_softmax.py).
 
 ## C++
 With MXNet v0.9 (the NNVM refactor) or later, creating new operators has become easier. Operators are now registered with NNVM.
 
-The following code is an example on how to register an operator (checkout [src/operator/tensor](https://github.com/dmlc/mxnet/tree/master/src/operator/tensor) for more examples):
+The following code is an example on how to register an operator (checkout [src/operator/tensor](https://github.com/apache/mxnet/tree/v1.x/src/operator/tensor) for more examples):
 
 ```c++
 NNVM_REGISTER_OP(abs)
@@ -189,7 +189,7 @@ In this section, we will go through the basic attributes MXNet expect for all op
 You can find the definition for them in the following two files:
 
 - [nnvm/op_attr_types.h](https://github.com/dmlc/nnvm/blob/master/include/nnvm/op_attr_types.h)
-- [mxnet/op_attr_types.h](https://github.com/dmlc/mxnet/blob/master/include/mxnet/op_attr_types.h)
+- [mxnet/op_attr_types.h](https://github.com/apache/mxnet/blob/v1.x/include/mxnet/op_attr_types.h)
 
 #### Descriptions (Optional)
 
diff --git a/docs/static_site/src/pages/api/faq/perf.md b/docs/static_site/src/pages/api/faq/perf.md
index d085fc00a358..0e123ae3fc57 100644
--- a/docs/static_site/src/pages/api/faq/perf.md
+++ b/docs/static_site/src/pages/api/faq/perf.md
@@ -66,7 +66,7 @@ So whether you specify `cpu(0)` or `cpu()`, _MXNet_ will use all CPU cores on th
 
 ### Scoring results
 The following table shows performance of MXNet-1.2.0.rc1, namely number of images that can be predicted per second.
-We used [example/image-classification/benchmark_score.py](https://github.com/dmlc/mxnet/blob/master/example/image-classification/benchmark_score.py)
+We used [example/image-classification/benchmark_score.py](https://github.com/apache/mxnet/blob/v1.x/example/image-classification/benchmark_score.py)
 to measure the performance on different AWS EC2 machines.
 
 AWS EC2 C5.18xlarge:
@@ -150,7 +150,7 @@ and V100 (EC2 p3.2xlarge).
 
 ### Scoring results
 Based on
-[example/image-classification/benchmark_score.py](https://github.com/dmlc/mxnet/blob/master/example/image-classification/benchmark_score.py)
+[example/image-classification/benchmark_score.py](https://github.com/apache/mxnet/blob/v1.x/example/image-classification/benchmark_score.py)
 and MXNet-1.2.0.rc1, with cuDNN 7.0.5
 
 - K80 (single GPU)
@@ -213,7 +213,7 @@ Below is the performance result on V100 using float 16.
 
 ### Training results
 Based on
-[example/image-classification/train_imagenet.py](https://github.com/dmlc/mxnet/blob/master/example/image-classification/train_imagenet.py)
+[example/image-classification/train_imagenet.py](https://github.com/apache/mxnet/blob/v1.x/example/image-classification/train_imagenet.py)
 and MXNet-1.2.0.rc1, with CUDNN 7.0.5. The benchmark script is available at
 [here](https://github.com/mli/mxnet-benchmark/blob/master/run_vary_batch.sh),
 where the batch size for Alexnet is increased by 16x.
@@ -260,7 +260,7 @@ It's critical to use the proper type of `kvstore` to get the best performance.
 
 Refer to [Distributed Training](https://mxnet.apache.org/api/faq/distributed_training.html)
 for more details.
-Besides, we can use [tools/bandwidth](https://github.com/dmlc/mxnet/tree/master/tools/bandwidth)
+Besides, we can use [tools/bandwidth](https://github.com/apache/mxnet/tree/v1.x/tools/bandwidth)
 to find the communication cost per batch. Ideally, the communication cost
 should be less than the time to compute a batch.
 To reduce the communication cost, we can consider:
@@ -293,7 +293,7 @@ by summarizing at the operator level, instead of a function, kernel, or instruct
 The profiler can be turned on with an [environment variable]({{'/api/faq/env_var#control-the-profiler' | relative_url}})
 for an entire program run, or programmatically for just part of a run.
 Note that by default the profiler hides the details of each individual operator, and you can reveal the details by setting environment variables `MXNET_EXEC_BULK_EXEC_INFERENCE`, `MXNET_EXEC_BULK_EXEC_MAX_NODE_TRAIN` and `MXNET_EXEC_BULK_EXEC_TRAIN` to 0.
-See [example/profiler](https://github.com/dmlc/mxnet/tree/master/example/profiler)
+See [example/profiler](https://github.com/apache/mxnet/tree/v1.x/example/profiler)
 for complete examples of how to use the profiler in code, or [this tutorial](https://mxnet.apache.org/api/python/docs/tutorials/performance/backend/profiler.html)
 on how to profile MXNet performance. Briefly, the Python code looks like:
 
diff --git a/docs/static_site/src/pages/api/faq/recordio.md b/docs/static_site/src/pages/api/faq/recordio.md
index 2e8fcdd647f3..180b0cfcc58b 100644
--- a/docs/static_site/src/pages/api/faq/recordio.md
+++ b/docs/static_site/src/pages/api/faq/recordio.md
@@ -34,8 +34,8 @@ RecordIO implements a file format for a sequence of records. We recommend storin
 
 We provide two tools for creating a RecordIO dataset.
 
-* [im2rec.cc](https://github.com/dmlc/mxnet/blob/master/tools/im2rec.cc) - implements the tool using the C++ API.
-* [im2rec.py](https://github.com/apache/incubator-mxnet/blob/master/tools/im2rec.py) - implements the tool using the Python API.
+* [im2rec.cc](https://github.com/apache/mxnet/blob/v1.x/tools/im2rec.cc) - implements the tool using the C++ API.
+* [im2rec.py](https://github.com/apache/mxnet/blob/v1.x/tools/im2rec.py) - implements the tool using the Python API.
 
 Both provide the same output: a RecordIO dataset.
 
diff --git a/docs/static_site/src/pages/api/faq/s3_integration.md b/docs/static_site/src/pages/api/faq/s3_integration.md
index 6c8ca9768ba0..207ed5b92cfd 100644
--- a/docs/static_site/src/pages/api/faq/s3_integration.md
+++ b/docs/static_site/src/pages/api/faq/s3_integration.md
@@ -67,7 +67,7 @@ aws s3 sync ./training-data s3://bucket-name/training-data
 
 Once the data is in S3, it is very straightforward to use it from MXNet. Any data iterator that can read/write data from a local drive can also read/write data from S3.
 
-Let's modify an existing example code in MXNet repository to read data from S3 instead of local disk. [`mxnet/tests/python/train/test_conv.py`](https://github.com/dmlc/mxnet/blob/master/tests/python/train/test_conv.py) trains a convolutional network using MNIST data from local disk. We'll do the following change to read the data from S3 instead.
+Let's modify an existing example code in MXNet repository to read data from S3 instead of local disk. [`mxnet/tests/python/train/test_conv.py`](https://github.com/apache/mxnet/blob/v1.x/tests/python/train/test_conv.py) trains a convolutional network using MNIST data from local disk. We'll do the following change to read the data from S3 instead.
 
 ```
 ~/mxnet$ sed -i -- 's/data\//s3:\/\/bucket-name\/training-data\//g' ./tests/python/train/test_conv.py
diff --git a/docs/static_site/src/pages/api/r/docs/tutorials/custom_iterator.md b/docs/static_site/src/pages/api/r/docs/tutorials/custom_iterator.md
index e0213387124e..58955a835669 100644
--- a/docs/static_site/src/pages/api/r/docs/tutorials/custom_iterator.md
+++ b/docs/static_site/src/pages/api/r/docs/tutorials/custom_iterator.md
@@ -43,7 +43,7 @@ You'll get two files, `mnist_train.csv` that contains 60.000 examples of hand wr
 
 Custom CSV Iterator
 ----------
-Next we are going to create a custom CSV Iterator based on the [C++ CSVIterator class](https://github.com/dmlc/mxnet/blob/master/src/io/iter_csv.cc).
+Next we are going to create a custom CSV Iterator based on the [C++ CSVIterator class](https://github.com/apache/mxnet/blob/v1.x/src/io/iter_csv.cc).
 For that we are going to use the R function `mx.io.CSVIter` as a base class.
 This class has as parameters `data.csv, data.shape, batch.size` and two main functions,
 `iter.next()` that calls the iterator in the next batch of data and `value()` that returns the train data and the label.
diff --git a/example/README.md b/example/README.md
index f145600b62af..6a2e471db6ac 100644
--- a/example/README.md
+++ b/example/README.md
@@ -48,7 +48,7 @@ Example applications or scripts should be submitted in this `example` folder.
 
 ### Tutorials
 
-If you have a tutorial idea for the website, download the [Jupyter notebook tutorial template](https://github.com/dmlc/mxnet/tree/master/example/MXNetTutorialTemplate.ipynb).
+If you have a tutorial idea for the website, download the [Jupyter notebook tutorial template](https://github.com/apache/mxnet/tree/v1.x/example/MXNetTutorialTemplate.ipynb).
 
 #### Tutorial location
 
@@ -122,7 +122,7 @@ If your tutorial depends on specific packages, simply add them to this provision
 * "Learn to sort by LSTM" by [xlvector](https://github.com/xlvector) [github link](https://github.com/xlvector/learning-dl/tree/master/mxnet/lstm_sort) [Blog in Chinese](http://blog.xlvector.net/2016-05/mxnet-lstm-example/)
 * [Neural Art using extremely lightweight (<500K) neural network](https://github.com/pavelgonchar/neural-art-mini) Lightweight version of mxnet neural art implementation by [Pavel Gonchar](https://github.com/pavelgonchar)
 * [Neural Art with generative networks](https://github.com/zhaw/neural_style) by [zhaw](https://github.com/zhaw)
-* [Faster R-CNN in MXNet with distributed implementation and data parallelization](https://github.com/dmlc/mxnet/tree/master/example/rcnn)
+* [Faster R-CNN in MXNet with distributed implementation and data parallelization](https://github.com/apache/mxnet/tree/v1.x/example/rcnn)
 * [Asynchronous Methods for Deep Reinforcement Learning in MXNet](https://github.com/zmonoid/Asyn-RL-MXNet/blob/master/mx_asyn.py) by [zmonoid](https://github.com/zmonoid)
 * [Deep Q-learning in MXNet](https://github.com/zmonoid/DQN-MXNet) by [zmonoid](https://github.com/zmonoid)
 * [Face Detection with End-to-End Integration of a ConvNet and a 3D Model (ECCV16)](https://github.com/tfwu/FaceDetection-ConvNet-3D) by [tfwu](https://github.com/tfwu), source code for paper Yunzhu Li, Benyuan Sun, Tianfu Wu and Yizhou Wang, "Face Detection with End-to-End Integration of a ConvNet and a 3D Model", ECCV 2016
diff --git a/src/engine/naive_engine.cc b/src/engine/naive_engine.cc
index e1ab240bbde4..1eeb804014d5 100644
--- a/src/engine/naive_engine.cc
+++ b/src/engine/naive_engine.cc
@@ -250,7 +250,7 @@ class NaiveEngine final : public Engine {
 #endif
   /*!
    * \brief Holding a shared_ptr to the object pool to prevent it from being destructed too early
-   * See also #309 (https://github.com/dmlc/mxnet/issues/309) and similar fix in threaded_engine.h.
+   * See also #309 (https://github.com/apache/mxnet/issues/309) and similar fix in threaded_engine.h.
    * Without this, segfaults seen on CentOS7 in
    * test_operator_gpu.py:test_convolution_multiple_streams
    */
diff --git a/src/engine/threaded_engine.h b/src/engine/threaded_engine.h
index 45a02a57a931..0f9635d89d3b 100644
--- a/src/engine/threaded_engine.h
+++ b/src/engine/threaded_engine.h
@@ -586,7 +586,7 @@ class ThreadedEngine : public Engine {
 
   /*!
   * \brief Holding a shared_ptr to the object pool to prevent it from being destructed too early
-  * See also #309 (https://github.com/dmlc/mxnet/issues/309)
+  * See also #309 (https://github.com/apache/mxnet/issues/309)
   */
  std::shared_ptr> objpool_opr_ref_;
 std::shared_ptr> objpool_blk_ref_;
diff --git a/src/operator/svm_output.cc b/src/operator/svm_output.cc
index fe8fa1a9cb77..3dabca7e0ced 100644
--- a/src/operator/svm_output.cc
+++ b/src/operator/svm_output.cc
@@ -88,7 +88,7 @@ MXNET_REGISTER_OP_PROPERTY(SVMOutput, SVMOutputProp)
 .describe(R"code(Computes support vector machine based transformation of the input.
 
 This tutorial demonstrates using SVM as output layer for classification instead of softmax:
-https://github.com/dmlc/mxnet/tree/v1.x/example/svm_mnist.
+https://github.com/apache/mxnet/tree/v1.x/example/svm_mnist.
 
 )code")
 .add_argument("data", "NDArray-or-Symbol", "Input data for SVM transformation.")

From 8845817274b6d1fc24196a346d2841674c4e7c5e Mon Sep 17 00:00:00 2001
From: barry-jin
Date: Fri, 24 Sep 2021 11:03:23 -0700
Subject: [PATCH 2/2] v1.x -> master

---
 docs/static_site/src/pages/api/faq/cloud.md            |  2 +-
 docs/static_site/src/pages/api/faq/new_op.md           |  6 +++---
 docs/static_site/src/pages/api/faq/perf.md             | 10 +++++-----
 docs/static_site/src/pages/api/faq/recordio.md         |  4 ++--
 docs/static_site/src/pages/api/faq/s3_integration.md   |  2 +-
 .../src/pages/api/r/docs/tutorials/custom_iterator.md  |  2 +-
 example/README.md                                      |  4 ++--
 7 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/docs/static_site/src/pages/api/faq/cloud.md b/docs/static_site/src/pages/api/faq/cloud.md
index 1ccf7c15a5b2..0b7498e9c80f 100644
--- a/docs/static_site/src/pages/api/faq/cloud.md
+++ b/docs/static_site/src/pages/api/faq/cloud.md
@@ -112,7 +112,7 @@ cat hosts | xargs -I{} ssh -o StrictHostKeyChecking=no {} 'uname -a; pgrep pytho
 ```
 
 ***Note:*** The preceding example is very simple to train and therefore isn't a good
-benchmark for distributed training. Consider using other [examples](https://github.com/apache/mxnet/tree/v1.x/example/image-classification).
+benchmark for distributed training. Consider using other [examples](https://github.com/apache/mxnet/tree/master/example/image-classification).
 
 ### More Options
 #### Use Multiple Data Shards
diff --git a/docs/static_site/src/pages/api/faq/new_op.md b/docs/static_site/src/pages/api/faq/new_op.md
index fe861831f24e..7d2df25b885a 100644
--- a/docs/static_site/src/pages/api/faq/new_op.md
+++ b/docs/static_site/src/pages/api/faq/new_op.md
@@ -144,12 +144,12 @@ To use the custom operator, create a mx.sym.Custom symbol with op_type as the re
 mlp = mx.symbol.Custom(data=fc3, name='softmax', op_type='softmax')
 ```
 
-Please see the full code for this example [here](https://github.com/apache/mxnet/blob/v1.x/example/numpy-ops/custom_softmax.py).
+Please see the full code for this example [here](https://github.com/apache/mxnet/blob/master/example/numpy-ops/custom_softmax.py).
 
 ## C++
 With MXNet v0.9 (the NNVM refactor) or later, creating new operators has become easier. Operators are now registered with NNVM.
 
-The following code is an example on how to register an operator (checkout [src/operator/tensor](https://github.com/apache/mxnet/tree/v1.x/src/operator/tensor) for more examples):
+The following code is an example on how to register an operator (checkout [src/operator/tensor](https://github.com/apache/mxnet/tree/master/src/operator/tensor) for more examples):
 
 ```c++
 NNVM_REGISTER_OP(abs)
@@ -189,7 +189,7 @@ In this section, we will go through the basic attributes MXNet expect for all op
 You can find the definition for them in the following two files:
 
 - [nnvm/op_attr_types.h](https://github.com/dmlc/nnvm/blob/master/include/nnvm/op_attr_types.h)
-- [mxnet/op_attr_types.h](https://github.com/apache/mxnet/blob/v1.x/include/mxnet/op_attr_types.h)
+- [mxnet/op_attr_types.h](https://github.com/apache/mxnet/blob/master/include/mxnet/op_attr_types.h)
 
 #### Descriptions (Optional)
 
diff --git a/docs/static_site/src/pages/api/faq/perf.md b/docs/static_site/src/pages/api/faq/perf.md
index 0e123ae3fc57..083ef6974f10 100644
--- a/docs/static_site/src/pages/api/faq/perf.md
+++ b/docs/static_site/src/pages/api/faq/perf.md
@@ -66,7 +66,7 @@ So whether you specify `cpu(0)` or `cpu()`, _MXNet_ will use all CPU cores on th
 
 ### Scoring results
 The following table shows performance of MXNet-1.2.0.rc1, namely number of images that can be predicted per second.
-We used [example/image-classification/benchmark_score.py](https://github.com/apache/mxnet/blob/v1.x/example/image-classification/benchmark_score.py)
+We used [example/image-classification/benchmark_score.py](https://github.com/apache/mxnet/blob/master/example/image-classification/benchmark_score.py)
 to measure the performance on different AWS EC2 machines.
 
 AWS EC2 C5.18xlarge:
@@ -150,7 +150,7 @@ and V100 (EC2 p3.2xlarge).
 
 ### Scoring results
 Based on
-[example/image-classification/benchmark_score.py](https://github.com/apache/mxnet/blob/v1.x/example/image-classification/benchmark_score.py)
+[example/image-classification/benchmark_score.py](https://github.com/apache/mxnet/blob/master/example/image-classification/benchmark_score.py)
 and MXNet-1.2.0.rc1, with cuDNN 7.0.5
 
 - K80 (single GPU)
@@ -213,7 +213,7 @@ Below is the performance result on V100 using float 16.
 
 ### Training results
 Based on
-[example/image-classification/train_imagenet.py](https://github.com/apache/mxnet/blob/v1.x/example/image-classification/train_imagenet.py)
+[example/image-classification/train_imagenet.py](https://github.com/apache/mxnet/blob/master/example/image-classification/train_imagenet.py)
 and MXNet-1.2.0.rc1, with CUDNN 7.0.5. The benchmark script is available at
 [here](https://github.com/mli/mxnet-benchmark/blob/master/run_vary_batch.sh),
 where the batch size for Alexnet is increased by 16x.
@@ -260,7 +260,7 @@ It's critical to use the proper type of `kvstore` to get the best performance.
 
 Refer to [Distributed Training](https://mxnet.apache.org/api/faq/distributed_training.html)
 for more details.
-Besides, we can use [tools/bandwidth](https://github.com/apache/mxnet/tree/v1.x/tools/bandwidth)
+Besides, we can use [tools/bandwidth](https://github.com/apache/mxnet/tree/master/tools/bandwidth)
 to find the communication cost per batch. Ideally, the communication cost
 should be less than the time to compute a batch.
 To reduce the communication cost, we can consider:
@@ -293,7 +293,7 @@ by summarizing at the operator level, instead of a function, kernel, or instruct
 The profiler can be turned on with an [environment variable]({{'/api/faq/env_var#control-the-profiler' | relative_url}})
 for an entire program run, or programmatically for just part of a run.
 Note that by default the profiler hides the details of each individual operator, and you can reveal the details by setting environment variables `MXNET_EXEC_BULK_EXEC_INFERENCE`, `MXNET_EXEC_BULK_EXEC_MAX_NODE_TRAIN` and `MXNET_EXEC_BULK_EXEC_TRAIN` to 0.
-See [example/profiler](https://github.com/apache/mxnet/tree/v1.x/example/profiler)
+See [example/profiler](https://github.com/apache/mxnet/tree/master/example/profiler)
 for complete examples of how to use the profiler in code, or [this tutorial](https://mxnet.apache.org/api/python/docs/tutorials/performance/backend/profiler.html)
 on how to profile MXNet performance. Briefly, the Python code looks like:
 
diff --git a/docs/static_site/src/pages/api/faq/recordio.md b/docs/static_site/src/pages/api/faq/recordio.md
index 180b0cfcc58b..edcaed6a6943 100644
--- a/docs/static_site/src/pages/api/faq/recordio.md
+++ b/docs/static_site/src/pages/api/faq/recordio.md
@@ -34,8 +34,8 @@ RecordIO implements a file format for a sequence of records. We recommend storin
 
 We provide two tools for creating a RecordIO dataset.
 
-* [im2rec.cc](https://github.com/apache/mxnet/blob/v1.x/tools/im2rec.cc) - implements the tool using the C++ API.
-* [im2rec.py](https://github.com/apache/mxnet/blob/v1.x/tools/im2rec.py) - implements the tool using the Python API.
+* [im2rec.cc](https://github.com/apache/incubator-mxnet/blob/master/tools/im2rec.cc) - implements the tool using the C++ API.
+* [im2rec.py](https://github.com/apache/incubator-mxnet/blob/master/tools/im2rec.py) - implements the tool using the Python API.
 
 Both provide the same output: a RecordIO dataset.
 
diff --git a/docs/static_site/src/pages/api/faq/s3_integration.md b/docs/static_site/src/pages/api/faq/s3_integration.md
index 207ed5b92cfd..e2854c86aabc 100644
--- a/docs/static_site/src/pages/api/faq/s3_integration.md
+++ b/docs/static_site/src/pages/api/faq/s3_integration.md
@@ -67,7 +67,7 @@ aws s3 sync ./training-data s3://bucket-name/training-data
 
 Once the data is in S3, it is very straightforward to use it from MXNet. Any data iterator that can read/write data from a local drive can also read/write data from S3.
 
-Let's modify an existing example code in MXNet repository to read data from S3 instead of local disk. [`mxnet/tests/python/train/test_conv.py`](https://github.com/apache/mxnet/blob/v1.x/tests/python/train/test_conv.py) trains a convolutional network using MNIST data from local disk. We'll do the following change to read the data from S3 instead.
+Let's modify an existing example code in MXNet repository to read data from S3 instead of local disk. [`mxnet/tests/python/train/test_conv.py`](https://github.com/apache/mxnet/blob/master/tests/python/train/test_conv.py) trains a convolutional network using MNIST data from local disk. We'll do the following change to read the data from S3 instead.
 
 ```
 ~/mxnet$ sed -i -- 's/data\//s3:\/\/bucket-name\/training-data\//g' ./tests/python/train/test_conv.py
diff --git a/docs/static_site/src/pages/api/r/docs/tutorials/custom_iterator.md b/docs/static_site/src/pages/api/r/docs/tutorials/custom_iterator.md
index 58955a835669..4bfb5639ec01 100644
--- a/docs/static_site/src/pages/api/r/docs/tutorials/custom_iterator.md
+++ b/docs/static_site/src/pages/api/r/docs/tutorials/custom_iterator.md
@@ -43,7 +43,7 @@ You'll get two files, `mnist_train.csv` that contains 60.000 examples of hand wr
 
 Custom CSV Iterator
 ----------
-Next we are going to create a custom CSV Iterator based on the [C++ CSVIterator class](https://github.com/apache/mxnet/blob/v1.x/src/io/iter_csv.cc).
+Next we are going to create a custom CSV Iterator based on the [C++ CSVIterator class](https://github.com/apache/mxnet/blob/master/src/io/iter_csv.cc).
 For that we are going to use the R function `mx.io.CSVIter` as a base class.
 This class has as parameters `data.csv, data.shape, batch.size` and two main functions,
 `iter.next()` that calls the iterator in the next batch of data and `value()` that returns the train data and the label.
diff --git a/example/README.md b/example/README.md
index 6a2e471db6ac..0bcdc9052d02 100644
--- a/example/README.md
+++ b/example/README.md
@@ -48,7 +48,7 @@ Example applications or scripts should be submitted in this `example` folder.
 
 ### Tutorials
 
-If you have a tutorial idea for the website, download the [Jupyter notebook tutorial template](https://github.com/apache/mxnet/tree/v1.x/example/MXNetTutorialTemplate.ipynb).
+If you have a tutorial idea for the website, download the [Jupyter notebook tutorial template](https://github.com/apache/mxnet/tree/master/example/MXNetTutorialTemplate.ipynb).
 
 #### Tutorial location
 
@@ -122,7 +122,7 @@ If your tutorial depends on specific packages, simply add them to this provision
 * "Learn to sort by LSTM" by [xlvector](https://github.com/xlvector) [github link](https://github.com/xlvector/learning-dl/tree/master/mxnet/lstm_sort) [Blog in Chinese](http://blog.xlvector.net/2016-05/mxnet-lstm-example/)
 * [Neural Art using extremely lightweight (<500K) neural network](https://github.com/pavelgonchar/neural-art-mini) Lightweight version of mxnet neural art implementation by [Pavel Gonchar](https://github.com/pavelgonchar)
 * [Neural Art with generative networks](https://github.com/zhaw/neural_style) by [zhaw](https://github.com/zhaw)
-* [Faster R-CNN in MXNet with distributed implementation and data parallelization](https://github.com/apache/mxnet/tree/v1.x/example/rcnn)
+* [Faster R-CNN in MXNet with distributed implementation and data parallelization](https://github.com/apache/mxnet/tree/master/example/rcnn)
 * [Asynchronous Methods for Deep Reinforcement Learning in MXNet](https://github.com/zmonoid/Asyn-RL-MXNet/blob/master/mx_asyn.py) by [zmonoid](https://github.com/zmonoid)
 * [Deep Q-learning in MXNet](https://github.com/zmonoid/DQN-MXNet) by [zmonoid](https://github.com/zmonoid)
 * [Face Detection with End-to-End Integration of a ConvNet and a 3D Model (ECCV16)](https://github.com/tfwu/FaceDetection-ConvNet-3D) by [tfwu](https://github.com/tfwu), source code for paper Yunzhu Li, Benyuan Sun, Tianfu Wu and Yizhou Wang, "Face Detection with End-to-End Integration of a ConvNet and a 3D Model", ECCV 2016