
Commit

Merge pull request #97 from tqchen/master
new documents
tqchen committed Sep 18, 2015
2 parents c1e8174 + 41ce311 commit 2e2e710
Showing 14 changed files with 210 additions and 74 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -58,3 +58,4 @@ build
dmlc-core
mshadow
data
recommonmark
8 changes: 8 additions & 0 deletions CHANGES.md
@@ -0,0 +1,8 @@
Change Log
==========

in progress version
-------------------
- All basic modules ready


43 changes: 43 additions & 0 deletions CONTRIBUTORS.md
@@ -0,0 +1,43 @@
Contributors of DMLC/MXNet
==========================
MXNet has been developed by a community of people who are interested in large-scale machine learning and deep learning.
Everyone is more than welcome to contribute; it is a great way to make the project better and more accessible to more users.

Committers
----------
Committers are people who have made substantial contributions to the project and remain active.
Committers are granted write access to the project.

* [Bing Xu](https://github.com/antinucleon)
- Bing is the major contributor of the operator and ndarray modules of mxnet.
* [Tianjun Xiao](https://github.com/sneakerkg)
- Tianjun is the master behind the fast data loading and preprocessing.
* [Yutian Li](https://github.com/hotpxl)
- Yutian is the ninja behind the dependency and storage engine of mxnet.
* [Mu Li](https://github.com/mli)
- Mu is the contributor of the distributed key-value store in mxnet.
* [Tianqi Chen](https://github.com/tqchen)
- Tianqi is one of the initiators of the mxnet project.
* [Min Lin](https://github.com/mavenlin)
- Min is the guy behind the symbolic magics of mxnet.
* [Naiyan Wang](https://github.com/winstywang)
- Naiyan is the creator of the static symbolic graph module of mxnet.
* [Mingjie Wang](https://github.com/jermainewang)
- Mingjie is one of the initiators and contributed the design of the dependency engine.

### Become a Committer
MXNet is an open-source project, and we are actively looking for new committers
who are willing to help maintain and lead the project. Committers come from contributors who:
* Have made substantial contributions to the project.
* Are willing to actively spend time maintaining and leading the project.

New committers will be proposed by the current committers, with support from more than two current committers.

List of Contributors
--------------------
* [Full List of Contributors](https://github.com/dmlc/mxnet/graphs/contributors)
- To contributors: please add your name to the list when you submit a patch to the project. :)
* [Jiawei Chen](https://github.com/Iroul)
- Jiawei is the man behind all the serialization code.
* [Qiang Kou](https://github.com/thirdwing)
- KK is an R ninja; he will make mxnet available for R users.
72 changes: 46 additions & 26 deletions README.md
@@ -1,32 +1,52 @@
# MXNet
MXNet
=====

[![Build Status](https://travis-ci.org/dmlc/mxnet.svg?branch=master)](https://travis-ci.org/dmlc/mxnet)
[![Documentation Status](https://readthedocs.org/projects/mxnet/badge/?version=latest)](https://readthedocs.org/projects/mxnet/?badge=latest)
[![Documentation Status](https://readthedocs.org/projects/mxnet/badge/?version=latest)](http://mxnet.readthedocs.org/en/latest/)
[![GitHub Stats](https://img.shields.io/badge/github-stats-ff5500.svg)](http://githubstats.com/dmlc/mxnet)
[![Hex.pm](https://img.shields.io/hexpm/l/plug.svg)]()

This is a project that combines lessons and ideas we learnt from [cxxnet](https://github.com/dmlc/cxxnet), [minerva](https://github.com/dmlc/minerva) and [purine2](https://github.com/purine/purine2).
- The interface is designed in collaboration by the authors of the three projects.
- Nothing is yet working

# Guidelines
* Use the Google C++ style
* Put module headers in [include](include)
* Depend on [dmlc-core](https://github.com/dmlc/dmlc-core)
* Doxygen-comment every function, class, and variable in the module headers
- Reference headers in [dmlc-core/include](https://github.com/dmlc/dmlc-core/tree/master/include/dmlc)
- Use the same style as dmlc-core
* Minimize dependencies; if possible, depend only on dmlc-core
* Macro-guard C++11 code:
- Try to make the interface compile even when C++11 is not available (but with some functionality missing)
```c++
#include <dmlc/base.h>
#if DMLC_USE_CXX11
// c++11 code here
#endif
```
- Update the dependencies by
```
git submodule foreach --recursive git pull origin master
```
* For heterogeneous hardware support (CPU/GPU), the GPU-specific components should be easy to isolate. That is to say, if we use the `USE_CUDA` macro to wrap GPU-related code, the macro should not be scattered everywhere in the project.
Contents
--------
* [Documentation](http://mxnet.readthedocs.org/en/latest/)
* [Build Instruction](doc/build.md)
* [Features](#features)
* [License](#license)

Features
--------
* Lightweight: small but sharp knife
- mxnet contains a concise implementation of state-of-the-art deep learning models
- The project maintains minimal dependencies, which makes it portable and easy to build
* Scalable and beyond
- The package already scales to multiple GPUs with an easy-to-use kvstore.
- The same code can be ported to the distributed version when the distributed kvstore is ready.
* Multi-GPU NDArray/Tensor API with auto parallelization
- The package supports a flexible ndarray interface that runs on both CPU and GPU and, more importantly,
automatically parallelizes the computation for you (see the sketch after this list).
* Language agnostic
- The package currently supports C++ and Python, with a clean C API.
- This makes the package easily portable to other languages and platforms.
* Cloud friendly
- MXNet is ready to work with cloud storage, including S3, HDFS, and Azure, for data sources and model saving.
- This means you can put data on S3 and use it directly to train your deep model.
* Easy extensibility with no requirement for GPU programming
- The package can be extended at several levels, including Python and C++.
- At all these levels, developers can write numpy-style expressions, either via Python
or the [mshadow expression template](https://github.com/dmlc/mshadow).
- This brings concise and readable code, with performance matching hand-crafted kernels
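
The multi-device NDArray behavior described above can be sketched in a few lines
of Python (a minimal illustration, not part of this commit; it assumes a
CUDA-enabled build so that `mx.gpu(0)` is available):

```python
import mxnet as mx

a = mx.nd.ones((2, 3), mx.cpu())   # a 2x3 array of ones on the CPU
b = mx.nd.ones((2, 3), mx.gpu(0))  # the same array on GPU 0

a += 1        # numpy-style arithmetic, executed on the CPU
b *= 2        # executed on GPU 0

c = a.copyto(mx.gpu(0))    # explicit copy across devices
print((b + c).asnumpy())   # sync back to numpy for inspection
```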

Bug Reporting
-------------
* For reporting bugs, please use the [mxnet/issues](https://github.com/dmlc/mxnet/issues) page.

Contributing to MXNet
---------------------
MXNet has been developed and used by a group of active community members.
Everyone is more than welcome to contribute. It is a way to make the project better and more accessible to more users.
* Please add your name to [CONTRIBUTORS.md](CONTRIBUTORS.md) after your patch has been merged.

License
-------
© Contributors, 2015. Licensed under an [Apache-2.0](https://github.com/dmlc/mxnet/blob/master/LICENSE) license.
10 changes: 4 additions & 6 deletions doc/README
@@ -1,9 +1,7 @@
This document is generated by sphinx.
Make sure you cloned the following repos into the root.

- https://github.com/dmlc/dmlc-core
- https://github.com/dmlc/mshadow
- https://github.com/tqchen/recommonmark
- Type make in the root folder to make the library
You can view a hosted version of the document at http://mxnet.readthedocs.org/

Type make html in the doc folder.
To build the document locally, type
- ```make html```
- It is recommended to type ```make``` in the root to build mxnet beforehand.
10 changes: 10 additions & 0 deletions doc/build.md
@@ -0,0 +1,10 @@
Build MXNet
===========
- You can clone mxnet from the [github repo](https://github.com/dmlc/mxnet)
- After you clone the repo, update the submodules by
```bash
git submodule init
git submodule update
```
- Copy [make/config.mk](../make/config.mk) to the project root and modify it according to your desired settings.
- Type ```make``` in the root folder.
4 changes: 1 addition & 3 deletions doc/conf.py
@@ -162,9 +162,7 @@ def run_doxygen(folder):

 def generate_doxygen_xml(app):
     """Run the doxygen make commands if we're on the ReadTheDocs server"""
-    read_the_docs_build = os.environ.get('READTHEDOCS', None) == 'True'
-    if read_the_docs_build:
-        run_doxygen('..')
+    run_doxygen('..')
     sys.stderr.write('The Lib path: %s\n' % str(os.listdir('../lib')))

 def setup(app):
51 changes: 51 additions & 0 deletions doc/contribute.md
@@ -0,0 +1,51 @@
Contribute to MXNet
===================
MXNet has been developed and used by a group of active community members.
Everyone is more than welcome to contribute. It is a way to make the project better and more accessible to more users.
* Please add your name to [CONTRIBUTORS.md](../CONTRIBUTORS.md) after your patch has been merged.

Code Style
----------
- Follow the Google C++ style for C++ code.
- We use doxygen to document all the interface code.
- We use numpydoc to document all the Python code.
- You can reproduce the linter checks by typing ```make lint```.

Contribute to Documents
-----------------------
* The documents are created using sphinx and [recommonmark](http://recommonmark.readthedocs.org/en/latest/).
* You can build the documents locally to see the effect.


Contribute to Testcases
-----------------------
* All the test cases are in [tests](../tests).
* We use Python nose for Python test cases and gtest for C++ unit tests.


Contribute to Examples
-------------------------
* Use cases and examples will be in [examples](../examples).
* We are super excited to hear your story. If you have blog posts,
tutorials, or code solutions that use mxnet, please tell us and we will add
a link on the examples page.

Submit a Pull Request
---------------------
* Before submitting, please rebase your code on the most recent version of master. You can do it by
```bash
git remote add upstream https://github.com/dmlc/mxnet
git fetch upstream
git rebase upstream/master
```
* If you have multiple small commits that fix small problems,
it might be good to merge them (use git rebase, then squash) into more meaningful groups.
* Send the pull request!
- Fix the problems reported by the automatic checks.
- If you are contributing a new module, consider adding a test case in [tests](../tests).

19 changes: 19 additions & 0 deletions doc/faq.md
@@ -0,0 +1,19 @@
Frequently Asked Questions
==========================
This document contains frequently asked questions about mxnet.


What is the relation between MXNet and CXXNet, Minerva, Purine2?
----------------------------------------------------------------
MXNet was created in collaboration by authors from the three projects.
The project reflects what we have learnt from those past projects.
It combines the important flavors of the existing projects: being
efficient, flexible, and memory efficient.

It also contains new ideas that allow users to combine different
ways of programming, and to write CPU/GPU applications that are more
memory efficient than cxxnet and purine, and more flexible than minerva.

How to Build the Project
------------------------
See [build instruction](build.md)
15 changes: 12 additions & 3 deletions doc/index.md
@@ -1,13 +1,22 @@
MXNet Documentation
===================
This is the documentation for mxnet, an efficient and flexible distributed framework for deep learning.

How to Get Started
------------------
For now, you can take a look at the [Python User Guide](python/python_guide.md) and play with the
[examples](../examples). More to come.

Contents
--------
* [Python User Guide](python/python_guide.md)
* [Build Instruction](build.md)
* [Python API Reference](python/python_api.md)


* [Python User Guide](python/python_guide.md)
* [Python API](python/python_api.md)
* [C++ Developer Guide](cpp/cpp_guide.md)
Developer Guide
---------------
* [Contributor Guideline](contribute.md)
* [Doxygen Version of C++ API](https://mxnet.readthedocs.org/en/latest/doxygen)

Indices and tables
17 changes: 3 additions & 14 deletions doc/python/python_guide.md
@@ -112,14 +112,6 @@ same one. The following example performs computations on GPU 0:
[ 6. 6. 6.]]
```

#### Indexing

TODO

#### Linear Algebra

TODO

### Load and Save

There are two ways to save data to (load from) disks easily. The first way uses
@@ -174,11 +166,8 @@ can directly save to and load from them. For example:
>>> mx.nd.save('hdfs:///users/myname/mydata.bin', [a,b])
```
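
A minimal round trip for the save/load pair above (a sketch, not part of the
original guide; `mydata.bin` is an illustrative local path):

```python
import mxnet as mx

a = mx.nd.ones((2, 3))
b = mx.nd.zeros((2, 3))

# save a list of NDArrays to disk, then load them back in the same order
mx.nd.save('mydata.bin', [a, b])
a2, b2 = mx.nd.load('mydata.bin')
print(a2.asnumpy(), b2.asnumpy())
```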

### Parallelization

The operations of `NDArray` are executed by third-party libraries such as `cblas`,
`mkl`, and `cuda`. By default, each operation is executed with multiple threads. In
addition, `NDArray` can execute operations in parallel. It is desirable when we
### Automatic Parallelization
`NDArray` can automatically execute operations in parallel. It is desirable when we
use multiple resources such as CPU, GPU cards, and CPU-to-GPU memory bandwidth.

For example, if we write `a += 1` followed by `b += 1`, and `a` is on CPU while
@@ -206,7 +195,7 @@ automatically dispatch it onto multiple devices, such as multiple GPU cards or multiple
machines.

It is achieved by lazy evaluation. Any operation we write down is issued into an
internal DAG engine, and then returned. For example, if we run `a += 1`, it
internal engine, and then returned. For example, if we run `a += 1`, it
returns immediately after pushing the plus operator to the engine. This
asynchrony allows us to push more operators to the engine, so it can determine
the read and write dependencies and find the best way to execute them in
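
To make the lazy-evaluation behavior concrete, here is a small sketch (an
illustration, not part of the original guide; it assumes a CUDA-enabled build):

```python
import mxnet as mx

a = mx.nd.ones((1000, 1000), mx.cpu())
b = mx.nd.ones((1000, 1000), mx.gpu(0))

# Both statements return immediately: each operation is only pushed to the
# engine. Since the engine sees no dependency between `a` and `b`, it is
# free to execute the two updates on the two devices in parallel.
a += 1
b += 1

# Reading a result blocks until the pending writes to it have finished.
print(a.asnumpy()[0, 0], b.asnumpy()[0, 0])
```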
13 changes: 6 additions & 7 deletions doc/sphinx_util.py
@@ -6,22 +6,21 @@
 import subprocess


+READTHEDOCS_BUILD = (os.environ.get('READTHEDOCS', None) == 'True')
+
 def run_build_mxnet(folder):
     """Run the doxygen make command in the designated folder."""
     try:
         subprocess.call('cd ..; rm -rf dmlc-core;' +
                         'git clone https://github.com/dmlc/dmlc-core', shell = True)
         subprocess.call('cd ..; rm -rf mshadow;' +
                         'git clone https://github.com/dmlc/mshadow', shell = True)
-        subprocess.call('cd ..; cp make/readthedocs.mk config.mk', shell = True)
-        subprocess.call('cd ..; rm -rf build', shell = True)
+        if READTHEDOCS_BUILD:
+            subprocess.call('cd ..; cp make/readthedocs.mk config.mk', shell = True)
+            subprocess.call('cd ..; rm -rf build', shell = True)
         retcode = subprocess.call("cd %s; make" % folder, shell = True)
         if retcode < 0:
             sys.stderr.write("build terminated by signal %s" % (-retcode))
     except OSError as e:
         sys.stderr.write("build execution failed: %s" % e)

-if os.environ.get('READTHEDOCS', None) == 'True':
+if READTHEDOCS_BUILD or not os.path.exists('../recommonmark'):
     subprocess.call('cd ..; rm -rf recommonmark;' +
                     'git clone https://github.com/tqchen/recommonmark', shell = True)
19 changes: 5 additions & 14 deletions make/readthedocs.mk
@@ -19,12 +19,13 @@ USE_CUDA_PATH = NONE
# you can disable it; however, you will not be able to use
# the imbin iterator
USE_OPENCV = 0
USE_OPENCV_DECODER = 0

# whether to use the CUDNN R3 library
USE_CUDNN = 0
# add the path to the CUDNN library to the link and compile flags
# if you do not need that, or do not have that, leave it as NONE
USE_CUDNN_PATH = NONE


# use openmp for parallelization
USE_OPENMP = 0

#
# choose the version of blas you want to use
@@ -37,17 +38,7 @@ USE_BLAS = NONE
#
USE_INTEL_PATH = NONE

# whether to compile with the parameter server
USE_DIST_PS = 0
PS_PATH = NONE
PS_THIRD_PATH = NONE

# whether to compile with rabit
USE_RABIT_PS = 0
RABIT_PATH = rabit

# use openmp iterator
USE_OPENMP_ITER = 0
# the additional link flags you want to add
ADD_LDFLAGS =

2 changes: 1 addition & 1 deletion mshadow
Submodule mshadow updated 1 file
+2 −0 make/mshadow.mk
