
import mshadow source tree
szha committed Aug 1, 2019
1 parent 8c641b8 commit 1434b98
Showing 115 changed files with 21,728 additions and 0 deletions.
21 changes: 21 additions & 0 deletions 3rdparty/mshadow/.gitignore
@@ -0,0 +1,21 @@
# Compiled Object files
*.slo
*.lo
*.o

# Compiled Dynamic libraries
*.so
*.dylib

# Compiled Static libraries
*.lai
*.la
*.a
*~
doc/html
doc/latex
rabit
dmlc-core
*.db
*.bak
build
43 changes: 43 additions & 0 deletions 3rdparty/mshadow/.travis.yml
@@ -0,0 +1,43 @@
# disable sudo to use container based build
sudo: false

# Use Build Matrix to do lint and build separately
env:
  matrix:
    - TASK=lint LINT_LANG=cpp
    - TASK=doc
    - TASK=build CXX=g++

# dependent apt packages
addons:
  apt:
    packages:
      - doxygen
      - wget
      - unzip
      - libblas-dev
      - python3-pip

before_install:
  - git clone https://github.com/dmlc/dmlc-core
  - export TRAVIS=dmlc-core/scripts/travis
  - source ${TRAVIS}/travis_setup_env.sh

install:
  - pip3 install --upgrade pip --user
  - pip3 install --user cpplint pylint

script: scripts/travis_script.sh

before_cache:
  - ${TRAVIS}/travis_before_cache.sh

cache:
  directories:
    - ${HOME}/.cache/usr

notifications:
  email:
    on_success: change
    on_failure: always

12 changes: 12 additions & 0 deletions 3rdparty/mshadow/CHANGES.md
@@ -0,0 +1,12 @@
Change Log
=====

mshadow-1.0
=====
* Initial release

mshadow-2.0: in progress
=====
* Support multiple data types
* Great refactoring of code
* Parameter server interface for MultiGPU and distributed learning
6 changes: 6 additions & 0 deletions 3rdparty/mshadow/CMakeLists.txt
@@ -0,0 +1,6 @@
cmake_minimum_required(VERSION 2.8.7)

project(mshadow C CXX)

set(mshadow_LINT_DIRS mshadow mshadow-ps)
add_custom_target(mshadow_lint COMMAND ${CMAKE_COMMAND} -DMSVC=${MSVC} -DPYTHON_EXECUTABLE=${PYTHON_EXECUTABLE} -DLINT_DIRS=${mshadow_LINT_DIRS} -DPROJECT_SOURCE_DIR=${PROJECT_SOURCE_DIR} -DPROJECT_NAME=mshadow -P ${PROJECT_SOURCE_DIR}/../dmlc-core/cmake/lint.cmake)
13 changes: 13 additions & 0 deletions 3rdparty/mshadow/LICENSE
@@ -0,0 +1,13 @@
Copyright (c) 2014 by Contributors

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
37 changes: 37 additions & 0 deletions 3rdparty/mshadow/README.md
@@ -0,0 +1,37 @@
mshadow: Matrix Shadow
======
[![Build Status](https://travis-ci.org/dmlc/mshadow.svg?branch=master)](https://travis-ci.org/dmlc/mshadow)

MShadow is a lightweight CPU/GPU Matrix/Tensor template library in C++/CUDA. The goal of mshadow is to provide an ***efficient***,
***device-invariant***, and ***simple*** tensor library for machine learning projects that aim for maximum performance and control while also valuing simplicity.

MShadow also provides an interface for writing multi-GPU and distributed deep learning programs in an easy, unified way.

* [Contributors](https://github.com/tqchen/mshadow/graphs/contributors)
* [Tutorial](guide)
* [Documentation](doc)
* [Parameter Server Interface for GPU Tensor](guide/mshadow-ps)

Features
--------
* Efficient: every expression you write is lazily evaluated and compiled into optimized code
  - No temporary memory allocation happens for the expressions you write
  - mshadow generates a specific kernel for every expression you write at compile time
* Device invariant: you write the code once and it runs on both CPU and GPU
* Simple: mshadow lets you write machine learning code using expressions (see the sketch after this list)
* Whitebox: put a float* into the Tensor struct and get the benefits of the package; no memory allocation happens unless explicitly requested
* Lightweight library: a small amount of code supports the functions frequently used in machine learning
* Extendable: users can write simple functions that plug into mshadow and run on GPU/CPU; no CUDA experience is required
* MultiGPU and Distributed ML: the mshadow-ps interface lets users write efficient multi-GPU and distributed programs in a unified way
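
The bullets above are easiest to see in code. Below is a minimal sketch (not part of the imported source tree), loosely following the tutorial in [guide](guide); the `UpdateSGD` helper, the shapes, and the constants are illustrative assumptions, and only the basic `Tensor`, `NewTensor`, and expression operators are taken from the library.

```c++
// Minimal sketch: a device-invariant SGD update written with mshadow
// expression templates. Built for CPU here; the same template would also
// instantiate for gpu when compiled with CUDA.
#include "mshadow/tensor.h"

using namespace mshadow;
using namespace mshadow::expr;

// The update rule is written once and works for any device type xpu.
template<typename xpu>
void UpdateSGD(Tensor<xpu, 2, float> weight,
               const Tensor<xpu, 2, float> &grad,
               float eta, float lambda) {
  // The right-hand side is an expression template: it is evaluated lazily
  // in one fused pass over the data, with no temporary tensor allocated.
  weight -= eta * (grad + lambda * weight);
}

int main() {
  InitTensorEngine<cpu>();
  // "Whitebox": wrap an existing float* in a Tensor without any allocation.
  float data[6] = {0.0f};
  Tensor<cpu, 2, float> weight(data, Shape2(2, 3));
  // Allocation only happens when explicitly requested.
  Tensor<cpu, 2, float> grad = NewTensor<cpu>(Shape2(2, 3), 1.0f);
  UpdateSGD(weight, grad, 0.1f, 0.01f);
  FreeSpace(&grad);
  ShutdownTensorEngine<cpu>();
  return 0;
}
```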

Version
-------
* This version is mshadow-2.x; the interface has changed substantially and is not backward compatible with mshadow-1.0
  - If you use an older version of cxxnet, you will need the legacy mshadow code
* For the legacy code, refer to [the v1.1 release](https://github.com/tqchen/mshadow/releases/tag/v1.1)
* Change log in [CHANGES.md](CHANGES.md)

Projects Using MShadow
----------------------
* [MXNet: Efficient and Flexible Distributed Deep Learning Framework](https://github.com/dmlc/mxnet)
* [CXXNet: A lightweight C++ based deep learning framework](https://github.com/dmlc/cxxnet)