Intel® Distribution of Caffe*

This fork is dedicated to improving Caffe performance when running on CPU, in particular Intel® Xeon processors.

Building

The build procedure is the same as on the bvlc-caffe-master branch; see the "Caffe" section below. Both Make and CMake can be used. When OpenMP is available, it will be used automatically.
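
As a quick illustration, a typical Make- or CMake-based build might look like the sketch below (assuming the standard Caffe dependencies are already installed; paths and job counts are examples only):

# Make-based build: adapt Makefile.config first
cp Makefile.config.example Makefile.config
make all -j"$(nproc)"
make test -j"$(nproc)"
make runtest

# Alternatively, a CMake-based build
mkdir build && cd build
cmake ..
make all -j"$(nproc)"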

Running

The run procedure is the same as on the bvlc-caffe-master branch.

The current implementation uses OpenMP threads. By default, the number of OpenMP threads is set to the number of CPU cores, and each thread is bound to a single core to achieve the best performance. You can, however, provide your own configuration through OpenMP environment variables such as OMP_NUM_THREADS or GOMP_CPU_AFFINITY.
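
For example, a custom configuration might look like the following sketch (the thread count, core list, and model path are assumptions and should be adjusted to your machine):

# Hypothetical example: 16 OpenMP threads pinned to cores 0-15
export OMP_NUM_THREADS=16
export GOMP_CPU_AFFINITY="0-15"
./build/tools/caffe time --model=models/bvlc_alexnet/deploy.prototxt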

If a system tool such as numactl is used to control CPU affinity, Caffe will by default avoid using more than one thread per core. When fewer cores than required are specified, Caffe will limit the execution of OpenMP threads to the specified cores only.
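
For instance, restricting Caffe to a subset of cores with numactl might look like this sketch (the core range, NUMA node, and model path are assumptions):

# Hypothetical example: run on cores 0-13 of NUMA node 0 only
numactl --physcpubind=0-13 --membind=0 ./build/tools/caffe time --model=models/bvlc_alexnet/deploy.prototxt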

To collect performance numbers for the full INT8 model of ResNet-50 v1.0, update the variables NUM_CORE, the batch size range s_BS and e_BS, and INSTANCES according to your test requirements, then run:

. run.sh
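
As an illustration only, the edited variables inside run.sh might look like the sketch below (all values are assumptions; choose them to match your core count and test plan):

# Hypothetical settings inside run.sh
NUM_CORE=28      # physical cores per socket on the test machine
s_BS=1           # smallest batch size to measure
e_BS=64          # largest batch size to measure
INSTANCES=2      # number of concurrent inference instances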

To verify the accuracy, please run:

. run_accuracy.sh

Best performance solution

Please read our Wiki for recommendations and configurations to achieve the best performance on Intel CPUs.

Results:

Performance and convergence test result: https://github.com/intel/caffe/wiki/INTEL%C2%AE-OPTIMIZED-CAFFE-PERFORMANCE-AND-CONVERGENCE.

Scaling test result on AWS: https://github.com/intel/caffe/wiki/Intel%C2%AE-Optimization-for-Caffe-AWS-EC2-C5-(SKX)-Multi-node-Scaling.

Multinode Training

Intel® Distribution of Caffe* multi-node support allows you to run deep neural network training across multiple machines.

To understand how it works and to read some tutorials, go to our Wiki, starting with the Multinode guide.

License and Citation

Caffe is released under the BSD 2-Clause license. The BVLC reference models are released for unrestricted use.

Please cite Caffe in your publications if it helps your research:

@article{jia2014caffe,
  Author = {Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},
  Journal = {arXiv preprint arXiv:1408.5093},
  Title = {Caffe: Convolutional Architecture for Fast Feature Embedding},
  Year = {2014}
}

*Other names and brands may be claimed as the property of others.

SSD: Single Shot MultiBox Detector

This repository contains merged code, issued as a pull request to BVLC Caffe, written by Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C. Berg.

The original branch can be found at https://github.com/weiliu89/caffe/tree/ssd.

Read our wiki page for more details.

Caffe

Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and community contributors.

Check out the project site for all the details and step-by-step examples.

Please join the caffe-users group or the Gitter chat (https://gitter.im/BVLC/caffe) to ask questions and talk about methods and models. Framework development discussions and thorough bug reports are collected on Issues.

Happy brewing!
