Sample code for the Class Activation Mapping

We propose a simple technique to expose the implicit attention of a Convolutional Neural Network on the image. It highlights the most informative image regions relevant to the predicted class. You can get an attention-capable model almost instantly by slightly modifying your own CNN. The paper was published at CVPR'16.

The framework of Class Activation Mapping is shown below (figure: Framework).
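In the paper's notation, the class activation map for class c is a weighted sum of the last convolutional feature maps, with the weights taken from the classification layer:

    M_c(x, y) = \sum_k w_k^c f_k(x, y)

where f_k(x, y) is the activation of the k-th feature map at spatial location (x, y) and w_k^c is the weight connecting the k-th unit after global average pooling to class c.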

Some predicted class activation maps are shown below (figure: Results).

NEW: PyTorch Demo code

  • Popular networks such as ResNet, DenseNet, SqueezeNet, and Inception already have global average pooling at the end, so you can generate the heatmap directly without even modifying the network architecture. Here is a sample script to generate the CAM for pretrained networks (a minimal sketch of the underlying computation follows after the command):
    python pytorch_CAM.py
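Conceptually, pytorch_CAM.py computes the CAM as in the formula above: a weighted sum of the last convolutional feature maps. The sketch below is an illustrative approximation, not the repository script; it assumes torchvision's pretrained ResNet-18, a forward hook on its last convolutional block (layer4), the fully connected layer's weights as the class weights, and a hypothetical input image test.jpg.

    # Illustrative CAM computation for a pretrained ResNet-18 (not the repo's pytorch_CAM.py).
    import torch
    from torchvision import models, transforms
    from PIL import Image

    model = models.resnet18(pretrained=True).eval()

    features = []
    def hook(module, inp, out):
        features.append(out)  # last conv feature maps, shape (1, C, H, W)
    model.layer4.register_forward_hook(hook)

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    img = Image.open('test.jpg').convert('RGB')        # hypothetical input image
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
        class_idx = logits.argmax(dim=1).item()

        # CAM = sum over channels of (fc weight for the predicted class) * (feature map).
        fmap = features[0].squeeze(0)                  # (C, H, W)
        weights = model.fc.weight[class_idx]           # (C,)
        cam = torch.einsum('c,chw->hw', weights, fmap)
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]

    # Upsample cam to the input image size and overlay it as a heatmap to visualize.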

You can also take a look at the unified PlacesCNN scene prediction code to see how the CAM is predicted along with scene categories and scene attributes. It has been used in the PlacesCNN scene recognition demo.

Pre-trained models in Caffe:

Usage Instructions:

  • Install Caffe, compile matcaffe (the MATLAB wrapper for Caffe), and make sure you can run the prediction example code classification.m.
  • Clone the code from GitHub:
git clone https://github.com/metalbubble/CAM.git
cd CAM
  • Download the pretrained network:
sh models/download.sh
  • Run the demo code to generate the heatmap (in the MATLAB terminal):
demo
  • Run the demo code to generate bounding boxes from the heatmap (in the MATLAB terminal); a sketch of the thresholding idea follows this list:
generate_bbox
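For reference, the bounding box step is essentially a thresholding of the heatmap: keep the pixels above a fraction of the maximum activation and take the tight box around the largest connected region. The sketch below is an illustrative Python version under those assumptions, not the repository's generate_bbox MATLAB code; the 0.2 threshold is a placeholder.

    # Illustrative heatmap-to-bounding-box step (not the repo's generate_bbox).
    import numpy as np
    from scipy import ndimage

    def heatmap_to_bbox(cam, threshold=0.2):
        """Return (x_min, y_min, x_max, y_max) around the largest region above threshold * max."""
        mask = cam >= threshold * cam.max()
        labels, num = ndimage.label(mask)      # connected components of the thresholded mask
        if num == 0:
            return None
        # pick the connected component with the largest area
        sizes = ndimage.sum(mask, labels, index=range(1, num + 1))
        largest = 1 + int(np.argmax(sizes))
        ys, xs = np.where(labels == largest)
        return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

If the CAM was computed at feature-map resolution, scale the box coordinates back to the original image size before drawing the box.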

A demo video of what the CNN is looking at is here. A reimplementation in TensorFlow is here. A pycaffe wrapper for CAM is reimplemented here.

ILSVRC evaluation

Reference:

@inproceedings{zhou2016cvpr,
    author    = {Zhou, Bolei and Khosla, Aditya and Lapedriza, Agata and Oliva, Aude and Torralba, Antonio},
    title     = {Learning Deep Features for Discriminative Localization},
    booktitle = {Computer Vision and Pattern Recognition},
    year      = {2016}
}

License:

The pre-trained models and the CAM technique are released for unrestricted use.

Contact Bolei Zhou if you have questions.
