Commit bc4cd67

Merge pull request #493 from NarineK/optim-wip

Optim wip

2 parents f1cf864 + e72ec5d

File tree: 153 files changed (+4012 −2026 lines)

.circleci/config.yml

Lines changed: 6 additions & 6 deletions

```diff
@@ -43,7 +43,7 @@ commands:
       steps:
         - run:
             name: "Check import order with isort"
-            command: isort --check-only -v
+            command: isort --check-only -v .

   mypy_check:
     description: "Static type checking with mypy"
@@ -121,10 +121,11 @@ commands:
       - run:
          name: "Install CUDA"
          command: |
-           wget http://developer.download.nvidia.com/compute/cuda/10.1/Prod/local_installers/cuda-repo-ubuntu1604-10-1-local-10.1.243-418.87.00_1.0-1_amd64.deb
-           sudo apt-get update
-           export OS_RELEASE=$(uname -r)
-           sudo dpkg -i cuda-repo-ubuntu1604-10-1-local-10.1.243-418.87.00_1.0-1_amd64.deb
+           wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-ubuntu1604.pin
+           sudo mv cuda-ubuntu1604.pin /etc/apt/preferences.d/cuda-repository-pin-600
+           wget http://developer.download.nvidia.com/compute/cuda/10.2/Prod/local_installers/cuda-repo-ubuntu1604-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
+           sudo dpkg -i cuda-repo-ubuntu1604-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
+           sudo apt-key add /var/cuda-repo-10-2-local-10.2.89-440.33.01/7fa2af80.pub
            sudo apt-get update
            sudo apt-get --yes --force-yes install cuda
@@ -182,7 +183,6 @@ jobs:
       - conda_install:
           args: "-n"
       - lint_flake8
-      - lint_black
       - isort
       - unit_tests
       - sphinx
```
Lines changed: 44 additions & 0 deletions (new file)

```diff
@@ -0,0 +1,44 @@
+---
+name: "\U0001F41B Bug report"
+about: Create a report to help us improve
+title: ''
+labels: ''
+assignees: ''
+
+---
+
+## 🐛 Bug
+
+<!-- A clear and concise description of what the bug is. -->
+
+## To Reproduce
+
+Steps to reproduce the behavior:
+
+1.
+1.
+1.
+
+<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
+
+## Expected behavior
+
+<!-- A clear and concise description of what you expected to happen. -->
+
+## Environment
+Describe the environment used for Captum
+
+```
+
+ - Captum / PyTorch Version (e.g., 1.0 / 0.4.0):
+ - OS (e.g., Linux):
+ - How you installed Captum / PyTorch (`conda`, `pip`, source):
+ - Build command you used (if compiling from source):
+ - Python version:
+ - CUDA/cuDNN version:
+ - GPU models and configuration:
+ - Any other relevant information:
+
+
+## Additional context
+
+<!-- Add any other context about the problem here. -->
```
Lines changed: 14 additions & 0 deletions (new file)

```diff
@@ -0,0 +1,14 @@
+---
+name: "\U0001F4DA Documentation"
+about: Describe this issue template's purpose here.
+title: ''
+labels: ''
+assignees: ''
+
+---
+
+## 📚 Documentation
+
+<!-- A clear and concise description of what content in https://captum.ai/docs, https://captum.ai/tutorials or https://captum.ai/api is an issue.
+Feel free also to file an issue if you see any problems in the README.md or in the code documentation
+-->
```
Lines changed: 27 additions & 0 deletions (new file)

```diff
@@ -0,0 +1,27 @@
+---
+name: "\U0001F680 Feature request"
+about: Suggest an idea for this project
+title: ''
+labels: ''
+assignees: ''
+
+---
+
+## 🚀 Feature
+<!-- A clear and concise description of the feature proposal -->
+
+## Motivation
+
+<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
+
+## Pitch
+
+<!-- A clear and concise description of what you want to happen. -->
+
+## Alternatives
+
+<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
+
+## Additional context
+
+<!-- Add any other context or screenshots about the feature request here. -->
```
Lines changed: 14 additions & 0 deletions (new file)

```diff
@@ -0,0 +1,14 @@
+---
+name: "❓ Questions and Help"
+about: Describe this issue template's purpose here.
+title: ''
+labels: ''
+assignees: ''
+
+---
+
+## ❓ Questions and Help
+
+We have a set of listed resources available on the website and FAQ: https://captum.ai/ and https://captum.ai/docs/faq . Feel free to open an issue here on the github or in our discussion forums:
+
+- [Discussion Forum](https://discuss.pytorch.org/c/captum)
```

AWESOME_LIST.md

Lines changed: 8 additions & 0 deletions

```diff
@@ -18,3 +18,11 @@
 White noise stimuli is fed to a classifier and the ones that are categorized into a particular class are averaged. It gives an estimate of the templates a classifier uses for classification, and is based on two popular and related methods in psychophysics and neurophysiology namely classification images and spike triggered analysis.
 - [Paper](https://arxiv.org/abs/1912.12106)
 - [https://github.com/aliborji/WhiteNoiseAnalysis.git](https://github.com/aliborji/WhiteNoiseAnalysis.git)
+
+
+#### FastCAM: Multiscale Saliency Map with SMOE scale
+An attribution method that uses information at the end of each network scale which is then combined into a single saliency map.
+- [Paper](https://arxiv.org/abs/1911.11293)
+- [https://github.com/LLNL/fastcam](https://github.com/LLNL/fastcam)
+- [pull request](https://github.com/pytorch/captum/pull/442)
+- [jupyter notebook demo](https://github.com/LLNL/fastcam/blob/captum/demo-captum.ipynb)
```

CITATION

Lines changed: 7 additions & 7 deletions

```diff
@@ -1,8 +1,8 @@
-@misc{captum2019github,
-  author = {Kokhlikyan, Narine and Miglani, Vivek and Martin, Miguel and Wang, Edward and Reynolds, Jonathan and Melnikov, Alexander and Lunova, Natalia and Reblitz-Richardson, Orion},
-  title = {PyTorch Captum},
-  year = {2019},
-  publisher = {GitHub},
-  journal = {GitHub repository},
-  howpublished = {\url{https://github.com/pytorch/captum}},
+@misc{kokhlikyan2020captum,
+  title={Captum: A unified and generic model interpretability library for PyTorch},
+  author={Narine Kokhlikyan and Vivek Miglani and Miguel Martin and Edward Wang and Bilal Alsallakh and Jonathan Reynolds and Alexander Melnikov and Natalia Kliushkina and Carlos Araya and Siqi Yan and Orion Reblitz-Richardson},
+  year={2020},
+  eprint={2009.07896},
+  archivePrefix={arXiv},
+  primaryClass={cs.LG}
 }
```

README.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -415,7 +415,7 @@
 ## References of Algorithms

 * `IntegratedGradients`, `LayerIntegratedGradients`: [Axiomatic Attribution for Deep Networks, Mukund Sundararajan et al. 2017](https://arxiv.org/abs/1703.01365) and [Did the Model Understand the Question?, Pramod K. Mudrakarta, et al. 2018](https://arxiv.org/abs/1805.05492)
-* `InputXGradient`: [Investigating the influence of noise and distractors on the interpretation of neural networks, Pieter-Jan Kindermans et al. 2016](https://arxiv.org/abs/1611.07270)
+* `InputXGradient`: [Not Just a Black Box: Learning Important Features Through Propagating Activation Differences, Avanti Shrikumar et al. 2016](https://arxiv.org/abs/1605.01713)
 * `SmoothGrad`: [SmoothGrad: removing noise by adding noise, Daniel Smilkov et al. 2017](https://arxiv.org/abs/1706.03825)
 * `NoiseTunnel`: [Sanity Checks for Saliency Maps, Julius Adebayo et al. 2018](https://arxiv.org/abs/1810.03292)
 * `NeuronConductance`: [How Important is a neuron?, Kedar Dhamdhere et al. 2018](https://arxiv.org/abs/1805.12233)
```

captum/_utils/common.py

Lines changed: 12 additions & 10 deletions

```diff
@@ -23,17 +23,17 @@ class ExpansionTypes(Enum):


 def safe_div(
-    denom: Tensor, quotient: Union[Tensor, float], default_value: Tensor
+    numerator: Tensor, denom: Union[Tensor, float], default_value: Tensor
 ) -> Tensor:
     r"""
-    A simple utility function to perform `denom / quotient`
-    if the statement is undefined => result will be `default_value`
+    A simple utility function to perform `numerator / denom`
+    if the statement is undefined => result will be `default_value`
     """
-    if isinstance(quotient, float):
-        return denom / quotient if quotient != 0.0 else default_value
+    if isinstance(denom, float):
+        return numerator / denom if denom != 0.0 else default_value

-    # if quotient is a tensor
-    return denom / torch.where(quotient != 0.0, quotient, default_value)
+    # if denominator is a tensor
+    return numerator / torch.where(denom != 0.0, denom, default_value)


 @typing.overload
```
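The hunk above renames the misnamed `denom`/`quotient` parameters so the first argument is actually the numerator. A minimal standalone sketch of the updated helper, with illustrative demo values:

```python
import torch
from torch import Tensor
from typing import Union


def safe_div(
    numerator: Tensor, denom: Union[Tensor, float], default_value: Tensor
) -> Tensor:
    """Perform `numerator / denom`; where undefined, use `default_value`."""
    if isinstance(denom, float):
        return numerator / denom if denom != 0.0 else default_value
    # if denominator is a tensor, substitute default_value wherever it is zero
    return numerator / torch.where(denom != 0.0, denom, default_value)


num = torch.tensor([6.0, 4.0])
den = torch.tensor([2.0, 0.0])
print(safe_div(num, den, torch.tensor(1.0)))  # tensor([3., 4.])
```

Note the tensor branch divides by `default_value` at the zero positions rather than returning it directly, matching the diff.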
```diff
@@ -418,9 +418,9 @@ def _select_targets(output: Tensor, target: TargetType) -> Tensor:


 def _verify_select_column(
-    output: Tensor, target: Union[int, Tuple[int, ...]]
+    output: Tensor, target: Union[int, Tuple[Union[int, slice], ...]]
 ) -> Tensor:
-    target = cast(Tuple[int, ...], (target,) if isinstance(target, int) else target)
+    target = (target,) if isinstance(target, int) else target
     assert (
         len(target) <= len(output.shape) - 1
     ), "Cannot choose target column with output shape %r." % (output.shape,)
```
```diff
@@ -472,8 +472,10 @@ def _reduce_list(
     val_list[0]. It is assumed that all tuples in the list have the same length
     and red_func can be applied to all elements in each corresponding position.
     """
+    assert len(val_list) > 0, "Cannot reduce empty list!"
     if isinstance(val_list[0], torch.Tensor):
-        return red_func(val_list)
+        first_device = val_list[0].device
+        return red_func([elem.to(first_device) for elem in val_list])
     elif isinstance(val_list[0], bool):
         return any(val_list)
     elif isinstance(val_list[0], tuple):
```
