Anonymal makes manual video anonymization fast and easy: scrub through a video in near-real-time and draw bounding boxes around the objects you'd like to anonymize, controlled primarily with the keyboard. Existing software tools try to take this process entirely out of the user's hands, but they often miss important details and are tuned to specific anonymization targets, while a fully manual approach using video-editing software is highly time-consuming.
This repo and method were developed by Eric Zelikman and Xindi Wu, based closely on the CVPR 2019 paper Fast Online Object Tracking and Segmentation: A Unifying Approach by
Qiang Wang*, Li Zhang*, Luca Bertinetto*, Weiming Hu, Philip H.S. Torr
CVPR 2019
[Paper] [Video] [Project Page]
Functionally, Anonymal aims to make video anonymization accessible and straightforward. To do this, it:
- Provides a user interface for the video and blurring procedure, allowing the selection of zero to many objects at different times, as necessary.
- Blurs the tracked regions, efficiently and losslessly storing the data.
- Automatically generates a new blurred video alongside polygonal metadata about the blurred areas.
- Contains scripts for simple batch video processing to reduce friction.
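To give a sense of the blurring step, here is a minimal, pure-NumPy sketch of box-blurring a bounding-box region of a frame. This is a conceptual illustration only: Anonymal itself tracks the region with SiamMask and blurs the tracked area, and the function name and parameters here are illustrative, not the repo's actual API.

```python
import numpy as np

def blur_box(frame, x0, y0, x1, y1, k=15):
    """Box-blur the region [y0:y1, x0:x1] of an H x W x 3 uint8 frame.

    A pure-NumPy stand-in for the blurring step described above; a
    conceptual sketch, not Anonymal's actual implementation.
    """
    region = frame[y0:y1, x0:x1].astype(np.float32)
    # Separable box filter: average over a k x k neighborhood.
    kernel = np.ones(k, dtype=np.float32) / k
    for axis in (0, 1):
        region = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, region)
    out = frame.copy()
    out[y0:y1, x0:x1] = region.astype(np.uint8)
    return out

frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
anonymized = blur_box(frame, 40, 30, 100, 90)
```

Pixels outside the box are untouched, so repeated passes over the same frame only ever degrade the selected regions.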
If you find this code useful, please consider citing:
@misc{anonymal2020,
  title={Anonymal},
  author={Zelikman, Eric and Wu, Xindi},
  booktitle={GitHub},
  year={2020}
}
In addition, the implementation and method are based closely on the SiamMask paper by Wang et al. (2019), so please consider citing:
@inproceedings{wang2019fast,
  title={Fast online object tracking and segmentation: A unifying approach},
  author={Wang, Qiang and Zhang, Li and Bertinetto, Luca and Hu, Weiming and Torr, Philip HS},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  year={2019}
}
This code has been tested on Ubuntu 16.04 with Python 3.6, PyTorch 0.4.1, and CUDA 11.1.
- Clone the repository
git clone https://github.com/ezelikman/Anonymal.git && cd Anonymal
export Anonymal=$PWD
- Setup python environment
conda create -n anonymal python=3.6
source activate anonymal
pip install -r requirements.txt
bash make.sh
- Add the project to your PYTHONPATH
export PYTHONPATH=$PWD:$PYTHONPATH
- Download the SiamMask model
cd $Anonymal/experiments/siammask_sharp
wget http://www.robots.ox.ac.uk/~qwang/SiamMask_VOT.pth
wget http://www.robots.ox.ac.uk/~qwang/SiamMask_DAVIS.pth
- Run `video_cleaner.py` to clean the video. Hit `c` to play the video in real-time, and `a` and `d` to go back or ahead 48 frames. To enter anonymization mode, hit `space` at any point. Draw a bounding box and hit `space` or `return` (`enter`) to register the bounding box. When you are done selecting bounding boxes, hit `escape` and then `a`, `escape`, or `d` depending on whether you want to start anonymizing from a few frames before the current frame, the current frame, or a few frames after the current frame. When anonymization is running, you can hit `c` to clear the current selection and play the video normally, or `space` to choose a new one. `video_cleaner.py` will save lossless images to the target directory in real-time: this allows you to easily go back and forth through the video without losing your changes (unless you choose to do so), and preserves full resolution.
cd $Anonymal/experiments/siammask_sharp
export PYTHONPATH=$PWD:$PYTHONPATH
python ../../tools/video_cleaner.py --resume SiamMask_DAVIS.pth --config config_davis.json --base_path 'demo.mp4' --target_path 'demo/'
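The key bindings above can be sketched as a simple dispatch table. This is a simplified illustration of the control scheme, not the repo's actual event loop; the handler and state names are hypothetical, while the keys and the 48-frame skip size come from the description above.

```python
# Simplified sketch of the video_cleaner.py keyboard controls.
# The state dict and handle_key function are illustrative only.
FRAME_SKIP = 48  # jump size for the a/d keys, per the README

def handle_key(key, state):
    """Update playback state for one keypress and return it."""
    if key == 'c':        # toggle real-time playback / clear selection
        state['playing'] = not state['playing']
    elif key == 'a':      # jump back 48 frames (clamped at the start)
        state['frame'] = max(0, state['frame'] - FRAME_SKIP)
    elif key == 'd':      # jump ahead 48 frames
        state['frame'] += FRAME_SKIP
    elif key == ' ':      # enter anonymization (selection) mode
        state['selecting'] = True
    return state

state = {'frame': 100, 'playing': False, 'selecting': False}
state = handle_key('d', state)  # frame becomes 148
state = handle_key('a', state)  # frame back to 100
```

A table like this keeps the controls in one place, which is handy if you want to rebind keys.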
- (Tip: You can press `escape` repeatedly in selection mode, without selecting targets, to step through the video frame by frame. In addition, to cancel a selection while in selection mode, you can use `escape + s`.)
- Run `video_writer.py` to combine the images and the original video into a new video, if you chose not to do so when prompted at the end of the `video_cleaner` run.
cd $Anonymal/experiments/siammask_sharp
export PYTHONPATH=$PWD:$PYTHONPATH
python ../../tools/video_writer.py --base_path 'demo.mp4' --target_path 'demo/'
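One detail that matters when stitching the saved frames back into a video is ordering: frame filenames must be sorted numerically, not lexicographically, or `frame_10` lands before `frame_2`. A small stdlib-only sketch of that ordering step (the filenames here are hypothetical examples, not necessarily Anonymal's naming scheme, and the actual video encoding is left to a video library):

```python
import re

# Hypothetical per-frame image names as a cleaner run might emit them.
frames = ["frame_10.png", "frame_2.png", "frame_1.png"]

def frame_index(name):
    """Extract the numeric frame index embedded in a filename."""
    match = re.search(r"(\d+)", name)
    return int(match.group(1)) if match else -1

# Numeric sort restores playback order; a plain lexicographic sort
# would yield frame_1, frame_10, frame_2.
ordered = sorted(frames, key=frame_index)
```

Zero-padding the indices at write time (e.g. `frame_000001.png`) sidesteps the problem entirely.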
Licensed under an MIT license.