Hallucinating Optical Flow Features for Video Classification

by Yongyi Tang, Lin Ma and Lianqiang Zhou. Accepted by IJCAI 2019.

Introduction

Extracting motion information, specifically in the form of optical flow features, is extremely computationally expensive. We propose a motion hallucination network to imagine the optical flow features from the appearance features for video classification. For more details, please refer to our paper.


Overview

This repository contains the trained models and the features reported in our paper on the Kinetics-400 dataset.

Sample code

Run the example code

$ python extract_monet_features.py

With the default flags, this builds the I3D-MoNet model, which takes a video segment as input. Replace the `feed_dict` at line 45 with your own video input; the session then runs and returns the corresponding hallucinated motion features.
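As a sketch of how a video segment could be prepared for the model's `feed_dict`, the snippet below builds a batched, normalized clip with NumPy. The shape `(batch, frames, height, width, 3)` and the `[-1, 1]` value range are assumptions typical of I3D-style inputs; check `extract_monet_features.py` for the exact placeholder shape your checkpoint expects. The helper name `prepare_segment` is hypothetical.

```python
import numpy as np

def prepare_segment(frames, num_frames=64):
    """Prepare a video segment as model input.

    frames: uint8 array of shape (T, H, W, 3), e.g. decoded RGB frames.
    Returns a float32 array of shape (1, num_frames, H, W, 3) in [-1, 1].
    """
    clip = frames[:num_frames].astype(np.float32)
    clip = clip * 2.0 / 255.0 - 1.0   # rescale pixel values to [-1, 1]
    return clip[np.newaxis]           # add the batch dimension

# Example with a dummy 64-frame 224x224 clip:
dummy = np.zeros((64, 224, 224, 3), dtype=np.uint8)
batch = prepare_segment(dummy)
# `batch` would then be the value supplied in the script's feed_dict, e.g.
#   sess.run(monet_features, feed_dict={rgb_input: batch})
```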

The corresponding checkpoints trained on the Kinetics-400 dataset can be downloaded from Google Drive or Weiyun.

Loading pre-extracted features

We provide the I3D-rgb, I3D-flow, and MoNet-flow features of the Kinetics-400 dataset in the form of tfrecords.

Run the example code for loading the tfrecords after modifying the file paths.

$ python feature_reader.py
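One way to manage the path modification is to collect the tfrecord shard paths programmatically before handing them to the reader. This is only a sketch: the directory layout, the `feature_root` location, and the per-feature subdirectory names are hypothetical, so adjust them to match where you unpacked the downloaded features.

```python
import glob
import os

# Hypothetical layout: one subdirectory per feature type, shards named *.tfrecord.
feature_root = "/data/kinetics400_features"   # change to your download location
feature_types = ["i3d_rgb", "i3d_flow", "monet_flow"]

def list_shards(root, feature_type):
    """Return the sorted tfrecord shard paths for one feature type."""
    pattern = os.path.join(root, feature_type, "*.tfrecord")
    return sorted(glob.glob(pattern))
```

The resulting list can then be substituted for the hard-coded file paths in `feature_reader.py`.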

Citation

@InProceedings{tang2019hallucinating,
  author = {Tang, Yongyi and Ma, Lin and Zhou, Lianqiang},
  title = {Hallucinating Optical Flow Features for Video Classification},
  booktitle = {IJCAI},
  year = {2019}
}

Credits

Part of the code is adapted from the kinetics-i3d repository.
