Unsupervised Human Action Recognition with Skeletal Graph Laplacian and Self-Supervised Viewpoints Invariance
This repository provides the PyTorch code for our work, accepted to BMVC 2021 as an oral presentation. It includes the training scripts for the NTU-60 and NTU-120 datasets.
- NumPy
- scikit-learn
- PyTorch
- tqdm
- wandb
The code has been tested with PyTorch 1.6.0, which is therefore the recommended version.
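As a setup sketch, the dependencies listed above can be installed from PyPI under their standard package names. Note that PyTorch 1.6.0 wheels only exist for older Python versions; on a modern interpreter, pick the closest available release from pytorch.org.

```shell
# Install the listed dependencies (standard PyPI package names).
python -m pip install numpy scikit-learn tqdm wandb

# PyTorch 1.6.0 as recommended above; if no wheel matches your Python/CUDA
# combination, use the install selector on pytorch.org instead.
python -m pip install "torch==1.6.0" || echo "No torch==1.6.0 wheel for this environment; see pytorch.org"
```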
For any questions, feel free to contact [email protected]
NTU-60
- Download the zip file containing the raw skeleton data here
- Extract the nturgb+d_skeletons folder contained inside the zip file to "dataset/raw/ntu_60"
- Execute NTU60_dataset_preprocessing.py
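From the repository root, the steps above might look like the following; the zip filename is hypothetical, since the actual name depends on the download source.

```shell
# Create the raw-data directory expected by the preprocessing script
# (path taken from the steps above).
mkdir -p dataset/raw/ntu_60

# Hypothetical archive name -- use whatever the download page provides.
ZIP=nturgbd_skeletons.zip
if [ -f "$ZIP" ]; then
    # The archive contains an nturgb+d_skeletons folder; extract it in place.
    unzip -q "$ZIP" -d dataset/raw/ntu_60
fi

# Run the preprocessing script shipped with this repository.
if [ -f NTU60_dataset_preprocessing.py ]; then
    python NTU60_dataset_preprocessing.py
fi
```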
NTU-120
- Download the zip files containing the raw skeleton data here and here
- Extract the nturgb+d_skeletons folders contained inside the zip files to "dataset/raw/ntu_120"
- Execute NTU120_dataset_preprocessing.py
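The NTU-120 setup is the same, except the raw data ships as two archives that both extract into the same target folder. A sketch with hypothetical zip filenames:

```shell
# Create the raw-data directory expected by the preprocessing script.
mkdir -p dataset/raw/ntu_120

# Two archives -- filenames here are placeholders for whatever the
# download pages provide.
for ZIP in ntu_skeletons_part1.zip ntu_skeletons_part2.zip; do
    if [ -f "$ZIP" ]; then
        # Both archives contribute to the same nturgb+d_skeletons data.
        unzip -q "$ZIP" -d dataset/raw/ntu_120
    fi
done

# Run the NTU-120 preprocessing script from this repository.
if [ -f NTU120_dataset_preprocessing.py ]; then
    python NTU120_dataset_preprocessing.py
fi
```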
@inproceedings{UHAR_BMVC2021,
title={{Unsupervised Human Action Recognition with Skeletal Graph Laplacian and Self-Supervised Viewpoints Invariance}},
author={Paoletti, Giancarlo and Cavazza, Jacopo and Beyan, Cigdem and Del Bue, Alessio},
booktitle={The 32nd British Machine Vision Conference (BMVC)},
year={2021},
}
The software is provided "as is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. In no event shall the authors, PAVIS or IIT be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the software or the use or other dealings in the software.
This project is licensed under the terms of the MIT license.