<p>
  <img src="imgs/fig.png" width="1000">
  <br />
</p>

<hr>

<h1> GraphMAE: Self-Supervised Masked Graph Autoencoders </h1>

CogDL implementation for the KDD'22 paper: [GraphMAE: Self-Supervised Masked Graph Autoencoders](https://arxiv.org/abs/2205.10803).

We also have a [Chinese blog](https://zhuanlan.zhihu.com/p/520389049) about GraphMAE on Zhihu (知乎) and an [English blog](https://medium.com/p/7a641f8c66d0#4fae-bff62a5b8b4b) on Medium.

GraphMAE is a generative self-supervised graph learning method that achieves competitive or better performance than existing contrastive methods on tasks including *node classification*, *graph classification*, and *molecular property prediction*.

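At its core, GraphMAE masks a subset of node features and trains the model to reconstruct them, scoring reconstructions with a scaled cosine error (SCE). A minimal numpy sketch of that loss on toy data (the features, mask, and `gamma` value here are illustrative; the repo implements this with GNN encoders and decoders in PyTorch):

```python
import numpy as np

def sce_loss(x, x_hat, gamma=2.0):
    """Scaled cosine error: mean of (1 - cos(x_i, x_hat_i))**gamma
    over the masked nodes, as described in the GraphMAE paper."""
    x_n = x / np.linalg.norm(x, axis=1, keepdims=True)
    y_n = x_hat / np.linalg.norm(x_hat, axis=1, keepdims=True)
    cos_sim = (x_n * y_n).sum(axis=1)
    return ((1.0 - cos_sim) ** gamma).mean()

rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 4))                           # toy node features
mask = np.array([True, False, True, False, True, False])  # nodes to mask
corrupted = feats.copy()
corrupted[mask] = 0.0  # stand-in for the learnable [MASK] token
# A real encoder/decoder pair (e.g. GAT) would map `corrupted` back to
# features; here we just perturb the originals to exercise the loss.
recon = feats + 0.1 * rng.normal(size=feats.shape)
loss = sce_loss(feats[mask], recon[mask])
```

Perfect reconstruction drives the loss to zero, and the exponent `gamma` down-weights nodes that are already easy to reconstruct.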
<p>
  <img src="imgs/compare.png" width="520"><img src="imgs/ablation.jpg" width="270">
  <br />
</p>
<h2>Dependencies </h2>

* Python >= 3.7
* [Pytorch](https://pytorch.org/) >= 1.9.0
* [cogdl](https://github.com/THUDM/cogdl) >= 0.5.3
* pyyaml == 5.4.1

<h2>Quick Start </h2>

For a quick start, run the provided scripts:

**Node classification**

```bash
sh scripts/run_transductive.sh <dataset_name> <gpu_id>  # for transductive node classification
# example: sh scripts/run_transductive.sh cora/citeseer/pubmed/ogbn-arxiv 0
sh scripts/run_inductive.sh <dataset_name> <gpu_id>  # for inductive node classification
# example: sh scripts/run_inductive.sh reddit/ppi 0

# Or run the code manually:
# for transductive node classification
python main_transductive.py --dataset cora --encoder gat --decoder gat --seed 0 --device 0
# for inductive node classification
python main_inductive.py --dataset ppi --encoder gat --decoder gat --seed 0 --device 0
```

Supported datasets:

* transductive node classification: `cora`, `citeseer`, `pubmed`, `ogbn-arxiv`
* inductive node classification: `ppi`, `reddit`

Run the provided scripts, or add `--use_cfg` to the command, to reproduce the reported results.


**Graph classification**

```bash
sh scripts/run_graph.sh <dataset_name> <gpu_id>
# example: sh scripts/run_graph.sh mutag/imdb-b/imdb-m/proteins/... 0

# Or run the code manually:
python main_graph.py --dataset IMDB-BINARY --encoder gin --decoder gin --seed 0 --device 0
```

Supported datasets:

- `IMDB-BINARY`, `IMDB-MULTI`, `PROTEINS`, `MUTAG`, `NCI1`, `REDDIT-BINARY`, `COLLAB`

Run the provided scripts, or add `--use_cfg` to the command, to reproduce the reported results.
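
For graph-level tasks, the learned node embeddings are pooled into one vector per graph (a readout) before a downstream classifier is applied. A toy numpy sketch of a mean-pooling readout (the function name and data are illustrative, not the repo's API):

```python
import numpy as np

def mean_readout(node_emb, graph_ids):
    """Pool node embeddings into one vector per graph by averaging
    the rows that share the same graph id."""
    n_graphs = graph_ids.max() + 1
    out = np.zeros((n_graphs, node_emb.shape[1]))
    counts = np.bincount(graph_ids, minlength=n_graphs)
    np.add.at(out, graph_ids, node_emb)  # scatter-add rows by graph id
    return out / counts[:, None]

# two toy graphs: nodes 0-1 belong to graph 0, node 2 to graph 1
emb = np.array([[1.0, 1.0], [3.0, 3.0], [2.0, 0.0]])
gid = np.array([0, 0, 1])
pooled = mean_readout(emb, gid)  # → [[2., 2.], [2., 0.]]
```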


<h1> Citing </h1>

If you find this work helpful to your research, please consider citing our paper:

```
@inproceedings{hou2022graphmae,
  title={GraphMAE: Self-Supervised Masked Graph Autoencoders},
  author={Hou, Zhenyu and Liu, Xiao and Cen, Yukuo and Dong, Yuxiao and Yang, Hongxia and Wang, Chunjie and Tang, Jie},
  booktitle={Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
  pages={594--604},
  year={2022}
}
```