- TT compression for DLRM: https://proceedings.mlsys.org/paper/2021/hash/979d472a84804b9f647bc185a877a8b5-Abstract.html
- https://csc.mpi-magdeburg.mpg.de/mpcsc/benner/publications_fiona/talks/2021/Benner-IPAM-2021.pdf
- https://arxiv.org/pdf/2011.06532.pdf
- High-performance graph sampling for GNN training
- https://arxiv.org/pdf/2009.06693.pdf
- https://arxiv.org/pdf/2203.10983.pdf
- Model compression for neural networks: Tensorizing Neural Networks.
- Applications in DLRM, language models (?), and edge computing
- Implications for parallelism, since the TT factorization effectively increases the depth of the network
- Tensor-train times dense matrix multiplication as a computational primitive?
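To make the last point concrete, here is a minimal NumPy sketch of the TT-times-dense primitive: multiplying a TT-format matrix by a dense vector. It assumes the standard TT-matrix core layout (cores of shape `(r_k, m_k, n_k, r_{k+1})`, as in Tensorizing Neural Networks); the shapes and names are illustrative, not taken from any of the papers above.

```python
import numpy as np

def tt_matvec(cores, x):
    """y = W @ x where W is stored as TT-matrix cores.

    cores[k] has shape (r_k, m_k, n_k, r_{k+1}) with r_0 = r_d = 1;
    the represented matrix W has shape (prod(m_k), prod(n_k)).
    """
    ns = [c.shape[2] for c in cores]
    # One axis per input mode, plus a leading rank axis of size 1.
    t = x.reshape([1] + ns)
    for k, core in enumerate(cores):
        # Invariant: t has axes (m_0..m_{k-1}, r_k, n_k, ..., n_{d-1}).
        # Contract the rank axis and the k-th input axis with the core.
        t = np.tensordot(t, core, axes=([k, k + 1], [0, 2]))
        # tensordot appends (m_k, r_{k+1}); move them into position.
        t = np.moveaxis(t, -2, k)      # m_k goes after the earlier m's
        t = np.moveaxis(t, -1, k + 1)  # new rank axis right after it
    return t.reshape(-1)               # trailing rank axis has size 1

def tt_to_dense(cores):
    """Reconstruct the full matrix (for testing only; exponentially big)."""
    full = cores[0]
    for core in cores[1:]:
        full = np.einsum('amnr,rpqs->ampnqs', full, core)
        a, m, p, n, q, s = full.shape
        full = full.reshape(a, m * p, n * q, s)
    return full[0, :, :, 0]

rng = np.random.default_rng(0)
ms, ns, ranks = [2, 3, 4], [3, 2, 5], [1, 4, 4, 1]
cores = [rng.standard_normal((ranks[k], ms[k], ns[k], ranks[k + 1]))
         for k in range(3)]
x = rng.standard_normal(int(np.prod(ns)))
print(np.allclose(tt_matvec(cores, x), tt_to_dense(cores) @ x))  # True
```

The contraction touches each core once, so the cost scales with the TT ranks rather than with `prod(m_k) * prod(n_k)`, which is what makes this attractive as a computational primitive.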
- SIGN (Scalable Inception Graph Neural Networks)
- OGB large-scale challenge
- Organizational meeting
- Hyperbolic embeddings
- And here's a good introduction to the paper from the authors: https://dawn.cs.stanford.edu/2018/03/19/hyperbolics/
- You should only need to understand the Poincaré disk, which the above resources cover (see the distance-function sketch after this list). But if you want some more background on hyperbolic geometry, this book chapter is good: http://library.msri.org/books/Book31/files/cannon.pdf
- For those curious to know more about hyperbolic neural networks, here are some more papers on this topic:
- Hyperbolic NNs: https://arxiv.org/abs/1805.09112
- Fully hyperbolic NNs, i.e. no tangent space projection: https://arxiv.org/abs/2105.14686
- Hyperbolic attention networks. I haven't read this yet, but it would be remiss of me not to at least mention something about attention: https://arxiv.org/abs/1805.09786
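As promised above, a minimal sketch of the geodesic distance on the Poincaré ball, the quantity these embedding methods optimize; the points used in the demo are illustrative.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between points u, v inside the unit Poincare ball."""
    sq_diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_diff / max(denom, eps))

origin, near = np.array([0.0, 0.0]), np.array([0.1, 0.0])
a, b = np.array([0.98, 0.0]), np.array([0.99, 0.0])
print(poincare_distance(origin, near))  # ~0.20, close to Euclidean
print(poincare_distance(a, b))          # ~0.70 for Euclidean distance 0.01
```

Distances blow up near the boundary, which is what lets the disk embed trees with low distortion: the volume at a fixed hyperbolic radius grows exponentially, matching the fan-out of a tree.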
- Equivariant GNNs:
- https://arxiv.org/abs/2102.09844
- AlphaFold2 or the Baker Lab variant:
- https://www.nature.com/articles/s41586-021-03819-2
- https://science.sciencemag.org/content/early/2021/07/14/science.abj8754.full
- Graphs, simplicial complexes, and hypergraphs for data modeling: https://arxiv.org/abs/2006.02870
- and hypergraph learning: https://vision.cornell.edu/se3/wp-content/uploads/2014/09/icml06.pdf
- A General Graph Neural Network Framework for Link Prediction: https://arxiv.org/pdf/2106.06935.pdf
- Path Problems in Networks: https://user.eng.umd.edu/~baras/publications/Books/S00245ED1V01Y201001CNT003.pdf
- Ensemble learning
- Graph Neural Tangent Kernels: https://openreview.net/pdf/dd6097df468d83341c8f74f3a83470866d994965.pdf
- Weisfeiler-Leman Heuristic and associated Graph Kernels: https://www.jmlr.org/papers/volume12/shervashidze11a/shervashidze11a.pdf
- Graph Attention Networks: https://arxiv.org/abs/1710.10903
- Topological Graph Neural Networks: https://arxiv.org/pdf/2102.07835.pdf
- MSA Transformer: https://www.biorxiv.org/content/10.1101/2021.02.12.430858v1.full.pdf
- Learning from Protein Structure with Geometric Vector Perceptrons: https://openreview.net/forum?id=1YLJDvSx6J4
- How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks https://openreview.net/forum?id=UH-cmocLJC
- Solving a series of successively harder DP problems with GNNs (see the alignment sketch after this list):
- Needleman-Wunsch on pairs of reads
- Smith-Waterman on pairs of reads
- Sequence to graph alignment (potentially useful for pangenomes)
- Many-to-many sequence alignment
- Assembly on error-free reads
- Assembly on erroneous reads
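For reference, a minimal sketch of the first problem in this curriculum, Needleman-Wunsch global alignment scoring. The +1/-1/-1 scoring parameters are the usual textbook choices, not anything prescribed by the reading.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score via the Needleman-Wunsch DP recurrence."""
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j].
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,  # (mis)match
                           dp[i - 1][j] + gap,      # gap in b
                           dp[i][j - 1] + gap)      # gap in a
    return dp[n][m]

print(needleman_wunsch("GATTACA", "GCATGCU"))  # 0 with these parameters
```

Smith-Waterman, the next step, is the same recurrence clamped at zero, with the answer being the maximum over the whole table rather than the corner cell.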
- https://www.reddit.com/r/MachineLearning/comments/kqazpd/d_why_im_lukewarm_on_graph_neural_networks/
- https://towardsdatascience.com/predictions-and-hopes-for-graph-ml-in-2021-6af2121c3e3d
- Combining Label Propagation and Simple Models Out-performs Graph Neural Networks (a propagation sketch appears at the end of these notes): https://openreview.net/forum?id=8E1-f3VhX1o
- Discussion on GNNs vs CNNs, Transformers vs GNNs, and whether we need any inductive bias.
- Discussion on whether the test cases in the Correct&Smooth paper are too simple.
- Discussion on whether the proposed C&S model is any easier to tune and/or run compared to GNNs.
- We also talked about issues with the Reddit post author's understanding of the topic
- Paper for potential future reading: https://arxiv.org/pdf/1806.01261.pdf
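As referenced above, a minimal sketch of the propagation at the heart of Correct & Smooth. This shows only the smoothing step, with a dense adjacency matrix for brevity; the paper also propagates residual errors (the "correct" step) and uses sparse operations, and the alpha and iteration count here are illustrative.

```python
import numpy as np

def smooth(adj, y0, alpha=0.8, iters=50):
    """Diffuse base predictions over the graph (the 'smooth' step of C&S).

    adj: (n, n) adjacency matrix; y0: (n, c) initial class scores,
    e.g. from a linear model, with labeled nodes set to their labels.
    """
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    s = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]  # D^-1/2 A D^-1/2
    y = y0.copy()
    for _ in range(iters):
        y = (1 - alpha) * y0 + alpha * (s @ y)  # fixed-point iteration
    return y

# Toy 4-node path graph: node 0 labeled class 0, node 3 labeled class 1.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
y0 = np.array([[1, 0], [0, 0], [0, 0], [0, 1]], dtype=float)
print(smooth(adj, y0).argmax(axis=1))  # [0 0 1 1]
```

This is part of why the paper's pipeline is easy to tune: the only learned component is the base predictor producing y0, and everything after that is a fixed diffusion.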