Introduction to GNNs
Intuition
- Map nodes to d-dimensional embeddings such that similar nodes in the graph are embedded close together
→ How do we learn the mapping function $f$?
- Goal: $\text{similarity}(u, v) \approx z_v^\top z_u$
- Usually, $\text{ENC}(u) = z_u = H_u$, the $u$-th row of an embedding matrix $H \in \mathbb{R}^{|V| \times d}$
- Encoder: maps each node to a low-dimensional vector
- Similarity function: specifies how the relationships in vector space map to the relationships in the original network
- “Shallow” encoding
- Encoder is just an embedding lookup (a minimal sketch follows this list)
- Simplest encoding approach
- Limitations
- $O(|V|)$ parameters are needed (no parameter sharing between nodes; every node has its own unique embedding)
- Inherently “transductive” (cannot generate embeddings for nodes that are not seen during training)
- Do not incorporate node features (nodes in many graphs have features that we can leverage)
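As a concrete sketch of the shallow approach (all names and sizes here are illustrative assumptions, not from the source): the encoder is a row lookup in a free embedding matrix, and the dot product serves as the similarity decoder.

```python
import numpy as np

num_nodes, d = 1000, 64  # illustrative sizes

# Shallow encoder: one free d-dimensional vector per node,
# hence O(|V|) parameters with no sharing between nodes.
H = np.random.randn(num_nodes, d)

def enc(u):
    # Encoding is just an embedding lookup: the u-th row of H
    return H[u]

def similarity_score(u, v):
    # Decoder: dot product z_v^T z_u in embedding space,
    # trained to approximate similarity(u, v) in the graph
    return enc(u) @ enc(v)
```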
Nowadays: Deep Graph Encoders
- $\text{ENC}(v)$: multiple layers of non-linear transformations based on graph structure (sketched below)
- Note: all these deep encoders can be combined with node similarity functions
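To make "multiple layers of non-linear transformations based on graph structure" concrete, here is a minimal sketch of one message-passing layer (mean aggregation over neighbors is an assumed design choice; names and shapes are illustrative):

```python
import numpy as np

def gnn_layer(A, H, W):
    """One deep-encoder layer.

    A: (n, n) adjacency matrix; H: (n, d_in) current node embeddings;
    W: (d_in, d_out) learned weight matrix.
    """
    A_hat = A + np.eye(A.shape[0])          # self-loops: a node keeps its own signal
    deg = A_hat.sum(axis=1, keepdims=True)  # degree of each node (with self-loop)
    H_agg = (A_hat @ H) / deg               # mean over each node's neighborhood
    return np.maximum(H_agg @ W, 0.0)       # linear transform + ReLU non-linearity
```

Stacking several such layers yields $\text{ENC}(v)$: each additional layer mixes in information from one hop further out in the graph.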
Why Are Graphs Hard?
- No fixed node ordering or reference point (see the sketch after this list)
- Graphs are often dynamic and have multimodal features
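A quick illustration of the first point: relabeling the nodes of the same graph changes its adjacency matrix to $PAP^\top$ for a permutation matrix $P$, so any model that consumes $A$ directly sees a different input (the 3-node example below is illustrative):

```python
import numpy as np

# Path graph 0 - 1 - 2 under one node ordering
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

# Permutation matrix relabeling the nodes as (1, 2, 0)
P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])

A_perm = P @ A @ P.T              # same graph, different node ordering
print(np.array_equal(A, A_perm))  # False: the representation changed
```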
Deep Learning for Graphs
A Naive Approach
- Join the adjacency matrix and node features
- Feed them into a deep neural net (see the sketch after this list)
- Issues
- $O(|V|)$ parameters
- Not applicable to graphs of different sizes
- Sensitive to node ordering
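A minimal sketch of this naive approach, to make the issues concrete (all names and sizes are illustrative assumptions): concatenating each row of $A$ with that node's features fixes the input width at $|V| + d$, so the weight shapes hard-code the graph size and the output depends on how the nodes happen to be ordered.

```python
import numpy as np

n, d, hidden = 5, 3, 16                  # illustrative sizes
A = np.random.randint(0, 2, (n, n))      # adjacency matrix
X = np.random.randn(n, d)                # node features

inp = np.concatenate([A, X], axis=1)     # (n, n + d): input width depends on |V|

# One-hidden-layer MLP; W1's first dimension hard-codes the graph size,
# which is where the O(|V|) parameters and the size restriction come from.
W1 = np.random.randn(n + d, hidden)
W2 = np.random.randn(hidden, 1)
scores = np.maximum(inp @ W1, 0.0) @ W2  # per-node scores, sensitive to node ordering
```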
Idea: Convolutional Networks
The goal is to generalize convolutions beyond simple lattices and to leverage node features/attributes (e.g., text, images).
Permutation Invariance/Equivariance