About this course
In this course we cover the major theoretical concepts underlying Graph Neural Networks (GNNs), aiming to get you to comprehension as efficiently and simply as possible. By the end, you should have a firm grasp of the most relevant topics and be equipped to read and understand foundational papers in the field.
We cover a lot of ground over 7 videos: core concepts like the Graph Laplacian are introduced and built upon to work toward an understanding of convolutions on graphs, which underlie many of the foundational GNNs. We also introduce the symmetry-first perspective from Geometric Deep Learning and analyze the representational capacity of GNNs. Each topic also points you to a curated set of references so you can go deeper if you choose.
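To give a flavor of the kind of object the course builds on, here is a minimal sketch (not course material, just an illustration) of the combinatorial Graph Laplacian L = D − A for a small undirected graph, using NumPy:

```python
import numpy as np

# Adjacency matrix of a path graph on 4 nodes: 0 - 1 - 2 - 3
# (an example graph chosen for illustration)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))   # degree matrix: node degrees on the diagonal
L = D - A                    # combinatorial graph Laplacian

# L is symmetric positive semidefinite: all eigenvalues are >= 0,
# and the smallest eigenvalue is 0 (its eigenvector is constant
# on each connected component).
eigenvalues = np.linalg.eigvalsh(L)
print(np.round(eigenvalues, 4))
```

The eigenvectors of this matrix play the role of Fourier modes on the graph, which is the starting point for defining convolutions on graphs.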