About this course


In this course we cover the major theoretical concepts underlying Graph Neural Networks, presented as simply and efficiently as possible. By the end, you should have a firm grasp of the most relevant topics and be equipped to read and understand foundational papers in the field.

We cover a lot of ground over 7 videos: core concepts like the Graph Laplacian are introduced and built upon to work toward an understanding of convolutions on graphs, which underlie many of the foundational GNNs. We also introduce the symmetry-first perspective from Geometric Deep Learning and analyze the representational capacity of GNNs. Each topic will also point you to a curated set of references so you can go deeper if you choose.
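As a small preview of the first core concept, here is a minimal sketch of the combinatorial Graph Laplacian, L = D - A, for a toy graph; its eigendecomposition gives the graph Fourier basis that spectral convolutions on graphs build on. The specific graph and variable names below are illustrative only and are not course material.

```python
# Minimal, illustrative sketch (assumed toy example, not course code):
# the combinatorial Graph Laplacian L = D - A of a small undirected graph.
import numpy as np

# Adjacency matrix of a 4-node path graph: 0 - 1 - 2 - 3
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
])

D = np.diag(A.sum(axis=1))  # degree matrix
L = D - A                   # Graph Laplacian

# The eigenvectors of L form the "graph Fourier basis" that spectral
# convolutions on graphs are defined with respect to.
eigenvalues, eigenvectors = np.linalg.eigh(L)
print(eigenvalues)  # the eigenvalue 0 appears once because the graph is connected
```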


Enroll

Note that this includes the Theory videos only. If you'd like to also enroll in the Hands-on section, see the full Introduction to GNNs course.

GNNs as the "Next Big Thing"


The topic of Graph Representation Learning has been exploding in popularity, but it's still relatively early days: in recent years, roughly 10-20% of papers published at top conferences have been on the topic of ML on graphs. Despite this popularity in the research community, these methods are just beginning to gain traction in industry as the toolsets mature (e.g., DeepMind's travel time estimation in Google Maps). For those looking to be at the bleeding edge, this is a wonderful time to jump in.


Keep in Touch

Join the newsletter to get course updates


Hi, I'm Zak


I am an Applied Scientist at a FAANG company who specializes in building systems supporting GNNs in industry. I also run the WelcomeAIOverlords YouTube channel, a Discord community, and a blog. This course uses the same approach of explaining things simply, but goes much deeper into the theory and applications of GNNs.