Hello, I’m Alex Adam

I'm a PhD researcher at the University of Toronto focused on making the deployment of deep learning models safe, robust, and reliable.

Latest Posts

Early Stopping and Its Faults

Introduction Since the last few posts have been a bit theoretical, I thought it might be useful to switch to something more practical. Early stopping is a strategy...

Universal Approximation Theorem - Part 3

Introduction It’s been too long since my last post, so expect to see a chain of new posts in the coming months. Back to the UAT madness that I started looking at last year. In part...

Universal Approximation Theorem - Part 2

Introduction In today’s post, we look at fitting a quartic function. Furthermore, we’re going to take a look at how far away we have to move from our training dataset in order for o...

Universal Approximation Theorem - Part 1

Introduction Early on in my deep learning studies, I heard something along the lines of: neural networks are universal function approximators. This made me really excited about the ...