Hello, I’m Alex Adam

I'm a PhD researcher at the University of Toronto focused on making the deployment of deep learning models safe, robust, and reliable.

Latest Posts

Visualizing Image Similarities

Introduction Unpacking the features learned by a deep convolutional neural network (CNN) is a daunting task. Going through each layer to either visualize filters or features scales p...

Ensemble Robustness to Adversarial Examples

Introduction Last summer I had the pleasure of working with a talented undergraduate researcher named Romain Speciel on a project that looked at how to regularize model ensembles in...

Increasing Interpretability to Improve Model Robustness

Introduction A recent attempt to improve the robustness of convolutional neural networks (CNNs) on image classification tasks has revealed an interesting link between robustness and...

Adversarial Examples: Rethinking the Definition

Introduction Adversarial examples are a major obstacle for a variety of machine learning systems to overcome. Their existence shows the tendency of models to rely on unreliable featu...

Understanding Neural Architecture Search

Introduction For the past couple of years, researchers and companies have been trying to make deep learning more accessible to non-experts by providing access to pre-trained computer...