Presented by: David Balduzzi (Victoria University of Wellington)
Date: Mon 31 Jul, 2:00 pm - 3:00 pm
Venue: 05-213

Deep learning has yielded spectacular breakthroughs, recently surpassing human performance in object recognition, Atari games and the board game Go. In this talk I will give a high-level overview of the mathematical foundations of deep learning, focusing on how gradients are computed and controlled. I will discuss the problems of vanishing, exploding and shattering gradients, and techniques that have been developed to ameliorate them in very deep networks. Finally, time permitting, I will discuss convergence guarantees for neural nets.
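
As a concrete illustration of the vanishing/exploding gradient problem mentioned in the abstract (my sketch, not material from the talk): in a deep linear chain, the gradient reaching the first layer is a product of transposed weight matrices, so its norm grows or shrinks roughly geometrically with depth depending on the weight scale. The depth, width, and scale values below are illustrative choices, and gradient clipping is shown as one standard mitigation.

```python
# Minimal NumPy sketch: backpropagate a gradient through a deep chain of
# random linear layers and watch its norm vanish or explode with depth.
import numpy as np

rng = np.random.default_rng(0)
depth, width = 50, 64  # illustrative values

def backpropagated_norm(scale):
    """Norm of the gradient after backpropagating through `depth` linear
    layers whose weights have standard deviation scale/sqrt(width)."""
    g = rng.standard_normal(width)  # gradient of the loss at the output
    for _ in range(depth):
        W = scale * rng.standard_normal((width, width)) / np.sqrt(width)
        g = W.T @ g  # one backward step: multiply by the layer Jacobian
    return np.linalg.norm(g)

print(f"scale 0.9 -> {backpropagated_norm(0.9):.2e}  (vanishing)")
print(f"scale 1.1 -> {backpropagated_norm(1.1):.2e}  (exploding)")

# Gradient clipping, one common guard against exploding gradients:
# rescale the gradient whenever its norm exceeds a threshold.
def clip_by_norm(g, max_norm=1.0):
    n = np.linalg.norm(g)
    return g if n <= max_norm else g * (max_norm / n)
```

Since each layer multiplies the gradient norm by roughly the weight scale, even a scale slightly below or above 1 compounds to a vanishing or exploding gradient over 50 layers, which is why very deep networks rely on careful initialisation and architectural fixes such as skip connections.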