A scientific understanding of modern deep learning

Monday, August 30, 2021

Join us for our Physics Condensed Matter seminar at 10:00am

Artificial deep neural networks have emerged in recent years as a powerful class of function approximators for a broad range of problems in which we seek to learn from data. Largely due to their model complexity, however, basic questions about how they learn and function remain poorly understood. I will survey parts of a body of recent work in which we tackle these questions. This will include theoretical insights for deep learning based on exactly solvable limits; distilled models that illuminate universal phenomena; and a theoretical understanding of empirically observed "scaling laws." These investigations draw ideas, tools, and inspiration from statistical mechanics, information theory, and many-body theory more broadly, while integrating perspectives from theoretical physics, statistics, and computer science.

https://berkeley.zoom.us/j/97545820986

Location: 
virtual (zoom)
Speaker: 
Yasaman Bahri
Affiliation: 
Google Research