Learning in real brains and artificial neural networks

25 October 2019

Jonathan Bloom
Institute Scientist
Broad Institute

Abstract

Learning is a high-dimensional dynamical process in both real brains and artificial neural networks. I'll first introduce how topology can help us "visualize" the dynamics of representation learning in the context of principal component analysis (PCA). I'll then relate PCA models in artificial and "real" neural networks, namely linear autoencoders and Oja's rule for Hebbian learning. More originally, I'll explain how penalizing large "synaptic" weights deepens this relationship, makes the learned representation more meaningful, and gives rise to new PCA algorithms. I'll apply this theory to predictive deep neural networks architected to respect the unidirectionality of actual neurons – with feedforward weights for prediction and separate feedback weights for learning – thinking of the loops between layers as autoencoders. Maximizing the flow of information while minimizing weights aligns the feedforward and feedback weights such that, empirically, backpropagation of errors through the feedback channel supports image classification at the scale of modern benchmarks. Indeed, we are now finding that a small set of bio-inspired primitive networks forms a grammar for local rules that dynamically align weights to drive learning. I'll explain these math concepts, including neural networks, through accessible examples and pictures on the board.
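
As a rough illustration of the PCA connection described above (a minimal NumPy sketch, not material from the talk; the data, learning rate, and variable names are illustrative assumptions), the snippet below runs Oja's Hebbian rule on synthetic data and checks that the learned weight vector lines up with the top principal component:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic centered data with one dominant direction of variance.
    n, d = 2000, 5
    X = rng.normal(size=(n, d)) @ np.diag([3.0, 1.0, 0.5, 0.3, 0.1])
    X -= X.mean(axis=0)

    # Oja's rule: a single Hebbian neuron whose weight vector converges,
    # for a suitable learning rate, to the top principal direction.
    w = rng.normal(size=d)
    w /= np.linalg.norm(w)
    eta = 0.01
    for epoch in range(20):
        for x in X:
            y = w @ x
            w += eta * y * (x - y * w)  # Hebbian growth plus a decay that keeps ||w|| near 1

    # Compare with the top principal direction computed by SVD (PCA).
    top_pc = np.linalg.svd(X, full_matrices=False)[2][0]
    print("|cos(Oja weight, top PC)| =", abs(w @ top_pc) / np.linalg.norm(w))

A linear autoencoder trained on reconstruction error learns the same principal subspace; the abstract's point about penalizing large weights is that the penalty makes the individual learned directions, not just the subspace, meaningful.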
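
The separation of feedforward and feedback weights mentioned above can likewise be sketched in a few lines. The toy example below is an illustrative assumption rather than the talk's model: a two-layer network in which the error is routed backward through a separate feedback matrix B instead of the transpose of the forward weights, with a small weight penalty standing in for the "minimizing weights" idea, and a final report of how strongly the forward weights align with the feedback weights.

    import numpy as np

    rng = np.random.default_rng(1)

    # Tiny regression task: learn a linear teacher with a tanh hidden layer.
    d_in, d_h, d_out, n = 10, 32, 1, 512
    X = rng.normal(size=(n, d_in))
    Y = X @ rng.normal(size=(d_in, d_out))

    W1 = 0.1 * rng.normal(size=(d_in, d_h))   # feedforward weights, layer 1
    W2 = 0.1 * rng.normal(size=(d_h, d_out))  # feedforward weights, layer 2
    B  = 0.1 * rng.normal(size=(d_out, d_h))  # separate feedback weights (not W2.T)

    lr, decay = 0.01, 1e-3
    for step in range(2000):
        H = np.tanh(X @ W1)
        out = H @ W2
        err = out - Y                      # error signal for squared loss (up to a constant)
        dW2 = H.T @ err / n
        dH = (err @ B) * (1 - H**2)        # error sent back through B, not W2.T
        dW1 = X.T @ dH / n
        W2 -= lr * (dW2 + decay * W2)      # weight decay as a stand-in for the weight penalty
        W1 -= lr * (dW1 + decay * W1)

    # Cosine similarity between the forward weights W2 and the feedback weights B.T.
    align = np.sum(W2 * B.T) / (np.linalg.norm(W2) * np.linalg.norm(B))
    print("loss =", np.mean(err**2), " cos(W2, B.T) =", align)

The printed alignment is the quantity of interest: the abstract's claim is that, under the right conditions, the forward and feedback weights come to align well enough for errors carried by the feedback channel to drive learning at benchmark scale.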
