Optimisation principles in biology

3 February 2006

William Bialek
Joseph Henry Laboratory of Physics and
Lewis Sigler Institute for Integrative Genomics
Princeton University

Abstract

Despite our human preoccupation with our own mistakes, much of biology "works" remarkably well. Quantitative experiments on many different systems have led people to wonder whether, rather than just being pretty good, biology may have found optimal solutions to at least some of the problems that organisms face in dealing with the world.

I'll focus here on two (potentially) different notions of optimization: maximum reliability in the presence of noise, and maximum efficiency in the representation of information. In each case I'll start with examples from neurobiology and then try to draw parallels in genetic and biochemical systems. I hope to go through the steps of (a) defining theoretically what it means to be optimized, (b) designing experiments that test for optimization directly, (c) predicting the mechanisms that are necessary to achieve the optimum, and (d) closing the loop to show that these sometimes unexpected mechanisms actually exist. In each case I would like to convey the (optimistic) view that optimization is providing us with something like a theory of the system, rather than a highly parameterized model.

Optimization principles have the right "flavor" to be a theory of biological systems, but there are many problems. In practice, we have many seemingly independent principles applied to different systems ... a bit too much of a laundry list. Also, many theoretically appealing quantities (especially information theoretic quantities) have no obvious connection to biologically grounded notions of fitness. Indeed, the problem of what organisms really want to optimize could scuttle the whole theoretical program, since it might require us to take account of myriad biological details. I will try to cut through all this, and argue that an organism which maximizes the adaptive value of its actions given fixed resources must have internal representations of the outside world that are optimal in an abstract, information theoretic sense. This is true even if we do not know all the factors that determine the metrics for costs and benefits. The resulting optimization principle - efficient representation of predictive information - includes as special cases problems in signal processing and learning that have clear connections to computations done by the brain. I'll suggest that this new notion of optimization is testable directly, and report some promising preliminary experiments.
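To make "predictive information" concrete, here is a minimal numerical sketch (not from the talk itself; the two-state Markov chain and its parameters are my own illustrative assumptions). For a stationary process, the one-step predictive information is the mutual information I(present; future) between the current state and the next state, computed from their joint distribution:

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits, from a joint probability table joint[x, y]."""
    px = joint.sum(axis=1, keepdims=True)   # marginal over X
    py = joint.sum(axis=0, keepdims=True)   # marginal over Y
    mask = joint > 0                        # avoid log(0) on zero entries
    return float((joint[mask] * np.log2(joint[mask] / (px @ py)[mask])).sum())

# Hypothetical two-state Markov chain: stay with probability 0.9, switch with 0.1.
T = np.array([[0.9, 0.1],
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])      # stationary distribution (uniform by symmetry)
joint = pi[:, None] * T        # joint distribution of (present, next) state
print(round(mutual_information(joint), 3))   # about 0.531 bits of predictive information
```

The more persistent the dynamics, the larger this quantity; an efficient representation would keep the bits that carry this predictive information and discard the rest.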
