Integrating physiology and behaviour through homeostatically-regulated reinforcement learning

5 November 2021

Boris Gutkin
Cognitive Studies Department
École Normale Supérieure
Paris, France

Zoom recording

Abstract

Efficiently regulating the internal milieu and defending it against perturbations requires adaptive behavioral strategies. We develop computational principles that mediate the interaction between homeostatic and reinforcement learning processes. We propose a definition of primary rewards as outcomes that fulfill physiological needs. We then build a normative theory showing how internal states modulate the learning of motivated behaviors, proving mathematically that reward seeking is equivalent to the fundamental objective of physiological stability and thereby defining a notion of physiological rationality of behavior. The theory also suggests a formal basis for temporal discounting of rewards, by showing that discounting motivates animals to follow the shortest path in the space of physiological variables toward the desired setpoint. It further explains how animals learn to act predictively, precluding prospective homeostatic challenges, and why their responses to appetitive reinforcers depend on their internal states. I will argue that the internal drive function, which links the impact of outcomes on internal states to motivation, should be related to the improved future viability of the organism and tied to population-level survival curves. In sum, the theory argues for an internal definition of reward.
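As a concrete illustration of the drive-reduction definition of reward at the heart of the theory, the minimal sketch below (Python with numpy) computes a drive as the distance of the internal state from a homeostatic setpoint and defines the primary reward of an outcome as the drive reduction it produces; the variable names, exponents, and the tabular Q-learning update are illustrative assumptions in the spirit of Keramati & Gutkin (2014), not the paper's exact implementation.

    import numpy as np

    # Sketch of homeostatic reinforcement learning (after Keramati & Gutkin, eLife 2014).
    # Exponents, names, and the Q-learning wrapper are illustrative assumptions.

    def drive(h, h_star, n=4.0, m=2.0):
        """Drive = distance of the internal state h from the setpoint h_star."""
        return np.sum(np.abs(h_star - h) ** n) ** (1.0 / m)

    def reward(h, outcome, h_star):
        """Primary reward of an outcome = the drive reduction it produces."""
        return drive(h, h_star) - drive(h + outcome, h_star)

    def q_update(Q, s, a, s_next, r, alpha=0.1, gamma=0.9):
        """Standard temporal-difference update on the internally generated reward;
        gamma discounts delayed drive reductions."""
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

    # Example: the same food outcome is valued differently depending on internal state.
    h_star = np.array([50.0, 50.0])           # homeostatic setpoint
    food   = np.array([10.0, 0.0])            # outcome: raises the energy variable
    print(reward(np.array([20.0, 50.0]), food, h_star))  # deprived: large positive reward
    print(reward(np.array([48.0, 50.0]), food, h_star))  # near setpoint: overshoot, negative reward

The two printouts illustrate the state dependence discussed above: the identical outcome is strongly rewarding when the agent is deprived, but overshoots the setpoint and becomes punishing when the agent is already sated.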

Time permitting, I will show how dysregulation of the homeostatic processes leads to pathological learning and aberrant behaviors such as overconsumption of hyperpalatable foods and addiction.

References

  1. M. Keramati and B. Gutkin, "Homeostatic reinforcement learning for integrating reward collection and physiological stability", eLife 3:e04811, 2014. Paper
