Computational Rationality: A Converging Paradigm for Intelligence in Brains, Minds, and Machines

Posted: 2015-07-28 in paper of the day, research

Samuel J. Gershman, Eric J. Horvitz, Joshua B. Tenenbaum:
Computational Rationality: A Converging Paradigm for Intelligence in Brains, Minds, and Machines
Science 349(6245): 273-278, 2015

The article provides a shared, computation-based view of concepts in AI, cognitive science, and neuroscience, and discusses advances that address the challenges of perception and action under uncertainty.

Rationality was first formalized by von Neumann and Morgenstern in the context of a decision-making agent that maximizes expected utility [1]. The broad impact of this formalization was delayed until the late 1980s. Today, it is widely accepted that actions are guided by expected utility, and the advent of approximate inference mechanisms has motivated AI researchers to look at probabilities through the lens of computation. This led to large-scale probabilistic inference, mechanisms for determining best actions, and decision-making systems that trade off precision against computation under bounded resources. Analogous ideas have become increasingly important in cognitive science and neuroscience as well. This article draws connections between concepts in these three symbiotic fields, focusing on computational rationality.

Models of computational rationality require

  • Bayesian inferential processes for perceiving, predicting, learning and decision making
  • Mechanisms to assess the feasibility and outcomes of actions/decisions
  • Trade-off between precision and timeliness of decisions (if the system has bounded resources)
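The first two ingredients above can be sketched in a few lines: represent beliefs as samples, estimate each action's expected utility under those samples, and pick the best. This is a toy illustration, not anything from the paper; the rain/umbrella payoffs and all function names are made up for the example.

```python
import random

def expected_utility(action, utility, belief_samples):
    """Monte-Carlo estimate of E[U(action, state)] under the belief."""
    return sum(utility(action, s) for s in belief_samples) / len(belief_samples)

def best_action(actions, utility, belief_samples):
    """Pick the action with the highest estimated expected utility."""
    return max(actions, key=lambda a: expected_utility(a, utility, belief_samples))

# Toy example: the agent is unsure whether it will rain (state True/False).
random.seed(0)
belief = [random.random() < 0.3 for _ in range(1000)]  # belief: P(rain) ~ 0.3

def utility(action, rain):
    # Hypothetical payoffs: the umbrella is safe; no umbrella pays off when dry.
    if action == "umbrella":
        return 0.5
    return 1.0 if not rain else -1.0

print(best_action(["umbrella", "no umbrella"], utility, belief))
```

Note that the number of belief samples is already a resource knob: fewer samples mean a cheaper but noisier decision, which is exactly the precision/timeliness trade-off the third ingredient is about.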

The article tries to shed some light on these aspects from three different perspectives.

The AI Perspective

In AI, graphical models were a key advance that accelerated the development of efficient inference methods, structure learning, transfer learning, active learning and probabilistic programming, which are now at the core of decision-making systems (e.g., IBM’s Watson, Google’s self-driving car, Microsoft’s automated assistant). However, these “real-world applications” of AI also forced the consideration of non-traditional aspects, such as the finite amount of (computational) resources available for decision making: trade-offs between the computation time and the precision of (approximate) inference need to be considered. Monte-Carlo methods and particle filters are examples of inference methods that explicitly allow this interplay between the value and the cost of inference to be managed. These attempts to mechanize probability for inference and learning stimulated thinking about the role of related representations and strategies in human cognition.
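As a concrete illustration of such an approximate inference method, here is a minimal bootstrap particle filter for a one-dimensional random-walk state with noisy observations. This is a generic textbook sketch (the motion and observation models are assumptions for the example); the point is that the particle count is the dial that trades precision for computation.

```python
import math
import random

def particle_filter_step(particles, obs, motion_std=1.0, obs_std=1.0):
    """One bootstrap particle-filter step: predict, weight, resample.
    The number of particles trades estimation precision for computation."""
    # Predict: propagate each particle through a random-walk motion model.
    particles = [p + random.gauss(0, motion_std) for p in particles]
    # Weight: Gaussian likelihood of the observation given each particle.
    weights = [math.exp(-0.5 * ((obs - p) / obs_std) ** 2) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw new particles in proportion to their weights.
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(1)
particles = [random.gauss(0, 5) for _ in range(500)]     # diffuse prior
for obs in [0.5, 1.0, 1.6, 2.1]:                         # noisy observations of a drifting state
    particles = particle_filter_step(particles, obs)
estimate = sum(particles) / len(particles)               # posterior mean estimate
```

Halving the particle count roughly halves the cost of each step while increasing the variance of the estimate, which is the value-versus-cost-of-inference trade-off in its simplest form.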

The Cognitive Science Perspective

Starting in the 1950s, human decision making was modeled with Bayesian decision theory, an approach that was picked up again in the 1990s: the success of probabilistic inference in AI (see above) pushed these ideas back into the center of cognitive modeling, and a connection was drawn between distributed message passing in graphical models and large-scale probabilistic inference in the brain. Nevertheless, simply making decisions by maximizing expected utility (following von Neumann and Morgenstern’s formalization) is insufficient to explain human decision making. Computational rationality (i.e., rationality that explicitly accounts for resources and effort), however, does seem to provide a framework that explains it. There is a link between sampling by humans (evaluating various options in an uncertain situation) and sampling by AI systems (e.g., particle filters). An issue arises when only limited information is available, requiring the brain to have meta-reasoning mechanisms that are sensitive to the cost of information collection (cognition).
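One way to make such a meta-reasoning mechanism concrete is a stopping rule for sampling: keep drawing noisy samples of two options' payoffs while the uncertainty about which option is better still outweighs the cost of one more sample. The rule below is a hypothetical sketch of this idea, not the mechanism proposed in the paper; all names and thresholds are assumptions for illustration.

```python
import random
import statistics

def sample_until_worth_it(sample_a, sample_b, cost_per_sample=0.01, max_samples=1000):
    """Hypothetical meta-reasoning rule: stop sampling once the standard error
    of the estimated value difference is small relative to the difference
    itself, or smaller than the cost of drawing another sample."""
    a, b = [], []
    while len(a) < max_samples:
        a.append(sample_a())
        b.append(sample_b())
        if len(a) < 2:
            continue  # need at least two samples to estimate variance
        diff = statistics.mean(a) - statistics.mean(b)
        # Standard error of the estimated difference in means.
        se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
        if se < abs(diff) or se < cost_per_sample:
            break  # confident enough, or further sampling is not worth its cost
    return ("A" if statistics.mean(a) > statistics.mean(b) else "B"), len(a)

random.seed(2)
choice, n_samples = sample_until_worth_it(lambda: random.gauss(1.0, 1.0),
                                          lambda: random.gauss(0.0, 1.0))
```

The agent commits early when the options are clearly separated and samples longer when they are close, which is the qualitative signature of cost-sensitive information collection.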

The Computational Neuroscience Perspective

The article nicely summarizes some results from computational neuroscience that link model-based and model-free reward-based learning in animals and machines (aka reinforcement learning). In particular, there is experimental evidence that animals use computationally costly but flexible model-based learning when only little information is available. However, as more and more data arrives, the brain automatically switches to low-cost but less flexible model-free learning [2,3]. In AI, we also look at computationally expensive model-based methods, which require little data but more computation time [4]. Model-free (table-based) reinforcement learning algorithms, on the other hand, are able to learn very complicated (sequential) decisions if they are given enough data [5]. Another connection between AI and cognitive science is the problem of exploration and exploitation. In the context of model-based learning, Monte-Carlo Tree Search sampling methods [6] have gained much attention in AI in this context, where computations can be chosen to optimize the value of computation. The brain seems to use similar algorithms for spatial navigation problems. Hybrid (model-free/model-based) approaches have been investigated in neuroscience [6] and have started gaining some attention in AI [7].
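To make the model-free side of this contrast tangible, here is tabular Q-learning [5] on a tiny, made-up 3-state chain (reward at the right end). Each update is cheap and needs no model of the environment, but many experienced transitions are required before the greedy policy becomes correct, which is exactly the data-hungry/computation-cheap end of the trade-off discussed above.

```python
import random

# Hypothetical 3-state chain MDP: move "left"/"right"; reward 1 at the right end.
STATES, ACTIONS = [0, 1, 2], ["left", "right"]

def step(s, a):
    """Environment dynamics (unknown to the model-free learner)."""
    s2 = min(s + 1, 2) if a == "right" else max(s - 1, 0)
    return s2, (1.0 if s2 == 2 else 0.0)

def q_learning(episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    """Model-free learning: each update is cheap, but many transitions
    must be experienced before the value estimates are useful."""
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    random.seed(3)
    for _ in range(episodes):
        s = 0
        for _ in range(10):
            # Epsilon-greedy action selection (exploration vs. exploitation).
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(s, x)])
            s2, r = step(s, a)
            # Temporal-difference update toward the bootstrapped target.
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            s = s2
    return Q

Q = q_learning()
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

A model-based learner, by contrast, would use `step` itself (or a learned copy of it) to plan, e.g. by value iteration or tree search: far fewer environment interactions, far more computation per decision.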



[1] J. von Neumann, O. Morgenstern:
Theory of Games and Economic Behavior
Princeton University Press

[2] N. Daw, S. J. Gershman, B. Seymour, P. Dayan, R. J. Dolan:
Model-based Influences on Humans’ Choices and Striatal Prediction Errors
Neuron 69: 1204-1215

[3] N. Daw, Y. Niv, P. Dayan:
Uncertainty-based Competition between Prefrontal and Dorsolateral Striatal Systems for Behavioral Control
Nature Neuroscience 8(12):1704-1711

[4] M. P. Deisenroth, D. Fox, C. E. Rasmussen:
Gaussian Processes for Data-Efficient Learning in Robotics and Control
IEEE Transactions on Pattern Analysis and Machine Intelligence 37(2):408-423

[5] V. Mnih et al.:
Human-Level Control through Deep Reinforcement Learning
Nature 518(7540):529-533

[6] G. Chaslot, S. Bakkes, I. Szita, P. Spronck:
Monte-Carlo Tree Search: A New Framework for Game AI
Proceedings of the Artificial Intelligence and Interactive Digital Entertainment Conference

[7] C. J. Maddison, A. Huang, I. Sutskever, D. Silver:
Move Evaluation in Go Using Deep Convolutional Neural Networks


