We build precise and powerful models of people's cognitive abilities, combining psychology, neuroscience, and machine learning. We use interactive games, large datasets, and online and lab experiments, together with behavioral and neuroscientific tools, to study how people learn, generalize, and explore. We focus on the following three topics:
Compared to machine learning algorithms, people are generally much better at generalizing from limited data. To explain this ability, we develop compositional theories of generalization. Our account assumes that people rely on compositional inductive biases: priors over structures that can be combined and reused, creating a potentially infinite set of generalizations from a finite set of simple building blocks. We model human generalization using methods from function approximation, neural networks, and program induction.
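As a toy illustration of this idea (a minimal sketch, not our actual models), a small set of primitive functions can be combined with a few composition operators to generate an unbounded hypothesis space; the primitives and operators below are assumptions chosen for illustration:

```python
import math

# Hypothetical primitive building blocks over the reals.
PRIMITIVES = {
    "linear": lambda x: x,
    "periodic": lambda x: math.sin(x),
    "constant": lambda x: 1.0,
}

def add(f, g):
    """Combine two building blocks additively."""
    return lambda x: f(x) + g(x)

def multiply(f, g):
    """Combine two building blocks multiplicatively."""
    return lambda x: f(x) * g(x)

# Example hypothesis built compositionally: a linear trend plus a
# periodic component, reusing the same finite primitive set.
hypothesis = add(PRIMITIVES["linear"], PRIMITIVES["periodic"])
```

Because `add` and `multiply` return functions of the same type as the primitives, compositions can be nested arbitrarily, which is what makes the generated space of generalizations potentially infinite.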
We study how people use structure to guide their search for rewards. Our models combine the ability to generalize with an uncertainty-driven exploration strategy, and they describe a large swath of human behavior: adult exploration, developmental differences, exploration in real-world environments, psychiatric signatures of exploratory behavior, exploration in graph-structured spaces, and neural signatures of generalization-driven reinforcement learning.
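The core of an uncertainty-driven exploration strategy can be sketched in the spirit of upper-confidence-bound (UCB) sampling (a simplified stand-in for our models): each option carries a posterior mean reward supplied by generalization and a posterior uncertainty, and the agent chooses the option with the highest optimistic value. The numbers below are illustrative, not experimental data:

```python
def ucb_choice(means, sds, beta=1.0):
    """Pick the option maximizing mean + beta * sd.

    means: posterior expected rewards per option (from generalization)
    sds:   posterior standard deviations per option (uncertainty)
    beta:  how strongly uncertainty attracts exploration
    """
    scores = [m + beta * s for m, s in zip(means, sds)]
    return max(range(len(scores)), key=scores.__getitem__)

# Two options with equal expected reward: the more uncertain one
# is chosen, because its upside is larger.
choice = ucb_choice(means=[0.5, 0.5], sds=[0.1, 0.4])
```

The parameter `beta` governs the exploration-exploitation trade-off: at `beta=0` the rule reduces to pure exploitation of the posterior means.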
We investigate how people trade off accuracy against efficiency when solving complex problems. Because biological computation costs time and energy, a computationally efficient agent may halt computation after a short time. Using the notion of computational rationality, which assumes that people approximate optimal solutions under limited mental effort, we explain common heuristics as efficient approximations to intractable problems. Moreover, we assess how people reuse computations, how they learn to approximate inferences over time, and how human inference scales with the complexity of the underlying problem.
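One way to make the accuracy-efficiency trade-off concrete (a minimal sketch under assumed settings, not our model) is resource-bounded Monte Carlo inference: an expectation is approximated by averaging samples, and computation halts once a fixed budget is spent, so a larger budget buys accuracy at the cost of more computation:

```python
import random

def bounded_estimate(sampler, budget):
    """Approximate an expectation with `budget` samples.

    Returns the estimate and the computational cost paid (here,
    simply the number of samples drawn before halting).
    """
    draws = [sampler() for _ in range(budget)]
    return sum(draws) / budget, budget

# Illustrative target: the mean of a Gaussian with true mean 1.0.
random.seed(0)
cheap, cost_cheap = bounded_estimate(lambda: random.gauss(1.0, 1.0), budget=5)
careful, cost_careful = bounded_estimate(lambda: random.gauss(1.0, 1.0), budget=5000)
```

A computationally rational agent would set the budget so that the expected gain in accuracy from one more sample no longer justifies its time and energy cost.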