Insight is our reward

Publications in Proceedings of the Annual Meeting of the Cognitive Science Society by NOMIS researchers

Science can be viewed as a collective, epistemic endeavor. However, a variety of factors – such as the publish-or-perish culture, institutional incentives, and publishers who favor novel and positive findings – may challenge the ability of science to accurately aggregate information about the world. Evidence of the shortcomings in the current structure of science can be seen in the replication crisis facing psychology and other disciplines. We analyze scientific publishing through the lens of cultural evolution, framing the scientific process as a multi-generational interplay between scientists and publishers in a multi-armed bandit setting. We examine the dynamics of this model through simulations, exploring the effect that different publication policies have on the accuracy of the published scientific record. Our findings highlight the need for replications and caution against behaviors that prioritize factors uncorrelated with result accuracy.
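The core idea of the simulations can be sketched in a toy model (this is an illustrative reconstruction, not the paper's actual code; the Gaussian effect distribution, noise level, and "positive results only" policy are assumptions):

```python
import random

def run_simulation(n_effects=10, n_studies=500, publish_only_positive=True, seed=0):
    """Toy publishing model: each 'effect' is a bandit arm with a true value;
    studies are noisy estimates; a publication policy filters which estimates
    enter the published record."""
    rng = random.Random(seed)
    true_effects = [rng.gauss(0.0, 1.0) for _ in range(n_effects)]
    record = {i: [] for i in range(n_effects)}
    for _ in range(n_studies):
        i = rng.randrange(n_effects)                       # scientist picks an effect (an arm)
        estimate = true_effects[i] + rng.gauss(0.0, 1.0)   # noisy study result
        if publish_only_positive and estimate <= 0:        # publisher rejects null/negative results
            continue
        record[i].append(estimate)
    # Accuracy of the record: mean absolute error of published means vs. truth
    errors = [abs(sum(r) / len(r) - true_effects[i])
              for i, r in record.items() if r]
    return sum(errors) / len(errors)

bias_all = run_simulation(publish_only_positive=False)  # publish everything
bias_pos = run_simulation(publish_only_positive=True)   # positive results only
```

Under the positive-results-only policy, published means are systematically inflated for null and negative effects, so the record's error grows, which is the qualitative pattern the abstract describes.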

Research field(s)
Psychology & Cognitive Sciences

Typical models of learning assume incremental estimation of continuously varying decision variables like expected rewards. However, this class of models fails to capture the more idiosyncratic, discrete heuristics and strategies that people and animals appear to exhibit. Despite recent advances in strategy discovery using tools like recurrent networks that generalize the classic models, the resulting strategies are often onerous to interpret, making connections to cognition difficult to establish. We use Bayesian program induction to discover strategies implemented by programs, letting the simplicity of strategies trade off against their effectiveness. Focusing on bandit tasks, we find strategies that would be difficult or unexpected under classical incremental learning, like asymmetric learning from rewarded and unrewarded trials, adaptive horizon-dependent random exploration, and discrete state switching.
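One of the strategy motifs named above, asymmetric learning, can be illustrated with a small value-update rule (a sketch only; the two learning rates are hypothetical parameters, not values from the paper, which discovers such strategies as induced programs):

```python
def asymmetric_update(value, reward, alpha_pos=0.4, alpha_neg=0.1):
    """One prediction-error update with separate learning rates for
    rewarded (reward=1) and unrewarded (reward=0) trials."""
    alpha = alpha_pos if reward else alpha_neg
    return value + alpha * (reward - value)

v = 0.5
v = asymmetric_update(v, 1)   # rewarded trial: large step toward 1 -> 0.7
v = asymmetric_update(v, 0)   # unrewarded trial: small step toward 0 -> 0.63
```

With a single shared learning rate this collapses to the classical incremental (Rescorla-Wagner-style) rule; the asymmetry is what standard models miss.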

Research field(s)
Artificial Intelligence & Image Processing, Psychology & Cognitive Sciences

Autoregressive Large Language Models (LLMs) trained for next-word prediction have demonstrated remarkable proficiency at producing coherent text. But are they equally adept at forming coherent probability judgments? We use probabilistic identities and repeated judgments to assess the coherence of probability judgments made by LLMs. Our results show that the judgments produced by these models are often incoherent, displaying human-like systematic deviations from the rules of probability theory. Moreover, when prompted to judge the same event, the mean-variance relationship of probability judgments produced by LLMs shows an inverted-U shape like that seen in humans. We propose that these deviations from rationality can be explained by linking autoregressive LLMs to implicit Bayesian inference and drawing parallels with the Bayesian Sampler model of human probability judgments.
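The simplest probabilistic identity one can test is complementarity: a coherent judge's P(A) and P(not A) must sum to 1. A minimal check of this kind might look as follows (the example judgment values are hypothetical; the paper's full battery of identities is broader):

```python
def complement_incoherence(p_event, p_complement):
    """Absolute deviation of judged P(A) + P(not A) from 1 (zero iff coherent)."""
    return abs(p_event + p_complement - 1.0)

# Hypothetical elicited judgments for an event and its negation
coherent   = complement_incoherence(0.70, 0.30)   # deviation ~0: coherent
incoherent = complement_incoherence(0.70, 0.45)   # deviation 0.15: incoherent
```

Aggregating such deviations across many events and identities gives a quantitative incoherence score for a judge, human or LLM.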

Research field(s)
Artificial Intelligence & Image Processing, Psychology & Cognitive Sciences

Effectively updating one’s beliefs requires sufficient empirical evidence (i.e., data) and the computational capacity to process it. Yet both data and computational resources are limited for human minds. Here, we study the problem of belief updating under limited data and limited computation. Using information theory to characterize constraints on computation, we find that the solution to the resulting optimization problem links the data and computational limitations together: when computational resources are tight, agents may not be able to integrate new empirical evidence. The resource-rational belief updating rule we identify offers a novel interpretation of conservative Bayesian updating.
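One common form of conservative Bayesian updating, tempering the likelihood by an exponent between 0 and 1, illustrates the qualitative behavior described above (this is a generic sketch, not necessarily the paper's derived rule; the beta parameter stands in for the computational constraint):

```python
def tempered_update(prior, likelihood, beta):
    """Conservative Bayesian update: posterior proportional to prior * likelihood**beta.
    beta=1 recovers full Bayes; beta=0 ignores the data entirely."""
    unnorm = [p * (l ** beta) for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

prior = [0.5, 0.5]
lik = [0.9, 0.1]   # data strongly favors hypothesis 0
full = tempered_update(prior, lik, beta=1.0)   # full Bayes: [0.9, 0.1]
none = tempered_update(prior, lik, beta=0.0)   # no capacity: prior unchanged
```

As beta shrinks toward zero, the posterior stays closer to the prior, matching the abstract's point that tightly constrained agents may be unable to integrate new evidence.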

Research field(s)
Psychology & Cognitive Sciences

On-line decision problems – in which a decision is made based on a sequence of past events without knowledge of the future – have been extensively studied in theoretical computer science. A famous example is the Prediction from Expert Advice problem, in which an agent has to make a decision informed by the predictions of a set of experts. An optimal solution to this problem is the Multiplicative Weights Update Method (MWUM). In this paper, we investigate how humans behave in a Prediction from Expert Advice task. We compare MWUM and several other algorithms proposed in the computer science literature against human behavior. We find that MWUM provides the best fit to people’s choices.
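The multiplicative weights rule at the heart of MWUM can be sketched for binary predictions (the penalty parameter eta and the 0/1 encoding are illustrative choices, not the paper's exact task setup):

```python
def mwum(expert_predictions, outcomes, eta=0.5):
    """Multiplicative Weights Update: follow the weighted majority of experts,
    then multiplicatively penalize each expert that predicted wrongly.
    expert_predictions[t][i] is expert i's 0/1 prediction at round t."""
    n = len(expert_predictions[0])
    weights = [1.0] * n
    choices = []
    for preds, outcome in zip(expert_predictions, outcomes):
        total = sum(weights)
        p1 = sum(w for w, p in zip(weights, preds) if p == 1) / total
        choices.append(1 if p1 >= 0.5 else 0)    # weighted-majority decision
        weights = [w * (1 - eta) if p != outcome else w
                   for w, p in zip(weights, preds)]
    return choices, weights

# Expert 0 is always right, expert 1 always wrong: weight shifts to expert 0
preds = [[1, 0], [1, 0], [1, 0]]
outs = [1, 1, 1]
choices, w = mwum(preds, outs)
```

After three rounds the wrong expert's weight has decayed to (1 - eta)^3 = 0.125, so the agent increasingly follows the reliable expert, which is the behavior the model fits to human choices.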

Research field(s)
Psychology & Cognitive Sciences