Publications in Psychology & Cognitive Sciences by NOMIS researchers

April 17, 2025

For much of the global population, climate change appears as a slow, gradual shift in daily weather. This leads many to perceive its impacts as minor and results in apathy (the ‘boiling frog’ effect). How can we convey the urgency of the crisis when its impacts appear so subtle? Here, through a series of large-scale cognitive experiments (N = 799), we find that presenting people with binary climate data (for example, lake freeze history) significantly increases the perceived impact of climate change (Cohen’s d = 0.40, 95% confidence interval 0.26–0.54) compared with continuous data (for example, mean temperature). Computational modelling and follow-up experiments (N = 398) suggest that binary data enhance perceived impact by creating an ‘illusion’ of sudden shifts. Crucially, our approach does not involve selective data presentation but rather compares different datasets that reflect equivalent trends in climate change over time. These findings, robustly replicated across multiple experiments, provide a cognitive basis for the ‘boiling frog’ effect and offer a psychologically grounded approach for policymakers and educators to improve climate change communication while maintaining scientific accuracy.

Research field(s)
Information & Communication Technologies, Psychology & Cognitive Sciences, Behavioral Science & Comparative Psychology

How can we build AI systems that can learn any set of individual human values both quickly and safely, without causing harm or violating societal standards for acceptable behavior during the learning process? We explore the effects of representational alignment between humans and AI agents on learning human values. Making AI systems learn human-like representations of the world has many known benefits, including improving generalization, robustness to domain shifts, and few-shot learning performance. We demonstrate that this kind of representational alignment can also support safely learning and exploring human values in the context of personalization. We begin with a theoretical prediction, show that it applies to learning human morality judgments, then show that our results generalize to ten different aspects of human values — including ethics, honesty, and fairness — training AI agents on each set of values in a multi-armed bandit setting, where rewards reflect human value judgments over the chosen action. Using a set of textual action descriptions, we collect value judgments from humans, as well as similarity judgments from both humans and multiple language models, and demonstrate that representational alignment enables both safe exploration and improved generalization when learning human values.
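
One way to picture how representational alignment could help here: an agent that generalizes observed value judgments to untried actions via a similarity kernel over its action representations makes safer guesses when those similarities match the human's. The similarity-weighted averaging below is an illustrative stand-in under that assumption, not the model used in the study.

```python
# Illustrative stand-in, not the paper's model: generalize observed human
# value judgments to unseen actions by similarity-weighted averaging over
# action representations. When the agent's similarities align with the
# human's, estimates for untried actions are less likely to be badly wrong.

def predict_value(target: int, observed: dict, similarity) -> float:
    """Similarity-weighted average of observed human value judgments."""
    weights = {a: similarity(target, a) for a in observed}
    total = sum(weights.values())
    if total == 0:
        return 0.0  # no information to generalize from
    return sum(weights[a] * v for a, v in observed.items()) / total

# Hypothetical setup: 4 actions, value judgments observed for actions 0 and 1.
observed_values = {0: 1.0, 1: -1.0}   # e.g. an "honest" vs a "deceptive" action
aligned_sim = lambda a, b: 1.0 if (a % 2) == (b % 2) else 0.1  # matches the true structure
print(predict_value(2, observed_values, aligned_sim))   # ~0.82: safe to try
print(predict_value(3, observed_values, aligned_sim))   # ~-0.82: avoid
```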

Research field(s)
Artificial Intelligence & Image Processing, Psychology & Cognitive Sciences

September 21, 2024

The success of methods based on artificial neural networks in creating intelligent machines seems like it might pose a challenge to explanations of human cognition in terms of Bayesian inference. We argue that this is not the case and that these systems in fact offer new opportunities for Bayesian modeling. Specifically, we argue that artificial neural networks and Bayesian models of cognition lie at different levels of analysis and are complementary modeling approaches, together offering a way to understand human cognition that spans these levels. We also argue that the same perspective can be applied to intelligent machines, in which a Bayesian approach may be uniquely valuable in understanding the behavior of large, opaque artificial neural networks that are trained on proprietary data.

Research field(s)
Artificial Intelligence & Image Processing, Psychology & Cognitive Sciences

July 24, 2024

The capacity to leverage information from others’ opinions is a hallmark of human cognition. Consequently, past research has investigated how we learn from others’ testimony. Yet a distinct form of social information—aggregated opinion—increasingly guides our judgments and decisions. We investigated how people learn from such information by conducting three experiments with participants recruited online within the United States (N = 886) comparing the predictions of three computational models: a Bayesian solution to this problem that can be implemented by a simple strategy for combining proportions with prior beliefs, and two alternatives from epistemology and economics. Across all studies, we found the strongest concordance between participants’ judgments and the predictions of the Bayesian model, though some participants’ judgments were better captured by alternative strategies. These findings lay the groundwork for future research and show that people draw systematic inferences from aggregated opinion, often in line with a Bayesian solution.
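
The "simple strategy for combining proportions with prior beliefs" mentioned above can be illustrated with a textbook Beta-Binomial update, in which an aggregated opinion (k of n people endorsing a claim) is blended with a prior belief. This is a generic sketch of that kind of rule; the parameter names are hypothetical and the paper's exact model may differ.

```python
# Illustrative sketch only: a textbook Beta-Binomial rule for combining an
# aggregated opinion (k of n people endorse a claim) with a prior belief.

def posterior_belief(k_endorse: int, n_total: int,
                     prior_mean: float = 0.5, prior_strength: float = 2.0) -> float:
    """Posterior mean of a Beta-Binomial model blending the observed
    proportion with a prior belief about the claim."""
    # Encode the prior as a Beta(a, b) distribution.
    a = prior_mean * prior_strength
    b = (1.0 - prior_mean) * prior_strength
    # Conjugate update with the aggregated opinion counts.
    return (a + k_endorse) / (prior_strength + n_total)

# Example: 70 of 100 respondents endorse the claim, mildly skeptical prior.
print(posterior_belief(70, 100, prior_mean=0.4, prior_strength=10.0))
```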

Research field(s)
Artificial Intelligence & Image Processing, Psychology & Cognitive Sciences

Science can be viewed as a collective, epistemic endeavor. However, a variety of factors (such as the publish-or-perish culture, institutional incentives, and publishers who favor novel and positive findings) may challenge the ability of science to accurately aggregate information about the world. Evidence of the shortcomings in the current structure of science can be seen in the replication crisis that faces psychology and other disciplines. We analyze scientific publishing through the lens of cultural evolution, framing the scientific process as a multi-generational interplay between scientists and publishers in a multi-armed bandit setting. We examine the dynamics of this model through simulations, exploring the effect that different publication policies have on the accuracy of the published scientific record. Our findings highlight the need for replications and caution against behaviors that prioritize factors uncorrelated with result accuracy.

Research field(s)
Psychology & Cognitive Sciences

Typical models of learning assume incremental estimation of continuously varying decision variables like expected rewards. However, this class of models fails to capture more idiosyncratic, discrete heuristics and strategies that people and animals appear to exhibit. Despite recent advances in strategy discovery using tools like recurrent networks that generalize the classic models, the resulting strategies are often onerous to interpret, making connections to cognition difficult to establish. We use Bayesian program induction to discover strategies implemented by programs, letting the simplicity of strategies trade off against their effectiveness. Focusing on bandit tasks, we find strategies that are difficult to express or unexpected under classical incremental learning, such as asymmetric learning from rewarded and unrewarded trials, adaptive horizon-dependent random exploration, and discrete state switching.
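
The trade-off between a strategy's simplicity and its effectiveness can be pictured as a posterior-like score: the log-likelihood of the observed choices under a candidate strategy, minus a penalty for the strategy's description length. The toy "program" and scoring below are illustrative stand-ins, not the paper's program space or inference procedure.

```python
import math

# Illustrative sketch only: score candidate strategies (stand-ins for induced
# programs) by fit to observed bandit choices minus a complexity penalty,
# i.e. effectiveness traded off against simplicity. Names are hypothetical.

def log_likelihood(strategy, choices, rewards) -> float:
    """Log-probability the strategy assigns to the observed choice sequence."""
    ll, state = 0.0, strategy["init"]()
    for choice, reward in zip(choices, rewards):
        probs = strategy["policy"](state)
        ll += math.log(probs[choice])
        state = strategy["update"](state, choice, reward)
    return ll

def score(strategy, choices, rewards, complexity_weight: float = 1.0) -> float:
    # Higher is better: fit minus a description-length penalty.
    return log_likelihood(strategy, choices, rewards) - complexity_weight * strategy["length"]

# A two-armed win-stay / lose-shift "program": tiny state, tiny length.
win_stay_lose_shift = {
    "length": 3,
    "init": lambda: 0,  # start favoring arm 0
    "policy": lambda state: [0.9 if a == state else 0.1 for a in (0, 1)],
    "update": lambda state, choice, reward: choice if reward else 1 - choice,
}

choices = [0, 0, 1, 1, 1]
rewards = [1, 0, 1, 1, 0]
print(score(win_stay_lose_shift, choices, rewards))
```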

Research field(s)
Artificial Intelligence & Image Processing, Psychology & Cognitive Sciences

Autoregressive Large Language Models (LLMs) trained for next-word prediction have demonstrated remarkable proficiency at producing coherent text. But are they equally adept at forming coherent probability judgments? We use probabilistic identities and repeated judgments to assess the coherence of probability judgments made by LLMs. Our results show that the judgments produced by these models are often incoherent, displaying human-like systematic deviations from the rules of probability theory. Moreover, when prompted to judge the same event, the mean-variance relationship of probability judgments produced by LLMs shows an inverted-U shape like that seen in humans. We propose that these deviations from rationality can be explained by linking autoregressive LLMs to implicit Bayesian inference and drawing parallels with the Bayesian Sampler model of human probability judgments.
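
Such a coherence check can be made concrete by eliciting related probabilities and measuring deviation from an identity, for example P(A) = P(A and B) + P(A and not-B). The identity and numbers below are illustrative; the paper's prompts and identities may differ.

```python
# Illustrative sketch only: measure incoherence of elicited probability
# judgments against the identity P(A) = P(A and B) + P(A and not B).
# The example numbers are made up.

def incoherence(p_a: float, p_a_and_b: float, p_a_and_not_b: float) -> float:
    """Absolute deviation from the additive identity; 0 means coherent."""
    return abs(p_a - (p_a_and_b + p_a_and_not_b))

# Judgments elicited from a model (or a person) about the same event family.
print(incoherence(p_a=0.6, p_a_and_b=0.45, p_a_and_not_b=0.25))  # 0.10, incoherent
```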

Research field(s)
Artificial Intelligence & Image Processing, Psychology & Cognitive Sciences

January 1, 2024

The rapid development of machine learning has led to new opportunities for applying these methods to the study of human decision making. We highlight some of these opportunities and discuss some of the issues that arise when using machine learning to model the decisions people make. We first elaborate on the relationship between predicting decisions and explaining them, leveraging findings from computational learning theory to argue that, in some cases, the conversion of predictive models to interpretable ones with comparable accuracy is an intractable problem. We then identify an important bottleneck in using machine learning to study human cognition—data scarcity—and highlight active learning and optimal experimental design as a way to move forward. Finally, we touch on additional topics such as machine learning methods for combining multiple predictors arising from known theories and specific machine learning architectures that could prove useful for the study of judgment and decision making. In doing so, we point out connections to behavioral economics, computer science, cognitive science, and psychology.

Research field(s)
Artificial Intelligence & Image Processing, Psychology & Cognitive Sciences

Forms of both simple and complex machine intelligence are increasingly acting within human groups in order to affect collective outcomes. Considering the nature of collective action problems, however, such involvement could paradoxically and unintentionally suppress existing beneficial social norms in humans, such as those involving cooperation. Here, we test theoretical predictions about such an effect using a unique cyber-physical lab experiment where online participants (N = 300 in 150 dyads) drive robotic vehicles remotely in a coordination game. We show that autobraking assistance increases human altruism, such as giving way to others, and that communication helps people to make mutual concessions. On the other hand, autosteering assistance completely inhibits the emergence of reciprocity between people in favor of self-interest maximization. The negative social repercussions persist even after the assistance system is deactivated. Furthermore, adding communication capabilities does not relieve this inhibition of reciprocity because people rarely communicate in the presence of autosteering assistance. Our findings suggest that active safety assistance (a form of simple AI support) can alter the dynamics of social coordination between people, including by affecting the trade-off between individual safety and social reciprocity. The difference between autobraking and autosteering assistance appears to relate to whether the assistive technology supports or replaces human agency in social coordination dilemmas. Humans have developed norms of reciprocity to address collective challenges, but such tacit understandings could break down in situations where machine intelligence is involved in human decision-making without having any normative commitments.

Research field(s)
Experimental Psychology

Effectively updating one’s beliefs requires sufficient empirical evidence (i.e., data) and the computational capacity to process it. Yet both data and computational resources are limited for human minds. Here, we study the problem of belief updating under limited data and limited computation. Using information theory to characterize constraints on computation, we find that the solution to the resulting optimization problem links the data and computational limitations together: when computational resources are tight, agents may not be able to integrate new empirical evidence. The resource-rational belief updating rule we identify offers a novel interpretation of conservative Bayesian updating.
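
Conservative Bayesian updating is commonly written as a tempered update, in which the likelihood's influence is raised to a power between 0 and 1. Treating that exponent as a stand-in for the computational budget gives the generic sketch below; it illustrates conservatism in general, not the specific resource-rational rule derived in the paper.

```python
# Illustrative sketch only: "conservative" (tempered) Bayesian updating of a
# binary belief, where gamma in [0, 1] down-weights the evidence. Reading
# gamma as a computational budget is an assumption made for illustration.

def conservative_update(prior: float, likelihood_ratio: float, gamma: float) -> float:
    """Posterior probability after tempering the likelihood ratio by gamma."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * (likelihood_ratio ** gamma)
    return posterior_odds / (1.0 + posterior_odds)

# gamma = 1 recovers full Bayesian updating; gamma = 0 ignores the data.
for gamma in (0.0, 0.5, 1.0):
    print(gamma, round(conservative_update(prior=0.5, likelihood_ratio=4.0, gamma=gamma), 3))
```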

Research field(s)
Psychology & Cognitive Sciences

On-line decision problems – in which a decision is made based on a sequence of past events without knowledge of the future – have been extensively studied in theoretical computer science. A famous example is the Prediction from Expert Advice problem, in which an agent has to make a decision informed by the predictions of a set of experts. An optimal solution to this problem is the Multiplicative Weights Update Method (MWUM). In this paper, we investigate how humans behave in a Prediction from Expert Advice task. We compare MWUM and several other algorithms proposed in the computer science literature against human behavior. We find that MWUM provides the best fit to people’s choices.
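
For reference, MWUM in its standard textbook form keeps a weight per expert, multiplicatively discounts experts that err, and predicts with a weighted vote; the learning rate in the sketch below is a free parameter rather than a value fitted to the behavioral data.

```python
import math

# Standard textbook form of the Multiplicative Weights Update Method for
# Prediction from Expert Advice (binary predictions, 0/1 losses).

def mwum(expert_predictions, outcomes, eta=0.5):
    """expert_predictions[t][i] is expert i's 0/1 prediction on trial t."""
    n_experts = len(expert_predictions[0])
    weights = [1.0] * n_experts
    agent_predictions = []
    for preds, outcome in zip(expert_predictions, outcomes):
        # Weighted vote over the experts' current predictions.
        vote = sum(w * p for w, p in zip(weights, preds)) / sum(weights)
        agent_predictions.append(1 if vote >= 0.5 else 0)
        # Multiplicatively penalize experts that were wrong on this trial.
        weights = [w * math.exp(-eta * (p != outcome)) for w, p in zip(weights, preds)]
    return agent_predictions, weights

# Two experts: the first is usually right, the second usually wrong.
preds = [[1, 0], [1, 1], [0, 1], [1, 0]]
outcomes = [1, 1, 0, 1]
print(mwum(preds, outcomes))
```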

Research field(s)
Psychology & Cognitive Sciences

August 1, 2023

Theories in cognitive science are primarily aimed at explaining human behavior in general, appealing to universal constructs such as perception or attention. When it is considered, modeling of individual differences is typically performed by adapting model parameters. The implicit assumption of this standard approach is that people are relatively similar, employing the same basic cognitive processes in a given problem domain. In this work, we consider a broader evaluation of the way in which people may differ. We evaluate 23 models of risky choice on around 300 individuals, and find that most models—spanning various constructs from heuristic rules and attention to regret and subjective perception—explain the behavior of different subpopulations of individuals. These results may account for part of the difficulty in obtaining a single elegant explanation of behavior in some long-studied domains, and suggest a more serious consideration of individual variability in theory comparisons going forward.

Research field(s)
Psychology & Cognitive Sciences

May 17, 2023

Predicting the future can bring enormous advantages. Across the ages, reliance on supernatural foreseeing was replaced by the opinions of expert forecasters, and now by collective intelligence approaches that draw on many non-expert forecasters. Yet all of these approaches continue to see individual forecasts as the key unit on which accuracy is determined. Here, we hypothesize that compromise forecasts, defined as the average prediction in a group, represent a better way to harness collective predictive intelligence. We test this by analysing 5 years of data from the Good Judgment Project and comparing the accuracy of individual versus compromise forecasts. Furthermore, given that an accurate forecast is only useful if timely, we analyse how accuracy changes through time as the events approach. We found that compromise forecasts are more accurate and that this advantage persists through time, though accuracy varies. Contrary to what was expected (i.e., a monotonic increase in forecasting accuracy as time passes), forecasting error for individuals and for team compromises starts its decline around two months prior to the event. Overall, we offer a method of aggregating forecasts to improve accuracy, which can be straightforwardly applied in noisy real-world settings.
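
A compromise forecast here is simply the group's mean probability, and its accuracy advantage can be checked by comparing Brier scores, as in the toy example below (made-up numbers, not Good Judgment Project data).

```python
# Toy sketch: compare the Brier score of individual forecasts against the
# group-average ("compromise") forecast for one binary event.

def brier(forecast: float, outcome: int) -> float:
    return (forecast - outcome) ** 2

individual_forecasts = [0.9, 0.4, 0.7, 0.2, 0.8]   # probabilities the event occurs
outcome = 1                                        # the event did occur

compromise = sum(individual_forecasts) / len(individual_forecasts)
mean_individual_error = sum(brier(f, outcome) for f in individual_forecasts) / len(individual_forecasts)

print("compromise Brier:", brier(compromise, outcome))           # 0.16
print("mean individual Brier:", round(mean_individual_error, 3)) # 0.228
```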

Research field(s)
Applied Sciences, Psychology & Cognitive Sciences, Experimental Psychology

December 21, 2022

The dominant paradigm of experiments in the social and behavioral sciences views an experiment as a test of a theory, where the theory is assumed to generalize beyond the experiment’s specific conditions. According to this view, which Alan Newell once characterized as “playing twenty questions with nature,” theory is advanced one experiment at a time, and the integration of disparate findings is assumed to happen via the scientific publishing process. In this article, we argue that the process of integration is at best inefficient, and at worst it does not, in fact, occur. We further show that the challenge of integration cannot be adequately addressed by recently proposed reforms that focus on the reliability and replicability of individual findings, nor simply by conducting more or larger experiments. Rather, the problem arises from the imprecise nature of social and behavioral theories and, consequently, a lack of commensurability across experiments conducted under different conditions. Therefore, researchers must fundamentally rethink how they design experiments and how the experiments relate to theory. We specifically describe an alternative framework, integrative experiment design, which intrinsically promotes commensurability and continuous integration of knowledge. In this paradigm, researchers explicitly map the design space of possible experiments associated with a given research question, embracing many potentially relevant theories rather than focusing on just one. Researchers then iteratively generate theories and test them with experiments explicitly sampled from the design space, allowing results to be integrated across experiments. Given recent methodological and technological developments, we conclude that this approach is feasible and would generate more-reliable, more-cumulative empirical and theoretical knowledge than the current paradigm – and with far greater efficiency.

Research field(s)
Experimental Psychology, Social Sciences