
Publications in Psychology & Cognitive Sciences by NOMIS researchers

Forms of both simple and complex machine intelligence are increasingly acting within human groups to affect collective outcomes. Given the nature of collective action problems, however, such involvement could paradoxically and unintentionally suppress existing beneficial social norms in humans, such as those involving cooperation. Here, we test theoretical predictions about such an effect using a unique cyber-physical lab experiment in which online participants (N = 300 in 150 dyads) remotely drive robotic vehicles in a coordination game. We show that autobraking assistance increases human altruism, such as giving way to others, and that communication helps people make mutual concessions. Autosteering assistance, by contrast, completely inhibits the emergence of reciprocity between people in favor of self-interest maximization. These negative social repercussions persist even after the assistance system is deactivated. Furthermore, adding communication capabilities does not relieve this inhibition of reciprocity, because people rarely communicate in the presence of autosteering assistance. Our findings suggest that active safety assistance (a form of simple AI support) can alter the dynamics of social coordination between people, including by shifting the trade-off between individual safety and social reciprocity. The difference between autobraking and autosteering assistance appears to turn on whether the assistive technology supports or replaces human agency in social coordination dilemmas. Humans have developed norms of reciprocity to address collective challenges, but such tacit understandings could break down in situations where machine intelligence is involved in human decision-making without having any normative commitments.

Research field(s)
Experimental Psychology

Effectively updating one’s beliefs requires sufficient empirical evidence (i.e., data) and the computational capacity to process it. Yet both data and computational resources are limited for human minds. Here, we study the problem of belief updating under limited data and limited computation. Using information theory to characterize constraints on computation, we find that the solution to the resulting optimization problem links the data and computational limitations together: when computational resources are tight, agents may not be able to integrate new empirical evidence. The resource-rational belief updating rule we identify offers a novel interpretation of conservative Bayesian updating.
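The paper derives its updating rule from an information-theoretic constraint on computation; that derivation is not reproduced here. As a minimal sketch of what conservative updating can look like, one common ansatz tempers the likelihood with an exponent beta between 0 and 1 standing in for computational capacity, so that scarce resources pull the posterior back toward the prior. The function name and the tempering form below are illustrative assumptions, not the authors' rule.

```python
import numpy as np

def conservative_update(prior, likelihood, beta=0.5):
    # Tempered Bayesian update over a discrete hypothesis space.
    # beta in [0, 1] stands in for computational capacity:
    # beta = 1 recovers full Bayesian updating, beta = 0 keeps the prior.
    unnormalized = prior * likelihood ** beta
    return unnormalized / unnormalized.sum()

prior = np.array([0.5, 0.5])       # two hypotheses, equally likely a priori
likelihood = np.array([0.2, 0.8])  # the data favor the second hypothesis 4:1

print(conservative_update(prior, likelihood, beta=1.0))  # full update: [0.2, 0.8]
print(conservative_update(prior, likelihood, beta=0.2))  # tight resources: stays near the prior
```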

Research field(s)
Psychology & Cognitive Sciences

On-line decision problems – in which a decision is made based on a sequence of past events without knowledge of the future – have been extensively studied in theoretical computer science. A famous example is the Prediction from Expert Advice problem, in which an agent has to make a decision informed by the predictions of a set of experts. An optimal solution to this problem is the Multiplicative Weights Update Method (MWUM). In this paper, we investigate how humans behave in a Prediction from Expert Advice task. We compare MWUM and several other algorithms proposed in the computer science literature against human behavior. We find that MWUM provides the best fit to people’s choices.
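The paper compares fitted variants of these algorithms against human choices; those fits are not reproduced here. Below is a standard textbook sketch of MWUM for the expert-advice setting, with the learning rate eta and the (1 - eta)^loss penalty as conventional choices rather than the parameterization used in the study.

```python
import numpy as np

def multiplicative_weights(expert_losses, eta=0.5):
    # Textbook Multiplicative Weights Update over a (rounds x experts)
    # matrix of losses in [0, 1]: experts that err are penalized
    # multiplicatively, and the agent follows expert i with probability
    # proportional to its current weight.
    n_rounds, n_experts = expert_losses.shape
    weights = np.ones(n_experts)
    follow_probs = []
    for t in range(n_rounds):
        follow_probs.append(weights / weights.sum())
        weights = weights * (1 - eta) ** expert_losses[t]
    return np.array(follow_probs)

# Three experts over four rounds; the third expert (last column) never errs.
losses = np.array([[1, 1, 0],
                   [0, 1, 0],
                   [1, 0, 0],
                   [1, 1, 0]], dtype=float)
print(multiplicative_weights(losses)[-1])  # probability mass shifts to the third expert
```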

Research field(s)
Psychology & Cognitive Sciences

August 1, 2023

Theories in cognitive science primarily aim to explain human behavior in general, appealing to universal constructs such as perception or attention. When individual differences are considered at all, they are typically modeled by adapting model parameters. The implicit assumption of this standard approach is that people are relatively similar, employing the same basic cognitive processes in a given problem domain. In this work, we consider a broader evaluation of the ways in which people may differ. We evaluate 23 models of risky choice on around 300 individuals and find that most models (spanning various constructs, from heuristic rules and attention to regret and subjective perception) explain the behavior of different subpopulations of individuals. These results may account for part of the difficulty in obtaining a single elegant explanation of behavior in some long-studied domains, and suggest a more serious consideration of individual variability in theory comparisons going forward.
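The paper's 23 models and fitting pipeline are not reproduced here, but the analysis it describes implies a generic recipe: fit every candidate model to every individual and assign each person to whichever model explains their choices best. The sketch below illustrates that recipe; the BIC criterion, the model interface, and the two stub models are hypothetical assumptions, not the paper's method.

```python
import numpy as np

def best_model_per_individual(models, data_by_individual):
    # Assign each individual to the candidate model with the lowest BIC.
    # `models` maps a name to (fit_fn, n_params), where fit_fn(choices)
    # returns the maximized log-likelihood for one individual's choices.
    assignments = {}
    for person, choices in data_by_individual.items():
        bic = {name: k * np.log(len(choices)) - 2.0 * fit_fn(choices)
               for name, (fit_fn, k) in models.items()}
        assignments[person] = min(bic, key=bic.get)
    return assignments

# Hypothetical stand-ins for fitted models (not the paper's 23 models):
models = {
    "random": (lambda c: len(c) * np.log(0.5), 0),
    "biased": (lambda c: float(sum(np.log(0.8 if x else 0.2) for x in c)), 1),
}
data = {"p1": [1, 1, 1, 0, 1], "p2": [1, 0, 0, 1, 0]}
print(best_model_per_individual(models, data))  # different people, different best-fitting models
```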

Research field(s)
Psychology & Cognitive Sciences

May 17, 2023

Predicting the future can bring enormous advantages. Across the ages, reliance on supernatural foresight gave way to the opinions of expert forecasters, and now to collective intelligence approaches that draw on many non-expert forecasters. Yet all of these approaches continue to treat the individual forecast as the key unit on which accuracy is determined. Here, we hypothesize that compromise forecasts, defined as the average prediction in a group, represent a better way to harness collective predictive intelligence. We test this by analyzing 5 years of data from the Good Judgment Project and comparing the accuracy of individual versus compromise forecasts. Furthermore, given that an accurate forecast is only useful if timely, we analyze how accuracy changes over time as events approach. We find that compromise forecasts are more accurate and that this advantage persists through time, though accuracy varies. Contrary to what was expected (i.e., a monotonic increase in forecasting accuracy as the event approaches), forecasting error for individuals and for team compromise forecasts begins to decline only around two months prior to the event. Overall, we offer a method of aggregating forecasts to improve accuracy that can be straightforwardly applied in noisy real-world settings.
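A toy illustration of the compromise idea, using hypothetical numbers and the Brier score as the accuracy measure (the paper's scoring setup is not reproduced here). Because squared error is convex, Jensen's inequality guarantees that the group-average forecast scores no worse than the average of the individual scores; whether it also beats most individual forecasters is the empirical question the paper addresses.

```python
import numpy as np

def brier(prob, outcome):
    # Brier score (squared error) for a probabilistic binary forecast.
    return (prob - outcome) ** 2

# Hypothetical forecasts from five team members for an event that occurred.
individual = np.array([0.9, 0.4, 0.7, 0.55, 0.8])
compromise = individual.mean()  # the compromise forecast: the group average

print(brier(individual, 1).mean())  # mean error of individual forecasts: ~0.14
print(brier(compromise, 1))         # error of the compromise forecast: ~0.11
```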

Research field(s)
Applied Sciences, Psychology & Cognitive Sciences, Experimental Psychology