
Publications in Behavior by NOMIS researchers

Forms of both simple and complex machine intelligence are increasingly acting within human groups in order to affect collective outcomes. Considering the nature of collective action problems, however, such involvement could paradoxically and unintentionally suppress existing beneficial social norms in humans, such as those involving cooperation. Here, we test theoretical predictions about such an effect using a unique cyber-physical lab experiment where online participants (N = 300 in 150 dyads) drive robotic vehicles remotely in a coordination game. We show that autobraking assistance increases human altruism, such as giving way to others, and that communication helps people to make mutual concessions. On the other hand, autosteering assistance completely inhibits the emergence of reciprocity between people in favor of self-interest maximization. The negative social repercussions persist even after the assistance system is deactivated. Furthermore, adding communication capabilities does not relieve this inhibition of reciprocity because people rarely communicate in the presence of autosteering assistance. Our findings suggest that active safety assistance (a form of simple AI support) can alter the dynamics of social coordination between people, including by affecting the trade-off between individual safety and social reciprocity. The difference between autobraking and autosteering assistance appears to relate to whether the assistive technology supports or replaces human agency in social coordination dilemmas. Humans have developed norms of reciprocity to address collective challenges, but such tacit understandings could break down in situations where machine intelligence is involved in human decision-making without having any normative commitments.

Research field(s)
Experimental Psychology

On-line decision problems – in which a decision is made based on a sequence of past events without knowledge of the future – have been extensively studied in theoretical computer science. A famous example is the Prediction from Expert Advice problem, in which an agent has to make a decision informed by the predictions of a set of experts. An optimal solution to this problem is the Multiplicative Weights Update Method (MWUM). In this paper, we investigate how humans behave in a Prediction from Expert Advice task. We compare MWUM and several other algorithms proposed in the computer science literature against human behavior. We find that MWUM provides the best fit to people’s choices.
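The abstract above names MWUM as the benchmark against which human behavior is compared. As a minimal sketch of the multiplicative weights idea only: the binary-outcome setting, the weighted-majority prediction rule, and the learning rate `eta` below are illustrative assumptions, not details taken from the paper.

```python
def mwum(expert_predictions, outcomes, eta=0.5):
    """Multiplicative Weights Update Method for prediction with expert advice.

    expert_predictions: one list of 0/1 predictions per round (one entry per expert).
    outcomes: the true 0/1 outcome for each round.
    Returns the agent's round-by-round predictions and the final expert weights.
    """
    n_experts = len(expert_predictions[0])
    weights = [1.0] * n_experts
    choices = []
    for preds, outcome in zip(expert_predictions, outcomes):
        # Predict by weighted majority vote over the experts.
        mass_one = sum(w for w, p in zip(weights, preds) if p == 1)
        choices.append(1 if mass_one >= sum(weights) / 2 else 0)
        # Shrink the weight of every expert that erred by a factor (1 - eta).
        weights = [w * (1 - eta) if p != outcome else w
                   for w, p in zip(weights, preds)]
    return choices, weights
```

The multiplicative penalty makes the agent's trust concentrate exponentially fast on the experts with the fewest mistakes, which is what yields MWUM's regret guarantees.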

Research field(s)
Psychology & Cognitive Sciences

August 1, 2023

Theories in cognitive science are primarily aimed at explaining human behavior in general, appealing to universal constructs such as perception or attention. When it is considered, modeling of individual differences is typically performed by adapting model parameters. The implicit assumption of this standard approach is that people are relatively similar, employing the same basic cognitive processes in a given problem domain. In this work, we consider a broader evaluation of the way in which people may differ. We evaluate 23 models of risky choice on around 300 individuals, and find that most models—spanning various constructs from heuristic rules and attention to regret and subjective perception—explain the behavior of different subpopulations of individuals. These results may account for part of the difficulty in obtaining a single elegant explanation of behavior in some long-studied domains, and suggest a more serious consideration of individual variability in theory comparisons going forward.
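The per-individual model comparison described above can be sketched as assigning each person to whichever model fits their choices best. The model names and the assumption that fit is summarized by one per-individual log-likelihood score are hypothetical simplifications; the paper's actual evaluation may use a cross-validated or otherwise penalized criterion.

```python
def assign_best_model(log_likelihoods):
    """Assign each individual to the model that best explains their choices.

    log_likelihoods: dict mapping model name -> list of per-individual
    log-likelihoods (higher is better), all lists the same length.
    Returns the best-fitting model name for each individual.
    """
    models = list(log_likelihoods)
    n_individuals = len(log_likelihoods[models[0]])
    return [max(models, key=lambda m: log_likelihoods[m][i])
            for i in range(n_individuals)]
```

If the resulting assignments spread across many models rather than piling onto one, that is the kind of subpopulation heterogeneity the abstract reports.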

Research field(s)
Psychology & Cognitive Sciences

There is widespread agreement that delusions in clinical populations and delusion-like beliefs in the general population are, in part, caused by cognitive biases. Much of the evidence comes from two influential tasks: the Beads Task and the Bias Against Disconfirmatory Evidence Task. However, research using these tasks has been hampered by conceptual and empirical inconsistencies. In an online study, we examined relationships between delusion-like beliefs in the general population and cognitive biases associated with these tasks. Our study had four key strengths: a new animated Beads Task designed to reduce task miscomprehension, several data-quality checks to identify careless responders, a large sample (n = 1,002), and a preregistered analysis plan. When analyzing the full sample, our results replicated classic relationships between cognitive biases and delusion-like beliefs. However, when we removed 82 careless participants from the analyses (8.2% of the sample), we found that many of these relationships were severely diminished and, in some cases, eliminated outright. These results suggest that some (but not all) seemingly well-established relationships between cognitive biases and delusion-like beliefs might be artifacts of careless responding.

Research field(s)
Health Sciences, Psychology & Cognitive Sciences, Experimental Psychology

May 17, 2023

Predicting the future can bring enormous advantages. Across the ages, reliance on supernatural foresight gave way to the opinions of expert forecasters, and more recently to collective intelligence approaches that draw on many non-expert forecasters. Yet all of these approaches continue to treat individual forecasts as the key unit on which accuracy is determined. Here, we hypothesize that compromise forecasts, defined as the average prediction in a group, represent a better way to harness collective predictive intelligence. We test this by analyzing 5 years of data from the Good Judgment Project and comparing the accuracy of individual versus compromise forecasts. Furthermore, given that an accurate forecast is only useful if timely, we analyze how accuracy changes over time as the events approach. We find that compromise forecasts are more accurate and that this advantage persists through time, though its size varies. Contrary to what was expected (i.e., a monotonic improvement in forecasting accuracy as the event approaches), forecasting error for individuals and for team compromises only begins to decline around two months before the event. Overall, we offer a method of aggregating forecasts to improve accuracy, which can be straightforwardly applied in noisy real-world settings.
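The compromise forecast defined above is simply the group's mean prediction. A minimal sketch scoring it for a binary event with the Brier score; the scoring rule is an assumption for illustration, as the paper may measure accuracy differently.

```python
def brier(forecast, outcome):
    """Brier score of a probabilistic forecast of a binary event (lower is better)."""
    return (forecast - outcome) ** 2

def compromise_brier(individual_forecasts, outcome):
    """Score the compromise forecast (the group's mean prediction) and, for
    comparison, the average score of the individual forecasts."""
    n = len(individual_forecasts)
    mean_forecast = sum(individual_forecasts) / n
    mean_individual_score = sum(brier(f, outcome) for f in individual_forecasts) / n
    return brier(mean_forecast, outcome), mean_individual_score
```

Because the Brier score is convex, Jensen's inequality guarantees the compromise forecast never scores worse than the average individual forecast, which is one reason averaging is a robust aggregation rule.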

Research field(s)
Applied Sciences, Psychology & Cognitive Sciences, Experimental Psychology

August 15, 2021

Widespread evidence from psychology and neuroscience documents that previous choices unconditionally increase the later desirability of chosen objects, even if those choices were uninformative. This is problematic for economists who use choice data to estimate latent preferences, demand functions, and social welfare. The evidence on this mere choice effect, however, exhibits serious shortcomings that prevent evaluating its possible relevance for economics. In this paper, we present a novel, parsimonious experimental design that addresses these shortcomings and tests the economic validity of the mere choice effect. Our design uses well-defined monetary lotteries, all decisions are incentivized, and we effectively randomize participants' initial choices without relying on deception. A large, pre-registered online experiment finds no support for the mere choice effect. Our results challenge conventional wisdom outside economics. The mere choice effect does not seem to be a concern for economics, at least in the domain of decision making under risk.

Research field(s)
Psychology & Cognitive Sciences