Insight is our reward

Publications in Experimental Psychology by NOMIS researchers

Forms of both simple and complex machine intelligence increasingly act within human groups to affect collective outcomes. Given the nature of collective action problems, however, such involvement could paradoxically and unintentionally suppress beneficial social norms in humans, such as those involving cooperation. Here, we test theoretical predictions about this effect using a unique cyber-physical lab experiment in which online participants (N = 300 in 150 dyads) remotely drive robotic vehicles in a coordination game. We show that autobraking assistance increases human altruism, such as giving way to others, and that communication helps people make mutual concessions. In contrast, autosteering assistance completely inhibits the emergence of reciprocity between people in favor of self-interest maximization. These negative social repercussions persist even after the assistance system is deactivated. Furthermore, adding communication capabilities does not relieve this inhibition of reciprocity, because people rarely communicate in the presence of autosteering assistance. Our findings suggest that active safety assistance (a form of simple AI support) can alter the dynamics of social coordination between people, including by affecting the trade-off between individual safety and social reciprocity. The difference between autobraking and autosteering assistance appears to turn on whether the assistive technology supports or replaces human agency in social coordination dilemmas. Humans have developed norms of reciprocity to address collective challenges, but such tacit understandings could break down when machine intelligence is involved in human decision-making without any normative commitments.

Research field(s)
Experimental Psychology

NOMIS Researcher(s)

May 17, 2023

Predicting the future can bring enormous advantages. Across the ages, reliance on supernatural foreseeing was replaced by the opinions of expert forecasters, and more recently by collective intelligence approaches that draw on many non-expert forecasters. Yet all of these approaches continue to treat individual forecasts as the key unit on which accuracy is assessed. Here, we hypothesize that compromise forecasts, defined as the average prediction in a group, are a better way to harness collective predictive intelligence. We test this by analyzing 5 years of data from the Good Judgment Project and comparing the accuracy of individual versus compromise forecasts. Because an accurate forecast is useful only if timely, we also analyze how accuracy changes as the event approaches. We find that compromise forecasts are more accurate and that this advantage persists through time, though accuracy varies. Contrary to the expectation of a monotonic increase in forecasting accuracy as the event nears, forecasting error for both individuals and team compromises begins to decline only around two months before the event. Overall, we offer a method of aggregating forecasts that improves accuracy and can be straightforwardly applied in noisy real-world settings.
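
The compromise-forecast idea is simple enough to illustrate in a few lines. The sketch below is ours, not the authors' analysis pipeline: it assumes probabilistic forecasts for a single binary event, scored with the Brier score (the accuracy metric standard in forecasting tournaments such as the Good Judgment Project), and the forecast values are hypothetical.

    import numpy as np

    # Hypothetical illustration: each entry is one forecaster's probability
    # estimate for the same binary event (1 = event occurs, 0 = it does not).
    forecasts = np.array([0.9, 0.6, 0.75, 0.4, 0.8])
    outcome = 1  # suppose the event occurred

    # Compromise forecast: the plain average of the group's predictions.
    compromise = forecasts.mean()

    # Brier score (lower is better): squared error between forecast and outcome.
    def brier(p, y):
        return (p - y) ** 2

    print(f"mean individual Brier score: {brier(forecasts, outcome).mean():.3f}")  # 0.127
    print(f"compromise Brier score:      {brier(compromise, outcome):.3f}")        # 0.096

Because squared error is convex in the forecast, Jensen's inequality guarantees that for any single event the compromise forecast's Brier score is never worse than the group's average individual score; the paper's empirical question is whether, and for how long before resolution, that advantage holds up in real tournament data.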

Research field(s)
Applied Sciences, Psychology & Cognitive Sciences, Experimental Psychology