NOMIS researcher Ophelia Deroy and colleagues have published findings in iScience suggesting that people are less likely to cooperate with AI even when the AI is keen to cooperate. People are also more likely to take advantage of AI than of other humans.
We cooperate with other people despite the risk of being exploited or hurt. If future artificial intelligence (AI) systems are benevolent and cooperative towards us, what will we do in return? Here we show that our cooperative dispositions are weaker when we interact with AI. In nine experiments, humans interacted with either another human or an AI agent in four classic social-dilemma economic games and a newly designed game of Reciprocity that we introduce here. Contrary to the hypothesis that people mistrust algorithms, participants trusted their AI partners to be as cooperative as humans. However, they did not return the AI’s benevolence to the same degree and exploited the AI more than they exploited humans. These findings warn that future self-driving cars or co-working robots, whose success depends on humans reciprocating their cooperation, run the risk of being exploited. This vulnerability calls not just for smarter machines but also for better human-centered policies.
Read the iScience publication: Algorithm exploitation: humans are keen to exploit benevolent AI
The New York Times: Why A.I. Should Be Afraid of Us
Chair, Philosophy of Mind and Cognitive Neuroscience
Munich Center for Neurosciences
Diversity in Social Environments (DISE)