NOMIS Awardee Manos Tsakiris and colleagues have published findings in iScience showing that people are more likely to perceive artificially generated faces as real than actual faces. The findings have important social implications.
We might think we are good at recognizing faces, but as it turns out we cannot tell the difference between real faces and artificially generated faces of non-existent people. New research, conducted as part of the ‘Body & Image in Arts & Science’ (BIAS) project funded by the NOMIS Foundation, found that people are more likely to perceive faces generated by Generative Adversarial Networks (GANs) as real than the faces of actual people. This matters because artificial faces are becoming ubiquitous in everyday culture, so we may often interact with them without knowing it.
The research, On the Realness of People Who Do Not Exist: The Social Processing of Artificial Faces, led by Professor Manos Tsakiris from the Department of Psychology at Royal Holloway, and conducted by Dr Raffaele Tucciarelli at the Warburg Institute, School of Advanced Study, showed participants a series of faces that either belonged to real people or were artificially generated.
Participants were asked to judge whether the faces were real or not and, intriguingly, GAN faces were more likely than those of actual people to be perceived as real. The findings highlight recent advances in the technology used to generate artificial images: GAN technology has accelerated impressively over the last decade, and the faces it produces now look remarkably realistic.
GAN faces are realistic-looking, computer-generated images of people who do not exist and never existed. They are increasingly used in marketing, journalism and social media, but also for malicious purposes such as political propaganda and espionage.
The findings matter when we consider their social implications: from mainstream news to social media, from edited photos to deepfake videos, from humans to bots, and from alternative facts to fake news, people constantly have to judge the reliability of the information they see.
The ever-increasing use of fake images and videos is shifting the cultural visual landscape from being primarily truthful to being potentially deceptive. The question of who to trust becomes particularly relevant.
For that reason, the same team of researchers went on to show, in a separate study, that participants are more likely to trust the information conveyed by faces they had previously judged to be real, even if they were artificially generated.
Professor Manos Tsakiris, Director of the Centre for the Politics of Feelings at the School of Advanced Study, said: “Many have argued that one of the biggest casualties of Artificial Intelligence will be the erosion of trust in what we see and hear.
“As we show in our study, the realness that people project onto artificial faces makes them more likely to be trusted as informational sources, but later, when people realise there are AI images out there, their trust in any information given to them is drastically reduced. This could lead to people disengaging from messages given to them in the future, as they do not know who or what to trust. Educating audiences about such technologies and advancing their digital literacy may make us less gullible, but at the same time it may make us, in general, less trusting.”
Dr Raffaele Tucciarelli, who has recently completed his research at the Warburg Institute, added:
“Our results show that people’s biases in perceiving artificial faces as real can make them more gullible to information conveyed by such artificial agents. We live in times when, more than ever before in human history, we are asked to judge the realness, truthfulness and trustworthiness of our social world, and what information we decide to trust can have far-reaching consequences.”
To further understand the role that knowledge of the existence of such technology may play in people’s behaviour, the researchers conducted a third study in which half of the participants were informed about the existence of GAN faces while the other half were kept in the dark. Informed participants displayed lower overall levels of trust in their virtual interactions with all faces presented to them, independently of whether those faces were real or artificial.
The subversive and ubiquitous use of technologies that can generate realistic yet fake photos and videos can erode trust in the main informational sources we rely on, such as social media, with far-reaching consequences for our societies.
Read the iScience publication: On the realness of people who do not exist: The social processing of artificial faces