
Publications in Theory by NOMIS researchers

June 9, 2023

During political campaigns, candidates use rhetoric to advance competing visions and assessments of their country. Research reveals that the moral language used in this rhetoric can significantly influence citizens’ political attitudes and behaviors; however, the moral language actually used in the rhetoric of elites during political campaigns remains understudied. Using a data set of every tweet (N = 139,412) published by 39 US presidential candidates during the 2016 and 2020 primary elections, we extracted moral language and constructed network models illustrating how candidates’ rhetoric is semantically connected. These network models yielded two key discoveries. First, we find that party affiliation clusters can be reconstructed solely from the moral words used in candidates’ rhetoric. Within each party, popular moral values are expressed in highly similar ways, with Democrats emphasizing careful and just treatment of individuals and Republicans emphasizing in-group loyalty and respect for social hierarchies. Second, we illustrate how outsider candidates like Donald Trump can separate themselves during primaries by using moral rhetoric that differs from their parties’ common language. Our findings demonstrate the functional use of strategic moral rhetoric in a campaign context and show that unique methods of text network analysis are broadly applicable to the study of campaigns and social movements. © The Author(s) 2023. Published by Oxford University Press on behalf of National Academy of Sciences. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
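
The core of this approach, recovering party-like clusters purely from moral vocabulary, can be sketched in a few lines. The Python example below is a hypothetical illustration rather than the authors’ code: the mini-lexicon, the fictional candidates, and the similarity threshold are assumptions standing in for a full moral-foundations dictionary and the real tweet corpus.

```python
# Illustrative sketch (not the study's code): cluster candidates by the
# moral vocabulary of their tweets and look for party-like communities.
# The lexicon and tweets below are hypothetical placeholders.
from collections import Counter
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical mini-lexicon of moral-foundation words.
MORAL_WORDS = ["care", "harm", "fair", "justice",
               "loyal", "betray", "authority", "tradition"]

# Toy tweet corpora keyed by fictional candidate names.
tweets = {
    "cand_A": ["we must care for every family", "justice must be fair"],
    "cand_B": ["fair wages and care for workers", "justice for all"],
    "cand_C": ["stay loyal to our nation and tradition", "respect authority"],
    "cand_D": ["tradition and authority keep us safe", "never betray the flag"],
}

def moral_vector(docs):
    """Count moral-lexicon words across one candidate's tweets."""
    counts = Counter(word for doc in docs for word in doc.split())
    return np.array([counts[w] for w in MORAL_WORDS], dtype=float)

names = list(tweets)
vectors = np.array([moral_vector(tweets[n]) for n in names])
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)  # unit-normalize

# Build a weighted similarity network, keeping only strong ties.
G = nx.Graph()
similarity = vectors @ vectors.T
for i, a in enumerate(names):
    for j in range(i + 1, len(names)):
        if similarity[i, j] > 0.3:
            G.add_edge(a, names[j], weight=float(similarity[i, j]))

# Community detection recovers clusters from moral language alone.
for k, community in enumerate(greedy_modularity_communities(G, weight="weight")):
    print(f"cluster {k}: {sorted(community)}")
```

With this toy data, community detection splits the candidates into a care/fairness cluster and a loyalty/authority cluster, mirroring the kind of party split the paper reports from real campaign tweets.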

Research field(s)
Health Sciences, Clinical Medicine, Neurology & Neurosurgery

April 3, 2023

Nudge is a popular public policy tool that harnesses well-known biases in human judgement to subtly guide people’s decisions, often to improve their choices or to achieve some socially desirable outcome. Thanks to recent developments in artificial intelligence (AI) methods, new possibilities are emerging for how and when our decisions can be nudged. On the one hand, algorithmically personalized nudges have the potential to vastly improve human daily lives. On the other hand, blindly outsourcing the development and implementation of nudges to “black box” AI systems means that the ultimate reasons why such nudges work, that is, the underlying human cognitive processes that they harness, will often be unknown. In this paper, we unpack this concern by considering a series of examples and case studies that demonstrate how AI systems can learn to harness biases in human judgement to reach a specified goal. Drawing on an analogy from a philosophical debate concerning the methodology of economics, we call for interdisciplinary oversight of AI systems that are tasked and deployed to nudge human behaviours. © 2023, The Author(s).
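
The paper’s central worry, that an optimizer can learn to exploit a cognitive bias without any model of why the bias exists, can be illustrated with a deliberately simple sketch. The example below is not from the paper: the nudge framings, the compliance rates, and the simulated status-quo bias are hypothetical, and the epsilon-greedy bandit stands in for far more capable AI systems.

```python
# Illustrative sketch only: an epsilon-greedy bandit learns which nudge
# framing maximizes simulated compliance. The framings and rates are
# hypothetical; the agent exploits whichever works without modelling why.
import random

# Hypothetical framings with compliance rates unknown to the agent;
# "default opt-in" benefits from a simulated status-quo bias.
FRAMINGS = {"plain ask": 0.30, "social proof": 0.45, "default opt-in": 0.60}

def simulate_response(framing):
    """Simulated user: complies with a probability fixed by the framing."""
    return random.random() < FRAMINGS[framing]

def epsilon_greedy(n_rounds=5000, epsilon=0.1, seed=0):
    random.seed(seed)
    counts = {f: 0 for f in FRAMINGS}
    values = {f: 0.0 for f in FRAMINGS}  # running mean reward per framing
    for _ in range(n_rounds):
        if random.random() < epsilon:
            framing = random.choice(list(FRAMINGS))  # explore
        else:
            framing = max(values, key=values.get)    # exploit best so far
        reward = 1.0 if simulate_response(framing) else 0.0
        counts[framing] += 1
        values[framing] += (reward - values[framing]) / counts[framing]
    return values

print(epsilon_greedy())  # converges toward the bias-exploiting framing
```

The agent reliably converges on the default opt-in framing simply because it yields the most reward; the cognitive mechanism being harnessed never appears anywhere in the model, which is exactly the opacity the authors argue calls for interdisciplinary oversight.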

Research field(s)
Health Sciences, Psychology & Cognitive Sciences, Experimental Psychology