
Publications in Psychonomic Bulletin & Review by NOMIS researchers

NOMIS Researcher(s)

December 6, 2022

Why do swear words sound the way they do? Swear words are often thought to have sounds that render them especially fit for purpose, facilitating the expression of emotion and attitude. To date, however, there has been no systematic cross-linguistic investigation of phonetic patterns in profanity. In an initial pilot study, we explored statistical regularities in the sounds of swear words across a range of typologically distant languages. The best candidate for a cross-linguistic phonemic pattern in profanity was the absence of approximants (sonorous sounds like l, r, w and y). In Study 1, native speakers of various languages (Arabic, Chinese, Finnish, French, German, Spanish; N = 215) judged foreign words less likely to be swear words if they contained an approximant. In Study 2, we found that sanitized versions of English swear words – like darn instead of damn – contain significantly more approximants than the original swear words. Our findings reveal that not all sounds are equally suitable for profanity, and demonstrate that sound symbolism – wherein certain sounds are intrinsically associated with certain meanings – is more pervasive than has previously been appreciated, extending beyond denoting single concepts to serving pragmatic functions.

Research field(s)
Health Sciences, Psychology & Cognitive Sciences, Experimental Psychology

NOMIS Researcher(s)

December 1, 2020

From playing basketball to ordering at a food counter, we frequently and effortlessly coordinate our attention with others towards a common focus: we look at the ball, or point at a piece of cake. This non-verbal coordination of attention plays a fundamental role in our social lives: it ensures that we refer to the same object, develop a shared language, understand each other’s mental states, and coordinate our actions. Models of joint attention generally attribute this accomplishment to gaze coordination. But are visual attentional mechanisms sufficient to achieve joint attention in all cases? Beyond cases where visual information is missing, we show how combining vision with other senses can be helpful, and even necessary, for certain uses of joint attention. We explain the two ways in which non-visual cues contribute to joint attention: either as enhancers, when they complement gaze and pointing gestures in order to coordinate joint attention on visible objects, or as modality pointers, when joint attention needs to be shifted away from the whole object to one of its properties, say weight or texture. This multisensory approach to joint attention has important implications for social robotics, clinical diagnostics, pedagogy, and theoretical debates on the construction of a shared world.

Research field(s)
Health Sciences, Psychology & Cognitive Sciences, Experimental Psychology