Robots as Malevolent Moral Agents: Harmful Behavior Results in Dehumanization, Not Anthropomorphism

Published on July 8, 2020

Abstract
A robot’s decision to harm a person is sometimes considered the ultimate proof that it has gained a human-like mind. Here, we contrasted predictions about the attribution of mental capacities drawn from moral typecasting theory with the denial of agency described in the dehumanization literature. Experiments 1 and 2 investigated mind perception for intentionally and accidentally harmful robotic agents using text and image vignettes. Experiment 3 disambiguated agent intention (malevolent vs. benevolent) and additionally varied the type of agent (robotic vs. human) using short computer-generated animations. Harmful robotic agents were consistently attributed mental states to a lesser degree than benevolent agents, supporting the dehumanization account. Further results revealed that a human moral patient appeared to suffer less when depicted with a robotic agent than with another human. The findings suggest that future robots may become subject to human-like dehumanization mechanisms, which challenges established beliefs about anthropomorphism in the domain of moral interactions.
