Machine Impostors Trick Humans by Copying Past Interactions

Published on April 25, 2023

Imagine you’re trying to have a conversation with someone, but they keep copying everything you say and do. It’s like talking to a mirror! In a recent study, researchers created a minimal Turing test to investigate how humans communicate with machines. Instead of using words, participants communicated by moving an abstract shape in a 2D space, and their task was to decide whether they were interacting with a human partner or a bot impostor. What the researchers found was fascinating: bots that imitated the past behavior of a participant’s own partner were much harder to detect. This copying also made the interactions less conventional, leaving the human participants struggling to communicate effectively. Yet when copying blocked new conventions from forming, pairs that interacted reciprocally still managed to communicate successfully. These results show that machine impostors can trick humans by imitating past interactions and disrupting the formation of stable communication patterns, and they suggest that both reciprocity and conventionality are important strategies for successful communication. If you want to dive deeper into this intriguing research, check out the full article linked below!

Abstract
Interactions between humans and bots are increasingly common online, prompting some legislators to pass laws that require bots to disclose their identity. The Turing test is a classic thought experiment testing humans’ ability to distinguish a bot impostor from a real human based on exchanged text messages. In the current study, we propose a minimal Turing test that avoids natural language, thus allowing us to study the foundations of human communication. In particular, we investigate the relative roles of conventions and reciprocal interaction in determining successful communication. Participants in our task could communicate only by moving an abstract shape in a 2D space. We asked participants to categorize their online social interaction as being with a human partner or a bot impostor. The main hypotheses were that access to a pair’s interaction history would make a bot impostor more deceptive and would interrupt the formation of novel conventions between the human participants: by copying their previous interactions, the impostor prevents the humans from communicating successfully through repetition of what has already worked. By comparing bots that imitate behavior from the same or a different dyad, we find that impostors are harder to detect when they copy the participants’ own partners, leading to less conventional interactions. We also show that reciprocity is beneficial for communicative success when the bot impostor prevents conventionality. We conclude that machine impostors can avoid detection and interrupt the formation of stable conventions by imitating past interactions, and that both reciprocity and conventionality are adaptive strategies under the right circumstances. Our results provide new insights into the emergence of communication and suggest that online bots mining personal information, for example on social media, might more easily become indistinguishable from humans.
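To make the two impostor strategies concrete, here is a minimal sketch in Python. It is purely illustrative, not the study’s actual implementation: the class names `CopyBot` and `ReciprocalBot`, the trajectory format, and the simple move-toward-partner rule are all assumptions for the sake of the example.

```python
from dataclasses import dataclass

# A 2D position of the abstract shape, as an (x, y) pair.
Point = tuple[float, float]


@dataclass
class CopyBot:
    """Hypothetical impostor that replays a trajectory recorded from a
    pair's past interaction, ignoring whatever the partner does now."""
    recorded_trajectory: list[Point]
    _step: int = 0

    def next_move(self, partner_position: Point) -> Point:
        # Replay the past interaction step by step; the partner's
        # current position has no influence on the bot's behavior.
        move = self.recorded_trajectory[self._step % len(self.recorded_trajectory)]
        self._step += 1
        return move


@dataclass
class ReciprocalBot:
    """Hypothetical reciprocal agent whose moves are contingent on the
    partner's latest move, rather than on replayed history."""
    gain: float = 0.5

    def next_move(self, partner_position: Point) -> Point:
        # Toy reciprocity rule: move partway toward the partner.
        x, y = partner_position
        return (x * self.gain, y * self.gain)


if __name__ == "__main__":
    past = [(0.0, 0.0), (1.0, 2.0), (3.0, 1.0)]  # trajectory from an earlier interaction
    copy_bot = CopyBot(recorded_trajectory=past)
    reciprocal_bot = ReciprocalBot()

    partner = (4.0, 4.0)
    print(copy_bot.next_move(partner))        # (0.0, 0.0): replays history, ignores partner
    print(reciprocal_bot.next_move(partner))  # (2.0, 2.0): shaped by the partner's move
```

The contrast captures the abstract’s key distinction: a copy bot can look convincingly human because it reproduces behavior that really occurred between the humans, but it cannot support the back-and-forth through which new conventions form, whereas a reciprocal agent’s moves remain contingent on its partner.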

Read Full Article (External Site)
