The Elusive Deepfake: Humans Struggle to Spot Artificial Speech

Published on August 4, 2023

In the game of detecting deepfakes, humans seem to be stumped by the uncanny mimicry of artificial speech. According to recent research, listeners correctly identified computer-generated voices only 73% of the time. It’s like trying to distinguish a talented impersonator from the real deal – sometimes the lines blur and fool us completely. Notably, the difficulty was the same in both English and Mandarin. Whether it’s synthesizing Obama’s voice or cloning Mandarin speech patterns, deepfake technology seems to have mastered the art of vocal deception. As AI becomes woven into everyday life, it is crucial to develop robust countermeasures against these convincing mimics. Closer investigation of acoustic cues, linguistic patterns, and intonation could boost our chances of exposing deceptive speech. Curious minds may want to dive into the study itself to learn more about the evolving landscape of deepfake detection!

New research has found that humans were only able to detect artificially generated speech 73% of the time, with the same accuracy in both English and Mandarin.
