Can You Hear Me Now? Sensitive Comparisons of Human and Machine Perception

Published on October 17, 2022

Abstract
The rise of machine-learning systems that process sensory input has brought with it a rise in comparisons between human and machine perception. But such comparisons face a challenge: Whereas machine perception of some stimulus can often be probed through direct and explicit measures, much of human perceptual knowledge is latent, incomplete, or unavailable for explicit report. Here, we explore how this asymmetry can cause such comparisons to misestimate the overlap in human and machine perception. As a case study, we consider human perception of adversarial speech — synthetic audio commands that are recognized as valid messages by automated speech-recognition systems but that human listeners reportedly hear as meaningless noise. In five experiments, we adapt task designs from the human psychophysics literature to show that even when subjects cannot freely transcribe such speech commands (the previous benchmark for human understanding), they can sometimes demonstrate other forms of understanding, including discriminating adversarial speech from closely matched nonspeech (Experiments 1 and 2), finishing common phrases begun in adversarial speech (Experiments 3 and 4), and solving simple math problems posed in adversarial speech (Experiment 5) — even for stimuli previously described as unintelligible to human listeners. We recommend the adoption of such “sensitive tests” when comparing human and machine perception, and we discuss the broader consequences of such approaches for assessing the overlap between systems.
