The experiments reported are careful and focused: listeners heard sentences that contained either intact words or nonwords created by transposing or substituting consonants. Across tasks that probed single-word detection and whole-sentence judgment, the researchers found no sign that nonwords with transposed phonemes were treated like their base words. By contrast, the same manipulations in reading produced the well-known transposed-letter effect. That pattern points to a fundamental difference in how the brain encodes order in sound versus print, with listening favoring a tighter serial representation of phonemes.
If phoneme order is encoded more precisely in spoken language, that could shape how we design auditory interfaces, teach pronunciation, and support readers and listeners with different needs. The findings raise a practical question worth exploring further: what features of spoken input—rate, prosody, or neural timing—reinforce strict phoneme order, and how might that influence tools that aim to expand communication and comprehension for everyone?
Abstract
In the present study, we asked a simple question: Can transposed-phoneme effects, previously found with nonwords presented in isolation, be observed when the transposed-phoneme nonwords are embedded in a sequence of spoken words that, apart from the transposed-phoneme nonwords, forms a correct sentence? The results are clear-cut. We found no evidence for a transposed-phoneme effect during spoken sentence processing, either in a nonword detection task (Experiments 1−3) or in a correct/incorrect decision task (Experiment 4), where “correctness” could concern either individual words (i.e., the presence of a nonword in the sequence) or the entire sequence (i.e., a grammatical decision). Hence, nonwords in spoken sentences were no harder to detect when they were created by transposing two consonants in the corresponding base-words (e.g., /ʃoloka/ from /ʃokola/, chocolat “chocolate”) than when they were created by substituting two consonants (e.g., /ʃoropa/). In contrast, a robust transposed-letter effect was observed during sentence reading (Experiment 5), using the same word/nonword sequences and the same correct/incorrect decision task as in Experiment 4. We discuss the possibility that the greater seriality imposed by spoken sentences on the processing of spoken words leads to a more precise encoding of phoneme order, thus cancelling the transposed-phoneme effect. Sentence reading, on the other hand, would involve more parallel processing, hence the robust transposed-letter effect found with written sentences.