Imagine you’re trying to classify objects as either small or big. Could distributional semantic models, which represent words as vectors in a semantic space, perform this task accurately? To find out, researchers built a family of computational models and tested how well they classified more than 1,500 words by size and animacy. The most successful model constructed a composite representation for each extreme of the dimension (one averaging the vectors of characteristically big things, another of characteristically small things) and compared each target word to both. It classified words with human-range accuracy and even predicted response times. This suggests that when humans perform such tasks, they retrieve instances representative of the extremes of the semantic dimension in question and compare the probe against them. The finding is consistent with the instance theory of semantic memory: rather than relying on stored feature lists, we classify a word by comparing it to remembered examples.
Abstract
Semantic memory encompasses one’s knowledge about the world. Distributional semantic models, which construct vector spaces with embedded words, are a proposed framework for understanding the representational structure of human semantic knowledge. Unlike some classic semantic models, distributional semantic models lack a mechanism for specifying the properties of concepts, which raises questions regarding their utility for a general theory of semantic knowledge. Here, we develop a computational model of a binary semantic classification task, in which participants judged target words for the referent’s size or animacy. We created a family of models, evaluating multiple distributional semantic models and mechanisms for performing the classification. The most successful model constructed two composite representations, one for each extreme of the decision axis (e.g., one averaging together representations of characteristically big things and another of characteristically small things). Next, the target item was compared to each composite representation, allowing the model to classify more than 1,500 words with human-range performance and to predict response times. We propose that when making a decision on a binary semantic classification task, humans use task prompts to retrieve instances representative of the extremes on that semantic dimension and compare the probe to those instances. This proposal is consistent with the principles of the instance theory of semantic memory.
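The composite-representation mechanism described above can be sketched in a few lines of code. The sketch below is purely illustrative: the four-dimensional vectors are hypothetical stand-ins for embeddings from a real distributional semantic model (e.g., word2vec or GloVe), and the exemplar lists, function names, and the use of cosine similarity as the comparison measure are assumptions for demonstration, not the paper's exact implementation.

```python
import numpy as np

# Hypothetical toy "embeddings" standing in for vectors from a real
# distributional semantic model; real vectors would have hundreds of dimensions.
embeddings = {
    "elephant": np.array([0.9, 0.8, 0.1, 0.2]),
    "whale":    np.array([0.8, 0.9, 0.2, 0.1]),
    "ant":      np.array([0.1, 0.2, 0.9, 0.8]),
    "pebble":   np.array([0.2, 0.1, 0.8, 0.9]),
    "truck":    np.array([0.85, 0.7, 0.15, 0.2]),  # probe word to classify
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_size(probe, big_exemplars, small_exemplars):
    """Build a composite (averaged) representation for each extreme of the
    size dimension, then compare the probe to both composites."""
    big_composite = np.mean([embeddings[w] for w in big_exemplars], axis=0)
    small_composite = np.mean([embeddings[w] for w in small_exemplars], axis=0)
    sim_big = cosine(embeddings[probe], big_composite)
    sim_small = cosine(embeddings[probe], small_composite)
    # A smaller similarity margin would plausibly predict a slower response.
    label = "big" if sim_big > sim_small else "small"
    return label, abs(sim_big - sim_small)

label, margin = classify_size("truck", ["elephant", "whale"], ["ant", "pebble"])
print(label)  # "big" for these toy vectors
```

The margin between the two similarities gives a natural decision-difficulty signal, which is one way a model of this kind could generate response-time predictions.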
Dr. David Lowemann, M.Sc, Ph.D., is a co-founder of the Institute for the Future of Human Potential, where he leads the charge in pioneering Self-Enhancement Science for the Success of Society. With a keen interest in exploring the untapped potential of the human mind, Dr. Lowemann has dedicated his career to pushing the boundaries of human capabilities and understanding.