Computational Approaches to Visual Analogy: Specific vs. General

Published on September 17, 2023

Just as some people excel at specific tasks but struggle with more general ones, researchers in artificial intelligence have explored two different routes to human-like reasoning: task-specific models and domain-general mapping. In this study, scientists focused on visual analogical reasoning using images of three-dimensional objects. They compared human performance to that of two deep learning models trained specifically on these analogy problems, and to a new model they developed called the part-based comparison (PCM) model. The domain-general PCM model performed similarly to humans in key aspects of analogical reasoning, while the task-specific deep learning models fell short. This suggests that human-like analogical reasoning is unlikely to emerge simply from training specialized models on large datasets for one type of analogy. Instead, humans (and potentially machines) appear to achieve analogical reasoning by learning representations that encode structural information useful for multiple tasks and by efficiently computing relational similarity. To learn more about this fascinating research, check out the full article!

Abstract
Advances in artificial intelligence have raised a basic question about human intelligence: Is human reasoning best emulated by applying task-specific knowledge acquired from a wealth of prior experience, or is it based on the domain-general manipulation and comparison of mental representations? We address this question for the case of visual analogical reasoning. Using realistic images of familiar three-dimensional objects (cars and their parts), we systematically manipulated viewpoints, part relations, and entity properties in visual analogy problems. We compared human performance to that of two recent deep learning models (Siamese Network and Relation Network) that were directly trained to solve these problems and to apply their task-specific knowledge to analogical reasoning. We also developed a new model using part-based comparison (PCM) by applying a domain-general mapping procedure to learned representations of cars and their component parts. Across four-term analogies (Experiment 1) and open-ended analogies (Experiment 2), the domain-general PCM model, but not the task-specific deep learning models, generated performance similar in key aspects to that of human reasoners. These findings provide evidence that human-like analogical reasoning is unlikely to be achieved by applying deep learning with big data to a specific type of analogy problem. Rather, humans do (and machines might) achieve analogical reasoning by learning representations that encode structural information useful for multiple tasks, coupled with efficient computation of relational similarity.
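The abstract does not spell out how the PCM model's mapping procedure works, but the core idea of "efficient computation of relational similarity" over learned representations can be illustrated loosely. The sketch below scores candidate completions of a four-term analogy A : B :: C : D by comparing difference vectors of embeddings; the cosine-of-differences heuristic, the function name, and the toy vectors are assumptions for illustration only, not the authors' actual model.

```python
import numpy as np

def solve_four_term_analogy(emb_a, emb_b, emb_c, candidate_embs):
    """Pick the candidate D that best completes A : B :: C : D.

    Relational similarity is approximated here as cosine similarity
    between the difference vectors (B - A) and (D - C): the candidate
    whose relation to C most resembles the A-to-B relation wins.
    """
    relation_ab = emb_b - emb_a
    best_idx, best_score = -1, -np.inf
    for i, emb_d in enumerate(candidate_embs):
        relation_cd = emb_d - emb_c
        score = np.dot(relation_ab, relation_cd) / (
            np.linalg.norm(relation_ab) * np.linalg.norm(relation_cd) + 1e-9
        )
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx, best_score


# Toy usage with random feature vectors standing in for learned
# part-based object representations (purely illustrative).
rng = np.random.default_rng(0)
a, b, c = rng.normal(size=(3, 16))
candidates = [
    c + (b - a) + rng.normal(scale=0.05, size=16),  # structurally consistent answer
    rng.normal(size=16),                            # unrelated distractor
    rng.normal(size=16),                            # unrelated distractor
]
print(solve_four_term_analogy(a, b, c, candidates))  # expect index 0
```

The point of the sketch is that once representations encode the relevant structure, the comparison step itself can be a simple, domain-general similarity computation rather than a network trained end-to-end on one analogy format.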

Read Full Article (External Site)
