Analyzing Machine-Learned Representations: A Natural Language Case Study

Published on December 19, 2020

Abstract
As modern deep networks grow more complex and approach human-like capabilities in certain domains, the question arises of how the representations and decision rules they learn compare to those in humans. In this work, we study representations of sentences in one such artificial system for natural language processing. We first present a diagnostic test dataset to examine the degree of abstract, composable structure represented. Analyzing performance on these diagnostic tests indicates a lack of systematicity in the learned representations and decision rules, and reveals a set of heuristic strategies. We then investigate how the training distribution affects the learning of these heuristic strategies, and we study how the representations change under various augmentations to the training set. Our results reveal parallels to analogous representations in people. We find that these systems can learn abstract rules and generalize them to new contexts under certain circumstances, much as humans do in zero-shot reasoning. However, we also note shortcomings in this generalization behavior, akin to human judgment errors such as belief bias. Studying these parallels suggests new ways to understand psychological phenomena in humans and informs strategies for building artificial intelligence with human-like language understanding.
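To make the idea of such a diagnostic concrete, the sketch below (a hypothetical illustration, not the authors' actual dataset or generation code) shows one way probes of composable structure can be built for natural language inference: premise/hypothesis pairs that share nearly all of their words but differ in meaning because the arguments are swapped. A system relying on a word-overlap heuristic rather than sentence structure will tend to misclassify the swapped pairs. The entity list and comparative template here are assumptions chosen for illustration.

```python
# Minimal, hypothetical sketch of a compositional NLI diagnostic.
# Templates, entities, and labels are illustrative assumptions.
import itertools

ENTITIES = ["the dog", "the cat", "the horse"]
COMPARATIVE = "is bigger than"

def make_probes(a, b):
    """Build premise/hypothesis pairs with near-total word overlap.

    A model that represents composable structure should label the
    swapped-argument hypothesis a contradiction; a model using a
    word-overlap heuristic will tend to call it an entailment.
    """
    premise = f"{a} {COMPARATIVE} {b}"
    return [
        (premise, f"{a} {COMPARATIVE} {b}", "entailment"),     # identical sentence
        (premise, f"{b} {COMPARATIVE} {a}", "contradiction"),  # arguments swapped
    ]

# Generate probes for every ordered pair of distinct entities.
probes = [p for a, b in itertools.permutations(ENTITIES, 2)
          for p in make_probes(a, b)]

for premise, hypothesis, label in probes[:4]:
    print(f"{premise!r} -> {hypothesis!r}: {label}")
```

Scoring a model's accuracy separately on the identical and swapped subsets then gives a simple measure of whether its decisions track structure or surface overlap.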

Read Full Article (External Site)