The Embodied Brain of SOVEREIGN2: From Space-Variant Conscious Percepts During Visual Search and Navigation to Learning Invariant Object Categories and Cognitive-Emotional Plans for Acquiring Valued Goals

Published on June 26, 2019

This article further develops a model of how reactive and planned behaviors interact in real time. Controllers for both animals and mobile robots, or animats, need reactive mechanisms for exploration, and learned plans to reach goal objects once an environment becomes familiar. Sequences of such behaviors need to be released at appropriate times during autonomous navigation to realize valued goals. The SOVEREIGN model of Gnadt and Grossberg (2008) embodied these capabilities, and was tested in a 3D virtual reality environment. Some SOVEREIGN processes were realized algorithmically. Other processes that are needed to achieve a more comprehensive adaptive intelligence in an embodied mobile system were not included at all. This article describes recent neural models of important missing capabilities with enough detail to define a research program that can consistently incorporate them into an enhanced model called SOVEREIGN2. SOVEREIGN2 includes several interacting systems that model computationally complementary properties of the cortical What and Where processing streams, and homologous mechanisms for spatial navigation and arm movement control. Visual inputs are processed by networks that are sensitive to visual form and motion. View-, position-, and size-invariant recognition categories are learned in the What stream. Estimates of target and present position are computed in the Where stream, and can activate approach movements toward the target. Motion cues can elicit orienting movements to bring a new target into view. Approach and orienting movements are alternately performed. Cumulative estimates of each movement are derived from visual and vestibular cues. Sequences of experienced visual objects are stored in a cognitive working memory, whereas movement sequences are stored in a spatial/motor working memory. Stored sequences trigger learning of cognitive and spatial/motor list chunk categories, or plans, which together control planned movements toward valued goal objects. Predictively effective chunk combinations are selectively enhanced or suppressed via reinforcement and motivational learning by rewards and punishments, respectively. Expected vs. unexpected non-occurrences, or disconfirmations, of rewards or punishments also regulate these enhancing and suppressive processes. These several kinds of learning effect a gradual transition from variable reactive exploratory movements to efficient goal-oriented planned movement sequences.
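For readers who want a concrete feel for the sequence-learning components summarized above, the minimal Python sketch below illustrates one way those ideas can be rendered in code: an item-and-order style working memory that stores a sequence as a primacy gradient of activations, an instar-like update that learns a list chunk category from the stored pattern, and a scalar gain that stands in for reinforcement-based enhancement or suppression of a plan. This is not code from the article; every function name, parameter, and numerical value here is an illustrative assumption, and the actual model implements these processes as continuous-time neural dynamics rather than discrete updates.

    import numpy as np

    # Hypothetical illustration only: an item-and-order style working memory
    # plus a single "list chunk" learned from it. Names and constants are
    # assumptions, not the paper's implementation.

    N_ITEMS = 8          # size of the item vocabulary
    DECAY = 0.8          # successive items stored with decreasing strength
    LEARN_RATE = 0.5     # instar-like learning rate for the list chunk

    def store_sequence(item_indices, n_items=N_ITEMS, decay=DECAY):
        """Encode a sequence as a primacy gradient: earlier items get
        larger activations, so relative amplitude carries serial order."""
        wm = np.zeros(n_items)
        strength = 1.0
        for idx in item_indices:
            wm[idx] = strength
            strength *= decay
        return wm / (np.linalg.norm(wm) + 1e-12)

    def learn_list_chunk(chunk_w, wm, rate=LEARN_RATE):
        """Instar-like update: move the chunk's weight vector toward the
        stored working-memory pattern (category gating omitted for brevity)."""
        return chunk_w + rate * (wm - chunk_w)

    def chunk_response(chunk_w, wm, reward_gain=1.0):
        """Match between chunk and working memory, scaled by a gain that
        stands in for reinforcement/motivational enhancement or suppression."""
        return reward_gain * float(chunk_w @ wm)

    if __name__ == "__main__":
        plan = [2, 5, 1]                  # a short sequence of object/movement items
        wm = store_sequence(plan)
        chunk = np.zeros(N_ITEMS)
        for _ in range(10):               # repeated exposure tunes the chunk
            chunk = learn_list_chunk(chunk, wm)
        print("rewarded plan readout:", chunk_response(chunk, wm, reward_gain=1.5))
        print("punished plan readout:", chunk_response(chunk, wm, reward_gain=0.3))

Running the sketch shows the intended qualitative behavior: the same stored sequence drives a stronger plan readout when its reward gain is high and a weaker one when it is suppressed, a toy stand-in for how reinforcement learning could bias which learned plans control behavior.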

Read Full Article (External Site)