This article further develops a model of how reactive and planned behaviors interact in real time. Controllers for both animals and mobile robots, or animats, need reactive mechanisms for exploration and learned plans to reach goal objects once an environment becomes familiar. Sequences of such behaviors need to be released at appropriate times during autonomous navigation to realize valued goals. The SOVEREIGN model of Gnadt and Grossberg (2008) embodied these capabilities and was tested in a 3D virtual reality environment. Some SOVEREIGN processes were realized algorithmically, while other processes needed to achieve a more comprehensive adaptive intelligence in an embodied mobile system were not included at all. This article describes recent neural models of important missing capabilities in enough detail to define a research program that can consistently incorporate them into an enhanced model called SOVEREIGN2. SOVEREIGN2 includes several interacting systems that model computationally complementary properties of the cortical What and Where processing streams, along with homologous mechanisms for spatial navigation and arm movement control. Visual inputs are processed by networks that are sensitive to visual form and motion. View-, position-, and size-invariant recognition categories are learned in the What stream. Estimates of target and present position are computed in the Where stream and can activate approach movements toward the target. Motion cues can elicit orienting movements to bring a new target into view. Approach and orienting movements are alternately performed. Cumulative estimates of each movement are derived from visual and vestibular cues. Sequences of experienced visual objects are stored in a cognitive working memory, whereas movement sequences are stored in a spatial/motor working memory.
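The sequence storage described above can be illustrated with a minimal item-and-order working memory sketch, in which earlier items are stored at higher activity (a primacy gradient) so that read-out in order of descending activity reproduces the input sequence. All function names and parameter values here are illustrative assumptions, not the SOVEREIGN implementation.

```python
def store_sequence(items, decay=0.8):
    """Store items so earlier items keep larger activities (primacy gradient)."""
    memory = {}
    activity = 1.0
    for item in items:
        memory[item] = activity
        activity *= decay  # each later item is stored at a lower activity
    return memory

def rehearse(memory):
    """Read out items in order of descending stored activity."""
    return [item for item, _ in sorted(memory.items(), key=lambda kv: -kv[1])]

wm = store_sequence(["door", "hall", "lamp"])
print(rehearse(wm))  # recall preserves the input order
```

The same gradient scheme could serve either working memory in the model: a cognitive one storing experienced objects, or a spatial/motor one storing performed movements.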
Stored sequences trigger learning of cognitive and spatial/motor list chunk categories, or plans, which together control planned movements toward valued goal objects. Predictively effective chunk combinations are selectively enhanced or suppressed via reinforcement and motivational learning by rewards and punishments, respectively. Expected vs. unexpected non-occurrences, or disconfirmations, of rewards or punishments also regulate these enhancement and suppression processes. Together, these kinds of learning effect a gradual transition from variable reactive exploratory movements to efficient goal-oriented planned movement sequences.