A brain-inspired model for finding targets in a 3D environment

Published on November 18, 2022

Imagine searching for a hidden treasure in a three-dimensional maze. You have two key skills to help you: identifying objects and predicting where the treasure might be. That’s exactly what researchers have done with a new brain-inspired attentional search model. The model mimics the ‘what’ and ‘where/how’ pathways of the human visual system: the ‘what’ pathway handles object recognition, while the ‘where/how’ pathway predicts where to look next. To test the model, the researchers generated 3D Cluttered Cube datasets in which images are pasted on the faces of a cube. The model uses a classifier network to identify objects and a camera motion network to predict the camera’s next position. Trained with reinforcement learning and backpropagation, the model discovered effective search patterns and accurately classified target objects. This research opens up exciting possibilities for designing intelligent systems that can navigate and search efficiently in three-dimensional environments!
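
For readers who think in code, here is a minimal sketch of that search loop. It is not the authors’ code: the `model` stub below simply returns random numbers in place of the trained networks, and the angular step size, class labels, and maximum number of steps are illustrative assumptions rather than details from the paper.

```python
import numpy as np

# Minimal sketch of the camera-search loop, assuming a fixed angular step on
# the circular orbit and a pre-trained model. All numbers here are assumptions.
rng = np.random.default_rng(0)
STEP_DEG = 10.0
CLUTTER = 10                      # e.g. classes 0-9 are targets, 10 is clutter (assumed)
ACTIONS = (-1, 0, +1)             # move left, do not move, move right

def model(theta_deg):
    """Stand-in for the trained networks: returns (class logits, Q-values)."""
    return rng.normal(size=11), rng.normal(size=3)

theta = 0.0
for step in range(36):                           # at most one full orbit (assumed)
    class_logits, q_values = model(theta)
    label = int(np.argmax(class_logits))
    if label != CLUTTER:                         # target face found and classified
        print(f"step {step}: target class {label} at theta = {theta:.0f} deg")
        break
    action = ACTIONS[int(np.argmax(q_values))]   # greedy camera move
    theta = (theta + action * STEP_DEG) % 360.0
```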

We propose a brain-inspired attentional search model for target search in a 3D environment, which has two separate channels: one for object classification, analogous to the “what” pathway in the human visual system, and the other for predicting the camera’s next location, analogous to the “where” pathway. To evaluate the proposed model, we generated 3D Cluttered Cube datasets that consist of an image on one vertical face and clutter or background images on the other faces. The camera goes around each cube on a circular orbit and determines the identity of the image pasted on the face. The images pasted on the cube faces were drawn from the MNIST handwritten digit, QuickDraw, and RGB MNIST handwritten digit datasets. The attentional input, three concentric cropped windows resembling the high-resolution central fovea and low-resolution periphery of the retina, flows through a Classifier Network and a Camera Motion Network. The Classifier Network classifies the current view into one of the target classes or the clutter class. The Camera Motion Network predicts the camera’s next position on the orbit by varying the azimuthal angle θ; at each step the camera performs one of three actions: move right, move left, or do not move. The Camera-Position Network adds the camera’s current position (θ) to the higher-level features of the Classifier Network and the Camera Motion Network. The Camera Motion Network is trained using Q-learning, where the reward is 1 if the Classifier Network gives the correct classification and 0 otherwise. The total loss is the sum of the mean squared temporal-difference loss and the cross-entropy loss, and the model is trained end-to-end by backpropagating this total loss using the Adam optimizer. Results on two grayscale image datasets and one RGB image dataset show that the proposed model successfully discovers the desired search pattern to find the target face on the cube and accurately classifies the target face.
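
As a rough illustration of how the two pathways and the combined loss fit together, here is a short PyTorch-style sketch. It is not the authors’ implementation: the layer sizes, the 32×32 crop resolution, the way the three concentric crops are stacked into channels, and the discount factor are all assumptions. Only the overall structure follows the abstract: a shared encoder over the attentional input, a classification head, a Q-value head over the three camera actions, the camera angle θ added at the higher feature level, and a total loss equal to the cross-entropy loss plus the mean squared temporal-difference error, optimized with Adam.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CLASSES = 11      # e.g. 10 target classes + 1 clutter class (assumed)
N_ACTIONS = 3       # move left, do not move, move right

class TwoPathwayModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder over the three concentric "foveal" crops, stacked as
        # channels (3 crops x 3 RGB channels = 9 input channels; an assumption).
        self.encoder = nn.Sequential(
            nn.Conv2d(9, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 64 * 8 * 8            # for assumed 32x32 crops
        # Camera-Position Network: embeds theta and adds it to high-level features.
        self.theta_embed = nn.Linear(1, feat_dim)
        # "What" pathway: Classifier Network head.
        self.classifier = nn.Linear(feat_dim, N_CLASSES)
        # "Where" pathway: Camera Motion Network head (one Q-value per action).
        self.q_head = nn.Linear(feat_dim, N_ACTIONS)

    def forward(self, crops, theta):
        h = self.encoder(crops) + self.theta_embed(theta)
        return self.classifier(h), self.q_head(h)

model = TwoPathwayModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
gamma = 0.9                                  # discount factor (assumed)

# One training step on a dummy transition (s, a, r, s'); random tensors stand
# in for rendered camera views and labels.
crops, next_crops = torch.randn(4, 9, 32, 32), torch.randn(4, 9, 32, 32)
theta, next_theta = torch.rand(4, 1), torch.rand(4, 1)
true_class = torch.randint(0, N_CLASSES, (4,))
action = torch.randint(0, N_ACTIONS, (4,))

logits, q = model(crops, theta)
with torch.no_grad():
    _, q_next = model(next_crops, next_theta)

# Reward is 1 if the Classifier Network is correct, otherwise 0 (per the abstract).
reward = (logits.argmax(dim=1) == true_class).float()
td_target = reward + gamma * q_next.max(dim=1).values
q_taken = q.gather(1, action.unsqueeze(1)).squeeze(1)

# Total loss = mean squared TD error + cross-entropy classification loss.
loss = F.mse_loss(q_taken, td_target) + F.cross_entropy(logits, true_class)
opt.zero_grad()
loss.backward()
opt.step()
```

Note that the reward in this sketch is computed from the classifier’s own prediction on the dummy batch purely to keep the example self-contained; the exact state at which the paper evaluates the reward, and how transitions are collected during training, are not specified in the abstract.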

Read Full Article (External Site)
