Unlocking Emotions: EEG Signals Decode Older Adults’ Feelings Using Innovative Framework

Published on September 21, 2022

Imagine flipping through old photo albums with your grandparents and being able to tell how they're feeling just by observing their brain activity. A team of scientists has developed a method that uses electroencephalogram (EEG) signals to recognize emotions in older adults. The approach removes the muscle artifacts produced during speech and uses only eight dry electrodes for user comfort, while still measuring emotional states accurately. The proposed CTA-CNN-Bi-LSTM framework re-encodes the eight-channel EEG data with channel-temporal attention, extracts spatial information with a convolutional neural network, and learns bi-directional temporal patterns with a Bi-LSTM for robust emotion recognition. This framework surpasses earlier methods, achieving an impressive 98.75% average recognition accuracy for positive, negative, and neutral emotions. By merging modern neural networks with a focus on older adults' emotional well-being, this research fosters connections between generations and supports the mental health of our elder loved ones. To dive deeper into the study, check out the full article!

Reminiscence and conversation between older adults and younger volunteers over past photographs are very effective at improving older adults' emotional state and alleviating depression. Doing so, however, requires evaluating the older adult's emotional state during the conversation. Although the electroencephalogram (EEG) has a significantly stronger association with emotion than other physiological signals, the challenge is to eliminate muscle artifacts in the EEG during speech and to reduce the number of dry electrodes for user comfort while maintaining high emotion-recognition accuracy. We therefore proposed the CTA-CNN-Bi-LSTM emotion recognition framework. EEG signals from 8 channels (P3, P4, F3, F4, F7, F8, T7, and T8) were first processed with the MEMD-CCA method on three brain regions separately (frontal, temporal, parietal) to remove muscle artifacts, then fed into a channel-temporal attention module that estimates the weights of the channels and temporal points most relevant to positive, negative, and neutral emotions and recodes the EEG data accordingly. A convolutional neural network (CNN) module then extracted spatial information from the recoded EEG data to obtain spatial feature maps, which were sequentially input into a Bi-LSTM module to learn bi-directional temporal information for emotion recognition. Finally, four groups of experiments demonstrated that the proposed CTA-CNN-Bi-LSTM framework outperforms previous works, with the highest average recognition accuracy for positive, negative, and neutral emotions reaching 98.75%.
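To make the channel-temporal attention recoding step concrete, here is a minimal NumPy sketch. It is not the authors' implementation: the softmax scoring, the function name, and the raw score vectors (`w_ch`, `w_t`) are all hypothetical stand-ins for whatever learned parameters the real module uses. It only illustrates the idea of re-weighting an 8-channel EEG segment by per-channel and per-time-point attention before the CNN stage.

```python
import numpy as np

def channel_temporal_attention(eeg, w_ch, w_t):
    """Recode an EEG segment by channel and temporal attention.

    eeg  : (C, T) array, artifact-cleaned EEG (channels x time points)
    w_ch : (C,) raw channel relevance scores (hypothetical learned values)
    w_t  : (T,) raw temporal relevance scores (hypothetical learned values)
    """
    # Softmax turns raw scores into attention weights that sum to 1.
    a_ch = np.exp(w_ch) / np.exp(w_ch).sum()
    a_t = np.exp(w_t) / np.exp(w_t).sum()
    # Element-wise re-weighting: emphasize emotion-relevant channels/times.
    return eeg * a_ch[:, None] * a_t[None, :]

# Toy example: 8 channels (P3, P4, F3, F4, F7, F8, T7, T8), 128 time points.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 128))
recoded = channel_temporal_attention(
    eeg, rng.standard_normal(8), rng.standard_normal(128)
)
assert recoded.shape == eeg.shape  # recoding preserves the (C, T) layout
```

The recoded `(C, T)` array keeps its original shape, so it can be passed unchanged to a downstream CNN for spatial feature extraction and then to a Bi-LSTM over the time axis, mirroring the pipeline described above.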

Read Full Article (External Site)
