Integrating Multiple Imaging Techniques for Alzheimer’s Diagnosis

Published on April 28, 2022

Alzheimer’s disease (AD) is like a puzzle where we’re trying to find the missing pieces. By combining different types of brain scans, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), scientists are developing new approaches to diagnosing AD. Instead of looking at the entire brain, they zoom in on specific regions that are closely tied to the disease. In this study, researchers used a combination of a convolutional auto-encoder and a convolutional neural network to merge features from the original images with features from region-of-interest (ROI) analyses. By doing so, they were able to build a more accurate and comprehensive picture of AD. The results were promising: their method outperformed previous studies in classifying brain diseases. Further research and validation are needed, but this integrated imaging approach brings us one step closer to early and accurate diagnosis of Alzheimer’s disease.

Alzheimer’s disease (AD) is an age-related disease that affects a large proportion of the elderly. Currently, neuroimaging techniques such as magnetic resonance imaging (MRI) and positron emission tomography (PET) are promising modalities for AD diagnosis. Since not all brain regions are affected by AD, a common strategy is to study regions of interest (ROIs) that are believed to be closely related to the disease. Conventional methods used ROIs identified with handcrafted features through the Automated Anatomical Labeling (AAL) atlas rather than the original images, which may miss informative features. In addition, they trained their frameworks on discriminative patches instead of full images, in a multistage learning scheme. In this paper, we integrate the original image features from MRI and PET with their ROI features in a single learning process. Furthermore, we use the ROI features to force the network to focus on the regions most closely related to AD, thereby improving diagnostic performance. Specifically, we first obtain the ROIs from the AAL atlas, then register every ROI to its corresponding region of the original image to produce a synthetic image for each modality of every subject. We then employ a convolutional auto-encoder network to learn the synthetic image features and a convolutional neural network (CNN) to learn the original image features, concatenating the features from both networks after each convolution layer. Finally, the learned features from MRI and PET are concatenated for brain disease classification. Experiments on the ADNI-1 and ADNI-2 datasets evaluate the performance of our method, which demonstrates higher brain disease classification performance than recent studies.
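The two-branch fusion described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, the ROI "registration" is reduced to masking the original image, and a 2×2 average-pooling step stands in for the learned convolutional auto-encoder and CNN layers. What it preserves is the structure: a synthetic ROI image and the original image flow through parallel layers, with their feature maps concatenated after each one.

```python
import numpy as np

def make_synthetic_image(image, roi_mask):
    # Keep original intensities inside the (AAL-derived) ROIs, zero elsewhere.
    return image * roi_mask

def toy_layer(x):
    # Stand-in for a learned conv layer: 2x2 average pooling halves each axis.
    h, w = x.shape
    return x[: h // 2 * 2, : w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def fuse_branches(image, roi_mask, n_layers=2):
    # Run the original image (CNN branch) and its ROI-masked copy
    # (auto-encoder branch) through parallel layers, concatenating
    # the two feature maps channel-wise after every layer.
    a, b = image, make_synthetic_image(image, roi_mask)
    fused = []
    for _ in range(n_layers):
        a, b = toy_layer(a), toy_layer(b)
        fused.append(np.stack([a, b]))  # shape: (2, h, w) per layer
    return fused

rng = np.random.default_rng(0)
img = rng.random((8, 8))                             # one "modality" slice
mask = (rng.random((8, 8)) > 0.5).astype(float)      # hypothetical ROI mask
feats = fuse_branches(img, mask)
print([f.shape for f in feats])                      # [(2, 4, 4), (2, 2, 2)]
```

In the paper the same pattern is applied per modality, and the final MRI and PET feature vectors are concatenated once more before the classifier; here a single 2-D slice keeps the sketch self-contained.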

