Creating Synthetic DWI Scalar Maps with GANs from FLAIR MRI!

Published on August 2, 2023

Imagine you have a recipe book for making delicious desserts, but it takes forever to gather all the ingredients and prepare them. Well, researchers may have found a way to speed up the process of acquiring diffusion-weighted imaging (DWI) volumes by using generative adversarial networks (GANs) to produce synthetic DWI scalar maps. It’s like having a machine that can generate all the necessary ingredients for your dessert without having to go to the store! The scientists evaluated several GAN-based models and found that the pix2pix model performed best, generating high-quality DWI fractional anisotropy (FA) and mean diffusivity (MD) scalar maps from fluid-attenuated inversion recovery (FLAIR) MRI sequences. The generated maps not only match real images in structure, but they also show promise for bypassing or correcting registration in data pre-processing. This research could benefit medical professionals by offering an efficient way to supplement clinical datasets and reduce the time and resources needed to obtain DWI scalar maps.

Introduction

Acquisition and pre-processing pipelines for diffusion-weighted imaging (DWI) volumes are resource- and time-consuming. Generating synthetic DWI scalar maps from commonly acquired brain MRI sequences such as fluid-attenuated inversion recovery (FLAIR) could be useful for supplementing datasets. In this work we design and compare GAN-based image translation models for generating DWI scalar maps from FLAIR MRI for the first time.

Methods

We evaluate a pix2pix model, two modified CycleGANs using paired and unpaired data, and a convolutional autoencoder in synthesizing DWI fractional anisotropy (FA) and mean diffusivity (MD) from whole FLAIR volumes. In total, 420 FLAIR and DWI volumes (11,957 images) from multi-center dementia and vascular disease cohorts were used for training/testing. Generated images were evaluated using two groups of metrics: (1) human perception metrics including peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), and (2) structural metrics including a newly proposed histogram similarity (Hist-KL) metric and mean squared error (MSE).

Results

Pix2pix demonstrated the best performance both quantitatively and qualitatively, with mean PSNR, SSIM, and MSE of 23.41 dB, 0.80, and 0.004 for MD generation, and 24.05 dB, 0.78, and 0.004 for FA generation. The new histogram similarity metric was sensitive to differences in fine details between generated and real images, with mean pix2pix MD and FA Hist-KL values of 11.73 and 3.74, respectively. Detailed analysis of clinically relevant regions of white matter (WM) and gray matter (GM) in the pix2pix images also showed strong, significant (p < 0.001) correlations between real and synthetic FA values in both tissue types (R = 0.714 for GM, R = 0.877 for WM).

Discussion/Conclusion

Our results show that pix2pix's FA and MD models reproduced tissue structures and fine details, including WM tracts and CSF spaces, with significantly better structural similarity between real and generated images than the other models. Regional analysis of the synthetic volumes showed that synthetic DWI images can not only supplement clinical datasets but also show potential utility in bypassing or correcting registration in data pre-processing.
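To make the evaluation described above more concrete, here is a minimal Python sketch of how the reported metrics might be computed for a single real vs. synthetic FA or MD slice. The paper does not spell out its exact Hist-KL formulation or regional analysis code, so the histogram KL divergence and the mask-based Pearson correlation below are illustrative assumptions rather than the authors' implementation; the PSNR, SSIM, and MSE calls use standard scikit-image functions.

```python
# Sketch of the evaluation metrics described above, applied to one real vs. one
# synthetic DWI scalar map (e.g. an FA or MD slice normalised to [0, 1]).
# NOTE: the authors' exact Hist-KL definition is not given here; the
# histogram-based KL divergence below is an illustrative assumption.

import numpy as np
from scipy.stats import entropy, pearsonr
from skimage.metrics import (
    mean_squared_error,
    peak_signal_noise_ratio,
    structural_similarity,
)


def hist_kl(real: np.ndarray, fake: np.ndarray, bins: int = 256) -> float:
    """KL divergence between intensity histograms (assumed Hist-KL analogue)."""
    eps = 1e-10  # avoid zero-probability bins
    p, _ = np.histogram(real, bins=bins, range=(0.0, 1.0), density=True)
    q, _ = np.histogram(fake, bins=bins, range=(0.0, 1.0), density=True)
    return float(entropy(p + eps, q + eps))


def evaluate_pair(real: np.ndarray, fake: np.ndarray) -> dict:
    """Perception metrics (PSNR, SSIM) plus structural metrics (MSE, Hist-KL)."""
    return {
        "psnr_db": peak_signal_noise_ratio(real, fake, data_range=1.0),
        "ssim": structural_similarity(real, fake, data_range=1.0),
        "mse": mean_squared_error(real, fake),
        "hist_kl": hist_kl(real, fake),
    }


def regional_correlation(real: np.ndarray, fake: np.ndarray, mask: np.ndarray):
    """Pearson correlation of real vs. synthetic values inside a tissue mask
    (e.g. a WM or GM segmentation), as in the paper's regional analysis."""
    r, p = pearsonr(real[mask], fake[mask])
    return r, p


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    real_fa = rng.random((128, 128)).astype(np.float32)            # stand-in for a real FA slice
    noise = 0.05 * rng.standard_normal((128, 128)).astype(np.float32)
    fake_fa = np.clip(real_fa + noise, 0.0, 1.0)                    # stand-in for a generated slice
    wm_mask = real_fa > 0.5                                         # toy "white matter" mask
    print(evaluate_pair(real_fa, fake_fa))
    print(regional_correlation(real_fa, fake_fa, wm_mask))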

Read Full Article (External Site)
