DeepBouton: Automated Identification of Single-Neuron Axonal Boutons at the Brain-Wide Scale
Yurong Liu1,2, Lei Su1,2, Ning Li1,2, Fangfang Yin1,2, Feng Xiong1,2, Xiaomao Liu3, Hui Gong1,2* and Shaoqun Zeng1,2*
1Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China
2MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
3School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan, China
Fine morphological reconstruction of individual neurons across the entire brain is essential for mapping brain circuits. Inference of presynaptic axonal boutons, as a key part of single-neuron fine reconstruction, is critical for interpreting the patterns of neural circuit wiring schemes. However, automated bouton identification remains challenging for current neuron reconstruction tools, as they focus mainly on neurite skeleton drawing and have difficulties accurately quantifying bouton morphology. Here, we developed an automated method for recognizing single-neuron axonal boutons in whole-brain fluorescence microscopy datasets. The method is based on deep convolutional neural networks and density-peak clustering. High-dimensional feature representations of bouton morphology can be learned adaptively through convolutional networks and used for bouton recognition and subtype classification. We demonstrate that the approach is effective for detecting single-neuron boutons at the brain-wide scale for both long-range pyramidal projection neurons and local interneurons.
Introduction
Mapping neural circuits, a core goal of modern neuroscience, depends on fine morphological reconstruction of individual neurons across the whole brain, including neuronal skeleton drawing and synaptic connectivity inference (Halavi et al., 2012; Helmstaedter and Mitra, 2012). Axonal boutons in optical microscopy images are typical presynaptic structures indicative of one or more synapses (Hellwig et al., 1994; Anderson et al., 1998). Recent research by Gala et al. (2017) and Drawitsch et al. (2018) showed that axonal boutons identified by optical microscopy correlate strongly with electron microscopy data. Therefore, identifying the axonal boutons of individual neurons is critical for interpreting neural circuit wiring schemes, as boutons mark the contact sites of individual neurons and reveal how circuits are wired (Braitenberg and Schüz, 1998; Lichtman and Denk, 2011). Furthermore, bouton distribution patterns acquired at the single-neuron level, combined with neuronal arborization patterns, provide more comprehensive and finer structural information for defining cell types (Karube et al., 2004; Portera-Cailliau et al., 2005; Huang, 2014) and for simulating neural circuits (Goodman and Brette, 2008; Brüderle et al., 2009; Markram et al., 2015).
Recent progress in fluorescence sparse-labeling and large-volume fine-imaging techniques (Micheva and Smith, 2007; Rotolo et al., 2008; Osten and Margrie, 2013; Economo et al., 2016; Gong et al., 2016) has enabled the acquisition of submicron-resolution whole-brain datasets of neuronal morphology. These techniques provide detailed structural information on single neurons and their axonal boutons. However, manual counting of axonal boutons in whole-brain datasets is extremely tedious and time-consuming, given the large number and brain-wide distribution of single-neuron boutons. Accordingly, various algorithms and tools have been developed for the automated reconstruction of individual neurons (Donohue and Ascoli, 2011; Myatt et al., 2012; Peng et al., 2015). Most of these approaches extract neuronal skeletons well. However, they focus mainly on neurite tracing and cannot precisely quantify bouton morphology.
Several methods for detecting axonal boutons in light microscopy images have been proposed. Song et al. (2016) proposed a score index for quantifying axonal boutons, which uses the maximum intensity along the axon to locate boutons. Bass et al. (2017) developed an automated algorithm for detecting axonal boutons based on Gabor filters and a support vector machine in local image volumes. The principle underlying these approaches is the use of manually designed features to approximately model axonal boutons. However, such features cannot accurately describe complex bouton morphology, because many bouton-like axonal swellings arise from inhomogeneities of axonal fibers and from limited imaging quality. It is therefore difficult to distinguish boutons from non-bouton swellings using manually designed features. Further, the shapes and sizes of the boutons of individual neurons differ across brain regions and may include partially overlapping boutons, which makes bouton recognition difficult.
Considering these challenges, we propose DeepBouton, an automated method for single-neuron bouton identification in whole-brain datasets. The method comprises three key parts: neuron tree division with redundancy, initial bouton detection using density-peak clustering (Rodriguez and Laio, 2014; Cheng et al., 2016), and filtering of false positives from the initial detection via deep convolutional neural networks (LeCun et al., 2015; He et al., 2016). DeepBouton adopts a two-step recognition strategy: density-peak clustering detects underlying bouton centers, and deep convolutional networks filter out non-bouton axonal swellings from the initial detection. The method combines the adaptive feature representation of convolutional networks with the robustness of density-peak clustering, allowing it to describe bouton morphology and segment objects with various patterns, including overlapping ones. Thus, it can effectively detect axonal swellings of various morphologies and learn high-dimensional representations of bouton morphology to distinguish reliable boutons from other candidates. In addition, we developed a neuron tree division technique to process brain-wide single neurons efficiently. To validate the method, we applied it to identify the boutons of both long-range pyramidal projection neurons and local interneurons in whole-brain datasets, obtaining precision and recall rates of approximately 0.90.
Materials and Methods
The Principle of DeepBouton
DeepBouton consists of three parts: neuron tree division with redundancy, initial detection of axonal swellings, and filtering of non-bouton swellings (Figures 1A,B). First, guided by a manually traced neuronal skeleton, piecewise sub-blocks are extracted along axons with redundancy (Figure 1C). For each sub-block, the foreground image is segmented through adaptive binarization and morphological erosion. Axonal swellings are then localized in the foreground image with density-peak clustering (Figure 1D), and the detected swelling centers of all sub-blocks are merged. Finally, we designed and trained a patch-based classification convolutional network to filter out the non-bouton swellings in the initial detection (Figure 1E). An application of the method to an experimental dataset is shown in Figure 1F.
FIGURE 1
Figure 1. The principle of DeepBouton. (A) Flow diagram of DeepBouton: extract images piecewise along axons from a whole-brain dataset guided by a manually traced neuronal skeleton, segment foreground images by adaptive binarization and morphological erosion, initially detect underlying boutons using density-peak clustering, and filter non-bouton axonal swellings via a deep convolutional network. (B) Pattern graphs of DeepBouton corresponding to the flow diagram in (A). (C) Diagram of the piecewise extraction of images along axons: the axonal arbor is divided into segments with redundancy, and a tubular volume is extracted along the axonal skeleton for each segment. (D) Diagram of initial bouton detection using density-peak clustering: points with a higher signal density than their neighbors and a relatively large distance from points of higher density are recognized as centers of underlying boutons (red dots), whereas points with a higher density but a small distance are not centers (black dots labeled by arrows). (E) Filtering of non-bouton axonal swellings in the initial detection via a patch-based classification convolutional network. (F) A demonstration of the method on an experimental dataset. Scale bars in (F) represent 1 mm and 2 μm, respectively.
The two-step recognition strategy enables accurate identification of single-neuron boutons. The initial detection should capture as many underlying axonal swellings as possible, whatever their degree of swelling; a second recognition step then filters out non-bouton swellings. Initial detection is difficult because the underlying swellings vary in size and may partially overlap. We used density-peak clustering to locate swelling centers because it is robust to cluster scale and effectively splits overlapping clusters (Figure 1D). However, the initially detected swellings have diverse radii and intensities relative to neighboring axons, and a suitable recognition scale is needed to distinguish boutons from non-bouton swellings. We therefore adopted deep convolutional networks to filter false positives, owing to their ability to learn feature representations adaptively, without the manually designed features required by traditional machine learning or model-based approaches (Figure 1E). The blocking-merging strategy along axons with redundancy ensures that the method can quickly process ultra-large-volume datasets while maintaining recognition accuracy (Figure 1C).
Neuron Tree Division With Redundancy
Image Extraction
Single-neuron boutons of long-range projection neurons are generally distributed brain-wide, as their axons project across different brain regions. Therefore, we extracted piecewise sub-block images along axonal arbors guided by the reconstructed neurons. Specifically, (a) an axonal arbor is divided into several segments with redundancy; and (b) for each segment, a tubular volume with a radius of 8 × 8 × 4 voxels is extracted along the axonal skeleton from the corresponding whole-brain dataset, as depicted in Figure 1C. Foreground segmentation and initial bouton detection are performed on each sub-block, and the initially detected boutons of all segments are then merged.
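A minimal sketch of this division-and-extraction step is shown below (Python/NumPy). The segment length, overlap, and the ellipsoidal neighborhood used to realize the 8 × 8 × 4 voxel radius are illustrative assumptions, not the paper's exact settings; the skeleton is assumed to be an ordered list of (x, y, z) voxel coordinates.

```python
# Sketch: split a traced axonal skeleton into overlapping segments and build
# a tubular mask around one segment (illustrative parameter values).
import numpy as np


def split_with_redundancy(skeleton_points, segment_len=200, overlap=20):
    """Divide an ordered list of skeleton voxels into overlapping segments."""
    step = segment_len - overlap
    return [skeleton_points[i:i + segment_len]
            for i in range(0, len(skeleton_points), step)]


def tubular_mask(block_shape, segment, radius=(8, 8, 4)):
    """Boolean mask of a tube of radius (rx, ry, rz) voxels around a segment.

    block_shape is (Z, Y, X); segment is a list of (x, y, z) voxel coordinates.
    """
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in block_shape], indexing="ij")
    rx, ry, rz = radius
    mask = np.zeros(block_shape, dtype=bool)
    for (x, y, z) in segment:
        # Ellipsoidal neighborhood approximating the anisotropic voxel radius.
        mask |= (((xx - x) / rx) ** 2 + ((yy - y) / ry) ** 2
                 + ((zz - z) / rz) ** 2) <= 1.0
    return mask


# Usage: the tubular sub-block for one segment of the axonal skeleton.
# sub_block = block * tubular_mask(block.shape, segment)
```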
Foreground Segmentation
Foreground images are segmented through adaptive binarization and mild morphological erosion. Binarization is defined by the following formulation:
\[
B = \begin{cases} 1, & I > C + \mathrm{thre}_{\mathrm{binarization}} \cdot C \\ 0, & \text{otherwise} \end{cases}
\]

where I is the original image, C is the background image generated by repeated convolution with an averaging template, and thre_binarization is a threshold parameter. The threshold is easy to set so that underlying axonal bouton regions are segmented. To eliminate artifacts and noise points in the binarized images, we perform mild morphological erosion. The foreground image is defined as the element-wise product of I and B.
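For illustration, a minimal sketch of this segmentation step is given below (Python with SciPy). The averaging-kernel size, number of smoothing passes, and threshold value are placeholder assumptions rather than the paper's settings.

```python
# Sketch: adaptive binarization against a smoothed background estimate,
# followed by mild morphological erosion and masking of the original image.
import numpy as np
from scipy import ndimage


def segment_foreground(image, thre_binarization=0.5, kernel=7, passes=3):
    img = image.astype(np.float32)
    # Background C: repeated convolution with a uniform averaging template.
    background = img.copy()
    for _ in range(passes):
        background = ndimage.uniform_filter(background, size=kernel)
    # B = 1 where I > C + thre * C, else 0.
    binary = img > background * (1.0 + thre_binarization)
    # Mild morphological erosion to suppress artifacts and noise points.
    binary = ndimage.binary_erosion(binary, iterations=1)
    # Foreground image: element-wise product of I and B.
    return img * binary
```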
Initial Detection of Boutons
We locate the centers of axonal swellings in the segmented foreground images via density-peak clustering (Rodriguez and Laio, 2014; Cheng et al., 2016). The principle of density-peak clustering is to search for density peaks in the (ρ, δ) feature space (Figure 1D), where ρ is the local signal density (i.e., the Gaussian-weighted mean of local signal intensities) and δ is the corresponding minimum distance from voxels of higher density. The density peaks (i.e., centers of swellings) are characterized by a signal density ρ higher than that of their neighbors and by a relatively large distance δ; they therefore appear as isolated points in the (ρ, δ) space. Possible density peaks are thus the voxels with low feature density Λ defined in the (ρ, δ) space. The clustering method explicitly uses the minimum distance, in addition to the local signal density, to describe cluster centers. Cluster centers can therefore be identified intuitively in the density-distance space, even for multi-scale or overlapping clusters. The formulations of density-peak clustering are provided below.
Formulations of Density-Peak Clustering
The local signal density ρ of each voxel is defined as follows (Cheng et al., 2016):
\[
\rho_i = \frac{1}{Z} \sum_{j:\, \lVert p_i - p_j \rVert_2 \le R} I(p_j)\, \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left( -\frac{\lVert p_i - p_j \rVert_2^2}{2\sigma^2} \right)
\]

where I(p_i) represents the signal value of voxel p_i; σ and R are the kernel width and the window radius, respectively, of the Gaussian kernel function (R = 2σ); ||·||_2 is the 2-norm; and Z is a normalization constant. In our experiments, the kernel width σ is set to approximately one third of the average bouton radius.
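Concretely, this density can be approximated by convolving the foreground image with a Gaussian kernel truncated at R = 2σ. The sketch below (Python with SciPy) is illustrative only: the σ value and the max-based choice for the normalization constant Z are assumptions, not the paper's exact settings.

```python
# Sketch: local signal density rho as a Gaussian-weighted sum of foreground
# intensities within a window of radius R = 2*sigma.
import numpy as np
from scipy import ndimage


def local_signal_density(foreground, sigma=1.5):
    """Approximate rho by Gaussian convolution truncated at 2*sigma."""
    rho = ndimage.gaussian_filter(foreground.astype(np.float32),
                                  sigma=sigma, truncate=2.0)  # window R = 2*sigma
    return rho / (rho.max() + 1e-12)  # simple choice for the normalization Z
```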
With the density map, one can search for the minimum distance δ of each voxel according to the following formulation:
\[
\delta_i = \begin{cases} \min\limits_{j:\, \rho_j > \rho_i} \lVert p_i - p_j \rVert_2, & \text{if } \exists\, j \text{ with } \rho_j > \rho_i \\ \max\limits_{\forall j} \lVert p_i - p_j \rVert_2, & \text{if } \rho_i \text{ is the maximum density} \end{cases}
\]
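A brute-force sketch of this distance term and of peak selection is given below (Python/NumPy). The thresholds rho_thr and delta_thr are hypothetical stand-ins for the feature-density criterion Λ in the (ρ, δ) space described above.

```python
# Sketch: compute delta for every foreground voxel and select density peaks
# (candidate bouton centers) as voxels with both high rho and large delta.
import numpy as np


def delta_and_peaks(rho, foreground_mask, rho_thr=0.2, delta_thr=3.0):
    coords = np.argwhere(foreground_mask)      # (N, 3) voxel coordinates
    rho_vals = rho[foreground_mask]            # densities of those voxels
    delta = np.empty(len(coords))
    for i in range(len(coords)):
        higher = rho_vals > rho_vals[i]
        dists = np.linalg.norm(coords - coords[i], axis=1)
        if higher.any():
            delta[i] = dists[higher].min()     # nearest voxel of higher density
        else:
            delta[i] = dists.max()             # global density maximum
    peaks = coords[(rho_vals > rho_thr) & (delta > delta_thr)]
    return delta, peaks                        # peaks: candidate bouton centers
```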