Residential College | false
Status | Published
Title | Deep residual contextual and subpixel convolution network for automated neuronal structure segmentation in micro-connectomics
Authors | Xiao, Chi1,2; Hong, Bei2,3; Liu, Jing2,3; Tang, Yuanyan4; Xie, Qiwei5; Han, Hua2,3,6
Date | 2022-06
Source Publication | Computer Methods and Programs in Biomedicine |
ISSN | 0169-2607 |
Volume | 219
Pages | 106759
Abstract | Background and Objective: The goal of micro-connectomics research is to reconstruct the connectome and elucidate the mechanisms and functions of the nervous system via electron microscopy (EM). Owing to the enormous variety of neuronal structures, neuron segmentation is among the most difficult tasks in connectome reconstruction, and neuroanatomists urgently need a reliable neuronal structure segmentation method to reduce the burden of manual labeling and validation. Methods: In this article, we propose an effective deep learning method based on a deep residual contextual and subpixel convolution network to segment neuronal structures in anisotropic EM image stacks. Furthermore, lifted multicut is used for post-processing to optimize the prediction and obtain the reconstruction results. Results: On the ISBI EM segmentation challenge, the proposed method ranks near the top of the leaderboard with a Rand score of 0.98788. On the public mouse piriform cortex data set, it achieves Rand scores of 0.9562 and 0.9318 on the two test stacks. The evaluation scores of our method improve significantly on those of state-of-the-art methods. Conclusions: The proposed automatic method contributes to the development of micro-connectomics: it improves the accuracy of neuronal structure segmentation and provides neuroanatomists with an effective approach for obtaining the segmentation and reconstruction of neurons.
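As an illustration of the subpixel convolution upsampling named in the abstract, the sketch below shows how such a layer is commonly built in PyTorch with nn.PixelShuffle: a convolution expands the channel dimension by a factor of r^2, and the shuffle rearranges those channels into an r-times larger feature map. This is a minimal sketch of the general technique, not the authors' implementation; the channel counts, kernel size, and scale factor are illustrative assumptions.

```python
# Minimal subpixel (pixel-shuffle) upsampling block in PyTorch.
# NOTE: channel counts, kernel size, and scale factor are assumptions
# for illustration; they are not taken from the paper.
import torch
import torch.nn as nn

class SubpixelUpsample(nn.Module):
    """Learn out_channels * scale^2 feature channels at low resolution,
    then rearrange them into a map that is `scale` times larger."""
    def __init__(self, in_channels: int, out_channels: int, scale: int = 2):
        super().__init__()
        # The convolution trades spatial resolution for channels ...
        self.conv = nn.Conv2d(in_channels, out_channels * scale ** 2,
                              kernel_size=3, padding=1)
        # ... and PixelShuffle trades them back:
        # (B, C*r^2, H, W) -> (B, C, H*r, W*r)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.shuffle(self.conv(x))

# Usage on a dummy EM feature map: 64 channels at 128x128
# become 32 channels at 256x256.
feat = torch.randn(1, 64, 128, 128)
up = SubpixelUpsample(in_channels=64, out_channels=32, scale=2)
print(up(feat).shape)  # torch.Size([1, 32, 256, 256])
```

Unlike fixed bilinear interpolation, the rearranged filters are learned end to end, which is why subpixel convolution is a common decoder choice for dense prediction tasks such as segmentation.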
Keyword | Deep Learning ; Neuronal Structure Segmentation ; Subpixel Convolution ; Electron Microscopy ; Micro-connectomics
DOI | 10.1016/j.cmpb.2022.106759 |
Indexed By | SCIE |
Language | English
WOS Research Area | Computer Science ; Engineering ; Medical Informatics |
WOS Subject | Computer Science, Interdisciplinary Applications ; Computer Science, Theory & Methods ; Engineering, Biomedical ; Medical Informatics |
WOS ID | WOS:000821201900005 |
Scopus ID | 2-s2.0-85126855219 |
Document Type | Journal article |
Collection | DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE |
Co-First Author | Xiao, Chi |
Corresponding Author | Xie, Qiwei; Han, Hua |
Affiliation | 1. Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, China
2. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, China
3. School of Artificial Intelligence, School of Future Technology, University of Chinese Academy of Sciences, China
4. Department of Computer and Information Science, University of Macau, China
5. Data Mining Lab, Beijing University of Technology, China
6. Chinese Academy of Sciences Center for Excellence in Brain Science and Intelligence Technology, China
Recommended Citation GB/T 7714 | Xiao, Chi, Hong, Bei, Liu, Jing, et al. Deep residual contextual and subpixel convolution network for automated neuronal structure segmentation in micro-connectomics[J]. Computer Methods and Programs in Biomedicine, 2022, 219: 106759.
APA | Xiao, C., Hong, B., Liu, J., Tang, Y., Xie, Q., & Han, H. (2022). Deep residual contextual and subpixel convolution network for automated neuronal structure segmentation in micro-connectomics. Computer Methods and Programs in Biomedicine, 219, 106759.
MLA | Xiao, Chi, et al. "Deep residual contextual and subpixel convolution network for automated neuronal structure segmentation in micro-connectomics." Computer Methods and Programs in Biomedicine 219 (2022): 106759.