Congratulations to Prof. Guang-Zhong Yang, Prof. Guoyan Zheng, Prof. Dahong Qian, Prof. Jie Yang, Associate Prof. Xiaolin Huang, Assistant Prof. Yun Gu and their teams for having their papers accepted for presentation at MICCAI 2020.
Cheng Yuan, Yujin Tang, Dahong Qian. “Ovarian Cancer Prediction in Proteomic Data Using Stacked Asymmetric Convolution.” International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), 2020.
Predicting high-grade ovarian cancer from proteomic data is a clinical challenge. Doing so offers the potential for earlier intervention to increase overall survival, and can guide prophylactic ovarian removal to avoid unnecessary early menopause. In this work, we propose a model that learns to detect ovarian cancer from images generated from uterine-liquid proteomic data. The contributions of this work are two-fold. First, we propose an original method for using proteomic data without directly matching against existing protein libraries, as traditional methods do. The gray-scale peptide image generated by our method retains almost all of the information from mass spectrometry. Second, we pioneer the analysis of uterine-liquid proteomic data with deep convolutional neural networks. Specifically, we design a feature extractor consisting of stacked asymmetric convolutional layers, which can attend more effectively than symmetric convolutions to multiple compounds at different retention times and to isotopes at similar mass/charge ratios. A further novelty is identifying the patches that contribute most to improving both sensitivity and specificity. In addition, we add an auxiliary classifier module near the end of the network to push useful gradients into the lower layers and to improve convergence during training. Experimental results demonstrate the effectiveness and superiority of our model over traditional proteome analysis for high-grade ovarian cancer prediction.
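To illustrate the core idea behind stacked asymmetric convolutions (this is a minimal NumPy sketch, not the authors' code; the function names and kernel sizes are illustrative assumptions): a separable k×k kernel can be factored into a 1×k pass along one axis (e.g. mass/charge) followed by a k×1 pass along the other (e.g. retention time), so each 1D pass can specialize on one physical axis of the peptide image.

```python
import numpy as np

def conv2d_valid(img, ker):
    """Plain 'valid' 2D cross-correlation (no padding, stride 1)."""
    kh, kw = ker.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

# A rank-1 (separable) 3x3 kernel factors into a 1x3 kernel (one image
# axis) followed by a 3x1 kernel (the other axis) -- the asymmetric
# decomposition stacked convolutions exploit.
rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))   # toy gray-scale peptide image
row = rng.standard_normal((1, 3))   # 1x3: along the mass/charge axis
col = rng.standard_normal((3, 1))   # 3x1: along the retention-time axis
full = col @ row                    # the equivalent 3x3 kernel

stacked = conv2d_valid(conv2d_valid(img, row), col)
direct = conv2d_valid(img, full)
assert np.allclose(stacked, direct)
```

The equivalence only holds exactly for separable kernels; in a trained network the two 1D layers (each followed by a nonlinearity) are more expressive per parameter than this linear factorization suggests.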
Guodong Zeng, Florian Schmaranzer, Till D. Lerch, Adam Boschung, Guoyan Zheng, Jürgen Burger, Kate Gerber, Moritz Tannast, Klaus Siebenrock, Young-Jo Kim, Eduardo N. Novais, Nicolas Gerber. “Entropy Guided Unsupervised Domain Adaptation for Cross-Center Hip Cartilage Segmentation from MRI.” International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), 2020.
Hip cartilage damage is a major predictor of the clinical outcome of surgical correction for femoroacetabular impingement (FAI) and hip dysplasia. Automatic segmentation of hip cartilage is an essential first step in assessing cartilage damage status. Deep convolutional neural networks have shown great success in many automated medical image segmentation tasks, but testing on domain-shifted datasets (e.g. images obtained from different centers) can lead to severe performance losses, and creating annotations for each center is prohibitively expensive. Unsupervised Domain Adaptation (UDA) addresses this challenge by transferring knowledge from a domain with labels (source domain) to a domain without labels (target domain). In this paper, we propose an entropy-guided domain adaptation method. Specifically, we first train our model with a supervised loss on the source domain, which yields low-entropy predictions on source-like images. Two discriminators are then used to close the gap between the source and target domains by aligning feature and entropy distributions: a feature map discriminator DF and an entropy map discriminator DE. DF aligns the feature maps of the two domains, while DE pushes target segmentations toward low-entropy predictions like those obtained on the source domain. Comprehensive experiments on cross-center MRI hip cartilage segmentation show the effectiveness of this method.
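The entropy maps that DE discriminates can be sketched as follows (a minimal NumPy illustration of the general idea, not the paper's implementation; shapes and the toy logits are assumptions): each pixel's softmax distribution over classes is reduced to its Shannon entropy, so confident, source-like predictions score low and ambiguous, target-like predictions score high.

```python
import numpy as np

def entropy_map(logits):
    """Per-pixel Shannon entropy of softmax predictions; logits: (C, H, W)."""
    logits = logits - logits.max(axis=0, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=0, keepdims=True)
    return -np.sum(p * np.log(p + 1e-12), axis=0)        # shape (H, W)

# Confident (source-like) predictions yield low entropy; ambiguous
# (target-like) predictions yield high entropy -- the signal DE aligns.
confident = np.zeros((2, 4, 4))
confident[0] = 8.0                 # strongly favors class 0 everywhere
ambiguous = np.zeros((2, 4, 4))    # uniform softmax at every pixel

assert entropy_map(confident).mean() < entropy_map(ambiguous).mean()
```

Adversarial training then rewards the segmenter when its target-domain entropy maps become indistinguishable from the low-entropy source ones.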
Hanxiao Zhang, Yun Gu, Yulei Qin, Feng Yao, Guang-Zhong Yang. “Learning with Sure Data for Nodule-Level Lung Cancer Prediction.” International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), 2020.
Recent advances in deep-learning-based disease prediction from images have significantly extended the clinical capabilities of these systems. However, in certain cases (e.g. lung nodule prediction), ground-truth labels manually annotated by radiologists (unsure data) are based on subjective assessment and lack pathologically proven benchmarks (sure data) at the nodule level. To address this issue, we build a small yet definitive CT dataset (171 patients) called SCH-LND, focusing on solid lung nodules (90 benign/90 malignant cases). Under the supervision of the SCH-LND dataset, many hidden drawbacks of the unsure data used for malignancy prediction (484 solid nodules selected from the LIDC-IDRI dataset) are objectively revealed. We explain this phenomenon from the perspectives of model training and data annotation bias. Although training a commonly used model from scratch on the sure data can surpass the performance achieved with large-scale unsure data, we additionally propose two frameworks to make the best use of these cross-domain resources; among them, transfer learning is verified as an effective approach for adapting LIDC-IDRI knowledge. Results show that the proposed method achieves good nodule-level malignancy prediction performance with the small SCH-LND dataset.
Hao Zheng, Zhiguo Zhuang, Yulei Qin, Yun Gu, Jie Yang, Guang-Zhong Yang. “Weakly Supervised Deep Learning for Breast Cancer Segmentation with Coarse Annotations.” International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), 2020.
Cancer lesion segmentation plays a vital role in breast cancer diagnosis and treatment planning. Because creating labels for large medical image datasets is time-consuming, laborious, and error-prone, this paper proposes a framework that trains deep convolutional neural networks on coarse annotations generated from boundary scribbles. These coarse annotations locate the lesions but lack accurate boundary information. To mitigate the negative impact of annotation errors, we propose an adaptive weighted constrained loss that changes the weight of the task-specific penalty term according to the progress of learning. To impose further supervision on the boundaries, uncertainty-based boundary maps are generated, which better describe blurry boundaries. Validation on a dataset of 154 MRI scans yields an average Dice coefficient of 82.25%, comparable to results obtained with fine annotations, demonstrating the efficacy of the proposed approach.
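The shape of such an adaptive weighting can be sketched as follows. This is a deliberately generic illustration: the paper's actual schedule and penalty term are not specified here, so the ramp function, its midpoint, and both function names are placeholder assumptions, not the authors' formulation.

```python
import numpy as np

def adaptive_weight(epoch, total_epochs, w_max=1.0):
    """Placeholder schedule: the penalty weight ramps up as training
    stabilises, so early, possibly noisy constraints contribute little."""
    return w_max * min(1.0, epoch / (0.5 * total_epochs))

def weighted_loss(base_loss, penalty, epoch, total_epochs):
    """Base segmentation loss plus an epoch-dependent penalty term."""
    return base_loss + adaptive_weight(epoch, total_epochs) * penalty

# Early in training the boundary penalty is nearly switched off;
# late in training it is enforced at full strength.
early = weighted_loss(0.4, 0.2, epoch=1, total_epochs=100)
late = weighted_loss(0.4, 0.2, epoch=80, total_epochs=100)
assert early < late
```

The key design point is only that the penalty's influence is a function of training progress rather than a fixed hyperparameter, which is what lets the loss tolerate coarse-annotation errors early on.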
Yulei Qin, Hao Zheng, Yun Gu, Xiaolin Huang, Jie Yang, Lihui Wang, Yue-Min Zhu. “Learning Bronchiole-Sensitive Airway Segmentation CNNs by Feature Recalibration and Attention Distillation.” International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), 2020.
Training deep convolutional neural networks (CNNs) for airway segmentation is challenging because of the sparse supervisory signal caused by the severe class imbalance between long, thin airways and the background. Given the intricate, tree-like pattern of airways, a segmentation model should pay extra attention to their morphology and distribution. We propose a CNN-based airway segmentation method with superior sensitivity to tenuous peripheral bronchioles. We first present a feature recalibration module to make the best use of the learned features: spatial information is integrated so as to preserve the relative priority of activated regions, which benefits the subsequent channel-wise recalibration. An attention distillation module is then introduced to reinforce airway-specific representation learning: high-resolution attention maps with fine airway details are passed down iteratively from later layers to earlier layers to enrich context knowledge. Extensive experiments demonstrate the considerable performance gain brought by the two proposed modules. Compared with state-of-the-art methods, our method extracts many more branches while maintaining competitive overall segmentation performance.
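A common activation-based form of attention distillation can be sketched as below (a minimal NumPy illustration of the general technique, not the paper's exact module; the channel counts are arbitrary, and in a real network the deeper map would first be upsampled to the shallower layer's resolution): each feature map is collapsed into a normalized spatial attention map, and an earlier layer is penalized for deviating from a later layer's map.

```python
import numpy as np

def attention_map(feat):
    """Collapse channels (C, H, W) to a spatial attention map (H, W):
    sum of squared activations, normalized to unit Frobenius norm."""
    a = np.sum(feat ** 2, axis=0)
    return a / (np.linalg.norm(a) + 1e-12)

def distillation_loss(shallow_feat, deep_feat):
    """Squared error pushing an earlier layer's attention toward a
    later layer's attention (assumes matching spatial size here)."""
    diff = attention_map(shallow_feat) - attention_map(deep_feat)
    return np.sum(diff ** 2)

rng = np.random.default_rng(0)
shallow = rng.standard_normal((16, 8, 8))  # earlier layer, 16 channels
deep = rng.standard_normal((32, 8, 8))     # later layer, 32 channels
loss = distillation_loss(shallow, deep)
assert loss >= 0.0
```

Because the distillation target is the network's own deeper activations rather than extra labels, this supervision comes for free, which is attractive precisely where airway voxels are too sparse to supervise directly.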
Institute of Medical Robotics