Brain Tumor Image Segmentation Based on Asymmetric U-shaped Convolutional Neural Network
Authors:
Affiliation:

Author biography:

Corresponding author:

CLC number:

Funding:

National Natural Science Foundation of China (62172248); Natural Science Foundation of Shandong Province (ZR2021MF098)



Abstract:

In computer vision segmentation tasks, Transformer-based image segmentation models require large amounts of image data to reach their best performance. Medical image data, however, are far scarcer than natural images, and convolution carries a stronger inductive bias, which makes it better suited to medical imaging applications. To combine the long-range representation learning of the Transformer with the inductive bias of CNNs, this study designs a residual ConvNeXt module that mimics the Transformer's design structure: the module, built from depthwise convolution and pointwise convolution, extracts feature information while greatly reducing the number of parameters. The receptive field and feature channels are also effectively scaled and expanded to enrich the feature information. In addition, this study proposes an asymmetric 3D U-shaped network, ASUNet, for brain tumor image segmentation. In the asymmetric U-shaped structure, residual connections are used, and the output features of the last two encoders are concatenated to expand the number of channels. Finally, deep supervision is applied during upsampling, which promotes the recovery of semantic information. Experimental results on the BraTS 2020 and FeTS 2021 datasets show that the Dice scores for ET, WT, and TC reach 77.08%, 90.83%, and 83.41%, and 75.63%, 90.45%, and 84.21%, respectively. Comparative experiments show that ASUNet is fully competitive in accuracy with Transformer-based models while retaining the simplicity and efficiency of standard convolutional neural networks.
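The claim that a depthwise + pointwise (depthwise-separable) design greatly reduces the parameter count can be illustrated with simple arithmetic. The channel width (64) and 3×3×3 kernel below are illustrative assumptions, not values taken from the paper:

```python
# Illustrative parameter count: standard 3D convolution vs. a
# depthwise + pointwise pair, as used in ConvNeXt-style blocks.
# Channel width and kernel size are assumed for illustration only.

def conv3d_params(c_in, c_out, k):
    """Weights of a standard 3D convolution (bias omitted)."""
    return c_in * c_out * k ** 3

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise 3D conv (one k^3 filter per channel) followed by
    a 1x1x1 pointwise conv that mixes channels (bias omitted)."""
    depthwise = c_in * k ** 3
    pointwise = c_in * c_out
    return depthwise + pointwise

standard = conv3d_params(64, 64, 3)                # 64*64*27 = 110592
separable = depthwise_separable_params(64, 64, 3)  # 64*27 + 64*64 = 5824

print(standard, separable, round(standard / separable, 1))  # 110592 5824 19.0
```

For these assumed sizes the separable pair uses roughly 19× fewer weights than a standard 3D convolution, which is the effect the module design exploits.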

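The ET, WT, and TC results reported above are Dice scores. A minimal reference implementation of the Dice coefficient, 2|A∩B| / (|A| + |B|), on binary masks is sketched below; the toy masks are hypothetical, not data from the paper:

```python
def dice_score(pred, target):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for binary masks
    given as flat sequences of 0/1 labels."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: treated as perfect overlap
    return 2 * intersection / total

# Toy example (hypothetical masks, not data from the paper):
pred   = [1, 1, 0, 1, 0, 0]
target = [1, 0, 0, 1, 1, 0]
print(round(dice_score(pred, target), 4))  # 2*2 / (3+3) -> 0.6667
```

In practice each tumor sub-region (ET, WT, TC) is binarized separately and scored against the ground-truth mask with this formula, then averaged over the test cases.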

Citation: Liu Panpan, An Dianlong, Feng Yan. Brain tumor image segmentation based on asymmetric U-shaped convolutional neural network. Computer Systems & Applications, ( ): 1-9.

History
  • Received: 2024-01-23
  • Revised: 2024-03-05
  • Accepted:
  • Published online: 2024-07-03
  • Publication date: