Computer Systems & Applications, 2024, 33(4): 254-262
Unstructured Scene Semantic Segmentation Combining Location Attention Mechanism and Lightweight STDC Network
(1.Changzhou Vocational Institute of Textile and Garment, Changzhou 213164, China;2.School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou 213164, China)
Received:October 10, 2023    Revised:November 09, 2023
Keywords: unstructured environment; semantic segmentation; PA-ASPP; STDC
Abstract: In recent years, unstructured road segmentation has become one of the important research directions in the field of computer vision. Most existing methods are suited to structured road segmentation and cannot meet the accuracy and real-time requirements of unstructured road segmentation. To address these issues, this study improves the short-term dense concatenate (STDC) network by introducing residual connections to better fuse multi-scale semantic information. In addition, it proposes an atrous spatial pyramid pooling module with an embedded position attention module (PA-ASPP) to enhance the network's position awareness of specific regions such as roads. Experiments are conducted on two datasets, RUGD and RELLIS-3D, and the proposed method achieves a mean intersection over union (MIoU) of 50.78% and 49.96% on their respective test sets.
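The MIoU figures reported above average the per-class intersection over union (TP / (TP + FP + FN)) across classes. A minimal pure-Python sketch of the metric on flattened label sequences (the function name and the convention of skipping classes absent from both prediction and ground truth are illustrative assumptions; implementations differ on that detail):

```python
def mean_iou(pred, target, num_classes):
    """Mean intersection over union for flat lists of per-pixel labels.

    For each class c: IoU = TP / (TP + FP + FN); classes absent from
    both pred and target are skipped rather than counted as 0 or 1.
    """
    ious = []
    for c in range(num_classes):
        tp = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        fp = sum(1 for p, t in zip(pred, target) if p == c and t != c)
        fn = sum(1 for p, t in zip(pred, target) if p != c and t == c)
        union = tp + fp + fn
        if union == 0:
            continue  # class appears nowhere; excluded from the mean
        ious.append(tp / union)
    return sum(ious) / len(ious) if ious else 0.0
```

For example, with `pred = [0, 0, 1, 1]` and `target = [0, 1, 1, 1]`, class 0 has IoU 1/2 and class 1 has IoU 2/3, giving an MIoU of 7/12.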
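The abstract does not detail how the position attention module inside PA-ASPP is wired. A hypothetical PyTorch sketch in the style of the widely used position attention module from DANet, where each spatial location is re-weighted by its similarity to all other locations (all layer names and the learned residual weight `gamma` are assumptions, not the paper's exact design):

```python
import torch
import torch.nn as nn


class PositionAttention(nn.Module):
    """DANet-style position attention: attention over spatial positions.

    Sketch only; the paper's PA-ASPP embeds a position attention module
    into ASPP, but its precise structure is not given in the abstract.
    """

    def __init__(self, channels: int):
        super().__init__()
        reduced = max(channels // 8, 1)
        self.query = nn.Conv2d(channels, reduced, kernel_size=1)
        self.key = nn.Conv2d(channels, reduced, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        # Learned residual weight, initialized to 0 so the module
        # starts as an identity mapping.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # B x HW x C'
        k = self.key(x).flatten(2)                    # B x C' x HW
        attn = torch.softmax(q @ k, dim=-1)           # B x HW x HW
        v = self.value(x).flatten(2)                  # B x C x HW
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x
```

Such a module would typically be applied to the ASPP input or to its concatenated branch outputs; because `gamma` starts at zero, it initially passes features through unchanged.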
Fund programs: Jiangsu Modern Educational Technology Research Project (2021-R-88294); Jiangsu Postgraduate Research and Innovation Project (KYCX23_3169)
Citation:
CHEN Ye,YANG Chang-Chun,YANG Sen,WANG Yu-Peng,WANG Peng.Unstructured Scene Semantic Segmentation Combining Location Attention Mechanism and Lightweight STDC Network.COMPUTER SYSTEMS APPLICATIONS,2024,33(4):254-262