Deep Gait Recognition Based on Video Residual Neural Network
Author: 马玉祥 (MA Yu-Xiang), 代雪晶 (DAI Xue-Jing)
Affiliation:

Fund Project: Special Project of Basic Work of Strengthening Police by Science and Technology of the Ministry of Public Security (2016GABJC06); Fundamental Research Funds for the Central Universities (D2023001)

    Abstract:

    Gait recognition identifies individuals by their walking patterns. Currently, most gait recognition methods employ shallow neural networks for feature extraction; these perform well on indoor gait datasets but poorly on the recently released outdoor gait datasets. To address the severe challenges posed by outdoor gait datasets, this study proposes a deep gait recognition model based on a video residual neural network. In the feature extraction stage, a deep 3D convolutional neural network (3D CNN) is built from the proposed video residual blocks to extract the spatio-temporal dynamics of the entire gait sequence. Temporal pooling and horizontal pyramid mapping are then introduced to reduce the resolution of the sampled features and to extract local gait features. Training is driven by a joint loss function, and BNNeck is finally applied to balance the loss terms and adjust the feature space. Experiments are conducted on three publicly available gait datasets: the indoor CASIA-B and the outdoor GREW and Gait3D. The results show that the model outperforms other models in both accuracy and convergence speed on the outdoor gait datasets.
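    The abstract names four concrete components: 3D-convolutional video residual blocks, temporal pooling over the frame axis, horizontal pyramid mapping over the height axis, and a BNNeck head that separates the metric-learning feature from the classification logits. The paper's layer configuration is not reproduced on this page, so the PyTorch sketch below only illustrates how these pieces compose; the channel widths, pyramid scales (1, 2, 4), embedding size, and class count (74, the size of the usual CASIA-B training split) are assumed values, not the authors' settings.

    import torch
    import torch.nn as nn

    class VideoResidualBlock(nn.Module):
        """3D-convolutional residual block over (B, C, T, H, W) silhouette
        volumes. Kernel sizes and widths are assumptions, not the paper's
        reported configuration."""
        def __init__(self, in_ch, out_ch, stride=1):
            super().__init__()
            # spatial stride only, so the frame axis T keeps its length
            self.conv1 = nn.Conv3d(in_ch, out_ch, 3, stride=(1, stride, stride),
                                   padding=1, bias=False)
            self.bn1 = nn.BatchNorm3d(out_ch)
            self.conv2 = nn.Conv3d(out_ch, out_ch, 3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm3d(out_ch)
            self.relu = nn.ReLU(inplace=True)
            self.shortcut = nn.Identity()
            if stride != 1 or in_ch != out_ch:   # projection shortcut
                self.shortcut = nn.Sequential(
                    nn.Conv3d(in_ch, out_ch, 1, stride=(1, stride, stride),
                              bias=False),
                    nn.BatchNorm3d(out_ch))

        def forward(self, x):
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return self.relu(out + self.shortcut(x))

    class GaitSketch(nn.Module):
        """Backbone of video residual blocks -> temporal pooling -> horizontal
        pyramid mapping -> BNNeck; all sizes are illustrative."""
        def __init__(self, embed_dim=256, num_classes=74, scales=(1, 2, 4)):
            super().__init__()
            self.backbone = nn.Sequential(
                VideoResidualBlock(1, 32),
                VideoResidualBlock(32, 64, stride=2),
                VideoResidualBlock(64, 128, stride=2))
            self.scales = scales
            n_strips = sum(scales)               # 7 strips for scales (1, 2, 4)
            # one small FC per horizontal strip (horizontal pyramid mapping)
            self.fc = nn.ModuleList(
                [nn.Linear(128, embed_dim) for _ in range(n_strips)])
            self.bnneck = nn.BatchNorm1d(embed_dim * n_strips)   # BNNeck layer
            self.classifier = nn.Linear(embed_dim * n_strips, num_classes,
                                        bias=False)

        def forward(self, x):                    # x: (B, 1, T, H, W)
            f = self.backbone(x)                 # (B, 128, T, H/4, W/4)
            f = f.max(dim=2).values              # temporal pooling over frames
            strips, k = [], 0
            for s in self.scales:                # horizontal pyramid mapping:
                for part in f.chunk(s, dim=2):   # cut the map into s height strips
                    v = part.mean(dim=(2, 3)) + part.amax(dim=(2, 3))
                    strips.append(self.fc[k](v))
                    k += 1
            feat = torch.cat(strips, dim=1)      # embedding for the metric loss
            logits = self.classifier(self.bnneck(feat))  # softmax branch
            return feat, logits

    model = GaitSketch()
    clips = torch.randn(8, 1, 30, 64, 44)        # 8 clips of 30 silhouette frames
    feat, logits = model(clips)                  # shapes (8, 1792) and (8, 74)

    A joint loss in this arrangement typically combines a triplet loss on feat with cross-entropy on logits; the point of BNNeck is that the batch normalization between the two branches lets the metric-learning and classification objectives operate in more compatible feature spaces. How the paper weights or combines the two terms is not stated in the abstract.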

Cite this article:

马玉祥, 代雪晶. 基于视频残差神经网络的深度步态识别. 计算机系统应用, 2024, 33(4): 279-287.
(MA Yu-Xiang, DAI Xue-Jing. Deep gait recognition based on video residual neural network. Computer Systems & Applications, 2024, 33(4): 279-287.)
History
  • Received: 2023-10-03
  • Revised: 2023-11-03
  • Published online: 2024-01-17