Accelerating Graph Neural Network Training with Feature Data Sparsification
Authors: 马煜昕, 许胤龙, 李诚, 钟锦


Fund Project: National Natural Science Foundation of China (62141216); University Synergy Innovation Program of Anhui Province (GXXT-2022-045)




Abstract:

Graph neural networks (GNNs) have become an important method for processing graph data. Because of their computational complexity and the large volume of graph data, training GNNs on large-scale graphs relies on CPU-GPU cooperation and sampling-based training: the graph structure and feature data are stored in CPU memory, while sampled subgraphs and their features are transferred to the GPU for training. However, this approach suffers from a severe bottleneck in loading graph feature data, which significantly degrades end-to-end training performance, and the large memory footprint of the features severely limits the scale of graphs that can be trained. To address these problems, this study proposes a data loading approach based on input feature sparsification. It significantly reduces CPU memory usage and the volume of data transferred across the PCIe bus, greatly shortens data loading time, and thereby accelerates GNN training so that GPU computing resources can be fully utilized. Tailored to the characteristics of graph features and GNN computation, the study further proposes a sparsification method suited to graph feature data that balances compression ratio against model accuracy. Experimental evaluations are conducted on three common GNN models and three datasets of different scales, including MAG240M, one of the largest publicly available datasets. The results show that the method reduces feature size by more than an order of magnitude and achieves a 1.6–6.7x end-to-end training speedup, while model accuracy drops by less than 1%. Moreover, with only four GPUs, the GraphSAGE model can be trained on MAG240M to the target accuracy in just 40 minutes.
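To make the approach concrete, below is a minimal PyTorch sketch of the idea the abstract describes. It is not the authors' implementation: the helper names (topk_sparsify, densify) and the per-node top-k magnitude pruning criterion are illustrative assumptions. Dense node features are compressed once on the CPU into compact (values, column indices) pairs; during training, only the pairs for the sampled nodes cross the PCIe bus, and full-width feature rows are rebuilt on the GPU just before the forward pass.

    import torch

    def topk_sparsify(features: torch.Tensor, k: int):
        # Keep only the k largest-magnitude entries of each node's feature row;
        # store k values plus k column indices per node instead of D dense values.
        _, cols = features.abs().topk(k, dim=1)   # (N, k) column indices
        vals = features.gather(1, cols)           # (N, k) surviving values
        return vals, cols

    def densify(vals: torch.Tensor, cols: torch.Tensor, dim: int) -> torch.Tensor:
        # Rebuild full-width feature rows (zeros elsewhere) on vals' device.
        out = torch.zeros(vals.size(0), dim, dtype=vals.dtype, device=vals.device)
        out.scatter_(1, cols, vals)
        return out

    # Toy usage; sizes are hypothetical.
    N, D, K = 100_000, 128, 8
    features = torch.randn(N, D)              # dense features resident in CPU RAM
    vals, cols = topk_sparsify(features, K)   # compact form kept in CPU memory

    batch = torch.randint(0, N, (1024,))      # node IDs from a graph sampler
    device = "cuda" if torch.cuda.is_available() else "cpu"
    b_vals = vals[batch].to(device)           # only the compact form crosses PCIe
    b_cols = cols[batch].to(device)
    x = densify(b_vals, b_cols, D)            # mini-batch input to the GNN layers

Under these assumptions each node keeps 8 values and 8 indices instead of 128 dense values, in line with the order-of-magnitude feature-size reduction reported above; the paper's actual pruning criterion, storage layout, and index width may differ.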

Cite this article:

马煜昕, 许胤龙, 李诚, 钟锦. Accelerating graph neural network training with feature data sparsification. 计算机系统应用 (Computer Systems & Applications), 2024, 33(1): 245-253.

History
  • Received: 2023-03-16
  • Revised: 2023-04-28
  • Published online: 2023-11-24
  • Issue date: 2023-01-05