The Convolutional Neural Network (CNN), one of the principal deep learning algorithms, has been applied in many fields. Because CNN models are large in scale, structurally complex, and process large volumes of data, it is necessary to reduce their demand for computational resources. A data-parallel strategy is generally used to partition and execute such data-intensive computing tasks. However, a data-parallel strategy that ignores the characteristics of the computing tasks results in a high volume of data transmission. It is therefore essential to design a reasonable data partitioning strategy that reduces the amount of data transmission, based on an analysis of the network structure and the computing characteristics of CNN. This paper first introduces the optimization of computing tasks in a deep learning accelerator. It then presents the architecture of a deep learning accelerator based on the many-core BWDSP, designs a computing partition strategy, and compares and analyzes experimental results on VGGNet-16. The results show that the proposed optimization algorithm significantly improves data transmission performance and reduces the amount of data transmitted.
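To illustrate why a partition that matches the convolution's access pattern moves less data, the following sketch compares two hypothetical strategies for distributing one layer's input feature map across cores: broadcasting the full input to every core versus a row-wise partition with halo (boundary-row) exchange. This is an illustration under assumed parameters, not the paper's actual algorithm; the layer shape is modeled on an early VGGNet-16 layer, and the core count of 16 is an assumption.

```python
# Hypothetical illustration (not the paper's algorithm): estimate the
# inter-core data transmitted when a convolution layer's input feature map
# is distributed across cores. With a k x k kernel, a core computing a
# horizontal slice needs a halo of k // 2 extra rows per neighbour, so a
# partition that respects the convolution's access pattern transfers far
# less data than broadcasting the whole input to every core.

def broadcast_volume(h, w, c, cores):
    """Naive strategy: every core receives the full H x W x C input."""
    return h * w * c * cores

def halo_volume(h, w, c, cores, k):
    """Row-wise partition: each core receives its slice plus halo rows."""
    halo_rows = k // 2
    # Interior cores need a halo on both sides; this upper bound charges
    # two halos to every core for simplicity.
    per_core_rows = h / cores + 2 * halo_rows
    return int(per_core_rows * w * c * cores)

if __name__ == "__main__":
    # Layer shaped like an early VGGNet-16 stage: 224x224x64 input,
    # 3x3 kernel; 16 cores is an assumed configuration.
    h, w, c, cores, k = 224, 224, 64, 16, 3
    print("broadcast elements:", broadcast_volume(h, w, c, cores))
    print("row-wise elements :", halo_volume(h, w, c, cores, k))
```

Under these assumed parameters the row-wise partition transfers roughly an order of magnitude fewer elements than the broadcast, which is the kind of saving a task-aware partitioning strategy targets.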