Existing image inpainting methods suffer from visible restoration artifacts in the damaged regions, semantic discontinuity, and blurred results. To address these problems, this study proposes an inpainting method based on a novel encoder combined with a context-aware loss. The method adopts a generative adversarial network (GAN) as its basic architecture. To fully learn image features and obtain sharper inpainting results, SE-ResNet is introduced to extract effective image features. At the same time, a joint context-aware loss is proposed for training the generative network, which constrains the similarity of local features so that the inpainted image is closer to the original and looks more realistic and natural. Experiments on multiple public datasets demonstrate that the proposed method restores damaged images better than existing approaches.
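The SE (squeeze-and-excitation) mechanism in SE-ResNet reweights feature channels by a learned, input-dependent scalar, which is how it emphasizes effective features. The following is a minimal numpy sketch of a single SE block; the channel count, reduction ratio, and random weights are illustrative only and are not the paper's actual configuration.

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation block: rescales each channel of a (C, H, W)
    feature map by an input-dependent factor in (0, 1).
    w1 has shape (C//r, C) and w2 has shape (C, C//r) for reduction ratio r."""
    # Squeeze: global average pooling collapses the spatial dims -> (C,)
    z = x.mean(axis=(1, 2))
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid
    h = np.maximum(0.0, w1 @ z)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))
    # Scale: channel-wise reweighting of the original features
    return x * s[:, None, None]

# Toy usage with random weights (in SE-ResNet, w1 and w2 are learned)
rng = np.random.default_rng(0)
C, r = 32, 16
feat = rng.standard_normal((C, 8, 8))
out = se_block(feat,
               rng.standard_normal((C // r, C)),
               rng.standard_normal((C, C // r)))
print(out.shape)  # (32, 8, 8)
```

Because the sigmoid gate lies strictly in (0, 1), the block can only attenuate channels, never amplify them, which is what lets the network suppress uninformative features.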