Image inpainting techniques based on deep learning have shown significant improvements by introducing structure priors, but they still produce structural distortions or fuzzy textures for large missing areas. This is mainly because cascaded networks have an inherent disadvantage: an unreasonable structural prior from the first stage inevitably leads to severe errors in the second stage of the inpainting framework. To address this issue, an appearance-flow-based structure prior (AFSP) guided image inpainting method is proposed. In the first stage, a structure generator treats edge-preserved smooth images as the global structure of an image, and an appearance flow warps small-scale features from the input into corrupted regions. In the second stage, a texture generator using contextual attention is designed to yield high-frequency image details on top of the obtained structure prior. Compared with state-of-the-art approaches, the proposed AFSP achieves visually more realistic results. On the Places2 dataset, the most challenging benchmark with 1.8 million high-resolution images of 365 complex scenes, AFSP achieved an average peak signal-to-noise ratio 1.1731 dB higher than EdgeConnect.
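The appearance-flow idea above — warping features from known regions into the holes — can be illustrated with a minimal sketch. Everything here (the nearest-neighbour sampling, the toy one-pixel flow field) is a hypothetical simplification for illustration, not the paper's learned, bilinear warping:

```python
import numpy as np

def warp_features(features, flow):
    """Warp a feature map by an appearance flow field using nearest-neighbour
    sampling. `features` is (H, W, C); `flow` is (H, W, 2) giving, for each
    output pixel, the (dy, dx) offset of the source pixel to copy from."""
    H, W, _ = features.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, W - 1)
    return features[src_y, src_x]

# Toy example: a constant flow that samples from one pixel to the left,
# so "corrupted" columns get filled from their left neighbours.
feats = np.arange(16, dtype=float).reshape(4, 4, 1)
flow = np.zeros((4, 4, 2))
flow[..., 1] = -1.0          # sample each output pixel from x - 1
warped = warp_features(feats, flow)
```

In the actual network the flow is predicted per pixel by the structure generator, so different hole pixels can borrow features from different, semantically matching source locations.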
Tensor ring (TR) decomposition is an effective method for deep neural network (DNN) compression. However, TR decomposition has two problems: the TR ranks are usually all set equal, and selecting ranks through an iterative process is time-consuming. To address both problems, a TR network compression method based on Bayesian optimization (TR-BO) is proposed. TR-BO selects ranks via Bayesian optimization, compresses each neural network layer via TR decomposition using the ranks obtained in the previous step, and finally fine-tunes the compressed model to recover some of the performance lost to compression. Experimental results show that TR-BO achieves the best results in terms of Top-1 accuracy, parameter count, and training time. For example, with the ResNet20 network on the CIFAR-10 dataset, TR-BO-1 achieves 87.67% accuracy with a compression ratio of 13.66 and a running time of only 2.4 hours. Furthermore, TR-BO achieves state-of-the-art performance on the CIFAR-10/100 benchmarks.
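To make the TR structure concrete, here is a minimal NumPy sketch of a tensor ring: one core per tensor mode with tied boundary ranks (which is what closes the "ring"), contracted back into a full tensor with `einsum`, plus the compression ratio that rank selection trades off against accuracy. The tensor shape and the uniform rank are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical 3-way weight tensor of shape (8, 8, 8) and uniform TR rank r = 4.
shape, r = (8, 8, 8), 4

# One TR core per mode: G_k has shape (r, n_k, r); the first and last bond
# ranks are identified, closing the ring.
cores = [rng.standard_normal((r, n, r)) for n in shape]

# Contract the ring back into a full tensor: sum over the shared bond
# indices b, c and trace over the closing bond a.
full = np.einsum('aib,bjc,cka->ijk', *cores)

# Compression ratio = uncompressed parameter count / TR parameter count.
n_full = int(np.prod(shape))
n_tr = sum(c.size for c in cores)
ratio = n_full / n_tr
```

Choosing a larger `r` lowers the approximation error but shrinks `ratio`; TR-BO's contribution is to let Bayesian optimization search this rank/accuracy trade-off instead of fixing all ranks equal by hand.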
Tensor decomposition has been extensively studied for convolutional neural network (CNN) model compression. However, directly decomposing an uncompressed model into low-rank form causes unavoidable approximation error, because a pre-trained model lacks the low-rank property. In this manuscript, a CNN model compression method using an alternating constraint optimization framework (ACOF) is proposed. First, ACOF formulates tensor-decomposition-based model compression as a constrained optimization problem with low tensor-rank constraints. This optimization problem is then solved iteratively using the alternating direction method of multipliers (ADMM). During the alternating process, the uncompressed model gradually acquires the low-rank tensor property, so the approximation error of the final low-rank tensor decomposition becomes negligible. Finally, a high-performance compressed CNN is obtained by SGD-based fine-tuning. Extensive experiments on image classification show that ACOF produces compressed models with high accuracy and low computational complexity. Notably, ACOF compresses ResNet56 to 28% of its original size without an accuracy drop, and the compressed model has 1.14% higher accuracy than the learning-compression (LC) method.
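The alternating idea — gradually pulling the pre-trained weights toward the low-rank set *before* the final decomposition, so that truncation loses almost nothing — can be sketched on a single weight matrix. This uses a plain projection/averaging loop as a simplified stand-in for the full ADMM updates (no dual variables, no data-fit term), so it is illustrative only:

```python
import numpy as np

def project_low_rank(W, rank):
    """Project a matrix onto the set of matrices of rank <= `rank`
    via truncated SVD (the constraint step)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def alternate(W0, rank, rho=0.5, steps=20):
    """Alternate between projecting W onto the low-rank set (Z-step) and
    pulling W toward that projection (W-step). The tail singular values
    shrink by a factor (1 - rho) per iteration, so W converges to a
    matrix that is essentially rank <= `rank`."""
    W = W0.copy()
    for _ in range(steps):
        Z = project_low_rank(W, rank)   # constraint (projection) step
        W = (1 - rho) * W + rho * Z     # pull weights toward the low-rank set
    return W, Z

rng = np.random.default_rng(1)
W0 = rng.standard_normal((16, 16))
W, Z = alternate(W0, rank=4)

# After alternating, W is nearly equal to its own rank-4 projection, so the
# final decomposition step introduces negligible approximation error.
err = np.linalg.norm(W - project_low_rank(W, 4)) / np.linalg.norm(W)
```

In ACOF the W-step would also include the training loss (keeping the network accurate while it becomes low-rank), and the updates carry ADMM dual variables; the sketch keeps only the geometric mechanism that makes the final truncation error negligible.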