To promote super-resolution (SR) technology in real-world applications, blind SR, which couples kernel estimation with image restoration to super-resolve images under unknown degradations, has become a research focus. Most existing methods either perform the two tasks step by step, which neglects the compatibility between them, or apply the two modules repeatedly, which emphasizes cooperation but limits the adaptive development of each module. To address these issues, this paper proposes a novel training strategy, named switching the iteration, built on the Deep Alternating Network (DAN). In the first stage, an estimation module and a restoration module are optimized alternately to promote compatibility. In the second stage, the pre-trained modules are duplicated and placed alternately to form a linear structure that promotes the adaptive development of each module. Extensive experiments on isotropic Gaussian degradation datasets and irregular blur-kernel degradation datasets show that the proposed method achieves visually pleasing results and state-of-the-art performance in blind SR.
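The two training stages can be illustrated on a toy bilinear inverse problem, with closed-form least-squares updates standing in for the paper's learned estimation and restoration networks (the names `estimate`, `restore`, and `unfolded` are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
k_true = np.array([0.2, 0.5, 0.3])        # unknown "kernel" (normalized, sums to 1)
x_true = rng.random(8)                    # unknown "sharp" signal
Y = np.outer(k_true, x_true)              # observed bilinear degradation

def estimate(Y, x):
    """Estimation module: least-squares kernel given the current restoration."""
    k = Y @ x / (x @ x)
    return k / k.sum()                    # blur kernels are normalized to sum to 1

def restore(Y, k):
    """Restoration module: least-squares signal given the current kernel."""
    return Y.T @ k / (k @ k)

# Stage 1: optimize the two modules alternately so they stay compatible.
x = np.ones(8)
for _ in range(5):
    k = estimate(Y, x)
    x = restore(Y, k)

# Stage 2 (structure only): duplicate the pre-trained modules and chain them
# alternately into a linear, feed-forward pipeline of fixed depth.
def unfolded(Y, steps=3):
    x = np.ones(Y.shape[1])
    for _ in range(steps):
        x = restore(Y, estimate(Y, x))
    return x
```

In this noise-free toy, the alternation converges exactly; the point is the control flow: stage 1 is an alternating loop over shared modules, stage 2 is the same computation laid out as a linear chain of duplicated modules.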
Natural images inevitably suffer from spatially variant blur caused by relative motion between the camera and objects in the scene. We present an effective and efficient patch-wise edge-enhanced image regularization and a robust kernel-similarity constraint to perform accurate kernel estimation through coarse-to-fine iterations. The proposed adaptive regularization introduces a gradient-magnitude penalty function into total variation to preserve and enhance salient edges while smoothing out harmful subtle structures. In addition, the similarity constraint is applied to each patch, assuming no camera-rotation effects, so that erroneous kernels can be identified by measuring the similarity among the kernels of neighboring patches and replaced with well-estimated ones. Once accurate kernels are obtained, any of numerous non-blind deblurring methods can be applied to restore the image. Numerical experiments demonstrate that the proposed algorithm performs favorably, avoids ringing artifacts, and processes natural non-uniformly blurred images efficiently.
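As a rough numerical sketch (the abstract does not give the exact penalty or similarity measure, so the Gaussian-shaped weight, the correlation measure, and the 1-D neighborhood below are all assumptions): an edge-aware weight can down-scale the TV penalty where the gradient magnitude is large, and a correlation test can flag patch kernels that disagree with their neighbors and replace them:

```python
import numpy as np

def adaptive_tv_weight(grad_mag, tau=0.1):
    """Assumed Gaussian-shaped weight: near 1 on flat regions and subtle
    structures (strong smoothing), near 0 on salient edges (edges kept)."""
    return np.exp(-(grad_mag / tau) ** 2)

def kernel_similarity(k1, k2):
    """Normalized correlation between two flattened kernels."""
    a, b = k1.ravel() - k1.mean(), k2.ravel() - k2.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def fix_outlier_kernels(kernels, thresh=0.6):
    """Replace a patch kernel with its neighbors' mean when it disagrees
    (1-D neighbor list here; real image patches have 2-D neighbors)."""
    fixed = [k.copy() for k in kernels]
    for i, k in enumerate(kernels):
        nbrs = [kernels[j] for j in (i - 1, i + 1) if 0 <= j < len(kernels)]
        ref = np.mean(nbrs, axis=0)
        if kernel_similarity(k, ref) < thresh:
            fixed[i] = ref / ref.sum()   # keep the replacement normalized
    return fixed
```

The replacement step mirrors the abstract's idea that an erroneous kernel is detected by low similarity to its neighbors and substituted with a well-estimated one.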
Scene classification plays a pivotal role in remote sensing image research. Owing to challenges such as large inter-class similarity, high intra-class diversity, and substantial variations in background, spatial resolution, translation, etc., remote sensing image scene classification still urgently needs development. In this paper, we propose a novel method named deep combinative feature learning (DCFL) to extract low-level texture and high-level semantic information from different network layers. First, the feature encoder VGGNet-16 is fine-tuned for subsequent multi-scale feature extraction, and two shallow convolutional (Conv) layers are selected to build convolutional feature summing maps (CFSM), from which we extract uniform rotation-invariant LBP features to capture detailed texture. Deep semantic features from a fully-connected (FC) layer, concatenated with these shallow detailed features, constitute the deep combinative features, which are fed into a support vector machine (SVM) classifier for final classification. Extensive experiments demonstrate the competitive advantages and effectiveness of the proposed DCFL compared with state-of-the-art methods.
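The feature pipeline can be sketched in numpy: sum two shallow feature maps (CFSM), describe their texture with a uniform rotation-invariant LBP histogram, and concatenate with deep FC features. Random arrays stand in for VGGNet-16 activations, and the tiny radius-1 LBP below is an illustrative stand-in for the paper's descriptor; the final SVM (e.g., scikit-learn's `SVC`) is omitted:

```python
import numpy as np

def lbp_riu2_histogram(img, P=8):
    """Uniform rotation-invariant LBP (radius 1) histogram: uniform
    patterns (<= 2 bit transitions) map to their count of ones, all
    non-uniform patterns share one bin (P + 1)."""
    H, W = img.shape
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = np.zeros(P + 2)
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            bits = [int(img[y + dy, x + dx] >= img[y, x]) for dy, dx in offs]
            u = sum(bits[i] != bits[(i + 1) % P] for i in range(P))
            hist[sum(bits) if u <= 2 else P + 1] += 1   # riu2 code
    return hist / hist.sum()

rng = np.random.default_rng(0)
conv_a, conv_b = rng.random((32, 32)), rng.random((32, 32))
cfsm = conv_a + conv_b                  # convolutional feature summing map
fc_features = rng.random(64)            # stand-in for FC-layer semantic features
combined = np.concatenate([lbp_riu2_histogram(cfsm), fc_features])
```

The concatenated `combined` vector (texture histogram plus semantic features) is what would be passed to the SVM classifier.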