High-resolution ocean remote sensing images are of vital importance in ocean remote sensing research. However, the available ocean remote sensing images are composed of temporally averaged data, whose resolution is lower than that of instantaneous remote sensing images. In this paper, we propose a novel model architecture based on the very deep super-resolution (VDSR) model to further enhance its performance on remote-sensing image super-resolution, and we take satellite-derived sea surface temperature (SST) images, a typical kind of ocean remote sensing image, as a specific case study. Furthermore, we evaluate the peak signal-to-noise ratio (PSNR) and perceptual loss of the model when trained on natural images and on SST frames. We applied our model to the China Ocean SST, Ocean SST, and Ocean-Front databases, all of which contain remote sensing images captured by advanced very high resolution radiometers (AVHRR). Experimental results show that our model outperforms state-of-the-art models on SST frames.
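For reference, the PSNR metric used above compares a reconstruction against its ground truth as 10·log10(MAX² / MSE). A minimal NumPy sketch follows; the function name and the max_value default are illustrative, and the abstract does not specify the paper's exact evaluation pipeline.

```python
import numpy as np

def psnr(reference, reconstruction, max_value=255.0):
    """Peak signal-to-noise ratio between a ground-truth image and an
    SR reconstruction; higher is better."""
    diff = reference.astype(np.float64) - reconstruction.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_value ** 2) / mse)
```

For SST frames stored as physical temperatures rather than 8-bit intensities, max_value would be set to the dynamic range of the data instead of 255.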
In this paper, we propose an efficient single-image super-resolution (SR) method for multi-scale image texture recovery, based on a deep skip connection and multi-deconvolution network. Our method focuses on enhancing the expressive capability of the convolutional neural network so as to significantly improve the accuracy of the reconstructed high-resolution texture details. The deep skip connections (DSC) make full use of low-level information together with rich deep features, while the multi-deconvolution layers (MDL) reduce the feature dimension, lowering the computational cost incurred by deepening the network. Together, these components reconstruct high-quality SR images. Experimental results show that our proposed method achieves state-of-the-art performance.
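To make the two named ingredients concrete, here is a minimal PyTorch sketch of a block combining a skip connection with a transposed convolution (deconvolution). The class name, channel counts, and kernel sizes are assumptions for illustration, not the authors' exact DSC/MDL design.

```python
import torch
import torch.nn as nn

class SkipDeconvBlock(nn.Module):
    """Illustrative block: a residual skip connection followed by a
    transposed convolution that upsamples while halving the feature
    dimension (hyperparameters are assumed, not from the paper)."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        # Transposed convolution doubles spatial resolution (stride 2)
        # and halves the channel count, reducing later computation.
        self.up = nn.ConvTranspose2d(channels, channels // 2,
                                     kernel_size=4, stride=2, padding=1)

    def forward(self, x):
        # Skip connection: re-inject low-level features into deep features.
        x = x + self.body(x)
        return self.up(x)

x = torch.randn(1, 64, 32, 32)
y = SkipDeconvBlock(64)(x)  # -> shape (1, 32, 64, 64)
```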
Depth completion, which predicts dense depth from sparse depth, has important applications in robotics, autonomous driving, and virtual reality, and compensates for the low accuracy of monocular depth estimation. However, previous depth completion works process every depth pixel uniformly and ignore the statistical properties of the depth value distribution. In this paper, we propose a self-supervised framework that generates accurate dense depth from RGB images and sparse depth without requiring dense depth labels. We propose a novel attention-based loss that takes the statistical properties of the depth value distribution into account. We evaluate our approach on the KITTI dataset. Experimental results show that our method achieves state-of-the-art performance, and an ablation study confirms that our method effectively improves the accuracy of the results.
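The abstract does not give the paper's exact loss, but the idea of weighting pixels by the depth value distribution can be sketched generically: weight each valid pixel by the inverse frequency of its depth bin in the empirical histogram, so rare depths contribute more. Everything below (function name, bin count, L1 base loss) is an assumption for illustration.

```python
import torch

def weighted_depth_loss(pred, sparse_gt, num_bins=80, eps=1e-6):
    """Illustrative attention-style loss: pixels whose ground-truth depth
    falls in rare bins of the empirical depth histogram receive larger
    weights. A generic sketch, not the paper's exact formulation."""
    mask = sparse_gt > 0                      # valid sparse-depth pixels
    depths = sparse_gt[mask]
    lo, hi = depths.min().item(), depths.max().item()
    hist = torch.histc(depths, bins=num_bins, min=lo, max=hi)
    probs = hist / hist.sum()                 # empirical depth distribution
    # Bin index of each valid pixel.
    idx = torch.clamp(((depths - lo) / (hi - lo + eps) * num_bins).long(),
                      0, num_bins - 1)
    weights = 1.0 / (probs[idx] + eps)        # rarer depths -> higher weight
    weights = weights / weights.mean()        # keep the loss scale stable
    return (weights * (pred[mask] - depths).abs()).mean()
```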