The problem of restoring images degraded by an underwater environment is challenging, in part because light traveling underwater suffers from two combined degradations, scattering and absorption, which lead to inaccurate transmittance estimation. In this work, we propose an underwater image dehazing and color correction algorithm based on scene depth estimation. Scene depth estimation yields an accurate transmittance map and therefore a better dehazing result. The experimental results show that our approach obtains good-quality images, with a visibility enhancement comparable to or better than that of other recent methods. As for color recovery, we obtain consistent colors across different images, regardless of the water conditions. Our method not only achieves effective underwater image dehazing but also guarantees the accuracy and timeliness of the restoration results.
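As a minimal illustration of the depth-based transmittance idea, the sketch below assumes the simplified underwater formation model I = J·t + B·(1 − t) with t = exp(−β·d); the attenuation coefficient `beta`, the percentile-based background-light estimate, and the function name `dehaze_with_depth` are illustrative placeholders, not the authors' exact procedure.

```python
import numpy as np

def dehaze_with_depth(image, depth, beta=0.8, t_min=0.1):
    """Restore an underwater image given a per-pixel scene depth estimate.

    Assumes the simplified formation model I = J*t + B*(1 - t),
    with transmittance t = exp(-beta * depth).
    `image` is a float32 RGB array in [0, 1]; `depth` has the same height/width.
    """
    # Transmittance from scene depth (a per-channel beta could be used for color correction).
    t = np.exp(-beta * depth)
    t = np.clip(t, t_min, 1.0)[..., None]          # avoid division blow-up

    # Background (veiling) light: here simply the mean of the farthest pixels.
    far = depth >= np.percentile(depth, 99.9)
    B = image[far].mean(axis=0)

    # Invert the formation model to recover the scene radiance J.
    J = (image - B * (1.0 - t)) / t
    return np.clip(J, 0.0, 1.0)
```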
Blind image deblurring is a challenging problem that has drawn a lot of attention in recent years. Previous work shows that fine image details can adversely affect kernel estimation, especially when the blur kernel is large. In this paper, we focus on how to extract a salient structure suitable for kernel estimation from a single blurred image. A fast method for estimating the salient structure of an image is proposed. The image is divided into two layers of different smoothness, and the local relative-smoothness layer eliminates the image structure that adversely affects kernel estimation. Kernel estimation using this layer then obtains more accurate results. Extensive experiments show that our method is effective on some challenging examples.
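The sketch below illustrates one possible two-layer split for salient-structure extraction, assuming a Gaussian base layer and a gradient-magnitude threshold; the filter choice, the threshold `tau`, and the helper name `salient_structure` are illustrative and not the paper's exact decomposition.

```python
import cv2
import numpy as np

def salient_structure(blurred, sigma=3.0, tau=0.02):
    """Extract gradient maps suitable for kernel estimation from a blurred image.

    The image is split into a smooth base layer and a fine-detail layer; weak
    structures (low gradient magnitude) are suppressed so that they do not
    mislead the kernel estimate.  Thresholds here are illustrative.
    """
    gray = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    base = cv2.GaussianBlur(gray, (0, 0), sigma)      # smooth structure layer
    detail = gray - base                              # fine-detail layer (discarded)

    # Gradients of the structure layer only.
    gx = cv2.Sobel(base, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(base, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.sqrt(gx * gx + gy * gy)

    # Keep only salient edges; weak gradients are zeroed out.
    mask = (mag > tau).astype(np.float32)
    return gx * mask, gy * mask
```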
This paper analyzes the causes of image noise in seawater and the influence of noise on the target image of a UUV (unmanned underwater vehicle), and points out the shortcomings of existing noise suppression methods. In view of these problems, we propose a real-time noise suppression method for target images on a UUV platform. The algorithm is divided into three steps: (1) first, the image is binarized by finding an appropriate threshold based on the between-class variance; (2) then, the binary image is subjected to fast morphological processing to separate sticky noise; (3) finally, the target connected domain is labeled by the four-neighbor method, and the pixel values outside the target are gradually reduced according to the characteristics of human vision to achieve noise suppression. Experimental results show that the method preserves the edges and details of the target well, suppresses the noise, and runs fast, satisfying the accuracy and timeliness requirements of underwater video processing.
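A minimal sketch of the three steps, assuming OpenCV's Otsu threshold (between-class variance), morphological opening, and 4-connected component labeling; the attenuation applied outside the target is a placeholder rather than the paper's exact vision-based scheme.

```python
import cv2
import numpy as np

def suppress_background_noise(gray, open_ksize=3, attenuation=0.25):
    """Suppress non-target noise in a UUV target image.

    `gray` is a single-channel uint8 image.  The steps mirror the three
    stages above: Otsu (between-class variance) binarization, morphological
    opening to detach sticky noise, and 4-connected component labeling to
    isolate the target.  The attenuation outside the target is a placeholder.
    """
    # (1) Binarize with the between-class-variance (Otsu) threshold.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # (2) Morphological opening separates small sticky noise from the target.
    kernel = np.ones((open_ksize, open_ksize), np.uint8)
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

    # (3) Label 4-connected components and keep the largest one as the target.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(opened, connectivity=4)
    target = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])   # skip background label 0
    mask = (labels == target).astype(np.float32)

    # Gradually reduce pixel values outside the target instead of zeroing them.
    soft = cv2.GaussianBlur(mask, (0, 0), 5)
    return (gray.astype(np.float32) * (attenuation + (1 - attenuation) * soft)).astype(np.uint8)
```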
To address the problem that haze removal based on the dark channel prior is computationally complicated and time-consuming, we propose a fast single-image haze removal method based on a multiscale dark channel prior. To reduce the processing time on high-resolution images, we optimize the dark-channel-prior haze removal algorithm with a multiscale scheme: the transmission is estimated on a coarse-resolution image, and fast minimum filtering and fast guided filtering are used to speed up the haze removal. The computation is therefore accelerated while the good visual quality of the dehazed image is maintained.
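A rough sketch of such a multiscale pipeline, assuming grayscale erosion as the fast minimum filter and the guided filter from opencv-contrib; the scale factor, patch size, and other parameters are illustrative, not the paper's exact settings.

```python
import cv2
import numpy as np

def fast_dehaze(image, scale=0.25, patch=15, omega=0.95, t_min=0.1):
    """Multiscale dark-channel-prior dehazing sketch.

    The dark channel and transmission are estimated on a downsampled copy
    (the coarse scale), refined with a guided filter, then upsampled to
    full resolution.  Requires opencv-contrib for cv2.ximgproc.guidedFilter.
    `image` is an 8-bit BGR array.
    """
    small = cv2.resize(image, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    small_f = small.astype(np.float32) / 255.0

    # Dark channel via a fast minimum filter (grayscale erosion).
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(small_f.min(axis=2), kernel)

    # Atmospheric light from the brightest dark-channel pixel.
    idx = np.unravel_index(np.argmax(dark), dark.shape)
    A = small_f[idx]

    # Coarse transmission, refined by fast guided filtering.
    t = 1.0 - omega * cv2.erode((small_f / A).min(axis=2), kernel)
    t = cv2.ximgproc.guidedFilter(small_f, t.astype(np.float32), 30, 1e-3)

    # Upsample the refined transmission and recover the scene radiance.
    t_full = cv2.resize(t, (image.shape[1], image.shape[0]), interpolation=cv2.INTER_LINEAR)
    t_full = np.clip(t_full, t_min, 1.0)[..., None]
    J = (image.astype(np.float32) / 255.0 - A) / t_full + A
    return np.clip(J * 255.0, 0, 255).astype(np.uint8)
```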
In the process of image restoration, the restored result can differ greatly from the real image because of noise. To address this ill-posed problem, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first adds a function, the ratio of the L1 norm to the L2 norm, to the prior knowledge and takes this function as the penalty term in the high-frequency domain of the image. The function is then updated iteratively, and the iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. Since the information in the gradient domain is better suited to estimating the blur kernel, the kernel is estimated in the gradient domain; this subproblem can be solved quickly in the frequency domain with the Fast Fourier Transform. In addition, a multi-scale iterative optimization scheme is added to improve the effectiveness of the algorithm. The proposed blind deconvolution method based on the L1/L2 regularization prior in the gradient domain obtains a unique and stable solution during image restoration, preserving the edges and details of the image while ensuring the accuracy of the results.
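The sketch below illustrates the two core pieces under stated assumptions: a single ISTA step on the gradient (high-frequency) image with the L2 norm frozen at its current value, in the spirit of the normalized L1/L2 sparsity prior, and a closed-form Tikhonov kernel estimate in the gradient/frequency domain. Function names, step sizes, and weights are illustrative, not the paper's exact formulation.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def soft_threshold(x, tau):
    """Element-wise soft shrinkage used by the ISTA inner loop."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def l1_over_l2_ista_step(grad_latent, grad_blurred, kernel_freq, lam=4e-3, step=1e-3):
    """One ISTA update on the gradient (high-frequency) image.

    The non-convex L1/L2 ratio is handled by freezing the L2 norm at its
    current value, which turns the inner problem into a standard L1 one
    with threshold lam * step / ||x||_2.  All arrays share the same shape;
    `kernel_freq` is the FFT of the zero-padded kernel.
    """
    residual = np.real(ifft2(kernel_freq * fft2(grad_latent))) - grad_blurred
    grad = np.real(ifft2(np.conj(kernel_freq) * fft2(residual)))   # data-term gradient
    x = grad_latent - step * grad
    return soft_threshold(x, lam * step / (np.linalg.norm(grad_latent) + 1e-8))

def estimate_kernel(grad_latent_x, grad_latent_y, grad_blur_x, grad_blur_y, gamma=1.0):
    """Closed-form Tikhonov kernel estimate in the gradient/frequency domain.

    In practice the result is cropped to the kernel support, thresholded,
    and renormalized; those steps are omitted here.
    """
    Lx, Ly = fft2(grad_latent_x), fft2(grad_latent_y)
    Bx, By = fft2(grad_blur_x), fft2(grad_blur_y)
    num = np.conj(Lx) * Bx + np.conj(Ly) * By
    den = np.abs(Lx) ** 2 + np.abs(Ly) ** 2 + gamma
    k = np.maximum(np.real(ifft2(num / den)), 0.0)
    return k / (k.sum() + 1e-8)
```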
KEYWORDS: Point spread functions, Cameras, 3D modeling, 3D image reconstruction, Image restoration, Motion models, 3D acquisition, 3D image processing, Image processing, Motion estimation
With the increasing intelligence and automation of modern industrial production, detection and reconstruction of the 3D surface of a product has become an important technology. However, images acquired on an actual production line suffer from motion blur, which degrades the subsequent reconstruction. To solve this problem, a deblurring method based on double-view images of a moving target is proposed in this paper. The relationship between the point spread function (PSF) paths in the two views is deduced from epipolar geometry and the camera model. The experimental results show that deblurring with the PSF path solved from this geometric relationship achieves good results.
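As a small illustration of the epipolar constraint that links the PSF paths in the two views, the helper below maps a point on the blur path in one view to its epipolar line in the other, assuming a known fundamental matrix `F`; it is only an illustrative piece, not the paper's full derivation of the PSF-path relationship.

```python
import numpy as np

def epipolar_line(F, point):
    """Map a point on the PSF path in view 1 to its epipolar line in view 2.

    `F` is the 3x3 fundamental matrix between the two cameras and `point`
    is (x, y) in pixels.  The returned (a, b, c) satisfies a*x' + b*y' + c = 0
    for the corresponding point (x', y') in the second view, which constrains
    where the blur path must lie.  A purely illustrative helper.
    """
    x = np.array([point[0], point[1], 1.0])
    a, b, c = F @ x
    norm = np.hypot(a, b) + 1e-12          # normalize so distances are in pixels
    return a / norm, b / norm, c / norm
```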
We present a novel blind restoration algorithm that restores object images from real images based on maximum likelihood estimation. To estimate the unknown turbulence point spread functions (PSFs) of observed images containing the same objects, multi-frame images are used to construct an integrated likelihood function. A series of experiments on synthetic and real images illustrates that the proposed algorithm is effective in restoring real images.
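One classical maximum-likelihood estimator under Poisson noise is Richardson-Lucy; the sketch below averages its multiplicative update over several frames of the same object as an illustration of the joint-likelihood idea. The PSF estimation step is omitted, so `psfs` stands for the current PSF estimates; this is not the authors' exact estimator.

```python
import numpy as np
from scipy.signal import fftconvolve

def multiframe_rl(frames, psfs, iterations=30, eps=1e-7):
    """Multi-frame Richardson-Lucy restoration.

    Richardson-Lucy is the classical maximum-likelihood estimator under
    Poisson noise; here the multiplicative update is averaged over all
    frames sharing the same underlying object.  `frames` and `psfs` are
    lists of non-negative 2-D float arrays.
    """
    estimate = np.mean(frames, axis=0)
    for _ in range(iterations):
        update = np.zeros_like(estimate)
        for frame, psf in zip(frames, psfs):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = frame / (blurred + eps)
            update += fftconvolve(ratio, psf[::-1, ::-1], mode="same")
        estimate *= update / len(frames)
    return estimate
```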
KEYWORDS: Image processing, Point spread functions, Image quality, Deconvolution, Digital signal processing, Fourier transforms, 3D vision, Image deconvolution, Inverse problems, Medical imaging
For real-time motion deblurring, it is of utmost importance to achieve a higher processing speed while keeping about the same image quality. This paper presents a fast Richardson-Lucy motion deblurring approach that rotates the blurred image to align with the blur path. The computational time is thereby reduced sharply, because the one-dimensional Fast Fourier Transform can be used inside a one-dimensional Richardson-Lucy method. To obtain accurate results for the rotation, interpolation is incorporated to fetch the gray values. Experimental results demonstrate that the proposed approach is efficient and effective in reducing motion blur along the blur paths.
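A minimal sketch of the rotate-then-1-D-deconvolve idea, assuming a linear blur path at a known angle, circular 1-D FFT convolution, and bilinear interpolation during rotation; function names and parameters are illustrative, not the paper's exact method.

```python
import numpy as np
from scipy.ndimage import rotate

def rl_1d(rows, psf_1d, iterations=20, eps=1e-7):
    """Row-wise 1-D Richardson-Lucy using 1-D FFT (circular) convolution.

    `psf_1d` is a normalized 1-D blur kernel shorter than the row length.
    """
    n = rows.shape[1]
    psf = np.zeros(n)
    psf[:len(psf_1d)] = psf_1d
    psf = np.roll(psf, -(len(psf_1d) // 2))          # center the kernel
    P = np.fft.fft(psf)
    estimate = rows.copy()
    for _ in range(iterations):
        blurred = np.real(np.fft.ifft(np.fft.fft(estimate, axis=1) * P, axis=1))
        ratio = rows / (blurred + eps)
        corr = np.real(np.fft.ifft(np.fft.fft(ratio, axis=1) * np.conj(P), axis=1))
        estimate *= corr
    return estimate

def deblur_along_path(image, psf_1d, angle_deg):
    """Rotate so the (assumed linear) blur path is horizontal, deblur each row
    with 1-D Richardson-Lucy, then rotate back.  Bilinear interpolation
    (order=1) fetches the gray values during both rotations."""
    rotated = rotate(image.astype(np.float64), angle_deg, reshape=False, order=1, mode="nearest")
    deblurred = rl_1d(rotated, psf_1d)
    return rotate(deblurred, -angle_deg, reshape=False, order=1, mode="nearest")
```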
Blind image deblurring is an important issue. In this paper, we focus on solving it with a constrained regularization method. Motivated by the importance of edges to visual perception, an edge-enhancing indicator is introduced to constrain the total variation regularization, and a bilateral filter is used for edge-preserving smoothing. The proposed edge-enhancing regularization method aims to smooth preferentially within each region while preserving edges. Experiments on simulated and real motion-blurred images show that the proposed method is competitive with recent state-of-the-art total variation methods.
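The helper below sketches one common form of edge-enhancing indicator, computed from the gradients of a bilaterally filtered image so that the total variation penalty is down-weighted on strong edges; the specific weighting 1/(1 + |∇I|²/k²) and its parameters are illustrative assumptions, not necessarily the paper's indicator.

```python
import cv2
import numpy as np

def edge_indicator(gray, k=0.1, d=9, sigma_color=75, sigma_space=75):
    """Edge-enhancing indicator: ~1 in flat regions, ~0 on strong edges.

    `gray` is a single-channel uint8 image.  The bilateral filter provides
    edge-preserving smoothing before the gradients are taken; the indicator
    then weights the TV term as sum(w * |grad(u)|), so edges (w ~ 0) are
    barely penalized while flat regions are smoothed.  Values are illustrative.
    """
    smooth = cv2.bilateralFilter(gray, d, sigma_color, sigma_space).astype(np.float32) / 255.0
    gx = cv2.Sobel(smooth, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(smooth, cv2.CV_32F, 0, 1, ksize=3)
    return 1.0 / (1.0 + (gx ** 2 + gy ** 2) / (k ** 2))
```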
KEYWORDS: Transform theory, Sodium, Direct methods, MATLAB, Image processing, Convolution, Information technology, Medical imaging, Data processing, Digital signal processing
This paper introduces and evaluates a new algorithm for the computation of type-III discrete Hartley transforms (DHT) of length N = 2^n. The length-N type-III DHT is decomposed into several length-16 type-III DHTs based on the radix-2 fast algorithm, and each length-16 type-III DHT is computed by first-order moments. The algorithm saves a large number of arithmetic operations, its computational complexity is lower than that of several existing methods, and it can be easily implemented.
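For reference, the direct O(N²) evaluation below fixes one common convention for the type-III DHT kernel, cas(πn(2k+1)/N); the fast radix-2 / first-order-moment decomposition described above would replace this direct evaluation. The convention is an assumption, since DHT type numbering varies across the literature.

```python
import numpy as np

def cas(theta):
    """cas(x) = cos(x) + sin(x), the Hartley kernel."""
    return np.cos(theta) + np.sin(theta)

def dht_type3(x):
    """Direct O(N^2) reference for the type-III DHT.

    Assumes the convention H[k] = sum_n x[n] * cas(pi * n * (2k + 1) / N),
    i.e. the transpose of the type-II transform.
    """
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    return cas(np.pi * n * (2 * k + 1) / N) @ np.asarray(x, dtype=float)
```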