To improve the performance of the multi-frame blind deconvolution (MFBD) algorithm, we analyzed the image restoration quality and convergence rate of MFBD using four combined optimization algorithms: Conjugate Gradient + Brent, Conjugate Gradient + Dbrent, Conjugate Gradient + Macopt, and L-BFGS + Wolfe. The mathematical principles of these optimization algorithms are elaborated in detail, and each is incorporated into the MFBD algorithm to obtain high-quality restored images. Theoretical and experimental results indicate that the L-BFGS + Wolfe combination converges fastest but yields lower restoration quality than the other combinations; the Conjugate Gradient + Brent/Dbrent combinations obtain higher-quality restored images but converge more slowly; and the convergence rate and restoration quality of the Conjugate Gradient + Macopt combination lie between those of L-BFGS + Wolfe and Conjugate Gradient + Brent/Dbrent.
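As an illustrative aside (not the paper's MFBD objective), the convergence comparison can be sketched with SciPy's general-purpose minimizers, whose built-in line searches play the role of the Brent/Dbrent/Macopt/Wolfe routines; the test function here is a hypothetical stand-in:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Stand-in objective: the 2-D Rosenbrock function (not the MFBD cost).
# method="CG" is a nonlinear conjugate gradient with its own line search;
# method="L-BFGS-B" is a limited-memory quasi-Newton method whose line
# search enforces Wolfe-type conditions.
x0 = np.array([-1.2, 1.0])
res_cg = minimize(rosen, x0, jac=rosen_der, method="CG")
res_lbfgs = minimize(rosen, x0, jac=rosen_der, method="L-BFGS-B")
print("CG iterations:", res_cg.nit, "L-BFGS iterations:", res_lbfgs.nit)
```

Comparing `res_cg.nit` with `res_lbfgs.nit` over many starts gives a rough feel for the trade-off discussed above.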
Space object images obtained through ground-based telescopes tend to be heavily blurred and degraded by atmospheric turbulence as well as by detection noise and the aberrations of the optical system. Multi-Frame Blind Deconvolution (MFBD) is currently the mainstream restoration algorithm for images degraded by atmospheric turbulence. MFBD jointly estimates the original image of the object and the corresponding point spread functions (PSFs) from a sequence of short-exposure images. In our experience, there is still considerable room to improve the traditional MFBD algorithm. A mixed-Gaussian noise model that accounts for both photonic and detector noise is used to replace the stationary Gaussian noise model. The L2-L1 (quadratic-linear) regularization method is used to replace the originally used TV or Tikhonov regularization. A phase annealing method is used to improve the quality of the initial phase estimate, and a multi-round iterative MFBD algorithm is preliminarily implemented. Simulation results demonstrate that the images restored by the multi-round iterative MFBD algorithm often have better quality than those restored by traditional MFBD.
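A minimal sketch of the mixed-noise weighting idea, assuming a Gaussian approximation of photon noise whose variance scales with the model counts plus a constant read-noise variance (the function name and the numbers are illustrative, not the paper's exact formulation):

```python
import numpy as np

def data_fidelity(model, data, sigma_read=5.0):
    """Weighted least-squares data term under a mixed-Gaussian noise model:
    per-pixel variance = photon term (~model counts, Gaussian approximation
    of Poisson noise) + detector read-noise variance."""
    var = np.maximum(model, 0.0) + sigma_read**2
    return 0.5 * np.sum((data - model) ** 2 / var)

# Toy evaluation with illustrative counts
model = np.array([100.0, 400.0])
data = np.array([110.0, 390.0])
print(data_fidelity(model, data))
```

Bright pixels (photon-noise dominated) are down-weighted relative to a stationary Gaussian model, which is the qualitative effect the abstract describes.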
High-resolution imaging with large ground-based telescopes is challenging due to atmospheric turbulence. An adaptive optics (AO) system can provide real-time compensation for the wavefront distortion in the pupil of the telescope. However, the observed images still suffer from blurring caused by the residual wavefront. Numerical post-processing with a good approximation of the residual wavefront can help to effectively remove the blur. In this paper, a gradient measurement model for the Shack-Hartmann wavefront sensor (WFS) in a closed-loop AO system is built. The model is based on the frozen flow hypothesis with knowledge of the wind velocities of the atmospheric turbulence layers. A high-resolution residual wavefront reconstruction method using multiframe Shack-Hartmann WFS measurements and deformable mirror voltages is then presented. Numerical results show that the method can effectively improve the spatial resolution and accuracy of the reconstructed residual wavefront.
High-resolution imaging with large ground-based telescopes is limited by atmospheric turbulence. The observed images are usually blurred with unknown point spread functions (PSFs) defined in terms of the wavefront distortions of the light. To effectively remove the blur, numerical post-processing with a good approximation of the wavefront is required. The gradient measurements of the wavefront recorded by a Shack–Hartmann wavefront sensor (WFS) can be used to estimate the wavefront. A gradient measurement model for the Shack–Hartmann WFS is built. This model is based on the frozen flow hypothesis and uses a least-squares fit of tip and tilt across each subaperture of the WFS to generate the averaged gradient measurements. A high-resolution wavefront reconstruction method using multiframe Shack–Hartmann WFS measurements is then presented. The method uses high-cadence WFS data in a Bayesian framework and takes into account the available a priori information on the wavefront phase. Numerical results show that the method can effectively improve the spatial resolution and accuracy of the reconstructed wavefront under different seeing conditions.
Multi-frame blind deconvolution (MFBD) is a well-known numerical restoration technique for obtaining high-resolution images of astronomical targets through the Earth's turbulent atmosphere. The performance of MFBD algorithms depends on the initial estimates of the object and the PSFs. While the observed image is often close enough to the object to serve as its initial estimate, we typically lack any prior knowledge of the PSF for each frame. To provide high-quality initial estimates and improve the performance of the MFBD algorithm, one of the most effective approaches is to introduce an imaging Shack-Hartmann wavefront sensor, which is similar to the traditional Shack-Hartmann wavefront sensor but has a smaller number of lenslets across the aperture, and to process the data with a multi-channel joint restoration algorithm. In this paper, we propose a multi-channel joint restoration algorithm that uses the imaging Shack-Hartmann channel data alongside the science camera data to improve the overall performance of the MFBD restoration. Numerical results are given to illustrate the performance of the joint restoration process.
Atmospheric turbulence is a principal limitation to imaging space objects with ground-based telescopes. To obtain high-resolution images, post-processing is a necessary tool for overcoming the effects of atmospheric turbulence. In this paper, we propose a multi-frame blind deconvolution algorithm based on consistency constraints. We parametrize the image and the PSFs and solve the resulting minimization problem with the conjugate gradient method in an alternating iterative framework. We also determine the regularization parameter adaptively at each step. Experimental results show that the proposed method can effectively recover high-quality images from turbulence-degraded images.
High-resolution wavefront reconstruction using the frozen-flow hypothesis requires the wind velocities of all significant turbulence layers in the atmosphere, which can be estimated from the time-delayed autocorrelation of the wavefront sensor (WFS) measurements. In this paper, we present a method for estimating the wind velocities of frozen-flow atmospheric turbulence layers from the slope measurements of a Shack-Hartmann WFS. The method is tested in simulation experiments, and the results show that it is efficient and that its error is acceptable.
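The core idea, that a frozen-flow layer translates between frames so the peak of the time-delayed cross-correlation of slope maps gives the wind velocity, can be sketched on a toy screen (sizes and the 3-px/1-px shift are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "slope" screen translating 3 px/frame in x and 1 px/frame in y
# under the frozen-flow hypothesis (periodic wrap stands in for new
# turbulence blowing into the aperture).
screen = rng.standard_normal((64, 64))
frame0 = screen
frame1 = np.roll(np.roll(screen, 3, axis=1), 1, axis=0)

# Circular cross-correlation via FFT; the peak location is the per-frame
# shift, i.e. the wind velocity in pixels/frame.
F = np.fft.fft2(frame1) * np.conj(np.fft.fft2(frame0))
corr = np.fft.ifft2(F).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
# Unwrap to signed shifts (grid is 64 pixels wide)
if dx > 32: dx -= 64
if dy > 32: dy -= 64
print("estimated shift (dx, dy):", dx, dy)
```

Repeating this over many frame pairs and dividing by the frame interval converts the pixel shift into a velocity; multiple layers show up as multiple correlation peaks.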
We propose a speckle imaging algorithm that uses an improved form of the spectral ratio to estimate the Fried parameter, together with a filter that reduces the effects of high-frequency noise. The algorithm improves the quality of the reconstructed images, as illustrated by computer simulations.
We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with a bandwidth over-constraint and total variation (TV) regularization to recover a clear image from AO-corrected images. The point spread functions (PSFs) are constrained to be band-limited below the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise amplification. The performance is demonstrated on simulated data.
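The band-limit constraint on a PSF estimate can be sketched as a projection in the Fourier domain; the cutoff value and the non-negativity/normalization steps below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def band_limit(psf, cutoff=0.25):
    """Project a PSF estimate onto the band-limit constraint: zero its OTF
    beyond a radial cutoff (in normalized frequency units, cycles/pixel),
    then restore non-negativity and unit energy."""
    n = psf.shape[0]
    f = np.fft.fftfreq(n)
    r = np.hypot(*np.meshgrid(f, f, indexing="ij"))
    otf = np.fft.fft2(psf) * (r <= cutoff)
    out = np.clip(np.fft.ifft2(otf).real, 0.0, None)
    return out / out.sum()

rng = np.random.default_rng(0)
psf = band_limit(rng.random((32, 32)))
print(psf.sum())
```

Applying such a projection after each PSF update is one simple way to keep the estimated OTF from carrying energy beyond the optical cutoff.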
This paper describes an approach to reconstructing wavefronts on a finer grid using the frozen flow hypothesis (FFH), which exploits spatial and temporal correlations between consecutive wavefront sensor (WFS) frames. Under the FFH, slope data from the WFS can be connected to a finer, composite slope grid through translation and downsampling, with the elements of the transformation matrices determined by the wind information. Frames of slopes are then combined, and the slopes on the finer grid are reconstructed by solving a sparse, large-scale, ill-posed least-squares problem. Using the reconstructed finer slope data and adopting the Fried geometry of the WFS, high-resolution wavefronts are then reconstructed. The results show that this method is robust even with detector noise and inaccurate wind information, and that under bad seeing conditions the high-frequency content of the wavefronts is recovered more accurately than when the correlations between WFS frames are ignored.
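The translate-and-downsample measurement model can be illustrated in one dimension; the operator sizes, the averaging factor, and the one-pixel translation between frames below are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
x_true = rng.standard_normal(n)  # fine-grid slopes (unknown)

def downsample_op(n, shift, factor=2):
    """Average `factor` consecutive fine samples after a circular shift,
    mimicking a subaperture measuring a translated frozen screen."""
    m = n // factor
    A = np.zeros((m, n))
    for i in range(m):
        for j in range(factor):
            A[i, (factor * i + j + shift) % n] = 1.0 / factor
    return A

# Two frames: the frozen screen has translated by 1 fine pixel between
# them. Stacking the operators gives an (ill-posed) least-squares problem.
A = np.vstack([downsample_op(n, 0), downsample_op(n, 1)])
b = A @ x_true
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print("residual:", np.linalg.norm(A @ x_hat - b))
```

Even in this toy case the stacked operator is rank-deficient (an alternating-sign mode is unobservable), which mirrors the ill-posedness the abstract mentions; `lstsq` returns the minimum-norm solution, while the paper regularizes the large sparse system.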
This paper presents a technique that performs coarse-to-fine image registration in both the spatial and range domains. The goal of image registration is to estimate geometric and photometric parameters by minimizing an objective function in the least-squares sense. To reduce the probability of falling into a local optimum, the algorithm employs a coarse-to-fine strategy. In the coarse step, an illumination-offset and contrast-invariant feature detector (SURF) is used to estimate the affine motion parameters between the reference image and the target image, and the intensities of corresponding pixels are then used to directly estimate the contrast and bias parameters based on RANSAC. In the fine step, the parameters obtained in the coarse step serve as a good initial estimate, and the photometric and affine motion parameters are refined alternately by minimizing the objective function. Experiments on simulated and real images show that the proposed registration method is superior to the feature-based method used in the coarse step and to the groupwise image registration algorithm proposed by Bartoli.
This paper presents a technique that performs multi-frame super-resolution of differently exposed images. The method first employs a coarse-to-fine image registration method to align the images in both the spatial and range domains. An image fusion method based on maximum a posteriori (MAP) estimation is then used to reconstruct a high-resolution image. The MAP cost function includes a data fidelity term and a regularization term. The data fidelity term is in the L2 norm, and the regularization term employs a Huber-Markov prior, which reduces noise and artifacts while preserving image edges. To reduce the influence of registration errors, the high-resolution image estimate and the registration parameters are refined alternately by minimizing the cost function. Experiments with synthetic and real images show that the photometric registration reduces the grid-like artifacts in the reconstructed high-resolution image, and that the proposed multi-frame super-resolution method outperforms the interpolation-based method, with lower RMSE and fewer artifacts.
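The edge-preserving behavior of the Huber-Markov prior comes from the Huber penalty, which is quadratic for small gradient magnitudes (smoothing noise) and linear for large ones (preserving edges); a sketch with an illustrative threshold:

```python
import numpy as np

def huber(t, delta=1.0):
    """Huber penalty: t^2 for |t| <= delta, linear (2*delta*|t| - delta^2)
    beyond. `delta` is an illustrative threshold, tuned per application."""
    t = np.abs(t)
    return np.where(t <= delta, t**2, 2.0 * delta * t - delta**2)

grads = np.array([0.1, 0.5, 2.0, 5.0])
print(huber(grads))
```

Applied to image gradients inside the MAP cost, small differences are penalized quadratically like Tikhonov regularization, while large edge gradients incur only a linear cost and so are not over-smoothed.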
Shift-invariant motion blur can be modeled as the convolution of the true latent image with a blur kernel, plus additive noise. Blind motion deblurring estimates a sharp image from a motion-blurred image without knowledge of the blur kernel. This paper proposes an improved edge-specific motion deblurring algorithm that proves well suited to processing remote sensing images. We find that an inaccurate blur kernel is the main cause of low-quality restored images. To improve image quality, we make the following contributions. For robust kernel estimation, we first adopt a multi-scale scheme to ensure that the edge map is constructed accurately; second, an effective salient-edge selection method based on RTV (Relative Total Variation) is used to extract salient structure from texture; third, an alternating iterative method is introduced for kernel optimization, in which the l1 and l0 norms are adopted as priors to remove noise and ensure the continuity of the blur kernel. For the final latent-image reconstruction, an improved adaptive deconvolution algorithm based on the TV-l2 model is used to recover the latent image; the regularization weight is controlled adaptively in different regions according to local image characteristics in order to preserve fine details while eliminating noise and ringing artifacts. Tests on synthetic remote sensing images demonstrate that the proposed algorithm obtains accurate blur kernels and achieves better deblurring results.
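For the non-blind reconstruction step, a purely quadratic (l2-on-gradients) stand-in for the paper's TV-l2 model has a closed-form Fourier-domain solution; the sketch below assumes periodic boundaries, a known kernel, and an illustrative regularization weight:

```python
import numpy as np

def fft_deconv(y, k, lam=1e-3):
    """Closed-form l2-regularized deconvolution (periodic boundaries):
    argmin_x ||k*x - y||^2 + lam*||grad x||^2, solved per Fourier frequency.
    The paper's TV-l2 model is solved iteratively; this quadratic variant
    is only a sketch of the same data-plus-smoothness trade-off."""
    H, W = y.shape
    K = np.fft.fft2(k, s=(H, W))
    # Forward-difference gradient filters, written as HxW arrays
    dx = np.zeros((H, W)); dx[0, 0], dx[0, -1] = 1.0, -1.0
    dy = np.zeros((H, W)); dy[0, 0], dy[-1, 0] = 1.0, -1.0
    den = np.abs(K)**2 + lam * (np.abs(np.fft.fft2(dx))**2
                                + np.abs(np.fft.fft2(dy))**2)
    return np.fft.ifft2(np.conj(K) * np.fft.fft2(y) / den).real

# Toy demo: blur a smooth image with a 3x3 box kernel, then deconvolve.
t = np.linspace(0.0, np.pi, 32)
x = np.outer(np.sin(t), np.sin(t))
k = np.ones((3, 3)) / 9.0
y = np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(k, s=x.shape)).real
x_rec = fft_deconv(y, k)
print(np.linalg.norm(x_rec - x), np.linalg.norm(y - x))
```

Making `lam` spatially varying, as the abstract describes, is what lets the method smooth flat regions strongly while keeping fine detail near edges.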
When a ground-based telescope is used to observe a space object, the atmospheric turbulence and the telescope together constitute the imaging system. The wavefront measurements produced by an adaptive optics (AO) system can be used to estimate the point spread function (PSF) of this imaging system, since they contain the wavefront aberration information of the light from the object. However, the detector noise of the wavefront sensor (WFS) inevitably introduces estimation error. Based on statistical theory, a method is presented to improve the PSF estimation accuracy by eliminating the noise error from the wavefront measurements. Numerical simulation shows that the estimation error of this method can be lower than 10%. It also indicates that the estimation becomes more accurate as the signal-to-noise ratio (SNR) of the WFS increases, as more frames of wavefront measurements are used, and as the Fried parameter grows. The work in this paper can be applied to the performance evaluation of imaging systems, the deconvolution of AO images, and the photometric analysis of space objects.
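The wavefront-to-PSF step assumed here is the standard Fourier-optics relation: the monochromatic PSF is the squared modulus of the Fourier transform of the pupil function with the residual phase (grid size and pupil radius below are illustrative):

```python
import numpy as np

def psf_from_phase(phase, pupil):
    """Monochromatic PSF from a residual wavefront phase map (radians)
    over a pupil mask, via |FT{pupil * exp(i*phase)}|^2, normalized."""
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return psf / psf.sum()

# Diffraction-limited case: zero residual phase over a circular pupil.
n = 64
yy, xx = np.mgrid[:n, :n] - n // 2
pupil = (xx**2 + yy**2 <= (n // 4)**2).astype(float)
psf = psf_from_phase(np.zeros((n, n)), pupil)
print(np.unravel_index(psf.argmax(), psf.shape))
```

Feeding noisy WFS-derived phases into this relation biases the long-exposure PSF estimate, which is the error the paper's statistical correction removes.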
In an adaptive optics (AO) system, detector noise is one of the main error sources of the Shack-Hartmann wavefront sensor (SH-WFS). Based on a statistical analysis of the noise, a noise error estimation method is presented that uses multiple frames of Hartmann spot patterns and the centroid displacements calculated from them. A numerical simulation system for wavefront measurement is built and used to verify the validity of this method. It shows that the estimation error of the method can be lower than 2%, provided that the signal-to-noise ratio (SNR) is sufficient for the WFS to work normally. We studied the minimum number of data frames the method requires at different SNR levels of the WFS. It indicates that fewer frames are required at higher SNR levels, and that only 2 frames of data are required when the SNR is high enough. For the different types of detector noise, we analyzed how the accuracy of their prior information influences the estimation error. It shows that the influence of the readout noise is strong, while the influence of the photon noise, the dark-current noise, and the sky-background noise is negligible, since WFS exposures are usually short. The work in this paper is of significance for estimating the point spread function of an AO system from WFS measurements.