The majority of image quality studies in the field of remote sensing have been performed on systems with conventional aperture functions. These systems have well-understood image quality tradeoffs, characterized by the General Image Quality Equation (GIQE). Advanced, next-generation imaging systems, such as sparse apertures, synthetic apertures, coded apertures, and phase elements, present challenges to both post-processing and image quality prediction. Because of the non-conventional point spread functions of these systems, post-processing becomes a critical step in the imaging chain, and artifacts arise that are more complicated than simple edge overshoot. Previous research at the Rochester Institute of Technology's Digital Imaging and Remote Sensing Laboratory produced a modeling methodology for sparse and segmented aperture systems; validating that methodology is the focus of this work. The model has predicted unique post-processing artifacts that arise when a sparse aperture system with wavefront error is used over a large (panchromatic) spectral bandpass. Because these artifacts are unique to sparse aperture systems, they have not yet been observed in any real-world data. This work describes a laboratory setup and initial results for a model validation study, focusing on validation of the model's spatial frequency response predictions and verification of the predicted post-processing artifacts. Validating these predictions will allow the model to be used in image quality studies, such as aperture design optimization and analysis of the signal-to-noise vs. post-processing artifact tradeoff between panchromatic and multispectral systems.
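The non-conventional point spread functions described above follow directly from Fourier optics: the incoherent PSF is the squared magnitude of the Fourier transform of the pupil function, and the MTF is the normalized magnitude of the PSF's Fourier transform (equivalently, the pupil autocorrelation). A minimal numpy sketch, using an invented three-subaperture geometry that is not the system modeled in the paper:

```python
import numpy as np

def psf_and_mtf(pupil):
    # Coherent focal-plane field is the Fourier transform of the pupil;
    # the incoherent PSF is its squared magnitude.
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2
    psf /= psf.sum()
    # The OTF is the Fourier transform of the PSF; the MTF is its
    # magnitude, normalized to 1 at zero spatial frequency.
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    mtf = np.abs(otf)
    return psf, mtf / mtf.max()

# Hypothetical tri-arm sparse pupil: three circular subapertures
# (positions and radii invented for illustration).
n = 128
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = np.zeros((n, n))
for cx, cy in [(0, 20), (17, -10), (-17, -10)]:
    pupil[(x - cx) ** 2 + (y - cy) ** 2 < 8 ** 2] = 1.0
psf, mtf = psf_and_mtf(pupil)
```

The resulting MTF has deep nulls and low mid-frequency response compared to a filled aperture, which is why restoration (and its artifacts) becomes central for such systems.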
KEYWORDS: Sensors, Signal to noise ratio, Optical transfer functions, Atmospheric sensing, Deconvolution, Point spread functions, Image sensors, Detection and tracking algorithms, Atmospheric turbulence, Atmospheric modeling
A new image reconstruction algorithm is presented that removes the effect of atmospheric turbulence from motion-compensated, frame-averaged images. The primary focus of this research was to develop a blind deconvolution technique that could be employed in a tactical military environment, where both time and computational power are limited. Additionally, the technique can be used to measure atmospheric seeing conditions. In a blind deconvolution fashion, the algorithm simultaneously computes a high-resolution image and an average model for the atmospheric blur parameterized by Fried's seeing parameter. What distinguishes this approach is that it does not assume a prior distribution for the seeing parameter; rather, it uses the convergence of the image's variance as the stopping criterion and as the means of identifying the proper seeing parameter from a range of candidate values. Experimental results show that the convergence-of-variance technique allows estimation of the seeing parameter accurate to within 0.5 cm, and often better, depending on the signal-to-noise ratio.
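The variance-convergence idea can be sketched as follows. This is not the authors' implementation: a Gaussian kernel stands in for the long-exposure atmospheric PSF (its width-to-r0 relationship, the `scale` constant, is invented), and Richardson-Lucy deconvolution is used as the restoration step. The point of the sketch is tracking the restored image's variance and stopping when it converges:

```python
import numpy as np

def atmospheric_psf(r0_cm, n, scale=20.0):
    # Stand-in for the long-exposure atmospheric blur: a Gaussian whose
    # width shrinks as the Fried parameter r0 grows (better seeing).
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    sigma = scale / r0_cm
    psf = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def conv(img, psf):
    # Circular convolution with a centered, image-sized PSF.
    return np.real(np.fft.ifft2(np.fft.fft2(img) *
                                np.fft.fft2(np.fft.ifftshift(psf))))

def restore(img, r0_cm, max_iter=50, tol=1e-4):
    """Richardson-Lucy restoration stopped when the image variance converges."""
    psf = atmospheric_psf(r0_cm, img.shape[0])
    est = np.full_like(img, img.mean())
    variances = []
    for _ in range(max_iter):
        ratio = img / (conv(est, psf) + 1e-12)
        est = est * conv(ratio, psf)  # PSF is symmetric, so mirror == psf
        variances.append(est.var())
        if len(variances) > 1 and abs(variances[-1] - variances[-2]) < tol * variances[-1]:
            break  # variance has converged
    return est, variances

# Invented demo scene: two point sources blurred by a known candidate r0.
n = 32
truth = np.zeros((n, n))
truth[12, 12], truth[20, 18] = 1.0, 0.5
blurred = conv(truth, atmospheric_psf(5.0, n))
est, variances = restore(blurred, 5.0)
```

In the paper's scheme, this restoration would be run over a range of candidate r0 values, with the behavior of the variance trajectory used to select the proper seeing parameter rather than assuming a prior on it.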
A new image restoration algorithm is proposed to remove the effect of atmospheric turbulence on motion-compensated, frame-averaged data collected by a three-dimensional FLASH laser radar (LADAR) imaging system. The algorithm simultaneously arrives at an enhanced image and an estimate of Fried's seeing parameter through an Expectation Maximization (EM) technique. Unlike blind deconvolution algorithms that operate only on two-dimensional images, this technique accounts for both the spatial and temporal mixing caused by the atmosphere through which the system is imaging. Additionally, due to the over-determined nature of the problem, the point spread function parameterized by Fried's seeing parameter can be deduced without additional assumptions or constraints. The utility of the approach lies in its application to laser-illuminated imaging, where processing time is important.
The goal of this work is to develop an algorithm to enhance the utility of three-dimensional (3-D) FLASH laser radar sensors through accurate ranging to multiple surfaces per image pixel while minimizing the effects of diffraction. With this algorithm, it will be possible to realize numerous enhancements over both traditional Gaussian mixture modeling and single-surface range estimation. While traditional Gaussian mixture modeling can effectively model the received pulse, we know that its shape is likely altered by optical aberrations from the imaging system and the medium through which it is imaging. Additionally, identifying only a single surface per pulse may result in the loss of valuable information about partially obscured surfaces. This algorithm enables multisurface ranging of an entire image with a single laser pulse. Ultimately, improvements realized through this new ranging algorithm, when coupled with various other techniques, may make 3-D FLASH LADAR more suitable for remote sensing applications. Simulation examples show that the multisurface ranging algorithm derived in this work improves range estimation over standard Gaussian mixture modeling and frame-by-frame deconvolution using the Richardson-Lucy algorithm by up to 91% and 70%, respectively.
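The Gaussian mixture baseline that the multisurface algorithm is compared against can be sketched with a standard 1-D EM fit to a single pixel's return. All numbers below (two surfaces at 10 m and 14 m, pulse widths, photon counts) are invented for illustration; this is the generic GMM step, not the paper's full algorithm:

```python
import numpy as np

def em_gmm_1d(t, k=2, n_iter=50):
    """Standard EM for a k-component 1-D Gaussian mixture."""
    mu = np.percentile(t, np.linspace(10, 90, k))  # deterministic init
    sig = np.full(k, t.std())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample.
        pdf = w * np.exp(-(t[:, None] - mu) ** 2 / (2 * sig ** 2)) \
              / (sig * np.sqrt(2 * np.pi))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means (surface ranges), and widths.
        nk = r.sum(axis=0)
        w = nk / len(t)
        mu = (r * t[:, None]).sum(axis=0) / nk
        sig = np.sqrt((r * (t[:, None] - mu) ** 2).sum(axis=0) / nk)
    order = np.argsort(mu)  # report surfaces nearest-first
    return mu[order], sig[order], w[order]

rng = np.random.default_rng(1)
# Invented pixel: foliage at 10 m partially obscuring a target at 14 m.
returns = np.concatenate([rng.normal(10.0, 0.3, 500),
                          rng.normal(14.0, 0.3, 300)])
mu, sig, w = em_gmm_1d(returns)
```

As the abstract notes, this baseline treats the pulse shape as an uncorrupted Gaussian; the paper's contribution is to improve on it when diffraction and aberrations distort the received pulse.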
KEYWORDS: Expectation maximization algorithms, Sensors, LIDAR, Detection and tracking algorithms, Diffraction, 3D acquisition, Data modeling, 3D image processing, 3D image enhancement, Ranging
The goal of this work is to develop an algorithm to enhance the utility of 3-D FLASH laser radar sensors through accurate ranging to multiple surfaces per image pixel. Using this algorithm, it will be possible to realize numerous enhancements over both traditional Gaussian mixture modeling and single-surface range estimation. While traditional Gaussian mixture modeling can effectively model the received pulse, we know that the received pulse is likely corrupted by optical aberrations from the imaging system and the medium through which it is imaging. Additionally, identifying only a single surface per pulse may result in the loss of valuable information about partially obscured surfaces. Ultimately, this algorithm, in conjunction with other recent research, may allow for techniques that enhance the spatial resolution of an image, improve image registration, and enable the detection of obscured targets with a single pulse.