This PDF file contains the front matter associated with SPIE Proceedings Volume 10204, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
We describe an algorithm for registering spectral images acquired by a pushbroom multispectral scanner operating on an airborne or spaceborne platform subject to uncontrolled motion, i.e., variable time-dependent attitude (pitch, roll, and yaw) and platform position. In contrast to imagery collected during straight and level flight, uncontrolled platform motion causes each band to be warped with respect to the others. The warped bands cannot simply be registered with a rigid transformation; they require a space-varying de-warping transformation, and determining that transformation from image data remains a challenging problem. In this paper, we formulate a powerful yet efficient spline-based model for the warp that takes into account both the detector array geometry and the imaging geometry. The physically based geometric constraints incorporated into the model enable it to distinguish effectively between image-to-image warp due to uncontrolled variations in the sensor line of sight and band-to-band variations in image content and measurement noise. Results show that the model is capable of recovering the true warp in areas where there is little or no correlation in spatial content between the bands. For the test cases studied, the spline-warp model reduces the registration error in the local warp determined from data by image correlation techniques by more than an order of magnitude.
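As a rough illustration of how a space-varying de-warp of this kind can be applied in practice, the sketch below (our own, not the authors' algorithm) fits smoothing splines to noisy correlation-based shift estimates on a coarse grid and resamples the warped band; all function and parameter names are hypothetical.

```python
# Minimal sketch of space-varying band-to-band de-warping (illustrative
# only). Assumes local row/column shift estimates have already been
# measured by image correlation at a coarse grid of control points.
import numpy as np
import cv2
from scipy.interpolate import RectBivariateSpline

def dewarp_band(band, grid_rows, grid_cols, row_shift, col_shift, smooth=1e2):
    """Resample `band` onto the reference band's geometry.

    band       : 2D float32 image (the warped band)
    grid_rows  : 1D array of row coordinates of the coarse estimate grid
    grid_cols  : 1D array of column coordinates of the coarse estimate grid
    row_shift  : 2D array of measured row shifts at the grid points
    col_shift  : 2D array of measured column shifts at the grid points
    smooth     : spline smoothing factor; larger values suppress noise
                 in the correlation-based shift estimates
    """
    h, w = band.shape
    # Smoothing splines play the role of a physically motivated constraint:
    # the true warp varies slowly across the image, estimate noise does not.
    sr = RectBivariateSpline(grid_rows, grid_cols, row_shift, s=smooth)
    sc = RectBivariateSpline(grid_rows, grid_cols, col_shift, s=smooth)
    rows, cols = np.arange(h), np.arange(w)
    map_y = (rows[:, None] + sr(rows, cols)).astype(np.float32)
    map_x = (cols[None, :] + sc(rows, cols)).astype(np.float32)
    return cv2.remap(band, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```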
Long-range airborne full-motion-video systems require large apertures to maximize multiple aspects of system performance, including spatial resolution and sensitivity. As systems push to larger apertures for increased resolution and standoff range, both mounting constraints and atmospheric effects limit their effectiveness. This paper considers two questions: first, under what atmospheric and spectral conditions does it make sense to have a larger aperture; and second, what types of optical systems can best exploit movement-constrained mounting? We briefly explore high-level atmospheric considerations in determining sensor aperture size for various spectral bands, followed by a comparison of the swept-volume-to-aperture ratio of Ritchey-Chrétien and three-mirror-anastigmat optical systems.
There are components that are common to all electro-optical and infrared imaging system performance models. The purpose of the Python Based Sensor Model (pyBSM) is to provide open-source access to these functions for other researchers to build upon. Specifically, pyBSM implements much of the capability found in the ERIM Image Based Sensor Model (IBSM) V2.0, along with some improvements. The paper also includes two use-case examples. First, the performance of an airborne imaging system is modeled using the General Image Quality Equation (GIQE); the results are then decomposed into factors affecting noise and resolution. Second, pyBSM is paired with OpenCV to evaluate the performance of an algorithm used to detect objects in an image.
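For context, the GIQE referenced above is a published formula. The following minimal sketch implements GIQE version 4.0 as given by Leachtenauer et al. (1997); it is a stand-alone illustration, not a call into the actual pyBSM API.

```python
import math

def giqe4_niirs(gsd_in, rer, h, g, snr):
    """General Image Quality Equation v4.0 (Leachtenauer et al., 1997).

    gsd_in : geometric-mean ground sample distance, inches
    rer    : geometric-mean relative edge response
    h      : geometric-mean edge-overshoot height
    g      : noise gain of any sharpening applied
    snr    : signal-to-noise ratio
    """
    # The published coefficient pair switches on the relative edge response.
    a, b = (3.32, 1.559) if rer >= 0.9 else (3.16, 2.817)
    return (10.251 - a * math.log10(gsd_in) + b * math.log10(rer)
            - 0.656 * h - 0.344 * g / snr)

# Example: a sharp, low-noise system at 10-inch GSD (values are ours).
print(giqe4_niirs(gsd_in=10.0, rer=0.95, h=1.05, g=1.0, snr=50.0))
```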
As UAV imaging continues to expand, so too do the opportunities for improvements in data analysis. These opportunities, in turn, present their own challenges, including the need for real-time radiometric and spectral calibration; the continued development of quality metrics facilitating exploitation of strategic and tactical imagery; and the need to correct for sensor- and platform-induced artifacts in image data.
Video tracking of rocket launches must inherently be done from long range. Due to the high temperatures produced, cameras are often placed far from launch sites, and their distance to the rocket increases as it is tracked through the flight. Consequently, the imagery collected is generally severely degraded by atmospheric turbulence. In this talk, we present our experience enhancing commercial spaceflight videos, covering the mission objectives, the unique challenges faced, and the solutions used to overcome them.
Long-range telescopic video imagery of distant terrestrial scenes, aircraft, rockets, and other aerospace vehicles can be a powerful observational tool. But what about the associated acoustic activity? A new technology, Remote Acoustic Sensing (RAS), may provide a method to remotely listen to the acoustic activity near these distant objects. Local acoustic activity sometimes weakly modulates the ambient illumination in a way that can be remotely sensed. RAS is a new type of microphone that separates an acoustic transducer into two spatially separated components: 1) a naturally formed in situ acousto-optic modulator (AOM) located within the distant scene and 2) a remote sensing readout device that recovers the distant audio. These two elements are passively coupled over long distances at the speed of light by naturally occurring ambient light energy or other electromagnetic fields. Stereophonic, multichannel, and acoustic beamforming configurations are all possible with RAS techniques, and when combined with high-definition video imagery they can help provide a more cinema-like, immersive viewing experience.
A practical implementation of a remote acousto-optic readout device can be a challenging engineering problem. The acoustic influence on the optical signal is generally weak and often rides on a strong bias term, and the optical signal is further degraded by atmospheric seeing turbulence. In this paper, we consider two fundamentally different optical readout approaches: 1) a low-pixel-count, photodiode-based RAS photoreceiver and 2) audio extraction directly from a video stream. Most of our RAS experiments to date have used the first method for reasons of performance and simplicity. But there are potential advantages to extracting audio directly from a video stream, including the straightforward ability to work with multiple AOMs (useful for acoustic beamforming), simpler optical configurations, and a potential ability to use certain preexisting video recordings. However, doing so requires overcoming significant limitations, typically including much lower sample rates, reduced sensitivity and dynamic range, more expensive video hardware, and the need for sophisticated video processing. The ATCOM real-time image processing software environment provides many of the capabilities needed for researching video-acoustic signal extraction; it is already a powerful tool for the visual enhancement of telescopic views distorted by atmospheric turbulence. To explore the potential of acoustic signal recovery from video imagery, we modified ATCOM to extract audio waveforms from the same telescopic video sources. In this paper, we demonstrate and compare both readout techniques for several aerospace test scenarios to show where each has advantages.
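The second readout approach can be sketched very simply: track the brightness of a small scene region frame by frame and treat its fluctuations as an audio waveform, with the sample rate limited to the frame rate. The sketch below uses OpenCV and is purely illustrative (region coordinates and normalization are our own choices, not the ATCOM implementation).

```python
# Minimal sketch of audio extraction from a video stream (illustrative
# only). The mean intensity of a small region acting as an in-scene
# acousto-optic modulator is sampled once per frame, so the audio
# sample rate equals the video frame rate.
import cv2
import numpy as np

def audio_from_video(path, x, y, w, h):
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    samples = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        samples.append(gray[y:y + h, x:x + w].mean())
    cap.release()
    sig = np.asarray(samples)
    sig -= sig.mean()              # remove the strong optical bias term
    peak = np.abs(sig).max()
    return (sig / peak if peak > 0 else sig), fps
```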
In this paper, we compare the performance of multiple turbulence mitigation algorithms used to restore imagery degraded by atmospheric turbulence and camera noise. In order to quantify and compare algorithm performance, imaging scenes were simulated by applying noise and varying levels of turbulence. For the simulation, a Monte Carlo wave optics approach is used to simulate the spatially and temporally varying turbulence in an image sequence. A Poisson-Gaussian noise mixture model is then used to add noise to the observed turbulence image set. These degraded image sets are processed with three separate restoration algorithms: lucky-look imaging, bispectral speckle imaging, and a block-matching method with a restoration filter. These algorithms were chosen because they incorporate different approaches and processing techniques. The results quantitatively show how well the algorithms are able to restore the simulated degraded imagery.
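A Poisson-Gaussian mixture of this kind can be sketched in a few lines of NumPy; the parameter names and values below are ours, not those used in the paper.

```python
import numpy as np

def add_poisson_gaussian_noise(image, peak_electrons=2000.0,
                               read_noise=5.0, rng=None):
    """Apply a Poisson-Gaussian noise mixture to a clean image.

    image          : float image scaled to [0, 1]
    peak_electrons : full-scale signal level in photoelectrons
                     (controls the signal-dependent shot-noise strength)
    read_noise     : Gaussian read-noise standard deviation, electrons
    """
    rng = np.random.default_rng() if rng is None else rng
    electrons = rng.poisson(image * peak_electrons)            # shot noise
    electrons = electrons + rng.normal(0.0, read_noise,
                                       image.shape)            # sensor noise
    return np.clip(electrons / peak_electrons, 0.0, 1.0)
```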
Atmospheric turbulence degrades imagery by imparting scintillation and warping effects that can reduce the ability to identify key features of the subjects. While a human viewer can intuitively appreciate the increase in visual information that turbulence mitigation techniques offer, this enhancement is rarely quantified in a meaningful way. In this paper, we discuss methods for measuring the potential improvement in system performance that video enhancement algorithms can provide. To accomplish this, we explore two metrics. First, we use resolution targets to determine the difference between imagery degraded by turbulence and imagery improved by atmospheric correction techniques; by comparing line scans of the data before and after processing, it is possible to quantify the additional information extracted. Advanced processing of this data can also provide information about the effective modulation transfer function (MTF) of the system when atmospheric effects are considered and removed; from this we compute a second metric, the relative improvement in Strehl ratio.
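Because the Strehl ratio can be approximated by the area under the normalized MTF, a relative improvement figure can be estimated directly from edge line scans of a resolution target. The sketch below uses our own notation and simplifications, not the authors' processing chain.

```python
# Minimal sketch: estimate the MTF from an edge line scan, then compare
# integrated-MTF (Strehl-like) figures before and after mitigation.
import numpy as np

def mtf_from_edge_scan(edge_scan):
    """Edge spread function -> line spread function -> MTF magnitude."""
    lsf = np.gradient(np.asarray(edge_scan, dtype=float))
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                    # normalize to unity at DC

def relative_strehl(edge_before, edge_after):
    """Ratio of integrated MTF areas; values > 1 indicate restored
    resolution. Uses the approximation that the Strehl ratio is
    proportional to the area under the normalized MTF."""
    return (np.trapz(mtf_from_edge_scan(edge_after)) /
            np.trapz(mtf_from_edge_scan(edge_before)))
```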
We describe a numerical wave propagation method for simulating long-range imaging of an extended scene under anisoplanatic conditions. Our approach computes an array of point spread functions (PSFs) for a 2D grid on the object plane. The PSFs are then used in a spatially varying weighted-sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. To validate the simulation, we compare simulated outputs with the theoretical anisoplanatic tilt correlation and differential tilt variance, in addition to comparing the long- and short-exposure PSFs and the isoplanatic angle. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. The simulation tool is also used here to quantitatively evaluate a recently proposed block-matching and Wiener filtering (BMWF) method for turbulence mitigation. In this method, a block-matching registration algorithm provides geometric correction for each of the individual input frames; the registered frames are then averaged and processed with a Wiener filter for restoration. A novel aspect of the proposed BMWF method is that the PSF model used for restoration takes into account the level of geometric correction achieved during image registration. This way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. The BMWF method is relatively simple computationally and yet has excellent performance in comparison to state-of-the-art benchmark methods.
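The spatially varying weighted-sum step can be illustrated with a simplified sketch (our own; the authors' interpolation scheme may differ), which blends per-PSF convolutions of the ideal image using bilinear weights tied to the PSF grid positions.

```python
# Simplified sketch of the spatially varying weighted-sum degradation:
# convolve the ideal image with each PSF in the grid, then blend the
# results per pixel with bilinear hat weights centered on grid points.
import numpy as np
from scipy.signal import fftconvolve

def anisoplanatic_blur(ideal, psfs, grid_y, grid_x):
    """ideal  : 2D image
       psfs   : dict mapping (gy, gx) grid indices -> 2D PSF array
       grid_y, grid_x : 1D arrays of PSF grid coordinates (pixels)"""
    h, w = ideal.shape
    blurred = {k: fftconvolve(ideal, p, mode='same') for k, p in psfs.items()}
    out = np.zeros_like(ideal, dtype=float)
    wsum = np.zeros_like(out)
    yy, xx = np.mgrid[0:h, 0:w]
    for (gy, gx), img in blurred.items():
        # Bilinear hat weights fall to zero one grid spacing away.
        dy = np.clip(1 - np.abs(yy - grid_y[gy]) / np.diff(grid_y).mean(), 0, None)
        dx = np.clip(1 - np.abs(xx - grid_x[gx]) / np.diff(grid_x).mean(), 0, None)
        wgt = dy * dx
        out += wgt * img
        wsum += wgt
    return out / np.maximum(wsum, 1e-12)
```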
Methods to reconstruct pictures from imagery degraded by atmospheric turbulence have been under development for decades. The techniques were initially developed for observing astronomical phenomena from the Earth's surface, but have more recently been adapted for ground and air surveillance scenarios. Such applications impose significant constraints on deployment options because they both increase the computational complexity of the algorithms and often dictate a requirement for low size, weight, and power (SWaP) form factors. Consequently, embedded implementations must be developed that can perform the necessary computations on low-SWaP platforms. Fortunately, there is an emerging class of embedded processors driven by the mobile and ubiquitous computing industries. We have leveraged these processors to develop embedded versions of the core atmospheric correction engine found in our ATCOM software. In this paper, we present our experience adapting our algorithms for embedded systems on a chip (SoCs), namely the NVIDIA Tegra, which couples general-purpose ARM cores with NVIDIA's graphics processing unit (GPU) technology, and the Xilinx Zynq, which pairs similar ARM cores with field-programmable gate array (FPGA) fabric.
Modern digital imaging systems are susceptible to image degradation caused by atmospheric turbulence. Notwithstanding significant improvements in resolution and speed, such degradation of captured imagery still hampers system designers and operators. Several techniques exist for mitigating the effects of turbulence on captured imagery; here we concentrate on the bi-spectrum speckle averaging approach [1], [2] to image enhancement, applied to a data set captured in conjunction with meteorological data.
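For readers unfamiliar with the technique, the following 1D sketch illustrates the core of bispectrum speckle averaging: the object's Fourier modulus is taken from the averaged power spectrum and its Fourier phase is rebuilt recursively from the averaged bispectrum, both of which are insensitive to random frame-to-frame tilt. This is an illustration under our own simplifications (noise-bias compensation omitted), not the code evaluated in the paper.

```python
import numpy as np

def bispectrum_recover(frames, nfreq):
    """frames: array (n_frames, N) of 1D speckle realizations.
    Returns the recovered complex object spectrum up to `nfreq`."""
    F = np.fft.fft(np.asarray(frames, dtype=float), axis=1)
    power = (np.abs(F) ** 2).mean(axis=0)   # Labeyrie-style modulus
    # Average the bispectrum over frames for the sub-plane u + v < nfreq.
    B = np.zeros((nfreq, nfreq), dtype=complex)
    for u in range(1, nfreq):
        for v in range(1, nfreq - u):
            B[u, v] = (F[:, u] * F[:, v] * np.conj(F[:, u + v])).mean()
    # Recursive phase rebuild: phi(u+v) = phi(u) + phi(v) - arg B(u, v),
    # with phi(0) = phi(1) = 0 fixing the object position.
    phi = np.zeros(nfreq)
    for k in range(2, nfreq):
        est = [phi[u] + phi[k - u] - np.angle(B[u, k - u])
               for u in range(1, k)]
        phi[k] = np.angle(np.mean(np.exp(1j * np.array(est))))  # circular mean
    return np.sqrt(np.maximum(power[:nfreq], 0)) * np.exp(1j * phi)
```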
We compare several modifications to the open-source wave optics package WavePy intended to improve execution time. Specifically, we compare the relative performance of the Intel MKL, a CPU-based OpenCV distribution, and a GPU-based version. Performance is compared between distributions both on the same compute platform and between a fully featured computing workstation and the NVIDIA Jetson TX1 platform. Comparisons are drawn in terms of both execution time and power consumption. We found that substituting in the Fast Fourier Transform operation from OpenCV provides a marked improvement on all platforms. In addition, we show that embedded platforms offer the possibility of considerable efficiency gains relative to a fully featured workstation.
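The FFT substitution can be benchmarked with a few lines of Python; this is an illustrative harness of our own, not the WavePy benchmark code, and the grid size is an assumed typical wave-optics value.

```python
# Compare NumPy's FFT against OpenCV's DFT on a wave-optics-sized grid.
import time
import numpy as np
import cv2

N = 2048
field = np.random.standard_normal((N, N)).astype(np.float32)

t0 = time.perf_counter()
for _ in range(10):
    np.fft.fft2(field)
t_numpy = (time.perf_counter() - t0) / 10

t0 = time.perf_counter()
for _ in range(10):
    cv2.dft(field, flags=cv2.DFT_COMPLEX_OUTPUT)
t_cv = (time.perf_counter() - t0) / 10

print(f"numpy fft2: {t_numpy*1e3:.1f} ms, cv2.dft: {t_cv*1e3:.1f} ms")
```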
In recent research, anisoplanatic electromagnetic (EM) wave propagation along a slanted path in the presence of low-atmosphere phase turbulence (modified von Karman spectrum, or MVKS) has been investigated assuming a Hufnagel-Valley (HV) type structure parameter. Preliminary results indicate a strong dependence on the slant angle, especially for long-range transmission and relatively strong turbulence. The investigation was divided into two regimes: (a) one where the EM source consists of a plane wave modulated with a digitized image, which is propagated along the turbulent path and recovered via demodulation at the receiver; and (b) one where the plane wave is transmitted without modulation along the turbulent path through an image transparency and a thin lens designed to gather the received image in the focal plane. In this paper, we reexamine the same problem (part (a) only) in the presence of a chaotic optical carrier, where the chaos is generated in the feedback loop of an acousto-optic Bragg cell. The image information is encrypted within the chaos wave and subsequently propagated along a similar slant path under identical turbulence conditions. The recovered image, extracted via heterodyning from the received chaos, is compared quantitatively (through image cross-correlations and mean-squared error measures) for the non-chaotic versus the chaotic approaches. Generally, "packaging" the information in chaos improves performance through turbulent propagation, and results are discussed from this perspective. Concurrently, we also examine the effect of a non-encrypted plane EM wave propagating through a transparency-lens combination. These results are presented with appropriate comparisons to the cases involving lensless transmission of imagery through corresponding turbulent and non-turbulent layers.
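Chaos generation in an acousto-optic feedback loop is commonly modeled by the sin-squared intensity map of the first-order diffracted beam fed back, with one loop delay, into the effective bias. The discrete-time sketch below uses that standard form with illustrative parameter values of our own; it is not the authors' simulation.

```python
# Discrete-time sketch of chaos generation in an acousto-optic Bragg
# cell feedback loop: I[n] = I0 * sin^2(0.5 * (alpha0 + s[n] + beta * I[n-1])),
# where s[n] is the information signal riding on the bias.
import numpy as np

def ao_feedback_chaos(signal, alpha0=2.0, beta=2.5, i0=1.0):
    """signal : 1D array, information to embed in the chaotic carrier
       alpha0 : static bias of the Bragg cell drive (illustrative value)
       beta   : feedback gain; larger values drive the loop chaotic
       i0     : incident optical intensity"""
    out = np.zeros(len(signal))
    prev = 0.0
    for n, s in enumerate(signal):
        # Feedback uses the delayed first-order diffracted intensity.
        out[n] = i0 * np.sin(0.5 * (alpha0 + s + beta * prev)) ** 2
        prev = out[n]
    return out

# Example: embed a weak sinusoid (stand-in for a scanned image line).
t = np.arange(4096)
carrier = ao_feedback_chaos(0.05 * np.sin(2 * np.pi * t / 64))
```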