This PDF file contains the front matter associated with SPIE Proceedings Volume 8379, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Ping Yuan, Rengarajan Sudharsanan, Xiaogang Bai, Paul McDonald, Eduardo Labios, Bryan Morris, John P. Nicholson, Gary M. Stuart, Harrison Danny, et al.
Three-dimensional (3D) topographic imaging using short-wavelength infrared (SWIR) Laser Detection and Ranging (LADAR) systems has been successfully demonstrated on various platforms. LADAR imaging provides coverage down to inch-level fidelity and allows for effective wide-area terrain mapping. Recently, Spectrolab demonstrated a compact 32×32 LADAR camera with single-photon-level sensitivity and a small size, weight, and power (SWAP) budget. This camera has many special features, such as non-uniform bias correction, a variable range-gate width from 2 microseconds to 6 microseconds, windowing for smaller arrays, and shorted-pixel protection. Boeing integrated this camera with a 1.06 μm pulsed laser on various platforms and demonstrated 3D imaging. In this presentation, the operational details of this camera and 3D imaging demonstrations using this camera on various platforms will be presented.
Future robots and autonomous vehicles require compact, low-cost Laser Detection and Ranging (LADAR) systems for autonomous navigation. The Army Research Laboratory (ARL) recently demonstrated a brass-board, short-range, eye-safe MEMS scanning LADAR system for robotic applications. Boeing Spectrolab is conducting a technology transfer (CRADA) of this system and has built a compact MEMS scanning LADAR system with additional improvements in receiver sensitivity, the laser system, and the data processing system. Improved system sensitivity, low cost, miniaturization, and low power consumption are the main goals for the commercialization of this LADAR system. The receiver sensitivity has been improved by 2x using large-area InGaAs PIN detectors with low-noise amplifiers. The FPGA code has been updated to extend the range to 50 meters and detect up to 3 targets per pixel. Range accuracy has been improved through the implementation of an optical T-Zero input line. A compact, commercially available erbium fiber laser operating at a 1550 nm wavelength is used as the transmitter, thus reducing the size of the LADAR system considerably compared to the ARL brass-board system. The computer interface has been consolidated to allow image data and configuration data (configuration settings and system status) to pass through a single Ethernet port. In this presentation we discuss the system architecture and future improvements to receiver sensitivity using avalanche photodiodes.
A 3D laser sensor is a real-time remote sensor that provides 3D images of a scene. In this paper, we demonstrate a new concept for a pulsed 3D laser sensor with 2D scanning of the transmitted beam and a scan-less receiver. The system achieves fast, long-range 3D imaging with a relatively simple system configuration. We newly developed a high-aspect APD array, a receiver IC, and a range-and-intensity detector. By combining these devices, we realized 160 × 120-pixel range imaging at an online frame rate of 8 Hz at a distance of about 50 m.
Future planetary and lunar landers can benefit from a hazard detection (HD) system that employs a lidar to create a high-resolution 3D terrain map in the vicinity of the landing site and an onboard computer to process the lidar data and identify the safest landing site within the surveyed area. A divert maneuver would then be executed to land at this safe site. An HD system enables landing in regions with a relatively high hazard abundance that would otherwise be considered unacceptably risky, but that are of high interest to the scientific community. A key component of an HD system is a lidar able to generate a 3D terrain image with the required range precision in the prescribed time while fitting within the project's resource constraints. In this paper, we present the results obtained during performance testing of a prototype "GoldenEye" 3D flash lidar developed by ASC, Inc. The testing was performed at JPL with the lidar and the targets separated by 200 m. An analysis of the lidar performance obtained for different target types and albedos, pulse energies, and fields of view is presented and compared to key HD lidar requirements identified for the Mars 2018 lander.
The goal of this work is to determine methods for detecting trails using statistics of LiDAR point cloud data, while avoiding reliance on a Digital Elevation Model (DEM). Creation of a DEM is a subjective process that requires assumptions be made about the density of the data points, the curvature of the ground, and other factors which can lead to very different results in the final DEM product, with no single "correct" result. Exploitation of point cloud data also lends itself well to automation. A LiDAR point cloud based trail detection scheme has been designed in which statistical measures of local neighborhoods of LiDAR points are calculated, image processing techniques employed to mask non-trail areas, and a constrained region growing scheme used to determine a final trails map. Results of the LiDAR point cloud based trail detection scheme are presented and compared to a DEM-based trail detection scheme. Large trails are detected fairly reliably with some missing gaps, while smaller trails are detected less reliably. Overall results of the LiDAR point cloud based methods are comparable to the DEM-based results, with fewer false alarms.
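The pipeline described above (local neighborhood statistics, masking, constrained region growing) is straightforward to prototype. The following minimal Python sketch illustrates one possible form of it, using per-cell height variance as the local statistic and simple breadth-first region growing; the statistic, cell size, and thresholds are illustrative assumptions, not the authors' actual choices.

```python
import numpy as np
from collections import deque

def trail_mask(points, cell=1.0, var_thresh=0.05, seed_thresh=0.02):
    """Toy trail detector: rasterize LiDAR points, compute a local height-variance
    statistic per cell, then grow trail regions from the flattest seed cells."""
    xy, z = points[:, :2], points[:, 2]
    idx = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    shape = idx.max(axis=0) + 1

    # per-cell height variance (a stand-in for the paper's neighborhood statistics)
    var = np.full(shape, np.inf)
    for (i, j) in np.unique(idx, axis=0):
        sel = (idx[:, 0] == i) & (idx[:, 1] == j)
        var[i, j] = z[sel].var()

    # constrained region growing: start from very flat cells, expand into
    # neighbors that are still below the (looser) trail threshold
    mask = np.zeros(shape, dtype=bool)
    queue = deque(map(tuple, np.argwhere(var < seed_thresh)))
    while queue:
        i, j = queue.popleft()
        if mask[i, j]:
            continue
        mask[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < shape[0] and 0 <= nj < shape[1]:
                if not mask[ni, nj] and var[ni, nj] < var_thresh:
                    queue.append((ni, nj))
    return mask

# usage: mask = trail_mask(np.loadtxt("tile.xyz"))  # hypothetical N x 3 array
```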
Recent advances in LIDAR technologies have increased the resolution of airborne instruments to the sub-meter level,
which opens up the possibility of creating detailed maps over a large area. The ability to map complex 3D structure is
especially challenging in urban environments, where both natural and manmade obstructions make comprehensive
mapping difficult. LIDAR remains unsurpassed in its capability to capture fine geometric details in this type of
environment, making it the ideal choice for many purposes. One important application of urban remote sensing is the
creation of line-of-sight maps, or viewsheds, which determine the visibility of areas from a given point within a scene.
Using a voxelized approach to LIDAR processing allows us to retain detail in overlapping structures, and we show how
this provides a better framework for handling line-of-sight calculations than existing approaches. Including additional
information about the instrument position during the data collection allows us to identify any scene areas which are
poorly sampled, and to determine any detrimental effect on line-of-sight maps. An experiment conducted during the
summer of 2011 collected both visible imagery and LIDAR at multiple returns per square meter of the downtown region
of Rochester, NY. We demonstrate our voxelized technique on this large real-world dataset, and derive where errors in
line-of-sight mapping are likely to occur.
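To make the voxelized line-of-sight idea concrete, here is a small hypothetical Python sketch: it bins a point cloud into a boolean occupancy grid and marches along the ray between an observer and a target, declaring the target hidden if any intervening voxel is occupied. Voxel size, grid extent, and the step length are assumptions for illustration only.

```python
import numpy as np

def voxelize(points, origin, voxel=1.0, shape=(200, 200, 50)):
    """Mark voxels that contain at least one LiDAR return."""
    grid = np.zeros(shape, dtype=bool)
    idx = np.floor((points - origin) / voxel).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array(shape)), axis=1)
    grid[tuple(idx[ok].T)] = True
    return grid

def visible(grid, eye, target, voxel=1.0, origin=np.zeros(3)):
    """March along the eye->target ray; occupied voxels block the line of sight."""
    eye, target = np.asarray(eye, float), np.asarray(target, float)
    n = int(np.ceil(np.linalg.norm(target - eye) / (0.5 * voxel)))
    for t in np.linspace(0.0, 1.0, max(n, 1), endpoint=False)[1:]:
        p = eye + t * (target - eye)
        i, j, k = np.floor((p - origin) / voxel).astype(int)
        inside = (0 <= i < grid.shape[0]) and (0 <= j < grid.shape[1]) and (0 <= k < grid.shape[2])
        if inside and grid[i, j, k]:
            return False
    return True
```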
Roadside bombs are a real and continuing threat to soldiers in theater. CAE USA recently developed a prototype Volume-based Intelligence Surveillance Reconnaissance (VISR) sensor platform for IED detection. This vehicle-mounted prototype sensor system uses a high-data-rate LiDAR (1.33 million range measurements per second) to generate a 3D map of roadways. The mapped data is used as a reference to generate real-time change detection on future trips along the same roadways. The prototype VISR system is briefly described.
The focus of this paper is the methodology used to process the 3D LiDAR data, in real-time, to detect small changes on and
near the roadway ahead of a vehicle traveling at moderate speeds with sufficient warning to stop the vehicle at a safe
distance from the threat. The system relies on accurate navigation equipment to geo-reference the reference run and the
change-detection run. Since it was recognized early in the project that detection of small changes could not be achieved with
accurate navigation solutions alone, a scene alignment algorithm was developed to register the reference run with the
change-detection run prior to applying the change detection algorithm. Good results were achieved in simultaneous real-time processing of scene alignment and change detection.
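The scene-alignment and change-detection algorithms themselves are not detailed in the abstract; the sketch below is a generic stand-in that aligns the two runs with a crude translation-only ICP and flags points of the new run that have no nearby reference point. The alignment model and all thresholds are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def align_translation(ref, run, iters=10):
    """Crude translation-only ICP: repeatedly shift the run so that its points
    move toward their nearest reference neighbors. (The paper's alignment
    algorithm is not described; this is only a placeholder.)"""
    tree = cKDTree(ref)
    shift = np.zeros(3)
    for _ in range(iters):
        _, nn = tree.query(run + shift)
        shift += (ref[nn] - (run + shift)).mean(axis=0)
    return run + shift

def detect_changes(ref, run, dist_thresh=0.15):
    """Flag points of the change-detection run with no nearby reference point."""
    aligned = align_translation(ref, run)
    d, _ = cKDTree(ref).query(aligned)
    return aligned[d > dist_thresh]        # candidate 'new object' points
```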
Considering the characteristics of the data acquired by laser radar, namely an unordered three-dimensional point cloud, and by combining the rich three-dimensional information of the point cloud with the specific textural information of distance images, we propose a new algorithm for laser radar reconstruction based on a simplified point cloud and distance images. In this article, we take advantage of the properties of the Delaunay triangulation to develop a simplification algorithm for building the model mesh. The algorithm first constructs the Delaunay triangulation, then determines a vector at each vertex from the distances between that vertex and its adjacent vertices, and then calculates the angles between this vector and the surrounding triangles; an angular threshold is set in order to generate the new, simplified Delaunay triangulation. Experimental results show that this algorithm can simplify the triangulation without affecting the accuracy of the modeling; together with the detailed textural and shading information, effective 3D reconstruction of the target is achieved.
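A rough illustration of this kind of angle-threshold simplification is sketched below for a 2.5-D point set, using scipy's Delaunay triangulation: a normal is accumulated at each vertex from its surrounding triangle normals, vertices whose surrounding triangles deviate from that normal by less than a threshold (locally flat vertices) are dropped, and the remaining vertices are re-triangulated. The exact vector construction and threshold differ from the paper and are assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

def simplify_mesh(points, angle_thresh_deg=5.0):
    """Drop vertices whose surrounding triangles are nearly coplanar, then
    rebuild the Delaunay triangulation from the remaining vertices."""
    tri = Delaunay(points[:, :2])              # 2.5-D triangulation on x, y
    p = points[tri.simplices]                  # (n_tri, 3, 3) triangle corners
    n = np.cross(p[:, 1] - p[:, 0], p[:, 2] - p[:, 0])
    n /= np.linalg.norm(n, axis=1, keepdims=True)

    # per-vertex normal = mean of adjacent triangle normals
    vert_normal = np.zeros((len(points), 3))
    for k, simplex in enumerate(tri.simplices):
        vert_normal[simplex] += n[k]
    vert_normal /= np.linalg.norm(vert_normal, axis=1, keepdims=True) + 1e-12

    # largest angle between each vertex normal and its surrounding triangle normals
    max_angle = np.zeros(len(points))
    for k, simplex in enumerate(tri.simplices):
        ang = np.degrees(np.arccos(np.clip(np.dot(n[k], vert_normal[simplex].T), -1, 1)))
        max_angle[simplex] = np.maximum(max_angle[simplex], ang)

    keep = max_angle > angle_thresh_deg        # flat-area vertices are removed
    kept = points[keep]
    return kept, Delaunay(kept[:, :2]).simplices
```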
LADAR technology is used in remote sensing to monitor many parameters, such as particles and gases suspended in the atmosphere, to measure velocity, and to perform 3-D mapping. Its application to 3-D mapping is of great interest in the geospatial community. A LADAR system is capable of scanning the surface of an object by forming thousands or millions of points in 3-D space. New advances in LADAR technology are pushing toward 4-D data (x, y, z, and time). These systems are capable of operating in the same way as a video camera, at up to 30 frames per second. Sampling a scene in the 4-D domain is very attractive for military and civilian applications.
This work presents an algorithm capable of using the 4-D measurements recorded by a LADAR system to generate a 3-D video. The algorithm provides a feature extraction tool that is used to identify and rank safe landing zones within the 3-D video point cloud dataset. We use parameters such as terrain irregularities, terrain slope or gradients, and distances from vertical obstructions to identify and rank safe landing zones. Additional mission-specific requirements can be included to further filter and rank safe landing zones.
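As an illustration of the ranking criteria mentioned above, the hedged Python sketch below scores cells of a gridded terrain model from local slope and roughness; the thresholds, window size, and scoring rule are placeholders rather than the authors' mission-specific parameters.

```python
import numpy as np

def rank_landing_zones(dem, cell=0.5, slope_max_deg=10.0, rough_max=0.15):
    """Score each DEM cell for landing suitability from local slope and
    roughness; higher score = safer.  Thresholds are illustrative only."""
    gz_y, gz_x = np.gradient(dem, cell)                   # terrain gradients
    slope = np.degrees(np.arctan(np.hypot(gz_x, gz_y)))   # local slope (deg)

    # roughness proxy: elevation standard deviation in a 3x3 window
    pad = np.pad(dem, 1, mode="edge")
    neigh = np.stack([pad[i:i + dem.shape[0], j:j + dem.shape[1]]
                      for i in range(3) for j in range(3)])
    rough = neigh.std(axis=0)

    ok = (slope < slope_max_deg) & (rough < rough_max)
    return np.where(ok, 1.0 - slope / slope_max_deg, 0.0)  # 0 = unsafe, -> 1 = flat & smooth
```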
Real-time awareness and rapid target detection are critical for the success of military missions. New technologies
capable of detecting targets concealed in forest areas are needed in order to track and identify possible threats. Currently,
LAser Detection And Ranging (LADAR) systems are capable of detecting obscured targets; however, tracking
capabilities are severely limited. Now, a new LADAR-derived technology is under development to generate 4-D datasets
(3-D video in a point cloud format). As such, there is a new need for algorithms that are able to process data in real time.
We propose an algorithm capable of removing vegetation and other objects that may obfuscate concealed targets in a
real 3-D environment. The algorithm is based on wavelets and can be used as a pre-processing step in a target
recognition algorithm. Applications of the algorithm in a real-time 3-D system could help make pilots aware of high-risk hidden targets such as tanks and weapons. We use simulated 4-D point cloud data to demonstrate the capabilities of our algorithm.
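The wavelet pre-processing step could, in a simple form, look like the numpy-only sketch below: a single-level 2-D Haar transform of a rasterized top-surface height map whose detail bands are zeroed, attenuating fine-scale returns such as foliage while preserving the coarse surface. This is only an assumed, toy realization of a wavelet-based filter, not the proposed algorithm.

```python
import numpy as np

def haar2d(a):
    """One level of the 2-D Haar transform (approx, horizontal, vertical, diagonal).
    Assumes even grid dimensions."""
    s, d = (a[:, ::2] + a[:, 1::2]) / 2.0, (a[:, ::2] - a[:, 1::2]) / 2.0
    ll, lh = (s[::2] + s[1::2]) / 2.0, (s[::2] - s[1::2]) / 2.0
    hl, hh = (d[::2] + d[1::2]) / 2.0, (d[::2] - d[1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d."""
    s = np.empty((ll.shape[0] * 2, ll.shape[1])); d = np.empty_like(s)
    s[::2], s[1::2] = ll + lh, ll - lh
    d[::2], d[1::2] = hl + hh, hl - hh
    a = np.empty((s.shape[0], s.shape[1] * 2))
    a[:, ::2], a[:, 1::2] = s + d, s - d
    return a

def suppress_fine_structure(height_grid):
    """Zero the detail bands so fine-scale returns (e.g. foliage) are attenuated
    while the coarse surface underneath is kept."""
    ll, lh, hl, hh = haar2d(height_grid)
    return ihaar2d(ll, np.zeros_like(lh), np.zeros_like(hl), np.zeros_like(hh))
```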
Tracking of extended targets in high definition, 360-degree 3D-LIDAR (Light Detection and Ranging) measurements is
a challenging task and a current research topic. It is a key component in robotic applications, and is relevant to path
planning and collision avoidance.
This paper proposes a new method without a geometric model to simultaneously track and accumulate 3D-LIDAR
measurements of an object. The method itself is based on a particle filter and uses an object-related local 3D grid for
each object. No geometric object hypothesis is needed. Accumulation allows coping with occlusions.
The prediction step of the particle filter is governed by a motion model consisting of a deterministic and a probabilistic
part. Since this paper is focused on tracking ground vehicles, a bicycle model is used for the deterministic part. The
probabilistic part depends on the current state of each particle. A function for calculating the current probability density
function for state transition is developed. It is derived in detail and is based on a database of vehicle dynamics measurements covering several hundred kilometers. The adaptive probability density function narrows the gating area for measurement data association.
The second part of the proposed method addresses weighting the particles with a cost function. Different 3D-grid-dependent cost functions are presented and evaluated.
Evaluations with real 3D-LIDAR measurements show the performance of the proposed method. The results are also
compared to ground truth data.
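For the deterministic part of the prediction step, a kinematic bicycle model such as the following Python sketch is commonly used; the fixed Gaussian process noise here is merely a placeholder for the paper's adaptive, data-derived transition density, and the state layout and noise levels are assumptions.

```python
import numpy as np

def predict_particles(particles, dt, wheelbase=2.7, rng=np.random.default_rng(0)):
    """Kinematic bicycle-model prediction for particles with state
    [x, y, heading, speed, steer].  Process noise is a fixed-Gaussian
    placeholder for an adaptive, data-derived density."""
    x, y, th, v, delta = particles.T
    th_new = th + v / wheelbase * np.tan(delta) * dt        # heading update
    x_new = x + v * np.cos(th) * dt
    y_new = y + v * np.sin(th) * dt
    out = np.stack([x_new, y_new, th_new, v, delta], axis=1)
    noise_std = np.array([0.05, 0.05, 0.01, 0.3, 0.02])     # assumed std-devs
    return out + rng.normal(0.0, noise_std, size=out.shape)

# usage: particles = predict_particles(np.zeros((500, 5)), dt=0.1)
```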
The new generation of laser-based imaging sensors enables collection of range images at video rate at the expense of
somewhat low spatial and range resolution. Combining several successive range images, instead of having to analyze
each image separately, is a way to improve the performance of feature extraction and target classification. In the robotics
community, occupancy grids are commonly used as a framework for combining sensor readings into a representation
that indicates passable (free) and non-passable (occupied) parts of the environment. In this paper we demonstrate how
3D occupancy grids can be used for outlier removal, registration quality assessment and measuring the degree of
unexplored space around a target, which may improve target detection and classification. Examples using data from a
maritime scene, acquired with a 3D FLASH sensor, are shown.
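A minimal occupancy-grid sketch of the outlier-removal idea, under an assumed voxel size and hit-count threshold, might look like this in Python: occupancy from several registered frames is accumulated per voxel, and points falling in voxels seen in too few frames are discarded.

```python
import numpy as np

def accumulate_counts(frames, origin, voxel=0.2, shape=(100, 100, 50)):
    """Count, per voxel, in how many registered range-image frames it was occupied."""
    counts = np.zeros(shape, dtype=np.int32)
    for pts in frames:
        occ = np.zeros(shape, dtype=bool)
        idx = np.floor((pts - origin) / voxel).astype(int)
        ok = np.all((idx >= 0) & (idx < np.array(shape)), axis=1)
        occ[tuple(idx[ok].T)] = True
        counts += occ
    return counts

def remove_outliers(points, counts, origin, voxel=0.2, min_hits=3):
    """Keep only points whose voxel was occupied in at least `min_hits` frames."""
    idx = np.floor((points - origin) / voxel).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array(counts.shape)), axis=1)
    keep = np.zeros(len(points), dtype=bool)
    keep[ok] = counts[tuple(idx[ok].T)] >= min_hits
    return points[keep]
```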
Bare earth extraction is an important component of LADAR data analysis for terrain classification. The challenge of providing accurate digital models is augmented when there is diverse topography within the data set or complex combinations of vegetation and built structures. A successful approach provides a flexible methodology (adaptable to the topography and/or environment) that is capable of integrating multiple LADAR point cloud data attributes. A newly developed approach (TE-SiP) uses 2nd- and 3rd-order spatial derivatives at each point in the DEM to determine sets of contiguous regions of similar elevation. Specifically, the derivative at the central point represents the curvature of the terrain at that position. Contiguous sets of high (positive or negative) values define sharp edges such as building edges or cliffs. This method is independent of the slope, such that very steep but continuous topography still has relatively low curvature values and is preserved in the terrain classification. Next, a recursive segmentation method identifies unique features of homogeneity on the surface, separated by areas of high curvature. An iterative selection process is then used to eliminate regions containing buildings or vegetation from the terrain surface. This technique was tested on a variety of existing LADAR surveys, each with varying levels of topographic complexity. The results shown here include developed and forested regions in the Dominican Republic.
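The curvature criterion at the heart of this approach can be illustrated with a short Python sketch on a gridded DEM: cells with a large Laplacian (second spatial derivative) are flagged as sharp edges, while steep but smooth slopes are left alone, and the remaining low-curvature cells are segmented into contiguous regions. The threshold and the use of a plain Laplacian are simplifying assumptions, not the published TE-SiP implementation.

```python
import numpy as np
from scipy import ndimage

def curvature_edges(dem, cell=1.0, curv_thresh=0.5):
    """Flag DEM cells whose Laplacian (second spatial derivative) is large;
    steep but smooth slopes keep a small Laplacian and are not flagged."""
    dzdy, dzdx = np.gradient(dem, cell)
    d2y, _ = np.gradient(dzdy, cell)
    _, d2x = np.gradient(dzdx, cell)
    return np.abs(d2x + d2y) > curv_thresh        # True = sharp edge (building/cliff)

def smooth_regions(dem, cell=1.0, curv_thresh=0.5):
    """Label contiguous low-curvature regions separated by high-curvature edges."""
    labels, n = ndimage.label(~curvature_edges(dem, cell, curv_thresh))
    return labels, n
```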
A methodology for laser radar (ladar) imaging through atmospheric turbulence is studied for target feature extraction, acquisition, tracking, identification, etc. The procedure proceeds sequentially: (1) laser-mode propagation through the outward atmospheric path is modeled using multiple turbulence phase screens; (2) the propagated laser mode illuminates a target, which is modeled using multiple facets; and (3) simultaneously, or near simultaneously, the return-path turbulence effects are modeled by a reverse-order set of phase screens with the same Cn²(h), outer scale Lo, and inner scale lo, assuming a plane wave. This return-path amplitude and phase screen is then used to create a pupil-plus-atmosphere impulse response, which is used to (4) accurately construct the image of a diffuse target on the detector focal plane array using conventional Fourier optics. Agreement of both the outward and return-path phase-screen matrices with their respective analytical turbulence parameters, which are computed independently, is shown. The Fourier-optics construction of the target's image is reviewed, and typical diffuse-target images of facet-model objects are presented, illustrating scintillation and speckle effects. The images may then be used in algorithm development for determining specific system performance.
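Step (4), forming an image from the pupil-plus-atmosphere impulse response, is standard Fourier optics; the sketch below shows the incoherent-imaging version (PSF from the aberrated pupil, then convolution with the object) with an assumed sampling and a toy random phase standing in for the turbulence screen. A full speckle treatment would require coherent imaging of the rough target, which is omitted here.

```python
import numpy as np

def image_through_pupil(obj, pupil_amplitude, pupil_phase):
    """Incoherent Fourier-optics imaging: PSF = |FT{A exp(i*phi)}|^2, then the
    object is convolved with that PSF via the OTF (all arrays same square size)."""
    pupil = pupil_amplitude * np.exp(1j * pupil_phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
    psf /= psf.sum()
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(np.fft.fft2(obj) * otf))

# toy example: circular pupil plus a random phase standing in for the turbulence screen
n = 256
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
aperture = (np.hypot(x, y) < n // 8).astype(float)
phase = 0.6 * np.random.randn(n, n) * aperture           # assumed aberration strength (rad)
scene = np.zeros((n, n)); scene[100:150, 120:130] = 1.0  # diffuse "facet" reflectance
blurred = image_through_pupil(scene, aperture, phase)
```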
Ground-based remote sensing technologies such as scanning lidar (light detection and ranging) systems are increasingly being used to characterize ambient aerosols due to key advantages, i.e., a wide area of regard (10 km²), fast response time, high spatial resolution (<10 m), and high sensitivity. Scanning lidar allows for 3D imaging of atmospheric motion and aerosol variability. The Space Dynamics Laboratory at Utah State University, in conjunction with the USDA-ARS, has developed and successfully deployed a three-wavelength lidar system called Aglite to characterize particles in diverse settings. Aglite generates near real-time imagery of particle size distribution and size-segregated mass concentration, and can also calculate whole-facility emission rates. Based on over nine years of field and laboratory experience, we present concentration and emission rate results from various measurements in military and civilian deployments.
We report on ground and airborne atmospheric methane measurements with a differential absorption lidar using an
optical parametric amplifier (OPA). Methane is a strong greenhouse gas on Earth and its accurate global mapping is
urgently needed to understand climate change. We are developing a nanosecond-pulsed OPA for remote measurements
of methane from an Earth-orbiting satellite. We have successfully demonstrated the detection of methane on the ground
and from an airplane at ~11-km altitude.
Formaldehyde is a trace species that plays a key role in atmospheric chemistry. It is an important indicator of non-methane volatile organic compound emissions, and it is a key reactive intermediate formed during photochemical oxidation in the troposphere. Because the lifetime of formaldehyde in the atmosphere is fairly short (several hours), its presence signals hydrocarbon emission areas. The importance of measuring formaldehyde concentrations has been recognized by the National Academy's Decadal Survey, and two of NASA's forthcoming missions, GEO-CAPE and GACM, target its measurement. There are several techniques for in-situ measurement of formaldehyde, some of which are highly sensitive (detection limit ~50 parts per trillion), and many atmospheric measurements have been reported. However, there appear to be no reported standoff lidar techniques for range-resolved measurements of atmospheric formaldehyde profiles. In this paper, we describe a formaldehyde lidar profiler based on a differential laser-induced fluorescence technique. The UV absorption band in the 352-357 nm region is well suited for laser excitation with frequency-tripled neodymium lasers, with the strong fluorescence measured in the 390-500 nm region. Preliminary nighttime measurements of formaldehyde were demonstrated with a lidar using a commercial Nd:YAG laser (354.7 nm) with a rather large linewidth (~0.02 nm). The measured sensitivity was ~1 ppb at 1 km with 100 m range resolution, even with this non-optimized system. In this paper we describe our approach for increasing the sensitivity by many orders of magnitude and enabling daytime operation by improving the laser parameters (power and linewidth) and optimizing the receiver.
At the University of Hawaii, we have developed compact time-resolved (TR) Raman and fluorescence spectrometers suitable for planetary exploration under NASA's Mars Instrument Development Program. The compact Raman and fluorescence spectrometers consist of custom miniature spectrographs based on volume holographic gratings and custom miniature intensified CCD cameras. These spectrographs have been interfaced with a regular 50 mm camera lens as well as with a 3.5-inch-diameter telescope for remotely interrogating minerals, water, water-ice, and dry ice. Using a small frequency-doubled Nd:YAG pulsed laser (35 mJ/pulse, 20 Hz) and the 50 mm camera lens, TR-Raman and LINF spectra of minerals and bio-minerals can be measured within 30 s under supercritical CO2, and with the 3.5-inch telescope these samples can be interrogated at radial distances up to 50 m during both daytime and nighttime. The fluorescence spectrograph is capable of measuring TR laser-induced fluorescence excited with a 355 nm laser over the 400-800 nm spectral range. The TR fluorescence spectra allow measurement of LINF from rare-earth and transition-metal ions in the time domain, and also help differentiate abiogenic minerals from organic and biogenic materials based on the fluorescence lifetime. Biological materials are also identified from their characteristic short-lived (<10 ns) laser-induced fluorescence lifetime. These instruments will play an important role in planetary exploration, especially in NASA's future Mars Sample Return Mission and in lander and rover missions.
The Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission recommended by the NRC Decadal Survey has a desired accuracy of 0.3% in carbon dioxide mixing ratio (XCO2) retrievals, requiring careful selection and optimization of the instrument parameters. NASA Langley Research Center (LaRC) is investigating the 1.57 micron carbon dioxide band as well as the 1.26-1.27 micron oxygen bands for our proposed ASCENDS mission requirements investigation. Simulation studies are underway for these bands to select optimum instrument parameters. The simulations are based on a multi-wavelength lidar modeling framework being developed at NASA LaRC to predict the performance of CO2 and O2 sensing from space and airborne platforms. The modeling framework consists of a lidar simulation module and a line-by-line calculation component with interchangeable lineshape routines to test the performance of alternative lineshape models in the simulations. As an option, the line-by-line radiative transfer model (LBLRTM) program may also be used for line-by-line calculations. The modeling framework is being used to perform error analysis, establish optimum measurement wavelengths, and identify the best lineshape models to be used in CO2 and O2 retrievals. Several additional programs for HITRAN database management and related simulations are planned for inclusion in the framework. A description of the modeling framework with selected results of the simulation studies for CO2 and O2 sensing is presented in this paper.
An initialization method using airborne Doppler wind lidar data was developed and evaluated for a mass-consistent
diagnostic wind model over complex terrain. The wind profiles were retrieved from the airborne lidar using a
conical scanning scheme and a signal processing algorithm specifically designed for the airborne lidar system. An
objective data analysis method in complex terrain was then applied to those wind profiles to produce a three-dimensional
wind field for model initialization. The model results using the lidar data initialization were compared
with independent surface weather observational data and profiles from a microwave radar wind profiler. For the
complex terrain in the Salinas valley, the model evaluation with a limited number of observations indicated that the
diagnostic wind model with airborne Doppler lidar data produced a reasonably good wind field in moderate to
strong wind conditions. However, caution must be stressed for weak wind conditions in which the flow is thermally
driven as the mass-consistent diagnostic wind model is not equipped to handle such cases.
A pulsed 2-micron coherent Doppler lidar system from NASA Langley Research Center in Virginia flew on NASA's DC-8 aircraft during the NASA Genesis and Rapid Intensification Processes (GRIP) campaign in the summer of 2010. The participation was part of the Doppler Aerosol Wind Lidar (DAWN) Air project. Selected results of airborne wind
profiling are presented and compared with the dropsonde data for verification purposes. Panoramic presentations of
different wind parameters over a nominal observation time span are also presented for selected GRIP data sets. The real-time
data acquisition and analysis software that was employed during the GRIP campaign is introduced with its unique
features.
Two different noise whitening methods in airborne wind profiling with a pulsed 2-micron coherent Doppler lidar system
at NASA Langley Research Center in Virginia are presented. In order to provide accurate wind parameter estimates
from the airborne lidar data acquired during the NASA Genesis and Rapid Intensification Processes (GRIP) campaign in
2010, the adverse effects of background instrument noise must be compensated properly in the early stage of data
processing. The results of the two methods are presented using selected GRIP data and compared with the dropsonde
data for verification purposes.
In this study of atmospheric effects on Geiger-mode laser detection and ranging (LADAR), the parameter space is explored primarily using the Air Force Institute of Technology Center for Directed Energy's (AFIT/CDE) Laser Environmental Effects Definition and Reference (LEEDR) code. The expected performance of LADAR systems is assessed at operationally representative wavelengths of 1.064, 1.56, and 2.039 μm at a number of locations worldwide. Signal attenuation and background noise are characterized using LEEDR. These results are compared to standard atmosphere and Fast Atmospheric Signature Code (FASCODE) assessments. Scenarios evaluated are based on air-to-ground engagements, including both down-looking oblique and vertical geometries in which anticipated clear-air aerosols are expected to occur. Engagement geometry variations are considered to determine optimum employment techniques to exploit or defeat the environmental conditions. Results, presented primarily in the form of worldwide plots of notional signal-to-noise ratios, show a significant climate dependence, but large variances between climatological and standard atmosphere assessments. An overall average absolute mean difference ratio of 1.03 is found when climatological signal-to-noise ratios at 40 locations are compared to their equivalent standard atmosphere assessments. Atmospheric transmission is shown to not always correlate with signal-to-noise ratios between different atmosphere profiles. Allowing aerosols to swell with relative humidity proves to be significant, especially for up-looking geometries, reducing the signal-to-noise ratio by several orders of magnitude. Turbulence blurring effects that impact tracking and imaging show that the LADAR system has little capability at a 50 km range, yet turbulence has little impact at a 3 km range.
Many of the recent small, low-power ladar systems provide detection sensitivities at the photon level for altimetry applications. These "photon-counting" instruments are often the operational solution for high-altitude or space-based platforms where low signal strength and size limitations must be accommodated. Despite the many existing algorithms for lidar data product generation, there remains a void in techniques available for handling the increased noise level in photon-counting measurements, as the larger analog systems do not exhibit such low SNR. Solar background noise poses a significant challenge to accurately extracting surface features from the data. Thus, filtering is required prior to implementation of other post-processing efforts. This paper presents several methodologies for noise filtering of photon-counting data. Techniques include modified Canny edge detection, PDF-based signal extraction, and localized statistical analysis. The Canny edge detection identifies features in a rasterized data product using a Gaussian filter and gradient calculation to extract signal photons. The PDF-based analysis matches local probability density functions with the aggregate, thereby extracting probable signal points. The localized statistical method assigns thresholding values based on a weighted local mean of angular variances. These approaches have demonstrated the ability to remove noise and subsequently provide accurate surface (ground/canopy) determination. The results presented here are based on analysis of multiple data sets acquired with the high-altitude NASA MABEL system and on photon-counting data supplied by Sigma Space Inc., configured to simulate the expected data product of the instrument for NASA's upcoming ICESat-2 mission.
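As a simple example of localized statistical filtering of photon-counting data, the sketch below histograms photon events in along-track distance and height, estimates the solar-background level from height bins assumed to be signal-free, and keeps photons in bins whose counts exceed that level by a few standard deviations. Bin sizes, the background region, and the threshold are assumptions, not the specific methods evaluated in the paper.

```python
import numpy as np

def density_filter(along_track, height, bin_x=20.0, bin_z=1.0, k_sigma=3.0):
    """Keep photon events that fall in (x, z) bins whose count exceeds the
    background level estimated from the upper, assumed signal-free, height bins."""
    xi = np.floor((along_track - along_track.min()) / bin_x).astype(int)
    zi = np.floor((height - height.min()) / bin_z).astype(int)
    hist = np.zeros((xi.max() + 1, zi.max() + 1))
    np.add.at(hist, (xi, zi), 1)

    # background estimate from the topmost ~10% of height bins
    noise = hist[:, -max(3, hist.shape[1] // 10):]
    thresh = noise.mean() + k_sigma * noise.std()
    return hist[xi, zi] > thresh        # boolean mask over the input photons
```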
Time-of-flight range measurements rely on the unambiguous assignment of each received echo signal to its causative emitted pulse signal. The maximum unambiguous measurement range depends on the signal group velocity in the propagation medium and on the source's pulse repetition interval. When this range is exceeded, an echo signal is no longer associated with its preceding pulse signal, and the result is ambiguous. We introduce a novel two-stage approach which significantly increases the maximum unambiguous measurement range by, in the first step, applying a specifically coded pulse-position-modulation scheme to the train of emitted pulses. In the second step, analysis of the resulting measurement ranges allows an unambiguous decision on the correct ranges. In this regard we also present a unique feature of a group of digital codes which helps to enhance detection robustness. Results are given on the basis of time-of-flight measurements from a scanning LIDAR, where this technique has been implemented for the first time.
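The disambiguation step can be illustrated with a toy Python sketch: each candidate fold hypothesis is tested by undoing the known pulse-position code, and only the correct hypothesis yields mutually consistent time-of-flight estimates for a near-static target. The random code, PRI, and fold search range are hypothetical, not the coded scheme used in the paper.

```python
import numpy as np

def disambiguate(apparent_tof, code, pri, max_folds=5):
    """Resolve range ambiguity for a pulse-position-modulated pulse train.
    `apparent_tof[i]` is the time of flight naively measured against the most
    recent emitted pulse; only the correct number of folds `m` makes the
    corrected estimates mutually consistent for a (near-)static target."""
    n = len(apparent_tof)
    best_m, best_spread, tof = 0, np.inf, None
    for m in range(max_folds + 1):
        i = np.arange(n - m)                                   # echo of pulse i measured against pulse i+m
        corrected = apparent_tof[i] + m * pri + code[i + m] - code[i]
        spread = np.std(corrected)
        if spread < best_spread:
            best_m, best_spread, tof = m, spread, np.median(corrected)
    return best_m, tof

# toy usage with a hypothetical random PPM code and a target about two PRIs away
rng = np.random.default_rng(1)
pri, code = 10e-6, rng.uniform(0.0, 0.5e-6, 64)
k, r = 2, 3.3e-6                                               # true folds and residual delay
i = np.arange(62)
apparent = r + code[i] - code[i + k]                           # what the receiver would measure
folds, tof_est = disambiguate(apparent, code, pri)             # expect folds == 2
```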
The fusion of imaging ladar information and digital imagery results in 2.5-D surfaces covered with texture information. Called "texel images," these datasets, when taken from different viewpoints, can be combined to create 3-D images of buildings, vehicles, or other objects. These 3-D images can then be further processed for automatic target recognition, or viewed in a 3-D viewer for tactical planning purposes.
This paper presents a procedure for calibration, error correction, and fusion of ladar and digital camera information from a single hand-held sensor to create accurate texel images. A brief description of a prototype sensor is given, along with the calibration technique used with the sensor, which is applicable to other imaging ladar/digital image sensor systems. The method combines systematic error correction of the ladar data, correction for lens distortion of the digital camera image, and fusion of the ladar data to the camera data in a single process. The result is a texel image acquired directly from the sensor. Examples of the resulting images, with improvements from the proposed algorithm, are presented.
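The geometric core of such a fusion step is the projection of ladar points into the camera image through calibrated extrinsics, intrinsics, and a lens-distortion model. The sketch below shows that projection with a simple two-term radial distortion; all calibration parameters are placeholders, and the paper's combined error-correction process is not reproduced.

```python
import numpy as np

def project_ladar_to_image(points_ladar, R, t, fx, fy, cx, cy, k1=0.0, k2=0.0):
    """Map 3-D ladar points into camera pixel coordinates using assumed
    extrinsics (R, t), pinhole intrinsics, and a 2-term radial distortion model."""
    pc = points_ladar @ R.T + t                 # ladar frame -> camera frame
    x, y = pc[:, 0] / pc[:, 2], pc[:, 1] / pc[:, 2]
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2            # radial distortion factor
    u = fx * x * d + cx
    v = fy * y * d + cy
    return np.stack([u, v], axis=1), pc[:, 2]   # pixel coords + depth for z-buffering
```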
Scanning LIDARs are widely used as 3D sensors for navigation due to their ability to provide 3D information on terrain and obstacles with a high degree of precision. The optics of conventional scanning LIDARs are generally monostatic, i.e., the launch beam and return beam share the same optical path in the scanning optics. As a consequence, LIDARs with monostatic optics suffer poor performance at short range (<5 m) due to scattering from internal optics and the insufficient dynamic range of a LIDAR receiver to cover both short range and long range (1 km). This drawback is undesirable for rover navigation, since it is critical for low-profile rovers to see well at short range. It is also an issue for LIDARs used in applications involving aerosol penetration, since scattering from nearby aerosol particles can disable LIDARs at short range. In many cases, multiple 3D sensors have to be used for navigation.
To overcome these limitations, Neptec previously developed a scanning LIDAR (called TriDAR) with specially designed triangulation optics that is capable of high-speed scanning. In this paper, the reported WABS (Wide Angle Bistatic Scanning) LIDAR demonstrates a few major advances over the TriDAR design. While it retains the benefit of bistatic optics as seen in TriDAR, in which the launch beam path and return beam path are separated in space, it significantly improves performance in terms of field of view, receiving optical aperture, and sensor size.
The WABS LIDAR design was prototyped under a contract with the Canadian Space Agency. The LIDAR prototype was used as the 3D sensor for the navigation system on a lunar rover prototype. It demonstrated good performance in field of view (45°×60°) and minimum range (1.5 m), both of which are critical for rover navigation and hazard avoidance. The paper discusses the design concept and objectives of the WABS LIDAR and presents some test results.
This paper reports experiments and analysis of slant-path imaging using 1.5 μm and 0.8 μm gated imaging. The investigation is a follow-up to the measurements reported last year at the SPIE laser radar conference in Orlando.
The sensor, a SWIR camera, collected both passive and active images along a 2 km path over an airfield. The sensor was elevated by a lift in steps from 1.6 to 13.5 meters. Targets were resolution charts as well as human targets. The human target was holding various items and performing certain tasks, some of high relevance in defence and security. One of the main purposes of this investigation was to compare the recognition of these human targets and their activities with the resolution information obtained from conventional resolution charts. Data collection of human targets was also performed from our rooftop laboratory at about 13 m above ground.
The turbulence was measured along the path with anemometers and scintillometers. The camera collected both passive and active images in the SWIR region. We also included the Obzerv camera working at 0.8 μm in some tests.
The paper presents images for both passive and active modes obtained at different elevations and discusses the results from both technical and system perspectives.
We have developed a LIDAR system with a sensor head which, although it includes a scanning mechanism, is less than
20 cc in size. The system is not only small, but is also highly sensitive.
Our LIDAR system is based on time-of-flight measurements, and incorporates an optical fiber. The main feature of our
system is the utilization of optical amplifiers for both the transmitter and the receiver, and the optical amplifiers enable
us to exceed the detection limit set by thermal noise. In conventional LIDAR systems the detection limit is determined
by the thermal noise, because the avalanche photo-diodes (APD) and trans-impedance amplifiers (TIA) that they use
detect the received signals directly. In the case of our LIDAR system, the received signal is amplified by an optical fiber
amplifier before reaching the photo diode and the TIA. Therefore, our LIDAR system boosts the signal level before the weak incoming signal is buried in thermal noise. There are conditions under which the noise figure for the
combination of an optical fiber amplifier and a photo diode is superior to the noise figure for an avalanche photo diode.
We optimized the gains of the optical fiber amplifier and the TIA in our LIDAR system such that it would be capable of
detecting a single photon. As a result, the detection limit of our system is determined by shot noise.
We have previously demonstrated optically pre-amplified LIDAR with a perfectly co-axial optical system [1]. For this we used a variable optical attenuator to remove internal reflection from the transmission and receiving lenses. However, the optical attenuator had an insertion loss of 6 dB, which reduced the sensitivity of the LIDAR. We re-designed the optical system to be semi-co-axial and removed the variable optical attenuator. As a result, we succeeded in scanning up to a range of 80 m.
This small and highly sensitive measurement technology shows great potential for use in LIDAR.
The compact High Speed Scanning Lidar (HSSL) was designed to meet the requirements for a rover GN&C sensor. The eye-safe HSSL's fast scanning speed, low volume, and low power make it an ideal choice for a variety of real-time and non-real-time applications, including:
3D Mapping;
Vehicle guidance and Navigation;
Obstacle Detection;
Orbiter Rendezvous;
Spacecraft Landing / Hazard Avoidance.
The HSSL comprises two main hardware units: Sensor Head and Control Unit. In a rover application, the Sensor Head
mounts on the top of the rover while the Control Unit can be mounted on the rover deck or within its avionics bay. An
Operator Computer is used to command the lidar and immediately display the acquired scan data.
The innovative lidar design concept was a result of an extensive trade study conducted during the initial phase of an
exploration rover program. The lidar utilizes an innovative scanner coupled with a compact fiber laser and high-speed
timing electronics. Compared to existing compact lidar systems, distinguishing features of the HSSL include its high
accuracy, high resolution, high refresh rate and large field of view. Other benefits of this design include the capability to
quickly configure scan settings to fit various operational modes.
Since last year, we have been developing a 3D scanning LIDAR designated KIDAR-B25, which features a 3D scanning structure based on an optically and mechanically coupled instrument. In contrast with previous scanning LIDARs, vertical scanning is realized using two stepping motors with synchronized movement, so that the scan moves in a spiral. From the results of outdoor experiments conducted last year to evaluate and measure the LIDAR's performance and stability, we identified some limitations and problems that should be resolved. First, the number of samples per second is insufficient for detection, object clustering, and classification. In addition, the accuracy and precision of the distance at every point are seriously affected by the reflectance and distance of the target. Therefore, we have focused on improving the 3D LIDAR's range-finding performance, measurement speed, and stability regardless of environmental variation. Toward these goals, in this paper we describe two improvements over the previous 3D LIDAR.
Light detection and ranging (LIDAR) has been used in remote sensing systems, obstacle avoidance systems on planetary
landers, rendezvous docking systems, and formation flight control systems. A wide dynamic range is necessary for
LIDAR systems on planetary landers and in rendezvous docking systems. For example, a dynamic range of 60 dB was
required for the receiving system used in the Hayabusa mission to measure distances between 50 m and 50 km. In
addition, the obstacle detection and avoidance system of a planetary lander requires a ranging resolution of better than 10
cm. For planetary landers, the Institute of Space and Astronautical Science/Japan Aerospace Exploration Agency is
developing a readout integrated circuit (ROIC) for LIDAR reception. This report introduces the design of the customized
IC and reports the results of preliminary experiments evaluating the prototype, LIDARX04.
Advanced LIDAR applications, such as next-generation:
Micro Pulse;
Time of Flight (e.g., Satellite Laser Ranging);
Coherent and Incoherent Doppler (e.g., Wind LIDAR);
High Spectral Resolution;
Differential Absorption (DIAL);
Photon Counting LIDAR (e.g., 3D LIDAR);
are placing more demanding requirements on conventional lasers (e.g., increased repetition rates)
and have inspired the development of new types of laser sources. Today, solid-state lasers are
used for wind sensing, 2D laser radar, 3D scanning, and flash LIDAR.
In this paper, we report on the development of compact, highly efficient, high-power, all-solid-state, diode-pumped pulsed nanosecond lasers, as well as high-average-power/high-pulse-energy sub-nanosecond (<1 ns) and picosecond (<100 ps) lasers for these next-generation LIDAR applications.
Due to a large number of available Airborne Lidar Bathymetry (ALB) survey datasets and scheduled future surveys,
there is a growing need from coastal mapping communities to estimate the accuracy of ALB as a function of the survey
system and environmental conditions. Knowledge of ALB accuracy can also be used to evaluate the quality of products
derived from ALB surveying. This paper presents theoretical and experimental results focused on the relationship
between sea surface conditions and the accuracy of ALB measurements. The simulated environmental conditions were
defined according to the typical conditions under which successful ALB surveys can be conducted. The theoretical part
of the research included simulations, where the ray-path geometry of the laser beam was monitored below the water
surface. Wave-tank experiments were conducted to support the simulations. A cross section of the laser beam was
monitored underwater using a green laser with and without wind-driven waves. The results of the study show that
capillary waves and small gravity waves distort the laser footprint. Because sea state is related to wind to a first-order approximation, it is possible to suggest wind speed thresholds for different ALB survey projects that vary in accuracy requirements. If wind or wave information is collected during an ALB survey, it is possible to evaluate the change in ALB accuracy due to different sea surface conditions.
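The below-surface ray-path geometry referred to above reduces, at each surface facet, to Snell refraction at a locally tilted interface; the short Python sketch below shows how a wave-induced facet tilt displaces the bottom strike point. The incidence angle, tilt, depth, and refractive indices are illustrative values, not the simulated survey conditions.

```python
import numpy as np

def refract(incident_dir, surface_normal, n_air=1.0, n_water=1.33):
    """Vector form of Snell's law: direction of the refracted ray in water."""
    d = incident_dir / np.linalg.norm(incident_dir)
    n = surface_normal / np.linalg.norm(surface_normal)
    cos_i = -np.dot(n, d)
    eta = n_air / n_water
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# bottom strike offset for a flat vs. a wave-tilted facet, 10 m water depth
d_in = np.array([np.sin(np.radians(20.0)), 0.0, -np.cos(np.radians(20.0))])
for tilt in (0.0, 5.0):                               # facet tilt in degrees
    n = np.array([np.sin(np.radians(tilt)), 0.0, np.cos(np.radians(tilt))])
    r = refract(d_in, n)
    print(tilt, 10.0 * r[:2] / -r[2])                 # horizontal offset at the bottom
```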
There is a demand from the authorities to have good maps of the coastal environment for their exploitation and
preservation of the coastal areas. The goal for environmental mapping and monitoring is to differentiate between vegetation and non-vegetated bottoms and, if possible, to differentiate between species. Airborne lidar bathymetry is an
interesting method for mapping shallow underwater habitats. In general, the maximum depth range for airborne laser
exceeds the possible depth range for passive sensors. Today, operational lidar systems are able to capture the bottom (or
vegetation) topography as well as estimations of the bottom reflectivity using e.g. reflected bottom pulse power. In this
paper we study the possibilities and advantages for environmental mapping if laser sensing were further developed from single-wavelength depth-sounding systems to include multiple emission wavelengths and fluorescence receiver channels. Our results show that an airborne fluorescence lidar has several interesting features which might be useful for mapping underwater habitats. One example is the laser-induced fluorescence emission spectrum, which could be used for classification together with the elastic lidar signal. In the first part of our study, vegetation and substrate samples were collected and their spectral reflectance and fluorescence were subsequently measured in the laboratory. A laser wavelength of 532 nm was used for excitation of the samples. The choice of 532 nm as the excitation
wavelength is motivated by the fact that this wavelength is commonly used in bathymetric laser scanners and that the
excitation wavelengths are limited to the visible region since, for example, ultraviolet radiation is highly attenuated in water. The second part of our work consisted of theoretical performance calculations for a potential real system and a comparison of the separability between species and substrate signatures using selected wavelength regions for fluorescence sensing.
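As an illustration of the kind of separability comparison described above, the following Python sketch (using hypothetical data, not the measured signatures) computes a Bhattacharyya distance between two classes in two assumed fluorescence bands.

# Minimal sketch (hypothetical data): comparing the separability of two bottom
# classes from band-integrated fluorescence signals via a two-class
# Bhattacharyya distance per candidate receiver band.
import numpy as np

def bhattacharyya(a, b):
    """Bhattacharyya distance between two 1-D samples, Gaussian assumption."""
    ma, mb = a.mean(), b.mean()
    va, vb = a.var(ddof=1), b.var(ddof=1)
    return 0.25 * np.log(0.25 * (va / vb + vb / va + 2)) + \
           0.25 * (ma - mb) ** 2 / (va + vb)

rng = np.random.default_rng(0)
# Hypothetical band-integrated fluorescence (arbitrary units) for two classes,
# e.g. vegetation vs. bare substrate, in two candidate emission bands.
veg_620, sub_620 = rng.normal(1.0, 0.15, 50), rng.normal(0.9, 0.15, 50)
veg_685, sub_685 = rng.normal(1.8, 0.20, 50), rng.normal(0.7, 0.20, 50)  # chlorophyll peak

for name, v, s in [("620 nm band", veg_620, sub_620),
                   ("685 nm band", veg_685, sub_685)]:
    print(f"{name}: Bhattacharyya distance = {bhattacharyya(v, s):.2f}")
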
This work reports on the design, construction, and commissioning of an ultraviolet scanning Raman lidar system deployed at the Otlica observatory in Slovenia. The system uses a fast parabolic mirror as a receiver and a frequency-tripled Q-switched Nd:YAG pulsed laser as a transmitter, both mounted on a common frame with steerable elevation angle. Custom optics using a low f-number aspheric lens were designed to focus the light into a UV-enhanced optical fiber used to transfer the lidar return signal from the telescope to the polychromator. Vibrational Raman spectra of N2 and H2O were separated using narrow-band interference filters combined with dichroic beam splitters. System functionality and performance were assessed in a series of preliminary experiments and by comparison of the retrieved results with radiosonde data.
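For context, a common way to retrieve a water-vapour profile from such a system is from the ratio of the H2O and N2 Raman channel signals, calibrated against radiosonde data. The Python sketch below illustrates that standard retrieval with synthetic profiles and an assumed calibration constant; it is not the authors' processing code.

# Minimal sketch (not the authors' retrieval code): standard Raman-lidar
# water-vapour retrieval, where the mixing-ratio profile is proportional to
# the ratio of the background-subtracted H2O and N2 Raman channel signals,
# with the calibration constant fixed against radiosonde data.
import numpy as np

def mixing_ratio(s_h2o, s_n2, background_h2o, background_n2, calib_const):
    """Water-vapour mixing ratio profile (g/kg) from range-binned Raman signals.

    s_h2o, s_n2   : photon counts per range bin in the H2O and N2 channels
    background_*  : mean background counts per bin (e.g. from far-range bins)
    calib_const   : calibration constant, e.g. from radiosonde comparison
    """
    sig_h2o = np.asarray(s_h2o, float) - background_h2o
    sig_n2 = np.asarray(s_n2, float) - background_n2
    ratio = np.where(sig_n2 > 0, sig_h2o / np.where(sig_n2 > 0, sig_n2, 1.0), np.nan)
    return calib_const * ratio

if __name__ == "__main__":
    # Synthetic profiles for illustration only.
    z = np.arange(0.1, 5.0, 0.1)                    # range, km
    s_n2 = 1e5 * np.exp(-z) / z**2 + 20.0           # lidar-like falloff + background
    s_h2o = 0.03 * s_n2 * np.exp(-z / 2.0) + 20.0   # moister near the ground
    w = mixing_ratio(s_h2o, s_n2, 20.0, 20.0, calib_const=120.0)
    print(np.round(w[:5], 2))
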
In a previous study, we demonstrated the first development results of a 3D imaging LADAR (LAser Detection And Ranging) system that obtains 3D data using a linear array receiver. The system consists of in-house-made key components: the linear array receiver, in which the previously reported APD (Avalanche Photo Diode) array and ROIC (Read Out Integrated Circuit) array are assembled in one package, and the transmitting optics, which use a pupil-division method to realize uniform illumination of the target. In this paper, we report an advanced 3D imaging LADAR with an improved ROIC. The ROIC can set the optimum threshold for pulse peak detection in each element and switch the measurement range width on a case-by-case basis. Moreover, the response of the MUX in the ROIC has been improved. With this ROIC installed, we realized 256 × 256 pixel range imaging at an online frame rate of more than 30 Hz. We then tried online object detection on the obtained 3D images using a simple detection algorithm and demonstrated that the system has the potential to detect objects even in scenes with some clutter.
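The detection algorithm is not specified in detail; as a rough illustration of what a simple detector on a 256 × 256 range image could look like, the Python sketch below gates pixels to an assumed range window and labels connected components. The thresholds and test data are placeholders, not the reported method.

# Minimal sketch (not the reported algorithm): a simple detector for a
# 256x256 range image that gates pixels to a range window and labels
# connected components as object candidates. Thresholds are assumptions.
import numpy as np
from scipy import ndimage

def detect_objects(range_img, r_min, r_max, min_pixels=30):
    """Return bounding boxes of connected pixel groups inside [r_min, r_max]."""
    mask = (range_img >= r_min) & (range_img <= r_max)
    labels, n = ndimage.label(mask)
    boxes = []
    for lab, sl in zip(range(1, n + 1), ndimage.find_objects(labels)):
        size = (labels[sl] == lab).sum()
        if size >= min_pixels:                   # reject small clutter blobs
            boxes.append((sl[0].start, sl[0].stop, sl[1].start, sl[1].stop))
    return boxes

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.uniform(40.0, 60.0, (256, 256))    # cluttered background ranges (m)
    img[100:140, 80:130] = 25.0                  # synthetic object at 25 m
    print(detect_objects(img, 20.0, 30.0))       # -> one box around the object
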
A laser radar (LADAR) system with a Geiger mode avalanche photodiode (GAPD) is used extensively due to its high
detection sensitivity. However, this system requires a certain amount of time to receive subsequent signals after detecting
the previous one. This dead time, usually 10 ns to 10 μs, is determined by the material composition of the detector and
the design of the quenching circuits. Therefore, when we measure objects in close proximity to other objects along the
optical axis using the LADAR system with GAPD, it is difficult to separate them clearly owing to the dead time problem.
One example is the case of objects hidden behind partially transparent blinds. In this paper, we propose a modified LADAR system with a GAPD that removes the dead-time problem by adopting an additional linear-mode avalanche photodiode (LAPD) as a complementary detector. Because the LAPD has no dead time, albeit with relatively low detection sensitivity, the proposed system can still measure an object that falls within the GAPD dead time. Light emitted from the pulsed laser is partly delivered to a fast photodiode to generate a start signal, while most of the pulse energy is directed onto the target and scattered from its surfaces. The scattered light within the field of view of the system is divided by a polarizing beam splitter and becomes incident on the two types of APDs, the GAPD and the LAPD. The GAPD receives the signals from the target with high sensitivity, and the signals scattered within the dead-time zone are detected by the LAPD. The two sets of signals are analyzed together, so that returns from objects placed within the dead time can be distinguished clearly.
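As a schematic of how the two detection streams could be combined, the Python sketch below merges GAPD events with LAPD events that fall inside the GAPD dead-time windows. The data model, coincidence tolerance, and timings are assumptions for illustration, not the authors' signal-processing chain.

# Minimal sketch (assumed data model, not the authors' electronics): merge
# time-stamped detections from a Geiger-mode APD, which is blind for
# `dead_time` after each event, with detections from a linear-mode APD that
# has no dead time, so returns hidden in the GAPD dead-time zone are kept.
def merge_detections(gapd_times, lapd_times, dead_time):
    """Return a sorted list of event times (s) combining both detectors."""
    merged = sorted(gapd_times)
    for t in sorted(lapd_times):
        # Keep an LAPD event only if it falls inside some GAPD dead-time
        # window and was not already seen by the GAPD.
        in_dead_zone = any(g < t < g + dead_time for g in gapd_times)
        duplicate = any(abs(t - g) < 2e-9 for g in gapd_times)  # assumed 2 ns tolerance
        if in_dead_zone and not duplicate:
            merged.append(t)
    return sorted(merged)

if __name__ == "__main__":
    # Partially transparent blind at ~66 ns, hidden object at ~400 ns:
    gapd = [66e-9]            # GAPD fires on the blind, then is dead for 1 us
    lapd = [66e-9, 400e-9]    # LAPD sees both returns
    print(merge_detections(gapd, lapd, dead_time=1e-6))   # -> [6.6e-08, 4e-07]
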
In this paper, a new method to improve the SNR of a LADAR system through temporal filtering with two Geiger-mode avalanche photodiodes (GmAPDs) is proposed. The method is implemented by splitting the return beam onto two GmAPDs and applying an AND operation to their outputs, so that the timing circuitry registers an event only when both GmAPDs generate electrical signals simultaneously. Although this method reduces the laser-return energy reaching each detector, it is highly effective in reducing the false-alarm probability, because the noise is randomly distributed in the time domain; it also requires no additional image-processing steps. Experiments were performed with varying time-bin sizes to verify the advantage of the proposed method, and the experimental results demonstrate the improvement in SNR.
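The AND step can be pictured as a per-time-bin coincidence test. The Python sketch below shows one such test on placeholder event lists with an assumed 1 ns bin; it does not reflect the actual timing hardware or the bin sizes used in the experiments.

# Minimal sketch (assumed data model): AND-coincidence filtering, keeping a
# detection only if both GmAPD channels report an event in the same time bin.
import numpy as np

def coincidence_filter(times_a, times_b, bin_size):
    """Return bin indices where both channels registered at least one event."""
    bins_a = set(np.floor(np.asarray(times_a) / bin_size).astype(int))
    bins_b = set(np.floor(np.asarray(times_b) / bin_size).astype(int))
    return sorted(bins_a & bins_b)

if __name__ == "__main__":
    bin_size = 1e-9                              # 1 ns time bin (assumed)
    # Channel A: true return at ~250 ns plus random dark/background counts
    a = [250.3e-9, 87.1e-9, 512.9e-9]
    # Channel B: true return at ~250 ns plus different random counts
    b = [250.6e-9, 199.4e-9, 730.2e-9]
    print(coincidence_filter(a, b, bin_size))    # -> [250]: only the target survives
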