KEYWORDS: Contrast transfer function, LIDAR, Imaging systems, Laser systems engineering, 3D acquisition, Data processing, Stereoscopy, Detection and tracking algorithms, Clouds, 3D image processing
Several quantitative data quality metrics for three dimensional (3D) laser radar systems are presented, namely: X-Y
contrast transfer function, Z noise, Z resolution, X-Y edge & line spread functions, 3D point spread function and data
voids. These metrics are calculated from raw and/or processed point cloud data, each providing different information
regarding the performance of 3D imaging laser radar systems and the perceptual quality attributes of 3D datasets. The
discussion is presented within the context of 3D imaging laser radar systems employing arrays of Geiger-mode
Avalanche Photodiode (GmAPD) detectors, but the metrics may generally be applied to linear-mode systems as well. An example of how these metrics can be used to compare noise-removal algorithms is also provided.
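As a concrete illustration, the sketch below shows one plausible way to estimate the Z noise metric: fit a plane to the points returned from a nominally flat surface patch and report the spread of the out-of-plane residuals. The function name, the least-squares fit, and the synthetic test patch are illustrative assumptions, not the implementation used in this work.

```python
import numpy as np

def z_noise(points: np.ndarray) -> float:
    """Estimate Z noise as the standard deviation of out-of-plane
    residuals for an (N, 3) array of x, y, z points sampled over a
    nominally flat patch. Illustrative sketch only; the metric as
    defined in the paper may use a different estimator."""
    xy1 = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    # Least-squares fit of the plane z = a*x + b*y + c.
    coeffs, *_ = np.linalg.lstsq(xy1, points[:, 2], rcond=None)
    residuals = points[:, 2] - xy1 @ coeffs
    return float(np.std(residuals))

# Example: a flat plate at z = 10 m observed with 2 cm of range noise.
rng = np.random.default_rng(0)
pts = np.column_stack([
    rng.uniform(0, 5, 10_000),             # x (m)
    rng.uniform(0, 5, 10_000),             # y (m)
    10.0 + rng.normal(0, 0.02, 10_000),    # z (m)
])
print(f"Estimated Z noise: {z_noise(pts):.4f} m")  # ~0.0200
```

Because the estimate is taken over a known flat patch, it isolates ranging noise from true surface relief; the other metrics (CTF, spread functions, voids) can be computed from analogously targeted measurements.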
Image chain analysis is a systems engineering tool which allows imaging system designers to understand how different
components of the imaging chain affect the quality and derivable information of generated data products. In this paper,
we apply image chain analysis techniques to formulate a product chain for airborne three-dimensional (3D) imaging
laser radar systems that employ arrays of Geiger-mode avalanche photodiode detectors. The processes involved in 3D
data generation and subsequent information extraction are described. These processes are organized into five groups (Data
Capture, Raw Point Cloud Formation, Noise Filtering, Advanced Post-Processing and Data Analysis) to form the
proposed product chain. For each group, key parameters that affect 3D data quality are identified along with synthetic
data examples of their respective impact on 3D data quality and information extraction. In addition, we discuss ongoing
and future work intended to further our understanding of 3D data quality and interpretability.
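To make the grouping concrete, the sketch below arranges the five groups as sequential stages of a simple processing pipeline. The stage names follow the paper; the function signatures and chaining mechanism are hypothetical scaffolding, not the authors' software.

```python
from typing import Any, Callable, Dict, List

# The five groups of the proposed product chain, in order.
STAGES: List[str] = [
    "Data Capture",
    "Raw Point Cloud Formation",
    "Noise Filtering",
    "Advanced Post-Processing",
    "Data Analysis",
]

def run_product_chain(raw_input: Any,
                      stage_fns: Dict[str, Callable[[Any], Any]]) -> Any:
    """Apply each stage's function in sequence. Each stage consumes the
    previous stage's output, mirroring how parameters chosen early in
    the chain propagate into every downstream data product."""
    data = raw_input
    for stage in STAGES:
        data = stage_fns[stage](data)
    return data
```

Framing the chain this way makes the paper's central point explicit: a quality-limiting parameter in any one stage constrains everything downstream of it.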
Three-dimensional (3D) Light Detection And Ranging (LIDAR) systems designed for foliage penetration can produce
good bare-earth products in medium to medium-heavy obscuration environments, but product creation becomes
increasingly difficult as the obscuration level increases. Prior knowledge of the obscuration environment over
large areas is hard to obtain. The competing factors of area coverage rate and product quality are difficult to balance.
Ground-based estimates of obscuration levels are labor intensive and only capture a small portion of the area of interest.
Estimates of obscuration levels derived from airborne data require that the area of interest has been collected previously.
Recently, there has been a focus on lacunarity (a scale-dependent measure of translational invariance) to quantify the gap
structure of canopies. While this approach is useful, it needs to be evaluated relative to the size of the instantaneous
field-of-view (IFOV) of the system under consideration. In this paper, the author reports initial results on generating not just average obscuration values from overhead canopy photographs, but full obscuration probability density functions (PDFs), for both gimbaled linear-mode and Geiger-mode airborne LIDAR. In general, gimbaled linear-mode
(LM) LIDAR collects data with a higher signal-to-noise ratio (SNR), but is limited to smaller areas and cannot collect at higher
altitudes. Conversely, Geiger-mode (GM) LIDAR has a much lower SNR, but is capable of higher area rates and of collecting data at higher altitudes. To date, Geiger-mode LIDAR obscurant-penetration theory has relied on a single
obscuration value, but recent work has extended it to use PDFs [1]. Whether the inclusion of PDFs significantly changes predicted results, and whether those predictions more closely match actual results, awaits the generation of PDFs over specific ground-truth targets and comparison against actual collections of those targets. Ideally, examination of individual PDFs for specific collections will provide insight into how collection operations can be optimized in general, and into whether generating representative PDFs for various forest types will be useful for collection planning.
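A minimal sketch of the PDF generation described above, assuming a binarized overhead canopy photograph (1 where canopy blocks the ground, 0 in gaps): tile the image with windows sized to the sensor's projected IFOV, compute the obscuration fraction in each window, and histogram those fractions. The window size, binarization, and non-overlapping tiling are illustrative assumptions.

```python
import numpy as np

def obscuration_pdf(canopy_mask: np.ndarray, ifov_px: int, bins: int = 20):
    """Estimate an obscuration PDF from a binary canopy mask.

    canopy_mask : 2-D array, 1 = canopy, 0 = gap.
    ifov_px     : window size in pixels matching the projected IFOV.
    Returns (bin_centers, pdf), with the PDF normalized to unit area.
    """
    h, w = canopy_mask.shape
    fractions = []
    # Non-overlapping tiles; a sliding window is an equally valid choice.
    for i in range(0, h - ifov_px + 1, ifov_px):
        for j in range(0, w - ifov_px + 1, ifov_px):
            fractions.append(canopy_mask[i:i + ifov_px, j:j + ifov_px].mean())
    pdf, edges = np.histogram(fractions, bins=bins, range=(0.0, 1.0),
                              density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, pdf

# Example: synthetic canopy with ~60% mean cover.
rng = np.random.default_rng(1)
mask = (rng.random((512, 512)) < 0.6).astype(float)
centers, pdf = obscuration_pdf(mask, ifov_px=16)
```

Because LM and GM systems project different IFOVs onto the canopy, the same photograph yields a different PDF for each sensor, which is the scale dependence noted above.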
The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is a synthetic imagery generation model developed at the Center for Imaging Science (CIS) at the Rochester Institute of Technology (RIT). It is a quantitative, first-principles-based model that calculates the sensor-reaching radiance from the visible through the longwave infrared on a spectral basis. DIRSIG generates a very accurate representation of what a sensor would see by modeling all the processes involved in the imaging chain. Currently, DIRSIG models only passive sources, such as the sun and blackbody radiation due to the temperature of an object. Active systems have the benefit of allowing the user to control the illumination source and tailor it for specific applications. Remote sensing Laser Detection and Ranging (LADAR) systems that utilize a laser as the active source have existed for over 30 years. Recent advances in tunable lasers and infrared detectors have enabled much more sophisticated and accurate work, but a comprehensive spectral LADAR model has yet to be developed.

In order to provide a tool to assist in LADAR development, this research incorporates a first-principles-based elastic LADAR model into DIRSIG. It calculates the irradiance onto the focal plane on a spectral basis for both the atmospheric and topographic returns, based on the system characteristics and the assumed atmosphere. The geometrical form factor, a measure of the overlap between the transmitted beam and the receiver field-of-view, is carefully accounted for in both the monostatic and bistatic cases. The model includes the effect of multiple bounces from topographic targets. Currently, only direct-detection systems are modeled. Several sources of noise are modeled in detail, such as speckle from rough surfaces. Additionally, atmospheric turbulence effects, including scintillation, beam effects, and image effects, are accounted for. To allow for future growth, the model and code are modular and anticipate the inclusion of advanced sensor modules and inelastic scattering.
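For reference, the quantities such a model computes correspond to standard single-scatter elastic lidar theory; the equations below give the textbook forms for the two return types, while DIRSIG's spectral implementation may differ in detail. Here P_0 is the transmitted power, tau the pulse duration, A_r the receiver aperture area, xi(R) the geometrical form factor (transmitter/receiver overlap), beta and alpha the atmospheric backscatter and extinction coefficients, and rho the Lambertian reflectance of a hard target at range R_t.

```latex
% Atmospheric (distributed) return from range R:
P_{\mathrm{atm}}(R) = P_0 \,\frac{c\,\tau}{2}\,\beta(R)\,
    \frac{A_r}{R^{2}}\,\xi(R)\,
    \exp\!\left(-2\int_{0}^{R}\alpha(r)\,dr\right)

% Topographic (hard-target) return from a surface at range R_t:
P_{\mathrm{topo}}(R_t) = P_0 \,\frac{\rho}{\pi}\,
    \frac{A_r}{R_t^{2}}\,\xi(R_t)\,
    \exp\!\left(-2\int_{0}^{R_t}\alpha(r)\,dr\right)
```

The two-way transmission term is what couples each return to the assumed atmosphere, and xi(R) is the overlap quantity the abstract notes must be handled separately in the monostatic and bistatic geometries.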