Open Access
Hybrid passive polarimetric imager and lidar combination for material classification
29 July 2020
Jarrod P. Brown, Rodney G. Roberts, Darrell C. Card, Christian L. Saludez, Christian K. Keyser
Abstract

We investigate the augmentation of active imaging with passive polarimetric imaging for material classification. Experiments are conducted to obtain a multimodal dataset of lidar reflectivity and polarimetric thermal self-emission measurements against a diverse set of material types. Using the assumption that active lidar imaging can provide high-resolution three-dimensional spatial information, a known surface orientation is utilized to enable higher fidelity classification. Machine learning is applied to the dataset of monostatic lidar unidirectional reflectivity and passive longwave infrared degree of linear polarization features for material classification. The hybrid sensor technique can classify materials with 91.1% accuracy even with measurement noise resulting in a signal-to-noise ratio of only 6 dB. The application of the proposed technique is applicable for the classification of hidden objects or could assist existing spatial-based object classification.

1. Introduction

Many applications such as autonomous driving, surveillance, reconnaissance, and target engagement require the capability to accurately classify objects. Both active (e.g., sonar, radar, and lidar systems) and passive imaging (e.g., visible and infrared cameras) are popular solutions for object classification. Active imaging sensors operating in the optical spectrum, such as lidar, actively transmit light and detect backscattered light to characterize material properties, shape, and size.1 Lidar offers several advantages over other sensing modalities, including ranging (enabling point-cloud rendering), pulse separation (enabling foliage penetration for hidden object classification), directional material reflectance, and invariance to lighting.2 Similar to lidar, passive infrared sensors also capture material properties such as spectral reflectance as well as spatial information; however, passive sensors rely on external sources to illuminate or emit radiance (e.g., the sun illuminating the material or the material self-emitting due to body temperature). Both lidar and passive infrared imaging have demonstrated excellent performance in object classification, assuming sufficient pixel coverage of the object is obtained via imaging in order to infer spatial information of the object (e.g., template matching).3,4 However, spatial-based recognition is only successful if a significant portion of the object is visible. For scenarios where only a small fraction of a surface on an object is imaged (e.g., hidden by obscuration), spatial information is of limited utility. In this scenario, only spectral information is available for object classification (this nonspatial classification can be considered material classification).

Polarization-sensitive passive infrared imaging has been employed and demonstrated to improve discrimination between natural and manmade classes.5,6 This two-class discrimination is typically based on contrast enhancement of a spatial area in the scene with the surrounding background. More sophisticated material classification techniques using passive polarimetric imagers have also been investigated to classify material types.7–11 In order to obtain successful results, the techniques must make several assumptions such as a known orientation of the material surface and a single illumination source. These assumptions may not be valid in actual remote sensing applications utilizing passive sensors alone. In related work, the authors suggest utilizing lidar to obtain object orientation and measurement geometry;10 however, no further research has been presented regarding this suggestion.

In recent years, significant advancements have been made in machine learning techniques for classification, specifically in the field of deep learning.12–16 A recently published survey12 reviews deep learning-based hyperspectral image classification publications and compares several strategies for this topic. The survey includes networks designed to only use the spectral content of a single pixel, which is ideal for material classification. Unfortunately, material classification in passive imaging is difficult due to significant signal variability from the fluctuation in external sources such as temperature, cloud cover, and diurnal cycle.17 This issue has been demonstrated with a deep belief network trained using longwave infrared (LWIR) hyperspectral imagery collected over multiple diurnal cycles.13 Results showed that a multiday augmented deep network had a significant drop in performance when tested on a single day, demonstrating a lack of generalization for the specific dataset utilized. In other work, a deep transfer learning method has been proposed to improve the hyperspectral image classification performance in the situation of limited training samples.14 The deep network design consistently demonstrates superior performance over other popular machine learning techniques. However, the design requires spatial features which may be limited if an object is partially hidden. Similar work utilizes deep learning techniques to combine hyperspectral imagery with visible15 and lidar16 modalities. These publications suggest that combining information using machine learning techniques will greatly enhance classification performance.

In this paper, we present a hybrid passive polarimetric LWIR imager and lidar combination for material classification. Lidar is commonly paired with hyperspectral imagery to leverage height and shape features of lidar with spectral characterization obtained by passive sensors operating at many wavelengths.18–20 Similarly, polarimetric imagery also is typically fused with hyperspectral imagery.21–23 In contrast to the aforementioned research, which relies on the hyperspectral characterization of materials to distinguish material types, we combine passive polarimetric and active reflectivity features of the dual imaging architecture. The specific imaging capabilities we use include degree of linear polarization (DoLP) from passive polarimetric imaging, monostatic unidirectional reflectance (fr) from lidar imaging, and viewing orientation (θ,ϕ). Viewing orientation is assumed to be available using lidar three-dimensional (3-D) point-cloud ranging. Very limited research has been published on the combination of lidar with passive polarimetric imaging to improve classification performance, which we believe is an important aspect in machine learning applications for infrared imaging.24 The innovation of our work includes (1) the architecture of utilizing θ, ϕ, and fr from lidar in combination with DoLP measured by a passive polarimetric imager, (2) a unique dataset of 34 diverse material types imaged by the hybrid system at eight observation angles, and (3) material classification results from combining the measurements, viewing angle, and training data. Therefore, the emphasis of this paper is the introduction and demonstration of the proposed hybrid sensing technique for material classification. We believe advanced classification methods could be designed for specific applications based on this work.

The remainder of this paper is organized as follows. In Sec. 2, we describe the sensing modalities used in this work, including the sensor data representation. Then, Sec. 3 presents a solution for material classification focused on the joint usage of passive polarimetric and lidar infrared imaging. The proposed multisensor architecture utilizes observation angle as well as multiple measurements taken from each sensor to classify material type. A demonstration of an example application is also presented. In Sec. 4, we demonstrate the feasibility of material classification with the proposed multisensor architecture by training and testing six popular machine learning techniques. The measurement and processing of the dual modality dataset is explained. Classification accuracy of the multisensor architecture is compared to the performance of each sensor operating independently. Finally, we conclude our work and discuss future research directions in the last section.

2. Sensors and Data Representation

The machine learning application presented in this paper utilizes a hybrid imaging architecture consisting of lidar and passive polarimetric sensors to capture fr and DoLP features, respectively. The independent sensing modalities present distinct characteristics of a material; however, both depend on the same description of the interaction of the electromagnetic field with materials. Consider the scenario of an optical signal with wavelength λ (nm) incident on a surface from the direction described by θi and ϕi, and reflecting into the direction of θr and ϕr. The reflected radiance Lr (W m⁻² sr⁻¹) carries information about polarimetric interactions of the incident irradiance Ei (W m⁻²), and is expressed as

Eq. (1)

$\mathbf{L}_r(\theta_r,\phi_r,\lambda)=\mathbf{M}_r(\theta_i,\phi_i,\theta_r,\phi_r,\lambda)\,\mathbf{E}_i(\theta_i,\phi_i,\lambda),$
where Mr (sr⁻¹) is the polarimetric bidirectional reflectance distribution function, which is a 4×4 Mueller matrix.25–27 Lr and Ei are 4×1 column matrices in Stokes notation, described as

Eq. (2)

$\mathbf{S}=[s_0 \;\; s_1 \;\; s_2 \;\; s_3]^{T},$
where S represents the polarimetric state of the signals described by Stokes parameters s0, s1, s2, and s3. Stokes notation allows s0 to represent total signal intensity, s1 to represent horizontal and vertical linear polarizations, s2 to represent linear polarization oriented at 45 deg and 135 deg, and s3 to represent circular polarization.26 Equation (1) is the general representation of an optical signal interacting with a material surface. The data representation of signals captured by lidar and passive polarimetric imaging is further discussed in the following sections.

2.1. Lidar

The lidar features utilized in our machine learning technique include unidirectional reflectivity and range. Reflectivity is used to characterize the material, and range is used to estimate the observation angle of the material surface. The direct detection pulsed lidar sensor utilized in this work operates at the 1.55-μm wavelength and uses a linear mode avalanche photodetector. The system transmits a 5-ns full-width at half-maximum laser pulse that strikes and scatters off opaque surfaces. The intensity of the backscattered laser energy is captured by the photodetector and digitized by a receiver. The time elapsed between the transmitted and reflected pulses is used to calculate range. Multiple range measurements across a fraction of a surface can be used to estimate angle of incidence. The peak of the backscattered pulse is used to estimate reflectivity.
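As a concrete illustration of these two uses of the lidar data, the sketch below converts round-trip time of flight to range and fits a plane to a small point-cloud patch to recover the observation angle relative to the surface normal. The function names and the least-squares plane fit are our own illustrative choices, not a description of the AFRL system's actual processing chain.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def range_from_tof(t_round_trip_s):
    """Convert round-trip pulse time of flight (s) to one-way range (m)."""
    return 0.5 * C * t_round_trip_s

def observation_angle(points_xyz, line_of_sight):
    """Estimate the angle (rad) between the line of sight and the surface
    normal of a small lidar point-cloud patch.

    points_xyz: (N, 3) array of 3-D points on the patch (N >= 3).
    line_of_sight: (3,) unit vector from the sensor toward the patch.
    """
    centered = points_xyz - points_xyz.mean(axis=0)
    # Least-squares plane fit: the normal is the right singular vector
    # associated with the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    cos_theta = abs(normal @ line_of_sight)
    return np.arccos(np.clip(cos_theta, 0.0, 1.0))
```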

As shown in Fig. 1(a), active sensors are typically dominated by unidirectional radiance represented by Eq. (1) with θr=θi and ϕr=ϕi, which we denote as θ and ϕ, respectively. However, the receiving detector is polarization insensitive; therefore, only the s0 component of Lr is measured. Furthermore, we assume nondiagonal elements in the first row of the Mueller matrix for our data to be zero. This assumption is supported by experimental measurements28,29 of diverse materials which show that nondiagonal Mueller matrix elements of most opaque surfaces that might be observed in a remote sensing application are approximately zero. Using the stated simplifications, Eq. (1) is approximated for our lidar system as

Eq. (3)

$L_r(\theta,\phi,\lambda)=f_r(\theta,\phi,\lambda)\,E_i(\theta,\phi,\lambda),$
where Lr and Ei are the scalar s0 elements of the Stokes vectors Lr and Ei, and fr (sr⁻¹) is the top-left element of Mr, which represents the scalar monostatic bidirectional reflectance distribution function (mBRDF).

Fig. 1 Depiction of common geometries for radiometric sources in (a) active (unidirectional) and (b) passive (specular, diffuse, and self-emission) imaging.

Due to practical complications in measuring Ei in Eq. (3), fr is defined in an alternative form as

Eq. (4)

$f_r(\theta,\phi,\lambda)=\frac{P_r}{P_i\,\Omega\,\cos\theta},$
which describes the scattered power Pr (W) per unit solid angle Ω (sr) normalized by the incident power Pi (W) and the cosine of the detector zenith angle θ measured relative to the material surface normal.30 Theoretically, an active imaging system could be calibrated to have a known Pi by measuring direct output power and estimating range and atmospheric attenuation. Likewise, θ could be estimated by calculating surface orientation using lidar 3-D point-cloud data, and Ω is calculated from range and aperture size. Therefore, fr could be calculated and utilized for material classification. An alternative method to calculate fr in experimentation utilizes a reference material with a known directional-hemispherical reflectivity ρDHR, such as Spectralon, in addition to Pr and θ. This is a favorable method because Pi can be difficult to calibrate; however, ρDHR can be accurately measured using laboratory instruments. Since Spectralon is manufactured to closely approximate an ideal Lambertian diffuse reflector, the Spectralon fr is assumed to be ρDHR(λ)/π, which has been supported by laboratory measurements. Finally, mBRDF is calculated as

Eq. (5)

$f_r(\theta,\phi,\lambda)=\frac{P_r\cos\theta_s}{P_r^{s}\cos\theta}\,\frac{\rho_{DHR}(\lambda)}{\pi},$
where Prs is the power measurement (or backscatter pulse peak) of the Spectralon taken at zenith angle θs, Pr is the power measurement of the sample, and the incident power is characterized to be constant for each measurement (Spectralon and sample).30,31 In this paper, a database of material fr is collected using the technique described in Eq. (5).
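A minimal sketch of the Eq. (5) calculation follows; the function and variable names, and the example numbers, are hypothetical, with the Spectralon reference assumed to be measured at its own zenith angle θs under the same incident power as the sample.

```python
import numpy as np

def mbrdf(p_r, theta, p_r_spectralon, theta_s, rho_dhr):
    """Sample mBRDF f_r (1/sr) per Eq. (5): backscatter pulse peak of the
    sample (p_r at zenith angle theta, rad) ratioed against a Spectralon
    reference (p_r_spectralon at theta_s) of known directional-hemispherical
    reflectivity rho_dhr."""
    return (p_r * np.cos(theta_s)) / (p_r_spectralon * np.cos(theta)) * rho_dhr / np.pi

# Hypothetical example: sample at 40 deg, 99% Spectralon at normal incidence
f_r = mbrdf(p_r=0.12, theta=np.deg2rad(40.0),
            p_r_spectralon=0.85, theta_s=0.0, rho_dhr=0.99)
```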

2.2. Passive Polarimeter

The polarimetric feature, DoLP, is captured using a cooled Polaris 640 LWIR Imaging Polarimeter, manufactured by Polaris Sensor Technologies, Inc.32 The sensor has an operating wavelength of 7.5 to 11.1  μm and up to a 120-Hz frame rate. The Polaris 640 system is equipped with a fixed polarizer and rotating retarder imaging polarimeter, which takes measurements of linear polarization oriented at 0 deg, 45 deg, 90 deg, and 135 deg such that L0deg, L45deg, L90deg, and L135deg are scalar measurements of L. The measurements are combined using the modified Pickering’s method5 described as

Eq. (6)

$\mathbf{L}=\begin{bmatrix} s_0 \\ s_1 \\ s_2 \\ s_3 \end{bmatrix}=\begin{bmatrix} (L_{0^\circ}+L_{45^\circ}+L_{90^\circ}+L_{135^\circ})/2 \\ L_{0^\circ}-L_{90^\circ} \\ L_{45^\circ}-L_{135^\circ} \\ 0 \end{bmatrix},$
to obtain a polarimetric Stokes column matrix. Since circular polarization emitted from an object is extremely uncommon, most passive polarimeters (including the one utilized in our experiments) do not capture s3;27,33 therefore, the s3 element is set to zero.

A common characterization of polarization in passive polarimetric imaging is DoLP, which is calculated from L as

Eq. (7)

$\mathrm{DoLP}=\frac{\sqrt{s_1^2+s_2^2}}{s_0},$
and describes the fraction of the power that is linearly polarized. Due to the nature of the quantities involved, DoLP ranges from zero to one (i.e., zero indicates no polarization is detected, and one indicates the signal is completely polarized). As depicted in Fig. 1(b), passive sensors capture the sum of specular and diffuse reflected signals as well as self-emitted radiance.34 Emitted radiance Le (W m⁻² sr⁻¹) is described as

Eq. (8)

$\mathbf{L}_e(\theta,\phi,\lambda)=\mathbf{M}_e(\theta,\phi,\lambda)\,[E_b(\lambda) \;\; 0 \;\; 0 \;\; 0]^{T},$
where Eb (W m⁻²) is the intensity of radiance derived from the surface body temperature, Me (sr⁻¹) is the directional polarimetric emittance, which is a 4×4 Mueller matrix,27 and (θ, ϕ) is the observation angle relative to normal. The specular and diffuse reflected radiance are each described by Eq. (1). We assume the emitted radiance is significantly larger than diffuse and specular reflectance within the LWIR waveband. Through experimentation, it has been shown that this is a valid assumption when imaging objects heated to 100°C with a cold sky.35 Therefore, the experiments in this paper are conducted on heated samples in a controlled indoor laboratory. Passive polarimetric measurements of an object are taken with the retarder waveplate at angles 0 deg, 45 deg, 90 deg, and 135 deg so that the column matrix in Eq. (6) can be constructed. Finally, DoLP is calculated using Eq. (7).
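The per-pixel arithmetic of Eqs. (6) and (7) reduces to a few lines; the sketch below is our illustration, not the vendor's processing software.

```python
import numpy as np

def stokes_pickering(l0, l45, l90, l135):
    """Modified Pickering's method, Eq. (6): Stokes parameters from
    intensity measurements at 0, 45, 90, and 135 deg; s3 is set to zero."""
    s0 = 0.5 * (l0 + l45 + l90 + l135)
    s1 = l0 - l90
    s2 = l45 - l135
    return np.array([s0, s1, s2, 0.0])

def dolp(stokes):
    """Degree of linear polarization, Eq. (7)."""
    s0, s1, s2, _ = stokes
    return np.hypot(s1, s2) / s0

# e.g., dolp(stokes_pickering(1.00, 0.95, 0.80, 0.85)) is approximately 0.12
```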

The fundamental properties of polarization suggest that polarimetric measurements could be useful features for material classification, specifically in discriminating rough and smooth surfaces.9,36 This is typically explained by representing the texture of the surface as multiple microfacets with orientations following a random distribution. The angle-dependent polarization from each microfacet is incoherently summed when simultaneously observing multiple microfacets of a rough surface, resulting in an unpolarized signal. Conversely, smooth surfaces maintain a consistent orientation across the surface and therefore preserve the polarimetric signal.

2.3. Data Representation

This paper advances material classification by utilizing the feature set consisting of measurements of lidar and passive polarimetric sensors, both characterized over a well-defined set of observation angles. The number of unique observation angles and the specific angles utilized are expected to significantly affect classification performance. For example, from Fresnel reflectance theory, DoLP is known to increase as the observation angle relative to normal increases.37 Concerning the mBRDF angle dependence, perfectly diffuse Lambertian surfaces have uniform fr for all angles; however, realistic surfaces typically have specular components with higher values within the normal-incidence specular lobe.30 We assume the observation angle can be determined by estimating surface orientation relative to normal using lidar 3-D point-cloud imagery. The observation angle is represented as θ and is restricted to be in the monostatic plane of incidence such that ϕ = 0 deg. Furthermore, in many applications, multiple observation angles can be measured on a single material surface, due to a moving platform or moving object. The features are jointly represented by feature vector X as

Eq. (9)

$\mathbf{X}(\theta_1,\theta_2,\ldots,\theta_N)=[f_r(\theta_1) \;\; f_r(\theta_2) \;\; \cdots \;\; f_r(\theta_N) \;\; \mathrm{DoLP}(\theta_1) \;\; \mathrm{DoLP}(\theta_2) \;\; \cdots \;\; \mathrm{DoLP}(\theta_N)]^{T},$
where N represents the total number of observation angles at which the measurements are taken.
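Forming X is then a simple stacking of the two N-element measurement sets, as in the following sketch (array names are illustrative):

```python
import numpy as np

def feature_vector(f_r, dolp):
    """Stack f_r and DoLP measured at the same N observation angles into
    the 2N-element feature vector X of Eq. (9)."""
    return np.concatenate([np.asarray(f_r), np.asarray(dolp)])

# e.g., with N = 8 angles (0 to 70 deg in 10-deg steps), X has 16 elements
```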

3. Hybrid Sensor Architecture for Material Classification

In this section, we establish the first-ever implementation of a hybrid passive polarimetric imager and lidar combination for material classification. While we believe the combination of these modalities offers several benefits, this paper is focused specifically on the classification of material type. Material classification could be extremely useful for detecting partially hidden objects or could assist spatial-based object classification. As discussed in the previous section, the hybrid sensing architecture we propose uses fr and DoLP features measured by the lidar and the passive polarimeter sensors, respectively, which are simultaneously captured at a colocated observation geometry. The proposed hybrid sensing architecture requires a state-of-the-art linear-mode lidar capable of obtaining high-resolution 3-D point-cloud and reflectivity measurements for each pixel. The point-cloud data are used to estimate surface orientation and thus observation angle θ relative to the surface normal. Both lidar and passive polarimetric infrared intensity values are utilized to calculate fr and DoLP. The required processing steps are shown in Fig. 2. First, lidar and passive polarimetric measurements are captured to form 3-D point-cloud, intensity, and Stokes data. Measurements could be repeated to capture multiple observation angles. The features are combined to form X from Eq. (9), and material classification is implemented. Details of the classification process are presented next, and the training and parameter optimization of the classifier are discussed in Sec. 4.4. If the proposed architecture is utilized in applications where long ranges or adverse weather conditions are present, the measurements must be corrected to compensate for environmental effects. In Sec. 3.2, a notional hybrid sensing system, representing one of several applications that benefit from this technology, is presented, and solutions to potential obstacles to utilizing the proposed technology in a tactical environment are discussed.

Fig. 2 Hybrid sensing material classification flowchart.

3.1. Material Classification

Since both fr and DoLP are expected to yield consistent and repeatable measurements in most situations, a supervised learning algorithm is considered for the hybrid sensor material classification in this paper. In supervised machine learning, labeled sample data are used offline to model the mapping between input examples and the known output classes.38 We utilize features measured in the laboratory against a diverse material dataset to train the supervised classifier to identify material type. The key idea of supervised learning is to estimate a decision boundary that separates each class from the others based on the training data. We propose using a support vector machine (SVM) for classifying material type due to the proven success of this classifier in similar applications such as hyperspectral imaging for land cover classification and target detection.39,40 However, we believe advanced classifiers could be designed, based on the proposed technique (i.e., hybrid sensing with known viewing orientation), that optimize performance for a specific application. The SVM presented in this paper demonstrates the general application of material classification.

The SVM classifier tries to find the optimal separating hyperplane that maximizes the margin between the closest training samples of each class. The hyperplanes are typically formed in high-dimensional space using kernel transformation functions,41 and boundary pixels (i.e., support vectors) are utilized to create a decision surface.42 Therefore, SVM classifiers are inherently binary classifiers designed to solve two-class problems. A collection of SVM classifiers must be implemented to separate multiple classes. Multiclass designs include one-versus-all (one SVM classifier for each class) and one-versus-one (one SVM classifier for each pair of classes). The SVM classifier is a particularly popular solution for machine learning when there are a limited number of training samples available,43 which is typically the case in nonconventional imaging, such as hyperspectral, polarimetric, and lidar. The implementation, parameter selection, and classification accuracy of material classification for our proposed hybrid sensing system are presented in Sec. 4.
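A minimal sketch of such a multiclass SVM is shown below, using scikit-learn's SVC with its one-versus-one decision scheme in place of the MATLAB implementation used in Sec. 4; the random placeholder data merely stand in for the labeled material database.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder data shaped like the database of Sec. 4: 16 features
# (f_r and DoLP at 8 angles) and 19 material classes a through s.
X_train = rng.random((3400, 16))
y_train = rng.choice(list("abcdefghijklmnopqrs"), size=3400)

clf = make_pipeline(
    StandardScaler(),                    # f_r and DoLP span different scales
    SVC(kernel="rbf", C=10.0,            # Gaussian kernel, box constraint 10
        decision_function_shape="ovo"),  # one-versus-one multiclass
)
clf.fit(X_train, y_train)
print(clf.predict(X_train[:3]))
```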

3.2. Notional System

The proposed hybrid sensing architecture is beneficial to a multitude of machine learning applications, such as automatic target detection, land cover classification, autonomous driving, and machine vision in manufacturing. The actual system parameters of the lidar and passive polarimetric sensors should be carefully selected to optimize the performance for the specific application. For example, commercially available lidar systems designed for autonomous driving currently utilize high scanning rates and a large field-of-view, requiring high repetition rate lasers with moderate power and 200-m maximum distance.4446 In contrast, scanning linear-mode lidar in 3-D mapping remote sensing applications typically requires a higher power laser and operates at an altitude of 1000 to 5000 ft,46 with operating ranges of 1  km or greater. In this section, we demonstrate the feasibility of the proposed architecture by presenting a notional implementation for a remote sensing application.

To support our notion of hybrid sensing, a tactical demonstrator is fully assembled using the commercially available passive polarimetric imager manufactured by Polaris Sensor Technologies, Inc. as described in Sec. 2.2, and a custom lidar system owned and operated by the Air Force Research Laboratory (AFRL) at Eglin Air Force Base. Parameters for the demonstration are shown in Table 1. The system is operated to capture imagery at 1.5  km from a 25-m tower. A flat white painted aluminum 1.22  m×1.52  m panel is placed in a predominately natural scene at a 1.469-km slant range and 40-deg observation angle. Example imagery from the demonstration is shown in Fig. 3. At this range, there are 88 and 12 pixels on the panel with the lidar and passive systems, respectively. For this application, the passive system is designed to have a larger field-of-view to locate possible objects-of-interest, and the lidar is cued to image specific areas with high resolution. The presented notional hybrid system demonstrates the feasibility to capture imagery using a tactical system in a relevant application.

Table 1 Parameters of hybrid passive polarimetric and lidar demonstrator system.

Passive system parameters                         Lidar system parameters
Sensor              Polaris Vela 640              Sensor              AFRL Custom
Wavelength          7 to 11.1 μm                  Wavelength          1.55 μm
Polarization        0, 45, 90, and 135 deg        Detector            InGaAs LmAPD
Pixels              640×480                       Scanner             X-Y galvanometers
FOV                 8 deg × 6 deg                 FOV                 0.9 deg × 0.9 deg
IFOV                220 μrad                      Laser divergence    60 μrad
Integration time    52.267 μs                     Laser power         270 mW
Frame rate          120 Hz                        Laser rate          30 kHz
Max range measured  5 km (100°C blackbody)        Max range measured  2 km (10% Spectralon)

Fig. 3 Imagery from the hybrid sensor demonstration showing (a) fr from the lidar system and (b) DoLP from the passive polarimetric imager, with the flat white painted aluminum panel circled in red.

If the proposed architecture is utilized in applications where long ranges or adverse weather conditions are present, the lidar measurements must be corrected to compensate for atmospheric attenuation and signal loss using a radiometric model. The first technique to mitigate this issue is choosing an operating wavelength of the laser to be within a high-transmission window. In addition, we suggest utilizing a popular radiometric model, such as MODTRAN47 or LEEDR,48 as well as current meteorological data, to correct for atmospheric effects. The passive polarimetric signal DoLP is not altered by signal attenuation, but sources of noise such as diffuse reflected LWIR radiance could affect the polarimetric signal. In this paper, we do not attempt to correct measurements taken in adverse conditions and at long ranges. Instead, we limit our measurements to close range under ideal conditions and then introduce a generic error source into the test database when evaluating the classification accuracy (discussed in Sec. 4.4). The error term represents effects of long-range imaging and atmospheric conditions (or possible errors resulting from the correction of those effects). Adding error to our data alters the signal-to-noise ratio (SNR), which is varied to represent multiple degrees of accuracy that may be expected. In short, longer ranges and more difficult imaging environments are expected to reduce the SNR, and we evaluate performance against varying amounts of SNR.

4. Experiment Results

In this section, the proposed architecture is evaluated for material classification. We present a unique common dataset for polarimetric LWIR and lidar measurements against a diverse set of materials. Next, the dataset is analyzed and trends from each class are discussed. Then, the implementation of supervised learning is fully described. Finally, a comprehensive evaluation of material classification performance for the machine learning algorithms is presented.

4.1. Dataset

To our knowledge, there are no lidar datasets with LWIR passive polarimetric imagery available to evaluate the performance of material classification algorithms. Therefore, an experiment is conducted to obtain a unique characterization of a diverse set of materials with both active and passive polarimetric infrared imaging systems. The experiment collects fr and DoLP of 34 materials imaged at eight observation angles. The sample materials consist of painted aluminum panels (of various colors and gloss), painted tile thinset (of various colors and texture), naturally occurring objects (e.g., leaves, pine needles, and bark), asphalt, concrete, brick, rubber, metal, roof shingle, plywood, plexiglass, and cardboard, as shown in Fig. 4. The diverse set of materials is categorized into 19 classes, which are labeled as classes a through s. Measurements of each material are analyzed in Sec. 4.2, and class groupings are used for classification in Sec. 4.4.

Fig. 4 Materials utilized in experiments separated into 19 classes labeled a through s. For each material, a picture and name are shown (there are two green and two black painted aluminum samples made by different paint vendors).

Each sample is placed on a rotation stage controlled by an articulating tripod that can pan and tilt under computer control. Samples are imaged at angles 0 deg to 70 deg in 10-deg increments, where 0 deg is normal incidence (as determined by a mirror) and ϕ is held constant at 0 deg. The entire scene remains static for each iteration of imaging. The scanning lidar system captures pulse intensity at each pixel of the image by measuring the peak power of the backscattered pulse. A region of interest (ROI) is manually selected in the lidar imagery to represent approximately the same portion of the material for all angles, as shown in Fig. 5. The ROI is selected to include all of the sample surfaces except for areas near the edge. The ROI consists of at least 1800 pixels at normal and 250 pixels at 70 deg. The measurements are taken in a controlled laboratory setting at a distance of 9 m. Measurements are also taken against calibrated Spectralon panels with ρDHR accurately measured at the 1.55-μm wavelength. Using the mean power measurements of the materials and Spectralon panels, fr is calculated using Eq. (5).

Fig. 5 Experiment setup showing sensor measurements of (a) lidar normal to material surface, (b) lidar at 70 deg, (c) LWIR intensity at normal, and (d) LWIR intensity at 70 deg. ROIs are shown in blue for the active system.

The entire experiment process is repeated using an LWIR polarimeter in place of the lidar system. In order to capture the emissive properties of the material, a heating element is utilized to maintain a 100°C surface temperature. The passive polarimeter measures the Stokes column matrix, as described in Eq. (6) (example imagery is shown in Fig. 5). ROIs are manually selected and consist of at least 3000 pixels at normal observation angle and 650 pixels at 70 deg. Finally, DoLP is calculated using Eq. (7). More details of the experiment setup and methodology have been recently published.24

The sample mean X¯ and standard deviation σSV of the pixel values within each ROI are calculated to statistically represent the experiment measurements as random variables. For simplicity, both fr and DoLP are approximated as Gaussian distributions. The feature set of Eq. (9) is formed using the experiment measurements of each material described as

Eq. (10)

$\mathbf{X}(\theta_1,\theta_2,\ldots,\theta_N)=\bar{\mathbf{X}}(\theta_1,\theta_2,\ldots,\theta_N)+\boldsymbol{\eta}_{SV}(\theta_1,\theta_2,\ldots,\theta_N),$
where θ1, θ2, …, θN are observation angles 0 deg, 10 deg, 20 deg, 30 deg, 40 deg, 50 deg, 60 deg, and 70 deg. The vector X¯ contains the calculated sample mean at observation angles one through N. The vector ηSV contains random numbers representing sample variance due to surface texture. The Gaussian distributions used to generate ηSV are zero mean, and the angle-dependent standard deviations for each element of ηSV are represented by the vector σSV(θ1, θ2, …, θN), which contains the calculated standard deviation at observation angles one through N. The statistics X¯ and σSV of lidar and passive polarimetric measurements for each of the 34 materials are presented in Figs. 6 and 7, respectively. Each curve represents a material measured against observation angle. Data points on the curves represent the mean, and error bars on each curve represent one standard deviation of the measurements within the ROI. The classes are separated into six figures with different y-axis limits to better view the data in the charts.

Fig. 6 Lidar fr mean measurement (within ROI) versus observation angle for materials in classes (a) a–c, (b) d–f, (c) g and h, (d) i–k, (e) l–n, and (f) o–s. Standard deviation is shown as error bars for each data point.

Fig. 7 Passive polarimeter DoLP mean measurement (within ROI) versus observation angle for materials in classes (a) a–c, (b) d–f, (c) g and h, (d) i–k, (e) l–n, and (f) o–s. Standard deviation is shown as error bars for each data point.

4.2. Data Analysis

Next, we analyze the dataset obtained with the hybrid sensor experiment. Inspection of fr in Fig. 6 shows that the sample means of materials with semigloss or glossy paint have extremely large fr near normal (due to the specular lobe of the monostatic lidar geometry) and low diffuse fr at other angles. The fr of all other materials tends to vary slowly with observation angle because the backscattered energy is mostly diffuse reflectance. Dark paint colors (i.e., green, black, and camouflage) have much lower fr than light colors (i.e., tan, white, and gray) because the darker colors absorb some of the laser energy. Additional groups of materials with considerably low reflectance include asphalt, rubber, and rusted steel. Materials painted light colors and brick have the overall highest fr. The natural materials, roof, concrete, cement block, cardboard, plywood, and plexiglass have similar fr that is typically more than dark paints but less than light paints.

According to Fresnel polarization theory,26 the magnitude of linear polarization is zero at normal observation angle and increases as a function of angle and refractive index of the material. For rough surfaces, the polarization is degraded as the signal from each microfacet is incoherently summed.34 In our dataset, DoLP is approximately zero near normal observation angle and increases with angle for almost all materials (resulting in a −s1 and +DoLP). The only exception is plywood, which has a reflected component that is prevalent at observation angles less than 20 deg (+s1 and +DoLP). Aluminum with light or dark paint color has the highest DoLP due to very smooth surfaces. Natural materials have the lowest DoLP, due to rough surfaces. Likewise, the smooth, medium, and rough textured thinset has DoLP inversely proportional to the surface roughness. Many of the measurements within a class maintain very similar signatures. For example, all materials of the semigloss light painted aluminum class (class e) have approximately the same polarimetric signal for all angles [as shown in Fig. 7(b)]. However, in comparison to the fr measurements, DoLP appears to be less diverse between classes. For example, class e is very similar to classes d and f. Therefore, classification may be more difficult with DoLP. Overall, the combined dataset is seen to agree with reflectance and polarization theory.

As previously discussed, the standard deviation represents material variation due to surface texture. In lidar imagery, standard deviation is relatively small compared to the mean, with the exception of the glossy and the camouflage painted aluminum panels. The glossy paints have a nonuniform specular spot at the center of the material near normal observation angles. The camouflage sample has three different paint colors within the ROI which causes a high standard deviation. As anticipated, the standard deviation of DoLP is highly correlated with the surface roughness (i.e., rough and smooth surfaces have high and low standard deviation, respectively)34 and mixed material types. For example, thinset with rough texture has higher standard deviation than the smooth thinset. Similarly, oak leaves and rusted steel have significantly higher variance due to the diverse materials within the ROI (i.e., colors of leaves, rust deposits on steel, etc.).

4.3. Implementation of Supervised Learning

The complete dataset, which is composed of the sample means and standard deviations presented in Figs. 6 and 7, is utilized to generate a database for supervised machine learning and classification performance evaluation. The initial database contains 34 row vectors, where each 1×16 row vector contains fr and DoLP measured at eight observation angles, as described in Eq. (9). For each of the 34 material samples, 100 observation vectors are generated using the sum of the sample mean X¯ and randomly distributed Gaussian noise ηSV characterized by the material's variance σSV, as described in Eq. (10). The entire database is organized as a 3400×16 matrix to represent an ensemble of measurements of the materials. Class labels are assigned to each observation following the class grouping a through s indicated in Fig. 4. The generated database represents intrinsic variation due to surface texture and inconsistent material properties across the sample surface (e.g., rust, discoloration, grain, and nonuniform mixtures), with no measurement noise added. To address measurement noise, we introduce a separate noise component, which is described in Sec. 4.4.2.
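A sketch of this database generation is given below, with hypothetical array names; the per-material means and standard deviations would come from the ROI statistics plotted in Figs. 6 and 7.

```python
import numpy as np

rng = np.random.default_rng(0)
N_OBS = 100  # observation vectors generated per material

def generate_database(x_bar, sigma_sv, labels):
    """Per Eq. (10): draw N_OBS feature vectors per material as the sample
    mean plus zero-mean Gaussian surface-texture variation.

    x_bar, sigma_sv: (34, 16) arrays of ROI mean and standard deviation
    (f_r at 8 angles followed by DoLP at 8 angles); labels: 34 class labels.
    """
    rows, y = [], []
    for mean, sigma, label in zip(x_bar, sigma_sv, labels):
        rows.append(mean + rng.normal(0.0, sigma, size=(N_OBS, mean.size)))
        y += [label] * N_OBS
    return np.vstack(rows), np.array(y)  # (3400, 16) database and labels
```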

We propose the use of SVM to implement material classification, as discussed in Sec. 3.1; however, we emphasize that the hybrid sensing architecture prevails over single-modality sensing under any one of an assortment of supervised machine learning techniques. Therefore, in addition to SVM, we also implement decision tree,38 discriminant,49 Naïve Bayes,50 k-nearest neighbors (kNN),51 and neural network52 classifiers to demonstrate the benefit of hybrid sensing. All classifiers are implemented using either the Statistics and Machine Learning or Deep Learning toolboxes from MATLAB.53 First, the database is loaded into the Classification Learner tool in MATLAB and the option to partition into five disjoint folds is selected. This option utilizes four folds for training and one fold for testing. To reduce classification variability, five rounds of cross-validation are performed using different partitions, and the validation results are averaged to obtain the final classification accuracy. Next, each of the six classifier techniques is individually selected within the tool. Parameters of each classification method are iteratively adjusted as shown in Table 2. All combinations of the parameters are exhaustively exercised, and the optimal result is utilized in the final accuracy metric for each implementation (a sketch of this exhaustive search appears after Table 2). Please note: the best-performing parameters within the listed parameter space change depending on the number of viewing angles (i.e., features), SNR, and classes of the dataset. Furthermore, future implementations could utilize automatic selection of the parameters via optimization tools provided by MATLAB to optimize the classifier for specific applications. Finally, the Classification Learner tool allows the user to select a subset of the features in the database to utilize in training and testing. In the following section, we present results from the experiment using various combinations of viewing angle measurements.

Table 2 Parameter space explored for each classification technique.

Classifier       Parameter            Values
SVM              Kernel               Gaussian, linear, quadratic, cubic
                 Kernel scale         1, 4, 16
                 Box constraint       1, 10, 100
                 Multiclass method    One-versus-one, one-versus-all
Decision tree    Maximum splits       4, 20, 100
                 Split criterion      Gini's diversity index, Twoing rule, maximum deviance reduction
Discriminant     Type                 Linear, quadratic
                 Covariance matrix    Full, diagonal
Naïve Bayes      Kernel               Gaussian, Epanechnikov, triangle, box
kNN              Number of neighbors  1, 10, 20, 100
                 Weight               Equal, inverse, squared inverse
                 Distance metric      Euclidean, Chebyshev, Minkowski, Mahalanobis, cosine, city block
Neural network   Number of neurons    10, 15, 20, 30, 50
                 Structure            Feed-forward, cascade-forward
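For readers working outside MATLAB, the exhaustive search over Table 2 can be sketched with scikit-learn's GridSearchCV; the grid below mirrors only part of the SVM parameter space, the placeholder data stand in for the generated database, and mapping MATLAB's kernel scale s to scikit-learn's gamma = 1/s² is our approximation.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_db = rng.random((3400, 16))          # stand-in for the Eq. (10) database
y_db = rng.integers(0, 19, size=3400)  # stand-in for the 19 class labels

param_grid = {
    "kernel": ["rbf", "linear"],          # subset of the Table 2 kernels
    "C": [1, 10, 100],                    # box constraint
    "gamma": [1.0, 1 / 4**2, 1 / 16**2],  # kernel scales 1, 4, 16
}
search = GridSearchCV(SVC(decision_function_shape="ovo"),
                      param_grid, cv=5)   # fivefold cross-validation
search.fit(X_db, y_db)
print(search.best_params_, round(search.best_score_, 3))
```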

4.4. Performance Evaluation

To fully demonstrate the added benefit of multisensor material classification, supervised techniques are utilized with features of individual sensors as well as the proposed hybrid system. We also experiment with multiple combinations of observation angles. First, results of a single observation angle using fr only, DoLP only, and hybrid features are evaluated without measurement noise added. Then, performance using a single observation angle is evaluated with varying levels of noise added. Finally, results using multiple observation angles with noise are presented.

4.4.1. Single observation angle without measurement noise

Measurements at a single observation angle, fr(θ1) and DoLP(θ1), are utilized, and θ1 is varied from 0 deg to 70 deg. Total classification accuracy, calculated as the number of observations correctly classified out of the total number of observations, is determined for each angle. As shown in Fig. 8(a), classification with fr has consistent performance for all angles, and DoLP improves as θ1 increases. The result matches expected performance based on Fresnel reflectance, where DoLP increases with angle and material classes become more distinct as observation angle increases. The highest classification accuracy obtained in this experiment is 83.6%, which occurs at θ1 = 70 deg. By utilizing both features, classification accuracy is increased by 44.5% compared to lidar only and 32.3% compared to passive polarimetric only; however, since a standalone passive polarimeter cannot determine observation angle without lidar point-cloud information, the DoLP-only classifier is still dependent on the ranging information of the lidar in a dual-sensor architecture. For evaluation purposes, we assume perfect knowledge of θ in this paper.

Fig. 8 Classification accuracy of fr, DoLP, and hybrid features with the SVM classifier using fivefold cross-validation against the material database, showing (a) θ1 varied from 0 deg to 70 deg with no noise added and (b) Gaussian noise added to observations at 70 deg such that SNR is varied from 3 to 10 dB.

4.4.2. Single observation angle with measurement noise

Next, in order to comprehensively demonstrate the effectiveness of the hybrid architecture, classification performance is evaluated with measurement noise added to the generated database. The feature vector described in Eq. (10) is replaced with

Eq. (11)

$\mathbf{X}(\theta_1,\theta_2,\ldots,\theta_N)=\bar{\mathbf{X}}(\theta_1,\theta_2,\ldots,\theta_N)+\boldsymbol{\eta}_{SV}(\theta_1,\theta_2,\ldots,\theta_N)+\boldsymbol{\eta}_{MN}(\theta_1,\theta_2,\ldots,\theta_N),$
where ηMN represents a vector containing Gaussian random numbers with zero mean and σMN standard deviation. We analyze classification accuracy versus SNR (dB), which we define as

Eq. (12)

$\mathrm{SNR}=10\log_{10}\left(\frac{\bar{X}+\eta_{SV}}{\sigma_{MN}}\right),$
where σMN is the standard deviation of the generated noise. Therefore, Eq. (12) is solved for σMN and then evaluated with SNR varied from 3 to 10 dB for each observation X¯.
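Solving Eq. (12) for σMN gives σMN = (X¯ + ηSV)/10^(SNR/10); a sketch of the noise injection, with illustrative names, follows (the features here are nonnegative, so the scaling is applied elementwise).

```python
import numpy as np

rng = np.random.default_rng(0)

def add_measurement_noise(x, snr_db):
    """Add the eta_MN term of Eq. (11): per Eq. (12), each observation
    x = x_bar + eta_SV gets zero-mean Gaussian noise with standard
    deviation sigma_MN = x / 10**(SNR/10), applied elementwise."""
    sigma_mn = x / 10.0 ** (snr_db / 10.0)
    return x + rng.normal(0.0, 1.0, size=np.shape(x)) * sigma_mn

# e.g., SNR = 6 dB makes each signal value ~4x the noise standard deviation
# X_noisy = add_measurement_noise(X_db, snr_db=6.0)
```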

The X(θ1 = 70 deg) database, which includes sample variance, measurement noise, and measurement mean at a single observation angle of 70 deg, is utilized with the SVM classifier for the fr-only, DoLP-only, and hybrid (i.e., fr and DoLP) architectures. As shown in Fig. 8(b), classification accuracy of all three architectures improves as SNR increases. The highest classification accuracy is 73.3% at SNR = 10 dB. With SNR = 6 dB, at which the signal is only four times greater than the standard deviation of the noise, classification accuracy is 56.6% using hybrid sensing.

4.4.3. Multiple observation angles with measurement noise

Finally, the classification accuracy for combinations of observation angles is examined, representing a scenario of a passively augmented lidar architecture imaging an object from multiple viewpoints (e.g., θ1, θ2, …, θN). In this experiment, the database of X from Eq. (11) is utilized with SNRs of 6 and 9 dB. In Table 3, the classification accuracy using imagery captured at all of the 0 deg, 10 deg, 20 deg, 30 deg, 40 deg, 50 deg, 60 deg, and 70 deg viewing angles is presented. Results of SVM, decision tree, discriminant, Naïve Bayes, kNN, and neural network classifiers using the parameters listed in Table 2 are shown. Parameters of the individual classifiers are optimized for each scenario. Results show that all classifiers follow the same trend versus SNR (higher SNR increases accuracy).

Table 3 Classification accuracy (%) using measurements from all 0 deg, 10 deg, 20 deg, 30 deg, 40 deg, 50 deg, 60 deg, and 70 deg viewing angles.

                      SNR = 6 dB                 SNR = 9 dB
Classifier        fr     DoLP   Hybrid       fr     DoLP   Hybrid
SVM               64.3   52.4   91.1         73.7   65.4   94.4
Decision tree     53.0   39.5   69.4         62.6   50.2   83.8
Discriminant      56.6   46.4   82.1         65.1   58.7   90.7
Naïve Bayes       57.3   46.9   83.2         66.6   55.5   92.4
kNN               62.9   46.4   82.6         71.6   58.2   92.0
Neural network    42.4   23.3   81.7         61.1   36.7   89.0

Classification accuracy when using all eight viewing angles from 0 deg to 70 deg is very impressive. However, in many scenarios obtaining this diverse set of angles is impractical. Therefore, we present additional experimentation utilizing combinations of only two to seven viewing angles. Obtaining multiple viewpoints is most likely to occur as consecutive angles (e.g., a moving platform may have a clear view of an object's surface for 30 deg to 50 deg observation angles before losing sight of it due to obscuration). We therefore examine combinations of consecutive observation angles; a sketch of how these angle windows map to feature columns follows Table 4. As shown in Table 4, utilizing additional observation angles generally improves performance. For example, the accuracy of X(50 deg, 60 deg, 70 deg) is 70.8%, a 5.4% increase over X(60 deg, 70 deg).

Table 4 Classification accuracy (%) of single modalities and the proposed hybrid technique using SVM.

                                                      SNR = 6 dB                 SNR = 9 dB
Feature set                                       fr     DoLP   Hybrid       fr     DoLP   Hybrid
X(10 deg, 20 deg)                                 34.3   29.6   57.8         39.7   35.2   69.9
X(20 deg, 30 deg)                                 34.6   31.1   60.9         42.7   37.4   76.3
X(30 deg, 40 deg)                                 34.7   32.7   62.0         41.7   38.1   76.5
X(40 deg, 50 deg)                                 33.4   32.5   62.3         42.8   39.2   77.5
X(50 deg, 60 deg)                                 32.4   33.1   64.1         42.8   40.7   77.9
X(60 deg, 70 deg)                                 30.2   33.9   65.4         43.0   41.5   80.4
X(0 deg, 10 deg, 20 deg)                          47.9   33.9   71.5         54.7   41.6   82.3
X(10 deg, 20 deg, 30 deg)                         37.6   34.1   67.7         47.0   42.1   82.5
X(20 deg, 30 deg, 40 deg)                         36.3   33.6   69.2         49.4   41.0   83.4
X(30 deg, 40 deg, 50 deg)                         39.9   35.8   69.5         48.9   43.6   83.8
X(40 deg, 50 deg, 60 deg)                         39.2   35.7   69.5         51.0   42.3   83.9
X(50 deg, 60 deg, 70 deg)                         40.3   37.2   70.8         48.6   46.5   86.5
X(0 deg, 10 deg, 20 deg, 30 deg)                  51.4   39.0   78.4         58.4   48.1   88.6
X(10 deg, 20 deg, 30 deg, 40 deg)                 41.8   37.6   75.2         55.3   46.3   87.3
X(20 deg, 30 deg, 40 deg, 50 deg)                 44.2   38.7   76.0         55.4   47.2   88.1
X(30 deg, 40 deg, 50 deg, 60 deg)                 43.5   38.3   74.1         53.8   48.1   87.1
X(40 deg, 50 deg, 60 deg, 70 deg)                 44.6   39.4   76.4         54.0   48.3   89.1
X(0 deg, 10 deg, 20 deg, 30 deg, 40 deg)          53.4   43.1   83.3         63.3   51.6   91.0
X(10 deg, 20 deg, 30 deg, 40 deg, 50 deg)         46.5   41.7   79.7         59.1   51.4   89.8
X(20 deg, 30 deg, 40 deg, 50 deg, 60 deg)         47.1   41.4   79.7         59.6   51.9   90.0
X(30 deg, 40 deg, 50 deg, 60 deg, 70 deg)         48.4   41.0   79.6         57.7   53.1   90.7
X(0 deg, 10 deg, 20 deg, 30 deg, 40 deg, 50 deg)  56.9   46.0   86.3         66.4   56.4   92.2
X(10 deg, 20 deg, 30 deg, 40 deg, 50 deg, 60 deg) 50.2   43.9   82.9         63.7   56.1   90.7
X(20 deg, 30 deg, 40 deg, 50 deg, 60 deg, 70 deg) 53.6   43.7   83.9         64.3   56.7   92.1
X(0 deg, 10 deg, 20 deg, 30 deg, 40 deg, 50 deg, 60 deg)          58.9   48.5   88.7   69.3   60.1   92.6
X(10 deg, 20 deg, 30 deg, 40 deg, 50 deg, 60 deg, 70 deg)         54.9   46.3   85.9   67.6   61.1   92.2
X(0 deg, 10 deg, 20 deg, 30 deg, 40 deg, 50 deg, 60 deg, 70 deg)  64.3   52.4   91.1   73.7   65.4   94.4
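The consecutive-angle feature sets of Table 4 correspond to column selections from the 16-column database; a sketch of enumerating the windows and their matching columns (our illustration) is shown below.

```python
import numpy as np

ANGLES = np.arange(0, 80, 10)  # the 8 measured angles, N = 8

def consecutive_subsets(min_len=2, max_len=7):
    """Yield each consecutive viewing-angle window and the matching
    feature columns (f_r in columns 0-7, DoLP in columns 8-15)."""
    for length in range(min_len, max_len + 1):
        for start in range(len(ANGLES) - length + 1):
            idx = np.arange(start, start + length)
            yield ANGLES[idx], np.concatenate([idx, idx + len(ANGLES)])

for angles, cols in consecutive_subsets(3, 3):
    print(angles, cols)  # e.g., [50 60 70] -> columns [5 6 7 13 14 15]
```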

4.5. Discussion of Results

The classification accuracy of all scenarios evaluated on our dataset is greater than 20%. With the 19 classes considered, a completely random guess would result in less than a 5.3% chance of correct classification. The performance is enabled by having a known observation angle. When considering a single known observation angle, results of fr and DoLP are very similar [Fig. 8(b)]. However, when combinations of angles are considered (Tables 3 and 4), fr consistently outperforms DoLP. In fact, the more angles utilized, the better the performance. This is because the actual measurements (shown in Figs. 6 and 7) of most materials have signatures that vary with observation angle. In all scenarios, combining the features in a hybrid architecture significantly improves performance. As previously mentioned, a standalone passive polarimeter is not capable of obtaining the observation angle without lidar point-cloud information; therefore, utilizing only the DoLP feature would still require a lidar system. We believe 6 dB is a reasonable evaluation point for SNR, based on our experience with lidar and infrared imaging systems. At 6 dB, the proposed technique achieves 91.1% material classification accuracy using SVM.

When comparing classifier techniques (Table 3), SVM obtains the best results. This could be due to the limited parameter space we explored with each classifier (shown in Table 2). Optimizing these parameters for the specific dataset could improve the classification accuracy of each method. We also notice that some SVM classifiers require on the order of 10 times longer to train than other classifier types (but performance metrics on training time are not presented because the metric is highly dependent on computational hardware). We recommend that the type of classifier utilized in future work be carefully selected for each individual application (by considering the amount of training data, dimensionality of the data, training time, number of features, number of classes, and class separation).

5. Conclusion

This work lays the foundation for follow-on work to design advanced classifiers optimized for specific applications. The combination of lidar and passive polarimetric sensors in a hybrid imaging architecture is demonstrated to obtain 91.1% material classification accuracy. A unique dataset consisting of fr and DoLP measurements versus θ is presented for a diverse set of 34 material types each imaged at eight observation angles. Material classification is implemented using six machine learning classifiers with multiple feature sets to clearly show the benefit of using a hybrid infrared imaging technique. The advantage of imaging an object at multiple viewpoints is shown to increase classification accuracy by 31.5% compared to classification at 70 deg alone when SNR=6  dB is considered. The presented technique relies on lidar 3-D point-cloud imagery to estimate surface orientation and is designed to classify on material surface properties fr measured with lidar and DoLP measured with passive polarimetric infrared sensors. Future work can combine this technology with object classification based on spatial features. For example, spatial features such as shape, height, length, and intensity contrast are typically obtained from the imagery of the sensors in the proposed hybrid sensing architecture. By combining material classification of our work with spatial features captured with the same sensors, we expect the classification accuracy to further improve.

References

1. 

S. Deshpande, W. Muron, Y. Cai, “Vehicle classification,” Computer Vision and Imaging in Intelligent Transportation Systems, 49 –52 John Wiley and Sons Ltd., Hoboken, New Jersey (2017). Google Scholar

2. 

J. Murray et al., “Advanced 3D polarimetric flash ladar imaging through foliage,” Proc. SPIE, 5086 84 –95 (2003). https://doi.org/10.1117/12.501612 PSISDG 0277-786X Google Scholar

3. 

W. Yao, S. Hinz and U. Stilla, “3D object-based classification for vehicle extraction from airborne lidar data by combining point shape information with spatial edge,” in IAPR Workshop Pattern Recognit. Remote Sens., 1 –4 (2010). https://doi.org/10.1109/PRRS.2010.5742804 Google Scholar

4. 

B. Schachter, Automatic Target Recognition, TT118 3rd ed.SPIE Press, Bellingham, Washington (2009). Google Scholar

5. 

L. Meng and J. P. Kerekes, “Adaptive target detection with a polarization sensitive optical system,” Appl. Opt., 50 1925 –1932 (2011). https://doi.org/10.1364/AO.50.001925 APOPAI 0003-6935 Google Scholar

6. 

J. Romano, D. Rosario and J. McCarthy, “Day/night polarimetric anomaly detection using SPICE imagery,” IEEE Trans. Geosci. Remote Sens., 50 5014 –5023 (2012). https://doi.org/10.1109/TGRS.2012.2195186 IGRSD2 0196-2892 Google Scholar

7. 

D. LeMaster and S. Cain, “Multichannel blind deconvolution of polarimetric imagery,” J. Opt. Soc. Am. A, 25 2170 –2176 (2008). https://doi.org/10.1364/JOSAA.25.002170 JOSAAH 0030-3941 Google Scholar

8. 

T. V. T. Krishna, C. D. Creusere and D. G. Voelz, “Passive polarimetric imagery-based material classification robust to illumination source position and viewpoint,” IEEE Trans. Image Process., 20 288 –292 (2011). https://doi.org/10.1109/TIP.2010.2052274 IIPRE4 1057-7149 Google Scholar

9. 

H. Zhan and D. G. Voelz, “Modified polarimetric bidirectional reflectance distribution function with diffuse scattering: surface parameter estimation,” Opt. Eng., 55 (12), 123103 (2016). https://doi.org/10.1117/1.OE.55.12.123103 Google Scholar

10. 

IV M. W. Hyde et al., “Determining the complex index of refraction of an unknown object using turbulence-degraded polarimetric imagery,” Opt. Eng., 49 (12), 126201 (2010). https://doi.org/10.1117/1.3518044 Google Scholar

11. 

B. L. Holtsberry and D. G. Voelz, “Material identification from remote sensing of polarized self-emission,” Proc. SPIE, 11132 1113203 (2019). https://doi.org/10.1117/12.2528282 PSISDG 0277-786X Google Scholar

12. 

S. Li et al., “Deep learning for hyperspectral image classification: an overview,” IEEE Trans. Geosci. Remote Sens., 57 (9), 6690 –6709 (2019). https://doi.org/10.1109/TGRS.2019.2907932 IGRSD2 0196-2892 Google Scholar

13. 

P. Rauss and D. Rosario, “Deep greedy learning under thermal variability in full diurnal cycles,” Opt. Eng., 56 (8), 081809 (2017). https://doi.org/10.1117/1.OE.56.8.081809 Google Scholar

14. 

B. Liu et al., “Deep convolutional recurrent neural network with transfer learning for hyperspectral image classification,” J. Appl. Remote Sens., 12 (2), 026028 (2018). https://doi.org/10.1117/1.JRS.12.026028 Google Scholar

15. 

G. Abdi, F. Samadzadegan and P. Reinartz, “Deep learning decision fusion for the classification of urban remote sensing data,” J. Appl. Remote Sens., 12 (1), 016038 (2018). https://doi.org/10.1117/1.JRS.12.016038 Google Scholar

16. 

W. Liao et al., “Deep learning for fusion of apex hyperspectral and full-waveform lidar remote sensing data for tree species mapping,” IEEE Access, 6 68716 –68729 (2018). https://doi.org/10.1109/ACCESS.2018.2880083 Google Scholar

17. 

J. P. Brown et al., “Characterizing polarization in passive polarimetric remote sensing,” in IEEE Res. and Appl. Photonics Defense Conf., 1 –2 (2019). https://doi.org/10.1109/RAPID.2019.8864351 Google Scholar

18. 

M. Dalponte, L. Bruzzone and D. Gianelle, “Fusion of hyperspectral and lidar remote sensing data for classification of complex forest areas,” IEEE Trans. Geosci. Remote Sens., 46 1416 –1427 (2008). https://doi.org/10.1109/TGRS.2008.916480 IGRSD2 0196-2892 Google Scholar

19. 

B. Rasti et al., “Fusion of hyperspectral and LiDAR data using sparse and low-rank component analysis,” IEEE Trans. Geosci. Remote Sens., 55 (11), 6354 –6365 (2017). https://doi.org/10.1109/TGRS.2017.2726901 IGRSD2 0196-2892 Google Scholar

20. 

S. Samiappan, L. Dabbiru and R. Moorhead, “Fusion of hyperspectral and LiDAR data using random feature selection and morphological attribute profiles,” in in Workshop Hyperspectral Image and Signal Process.: Evol. Remote Sens., 1 –4 (2016). https://doi.org/10.1109/WHISPERS.2016.8071662 Google Scholar

21. 

B. Flusche, M. Gartley and J. Schott, “Defining a process to fuse polarimetric and spectral data for target detection and explore the trade space via simulation,” J. Appl. Remote Sens., 4 043550 (2010). https://doi.org/10.1117/1.3516616 Google Scholar

22. 

D. B. Cavanaugh, K. R. Castle and W. Davenport, “Anomaly detection using the hyperspectral polarimetric imaging testbed,” Proc. SPIE, 6233 62331Q (2006). https://doi.org/10.1117/12.666133 PSISDG 0277-786X Google Scholar

23. 

Y. Zhao, P. Gong and Q. Pan, “Unsupervised spectropolarimetric imagery clustering fusion,” J. Appl. Remote Sens., 3 033535 (2009). https://doi.org/10.1117/1.3168619 Google Scholar

24. 

J. P. Brown et al., “Experiments in multiple-waveband passive polarimetric and active infrared imaging for material classification,” Proc. SPIE, 11412 1141209 (2020). https://doi.org/10.1117/12.2560286 PSISDG 0277-786X Google Scholar

25. 

F. Nicodemus, J. Richmond and J. Hsia, “Geometrical considerations and nomenclature for reflectance,” Washington, D.C. (1977). Google Scholar

26. 

D. Goldstein, Polarized Light, 2nd edMarcel Dekker, Inc., New York (2003). Google Scholar

27. 

J. Schott, Fundamentals of Polarimetric Remote Sensing, TT81 SPIE Press, Bellingham, Washington (2009). Google Scholar

28. C. Keyser et al., "Single-pulse Mueller matrix polarimeter for rapid scene characterization LADAR," Proc. SPIE, 10655, 106550G (2018). https://doi.org/10.1117/12.2305564

29. C. K. Keyser et al., "A fiber Kerr effect polarization state generator for temporally multiplexed polarimetric ladar," Proc. SPIE, 11005, 110050X (2019). https://doi.org/10.1117/12.2519137

30. G. T. Georgiev and J. J. Butler, "BRDF study of gray-scale Spectralon," Proc. SPIE, 7081, 708107 (2008). https://doi.org/10.1117/12.795931

31. J. C. Stover, Optical Scattering: Measurement and Analysis, 2nd ed., Vol. PM24, SPIE Press, Bellingham, Washington (1995).

33. D. B. Chenault et al., "Metrics for comparison of polarimetric and thermal target to background contrast," in IEEE Res. and Appl. Photonics Defense Conf., 1–4 (2018). https://doi.org/10.1109/RAPID.2018.8508988

34. L. Wolff, A. Lundberg and R. Tang, "Thermal emission polarization," Proc. SPIE, 3754, 75–86 (1999). https://doi.org/10.1117/12.366318

35. C. Saludez et al., "Observations on passive polarimetric imaging across multiple infrared wavebands," Proc. SPIE, 10986, 1098606 (2019). https://doi.org/10.1117/12.2518881

36. J. P. Brown et al., "Experiments in detecting obscured objects using longwave infrared polarimetric passive imaging," Proc. SPIE, 11001, 1100107 (2019). https://doi.org/10.1117/12.2518547

37. P. Chang et al., "Importance of shadowing and multiple reflections in emission polarization," Waves Random Media, 12 (1), 1–19 (2002). https://doi.org/10.1088/0959-7174/12/1/301

38. L. Breiman et al., Classification and Regression Trees, Chapman and Hall/CRC, Boca Raton, Florida (1984).

39. U. B. Gewali, S. T. Monteiro and E. Saber, "Machine learning based hyperspectral image analysis: a survey," arXiv preprint (2018).

40. R. R. Pullanagari et al., "Assessing the performance of multiple spectral–spatial features of a hyperspectral image for classification of urban land cover classes using support vector machines and artificial neural network," J. Appl. Remote Sens., 11 (2), 026009 (2017). https://doi.org/10.1117/1.JRS.11.026009

41. N. Cristianini and B. Scholkopf, "Support vector machines and kernel methods: the new generation of learning machines," AI Mag., 23 (3), 31 (2002). https://doi.org/10.1609/aimag.v23i3.1655

42. R. S. Hosseini, S. Homayouni and R. Safari, "Modified algorithm based on support vector machines for classification of hyperspectral images in a similarity space," J. Appl. Remote Sens., 6 (1), 063550 (2012). https://doi.org/10.1117/1.JRS.6.063550

43. N. Ghoggali, F. Melgani and Y. Bazi, "A multiobjective genetic SVM approach for classification problems with limited training samples," IEEE Trans. Geosci. Remote Sens., 47 (6), 1707–1718 (2009). https://doi.org/10.1109/TGRS.2008.2007128

44. C. Rablau, "Lidar: a new self-driving vehicle for introducing optics to broader engineering and non-engineering audiences," Proc. SPIE, 11143, 111430C (2019). https://doi.org/10.1117/12.2523863

45. E. J. Nunes-Pereira et al., "The LiDAR hop-on-hop-off route: visiting the LiDAR's past, present, and future landscapes," Proc. SPIE, 11207, 112072Q (2019). https://doi.org/10.1117/12.2530904

46. P. McManamon, LiDAR Technologies and Systems, SPIE Press, Bellingham, Washington (2019).

47. L. Meng and J. Kerekes, "An analytical model for optical polarimetric imaging systems," IEEE Trans. Geosci. Remote Sens., 52, 6615–6626 (2014). https://doi.org/10.1109/TGRS.2014.2299272

48. S. T. Fiorino et al., "A first principles atmospheric propagation and characterization tool: the laser environmental effects definition and reference (LEEDR)," Proc. SPIE, 6878, 68780B (2008). https://doi.org/10.1117/12.763812

49. Y. Guo, T. Hastie and R. Tibshirani, "Regularized linear discriminant analysis and its application in microarrays," Biostatistics, 8, 86–100 (2006). https://doi.org/10.1093/biostatistics/kxj035

50. T. Hastie, R. Tibshirani and J. Friedman, The Elements of Statistical Learning, 2nd ed., Springer, New York (2008).

51. V. Chandola, A. Banerjee and V. Kumar, "Anomaly detection: a survey," ACM Comput. Surv., 41, 1–58 (2009). https://doi.org/10.1145/1541880.1541882

52. I. Goodfellow, Y. Bengio and A. Courville, Deep Learning, MIT Press, Cambridge, Massachusetts (2016).

53. MathWorks, "Supervised learning algorithms for binary and multiclass problems," https://www.mathworks.com/help/stats/classification.html (accessed January 2020).

Biography

Jarrod P. Brown received his MS degree in electrical engineering from Florida State University in 2012. He is currently working toward his PhD, with research in object detection, material classification, and multisystem architectures using polarimetric imaging for remote sensing applications. Since 2013, he has been with the Air Force Research Laboratory Munitions Directorate, developing imaging systems and object detection algorithms. His research interests include passive and active infrared imaging.

Rodney G. Roberts received his PhD in electrical engineering from Purdue University in 1992. He is a professor of electrical and computer engineering at the Florida Agricultural and Mechanical University–Florida State University College of Engineering. His research interests include robotics, teleoperation, image processing, and signal processing.

Darrell C. Card: Biography is not available.

Christian L. Saludez received his BS degree in electrical engineering from the University of West Florida in 2017. Currently, he is pursuing his MS degree in industrial and systems engineering at the University of Florida, Gainesville, Florida, USA. He has been employed by the Air Force Research Laboratory Munitions Directorate since 2018, developing and testing novel imaging systems. His research interests are passive infrared imaging, image processing, and remote sensing.

Christian K. Keyser received his PhD in optical physics from CREOL, the College of Optics and Photonics at the University of Central Florida, and has worked at NRL and Northrop Grumman Corporation. Currently, he is with the Air Force Research Laboratory Munitions Directorate, where he is interested in novel sensor approaches, especially architectures in which disparate technologies are tightly integrated. His research interests include spectropolarimetric LiDAR, passive polarimetric imaging, nonlinear fiber optics in solid-core fiber and gas- or liquid-filled hollow fiber, signal processing, and quantum sensing and metrology.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Jarrod P. Brown, Rodney G. Roberts, Darrell C. Card, Christian L. Saludez, and Christian K. Keyser "Hybrid passive polarimetric imager and lidar combination for material classification," Optical Engineering 59(7), 073106 (29 July 2020). https://doi.org/10.1117/1.OE.59.7.073106
Received: 21 April 2020; Accepted: 17 July 2020; Published: 29 July 2020
Keywords: lidar, polarimetry, imaging systems, image classification, signal-to-noise ratio, sensors, machine learning
