The main goal of a cyberspace environment is to support decision makers with timely, relevant information for
operational use. Cyberspace environments depend on geospatial data including terrestrial, aerial/UAV, satellite and
other multi-sensor data obtained in electro-optical and other imaging domains. Despite advances in automated
geospatial image processing, the "human in the loop" is still necessary because current applications depend upon
complex algorithms and adequate classification rules that can only be provided by skilled geospatial professionals.
Signals extracted from humans may become an element of a cyberspace system. This paper describes research experiments on integrating an EEG device within geospatial technology.
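As a concrete illustration of how such a human signal might enter a processing pipeline, the sketch below computes alpha-band (8-12 Hz) power from a raw EEG trace. The sampling rate, band limits, and the use of band power as an attention proxy are our assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch: an alpha-band power "attention proxy" from raw EEG.
# Sampling rate and band limits below are assumed, not from the paper.
import numpy as np
from scipy.signal import welch

def alpha_band_power(eeg, fs=256.0):
    """Estimate 8-12 Hz band power of a 1-D EEG trace via Welch's PSD."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 1024))
    band = (freqs >= 8.0) & (freqs <= 12.0)
    return float(np.trapz(psd[band], freqs[band]))
```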
Laser communication systems operate in the presence of strong atmospheric turbulence, which affects the communication
platform through broadening of the laser footprint, random jitter of the laser beam, and high spatial frequency
intensity fluctuations referred to as scintillation. The prediction of the effects induced by the atmospheric
turbulence is a crucial task for reliable data transmission. Equipping the lasercom platform with an adaptive optics
system capable of probing the atmospheric turbulence and generating data on wavefront errors in real
time improves performance and extends the range of optical communications systems. Most adaptive optics
systems implement wavefront sensors to measure the errors induced by the atmospheric turbulence. Real-time
analysis of the data received from the wavefront sensor, used to compensate the outgoing laser beam, significantly
improves lasercom performance. To obtain reliable data, the wavefront sensor needs to be accurately aligned
and calibrated. To model the performance of a laser communication system operating in the real world we have
developed an outdoor 3.2 km, partially over water, turbulence measurement and monitoring communication link.
The developed techniques of wavefront sensor alignment and calibration that led to successful data collection and
analysis are discussed in this paper.
To explore anisoplanatism over horizontal paths, a laser communication system (LCS) experiment was developed. The experiment operated in
real-world conditions over a 3.2 km path, partly over water. To compare the results obtained from the experimental
data and established theory, we modeled the experimental path via simulation using a finite number
of phase screens. The scale and location of the phase screens in the simulation were varied to account for
different turbulence conditions along the propagation path. Preliminary comparison of our experimental data
and simulation shows that adjacent point spread functions (PSFs) are significantly correlated at angles much larger than the predicted
theoretical isoplanatic angle.
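For readers unfamiliar with this kind of modeling, the sketch below shows one common way to propagate a beam through a finite number of Kolmogorov phase screens (split-step angular-spectrum method). The grid size, per-screen r0, and screen count are illustrative assumptions; this is not the authors' simulation code, though the 808 nm wavelength and 3.2 km path match the experiment described.

```python
import numpy as np

def kolmogorov_screen(n, dx, r0, rng):
    """FFT-based Kolmogorov phase screen [rad] on an n x n grid, spacing dx [m]."""
    df = 1.0 / (n * dx)                          # frequency grid spacing [1/m]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    f = np.hypot(fxx, fyy)
    f[0, 0] = np.inf                             # suppress the piston term
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.real(np.fft.ifft2(noise * np.sqrt(psd) * df)) * n * n

def propagate(field, wavelength, dx, dz):
    """Fresnel angular-spectrum propagation over distance dz [m]."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * dz * (fxx ** 2 + fyy ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(0)
n, dx = 256, 0.005                               # 1.28 m grid, 5 mm sampling
wavelength, path, n_screens = 808e-9, 3200.0, 5  # screen count is illustrative
x = (np.arange(n) - n / 2) * dx
xx, yy = np.meshgrid(x, x)
field = np.exp(-(xx ** 2 + yy ** 2) / (2 * 0.02 ** 2))  # 2 cm Gaussian beam
for _ in range(n_screens):
    field = field * np.exp(1j * kolmogorov_screen(n, dx, r0=0.05, rng=rng))
    field = propagate(field, wavelength, dx, path / n_screens)
psf = np.abs(field) ** 2                         # instantaneous intensity
```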
Near the ground, laser communication systems must operate in the presence of strong atmospheric turbulence. To
model the performance of a laser communication system operating in the real world we have developed an outdoor
3.2 km, partially over water, turbulence measurement and monitoring communication link. The transmitter side
is equipped with a laser and a bank of 20 horizontally mounted, in-line light-emitting diodes. The receiver
side consists of two channels used for wavefront sensor and point spread function measurements. The effects of
anisoplanatism on the point spread function and the statistics of the Fried parameter r0 are discussed in this article.
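For reference, the standard Kolmogorov-theory expressions for the quantities studied here, with \(k = 2\pi/\lambda\) and \(C_n^2(z)\) the refractive-index structure parameter along a path of length \(L\) (measured from the receiver aperture), are
\[
r_0 = \left[\, 0.423\, k^2 \int_0^L C_n^2(z)\, dz \right]^{-3/5},
\qquad
\theta_0 = \left[\, 2.914\, k^2 \int_0^L C_n^2(z)\, z^{5/3}\, dz \right]^{-3/5}.
\]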
KEYWORDS: Cameras, 3D modeling, Control systems, Unmanned aerial vehicles, Calibration, Data modeling, 3D image processing, Sensors, Algorithm development, Surveillance
This paper outlines research experiments performed on quantitative evaluation of 3D geospatial data obtained by
means of the Photogrammetric Small UAV (PSUAV) developed at Michigan Tech. The PSUAV platform is equipped with
an autopilot and is capable of accommodating a payload of up to 11 pounds. Experiments were performed deploying a 12 MP
Canon EOS Rebel camera, which was subjected to calibration procedures. Surveying-grade GPS equipment was
used to prepare ground calibration sites. Work on processing the obtained datasets encompasses sensor modeling,
single-photo resections with image co-registration, mosaicking, and finally 3D terrain model generation. One of the
most important results achieved at the current stage of PSUAV development is a method and algorithms for comparing
UAV-derived digital elevation models (DEMs) with models obtained from other geospatial sources.
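The comparison method itself is not detailed in the abstract; as a hedged illustration, a typical DEM-to-DEM assessment computes elevation-difference statistics over co-registered grids, for example:

```python
import numpy as np

def dem_difference_stats(dem_uav, dem_ref):
    """Elevation-difference statistics between two co-registered DEM arrays.
    Assumes both grids share the same extent, resolution, and datum."""
    diff = dem_uav - dem_ref
    d = diff[np.isfinite(diff)]                  # ignore nodata (NaN) cells
    nmad = 1.4826 * np.median(np.abs(d - np.median(d)))
    return {"mean_m": float(d.mean()),
            "rmse_m": float(np.sqrt((d ** 2).mean())),
            "std_m": float(d.std()),
            "nmad_m": float(nmad)}               # robust accuracy measure
```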
The geometric quality of small unmanned aerial vehicle (SUAV) imagery is affected by the fact that cameras
installed on SUAVs are usually not calibrated, due to platform size and cost constraints. To this end, image
enhancement and camera calibration processes are crucial elements of remote sensing system architectures.
In this work we present experimental research involving an SUAV platform equipped with an autopilot and able
to accommodate a payload of up to 11 pounds. The SUAV platform is currently fitted with a 12 MP EOS camera, which
is the subject of calibration procedures. The preliminary results presented demonstrate the feasibility of SUAV
remote sensing.
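As an illustration of the calibration step mentioned above (a sketch under assumed inputs, not the authors' procedure), a standard checkerboard calibration with OpenCV recovers the intrinsic matrix and lens distortion coefficients; the board size and image folder are placeholders.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                                    # inner-corner count (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/*.jpg"):               # hypothetical image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K is the 3x3 intrinsic matrix; dist holds radial/tangential distortion terms.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error [px]:", rms)
```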
The outdoor 3.2 km, partially over water, turbulence measurement and monitoring communication link has
been developed with the goal of statistically describing atmospheric turbulence using results derived from the
experimentally collected data. The system described in this paper has two transmitters and a receiver. The
transmitter side is equipped with a laser and a bank of 20 horizontally mounted, in-line LEDs. The receiver
side consists of a two-channel receiver allowing simultaneous wavefront sensor and point spread
function measurements. The data collected from both channels are used for Fried parameter estimation. In
this paper we focus our attention on data collection and analysis via the point spread function channel only.
The results presented in this paper are based on 6 TB of data collected over a 40-day interval,
under various day and night atmospheric conditions.
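One standard route from PSF data to r0, sketched here as our assumption about the kind of estimate involved, uses the long-exposure seeing relation FWHM ≈ 0.98 λ / r0:

```python
import numpy as np

def r0_from_long_exposure_fwhm(fwhm_px, plate_scale_rad_per_px, wavelength=808e-9):
    """Fried parameter [m] from a long-exposure PSF FWHM, assuming the
    aperture is much larger than r0 so the PSF is seeing-limited."""
    theta = fwhm_px * plate_scale_rad_per_px        # on-sky FWHM [rad]
    return 0.98 * wavelength / theta
```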
In this paper we describe multidisciplinary experimental research concentrated on stereoscopic presentation of geospatial
imagery data obtained from various sensors. Source data were different in scale, texture, geometry and content. No single
image processing technique allows such data to be processed simultaneously. However, an augmented reality system allows
subjects to fuse multi-sensor, multi-temporal data and terrain reality into a single model. An augmented reality experimental
setup based on a head-mounted display was designed to efficiently superimpose LIDAR point clouds for comfortable
stereoscopic perception. The practical research experiment performed indicates the feasibility of stereoscopic perception of data
obtained on the fly. One of the most interesting findings is that source LIDAR point clouds do not have to be preprocessed
or enhanced to be used in the experiments described.
So-called "free-space" laser communication systems working near the surface of the Earth must operate in the
presence of atmospheric turbulence. The effects of the atmospheric turbulence on the laser beam which are
relevant to optical communications are a broadening of the laser footprint, random jitter of the laser beam, and
high spatial frequency intensity fluctuations referred to as scintillation. The overall goal of our program is to
improve performance and extend the range of optical communications systems by exploring the use of adaptive
optics and channel coding. To better model the performance of a real system operating in the real world, we have
developed an outdoor turbulence-measurement and monitoring system. In this paper we describe an atmospheric
turbulence monitoring system for a three-kilometer, partially-over-water path. The laser transmitter operates
at 808 nm with a source power of 2 mW. The receiver consists of relay optics, a Hartmann wavefront sensor
(WFS), and a CCD camera. The WFS is used to monitor atmospheric turbulence-induced phase aberrations,
and the camera is used for both conventional imaging studies and measurements of anisoplanatic effects. In this
paper we describe this system and present some preliminary results obtained from the measurements.
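To make the WFS data path concrete, the following sketch shows the usual first processing step for a Hartmann-type sensor: centroiding each lenslet subimage and converting spot displacements into local wavefront slopes. The subaperture geometry and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def subaperture_slopes(frame, n_sub, focal_length, pixel_size):
    """Local wavefront slopes [rad] from an n_sub x n_sub lenslet frame."""
    h = frame.shape[0] // n_sub                     # pixels per subaperture
    yy, xx = np.mgrid[0:h, 0:h]
    slopes = np.zeros((n_sub, n_sub, 2))
    for i in range(n_sub):
        for j in range(n_sub):
            sub = frame[i*h:(i+1)*h, j*h:(j+1)*h].astype(float)
            total = sub.sum()
            if total <= 0:
                continue                            # dark subaperture: skip
            cy = (sub * yy).sum() / total - (h - 1) / 2
            cx = (sub * xx).sum() / total - (h - 1) / 2
            # spot displacement on the detector, divided by the lenslet
            # focal length, gives the local tilt angle
            slopes[i, j] = (cy * pixel_size / focal_length,
                            cx * pixel_size / focal_length)
    return slopes
```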
Situational awareness is a critical issue for modern battle and security systems; improving it will increase human performance efficiency. There are multiple research projects and development efforts based on omni-directional (fish-eye) electro-optical and other-frequency sensor fusion systems implementing head-mounted visualization. However, the efficiency of these systems is limited by the perceptual limitations of the human eye-brain system. Humans are capable of naturally perceiving the situation in front of them, but interpretation of omni-directional visual scenes increases the user's mental workload, increasing fatigue and disorientation and requiring more effort for object recognition. It is especially important to reduce this workload by making the perception of rear scenes intuitive in battlefield situations where a combatant can be attacked from either direction.
This paper describes an experimental model of the system fusion architecture of Visual Acoustic Seeing (VAS) for representing a spatial geometric 3D model in the form of 3D volumetric sound. Current research in the area of auralization points to the possibility of identifying sound direction. However, for complete spatial perception it is necessary to identify both the direction of and the distance to an object through volumetric sound; we initially assume that the distance can be encoded by the sound frequency. The chain object features -> sensor -> 3D geometric model -> auralization constitutes Volumetric Acoustic Seeing (VAS).
This paper describes in more detail VAS experimental research on representing and perceiving spatial information by means of human hearing cues.
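As a toy illustration of the encoding assumed above (distance mapped to frequency), with a simple stereo level difference standing in for full auralization, one scene point could be rendered as follows; the specific mappings are our assumptions, not the authors' implementation.

```python
import numpy as np

def sonify_point(distance_m, azimuth_rad, fs=44100, dur=0.3):
    """Render one scene point as a stereo tone: nearer -> higher pitch,
    azimuth -> left/right level balance. Mappings are illustrative."""
    freq = 2000.0 / (1.0 + distance_m)          # assumed distance-to-pitch map
    t = np.arange(int(fs * dur)) / fs
    tone = np.sin(2 * np.pi * freq * t)
    pan = 0.5 * (1.0 + np.sin(azimuth_rad))     # 0 = full left, 1 = full right
    return np.stack([tone * (1.0 - pan), tone * pan], axis=1)
```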
Spatial and temporal data derived from eye movements, compiled while the human eye observes geospatial
imagery, retain meaningful and usable information. When a human perceives the stereo effect, the virtual three
dimensional (3D) model resulting from eye-brain interaction is generated in the mind. If the eye movements are
recorded while the virtual model is observed, it is possible to reconstruct a 3D geometrical model almost identical
to the one generated in the human brain. Information obtained from eye-movements can be utilized in many
ways for remote sensing applications such as geospatial image analysis and interpretation. There are various
eye-tracking systems available on the market; however, none of them is designed to work with stereoscopic imagery.
We explore different approaches and designs of the most suitable and non-intrusive scheme for stereoscopic image
viewing in the eye-tracking systems to observe and analyze 3D visual models. The design of the proposed system
is based on the optical separation method, which provides visually comfortable environment for perception of
stereoscopic imagery. A proof of concept solution is based on multiple mirror-lens assembly that provides a
significant reduction of geometrical constraints in eye-frame capturing. Two projected solutions, for wide-angle
viewing and for a helmet-integrated eye-tracker, are also discussed here.
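A minimal sketch of the reconstruction idea described above, assuming per-eye gaze rays are already expressed in a common coordinate frame: the 3D fixation point can be taken as the midpoint of the common perpendicular between the two rays.

```python
import numpy as np

def triangulate_gaze(p1, d1, p2, d2):
    """Closest-approach midpoint of two gaze rays p + t*d (numpy 3-vectors)."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    r = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ r, d2 @ r
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        return None                     # parallel rays: no vergence depth
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
```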
Many modern technologies widely deploy semi-autonomous robotic platforms, remotely controlled by a human
operator. Such tasks usually require rapid fusion of multisensor imagery and auxiliary geospatial data.
Operational-control units in particular can be considered as displays of the decision-support systems, and the
complexity of automated multi-domain geospatial data fusion leads to human-in-the-loop technology which
widely deploys visual analytics. While a number of research studies have investigated eye movements and attention
on casual scenes, there has been a lack of investigations concerning the expert's eye movements and visual
attention, specifically when an operator is engaged in real-time visual data fusion to control and maneuver a
remote unmanned robotic vehicle which acquires visual data using CCTV cameras in visible, IR or other spectral
zones, and transmits this data through telemetric channels to a human operator. In this paper we investigate
the applicability of eye-tracking technology for the numerical assessment of efficiency of an operator in fusion of
multi-sensor and multi-geometry visual data in real-time robotic control tasks.
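A natural building block for such an assessment (our sketch, not the authors' stated method) is fixation detection; the classic dispersion-threshold (I-DT) algorithm is shown below for numpy arrays of gaze samples.

```python
import numpy as np

def idt_fixations(gx, gy, t, max_disp=1.0, min_dur=0.1):
    """Dispersion-threshold (I-DT) fixation detection.
    gx, gy: gaze coordinate arrays (e.g., degrees); t: timestamps [s].
    Returns (t_start, t_end, cx, cy) tuples."""
    fixations, i, n = [], 0, len(t)
    while i < n:
        j = i
        while j < n and t[j] - t[i] < min_dur:   # window spanning min duration
            j += 1
        if j >= n:
            break
        wx, wy = gx[i:j+1], gy[i:j+1]
        if (wx.max() - wx.min()) + (wy.max() - wy.min()) <= max_disp:
            while j + 1 < n:                     # extend while dispersion is low
                wx, wy = gx[i:j+2], gy[i:j+2]
                if (wx.max() - wx.min()) + (wy.max() - wy.min()) > max_disp:
                    break
                j += 1
            fixations.append((t[i], t[j], gx[i:j+1].mean(), gy[i:j+1].mean()))
            i = j + 1
        else:
            i += 1
    return fixations
```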
There is a significant fixed aberration in some commercial off-the-shelf liquid crystal spatial light modulators (SLMs). In a recent experiment we conducted to simulate the effects of atmospheric turbulence and correction schemes in a laboratory setting using such an SLM, this aberration was too strong to neglect. We then tried to characterize and correct the observed aberration. Our method of characterizing the device uses a measurement of the far-field intensity pattern caused by the aberration and processing based on a parameterized version of the phase retrieval algorithm. This approach uses simple and widely available hardware and does not require expensive aberration sensing equipment. The phase aberrations were characterized and compared with the manufacturer's published measurements for a similar device, with excellent agreement. To test the quality of our aberration estimate, a correction phase was computed and applied to the SLM, and the resulting far-field patterns were measured and compared to the theoretical patterns with excellent results. Experiments show that when the correction is applied to the SLM, nearly diffraction-limited far-field intensity patterns are observed.
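The paper's characterization uses a parameterized phase retrieval algorithm; the unparameterized Gerchberg-Saxton loop below conveys the underlying idea (a sketch, with the array layout and iteration count as our assumptions).

```python
import numpy as np

def gerchberg_saxton(far_intensity, pupil_mask, n_iter=200, seed=0):
    """Estimate the pupil phase consistent with a measured far-field
    intensity. far_intensity must be in FFT (DC-at-corner) ordering."""
    rng = np.random.default_rng(seed)
    far_amp = np.sqrt(far_intensity)
    phase = rng.uniform(-np.pi, np.pi, pupil_mask.shape)
    for _ in range(n_iter):
        far = np.fft.fft2(pupil_mask * np.exp(1j * phase))
        far = far_amp * np.exp(1j * np.angle(far))   # impose measured amplitude
        phase = np.angle(np.fft.ifft2(far))          # keep phase, impose support
    return phase * pupil_mask
```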
The task of delivering a sufficient level of airborne laser energy to ground-based targets is of high interest. To overcome the degradation in beam quality induced by atmospheric turbulence, it is necessary to measure and compensate for the phase distortions in the wavefront. Since, in general, there will not be a cooperative beacon present, an artificial laser beacon is used for this purpose. In many cases of practical interest, beacons created by scattering light from a surface in the scene are anisoplanatic, and as a result provide poor beam compensation results when conventional adaptive optics systems are used. In this paper we present three approaches for beacon creation in a down-looking scenario. In the first approach we probe the whole volume of the atmosphere between the transmitter and the target. In this case the beacon is created by scattering an initially focused beam from the surface of the target. The second approach describes generation of an uncompensated Rayleigh beacon at some intermediate distance between the transmitter and the target. This method allows compensation for only part of the atmospheric path, which in some cases provides sufficient performance. Lastly, we present a novel technique of "bootstrap" beacon generation that achieves dynamic wavefront compensation. In this approach a series of compensated beacons is created along the optical path, with the goal of providing a physically smaller beacon at the target plane. The performance of these techniques is evaluated using the average Strehl ratio and the radially averaged intensity of the beam falling on the target plane. Simulation results show that under most turbulence conditions of practical interest the novel "bootstrap" technique provides better power in the bucket in comparison with the other two techniques.
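The two evaluation criteria named above can be computed from simulated target-plane intensity maps as follows (an illustrative sketch; grid handling and normalization are our assumptions).

```python
import numpy as np

def strehl_ratio(intensity, diffraction_limited):
    """Peak intensity relative to the unaberrated case (equal total power)."""
    return intensity.max() / diffraction_limited.max()

def power_in_bucket(intensity, radius_px):
    """Fraction of total power inside a circle about the intensity centroid."""
    yy, xx = np.indices(intensity.shape)
    total = intensity.sum()
    cy = (intensity * yy).sum() / total
    cx = (intensity * xx).sum() / total
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius_px ** 2
    return intensity[mask].sum() / total
```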
There is strong interest in developing adaptive optics solutions for extreme conditions, such as laser beam projection over long, horizontal paths. In most realistic operational scenarios there is no suitable beacon readily available for tracking and wavefront sensing. In these situations it is necessary to create a beacon artificially. In this paper we explore two strategies for creating a beacon: (1) scattering an initially focused beam from a surface in the scene, and (2) generating a Rayleigh beacon that accomplishes compensation for part of the path. In many cases of practical interest, scattering light from a surface in the scene results in beacons which are anisoplanatic, and hence provides poor beam compensation results. Partial path compensation based on a Rayleigh beacon provides comparable performance in some cases.