Obstacle detection and, more generally, terrain classification are two of the most important and fundamental perception functions required for robust unmanned off-road vehicle operation. To better address these tasks, we have developed a novel method that uses multiple readings from multiple sensor modalities to compute a vector measure of the physical density of a particular world location as it appears to each sensor modality. This “density map” representation serves as a powerful discriminator for the terrain classification task.
We have developed this concept into a system that characterizes terrain in real time from a set of sensors on board an autonomous vehicle, assigning each patch of terrain a type and estimating a cost metric for the vehicle to traverse that terrain. On our testbed vehicle, the terrain classification is updated at roughly 70 Hz by a variety of ladar and radar sensors. This paper discusses our methods for modeling each sensor modality, establishing the classification system, and compensating for the fact that the sensor readings may be unsynchronized and taken from a moving vehicle.
A number of experiments are presented using both a stationary platform and using the autonomous Raptor vehicle developed by SAIC for the PerceptOR program. Results indicate that this system can be used to correctly classify clear flat ground, sparse vegetation, and impenetrable vegetation, and is practical for use as a guidance system for a completely autonomous vehicle. Additionally, we have demonstrated a limited ability to use this system for more sophisticated terrain classification, such as the ability to identify metal wire fencing.
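The per-cell density vector described above can be sketched in a few lines. Everything concrete here is assumed for illustration: the two-modality ordering (ladar, radar), the running-mean accumulation, and the threshold rule are our own simplification, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical density-map sketch: each map cell holds one running density
# estimate per sensor modality. Readings accumulate as running means; a
# simple rule on the resulting vector labels the terrain.

N_MODALITIES = 2  # 0 = ladar, 1 = radar (illustrative ordering)

class DensityMap:
    def __init__(self, rows, cols):
        self.sum = np.zeros((rows, cols, N_MODALITIES))
        self.count = np.zeros((rows, cols, N_MODALITIES))

    def add_reading(self, r, c, modality, density):
        """Fold one sensor reading (density in [0, 1]) into cell (r, c)."""
        self.sum[r, c, modality] += density
        self.count[r, c, modality] += 1

    def density_vector(self, r, c):
        """Per-modality mean density for a cell (0 where no readings)."""
        cnt = np.maximum(self.count[r, c], 1)
        return self.sum[r, c] / cnt

    def classify(self, r, c):
        """Toy rule: ladar sees returns but radar passes through -> sparse
        vegetation; both dense -> obstacle; both near zero -> clear."""
        ladar, radar = self.density_vector(r, c)
        if ladar < 0.2 and radar < 0.2:
            return "clear"
        if ladar >= 0.2 and radar < 0.2:
            return "sparse vegetation"
        return "obstacle"

m = DensityMap(10, 10)
for _ in range(5):
    m.add_reading(3, 4, 0, 0.8)   # ladar: many returns
    m.add_reading(3, 4, 1, 0.05)  # radar: mostly penetrates
print(m.classify(3, 4))  # -> sparse vegetation
```

The point of the vector (rather than scalar) density is visible in the rule: the same ladar density maps to different labels depending on what the radar saw.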
The unique performance of biological systems across a wide spectrum of phylogenetic species has historically provided inspiration to roboticists in the design and fabrication of new robotic platforms. Of particular interest for a number of important applications is the creation of dynamic robots able to adapt to changes in their world: unplanned, sometimes unexpected events and sometimes unstable, harsh conditions. Exploring the dynamics of biological systems is likely to continue to provide rich solutions for attaining robots capable of more complex tasks, because the long-term design process of evolution relies on a natural selection process that responds to such changes. Recently, significant advances across a number of interdisciplinary efforts have generated new capabilities in biorobotics. Whole-body dynamics that capture the force dynamics and functional stability of legged systems over rough terrain have been elucidated and applied in legged robotic systems. Exploiting the force dynamics of flapping-wing insect flight has yielded key discoveries and enabled the fabrication of new micro air vehicles. New classes of materials are being developed that emulate the capability of natural muscle, capturing the compliant, soft, subtle movement and performance of biological appendages. In addition, new classes of multifunctional materials are being developed to enable the design of biorobots with the structural and functional efficiency of living organisms. Optical flow and other sensors based on the principles of invertebrate vision have been implemented on robotic platforms for autonomous guidance and control.
These fundamental advances have resulted in the emergence of a new generation of bioinspired dynamic robots that show significant performance improvements in early prototype testing and could someday be useful in a number of significant applications such as search and rescue and entertainment.
A long-duration robotic presence on lunar and planetary surfaces will allow the acquisition of scientifically interesting information from a diverse set of surface and sub-surface sites. The wide range of terrain types, including plains, cliffs, sand dunes, and lava tubes, will require the development of robotic systems that can adapt to possibly rapidly changing terrain. These systems include single robots as well as teams of robots. In this paper, we describe the development of an integrated suite of autonomous, adaptive hardware/software control methods called SMART (System for Mobility and Access to Rough Terrain) that enables mobile robots to explore potentially important science sites currently beyond the reach of conventional rover designs. SMART uses the behavior coordination mechanisms of CAMPOUT, a previously developed system for multi-agent control. For the specific application area of cliffside exploration, SMART consists of a distributed sensing system for cooperative map-making called MITSAF (Model-based Information Theoretic Sensing and Fusion), a mobility system for rappelling down a cliff and moving to a designated way-point, and science sample acquisition from the cliff face. We also report the results of experimental studies on highly sloped cliff faces.
The overall characteristics of the French robotic Operational Demonstrator (OD) SYRANO are described; the system is currently being evaluated at the Mourmelon proving ground by the French military forces. The article deals with the technical choices made to obtain a homogeneous OD, leading to acceptable and credible operational uses. It focuses especially on how the transmission problem between the robot and the control station (one of the well-known shortcomings of teleoperated systems) was solved, using specific equipment and high-level teleoperation modes.
This paper introduces a new terrain mapping method for mobile robots with a 2-D laser rangefinder. In the proposed method, an elevation map and a certainty map are built and used to filter erroneous data. The filter, called the Certainty Assisted Spatial (CAS) filter, first employs the physical constraints of motion continuity and spatial continuity to distinguish corrupted pixels (e.g., due to artifacts, random noise, or the "mixed pixels" effect) and missing data from uncorrupted pixels in an elevation map. It then removes the corrupted pixels and fills in the missing data with a Weighted Median filter. Uncorrupted pixels are left intact so as to retain the edges of objects. Our extensive indoor and outdoor mapping experiments demonstrate that the CAS filter outperforms existing filters in erroneous data reduction and map detail preservation.
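The weighted-median fill step can be illustrated as follows. This is our own simplification, not the paper's exact CAS filter: the validity mask is taken as given, the 3x3 neighbourhood and the weighting (full weight for 4-neighbours, half for diagonals) are invented for the sketch.

```python
import numpy as np

# Cells flagged as missing are filled with a weighted median of their valid
# 3x3 neighbours; valid cells are left untouched, which is what preserves
# object edges (a mean filter would smear them).

def weighted_median(values, weights):
    order = np.argsort(values)
    values, weights = np.asarray(values)[order], np.asarray(weights)[order]
    csum = np.cumsum(weights)
    # first value whose cumulative weight reaches half the total
    return values[np.searchsorted(csum, 0.5 * csum[-1])]

def fill_missing(elev, valid):
    """elev: 2-D elevation map; valid: boolean mask of trusted cells."""
    out = elev.copy()
    rows, cols = elev.shape
    for r in range(rows):
        for c in range(cols):
            if valid[r, c]:
                continue  # uncorrupted pixels are kept intact
            vals, wts = [], []
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and valid[rr, cc]:
                        vals.append(elev[rr, cc])
                        wts.append(1.0 if (dr == 0 or dc == 0) else 0.5)
            if vals:
                out[r, c] = weighted_median(vals, wts)
    return out
```

Unlike a mean, the median ignores a single wildly wrong neighbour, which matters when "mixed pixels" produce isolated spikes.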
The U.S. Army is seeking to develop autonomous off-road mobile robots to perform tasks in the field such as supply delivery and reconnaissance in dangerous territory. A key problem to be solved with these robots is off-road mobility, to ensure that the robots can accomplish their tasks without loss or damage. We have developed a computer model of one such concept robot, the small-scale "T-1" omnidirectional vehicle (ODV), to study the effects of different control strategies on the robot's mobility in off-road settings. We built the dynamic model in ADAMS/Car and the control system in Matlab/Simulink. This paper presents the template-based method used to construct the ADAMS model of the T-1 ODV. It discusses the strengths and weaknesses of ADAMS/Car software in such an application, and describes the benefits and challenges of the approach as a whole. The paper also addresses effective linking of ADAMS/Car and Matlab for complete control system development. Finally, this paper includes a section describing the extension of the T-1 templates to other similar ODV concepts for rapid development.
Defence Research and Development Canada's (DRDC) Autonomous Intelligent Systems program conducts research to increase the independence and effectiveness of military vehicles and systems. DRDC Suffield's Autonomous Land Systems (ALS) group is creating new concept vehicles and autonomous control systems for use in outdoor areas, urban streets, urban interiors, and urban subspaces. This paper first gives an overview of the ALS program and then describes the work being done on mobility in urban subspaces. We discuss the Theseus Tethered Distributed Robotics (TDR) system, which will not only manage an unavoidable tether but exploit it for mobility and navigation, and a prototype robot called the Hedgehog, which uses conformal 3-D mobility in ducts, sewer pipes, collapsed rubble voids, and chimneys.
Sandia National Laboratories has developed a mesoscale hopping mobility platform (Hopper) to overcome the longstanding problems of mobility and power in small-scale unmanned vehicles. The system provides mobility in situations, such as negotiating tall obstacles and rough terrain, that are prohibitive for other small ground-based vehicles. The Defense Advanced Research Projects Agency (DARPA) provided the funding for the Hopper project.
Through funding from the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program, Utah State University's (USU) Center for Self-Organizing and Intelligent Systems (CSOIS) has developed the T-series of omni-directional robots based on the USU omni-directional vehicle (ODV) technology. The ODV provides independent computer control of steering and drive in a single wheel assembly. By putting multiple omni-directional (OD) wheels on a chassis, a vehicle is capable of uncoupled translational and rotational motion. Previous robots in the series, the T1, T2, T3, ODIS, ODIS-T, and ODIS-S, have all used OD wheels based on electric motors. The T4 weighs approximately 1400 lbs and features a four-wheel-drive configuration. Each wheel assembly consists of a hydraulic drive motor and a hydraulic steering motor. A gasoline engine powers both the hydraulic and electrical systems. The paper presents an overview of the mechanical design of the vehicle as well as potential uses of this technology in fielded systems.
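The uncoupled translation and rotation mentioned above follows from standard rigid-body kinematics for independently steered wheels, sketched below. The wheel positions are illustrative placeholders, not the T4's actual geometry.

```python
import math

# Given a desired body twist (vx, vy in m/s, omega in rad/s), each wheel's
# velocity is the body velocity plus the rotational contribution
# omega x r_wheel; the wheel module steers to that vector's direction and
# drives at its magnitude.

WHEELS = [(0.6, 0.4), (0.6, -0.4), (-0.6, 0.4), (-0.6, -0.4)]  # (x, y) in m

def wheel_commands(vx, vy, omega):
    """Return [(steer_angle_rad, speed_m_s)] for each wheel."""
    cmds = []
    for (x, y) in WHEELS:
        wvx = vx - omega * y   # omega x r contributes (-omega*y, omega*x)
        wvy = vy + omega * x
        cmds.append((math.atan2(wvy, wvx), math.hypot(wvx, wvy)))
    return cmds

# Pure translation: all wheels aligned and at equal speed.
print(wheel_commands(1.0, 0.0, 0.0))
# Pure rotation: each wheel tangent to a circle about the chassis centre.
print(wheel_commands(0.0, 0.0, 0.5))
```

Because each wheel gets its own steering angle and speed, any combination of the two cases above is reachable, which is exactly the uncoupled motion the ODV design provides.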
Beyond its sensing and processing capabilities, a mobile robotic platform can be limited in its use by its ability to move in the environment. A wheeled robot works well on flat surfaces. Tracks are useful over rough terrain, while legs allow a robot to move over obstacles. In this paper we present a new mobile robot concept with the objective of combining different locomotion mechanisms on the same platform to increase its locomotion capabilities. After presenting a review of multi-modal robotic platforms, we describe the design of our robot, called AZIMUT. AZIMUT combines wheels, legs, and tracks to move in three-dimensional environments. The robot is symmetrical and is made of four independent leg-track-wheel articulations. It can move with its articulations up, down, or straight, or move sideways without changing its orientation. The robot could be used in surveillance and rescue missions, exploration, or operation in hazardous environments.
Autonomous and semi-autonomous ground robots exploring urban environments need the ability to detect various types of fences that are obstacles to mobility. Visual detection of wire fences is challenging due to the small size of the wire forming the fence and the presence of multiple unknown natural and/or man-made backgrounds visible through the structure of the fence. A deformable template based algorithm has been developed to visually identify the periodic structure of chain link fences in typical outdoor scenes. The algorithm extracts edge points from the image using the Prewitt
gradient operator and a histogram based thresholding method. The fence is modeled as two sets of regularly spaced parallel lines. Each of these sets of lines is parameterized by orientation, line spacing, and location of the left-most line within a specified Region Of Interest. A search in this parameter space finds the template which minimizes an energy function based on proximity of lines in the deformed template to edge points in the images. The algorithm performs well even in the presence of clutter edges from background textures in the scene. Modification of the template to account
for effects of perspective distortion when viewing fences from off-normal angles is discussed.
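The template energy described above can be sketched for one of the two parallel-line sets. This is our own simplification: the distance measure, the brute-force grid search, and all parameter values are illustrative, and no image-space clipping to the Region Of Interest is done.

```python
import math

# A set of parallel lines with orientation theta, spacing s and offset o.
# Each edge point's cost is its distance to the nearest line; the template
# minimising the summed cost over all edge points is the best fence fit.

def line_set_energy(edge_points, theta, spacing, offset):
    """Sum of point-to-nearest-line distances for one parallel-line set."""
    nx, ny = -math.sin(theta), math.cos(theta)  # normal to the lines
    total = 0.0
    for (x, y) in edge_points:
        d = (x * nx + y * ny - offset) % spacing
        total += min(d, spacing - d)  # distance to nearest line in the set
    return total

def fit_line_set(edge_points, spacings, thetas, offsets):
    """Brute-force search over a coarse parameter grid."""
    return min((line_set_energy(edge_points, t, s, o), t, s, o)
               for t in thetas for s in spacings for o in offsets)

# Edge points lying on horizontal lines y = 0, 5, 10 (spacing 5, offset 0).
edge_pts = [(1.0, 0.0), (2.0, 5.0), (3.0, 10.0)]
print(fit_line_set(edge_pts, spacings=[4.0, 5.0],
                   thetas=[0.0, math.pi / 4], offsets=[0.0, 1.0]))
```

Because clutter edges contribute only a bounded per-point cost (at most half the spacing), a template locked onto the true fence lines still wins, which mirrors the robustness to background texture noted in the abstract.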
When driving cross-country, the detection of and state estimation relative to negative obstacles such as ditches and creeks are mandatory for safe operation. Very often, ditches can be detected both by differing photometric properties (soil vs. vegetation) and by range (disparity) discontinuities. Therefore, algorithms should make use of both the photometric and geometric properties to reliably detect obstacles. This has been achieved in UBM's EMS-Vision system (Expectation-based, Multifocal, Saccadic) for autonomous vehicles. The perception system uses Sarnoff's image processing hardware for real-time stereo vision. This sensor provides both gray value and disparity information for each pixel at high resolution and frame rates.
In order to perform an autonomous jink, the boundaries of an obstacle have to be measured accurately to calculate a safe driving trajectory. Ditches in particular are often very extended, so, given the restricted field of view of the cameras, active gaze control is necessary to explore the boundaries of an obstacle.
For successful measurement of image features, the system has to satisfy conditions defined by the perception expert. It has to deal with the time constraints of the active camera platform while performing saccades and to maintain the geometric conditions defined by the locomotion expert for performing a jink. The experts therefore have to cooperate. This cooperation is controlled by a central decision unit (CD), which has knowledge about the mission, the capabilities available in the system, and their limitations. The central decision unit reacts depending on the result of situation assessment by starting, parameterizing, or stopping actions (instances of capabilities). The approach has been tested with the 5-ton van VaMoRs. Experimental results are shown for driving in a typical off-road scenario.
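The geometric cue for a negative obstacle can be illustrated in isolation. This is our own toy version, not the EMS-Vision implementation: a 1-D range profile and a fixed step threshold stand in for the dense disparity image.

```python
# Along a ground-directed image column, range grows smoothly on flat
# terrain; a ditch shows up as a sudden range jump, because rays skip the
# near edge and hit the far wall or bottom instead.

def find_range_gaps(ranges, max_step=0.5):
    """ranges: metres along a scanline, ordered near to far. Returns
    indices where range jumps by more than max_step, i.e. candidate
    negative obstacles; photometric cues (soil vs. vegetation) would then
    be checked at the same image location."""
    return [i for i in range(1, len(ranges))
            if ranges[i] - ranges[i - 1] > max_step]

profile = [4.0, 4.3, 4.6, 7.2, 7.5]  # smooth, then a 2.6 m jump
print(find_range_gaps(profile))  # -> [3]
```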
The realization of on- and off-road autonomous navigation for Unmanned Ground Vehicles (UGVs) requires real-time motion planning in the presence of dynamic objects with unknown trajectories. To successfully plan paths and navigate in an unstructured environment, a UGV must have the difficult and computationally intensive competency of predicting the future locations of moving objects that could interfere with its path. This paper details the development of a combined probabilistic object classification and estimation-theoretic framework to predict the future location of moving objects, along with an associated uncertainty measure. The development of a moving-object testbed that facilitates the testing of different representations and prediction algorithms in an implementation-independent platform is also outlined.
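A standard estimation-theoretic prediction of this kind can be sketched with a constant-velocity Kalman prediction step. This is a generic textbook formulation, not necessarily the paper's framework, and the process-noise value is invented for illustration.

```python
import numpy as np

# Predict a tracked object's future position with a constant-velocity
# model; the growing positional covariance is the uncertainty measure a
# planner would use to inflate the object's predicted footprint.

def predict(x, P, dt, q=0.5):
    """x = [px, py, vx, vy]; P = 4x4 covariance. Returns (x', P') at t+dt."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)
    G = np.array([[0.5 * dt**2, 0],
                  [0, 0.5 * dt**2],
                  [dt, 0],
                  [0, dt]], float)
    Q = q * G @ G.T  # noise entering through unknown acceleration
    return F @ x, F @ P @ F.T + Q

x = np.array([0.0, 0.0, 1.0, 0.5])  # moving object: 1 m/s east, 0.5 north
P = np.eye(4) * 0.1
for _ in range(3):                  # look ahead 3 s in 1 s steps
    x, P = predict(x, P, 1.0)
print(x[:2])    # predicted position after 3 s
print(P[0, 0])  # position uncertainty has grown with lookahead time
```

The farther ahead the planner looks, the larger the covariance, which is exactly why long-horizon avoidance of unknown-trajectory objects is so conservative.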
More accurate localisation estimates can be expected from the cooperation of multiple mobile robots equipped with different sensor systems for navigation tasks. This paper presents strategies for controlling the relative positions between the vehicles in such robot teams so that an ultrasonic ranging system can also provide location information for robots with limited navigation sensor systems. These algorithms have been tested in experiments with the MERLIN rover hardware.
This work presents methods for terrain classification that support adaptive selection of parameters for a terrain classification system. We also present work on water body detection, including results from experiments with detection methods based on LADAR, a color camera, and polarization-filter-based sensors; using multiple sensors can provide better water detection capability. An approach for adaptive terrain classification is shown for existing rule-based classification algorithms. This approach allows us to develop a set of rules for various representative terrain types from various sites and operating conditions (light level, humidity, season, etc.) and to exploit onboard situational knowledge to select the most suitable set of rules for operation. An important element of this work requires the use of data collected across different seasons, locations, and terrain types in order to provide sensitivity measures. Existing terrain classification algorithms can utilize input from multiple sensors such as color, LADAR, FLIR, and multi-spectral imagery. The performance of these algorithms is expected to improve as we acquire additional data sets that include features of interest taken under various conditions of terrain type, illumination, temperature, humidity, etc., allowing us to build a database of terrain knowledge. Environmental information and ground truth are also collected along with the sensor data. A Geographical Information System (GIS) interface is utilized along with related public-domain tools, which are integrated into our system and used to provide data management, spatial modeling, and visualization.
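The rule-set selection mechanism can be sketched as a lookup keyed on situational knowledge. Everything concrete here is invented for illustration: the condition keys, the feature names, and the threshold values are placeholders, not rules from the actual system.

```python
# Rule sets developed offline for representative operating conditions are
# stored in a table; onboard situational knowledge (here just season and
# light level) picks the set used for classification, with a fallback for
# conditions never seen in training data.

RULE_SETS = {
    ("summer", "day"):   {"ndvi_veg": 0.40, "ladar_rough": 0.15},
    ("summer", "night"): {"ndvi_veg": 0.25, "ladar_rough": 0.15},
    ("winter", "day"):   {"ndvi_veg": 0.20, "ladar_rough": 0.25},
}
DEFAULT_RULES = {"ndvi_veg": 0.30, "ladar_rough": 0.20}

def select_rules(season, light):
    return RULE_SETS.get((season, light), DEFAULT_RULES)

def classify(cell, rules):
    """cell: dict of per-cell features from the sensor suite."""
    if cell["ndvi"] > rules["ndvi_veg"]:
        return "vegetation"
    if cell["ladar_roughness"] > rules["ladar_rough"]:
        return "rough"
    return "clear"

rules = select_rules("winter", "day")
print(classify({"ndvi": 0.3, "ladar_roughness": 0.1}, rules))  # -> vegetation
```

The same cell would be labelled "clear" under the summer daytime rules, which is the point: the label depends on which rule set the situational knowledge selects.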
Acquiring knowledge about the operating environment is among the most challenging problems an autonomous mobile robot must solve. The quality of the model depends on the number and kind of sensors used and on how precisely the robot knows its position in the environment. The occupancy grid is among the most common low-level models of the environment, and it is considered a highly robust approach for fusing noisy data and for fusing data from different kinds of sensors. This paper primarily introduces a novel method for building an occupancy grid from a monocular color camera with automatic calibration. The other part of the work describes a method for fusing the camera data with data from a sonar rangefinder. The presented methods were experimentally verified with an indoor experimental robot at the Czech Technical University facilities.
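The fusion robustness attributed to occupancy grids comes from the standard log-odds update, sketched below. This is the textbook formulation, not necessarily the paper's exact scheme, and the per-reading probabilities are illustrative.

```python
import math

# Each sensor contributes an inverse-model occupancy probability per cell;
# independent readings simply add in log-odds space, so many noisy
# observations from different sensors combine into a stable estimate.

def logit(p):
    return math.log(p / (1.0 - p))

class OccupancyGrid:
    def __init__(self, rows, cols):
        self.l = [[0.0] * cols for _ in range(rows)]  # log-odds; prior 0.5

    def update(self, r, c, p_occ):
        """Fuse one reading: p_occ from the sensor's inverse model
        (e.g. a camera colour classifier or a sonar range cone)."""
        self.l[r][c] += logit(p_occ)

    def prob(self, r, c):
        """Convert log-odds back to an occupancy probability."""
        return 1.0 - 1.0 / (1.0 + math.exp(self.l[r][c]))

g = OccupancyGrid(4, 4)
g.update(2, 2, 0.7)  # camera thinks the cell is occupied
g.update(2, 2, 0.6)  # sonar weakly agrees
print(round(g.prob(2, 2), 3))  # -> 0.778
```

Agreement between the two sensors pushes the estimate above either individual reading, while a contradicting reading (p_occ < 0.5) would pull it back toward the prior.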
During the design process, earthmoving manufacturers routinely subject machines to rigorous, long-term tests to ensure quality. Automating portions of the testing process can potentially reduce the cost and time to complete these tests. We present a system that guides a 175 horsepower track-type tractor (Caterpillar Model D6R XL) along a prescribed route, allowing simple tasks to be completed by the automated machine while more complex tasks, such as site clean up, are handled by an operator. Additionally, the machine can be operated manually or via remote control and observed over the internet using a remote supervisor program. We envision that safety would be handled using work procedures, multiple over-ride methods and a GPS fence. The current system can follow turns within a half meter and straight sections within a quarter meter. The controller hardware and software are integrated with existing on-board electronic
modules and allow for portability. The current system successfully handles the challenges of a clutch-brake drive train and has the potential to improve control over test variables, lower testing costs and enable testing at higher speeds allowing for higher impact tests than a human operator can tolerate.
The Idaho National Engineering and Environmental Laboratory (INEEL), through collaboration with INSAT Co., has developed a low cost robotic auto-steering system for parallel contour swathing. The capability to perform parallel contour swathing while minimizing “skip” and “overlap” is a necessity for cost-effective crop management within precision agriculture. Current methods for performing parallel contour swathing consist of using a Differential Global Position System (DGPS) coupled with a light bar system to prompt an operator where to steer. The complexity of operating heavy equipment, ensuring proper chemical mixture and application, and steering to a light bar indicator can be overwhelming to an operator. To simplify these tasks, an inexpensive robotic steering system has been developed and tested on several farming implements. This development leveraged research conducted by the INEEL and Utah
State University. The INEEL-INSAT Auto-Steering Software and Equipment Technology provides the following: 1) the ability to drive in a straight line within ± 2 feet while traveling at least 15 mph, 2) interfaces to a Real Time Kinematic (RTK) DGPS and sub-meter DGPS, 3) safety features such as Emergency-stop, steering wheel deactivation, computer watchdog deactivation, etc., and 4) a low-cost, field-ready system that is easily adapted to other systems.
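The core of straight-line auto-steering is a cross-track error computation, sketched below. This is a generic guidance sketch, not INEEL's implementation: the proportional gain, saturation limit, and local-coordinate setup are all assumptions.

```python
import math

# Cross-track error of the DGPS position from the desired A->B swath line,
# fed to a proportional steering command clamped to an actuator limit.
# Minimising this error is what keeps "skip" and "overlap" small between
# adjacent swaths.

def cross_track_error(a, b, p):
    """Signed distance (m) of point p from line a->b; positive to the left."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    return ((px - ax) * -dy + (py - ay) * dx) / length

def steer_command(a, b, p, gain=0.4, limit=0.5):
    """Proportional steering (rad), clamped to the actuator limit."""
    e = cross_track_error(a, b, p)
    return max(-limit, min(limit, -gain * e))

A, B = (0.0, 0.0), (100.0, 0.0)  # desired swath line in local metres
print(cross_track_error(A, B, (50.0, 2.0)))  # 2 m left of the line
print(steer_command(A, B, (50.0, 2.0)))      # steer right to correct
```

A fielded system would add heading error and look-ahead to this, but the signed cross-track term is the quantity the ±2-foot specification is written against.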
One of the largest factors relating to the commercial success of unmanned vehicles will be ease of use. A man machine interface (MMI) with the goal of allowing a user to easily task, monitor, and control multiple vehicles can benefit from several advancements in human machine interaction from both the research and commercial sectors. This paper focuses on the design considerations of an MMI that balances the complexity inherent to the control of multiple autonomous vehicles with the simplicity required for commercialization. It also profiles MARS, an MMI that Autonomous Solutions Incorporated has developed to facilitate the commercialization of automated test vehicles, and discusses an example applicable to the Goodyear Proving Grounds facility.
The Man Portable Robotic System (MPRS) project objective was to build and deliver hardened robotic systems to the U.S. Army's 10th Mountain Division at Fort Drum, New York. The system, specifically designed for tunnel and sewer reconnaissance, was equipped with visual and audio sensors that allowed Army engineers to detect trip wires and booby traps before personnel entered a potentially hostile environment.
The MPRS system has proven useful in government- and military-supported field exercises, but it has yet to reach the hands of civilian users. Potential users in Law Enforcement and Border Patrol have shown a strong interest in the system, but robot costs were thought to be prohibitive for law enforcement budgets.
Through the Center for Commercialization of Advanced Technology (CCAT) program, an attempt will be made to commercialize the MPRS. This includes a detailed market analysis performed to verify the market viability of the technologies. Hence, the first step in this phase is to fully define the marketability of the proposed technologies in terms of actual market size, pricing and cost factors, competitive risks and/or advantages, and other key factors used to develop marketing and business plans.
Mobile robots currently cannot detect and read arbitrary signs. This is a major hindrance to mobile robot usability, since the robots cannot be tasked using directions that are intuitive to humans; it also limits their ability to report their position relative to intuitive landmarks. Other researchers have demonstrated some success on traffic sign recognition, but their template-based methods limit the set of recognizable signs. There is a clear need for a sign detection and recognition system that can process a much wider variety of signs: traffic signs, street signs, store-name signs, building directories, room signs, etc. We are developing a system for Sign Understanding in Support of Autonomous Navigation (SUSAN) that detects signs from cues common to most signs: vivid colors, compact shape, and text. We have demonstrated the feasibility of our approach on a variety of signs in both indoor and outdoor locations.
Vehicles serving as landmine detection robots could be an important tool for demining former conflict areas. On the LOTUS platform for humanitarian demining, different sensors are used to detect a wide range of landmine types. Reliable and accurate detection depends on correctly combining the observations from the different sensors on the moving platform. Currently, a method based on odometry is used to merge the readings from the sensors. In this paper a vision-based approach is presented that can estimate the relative sensor pose and position together with the vehicle motion.
To estimate the relative position and orientation of sensors, techniques from camera calibration are used. The platform motion is estimated from tracked features on the ground. A new approach is presented which can reduce the influence of tracking errors or other outliers on the accuracy of the ego-motion estimate. Overall, the new vision based approach for sensor localization leads to better estimates then the current odometry based method.
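The abstract does not give the outlier-suppression scheme itself, but the general idea it describes, estimating planar vehicle motion from tracked ground features while rejecting bad tracks, can be sketched with a standard RANSAC loop around a closed-form 2-D rigid fit. Function names, thresholds, and iteration counts below are illustrative assumptions, not taken from the paper.

```python
import math
import random

def estimate_rigid_2d(src, dst):
    # Closed-form 2-D rigid transform (rotation + translation) between
    # matched point sets: align centroids, then recover the angle from
    # the cross-covariance terms (a 2-D Kabsch-style fit).
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    sxx = sxy = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay = x - csx, y - csy
        bx, by = u - cdx, v - cdy
        sxx += ax * bx + ay * by   # cosine component
        sxy += ax * by - ay * bx   # sine component
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty

def ransac_ego_motion(src, dst, iters=200, tol=0.05, seed=0):
    # Sample minimal 2-point sets, fit a rigid motion, count inliers,
    # then refit on the largest consensus set.
    rng = random.Random(seed)
    idx = list(range(len(src)))
    best_inliers = []
    for _ in range(iters):
        i, j = rng.sample(idx, 2)
        theta, tx, ty = estimate_rigid_2d([src[i], src[j]], [dst[i], dst[j]])
        c, s = math.cos(theta), math.sin(theta)
        inliers = [k for k in idx
                   if math.hypot(c * src[k][0] - s * src[k][1] + tx - dst[k][0],
                                 s * src[k][0] + c * src[k][1] + ty - dst[k][1]) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return estimate_rigid_2d([src[k] for k in best_inliers],
                             [dst[k] for k in best_inliers])
```

Because the final transform is refit only on the consensus set, a handful of grossly mistracked features no longer bias the motion estimate.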
Detecting water hazards for autonomous, off-road navigation of unmanned ground vehicles is a largely unexplored problem. In this paper, we catalog environmental variables that affect the difficulty of this problem, including day vs. night operation, whether the water reflects sky or other terrain features, the size of the water body, and other factors. We briefly survey sensors that are applicable to detecting water hazards in each of these conditions. We then present analyses and results for water detection for four specific sensor cases: (1) using color image classification to recognize sky reflections in water during the day, (2) using ladar to detect the presence of water bodies and to measure their depth, (3) using short-wave infrared (SWIR) imagery to detect water bodies, as well as snow and ice, and (4) using mid-wave infrared (MWIR) imagery to recognize water bodies at night. For color imagery, we demonstrate solid results with a classifier that runs at nearly video rate on a 433 MHz processor. For ladar, we present a detailed propagation analysis that shows the limits of water body detection and depth estimation as a function of lookahead distance, water depth, and ladar wavelength. For SWIR and MWIR, we present sample imagery from a variety of data collections that illustrate the potential of these sensors. These results demonstrate significant progress on this problem.
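The paper's classifier is not reproduced here; as a toy illustration of per-pixel color classification for sky reflections, one can fit a per-channel Gaussian color model to labeled reflection pixels and threshold a distance to that model. The model, threshold, and pixel values below are invented for illustration and are far simpler than anything running at near video rate on real imagery.

```python
import math

def fit_color_model(pixels):
    # Fit an axis-aligned Gaussian (per-channel mean and variance) to
    # labeled training pixels, e.g. hand-marked sky-reflection regions.
    n = len(pixels)
    mean = [sum(p[c] for p in pixels) / n for c in range(3)]
    var = [sum((p[c] - mean[c]) ** 2 for p in pixels) / n + 1e-6
           for c in range(3)]
    return mean, var

def is_sky_reflection(pixel, model, thresh=3.0):
    # Classify a pixel by its per-channel z-score distance to the model:
    # small distance means the color is consistent with trained sky pixels.
    mean, var = model
    d2 = sum((pixel[c] - mean[c]) ** 2 / var[c] for c in range(3))
    return math.sqrt(d2) < thresh
```

A real system would combine such a color cue with spatial context (reflections appear below the horizon) rather than classify pixels in isolation.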
Binocular, correlation-based stereo has been a key component in many efforts at autonomous vehicle navigation. However, estimation of ground-truth range data, especially in field conditions, remains a challenge. We present a five-camera multibaseline stereo system and demonstrate its use as a passive ground-truthing mechanism for binocular stereo. In this paper, we provide both a system description and a detailed overview of a novel depth-based multibaseline stereo algorithm. Our new algorithm avoids the need for pairwise camera rectification. We conclude with several simulations and real-world experiments to verify our results.
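The paper's algorithm is not given in this abstract, but the depth-based idea it builds on, searching candidate depths directly rather than per-pair disparities, in the spirit of Okutomi and Kanade's SSSD-in-inverse-distance, can be sketched for the simplified case of 1-D cameras displaced along a common baseline. The geometry, variable names, and candidate-depth list below are assumptions for illustration only.

```python
def sssd_depth(ref_row, rows, baselines, focal, x, depths):
    # For each hypothesized depth z, project the reference column x into
    # every other camera row (disparity d_i = focal * b_i / z) and
    # accumulate the squared intensity error across all baselines.
    # The depth minimizing the summed error wins; no pairwise
    # rectification or per-pair disparity search is needed.
    best_z, best_err = None, float("inf")
    for z in depths:
        err = 0.0
        for row, b in zip(rows, baselines):
            xi = int(round(x + focal * b / z))
            if not 0 <= xi < len(row):
                err = float("inf")  # hypothesis projects outside the image
                break
            err += (ref_row[x] - row[xi]) ** 2
        if err < best_err:
            best_err, best_z = err, z
    return best_z
```

Because every camera votes in the same depth space, adding baselines sharpens the error minimum instead of multiplying the number of independent disparity searches.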
In this article, we present an overview of CLARAty, the Coupled Layer Architecture for Robotic Autonomy. CLARAty provides a framework of generic and reusable robotic components that can be adapted to a number of heterogeneous robot platforms; it also simplifies the integration of new technologies and enables the comparison of various elements. CLARAty consists of two distinct layers: a Functional Layer and a Decision Layer. The Functional Layer defines the various abstractions of the system and adapts the abstract components to real or simulated devices. It provides a framework and the algorithms for low- and mid-level autonomy. The Decision Layer provides the system's high-level autonomy, which reasons about global resources and mission constraints, and accesses information from the Functional Layer at multiple levels of granularity. We also present some of the challenges in developing interoperable software for various rover platforms, with examples from the locomotion and manipulation domains.
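The class names below are invented, not CLARAty's actual API; they sketch the layering idea the abstract describes: a Decision Layer that issues goals only through a generic Functional Layer abstraction, which is separately adapted to a concrete (here, simulated) device, so high-level autonomy code is reusable across platforms.

```python
from abc import ABC, abstractmethod

class Locomotor(ABC):
    # Generic Functional Layer abstraction, shared across rover platforms.
    @abstractmethod
    def drive(self, velocity, curvature):
        ...

class SimulatedRover(Locomotor):
    # Adaptation of the abstraction to one concrete (simulated) device.
    def __init__(self):
        self.log = []

    def drive(self, velocity, curvature):
        self.log.append((velocity, curvature))

class DecisionLayer:
    # High-level autonomy talks only to the generic interface, so it
    # never needs to know which platform executes its commands.
    def __init__(self, locomotor):
        self.locomotor = locomotor

    def execute_traverse(self, waypoint_speeds):
        for v in waypoint_speeds:
            self.locomotor.drive(v, 0.0)
```

Swapping in a real rover's `Locomotor` subclass would leave the Decision Layer code untouched, which is the interoperability claim in miniature.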
Future robotic planetary exploration will need to traverse geographically diverse and challenging terrain. Cliffs, ravines, and fissures are of great scientific interest because they may contain important data regarding past water flow and past life.
Highly sloped terrain is difficult and often impossible to safely navigate using a single robot. This paper describes a control system for a team of three robots that access cliff walls at inclines up to 70°. Two robot assistants, or anchors, lower a third robot, called the rappeller, down the cliff using tethers. The anchors use actively controlled winches to first assist the rappeller in navigation about the cliff face and then retreat to safe ground.
This paper describes the control of these three robots so they function as a team to explore the cliff face. Stability requirements for safe operation are identified and a behavior-based control scheme is presented. Behaviors are defined for the system and command fusion methods are described. Controller stability and sensitivity are examined. Controller performance is evaluated with simulation and a laboratory system.
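The paper's actual behaviors and fusion method are behind the paywall; a generic weighted command-fusion scheme of the kind common in behavior-based control can be sketched as follows. The behavior names, commands, and weights are invented for illustration only.

```python
def fuse_commands(behavior_outputs):
    # Weighted command fusion: each behavior proposes a scalar command
    # (e.g., a tether payout rate) together with an activation weight;
    # the fused command is the weight-normalized average of proposals.
    total_w = sum(w for _, w in behavior_outputs)
    if total_w == 0:
        return 0.0  # no behavior active: hold position
    return sum(cmd * w for cmd, w in behavior_outputs) / total_w

# Illustrative behaviors for a rappelling robot: descend toward the goal,
# but pay the tether back in, with growing weight, as tension nears a limit.
def descend_behavior(goal_dist):
    return (0.2, 1.0) if goal_dist > 0 else (0.0, 1.0)

def tension_limit_behavior(tension, limit):
    margin = max(0.0, (limit - tension) / limit)
    return (-0.1, 2.0 * (1.0 - margin))
```

The safety behavior's weight scales with proximity to the tension limit, so it smoothly overrides the descent behavior instead of switching abruptly, one common way such schemes keep the fused command continuous.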
The FCS Operational Requirements Document (ORD) identifies unmanned systems as a key component of the FCS Unit of Action. FCS unmanned systems include Unmanned Aerial Vehicles (UAV), Unmanned Ground Vehicles (UGV), Unattended Ground Sensors (UGS) and Unattended Munitions (UM). Unmanned systems are intended to enhance the Unit of Action across the full range of operations when integrated with manned platforms. Unmanned systems will provide the commander with tools to gather battlespace information while significantly reducing overall soldier risk. Unmanned systems will be used in some cases to augment or replace human intervention to perform many of the dirty, dull and dangerous missions presently performed by soldiers and to serve as a combat multiplier for mission performance, force protection and survivability. This paper focuses on the application of UGVs within the FCS Unit of Action. There are three different UGVs planned to support the FCS Unit of Action: the Soldier Unmanned Ground Vehicle (SUGV), the Multi-role Utility Logistics Equipment (MULE) platform, and the Armed Robotic Vehicle (ARV).
This paper summarizes a study on refueling and rearming FCS-related vehicles in the field. In keeping with the FCS philosophy, the resupply process should be unmanned. For the purposes of the study, a resupply (RS) system is defined as an autonomous robotic platform which interacts with a combat vehicle (CV); the purpose of the interaction is the transfer of liquid fuel and/or ammunition. The RS may be capable of providing both fuel and ammunition simultaneously, or there may be separate resupply vehicles, each dedicated to one consumable. The CV may be resupplied while on-station and operational, or may be taken out of service and moved to a resupply point.
The study proposed a resupply system consisting of two RS vehicles (i.e., separate vehicles for fuel and ammunition) to resupply the CV. Four families of scenarios were considered: the RS moves to the CV ("door to door"), the RS and CV both move ("rendezvous"), the CV moves to the RS ("filling station"), and the CV moves to a pod dropped nearby. The "door to door" scenario was rated the most feasible, with the rendezvous scenario a close second.
The study ascertained that an RS vehicle using a robotic manipulator as the transfer mechanism reflects best engineering practices and constitutes a low-risk design. The required level of autonomy to accomplish resupply is teleoperation, though a mixed-initiative approach also poses relatively low risk. A teleoperated or simple mixed-initiative system could be completed in three years and offers significant performance benefits. Full autonomy was determined to be too high risk, but mixed-initiative work could serve as a basis for evolving to full autonomy.
The study also considered the impact of emerging technologies on resupply. The key technical risks, in ascending order of investment priority, are platform design, munitions transfer mechanism, and human-robot interaction (HRI). The platform design and munitions transfer mechanism are lower risk than HRI, which is a relatively new aspect of system design. The key enabling technologies are range sensing and terrain reasoning; breakthroughs in these areas would lower the risk of fully autonomous modes of operation.
The U.S. Army is undergoing a transformation from Cold War era "heavy" forward-deployed forces arrayed against a monolithic known enemy to lighter, more flexible, U.S.-based forces able to rapidly engage in a full spectrum of military operations. Unmanned systems can potentially contribute towards achieving this goal of a highly capable and flexible ground force. To support this effort, the U.S. Army Research Laboratory has undertaken a long-term research program to support technology development for unmanned ground vehicle systems. Over the course of the past year, this multifaceted effort has made significant technical strides, demonstrating sufficient technological maturity to potentially enable incorporation of semi-autonomous unmanned vehicles into the initial fielding of Future Combat Systems (FCS), while successfully conducting additional research directed toward improved capabilities for later increments of FCS and Land Warrior systems.
Robotics is a fundamental enabling technology required to meet the U.S. Army's vision of a strategically responsive force capable of dominance across the entire spectrum of conflict. The U.S. Army Research, Development and Engineering Command (RDECOM) Tank Automotive Research, Development & Engineering Center (TARDEC), in partnership with the U.S. Army Research Laboratory, is developing a leader-follower capability for Future Combat Systems. The Robotic Follower Advanced Technology Demonstration (ATD) utilizes a manned leader to provide high-level proofing of the follower's path; the follower then operates with minimal user intervention. This paper gives a programmatic overview and discusses both the technical approach and the operational experimentation results obtained during testing conducted at Ft. Bliss, New Mexico in February-March 2003.
Unmanned ground vehicle (UGV) technology can be used in a number of ways to assist in counter-terrorism activities. Robots can be employed for a host of terrorism deterrence and detection applications. As reported at last year's Aerosense conference, the U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) and Utah State University (USU) have developed a tele-operated robot called ODIS (Omnidirectional Inspection System) that is particularly effective in performing under-vehicle inspections at security checkpoints. ODIS' continuing development for this task is heavily influenced by feedback from soldiers and civilian law enforcement personnel using ODIS prototypes in an operational environment. Our goal is to convince civilian law enforcement and military police to replace the traditional "mirror on a stick" method of looking under cars for bombs and contraband with ODIS. This paper reports our efforts over the past year to optimize ODIS for the visual inspection task. Of particular concern is the design of the vision system; this paper documents the various issues relating to ODIS' vision system: sensor, lighting, image processing, and display.
DARPA has been leading two programs, under joint sponsorship with the Army, directed at the advancement of unmanned ground vehicle (UGV) technologies in support of the Future Combat Systems (FCS) Program. These two programs are intended to provide complementary developments that give the Army and its Lead Systems Integrator an expanded set of technology options as it goes through its system trade studies. The data and experiences derived from these programs also increase understanding of current capabilities and future growth trends, aiding requirements definition for FCS UGV elements of the force structure. These two programs, the Unmanned Ground Combat Vehicle (UGCV) and Perception for Off-Road Robotics (PerceptOR), are described in this paper with comments on their current status.
Many of the potential applications of mobile robots require a small to medium sized vehicle that is capable of traversing large obstacles and rugged terrain. Search and rescue operations require a robot small enough to drive through doorways, yet capable enough to surmount rubble piles and stairs. This paper presents the GOAT (Goes Over All Terrain) vehicle, a medium scale robot which incorporates a novel configuration which puts the drive wheels on the ends of actuated arms. This allows GOAT to adjust body height and posture and combines the benefits of legged locomotion with the ease of wheeled driving. The paper presents the design of the GOAT and the results of prototype construction and initial testing.
Off-road robotics efforts such as DARPA's PerceptOR program have motivated the development of testbed vehicles capable of sustained operation in a variety of terrain and environments. This paper describes the retrofitting of a minimally-modified ATV chassis into such a testbed which has been used by multiple programs for autonomous mobility development and sensor characterization. Modular mechanical interfaces for sensors and equipment enclosures enabled integration of multiple payload configurations. The electric power subsystem was capable of short-term operation on batteries with refueled generation for continuous operation. Processing subsystems were mounted in sealed, shock-dampened enclosures with heat exchangers for internal cooling to protect against external dust and moisture. The computational architecture was divided into a real-time vehicle control layer and an expandable high level processing and perception layer. The navigation subsystem integrated real time kinematic GPS with a three-axis IMU for accurate vehicle localization and sensor registration. The vehicle software system was based on the MarsScape architecture developed under DARPA's MARS program. Vehicle mobility software capabilities included route planning, waypoint navigation, teleoperation, and obstacle detection and avoidance. The paper describes the vehicle design in detail and summarizes its performance during field testing.
The unmanned ground combat vehicle (UGCV) design evolved by the SAIC team on the DARPA UGCV Program is summarized in this paper. This UGCV design provides exceptional performance against all of the program metrics and incorporates key attributes essential for high performance robotic combat vehicles. This performance includes protection against 7.62 mm threats, C130 and CH47 transportability, and the ability to accept several relevant weapons payloads, as well as advanced sensors and perception algorithms evolving from the PerceptOR program. The UGCV design incorporates a combination of technologies and design features, carefully selected through detailed trade studies, which provide optimum performance against mobility, payload, and endurance goals without sacrificing transportability, survivability, or life cycle cost. The design was optimized to maximize performance against all Category I metrics. In each case, the performance of this design was validated with detailed simulations, indicating that the vehicle exceeded the Category I metrics. Mobility metrics were analyzed using high fidelity VisualNastran vehicle models, which incorporate the suspension control algorithms and controller cycle times. DADS/Easy 5 3-D models and ADAMS simulations were also used to validate vehicle dynamics and control algorithms during obstacle negotiation.
Active suspension is now a well-tried technology in road vehicles. It has been installed on an HMMWV and demonstrated to significantly improve performance in rough road conditions. This capability presents an opportunity for improved mobility in off-road conditions. The challenge is to devise a means of translating the desired trajectory of the vehicle into commands to the suspension actuators and the traction motors in an optimal, or near-optimal, manner. In this paper we describe part of a software architecture developed to enable such performance from a six-wheeled vehicle with active suspension and independent wheel drives. The vehicle was a concept developed under the DARPA Unmanned Ground Combat Vehicle Program.
In human-supervised, autonomously controlled systems, the human operator should concentrate on tasks that require high skill and leave to the computer those complex processes that can be performed autonomously. It is crucial that the operator retain complete control over his tool, with maximum freedom to change the robot's behavior in response to specific environments or events. A model based on a mixture of hybrid and behavior-based systems enables human operator intervention at all levels of the control hierarchy. This paper addresses how this human intervention should be applied.
For human intervention at the low layers of the RCS control hierarchy, where the control bandwidth is extremely high, a software entity known as an agent, acting on behalf of the human operator, manages a smooth transition between the state in which the robot operates and the one imposed by the human. This agent is a task-oriented control agent. For layers with lower control bandwidth, the agent serves as an intelligent interface, which may restrain a less experienced operator from catastrophic interference with the system's activity. In such a case, the human operator should have the choice to switch this feature off completely.
We describe a project to collect and disseminate sensor data for autonomous mobility research. Our goals are to provide data of known accuracy and precision to researchers and developers to enable algorithms to be developed using realistically difficult sensory data. This enables quantitative comparisons of algorithms by running them on the same data, allows groups that lack equipment to participate in mobility research, and speeds technology transfer by providing industry with metrics for comparing algorithm performance. Data are collected using the NIST High Mobility Multi-purpose Wheeled Vehicle (HMMWV), an instrumented vehicle that can be driven manually or autonomously both on roads and off. The vehicle can mount multiple sensors and provides highly accurate position and orientation information as data are collected. The sensors on the HMMWV include an imaging ladar, a color camera, color stereo, and inertial navigation (INS) and Global Positioning System (GPS). Also available are a high-resolution scanning ladar, a line-scan ladar, and a multi-camera panoramic sensor. The sensors are characterized by collecting data from calibrated courses containing known objects. For some of the data, ground truth will be collected from site surveys. Access to the data is through a web-based query interface. Additional information stored with the sensor data includes navigation and timing data, sensor to vehicle coordinate transformations for each sensor, and sensor calibration information. Several sets of data have already been collected and the web query interface has been developed. Data collection is an ongoing process, and where appropriate, NIST will work with other groups to collect data for specific applications using third-party sensors.
As an outgrowth of a series of projects focused on the mobility of unmanned ground vehicles (UGVs), an omni-directional (ODV), multi-robot, autonomous mobile parking security system has been developed. The system has two types of robots: the low-profile Omni-Directional Inspection System (ODIS), which can be used for under-vehicle inspections, and the mid-sized T4 robot, which serves as a "marsupial mothership" for the ODIS vehicles and performs coarse-resolution inspection. A key task for the T4 robot is license plate recognition (LPR). For a successful LPR task without compromising the recognition rate, the robot must be able to identify the bumper locations of vehicles in the parking area and then precisely position the LPR camera relative to the bumper. This paper describes a 2D laser scanner based approach to bumper identification and laser servoing for the T4 robot. The system uses a gimbal-mounted scanning laser. As the T4 robot travels down a row of parking stalls, data is collected from the laser every 100 ms. For each parking stall in range of the laser during the scan, the data is matched to a "bumper box" corresponding to where a car bumper is expected, resulting in a point cloud corresponding to a vehicle bumper for each stall. Next, recursive line-fitting algorithms determine a line for the data in each stall's "bumper box." The fitting technique uses Hough-based transforms, which are robust against segmentation problems and fast enough for real-time line fitting. Once a bumper line is fitted with acceptable confidence, the bumper location is passed to the T4 motion controller, which moves to position the LPR camera properly relative to the bumper. The paper includes examples and results that show the effectiveness of the technique, including its ability to work in real time.
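The paper's recursive fitting procedure is not reproduced here, but the Hough-based step it mentions can be illustrated with a minimal accumulator over the standard line parameterization rho = x*cos(theta) + y*sin(theta). The resolutions and the sample data below are made up for illustration; a real scanner pipeline would bin far more carefully.

```python
import math

def hough_dominant_line(points, theta_steps=180, rho_res=0.05):
    # Each point votes for every (theta, rho) line passing through it;
    # the accumulator cell with the most votes is the dominant line,
    # which makes the fit robust to outliers and segmentation gaps.
    votes = {}
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (t, round(rho / rho_res))
            votes[key] = votes.get(key, 0) + 1
    (t, r), _ = max(votes.items(), key=lambda kv: kv[1])
    return math.pi * t / theta_steps, r * rho_res
```

Applied to the points inside a stall's "bumper box", the winning (theta, rho) pair gives the bumper line's orientation and offset, which is exactly what a motion controller needs for servoing.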
Progress in algorithm development and transfer of results to practical applications such as military robotics requires the setup of standard tasks and of standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. Fundamental problems include a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multi-purpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, GDRS ladar, stereo CCD, several color cameras, Global Position System (GPS), Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built. Ground truth for each sensor can then be extracted from the database. The main goal of this research is to provide ground truth databases for researchers and engineers to evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.
A robotic vehicle needs to understand the terrain and features around it if it is to be able to navigate complex environments such as road systems. By taking advantage of the fact that such vehicles also need accurate knowledge of their own location and orientation, we have developed a sensing and object recognition system based on information about the area where the vehicle is expected to operate. The information is collected through aerial surveys, from maps, and by previous traverses of the terrain by the vehicle. It takes the form of terrain elevation information, feature information (roads, road signs, trees, ponds, fences, etc.) and constraint information (e.g., one-way streets). We have implemented such an a priori database using One Semi-Automated Forces (OneSAF), a military simulation environment. Using the Inertial Navigation System and Global Positioning System (GPS) on the NIST High Mobility Multi-purpose Wheeled Vehicle (HMMWV) to provide indexing into the database, we extract all the elevation and feature information for a region surrounding the vehicle as it moves about the NIST campus. This information has also been mapped into the sensor coordinate systems. For example, processing the information from an imaging Laser Detection And Ranging (LADAR) that scans a region in front of the vehicle has been greatly simplified by generating a prediction image by scanning the corresponding region in the a priori model. This allows the system to focus the search for a particular feature in a small region around where the a priori information predicts it will appear. It also permits immediate identification of features that match the expectations. Results indicate that this processing can be performed in real time.
The future battlefield will require an unprecedented level of automation in which soldier-operated, autonomous, and semi-autonomous ground, air, and sea platforms, along with mounted and dismounted soldiers, will function as a tightly coupled team. Sophisticated robotic platforms with diverse sensor suites will be an integral part of the Objective Force, and must be able to collaborate not only amongst themselves but also with their manned partners. The Army Research Laboratory has developed a robot-based acoustic detection system that will detect and localize on an impulsive noise event, such as a sniper's weapon firing. Additionally, acoustic sensor arrays worn on a soldier's helmet or equipment can enhance his situational awareness and RSTA capabilities. The Land Warrior or Objective Force Warrior body-worn computer can detect tactically significant impulsive signatures from bullets, mortars, artillery, and missiles, or spectral signatures from tanks, helicopters, UAVs, and mobile robots. Time-difference-of-arrival techniques can determine a sound's direction of arrival, while head attitude sensors can instantly determine the helmet orientation at time of capture. With precision GPS location of the soldier, along with the locations of other soldiers, robots, or unattended ground sensors that heard the same event, triangulation techniques can produce an accurate location of the target. Data from C-4 explosions and 0.50-caliber shots shows that both helmet and robot systems can localize on the same event. This provides an awesome capability: mobile robots and soldiers working together on an ever-changing battlespace to detect the enemy and improve the survivability, mobility, and lethality of our future warriors.
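For a single two-microphone pair under a far-field (plane-wave) assumption, the time-difference-of-arrival step the abstract mentions reduces to one arcsine: the delay implies a path-length difference, and the ratio of that difference to the microphone spacing gives the bearing. A minimal sketch; the spacing, speed of sound, and sign conventions are illustrative assumptions, not details of the ARL system.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C (assumed)

def doa_from_tdoa(delay_s: float, mic_spacing_m: float) -> float:
    """Bearing (degrees) of a sound source from the time-difference-of-arrival
    between two microphones, far-field assumption.

    0 deg = broadside (source equidistant from both mics);
    +/-90 deg = source along the microphone axis.
    """
    # Path-length difference implied by the measured delay.
    path_diff = SPEED_OF_SOUND * delay_s
    # Clamp to the physically possible range before taking the arcsine.
    ratio = max(-1.0, min(1.0, path_diff / mic_spacing_m))
    return math.degrees(math.asin(ratio))

# A source directly broadside arrives at both mics simultaneously.
print(round(doa_from_tdoa(0.0, 0.2), 1))          # 0.0
# A delay of spacing / c puts the source on the microphone axis.
print(round(doa_from_tdoa(0.2 / 343.0, 0.2), 1))  # 90.0
```

With bearings from several helmet or robot arrays at known GPS positions, intersecting the bearing lines is the triangulation step the abstract refers to.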
This paper presents an analysis of odometry errors in over-constrained mobile robots, that is, vehicles that have more independent motors than degrees of freedom.
Based on our analysis, we developed and examined three novel error-reducing methods. The first, called the “Fewest Pulses” method, exploits the observation that most terrain irregularities, as well as wheel slip, result in an erroneous over-count of encoder pulses. The second, called “Cross-Coupled Control,” optimizes the robot's motor control algorithm to reduce synchronization errors that would otherwise result in wheel slip with conventional controllers. The third method is based on so-called Expert Rules: readings from redundant encoders are compared and used in different ways, according to predefined rules.
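As a rough illustration of the “Fewest Pulses” idea, the sketch below takes the minimum reading among redundant encoders on each side before a standard differential-drive odometry update, on the premise that slip and bumps only ever add pulses. The function names and parameters are hypothetical, not the authors' code.

```python
import math

def fewest_pulses(counts):
    """'Fewest Pulses' heuristic: bumps and slip over-count pulses, so among
    redundant encoders on one side, trust the smallest reading."""
    return min(counts)

def odometry_update(pose, left_counts, right_counts,
                    meters_per_pulse, wheelbase_m):
    """Differential-drive dead reckoning using the fewest-pulses distance
    estimate from each side's redundant encoders. pose = (x, y, heading_rad)."""
    x, y, theta = pose
    dl = fewest_pulses(left_counts) * meters_per_pulse
    dr = fewest_pulses(right_counts) * meters_per_pulse
    d = (dl + dr) / 2.0                # distance of the robot center
    dtheta = (dr - dl) / wheelbase_m   # heading change
    # Advance along the mean heading over the interval.
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return (x, y, theta + dtheta)

# Straight-line step: the over-counting encoders (103, 101) are ignored.
pose = odometry_update((0.0, 0.0, 0.0), [100, 103], [100, 101], 0.001, 0.4)
print(pose)  # approximately (0.1, 0.0, 0.0)
```

The Expert Rules method the paper favors would replace the bare `min` with condition-dependent rules for combining the redundant readings.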
In the work described here, we implemented our three error-reducing methods on a modified Pioneer AT skid-steer platform and compared their odometric accuracy. The results in this paper point to clear advantages of the Expert Rule-based method over the other tested methods.
In a previous presentation at AeroSense 2002, we described a methodology to assess the results of image processing algorithms for ill-structured road detection and tracking. In this paper, we present our first application of this methodology to six edge detectors and a database containing about 20,000 images.
Our evaluation approach is based on video image sequences, ground truth (reference results established by human experts), and assessment metrics that measure the quality of the image processing results. We need quantitative, comparative, and repeatable evaluation of many algorithms in order to direct future developments.
The main purpose of this paper is to present the lessons learned from applying our methodology. More precisely, we describe the assessment metrics, the algorithms, and the database. We then describe how we extract the strengths and weaknesses of each algorithm and establish a global scoring. The insight we gained into the definition of assessment metrics is also presented.
Finally, we suggest some promising directions for the development of road tracking algorithms, and complementarities that should be sought between them. To conclude, we describe future improvements to the database constitution, the assessment tools, and the overall methodology.
This article is a general introduction to one of the main robotics programs launched in France over the past 10 years to improve the performance of UGVs for credible and reliable land operational missions. The Robotic Advanced Studies Program (RASP) began in 2000 and is investigating most of the fundamental aspects of robotic systems through three main themes: teleoperation, autonomous navigation, and improvement of the SYRANO operational demonstrator. An overview of the main RASP studies is given, along with some notable results, as an introduction to further articles.
This paper describes an application called the Tele-robotic Management System (TMS) for coordinating multiple operators with multiple robots in applications such as underground mining. TMS uses several graphical interfaces to allow the user to define a partially ordered plan for multiple robots. This plan is then converted to a Petri net for execution and monitoring. TMS uses a distributed framework that allows robots and operators to integrate easily with the application: robots and operators join the network and advertise their capabilities through services. TMS then decides whether tasks should be dispatched to a robot or to a remote operator based on the services offered by the robots and operators.
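A partially ordered plan maps naturally onto a Petri net in which each task is a transition whose input places encode its prerequisites: a task can fire only when every prerequisite place holds a token. The class below is a minimal illustrative sketch of that execution-and-monitoring idea; the class, place, and task names are hypothetical, not TMS code.

```python
class PetriNet:
    """Minimal Petri net for monitoring a partially ordered robot plan:
    places hold tokens, and transitions (tasks) may fire only when all
    of their input places are marked."""

    def __init__(self):
        self.marking = {}       # place name -> token count
        self.transitions = {}   # transition name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)
        for p in inputs + outputs:
            self.marking.setdefault(p, 0)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] > 0 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if not self.enabled(name):
            raise RuntimeError(f"{name} is not enabled")
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] += 1

# Plan fragment: 'survey' must finish before 'haul' may start.
net = PetriNet()
net.add_transition("survey", ["start"], ["surveyed"])
net.add_transition("haul", ["surveyed"], ["done"])
net.marking["start"] = 1
net.fire("survey")
print(net.enabled("haul"))  # True
```

Monitoring falls out for free: the current marking tells the operators exactly which tasks have completed and which are ready to dispatch.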
Mobile robots are increasingly finding applications in the military, mining, nuclear, and agriculture industries. These fields require a robot capable of operating in a highly unstructured and changing environment. Current autonomous control techniques are not robust enough to allow successful operation at all times in these environments. Teleoperation can help with many tasks but causes operator fatigue and negates much of the economic advantage of using robots by requiring one person per robot. This paper introduces a control system for mobile robots based on the concept of levels of autonomy, which recognizes that control can be shared between the operator and robot in a continuous fashion from teleoperation to full autonomy. By sharing control, the robot can benefit from the operator's knowledge of the world to help extricate it from difficult situations. The robot can operate as autonomously as the situation allows, reducing operator fatigue and increasing the economic benefit by allowing a single operator to control multiple robots simultaneously. This paper presents a levels-of-autonomy control system developed for use in exploration or reconnaissance tasks.
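One simple reading of a continuous teleoperation-to-autonomy scale is a weighted blend of the operator's and the autonomy system's velocity commands. The sketch below illustrates that interpretation; it is an assumption for illustration, and the paper's actual control-sharing scheme may differ.

```python
def shared_control(operator_cmd, autonomous_cmd, autonomy_level):
    """Blend operator and autonomous (linear, angular) velocity commands on a
    continuum from teleoperation (level 0.0) to full autonomy (level 1.0)."""
    if not 0.0 <= autonomy_level <= 1.0:
        raise ValueError("autonomy_level must be in [0, 1]")
    v = (1 - autonomy_level) * operator_cmd[0] + autonomy_level * autonomous_cmd[0]
    w = (1 - autonomy_level) * operator_cmd[1] + autonomy_level * autonomous_cmd[1]
    return (v, w)

# Pure teleoperation passes the operator's command through unchanged;
# raising the level shifts authority toward the autonomy system.
print(shared_control((1.0, 0.0), (0.0, 0.5), 0.0))   # (1.0, 0.0)
print(shared_control((1.0, 0.0), (0.0, 0.5), 1.0))   # (0.0, 0.5)
```

In practice the level would be adjusted per situation, e.g., dropped toward teleoperation when the robot reports it is stuck.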
The success of any potential application for mobile robots depends largely on the specific environment where the application takes place. Practical applications are rarely found in highly structured environments, but unstructured environments (such as natural terrain) pose major challenges to any mobile robot. We believe that semi-structured environments, such as parking lots, provide a good opportunity for successful mobile robot applications. Parking lots tend to be flat and smooth, and cars can be uniquely identified by their license plates. Our scenario is a parking lot where only known vehicles are supposed to park. The robot looks for vehicles that do not belong in the parking lot. It checks both license plates and vehicle types, in case a plate has been stolen from an approved vehicle. It operates autonomously but reports back to a guard who verifies its performance. Our interest is in developing the robot's vision system, which we call the Scene Estimation & Situational Awareness Mapping Engine (SESAME). In this paper, we present initial results from the development of two SESAME subsystems: the ego-location and license plate detection systems. While their ultimate goals are obviously quite different, our design demonstrates that by sharing intermediate results, both tasks can be significantly simplified. The inspiration for this design approach comes from the basic tenets of Situational Awareness (SA), where the benefits of holistic perception are clearly demonstrated over the more typical designs that attempt to solve each sensing/perception problem in isolation.
In earlier research, the Center for Self-Organizing and Intelligent Systems (CSOIS) at Utah State University (USU) was funded by the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program to develop and demonstrate enhanced mobility concepts for unmanned ground vehicles (UGVs). As part of that research, we presented a grammar-based approach to enabling intelligent behaviors in autonomous robotic vehicles. As the number of available resources on the robot grew, so did the variety of generated behaviors and the need to execute multiple behaviors in parallel to achieve reactivity. Continuing our past efforts, in this paper we discuss the parallel execution of behaviors and the management of the resources they use. In our approach, available resources are wrapped with a layer (termed services) that synchronizes and serializes access to the underlying resources. Controlling agents (called behavior-generating agents) generate behaviors to be executed via these services. The agents are prioritized, and then, based on their priority and the availability of requested services, the Control Supervisor decides which agent is granted access to the services. Though the architecture is applicable to a variety of autonomous vehicles, we discuss its application on T4, a mid-sized autonomous vehicle developed for security applications.
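The Control Supervisor's decision, as described, amounts to granting each contested service to the highest-priority requesting agent, and running only the agents that obtained every service they asked for. Below is a hypothetical sketch of that arbitration rule; the function, agent, and service names are assumptions, not the T4 implementation.

```python
def grant_services(requests, priorities):
    """Arbitrate access to serialized services.

    requests:   agent name -> set of service names the agent's behavior needs
    priorities: agent name -> priority (higher wins)
    Returns the set of agents whose behaviors may execute this cycle, i.e.,
    those granted every service they requested.
    """
    # Assign each contested service to its highest-priority requester.
    service_owner = {}
    for agent, services in requests.items():
        for s in services:
            current = service_owner.get(s)
            if current is None or priorities[agent] > priorities[current]:
                service_owner[s] = agent
    # A behavior runs only if the agent won all of its requested services.
    return {agent for agent, services in requests.items()
            if all(service_owner[s] == agent for s in services)}

# The high-priority obstacle-avoidance agent preempts the patrol agent's
# request for the shared drive service, so only "avoid" runs.
winners = grant_services({"avoid": {"drive"}, "patrol": {"drive", "camera"}},
                         {"avoid": 2, "patrol": 1})
print(winners)  # {'avoid'}
```

Serializing access through the winning agent is what prevents two behaviors from driving the same actuator simultaneously.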
The U.S. Army Research Laboratory (ARL), as part of a customer and mission-funded exploratory development program, has been developing a prototype low-frequency, ultra-wideband (UWB) forward-imaging synthetic aperture radar (SAR) to support the U.S. Army's vision for increased mobility and survivability of unmanned ground vehicle missions. The ability of UWB radar technology to detect objects under foliage could provide an important obstacle-avoidance capability for robotic vehicles, which could improve the speed and maneuverability of these vehicles and consequently increase the survivability of U.S. forces. In a recent experiment at Aberdeen Proving Ground (APG), we operated the UWB radar in forward-looking mode and collected data to support this investigation.
This paper discusses the signal processing algorithms and techniques that we developed and applied to the recent UWB SAR forward-looking data. The algorithms include motion data processing, self-interference signal (SIR) removal, radio frequency interference (RFI) removal, forward-looking image formation, and visualization techniques. We present forward-looking SAR imagery as well as volumetric imagery of some targets.
In support of the Army vision for increased mobility, survivability, and lethality, we are investigating the use of ultra-wideband (UWB) synthetic aperture radar (SAR) technology to enhance unmanned ground vehicle missions. The ability of UWB radar technology to detect objects concealed by foliage could provide an important obstacle-avoidance capability for robotic vehicles. This would improve the speed and maneuverability of these vehicles and consequently increase the survivability of U.S. forces. The technology would address challenges that particularly confront robotic vehicles, such as large rocks hidden in tall grass and voids such as ditches and bodies of water.
ARL has designed and constructed an instrumentation-grade, low-frequency UWB synthetic aperture radar for evaluating the target signatures and underlying phenomenology of stationary tactical targets concealed by foliage and of objects buried in the ground. The radar (named BoomSAR) is installed in the basket of a 30-ton boom lift and can be operated while the entire boom lift is driven forward slowly, with the boom arm extended as high as 45 m, to generate a synthetic aperture.
In this paper, we investigate the potential use of the UWB radar in the forward-imaging configuration. The paper describes the forward-imaging radar and the test setup at Aberdeen Proving Ground, Maryland. We present imagery of "positive" obstacles such as trees, fences, wires, and mines, as well as "negative" obstacles such as ditches. Imagery of small targets such as plastic mines is also included. We provide electromagnetic simulations of forward SAR imagery of plastic mines and compare them with the measurement data.
The U.S. Army Research Laboratory (ARL), as part of a customer and mission-funded exploratory development program, has been evaluating low-frequency, ultra-wideband (UWB) imaging radar for forward imaging to support the Army's vision for increased mobility and survivability of unmanned ground vehicle missions. As part of the program to improve the radar system and imaging capability, ARL has incorporated a differential global positioning system (DGPS) for motion compensation into the radar system. The use of DGPS greatly increases positional accuracy, allowing us to form better-focused images for the detection of small targets such as plastic mines and other concealed objects buried underground. The ability of UWB radar technology to detect concealed objects could provide an important obstacle-avoidance capability for robotic vehicles, which would improve the speed and maneuverability of these vehicles and consequently increase the survivability of U.S. forces.
This paper details the integration of a DGPS into the radar system for forward imaging and discusses its significance. It also compares the DGPS motion-compensation data with that collected by the original theodolite-based system.
On the battlefield and on the home front, there is an increased nuclear, biological, and chemical (NBC) threat, and there has been an ongoing effort to develop methods for detecting the presence of NBC agents. The use of small robotic platforms equipped with NBC sensors is one way to aid reconnaissance missions and to inspect suspicious areas and vehicles. The U.S. Army's Omni-Directional Inspection System (ODIS) and iRobot's PackBot are two low-profile robotic platforms being investigated by the U.S. Army TARDEC's Robotic Mobility Laboratory (TRML) to perform such tasks. A variety of testing methods currently exists for detecting NBC agents, each with its own advantages and disadvantages; these methods and their trade-offs are discussed in this paper. Traditional NBC sensing systems are large and require a large vehicle or a trailer for transport. To integrate these sensors into small robotic systems, they must draw less power and be reduced in size. Commercially available products, along with ongoing research at government and academic laboratories aimed at improving NBC detection systems, are also discussed with a view to their integration onto robotic platforms.
A series of micro-robots (MERLIN: Mobile Experimental Robots for Locomotion and Intelligent Navigation) has been designed and implemented for a broad spectrum of indoor and outdoor tasks on the basis of standardized functional modules such as sensors, actuators, and radio-link communication. The sensors onboard the MERLIN robot can be divided into two categories: internal sensors for low-level control and for measuring the state of the robot, and external sensors for obstacle detection, modeling of the environment, and position estimation and navigation of the robot in a global coordinate system. The special emphasis of this paper is to describe the capabilities of MERLIN for obstacle detection, target detection, and distance measurement. Besides ultrasonic sensors, a new camera based on PMD technology is used. The Photonic Mixer Device (PMD) is a new electro-optic device that provides a smart interface between the world of incoherent optical signals and the world of their electronic signal processing. PMD technology directly enables 3D imaging by means of the time-of-flight (TOF) principle and offers extremely high potential for new solutions in the robotics application field. It opens up striking new perspectives for obstacle detection systems, target acquisition, and mapping of unknown environments.
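The time-of-flight principle behind the PMD camera reduces, per pixel, to converting a round-trip travel time of the modulated light into range. A one-line sketch of that conversion (the example delay is illustrative, not a MERLIN measurement):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Range from a time-of-flight measurement: the modulated light travels
    to the target and back, so halve the round-trip time."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# A 2-microsecond round trip corresponds to a target roughly 300 m away.
print(round(tof_distance_m(2e-6), 3))
```

A real PMD sensor recovers the round-trip time indirectly from the phase shift of the modulated signal, but the range conversion is the same.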
A team of small, low-cost robots instead of one large, complex robot is useful in operations such as search and rescue and urban exploration. However, the performance of such a team is limited by the restricted mobility of the team members. We propose to overcome these mobility restrictions through physical cooperation among the team members. We carry out a feasibility analysis of a particular behavior in which two robots cooperate to cross a gap, and also consider the effect of scaling the robot dimensions. We simulate the dynamic equations describing the motion, which leads to a relaxation of the requirements derived from the static analysis. A decentralized control architecture is designed that avoids continuous communication between the robots, keeping the cooperation simple and low cost.