This paper describes our progress in near-range (0 to 2 meters) ego-centric docking using vision under variable lighting conditions (indoors, outdoors, dusk). The docking behavior is fully autonomous and reactive: the robot responds directly to the ratio of the pixel counts of two colored fiducials without constructing an explicit model of the landmark. This is similar to visual homing in insects and has a low computational complexity of O(n²) and a fast update rate. To segment the colored fiducials accurately under these lighting conditions, the spherical coordinate transform (SCT) color space is used, rather than RGB or HSV, in conjunction with an adaptive segmentation algorithm. Experiments with a daughter robot docking with a mother robot were conducted. Results showed that 1) vision-based docking is faster than teleoperation yet equivalent in performance, and 2) adaptive segmentation is more robust under challenging lighting conditions, including outdoors.
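The reactive control law described in this abstract can be sketched as follows. The thresholds, function names, and the specific steering rule are illustrative assumptions for exposition, not the authors' implementation:

```python
def docking_command(left_pixels, right_pixels, total_pixels, frame_area):
    """Reactive docking sketch: derive a command directly from the pixel
    counts of two colored fiducials, with no explicit landmark model.
    The 0.25 coverage threshold and the proportional turn rule are
    assumed values for illustration."""
    if total_pixels == 0:
        return "search"          # fiducials not visible: search for the dock
    balance = (left_pixels - right_pixels) / total_pixels  # in [-1, 1]
    coverage = total_pixels / frame_area  # grows as the robot nears the dock
    if coverage > 0.25:
        return "dock"            # close enough to engage the dock
    return ("forward", -balance)  # turn toward the under-represented fiducial
```

Because the command depends only on per-frame pixel counts, the per-frame cost is dominated by the segmentation pass over the image, which is what keeps the update rate high.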
In this paper, the integration of two basic imaging sensor technologies, optical imaging and radar imaging, is presented within a small communication platform environment. This is made possible by a novel MMW (millimeter-wave) imaging technique developed by Waveband Corporation, based on a small, compact MMW antenna that is diffraction-grating-based rather than gimbal-scanned or phased-array-based, as in the prior art. The proposed integral imaging system can operate a UGV in any weather and environmental conditions (daylight, low vision, dark, night, fog, snow, dust, etc.) by applying a switching module that automatically selects any of three modes: (a) optical only; (b) radar only; (c) both (i.e., hybrid imaging). Feasibility and figure-of-merit analyses are provided.
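The switching module's three modes might be selected along the following lines; the decision rules and inputs here are assumptions for illustration, since the abstract does not specify them:

```python
def select_imaging_mode(optical_usable, degraded_visibility):
    """Sketch of a switching module choosing among the three described
    modes: optical only, radar only, or hybrid. The inputs (whether the
    optical channel is usable, and whether visibility is degraded by
    fog/snow/dust) are hypothetical condition flags."""
    if not optical_usable:
        return "radar"    # dark/night: only the MMW channel sees anything
    if degraded_visibility:
        return "hybrid"   # both channels contribute partial information
    return "optical"      # clear daylight: optical alone suffices
```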
A method for real-time shaping of images of reflecting objects is described. This method, proposed by the author in 1950 (first published in 1982 [1,2]), rests upon the use of pulsed radiation sources and an antenna whose beam super-scans during the radiation and reception of pulses [3,4]. The super-scanning system transforms the time signals into quasi-tomographic periodic spatial structures, because reflected signals can be received only from discrete Visibility Layers (VLs) whose position in space is known. With different laws of beam scanning during radiation and reception, this structure takes the form of a survey net. The coordinates and dimensions of dynamic objects crossing this net can be measured, with an error that does not depend directly on the distance to the object or on the antenna size. It is shown that such a system can be adaptive, effecting the optimization of the energy distribution in space during observation. Furthermore, such a system permits the necessary information I to be received with minimum energy W, according to the criterion η = I (bit) / W (watt). This criterion is the author's modification of Brillouin's thermodynamic criterion of physical-experiment efficiency. A model of an ultrasound super-scanning tomograph, working in an air chamber at a frequency of 115 kHz (the range of bats and dolphins), was built. The experiments fully confirm the theoretical results. Furthermore, some new effects were discovered; for example, super-resolution of a group of objects located inside a VL, achieved by intentionally increasing the intervals between signals reflected from the remote objects in the group. The known hypothesis of a super-scanning mechanism in the dolphin's sonar [5] is also mentioned. The report demonstrates the principle and some of the possibilities of super-scanning systems. The ultrasonic version of the SSL, working in the ocean or sea, can be used to create autonomous high-resolution ground and underwater vehicles or a diver's information system.
We discuss the research developed in our lab over the past years, leading to autonomous robots operating in non-cooperative, even hostile, outdoor environments.
This paper presents functions for autonomously following roads and terrain contours with a tracked vehicle. These capabilities were demonstrated as part of phase C of the German experimental PRogram of Intelligent Mobile Unmanned Systems (PRIMUS) at the end of June 1999. The dynamic image-processing-based perception allows a traveling speed of up to 50 km/h on roads; the speed limit is imposed by the vehicle used, a Digitized Wiesel 2. As many test trials indicate, this performance is independent of the structure and type of road, such as paved or unpaved roads with or without gravel. Handling the extreme vehicle vibrations caused by the interaction between graveled ground and the vehicle tracks is a special challenge. Another driving function is to autonomously follow terrain contours such as trenches, field borders, furrows, or the tracks of vehicles driving ahead. This function was demonstrated at speeds of up to 25 km/h in open terrain. Both driving functions are combined with an obstacle detection and avoidance capability, which is part of a separate perception mode. Although the focus of this paper lies on road and contour following, an overview of the overall architecture and hardware framework of the system is given first. This is followed by a description of the operation modes for the above-mentioned driving functions, including the control flow between the robot vehicle and the Command & Control Station, which is integrated in a second Digitized Wiesel 2. After that, the architecture of the perception approach is presented. A collection of experimental results and a discussion of those results conclude the paper.
In this paper, we address some of the technical challenges associated with modeling the tactical behaviors of a group of Tandem Mobile Robotic Vehicles (TMRV) in unstructured environments. We discuss intelligent schemes for robust maneuverability control of the TMRV under a Supervisory Mobility Controller. We consider four TMRV tasks: terrain navigation, tandem mobility control, tactical strategic formation, and communication-based control. We have developed a supervisory mobility controller environment using FMCell software with six functional control loops. We present and discuss the modular and functional architecture of our supervisory mobility controller, in particular our strategies for separating supervisory functions according to their complexity, priority, and intelligence requirements. Examples demonstrating the effectiveness and efficiency of the newly developed techniques are presented. We also discuss how these behaviors can be applied to tandem unmanned ground vehicle systems.
This project describes an approach to creating autonomous systems that can continue to learn throughout their lives, that is, to adapt to changes in the environment and in their own capabilities. Evolutionary learning methods have been found useful in several areas of autonomous vehicle development. In our research, evolutionary algorithms are used to explore alternative robot behaviors within a simulation model as a way of reducing the overall knowledge-engineering effort. The learned behaviors are then tested in the actual robot and the results compared. Initial research demonstrated the ability to learn reasonably complex robot behaviors, such as herding, navigation, and collision avoidance, using this offline learning approach. In this work, the vehicle is always exploring different strategies via an internal simulation model; the simulation, in turn, changes over time to better match the world. This model, which we call Continuous and Embedded Learning (also referred to as Anytime Learning), is a general approach to continuous learning in a changing environment. The agent's learning module continuously tests new strategies against a simulation model of the task environment and dynamically updates the knowledge base used by the agent on the basis of the results. The execution module controls the agent's interaction with the environment and includes a monitor that can dynamically modify the simulation model based on its observations of the environment. When the simulation model is modified, the learning process continues on the modified model. The learning system is assumed to operate indefinitely, and the execution system uses the results of learning as they become available. Early experimental studies demonstrate a robot that can learn to adapt to failures in its sonar sensors.
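The learning/execution/monitor interplay described above can be sketched as a loop. The random-search "learner", the model representation, and all names are placeholders for illustration; the actual system uses evolutionary algorithms over robot behaviors:

```python
import random

def continuous_embedded_learning(evaluate_in_sim, monitor_world, steps=100):
    """Sketch of Continuous and Embedded (Anytime) Learning: a learning
    module keeps testing candidate strategies against a simulation
    model, while a monitor updates that model from observations of the
    environment. The best-so-far strategy is available to the execution
    module at any time."""
    sim_model = {"sonar_ok": True}           # placeholder simulation parameters
    best, best_score = None, float("-inf")
    for _ in range(steps):
        drift = monitor_world()              # monitor observes the environment...
        if drift:
            sim_model.update(drift)          # ...and modifies the simulation model
            best_score = float("-inf")       # old scores are no longer comparable
        candidate = random.random()          # propose a strategy (placeholder learner)
        score = evaluate_in_sim(candidate, sim_model)
        if score > best_score:
            best, best_score = candidate, score
    return best                              # anytime: usable whenever queried
```

Resetting the incumbent score after a model change is the key point: strategies scored against the stale model must re-earn their ranking, which is how the system adapts to, e.g., a failed sonar sensor.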
Small physical agents will be ubiquitous on the battlefield of the 21st century, principally to lower the exposure of our ground forces to harm. Teams of small collaborating physical agents conducting tasks such as Reconnaissance, Surveillance, and Target Acquisition (RSTA); chemical and biological agent detection; logistics; sentry duty; and communications relay will have advanced sensor and mobility characteristics. The mother ship must effectively deliver, retrieve, service, and control these robots, as well as fuse the information gathered by these highly mobile robot teams. The mother ship concept presented in this paper includes cases where the mother ship is itself a robot or a manned system. The mother ship must have long-range mobility to deploy the small, highly maneuverable agents that will operate in urban environments and more localized areas, and it acts as a logistics base for the robot teams. The mother ship must also establish a robust communications network among the agents; it serves as an up-link point for disseminating the intelligence gathered by the smaller agents and, because of its global knowledge, provides high-level information fusion, control, and planning for the collaborative physical agents. Additionally, the mother ship incorporates battlefield visualization, information fusion, multi-resolution analysis, and intelligent software agent technology to support mission planning and execution. This paper discusses ongoing research at the U.S. Army Research Laboratory that supports the development of a robot mother ship, including docking, battlefield visualization, intelligent software agents, adaptive communications, information fusion, and multi-modal human-computer interaction.
An acoustic sensor array that cues an imaging system on a small tele-operated robotic vehicle was used to detect human voice and activity inside a building. The advantage of acoustic sensors is that they are a non-line-of-sight (NLOS) sensing technology that can augment traditional LOS sensors such as visible and IR cameras. Acoustic energy emitted from a target, such as a person, weapon, or radio, will travel through walls and smoke, around corners, and down corridors, whereas these obstructions would cripple an imaging detection system. The hardware developed and tested used an array of eight microphones to detect the loudest direction and automatically steer a camera's pan/tilt toward the noise centroid. This type of system is applicable to counter-sniper applications, building clearing, and search and rescue. Data presented are time-frequency representations showing voice detected within rooms and down hallways at various ranges. Another benefit of acoustics is that it provides the tele-operator with situational awareness cues via low-bandwidth transmission of raw audio data, which the operator can interpret with either headphones or time-frequency analysis. This data can be useful for recognizing familiar sounds that might indicate the presence of personnel, such as talking, equipment, and movement noise. The same array also detects the sounds of the robot it is mounted on, which can be useful for engine diagnostics and troubleshooting, or for measuring self-noise emanations for stealthy travel. Data presented characterize vehicle self-noise over various surfaces such as tile, carpet, pavement, sidewalk, and grass. Vehicle diagnostic sounds indicate a slipping clutch and repeated unexpected application of the emergency braking mechanism.
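One common way to turn per-microphone energies from a ring array into a pan cue is an energy-weighted centroid; this is a generic array-processing sketch, not necessarily the algorithm used in the fielded hardware:

```python
import math

def noise_centroid_bearing(mic_energies):
    """Given the received energy at each microphone of a uniform ring
    array (mic i at angle 2*pi*i/n around the vehicle), return the
    energy-weighted centroid direction in degrees, usable as a pan
    command toward the loudest source."""
    n = len(mic_energies)
    x = sum(e * math.cos(2 * math.pi * i / n) for i, e in enumerate(mic_energies))
    y = sum(e * math.sin(2 * math.pi * i / n) for i, e in enumerate(mic_energies))
    return math.degrees(math.atan2(y, x)) % 360.0  # bearing of the noise centroid
```

For eight microphones, a source heard almost entirely by microphone 2 yields a bearing near 90 degrees, so the pan/tilt head is steered a quarter turn from the array's reference axis.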
An important but oft-overlooked aspect of any robotic system is the synergistic benefit of designing the chassis to have high intrinsic mobility that complements, rather than limits, its system capabilities. This novel concept continues to be investigated by the Defence Research Establishment Suffield (DRES) with the Articulated Navigation Testbed (ANT) Unmanned Ground Vehicle (UGV). The ANT demonstrates high mobility through the combination of articulated steering and a hybrid locomotion scheme that uses individually powered wheels on the ends of rigid legs capable of approximately 450 degrees of rotation. The vehicle can be minimally configured as a 4x4 and modularly expanded to 6x6, 8x8, and so on. This enhanced mobility configuration permits pose control and novel maneuvers such as stepping, bridging, and crawling. The resultant mobility improvements, particularly in unstructured and off-road environments, reduce the resolution with which the UGV sensor systems must perceive their surroundings and decrease the computational requirements of the UGV's perception systems [1] for successful semi-autonomous or autonomous terrain negotiation. This paper reviews critical vehicle developments leading up to the ANT concept, describes the basis for its configuration, and speculates on the impact of the intrinsic mobility concept on UGV effectiveness.
In response to ultra-high maneuverability vehicle requirements, Utah State University (USU) has developed an autonomous vehicle with unique mobility and maneuverability capabilities. This paper describes a study of the mobility of the USU T2 Omni-Directional Vehicle (ODV). The T2 vehicle is a mid-scale (625 kg), second-generation ODV mobile robot with six independently driven and steered wheel assemblies. The six-wheel independent steering system is capable of unlimited steering rotation, presenting a unique solution to enhanced vehicle mobility requirements. This mobility study focuses on energy consumption in three basic experiments, comparing two modes of steering: Ackerman and ODV. The experiments are all performed on the same vehicle without any physical changes to the vehicle itself, providing a direct comparison of these two steering methodologies. A computer simulation of the T2 mechanical and control system dynamics is described.
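The kinematic contrast between the two steering modes can be sketched as follows; this is a simplified pure-translation/low-speed model with hypothetical function names, not the T2 controller:

```python
import math

def odv_wheel_angles(body_heading_deg, travel_direction_deg, n_wheels=6):
    """ODV mode: each independently steered wheel points along the
    desired travel direction, so the direction of motion is decoupled
    from body orientation (pure-translation case)."""
    steer = (travel_direction_deg - body_heading_deg) % 360.0
    return [steer] * n_wheels  # identical steering angle at every wheel

def ackerman_turn_angles(wheelbase_m, track_m, turn_radius_m):
    """Ackerman mode, for contrast: the two steered-wheel angles are
    tied to a single turn radius, so motion follows body orientation
    (standard bicycle-model geometry)."""
    inner = math.degrees(math.atan(wheelbase_m / (turn_radius_m - track_m / 2)))
    outer = math.degrees(math.atan(wheelbase_m / (turn_radius_m + track_m / 2)))
    return inner, outer
```

In ODV mode the body can, for example, keep facing a target while the vehicle translates sideways, which is precisely the capability Ackerman geometry cannot provide.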
Robot vehicle mobility is the product of the physical configuration, the mechatronics (sensors, actuators, and control), and the motion programs for different obstacles, terrain conditions, and maneuver objectives. This paper examines the mobility potential of a robotic 6x6 wheeled omni-directional drive vehicle (ODV) with z-axis and tire-inflation control. An ODV can steer and drive all wheels independently; the direction of motion is independent of the orientation of the body. Z-axis control refers to independent control of the suspension elevation at each wheel. Pneumatic tire-inflation control provides the ability to inflate and deflate individual tires. The paper describes motion programs for various discrete obstacles and challenging terrain conditions. It illustrates how ODV control, z-axis control, and tire-inflation control interact to provide high mobility with respect to cornering, maneuvering on slopes, negotiating vertical-step and horizontal-gap obstacles, and braking and acceleration on soft soil and slick surfaces. The paper derives guidelines for the physical dimensions of the vehicle needed to achieve these capabilities.
In this paper, the performance of two vehicles with different suspension systems is compared. One vehicle has six wheels and six standard independent suspensions, which move along the vehicle-body lift coordinate. The other vehicle has four wheels and four independent suspensions that form an A-frame system; each of the four suspensions can rotate through large angles around the joint joining the vehicle body and the suspension. Based on kinematic analysis, the A-frame suspension vehicle has advantages in vertical position adjustment, stability when crossing side slopes, hill climbing and descending, cornering, and isolation of the body during acceleration and braking. The standard suspension vehicle has advantages in its constant wheelbase, absence of sideslip while the vehicle body changes position, and simplicity of structure and mathematical modeling. According to dynamic response analysis in passive mode, the A-frame vehicle is better at handling and clearance maintenance, while the standard independent suspension vehicle is better at isolating ground irregularity and maintaining traction force. This paper also introduces the application of isometric charts for standard suspension vehicles: each chart can be used to select spring and damper pairs for a group of standard suspension vehicles with different inertia and geometry properties.
Bekker's Derived Terramechanics Model (BDTM) is an analytical tool for evaluating vehicle off-road mobility. BDTM has been developed using Bekker's equations for vehicle-soil interaction; Bekker developed the bevameter technique to measure mechanical strength characteristics for many soil and snow conditions. BDTM is in a spreadsheet format, and its primary purpose is to compare mobility characteristics of robotic tracked and wheeled vehicles under different terrain conditions. Bekker's model is a simple, linear, one-degree-of-freedom (1-DOF) model, which assumes that in a perfectly cohesive soil (i.e., clay), soil thrust is only a function of contact surface area. The model also assumes that for a perfectly cohesionless, or frictional, soil (i.e., dry sand), soil thrust is a function of vehicle weight. This paper compares the mobility characteristics of wheeled vs. tracked vehicles for different sizes, weights, and terrain conditions.
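The two limiting cases stated above are the endpoints of the standard Coulomb-form soil thrust equation, H = A·c + W·tan(φ), which reduces to a pure area term when the friction angle is zero and a pure weight term when cohesion is zero. A minimal sketch (units and parameter names assumed):

```python
import math

def soil_thrust(contact_area_m2, weight_n, cohesion_pa, friction_angle_deg):
    """Coulomb-form soil thrust H = A*c + W*tan(phi). In purely cohesive
    soil (phi = 0, e.g. clay) only the contact area matters; in purely
    frictional soil (c = 0, e.g. dry sand) only the vehicle weight
    matters, matching the model's two assumptions."""
    return (contact_area_m2 * cohesion_pa
            + weight_n * math.tan(math.radians(friction_angle_deg)))
```

This split is why tracks (large contact area) favor clay-like soils while thrust in dry sand is largely indifferent to track vs. wheel contact geometry at equal weight.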
To enhance the mobility of the USU T-class of vehicles, the T3 vehicle has been developed, incorporating Z-axis motion of the drive-wheel modules. Moving the wheels up and down provides the ability to pitch and roll the vehicle chassis and to move the vehicle's center of gravity, changing the force distribution on the individual drive wheels. The omni-directional capability of the vehicle makes it possible to align the vehicle with the slope gradient that maximizes vehicle stability. This paper shows that by pitching the vehicle into the slope, the uphill traction limit of the vehicle can be increased by about 10 degrees. Future research efforts concerning stair climbing, step negotiation, and obstacle field navigation are also discussed.
This paper describes the Operator Control Unit (OCU) developed for use in the first of three yearly demonstrations and Battle Lab Warfighting Experiments (BLWEs) for the Demo III/XUV program. The OCU hardware and software provide the man-machine interface to a team of Experimental Unmanned Vehicles (XUVs). The OCU provides the capability to command and control multiple autonomous unmanned vehicles performing collective and individual scout tasks in support of battalion scout missions. The paper provides an overview of the OCU requirements, the hardware configuration, the man-machine interfaces, and the software capabilities necessary to support mission planning and execution for XUV scout missions. Additionally, we discuss lessons learned from virtual and constructive simulations conducted at the Mounted Maneuver Battle Labs and from OCU development, integration, and testing.
A nine-month study was conducted under the direction of the Tank-Automotive Research, Development and Engineering Center (TARDEC) in Warren, MI, to determine the best platform design for inherent all-terrain mobility of an unmanned robotic vehicle in the 1500-2500 lb range. Reference platforms were the DEMO III 4x4 and the Utah State University 6x6 with omni-directional wheels. The study systematically developed the desired top-down design-driving capabilities, operational needs, and mobility concepts, supported by extensive analysis using the NATO Reference Mobility Model (NRMM) and literature searches. Maximizing mobility over all terrain and resisting immobilization were emphasized in order to minimize sensor computational burdens while maximizing the probability of timely mission accomplishment. Several wheeled, tracked, and hybrid platform concepts were evaluated. Significant improvements in cross-country mobility, obstacle negotiation, and self-extraction capability were achieved with hybrid solutions. Final concept development focused on an 8x8 swiveling wheeled platform with band-track overlays. The conclusions of the study were that a technology demonstrator platform should be built for mobility validation and NRMM II refinement; that a robotic-vehicle-specific NRMM II mobility scenario should be developed; and that sensor solutions for unmanned mobility platforms should be revisited.
Our research deals with the design and testing of a control architecture for an autonomous outdoor mobile robot that uses mainly vision for perception. For the single-robot case, we have designed a hybrid architecture with an attention mechanism that allows dynamic selection of perception processes. Building on this work, we have developed an open multi-agent architecture for standard multi-task operating systems, using the C++ programming language and POSIX threads. Our implementation features efficient and fully generic messages between agents, automatic acknowledgement receipts, and built-in synchronization capabilities. Knowledge is distributed among robots according to a collaborative scheme: every robot builds its own representation of the world and shares it with others, and pieces of information are exchanged when decisions have to be made. Experiments are to be conducted with two outdoor ActivMedia Pioneer AT mobile robots. Distributed perception, using mainly vision but also ultrasound, will serve as proof of concept.
With the advent of the Future Combat System and the system of systems approach to development of a deployable and lethal medium force, unmanned systems are viewed as essential contributors to provide the necessary capabilities while at the same time improving soldier survivability. One of the most critical missions for both unmanned ground and air platforms is Reconnaissance, Surveillance, and Target Acquisition (RSTA). Unmanned systems have the advantage that they may be put in harm's way. The systems must, however, be designed to be affordable to enable aggressive use that may result in loss or damage during operations. Advances in sensors, machine perception, and advanced computer architectures have made a semi-autonomous Unmanned Ground Vehicle (UGV) a reality. While still requiring a man in the loop with a low bandwidth communication link for supervisory planning and control, the UGV can carry out numerous mission activities otherwise performed by manned systems. This paper will present an NVESD concept for a cost effective targeting system along with a discussion of sensors and concepts for autonomous mobility.
The objective of the Joint Robotics Program (JRP) is to conduct research, development, acquisition, and fielding of unmanned ground vehicle systems for a wide range of military applications. The program is structured to field first-generation systems, mature promising technologies, and then upgrade capabilities by means of an evolutionary strategy. In the near term, acquisition programs emphasize teleoperation over diverse terrain, more autonomous functioning in structured environments, and extensive opportunities for users to operate UGVs. Autonomous mobility in unstructured environments is the main thrust of the JRP technology base. Recently, the Demo III program held a highly successful demonstration of autonomous mobility at Aberdeen Proving Ground, MD. Other successes with prototypical countermine systems in Bosnia, as well as soldiers' and Marines' experimentation with reconnaissance unmanned ground vehicles (UGVs), continue to engender requirements in other areas. Users are developing requirements for UGVs that convoy with manned vehicles; carry and deliver supplies; carry and employ weapons; and can be carried in a backpack and conduct reconnaissance inside multi-story buildings. The overall progress of the JRP is reflected in the fact that the Services have identified procurement funding to buy UGVs. The author will update the conference on the considerable progress of the JRP, which is preparing to provide our Armed Forces with a leap-ahead capability for the 21st century.
Few disasters can inspire more compassion for victims and families than those involving structural collapse. Video clips of children's bodies pulled from earthquake-stricken cities and bombing sites tend to invoke tremendous grief and sorrow because of the totally unpredictable nature of the crisis and the lack of even the slightest degree of negligence (as opposed to, say, those who choose to ignore storm warnings). Heartbreaking stories of people buried alive for days provide a visceral and horrific perspective on some of the greatest fears ever to be imagined by human beings. Current trends toward urban sprawl and increasing human discord dictate that structural collapse disasters will continue to present themselves at an alarming rate. The proliferation of domestic terrorism, HAZMAT, and biological contaminants complicates the matter further and presents a daunting problem set for Urban Search and Rescue (USAR) organizations around the world. This paper amplifies the case for robot-assisted search and rescue that was first presented during the KNOBSAR project initiated at the Colorado School of Mines in 1995. It anticipates increasing technical development in mobile robot technologies and promotes their use for a wide variety of humanitarian assistance missions. Focus is placed on the development of advanced robotic systems that are employed in a complementary, tool-like fashion, as opposed to traditional robotic approaches that purport to replace humans in hazardous tasks. Operational challenges for USAR are presented first, followed by a brief history of mobile robot development. The paper then presents conformal robotics as a new design paradigm, with emphasis on variable geometry and volumes. A section on robot perception follows, with an initial attempt to characterize sensing in a volumetric manner. Collaborative rescue is then briefly discussed, with an emphasis on marsupial operations and linked mobility. The paper concludes with an emphasis on the Human Robot Interface (HRI) and a call for additional research in this exciting and all too important field.
The Demo III Experimental Unmanned Ground Vehicle program is directed at developing autonomous mobility technology, integrating it onto a number of small, agile testbed vehicles, and evaluating its maturity through a series of experiments conducted with the military user community. During FY99 the program focused upon development and integration of a baseline capability into two testbed vehicles that can maneuver cross-country at speeds of up to 10 mph during daylight and 5 mph in darkness, over semi-arid terrain, i.e., terrain without significant vegetation. Efforts were centered on developing a multi-mode perception capability based upon both passive and active sensor systems, including stereo vision (using both normal CCD video and FLIR sensors), a multi-line laser scanner, and imaging radar; implementation of the 4-D/RCS computer architecture; and a user-friendly operator interface incorporating advanced mission planning tools. The technology was evaluated in September '99 by troops from the Armor Center during a Battle Lab Warfighting Experiment at Aberdeen Proving Ground, MD, conducted by the Mounted Maneuver Battle Laboratory. During FY00, technical activities will focus on improving perception technology to permit higher-speed day/night operation in obstacle-rich environments. A major technology thrust will be the introduction of object classification capabilities, e.g., the ability to differentiate between rocks and bushes or grass, into the autonomous mobility perception suite. This expanding capability is essential for future efforts directed at the development of tactical behaviors.
Forward Deployed Robotic Unit (FDRU) is a core science and technology objective of the US Army that will demonstrate the impact of autonomous systems on all phases of future land warfare. It will develop, integrate, and demonstrate the technology required to achieve robotic and fire control capabilities for future land combat vehicles, e.g., Future Combat Systems, using a system-of-systems approach that culminates in a field demonstration in 2005. It will also provide the required unmanned assets and conduct the demonstrations, Battle Lab Warfighting Experiments, and data analysis required to understand the effects of unmanned assets on combat operations. The US Army Tank-Automotive & Armaments Command and the US Army Research Laboratory are teaming in an effort to leverage prior technology achievements in the areas of autonomous mobility, architecture, sensor and robotics system integration; to advance the state of the art in these areas; and to provide field demonstration/application of the technologies.
Beginning in FY98 and continuing in FY99, the Center for Self-Organizing and Intelligent Systems (CSOIS) at Utah State University (USU) has been funded by the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program to develop and demonstrate enhanced mobility concepts for unmanned ground vehicles (UGVs). The long-range goal of the program is to develop and demonstrate enabling technologies that allow lightweight robotic and semiautonomous ground vehicles to achieve on-road and off-road mobility and survivability similar to current manned, wheeled, and tracked military vehicles, with a focus on small-scale to mid-scale vehicles. This paper describes the design concept and the performance of the T-series of robotic vehicles resulting from the TACOM Intelligent Mobility funding at USU (the T1, T2, and T3). USU-TACOM intelligent mobility concepts discussed in the paper include: (1) inherent mobility capability improvements, achieved through the unique concept of USU's omni-directional vehicle (ODV) steering design, which features six independently controlled smart wheels; (2) intelligent mobility control, enhanced through intelligent coordination and control of each of the six wheels in the ODV vehicles; (3) global mobility control, enhanced through USU's optimal multi-agent mission and mobility planning system; and (4) future mobility capability and control.
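The omni-directional steering idea above can be illustrated with a small kinematic sketch: each independently steered wheel must align with the velocity that the commanded body motion induces at its mounting point. The wheel coordinates below are illustrative placeholders, not the actual T-series geometry.

```python
import math

# Positions of the six wheels in the vehicle body frame (meters).
# These coordinates are illustrative, not the actual T-series layout.
WHEELS = [(1.0, 0.5), (1.0, -0.5), (0.0, 0.5),
          (0.0, -0.5), (-1.0, 0.5), (-1.0, -0.5)]

def odv_wheel_commands(vx, vy, wz):
    """Per-wheel steering angle (rad) and speed (m/s) for a body twist.

    Each wheel's velocity is the vehicle's translational velocity plus
    the rotational component (omega x r) at the wheel's position.
    """
    commands = []
    for (x, y) in WHEELS:
        wvx = vx - wz * y   # rotation contributes -wz*y to x velocity
        wvy = vy + wz * x   # and +wz*x to y velocity
        angle = math.atan2(wvy, wvx)
        speed = math.hypot(wvx, wvy)
        commands.append((angle, speed))
    return commands
```

For pure translation all six wheels point the same way at the same speed; for pure rotation each wheel aligns tangentially to a circle around the vehicle center, which is what allows the ODV to turn in place.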
Teams of heterogeneous mobile robots are a key aspect of future unmanned systems for operations in complex and dynamic urban environments, such as that envisioned by DARPA's Tactical Mobile Robotics program. Interactions among such team members enable a variety of mission roles beyond those achievable with single robots or homogeneous teams. Key technologies include docking for power and data transfer, marsupial transport and deployment, collaborative team user interface, cooperative obstacle negotiation, distributed sensing, and peer inspection. This paper describes recent results in the integration and evaluation of component technologies within a collaborative system design. Integration considerations include requirement definition, flexible design management, interface control, and incremental technology integration. Collaborative system requirements are derived from mission objectives and robotic roles, and impact system and individual robot design at several levels. Design management is a challenge in a dynamic environment, with rapid evolution of mission objectives and available technologies. The object-oriented system model approach employed includes both software and hardware object representations to enable on-the-fly system and robot reconfiguration. Controlled interfaces among robots include mechanical, behavioral, communications, and electrical parameters. Technologies are under development by several organizations within the TMR program community. The incremental integration and validation of these within the collaborative system architecture reduces development risk through frequent experimental evaluations. The TMR system configuration includes Packbot-Perceivers, Packbot-Effectors, and Throwbots. Surrogates for these robots are used to validate and refine designs for multi-robot interaction components. Collaborative capability results from recent experimental evaluations are presented.
The Naval Explosive Ordnance Disposal Technology Division (NAVEODTECHDIV) has had an active program for several years for the development of technologies required to realize an autonomous system of small robots to clear an area of unexploded submunitions. The focus thus far has been on the technology elements themselves, with an emphasis on autonomous electronic control and processing. NAVEODTECHDIV is now developing demonstration systems to prove the feasibility of this application. At this stage, the systems are used in relatively benign terrain, and the targets are inert, not live munitions. However, this is adequate to show possibilities, and allow for experimentation before a full-scale development effort is initiated.
The integration of electric machines and drive systems into Unmanned Ground Vehicle (UGV) applications depends largely on meeting requirements of drive power and speed control within the limited space of an in-wheel design. In addition, UGV drive systems must operate efficiently under all conditions so as to minimize the power consumption from a limited power source. The concern for energy consumption and space limitations in UGV applications suggests the need for application-specific motors and control systems that are an integral part of the vehicle design. Characterizing a design-specific electric motor and control system for a UGV application in a simulated environment and on a laboratory test bench provides much information about the motor's operating parameters and allows for optimization of the drive system for the specific UGV application. The parameters of concern here are the output power and torque of the motor over the speed range of interest and the overall efficiency of the drive system. The effects of speed control algorithms on motor performance are also of importance in determining the capabilities of the motor and control system as an integrated unit. This paper presents the development and initial testing of an integrated UGV drive system in the Power Electronics Lab (PEL) at the University of Alaska Fairbanks (UAF) in a joint effort with Utah State University (USU). The UGV drive system employs a custom-designed axial-gap permanent magnet synchronous motor (AGPMSM) with scalar control.
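Scalar control, as named above, commands voltage as a simple function of electrical frequency (the volts-per-hertz law) rather than regulating individual current components. A minimal sketch follows; the voltage, frequency, and boost constants are illustrative assumptions, not the ratings of the UAF/USU motor.

```python
# Minimal scalar (volts-per-hertz) control sketch for a synchronous
# motor drive. All constants are illustrative, not the AGPMSM's ratings.
RATED_VOLTAGE = 48.0   # V, assumed maximum phase voltage
RATED_FREQ = 200.0     # Hz, assumed rated electrical frequency
BOOST = 2.0            # V, low-speed boost to overcome stator resistance

def vf_command(freq_hz):
    """Return (clamped frequency, voltage command) under the V/f law."""
    freq_hz = max(0.0, min(freq_hz, RATED_FREQ))
    volts = BOOST + (RATED_VOLTAGE - BOOST) * freq_hz / RATED_FREQ
    return freq_hz, volts
```

Keeping the voltage-to-frequency ratio roughly constant keeps the stator flux roughly constant, which is why scalar control is attractive for a compact in-wheel drive: it needs no rotor position feedback for steady-state speed control.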
Foster-Miller, under the sponsorship of LTC John Blitch's DARPA TMR program, has developed a number of prototype payloads that can be fitted onto small robotic platforms for the purpose of performing various search and rescue functions. Presently they have been designed for the Lemmings-type vehicle, but the intent is to construct them as plug-and-play modules for a larger variety of systems. The function modules include: modified image intensifiers and thermal cameras for dark-terrain search; two-way voice communications; fingerprint identification; door breaching for operation in hazardous environments; water, material, and air sampling and monitoring kits; deployable combat casualty rescue nets; and rescue line launchers with fire control. The vehicle is amphibious to 90 ft and has been fitted with various underwater sensors for search and recovery operations. The vehicle can be fitted with a 14-, 18-, or 40-in fold-up mast for out-of-reach operations. This paper will discuss the functionality of the subsystems and how they relate to robotic platforms.
Serpentine robots offer advantages over traditional mobile robots and robot arms because they have enhanced flexibility and reachability, especially in convoluted environments. These mechanisms are especially well suited for search and rescue operations, where making contact with surviving victims trapped in a collapsed building is essential. The same flexibility that makes serpentine robots so useful also makes them difficult to design and control. This paper will describe the current status of serpentine robot design and path planning underway in our research group and point toward future directions of research.
Since the 1995 Oklahoma City bombing and Kobe, Japan, earthquake, robotics researchers have been considering search and rescue as a humanitarian research domain. The recent devastation in Turkey and Taiwan, compounded with the new RoboCup Rescue and AAAI Urban Search and Rescue robot competitions, may encourage more research. However, roboticists generally do not have access to domain experts: the emergency workers or first responders. This paper shares our understanding of urban search and rescue, based on our active research in this area and training sessions with rescue workers from the Hillsborough County (Florida) Fire Departments. The paper is intended to be a stepping stone for roboticists entering the field.
This paper introduces the RoboCup-Rescue Simulation Project, a contribution to the disaster mitigation, search, and rescue problem. A comprehensive urban disaster simulator is constructed on distributed computers. Heterogeneous intelligent agents such as fire fighters, victims, and volunteers conduct search and rescue activities in this virtual disaster world. A real-world interface links the various sensor systems and infrastructure controllers of real cities into the simulation. Real-time simulation is synchronized with actual disasters, computing the complex relationships between various damage factors and agent behaviors. A mission-critical man-machine interface provides portability and robustness for disaster mitigation centers, as well as augmented-reality interfaces for rescue in real disasters. It also provides a virtual-reality training function for the public. This diverse spectrum of RoboCup-Rescue activities contributes to the creation of a safer social system.
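The core of such a simulator is a tick loop in which agents act on a shared disaster state and the damage model then evolves. The toy sketch below illustrates that structure only; the fire-spread rule and the `FireFighter` agent are invented stand-ins, not the RoboCup-Rescue simulator's actual models.

```python
# Toy sketch of a RoboCup-Rescue-style tick: heterogeneous agents act,
# then the damage factors evolve. All rules here are illustrative.
def step(world, agents):
    """One simulation tick over a list of city cells."""
    for agent in agents:
        agent.act(world)
    # Untreated fires intensify (a stand-in for the simulator's
    # complex interactions between damage factors).
    for cell in world:
        if cell["fire"] > 0 and not cell["treated"]:
            cell["fire"] = min(cell["fire"] + 1, 10)
        cell["treated"] = False  # treatment lasts one tick

class FireFighter:
    def act(self, world):
        # Greedy policy: suppress the most intense fire.
        target = max(world, key=lambda c: c["fire"])
        if target["fire"] > 0:
            target["fire"] = max(0, target["fire"] - 2)
            target["treated"] = True
```

Running the same loop with victim and volunteer agents, each with its own `act` policy, gives the heterogeneous multi-agent structure the project describes.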
This article describes the government experimental program PRIMUS (PRogram of Intelligent Mobile Unmanned Systems) and the results of phase C, demonstrated in summer 1999 on a military proving ground. The program's aim is to demonstrate autonomous driving of an unmanned robot in open terrain, achieving the highest degree of autonomy possible with today's technology in order to obtain a platform for different missions. The goal is to relieve the soldier of highly dangerous tasks, to increase performance, and to reduce personnel and costs through unmanned systems. In phase C of the program, two small tracked vehicles (Digitized Wiesel 2, air-transportable by CH-53) are used: one as the robot vehicle, the other as a command-and-control system. The Wiesel 2 is configured as a drive-by-wire system and is therefore well suited to the integration of control computers. The main task is the autonomous detection and avoidance of obstacles in unknown, non-cooperative environments. A sensor package is integrated for navigation and orientation. To detect obstacles, the scene in the robot's driving corridor is scanned four times per second by a 3D range-image camera (LADAR). The measured 3D range image is converted into a 2D obstacle map and used as input for the calculation of an obstacle-free path. The combination of local navigation (obstacle avoidance) and global navigation yields collision-free driving in open terrain to a predefined goal point at velocities of up to 25 km/h. A contour tracker with a TV camera as its sensor is also implemented, which allows the robot to follow contours (e.g., the edge of a meadow) or to drive on paved or unpaved roads at up to 50 km/h. In addition to these autonomous driving modes, the operator in the command-and-control station can drive the robot by remote control. All functions were successfully demonstrated in summer 1999 on a military proving ground.
During a mission example, the robot vehicle covered a distance of several kilometers in open terrain and on unpaved roads and performed a reconnaissance operation with the built-in RSTA sensors. PRIMUS-C meets the requirements for autonomous and teleoperated driving in open terrain and on roads. The realized functions can be transferred to any vehicle and adapted to different mission requirements, making PRIMUS-C a universal, modular, vehicle-independent platform for different military applications.
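The 3D-range-image-to-2D-obstacle-map reduction described above can be sketched as a simple grid projection: each LADAR point is binned into a ground-plane cell and the cell is marked occupied if the point rises above a height threshold. The cell size, grid dimensions, and threshold below are illustrative assumptions, not PRIMUS parameters.

```python
# Sketch of reducing a LADAR 3D range image to a 2D obstacle map.
# Constants are illustrative assumptions, not the PRIMUS values.
CELL = 0.5           # m per grid cell
HEIGHT_THRESH = 0.3  # m above ground counts as an obstacle

def obstacle_map(points, width=40, depth=40):
    """points: iterable of (x, y, z) in the vehicle frame, x forward.

    Returns a depth x width occupancy grid of 0/1 cells suitable as
    input to a local path planner.
    """
    grid = [[0] * width for _ in range(depth)]
    for (x, y, z) in points:
        row = int(x / CELL)                # distance ahead of vehicle
        col = int(y / CELL) + width // 2   # lateral offset from center
        if 0 <= row < depth and 0 <= col < width and z > HEIGHT_THRESH:
            grid[row][col] = 1             # occupied cell
    return grid
```

A local planner would then search this grid for an obstacle-free corridor each scan cycle, while the global navigator supplies the goal direction.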
Low-bandwidth digital video wireless transmission is essential for Unmanned Ground Vehicle (UGV) teleoperation. In this paper, we propose, for the first time, audio-like-bandwidth (64 kbps) wireless TV-class digital video communication (reduced from an original bandwidth of 210 Mbps) based on POC's Soft-Computing and Soft-Communication (SC2) technologies, which integrate: highly parallel electronics (256 processors in parallel); supercomputer-class processing power (8 BOPS, or eight billion operations per second); small power consumption (1-2 W); and small packaging (2x3), leading in the near future to business-card-size, fully upgradable PCMCIA packaging. The proposed technology has applications in UGV communication platforms, providing communication and autonomous image and signal processing.
Unmanned vehicles, such as mobile robots, must exhibit adjustable autonomy. They must be able to be self-sufficient when the situation warrants; however, as they interact with each other and with humans, they must exhibit an ability to dynamically adjust their independence or dependence as co-operative agents attempting to achieve some goal. This is what we mean by adjustable autonomy. We have been investigating various modes of communication that enhance a robot's capability to work interactively with other robots and with humans. Specifically, we have been investigating how natural language and gesture can provide a user-friendly interface to mobile robots. We have extended this initial work to include semantic and pragmatic procedures that allow humans and robots to act co-operatively, based on whether or not goals have been achieved by the various agents in the interaction. By processing commands that are either spoken or initiated by clicking buttons on a Personal Digital Assistant, and by gesturing either naturally or symbolically, we track the various goals of the interaction, the agent involved in the interaction, and whether or not the goal has been achieved. The various agents involved in achieving the goals are each aware of their own and others' goals and of what goals have been stated or accomplished, so that eventually any member of the group, be it a robot or a human, can if necessary interact with the other members to achieve the stated goals of a mission.
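The goal bookkeeping described above amounts to a shared record of who stated which goal and whether it has been achieved. A minimal sketch of such a structure follows; the class and method names are hypothetical, not the paper's implementation.

```python
# Hypothetical sketch of shared goal tracking for a human-robot team:
# each stated goal records its agent and achievement status, so any
# team member can query what remains to be done.
class GoalTracker:
    def __init__(self):
        self.goals = []  # shared by all agents in the interaction

    def state_goal(self, agent, description):
        """Record a goal stated by an agent (spoken, PDA, or gesture)."""
        self.goals.append({"agent": agent, "goal": description,
                           "achieved": False})

    def achieve(self, description):
        """Mark a stated goal as accomplished."""
        for g in self.goals:
            if g["goal"] == description:
                g["achieved"] = True

    def pending(self):
        """Goals any group member could still pick up and pursue."""
        return [g for g in self.goals if not g["achieved"]]
```

Because the record is shared rather than private to one agent, either a robot or a human can consult `pending()` and take over an unfinished goal, which is the co-operative behavior the paper aims at.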
Teleoperation is important to the Army because of its interest in incorporating robotics in the battlefield. The objective of this research is to demonstrate the capability to drive multiple vehicles using only a single driver. Teleoperation is an important near-term goal, and we hope that this research will further it.