KEYWORDS: Control systems, Motion models, Sensors, Filtering (signal processing), Systems modeling, Computer programming, Adaptive control, Detection and tracking algorithms, Simulink, Control systems design
In this paper, the controllability of a Mecanum omnidirectional vehicle (ODV) is investigated. An adaptive drive
controller is developed that guides the ODV over irregular and unpredictable driving surfaces. Using sensor
fusion with appropriate filtering, the ODV obtains an accurate perception of the conditions it encounters and then adapts to them to control its motion robustly. Current applications of Mecanum ODVs are designed for use on smooth, regular driving surfaces and do not actively detect the characteristics of disturbances in the terrain. The intention of this work is to exploit the mobility of ODVs in environments where they were not originally intended to be used. The methods proposed in this paper were implemented in hardware on an ODV. The experiments did not perform as designed due to incorrect assumptions and over-simplification of the
system model. Future work will concentrate on developing more robust control schemes to account for the
unknown nonlinear dynamics inherent in the system.
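As background for the drive-control problem above, the standard inverse-kinematic relation for a four-wheel Mecanum platform maps a commanded body velocity to individual wheel speeds. A minimal sketch (the wheel radius, geometry values, and sign convention are illustrative assumptions, not the paper's vehicle parameters):

```python
import numpy as np

def mecanum_wheel_speeds(vx, vy, wz, r=0.05, lx=0.20, ly=0.15):
    """Inverse kinematics for a conventional 4-wheel Mecanum platform.

    vx, vy : body-frame linear velocities (m/s)
    wz     : yaw rate (rad/s)
    r      : wheel radius (m); lx, ly : half-distances to the wheel axes (m)
    Returns angular speeds (rad/s) for wheels FL, FR, RL, RR.
    """
    k = lx + ly
    return np.array([
        (vx - vy - k * wz) / r,  # front-left
        (vx + vy + k * wz) / r,  # front-right
        (vx + vy - k * wz) / r,  # rear-left
        (vx - vy + k * wz) / r,  # rear-right
    ])
```

A pure forward command drives all four wheels equally, while a pure yaw command drives the left and right sides in opposition; an adaptive controller of the kind described would adjust the commands around this nominal model.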
An ultra-wideband (UWB) inter-radio ranging technology with measurement resolution of +/-0.5 ft and range up to
0.5 kilometer under applicable FCC regulations was recently introduced. However, the measurement data are highly erroneous due to stochastic variation in the device and multipath radio-wave reflections. This paper presents fuzzy-logic-tuned double tracking filters as a solution for removing misinformation from the data. The first tracker locates the overall center of the data in the presence of large sporadic noise. A fuzzy membership function admits only neighborhood data to a second tracker, which handles the smaller-deviation noise. The fuzzy neighborhood filter approach has been
successfully applied to clean up the UWB radio ranges. Experimental results are shown.
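The two-stage structure can be sketched with cascaded first-order trackers; here a hard admission threshold stands in for the paper's fuzzy-logic tuning, and all gains are illustrative assumptions:

```python
def double_track(ranges, alpha1=0.05, alpha2=0.3, gate=3.0):
    """Two cascaded first-order trackers. The slow first tracker locates the
    overall center of the data despite large sporadic outliers; an admission
    gate (a hard threshold standing in for the fuzzy membership) passes only
    neighborhood samples to the faster second tracker, which smooths the
    remaining small-deviation noise."""
    coarse = fine = float(ranges[0])
    out = []
    for z in ranges:
        coarse += alpha1 * (z - coarse)   # stage 1: robust center estimate
        if abs(z - coarse) < gate:        # neighborhood admission test
            fine += alpha2 * (z - fine)   # stage 2: fine smoothing
        out.append(fine)
    return out
```

An isolated multipath spike lands far from the coarse tracker's center, is rejected by the gate, and never perturbs the fine estimate.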
Unmanned ground vehicles (UGVs) will play an important role in the nation's next-generation ground force. Advances
in sensing, control, and computing have enabled a new generation of technologies that bridge the gap between manual
UGV teleoperation and full autonomy. In this paper, we present current research on a unique command and control
system for UGVs named PointCom (Point-and-Go Command). PointCom is a semi-autonomous command system for
one or multiple UGVs. The system, when complete, will be easy to operate and will enable significant reduction in
operator workload by utilizing an intuitive image-based control framework for UGV navigation and allowing a single
operator to command multiple UGVs. The project leverages new image processing algorithms for monocular visual
servoing and odometry to yield a unique, high-performance fused navigation system. Human-Computer Interface (HCI)
techniques from the entertainment software industry are being used to develop video-game-style interfaces that require
little training and build upon the navigation capabilities. By combining an advanced navigation system with an intuitive
interface, a semi-autonomous control and navigation system is being created that is robust, user friendly, and less
burdensome than many current generation systems.
This research is part of a broader effort to develop a supervisory control system for small robot navigation. Previous research and development focused on a "one-touch, point-and-go" navigation control system using visual homing. In the current research, we have begun to investigate visual tracking methods to extend supervisory control to tasks involving tracking and pursuit of a moving object. Ground-to-ground tracking of arbitrary targets in natural and damaged environments is challenging. Automatic tracking is expected to fail due to line-of-sight obstruction, lighting gradients, rapid changes in perspective and orientation, etc. In supervisory control, the automatic tracker needs to be able to alert the operator when it is at risk of losing track or when it may have already lost track, and to do so with a low false-alarm rate. The focus of the current research is on detecting tracking failure during pursuit. We are attempting to develop approaches to detecting failure that can integrate different low-level tracking algorithms. In this paper, we demonstrate stereo vision methods for pursuit tracking and examine several indicators of track loss in field experiments with a variety of moving targets in natural environments.
When people drive off-road, they look at the upcoming terrain and make a variety of judgments: whether to avoid or attempt to cross a patch of terrain, whether to slow down or speed up, etc. They consider a variety of factors, including perceived slope, obstacles, resistance, traction, sinkage, roughness, and the limitations of their perception. They judge many of the handling factors based on recollections of driving on other terrain with similar visual appearance. Perception and terrain-understanding algorithms with similar capabilities are needed for unmanned ground vehicle (UGV) autonomous mobility. The objective of this research is to begin to develop methods that a UGV can use to learn to associate terrain appearance with handling on that terrain. We demonstrate methods to identify models of speed and acceleration as a function of throttle command, and of power consumption as a function of speed and acceleration, using data collected by on-board sensors as the UGV executes test maneuvers. We demonstrate methods to characterize terrain type from visual appearance, and we investigate the hypothesis that terrain with different handling characteristics can be discriminated based on visual-appearance characterization.
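The identification step described above amounts to fitting simple regressions to logged telemetry. A minimal sketch with synthetic data (the model forms and coefficients below are assumptions for illustration, not the paper's identified models):

```python
import numpy as np

# Synthetic telemetry standing in for on-board sensor logs (illustrative only).
rng = np.random.default_rng(0)
throttle = np.linspace(0.1, 1.0, 50)
speed = 4.0 * throttle + 0.05 * rng.normal(size=50)   # assumed affine response
accel = np.gradient(speed)                            # per-sample acceleration
power = 120.0 * speed + 30.0 * speed * accel + rng.normal(size=50)

# Identify speed ~ throttle and power ~ (speed, speed*accel) by least squares.
A1 = np.column_stack([throttle, np.ones_like(throttle)])
coef_speed, *_ = np.linalg.lstsq(A1, speed, rcond=None)

A2 = np.column_stack([speed, speed * accel])
coef_power, *_ = np.linalg.lstsq(A2, power, rcond=None)
```

Repeating such fits per terrain class yields the per-terrain handling models that the visual-appearance characterization is then asked to discriminate.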
This paper details the development of a minimal set of locally distributed navigation beacons that can provide new waypoints in dense obstacle fields. The 'beacons' provide direction and magnitude inputs for the robot to use for its next waypoint. The beacons are placed in such a manner that all locations within a bounded playing field can reach a goal area in a desired number of steps. This guarantee of total coverage comes only with tuning the magnitudes and directions of each beacon, as well as their positions in the field. Key to this approach is the underlying 'color map'. The color map assigns a color to regions of the playing field based on whether the region terminates at the goal ('green'); leaves the playing field and does not return ('red'); or does not leave the playing field but fails to terminate at the goal within a fixed number of steps, also known as 'stagnation' ('yellow'). Changes in the placement of the beacons and their associated parameters result in changes to the color map. A software tool has been developed to allow a user to see the instantaneous changes in the color map when changes are made to the beacons. This paper will also describe how the beacons are related to both Voronoi diagrams and nearest-neighbor classifiers, thus motivating the final name for the navigation beacons: Voronoi Classifiers. Future work is detailed, including the development of color maps for other cost metrics (such as distance traveled, power consumed, or terrain trafficability) and efforts to develop an algorithm to find the infimum solution (minimizing the maximum steps, distance, etc.).
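The green/red/yellow classification can be illustrated with a small simulation; the beacon and goal representations below are hypothetical stand-ins for the paper's software tool, and the "leaves and does not return" case is simplified to classifying on first exit:

```python
import numpy as np

def color_map(beacons, goal, field=10.0, grid=50, max_steps=30):
    """Classify each start cell by simulating waypoint hops. At every hop the
    robot follows the NEAREST beacon's direction and magnitude (a
    nearest-neighbor / Voronoi rule). 'green' = reaches the goal disc,
    'red' = leaves the field (simplification: no return is modeled),
    'yellow' = still wandering after max_steps ('stagnation')."""
    colors = {}
    for i in range(grid):
        for j in range(grid):
            p = np.array([i, j], dtype=float) * field / (grid - 1)
            color = 'yellow'
            for _ in range(max_steps):
                if np.linalg.norm(p - goal['pos']) < goal['radius']:
                    color = 'green'
                    break
                if not (0.0 <= p[0] <= field and 0.0 <= p[1] <= field):
                    color = 'red'
                    break
                b = min(beacons, key=lambda q: np.linalg.norm(p - q['pos']))
                p = p + b['mag'] * b['dir']
            colors[(i, j)] = color
    return colors
```

Re-running the classification after moving a beacon or changing its magnitude shows instantly how the colored regions shift, which is the interactive-tuning loop the abstract describes.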
We present two methods for a localization system, defined as the "angle of arrival" scheme, which computes the position and heading of an autonomous vehicle system (AVS) by fusing odometry data with measurements of the relative azimuth angles of known landmarks (in this case, reflectors of a stabilized laser/reflector system). The first method involves a combination of a geometric transformation and a recursive least-squares approach with a forgetting factor. The second method is a direct approach using variants of the Unscented Kalman filter. Both methods are examined in simulation and the results presented.
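The recursive least-squares component of the first method is standard; a generic sketch of one RLS update with a forgetting factor (the geometric transformation and the azimuth measurement model from the paper are not reproduced here):

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.98):
    """One recursive-least-squares update with forgetting factor lam.
    theta : parameter estimate (n,), P : covariance-like matrix (n, n),
    phi : regressor (n,), y : scalar measurement."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)            # gain vector
    theta = theta + (K * (y - phi.T @ theta)).ravel()
    P = (P - K @ phi.T @ P) / lam                    # discount old data
    return theta, P
```

Streaming regressor/measurement pairs through the update converges to the underlying parameters, while the forgetting factor keeps the estimate responsive to slow drift, which is its purpose in a moving-vehicle setting.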
Military and security operations often require that participants move as quickly as possible, while avoiding harm. Humans judge how fast they can drive, how sharply they can turn and how hard they can brake, based on a subjective assessment of vehicle handling, which results from responsiveness to driving commands, ride quality, and prior experience in similar conditions. Vehicle handling is a product of the vehicle dynamics and the vehicle-terrain interaction. Near real-time methods are needed for unmanned ground vehicles to assess their handling limits on the current terrain in order to plan and execute extreme maneuvers. This paper describes preliminary research to develop on-the-fly procedures to capture vehicle-terrain interaction data and simple models of vehicle response to driving commands, given the vehicle-terrain interaction data.
The U.S. Army is seeking to develop autonomous off-road mobile robots to perform tasks in the field such as supply delivery and reconnaissance in dangerous territory. A key problem to be solved with these robots is off-road mobility, to ensure that the robots can accomplish their tasks without loss or damage. We have developed a computer model of one such concept robot, the small-scale "T-1" omnidirectional vehicle (ODV), to study the effects of different control strategies on the robot's mobility in off-road settings. We built the dynamic model in ADAMS/Car and the control system in Matlab/Simulink. This paper presents the template-based method used to construct the ADAMS model of the T-1 ODV. It discusses the strengths and weaknesses of ADAMS/Car software in such an application, and describes the benefits and challenges of the approach as a whole. The paper also addresses effective linking of ADAMS/Car and Matlab for complete control system development. Finally, this paper includes a section describing the extension of the T-1 templates to other similar ODV concepts for rapid development.
Proprioception is a sense of body position and movement that supports the control of many automatic motor functions such as posture and locomotion. This concept, normally relegated to the fields of neural physiology and kinesiology, is being utilized in the field of unmanned mobile robotics. This paper looks at developing proprioceptive behaviors for use in controlling an unmanned ground vehicle. First, we will discuss the field of behavioral control of mobile robots. Next, a discussion of proprioception and the development of proprioceptive sensors will be presented. We will then focus on the development of a unique neural-fuzzy architecture that will be used to incorporate the control behaviors coming directly from the proprioceptive sensors. Finally we will present a simulation experiment where a simple multi-sensor robot, utilizing both external and proprioceptive sensors, is presented with the task of navigating an unknown terrain to a known target position. Results of the mobile robot utilizing this unique fusion methodology will be discussed.
This paper describes a method of acquiring behaviorist-based reactive control strategies for an autonomous skid-steer robot operating in an unknown environment. First, a detailed interactive simulation of the robot (including simplified vehicle kinematics, sensors, and a randomly generated environment) is developed with the capability of a human driver supplying all control actions. We then introduce a new modular neural-fuzzy system called Threshold Fuzzy Systems (TFS). A TFS has two unique features that distinguish it from traditional fuzzy-logic and neural-network systems: (1) the rulebase of a TFS contains only single-antecedent, single-consequent rules, called a Behaviorist Fuzzy Rulebase (BFR), and (2) a highly structured adaptive node network, called a Rule Dominance Network (RDN), is added to the fuzzy-logic inference engine. Each rule in the BFR is a direct mapping of an input sensor to a system output. Connection nodes in the RDN occur when rules in the BFR conflict. The nodes of the RDN contain functions that are used to suppress the output of other conflicting rules in the BFR. Supervised training, using error backpropagation, is used to find the optimal parameters of the dominance functions. The usefulness of the TFS approach becomes evident when examining an autonomous vehicle system (AVS). In this paper, a TFS controller is developed for a skid-steer AVS. Several hundred simulations are conducted, and results for the AVS with a traditional fuzzy controller and with a TFS controller are compared.
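The single-antecedent rulebase plus dominance-suppression structure can be caricatured in a few lines; everything below (rule encoding, suppression form, defuzzification) is an illustrative assumption, not the paper's formulation or its trained parameters:

```python
def tfs_output(sensors, rules, dominance):
    """Toy sketch of the Threshold Fuzzy System idea. Each rule maps a
    SINGLE sensor reading to a single output value; dominance entries let
    one rule's activation suppress a conflicting rule's strength."""
    # rules: list of (sensor_name, membership_fn, output_value)
    strengths = [mf(sensors[name]) for name, mf, _ in rules]
    # dominance: list of (i, j, weight) -- rule i suppresses rule j
    for i, j, w in dominance:
        strengths[j] *= max(0.0, 1.0 - w * strengths[i])
    num = sum(s * out for s, (_, _, out) in zip(strengths, rules))
    den = sum(strengths) or 1.0   # avoid divide-by-zero when nothing fires
    return num / den              # centroid-style weighted output
```

With an "avoid obstacle" rule dominating a "seek goal" rule, a strong obstacle reading silences the goal-seeking output instead of averaging against it; tuning the suppression weights is the role the paper assigns to backpropagation training.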