Bio-inspired algorithms have been increasingly applied to autonomous robot path planning in complex environments. These environments are often restrictive in nature, where robot navigation must succeed with low margins of error. The complexity of an environment, determined by obstacle density and how navigable it is for the robot, limits planner performance, as does the scale of the environment to be examined for a given problem. These performance limitations are especially evident in time-sensitive real-world applications, such as autonomous off-road vehicles or search-and-rescue operations, where both solution quality and computation speed are critical. One way to mitigate the shortcomings of bio-inspired algorithms is to decompose the problem environment into readily solvable segments. This paper proposes a graph-based, near-optimal path-planning approach that leverages a bio-inspired algorithm for rapid path planning in complex environments. The proposed model uses centroid cell decomposition to build a graph-based map of the complex environment. In this approach, centroid points are regulated and determined by the bio-inspired optimization as part of generating the final robot trajectories. To improve upon the shortcomings of typical graph-based algorithms, ant colony optimization is then applied to determine a near-optimal robot traversal path. The model is validated in simulated environments against comparable algorithms.
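To make the decomposition-plus-ACO idea concrete, the sketch below runs a minimal ant colony optimization over a small weighted graph standing in for the centroid graph. The node names, edge lengths, and ACO constants are illustrative assumptions, not values from the paper.

```python
import random

# graph[u][v] = edge length between centroid nodes u and v (illustrative values).
graph = {
    "A": {"B": 2.0, "C": 4.0},
    "B": {"A": 2.0, "C": 1.5, "D": 5.0},
    "C": {"A": 4.0, "B": 1.5, "D": 2.5},
    "D": {"B": 5.0, "C": 2.5},
}
pheromone = {u: {v: 1.0 for v in nbrs} for u, nbrs in graph.items()}
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.5, 10.0  # typical ACO constants (assumed)

def build_path(start, goal):
    """One ant walks from start to goal, choosing edges probabilistically."""
    path, node = [start], start
    while node != goal:
        choices = [v for v in graph[node] if v not in path]
        if not choices:
            return None  # dead end; discard this ant
        weights = [pheromone[node][v] ** ALPHA * (1.0 / graph[node][v]) ** BETA
                   for v in choices]
        node = random.choices(choices, weights=weights)[0]
        path.append(node)
    return path

def path_length(path):
    return sum(graph[u][v] for u, v in zip(path, path[1:]))

best = None
for _ in range(100):                                    # iterations
    paths = [build_path("A", "D") for _ in range(10)]   # 10 ants per iteration
    paths = [p for p in paths if p]
    for u in pheromone:                                 # pheromone evaporation
        for v in pheromone[u]:
            pheromone[u][v] *= (1.0 - RHO)
    for p in paths:                                     # deposit proportional to quality
        for u, v in zip(p, p[1:]):
            pheromone[u][v] += Q / path_length(p)
    for p in paths:
        if best is None or path_length(p) < path_length(best):
            best = p

print(best, path_length(best))
```

Evaporation keeps early, possibly poor paths from dominating, while deposits proportional to 1/length bias later ants toward shorter traversals.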
Off-road autonomous navigation remains an ongoing challenge for autonomous ground vehicles (AGVs). The challenges of navigating an unstructured environment include identifying and detecting both positive and negative obstacles, distinguishing navigable from non-navigable vegetation, identifying soft soil, and negotiating rough or sloping terrain. While many recent works have dealt with various aspects of the off-road navigation problem, until now there has not been a free and open-source off-road autonomy stack with integrated modules for perception, planning, and control. Therefore, we have recently developed the NATURE (Navigating All Terrains Using Robotic Exploration) autonomy stack as a publicly available resource to facilitate the advancement of off-road navigation research. The NATURE stack is implemented using the Robot Operating System (ROS) and can be built to work with both ROS-1 and ROS-2. The modular nature of the NATURE stack makes it an ideal resource for researchers who want to evaluate a particular algorithm for perception, planning, or control without developing an entire navigation stack from scratch. NATURE features several options for both global and local path planning, including A*, artificial potential field, and spline-based planning, as well as multiple options for perception, including a simple geometrically based obstacle finder and a more advanced custom traversability algorithm derived from 3D lidar data. In this presentation we give an overview of the NATURE stack and show some past uses of the stack in both simulated and field experiments.
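As an illustration of one of the global planners NATURE names, here is a minimal A* search on a 2D occupancy grid. This is a generic sketch, not the stack's actual implementation, and the grid contents are made up.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (1 = obstacle). Returns a list of cells."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]                  # (f, g, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue                      # already expanded with a better cost
        came_from[cur] = parent
        if cur == goal:                   # walk parents back to the start
            path = [cur]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < g_cost.get(nxt, float("inf"))):
                g_cost[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None  # no path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the obstacle row
```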
Failures by autonomous ground vehicles (AGVs) may be caused by many different factors in hardware, software, or integration. Effective safety and reliability testing for AGVs is complicated by the fact that failures are not only infrequent but also difficult to diagnose. In this work, we discuss the results of a three-phase project to develop a simulation-based approach to AGV architecture design, test implementation, and simulation integration. This approach features a modular AGV architecture, reliability testing with a physics-based simulator (the MSU Autonomous Vehicle Simulator, or MAVS), and validation with a limited number of field trials.
Autonomous driving in off-road environments is challenging because the terrain lacks a definite structure. Assessment of terrain traversability is the main factor in determining the autonomous driving capability of a ground vehicle. Traversability in off-road environments is defined as the portion of a trail that a given vehicle can drive. It is crucial for an autonomous ground vehicle (AGV) to avoid obstacles such as trees and boulders while traversing the trails. This research has three main objectives: a) collection of 2D camera data in off-road, unstructured environments; b) annotation of the 2D camera data according to a vehicle's ability to drive through the trails; and c) application of a semantic segmentation algorithm to the labeled dataset to predict the trajectory based on the type of ground vehicle. Our models and labeled datasets will be publicly available.
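A hedged sketch of objective (c) is shown below: fine-tuning an off-the-shelf segmentation network on a labeled trail dataset. The architecture (DeepLabV3), the class list, and the tensor shapes are illustrative assumptions; the abstract does not specify which model or label set the authors use.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 3  # assumed labels, e.g., traversable trail / non-traversable / obstacle
model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

# Stand-ins for camera frames and per-pixel annotations.
images = torch.randn(2, 3, 256, 256)
labels = torch.randint(0, NUM_CLASSES, (2, 256, 256))

# One training step: forward pass, per-pixel cross-entropy, backprop.
model.train()
optimizer.zero_grad()
logits = model(images)["out"]          # shape (N, NUM_CLASSES, H, W)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```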
Autonomous navigation (also known as self-driving) has rapidly advanced in the last decade for on-road vehicles. In contrast, off-road vehicles still lag in autonomous navigation capability. Sensing and perception strategies used successfully in on-road driving fail in the off-road environment. This is because on-road environments can often be neatly categorized both semantically and geometrically into regions like driving lane, road shoulder, and passing lane and into objects like stop sign or vehicle. The off-road environment is neither semantically nor geometrically tidy, making it difficult not only to develop perception algorithms that can distinguish between drivable and non-drivable regions, but also to determine what constitutes "drivable" for a given vehicle. In this work, the factors affecting traversability are discussed, and an algorithm for assessing the traversability of off-road terrain in real time is developed and presented. The predicted traversability is compared to ground-truth traversability metrics in simulation. Finally, we show how this traversability metric can be automatically calculated by using physics-based simulation with the MSU Autonomous Vehicle Simulator (MAVS). A simulated off-road autonomous navigation task using a real-time implementation of the traversability metric is presented, highlighting the utility of this approach.
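The abstract does not reproduce the metric itself, but a common geometric starting point is to score local height-map patches by slope and roughness, as in this illustrative sketch. The scoring function and thresholds are assumptions, not the paper's algorithm.

```python
import numpy as np

def traversability(patch, cell_size=0.25, max_slope_deg=30.0, max_rough=0.15):
    """Score a local height-map patch in [0, 1] from slope and roughness.
    Geometric proxy only (assumed); the paper's real-time metric is not
    specified in the abstract and also depends on the vehicle."""
    gy, gx = np.gradient(patch, cell_size)                 # height gradients (m/m)
    slope = np.degrees(np.arctan(np.hypot(gx, gy))).mean() # mean slope angle
    rough = np.std(patch - patch.mean())                   # height variation (m)
    s = max(0.0, 1.0 - slope / max_slope_deg)              # steepness penalty
    r = max(0.0, 1.0 - rough / max_rough)                  # roughness penalty
    return s * r

rng = np.random.default_rng(0)
flat = np.zeros((8, 8))                    # perfectly flat patch -> score 1.0
bumpy = rng.normal(0.0, 0.1, (8, 8))       # rough patch -> score near 0
print(traversability(flat), traversability(bumpy))
```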
Semantic segmentation using convolutional neural networks is a widely used technique in scene understanding. Because these techniques are data-intensive, many devices struggle to store and process even a small batch of images at a time. Moreover, since training algorithms require very large volumes of data, it can be prudent to store training datasets in compressed form, and to accommodate the limited bandwidth of a transmission network, images may be compressed before being sent to their destination. JPEG (Joint Photographic Experts Group) is a widely used standard for image compression; however, JPEG introduces several unwanted artifacts in images after compression. In this paper, we explore the effect of JPEG compression on the performance of several deep-learning-based semantic segmentation techniques, for both synthetic and real-world datasets, at various compression levels. For several established architectures trained on compressed synthetic and real-world datasets, we observed performance equivalent to (and sometimes better than) training on the uncompressed datasets, with a substantial reduction in storage space. We also analyze the effect of combining the original dataset with copies compressed at different JPEG quality levels and observed a performance improvement over the baseline. Our evaluation and analysis indicate that a segmentation network trained on a compressed dataset can be a better option in terms of performance. We also show that JPEG compression can act as a data augmentation technique, improving the performance of semantic segmentation algorithms.
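The compression-as-augmentation idea can be sketched with a JPEG round-trip in Pillow; the quality levels below are illustrative, not the ones evaluated in the paper.

```python
import io
from PIL import Image

def jpeg_recompress(img, quality):
    """Round-trip an image through JPEG at the given quality level,
    introducing the compression artifacts studied in the paper."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).copy()

QUALITY_LEVELS = [90, 50, 10]  # illustrative quality settings

def augment(img):
    """Yield the original frame plus JPEG-degraded copies."""
    yield img
    for q in QUALITY_LEVELS:
        yield jpeg_recompress(img, q)

img = Image.new("RGB", (64, 64), (120, 150, 90))  # stand-in for a camera frame
train_batch = list(augment(img))
print(len(train_batch))  # 1 original + 3 compressed variants
```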
For autonomous vehicles, 3D rotating LiDAR sensors are often critical to the vehicle's ability to sense its environment. Generally, these sensors scan their environment using multiple laser beams to gather information about the range and the intensity of the reflection from an object. LiDAR capabilities have evolved such that some autonomous systems employ multiple rotating LiDARs to gather greater amounts of data about the vehicle's surroundings. For these multi-LiDAR systems, the placement of the sensors determines the density of the combined point cloud. We perform preliminary research on optimal LiDAR placement strategies for an off-road autonomous vehicle known as the Halo project. We use the Mississippi State University Autonomous Vehicle Simulator (MAVS) to generate large amounts of labeled LiDAR data that can be used to train and evaluate a neural network that processes LiDAR data on the vehicle. The trained networks are evaluated, and their performance metrics are then used to generalize the performance of each sensor pose. Data generation, training, and evaluation were performed iteratively to produce a parametric analysis of the effectiveness of various LiDAR poses in the multi-LiDAR system. We also describe and evaluate the intrinsic and extrinsic calibration methods applied in the multi-LiDAR system. In conclusion, we found that our simulations are an effective way to evaluate the efficacy of various LiDAR placements based on the performance of the neural network used to process the data and the density of the point cloud in areas of interest.
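One of the two evaluation signals, point-cloud density in areas of interest, can be approximated as points per unit volume inside an axis-aligned region, as in this sketch. The region of interest and the clouds are synthetic stand-ins; the study's exact density measure is not specified in the abstract.

```python
import numpy as np

def roi_density(points, roi_min, roi_max):
    """Points per cubic meter inside an axis-aligned region of interest.
    Illustrative proxy for comparing sensor poses (assumed metric)."""
    points = np.asarray(points)
    roi_min, roi_max = np.asarray(roi_min), np.asarray(roi_max)
    inside = np.all((points >= roi_min) & (points <= roi_max), axis=1)
    volume = np.prod(roi_max - roi_min)
    return inside.sum() / volume

# Stand-in clouds from two hypothetical sensor poses with different coverage.
rng = np.random.default_rng(1)
cloud_a = rng.uniform(-10, 10, (50_000, 3))   # wide, sparse coverage
cloud_b = rng.uniform(-5, 5, (50_000, 3))     # narrow, dense coverage
roi = ([-2.0, -2.0, -1.0], [2.0, 2.0, 1.0])   # assumed area of interest (m)
print(roi_density(cloud_a, *roi), roi_density(cloud_b, *roi))
```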
The Sensor Analysis and Intelligence Laboratory (SAIL) at Mississippi State University's (MSU's) Center for Advanced Vehicular Systems (CAVS) and the Social, Therapeutic and Robotic Systems Lab (STaRS) in MSU's Computer Science and Engineering department have designed and implemented a modular platform for automated sensor data collection and processing, named the Hydra. The Hydra is an open-source system (all artifacts and code are published to the research community) consisting of a modular rigid mounting platform (for sensors, processors, and power supply and conditioning) built on the Picatinny rail (a standardized mounting system originally developed for firearms), a software platform utilizing the Robot Operating System (ROS) for data collection, and design packages (schematics, CAD drawings, etc.). The Hydra system streamlines the assembly of a configurable multi-sensor system. It is intended to enable researchers to quickly select sensors, assemble them as an integrated system, and collect data without having to recreate the Hydra's hardware and software. Prototype results are presented from a recent data collection on a small robot during a SWAT-robot training exercise.