Engineers need a quantitative understanding of the charge-depletion characteristics of the battery bank that powers a mobile unmanned ground vehicle (UGV) in order to estimate mission duration, cost of transport, range, and other useful quantities. We discuss a data analysis that determines the energy use of a ‘large’ wheeled robot, the Clearpath™ Warthog, with a Gross Vehicle Weight (GVW) of 590 kg. The analysis is based on straight-path, level trials over a gravel surface, and its results indicate how far the UGV can travel over that surface. We give basic methods for obtaining expected energy usage, along with tables of estimates for cost of transport and mission range. The analysis includes a nonparametric method for identifying and handling the small number of ‘outlier’ readings that often occur in field trials.
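As a rough illustration of the quantities named in this abstract, the sketch below computes a dimensionless cost of transport and a range estimate from mean power draw, after screening readings with a median/MAD filter. The abstract does not specify which nonparametric outlier method was used, so the MAD filter, the function names, and all numbers (battery capacity, speed, power samples) are illustrative assumptions, not the paper's data.

```python
import numpy as np

def filter_outliers_mad(x, k=3.5):
    """Drop readings farther than k scaled MADs from the median (a nonparametric screen)."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    if mad == 0:
        return x
    return x[np.abs(x - med) <= k * 1.4826 * mad]

def cost_of_transport(power_w, mass_kg, speed_mps, g=9.81):
    """Dimensionless cost of transport: P / (m * g * v)."""
    return power_w / (mass_kg * g * speed_mps)

def estimated_range_m(battery_wh, power_w, speed_mps):
    """Range = runtime * speed = (usable energy / mean power) * v."""
    return (battery_wh * 3600.0 / power_w) * speed_mps

# Hypothetical power-draw samples (W) from level gravel trials; the 2400 W spike is filtered out.
power_samples_w = filter_outliers_mad([850, 870, 860, 2400, 855])
p_mean = float(np.mean(power_samples_w))
print(cost_of_transport(p_mean, mass_kg=590.0, speed_mps=1.0))        # ~0.15 (illustrative)
print(estimated_range_m(battery_wh=1900.0, power_w=p_mean, speed_mps=1.0))  # ~8 km (illustrative)
```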
Robots are ideal surrogates for performing tasks that are dull, dirty, and dangerous. To fully achieve this ideal, a robotic teammate should be able to autonomously perform human-level tasks in unstructured environments where we do not want humans to go. In this paper, we take a step toward realizing that vision by integrating state-of-the-art advancements in intelligence, perception, and manipulation on the RoMan (Robotic Manipulation) platform. RoMan comprises two 7-degree-of-freedom (DoF) limbs connected to a 1-DoF torso and mounted on a tracked base. Multiple lidars are used for navigation, and a stereo depth camera provides point clouds for grasping. Each limb has a 6-DoF force-torque sensor at the wrist, with a dexterous three-finger gripper on one limb and a stronger four-finger, claw-like hand on the other. Tasks begin with an operator specifying a mission type, a desired final destination for the robot, and a general region where the robot should look for grasps. All other portions of the task are completed autonomously, including navigation, object identification and pose estimation (if the object is known) via deep learning or perception through search, fine maneuvering, grasp planning via a grasp library, arm motion planning, and manipulation planning (e.g., dragging if the object is deemed too heavy to lift freely). Finally, we present initial test results on two notional tasks: clearing a road of debris, such as a heavy tree or a pile of unknown light debris, and opening a hinged container to retrieve a bag inside it.
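The task flow described above (operator supplies a mission spec; the robot handles the rest) can be summarized as a staged pipeline. The sketch below is only a schematic reading of the abstract; the class and function names are invented for illustration and do not correspond to RoMan's actual software interfaces.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Stage(Enum):
    NAVIGATE = auto()
    IDENTIFY_AND_ESTIMATE_POSE = auto()   # deep learning or perception through search
    FINE_MANEUVER = auto()
    PLAN_GRASP = auto()                   # grasp-library lookup
    PLAN_ARM_MOTION = auto()
    PLAN_MANIPULATION = auto()            # e.g., drag instead of lift if the object is too heavy

@dataclass
class MissionSpec:
    mission_type: str     # e.g., "clear_debris" or "open_container"
    goal_pose: tuple      # desired final destination for the robot
    grasp_region: tuple   # general region where the robot should look for grasps

def run_mission(spec: MissionSpec):
    """Walk the autonomous stages in order; the operator supplies only the MissionSpec."""
    for stage in Stage:
        print(f"{spec.mission_type}: executing {stage.name}")

run_mission(MissionSpec("clear_debris", goal_pose=(3.0, 1.5, 0.0), grasp_region=(2.5, 1.0, 0.5)))
```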
This paper presents results from an experiment performed at the Combat Capabilities Development Command, Army Research Laboratory, Autonomous Systems Division (ASD) on the precision of a 7-degree-of-freedom robotic manipulator used on the RoMan robotic platform. We quantified the imprecision in the arm end-effector's final position after arm movements over distances ranging from 362 mm to 1300 mm. In theory, for open-loop grasping, one should be able to compute the final X-Y-Z position of the gripper using forward kinematics. In practice, uncertainty in the arm calibration induces uncertainty in the forward kinematics, so it is desirable to measure this imprecision after different arm calibrations. Forty-one runs were performed under different calibration regimes. Ground truth was provided by measuring arm motions with a Vicon motion capture system while the chassis of the platform remained stationary throughout the experiment. Using a digital protractor to align the arm joints to the ground plane for a “Level” calibration, the average total offset of the gripper in 3D space was 19.6 mm, with a maximum of about 30 mm. After a “Field” (i.e., hand-eye) calibration, which aligned fiducials on the joints, the average total offset was 37.8 mm, with a maximum of about 80 mm. The distance travelled by the arm was found to be uncorrelated with total offset. The experiment demonstrated that the total (X, Y, Z) offset in the gripper's final position is reduced significantly if the robot arm is first calibrated using a standard “Level” calibration, whereas the “Field” calibration method results in a significant increase in offset variation.
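The "total offset" metric reported above appears to be the 3D distance between the forward-kinematics prediction of the gripper position and the Vicon-measured ground truth. A minimal sketch of that comparison is given below; the function name and the sample coordinates are illustrative assumptions, not values from the experiment.

```python
import numpy as np

def total_offset_mm(predicted_xyz, measured_xyz):
    """Euclidean distance (mm) between the forward-kinematics prediction
    and the Vicon-measured final gripper position."""
    return float(np.linalg.norm(np.asarray(predicted_xyz) - np.asarray(measured_xyz)))

# Hypothetical single run: FK prediction vs. motion-capture ground truth, in mm.
print(total_offset_mm((500.0, 200.0, 300.0), (512.0, 195.0, 310.0)))  # ~16.4 mm
```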
In December 2017, members of the Army Research Laboratory's Robotics Collaborative Technology Alliance (RCTA) conducted an experiment to evaluate the progress of research on robotic grasping of occluded objects. The experiment used the Robotic Manipulator (RoMan) platform equipped with an Asus Xtion to identify an object on a table cluttered with other objects and to grasp and pick up the target object. The identification and grasping were conducted with varying input-factor assignments following a formal design of experiments; these factors comprised different target sizes, varied target orientations, variation in the number and positions of objects occluding the target from view, and different lighting levels. The grasping succeeded in 18 out of 23 runs (a 78% success rate) and was conducted within constraints placed on the position and orientation of the RoMan with respect to the table of target objects. The statistical approach of a ‘deterministic’ design and odds-ratio analysis were applied to the task.
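For readers unfamiliar with the odds-ratio analysis mentioned above, the sketch below shows how an odds ratio for grasp success between two levels of a factor would be computed from a 2x2 count table. The counts are hypothetical placeholders for illustration only; they are not the experiment's data.

```python
def odds_ratio(success_a, failure_a, success_b, failure_b):
    """Odds ratio comparing grasp success between two factor levels (2x2 table)."""
    return (success_a / failure_a) / (success_b / failure_b)

# Hypothetical counts for two lighting levels (not the experiment's actual results):
# bright: 10 successes, 2 failures; dim: 8 successes, 3 failures.
print(odds_ratio(10, 2, 8, 3))  # ~1.9: odds of success roughly 1.9x higher under bright lighting
```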