Paper
1 April 1991
Hand-eye coordination for grasping moving objects
Peter K. Allen, Billibon Yoshimi, Alexander Timcenko, Paul Michelman
Proceedings Volume 1383, Sensor Fusion III: 3D Perception and Recognition (1991); https://doi.org/10.1117/12.25255
Event: Advances in Intelligent Robotics Systems, 1990, Boston, MA, United States
Abstract
Most robotic grasping tasks assume a stationary or fixed object. In this paper, we explore the requirements for grasping a moving object. This task requires proper coordination among at least three separate subsystems: dynamic vision sensing, real-time arm control, and grasp control. As with humans, our system first visually tracks the object’s 3-D position. Because the object is in motion, tracking must be performed dynamically so that the motion of the robotic arm can be coordinated with the moving object. The dynamic vision system feeds a real-time arm control algorithm that plans a trajectory. The arm control algorithm is implemented in two steps: 1) filtering and prediction, and 2) kinematic transformation computation. Once the object’s trajectory is being tracked, the hand must intercept the object to actually grasp it. We present three different strategies for intercepting the object, along with results from the tracking algorithm.
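The paper itself includes no source code. As a rough illustration of the "filtering and prediction" step described in the abstract, the following Python sketch implements a generic alpha-beta filter that smooths noisy 3-D position measurements from a vision system and extrapolates the object's position ahead in time, as an interception planner might require. The class name, gain values, and cycle rate are all hypothetical; the authors' actual filter design and parameters may differ.

```python
# Hypothetical sketch of a filter-and-predict stage for visual tracking.
# This is NOT the paper's implementation; gains and names are assumptions.

import numpy as np

class AlphaBetaFilter3D:
    def __init__(self, alpha=0.85, beta=0.005, dt=1.0 / 60.0):
        self.alpha = alpha      # position correction gain (assumed value)
        self.beta = beta        # velocity correction gain (assumed value)
        self.dt = dt            # control cycle period in seconds (assumed)
        self.pos = np.zeros(3)  # estimated 3-D position
        self.vel = np.zeros(3)  # estimated 3-D velocity

    def update(self, measured_pos):
        """Fold one noisy vision measurement into the state estimate."""
        predicted = self.pos + self.vel * self.dt
        residual = np.asarray(measured_pos) - predicted
        self.pos = predicted + self.alpha * residual
        self.vel = self.vel + (self.beta / self.dt) * residual
        return self.pos

    def predict(self, lookahead):
        """Extrapolate the position `lookahead` seconds into the future,
        e.g. to place the hand where the object will be at intercept."""
        return self.pos + self.vel * lookahead

# Example: track a target moving at 0.1 m/s along x with noisy readings.
f = AlphaBetaFilter3D()
for k in range(120):
    truth = np.array([0.1 * k * f.dt, 0.0, 0.0])
    f.update(truth + np.random.normal(scale=0.002, size=3))
print("predicted position 0.5 s ahead:", f.predict(0.5))
```

The predicted future position would then be passed to the kinematic transformation step to generate arm joint targets; that step, and the three interception strategies the paper compares, are not sketched here.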
© (1991) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Peter K. Allen, Billibon Yoshimi, Alexander Timcenko, and Paul Michelman "Hand-eye coordination for grasping moving objects", Proc. SPIE 1383, Sensor Fusion III: 3D Perception and Recognition, (1 April 1991); https://doi.org/10.1117/12.25255
KEYWORDS: Cameras, Robotics, Visualization, Optical tracking, Control systems, Image processing, 3D image processing