Advanced sensor platforms often contain a wide array of sensors in order to collect and process a diverse range of environmental data. Proper calibration of these sensors is essential so that the collected data can be interpreted and fused into an accurate depiction of the environment. Traditionally, LiDAR-stereo camera calibration requires human assistance to manually extract point pairs between the LiDAR and the camera system. Here, we present a fully automated technique for calibrating a visible camera system with a 360° field-of-view LiDAR. This calibration is achieved by using the standard planar checkerboard calibration pattern to calculate the calibration parameters (intrinsic and extrinsic) for the stereo camera system. We then present a novel pipeline to determine an accurate rigid-body transformation between the LiDAR and stereo camera coordinate systems with no additional experimental setup or human assistance. Our innovation lies in using the planarity of the checkerboard, whose surface coefficients can be estimated relative to the camera coordinates as well as the LiDAR sensor coordinates. We determine the rigid-body transformation between the two sets of coefficients of the same calibration surface through least-squares minimization. We then refine the estimate through iterative closest point minimization between the 3D points on the checkerboard pattern as viewed from the LiDAR and the camera system. Using measurements from multiple views, we increase the confidence in the transformation estimate. The proposed method is less cumbersome and time-consuming, unifying stereo camera calibration and LiDAR-camera calibration in a single step using only one calibration pattern.
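As a concrete illustration of the plane-coefficient alignment step, the sketch below (our own minimal NumPy formulation, not the authors' code) recovers a rigid-body transform from matched plane coefficients. It assumes each checkerboard view yields a unit normal n and offset d of the plane n·x = d in both the LiDAR and camera frames; the rotation follows from the Kabsch/SVD solution aligning the two sets of normals, and the translation from a small linear least-squares system. The function name and argument layout are hypothetical.

```python
import numpy as np

def plane_based_extrinsics(normals_lidar, d_lidar, normals_cam, d_cam):
    """Estimate (R, t) mapping LiDAR coordinates into camera coordinates
    from per-view checkerboard plane coefficients (plane: n . x = d).
    Needs >= 3 views with well-spread plane normals."""
    Nl = np.asarray(normals_lidar, dtype=float)  # (k, 3) unit normals, LiDAR frame
    Nc = np.asarray(normals_cam, dtype=float)    # (k, 3) unit normals, camera frame
    # Rotation: Kabsch/SVD least-squares solution to R @ n_lidar = n_cam.
    H = Nl.T @ Nc
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    # Translation: with x_cam = R @ x_lidar + t, plane offsets obey
    # d_cam = d_lidar + n_cam . t, so stacking the camera-frame normals
    # gives the linear system Nc @ t = d_cam - d_lidar.
    rhs = np.asarray(d_cam, dtype=float) - np.asarray(d_lidar, dtype=float)
    t, *_ = np.linalg.lstsq(Nc, rhs, rcond=None)
    return R, t
```

In practice such a closed-form estimate would seed the iterative closest point refinement the abstract describes, with additional views tightening both the rotation and the translation.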
There have been large gains in the field of robotics, both in hardware sophistication and technical capabilities.
However, as more capable robots have been developed and introduced to battlefield environments, the problem of
interfacing with human controllers has proven to be challenging. Particularly in military applications,
controller requirements can be stringent, ranging from size and power consumption to durability and cost.
Traditional operator control units (OCUs) tend to resemble laptop personal computers (PCs), as these devices are
mobile and have ample computing power. However, laptop PCs are bulky and have comparatively high power requirements.
To address this problem, a lightweight, inexpensive controller was created based on a mobile phone running the
Android operating system. It was designed to control an iRobot Packbot through the Army Research Laboratory
(ARL) in-house Agile Computing Infrastructure (ACI). The hardware capabilities of the mobile phone, such as Wi-Fi
communications and a touch-screen interface, together with the flexibility of the Android operating system, made it a
compelling platform. The Android-based OCU offers a more portable package and can be easily carried by a soldier
along with normal gear requirements. In addition, one-handed operation of the Android OCU leaves the soldier an
unoccupied hand for greater flexibility.
To validate the Android OCU as a capable controller, experimental data were collected evaluating use of both the new
controller and a traditional, tablet-PC-based OCU. Initial analysis of qualitative data collected from participants
suggests that the Android OCU performed favorably.
Large gains have been made in the automation of human detection and tracking techniques over the past several years.
Several of these techniques have been implemented on larger robotic platforms in order to increase the situational
awareness the platform provides. Further integration onto a smaller robotic platform that already has obstacle
detection and avoidance capabilities would allow these algorithms to be used in scenarios that are not practical for
larger platforms, such as entering a building and surveying a room for human occupants with limited operator
intervention.
However, transitioning these algorithms to a man-portable robot imposes several unique constraints, including limited
power availability, size and weight restrictions, and limited processing capability. Many imaging sensors, processing
hardware, and algorithms fail to adequately address one or more of these constraints.
In this paper, we describe the design of a payload suitable for our chosen man-portable robot, the iRobot Packbot. While
the described payload was built for a Packbot, it was carefully designed to be platform-agnostic, so that it can be used
on any man-portable robot. Implementations of several existing motion and face detection algorithms that have been
chosen for testing on this payload are also discussed in some detail.
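As a hedged sketch of the kind of lightweight detectors such a payload might run, the OpenCV snippet below pairs simple frame-differencing motion detection with a Haar-cascade face detector, both cheap enough for constrained embedded hardware. The specific algorithms, thresholds, and function shown here are illustrative assumptions, not the paper's chosen implementations.

```python
import cv2

# Pretrained frontal-face Haar cascade shipped with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect(prev_gray, gray, motion_pixels=500):
    """Return (motion_detected, face_boxes) for a pair of consecutive
    grayscale frames. Threshold values are illustrative, not tuned."""
    # Motion: threshold the absolute inter-frame difference and count pixels.
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    motion_detected = cv2.countNonZero(mask) > motion_pixels
    # Faces: multi-scale Haar-cascade detection on the current frame.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return motion_detected, faces
```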
Pan-tilt-zoom (PTZ) cameras are frequently used in surveillance applications as they can observe a much larger region of
the environment than a fixed-lens camera while still providing high-resolution imagery. The pan, tilt, and zoom
parameters of a single camera may be simultaneously controlled by online users as well as automated surveillance
applications. To accurately register autonomously tracked objects to a world model, the surveillance system requires
accurate knowledge of camera parameters. Due to imprecision in the PTZ mechanism, these parameters cannot be
obtained from PTZ control commands but must be calculated directly from camera imagery. This paper describes the
efforts undertaken to implement a real-time calibration system for a stationary PTZ camera. The approach continuously
tracks distinctive image feature points from frame to frame, and from these correspondences, robustly calculates the
homography transformation between frames. Camera internal parameters are then calculated from these homographies.
The calculations are performed by a self-contained program that continually monitors images collected by the camera as
it performs pan, tilt, and zoom operations. The calculated calibration parameters are compared to ground-truth data for
accuracy. Problems encountered include inaccuracies during large orientation changes and long algorithm execution times.
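For a concrete picture of the tracking-plus-homography stage, the OpenCV sketch below (our assumption of a typical formulation, not the paper's code) tracks corners between consecutive frames with pyramidal Lucas-Kanade optical flow and robustly estimates the inter-frame homography with RANSAC. For a camera that only pans, tilts, and zooms about its optical center, each such homography has the form H ≈ K′RK⁻¹, which is what makes recovery of the internal parameters from a set of homographies possible; that final decomposition step is omitted here.

```python
import cv2
import numpy as np

def frame_to_frame_homography(prev_gray, gray):
    """Track distinctive corners between two consecutive grayscale frames
    and robustly estimate the inter-frame homography with RANSAC.
    Parameter values are illustrative, not the paper's settings."""
    # Detect distinctive corners in the previous frame.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=8)
    # Track them into the current frame with pyramidal Lucas-Kanade flow.
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.ravel() == 1
    # Robust homography from the surviving correspondences.
    H, inlier_mask = cv2.findHomography(pts[good], nxt[good], cv2.RANSAC, 3.0)
    return H, int(inlier_mask.sum())
```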