Cellular simultaneous recurrent networks (CSRNs) have traditionally been exploited to solve digital control and conventional maze-traversing problems. In previous works, we investigated the use of CSRNs to register simulated binary images with in-plane rotations between ±20° using two different CSRN architectures: one with a general multi-layered perceptron (GMLP) architecture, and another with a modified MLP architecture with multi-layered feedback. We further exploited the CSRN for registration of realistic binary and gray-scale images under rotation. In the current work, we report results of applying CSRNs to perform image registration under affine transformations such as rotation and translation. We also provide extensive analyses of CSRN affine registration results to guide appropriate cost function formulation. Our analyses show that a locally varying cost function formulation is desirable for robust image registration under affine transformation.
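The abstract above does not include code; as a rough illustration of the affine transformations (in-plane rotation and translation) it discusses, the following Python/OpenCV sketch produces a misaligned copy of a reference image, the kind of input-output pair a CSRN-based registration network would be trained to undo. The angle and offset values here are arbitrary placeholders, not the authors' settings.

```python
import cv2
import numpy as np

def apply_affine(image, angle_deg=15.0, tx=5.0, ty=-3.0):
    """Rotate an image about its center and translate it.

    Illustrates the in-plane rotation (within the ±20° range used
    in the paper) plus translation that registration must recover.
    """
    h, w = image.shape[:2]
    # 2x3 affine matrix: rotation about the image center, unit scale.
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    # Fold the translation into the matrix's last column.
    M[0, 2] += tx
    M[1, 2] += ty
    return cv2.warpAffine(image, M, (w, h))

# Example: build a simple binary reference image and misalign it.
reference = np.zeros((64, 64), dtype=np.uint8)
cv2.rectangle(reference, (20, 20), (44, 44), 255, -1)
misaligned = apply_affine(reference, angle_deg=12.0, tx=4.0, ty=2.0)
```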
Cellular simultaneous recurrent networks (CSRNs) have traditionally been exploited to solve digital control and conventional maze-traversing problems. In a previous work [1], we investigated the use of CSRNs for image registration under affine transformations for binary images. In Ref. [1], we attempted to register simulated binary images with in-plane rotations between ±20° using two different CSRN implementations: (1) a general multi-layered perceptron (GMLP) architecture; and (2) a modified MLP architecture with multi-layered feedback. Our results in Ref. [1] show that both architectures achieve moderate local registration, with our modified MLP architecture producing a best result of around 64% cost function accuracy and 98% image registration accuracy. In this current work, for the first time in the literature, we investigate gray-scale image registration using CSRNs. We exploit both types of CSRNs for registration of realistic images and perform a complete evaluation of both binary and gray-scale image registration. Simulation results with both CSRN architectures show an average cost function accuracy of 40.5% and an average image accuracy of 33.2%, with best results of 46.2% and 40.3%, respectively. The image results clearly demonstrate the promise of CSRNs for registration of gray-scale images.
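The abstract does not define how the reported accuracy figures are computed. One plausible reading of "image accuracy" is the fraction of output pixels that match the target image within a gray-level tolerance; the minimal sketch below follows that assumption, and both the metric and the `tol` value are guesses rather than the paper's definition.

```python
import numpy as np

def image_accuracy(output, target, tol=10):
    """Fraction of pixels whose output value lies within `tol`
    gray levels of the target -- a hypothetical stand-in for the
    paper's (undefined) image accuracy metric."""
    output = output.astype(np.int32)
    target = target.astype(np.int32)
    return float(np.mean(np.abs(output - target) <= tol))
```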
In order for a mobile robot to successfully navigate its environment, it must have knowledge of the objects in its immediate vicinity. The robot can use this information for localization, navigation, and obstacle avoidance. Among the many sensors available for object detection, we are primarily interested in camera-based vision for indoor robot navigation. In this work, we focus on using a single camera to detect objects in the robot's field of view for the purpose of obstacle avoidance. To obtain an integrated robot obstacle avoidance and navigation technique, we investigate a modular approach. In the first module, we extend an appearance-based object detection (ABOD) technique to automatically identify individual objects. We then extract strong corner features and overlay them on the identified objects, which allows us to select a few representative corners for each object. In the second module, we group these strong corner features using a planar homography technique to define more natural features, such as 'planes', for further processing. As an added feature, we utilize the strong corner features generated in module 1, the corresponding features in the next frame from module 2, and a basic optical flow technique to track the identified objects. In the third and final module, we obtain distance and heading information for each obstacle as the robot avoids obstacles and navigates an indoor environment. We show both simulation and actual results on a mobile robot for each of these three modules. In future work, we intend to integrate these three modules into a single camera-based integrated robot obstacle avoidance and navigation technique.
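As a hedged illustration of the pipeline described above, the sketch below chains standard OpenCV building blocks: Shi-Tomasi corner extraction standing in for the strong-corner step of module 1, and pyramidal Lucas-Kanade optical flow plus a RANSAC homography fit standing in for module 2's tracking and planar grouping. The detector choice, thresholds, and parameter values are placeholders, not the authors' implementation.

```python
import cv2
import numpy as np

def track_and_group(prev_gray, next_gray):
    """Corner extraction, optical-flow tracking, and a planar
    homography fit -- a rough stand-in for modules 1 and 2."""
    # Module 1 (sketch): strong corner features via Shi-Tomasi.
    corners = cv2.goodFeaturesToTrack(
        prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)
    if corners is None:
        return None, np.empty((0, 1, 2), dtype=np.float32)
    # Module 2 (sketch): track corners into the next frame with
    # pyramidal Lucas-Kanade optical flow.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, corners, None)
    good_prev = corners[status.ravel() == 1]
    good_next = next_pts[status.ravel() == 1]
    # Group features lying on a common plane via a RANSAC homography;
    # the inlier set approximates one planar surface in the scene
    # (needs at least 4 tracked correspondences).
    H, inlier_mask = cv2.findHomography(good_prev, good_next,
                                        cv2.RANSAC, 3.0)
    return H, good_prev[inlier_mask.ravel() == 1]
```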