We present a computational method for determining the characteristic views of general quadric-surfaced solids. Each characteristic view (CV) is a representative view of a characteristic-view domain (CVD). The viewing space of a solid is fully described by its CVs and the CVD graph (or aspect graph) which shows the interrelationships among the CVDs. The paper discusses the feasibility of using the CV concept for 3D object recognition and pose estimation. An illustrative example is included.
This paper addresses the problem of generating models of 3D objects automatically from exploratory view-sequences of the objects. Neural network techniques are described which cluster the frames of video sequences into view categories, called aspects, representing the 2D characteristic views. Feedforward processes ensure that each aspect is invariant to the apparent position, size, orientation, and foreshortening of an object in the scene. The aspects are processed in conjunction with their associated aspect transitions by the Aspect Network to learn and refine the 3D object representations on the fly. Recognition is indicated by the object hypothesis which has accumulated the maximum evidence. The object hypothesis must be consistent with the current view, as well as with the recent history of view transitions stored in the Aspect Network. The “winning” object refines its representation until either the attention of the camera is redirected or another hypothesis accumulates greater evidence.
We review the basic ideas behind using reference points to determine the 3D position of a known object. We introduce a parallel distributed processing method for solving a family of nonlinear equations that define the constraints dictated by the point projections when the correspondence is known. This method involves updating the activations of nodes as well as the weights of the network, and as such we refer to it as an adaptive network. We discuss the possible extension of this method to the case when the correspondence is not known.
A study of 3-D vision is presented in which real distances among some points of a scene are evaluated. To this end, two or three images taken at unknown positions are considered, and the "eight-point algorithm" is applied.
An experimental way to test the accuracy of the estimates is considered, and an extension of a linear method to three frames is discussed. Simulations and actual experiments have been carried out, and the results confirm the validity of the three-view approach.
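For concreteness, a minimal sketch of the standard (unnormalized) eight-point algorithm, written here in Python with NumPy rather than in any notation from the paper:

```python
import numpy as np

def eight_point(x1, x2):
    """Estimate the fundamental matrix F from n >= 8 correspondences.
    x1, x2 are (n, 3) arrays of homogeneous image points satisfying the
    epipolar constraint x2^T F x1 = 0; solved in least squares via SVD,
    then rank 2 is enforced."""
    u1, v1 = x1[:, 0] / x1[:, 2], x1[:, 1] / x1[:, 2]
    u2, v2 = x2[:, 0] / x2[:, 2], x2[:, 1] / x2[:, 2]
    A = np.column_stack([u2*u1, u2*v1, u2, v2*u1, v2*v1, v2,
                         u1, v1, np.ones_like(u1)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)        # null vector of A, reshaped
    U, S, Vt = np.linalg.svd(F)     # project onto rank-2 matrices
    S[2] = 0.0
    return U @ np.diag(S) @ Vt
```

Normalizing image coordinates before forming A is known to improve conditioning on noisy data; this sketch omits that step.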
Depth and orientation information are important cues for the reconstruction of three-dimensional surfaces in computer vision. The statistical fusion of data obtained by slightly different views of the same scene is studied as a way for improving the accuracy and reliability of the data and consequent result of the integration step.
This paper introduces a novel, viewer-centered approach to modeling the geometry of the visible occluding contour of solid 3D shape. A description of the change in appearance of the rim and occluding contour as a function of viewpoint allows the organization of features of the occluding contour for indexing and matching. This organization makes the features of the occluding contour explicit for matching in a dynamic context where image features are changing over time, and in a static context where matching methods must iteratively refine an estimation of viewpoint. The rim appearance representation models the exact appearance of the occluding contour formed by the edges of a polyhedron that is assumed to be an approximation of a smooth shape. An algorithm is presented for constructing the rim appearance representation. Bounds on space and time are given, and implementation results show that the rim appearance representation is significantly smaller than the aspect graph and the aspect representation.
We present an evaluation of several pixel-level optical flow techniques for flow-computation accuracy. Flow accuracy is characterized with respect to spatio-temporal image characteristics relevant to moving-target detection. Results of flow computation and target detection are presented for infrared (8-12 μm) imagery.
An algorithm for using time-sequence video data from a single camera to determine the position and orientation (pose) of spin-stabilized satellites with respect to a robotic spacecraft is discussed. The system utilizes novelty detection and filtering for locating novel parts and a neural net to track these parts over time. The present paper addresses the estimation of pose from the tracks of the novel regions. The path traced out by a given part (or region) is approximately elliptical in image space, and a pseudoinverse technique is used to find a best-fit ellipse for a set of track points. The position, shape, and orientation of the ellipse are functions of the satellite geometry and its pose. Using this ellipse, and information from a model of the given satellite, an iterative technique is used to perturb an initial guess of the pose such that the error between the best-fit ellipse and a predicted ellipse is minimized. Results of using this algorithm on sequences of images of a satellite at various poses and under various lighting conditions are presented.
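The best-fit-ellipse step can be sketched as a linear least-squares problem solved with the Moore-Penrose pseudoinverse; this Python/NumPy fragment illustrates the general technique, not the authors' code:

```python
import numpy as np

def fit_ellipse(x, y):
    """Least-squares conic fit a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    via the pseudoinverse; returns coefficients (a, b, c, d, e).
    Fixing the right-hand side at 1 avoids the trivial zero solution."""
    A = np.column_stack([x * x, x * y, y * y, x, y])
    return np.linalg.pinv(A) @ np.ones_like(x)
```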
Algorithms for recognition, tracking, and pose estimation of 3-d objects in intensity imagery commonly assume that features are the result of imaging surface landmarks. This assumption is most commonly violated when a feature is detected on an occluding boundary generated by a smoothly curving surface. We have developed a method that can recognize, track, and determine the pose of arbitrarily shaped, partially visible 3-d objects in both intensity and range imagery. We describe the results of tests on real intensity imagery and synthetic range imagery.
Computing object motion from time-varying multi-sensor data and fusing these data into a coherent map of the object and/or its environment are important problems in robotics. In this paper, we present a new algorithm for motion estimation from sparse range data acquired from multiple sensors, namely a stereo camera system and an array of laser range finders. The motion estimates from this algorithm are input to a Kalman-filter-based state estimator for continuously tracking a free-flying object in space under zero-gravity conditions.
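A generic constant-velocity Kalman filter of the kind used for such tracking might look as follows; this Python/NumPy sketch uses a scalar position state and illustrative noise parameters q and r, which are assumptions rather than the paper's values:

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over scalar position
    measurements zs; returns the filtered position estimates."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # measure position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([zs[0], 0.0])              # initial [position, velocity]
    P = np.eye(2)
    out = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```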
This paper presents moment-based algorithms for matching and motion estimation of 3-D point or line sets, and the application of these algorithms to object tracking over long time sequences. The motion analysis is done by identifying two sets of coordinate directions based on the relative positions of points (or lines) before and after the motion. Since these coordinate vectors are motion invariant, the relationship between them gives the parameters of the rigid motion. However, we need to verify that the sets before and after the motion are matched before applying the motion estimation algorithm. We propose several measures suitable for matching of 3-D point (and line) sets, test them on simulated data, and develop several criteria for determining the noise sensitivity of the matching and motion estimation algorithms. Finally, we apply the proposed algorithm to a long sequence (24 frames) of real data (a moving vehicle) in which the 3-D points were determined by stereo matching.
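The motion-invariant coordinate directions can be illustrated by the eigenvectors of a point set's second-moment matrix; the following Python/NumPy sketch shows that general idea, not the authors' exact construction:

```python
import numpy as np

def moment_frame(points):
    """Centroid and moment-derived coordinate directions of a 3-D
    point set: eigenvectors of the matrix of second central moments.
    These directions move rigidly with the data (up to sign), the
    property a moment-based motion estimator exploits."""
    c = points.mean(axis=0)
    d = points - c
    _, vecs = np.linalg.eigh(d.T @ d)   # second central moments
    return c, vecs                      # columns are axis directions
```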
Robust estimation of scene-depth is essential to many tasks in three-dimensional visual perception. Image-flow is a major source of depth information. This paper describes a new framework for computing image-flow from time-varying imagery and recovering scene-depth from image-flow. In this framework, image-flow information available in the time-varying imagery is classified into two categories: conservation information and neighborhood information. Each type of information is recovered in the form of an estimate accompanied by a covariance matrix. Image-flow is then computed, along with confidence measures, by fusing the two estimates on the basis of their covariance matrices. The framework is shown to allow estimation of certain types of discontinuous flow-fields without any a priori knowledge about the location of discontinuities. Furthermore, because of its estimation-theoretic nature, the framework lends itself naturally to incremental estimation of scene-depth using Kalman-filtering-based techniques that fuse depth estimates over successive frames of the image sequence. The depth maps obtained by this scheme preserve the depth discontinuities very well. This property of the framework is crucial to reliable recovery of 3-D features, e.g. depth edges, from depth maps. Algorithms based on this framework are used to recover image-flow and depth maps from a variety of image sequences.
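Fusing two estimates on the basis of their covariance matrices corresponds to the standard minimum-variance combination, sketched here in Python/NumPy purely as an illustration:

```python
import numpy as np

def fuse(x1, P1, x2, P2):
    """Minimum-variance fusion of two estimates x1, x2 with
    covariances P1, P2: inverse-covariance weighting."""
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(P1i + P2i)        # fused covariance
    x = P @ (P1i @ x1 + P2i @ x2)       # covariance-weighted mean
    return x, P
```

The more certain estimate (smaller covariance) dominates the result, which is why confidence measures fall out of the fusion for free.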
In this paper we discuss efforts directed at studying touch as the primary sensing modality for a robotic manipulatory system. We provide a framework for an active robotic exploration strategy and discuss the role of tactile and force/torque sensors in the exploration of a robotic workcell. Further, we describe a number of techniques useful in processing tactile information acquired using passive as well as active sensing modes. Successful adaptation of touch in robotic manipulatory systems entails both enhancing the tactile sensor technology and developing techniques for efficient and accurate analysis of the tactile information. Concepts and algorithms developed in this paper are evaluated using a laboratory test bed which includes an industrial robot with a parallel-jaw end-effector with dynamic feedback from a 10 x 16 tactile sensor and a force/torque sensor. Results of the experimental studies show promise for the basic active exploration framework and the various algorithmic modules constituting the exploration process.
This paper addresses the problem of finding the pose of an object grasped by a hand. The problem can be stated simply: Given a hand grasping an object, and given a model of that object, determine the position and orientation of the object with respect to the hand. The method described uses only joint angle and torque sensing, and is able to localize two-dimensional objects grasped by the Utah-MIT Dextrous Hand.
The analysis of sequences of images over time provides a means of extracting meaningful information which is used to compute and track the three-dimensional position of a moving object. This paper describes an application in which sensory feedback based on time-varying camera images is used to provide position information to a manipulator control system. The system operates in a real-time environment and provides updated information at a rate which permits intelligent trajectory planning by the control system.
Most robotic grasping tasks assume a stationary or fixed object. In this paper, we explore the requirements for grasping a moving object. This task requires proper coordination between at least 3 separate subsystems: dynamic vision sensing, real-time arm control, and grasp control. As with humans, our system first visually tracks the object’s 3-D position. Because the object is in motion, this must be done in a dynamic manner to coordinate the motion of the robotic arm as it tracks the object. The dynamic vision system is used to feed a real-time arm control algorithm that plans a trajectory. The arm control algorithm is implemented in two steps: 1) filtering and prediction, and 2) kinematic transformation computation. Once the trajectory of the object is tracked, the hand must intercept the object to actually grasp it. We present 3 different strategies for intercepting the object and results from the tracking algorithm.
Many complex robot systems (multifingered manipulators, legged platforms and kinematically independent robots) can benefit from techniques which determine how to use environmental geometry to advantage in interaction tasks. To support sensing and control operations, we propose two model types: local geometric models and global force domain models. The former facilitates the consistent interpretation of sensor evidence, while the latter supports reasoning about contact interactions with the environment. An expressive means for describing grasp objectives and a control strategy for designing the geometry of contact mediated by force domain goals are presented. Methods for incremental geometric and force domain modeling are presented. Examples of constructive modeling and the use of these models for grasping and manipulation are included.
Robotic devices under the control of a remote human operator are increasingly attractive for space, undersea, nuclear, and waste-management applications. In these arenas, the target tasks are often casually structured, non-repetitive activities which, given currently available automation technologies, seem to suggest teleoperative, versus purely robotic, implementations. However, there may be a better alternative, one that is achievable in the reasonable future: augment conventional teleoperation systems with computer assists to both improve task performance and lower operator workload. For instance, we illustrate in this paper how computer assists can improve teleoperator trajectory tracking during both free and force-constrained motions. Specifically, we report on computer graphics techniques which enable the human operator to both visualize and predict detailed 3-D trajectories in real time; we also describe man-machine interactive control procedures for better management of manipulator contact forces and positioning. Collectively, these new advanced teleoperation techniques both enhance system performance and significantly reduce the control problems long associated with teleoperation under time delay.
Invited Session: Integration in Human Shape Perception
When two sources of depth information are combined, the result might be dominance of one source, cooperative interaction between the sources, or an additive combination of the sources. For transparent stereo-motion displays we found a cooperative interaction: structure-from-motion facilitates disparity processing, probably by helping to resolve the stereo correspondence problem. For opaque stimuli the combination of structure-from-motion and stereopsis appears to be additive.
The Bayesian approach to vision provides a fruitful theoretical framework for integrating different depth modules. In this formulation depth can be represented by one or more surfaces. Prior probabilities, corresponding to natural constraints, can be defined on these surfaces to avoid the ill-posedness of vision. We advocate strong coupling between different depth cues, so that the different modules can interact during computation. This framework is rich enough to accommodate straightforwardly both consonant and contradictory cue integration, by the use of binary decision units. These units can be interpreted in terms of robust statistics. A number of existing psychophysical experiments can be understood within this framework.
We describe a series of experiments designed to test (1) whether human observers combine depth cues using a weighted average when depth estimates in different maps are nearly consistent, (2) whether human observers behave as robust estimators when depths become increasingly inconsistent, and (3) whether the weights used in the linear rule of combination change to reflect the estimated reliability of different depth cues. We report initial experiments concerning texture and motion. The data are clearly consistent with the notion that the depth percept is a linear combination of the individual depth values portrayed by each cue. By randomly varying the shapes of the texture elements, the texture cue is artificially made unreliable, and the data support the hypothesis that unreliable cues are given less weight. Finally, there is an indication that when cues are strongly inconsistent, the weight on one of the cues is lowered, consistent with the hypothesis of robustness.
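The linear rule of combination amounts to a reliability-weighted average, e.g. (illustrative Python, not code from the study):

```python
def combine_depth(depths, reliabilities):
    """Linear cue combination: a weighted average of per-cue depth
    estimates, with weights proportional to estimated reliability."""
    total = sum(reliabilities)
    return sum(d * w for d, w in zip(depths, reliabilities)) / total
```

Lowering a cue's reliability weight, as the experiments suggest observers do for unreliable or inconsistent cues, simply shifts the percept toward the remaining cues.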
This paper introduces a new approach to surface reconstruction motivated by concepts from numerical grid generation. We develop adaptive mesh models that nonuniformly sample and reconstruct input shape data. Adaptive meshes are dynamic models assembled from nodal masses connected by adjustable springs. Acting as mobile sampling sites, the nodes observe interesting properties of the input data, such as depths, gradients, and curvatures. The springs automatically adjust their stiffnesses based on the locally sampled information in order to concentrate nodes near rapid shape variations. The representational power of an adaptive mesh is enhanced by its ability to optimally distribute the available degrees of freedom in accordance with the local complexity of the input data. Surface reconstruction using adaptive meshes runs at interactive rates with continuous 3D display on a graphics workstation.
In this paper, we propose a unification framework for three-dimensional shape reconstruction using physically-based models. Most shape-from-X techniques use an “observable” (e.g., disparity, intensity, and texture gradient) and a model, which is based on specific domain knowledge (e.g., triangulation principle, reflectance function, and texture distortion equation) to predict the observable, in 3-D shape reconstruction. We show that all these “observable-prediction-model” types of techniques can be incorporated into our framework of an energy constraint on a flexible, deformable image frame. In our algorithm, if the observable does not conform to that predicted by the corresponding model, a large “error” potential results. The error potential gradient forces the flexible image frame to deform in space. The deformation brings the flexible image frame to “wrap” onto the surface of the imaged 3-D object. Surface reconstruction is thus achieved through a “package wrapping” process that minimizes the discrepancy between the observable and the model prediction. The dynamics of such a wrapping process are governed by the least-action principle, which is physically correct. A physically-based model is essential in this general shape reconstruction framework because of its capability to recover the desired 3-D shape, to provide an animation sequence of the reconstruction, and to include the regularization principle in the theory.
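A toy 1-D analogue of this energy-minimizing "wrapping" process can be sketched in Python/NumPy; the data force and smoothness force below are assumptions for illustration, whereas the paper's formulation is 2-D and physically based:

```python
import numpy as np

def wrap_1d(z_obs, iters=500, step=0.2, smooth=0.5):
    """Gradient descent on a discrete energy: each node of a 1-D
    flexible frame feels a data force toward the observed depth and a
    smoothness force (discrete Laplacian) from its neighbors, so the
    frame 'wraps' onto the data."""
    z = np.zeros_like(z_obs)
    for _ in range(iters):
        zp = np.pad(z, 1, mode='edge')
        lap = zp[:-2] - 2.0 * z + zp[2:]            # discrete Laplacian
        z = z + step * ((z_obs - z) + smooth * lap)  # descend the energy
    return z
```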
This paper describes a new segmentation technique for very sparse surfaces which is based on minimizing the energy of the surfaces in the scene. While it could be used in almost any system as part of surface reconstruction/model recovery, the algorithm is designed to be usable when the depth information is scattered and very sparse, as is generally the case with depth generated by stereo algorithms. We describe a sequential implementation that constructs seed surfaces, automatically sets thresholds, adds points to the seeds, merges surfaces, and corrects for incorrectly added points. We discuss a parallel implementation that runs on the Connection Machine™. We show results from a sequential algorithm that processes synthetic or range finder data.
The idea of segmentation by energy minimization is not new. However, prior techniques have relied on discrete regularization or Markov random fields to model the surfaces, building smooth surfaces and detecting depth edges. Both of the aforementioned techniques are ineffective at energy minimization for very sparse data. In addition, our method does not require edge detection and is thus also applicable when edge information is unreliable or unavailable.
The technique presented herein models the surfaces with reproducing kernel-based splines which can be shown to solve a regularized surface reconstruction problem. From the functional form of these splines we derive computable bounds on the energy of a surface over a given finite region. The computation of the spline, and the corresponding surface representation are quite efficient for very sparse data. An interesting property of the algorithm is that it makes no attempt to determine segmentation boundaries; the algorithm can be viewed as a classification scheme which partitions the data into collections of points which are “from” the same surface. Among the significant advantages of the method is the capacity to process overlapping transparent surfaces, as well as surfaces with large occluded areas.
This paper summarizes recent research in two related areas. The main result concerns the stereo matching problem. The discussion of this result forms the bulk of the paper. The second area concerns the problem of visual surface reconstruction from sparse depth data, and in particular, the significance of errors (outliers) in the data. Sections 2 through 4 cover stereo matching. Section 5 discusses surface reconstruction.
We discuss surface reconstruction preserving discontinuities. Given a set of function values, we first detect discontinuities. Then we construct a surface that preserves discontinuities. The discontinuity detection is based on a residual analysis in high dimensions, and the surface fitting uses multivariate smoothing splines with discontinuities.
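A 1-D caricature of residual-based discontinuity detection, using a robust (MAD) scale on first differences in place of the high-dimensional residual analysis described above (illustrative Python, not the authors' method):

```python
import numpy as np

def detect_jumps(z, thresh=4.0):
    """Flag discontinuities as samples whose first difference is an
    outlier relative to a robust (median absolute deviation) scale
    estimate; returns the indices just after each jump."""
    d = np.diff(z)
    med = np.median(d)
    mad = np.median(np.abs(d - med)) + 1e-12   # guard against zero scale
    return np.where(np.abs(d - med) > thresh * mad)[0] + 1
```

A spline fit that is forbidden to smooth across the flagged indices then preserves the detected discontinuities.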
We describe the research activities of the Exploratory Computer Vision Group at the IBM Thomas J. Watson Research Center; this is a follow-up of the work reported previously [6].
The focus of the ongoing work is the development of an experimental vision system for the recognition of 3D objects. The thrust of this development is to investigate techniques that may lead to a system that scales with the size of the problem; by the size of the problem, we mean both the complexity of the scene (the number of objects in the scene) and the size of the database (the number of objects that the system can recognize).
Fusion is a recurring theme in our research: fusion of evidence about different features extracted from the data, fusion of information obtained at different points in the image, and fusion of information extracted from high- and low-resolution images. Therefore, rather than focusing on a particular aspect of our work, we present an overview of the work.
This paper concerns the estimation of motion parameters and scene structure through the fusion of successive stereo pairs. While a least-squares estimator is quite stable in the presence of well-behaved noise, it gives disastrous results when the input data are contaminated with a few outliers. Due to difficulties in stereo and temporal image matching, such outliers cannot be easily eliminated within the feature matching stage. Therefore, immunity to outliers is essential to motion and structure estimation algorithms. The robust estimator described in this paper reduces the influence of outliers so that the estimates are not very sensitive to gross errors in the input data. Experiments with real world images are presented with automatically established stereo and temporal matching. The accuracy of the estimated motion and depth map of the real scene is partially validated with the ground truth. Results show that the robust estimator is stable in the presence of outliers.
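The contrast the abstract draws between least squares and robust estimation can be sketched in a few lines. This is a minimal illustration only, not the authors' estimator: it fits a line with iteratively reweighted least squares using Huber weights and a MAD scale estimate, all names and numbers here being illustrative.

```python
import numpy as np

def huber_weights(r, k=1.345):
    """Huber weights: 1 for small residuals, k/|r| beyond the threshold."""
    a = np.abs(r)
    w = np.ones_like(a)
    big = a > k
    w[big] = k / a[big]
    return w

def irls_fit(X, y, iters=30):
    """Iteratively reweighted least squares with Huber weights."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust (MAD) scale estimate
        w = np.sqrt(huber_weights(r / s))
        beta = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]
    return beta

# Synthetic data: y = 2x + 1 with three gross outliers, standing in for bad matches
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0
y[:3] += 100.0
X = np.column_stack([x, np.ones_like(x)])

ols = np.linalg.lstsq(X, y, rcond=None)[0]   # pulled far off by the outliers
rob = irls_fit(X, y)                          # close to the true (2, 1)
```

The outliers drag the ordinary least-squares intercept off by roughly twenty units, while the reweighted fit recovers the underlying parameters, which is the behavior the abstract describes for motion and structure estimation.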
The vision system described in this paper reconstructs 3-D scenes from sequences of noisy binocular images. First, the system establishes all possible matches between the feature pixels in the first binocular image pair and assigns a confidence value to a possible match. Because of finite resolution of cameras, each possible match is associated with a 3-D volume, instead of a 3-D point. The possible matches are used to predict projections of associated 3-D volumes onto the remaining binocular image pairs. These projections are utilized to limit searches for possible matches. The new matches are used to update confidence values using an optimal fusion algorithm. After fusion, matched pixels with high confidence values are considered as correct matches. The spatial uncertainty due to finite camera resolution is reduced by fusing the 3-D information provided by binocular image sequences.
We address the problem of building a map of the environment utilizing sensory depth information obtained from multiple viewpoints. The desired representation of the environment is in the form of a finite-resolution three-dimensional grid of voxels. Each voxel within the grid is assigned a binary value corresponding to its occupancy state. We present an approach for multi-sensory depth information assimilation based on Dempster-Shafer theory for evidential reasoning. This approach provides a mechanism to explicitly model ignorance which is desirable when dealing with an unknown environment. A fundamental requirement for such an approach to be used is accurate knowledge of the camera motion between two viewpoints. We present a robust least median of squares (LMS) based algorithm to recover this motion which provides a self-calibration mechanism. We present results obtained from this approach on a laboratory stereo sequence.
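Dempster-Shafer evidence combination for a single voxel can be sketched as follows. This is a generic illustration of Dempster's rule of combination, not the paper's implementation: the frame of discernment is {occupied, empty}, with mass assigned to the whole frame ('OE') to model ignorance explicitly, and the numbers are invented.

```python
def dempster_combine(m1, m2):
    """Dempster's rule over the frame {O, E}; 'OE' is mass on the whole frame."""
    # Conflict: one source says occupied while the other says empty.
    K = m1['O'] * m2['E'] + m1['E'] * m2['O']
    norm = 1.0 - K
    m = {}
    m['O'] = (m1['O'] * m2['O'] + m1['O'] * m2['OE'] + m1['OE'] * m2['O']) / norm
    m['E'] = (m1['E'] * m2['E'] + m1['E'] * m2['OE'] + m1['OE'] * m2['E']) / norm
    m['OE'] = (m1['OE'] * m2['OE']) / norm
    return m

# Hypothetical mass functions from two depth readings of the same voxel
a = {'O': 0.6, 'E': 0.1, 'OE': 0.3}
b = {'O': 0.5, 'E': 0.2, 'OE': 0.3}
fused = dempster_combine(a, b)
```

Note how the fused mass on ignorance ('OE') shrinks as agreeing evidence accumulates, which is exactly the property that makes this framework attractive for unknown environments.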
This paper presents an efficient system for recovering the structural properties of objects. We build adaptive maps for interpreting the object structure. The system is decomposed into four major modules. The sensing interface acquires visual data and performs low- and intermediate-level vision tasks for enhancing the acquired image sequences. A knowledge base contains different visual primitives and the possible exploratory actions. The map building and decision making module gets its input from the sensing interface and utilizes the different predicates stored in the knowledge base in order to resolve possible uncertainties. The decisions made in the map building and decision making module are then utilized by the controller module, which resolves possible inconsistencies due to physical limitations of the sensing devices. The above process is repeated until the map contains a minimal number of interpretations that cannot be reduced further.
A new K-nearest neighbor (KNN) statistic is introduced to fuse information from multiple sensors/features into a single dimensional decision space for electronic vision systems. Theorems establish the relationship of the KNN statistic to other probability density function distance measures such as the Kolmogorov-Smirnov Distance and the Tie Statistic. A new KNN search algorithm is presented along with factors for selecting K. Applications include cueing and texture recognition.
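The paper's KNN statistic itself is not reproduced here, but the underlying k-nearest-neighbor decision it builds on can be sketched as a majority vote in a fused feature space. The feature values and class layout below are purely hypothetical.

```python
import numpy as np

def knn_classify(train_X, train_y, x, k=3):
    """Majority vote among the k nearest training samples (Euclidean distance)."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(d)[:k]
    vals, counts = np.unique(train_y[nearest], return_counts=True)
    return vals[np.argmax(counts)]

# Two texture classes described by two hypothetical sensor-derived features each
X = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.15],
              [0.80, 0.90], [0.90, 0.80], [0.85, 0.85]])
y = np.array([0, 0, 0, 1, 1, 1])

label = knn_classify(X, y, np.array([0.82, 0.88]), k=3)
```

Stacking features from several sensors into one vector, as done here, is the simplest way multiple sensors/features collapse into a single decision; the paper's contribution is a principled statistic and search algorithm on top of this idea.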
In this paper we propose a scheme for integrating data in a multi-sensor system. The different sensors of the system are treated as a decision-making team that cooperates and coordinates the actions and positions of its members so as to achieve a common goal. We adopt a hierarchical data-integration approach that is capable of combining the different information levels provided by the different sensors of the system.
We view the problem of sensor-based decision-making in terms of two components: a sensor fusion component that isolates a set of models consistent with observed data, and an evaluation component that uses this information and task-related information to make model-based decisions. In previous work we have described a procedure for computing the solution set of parametric equations describing a sensor-object imaging relationship, and also discussed the use of task-specific information to support set-based decision-making methods.
In this paper, we investigate the implications of allowing one of the decision-making options to be “no decision,” whereupon a human might be called to aid or interact with the system. In particular, this type of capability supports the construction of supervised or partially autonomous systems. We discuss how such situations might arise and give concrete examples of how a system might reach such a decision using our techniques.
This paper presents sensing-knowledge-command (SKC) fusion as a fundamental paradigm for implementing cooperative control in an advanced man-machine system. SKC fusion operates on the "SKC fusion network", representing the connection from sensory data to commands through knowledge. Sensing, knowledge, and command of a human and a machine are merely tapped into the network to provide inputs or stimuli. Such stimuli automatically invoke an SKC fusion process and generate a fused output for cooperative control. Once invoked by stimuli, the SKC fusion process forces the network to converge to a new equilibrium state through the network dynamics composed of data fusion, feature transformation, and constraint propagation. The SKC fusion process thus integrates redundant information, maintains consistency of the network, identifies faulty data and concepts, and specifies those concepts to be strengthened (for enhancing command reliability) through sensor planning.
When human beings move under sensory (usually visual) guidance, they use cognitive spatial representations that they construct and update as they travel through and become more familiar with the space. Such representations include both information about the layout of the scene visible at each moment, and about the broader space that extends beyond the range of current sensory input. This paper examines what is known about such representations, the changes imposed by updating during the course of locomotion, the acquisition of knowledge as a result of exploring, and the ways in which such representations are indispensable in guiding locomotion. Finally, the role of spatial representations for mobile robots is examined to show that in the absence of some kind of spatial memory, mobile robots will be severely limited in the kinds of spatial tasks they can perform.
Exploiting prior knowledge about the general characteristics of an environment can reduce the amount of sensing required. For example, in an indoor environment floors tend
to be flat and walls tend to be straight and static. In such an environment, simple range sensors can provide enough information to support robust sensor-driven goal-directed navigation. This paper will describe a navigation experiment using a real robot, and speculate on how the techniques used can be extended to other domains.
Sensor fusion in robotics, particularly for navigation of autonomous mobile robots, has typically been addressed as a “bottom-up” or data driven process. This has led to a variety of systems that, although somewhat successful, have been difficult to expand to include additional sensors or extend to other domains. The approach taken here is to specify and develop a control scheme which considers the sensor fusion process in the context of the intended actions of the robot, knowledge of the environment, and the available sensor suite.
The resulting control scheme exploits environmental knowledge in three ways in order to reduce processing. First, the control structure supports adaptation of the sensor fusion process to the environment and intended action. An appropriate set of candidate features is selected from the feature extraction library during the investigatory phase. Fusion occurs during the performatory phase in one of three global states: complete sensor fusion; fusion with the possibility of discordance and resultant recalibration of dependent perceptual sources; and fusion with the possibility of discordance and resultant suppression of discordant perceptual sources. Second, the states themselves use environmental knowledge to improve the fusion results as well as the sensing quality. Knowledge about how a sensor behaves under certain environmental conditions can lead to the exclusion of suspect readings from the fusion process. Third, the control scheme allows the system to respond to unexpected or catastrophic changes in the environment or sensors by permitting transitions between states. When an unacceptable discordance is detected between features, the investigatory phase is re-invoked, the system reconfigured, and instantiated in a new state.
We report on progress in using multiple passive sensors to allow a mobile robot to operate successfully in a people populated unstructured indoor environment. The robot locates, follows and approaches people using rotating pairs of passive pyro-electric sensors to determine range and heading of candidate people, using fixed pyro-electric sensors to locate rapidly moving people, and passive touch sensors to avoid local obstacles. The sensors feed directly into behavioral modules, rather than undergo a fusion process. Work is underway to integrate vision in a similar manner.
Based on Navigation Templates (or NaTs), this work presents a new paradigm for local navigation which addresses the noisy and uncertain nature of sensor data. Rather than creating a new navigation plan each time the robot's perception of the world changes, the technique incorporates perceptual changes directly into the existing navigation plan. In this way, the robot's navigation plan is quickly and continuously modified, resulting in actions that remain coordinated with its changing perception of the world.
This paper describes skyline-based terrain matching, a new method for locating the vantage point of laser rangefinding measurements on a global map previously prepared by satellite or aerial mapping. Skylines can be extracted from the range-finding measurements and modelled from the global map, and are represented in parametric, cylindrical form with azimuth angle as the independent variable. The three translational parameters of the vantage point are determined with a three-dimensional matching of these two sets of skylines.
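A drastically simplified version of the matching idea can be sketched in one dimension: with skylines represented as elevation profiles parameterized by azimuth, a heading offset is recovered by finding the circular shift that best aligns the observed profile with the modelled one. The full method recovers three translational parameters; this sketch, with an invented synthetic profile, only recovers a rotational offset.

```python
import numpy as np

def best_azimuth_shift(observed, modeled):
    """Circular shift (in samples) of `observed` that best matches `modeled`,
    found by brute-force sum of squared differences over all shifts."""
    n = len(modeled)
    errors = [np.sum((np.roll(observed, s) - modeled) ** 2) for s in range(n)]
    return int(np.argmin(errors))

# Synthetic skyline: elevation angle as a function of azimuth (360 samples)
az = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
skyline = 10.0 + 3.0 * np.sin(2 * az) + np.cos(3 * az)

observed = np.roll(skyline, -45)           # sensed profile, 45 samples off in heading
shift = best_azimuth_shift(observed, skyline)
```

The parametric, cylindrical representation described in the abstract is what makes this kind of one-dimensional search over azimuth possible at all.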
An intelligent agent must understand its surroundings by integrating sensory data from many sources over time. This integration typically consists of processing raw data into abstract models that fuse the data from many sensors into a consistent interpretation. This process is often quite complex when attempted with raw data because noise, uncertainty, and missing information create ambiguities that cannot be resolved until after an interpretation is chosen. The very same problems exist in generating a consistent interpretation of data over time, in particular, identifying an object as having been "seen" before. This paper suggests that sensor interpretation and model building are active processes driven by an agent's goals, and that many sensor fusion issues are really issues in planning and acting. An object identification system based on consciously gathering appropriate new data is presented as an example of doing active task-directed data fusion.
This paper describes a multiresolution approach combining stereo intensity data with data produced from a triangulation laser range finder. Stereo data can be acquired rapidly at high-resolution, but the complexity of the correspondence problem has led researchers to turn to other depth derivation methods. A recent trend has been to use laser range finders which provide direct depth information. Due to limitations of the technology and cost, most available laser range finders produce relatively noisy and sometimes missing depth information at a comparatively lower speed. This research explores the idea of a multiresolution approach for the acquisition of a complete, relatively noise-free, and high-resolution depth map from a low-resolution triangulation range image and a stereo pair of high-resolution intensity images.
Stereo intensity data are collected using an additional camera orthogonal to the laser range finding configuration, thus providing depth information for surfaces invisible to the laser range finding system. It also provides smoother depth values at edges, where many laser reflectance problems occur, and finer detail, since intensity data is capable of highlighting more detailed features of an object. Depth information from the laser range data is used at edges common to one of the stereo intensity images, thus reducing the edge correspondence problem by constraining the search for edges in stereo matching. These common edges, and the inter- and intra-level linking of edges in the pyramid, allow a process where the coarse laser depth information drives a multiresolution stereo matching process to construct a high-resolution depth map.
Landmark recognition is a task required of many robotic systems. In this work, we examine the use of a constrained Hough transform used by a mobile robot to locate a docking workstation. This algorithm deals with the uncertainty inherent in a mobile robot by making use of a spatial uncertainty map maintained by the robot. Several iterations of the Hough transform are run with transformed models of the dock. Votes are accumulated in a collapsed Hough space which, although unable to recover range and orientation information, simplifies locating the dock within the image.
A new parallel analytic data fusion method has been developed and tested on real image pairs. This fusion algorithm is based on the interaction between two analytically formulated constraints: (1) the principle of Knowledge Source Aggregation, and (2) the principle of Belief Enhancement/Withdrawal. In this paper, we discuss ways in which a message-passing multiprocessor employing the hypercube interconnection topology is exploited in order to achieve optimal speed-up in the parallel data fusion algorithm. Image parallelism is optimized by having multiple processors execute the same task but operate on different subsets of the data. Two numerical methods used to solve a system of partial differential equations resulting from the use of the Euler-Lagrange equation for the fusion process are compared. Tests conducted on an NCUBE/4 parallel computer have resulted in an effective implementation of the complete fusion process.
Using polynomials in 3 variables to represent 3-D objects, we have developed surface determination, shape decomposition and template matching algorithms. It is shown that all the morphological operations on images can be done using the polynomial approach.
A hierarchical 3-D multiview representation is used to model targets so that they can be efficiently recognized using sensor platforms distributed over a common geographical area. The hierarchical structure reduces storage requirements and the communication bandwidth required for data association between platforms at each stage of the recognition process.
Evaluation and Selection of Sensor Fusion Techniques
The purpose of this article is to describe research in sensor fusion with statistical decision theory in the GRASP Lab, Department of Computer and Information Science, University of Pennsylvania. This article is thus a tutorial overview of the general research problem, the mathematical framework for the analysis, and the results of specific research problems. The intended audience for this article is a reader seeking a self-contained summary of the research. The prerequisite for understanding this article is familiarity with basic mathematical statistics.
Many approaches to data fusion involve the use of least squares methods. Such methods are typically used for parameter estimation in applications such as pose estimation, motion analysis, shape estimation, and camera calibration. In this paper we describe the general least squares problem and some common solution methods, and overview its use in several robotic applications.
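The general pattern the abstract refers to, writing an estimation problem as an overdetermined linear system and solving it by least squares, can be sketched for a pose-like case. The example below, with invented points and parameters, recovers a 2-D similarity transform (rotation-scale pair (a, b) plus translation) from point correspondences; it illustrates the pattern, not any particular application in the paper.

```python
import numpy as np

# Source points and their images under a known similarity transform
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
a_true, b_true = 0.8, 0.6                      # rotation+scale parameters
t_true = np.array([2.0, -1.0])                 # translation
R = np.array([[a_true, -b_true], [b_true, a_true]])
dst = src @ R.T + t_true

# Each correspondence gives two linear equations in x = (a, b, tx, ty):
#   qx = a*px - b*py + tx,   qy = b*px + a*py + ty
rows, rhs = [], []
for (px, py), (qx, qy) in zip(src, dst):
    rows.append([px, -py, 1.0, 0.0]); rhs.append(qx)
    rows.append([py,  px, 0.0, 1.0]); rhs.append(qy)
A = np.array(rows)
y = np.array(rhs)

params, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares solution
```

With more correspondences than unknowns, the same call minimizes the residual in the presence of measurement noise, which is how least squares is typically used in pose estimation, motion analysis, and calibration.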
A unified information fusion, decision and control scheme is presented at different levels of data abstraction for multisensor-based robotic systems. It is based on the framework of Bayesian networks. Shannon's measure of mutual information and error-based measures are used in the selection of information sources.
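Shannon's mutual information as a criterion for choosing among information sources can be sketched as follows; the joint distributions are invented for illustration, and this is not the paper's scheme, only the underlying measure.

```python
import numpy as np

def mutual_information(joint):
    """Shannon mutual information I(X;Y) in bits from a joint pmf table."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)     # marginal over rows (state)
    py = joint.sum(axis=0, keepdims=True)     # marginal over columns (reading)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Hypothetical joint distributions p(state, reading) for two candidate sensors
informative = np.array([[0.45, 0.05],
                        [0.05, 0.45]])        # reading tracks the state closely
noisy = np.array([[0.25, 0.25],
                  [0.25, 0.25]])              # reading independent of the state

mi_a = mutual_information(informative)        # high: worth querying
mi_b = mutual_information(noisy)              # zero: carries no information
```

A decision rule that prefers the source with the larger mutual information would here select the first sensor, which is the intuition behind information-theoretic source selection.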
Segmentation of a range and intensity image pair of a 3-D scene for model-based 3-D object recognition is considered. Hierarchical algorithms exist for segmenting each image individually. We emphasize the degree of similarity in the computation at each level of the hierarchy and note that, in some cases, at the intermediate levels of abstraction (such as local shape from shading) the uncertainties that exist in range and intensity images are of a complementary nature. Certain geometric and surface constraints are illustrated to have such a property. A multistage segmentation approach that follows a hierarchical computation in lock-step is then developed. The possibility of parallelizing the computation at different levels is also examined.
The use of multiple disparate sensors, such as range and visible-light sensors, is commonplace on most modern robotic platforms. Combination-of-evidence techniques, including Bayesian and Dempster-Shafer methods, can be used to resolve some of the ambiguity in sensor outputs for scene segmentation purposes. This paper compares various versions of the Dempster-Shafer formalism and a neural network model for integration of disparate sensor outputs.
Information gathered by different knowledge sources from the same scene is often uncertain, imprecise, fuzzy, vague, or incomplete. Numerous papers have appeared in the literature dealing with the fusion of this kind of information using different frameworks. In this paper, we review a number of non-deterministic methods for solving the fusion problem. The use of Bayes' rule in resolving ambiguities and conflicts associated with given bodies of evidence is examined. We also present the theory of belief (i.e., Dempster's rule of combination) and its use in evidence fusion. The theory of possibility, which has emerged from the theory of fuzzy sets, the symmetric sums, and other hybrid techniques are also examined. A meaningful comparison among all these methods is carried out using the same set of synthetic data presented in various frameworks. This example is inspired by a real robotic experiment. The strengths and weaknesses of these techniques are discussed in some detail. Based upon the performance of each method on this particular fusion problem, general comparative remarks are given.
Multi-sensor fusion deals with the combination of complementary and sometimes contradictory sensor data into a reliable estimate of the environment, to achieve a sum which is better than the parts. Multiple sensors can be used to overcome problems associated with object recognition systems. The introduction of multiple sensors into such a system emphasizes the need for useful methods for combining sensor outputs. Multiple sensors can yield duplicate information that can be used to verify input and possibly to ease the task of object recognition. Since each sensor output contains noise, multiple sensors can be used to determine the same property, but with the consensus of all sensors. We introduce a Bayesian approach for combining sensor outputs that increases the confidence in features supported by multiple sensors and reduces the confidence in unsupported features. This paper describes how feature-level input from an arbitrary number of sensors may be combined to make 3-D object recognition more accurate. An example involving features from range, intensity, and tactile sensors is given.
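The qualitative behavior described, confidence rising for features supported by several sensors and falling for unsupported ones, is exactly what Bayes' rule produces when independent evidence is accumulated in log-odds form. The sketch below is a generic illustration under an independence assumption, with invented likelihood ratios, not the paper's specific formulation.

```python
import math

def fuse_bayes(prior, likelihood_ratios):
    """Fuse independent sensor evidence for a feature via Bayes' rule.
    Each ratio is P(observation | feature) / P(observation | no feature)."""
    log_odds = math.log(prior / (1.0 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)          # each sensor adds its log-evidence
    odds = math.exp(log_odds)
    return odds / (1.0 + odds)            # back to a posterior probability

# Hypothetical: range and intensity support the feature; tactile is weakly against
posterior = fuse_bayes(0.5, [3.0, 2.5, 0.8])   # 0.5*... -> odds = 6, posterior = 6/7
```

Any number of sensors can contribute a ratio to the same sum, which is why this formulation extends naturally to "an arbitrary number of sensors" as the abstract puts it.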
Distributed Decision (Evidence) Fusion (DD(E)F) exhibits some interesting characteristics which are not present in centralized, or raw data, fusion. These characteristics relate to the semantic information that the decisions (in the broader sense of the term) convey, which is not present, at least explicitly, when raw data is fused. Different theories and results related to DD(E)F have appeared in the literature. Each theory takes a different stand on the definition of how to measure evidence or combine decisions. The objective of this paper is to investigate the nature of DD(E)F and establish a comparative basis between the two most prominent theories in DD(E)F, namely the Bayesian and Dempster-Shafer theories. To that end, the similarities and differences between the two theories that result from the semantic differences in the format of the fused information are investigated. A performance comparison between the two theories is attempted. A Generalized Evidence Processing (GEP) theory that extends the Bayesian approach into fuzzy decision making is used to compare the performance of a Bayesian soft decision making system with that of a hard decision making Bayesian system. The similarities and differences between the GEP combining rule and Dempster's combining rule are discussed and a consistency comparison between the two rules is performed.
Before combining measurement data from multiple sensors, it is first necessary to identify those measurements that correspond to the same “world” feature or target. This paper addresses three topics related to the problem of establishing feature correspondence (data association). First, a standard maximum likelihood (ML) decision rule for feature correspondence is reviewed, emphasizing the relationship between the decision rule and the Kalman filter model structure. Next, four measurement “primitives” are developed as a convenient data representation for fusing diverse measurements. First-order models can be constructed from these primitives for a wide range of sensor types and applications.
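A common form of the ML decision rule the abstract refers to can be sketched as a Mahalanobis-distance test against each track's Kalman-predicted measurement. The details below (the chi-square gate value, the generic interface) are assumptions for illustration, not the paper's exact rule.

```python
import numpy as np

def ml_associate(z, predictions, covariances, gate=9.21):
    """Return index of the best-matching track, or None if all fail the gate.

    z            -- measurement vector
    predictions  -- predicted measurements z_hat from each track's Kalman filter
    covariances  -- innovation covariances S = H P H^T + R, one per track
    gate         -- chi-square threshold (9.21 ~ 99% for 2 DOF; assumed value)
    """
    best, best_d2 = None, gate
    for i, (z_hat, S) in enumerate(zip(predictions, covariances)):
        nu = z - z_hat                              # innovation (residual)
        d2 = float(nu @ np.linalg.solve(S, nu))     # squared Mahalanobis distance
        if d2 < best_d2:
            best, best_d2 = i, d2
    return best

# Two hypothetical tracks; the measurement clearly belongs to the first.
tracks = [np.array([1.1, 2.1]), np.array([5.0, 5.0])]
covs = [np.eye(2), np.eye(2)]
match = ml_associate(np.array([1.0, 2.0]), tracks, covs)
```

Under Gaussian assumptions, minimizing this distance is equivalent to maximizing the measurement likelihood, which is the link to the Kalman filter model structure the paper emphasizes.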
Finally, these ideas are illustrated with examples involving stereo (binocular) image feature correspondence and fusion, starting with a two-dimensional example which is then generalized to the three-dimensional case of practical interest. A novel method is presented for registering observed features from two (or more) cameras that provides “triangulation” range estimates along with feature correspondence statistics. The stereo image association problem is also addressed for the case when both cameras measure optical flow (angle-rate) of discrete features.
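The flavor of triangulation range estimate mentioned above can be illustrated for the simplest rectified-stereo geometry, where depth follows from horizontal disparity. This is a sketch under assumed parameters (baseline, focal length in pixels), not the paper's general method or its correspondence statistics.

```python
def stereo_range(x_left, x_right, baseline, focal):
    """Depth from horizontal disparity in rectified stereo: Z = f * b / d."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    return focal * baseline / disparity

# Assumed example: a corresponded feature with 10 px disparity,
# focal length 500 px, and a 0.2 m baseline lies 10 m away.
z = stereo_range(110.0, 100.0, baseline=0.2, focal=500.0)
```

Because range varies inversely with disparity, small disparity errors at long range produce large depth errors, which is why correspondence statistics of the kind the paper develops matter in practice.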