Mobile eye-tracking offers a rare opportunity to record and elucidate cognition in action. In our research,
we search for patterns in, and distinctions between, the visual-search performance of experts and novices in the
geosciences. Traveling to regions shaped by various geological processes as part of an introductory field studies
course in geology, we record the prima facie gaze patterns of experts and novices as they are asked to determine the
modes of geological activity that formed the scene presented to them. Recording eye video and scene video
in natural settings generates complex imagery that requires advanced computer-vision techniques to generate
registrations and mappings between the views of separate observers. With such mappings, we can place
many observers into a single mathematical space in which inter- and intra-subject fixations,
saccades, and head motions can be analyzed spatio-temporally. While working toward perfecting these mappings, we developed an updated experimental
setup that allows us to statistically analyze intra-subject eye-movement events without the need for a common domain.
Through such analyses we are finding statistical differences between novices and experts in these visual-search tasks. In
the course of this research we have developed a unified, open-source software framework for processing, visualization,
and interaction with mobile eye-tracking data and high-resolution panoramic imagery.
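The mapping of observers into a single shared space can be sketched as follows. This is only an illustrative sketch, not the authors' framework: it assumes an upstream registration step has already estimated a 3x3 planar homography `H` from one observer's scene-camera frame into the common panorama frame, and simply applies it to a gaze point in homogeneous coordinates.

```python
import numpy as np

def map_gaze_to_panorama(gaze_xy, H):
    """Map a fixation point from one observer's scene-camera frame into a
    common panorama frame via a 3x3 homography H (hypothetical example:
    H is assumed to come from an upstream computer-vision registration)."""
    x, y = gaze_xy
    # Apply H in homogeneous coordinates, then dehomogenize.
    p = H @ np.array([x, y, 1.0])
    return (p[0] / p[2], p[1] / p[2])

# Example: a homography that scales by 2 and translates by (10, 20).
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 2.0, 20.0],
              [0.0, 0.0, 1.0]])
print(map_gaze_to_panorama((5.0, 5.0), H))  # (20.0, 30.0)
```

Once every observer's fixations are mapped through their own homography into the panorama frame, inter-subject comparisons reduce to distances and timings in that one coordinate system.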
Contemporary research in automated panorama creation utilizes camera calibration or extensive knowledge of camera locations and relations to each other to achieve successful results. Research in image registration attempts to restrict these same camera parameters or apply complex point-matching schemes to overcome the complications found in real-world scenarios. This paper presents a novel automated panorama creation algorithm by developing an affine transformation search based on maximized mutual information (MMI) for region-based registration. Standard MMI techniques have been limited to applications with airborne/satellite imagery or medical images. We show that a novel MMI algorithm can approximate an accurate registration between views of realistic scenes of varying depth distortion. The proposed algorithm has been developed using stationary, color, surveillance video data for a scenario with no a priori camera-to-camera parameters. This algorithm is robust for strict- and nearly-affine-related scenes, while providing a useful approximation for the overlap regions in scenes related by a projective homography or a more complex transformation, allowing for a set of efficient and accurate initial conditions for pixel-based registration.
Automated panorama creation usually requires camera calibration or extensive knowledge of camera locations and
their relations to each other. Registration problems are often solved using these same camera parameters or through complex
point-matching schemes. This paper presents a novel automated panorama creation algorithm that uses an affine
transformation search based on maximized mutual information (MMI). MMI techniques are typically limited to airborne and
satellite imagery or medical images, but we show that a simple MMI algorithm closely approximates the registration of realistic
scenes with varying depth distortion. This study was performed on stationary color surveillance video cameras and applies
to any system with limited or no a priori camera-to-camera parameters. The algorithm is
robust over a large range of strict- to nearly-affine-related scenes, and it provides a useful approximation of the overlap
regions in scenes related by a projective homography. Practical considerations were significant in shaping
the development of this robust and versatile algorithm.
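The MMI-based search described above can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the authors' implementation: mutual information is estimated from a joint intensity histogram, the affine search is restricted to integer translations, and the data are synthetic grayscale patches.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally sized grayscale patches,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal of a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of b
    nz = pxy > 0  # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def best_shift(ref, mov, max_shift=5):
    """Exhaustive search over integer translations (a restricted affine
    family): return the (dy, dx) that maximizes MI over the overlap."""
    h, w = ref.shape
    best, best_mi = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Crop both images to their overlapping region for this shift.
            ref_ov = ref[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
            mov_ov = mov[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
            mi = mutual_information(ref_ov, mov_ov)
            if mi > best_mi:
                best_mi, best = mi, (dy, dx)
    return best

# Synthetic test: two overlapping crops of one random image,
# offset by (2, 3); the MI search should recover that offset.
rng = np.random.default_rng(0)
big = rng.random((80, 80))
ref = big[10:60, 10:60]
mov = big[12:62, 13:63]
print(best_shift(ref, mov, max_shift=4))  # (2, 3)
```

A full implementation would search over the remaining affine parameters (rotation, scale, shear) as well; as the abstract notes, the MMI estimate then serves as an efficient initial condition for a finer pixel-based registration.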