Modern computational science poses two challenges for scientific visualization: managing the size of resulting
datasets and extracting maximum knowledge from them. While our team attacks the first problem by implementing
parallel visualization algorithms on supercomputing architectures at vast scale, we are experimenting
with autostereoscopic display technology to aid scientists in the second challenge. We are building a visualization
framework connecting parallel visualization algorithms running on one of the world's most powerful supercomputers
with high-quality autostereo display systems. This paper is a case study of the development of an end-to-end
solution that couples scalable volume rendering on thousands of supercomputer cores to the scientists' interaction
with autostereo volume rendering at their desktops and larger display spaces. We discuss modifications to our
volume rendering algorithm to produce perspective stereo images, their transport from supercomputer to display
system, and the scientists' 3D interactions. A lightweight display client software architecture supports a variety
of monoscopic and autostereoscopic display technologies through a flexible configuration framework. This case
study provides a foundation that future research can build upon in order to examine how autostereo immersion
in scientific data can improve understanding and perhaps enable new discoveries.
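The abstract mentions modifying the volume renderer to produce perspective stereo images for a head-tracked display. A standard way to do this is to render each eye with an off-axis (asymmetric) perspective frustum computed from the tracked eye position relative to the physical screen. The sketch below is illustrative only, not the authors' implementation; all function names and parameters are assumptions.

```python
def off_axis_frustum(eye, screen_w, screen_h, near):
    """Frustum bounds at the near plane for an eye at (x, y, d),
    measured relative to the center of a screen_w x screen_h screen
    lying in the z = 0 plane, with the eye at distance d in front of it.
    """
    x, y, d = eye
    scale = near / d  # similar triangles: project screen edges onto near plane
    left = (-screen_w / 2 - x) * scale
    right = (screen_w / 2 - x) * scale
    bottom = (-screen_h / 2 - y) * scale
    top = (screen_h / 2 - y) * scale
    return left, right, bottom, top


def stereo_frusta(head, ipd, screen_w, screen_h, near):
    """One asymmetric frustum per eye, offset horizontally by half the
    interpupillary distance (ipd) from the tracked head position."""
    hx, hy, hz = head
    left_eye = (hx - ipd / 2, hy, hz)
    right_eye = (hx + ipd / 2, hy, hz)
    return (off_axis_frustum(left_eye, screen_w, screen_h, near),
            off_axis_frustum(right_eye, screen_w, screen_h, near))
```

The returned bounds correspond to the parameters of a conventional asymmetric projection matrix (as in OpenGL's `glFrustum`), so a centered head yields a symmetric frustum and an off-center head skews it.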
Autostereoscopy (AS) is an increasingly valuable virtual reality (VR) display technology; indeed, the IS&T / SPIE
Electronic Imaging Conference has seen rapid growth in the number and scope of AS papers in recent years. The first
Varrier paper appeared at SPIE in 2001, and much has changed since then. What began as a single-panel prototype has
grown to a full-scale VR autostereo display system, with a variety of form factors, features, and options. Varrier is a
barrier-strip AS display system that qualifies as a true VR display, offering a head-tracked, ortho-stereo, first-person
interactive VR experience without the need for glasses or other gear to be worn by the user.
Since Varrier's inception, new algorithmic and systemic developments have produced performance and quality
improvements. Visual acuity has increased by a factor of 1.4 with new fine-resolution barrier strip linescreens and
computational algorithms that support variable sub-pixel resolutions. Performance has improved by a factor of 3 using
a new GPU shader-based sub-pixel algorithm that accomplishes in one pass what previously required three passes. The
Varrier modulation algorithm that began as a computationally expensive task is now no more costly than conventional
stereoscopic rendering. Interactive rendering rates of 60 Hz are now possible in Varrier for complex scene geometry on
the order of 100K vertices; performance is GPU-bound and is therefore expected to continue improving with graphics
card enhancements.
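The modulation algorithm decides, per sub-pixel, which eye's image is visible through the barrier-strip linescreen, given the tracked eye positions. The following is a deliberately simplified geometric model of that idea, not Varrier's actual shader: it traces the ray from an eye through each sub-pixel to the barrier plane and tests which half of the barrier period the ray crosses. All names and the single-axis geometry are assumptions for illustration.

```python
def eye_for_subpixel(x, eye_x, eye_z, barrier_gap, pitch, offset=0.0):
    """Simplified barrier-strip channel assignment for one sub-pixel.

    x           -- horizontal position of the sub-pixel on the screen plane (z = 0)
    eye_x, eye_z-- tracked eye position (eye_z = distance from screen)
    barrier_gap -- distance between the screen plane and the barrier plane
    pitch       -- period of the barrier linescreen
    offset      -- horizontal registration offset of the barrier
    """
    # Similar triangles: where the eye->sub-pixel ray crosses the barrier plane.
    xb = x + (eye_x - x) * barrier_gap / eye_z
    # Which half of the barrier period the crossing falls in decides the channel.
    phase = ((xb - offset) / pitch) % 1.0
    return 'left' if phase < 0.5 else 'right'
```

A real implementation evaluates this per sub-pixel (R, G, B separately) in a fragment shader for both tracked eyes, which is what makes the one-pass GPU formulation mentioned above attractive.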
Head tracking is accomplished with a camera-based neural-network tracking system developed at EVL for Varrier.
Multiple cameras capture subjects at 120 Hz and the neural network recognizes known faces from a database and tracks
them in 3D space. New faces are trained and added to the database in a matter of minutes, and accuracy is comparable
to commercially available tracking systems.
Varrier supports a variety of VR applications, including visualization of polygonal, ray traced, and volume rendered
data. Both AS movie playback of pre-rendered stereo frames and interactive manipulation of 3D models are supported.
Local as well as distributed computation is employed in various applications. Long-distance collaboration has been
demonstrated with AS teleconferencing in Varrier. A variety of application domains such as art, medicine, and science
have been exhibited, and Varrier exists in a variety of form factors from large tiled installations to smaller desktop
forms to fit a variety of space and budget constraints.
The newest developments include a dynamic parallax barrier, which affords features that were inconceivable with a
static barrier.
This paper describes a cost-effective, real-time (640x480 at 30 Hz) upright frontal face detector, part of an ongoing project to develop a video-based, tetherless 3D head position and orientation tracking system. The work specifically targets autostereoscopic displays and projection-based virtual reality systems. The proposed face detector is based on a modified LAMSTAR neural network. At the input stage, after image normalization and equalization, a sub-window scans the image for facial features. The sub-window is segmented, and each segment is fed to a neural network layer consisting of a Kohonen Self-Organizing Map (SOM). The outputs of the SOM networks are interconnected by correlation links, so the system can determine the presence of a face with enough redundancy to provide a high detection rate. To avoid tracking multiple faces simultaneously, the system is initially trained to track only the face centered in a box superimposed on the display. The detector is also rotationally and size invariant to a certain degree.
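The LAMSTAR architecture described above combines per-segment SOM layers with correlation links that vote on the face/non-face decision. The sketch below illustrates only that decision structure under simplifying assumptions (pre-trained SOM weights, a single scalar link weight per neuron); it is not the paper's system, and all names are hypothetical.

```python
import numpy as np

def som_winner(weights, x):
    """Index of the best-matching unit: the SOM neuron whose weight
    vector is closest (Euclidean distance) to the input vector x."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

def face_score(subwindow_parts, som_layers, link_weights):
    """Accumulate correlation-link evidence across sub-window segments.

    subwindow_parts -- one feature vector per segment of the sub-window
    som_layers      -- one SOM weight matrix (neurons x features) per segment
    link_weights    -- per-neuron link strength toward the 'face' decision

    Each segment votes through the link weight of its winning neuron;
    a face would be declared when the total exceeds some threshold.
    """
    score = 0.0
    for part, weights, links in zip(subwindow_parts, som_layers, link_weights):
        score += links[som_winner(weights, part)]
    return score
```

The redundancy mentioned in the abstract comes from this voting: a poor match in one segment (e.g., an occluded eye) is outweighed by strong matches in the others.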