For most of the past 100 years, cinema has been the premier medium for defining and expressing relations to the visible world. However, cinematic spectacles delivered in darkened theaters are predicated on a denial of both the body and the physical surroundings of the spectators watching them. To overcome these deficiencies, filmmakers have historically turned to narrative, seducing audiences with compelling stories and providing realistic characters with whom to identify. This paper describes several research projects in interactive panoramic cinema that attempt to sidestep the narrative preoccupations of conventional cinema and are instead based on notions of space, movement, and embodied spectatorship rather than traditional storytelling. Example projects include interactive works developed with a unique 360-degree camera and editing system, as well as panoramic imagery developed for a large projection environment with 14 screens on 3 adjacent walls in a 5-4-5 configuration. Observations and findings are reported from an experiment that projected panoramic video on 12 of the 14 screens in a 4-4-4, 270-degree configuration.
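To make the display geometry concrete, the following minimal sketch (an illustration with assumed frame dimensions, not part of the projects described) divides a 270-degree panoramic frame into twelve equal-width viewports for a 4-4-4 arrangement across three walls.

```python
# Hypothetical sketch: slicing a 270-degree panoramic frame into twelve
# equal viewports for a 4-4-4 arrangement on three walls. The field of view
# and source frame width are illustrative assumptions.

FOV_DEGREES = 270.0          # total horizontal coverage of the three walls
SCREENS_PER_WALL = 4
WALLS = 3
FRAME_WIDTH = 3600           # assumed pixel width of the panoramic source frame

def viewport_bounds(frame_width=FRAME_WIDTH):
    """Return (wall, screen, x_start, x_end) pixel columns for each screen."""
    screens = WALLS * SCREENS_PER_WALL
    cols_per_screen = frame_width / screens
    bounds = []
    for i in range(screens):
        wall, screen = divmod(i, SCREENS_PER_WALL)
        x0 = int(round(i * cols_per_screen))
        x1 = int(round((i + 1) * cols_per_screen))
        bounds.append((wall, screen, x0, x1))
    return bounds

if __name__ == "__main__":
    for wall, screen, x0, x1 in viewport_bounds():
        span = (x1 - x0) / FRAME_WIDTH * FOV_DEGREES
        print(f"wall {wall} screen {screen}: columns {x0}-{x1} (~{span:.1f} deg)")
```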
This system has been under development at Keio University in Japan and brings together several techniques, including Micro Archiving and interactive stereoscopic displays. The exhibit, shown at SIGGRAPH, invites visitors to visualize and interact with microscopic structures that cannot be seen with the naked eye but commonly exist in our everyday surroundings. It presents a virtual world in which dead specimens of bugs come back to life as virtual bugs and walk around freely; visitors interact with these virtual bugs and examine the virtual models in detail.
The Virtual Explorer project at the University of California, San Diego, is creating immersive, highly interactive virtual environments for scientific visualization and education. We are creating an integrated model system to demonstrate the potential applications of VR in the educational arena, and are also developing a modular software framework for the further development of the Virtual Explorer model in other fields.
Since the late 80s, the popular imagination surrounding virtual systems has been lively and contested, an intriguing brew of cyberpunk fiction, government and corporate research, and product development, with a dash of countercultural excess. Virtual systems, in their myriad forms, have captured the interest not only of scientists and engineers, but also of a broad spectrum of social actors, including the popular and alternative press, fiction and comic writers, visual artists, film and television producers, as well as large sectors of a curious public, all of whom have produced diverse and creative images of these systems for a range of different audiences. The circulation of images of virtual systems points to some of the ways in which the production of technology can be located not only in engineering labs but also in various realms of mass media and public culture. Focusing on images of gloves and goggles, this paper describes some of the pathways through which images of virtual systems have traveled.
Today, the media of VR and telepresence are in their infancy, and the emphasis is still on technology and engineering. But it is not the hardware people might use that will determine whether VR becomes a powerful medium; rather, it is the experiences they are able to have that will drive its acceptance and impact. A critical challenge in the elaboration of these telepresence capabilities will be the development of environments that are as unpredictable and rich in interconnected processes as an actual location or experience. This paper describes the recent development of several Virtual Experiences, including: `Menagerie', an immersive virtual environment inhabited by virtual characters designed to respond to and interact with its users; and `The Virtual Brewery', an immersive public VR installation that provides multiple levels of interaction in an artistic interpretation of the brewing process.
In earlier work at the NASA/Ames Research Center, there was a need to develop a standard hardware platform for supporting multiple virtual environment display systems. Besides providing the electrical interface between different display types and various graphics systems, the platform was also required to support common auxiliary functions that were not otherwise easily implemented. Examples of these auxiliary functions include interfacing with video camera systems, recording and playback of stereo video on conventional equipment (including portable recorders), generating gray-level calibration signals for display setup, generating alignment patterns for interpupillary distance adjustment, and image reversal for `mirrored' image correction. The platform concept evolved through several iterations into a bus-oriented, modular system employing a standard P1 VME backplane and a suite of plug-in function modules. Six systems were constructed between 1987 and 1988, and most are still in use. The platform design details (including schematics and fabrication drawings) have been made publicly available through the NASA/Ames Office of Technology Utilization. A survey of display system offerings from the proliferating list of current manufacturers found no commercially available equivalent to this `standard' platform. In the belief that such a platform would be of use within the virtual reality community, this paper describes the platform's functions, its rationale, and the general electronics associated with implementing them.
This paper describes an ongoing effort to develop one of the first fully immersive virtual environment installations that is inhabited by virtual characters and presences specially designed to respond to and interact with its users. This experience allows a visitor to become visually and aurally immersed in a 3D computer generated environment that is inhabited by many virtual animals. As a user explores the virtual space, he/she encounters several species of computer generated animals, birds, and insects that move about independently, and interactively respond to the user's presence in various ways. The hardware configuration of this system includes a head-coupled, stereoscopic color viewer, and special DSP hardware that provides realistic, 3D localized sound cues linked to characters and events in the virtual space. Also, the virtual environment and characters surrounding the user are generated by a high performance, real-time computer graphics platform. The paper describes the computer programs that model the motion of the animals, the system configuration that supports the experience, and the design issues involved in developing a virtual environment system for public installation.
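The abstract does not give the animals' motion algorithms; as a hedged illustration of the behavior it describes (creatures moving about independently and reacting to the user's presence), a minimal wander-and-react update might look like the sketch below. The class, parameters, and reaction rule are assumptions for illustration, not the installation's actual code.

```python
# Illustrative sketch (not the installation's actual code): each virtual
# animal wanders on its own and turns away when the user comes within an
# assumed reaction radius.
import math
import random

class Animal:
    def __init__(self, x, y, speed=0.5, flee_radius=3.0):
        self.x, self.y = x, y
        self.heading = random.uniform(0.0, 2.0 * math.pi)
        self.speed = speed                # assumed movement speed
        self.flee_radius = flee_radius    # assumed reaction distance

    def update(self, user_x, user_y, dt):
        dx, dy = self.x - user_x, self.y - user_y
        dist = math.hypot(dx, dy)
        if 1e-6 < dist < self.flee_radius:
            # React to the user's presence: head directly away.
            self.heading = math.atan2(dy, dx)
        else:
            # Independent wandering: small random drift in heading.
            self.heading += random.uniform(-0.2, 0.2)
        self.x += math.cos(self.heading) * self.speed * dt
        self.y += math.sin(self.heading) * self.speed * dt

if __name__ == "__main__":
    bird = Animal(2.0, 0.0)
    for _ in range(10):                   # ten simulation steps
        bird.update(user_x=0.0, user_y=0.0, dt=0.1)
    print(round(bird.x, 2), round(bird.y, 2))
```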
KEYWORDS: Virtual reality, 3D modeling, Surgery, Visualization, Human-machine interfaces, 3D displays, Bone, Databases, Data modeling, Visual process modeling
A virtual environment system has been developed for viewing and manipulating a model of the human leg. The model can be used to simulate the biomechanical consequences of various reconstructive surgical procedures. Previously, the model was implemented on a standard engineering workstation, and interaction was limited to a mouse and screen cursor. By incorporating the leg model into a virtual environment, the authors were able to assess the value of a head-coupled stereo display and direct 3-D manipulation for a surgery simulation application. This application is an interesting test case for a virtual environment because it requires visualization and manipulation of complex 3-D geometries. Since the model can be used as the basis for a number of biomechanical analyses, the virtual environment provides an opportunity to visualize the resulting datasets in the context of the 3-D model. The components used in assembling the system are described, the design and implementation of the system are discussed, and a set of interface techniques that allow direct 3-D interaction with the model is presented.
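One common way to realize the direct 3-D manipulation mentioned above is a grab-and-drag interaction in which a selected object follows the displacement of the tracked hand while a button is held. The sketch below illustrates that technique under assumed names (TrackerSample, Model); it is not the system's actual interface code.

```python
# Hedged sketch of a grab-and-drag interaction: while the grab button is
# held, the model follows the displacement of the tracked hand. The
# TrackerSample and Model names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TrackerSample:
    x: float
    y: float
    z: float
    button: bool            # grab button state

@dataclass
class Model:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

def apply_grab(model, prev, curr):
    """Translate the model by the hand's frame-to-frame displacement."""
    if prev.button and curr.button:
        model.x += curr.x - prev.x
        model.y += curr.y - prev.y
        model.z += curr.z - prev.z
    return model

if __name__ == "__main__":
    leg = Model()
    before = TrackerSample(0.0, 0.0, 0.0, button=True)
    after = TrackerSample(0.05, 0.10, 0.0, button=True)
    print(apply_grab(leg, before, after))   # leg moves by (0.05, 0.10, 0.0)
```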
The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a
remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop
and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote
manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted
on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in
coordination with head position transmitted from the user.
This paper provides an overall system description focused on the design and implementation of the camera and
platform hardware configuration and the development of control software. Results of preliminary performance
evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related
psychophysiological effects and objectives.
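The abstract outlines head-coupled control of the camera platform but not the control loop itself. A minimal sketch of the idea is given below: head orientation from the user's tracker is clamped to the platform's mechanical limits and used as pan, tilt, and roll targets on each update. The limit values and example pose are assumptions for illustration, not the VIEW system's actual parameters.

```python
# Hedged sketch of head-coupled camera control: head orientation (yaw, pitch,
# roll, in degrees) is clamped to assumed mechanical limits and returned as
# pan/tilt/roll targets for the platform. Limits are illustrative assumptions.

PAN_LIMIT = 170.0    # degrees, assumed
TILT_LIMIT = 90.0    # degrees, assumed
ROLL_LIMIT = 45.0    # degrees, assumed

def clamp(value, limit):
    return max(-limit, min(limit, value))

def head_to_platform(yaw, pitch, roll):
    """Map head orientation to platform pan/tilt/roll targets within limits."""
    return (clamp(yaw, PAN_LIMIT),
            clamp(pitch, TILT_LIMIT),
            clamp(roll, ROLL_LIMIT))

if __name__ == "__main__":
    # Example update with an assumed head pose of (200, 30, -10) degrees:
    # the pan target saturates at the assumed 170-degree limit.
    print(head_to_platform(200.0, 30.0, -10.0))
```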
This paper considers the issue of total system lag in real-time interactive computer graphics environments. In these
systems, such as virtual environments and simulators, system lag dramatically affects the usability of the system. Two types of lag are discussed in this paper: transmission lag, the time difference between the movement of a sensing device (such as a position tracker) and the display of that device's motion on a graphic display; and position lag, the difference between the actual position of a tracker in motion and the displayed position of the tracker at the same time. Using the Virtual Interface Environment Workstation being developed at NASA Ames Research Center as the
system to be measured, a method of measuring these types of lag using a video technique is described. The relationship
between the two types of lag is observed and modeled, as well as a relationship between system lag and graphic update rate.
It is found that the position lag can be understood in terms of the transmission lag, so that optimizing a system for small
transmission lag will also optimize for small position lag. Using the results described in this paper the lag in other
systems can be estimated and the effect of graphics performance on system lag can be predicted.
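The reported relationship between the two kinds of lag can be illustrated numerically: for a tracker moving at roughly constant velocity, the displayed position trails the actual position by approximately the velocity multiplied by the transmission lag. The velocity and lag values in the sketch below are assumptions for illustration, not measurements from the VIEW system.

```python
# Illustrative sketch: for a tracker moving at roughly constant velocity,
#   position_lag ~= velocity * transmission_lag.
# The velocity and lag values below are assumptions, not VIEW measurements.

def position_lag(velocity_cm_per_s, transmission_lag_s):
    """Estimate position lag (cm) from transmission lag at constant velocity."""
    return velocity_cm_per_s * transmission_lag_s

if __name__ == "__main__":
    velocity = 50.0                      # cm/s, assumed tracker speed
    for lag_ms in (50, 100, 200):        # assumed transmission lags
        lag_cm = position_lag(velocity, lag_ms / 1000.0)
        print(f"transmission lag {lag_ms} ms -> position lag ~{lag_cm:.1f} cm")
```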
This paper describes the implementation and integration of the Ames counterbalanced CRT-based stereoscopic viewer
(CCSV). The CCSV was developed as a supplementary viewing device for the Virtual Interface Environment
Workstation project at NASA Ames in order to provide higher resolution than is currently possible with LCD based
head-mounted viewers. The CCSV is currently used as the viewing device for a biomechanical CAD environment
which we feel is typical of the applications for which the CCSV is appropriate. The CCSV also interfaces to a remote
stereo camera platform.
The CCSV hardware consists of a counterbalanced kinematic linkage, a dual-CRT stereoscopic viewer with wide-angle optics, a video electronics box, a dedicated microprocessor system that monitors joint angles in the linkage, and a host computer that interprets the sensor values and runs the application, which renders the right and left views for the viewer's CRTs.
CCSV software includes code resident on the microprocessor system, host computer device drivers to communicate
with the microprocessor, a kinematic module to compute viewer position and orientation from sensor values, graphics
routines to change the viewing geometry to match viewer optics and movements, and an interface to the application.
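The kinematic module described above turns joint-angle readings into a viewer position and orientation. A generic planar forward-kinematics sketch of that idea is shown below; the three-joint chain and link lengths are assumptions for illustration, not the CCSV's actual geometry.

```python
# Hedged sketch of the kinematic idea: accumulate joint angles along a serial
# linkage to obtain the viewer's position and orientation. The planar 3-joint
# chain and link lengths are illustrative assumptions, not the CCSV geometry.
import math

LINK_LENGTHS = (0.40, 0.35, 0.20)   # meters, assumed

def forward_kinematics(joint_angles, link_lengths=LINK_LENGTHS):
    """Return (x, y, orientation) of the viewer for a planar serial linkage."""
    x = y = 0.0
    theta = 0.0
    for angle, length in zip(joint_angles, link_lengths):
        theta += angle
        x += length * math.cos(theta)
        y += length * math.sin(theta)
    return x, y, theta

if __name__ == "__main__":
    # Example joint readings (radians), assumed for illustration.
    print(forward_kinematics((0.3, -0.5, 0.2)))
```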
As a viewing device, the CCSV approach is particularly well suited to applications in which 1) the user moves back and forth between virtual environment viewing and desk work, 2) high-resolution views of the virtual environment are required, or 3) the viewing device is to be shared among collaborators in a group setting. To capitalize on these strengths, planned improvements for future CCSVs include: defining an appropriate motion envelope for desktop applications, improving the feel of the kinematics within that envelope, improving the realism of the display by adding color and increasing the spatial resolution, reducing lag, and developing interaction metaphors within the 3D environment.