The unique qualities of the TI DLP devices have enabled a number of interesting applications. The DLP is essentially a fast binary light modulator, and with the power of modern graphics processors these devices can be driven with images computed on the fly at rates of several thousand frames per second. A number of these applications have been developed at the University of Southern California, where this fast modulation is exploited to create a light field display. In another application, the fast modulation is coupled with a synchronized high-speed camera to extract the 3D shape of an object in real time.
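The abstract does not spell out how the shape capture works; one common way to pair a fast binary projector with a synchronized camera is Gray-code structured light, sketched below under that assumption. The capture arrays and thresholding scheme are illustrative placeholders; only NumPy is assumed.

```python
import numpy as np

def decode_gray_code(captures, white, black):
    """Decode a stack of Gray-code pattern captures into per-pixel
    projector column indices.

    captures -- (n_bits, H, W) camera frames, one per projected pattern,
                most significant bit first.
    white, black -- (H, W) reference frames under full-on / full-off
                    illumination, used for per-pixel thresholding.
    """
    threshold = (white.astype(np.float32) + black.astype(np.float32)) / 2.0
    bits = captures.astype(np.float32) > threshold  # (n_bits, H, W) booleans

    # Pack the bit planes into one Gray-coded integer per pixel.
    gray = np.zeros(bits.shape[1:], dtype=np.int64)
    for plane in bits:
        gray = (gray << 1) | plane

    # Convert Gray code to plain binary (standard shift-and-XOR loop).
    mask = gray >> 1
    while mask.any():
        gray ^= mask
        mask >>= 1
    return gray  # (H, W) projector column index per camera pixel
```

Triangulating each camera pixel against its decoded projector column then yields the 3D surface; at thousands of binary frames per second, a full pattern sequence fits comfortably within a real-time budget.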
Starting with a list of typical hand actions - such as touching or twisting - a collection of physical input device prototypes was created to study better ways of engaging the body and mind in the computer-aided design process. These devices were interchangeably coupled with a graphics system to allow rapid exploration of the interplay between the designer's intent, body motions, and the resulting on-screen design. User testing showed that a number of key considerations should influence the future development of such devices: coupling between the physical and virtual worlds, tactile feedback, and scale. It is hoped that these explorations contribute to the greater goal of creating user interface devices that increase the fluency, productivity, and joy of computer-augmented design.
This system, developed at Keio University in Japan, pulls together several techniques, including Micro Archiving and interactive stereoscopic displays. The exhibit, shown at SIGGRAPH, invites visitors to visualize and interact with microscopic structures that cannot be seen with the naked eye but that commonly exist in our everyday surroundings. The exhibit presents a virtual world in which dead insect specimens come back to life as virtual bugs and walk around freely - visitors interact with these virtual bugs and examine the virtual models in detail.
DLP (Digital Light Processing) is about to invade stereo applications, one of the last bastions of CRT projection technology. This paper presents various methods for achieving stereo and their application to DLP projectors. The newly developed sequential-stereo-capable projectors are also introduced, and their performance characteristics and artifacts are discussed. Ways of employing these projectors to support multiple simultaneous viewers are also presented.
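As a minimal sketch of the frame-sequential approach these projectors enable (not the paper's specific drive schemes), the loop below renders left and right views into a quad-buffered OpenGL context; the projector and shutter glasses then alternate the two views. PyOpenGL is assumed, and `render_scene` is a hypothetical callback that shifts the camera by the given eye offset.

```python
from OpenGL.GL import (GL_BACK_LEFT, GL_BACK_RIGHT, GL_COLOR_BUFFER_BIT,
                       GL_DEPTH_BUFFER_BIT, glClear, glDrawBuffer)

EYE_SEPARATION = 0.065  # interocular distance in metres (assumed value)

def draw_stereo_frame(render_scene):
    """Render one frame of sequential stereo into a quad-buffered context."""
    for buffer, eye_offset in ((GL_BACK_LEFT, -EYE_SEPARATION / 2),
                               (GL_BACK_RIGHT, +EYE_SEPARATION / 2)):
        glDrawBuffer(buffer)                             # select per-eye buffer
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        render_scene(eye_offset)                         # hypothetical callback
```

Multiple simultaneous viewers extend the same idea: with enough frame rate, the time slots can be divided among more than two eye views, each viewer's glasses opening only on its own slots.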
This paper presents three methods for classifying and qualifying virtual and immersive environments. The first is to plot modes of use against environment types. The second is to create a matrix analyzing display type against interaction methodology. The third is to analyze the system as if it created a volume of 3D pixels and to determine whether the quality of that pixel volume is appropriate for the given application and use.
KEYWORDS: Visualization, Virtual reality, Analog electronics, 3D modeling, Control systems, Interfaces, Head, Scientific visualization, Hand-held displays, 3D displays
We have built a hand-held palette for touch-based interaction in virtual reality. This palette incorporates a high-resolution digitizing touch screen for input. It is see-through, and therefore does not occlude objects displayed behind it. These properties make it suitable for direct manipulation techniques in a range of virtual reality display systems. We implemented several interaction techniques based on this palette for an interactive scientific visualization task. These techniques, the tool's design, and its limitations are discussed in this paper.
We describe a hand-held user interface for interacting with virtual environments displayed on a Virtual Model Display. The tool, constructed entirely of transparent materials, is see-through. We render a graphical counterpart of the tool on the display and map it one-to-one with the real tool. This feature, combined with a capability for touch-sensitive, discrete input, results in a useful spatial input device that is visually versatile. We discuss the tool's design and the interaction techniques it supports, and briefly examine the human factors issues and engineering challenges presented by this tool and, more generally, by the class of see-through hand-held user interfaces.
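A minimal sketch of the one-to-one mapping described above, assuming a tracker that reports the physical tool's pose as a 4x4 matrix; the tracker call and scene-graph node are hypothetical stand-ins.

```python
def update_virtual_tool(tracker, tool_node):
    """Each frame, copy the physical tool's tracked pose onto its graphical
    counterpart so the real and virtual tools stay registered one-to-one."""
    pose = tracker.read_pose()     # hypothetical: 4x4 tool-to-world transform
    tool_node.set_transform(pose)  # hypothetical scene-graph update
```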
As the field of immersive display systems matures, the tools being created become more specialized and specific to the application at hand. This process is leading to a rich set of diverse approaches that appear to fall into three categories: head-mounted displays, spatially immersive displays, and virtual model displays. This paper briefly introduces and describes these classifications and then highlights virtual model displays with recent observations from a variety of users and applications.
This paper describes new ways of using textures to substitute for complex geometric models. A stereo texture is a stereo pair of images mapped onto geometry and presented in a stereo display. The viewer sees the stereo pair and can thus perceive depth information in the textured image. This technique can be used to replace large parts of a complex model with simple base geometry and a stereo texture. The stereo textures can replace the scene beyond a frame or portal. If the stereo texture is placed some distance behind the frame, the viewer gets motion parallax between the frame and the scene. The textures may also carry auxiliary information associated with the image for tasks such as picking.
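As a sketch of the core idea (not the paper's implementation), drawing a stereo texture reduces to binding the half of the image pair that matches the eye currently being rendered; the texture objects and quad-drawing call below are hypothetical placeholders.

```python
def draw_portal(eye, left_texture, right_texture, draw_quad):
    """Draw the portal's base geometry with the half of the stereo pair
    matching the current eye, so each eye sees its own pre-rendered view
    of the distant scene."""
    texture = left_texture if eye == "left" else right_texture
    texture.bind()   # hypothetical texture object
    draw_quad()      # simple base geometry standing in for the complex model
```

Because each eye receives a different image, the textured quad reads as a 3D scene rather than a flat picture, at the cost of two texture fetches instead of the replaced geometry.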
Sound within the virtual environment is often considered secondary to the graphics. In a typical scenario, either audio cues are locally associated with specific 3D objects or a general aural ambiance is supplied to alleviate the sterility of an artificial experience. This paper discusses a completely different approach, in which cues are extracted from live or recorded music in order to create geometry and control object behaviors within a computer-generated environment. Advanced texturing techniques used to generate complex stereoscopic images are also discussed. By analyzing music for standard audio characteristics such as rhythm and frequency, information is extracted and repackaged for processing. With the Soundsculpt Toolkit, this data is mapped onto individual objects within the virtual environment, along with one or more predetermined behaviors. Mapping decisions are implemented with a user-definable schedule and are based on the aesthetic requirements of directors and designers. This provides for visually active, immersive environments in which virtual objects behave in real-time correlation with the music. The resulting music-driven virtual reality opens up several possibilities for new types of artistic and entertainment experiences, such as fully immersive 3D "music videos" and interactive landscapes for live performance.
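The Soundsculpt Toolkit's actual API is not given in the abstract; as an illustrative sketch of the feature-to-behavior mapping it describes, the code below measures per-band spectral energy in an audio frame with NumPy and maps it onto a hypothetical scene object's parameters.

```python
import numpy as np

BANDS = ((20, 200), (200, 2000), (2000, 8000))  # assumed low/mid/high split, Hz

def band_energies(samples, sample_rate):
    """Return the spectral energy of one audio frame in each band."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in BANDS]

def apply_mapping(obj, samples, sample_rate):
    """Map band energies onto behavior parameters of a scene object
    (obj and its attributes are hypothetical stand-ins)."""
    low, mid, high = band_energies(samples, sample_rate)
    obj.scale = 1.0 + 1e-3 * low             # bass pulses the object's size
    obj.spin_rate = 1e-4 * mid               # mids drive rotation speed
    obj.brightness = min(1.0, 1e-4 * high)   # highs drive glow
```

In a real system, a scheduler like the user-definable one the abstract mentions would decide which objects receive which mappings and when.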
While virtual environment systems are typically thought to consist of a head-mounted display and a flex-sensing glove, alternative peripheral devices are beginning to be developed in response to application requirements. Three such alternatives are discussed: fingertip-sensing gloves, fixed stereoscopic viewers, and counterbalanced head-mounted displays. A subset of commercial examples that highlight each alternative is presented, along with a brief discussion of interesting engineering and implementation issues.
We are interested in the application of computer animation to surgery. Our current project, a navigation and visualization tool for knee arthroscopy, relies on real-time computer graphics and the human interface technologies associated with virtual reality. We believe that this new combination of techniques will lead to improved surgical outcomes and decreased health care costs. To meet these expectations in the medical field, the system must be safe, usable, and cost-effective. In this paper, we outline some of the most important hardware and software specifications in the areas of video input and output, spatial tracking, stereoscopic displays, computer graphics models and libraries, mass storage and network interfaces, and operating systems. Since this is a fairly new combination of technologies and a new application, our justifications for these specifications are drawn from the current generation of surgical technology and by analogy to other fields where virtual reality technology has been more extensively applied and studied.
Many researchers have felt that counterbalanced, stereoscopic immersive displays were an interim technology that would be supplanted as advances in LCDs and electronics made lightweight, head-mounted viewers popular. While there is still a long way to go in the development of truly practical head-mounted displays, it now seems clear that counterbalanced displays will always play a significant role in the development, applications, and general dissemination of virtual environment tools. This paper seeks to explain the unexpected popularity of these devices and to highlight features of these displays that have become apparent since the 1989 SPIE paper that described an early workable example of this genre. In addition, this paper describes the current state of this technology and the acceptance of counterbalanced displays in a wide range of applications since that original paper.
This paper describes the implementation and integration of the Ames counterbalanced CRT-based stereoscopic viewer (CCSV). The CCSV was developed as a supplementary viewing device for the Virtual Interface Environment Workstation project at NASA Ames in order to provide higher resolution than is currently possible with LCD-based head-mounted viewers. The CCSV is currently used as the viewing device for a biomechanical CAD environment, which we feel is typical of the applications for which the CCSV is appropriate. The CCSV also interfaces to a remote stereo camera platform.
The CCSV hardware consists of a counterbalanced kinematic linkage; a dual-CRT stereoscopic viewer with wide-angle optics; a video electronics box; a dedicated microprocessor system that monitors joint angles in the linkage; and a host computer that interprets the sensor values and runs the application, which renders the right and left views for the viewer's CRTs.
The CCSV software includes code resident on the microprocessor system, host-computer device drivers to communicate with the microprocessor, a kinematic module to compute viewer position and orientation from the sensor values, graphics routines to change the viewing geometry to match the viewer optics and movements, and an interface to the application.
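As a sketch of what such a kinematic module does, the code below chains one homogeneous transform per joint to turn measured joint angles into a viewer pose. The revolute-joint convention and link lengths are invented for illustration and are not the CCSV's actual geometry.

```python
import numpy as np

def rot_z(theta):
    """Homogeneous rotation about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, 0.0],
                     [s,  c, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def trans_x(length):
    """Homogeneous translation along the local x axis (one link)."""
    t = np.eye(4)
    t[0, 3] = length
    return t

def viewer_pose(joint_angles, link_lengths):
    """Chain rotate-then-translate transforms along the counterbalanced
    linkage to get the viewer's pose in the base frame."""
    pose = np.eye(4)
    for theta, length in zip(joint_angles, link_lengths):
        pose = pose @ rot_z(theta) @ trans_x(length)
    return pose  # 4x4: orientation in [:3, :3], position in [:3, 3]
```

The application then uses this pose each frame to set the viewing geometry for the left and right CRT images.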
As a viewing device, the CCSV approach is particularly well suited to applications in which 1) the user moves back and forth between virtual environment viewing and desk work, 2) high-resolution views of the virtual environment are required, or 3) the viewing device is to be shared among collaborators in a group setting. To capitalize on these strengths, planned improvements for future CCSVs include defining an appropriate motion envelope for desktop applications, improving the feel of the kinematics within that envelope, improving the realism of the display by adding color and increasing the spatial resolution, reducing lag, and developing interaction metaphors within the 3D environment.