This PDF file contains the front matter associated with SPIE
Proceedings Volume 7238, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
Geometric registration between virtual objects and real space is the most fundamental problem in augmented reality. Model-based tracking methods allow us to estimate the three-dimensional (3-D) position and orientation of a real object by using a textured 3-D model instead of visual markers. However, it is difficult to apply existing model-based tracking methods to objects that have movable parts, such as the display of a mobile phone, because these methods assume a single rigid-body model.
In this research, we propose a novel model-based registration method for objects composed of multiple rigid bodies. For each frame, the 3-D model of each rigid part of the object is first rendered according to the motion and transformation estimated in the previous frame. Second, control points are determined by detecting the edges of the rendered image and sampling pixels on these edges. The motion and transformation are then calculated simultaneously from the distances between the image edges and the control points. The validity of the proposed method is demonstrated through experiments using synthetic videos.
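A minimal sketch of the control-point step, assuming OpenCV and NumPy: control points are sampled on the rendered model's edges, and each contributes a distance to the nearest camera-image edge (computed here via a distance transform); the renderer and the joint least-squares pose update over all rigid parts are omitted.

```python
import cv2
import numpy as np

def edge_residuals(camera_gray, rendered_gray, n_points=100, seed=0):
    """Sample control points on the rendered model's edges and measure each
    point's distance to the nearest edge detected in the camera image."""
    model_edges = cv2.Canny(rendered_gray, 50, 150)
    image_edges = cv2.Canny(camera_gray, 50, 150)
    # Distance from every pixel to the nearest camera-image edge pixel.
    dist = cv2.distanceTransform(255 - image_edges, cv2.DIST_L2, 3)
    ys, xs = np.nonzero(model_edges)
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(xs), size=min(n_points, len(xs)), replace=False)
    return dist[ys[idx], xs[idx]]  # one residual per control point
```

Minimizing such residuals jointly over the motions of all parts yields the simultaneous per-frame update described above.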
A fully automatic system for human detection and tracking in front of an Interactive Whiteboard is presented. When a
person is between a projector and the projection area, deleterious effects can be created from light shining on the face.
We developed a stereo vision system that can be used to mitigate problems arising from this issue by accurately
detecting the human body and masking the face. We present two main parts of this system: namely, automatic system
calibration and the human detection and tracking. We use a checkerboard pattern that is projected on the whiteboard at
start-up for automatic calibration. Grid patterns from two images are processed, and points between them are detected
and localized. A projective transform is used to set the homography between the two images. Testing shows precise
automatic calibration, with an average RMS error of 0.4 pixels in the off-line test. Human detection and tracking is
accomplished using a similarity measure, foreground segmentation, principal component analysis, body shape feature
extraction, a disparity measure, and location estimation. We achieved an average detection rate of 97.7% in the off-line
tests. The method was fully implemented in a real-time system and testing showed the system to be very robust.
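A minimal sketch of the calibration step, assuming OpenCV and a 9x6 inner-corner checkerboard (the pattern size is an assumption): the projected pattern is detected in both camera views and the two sets of corners are related by a homography, which is valid here because the whiteboard is planar.

```python
import cv2

def stereo_homography(img_left, img_right, pattern=(9, 6)):
    """Detect the projected checkerboard in both views and estimate the
    homography mapping left-image corners onto right-image corners."""
    ok_l, corners_l = cv2.findChessboardCorners(img_left, pattern)
    ok_r, corners_r = cv2.findChessboardCorners(img_right, pattern)
    if not (ok_l and ok_r):
        raise RuntimeError("checkerboard not found in both views")
    H, _ = cv2.findHomography(corners_l, corners_r, cv2.RANSAC)
    return H
```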
In the field of augmented reality, it is important to solve a geometric registration problem between real and virtual worlds.
To solve this problem, many kinds of image-based online camera parameter estimation methods have been proposed. As one of these, we have proposed a feature-landmark-based camera parameter estimation method, in which extrinsic camera parameters are estimated from correspondences between landmarks and image features. Although the method can work in large and complex environments, it cannot run in real time due to the high computational cost of the matching process. Additionally, initial camera parameters for the first frame must be given manually. In this study, we realize real-time, initialization-free camera parameter estimation based on a feature landmark database. To reduce the computational cost of the matching process, the number of matching candidates is reduced by using landmark priorities determined from previously captured video sequences. The initial camera parameters for the first frame are determined by a voting scheme over the target space using the matching candidates. The effectiveness of the proposed method is shown through applications of landmark-based real-time camera parameter estimation in outdoor environments.
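A minimal sketch of the matching-cost reduction, assuming OpenCV; the landmark database arrays, priority scores and descriptor format are placeholders, and the voting-based initialization is omitted.

```python
import cv2
import numpy as np

def estimate_pose(db_points3d, db_descriptors, db_priority,
                  frame_keypoints, frame_descriptors, K, top_k=200):
    """Match only the highest-priority landmarks, then estimate extrinsic
    camera parameters with PnP + RANSAC."""
    order = np.argsort(db_priority)[::-1][:top_k]  # prune matching candidates
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(db_descriptors[order], frame_descriptors)
    obj = np.float32([db_points3d[order][m.queryIdx] for m in matches])
    img = np.float32([frame_keypoints[m.trainIdx] for m in matches])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, None)
    return rvec, tvec  # extrinsic camera parameters for this frame
```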
Evoking Environments through Artful Distinctiveness
We have employed methodologies of human centered design to inspire and guide the engineering of a definitive low-cost
aesthetic multimodal experience intended to stimulate cultural growth. Using a combination of design research, trend
analysis and the programming of immersive virtual 3D worlds, over 250 innovative concepts have been brainstormed,
prototyped, evaluated and refined. These concepts have been used to create a strategic map for the development of high-impact virtual art experiences, the most promising of which have been incorporated into a multimodal environment
programmed in the online interactive 3D platform XVR. A group of test users have evaluated the experience as it has
evolved, using a multimodal interface with stereo vision, 3D audio and haptic feedback. This paper discusses the
process, content, results, and impact on our engineering laboratory that this research has produced.
Diego Velázquez's Las meninas (1656) has been called by some art experts "the most important painting of
the 17th century," "a theology of painting," and even "the world's greatest painting"; it has been the subject
of intensive study. The work depicts a complex scene in the Alcázar palace of King Philip IV of Spain, and
includes mirror reflections of the king and queen, apparently standing in place of the viewer, as well as the artist
himself standing before an enormous canvas on an easel. Nevertheless, questions remain about the studio and
the proper viewing configuration: Is the artist looking toward the perspectivally correct position of the viewer
in the museum space (center of projection), outside the picture space? Does the perspectivally correct position
correspond to the locations of the king and queen seen reflected in the mirror? Is the bright illumination on the
king and queen (as revealed in the mirror) consistent with the lighting in the tableau itself? We addressed these
questions in a new way: by building a full computer graphics model of the figures and tableau as well as the
viewer's space outside the painting. In our full model, the painting itself is represented as a translucent window
onto which the picture space is projected toward the center of projection, that is, the viewer. Our geometric
and (new) lighting evidence confirms Janson's and Snyder's contention that the plane mirror on the back wall
reflects the other side of the large painting depicted within the tableau, not the king and queen themselves in
the studio. We believe our computer graphics synthesis of both the tableau within the painting and the viewer's
space in the real world is the first of its kind to address such problems in the history of art.
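A minimal sketch of the projection at the heart of such a model, assuming NumPy: each tableau point is projected onto the picture plane (taken here as z = 0) along the ray toward the center of projection; the coordinates are illustrative, not measurements from the painting.

```python
import numpy as np

def project_to_canvas(points, cop):
    """Intersect the ray from each 3-D tableau point toward the center of
    projection (the viewer) with the picture plane z = 0."""
    points = np.asarray(points, dtype=float)
    cop = np.asarray(cop, dtype=float)
    t = cop[2] / (cop[2] - points[:, 2])       # ray parameter where z = 0
    return cop + t[:, None] * (points - cop)   # projected points on the canvas

# Example: a point 5 units behind the canvas, viewer 3 units in front.
print(project_to_canvas([[1.0, 1.5, -5.0]], [0.0, 1.6, 3.0]))
```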
The goal for Becoming Dragon was to develop a working, immersive Mixed Reality system, using a motion capture system and a head-mounted display to control a character in Second Life (a massively multiplayer online 3D environment), in order to examine a number of questions regarding identity, gender and the transformative potential of technology. This performance was accomplished through a collaboration between Micha Cardenas, the performer and technical director, and Christopher Head, Kael Greco, Benjamin Lotan, Anna Storelli and Elle Mehrmand.
The plan for this project was to model the performer's physical environment to enable them to live in the virtual
environment for extended amounts of time, using an approach of Mixed Reality, where the physical world is mapped
into the virtual. I remain critical of the concept of Mixed Reality, as it presents an idea of realities as totalities and as
objective essences independent of interpretation through the symbolic order. Part of my goal with this project is to
explore identity as a process of social feedback, in the sense that Donna Haraway describes "becoming with" [iii], as well as
to explore the concept of Reality Spectrum that Augmentology.com discusses, thinking about states such as AFK (Away
From Keyboard) that are in-between virtual and corporeal presence [iv]. Both of these ideas are ways of overcoming the
dualisms of mind/body, real/virtual and self/other that have been a problematic part of thinking about technology for so
long. Towards thinking beyond these binaries, Anna Munster offers a concept of enfolding the body and technology [v],
building on Gilles Deleuze's notion of the baroque fold. She says "the superfold... opens up for us a twisted topology of
code folding back upon itself without determinate start or end points: we now live in a time and space in which body and
information are thoroughly imbricated." [vi] She elaborates on this notion of body and code as becoming with each other,
saying "the incorporeal vectors of digital information draw out the capacities of our bodies to become other than matter
conceived as a mere vessel for consciousness or a substrate for signal... we may also conceive of these experiences as a
new territory made possible by the fact that our bodies are immanently open to these kinds of technically symbiotic
transformations"vii. A number of the technologies used in this performance were used in an attempt to blur the line
between the actual and the digital, such as motion capture, live video streaming into Second Life and 3D fabrication of
physical copies of Second Life avatars.
The performance was developed using the following components:
- An eMagin Z800 immersive head-mounted display (HMD) allowed the performer to move around in the physical environment within Calit2 and still remain "in game". Head tracking and stereoscopic imagery help to provide a realistic feeling of immersion. We built on the University of Michigan 3D (UM3D) lab's stereoscopic patch for the Second Life client, updating it to work with the latest version of Second Life.
- A motion tracking system. A Vicon MX40+ motion capture system was installed in the Visiting Artist Lab at CRCA, which served as the physical performance space, to allow real-time motion tracking data to be sent to a PC running Windows. The plan was to use this data to map physical motion in the real world back into game space, so that, for example, the performer could easily get to their food source or to the restroom. We developed a C++ bridge that includes a parser for the Vicon real-time data stream in order to communicate this data to the Second Life server, producing changes in avatar and object positions based on real physical movement. The goal was to get complete body gestures into Second Life in near real time (a sketch of this bridge idea follows the list).
- A Pure Data patch called Lila, developed by Shahrokh Yadegari of UCSD, which was used to modulate the performer's voice, providing a voice system for chat in Second Life that was less gendered and less human.
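The bridge itself was written in C++; the following Python sketch only illustrates the idea of parsing a real-time tracking stream and relaying positions into the virtual world. The ports, datagram layout and the "MOVE" command are assumptions for illustration, not the actual Vicon or Second Life protocols.

```python
import socket
import struct

TRACKER = ("127.0.0.1", 800)    # hypothetical motion-capture stream endpoint
WORLD = ("127.0.0.1", 13000)    # hypothetical in-world relay endpoint

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(TRACKER)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    packet, _ = rx.recvfrom(1024)
    # One marker position in an assumed little-endian float layout.
    x, y, z = struct.unpack("<3f", packet[:12])
    # Map the physical coordinates into game space and forward them.
    tx.sendto(f"MOVE {x:.3f} {y:.3f} {z:.3f}".encode(), WORLD)
```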
Dots and Dashes is a virtual reality artwork that explores online romance over the telegraph, based on Ella Cheever Thayer's novel Wired Love: A Romance in Dots and Dashes (An Old Story Told in a New Way) [1]. The uncanny similarities between this story and the world of today's virtual environments provide the springboard for an exploration of a wealth of anxieties and dreams, including the construction of identities in an electronically mediated environment,
the shifting boundaries between the natural and machine worlds, and the spiritual dimensions of science and technology.
In this paper we examine the parallels between the telegraph networks and our current conceptions of cyberspace, as
well as unique social and cultural impacts specific to the telegraph. These include the new opportunities and roles
available to women in the telegraph industry and the connection between the telegraph and the Spiritualist movement.
We discuss the development of the artwork, its structure and aesthetics, and the technical development of the work.
We describe a project designed to use the power of online virtual worlds as a place of camaraderie and healing for returning United States military veterans: a virtual space that can help them deal with problems related to their time of service and also assist in their reintegration into society. This veterans' space is being built in Second Life®, a popular
immersive world, under consultation with medical experts and psychologists, with several types of both social and
healing activities planned. In addition, we address several barrier issues with virtual worlds, including lack of guides or
helpers to ensure the participants have a quality experience. To solve some of these issues, we are porting the advanced
intelligence of the ICT's virtual human characters to avatars in Second Life®, so they will be able to greet the veterans,
converse with them, guide them to relevant activities, and serve as informational agents for healing options. In this way
such "avatar agents" will serve as autonomous intelligent characters that bring maximum engagement and functionality
to the veterans' space. This part of the effort expands online worlds beyond their existing capabilities, as currently a
human being must operate each avatar in the virtual world; few autonomous characters exist. As this project progresses
we will engage in an iterative design process with veteran participants who will be able to advise us, along with the
medical community, on what efforts are well suited to, and most effective within, the virtual world.
This work describes the process of developing a 3D Virtual Reality (VR) DJ simulation game intended to be displayed
on a stereoscopic display. Using a DLP projector and shutter glasses, the user of the system plays a game in which he or
she is a DJ in a night club. The night club's music is playing, and the DJ is "scratching" in correspondence to this music.
Much in the flavor of Guitar Hero or Dance Dance Revolution, a virtual turntable is manipulated to project information
about how the user should perform. The user needs only a small set of hand gestures, corresponding to the turntable
scratch movements, to play the game. As the music plays, a series of moving arrows approaching the DJ's turntable
instruct the user as to when and how to perform the scratches.
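A minimal sketch of such cue-based scoring, in Python; the beat map, gesture labels and timing window are illustrative values, not the game's actual parameters.

```python
# Each cue pairs a time (seconds into the song) with the scratch to perform.
BEAT_MAP = [(1.00, "forward"), (1.50, "back"), (2.25, "forward")]
WINDOW = 0.15  # tolerance around each cue, in seconds

def score_gesture(t, gesture, beat_map=BEAT_MAP, window=WINDOW):
    """Return True if the gesture lands within the timing window of a
    matching cue, in the spirit of Guitar Hero-style scoring."""
    return any(abs(t - cue_t) <= window and gesture == cue_g
               for cue_t, cue_g in beat_map)

assert score_gesture(1.05, "forward")   # on time, right gesture
assert not score_gesture(1.05, "back")  # on time, wrong gesture
```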
Over the last decades, Louisiana has lost a substantial part of its coastal region to the Gulf of Mexico. The goal of the
project described in this paper is to investigate this complex ecological and geophysical system, not only to find solutions to reverse this development but also to protect the southern landscape of Louisiana from the disastrous impacts of natural hazards such as hurricanes. This paper focuses on the interactive data handling of the Chenier Plain, which is only one scenario of the overall project. The challenge addressed is the interactive exploration of large-scale, time-dependent 2D simulation results and of the high-resolution terrain data available for this region.
Besides data preparation, efficient visualization approaches optimized for use in virtual environments are presented. These are embedded in a complex framework for the scientific visualization of time-dependent large-scale datasets. To provide a straightforward interface for rapid application development, a software layer called VRFlowVis has been developed. Several architectural choices that encapsulate complex virtual reality concerns, such as multi-pipe versus cluster-based rendering, are discussed. Moreover, the distributed post-processing architecture is examined to demonstrate its efficiency for the geophysical domain. Runtime measurements conclude this paper.
Forensic stereoscopic analysis of historical aerial photography is successfully identifying the causes of environmental
degradation, including erosion and unlawful releases of hazardous wastes into the environment. The photogrammetric evidence can pinpoint the specific locations of undocumented hazardous waste landfills and other unlawful releases of chemicals and wastes, providing location data for targeted investigation,
characterization, and subsequent remediation. The findings of these studies are being effectively communicated in a
simple, memorable, and compelling way by projecting the three-dimensional (3-D) sequences of historical aerial
photography utilizing polarized 3-D presentation methods.
The Virtual Hydrology Observatory will provide students with the ability to observe the integrated hydrology simulation
with an instructional interface by using a desktop based or immersive virtual reality setup. It is the goal of the virtual
hydrology observatory application to facilitate the introduction of field experience and observational skills into
hydrology courses through innovative virtual techniques that mimic activities during actual field visits. The simulation
part of the application is developed from the integrated atmospheric forecast model: Weather Research and Forecasting
(WRF), and the hydrology model: Gridded Surface/Subsurface Hydrologic Analysis (GSSHA). The outputs from the WRF and GSSHA models are then used to generate the final visualization components of the Virtual Hydrology Observatory. The visualization data are processed with techniques provided by VTK, including 2D Delaunay triangulation and data optimization. Once all the visualization components are generated, they are integrated with the simulation data using the VRFlowVis and VR Juggler software toolkits. VR Juggler is used primarily to provide the Virtual Hydrology Observatory application with a fully immersive, real-time 3D interaction experience, while VRFlowVis provides the integration framework for the hydrologic simulation data, graphical objects and user interaction. A six-sided CAVE™-like system is used to run the Virtual Hydrology Observatory and provide the students with a fully immersive experience.
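A minimal sketch of the Delaunay step named above, assuming the vtk Python bindings: scattered terrain samples are triangulated in the x-y plane with vtkDelaunay2D; the sample coordinates are illustrative.

```python
import vtk

# A few scattered (x, y, elevation) terrain samples.
points = vtk.vtkPoints()
for x, y, z in [(0.0, 0.0, 1.0), (1.0, 0.0, 1.2), (0.0, 1.0, 0.8),
                (1.0, 1.0, 1.1), (0.5, 0.5, 1.5)]:
    points.InsertNextPoint(x, y, z)

poly = vtk.vtkPolyData()
poly.SetPoints(points)

delaunay = vtk.vtkDelaunay2D()
delaunay.SetInputData(poly)   # triangulates in x-y, keeping z as height
delaunay.Update()
print(delaunay.GetOutput().GetNumberOfCells())  # number of triangles produced
```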
ATLAS in silico is an interactive installation/virtual environment that provides an aesthetic encounter with
metagenomics data (and contextual metadata) from the Global Ocean Survey (GOS). The installation creates a visceral experience of the abstraction of nature into vast data collections, a practice that connects the expeditionary science of the 19th Century with 21st Century expeditions like the GOS. Participants encounter a dream-like, highly abstract, and data-driven virtual world that combines the aesthetics of fine-lined copper engraving and the grid-like layouts of 19th Century scientific representation with 21st Century digital aesthetics, including wireframes and particle systems. It is resident at the Calit2 Immersive Visualization Laboratory on the campus of UC San Diego, where it continues in active development. The installation utilizes a combination of infrared motion tracking, custom computer vision, multi-channel (10.1) spatialized interactive audio, 3D graphics, data sonification, audio design, networking, and the Varrier™ 60-tile, 100-million-pixel barrier-strip auto-stereoscopic display. Here we describe the physical and audio display systems for the installation, along with a hybrid strategy, developed in the context of this artwork, for multi-channel spatialized interactive audio rendering in immersive virtual reality that combines amplitude-, delay- and physical-modeling-based real-time spatialization approaches for enhanced expressivity in the virtual sound environment. The desire to represent a
combination of qualitative and quantitative multidimensional, multi-scale data informs the artistic process and overall
system design. We discuss the resulting aesthetic experience in relation to the overall system.
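A minimal stereo sketch of the amplitude-plus-delay portion of such a hybrid, assuming NumPy; the installation's actual 10.1-channel renderer and its physical-modeling component are not reproduced here.

```python
import numpy as np

def pan(mono, azimuth, sr=44100, max_delay_s=0.0006):
    """Spatialize a mono signal: equal-power amplitude panning combined with
    a small delay on the far channel. azimuth in [-1, 1] (left to right)."""
    theta = (azimuth + 1) * np.pi / 4             # equal-power gain law
    gain_l, gain_r = np.cos(theta), np.sin(theta)
    delay = int(abs(azimuth) * max_delay_s * sr)  # samples of far-ear delay
    left = np.pad(mono * gain_l, (delay if azimuth > 0 else 0, 0))
    right = np.pad(mono * gain_r, (delay if azimuth < 0 else 0, 0))
    n = max(len(left), len(right))
    return (np.pad(left, (0, n - len(left))),
            np.pad(right, (0, n - len(right))))
```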