Visualizing light detection and ranging (LIDAR) scans consisting of millions of points on large displays poses a significant challenge. High-resolution large displays let researchers examine point clouds in detail, but interacting with point clouds rendered at this scale is a difficult problem. We present a case study that visualizes LIDAR point clouds on a tiled display wall termed the highly interactive parallelized display (HIPerDisplay), composed of twenty 24-inch liquid-crystal displays with a total resolution of 46 Mpixels. Interaction between the user and the display wall is achieved with a video camera system that tracks the position of a hand-held light-ball device, which the user holds to manipulate point clouds on the HIPerDisplay. Case studies examine LIDAR scans of slopes in the Houshanyue mountain area of Taiwan. Experiments assessed two point-cloud manipulation tasks designed to evaluate the efficiency of the interactive devices in data postprocessing; a group of 30 graduate students participated, and user surveys gauged the system's efficiency and the users' opinions about using the interactive device in a large-display environment. The results showed that participants preferred performing LIDAR data-operation tasks in a high-resolution large-display environment rather than on a single monitor, and that the HIPerDisplay offered superior performance for processing large LIDAR datasets.
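The abstract does not describe how the point clouds were prepared for interactive rendering; a common approach to making multi-million-point LIDAR scans responsive is voxel-grid downsampling, which keeps one centroid per occupied voxel. The sketch below is our illustration, not the paper's pipeline, and the function name is hypothetical:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Reduce a point cloud to one centroid per occupied cubic voxel.

    points: (N, 3) float array of LIDAR x/y/z coordinates.
    voxel_size: voxel edge length, in the same units as the points.
    """
    # Integer voxel index for every point
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points that share a voxel index
    keys, inverse = np.unique(idx, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    # Accumulate coordinate sums per voxel, then divide by the counts
    sums = np.zeros((len(keys), 3))
    np.add.at(sums, inverse, points)
    counts = np.bincount(inverse, minlength=len(keys)).reshape(-1, 1)
    return sums / counts

# Example: a synthetic 1M-point cloud reduced to one point per 0.5 m voxel
cloud = np.random.rand(1_000_000, 3) * 100.0
reduced = voxel_downsample(cloud, voxel_size=0.5)
print(len(reduced), "points after downsampling")
```

A level-of-detail scheme for a tiled wall could apply this at several voxel sizes and pick the level matching the current zoom.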
The goal of this research is to compare the performance of different stereoscopic displays and tracking/interaction
devices in the context of motor behavior and interaction quality within various Virtual Reality (VR) environments.
Participants were given a series of VR tasks requiring motor behaviors with different degrees of freedom. The VR tasks
were performed using a monoscopic display and two stereoscopic displays (shutter glasses and autostereoscopic display)
and two tracking devices (optical and magnetic). The two 3D tracking/interaction devices were used to capture
continuous 3D spatial hand position with time stamps. Participants completed questionnaires evaluating display comfort
and simulation fidelity among the three displays and the efficiency of the two interaction devices. The trajectory of
motion was reconstructed from the tracking data to investigate the user's motor behavior. Results provide information
on how stereoscopic displays can affect human motor behavior and interaction modes during VR tasks. These
preliminary results suggest that the use of shutter glasses provides a more immersive and user-friendly display than
autostereoscopic displays. Results also suggest that the optical tracking device, available at a fraction of the cost of the
magnetic tracker, provides similar results for users in terms of functionality and usability features.
We have developed a novel VR task, the Dynamic Reaching Test, which measures human forearm movement in 3D
space. In this task, three stereoscopic displays are used: autostereoscopic (AS), shutter glasses (SG), and a
head-mounted display (HMD). Subjects must catch a virtual ball thrown at them. Parameters such
as percentage of successful catches, movement efficiency (subject path length compared to minimal path length), and
reaction time are measured to evaluate differences in 3D perception among the three stereoscopic displays. The SG
produced the highest percentage of successful catches, though the differences among the three displays were small,
implying that users can perform the VR task with any of them. The SG and HMD produced the best movement
efficiency, while the AS was slightly less efficient. Finally, the AS and HMD produced similar reaction times that
were slightly longer (by 0.1 s) than those of the SG. We conclude that the SG and HMD displays were the most effective, but only
slightly better than the AS display.
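The metrics above can be computed directly from the timestamped 3D tracking samples. A minimal sketch, assuming positions arrive as an (N, 3) array with matching timestamps (the function names and the speed threshold are our assumptions, not the paper's):

```python
import numpy as np

def path_length(positions):
    """Total arc length of a 3D trajectory given as an (N, 3) array."""
    return np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))

def movement_efficiency(positions):
    """Straight-line (minimal) distance divided by travelled distance:
    1.0 is a perfectly direct reach; lower values mean wandering."""
    minimal = np.linalg.norm(positions[-1] - positions[0])
    actual = path_length(positions)
    return minimal / actual if actual > 0 else 0.0

def reaction_time(timestamps, positions, speed_threshold=0.05):
    """Time from trial start until hand speed first exceeds a threshold.
    Returns None if the hand never moves above the threshold."""
    speeds = np.linalg.norm(np.diff(positions, axis=0), axis=1) / np.diff(timestamps)
    moving = np.nonzero(speeds > speed_threshold)[0]
    return timestamps[moving[0] + 1] - timestamps[0] if len(moving) else None
```

Comparing these per-trial values across the AS, SG, and HMD conditions gives the display-wise differences the abstract reports.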
We describe a set of experiments that compare a 2D CRT, shutter glasses, and autostereoscopic displays; measure user preference for different tasks on different displays; measure the effect of previous user experience on interaction performance for new tasks; and measure the effect of constraining the user's hand motion and hand-eye coordination. In these tests, we used interactive object selection and manipulation tasks based on standard scalable configurations of 3D block objects. We also used a 3D depth-matching test in which subjects are instructed to align two objects, located next to each other on the display, to the same depth plane. New subjects tested under the hands-out-of-field-of-view constraint performed more efficiently with glasses than with autostereoscopic displays, meaning they were able to match the objects with less movement; this constraint affected females more negatively than males. From the results of the depth test, we note that previous subjects on average performed better than new subjects: they had more correct results and finished the test faster. The depth test showed that glasses are preferred to autostereoscopic displays in a task that involves only stereoscopic depth.
KEYWORDS: Glasses, 3D displays, Camera shutters, Autostereoscopic displays, 3D image processing, Image quality, 3D metrology, Optical spheres, Head, Data processing
In this paper we describe experimental measurements and comparisons of human interaction with three different types of stereo computer displays. We compare traditional shutter-glasses-based viewing with three-dimensional (3D) autostereoscopic viewing on displays such as the Sharp LL-151-3D and the StereoGraphics SG202. The method of interaction is a sphere-shaped "cyberprop" containing an Ascension Flock-of-Birds tracker that allows a user to manipulate objects by imparting the motion of the sphere to the virtual object. The tracking data are processed with OpenGL to manipulate objects in virtual 3D space, from which we synthesize two or more images as seen by virtual cameras observing them. We concentrate on the quantitative measurement and analysis of human performance for interactive object selection and manipulation tasks using standardized and scalable configurations of 3D block objects. The experiments use a series of progressively more complex block configurations rendered in stereo on the various 3D displays. In general, performing the tasks with shutter glasses required less time than with the autostereoscopic displays. While male and female subjects performed almost equally fast with shutter glasses, male subjects performed better with the LL-151-3D display, while female subjects performed better with the SG202 display. Interestingly, users generally had slightly higher efficiency in completing a task set using the two autostereoscopic displays than with the shutter glasses, although the differences among the displays were relatively small for all users. Shutter glasses were preferred over autostereoscopic displays for ease of performing tasks, and slightly preferred for overall image quality and stereo image quality; however, there was little difference between the displays in physical comfort and overall preference.
We present some possible explanations of these results and point out the importance of the autostereoscopic "sweet spot" in relation to the user's head and body position.
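The abstract says object manipulation works by imparting the sphere's motion to the virtual object, without giving the transform details. One plausible formulation is to apply the tracker's frame-to-frame pose change to the object's model matrix, which OpenGL would then consume for rendering; this NumPy sketch is our assumption, and the function names are hypothetical:

```python
import numpy as np

def pose_to_matrix(position, rotation):
    """Build a 4x4 homogeneous transform from one tracker sample.

    position: (3,) translation reported by the tracker.
    rotation: (3, 3) rotation matrix reported by the tracker.
    """
    m = np.eye(4)
    m[:3, :3] = rotation
    m[:3, 3] = position
    return m

def impart_motion(model, prev_pose, curr_pose):
    """Apply the sphere's motion since the previous frame to the
    object's 4x4 model matrix, so the virtual object follows the prop.

    prev_pose, curr_pose: (position, rotation) tuples as above.
    """
    delta = pose_to_matrix(*curr_pose) @ np.linalg.inv(pose_to_matrix(*prev_pose))
    return delta @ model
```

Using the relative (delta) transform rather than the absolute pose lets the user "clutch": releasing and regrasping the prop does not teleport the object.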