Nowhere is the need to understand large heterogeneous datasets more important than in disaster monitoring
and emergency response, where critical decisions have to be made in a timely fashion and the discovery of
important events requires an understanding of a collection of complex simulations. To gain enough insights
for actionable knowledge, the development of models and analysis of modeling results usually requires that
models be run many times so that all possibilities can be covered. Central to our research, therefore, is
the use of ensemble visualization of a large-scale simulation space to aid decision makers
in reasoning about infrastructure behaviors and vulnerabilities in support of critical infrastructure analysis. This
requires the bringing together of computing-driven simulation results with the human decision-making process
via interactive visual analysis. We have developed a general critical infrastructure simulation and analysis
system for situationally aware emergency response during natural disasters. Our system demonstrates a scalable
visual analytics infrastructure with a mobile interface for analysis, visualization and interaction with large-scale
simulation results in order to better understand their inherent structure and predictive capabilities. To generalize
the mobile aspect, we introduce mobility as a design consideration for the system. The utility and efficacy of
this research have been evaluated by domain practitioners and disaster response managers.
Displays supporting stereoscopy and head-coupled motion parallax can enhance human perception of content such as
3D surfaces and 3D networks, but less so for volumetric data. Volumetric data is characterized by a heavy presence of
transparency, occlusion, and highly ambiguous spatial structure. Many rendering, visualization, and interaction
techniques that enhance perception of volume data exist, and their effectiveness has been evaluated. However, how VR
display technologies affect perception of volume data is less well studied. We therefore conducted two formal
experiments on how various display conditions affect a participant's depth-perception accuracy for a volumetric
dataset. Our results show effects of VR displays on human depth-perception accuracy for volumetric data. We discuss
the implications of these findings for designing volumetric data visualization tools that use VR displays. In
addition, we compare our results to previous work on 3D networks and discuss possible reasons for, and implications
of, the differing results.
This paper presents the concept, working prototype and design space of a two-handed, hybrid spatial user interface for minimally immersive desktop VR targeted at multi-dimensional visualizations. The user interface supports dual button balls (6DOF isotonic controllers with multiple buttons) which automatically switch between 6DOF mode (xyz + yaw, pitch, roll) and planar-3DOF mode (xy + yaw) upon contacting the desktop. The mode switch automatically changes a button ball's visual representation between a 3D cursor and a mouse-like 2D cursor while also switching the available user interaction techniques (ITs) between 3D and 2D ITs. Further, the small form factor of the button ball allows the user to engage in 2D multi-touch or 3D gestures without releasing and re-acquiring the device. We call the device and hybrid interface the HyFinBall interface, an abbreviation for 'Hybrid Finger Ball.' We describe the user interface (hardware and software), the design space, and preliminary results of a formal user study. This is done in the context of a rich, visual analytics interface containing coordinated views with 2D and 3D visualizations and interactions.
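The contact-driven mode switch described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the class, field names, and contact threshold are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class ButtonBallState:
    """Tracked pose of one button ball (illustrative names)."""
    x: float
    y: float
    z: float          # height above the desktop surface
    yaw: float
    pitch: float
    roll: float

CONTACT_EPSILON = 0.002  # assumed desktop-contact threshold, in metres

def resolve_mode(state: ButtonBallState) -> dict:
    """Pick the active interaction mode, cursor style, and usable pose.

    On desktop contact the device drops to planar-3DOF (x, y, yaw) and is
    drawn as a mouse-like 2D cursor; in the air it is a full 6DOF 3D cursor.
    """
    if state.z <= CONTACT_EPSILON:
        return {
            "mode": "planar-3DOF",
            "cursor": "2D",
            "pose": (state.x, state.y, state.yaw),  # z/pitch/roll ignored
        }
    return {
        "mode": "6DOF",
        "cursor": "3D",
        "pose": (state.x, state.y, state.z,
                 state.yaw, state.pitch, state.roll),
    }
```

Binding the switch to a single continuous quantity (height above the desk) is what lets the transition happen automatically, with no explicit mode button.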
A typical approach to exploring Light Detection and Ranging (LIDAR) datasets is to extract features using pre-defined
segmentation algorithms. However, this approach only provides a limited set of features that users can investigate. To
expand and represent the rich information inside the LIDAR data, we introduce a linked feature space concept that
allows users to make regular, conjunctive, and disjunctive discoveries in non-uniform LIDAR data by interacting with
multidimensional transfer functions. We achieve this by providing interactions for creating multiple scatter-plots of
varying axes, establishing chains of plots based on selection domains, linking plots using logical operators, and viewing
selected brushing results in both a 3D view and selected scatter-plots. Our highly interactive approach to visualizing
LIDAR feature spaces facilitates users' ability to explore, identify, and understand data features in a novel way. Our
approach can directly lead to a better understanding of historical LIDAR datasets, and can reduce the turnaround
time and improve the quality of results from time-critical LIDAR collections after urban disasters or on the battlefield.
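The linked brushing scheme described above can be sketched as two operations: a rectangular brush on one scatter-plot's pair of feature axes, and a logical combination of per-plot selections. This is a minimal illustration of the idea; the function names and API are assumptions, not the system's actual interface.

```python
import numpy as np

def brush(points: np.ndarray, axes: tuple[int, int],
          lo: tuple[float, float], hi: tuple[float, float]) -> np.ndarray:
    """Boolean mask of points whose projection onto the two chosen
    feature axes falls inside a rectangular brush region."""
    xa, ya = axes
    return ((points[:, xa] >= lo[0]) & (points[:, xa] <= hi[0]) &
            (points[:, ya] >= lo[1]) & (points[:, ya] <= hi[1]))

def link(mask_a: np.ndarray, mask_b: np.ndarray, op: str) -> np.ndarray:
    """Combine two plot selections with a logical operator; the result
    is what a linked 3D view would display."""
    if op == "and":   # conjunctive discovery
        return mask_a & mask_b
    if op == "or":    # disjunctive discovery
        return mask_a | mask_b
    raise ValueError(f"unknown operator: {op}")

# Toy feature space: columns might be, e.g., height and intensity.
points = np.array([[1.0, 5.0],
                   [2.0, 6.0],
                   [3.0, 7.0]])
sel_a = brush(points, (0, 1), lo=(0.5, 4.5), hi=(2.5, 7.5))
sel_b = brush(points, (0, 1), lo=(1.5, 4.5), hi=(3.5, 7.5))
conjunction = link(sel_a, sel_b, "and")
disjunction = link(sel_a, sel_b, "or")
```

Chaining plots amounts to feeding one selection's points into the next plot's domain; the masks above compose the same way.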
Infrastructure safety affects millions of U.S. citizens in many ways. Among all infrastructure types, bridges
play a significant role in sustaining the economy and ensuring public safety. Nearly 600,000 bridges across the
U.S. are mandated to be inspected every twenty-four months. Although these inspections generate a great
amount of rich data that bridge engineers can use to make critical maintenance decisions, processing these data has
become challenging due to the inefficiency of traditional bridge management systems. In collaboration with
North Carolina Department of Transportation (NCDOT) and other regional DOT collaborators, we present our
knowledge-integrated visual analytics bridge management system. Our system aims to provide bridge engineers with a
highly interactive data-exploration environment as well as knowledge pools for corresponding bridge information.
By integrating the knowledge structure with the visualization system, our system provides a comprehensive
understanding of bridge assets and enables bridge engineers to investigate potential bridge safety issues and
make maintenance decisions.
Infrastructure management and its associated processes are complex to understand and to perform, which makes
efficient, effective, and informed decision making difficult. Management is a multi-faceted operation that requires
robust data fusion, visualization, and decision making. In order to protect and build sustainable critical
assets, we present our on-going multi-disciplinary large-scale project that establishes the Integrated Remote Sensing
and Visualization (IRSV) system with a focus on supporting bridge structure inspection and management.
This project involves specific expertise from civil engineers, computer scientists, geographers, and real-world
practitioners from industry, local and federal government agencies.
IRSV is being designed to accommodate essential needs in the following aspects: 1) Better understanding
and enforcement of the complex inspection process, bridging the gap between evidence gathering
and decision making through an ontological knowledge-engineering system; 2) Aggregation,
representation, and fusion of complex multi-layered heterogeneous data (e.g., infrared imaging, aerial photos, and
ground-mounted LIDAR) with domain application knowledge to support a machine-understandable recommendation
system; 3) Robust visualization techniques with large-scale analytical and interactive visualizations
that support users' decision making; and 4) Integration of these needs through a flexible Service-Oriented
Architecture (SOA) framework that composes and provides services on demand.
IRSV is expected to serve as a management and data visualization tool for construction deliverable assurance
and infrastructure monitoring, both periodically (annually, monthly, or even daily if needed) and after extreme
events.
KEYWORDS: Visualization, Visual analytics, Bridges, Data integration, Data processing, Human-machine interfaces, Data storage, Data mining, Inspection, LIDAR
In today's information age, we are experiencing an explosion of data and information from a variety of sources
unlike anything the world has seen before. While technology has advanced to keep pace with the collection
and storage of data, what we lack now is the ability to analyze and understand the meaning behind the data.
Traditionally, data mining and data management techniques require the data to be uniform such that a single
process can search for knowledge within the data. However, in the analysis of complex tasks, where knowledge and
information must be pieced together from different sources of data, a new paradigm is required. In this paper,
we present a framework that uses visual analytics approaches to integrate multiple heterogeneous processes, each
of which analyzes a specific type of data. Under this framework, stand-alone software solutions can focus on
specific aspects of the problem based on domain-specific techniques. The framework serves as a visual repository
for all the information and knowledge discovered by each individual process, and allows the user to interactively
perform sense-making analysis to form a cohesive and comprehensive understanding of the problem at hand. We
demonstrate the effectiveness of this framework by applying it to bridge-condition inspection, which utilizes data
sources from 2D imagery, 3D LiDAR, and multi-dimensional data derived from bridge reports.
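The integration pattern described above, where each stand-alone process analyzes one data type and deposits its findings in a shared repository for sense-making, can be sketched as follows. All class, method, and field names here are illustrative assumptions, not the framework's actual API.

```python
from typing import Callable

class VisualRepository:
    """Shared store of findings contributed by heterogeneous
    domain-specific analysis processes (hypothetical sketch)."""

    def __init__(self) -> None:
        self.findings: list[dict] = []
        self.analyzers: dict[str, Callable[[object], dict]] = {}

    def register(self, data_type: str,
                 analyzer: Callable[[object], dict]) -> None:
        # Each stand-alone tool handles exactly one data type.
        self.analyzers[data_type] = analyzer

    def ingest(self, data_type: str, data: object) -> None:
        # Route the data to its analyzer, tag provenance, and store
        # the result where the user can cross-reference it visually.
        finding = self.analyzers[data_type](data)
        finding["source"] = data_type
        self.findings.append(finding)

# Example: independent analyzers for imagery and report data feed one
# repository; the analyzer bodies here are stand-in placeholders.
repo = VisualRepository()
repo.register("2d-imagery", lambda img: {"defect": "crack"})
repo.register("bridge-report", lambda rec: {"rating": rec["rating"]})
repo.ingest("2d-imagery", None)
repo.ingest("bridge-report", {"rating": 6})
```

Keeping provenance on every finding is what lets the user piece evidence from different sources into one cohesive picture rather than inspecting each tool's output in isolation.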