A methodology is proposed for automatically extracting primitive models of buildings in a scene from a three-dimensional
point cloud derived from multi-view depth extraction techniques. By exploiting the information in the two-dimensional images, the three-dimensional point cloud, and the relationship between the two, automated extraction methods are presented. Using the inertial measurement unit (IMU) and global
positioning system (GPS) data that accompanies the aerial imagery, the geometry is derived in a world-coordinate
system so the model can be used with GIS software. This work uses imagery collected by the Rochester Institute
of Technology's Digital Imaging and Remote Sensing Laboratory's WASP sensor platform. The imagery was collected over downtown Rochester, New York. Primitive three-dimensional model geometry is extracted for multiple target buildings using modern point-cloud processing techniques.
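The abstract does not name the specific algorithms used; purely as an illustration of the kind of point-cloud processing it refers to, the sketch below segments a dominant planar facet (for example, a flat roof) from a georeferenced cloud with RANSAC via the Open3D library. The file name and thresholds are assumptions, not details taken from the paper.

# Illustrative sketch: RANSAC plane segmentation of a roof facet from a
# point cloud already expressed in world (e.g., UTM) coordinates.
# File name and thresholds are assumed for demonstration only.
import open3d as o3d

pcd = o3d.io.read_point_cloud("downtown_rochester.ply")  # hypothetical file

# Fit the dominant plane with RANSAC; inliers support the extracted primitive.
plane_model, inlier_idx = pcd.segment_plane(
    distance_threshold=0.25,  # metres of tolerance around the plane
    ransac_n=3,               # points per candidate plane hypothesis
    num_iterations=1000,
)
a, b, c, d = plane_model
print(f"Roof plane: {a:.3f}x + {b:.3f}y + {c:.3f}z + {d:.3f} = 0")

roof = pcd.select_by_index(inlier_idx)
print("Primitive extent (world coordinates):", roof.get_axis_aligned_bounding_box())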
A common task in the analysis of digitized histological sections is reconstructing a volumetric representation
of the original specimen. Image registration algorithms are used in this task to compensate for translational,
rotational, scale, shear, and local geometric differences between slices. Various systems have been developed
to perform volumetric reconstruction by registering pairs of successive slices according to rigid, similarity,
affine, and/or deformable transformations. To provide a coarse initial volumetric reconstruction, rigid
transformations may be too constrained, as they do not allow for scale or shear, while affine transformations may be too flexible, permitting larger scale or shear factors than are physically present in the histological sections. One difficulty with these systems is the aperture problem: even if successive slices are registered reasonably well, the composition of transformations over tens or hundreds of slices can accumulate global twisting and changes in scale and shear, producing a volumetric reconstruction that is significantly distorted from the shape of
the true specimen. The impact of the aperture problem can be reduced by considering more than two successive
images in the registration process. Systems that take this approach use global energy functions, elastic spring
models, post hoc filtering/smoothing, or solutions to shortest-path problems on graphs.
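To make the accumulation effect concrete, the following sketch (a numerical illustration with invented figures, not an analysis from the article) composes 200 pairwise registrations that each carry a 0.2% scale bias and a 0.1 degree rotation bias; the composite transformation drifts far from the identity.

# Drift from composing many pairwise slice-to-slice registrations.
# All numbers are invented for demonstration.
import numpy as np

n_slices = 200
per_pair_scale = 1.002           # 0.2% scale bias per neighbouring pair
per_pair_rot = np.deg2rad(0.1)   # 0.1 degree rotation bias per pair

def pair_transform(s, theta):
    """2x2 linear part of one slice-to-slice similarity transform."""
    c, t = np.cos(theta), np.sin(theta)
    return s * np.array([[c, -t], [t, c]])

# Compose the transform that maps the last slice into the frame of slice 0.
T = np.eye(2)
for _ in range(n_slices - 1):
    T = pair_transform(per_pair_scale, per_pair_rot) @ T

print(f"accumulated scale:    {np.sqrt(np.linalg.det(T)):.2f}")                     # ~1.49
print(f"accumulated rotation: {np.rad2deg(np.arctan2(T[1, 0], T[0, 0])):.1f} deg")  # ~19.9

# Each pairwise registration looks nearly rigid in isolation, yet the
# composite carries a ~49% scale change and a ~20 degree global twist.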
In this article, we propose a volume reconstruction algorithm that handles the aperture problem and yields
nearly rigid transformations (i.e., affine transformations with small scale and shear factors). Our algorithm is
based on robust geometric alignment of descriptive feature points (for example, using SIFT [16]) via constrained
optimization. We will illustrate our algorithm on the task of volumetric reconstruction from histological sections
of a chicken embryo with an embedded tumor spheroid.
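As a rough illustration of this style of alignment (an assumed stand-in, not the authors' optimizer), the sketch below matches SIFT features between two adjacent slices with OpenCV and then fits an affine transformation whose linear part is softly penalized for departing from a pure rotation, which keeps scale and shear factors small. The regularization weight and file names are assumptions.

# Illustrative nearly-rigid alignment of two adjacent slices:
# SIFT matching followed by an affine fit with a penalty on scale and shear.
import cv2
import numpy as np
from scipy.optimize import least_squares

def match_sift(img_a, img_b, ratio=0.75):
    """Return matched keypoint coordinates (two (N, 2) arrays) for two grayscale slices."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    good = [m for m, n in pairs if m.distance < ratio * n.distance]
    src = np.float32([kp_a[m.queryIdx].pt for m in good])
    dst = np.float32([kp_b[m.trainIdx].pt for m in good])
    return src, dst

def nearly_rigid_affine(src, dst, lam=10.0):
    """Fit x -> A x + t, penalising A'A - I so A stays close to a rotation."""
    def residuals(p):
        A, t = p[:4].reshape(2, 2), p[4:]
        geom = (src @ A.T + t - dst).ravel()
        rigidity = lam * (A.T @ A - np.eye(2)).ravel()
        return np.concatenate([geom, rigidity])
    p0 = np.array([1.0, 0.0, 0.0, 1.0, 0.0, 0.0])  # start from the identity
    sol = least_squares(residuals, p0)
    return sol.x[:4].reshape(2, 2), sol.x[4:]

# Usage on two hypothetical adjacent slice images:
# a = cv2.imread("slice_041.png", cv2.IMREAD_GRAYSCALE)
# b = cv2.imread("slice_042.png", cv2.IMREAD_GRAYSCALE)
# A, t = nearly_rigid_affine(*match_sift(a, b))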
Conference Committee Involvement (3)
Applications of Machine Learning 2021
4 August 2021 | San Diego, California, United States
Applications of Machine Learning 2020
23 August 2020 | Online Only, California, United States
Applications of Machine Learning
13 August 2019 | San Diego, California, United States