Synthetic data are frequently used to supplement a small set of real images and create a dataset with diverse features, but this may not improve the equivariance of a computer vision model. Our work answers the following questions: First, what metrics are useful for measuring a domain gap between real and synthetic data distributions? Second, is there an effective method for bridging an observed domain gap? We explore these questions by presenting a pathological case in which the inclusion of synthetic data did not improve model performance, and then by measuring the difference between the real and synthetic distributions in image space, latent space, and model prediction space. We find that augmenting the dataset with pixel-level augmentations effectively reduces the observed domain gap and improves the model's F1 score to 0.95, compared with 0.43 for unaugmented data. We also observe that an increase in the average cross entropy of the latent-space feature vectors is positively correlated with increased model equivariance and with the closing of the domain gap. The results are explained using a framework of model regularization effects.
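A minimal sketch of the kind of pixel-level augmentation referred to above, assuming standard photometric jitters (random brightness, contrast, and additive Gaussian noise); the exact augmentation set and parameter ranges used in the paper may differ.

```python
import numpy as np

def pixel_level_augment(image, rng=None):
    """Apply simple photometric (pixel-level) augmentations to a float image in [0, 1].

    Assumed augmentations: random brightness shift, contrast scaling,
    and additive Gaussian noise; illustrative only.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = image.astype(np.float32)

    # Random brightness shift in [-0.1, 0.1].
    out = out + rng.uniform(-0.1, 0.1)

    # Random contrast scaling about the mean, factor in [0.8, 1.2].
    mean = out.mean()
    out = (out - mean) * rng.uniform(0.8, 1.2) + mean

    # Additive Gaussian noise with sigma = 0.02.
    out = out + rng.normal(0.0, 0.02, size=out.shape)

    return np.clip(out, 0.0, 1.0)
```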
KEYWORDS: Image segmentation, Ultrasonography, Signal to noise ratio, Signal attenuation, 3D image processing, Expectation maximization algorithms, Breast, 3D modeling, Tissues, Magnetic resonance imaging
This paper examines three Bayesian statistical segmentation techniques, combined with a novel attenuation compensation, on synthetic data and breast ultrasound medical images. All three use expectation maximization to estimate the Gaussian model parameters and segment the data using a three-dimensional (3-D) Markov random field pixel neighborhood. The techniques compared are maximum a posteriori simulated annealing (MAP-SA), MAP iterated conditional modes (MAP-ICM), and maximization of posterior marginals (MPM). We conclude that, given the high speckle noise and adverse attenuation of breast ultrasound, the MPM algorithm performs best, because it produces better-localized segmentations than the MAP techniques. We present results first on synthetic images and then on breast ultrasound. Our new contributions for 3-D breast ultrasound produce improved results by using a noise model in which the Gaussian mean is proportional to the image attenuation with depth, combined with a new prior probability model.
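A minimal sketch of the expectation-maximization step used to estimate per-class Gaussian parameters, shown here as a plain 1-D mixture fit on pixel intensities; the paper's method additionally uses a 3-D Markov random field neighborhood prior and a depth-dependent (attenuation-proportional) mean, which are omitted in this sketch.

```python
import numpy as np

def em_gaussian_mixture(x, k, n_iter=50, rng=None):
    """Estimate Gaussian mixture parameters (means, variances, weights) by EM.

    x : array of pixel intensities; k : number of tissue classes.
    Illustrative only; no spatial (MRF) prior or attenuation model is used.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=np.float64).ravel()
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)

    for _ in range(n_iter):
        # E-step: posterior responsibility of each class for each pixel.
        resp = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: update weights, means, and variances from the responsibilities.
        nk = resp.sum(axis=0)
        w = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

    return mu, var, w
```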
3D imaging systems based on liquid lens technology are currently being developed for use in medical devices as well as in consumer electronics. Liquid lenses operate on the principle of electrowetting to control the curvature of a buried surface, allowing a voltage-controlled change in focal length. Imaging systems that use a liquid lens allow depth information to be extracted from the object field through a controlled introduction of defocus into the system. The design of such a system must be considered carefully in order to simultaneously deliver good image quality and meet the depth-of-field requirements for image processing. In this work a corrective model has been designed for use with the Varioptic Arctic 316 liquid lens. The design can be optimized for depth of field while minimizing aberrations for a 3D imaging application. The modeled performance is compared to the measured performance of the corrected system over a large range of focal lengths.
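A back-of-the-envelope relation for how defocus encodes depth, assuming a simple thin-lens model; the parameter names are illustrative and the paper's corrective optical model is more detailed.

```python
def blur_circle_diameter(f, aperture, focus_dist, object_dist):
    """Geometric blur-circle diameter for an object off the focal plane (thin-lens model).

    f           -- focal length of the lens (same units throughout, e.g. mm)
    aperture    -- aperture (entrance pupil) diameter
    focus_dist  -- distance at which the lens is focused
    object_dist -- actual distance to the object
    """
    return aperture * f * abs(object_dist - focus_dist) / (object_dist * (focus_dist - f))

# Example: a 3.0 mm focal length, 1.5 mm aperture, focused at 300 mm, object at 500 mm.
print(blur_circle_diameter(3.0, 1.5, 300.0, 500.0))
```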
A new method for capturing 3D video from a single imager and lens is introduced. The benefit of this method is that it avoids the calibration and alignment issues associated with binocular 3D video cameras, and it does not require special ranging transmitters and sensors. Because it is a single lens/imager system, it is also less expensive than either binocular or ranging cameras. Our system outputs a 2D image and an associated depth image using the combination of a microfluidic lens and a Depth from Defocus (DfD) algorithm. The lens is capable of changing focus quickly enough to obtain two images at the normal video frame rate. The Depth from Defocus algorithm uses the in-focus and out-of-focus images to infer depth. We performed our experiments on synthetic images and on images from a real-aperture CMOS imager with a microfluidic lens. On synthetic images, we found an improvement in mean squared error compared to the literature on a limited test set. On camera images, DfD combined with edge detection and segmentation provided subjective improvements in the resulting depth images.
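A minimal sketch of a Depth from Defocus cue computed from a focused/defocused image pair, assuming a local sharpness-ratio approach; this is a simplified stand-in for the paper's DfD algorithm, which additionally uses edge detection and segmentation.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def relative_depth_from_defocus(img_focused, img_defocused, window=9, eps=1e-6):
    """Estimate a relative depth cue from a focused / defocused image pair.

    Compares local high-frequency (Laplacian) energy between the two images:
    the more sharpness is lost under defocus, the farther the surface is
    assumed to be from the focal plane. Illustrative only.
    """
    e_focused = uniform_filter(laplace(img_focused.astype(np.float64)) ** 2, window)
    e_defocused = uniform_filter(laplace(img_defocused.astype(np.float64)) ** 2, window)

    # Fraction of sharpness retained under defocus: ~1 near the focal plane, ->0 far from it.
    ratio = e_defocused / (e_focused + eps)
    return 1.0 - np.clip(ratio, 0.0, 1.0)
```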
Conference Committee Involvement (2)
Digital Photography and Mobile Imaging XI
9 February 2015 | San Francisco, California, United States
Digital Photography X
3 February 2014 | San Francisco, California, United States