Presentation + Paper
Augmented reality data generation for training deep learning neural network
30 April 2018
Kevin Payumo, Alexander Huyen, Landan Seguin, Thomas T. Lu, Edward Chow, Gil Torres
Abstract
One of the major challenges in deep learning is obtaining sufficiently large labeled training datasets, which can be expensive and time-consuming to collect. A unique approach is to train Deep Neural Network (DNN) segmentation models from a minimal number of initial labeled training samples. The procedure creates synthetic data and uses image registration to calculate affine transformations to apply to the synthetic data. The method takes a small dataset and generates a high-quality augmented reality synthetic dataset with strong variance while maintaining consistency with real cases. Results show segmentation improvements across various target features and increased average target confidence.
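The paper's pipeline is not published as code; the following is a minimal sketch of the core idea only, assuming grayscale 2-D NumPy arrays, nearest-neighbour resampling, and a hypothetical `augment` helper that composites an affine-warped synthetic target (and its label mask) onto a real background image. The 2x3 matrix `M` stands in for a transform recovered by image registration.

```python
import numpy as np

def affine_warp(image, M, out_shape):
    """Apply a 2x3 affine transform M (dest = A @ src + t) to a 2-D
    image via inverse nearest-neighbour mapping, no external deps."""
    H, W = out_shape
    ys, xs = np.mgrid[0:H, 0:W]
    A, t = M[:, :2], M[:, 2]
    A_inv = np.linalg.inv(A)
    # For each destination pixel, find the source pixel it came from.
    src = (np.stack([xs, ys], axis=-1) - t) @ A_inv.T
    sx = np.round(src[..., 0]).astype(int)
    sy = np.round(src[..., 1]).astype(int)
    valid = (sx >= 0) & (sx < image.shape[1]) & (sy >= 0) & (sy < image.shape[0])
    out = np.zeros(out_shape, dtype=image.dtype)
    out[valid] = image[sy[valid], sx[valid]]
    return out

def augment(background, target, mask, M):
    """Composite a warped synthetic target and its segmentation mask
    onto a background, yielding one augmented image/label pair."""
    warped_target = affine_warp(target, M, background.shape)
    warped_mask = affine_warp(mask, M, background.shape)
    out = background.copy()
    out[warped_mask > 0] = warped_target[warped_mask > 0]
    return out, warped_mask
```

Generating many such pairs from one labeled target, each with a different registration-derived `M`, is what yields the dataset variance described above while the label mask stays pixel-accurate by construction.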
Conference Presentation
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Kevin Payumo, Alexander Huyen, Landan Seguin, Thomas T. Lu, Edward Chow, and Gil Torres "Augmented reality data generation for training deep learning neural network", Proc. SPIE 10649, Pattern Recognition and Tracking XXIX, 106490U (30 April 2018); https://doi.org/10.1117/12.2305202
CITATIONS
Cited by 1 scholarly publication and 1 patent.
KEYWORDS
Image segmentation, Neural networks, Video, Augmented reality, Data modeling, Autoregressive models, Image processing