Paper
Automatic 2D-to-3D image conversion using 3D examples from the internet
22 February 2012
J. Konrad, G. Brown, M. Wang, P. Ishwar, C. Wu, D. Mukherjee
Proceedings Volume 8288, Stereoscopic Displays and Applications XXIII; 82880F (2012) https://doi.org/10.1117/12.910601
Event: IS&T/SPIE Electronic Imaging, 2012, Burlingame, California, United States
Abstract
The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. 
This, to a degree, emulates the results one would expect from the use of an extremely large 3D repository. While far from perfect, the presented results demonstrate that on-line repositories of 3D content can be used for effective 2D-to-3D image conversion. With the continuously increasing amount of 3D data on-line and with the rapidly growing computing power in the cloud, the proposed framework seems a promising alternative to operator-assisted 2D-to-3D conversion.
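The two core steps described in the abstract — combining candidate disparity fields with a pixel-wise median, and shifting the 2D query by the fused disparities to synthesize a right view while filling newly-exposed areas — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the hole-filling here simply propagates the nearest rendered pixel along each row, a crude stand-in for the occlusion handling the paper describes, and the function names are hypothetical.

```python
import numpy as np

def fuse_disparities(disparity_fields):
    """Pixel-wise median across candidate disparity fields.

    The median is robust to outliers, so disparity fields from stereopairs
    that match the query less well (noise, distortions, content differences)
    have limited influence on the fused result.
    """
    return np.median(np.stack(disparity_fields, axis=0), axis=0)

def render_right_view(left, disparity):
    """Shift each left-image pixel by its disparity to synthesize a right view.

    `left` is a 2D integer array (grayscale, for simplicity); `disparity` gives
    the per-pixel horizontal shift. Disoccluded pixels (holes) are filled by
    propagating the last rendered pixel along the row -- a simple placeholder
    for the paper's handling of newly-exposed areas.
    """
    h, w = left.shape
    right = np.full((h, w), -1, dtype=np.int64)  # -1 marks holes
    for y in range(h):
        for x in range(w):
            xr = int(round(x - disparity[y, x]))
            if 0 <= xr < w:
                right[y, xr] = left[y, x]
        last = 0
        for x in range(w):  # fill holes left-to-right
            if right[y, x] == -1:
                right[y, x] = last
            else:
                last = right[y, x]
    return right

# Toy example: three candidate disparity fields, one an outlier.
fields = [np.zeros((1, 3)), np.ones((1, 3)), np.full((1, 3), 9.0)]
fused = fuse_disparities(fields)            # median suppresses the outlier field
row = np.array([[10, 20, 30]])
right = render_right_view(row, fused)       # pixels shift left by 1; hole filled
```

In the paper's setting, each candidate field would come from an on-line stereopair whose left image photometrically matches the query; here the fields are synthetic placeholders.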
© (2012) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
J. Konrad, G. Brown, M. Wang, P. Ishwar, C. Wu, and D. Mukherjee "Automatic 2D-to-3D image conversion using 3D examples from the internet", Proc. SPIE 8288, Stereoscopic Displays and Applications XXIII, 82880F (22 February 2012); https://doi.org/10.1117/12.910601
CITATIONS
Cited by 37 scholarly publications and 5 patents.
KEYWORDS
3D image processing, Databases, Image filtering, Image fusion, 3D modeling, Video, Associative arrays
