Sub-Pixel Super Resolution (SPSR) is an established technique for extracting high-resolution information by combining multiple low-resolution images. Typically these images are related by very small, known deformations along the sensor plane and can be analytically reconstructed into one large image. Unlike previous methods, which largely rely on naturally occurring changes to the optical system (e.g., hand-shake or rotations), adaptive optics (AO) systems can induce customized optical changes at very high temporal resolutions. We propose a method for SPSR that leverages the AO deformable mirror by finding the optimal phases for sub-pixel shifting. We accomplish this by end-to-end optimization of the phase and the super-resolution method while taking the underlying optical system into account. This technique can be applied to both analytical and state-of-the-art deep learning methods.
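The classical reconstruction step that the abstract builds on can be illustrated with a naive shift-and-add scheme: each low-resolution frame, together with its known sub-pixel shift, is accumulated onto a finer grid. This is a minimal sketch only (nearest-neighbor placement, no regularization, and none of the paper's phase optimization); all function and variable names here are hypothetical.

```python
import numpy as np

def shift_and_add_sr(frames, shifts, scale):
    """Naive shift-and-add super resolution (illustrative sketch only).

    frames: list of (h, w) low-resolution images
    shifts: list of (dy, dx) known sub-pixel shifts, in low-res pixels
    scale:  integer upsampling factor
    """
    h, w = frames[0].shape
    hi = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(hi)
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each low-res sample to its nearest high-res grid cell,
        # offset by the known sub-pixel shift (wrap at the border).
        ys = np.round((np.arange(h) + dy) * scale).astype(int) % (h * scale)
        xs = np.round((np.arange(w) + dx) * scale).astype(int) % (w * scale)
        hi[np.ix_(ys, xs)] += frame
        weight[np.ix_(ys, xs)] += 1.0
    # Average where samples landed; untouched cells stay zero.
    return hi / np.maximum(weight, 1)

# Toy example: four quarter-pixel-shifted captures of the same scene.
truth = np.arange(64, dtype=float).reshape(8, 8)
frames = [truth] * 4
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
sr = shift_and_add_sr(frames, shifts, scale=2)
print(sr.shape)  # (16, 16)
```

With ideal half-pixel shifts as above, the four frames interleave exactly onto the 2x high-resolution grid; the paper's contribution is choosing the mirror phases so that the induced shifts are optimal for the downstream reconstructor rather than fixed in advance.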
While deep learning has led to breakthroughs in many areas of computer science, its power has yet to be fully exploited in adaptive optics (AO) and astronomy as a whole. In this paper we describe the first steps taken to apply deep convolutional neural networks to the problem of wavefront reconstruction and prediction, and demonstrate their feasibility in simulation. Our preliminary results show we are able to reconstruct wavefronts comparably to current state-of-the-art methods. We further demonstrate the ability to predict future wavefronts up to five simulation steps ahead with under 1 nm RMS wavefront error.
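The prediction task described above, forecasting the wavefront several control steps ahead from its recent history, can be sketched with a much simpler stand-in model: a linear autoregressive predictor on a few modal coefficients, rolled out five steps and scored against held-out frames. This is purely illustrative (the paper uses full 2D wavefront maps and a convolutional network); the synthetic data and every name below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: a wavefront summarized by a few modal
# coefficients that evolve smoothly in time, plus measurement noise.
T, n_modes, order = 200, 5, 3
t = np.arange(T)
coeffs = np.stack([np.sin(0.05 * t + k) for k in range(n_modes)], axis=1)
coeffs += 0.01 * rng.standard_normal(coeffs.shape)

# Hold out the last five frames as the prediction targets.
train, test = coeffs[:T - 5], coeffs[T - 5:]

# Fit a linear map from the past `order` frames to the next frame.
n = len(train)
X = np.hstack([train[i:n - order + i] for i in range(order)])
y = train[order:]
W, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict five steps ahead by feeding predictions back as inputs.
history = list(train[-order:])
preds = []
for _ in range(5):
    p = np.hstack(history[-order:]) @ W
    preds.append(p)
    history.append(p)

rmse = np.sqrt(np.mean((np.array(preds) - test) ** 2))
print(f"5-step rollout RMSE: {rmse:.4f}")
```

The closed-loop rollout (feeding each prediction back in) mirrors how a multi-step wavefront predictor would be evaluated; a learned network replaces the linear map `W` when the dynamics are not this simple.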
Conventional cameras record all light falling on their sensor regardless of the path that light followed to get there. In this paper we give an overview of a new family of computational cameras that offers many more degrees of freedom. These cameras record just a fraction of the light coming from a controllable source, based on the actual 3D light path followed. Photos and live video captured this way offer an unconventional view of everyday scenes in which the effects of scattering, refraction and other phenomena can be selectively blocked or enhanced, visual structures that are too subtle to notice with the naked eye can become apparent, and object appearance can depend on depth. We give an overview of the basic theory behind these cameras and their DMD-based implementation, and discuss three applications: (1) live indirect-only imaging of complex everyday scenes, (2) reconstructing the 3D shape of scenes whose geometry or material properties make them hard or impossible to scan with conventional methods, and (3) acquiring time-of-flight images that are free of multi-path interference.
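The idea of selectively blocking or passing light by 3D path can be made concrete with a toy linear light-transport model. This is not the paper's DMD implementation, only a standard matrix picture under an assumed coaxial source/sensor geometry where the transport matrix's diagonal carries direct light and the off-diagonal entries carry indirect (scattered) light; all names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy transport matrix: strong direct term on the diagonal (each camera
# pixel sees its corresponding source pixel), weak indirect coupling
# everywhere else (scattering, interreflections).
n = 6
T_mat = 0.05 * rng.random((n, n))
np.fill_diagonal(T_mat, 1.0 + rng.random(n))

scene = rng.random(n)        # projected illumination pattern
full = T_mat @ scene         # what a conventional camera integrates

# "Indirect-only" capture: mask out corresponding source/sensor pairs,
# i.e., zero the diagonal before integrating.
indirect = (T_mat - np.diag(np.diag(T_mat))) @ scene
direct = full - indirect     # direct component by subtraction
```

In this picture the programmable mask decides which entries of the transport matrix reach the sensor; the hardware described in the paper performs that selection optically and at video rate rather than in post-processing.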
Conference Committee Involvement (8)
Optoelectronic Imaging and Multimedia Technology X
15 October 2023 | Beijing, China
Optoelectronic Imaging and Multimedia Technology IX
5 December 2022 | Online Only, China
Optoelectronic Imaging and Multimedia Technology VIII
10 October 2021 | Nantong, JS, China
Optoelectronic Imaging and Multimedia Technology VII
12 October 2020 | Online Only, China
Optoelectronic Imaging and Multimedia Technology VI
21 October 2019 | Hangzhou, China
Optoelectronic Imaging and Multimedia Technology V
11 October 2018 | Beijing, China
Optoelectronic Imaging and Multimedia Technology IV
12 October 2016 | Beijing, China
Optoelectronic Imaging and Multimedia Technology III