Advanced Photonics, Vol. 1, Issue 4, 046001 (August 2019). https://doi.org/10.1117/1.AP.1.4.046001
TOPICS: Neural networks, Sensors, Signal detection, Optical networks, Neurons, Machine learning, Data modeling, Photodetectors, Photonics, Performance modeling
Optical computing provides unique opportunities in terms of parallelization, scalability, power efficiency, and computational speed, and has attracted major interest for machine learning. Diffractive deep neural networks were introduced earlier as an optical machine learning framework that uses task-specific diffractive surfaces designed by deep learning to all-optically perform inference, achieving promising performance for object classification and imaging. We demonstrate systematic improvements in diffractive optical neural networks based on a differential measurement technique that mitigates the strict non-negativity constraint of light intensity. In this differential detection scheme, each class is assigned to a separate pair of photodetectors positioned behind a diffractive optical network, and the class inference is made by maximizing the normalized signal difference between the photodetector pairs. Using this differential detection scheme with 10 photodetector pairs behind 5 diffractive layers containing a total of 0.2 million neurons, we numerically achieved blind testing accuracies of 98.54%, 90.54%, and 48.51% for the MNIST, Fashion-MNIST, and grayscale CIFAR-10 datasets, respectively. Moreover, by utilizing the inherent parallelization capability of optical systems, we reduced the cross-talk and optical signal coupling between the positive and negative detectors of each class by dividing the optical path into two jointly trained diffractive neural networks that work in parallel. We further exploited this parallelization approach by dividing the individual classes of a target dataset among multiple jointly trained diffractive neural networks. Using this class-specific differential detection in jointly optimized diffractive neural networks operating in parallel, our simulations achieved blind testing accuracies of 98.52%, 91.48%, and 50.82% for the MNIST, Fashion-MNIST, and grayscale CIFAR-10 datasets, respectively, coming close to the performance of some of the earlier generations of all-electronic deep neural networks, e.g., LeNet, which achieves classification accuracies of 98.77%, 90.27%, and 55.21% for the same datasets, respectively. In addition to these jointly optimized diffractive neural networks, we also independently optimized multiple diffractive networks and utilized them in a manner similar to ensemble methods practiced in machine learning: using 3 independently optimized differential diffractive neural networks that optically project their light onto a common output/detector plane, we numerically achieved blind testing accuracies of 98.59%, 91.06%, and 51.44% for the MNIST, Fashion-MNIST, and grayscale CIFAR-10 datasets, respectively. Through these systematic advances in diffractive neural network design, the reported classification accuracies set the state of the art for all-optical neural networks. The presented framework might help bring low-power, optical neural network-based solutions to various machine learning applications and aid in the design of new task-specific computational cameras.
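To make the differential detection readout concrete, the short sketch below computes per-class scores from the optical intensities measured by the positive and negative photodetector of each class pair and assigns the label with the largest normalized signal difference. This is a minimal illustration, not the authors' implementation: the specific normalization (difference divided by the pair's summed intensity) and all function and variable names are assumptions introduced here for clarity.

```python
import numpy as np

def differential_class_scores(pos_intensities, neg_intensities, eps=1e-12):
    """Differential detection readout (illustrative sketch).

    pos_intensities, neg_intensities: arrays of shape (num_classes,) holding
    the non-negative optical intensities measured by the "positive" and
    "negative" photodetector assigned to each class.

    Assumption: the normalized signal difference is taken as
    (I_plus - I_minus) / (I_plus + I_minus); the paper's exact
    normalization may differ.
    """
    pos = np.asarray(pos_intensities, dtype=float)
    neg = np.asarray(neg_intensities, dtype=float)
    return (pos - neg) / (pos + neg + eps)

def infer_class(pos_intensities, neg_intensities):
    """Class inference: pick the class with the maximum normalized difference."""
    return int(np.argmax(differential_class_scores(pos_intensities, neg_intensities)))

# Example: 10 classes (e.g., MNIST digits) with hypothetical detector readings.
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, size=10)
neg = rng.uniform(0.0, 1.0, size=10)
print("scores:", differential_class_scores(pos, neg))
print("predicted class:", infer_class(pos, neg))
```

In the ensemble-like configuration described above, the light from multiple independently optimized diffractive networks is projected onto a common output/detector plane, so the per-class detector intensities would effectively sum optically before a readout of this kind is applied.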