We present both data-free and data-driven methods for the all-optical synthesis of an arbitrary complex-valued linear transformation using diffractive surfaces. Our analyses reveal that if the total number (N) of spatially engineered diffractive features/neurons exceeds a threshold set by the product of the number of pixels in the input (I) and output (O) fields-of-view, i.e., N > I×O, both methods succeed in the all-optical implementation of the target transformation. However, compared to data-free designs, deep-learning-based diffractive designs with multiple diffractive layers achieve significantly higher diffraction efficiencies, and their all-optical transformations are substantially more accurate when N < I×O.
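The N > I×O threshold can be illustrated with a toy numerical sketch (not the paper's electromagnetic model): a single layer of N complex-valued transmittance coefficients sandwiched between two fixed matrices standing in for free-space diffraction. The names `W1`, `W2`, and `residual`, the use of random complex matrices in place of physical propagation, and the small pixel counts are all illustrative assumptions. Because the realized transform is linear in the transmittances, a data-free least-squares fit recovers the target exactly only when N reaches I×O.

```python
import numpy as np

rng = np.random.default_rng(0)
I, O = 4, 4  # hypothetical input/output pixel counts, so I*O = 16
# Target complex-valued linear transformation (random for illustration)
T = rng.normal(size=(O, I)) + 1j * rng.normal(size=(O, I))

def residual(N):
    """Relative error of the best fit to T using N diffractive neurons."""
    # One layer of N complex transmittances t, modeled as diag(t) between
    # fixed random "propagation" matrices W1 (N x I) and W2 (O x N).
    W1 = rng.normal(size=(N, I)) + 1j * rng.normal(size=(N, I))
    W2 = rng.normal(size=(O, N)) + 1j * rng.normal(size=(O, N))
    # A = W2 @ diag(t) @ W1 is linear in t: vec(A) = M @ t with
    # M[(o, i), k] = W2[o, k] * W1[k, i], an (I*O) x N system.
    M = np.einsum('ok,ki->oik', W2, W1).reshape(O * I, N)
    t, *_ = np.linalg.lstsq(M, T.reshape(-1), rcond=None)
    A = W2 @ np.diag(t) @ W1
    return np.linalg.norm(A - T) / np.linalg.norm(T)

print(residual(8))   # N < I*O: target not reachable, large residual
print(residual(16))  # N = I*O: residual near machine precision
```

For generic (full-rank) W1 and W2 the system matrix M has rank min(N, I·O), which is why the fit becomes exact precisely at N = I×O in this linear toy model; the paper's data-driven, multi-layer designs address the harder N < I×O regime.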