KEYWORDS: 3D mask effects, Lithography, Extreme ultraviolet, Education and training, Light sources and illumination, Diffraction, Computer simulations, Modeling, Near field, Waveguides, Deep learning, Computational lithography
Background: The increasing demands of computational lithography and computational imaging in the design and optimization of lithography processes necessitate rigorous modeling of extreme ultraviolet (EUV) light diffracted from the mask. Traditional electromagnetic field (EMF) solvers are inefficient for large-scale technology problems, while deep neural networks require large amounts of expensive, rigorously simulated or measured data. Aim: To overcome these constraints, we explore the potential of physics-informed neural networks (PINN) as a promising solution for addressing complex optical problems in the field of EUV lithography. Approach: We extend the existing MaxwellNet to simulate the light diffraction from typical reflective EUV masks. Coupling the predicted diffraction spectrum with image simulations enables the evaluation of PINN performance in predicting relevant lithographic metrics and typical mask 3D effects. Results: The results of modeling near- and far-field diffraction using PINN demonstrate good convergence behavior, stability, and accuracy, as well as a significant speed-up (up to ×10000) compared to rigorous 3D mask simulation using an established numerical EMF solver. In contrast to other machine learning approaches, PINN accurately simulates the near field, learns the involved physics, and captures the optical and mask-induced 3D effects. PINNs can predict lithographic process windows with sufficient accuracy. Conclusions: Unlike numerical solvers, once trained, a generalized PINN can simulate light scattering in several milliseconds without re-training and independently of problem complexity. This opens up capabilities for partially coherent imaging simulations without the Hopkins approach, source optimization, and fast investigation of mask 3D effects.
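The distinguishing ingredient of a PINN is that its loss penalizes the residual of the governing wave equation rather than (only) the mismatch to labeled data. The sketch below illustrates this idea for a scalar Helmholtz equation on a 2D grid with a finite-difference Laplacian; it is a simplified, dimensionless illustration with hypothetical names, not the vectorial formulation MaxwellNet actually uses.

```python
import numpy as np

def helmholtz_residual(field, refractive_index, wavelength, dx):
    """Physics-informed loss: mean squared residual of the scalar
    Helmholtz equation  d2E/dx2 + d2E/dy2 + k0^2 n^2 E = 0,
    evaluated with second-order finite differences on interior grid
    points and normalized by k0^2 to make it dimensionless."""
    k0 = 2.0 * np.pi / wavelength
    lap = (
        field[:-2, 1:-1] + field[2:, 1:-1]
        + field[1:-1, :-2] + field[1:-1, 2:]
        - 4.0 * field[1:-1, 1:-1]
    ) / dx**2
    n_in = refractive_index[1:-1, 1:-1]
    residual = (lap + (k0 * n_in) ** 2 * field[1:-1, 1:-1]) / k0**2
    return float(np.mean(np.abs(residual) ** 2))

# Sanity check: a plane wave E = exp(i*k0*x) in vacuum (n = 1) solves
# the Helmholtz equation, so its residual loss is near zero; a constant
# field does not, so its loss is large.
wavelength = 1.0              # dimensionless units (for EUV, 13.5 nm sets the scale)
dx = wavelength / 200.0
x = np.arange(64) * dx
field = np.exp(1j * 2.0 * np.pi / wavelength * x)[:, None] * np.ones((1, 64))
n = np.ones((64, 64))
loss_wave = helmholtz_residual(field, n, wavelength, dx)
loss_flat = helmholtz_residual(np.ones((64, 64), dtype=complex), n, wavelength, dx)
```

In a PINN, a residual of this kind is evaluated on the network's predicted field and minimized by gradient descent, so the network can learn the scattered field without any rigorously simulated training labels.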
KEYWORDS: 3D modeling, Photomasks, 3D image processing, Extreme ultraviolet, Data modeling, Lithography, Extreme ultraviolet lithography, Process modeling, Computer programming, 3D acquisition
Background: As extreme ultraviolet (EUV) lithography has progressed toward feature dimensions smaller than the wavelength, electromagnetic field (EMF) solvers have become indispensable for EUV simulations. Although numerous approximations such as the Kirchhoff method and compact mask models exist, computationally heavy EMF simulations have largely remained the sole viable method for accurately representing the process variations dictated by mask topography effects in EUV lithography.
Aim: Accurately model EUV lithographic imaging using deep learning, taking into account 3D mask effects and EUV process variations, to overcome the computational bottleneck posed by EMF simulations.
Approach: Train an efficient generative network model on 2D and 3D model aerial images of a variety of mask layouts in a manner that highlights the discrepancies and non-linearities caused by the mask topography.
Results: The trained model is capable of predicting 3D mask model aerial images from a given 2D model aerial image for varied mask layout patterns. Moreover, the model accurately predicts the EUV process variations as dictated by the mask topography effects.
Conclusions: The utilization of such deep learning frameworks to supplement or ultimately substitute rigorous EMF simulations unlocks possibilities of more efficient process optimizations and advancements in EUV lithography.
We implement a data-efficient approach to train a conditional generative adversarial network (cGAN) to predict 3D mask model aerial images, providing the cGAN with approximated 2D mask model images as inputs and 3D mask model images as outputs. This approach exploits the similarity between the images obtained from the two computation models and the computational efficiency of the 2D mask model simulations, allowing the network to train on less data than previously implemented approaches for accurately predicting 3D mask model images. We further demonstrate that the proposed method improves accuracy over training the network with the mask pattern layouts as inputs.
Previous studies have shown that such a cGAN architecture is proficient at generalized and complex image-to-image translation tasks. In this work, we demonstrate that adjusting the weighting of the generator and discriminator losses can significantly improve the accuracy of the network from a lithographic standpoint. Our initial tests indicate that training only the generator part of the cGAN can benefit accuracy while further reducing computational overhead. The accuracy of the network-generated 3D mask model images is demonstrated by low errors in typical lithographic process metrics, such as the critical dimensions and local contrast. The network's predictions also substantially reduce the errors compared to the 2D mask model while retaining the same level of low computational demands.
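In pix2pix-style cGANs, the generator-loss weighting mentioned above is commonly realized as an adversarial term plus a weighted L1 image-fidelity term. The sketch below illustrates that weighting in plain NumPy; the function name and the default weight are illustrative assumptions, not the values used in this work.

```python
import numpy as np

def generator_loss(disc_fake_logits, generated, target, l1_weight=100.0):
    """Weighted cGAN generator objective: an adversarial term (binary
    cross-entropy on the discriminator's logits for generated images,
    against the 'real' label) plus an L1 fidelity term scaled by
    l1_weight. Raising l1_weight shifts emphasis from adversarial
    realism toward per-pixel accuracy of the predicted aerial image."""
    # Numerically stable BCE-with-logits for target label 1:
    # log(1 + exp(-z)) computed as logaddexp(0, -z).
    adv = np.mean(np.logaddexp(0.0, -np.asarray(disc_fake_logits)))
    fidelity = np.mean(np.abs(np.asarray(generated) - np.asarray(target)))
    return adv + l1_weight * fidelity

# With a perfect prediction the L1 term vanishes, so only the
# adversarial term remains (log 2 for zero logits).
perfect = generator_loss(np.zeros(4), np.ones((8, 8)), np.ones((8, 8)))
```

Tuning such a weight from the lithographic side, e.g. against critical-dimension and contrast errors rather than generic image metrics, is one way the loss balancing described above can be carried out.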