Over the years, lithography engineers have continued to focus on CD control, overlay, and process capability to meet node requirements for yield and device performance. Previous work by Fukuda1 developed a multi-exposure technique at multiple focus positions to image contact holes with adequate DOF. Lalovic2 demonstrated a fixed two-wavelength technique, called RELAX, to improve DOF. The concept of multi-focal imaging (MFI) was later introduced3: two focal positions are created and averaged over the exposure field. This wavelength "dithering" approach can be turned on and off, eliminating any potential scanner calibration issues.
In this work, we apply this imaging method (one exposure, two focus positions) to thick-photoresist and high-aspect-ratio applications. An example of thick-photoresist imaging is shown in Figure 1, where 5 um line and space features are printed in 10 um of photoresist under three imaging conditions: on the left, single-focus imaging (SFI) at best dose and focus; in the center, SFI at a defocus of +3.2 um; and on the right, MFI with two focus positions of 0 and 2.8 um. The MFI condition shows a significant improvement in sidewall angle (SWA) linearity and image profile quality. A second example, high-aspect-ratio imaging at 13:1 using MFI, is shown in Figure 2. The Tachyon KrF MFI source-mask optimization flow is then reviewed to demonstrate the optimum conditions for meeting customer-specific layer imaging requirements.
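Purely as an illustration of the averaging idea behind MFI, and not the authors' implementation, the toy Python sketch below treats the recorded image as the mean of two aerial images formed at different focus offsets, using a Gaussian blur whose width grows with defocus as a crude stand-in for the real defocus response; the grid sampling, blur coefficients, and function names are all assumptions.

```python
# Toy illustration of multi-focal imaging (MFI): the resist records the
# average of aerial images formed at two focus positions. The Gaussian
# defocus model below is a simplification for illustration only.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def aerial_image(mask, defocus_um, base_sigma=1.0, blur_per_um=2.0):
    """Rough proxy: blur grows with distance from the focal plane."""
    sigma = base_sigma + blur_per_um * abs(defocus_um)
    return gaussian_filter1d(mask.astype(float), sigma)

def contrast(img):
    return (img.max() - img.min()) / (img.max() + img.min())

# 5 um line/space target sampled at an assumed 0.1 um per pixel.
mask = np.tile(np.r_[np.ones(50), np.zeros(50)], 10)

# Evaluate image contrast at several depths z into a 10 um resist film:
# SFI is focused at z = 0; MFI averages focus positions at 0 and 2.8 um.
for z in np.arange(0.0, 10.1, 2.0):
    sfi = aerial_image(mask, z - 0.0)
    mfi = 0.5 * (aerial_image(mask, z - 0.0) + aerial_image(mask, z - 2.8))
    print(f"z = {z:4.1f} um   SFI contrast = {contrast(sfi):.3f}   "
          f"MFI contrast = {contrast(mfi):.3f}")
```

Under this toy model, the single-focus image degrades quickly away from its focal plane, while the two-focus average trades a little peak contrast for a more uniform image through the resist depth, which is the qualitative behavior seen in Figure 1.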
Various computational approaches, from rule-based to model-based methods, exist to place Sub-Resolution Assist Features (SRAF) in order to increase the lithography process window. Each method has its advantages and drawbacks, and typically requires the user to trade off development time, accuracy, consistency, and cycle time.
Rule-based methods, used since the 90 nm node, require long development time and struggle to achieve good process window performance for complex patterns. Heuristically driven, their development is often iterative and involves significant engineering time from multiple disciplines (Litho, OPC and DTCO).
Model-based approaches have been widely adopted since the 20 nm node. While the development of model-driven placement methods is relatively straightforward, they often become computationally expensive when high accuracy is required. Furthermore, these methods tend to yield less consistent SRAFs due to the nature of the approach: they rely on a model that is sensitive to pattern placement on the native simulation grid and can therefore be affected by grid-dependency effects. These undesirable effects tend to become stronger when more iterations or greater algorithmic complexity are needed to achieve the required accuracy.
ASML Brion has developed a new SRAF placement technique on the Tachyon platform that is assisted by machine learning and significantly improves the accuracy of full-chip SRAF placement while keeping consistency and runtime under control. A Deep Convolutional Neural Network (DCNN) is trained using the target wafer layout and corresponding Continuous Transmission Mask (CTM) images. These CTM images have been fully optimized using the Tachyon inverse mask optimization engine. The neural network generated SRAF guidance map is then used to place SRAFs on full chip. This is different from our existing full-chip MB-SRAF approach, which utilizes an SRAF guidance map (SGM) of mask sensitivity to improve the contrast of the optical image at the target pattern edges.
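The paper does not disclose the network architecture, so purely as an illustration of the training setup described above, the sketch below defines a small fully convolutional network (in PyTorch) that maps a rasterized target layout clip to a per-pixel SRAF guidance map and regresses it against a CTM-derived target with an MSE loss. All layer sizes, tensor shapes, and names are hypothetical.

```python
# Hypothetical sketch: a small fully convolutional network mapping a
# rasterized layout clip to a per-pixel SRAF guidance map, trained against
# CTM-derived guidance images. Architecture and sizes are assumed, not
# taken from the paper.
import torch
import torch.nn as nn

class SRAFGuidanceNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),  # one guidance value per pixel
        )

    def forward(self, layout):  # layout: (N, 1, H, W)
        return self.body(layout)

model = SRAFGuidanceNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch standing in for (layout clip, CTM guidance) training pairs.
layout_clips = torch.rand(8, 1, 128, 128)
ctm_guidance = torch.rand(8, 1, 128, 128)

for step in range(3):  # a few steps, for illustration only
    pred = model(layout_clips)
    loss = loss_fn(pred, ctm_guidance)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss = {loss.item():.4f}")
```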
In this paper, we demonstrate that machine learning assisted SRAF placement can achieve a superior process window compared to the SGM model-based SRAF method, while keeping the full-chip runtime affordable and maintaining the consistency of SRAF placement. We describe the current status of this machine learning assisted SRAF technique, demonstrate its application to full-chip mask synthesis, and discuss how it can extend the computational lithography roadmap.
Sub-Resolution Assist Features (SRAF) are widely used for Process Window (PW) enhancement in computational lithography. Rule-Based SRAF (RB-SRAF) methods work well with simple designs and regular repeated patterns, but require a long development cycle involving Litho, OPC, and design-technology co-optimization (DTCO) engineers. Furthermore, RB-SRAF is heuristics-based, and there is no guarantee that SRAF placement is optimal for complex patterns. In contrast, the Model-Based SRAF (MB-SRAF) technique, which constructs SRAFs using a guidance map, is sufficient to provide the required process window for the 32 nm node and below. It improves the lithography margin for full chip and removes the challenge of manually developing complex rules to assist 2D structures. The machine learning assisted SRAF placement technique developed on the ASML Brion Tachyon platform allows us to push the limits of MB-SRAF even further. A Deep Convolutional Neural Network (DCNN) is trained using a Continuous Transmission Mask (CTM) that is fully optimized by the Tachyon inverse lithography engine. The neural network generated SRAF guidance map is then used to assist full-chip SRAF placement. This differs from the current full-chip MB-SRAF approach, which utilizes a guidance map of mask sensitivity to improve the contrast of the optical image at the edges of the lithography target patterns. We expect that machine learning assisted SRAF placement can achieve a superior process window compared to the MB-SRAF method, with a full-chip-affordable runtime significantly faster than inverse lithography. We describe the current status of the machine learning assisted SRAF technique, demonstrate its application to full-chip mask synthesis, and discuss how it can extend the computational lithography roadmap.
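To make the "guidance map to SRAF placement" step concrete, the sketch below shows one simple way such a map could be turned into rectangular SRAF candidates: threshold the guidance values and take the bounding box of each connected region. This thresholding scheme is an assumption for illustration, not the Tachyon placement algorithm.

```python
# Illustrative (assumed) post-processing: convert a continuous SRAF guidance
# map into rectangular SRAF candidates by thresholding and taking the
# bounding box of each connected region. Not the actual Tachyon algorithm.
import numpy as np
from scipy import ndimage

def extract_sraf_rects(guidance, threshold=0.5, min_pixels=4):
    """Return half-open pixel boxes (row_min, col_min, row_max, col_max)."""
    labels, num = ndimage.label(guidance > threshold)
    rects = []
    for idx, sl in enumerate(ndimage.find_objects(labels), start=1):
        if np.count_nonzero(labels[sl] == idx) >= min_pixels:
            rects.append((sl[0].start, sl[1].start, sl[0].stop, sl[1].stop))
    return rects

# Toy guidance map with two bright bars where SRAFs are suggested.
guidance = np.zeros((64, 64))
guidance[10:14, 5:40] = 0.9
guidance[40:44, 20:60] = 0.8
print(extract_sraf_rects(guidance))
```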
With the advancement of very large scale integrated circuits (VLSI) technology nodes, lithographic hotspots become a serious problem that affects manufacturing yield. Lithography hotspot detection at the post-OPC stage is imperative to check potential circuit failures when transferring designed patterns onto silicon wafers. Although conventional lithography hotspot detection methods, such as machine learning, have gained satisfactory performance, with the extreme scaling of transistor feature size and layout patterns growing in complexity, conventional methodologies may suffer from performance degradation. For example, manual or ad hoc feature extraction in a machine learning framework may lose important information when predicting potential errors in ultra-large-scale integrated circuit masks. We present a deep convolutional neural network (CNN) that targets representative feature learning in lithography hotspot detection. We carefully analyze the impact and effectiveness of different CNN hyperparameters, through which a hotspot-detection-oriented neural network model is established. Because hotspot patterns are always in the minority in VLSI mask design, the training dataset is highly imbalanced. In this situation, a neural network is no longer reliable, because a trained model with high classification accuracy may still suffer from a high number of false negative results (missing hotspots), which is fatal in hotspot detection problems. To address the imbalance problem, we further apply hotspot upsampling and random-mirror flipping before training the network. Experimental results show that our proposed neural network model achieves comparable or better performance on the ICCAD 2012 contest benchmark compared to state-of-the-art hotspot detectors based on deep or representative machine learning.
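As a hedged illustration of the two imbalance countermeasures named above (hotspot upsampling and random-mirror flipping), the sketch below duplicates minority-class clips until the classes are roughly balanced and applies random horizontal/vertical flips as augmentation; the dataset shapes and names are hypothetical, not taken from the paper.

```python
# Illustrative handling of class imbalance for hotspot detection:
# upsample the minority (hotspot) class and apply random mirror flips.
# Shapes and names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def upsample_minority(clips, labels):
    """Duplicate hotspot clips (label 1) until classes are roughly balanced."""
    hot = np.flatnonzero(labels == 1)
    non = np.flatnonzero(labels == 0)
    reps = max(1, len(non) // max(1, len(hot)))
    idx = np.concatenate([non, np.tile(hot, reps)])
    rng.shuffle(idx)
    return clips[idx], labels[idx]

def random_mirror(clip):
    """Randomly flip a layout clip horizontally and/or vertically."""
    if rng.random() < 0.5:
        clip = clip[:, ::-1]
    if rng.random() < 0.5:
        clip = clip[::-1, :]
    return clip

# Toy dataset: 1000 non-hotspot clips and 50 hotspot clips of 64x64 pixels.
clips = rng.random((1050, 64, 64))
labels = np.array([0] * 1000 + [1] * 50)

clips_bal, labels_bal = upsample_minority(clips, labels)
augmented = np.stack([random_mirror(c) for c in clips_bal])
print("balanced class counts:", np.bincount(labels_bal))
```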