The goal of this paper is to explore machine learning solutions that improve the run-time of model-based retargeting in the mask synthesis flow. Retargeting resizes non-lithography-friendly designs so that the design geometries are shifted into a more lithography-robust design space. However, current model-based approaches incur significant run-time, so this step is rarely performed in production settings. Machine learning solutions for resolution enhancement techniques (RETs) have been proposed previously, for instance to model optical proximity correction (OPC) or inverse lithography technology (ILT). In this paper, we compare and extend some of these solutions. Finally, we discuss experimental results that achieve a nearly 360x run-time improvement while maintaining accuracy similar to traditional retargeting techniques.
In this paper, we present a machine learning solution targeted at memory customers, covering both assist feature and main feature mask synthesis. In a previous paper, we demonstrated machine learning ILT solutions for the creation of assist features using a neural network. Here, we extend the solution to main feature masks, which we create using machine learning models trained on the fully ILT-corrected masks. In practice, while the correction of main features is often visually more intuitive, there are underlying edge-to-edge and polygon-to-polygon interactions that are not easily captured by the local-influence edge perturbations found in typical OPC solvers, but that can be captured by ILT and by machine learning models trained on ILT masks.
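As an illustration of how training data for such an image-to-image model might be prepared, the sketch below cuts a rasterized target layout and its ILT-corrected mask into aligned clips. The function name, clip size, and stride are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def extract_training_clips(target, ilt_mask, clip=8, stride=8):
    """Cut aligned (target, mask) clip pairs from rasterized layouts.

    target, ilt_mask: 2-D 0/1 arrays of the design and its ILT-corrected
    mask on the same pixel grid. Returns (X, Y) stacks of clips suitable
    for training an image-to-image model.
    """
    assert target.shape == ilt_mask.shape
    xs, ys = [], []
    h, w = target.shape
    for i in range(0, h - clip + 1, stride):
        for j in range(0, w - clip + 1, stride):
            xs.append(target[i:i + clip, j:j + clip])
            ys.append(ilt_mask[i:i + clip, j:j + clip])
    return np.stack(xs), np.stack(ys)
```

In a real flow the clips would carry an optical-ambit halo around each training window; that detail is omitted here for brevity.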
Memory cells and access structures consume a large percentage of area in embedded devices, so there is a high return from shrinking the cell area as much as possible. This aggressive scaling leads to very difficult resolution, 2D CD control, and process window requirements. As scaling drives lithography ever deeper into the low-k1 regime, co-optimization of design layout, mask, and lithography is critical to deliver a production-worthy patterning solution. Computational lithography such as Inverse Lithography Technology (ILT) has been demonstrated to be an enabling technology that derives improved solutions over traditional OPC, as reported in multiple prior publications. In this paper, we will present the results of a study on advanced memory cell design optimization with Cell-Level ILT (CL-ILT), in which significant design hierarchy can be retained during ILT optimization. Large numbers of cell design variations are explored with automatically generated patterns from the Proteus™ Test Pattern Generator (TPG). Fully automated flows from pattern generation to mask synthesis with ILT, data analysis, and results visualization are built on Proteus™ Work Flow (PWF) for exploring a fully parameterized design space of interest. Mask complexity, including assist feature (AF) types (rule- or model-based) and main feature segmentation, is also studied to understand the impact on wafer lithographic performance. A heatmap view of the results generated from this design exploration provides a clear and intuitive way to identify the maximum design limits of memory cells. A comparison of results from ILT and traditional OPC will also be presented, with both wafer and simulation data.
Since its introduction at Luminescent Technologies and continued development at Synopsys, Inverse Lithography Technology (ILT) has delivered industry-leading quality of results (QOR) for mask synthesis designs. With the advent of powerful, widely deployed, and user-friendly machine learning (ML) training techniques, we are now able to exploit the quality of ILT masks in an ML framework that has significant runtime benefits. In this paper we describe our ML-ILT flow, including training data selection and preparation, network architectures, training techniques, and analysis tools. Typically, ILT usage has been limited to smaller areas owing to concerns such as runtime, solution consistency, and mask shape complexity. We will show how machine learning can be used to overcome these challenges, thereby providing a pathway to extend the ILT solution to full-chip logic design. We will demonstrate the clear superiority of ML-ILT QOR over existing mask synthesis techniques, such as rule-based placements, that have similar runtime performance.
Below the 28nm node, the difficulty of using subresolution assist features (SrAFs) in OPC/RET schemes increases substantially with each new device node. This increase in difficulty is due to the need for tighter process window control for smaller target patterns, the increased risk of SrAF printing, and the increased difficulty of SrAF mask manufacture and inspection. There is therefore a substantially increased risk of SrAFs that violate one or more manufacturability limits.
In this paper, we present the results of our work evaluating methods to pre-characterize designs that are likely to be problematic for SrAF placement. We do this by evaluating different machine learning methods, inputs, and functions.
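A minimal sketch of this kind of pre-characterization, using hand-picked spacing features and plain logistic regression as a stand-in for whichever ML method is evaluated. The feature choices and the spacing band are invented for illustration:

```python
import numpy as np

def geometry_features(spaces):
    """Hypothetical per-clip features from main-feature spacings (nm):
    minimum space, mean space, and the fraction of spaces falling in an
    awkward band where SRAF placement tends to be constrained
    (the 80-140nm band is an illustrative assumption)."""
    s = np.asarray(spaces, dtype=float)
    return np.array([s.min(), s.mean(), np.mean((s > 80) & (s < 140))])

def train_logistic(X, y, lr=0.1, steps=2000):
    """Plain logistic regression by gradient descent; an illustrative
    stand-in for the ML methods compared in the study."""
    X = np.c_[np.ones(len(X)), X]          # prepend a bias column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)   # mean log-loss gradient
    return w

def predict(w, X):
    X = np.c_[np.ones(len(X)), X]
    return (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
```

Labels for training would come from flagging clips whose placed SrAFs later violated printability or mask-rule limits.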
A hybrid multi-step method for Sub-Resolution Assist Feature (SRAF) placement is presented. The process window, characterized by process variation bands (PV-bands), is subjected to optimization. By applying a state-of-the-art pattern-matching-based approach, the SRAF placement is optimized to maximize the process window. Given the complexity of building a complete Rule-Based SRAF (RBSRAF) solution and the performance limitations of Model-Based SRAF (MBSRAF) solutions, the hybrid pattern-based SRAF approach reduces complexity and improves performance. In this paper, the hybrid pattern-based SRAF algorithm and its implementation, as well as testing results, are discussed with respect to process window and performance.
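The hybrid idea, matching a previously optimized placement when the local pattern is recognized and falling back to the model-based solver otherwise, can be sketched as follows. The 1-D spacing signature, the quantization grid, and all names are illustrative assumptions:

```python
def pattern_key(spaces, grid=10):
    """Quantize a local spacing context (nm) to a hashable signature."""
    return tuple(int(round(s / grid)) for s in spaces)

class HybridSrafPlacer:
    """Pattern-matched placements where the signature is known (fast
    path); a model-based solver callback elsewhere (slow path). Each
    distinct pattern pays the model-based cost only once."""

    def __init__(self, model_based_solver):
        self.cache = {}
        self.solve = model_based_solver

    def place(self, spaces):
        key = pattern_key(spaces)
        if key not in self.cache:
            self.cache[key] = self.solve(spaces)  # slow model-based path
        return self.cache[key]                    # fast pattern-match path
```

In a repetitive layout most clips hit the fast path, which is where the performance improvement over a pure model-based flow comes from.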
To ensure high patterning quality, etch effects have to be corrected within the OPC recipe in addition to the traditional lithographic effects. This requires the calibration of an accurate etch model and the optimization of its implementation in the OPC flow. Using SEM contours is a promising approach to obtain numerous, highly reliable measurements for etch model calibration, especially for 2D structures. A 28nm active layer was selected to calibrate and verify an etch model with 50 structures in total. We optimized the selection of the calibration structures as well as the model density. Implementing the etch model to adjust the litho target layer enables a significant reduction of weak points. We also demonstrate that the etch model, incorporated into the ORC recipe and run on a large design, can predict many hotspots.
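A toy sketch in the spirit of a density-driven variable etch bias model. The window size and coefficients are illustrative assumptions that would, in practice, be fitted to SEM-contour measurements rather than chosen by hand:

```python
import numpy as np

def local_density(layout, i, j, halo=2):
    """Pattern density in a (2*halo+1)^2 pixel window around (i, j)."""
    win = layout[max(0, i - halo):i + halo + 1,
                 max(0, j - halo):j + halo + 1]
    return win.mean()

def etch_bias(density, coeffs=(2.0, -15.0)):
    """Toy variable etch bias (nm) as an affine function of local
    density: dense regions etch differently from isolated ones. The
    coefficients are illustrative stand-ins for calibrated values."""
    c0, c1 = coeffs
    return c0 + c1 * np.asarray(density, dtype=float)
```

The fitted bias would then be applied edge by edge to shift the litho target layer before OPC, which is the mechanism behind the weak-point reduction described above.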
A two-step full-chip simulation method for optimization of sub-resolution assist feature placement in a random logic contact layer using ArF immersion lithography is presented. The process window, characterized by depth of focus (DOF), of square or rectangular target features is subject to optimization using the optical and resist effects described by calibrated models (Calibre® nmOPC, nmSRAF simulation platform). By varying the assist feature dimensions and their distance to the main feature in a test pattern, a set of comprehensive rules is derived and applied to generate raw assist features in a random logic layout. Concurrently with the generation of the OPC shapes for the main features, the raw assist features are modified to maximize the process window and to ensure non-printability of the assist features. In this paper, the selection of a test pattern, the generation of a set of "golden" rules for raw assist feature generation, and their implementation, as well as the assist feature coverage in a random logic layout, are presented and discussed with respect to performance.
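A rule table of this kind can be sketched in one dimension as follows, with invented widths and clearances standing in for the "golden" rules derived from the test-pattern study:

```python
def raw_srafs_1d(gap, sraf_w=40, min_clear=40):
    """Raw assist bars for a 1-D gap (nm) between two main features.

    Returns a list of (offset from left main-feature edge, width) pairs.
    The numbers are illustrative: real rules come from the DOF study of
    assist dimension and main-feature distance described above.
    """
    if gap < sraf_w + 2 * min_clear:
        # Gap too narrow: no SRAF fits with safe clearance on both sides.
        return []
    if gap < 2 * sraf_w + 3 * min_clear:
        # One bar, centered in the gap.
        return [((gap - sraf_w) / 2, sraf_w)]
    # Two bars, symmetric about the gap center (at 1/3 and 2/3 points).
    return [(gap / 3 - sraf_w / 2, sraf_w),
            (2 * gap / 3 - sraf_w / 2, sraf_w)]
```

In the full flow these raw placements are only a starting point; the OPC step then resizes and shifts them to maximize the process window while keeping them sub-resolution.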
A dynamic feedback controller for Optical Proximity Correction (OPC) in a random logic layout using ArF immersion lithography is presented. The OPC convergence, characterized by edge placement error (EPE), is subjected to optimization using optical and resist effects described by calibrated models (Calibre® nmOPC simulation platform). By memorizing the EPE and displacement of each fragment from the preceding OPC iteration, a dynamic feedback controller scheme is implemented to achieve OPC convergence in fewer iterations. The OPC feedback factor is calculated for each individual fragment, taking cross-MEEF (mask error enhancement factor) effects into account. Owing to the very limited additional computational effort and memory consumption, the dynamic feedback controller reduces the overall run time of OPC compared to a conventional constant-feedback-factor scheme. In this paper, the dynamic feedback factor algorithm and its implementation, as well as testing results for a random logic layout, are compared and discussed with respect to OPC convergence and performance.
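A minimal sketch of such a per-fragment dynamic feedback step, assuming a secant-style estimate of the local EPE response from the memorized previous iteration (the fallback factor and clamp bounds are illustrative, not the paper's values):

```python
import numpy as np

def dynamic_feedback_step(epe, prev_epe, prev_disp,
                          f0=0.5, fmin=0.1, fmax=2.0):
    """One OPC iteration's fragment moves with per-fragment feedback.

    epe, prev_epe : current and previous EPE per fragment (nm)
    prev_disp     : displacement applied in the previous iteration (nm)

    The observed EPE change per unit of movement gives a local,
    fragment-specific sensitivity that implicitly folds in cross-MEEF.
    Returns (displacements, feedback factors).
    """
    epe = np.asarray(epe, dtype=float)
    depe = np.asarray(prev_epe, dtype=float) - epe   # EPE gained last move
    disp = np.asarray(prev_disp, dtype=float)
    f = np.full_like(epe, f0)                        # fallback: constant factor
    ok = np.abs(depe) > 1e-9                         # avoid divide-by-zero
    f[ok] = disp[ok] / depe[ok]                      # secant estimate of 1/sensitivity
    f = np.clip(f, fmin, fmax)                       # clamp for stability
    return f * epe, f
```

With a constant scheme every fragment would move by `f0 * epe`; the dynamic factor instead adapts each fragment's step to its observed response, which is what cuts the iteration count.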