Various computational approaches, from rule-based to model-based methods, exist to place Sub-Resolution Assist Features (SRAFs) in order to enlarge the process window for lithography. Each method has its advantages and drawbacks, and typically requires the user to trade off development time, accuracy, consistency, and cycle time.
Rule-based methods, used since the 90 nm node, require long development time and struggle to achieve good process window performance for complex patterns. Heuristically driven, their development is often iterative and involves significant engineering time from multiple disciplines (Litho, OPC and DTCO).
Model-based approaches have been widely adopted since the 20 nm node. While the development of model-driven placement methods is relatively straightforward, they often become computationally expensive when high accuracy is required. Furthermore, these methods tend to yield less consistent SRAFs due to the nature of the approach: they rely on a model that is sensitive to pattern placement on the native simulation grid and can therefore suffer from grid-dependency effects. These undesirable effects tend to become stronger as more iterations or algorithmic complexity are needed to achieve the required accuracy.
ASML Brion has developed a new SRAF placement technique on the Tachyon platform that is assisted by machine learning and significantly improves the accuracy of full-chip SRAF placement while keeping consistency and runtime under control. A Deep Convolutional Neural Network (DCNN) is trained using the target wafer layout and corresponding Continuous Transmission Mask (CTM) images, which have been fully optimized using the Tachyon inverse mask optimization engine. The SRAF guidance map generated by the neural network is then used to place SRAFs on the full chip. This differs from our existing full-chip MB-SRAF approach, which utilizes an SRAF guidance map (SGM) of mask sensitivity to improve the contrast of the optical image at the target pattern edges.
In this paper, we demonstrate that machine learning assisted SRAF placement can achieve a superior process window compared to the SGM model-based SRAF method, while keeping the full-chip runtime affordable and maintaining consistency of SRAF placement. We describe the current status of this machine learning assisted SRAF technique, demonstrate its application to full-chip mask synthesis, and discuss how it can extend the computational lithography roadmap.
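As a rough illustration of the idea, the sketch below shows how a small convolutional network could be trained to regress from a rasterized target layout to a CTM-like SRAF guidance map. The architecture, tensor shapes, and loss are illustrative assumptions; the actual Tachyon implementation is not described at this level of detail.

```python
# Hypothetical sketch of a DCNN mapping a rasterized target layout to an
# SRAF guidance map, trained against optimized CTM images. All names and
# shapes are illustrative, not ASML Brion's actual implementation.
import torch
import torch.nn as nn

class SRAFGuidanceNet(nn.Module):
    def __init__(self, channels=32, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, layout):           # layout: (N, 1, H, W) rasterized target
        return self.net(layout)          # guidance map in [0, 1], same size

model = SRAFGuidanceNet()
loss_fn = nn.MSELoss()                   # regress toward the optimized CTM image
layout = torch.rand(4, 1, 256, 256)      # placeholder training batch
ctm = torch.rand(4, 1, 256, 256)         # placeholder CTM targets
loss = loss_fn(model(layout), ctm)
loss.backward()
```

In a full-chip flow, the trained network would be applied tile by tile and the resulting guidance map thresholded or contoured to extract SRAF polygons.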
A heuristic optimization approach has been developed to optimize SRAF (sub-resolution assist feature) placement rules for advanced technology nodes using a genetic algorithm. This approach has demonstrated the capability to optimize a rule-based SRAF (RBSRAF) solution for both 1D and 2D designs to improve PVBand and avoid SRAF printing. Compared with the MBSRAF-based POR (process of record) solution, the optimized RBSRAF produces a comparable PVBand distribution for a full-chip test case containing both random SRAM and logic designs, with a significant 65% reduction in SRAF generation time and a 55% reduction in total OPC time.
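The sketch below shows the kind of genetic-algorithm loop that could tune rule-based SRAF parameters (here, a hypothetical SRAF width and spacing to the main feature). The fitness function is a placeholder standing in for a real lithography simulation of PVBand and SRAF printing, not the paper's actual objective.

```python
# Illustrative genetic-algorithm loop for tuning rule-based SRAF parameters.
import random

def evaluate_rules(width, space):
    # Placeholder fitness: favors width ~40 nm and space ~90 nm and
    # penalizes SRAF printing; a real flow would run a litho simulation.
    pvband = abs(width - 40) + abs(space - 90)
    prints = 1.0 if width > 55 else 0.0      # crude SRAF-printing proxy
    return pvband + 100.0 * prints           # lower is better

def mutate(gene, lo, hi):
    return min(hi, max(lo, gene + random.gauss(0, 3)))

population = [(random.uniform(20, 60), random.uniform(60, 120)) for _ in range(40)]
for gen in range(50):
    population.sort(key=lambda g: evaluate_rules(*g))
    parents = population[:10]                # elitist selection
    children = []
    while len(children) < 30:
        (w1, s1), (w2, s2) = random.sample(parents, 2)
        children.append((mutate((w1 + w2) / 2, 20, 60),   # crossover + mutation
                         mutate((s1 + s2) / 2, 60, 120)))
    population = parents + children

print("best rule (width, space):", population[0])
```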
The continuous scaling of semiconductor devices is quickly outpacing the resolution improvements of lithographic exposure tools and processes. This one-sided progression has pushed optical lithography to its limits, resulting in the use of well-known techniques such as Sub-Resolution Assist Features (SRAFs), Source-Mask Optimization (SMO), and double-patterning, to name a few. These techniques, belonging to a larger category of Resolution Enhancement Techniques (RET), have extended the resolution capabilities of optical lithography at the cost of increasing mask complexity, and therefore cost. One such technique, called Inverse Lithography Technique (ILT), has attracted much attention for its ability to produce the best possible theoretical mask design. ILT treats the mask design process as an inverse problem, where the known transformation from mask to wafer is carried out backwards using a rigorous mathematical approach. One practical problem in the application of ILT is the resulting contour-like mask shapes that must be "Manhattanized" (composed of straight edges and 90-deg corners) in order to produce a manufacturable mask. This conversion process inherently degrades the mask quality, as it is a departure from the "optimal mask" represented by the continuously curved shapes produced by ILT. However, simpler masks composed of longer straight edges reduce the mask cost by lowering the shot count and saving mask writing time during mask fabrication, resulting in a conflict between manufacturability and performance for ILT-produced masks [1,2]. In this study, various commonly used metrics are combined into an objective function to produce a single number that quantitatively measures a particular ILT solution's ability to balance mask manufacturability and RET performance. Several metrics that relate to mask manufacturing costs (e.g., mask vertex count, ILT computation runtime) are appropriately weighted against metrics that represent RET capability (e.g., process-variation band, edge-placement error) in order to reflect the desired practical balance. This well-defined scoring system allows direct comparison of several masks with varying degrees of complexity. Using this method, ILT masks produced with increasingly tight mask constraints are compared, and it is demonstrated that using the smallest minimum width for mask shapes does not always produce the optimal solution.
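A minimal sketch of such a combined objective is shown below. The specific metrics, normalizations, and weights are assumptions for illustration, not the study's actual scoring system.

```python
# Hypothetical weighted objective balancing mask-cost metrics against RET
# performance metrics; weights and normalization constants are invented.
def ilt_score(vertex_count, runtime_hr, pvband_nm, epe_rms_nm,
              w_cost=(0.3, 0.2), w_perf=(0.3, 0.2)):
    """Lower score = better balance of manufacturability and performance."""
    cost = w_cost[0] * (vertex_count / 1e6) + w_cost[1] * (runtime_hr / 24.0)
    perf = w_perf[0] * (pvband_nm / 5.0) + w_perf[1] * (epe_rms_nm / 1.0)
    return cost + perf

# Compare two candidate masks with different minimum-width constraints:
print(ilt_score(vertex_count=2.4e6, runtime_hr=18, pvband_nm=3.1, epe_rms_nm=0.6))
print(ilt_score(vertex_count=1.1e6, runtime_hr=10, pvband_nm=3.8, epe_rms_nm=0.8))
```

Because all terms are normalized onto comparable scales before weighting, a single scalar can rank masks whose complexity and lithographic quality pull in opposite directions.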
With ever-shrinking critical dimensions, half-nanometer OPC errors are a primary focus for process improvement in computational lithography. Among the many error sources at the 2x and 1x nodes, 3D mask modeling has caught the attention of engineers and scientists as a method to reduce errors. While the benefits of 3D mask modeling are well known, it carries a runtime penalty of 30-40% that must be weighed against the accuracy improvement of the optical model. The node at which adopting 3D mask modeling becomes economically beneficial has to be determined by balancing these factors. In this paper, a benchmarking study has been conducted on 20 nm cut-mask, metal, and via layers with two different computational lithography approaches, compared against standard thin-mask approximation modeling. Besides basic RMS error metrics for model calibration and verification, through-pitch and through-size optical proximity behavior, through-focus model predictability, best-focus prediction, and common-DOF prediction are thoroughly evaluated. Runtime impact and OPC accuracy are also studied.
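For reference, the basic RMS error metric referred to above can be computed as in the sketch below; the CD values are placeholders, not benchmark data from the study.

```python
# RMS of (measured - simulated) CD errors, the basic calibration/verification
# statistic used when comparing thin-mask and 3D-mask models.
import math

def rms_error(measured_nm, simulated_nm):
    residuals = [m - s for m, s in zip(measured_nm, simulated_nm)]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

thin_mask = rms_error([45.0, 60.2, 80.1], [44.1, 61.5, 79.0])  # placeholder data
mask_3d   = rms_error([45.0, 60.2, 80.1], [44.7, 60.6, 79.8])  # placeholder data
print(f"thin-mask RMS = {thin_mask:.2f} nm, 3D-mask RMS = {mask_3d:.2f} nm")
```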
The objective of this work is to describe advances in the use of C-Quad polarized illumination for the densest pitches in back-end-of-line thin-wire levels in 32 nm technology, and the outlook for 28 nm technology with an NA of 1.35 on a 193 nm wavelength scanner. Through simulation and experiments, we found that moving from Annular to C-Quad illumination improves intensity and contrast. We studied the patterning performance of C-Quad illumination for 1D dense, semi-dense, and isolated features with and without polarization. Polarization shows a marked improvement in contrast and line edge roughness for dense patterns. The patterning performance of isolated and semi-isolated features was the same with and without polarization.
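The contrast figure of merit behind such comparisons is typically the normalized image contrast, (Imax - Imin) / (Imax + Imin). A small sketch follows; the intensity values are invented for illustration, not measured data from this work.

```python
# Standard normalized aerial-image contrast used to compare illumination modes.
def image_contrast(i_max, i_min):
    return (i_max - i_min) / (i_max + i_min)

print(image_contrast(i_max=0.82, i_min=0.30))  # e.g. annular, unpolarized
print(image_contrast(i_max=0.90, i_min=0.18))  # e.g. C-Quad, polarized
```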
Performing model-based optical proximity correction (MBOPC) on layouts has become an integral part of patterning advanced integrated circuits. Earlier technologies used sparse OPC, whose run times explode as layout density increases. With the move to the 45 nm technology node, this increase in run time has driven a shift to dense-simulation OPC, which is pixel-based. The dense approach becomes more efficient at the 45 nm technology node and beyond. New OPC model forms can be used with the dense-simulation OPC engine, providing the greater accuracy required by smaller technology nodes. Parameters in the optical model have to be optimized to achieve the required accuracy. Dense OPC uses a resist model with a different set of parameters than sparse OPC. The default search ranges used in the optimization of these resist parameters do not always yield the best accuracy. However, it is possible to improve the accuracy of the resist models by understanding the restrictions placed on the search ranges of the physical parameters during optimization. This paper presents results showing the correlation between model accuracy and some of these optical and resist parameters. The results show that better optimization can improve the model fitness of features in both the calibration and verification sets.
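To make the point about search ranges concrete, the sketch below fits a toy two-parameter resist model under two different bounds: when the default bounds exclude the best-fit value of a parameter, the achievable residual worsens. The model form and data are placeholders, not tied to any particular OPC tool.

```python
# Bounded least-squares fit of a toy resist model; compare default vs.
# widened search ranges for the parameters.
import numpy as np
from scipy.optimize import minimize

image_param = np.array([0.35, 0.40, 0.48, 0.58])   # e.g. local image slope
cd_measured = np.array([37.0, 43.0, 52.6, 64.6])   # wafer CDs in nm (placeholder)

def model_cd(params, x):
    a, b = params
    return a * 100.0 * x + b                        # toy two-parameter model

def cost(params):
    return float(np.sum((model_cd(params, image_param) - cd_measured) ** 2))

default_bounds = [(0.5, 1.5), (0.0, 20.0)]          # excludes the best-fit b (< 0)
widened_bounds = [(0.1, 3.0), (-20.0, 40.0)]        # contains the best-fit point
for bounds in (default_bounds, widened_bounds):
    res = minimize(cost, x0=[1.0, 10.0], bounds=bounds, method="L-BFGS-B")
    print(bounds, "-> residual", round(res.fun, 3))
```

Running this shows the widened bounds reaching a near-zero residual while the default bounds leave a systematic fit error, mirroring the accuracy loss described above.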
The lithographic processes and resolution enhancement techniques (RET) needed to achieve pattern fidelity are becoming more complicated as the required critical dimensions (CDs) shrink. For technology nodes with smaller devices and tighter tolerances, more complex models and proximity corrections are needed, and these significantly increase the computational requirements. New simulation techniques are required to address these computational challenges. The simulation technique we focus on in this work is dense optical proximity correction (OPC). Sparse OPC tools typically require a laborious, manual, and time-consuming OPC optimization approach. In contrast, dense OPC uses pixel-based simulation that needs far less manual setup. Dense OPC was introduced because the sparse simulation methodology causes run times to explode as pattern density increases, since the number of simulation sites within a given optical radius grows. In this work, we compare the OPC modeling performance and run time of the dense and sparse solutions. The analysis found the computational run time to be highly design-dependent. The results should improve the quality and performance of the OPC solution and shed light on the pros and cons of using a dense versus a sparse solution, helping OPC engineers decide which solution to apply to their particular situation.
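A back-of-envelope model of the scaling argument: sparse OPC work grows with edge density inside the optical radius, while dense (pixel-based) OPC work is fixed per unit area. The numbers below are illustrative assumptions; the point is the scaling with density, not the absolute counts, which depend on site pitch and pixel size.

```python
# Toy scaling comparison of sparse simulation sites vs. dense pixel count.
def sparse_sites(edge_um_per_um2, area_um2, site_pitch_um=0.04):
    # Sites are placed along feature edges, so count scales with edge density.
    return edge_um_per_um2 * area_um2 / site_pitch_um

def dense_pixels(area_um2, pixel_nm=5.0):
    # Pixel count depends only on area, independent of layout density.
    return area_um2 / (pixel_nm * 1e-3) ** 2

area = 100.0                                  # um^2 clip
for density in (5.0, 20.0, 80.0):             # um of edge per um^2
    print(density, sparse_sites(density, area), dense_pixels(area))
```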
Performing MBOPC (model-based optical proximity correction) on layouts is an essential part of patterning advanced integrated circuits. With constantly shrinking CD tolerances and tighter ACLV budgets, the model has to be accurate to within a few nanometers. The accuracy of a model in predicting wafer behavior dictates the success of the patterning process. Model calibration is a critical procedure for providing an accurate correlation between design and wafer features. The model calibration process consists of arriving at a variable-threshold polynomial as a function of aerial image parameters: intensity maximum (Imax), intensity minimum (Imin), slope, curvature, etc. In this paper, a strong correlation between the accuracy of the model and the image parameters is demonstrated. Data from model calibration of two different layers in the 65 nm technology node are shown to demonstrate the dependence of model accuracy on aerial image parameters. The data show that model accuracy degrades as a function of resolution; that is, features with low Imax, low slope, and low contrast are more difficult to model than higher-resolution features. During calibration of the model, some parameters can be adjusted to strike a balance between the model accuracy of weak and strong resolution features. Suggestions for improving the accuracy of the weaker features, based on an analysis of the image parameters, are presented. The correlation between model accuracy and image parameters will be useful in limiting OPC corrections on features with poor aerial image quality.
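As an illustration of the variable-threshold form, the sketch below fits a first-order polynomial threshold in (Imax, Imin, slope) by least squares. The data and coefficient structure are placeholders, not a calibrated 65 nm model.

```python
# Fit a variable threshold as a linear polynomial in aerial-image parameters.
import numpy as np

# Columns: Imax, Imin, slope; rows: measurement sites (placeholder data).
params = np.array([[0.85, 0.12, 3.2],
                   [0.70, 0.20, 2.1],
                   [0.55, 0.28, 1.3],
                   [0.48, 0.31, 0.9]])
thresholds = np.array([0.31, 0.33, 0.37, 0.40])      # extracted per site

A = np.column_stack([np.ones(len(params)), params])  # [1, Imax, Imin, slope]
coeffs, *_ = np.linalg.lstsq(A, thresholds, rcond=None)

def variable_threshold(i_max, i_min, slope):
    return coeffs @ np.array([1.0, i_max, i_min, slope])

print(variable_threshold(0.60, 0.25, 1.6))           # threshold at a new site
```

Higher-order cross terms (e.g., Imax * slope, curvature) would be added as extra columns of A in the same way, which is where the balance between weak and strong resolution features is tuned.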
The dimensional variations caused by topography differences between active and non-active shallow trench isolation (STI) areas at the gate level need to be controlled through the proper use of reflectivity control methods. Line-width variation caused by topography can either be a disastrous problem or so small that it is hard to detect. The primary variables include the step height, active-area width, and planarization length of the BARC being used. To experimentally compare different reflectivity control methods, wafers were built with steps ranging from 7.5 nm higher to 27 nm lower than the surroundings. Organic BARC thicknesses of 90 and 130 nm were evaluated, along with two resist thicknesses. In addition to the effect of step height, we also examined the effect of active-area widths ranging from 0.5 um to 4.5 um. The data demonstrate that line-width variation across this variety of steps is well under 1 nm when BARC and resist thicknesses are optimized.
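For intuition, the substrate reflectivity that a BARC is chosen to suppress can be estimated with the standard single-layer thin-film formula at normal incidence, as sketched below. The complex refractive indices are rough placeholder values at 193 nm, not measured film data from this study.

```python
# Normal-incidence reflectivity of a single BARC layer between resist and
# substrate, via the standard thin-film interference formula.
import cmath, math

def reflectivity(n_resist, n_barc, n_sub, thickness_nm, wavelength_nm=193.0):
    r12 = (n_resist - n_barc) / (n_resist + n_barc)   # resist/BARC interface
    r23 = (n_barc - n_sub) / (n_barc + n_sub)         # BARC/substrate interface
    beta = 2 * math.pi * n_barc * thickness_nm / wavelength_nm
    r = (r12 + r23 * cmath.exp(-2j * beta)) / (1 + r12 * r23 * cmath.exp(-2j * beta))
    return abs(r) ** 2

for d in (90.0, 130.0):   # the two BARC thicknesses evaluated above
    print(d, reflectivity(n_resist=1.7, n_barc=1.8 + 0.45j,
                          n_sub=0.88 + 2.78j, thickness_nm=d))
```

Sweeping thickness_nm in such a model shows the reflectivity swing that makes line-width over topography sensitive to BARC and resist thickness choices.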
The development of 100-nm design rule technologies is currently taking place in many R&D facilities across the world. For some critical layers, the transition to 193-nm resist technology has been required to meet this leading-edge design rule. As with previous technology node transitions, the materials and processes available are undergoing changes and improvements as vendors encounter and solve problems. The initial implementation of the 193-nm resist process did not meet the photolithography requirements of some IC manufacturers due to very high post-exposure bake temperature sensitivity and, consequently, high wafer-to-wafer CD variation. The photoresist vendors have been working to improve the performance of the 193-nm resists to meet their customers' requirements. Characterization of these new resists needs to be carried out prior to implementation in the R&D line. Initial results on the second-generation resists evaluated at Cypress Semiconductor showed better CD control than the earlier resist, with comparable depth of focus (DOF), exposure latitude, etch resistance, etc. In addition to the standard lithography parameters, resist characterization needs to include defect density studies. It was found that the new resist process with the best CD control resulted in the introduction of orders-of-magnitude more yield-limiting defects at Gate, Contact, and Local Interconnect. The defect data were shared with the resist vendor, and within days of the discovery the vendor was able to pinpoint the source of the problem. The fix was confirmed and the new resists were successfully released to production. By including defect monitoring in the resist qualification process, Cypress Semiconductor was able to 1) drive corrective actions earlier, resulting in a faster ramp, and 2) eliminate potential yield loss. In this paper we discuss how to apply the Micro Photo Cell Monitoring methodology for defect monitoring in the photolithography module and for the qualification of 193-nm resist processes.