The extension of the 193nm exposure wavelength to smaller nodes continues the trend of increasing data complexity and consequently longer mask writing times. In particular, inverse lithography methods create complex mask shapes. We
introduce a variety of techniques to mitigate the impact - data simplification post-optical proximity correction (OPC), L-Shots,
multi-resolution writing (MRW) and optimization based fracture. Their potential for shot count reduction is
assessed. All of these techniques require changes to the mask making workflow at some level - the data preparation and verification flow, the mask writing equipment, the mask inspection, and the mask qualification in the wafer manufacturing line. The paper will discuss these factors and conduct a benefit-effort assessment for deployment. Some of the techniques do not reproduce the originally targeted mask shape. The impact of these deviations will be studied with simulations of the wafer exposure process and quantified in terms of the exposure process window. Based on the results of the assessment, a deployment strategy will be discussed.
The extension of the 193nm exposure wavelength to smaller nodes continues the trend of increasing data complexity and consequently longer mask writing times. We review the data preparation steps post tapeout, how they influence shot count - the main driver of mask writing time - and techniques to reduce that impact. The paper discusses the application
of resolution enhancements and layout simplification techniques; the fracture step and optimization methods; mask
writing and novel ideas for shot count reduction.
The paper will describe and compare the following techniques: optimized fracture, pre-fracture jog alignment,
generalization of shot definition (L-shot), multi-resolution writing, optimization-based fracture, and optimized OPC output.
The comparison of shot count reduction techniques will consider the impact of changes to the current state of the art
using the following criteria: computational effort, CD control on the mask, mask rule compliance for manufacturing and
inspection, and the software and hardware changes required to achieve the mask write time reduction. The paper will
introduce the concepts and present some data preparation results based on process correction and fracturing tools.
The increasing complexity of RET solutions with each new process node has increased the shot count of advanced
photomasks. In particular, the introduction of inverse lithography masks represents a significant increase in mask
complexity. Although shot count reduction can be achieved through careful management of the upstream OPC
strategy and improvement of fracture algorithms, it is also important to consider more dramatic departures from
traditional fracture techniques. Optimization-based fracture allows overlapping shots to be placed in a manner that realizes the mask intent while achieving significant savings in shot count relative to traditional fracture-based methods. We investigate the application of optimization-based fracture to reduce the shot count of inverse
lithography masks, provide an assessment of the potential shot count savings, and assess its impact on lithography
process window performance.
We propose changing the shot pattern between passes in multi-pass vector e-beam writing in order to reduce the total shot count. One pass is detailed while the other is simplified. Mask process correction is used to produce the correct image from the sum of the exposures; a fundamental constraint is enforced to retain process latitude. Results from a software implementation show a total shot savings in the range of 18 to 31 percent for two-pass writing versus the conventional scheme in which each pass writes identical shot sets. Simulation results demonstrate the feasibility of the technique.
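The arithmetic behind such savings is straightforward; a worked example under assumed shot counts (the numbers below are illustrative, chosen to fall inside the reported 18 to 31 percent range):

    % Conventional two-pass writing repeats the same N shots twice:
    S_{\mathrm{conv}} = 2N
    % MRW keeps one detailed pass (N shots) and simplifies the other (M < N shots):
    S_{\mathrm{MRW}} = N + M, \qquad \text{savings} = 1 - \frac{N + M}{2N}
    % e.g. N = 10^6,\ M = 4 \times 10^5 \;\Rightarrow\; \text{savings} = 30\%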
We propose a new aperture stage for shaped e-beam exposure tools. This aperture stage is able to print an "L" shape in a
single exposure shot. The aperture may be used in mask- and wafer-patterning e-beam tools. The physical and
mechanical nature of the aperture appears to be fundamentally similar to existing apertures that form rectangular shapes,
yet it reduces the required shot count for exposure by as much as half.
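A minimal sketch of the shot-count arithmetic behind the "as much as half" claim, assuming a staircase-shaped polygon (typical of OPC'd edges) that conventional fracture covers with one rectangle per stair step; the function names are illustrative:

    import math

    def rectangle_shots(num_steps: int) -> int:
        # Conventional VSB fracture: one rectangular shot per stair step.
        return num_steps

    def l_shape_shots(num_steps: int) -> int:
        # L-shot fracture: each L-shaped shot covers two adjacent
        # rectangles of the staircase, halving the count (rounded up).
        return math.ceil(num_steps / 2)

    for n in (2, 8, 15):
        print(n, rectangle_shots(n), l_shape_shots(n))  # e.g. 8 rectangles -> 4 L-shots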
With each new process technology node, chip designs increase in complexity and size, leading to a steady
increase in data volumes. As a result, mask data prep flows require more computing resources to maintain
the desired turn-around time (TAT) at a low cost. The effect is aggravated by the fact that a mask house
operates a variety of equipment for mask writing, inspection and metrology - all of which, until now,
require specific data formatting. An industry initiative sponsored by SEMI® has established new public
formats - OASIS® (P39) for general layouts and OASIS.MASK (P44) for mask manufacturing equipment -
that allow for the smallest possible representation of data for various applications. This paper will review a
mask data preparation process for mask inspection based on the OASIS formats, in which OASIS.MASK files are read directly into the inspection tool in real time. An implementation based on standard parallelized computer hardware will be described and shown to deliver the throughputs required
for the 45nm and 32nm technology nodes. An inspection test case will also be reviewed.
Double patterning (DP) technology is one of the main candidates for RET of critical layers at 32nm hp. DP technology is
a strong RET technique that must be considered throughout the IC design and post tapeout flows. We present a complete
DP technology strategy including a DRC/DFM component, physical synthesis support and mask synthesis.
In particular, the methodology contains:
- DRC-like layout DP compliance and design verification functions;
- A parameterization scheme that codifies manufacturing knowledge and capability;
- Judicious use of physical effect simulation to improve double-patterning quality;
- An efficient, high capacity mask synthesis function for post-tapeout processing;
- A verification function to determine the correctness and quality of a DP solution.
Double patterning technology requires decomposition of the design to relax the pitch and effectively allows processing with k1 factors smaller than the theoretical Rayleigh limit of 0.25. The traditional DP process, Litho-Etch-Litho-Etch (LELE) [1], requires an additional develop and etch step, which eliminates the resolution degradation that occurs when multiple exposures are processed in the same resist layer. The theoretical k1 for a double-patterning technology applied to a 32nm half-pitch design using a 1.35NA 193nm imaging system is 0.44, whereas the k1 for single patterning of this same design would be 0.22 [2], which is sub-resolution.
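The quoted k1 values follow directly from the Rayleigh expression; checking the abstract's numbers:

    k_1 = \frac{\mathrm{HP} \cdot \mathrm{NA}}{\lambda}
    \text{single patterning: } k_1 = \frac{32\,\mathrm{nm} \times 1.35}{193\,\mathrm{nm}} \approx 0.224 \approx 0.22
    \text{double patterning } (\mathrm{HP}_{\mathrm{eff}} = 64\,\mathrm{nm}\text{, pitch relaxed } 2\times): \; k_1 \approx 0.448 \approx 0.44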
This paper demonstrates the methods developed at Mentor Graphics for double patterning design compliance and
decomposition in an effort to minimize the impact of mask-to-mask registration and process variance. It also
demonstrates the implementation of the verification solution in the chip design flow and the post-tapeout flow.
With each new process technology node chip designs increase in complexity and size, and mask data prep flows require more compute resources to maintain the desired turn-around time (TAT) at a low cost. Securing highly scalable
processing for each element of the flow - geometry processing, resolution enhancements and optical process correction,
verification and fracture - has been the focal point so far. The utilization for different flow elements depends on the
operation, the data hierarchy and the device type. This paper introduces a dynamic, utilization-driven compute resource control system applied to a large-scale parallel computation environment. The paper will analyze the performance metrics TAT and throughput for a production system and discuss trade-offs of different parallelization approaches in data
processing regarding interaction with dynamic resource control. The study focuses on 65nm and 45nm designs.
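A minimal sketch of the utilization-driven allocation idea, not the production system described above: worker slots are periodically redistributed across flow elements in proportion to their measured utilization (the stage names, the proportional policy, and the minimum-slot guarantee are all illustrative assumptions):

    def reallocate(total_slots: int, utilization: dict, min_slots: int = 1) -> dict:
        # Grant each flow element a share of the compute slots
        # proportional to its measured CPU utilization.
        busy = sum(utilization.values()) or 1.0
        return {stage: max(min_slots, round(total_slots * u / busy))
                for stage, u in utilization.items()}

    # Example poll: fracture is compute-bound, OPC is waiting on data.
    print(reallocate(64, {"geometry": 0.30, "opc": 0.15, "fracture": 0.55}))
    # -> {'geometry': 19, 'opc': 10, 'fracture': 35}; in general rounding can
    #    over/undershoot the total and a real controller would reconcile it.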
Increasing pattern density and the higher complexity of advanced OPC and RET technologies
have led to an explosion in mask data volume. This increased data volume leads to increased
mask write times, inspection times, and costs. In the past, several techniques for reducing the
mask shot count have been proposed, including OPC fragment alignment, jog alignment, jog
smoothing, and design intent-aware layout fragmentation among others. This paper will explore
the tradeoffs between mask shot count and simulated print quality for various shot count
reduction strategies.
As tolerance requirements for the lithography process continue to shrink with each new technology node, the
contributions of all process sequence steps to the critical dimension error budgets are being closely examined, including
wafer exposure, resist processing, and pattern etch, as well as the photomask employed during the wafer exposure.
Along with efforts to improve the mask manufacturing processes, the elimination of residual mask errors via pattern
correction has gained renewed attention. The portfolio of correction tools for mask process effects is derived from well
established techniques commonly used in optical proximity correction and in electron beam proximity effect
compensation. The process component that is not well captured in the correction methods deployed in mask
manufacturing today is etch. A mask process model to describe the process behavior and to capture the physical effects
leading to deviation of the critical dimension from the target value represents the key component of model-based
correction and verification. This paper presents the flow for generating mask process models that describe both short-range and long-range mask process effects, including proximity loading effects from etching, pattern density loading
effects, and across-mask process non-uniformity. The flow is illustrated with measurement data from real test masks.
Application of models for both mask process correction and verification is discussed.
As tolerance requirements for the lithography process continue to shrink, the complexity of the optical proximity
correction is growing. Smaller correction grids, smaller fragment lengths and the introduction of pixel-based simulation
lead to highly fragmented data fueling the trend of larger file sizes as well as increasing the writing times of the vector
shaped beam systems commonly used for making advanced photomasks. This paper will introduce an approach of layout modifications that simplify the data, considering both fracturing and mask writing constraints, to make it more suitable for these processes. The trade-offs between these simplifications and OPC accuracy will be investigated.
A data processing methodology that preserves the OPC accuracy and carries the modifications all the way to mask manufacturing will also be described. This study focuses on 65nm and 45nm designs.
In order to fully exploit the design knowledge during the operation of mask manufacturing equipment, as well as to
enable the efficient feedback of manufacturing information upstream into the design chain, close communication links
between the data processing domain and the machine are necessary.
With shrinking design rules and the modeling technology required to drive simulations and corrections, the amount and variety of measurements is steadily growing. This requires a flexible and automated setup of parameters
and location information and their communication with the machine.
The paper will describe a programming interface based on the Tcl/Tk language that contains a set of frequently recurring functions for data extraction and search, site characterization, site filtering, and coordinate transfer. It
enables the free programming of the links, adapting to the flow and the machine needs. The interface lowers the effort
to connect to new tools with specific measurement capabilities, and it reduces the setup and measurement time. The
interface is capable of handling all common mask writer formats and their jobdecks, as well as OASIS and GDSII data.
The application of this interface is demonstrated for the Carl Zeiss AIMSTM system.
Optical proximity correction (OPC) is widely used in wafer lithography to produce a printed image that best matches the
design intent while optimizing CD control. OPC software applies corrections to the mask pattern data, but in general it
does not compensate for the mask writer and mask process characteristics. The Sigma7500-II deep-UV laser mask writer
projects the image of a programmable spatial light modulator (SLM) using partially coherent optics similar to wafer
steppers, and the optical proximity effects of the mask writer are in principle correctable with established OPC methods.
To enhance mask patterning, an embedded OPC function, LinearityEqualizerTM, has been developed for the Sigma7500-II that is transparent to the user and does not degrade mask throughput. It employs a CalibreTM rule-based OPC engine from Mentor Graphics, selected for the computational speed necessary for mask run-time execution. A multi-node cluster computer applies optimized table-based CD corrections to polygonized pattern data that is then fractured into an internal writer format for subsequent data processing. This embedded proximity correction flattens the linearity behavior for all linewidths and pitches, which is intended to improve the CD uniformity on production photomasks. Printing results show that the CD linearity error is reduced to below 5 nm for linewidths down to 200 nm, both for clear and
dark and for isolated and dense features, and that sub-resolution assist features (SRAF) are reliably printed down to 120
nm. This reduction of proximity effects for main mask features and the extension of the practical resolution for SRAFs
expands the application space of DUV laser mask writing.
Optical proximity correction (OPC) is widely used in wafer lithography to produce a printed image that best matches the
design intent while optimizing CD control. OPC software applies corrections to the mask pattern data, but in general it
does not directly compensate for the mask writer and mask process characteristics. The Sigma7500 deep-ultraviolet
(DUV) mask writer projects the image of a programmable spatial light modulator (SLM) onto the mask using partially
coherent optics similar to wafer steppers, and the residual optical proximity effects of the mask writer are in principle
correctable with established OPC methods.
To enhance mask patterning, an embedded OPC function called LinearityEqualizerTM has been developed for the
Sigma7500 that is transparent to the user and which does not degrade mask throughput. It employs the Mentor Graphics
Calibre OPC engine, selected for the computational speed necessary for mask run-time execution. A multi-node cluster
computer applies optimized table-based CD corrections to polygonized pattern data, which is then refractured into a
standard writer format for subsequent data processing. This short-range proximity correction works in conjunction with
ProcessEqualizerTM, a previously developed print-time function that reduces long-range process-related CD errors. OPC
flattens the linearity behavior for all linewidths and pitches, which should improve the total CD uniformity on
production photomasks. Along with better resolution of assist features, this further extends the application space of DUV
mask writing. Testing shows up to a 4x reduction in the range of systematic CD deviations for a broad array of feature
sizes and pitches, and dark assist features are reliably printed down to 120 nm at mask scale.
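The "table-based CD corrections" in the two abstracts above suggest a per-feature bias looked up by linewidth and pitch; a minimal sketch with bilinear interpolation (the table values, axes and function names are invented placeholders, not Sigma7500 calibration data):

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Hypothetical calibration table: CD bias (nm per edge) vs. linewidth and pitch.
    linewidths = np.array([200.0, 400.0, 800.0, 1600.0])   # nm, mask scale
    pitches = np.array([400.0, 800.0, 1600.0, 3200.0])     # nm, mask scale
    bias = np.array([[2.0, 1.5, 1.0, 0.8],
                     [1.2, 0.9, 0.6, 0.5],
                     [0.6, 0.4, 0.3, 0.2],
                     [0.2, 0.1, 0.1, 0.0]])
    lookup = RegularGridInterpolator((linewidths, pitches), bias,
                                     bounds_error=False, fill_value=None)

    def corrected_width(width_nm: float, pitch_nm: float) -> float:
        # Pre-bias the feature so the printed CD lands on target;
        # the interpolated bias is applied to both edges.
        return width_nm - 2.0 * float(lookup([[width_nm, pitch_nm]])[0])

    print(corrected_width(300.0, 600.0))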
Data preparation for photomask manufacturing is characterized by computational complexity that grows faster than the evolution of computer processor capability. Parallel processing generally addresses this problem and is an accepted mechanism for preparing mask data. One judges a parallel software implementation by total time, stability and
predictability of computation. We apply several fundamental techniques to dramatically improve these metrics for a
parallel, distributed MDP system. This permits the rapid, predictable computation of the largest mask layouts on
conventional computing clusters.
The continuous drive of the semiconductor industry towards smaller feature sizes requires mask manufacturers to achieve ever tighter tolerances for the most critical dimensions on the mask. CD uniformity requires particularly tight control. Equipment manufacturers and process engineers target their development to support these requirements. But as numerous publications indicate, more sophisticated data correction methods are still employed to compensate for shortcomings in equipment and process or to account for the boundary conditions in some layouts that contribute to process deviations. Among the corrected effects are proximity and linearity effects, fogging and etch effects, and pattern fidelity. Different designs vary by pattern size distribution as well as by pattern density distribution. As the implementation of corrections for optical proximity effects in wafer lithography has shown, breaking up the original polygons in the design layout for selective and environment-aware correction yields increased data volumes and can have an impact on the data quality of the mask writing data.
The paper investigates the effect of various correction algorithms specifically deployed for mask process effects on top of wafer process related corrections. The impact of MPC flows such as rule-based linearity and proximity correction and density-based long range effect correction on the metrics for data preparation and mask making is analyzed. Experimental data on file size, shot count and data quality indicators including small figure counts are presented for different correction approaches and a variety of correction parameters.
The diversification of mask making equipment in modern mask manufacturing has led to a large variety of different mask writing and inspection formats. Dispositioning the equipment and managing the data flow has turned into a challenging task. The data volumes of individual files used in the manufacture of modern integrated circuits have become unmanageable using established data format specifications. Several trends explain this: the size, content and complexity of the designs are growing; the application of RET increases the vertex counts; and complex data preparation flows post tape-out result in a large number of intermediate representations of the data. In addition, assembly steps are introduced prior to mask making for leveling critical parameters. Despite the continuous effort to improve the performance of the individual tools that handle the data, it has become apparent that enhancements to the entire flow are necessary to gain efficiency. One concept suggested is the unification of the mask data representation: establishing a common format that can be accepted by all tools. This facilitates a streamlining of data prep flows to eliminate processing overhead and repeated execution of similar functions. OASIS, the new stream format developed under the sponsorship of SEMI, has the necessary features to fulfill the role of a common format in mask manufacturing. The paper describes the implementation of OASIS as a common intermediate format in the mask data preparation flow as well as its usage, with additional restrictions, as a common Variable-Shaped-Beam mask writer format. The benefits are illustrated with experimental results. Different implementation scenarios are discussed.
The drive of the semiconductor industry towards smaller and smaller feature sizes requires more sophisticated correction methods to guarantee the final tolerances for the etched features in both wafer manufacturing and mask making. The wavelength gap in lithography and process effects, as well as dependencies on the design content, have led to the tremendous variety of resolution enhancement techniques and process correction approaches that are currently applied to a design on its path to manufacturing. As the 65nm nodes become production ready and the 45nm node shifts into the focus of development, effects like flare in wafer exposure, fogging effects in e-beam mask exposure and others that previously could be ignored are becoming significant, so that their correction prior to manufacturing is required. That means additional correction steps are necessary to complete the data preparation. These put a larger burden on the data processing path and raise concerns over data volume and processing time limitations. Hierarchical processing methods have proven very effective in the past at keeping data volumes and processing times under control.
The paper explores the design trends and the potential of hierarchical processing under the new circumstances. Extended data flows with a variety of correction steps are investigated. Experimental results that demonstrate the benefit of hierarchical methods in conjunction with parallel processing methods like multithreading and distributed processing are provided. The benefit of introducing more effective data formats like OASIS in these flows will be illustrated.
Scattered light in optical lithography, also known as flare, has been shown to cause potentially significant linewidth variation at low-k1 values. The interaction radius of this effect can extend essentially from zero to the full range of a product die and beyond. Because of this large interaction radius the correction of the effect can be very computation-intensive. In this paper, we will present the results of our work to characterize the flare effect for 65nm and 90nm poly processes, model that flare effect as a summation of Gaussian convolution kernels, and correct it within a hierarchical model-based OPC engine. Novel methods for model-based correction of the flare effect, which preserve much of the design hierarchy, are discussed. The same technique has demonstrated the ability to correct for long-range loading effects encountered during the manufacture of reticles.
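A minimal sketch of the flare model named in the abstract - a weighted sum of Gaussian convolutions of the pattern density map - using placeholder weights and ranges rather than the calibrated 65nm/90nm values:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def flare_map(density: np.ndarray, kernels) -> np.ndarray:
        # Flare intensity as a summation of Gaussian convolution kernels;
        # each (weight, sigma) pair covers one interaction range.
        return sum(w * gaussian_filter(density, sigma=s) for w, s in kernels)

    density = np.random.rand(512, 512)                      # stand-in density map
    kernels = [(0.02, 5.0), (0.01, 50.0), (0.005, 300.0)]   # short to long range, in pixels
    print(flare_map(density, kernels).mean())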
The data volumes of individual files used in the manufacture of modern integrated circuits have become unmanageable using current data format specifications. A number of factors contribute to the problem: the size, content and complexity of the designs are growing; the application of RET increases the vertex counts; complex data preparation flows post tape-out result in a large number of intermediate representations of the data; and assembly steps are introduced for leveling critical parameters. Based on the choices for the mask making equipment, the final result of the flow - the mask writer data - varies. While there is a continuous effort to improve the individual performance of the tools that handle the data, it has become apparent that enhancements to the entire flow are necessary to gain efficiency. Two ways are explored in the present study - the elimination of processing overhead and repeated execution of similar functions, and the simplification of the data flow by reducing the number of formats involved. OASIS, the new stream format developed under the sponsorship of SEMI, has the necessary features to fulfill this role. The paper will describe the concept of OASIS as a common intermediate format in the mask data preparation flow and illustrate the benefits with experimental results. A concept for a common mask writer format based on OASIS will be proposed. It considers format dependencies for the mask writing performance for different types of mask writing equipment. Different implementation scenarios are discussed.
The continuous integration trend in design and broad deployment of resolution enhancement techniques (RET) have a tremendous impact on circuit file size and pattern complexity. Increasing design cycle time has drawn attention to the data manipulation steps that follow the physical layout of the design. The contributions to the total turn-around time for a design are twofold: the time to get the data ready for the hand-off to the mask writer is growing, but also the time it takes to write the mask is heavily influenced by the size and complexity of the data. In order to reduce the time that is required for the application of RET and the export of the data to mask writer formats, massively parallel processing approaches have been described. This paper presents such computing algorithms for the hierarchical implementation of RET and mask data preparation (MDP). We focus on the parallel and flexible deployment of a new hybrid multithreaded and distributed processing scheme, called MTFlex, for homogeneous and heterogeneous computer networks. We describe the new methodology and discuss corresponding hardware and software configurations. The application of this "MTFlex" computing scheme to different tasks in post-tapeout data preparation is shown in examples.
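A minimal sketch of a hybrid distributed-plus-multithreaded scheme in the spirit of MTFlex - a generic illustration, not the actual implementation: worker processes stand in for cluster nodes, and each process fans out over a thread pool (beneficial when the underlying work releases the GIL, e.g. native geometry kernels or I/O):

    from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

    def process_tile(tile: int) -> int:
        return tile * tile  # stand-in for RET/MDP work on one layout tile

    def process_chunk(tiles):
        # Within each worker process, fan out over threads.
        with ThreadPoolExecutor(max_workers=4) as threads:
            return list(threads.map(process_tile, tiles))

    if __name__ == "__main__":
        chunks = [list(range(i, i + 8)) for i in range(0, 64, 8)]
        with ProcessPoolExecutor(max_workers=8) as nodes:  # stand-in for cluster nodes
            results = [r for chunk in nodes.map(process_chunk, chunks) for r in chunk]
        print(len(results))  # 64 tiles processed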
Mask manufacturing for the 100nm and 65nm nodes is accompanied by an increasing deployment of VSB mask writing machines. The continuous integration trend in design and broad deployment of RET have a tremendous impact on file size and pattern complexity. The impact on the total turn-around time for a design is twofold: the time to get the data ready for the hand-off to the mask writer is growing, but also the time it actually takes to write the mask is heavily influenced by the size and complexity of the data. Different parameters are measures of how the flow and the particular tooling impact both portions. The efficiency of the data conversion flow conducted by a software tool can be measured by the output file size, the scalability of the computing during parallel processing on multiple processors and the total CPU time for the transformation. The mask writing of a particular data set is affected by the file size and the shot count. The latter is the total number of shots required to expose all patterns on the mask. The shot count can be estimated based on the figure count by type and their dimensions. The results of the fracturing have an impact on the mask quality -- in particular the grid size and the number and locations of small figures.
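A minimal sketch of such a shot-count estimate, assuming every fractured figure within the writer's maximum shot size takes one shot and larger figures are tiled (the maximum shot size and the figure mix are illustrative assumptions):

    import math

    MAX_SHOT_NM = 2000  # hypothetical maximum VSB shot dimension

    def shots_for_figure(width_nm: float, height_nm: float) -> int:
        # A rectangle or trapezoid is tiled into shots no larger than MAX_SHOT_NM.
        return math.ceil(width_nm / MAX_SHOT_NM) * math.ceil(height_nm / MAX_SHOT_NM)

    # Figure census from fracture output: (count, width, height) per figure type.
    figures = [(1_000_000, 120, 400), (50_000, 5_000, 300), (2_000, 8_000, 8_000)]
    total = sum(n * shots_for_figure(w, h) for n, w, h in figures)
    print(f"estimated shot count: {total:,}")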
Critical features of a product layout like isolated structures and complicated two-dimensional situations including line ends often have a smaller process window compared to regular highly nested features. It has been observed that the application of optical proximity corrections (OPC) can create yet more aggressive layout situations. Although corrected layouts meet the target contour under optimal exposure conditions, the process window of these structures under non-optimal conditions is thereby potentially reduced. This increases the risk of shorts and opens in the resist images of the designs under non-optimal exposure conditions. The requirement from a lithographer's point of view is to conduct a correction that considers the process window aspect besides the desired target contour. The present study investigates a concept of using the over-dose and under-dose responses of the simulated image of an exposed structure to optimize the correction value. The simulations describing the lithographic imaging process are based on an enhanced variable threshold model (VTRE). The placement error of the simulated edge of a structure is usually corrected for the nominal dose and focus settings. In the new concept the effective edge placement error is defined as the average of the edge placement errors for the over-dose and under-dose conditions. If a specific layout has a very non-symmetric response to over-/under-exposure for the evaluated condition, it is prone to a certain failure mechanism (open or short). Hence calculating the average of the edge placement errors will shift the effective correction towards a layout with a larger process window. The paper evaluates this concept for 100 nm ground rules and 193 nm lithography conditions. Examples of corrected layouts are presented together with experimental data. The limitations of the approach are discussed.
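Restating the definition from the abstract in formula form:

    \mathrm{EPE}_{\mathrm{eff}} = \tfrac{1}{2}\left(\mathrm{EPE}_{\mathrm{over\text{-}dose}} + \mathrm{EPE}_{\mathrm{under\text{-}dose}}\right)

A layout with a symmetric dose response leaves the correction unchanged; an asymmetric response shifts EPE_eff toward the side with the larger excursion, so the resulting correction trades a small nominal-dose placement error for a larger process window.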
RET treatments have become as integral a part of silicon manufacturing as steppers. For the 100-nm node, none of the critical layers can be adequately resolved without the application of at least one technique, and sometimes several in combination. All of these techniques can only be specified exactly in limited layout cases that are small enough for study and refinement. When the parameters defined in the initial study are applied to the full chip, however, the variability of real layout always leads to cases where the RET performs less than optimally. In fact, for most techniques, the real layout imposes a balance between different layout needs. As an example, consider the use of off-axis illumination with sub-resolution assist features (SRAF). The illumination that performs ideally for the dense regions of the layout clearly does not work for all pitches, hence the introduction of SRAF. Due to the limitations of infrastructure, the SRAF-assisted design is never an exact match to the dense region the illumination is tuned for. The result is twofold: first, the illumination must be relaxed in order not to be too selective of pitch; second, linewidth control across the chip becomes difficult. OPC and alt-PSM both lead to the same two results when applied full chip.
We describe a mask data preparation (MDP) flow using GDSII data. We describe why this flow is minimally disruptive, flexible, and efficient. The mask floor-planning composition can be stored in GDSII so that all layout data is in the same format. We describe how GDSII data can be hierarchically fractured. We use a technique involving the optimal determination of 'cover cells'. Use of cover cell placements allows a large reduction in runtime and data volume. In one case, we fractured a very large design with assist bars and OPC to a 28 GB MEBES mode 5 file. After application of cover cells, the same data fractures to roughly 360 MB -- a reduction of more than 75x.
Using a new functionality of the Calibre PrintImage tool, a method for side lobe correction is presented. A full-chip aerial image mapping is first obtained and then analyzed to detect and output polygons corresponding to chip areas where the aerial image intensity is above a user-set threshold. Using a state-of-the-art DRC tool and associated RET software from Mentor Graphics we are able to propose a completely automated flow for side lobe detection and correction. Mask manufacturing complexity can also be taken into consideration using geometrical constraints similar to those used for scattering bars, such as minimum length, minimum width and minimum space to main features.
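A minimal sketch of the detection step described above: threshold a simulated aerial-image intensity map and report bounding boxes of the connected regions above the user-set limit (the image here is synthetic; the real flow operates on Calibre PrintImage output):

    import numpy as np
    from scipy import ndimage

    def side_lobe_boxes(aerial: np.ndarray, threshold: float):
        # Connected regions whose intensity exceeds the threshold are
        # candidate side lobes; return their bounding boxes as slices.
        labels, _ = ndimage.label(aerial > threshold)
        return ndimage.find_objects(labels)

    aerial = np.random.rand(256, 256) * 0.3   # synthetic background image
    aerial[100:104, 50:53] = 0.9              # injected side-lobe hot spot
    print(side_lobe_boxes(aerial, threshold=0.7))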
To follow the SIA roadmap, lithographers must deal every day with the detrimental effects of a low-k1 lithography transfer process. One of the ways to reduce the pressure associated with such low-k1 values is to use Alternating Phase Shift Masks (henceforth "Alt-PSM"). Unfortunately, Alt-PSM also has some drawbacks, such as transmission imbalance between the phase shifted and non-phase shifted areas, and aspect-ratio-dependent phase etch depth variation resulting from the mask etching process. Moreover, fast two-dimensional simulators that are commonly used in resolution enhancement simulation are unable to directly predict these inherently three-dimensional effects. We demonstrate a general approach to simulate and correct these effects in large circuit designs by combining accurate mask representation with Optical and Process Correction ("OPC"). Using a DRC tool, geometry in the input circuit design is partitioned based on size and shape. Guided by accurate three-dimensional simulations or empirical data, these partitions may be classified and assigned different phases and transmission values to more realistically simulate the mask. By using this more accurate mask representation in our integrated OPC tool, Calibre OPCPro, we are able to correct designs for these three-dimensional mask effects as well as for conventional proximity effects.
We study the influence of process parameters on strong phase shifted and binary mask designs. The impact of a poly gate alternating phase shifting technique on CD control is analyzed for a microprocessor design. A combination of OPC and PSM tools is used to assess the sensitivity of CD to variations of defocus, exposure dose, and mask misalignment, with and without PSM. A simulation region of 640x310 microns with 20000 MOSFETs is cut out from a random logic design. Edge placement error measurement sites are assigned every 200 nm across the transistor channels to fine-monitor CD variations. Four additional measurement sites are put close to the channel ends to monitor these regions susceptible to CD variation. We use a fast simulation technique that employs optical SOCS (Sum of Coherent Systems) decomposition and an Extended Variable Threshold model. Optical parameter settings are chosen to be different for the binary and PSM masks to ensure comparable CD distributions in the center of the process windows. The PSM design is a 2-mask strong phase shifter design for the poly gate level. Model-based OPC is applied to all relevant layers of the design including trim masks. To explore the exposure-dose-misalignment input parameter space we set up a partial factorial DOE with more than 100 runs, each resulting in an EPE distribution for a parameter combination. We analyzed EPE shift and EPE dispersion. A definition of an EPE-based process window is proposed to capture the "proximity signature" of the design and its dependence on the process parameters. Comparison of binary and PSM designs yielded reliable quantitative measures of the PSM design performance gain.
Conventional methods of CD-limited yield and process capability analysis either completely ignore the intra-die CD variability caused by the optical and process proximity effects or assume it is normally distributed. We show that these assumptions do not hold for the aggressive subresolution designs. The form and modality of intra-die poly-gate CD variability strongly depend on the defocus and exposure values. We study the influence of process parameters on strong phase shifted and binary mask designs. A definition of a CD-based process window is proposed to capture the 'proximity signature' of the design and its dependence on process parameters.
In this paper, we discuss some of the problems encountered when implementing 2-mask strong phase shifter designs for the poly gate level in logic designs. Experimental results are presented showing pattern fidelity for different reticle designs. Simulations are presented indicating the improvement in pattern fidelity that can be expected from using OPC. PSM assignment and model-based OPC correction are performed by the Calibre-OPC tool from Mentor Graphics. In conclusion, we show that while fairly simple designs can be used to achieve 250nm design rules, in order to achieve both pattern fidelity and small feature size it is necessary to use OPC to correct for pattern distortion for design rules of 180nm and below.
In this paper we describe the use of sparse aerial image simulation coupled with process simulation, using the variable threshold resist (VTR) model, to do optical and process proximity correction (OPC) on phase shift masks (PSM). We will describe the OPC of PSM, including attenuated PSM, clear field PSM, and double exposure PSM. We will explain the method used to perform such OPC and show examples of critical dimension control improvements generated from such a technique. Simulations, PSM assignment and model based OPC corrections are performed with Calibre Workbench, Calibre DRC, Calibre PSMgate and Calibre OPCpro tools from Mentor Graphics. In conclusion we will show that PSM techniques need to be corrected by a phase aware proximity correction tool in order to achieve both pattern fidelity as well as small feature size on the wafer in a production environment.
For lithography smaller than 180 nm using 248 nm steppers, phase-shifting lithography is becoming more routine. However, when applied to very small dimensions, OPC effects begin to become pronounced. We have designed a new phase-shifting test structure for reticles to address these phase shifting distortions, and report on its use.
In this paper we discuss some of the problems and solutions discovered when implementing 2-mask strong phase shifter designs for the poly gate level in logic designs. Experimental results are presented showing pattern fidelity for different reticle designs. Simulations are presented indicating the improvement in pattern fidelity that can be expected from using OPC. Simulations, PSM assignment and model-based OPC correction are performed by the Calibre WORKbench, Calibre DRC, Calibre PSMgate and Calibre OPCpro tools from Mentor Graphics. In conclusion, we show that while fairly simple designs can be used to achieve 250 nm design rules (approximately 150 nm gates), in order to achieve both pattern fidelity as well as small feature size it is necessary to use 3-layer/phase-aware model-based OPC to correct for pattern distortion for design rules of 180 nm and below (approximately 100 nm phase-shifted gates).