This PDF file contains the front matter associated with SPIE
Proceedings Volume 7275, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
In previous publications we proposed a hierarchical variability model and verified it with 90nm test data. This model is now validated with a new set of 45nm test chips. A mixed sampling scheme with both sparse and exhaustive measurements is designed to capture both wafer-level and chip-level variations. Statistical analysis shows that the across-wafer systematic function can be sufficiently described as parabolic, while the within-die systematic variation is now very small, with no discernible systematic component. Analysis of pattern-dependent effects on leakage current shows that systematic pattern-to-pattern LEFF variation is almost eliminated by optical proximity correction (OPC), but stress-related variation is not. An intentionally introduced gate-length offset between two wafers in our dataset provides insight into device parameter variability and sheds additional light on the underlying sources of process variation.
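The parabolic across-wafer fit described above can be illustrated with a small least-squares sketch. All coordinates, coefficients, and noise levels below are invented for illustration; the paper's actual model and measurement data are not shown here.

```python
import numpy as np

# Hypothetical sketch: fit a parabolic across-wafer systematic function
# z(x, y) = c0 + c1*x + c2*y + c3*x^2 + c4*y^2 to sparse wafer measurements.
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)   # normalized wafer coords
true = np.array([1.0, 0.1, -0.05, 0.4, 0.3])              # assumed coefficients
z = true @ np.stack([np.ones_like(x), x, y, x**2, y**2]) + rng.normal(0, 0.01, 200)

A = np.column_stack([np.ones_like(x), x, y, x**2, y**2])  # design matrix
coef, *_ = np.linalg.lstsq(A, z, rcond=None)              # least-squares fit
residual = z - A @ coef        # what remains: within-die plus random variation
print(np.round(coef, 2))
```

Subtracting the fitted surface from the raw data is what isolates the within-die and random components for the statistical analysis.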
To address the variability challenges inherent to the 45 and 32nm nodes as early as possible, a model-based variability analysis has been implemented to predict lithography-induced electrical variability in standard cell libraries. This analysis was used to optimize the cell layouts and decrease variability by up to 40%.
A tiny-footprint, electrically probeable, single-layer defocus monitor/test structure has been designed and tested to show sub-10nm resolution in electrical defocus monitoring. Electrical testing is a low-cost must-have for on-chip production process monitoring, which will become necessary for effective Design for Manufacturing. This programmable defocus monitor can be designed to pinch open at various levels of defocus by modifying four different layout parameters: CD, probe size, offset, and the number of rings. An array of these structures can be read as a series of opens and shorts, or 1s and 0s, to electrically extract defocus. One important feature of this defocus test structure is its asymmetric response through focus, which translates to high sensitivity to defocus at low defocus values, close to nominal conditions. Simulation and experimental results have shown good sensitivity for both on-axis (top-hat) and off-axis (quasar) illumination. This paper presents both simulation and experimental results that demonstrate the programmability and sensitivity of this test structure to defocus.
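The opens-and-shorts readout can be sketched as a thermometer-style decode. The thresholds and bit patterns below are invented for illustration; the actual design-to-threshold mapping comes from the four layout parameters described above.

```python
# Hypothetical sketch: decode an array of defocus monitors read as opens (1)
# and shorts (0). Each structure is assumed to pinch open once |defocus|
# exceeds its design threshold, so the readout forms a thermometer code.
thresholds_nm = [20, 40, 60, 80, 100]   # assumed pinch-off defocus per structure

def decode_defocus(bits, thresholds):
    """Bracket |defocus| from the open/short readout (1 = open, 0 = short)."""
    opened = [t for b, t in zip(bits, thresholds) if b == 1]
    shorted = [t for b, t in zip(bits, thresholds) if b == 0]
    low = max(opened, default=0)        # defocus exceeds every opened threshold
    high = min(shorted, default=None)   # but not the smallest still-shorted one
    return low, high

print(decode_defocus([1, 1, 0, 0, 0], thresholds_nm))  # -> (40, 60)
```

Finer threshold spacing near zero defocus would mirror the structure's high sensitivity close to nominal conditions.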
Computational lithography relies on algorithms. These algorithms exhibit variability that can be as much as 5% (1σ) of the critical dimension for the 65-nm technology. Using hotspot analysis and fixing as an example, such variability can be addressed at the algorithm level by controlling and eliminating its root causes, and at the application level by setting specifications that are commensurate with both the limitations of the algorithms and the goals of the application.
The time-to-market-driven need to maintain concurrent process-design co-development, even amid discontinuous patterning, process, and device innovation, is reiterated. The escalating design-rule complexity resulting from increasing layout sensitivities in physical and electrical yield, and the resulting risk to profitable technology scaling, is reviewed.
Shortcomings in traditional Design for Manufacturability (DfM) solutions are identified and contrasted to the highly
successful integrated design-technology co-optimization used for SRAM and other memory arrays. The feasibility of
extending memory-style design-technology co-optimization, based on a highly simplified layout environment, to logic
chips is demonstrated. Layout density benefits, modeled patterning and electrical yield improvements, as well as
substantially improved layout simplicity are quantified in a conventional versus template-based design comparison on a
65nm IBM PowerPC 405 microprocessor core. The adaptability of this highly regularized template-based design
solution to different yield concerns and design styles is shown in the extension of this work to 32nm with an increased
focus on interconnect redundancy. In closing, the work not covered in this paper, focused on the process side of the
integrated process-design co-optimization, is introduced.
Visualization at the mask plane of the effects of illumination, proximity, and defocus is used to give physical insight into
restricted design rules, layout choices, and residual edge placement errors. To facilitate this work, a pattern matching
code has been tuned, tested, and enhanced. The richness of the original code with complex match factors, mask Boolean
operations, and mask weights and phases has been adapted to operate on clear-field attenuated-phase-shifting masks with
asymmetrical illumination. To account for illumination effects, the aberration spillover is multiplied by the Fourier
transform of the angular distribution of the intensity spectrum incident on the mask. In a study of binary mask layouts,
the R-squared correlation of the prediction of image intensity with Pattern Match Factor increased from 0.57 to 0.89
when annular illumination was included in the spillover function. In addition, features to visualize the mutual
coherence, shifts of the illumination, and source maps have been added.
As design rules and corresponding logic standard cell layouts continue to shrink node-on-node in
accordance with Moore's law, complex 2D interactions, both intra-cell and between cells, become much
more prominent. For example, in lithography, lack of scaling of λ/NA implies aggressive use of resolution
enhancement techniques to meet logic scaling requirements (resulting in adverse effects such as 'forbidden pitches') and also implies an increasing range of optical influence relative to cell size. These
adverse effects are therefore expected to extend well beyond the cell boundary, leading to lithographic
marginalities that occur only when a given cell is placed "in context" with other neighboring cells in a
variable design environment [1]. This context dependence is greatly exacerbated by increased use of strain
engineering techniques such as SiGe and dual-stress liners (DSL) to enhance transistor performance, both
of which also have interaction lengths on the order of microns. The use of these techniques also breaks the
formerly straightforward connection between lithographic 'shapes' and end-of-line electrical performance,
thus making the formulation of design rules that are robust to process variations and complex 2D
interactions more difficult.
To address these issues, we have developed a first-principles-based simulation flow to study context-dependent electrical effects in layout, arising not only from lithography, but also from stress and interconnect parasitic effects. This flow is novel in that it can be applied to relatively large layout clips (required for context-dependent analysis) without relying on semi-empirical or 'black-box' models for the fundamental electrical effects. The first-principles-based approach is ideal for understanding context-dependent effects early in the design phase, so that they can be mitigated through restrictive design rules.
The lithographic simulations have been discussed elsewhere [1] and will not be presented in detail. The
stress calculations are based on a finite-element method, extrapolated to mobility using internal algorithms.
While these types of calculations are common in '1D' TCAD space, we have modified them to handle ~10 μm × 10 μm clips in reasonable runtime, based on advances in software and optimization of computing resources, structural representations, and simulation grids.
In this paper, we discuss development and validation of the simulation flow, and show representative
results of applying this flow to analyze context-dependent problems in a 32-nm low-power CMOS process.
Validation of the flow was accomplished using a well-characterized 40/45-nm CMOS process
incorporating both DSL and SiGe. We demonstrate the utility of this approach not only for establishing restrictive design rules that avoid catastrophic context-dependent effects, but also for flagging individual cells and identifying cell design practices that exhibit unacceptable levels of context-dependent variability. We
further show how understanding the sources of stress variation is vital to appropriately anchoring SPICE
models to capture the impact of context-dependent electrical effects. We corroborate these simulations
with data from electrical test structures specifically targeted to elucidate these effects.
As design rule (DR) scaling continues to push lithographic imaging to higher numerical aperture (NA) and smaller k1
factor, extensive use of resolution enhancement techniques becomes a general practice. Use of these techniques not only
adds considerable complexity to the design rules themselves, but also can lead to undesired and/or unanticipated
problematic imaging effects known as "hotspots." This is particularly common for metal layers in interconnect
patterning due to the many complex random and bidirectional (2D) patterns present in typical layout. In such situations,
the validation of DR becomes challenging, and the ability to analyze large numbers of 2D layouts is paramount in
generating a DR set that encodes all lithographic constraints to avoid hotspot formation.
Process window (PW) and mask error enhancement factor (MEEF) are the two most important lithographic constraints in
defining design rules. Traditionally, characterization of PW and MEEF by simulation has been carried out using discrete
cut planes. For a complex 2D pattern or a large 2D layout, this approach is intractable, as the most likely location of the
PW or MEEF hotspots often cannot be predicted empirically, and the use of large numbers of cut planes to ensure all
hotspots are detected leads to excessive simulation time. In this paper, we present a novel approach to analyzing full-field PW and MEEF using the inverse lithography technology (ILT) technique, [1] in the context of restrictive design
rule development for the 32nm node. Using this technique, PW and MEEF are evaluated on every pixel within a design,
thereby addressing the limitations of cut-plane approach while providing a complete view of lithographic performance.
In addition, we have developed an analysis technique using color bitmaps that greatly facilitates visualization of PW and
MEEF hotspots anywhere in the design and at an arbitrary level of resolution.
We have employed the ILT technique to explore metal patterning options and their impact on 2D design rules. We show
the utility of this technique to quickly screen specific rule and process choices (including illumination condition and process bias) using large numbers of parameterized structures. We further demonstrate how this technique can be used
to ascertain the full 2D impact of these choices using carefully constructed regression suites based on standard random
logic cells. The results of this study demonstrate how this simulation approach can greatly improve the accuracy and
quality of 2D rules, while simultaneously accelerating learning cycles in the design phase.
Chip performance and yield are increasingly limited by systematic and random variations introduced during wafer
processing. Systematic variations are layout-dependent and can be broadly classified as optical or non-optical in nature.
Optical effects have their origin in the lithography process including mask, RET, and resist. Non-optical effects are
layout-dependent systematic variations which originate from processes other than lithography. Some examples of non-optical effects are stress variations, well-proximity effect, spacer thickness variations, and rapid thermal anneal (RTA)
variations. Semiconductor scaling has led to an increase in the complexity and impact of such effects on circuit
parameters. A novel technique for dataprep called electrically-driven optical proximity correction (ED-OPC) has been
previously proposed which replaces the conventional OPC objective of minimization of edge placement error (EPE) with
an electrical error related cost function. The introduction of electrical objectives into the OPC flow opens up the
possibility of compensating for electrical variations which do not necessarily originate from the lithographic process. In
this paper, we propose to utilize ED-OPC to compensate for optical as well as non-optical effects in order to mitigate
circuit-limited variability and yield loss. We describe the impact of non-optical effects on circuit parameters such as threshold voltage and mobility. Given accurate models to predict variability of circuit parameters, we show how ED-OPC can be leveraged to compensate circuit performance to match designer intent. Compared to existing
compensation techniques such as gate length biasing and metal fills, the primary advantage of using ED-OPC is that the
process of fragmentation in OPC allows greater flexibility in tuning transistor properties. The benefits of using ED-OPC
to compensate for non-optical effects can be observed in reduced guard-banding, leading to less conservative designs. In
addition, results show a 4% average reduction in spread in timing in compensating for intra-die threshold voltage
variability, which potentially translates to mitigation of circuit-limited yield.
In this work we present the impact of the variations in lithography on the performance of
analog circuits. Matching pairs of devices are critical in the design of many analog circuit blocks.
Simple circuits, such as current mirrors and differential pairs, depend on the matching between
transistors to provide accurate bias currents and symmetrical ac gains with minimal offsets. Complex
analog systems, such as analog-to-digital converters and phase-locked loops, depend on the
matching between different active and passive devices, such as transistors and resistors. Variations
in lithography encountered during fabrication of analog circuits can lead to matching errors and
hence performance and yield losses for the device. The impact of some lithographic errors, such as focus and dose errors, mask errors, and lens aberrations, on key analog building blocks, namely a single transistor, a current mirror, and a differential pair, is presented. Such errors are also explored for a ring oscillator, demonstrating the extent of performance variation in complex analog circuits.
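The matching sensitivity discussed above can be illustrated with a first-order calculation. This is a hypothetical sketch using the textbook square-law model (drain current proportional to W/L), not the paper's analysis; device dimensions are invented and short-channel effects are ignored.

```python
# Hypothetical sketch: first-order effect of a gate-length (CD) offset on a
# simple current mirror, using the square-law model I_D proportional to W/L
# (short-channel and mobility effects ignored; values are illustrative only).
def mirror_error(L_ref_nm, dL_nm):
    """Fractional output-current error when only the output device's gate
    length shifts by dL (same W, Vgs, and Vt on both devices)."""
    return L_ref_nm / (L_ref_nm + dL_nm) - 1.0

print(f"{mirror_error(45.0, 1.0):+.2%}")   # ~ -2.17% current mismatch
```

Even a 1 nm lithographic CD offset on a 45 nm gate produces a roughly 2% bias-current error, which is why focus, dose, and mask errors translate directly into matching and yield loss.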
Valeriy Sukharev, Ara Markosian, Armen Kteyan, Levon Manukyan, Nikolay Khachatryan, Jun-Ho Choy, Hasmik Lazaryan, Henrik Hovsepyan, Seiji Onoue, et al.
Proceedings Volume Design for Manufacturability through Design-Process Integration III, 72750H (2009) https://doi.org/10.1117/12.813882
A novel model-based algorithm provides a capability to control full-chip design specific variation in pattern transfer
caused by via/contact etch processes. This physics based algorithm is capable of detecting and reporting etch hotspots
based on the fab defined thresholds of acceptable variations in critical dimension (CD) of etched shapes. It can be used
also as a tool for etch process optimization to capture the impact of the variety of patterns present in a particular design.
A realistic set of process parameters employed by the developed model allows using this novel via-contact etch (VCE)
EDA tool for the design aware process optimization in addition to the "standard" process aware design optimization.
An interval-valued circuit simulation engine is proposed to estimate the transistor-level circuit performance distribution without Monte Carlo simulation. In the proposed flow, variability in process variables is first cast into an interval representation; then an interval-valued circuit simulator, in which all real-number operations are replaced by interval operations, is used to simulate the circuit; finally, the interval-valued simulation results are used to extract performance statistics. A runtime reduction over both Monte Carlo simulation and response surface modeling has been demonstrated, while excellent accuracy in transistor-level performance statistics is maintained. Future work includes incorporating non-Gaussian distributions into the interval simulation and adapting the interval-valued framework into a design flow suitable for statistical performance optimization.
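The core idea, replacing real-number operations with interval operations, can be sketched in a few lines. This is a minimal illustration on an Elmore-style delay expression with invented component values, not the paper's transistor-level simulator.

```python
# Minimal sketch of the interval-valued idea: replace real-number operations
# with interval operations so variability propagates through the computation.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def __repr__(self):
        return f"[{self.lo:.4g}, {self.hi:.4g}]"

# Cast +/-10% process variability in R and C into intervals, then evaluate
# an Elmore-style delay t = R1*C1 + (R1+R2)*C2 without Monte Carlo.
R1, R2 = Interval(90.0, 110.0), Interval(180.0, 220.0)                # ohms
C1, C2 = Interval(0.9e-12, 1.1e-12), Interval(1.8e-12, 2.2e-12)       # farads
delay = R1 * C1 + (R1 + R2) * C2
print(delay)   # bounds on the support of the delay distribution
```

A single interval evaluation bounds the output range that Monte Carlo would need many samples to explore, which is the source of the runtime reduction.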
We have conducted a study of context-dependent variability for cells in a 45nm library, including both lithography and
stress effects, using the Cadence Litho Electrical Analyzer (LEA) software. Here, we present sample data and address a
number of questions that arise in such simulations. These questions include identification of stress effects causing
context dependence, impact of the number of contexts on the results, and combining lithography-induced variations due
to overlay error with context-dependent variations. Results of such simulations can be used to drive a number of
corrective and adaptive actions, among them layout modification, cell placement restrictions, or optimal design margin
determination.
Parameter-specific and simulation-calibrated ring oscillator (RO) inverter layouts are described for identifying and
quantitatively modeling sources of circuit performance variation from source/drain stress, shallow trench isolation (STI)
stress, lithography, etch, and misalignment. This paper extends the RO approach by adding physical
modeling/simulation of the sources of variability to tune the layouts of monitors for enhanced sensitivity and selectivity.
Poly and diffusion layout choices have been guided by fast-CAD pattern matching. The accuracy of the fast-CAD
estimate from the Pattern Matcher for these lithography issues is corroborated by simulations in Mentor Graphics
Calibre. Generic conceptual results are given based on the experience of preparing proprietary layouts that pass DRC for a 45 nm test chip with ST Micro. Typical two-fold improvements in sensitivity are possible with layouts for lithography focus. A layout monitor for poly-to-diffusion misalignment based on programmable offsets shows a 0.8% change in RO frequency per 1 nm of poly-to-diffusion offset. Layouts are also described for characterizing stress effects associated with diffusion area size, asymmetry, vertical spacing, and multiple gate lengths.
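The quoted 0.8%-per-nanometre sensitivity turns a frequency measurement directly into a misalignment estimate. The frequencies below are invented for illustration; only the sensitivity figure comes from the text.

```python
# Worked example of the sensitivity quoted above: with a 0.8% RO frequency
# change per 1 nm of poly-to-diffusion offset, a measured frequency shift
# converts directly into a misalignment estimate (frequencies are invented).
SENSITIVITY = 0.008   # fractional frequency change per nm, from the text

def misalignment_nm(f_nominal_hz, f_measured_hz):
    """First-order misalignment estimate from the RO frequency shift."""
    return (f_measured_hz / f_nominal_hz - 1.0) / SENSITIVITY

print(round(misalignment_nm(1.000e9, 1.016e9), 2))   # -> 2.0
```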
Scaling of integrated circuits over the past several decades has been accomplished through improvements in photolithography equipment and resolution enhancement techniques. The smaller the feature size, the tighter the required control of critical dimension (CD). Enormous efforts have been made to achieve device specifications. Especially in logic devices such as systems-on-chip, controllability of transistor gate CD is one of the greatest concerns for both designer and manufacturer, since the characteristics of a device chip, speed and power, largely depend on the gate CD. From the viewpoint of the manufacturer, all gate transistors on a chip have equivalent weight, and tight CD controls are applied to all of them. From the viewpoint of the chip designer, however, each transistor has a different weight, and the required controllability differs accordingly. In this paper, we introduce the concept of tolerance as a representation of design intent. An intention derived at the chip design stage is converted to a formula which is comprehensible and measurable at manufacturing1,2. For instance, the timing margin of each path, derived from timing analysis at chip design, can be converted to the most comprehensible such formula, a CD tolerance. Two major applications of the tolerance deduced from design intent are presented. The first is reduction of OPC processing time; the second is qualification of photomask and wafer processing. Comprehending design intent and interpreting it as tolerances is a promising path to cost-effective manufacturing.
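The conversion from timing margin to CD tolerance can be sketched with a first-order model. This is a hypothetical illustration assuming a locally linear delay-vs-CD sensitivity; all numbers are invented and the paper's actual conversion formula is not shown.

```python
# Hypothetical sketch of converting a design intention (a path's timing
# margin) into a manufacturing-measurable CD tolerance, assuming a locally
# linear delay-vs-gate-CD sensitivity. All numbers are illustrative.
def cd_tolerance_nm(timing_margin_ps, delay_sensitivity_ps_per_nm):
    """Largest gate-CD error the path can absorb without violating timing."""
    return timing_margin_ps / delay_sensitivity_ps_per_nm

print(cd_tolerance_nm(30.0, 6.0))   # -> 5.0: a 30 ps margin tolerates 5 nm
```

Paths with large timing margins thus receive loose CD tolerances, which is what enables shorter OPC processing and relaxed qualification on non-critical gates.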
As advanced manufacturing processes become more stable, the ability to adapt new designs to fully utilize the available manufacturing technology becomes a key technological differentiator. However, in many cases such gains can only be realized and evaluated during full-chip analysis. It has been demonstrated that the most accurate layout verification methods require application of the actual OPC recipes, along with most of the mask data preparation that defines the pattern-transfer characteristics of the process. Still, this method is in many instances not sufficiently fast to be used in a layout creation environment that undergoes constant updates.
Analysis of typical mask data processing shows that the most CPU-intensive computations are the OPC and contour simulation steps needed to perform layout printability checks. Several researchers have tried to reduce the time it takes to compute the OPC mask by introducing matrix convolutions of the layout with empirically calibrated two-dimensional functions. However, most of these approaches do not provide a sufficient speed-up, since they only replace the OPC computation and still require a full contour computation. Another alternative is to find effective ways of pattern matching those topologies that will exhibit transfer difficulties4, but such methods lack the ability to be predictive beyond their calibration data.
In this paper we present a methodology that includes common resolution enhancement techniques, such as
retargeting and sub-resolution assist feature insertion, and which replaces the OPC computation and
subsequent contour calculation with an edge bias function based on an empirically-calibrated, directional,
two-dimensional function. Because the edge bias function does not provide adequate control over the
corner locations, a spline-based smoothing process is applied. The outcome is a piecewise-linear curve
similar to those obtained by full lithographic simulations.
Our results are analyzed from the point of view of runtime and matching with respect to a complete
verification process that uses full mask data preparation followed by production-quality contour
simulations under a variety of process variations, including perturbations to focus, mask bias and exposure.
One of the main concerns with using an empirical model is its ability to predict topologies that were not
part of the original calibration. While there is indeed a dependency on the model in regard to the data used
for calibration, the results indicate that this dependency is weak and that such models are able to provide
sufficient accuracy with much more tolerable computation times.
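The edge-bias idea above can be sketched in one dimension. This is a hypothetical stand-in for the paper's directional two-dimensional empirical function; the calibration spacings and bias values are invented.

```python
import numpy as np

# Hypothetical sketch of the fast-estimation idea: instead of running OPC and
# contour simulation, apply a calibrated edge bias b(d) that depends on the
# distance d to the nearest neighbouring feature (a 1-D stand-in for the
# paper's directional two-dimensional function; all numbers are invented).
cal_d = np.array([60.0, 100.0, 200.0, 400.0])   # nm, calibration spacings
cal_b = np.array([8.0, 5.0, 2.0, 0.5])          # nm, calibrated edge bias

def edge_bias(d_nm):
    """Predicted printed-edge displacement via piecewise-linear interpolation."""
    return float(np.interp(d_nm, cal_d, cal_b))

# The biased edges would then be corner-smoothed with a spline-based process
# (not shown) to yield a piecewise-linear printed-contour estimate.
print(edge_bias(150.0))   # -> 3.5
```

Evaluating a lookup like this per edge is what replaces the OPC-plus-contour computation and yields the runtime advantage discussed above.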
An algorithm is presented which performs a model-based colouring of a given layout for double patterning.
The algorithm searches the space of patterns which can be printed with a particular wavelength and numerical
aperture, and seeks to find a pair of patterns which combine to produce the desired target layout. This is
achieved via a cost function which encodes the geometry of the layout and allowable edge placement tolerances.
If the layout is not printable by double patterning, then the algorithm provides a closest solution and indicates
hotspots where the target is not feasible.
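The decomposition step can be sketched as 2-colouring of a conflict graph. The paper's algorithm is model-based, searching printable pattern pairs with a cost function on edge placement; the rule-based version below only illustrates the colouring and hotspot reporting, with invented feature indices.

```python
from collections import deque

# Hypothetical sketch of the colouring step: features spaced below the
# single-exposure limit must go on different masks, which is 2-colouring of
# the conflict graph. Odd cycles correspond to hotspots where the target
# layout is not decomposable.
def two_colour(n, conflicts):
    """Assign each of n features to mask 0 or 1; return (colours, hotspots)."""
    adj = [[] for _ in range(n)]
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)
    colours, hotspots = [None] * n, []
    for start in range(n):
        if colours[start] is not None:
            continue
        colours[start] = 0
        queue = deque([start])
        while queue:               # breadth-first traversal of the component
            u = queue.popleft()
            for v in adj[u]:
                if colours[v] is None:
                    colours[v] = 1 - colours[u]
                    queue.append(v)
                elif colours[v] == colours[u]:
                    hotspots.append((u, v))   # odd cycle: target not feasible
    return colours, hotspots

print(two_colour(3, [(0, 1), (1, 2)]))   # -> ([0, 1, 0], [])
print(two_colour(3, [(0, 1), (1, 2), (0, 2)])[1])   # odd cycle flagged
```

A model-based formulation replaces the hard conflict edges with simulated printability costs, so a "closest" colouring still exists when no exact one does.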
While predicting and removing lithographic hot-spots is a mature practice in the semiconductor industry, it remains one of the most difficult challenges to achieve high-quality detection coverage and to provide designer-friendly fixing guidance for effective physical design implementation. In this paper, we present an accurate hot-spot detection method based on a leveling and scoring algorithm that uses a weighted combination of image quality parameters, i.e., normalized image log-slope (NILS), mask error enhancement factor (MEEF), and depth of focus (DOF), all of which can be obtained through lithography simulation. The hot-spot scoring function and severity levels are calibrated against process-window qualification results. A least-squares regression method is used to calibrate the weighting coefficients for each image quality parameter. Once the scoring function is calibrated with wafer results, it can be applied to various designs on the same process. Using this calibrated scoring function, we generate fixing guidance and rules for the detected hot-spot areas by locating the edge bias value that leads to a hot-spot-free score level. Fixing guidance is generated by considering the dissection information of the OPC recipe. Finally, we integrated the hot-spot fixing guidance display into a layout editor for effective design implementation. Applying this hot-spot scoring and fixing method to memory devices at the 50nm node and below, we achieved sufficient process-window margin for high-yield mass production.
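The weighted-combination scoring with least-squares calibration can be sketched as follows. All NILS, MEEF, DOF, and severity numbers below are synthetic and invented for illustration; the actual weights would come from process-window qualification data.

```python
import numpy as np

# Hypothetical sketch of the scoring calibration: fit weights so that
# score = w0 + w1*NILS + w2*MEEF + w3*DOF reproduces severity levels from
# process-window qualification. All numbers below are invented.
nils = np.array([2.1, 1.5, 1.2, 0.9, 2.5])
meef = np.array([2.0, 3.5, 4.0, 5.2, 1.8])
dof  = np.array([0.30, 0.12, 0.10, 0.05, 0.28])   # um
A = np.column_stack([np.ones_like(nils), nils, meef, dof])
severity = A @ np.array([4.0, -1.0, 0.5, -5.0])   # synthetic qualified levels

w, *_ = np.linalg.lstsq(A, severity, rcond=None)  # least-squares weights

def score(nils_v, meef_v, dof_v):
    """Weighted-combination hot-spot score; higher = more severe (assumed)."""
    return float(w @ [1.0, nils_v, meef_v, dof_v])

print(round(score(1.5, 3.5, 0.12), 2))   # -> 3.65
```

Once fitted for a process, the same weights rank candidate hot-spots on any design simulated with the same recipe, which is the portability claimed above.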
In May 2006, the Mask Design, Drawing, and Inspection Technology Research Department (Mask D2I) at the Association of Super-Advanced Electronics Technologies (ASET) launched a 4-year program for reducing mask manufacturing cost and TAT through concurrent optimization of MDP, mask writing, and mask inspection [1]. One area of the project focuses on the extraction and utilization of repeating patterns. The repeating patterns are extracted from the mask data after OPC, and the information is then used in Character Projection (CP) to reduce the shot count during electron beam writing. In this paper we verify the efficiency of extracting repeating patterns from actual device production data obtained from the member companies of Mask D2I, and report on improvements to the software tool based on these results.
Device and reliability performance of deep sub-micron devices depends on the physical layout and the sensitivity of this layout to the foundry process. However, layout optimization does not necessarily mean layout "relaxation" for all structures. A complex design needs a "judgment" system that identifies the "quality" of each design rule and suggests locations to be modified. Another important task is the ability to quantitatively compare layouts designed for the same purpose (standard cells, for example). In this work, we propose a "ranking" system that analyzes the design and prioritizes the places to be modified. Our "ranking" system consists of a set of rules based on the wafer foundry process. Different check rules have different impacts on performance and therefore receive different priorities in the final results. From these, the overall design score and the rule priority of improvement are calculated. We start by presenting our ranking analysis system. Afterwards, we compare several standard cell libraries designed by leading 3rd-party IP houses. Based on the ranking results, guidelines and priorities for layout modification are defined. We also discuss the impact of different DRC coding methods on the scoring values; for example, checking the overlap of the M1 layer over a contact by measuring the enclosure versus measuring the overlap area. Finally, we show our analysis for several similar cells as well as for a full-chip design.
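A weighted rule-scoring scheme of this kind can be sketched as follows; the rule names, weights, and violation counts are hypothetical, not taken from the paper.

```python
# Each check rule carries a process-derived priority weight; the design score
# aggregates weighted violation counts, and rules are ranked by their
# contribution to the total (all names and numbers illustrative).
RULE_WEIGHTS = {
    "M1_contact_enclosure": 5.0,   # assumed high impact on yield
    "poly_to_active_space": 3.0,
    "min_metal_area":       1.0,   # assumed low impact
}

def design_score(violations):
    """violations: dict rule_name -> count. Lower score = better layout."""
    return sum(RULE_WEIGHTS[r] * n for r, n in violations.items())

def improvement_priority(violations):
    """Rules ranked by their contribution to the total score."""
    contrib = {r: RULE_WEIGHTS[r] * n for r, n in violations.items()}
    return sorted(contrib, key=contrib.get, reverse=True)

# Two hypothetical standard cells with their violation counts.
cell_a = {"M1_contact_enclosure": 2, "poly_to_active_space": 1, "min_metal_area": 4}
cell_b = {"M1_contact_enclosure": 0, "poly_to_active_space": 3, "min_metal_area": 1}
```

Comparing `design_score(cell_a)` against `design_score(cell_b)` gives the quantitative layout comparison, and `improvement_priority` gives the suggested order of fixes.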
In recent years, various DFM techniques have been developed and adopted by designers to improve circuit yield and reliability. The benefits of applying a DFM technique to a circuit often come at the expense of degrading other process or design attributes. In this paper, we discuss two widely deployed techniques, double vias and wire spreading/widening, show the benefits and trade-offs of their usage, and present practical ways to implement them in SoC designs.
As process development advances to deep sub-100 nm technology, many new technologies such as immersion lithography and hyper-NA lens design have been developed to improve on-wafer pattern resolution and meet the technology requirements. During early process development, such as for the 45 nm technology, it was not clear that the lithography tool could meet the stringent CD variation requirements. Many rules, such as fixed poly pitch, single poly orientation, and dummy poly insertion for diffusion-edge transistors, were implemented [1, 2] to ensure that, with the designated litho tool, CD variation could be minimized. These rules generally added layout design complexity and area penalty. It would be efficient if these rules could be evaluated and properly implemented using data collected from well-designed test structures.
In this work, a set of simple test structures with various dummy poly gate lengths, numbers of dummy poly gates, and fixed-pitch poly gate orientations was implemented in the process development test vehicles (TVs). Electrical, simulation, and in-line CD data for these test structures were collected. Analysis of the data and the related design rule optimization and implementation are described. This work helped optimize and properly implement the 45 nm gate poly design rules during early process development for Xilinx FPGA products.
In this paper, we provide data on the actual scaling of OPC runtime experienced at AMD. We review the expected OPC requirements down to the 16 nm node and develop a model to predict the total CPU requirements to process a single chip design. We also review the scalability of "hardware acceleration" under a variety of scenarios.
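A back-of-the-envelope version of such a CPU-requirement model might look like the following; all constants (shape doubling per node, per-shape cost growth) are illustrative assumptions rather than AMD's data.

```python
# Toy model of full-chip OPC CPU requirements: shape count is assumed to
# double at each node transition, and the per-shape OPC cost to grow with
# model complexity. All constants are illustrative assumptions.
NODES = [65, 45, 32, 22, 16]  # nm, successive technology nodes

def opc_cpu_hours(node_nm, base_node=65, base_gshapes=1.0,
                  base_hours_per_gshape=100.0, complexity_growth=1.5):
    """Estimated CPU-hours to OPC one full-chip design at a given node."""
    steps = NODES.index(node_nm) - NODES.index(base_node)
    gshapes = base_gshapes * 2 ** steps                      # shapes double per node
    hours_per_gshape = base_hours_per_gshape * complexity_growth ** steps
    return gshapes * hours_per_gshape
```

Under these assumptions the CPU requirement grows by a factor of three (2 x 1.5) per node, which is the kind of compounding that motivates evaluating hardware acceleration.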
We have constructed a hotspot management flow with a die-to-database (D2DB) inspection system for spacer patterning technologies (SPTs), which are among the strongest candidates for double patterning at 3x nm half-pitch generations and below. At SPIE 2006 [1], we reported in "Hotspot management" that hotspots extracted by full-chip lithography simulation could be quickly fed back to OPC, mask making, etc. However, since SPT involves process complexity from resist patterning to final device patterning, it is difficult to accurately estimate hotspots on the final patterned features on wafers by full-chip lithography simulation alone. Therefore, experimental full-chip inspection methodologies for hotspot extraction are necessary to construct hotspot management for SPTs. In this work, we applied an electron beam (EB) D2DB inspection system to SPTs in a hotspot management flow. As the D2DB inspection system, the NGR-2100 has remarkable features for full-chip inspection within reasonable operating time; it provides accurate hotspot extraction by EB with a wider field of view (FOV) than that of SEMs. With the constructed hotspot management flow, extracted hotspots for SPT involving errors of around 10 nm could easily be fed back to fix the wafer processes and mask data.
The RET selection process for the 32 nm and 22 nm technology nodes is becoming ever more complex due to the increasing availability of strong resolution enhancements (e.g., polarization control, custom exotic illuminators, hyper NA). Lithographers often select illuminator geometries by analyzing aerial images for a limited set of structures. However, source shapes optimized using this methodology are not always optimal for other complex patterns, leading to critical hot-spots on the final wafer images in the form of bridges and gaps. Lithographers would like to analyze the impact of a selected source shape on wafer results for complex patterns before running physical experiments. Physics-based computational lithography tools allow users to predict wafer images accurately, enabling large factorial experiments over simple and complex designs without physical experiments. In this study, we analyze the lithographic performance of simple 1D patterns using aerial image models and physical resist models with calibrated resist parameters [1-4] for two commercial resists. Our goal is to determine whether physical resist models yield a different optimal solution than the aerial image model. We explore several imaging parameters, such as numerical aperture (NA), source geometries (annular, quadrupole, etc.), illumination configurations, and anchor features for different sizes and pitches. We apply physics-based OPC and compute common process windows using the physical model. Finally, we analyze and recommend the optimal source-mask solution for the given set of designs based on all the models.
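The factorial-experiment idea can be sketched as follows; `evaluate_dof` is a toy stand-in for a real lithography-simulation merit function, and all parameter values are illustrative.

```python
import itertools

def evaluate_dof(na, source, pitch_nm):
    """Toy merit function standing in for a lithography simulation:
    returns a notional depth of focus (um) for a 1D pattern."""
    base = {"annular": 0.20, "quadrupole": 0.25, "dipole": 0.30}[source]
    return base * (pitch_nm / 100.0) / na  # tighter pitch, higher NA -> less DOF

# Full factorial over NA, source geometry, and pitch.
results = {
    (na, src, p): evaluate_dof(na, src, p)
    for na, src, p in itertools.product(
        [1.20, 1.35],                         # numerical aperture
        ["annular", "quadrupole", "dipole"],  # source geometry
        [80, 100, 130],                       # pitch (nm)
    )
}

def best_condition_for(pitch_nm):
    """Imaging condition with the largest merit value at the given pitch."""
    cands = {k: v for k, v in results.items() if k[2] == pitch_nm}
    return max(cands, key=cands.get)
```

In the actual study the merit would come from calibrated resist-model simulation and common process windows, so the winner can differ between the aerial image model and the physical resist model; the sketch only shows the enumeration-and-ranking structure.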
Sub-resolution assist feature (SRAF) insertion using a mask synthesis process based on pixel-based mask optimization schemes has been studied in recent years for various lithographic schemes, including 6% attenuated PSM (AttPSM) with off-axis illumination. This paper presents results of applying the pixel-based optimization technology to 6% and 30% AttPSM mask synthesis. We examine imaging properties including mask error enhancement factor (MEEF), critical dimension (CD) uniformity, and side-lobe printing for random contact hole patterns. We also discuss practical techniques for manipulating the raw complex shapes generated by the pixel-based optimization engine to ensure mask manufacturability.
The importance of line edge roughness (LER) and line width roughness (LWR) has long surpassed its effect on process control. As devices scale down, roughness effects have become a major hindrance to further advancement along Moore's law. Many studies over the years have examined the sensitivity of LER to changes in the materials and the process, which has been considered the main way to tackle the problem, especially through photoresist improvement. However, despite the increased development of DFM tools in recent years, little research has addressed LER sensitivity to layout, and what research exists has been limited to proximity effects.
In this paper, we study the sensitivity of LER to the layout around the transistor, defined by the gate structure of poly over AA (Active Area). Using different types and geometries of transistors, we found that the poly-gate LER is sensitive to the structure of the Active Area around it (source/drain from gate to contact, both shape and length). Using a local LER measurement (a moving standard deviation of the poly edge location), we found a clear correlation between the LER value and the length of the AA/STI boundary located at close range. Longer AA edges yield higher LER, as shown by comparing the gate LER of a dog-bone transistor with that of a classical transistor. Based on these results, we suggest that LER is sensitive not only to proximity effects but also to the layout of underlying layers, through light scattering off these edges during the lithographic process.
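The local LER measurement described above, a moving standard deviation of edge position, can be sketched as follows; the window size and the 3-sigma convention are assumptions.

```python
import numpy as np

def local_ler(edge_positions, window=9):
    """Local LER profile: 3-sigma of a moving standard deviation of the
    measured poly edge position (window length is an assumed parameter)."""
    x = np.asarray(edge_positions, dtype=float)
    out = np.empty(len(x) - window + 1)
    for i in range(len(out)):
        out[i] = 3.0 * x[i:i + window].std()  # roughness in a sliding window
    return out
```

Correlating this local profile with the length of the nearby AA/STI boundary at each position would then reproduce the paper's layout-sensitivity analysis.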
As semiconductor technologies move toward the 70nm generation and below, the contact hole is one of the most challenging features to print on wafer. There are two principal difficulties in defining small contact-hole patterns on wafer. One is insufficient process margin, in addition to poor resolution compared with line-space patterns. The other is that contact holes must be printed across a range of pitches, and random contact-hole patterns must be fabricated from time to time.
PIXBAR technology is a candidate that can help improve the process margin for random contact holes. PIXBAR lithography attempts to synthesize the input mask that leads to the desired output wafer pattern by inverting the forward model from mask to wafer. This paper uses a pixel-based mask representation, a continuous function formulation, and gradient-based iterative optimization techniques to solve the problem. The PIXBAR method yields improvement in process window with a short learning cycle in contact-hole pattern assist-feature testing.
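The pixel-based, gradient-driven inversion idea can be illustrated with a toy model; the blur-plus-sigmoid forward model and all step sizes below are simplifying assumptions, not the PIXBAR production model.

```python
import numpy as np

def blur(img, k=np.array([0.25, 0.5, 0.25])):
    """Separable low-pass filter standing in for the optical system."""
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def forward(mask, steep=20.0, thresh=0.5):
    """Mask -> aerial image (blur) -> resist pattern (sigmoid threshold)."""
    return 1.0 / (1.0 + np.exp(-steep * (blur(mask) - thresh)))

def optimize(target, iters=100, lr=0.5, steep=20.0):
    """Gradient descent on a continuous pixel mask to match the target."""
    mask = target.astype(float).copy()          # start from the design target
    for _ in range(iters):
        wafer = forward(mask, steep)
        err = wafer - target                    # L2 image-fidelity error
        # Chain rule through the sigmoid and the (symmetric) blur operator.
        grad = blur(err * steep * wafer * (1.0 - wafer))
        mask = np.clip(mask - lr * grad, 0.0, 1.0)
    return mask

target = np.zeros((16, 16))
target[6:10, 6:10] = 1.0                        # a single contact hole
mask = optimize(target)
printed = forward(mask) > 0.5                   # resist pattern from final mask
```

The continuous pixel values and the differentiable threshold are what make the gradient computation possible; a production flow would add mask-manufacturability constraints on the resulting shapes.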
In this work, we present a novel application of layout printability verification (LPV) to assess the scalability of physical
layout components from 32 nm to 28 and 22 nm with respect to process variability metrics. Starting from the description
of a mature LPV flow, the paper illustrates the core methodology for deriving a metric for design scalability. The
functional dependency between the scalability metric and the scaling factor can then be modeled to study the scaling
robustness of a set of representative layouts. Conversely, quantitative data on scalability limits can be used to determine
which design rules can be pushed and which must be relaxed in the transition from 32 to 22 nm.
A compact model for estimating delay variations due to double patterning lithography process variations on interconnect
layers is presented. Through process simulation and circuit analysis of one-dimensional interconnect topologies, the
delay response from focus, exposure, and overlay is studied. Using a process window defined by 10% linewidth change
from focus and exposure, and ±10% overlay error, a worst case change in delay of 3.9% is observed for an optimal
buffer circuit. It is shown that such delay responses can be modeled using a second order polynomial function of process
parameters. The impact of multiple interconnect variations in unique layout environments is studied using multiple
segments of interconnects each experiencing different variations. The overall delay responses are then examined, and it
is shown that for these layout structures, the separate variations combine in a manner that is both additive and
subtractive, thereby reducing the overall delay variations.
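Fitting a second-order polynomial delay model over the three process parameters can be sketched as follows; the synthetic "simulated" delays stand in for the process/circuit simulation outputs used in the paper.

```python
import numpy as np

def quad_features(F, E, O):
    """Second-order polynomial basis in focus, exposure, and overlay."""
    return np.column_stack([np.ones_like(F), F, E, O,
                            F * F, E * E, O * O, F * E, F * O, E * O])

# Synthetic delay responses over a normalized process window
# (coefficients are made up for illustration).
rng = np.random.default_rng(1)
F = rng.uniform(-1, 1, 200)   # normalized focus
E = rng.uniform(-1, 1, 200)   # normalized exposure dose
O = rng.uniform(-1, 1, 200)   # normalized overlay error
true_c = np.array([100.0, 1.2, -2.0, 0.5, 0.8, 0.3, 1.5, -0.4, 0.2, 0.1])
delay = quad_features(F, E, O) @ true_c        # ps

# Least-squares fit of the compact model.
coef, *_ = np.linalg.lstsq(quad_features(F, E, O), delay, rcond=None)

def delay_model(focus, exposure, overlay):
    f, e, o = (np.atleast_1d(v).astype(float) for v in (focus, exposure, overlay))
    return (quad_features(f, e, o) @ coef)[0]
```

Once fitted, the polynomial can be evaluated cheaply at any corner of the process window, e.g. to find the worst-case delay over the ±10% overlay range without rerunning process simulation.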
In double patterning lithography (DPL), overlay error between the two patterning steps at the same layer translates into CD variability. Since the CD uniformity budget is very tight, overlay control becomes a tough challenge for DPL. In this paper, we electrically evaluate overlay error for BEOL DPL with the goal of studying the relative effects of different overlay sources and the interactions of overlay control with design parameters. Experimental results show the following: (a) the electrical impact of overlay is not significant in the case of positive-tone DPL (< 3.4% average capacitance variation) and should be the basis for determining the overlay budget requirement; (b) when considering congestion, the electrical impact of overlay is reduced in positive-tone DPL; (c) Design for Manufacturability (DFM) techniques like wire spreading can have a large effect on the electrical impact of overlay (a 20% increase in spacing can reduce capacitance variation by 22%); (d) translation overlay has the largest electrical impact compared to other overlay sources; and (e) overlay in the y direction (x for horizontal metallization) has negligible electrical impact, and therefore the preferred routing direction should be taken into account in overlay sampling and alignment strategies.
Advances in lithography patterning have been the primary driving force in microelectronics manufacturing processes.
With the increasing gap between the wavelength of the optical source and feature sizes, the accompanying strong
diffraction effects have a significant impact on the pattern fidelity of on-silicon layout shapes. Layout patterns become
highly sensitive to those context shapes lying within the optical radius of influence. Under such optical proximity effects,
manufacturability hot spots such as necking and bridging may occur. Studies have shown that manufacturability hot
spots are pattern dependent in nature and should be considered at the design stage [1]. It is desirable to detect these hot
spots as early as possible in the design flow to minimize the costs for correction.
In this work, we propose a hot spot prediction method based on a support vector machine technique. Given the location
of a hot spot candidate and its context patterns, the proposed method is capable of efficiently predicting whether a
candidate would become a hot spot. It takes just seconds to classify thousands of samples. Due to its computational
efficiency, it is possible to use this method in physical design tools to rapidly assess the quality of printed patterns. We
demonstrate one such application in which we evaluate the layout quality in the boundary region of standard cells. In the
conventional standard cell layout optimization process, lithography simulation is the main layout verification method.
Since it is a very time-consuming process, the iterative optimization approach between simulation and layout correction
[2] takes a long time and only a limited number of context patterns can be explored. We show that with the proposed hot
spot prediction method, for each standard cell, a much greater context pattern space can be explored, and the context
sensitivity of a hot spot candidate located near a cell boundary can be estimated.
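The classification step can be illustrated with a minimal linear SVM trained by hinge-loss subgradient descent (standing in for a full SVM solver); the two layout features and the synthetic labels are hypothetical.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Linear SVM via hinge-loss subgradient descent; y in {-1, +1}."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # fold bias into the weights
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        viol = y * (Xb @ w) < 1                 # margin-violating samples
        grad = lam * w - (y[viol][:, None] * Xb[viol]).sum(axis=0) / len(Xb)
        w -= lr * grad
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)

# Synthetic training set. Hypothetical features per hotspot candidate:
# [minimum spacing in the context window (nm), local pattern density].
rng = np.random.default_rng(2)
X = np.column_stack([rng.uniform(40, 120, 200), rng.uniform(0.2, 0.8, 200)])
y = np.where(X[:, 0] < 70, 1.0, -1.0)           # narrow spacing -> hotspot (+1)
mu, sd = X.mean(axis=0), X.std(axis=0)
w = train_linear_svm((X - mu) / sd, y)
accuracy = (predict(w, (X - mu) / sd) == y).mean()
```

Once trained, prediction is a single dot product per candidate, which is what makes classifying thousands of context patterns in seconds feasible inside a physical design tool.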
This paper provides details of the implementation of a new design hotspot classification and detection system, and
presents results of using the system to detect hotspots in layouts. A large set of hotspot snippets is grouped into a small
number of clusters containing geometrically similar hotspots. A fast incremental clustering algorithm is used to perform
this task efficiently on very large datasets. Each cluster is analyzed to produce a characterization of a class of hotspots,
and a pattern matcher is used to detect hotspots in new design layouts based on the hotspot class descriptions.
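A single-pass incremental clustering scheme of the kind described might be sketched as follows; the distance-threshold criterion and the feature-vector encoding of snippets are assumptions about the algorithm, which the abstract does not spell out.

```python
import numpy as np

def incremental_cluster(vectors, threshold=1.0):
    """Single-pass clustering: each snippet vector joins the nearest existing
    cluster if within `threshold`, otherwise it seeds a new cluster."""
    centers, labels, counts = [], [], []
    for v in vectors:
        v = np.asarray(v, dtype=float)
        if centers:
            d = [np.linalg.norm(v - c) for c in centers]
            j = int(np.argmin(d))
            if d[j] <= threshold:
                counts[j] += 1
                centers[j] += (v - centers[j]) / counts[j]  # running mean
                labels.append(j)
                continue
        centers.append(v.copy())
        counts.append(1)
        labels.append(len(centers) - 1)
    return labels, centers

# Toy snippet vectors: two geometric groups.
labels, centers = incremental_cluster(
    [[0, 0], [0.1, 0], [5, 5], [5.1, 5], [0, 0.2]], threshold=1.0)
```

Because each snippet is compared only against current cluster centers, the cost stays near-linear in the number of snippets, which is what makes very large hotspot datasets tractable.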
As technology processes continue to shrink and aggressive resolution enhancement technologies (RET) and optical proximity correction (OPC) are applied, standard design rule constraints (DRC) sometimes fail to fully capture the concept of design manufacturability. DRC Plus augments standard DRC by applying fast 2D pattern matching to the design layout to identify problematic 2D patterns missed by DRC. DRC Plus offers several advantages over other DFM techniques: it offers a simple pass/no-pass criterion, it is simple to document as part of the design manual, it does not require compute-intensive simulations, and it does not require highly accurate lithographic models. These advantages allow DRC Plus to be inserted early in the design flow and enforced in conjunction with standard DRC.
The creation of DRC Plus rules, however, remains a challenge. Hotspots derived from lithographic simulation may be
used to create DRC Plus rules, but the process of translating a hotspot into a pattern is a difficult and manual effort. In
this paper, we present an algorithmic methodology to identify hot patterns using lithographic simulation rather than
hotspots. First, a complete set of pattern classes, which covers the entire design space of a sample layout, is computed.
These pattern classes, by construction, can be directly used as DRC Plus rules. Next, the manufacturability of each
pattern class is evaluated as a whole. This results in a quantifiable metric for both design impact and manufacturability,
which can be used to select individual pattern classes as DRC Plus rules. Simulation experiments show that hundreds of rules can be created using this methodology, well beyond what is possible by hand. Selective visual inspection
shows that algorithmically generated rules are quite reasonable. In addition to producing DRC Plus rules, this
methodology also provides a concrete understanding of design style, design variability, and how they affect
manufacturability.
As transistors are scaled down, undesirable performance mismatch between identically designed transistors increases, with a correspondingly greater impact on circuit performance and yield. Since line-edge roughness (LER) has been reported to be on the order of several nanometers and does not decrease as devices shrink, it has evolved into a critical problem for sub-45nm devices and may lead to serious device parameter fluctuations and performance limitations for future VLSI circuits. Although LER is a form of random variation, it must be analyzed because it causes device characteristics to fluctuate. In this paper, we present a new cell characterization methodology that uses the non-rectangular gate print-images generated by lithography and etch simulations with random LER variation to estimate the device performance of a sub-45nm design. A physics-based TCAD simulation tool is used to validate the accuracy of our LER model. We systematically analyze random LER by taking into consideration its impact on circuit performance, and suggest a maximum tolerance for LER to minimize performance degradation. We observed that the driving current is strongly affected by LER as the gate length becomes shorter. We performed lithography simulations across the 45nm process window to examine the LER impact on state-of-the-art industrial devices. Results show that the rms value of LER is as much as 10% of the nominal line edge, and the saturation current can vary by as much as 10% in our 2-input NAND cell.
A methodology for predicting on- and off-state transistor performance is described in this paper. In general, the flow consists of systematic Edge-Contour-Extraction (ECE) from devices under manufacture, followed by device simulation. Gate parameter extraction calculates an equivalent gate length and width (Leq, Weq) for non-rectangular gates. The methodology requires a model describing MOSFET current versus width for various gate lengths and voltages. Non-rectangular gates are described by a weighted sum of the currents from a discrete representation (i.e., the total gate current is determined by a weighted sum, since the current distribution is not homogeneous along the channel). Thus, for a given L, W, and V, the current can be obtained from the calibrated model. This approach is more general than previous work, as both Leq and Weq are determined for a given voltage, which permits the model to predict on- and off-current with a single SPICE netlist, as opposed to previous work that only considered adjustments to the channel length.
In this work, two transistor series at two different drawn pitch conditions (dense and isolated) were manufactured, followed by state-of-the-art ECE. The contours obtained directly from SEM measurements were used to perform an electrical device simulation for each individual transistor in the series.
This paper demonstrates the possibility of analyzing a transistor's electrical performance at nominal and off-process conditions. The presented simulation flow provides the advantage of early prediction of transistor performance, measuring a large volume of devices in a fast and accurate fashion.
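The weighted-sum idea can be sketched by slicing the gate along its width and summing per-slice currents; the I(L, W, V) expression below is a toy calibrated model, not the paper's measured MOSFET model.

```python
def slice_current(L_nm, W_nm, vgs):
    """Toy calibrated model: current rises with W, falls with L (arbitrary units)."""
    return 0.5 * W_nm * vgs / L_nm

def gate_current(edge_lengths_nm, slice_w_nm, vgs):
    """Total current of a non-rectangular gate: weighted sum over narrow
    width-slices, each with its own local length from contour extraction."""
    return sum(slice_current(L, slice_w_nm, vgs) for L in edge_lengths_nm)

def equivalent_length(edge_lengths_nm, slice_w_nm, vgs):
    """Leq of the rectangular gate drawing the same total current,
    obtained by inverting the toy I(L, W, V) model."""
    W_total = slice_w_nm * len(edge_lengths_nm)
    I = gate_current(edge_lengths_nm, slice_w_nm, vgs)
    return 0.5 * W_total * vgs / I
```

Note that under this slicing, Leq is a harmonic-style mean of the local lengths, so a contour with mixed narrow and wide regions yields a shorter Leq (higher current) than its arithmetic average length would suggest.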
A circuit-topology-driven approach to Optical Proximity Correction (OPC) is presented. By tailoring the statistical distribution of device critical dimension (CD) to the device's function in the circuit, and ensuring that the CD distribution stays within the correct (possibly variable) limits during process maturation and other process changes, the approach can be an effective tool for optimizing a circuit's performance/yield tradeoff in high-volume manufacturing. Calibre's proprietary Programmable Electrical Rule Checks (PERC) module is used to recognize the topology. Alternatively, an external static timing tool can be used to identify critical devices.
Rigorous 3D process and device simulation has been applied to transistors with curved channel shapes, which are inevitable due to optical proximity effects. The impact of channel curvature on transistor performance has been benchmarked using the universal Ion/Ioff coordinates. A systematic study of different non-rectangular channel shapes included straight lines at angles other than 90 degrees, and concave and convex shapes with different curvature radii. The study reveals that any deviation from the ideal rectangular shape affects transistor performance. The amount of enhancement or degradation depends on the particular shape, with on-current, threshold voltage, and off-current responding very differently to the same shape variation. The type and amount of performance variation also differ markedly between a distorted channel length (i.e., poly gate shape) and a distorted channel width (i.e., active layer shape). Degradation of over 50% in on-current at a fixed off-current has been observed in the most unfavorable cases for each of the two critical mask layers. On the other hand, a desirable off-current reduction of over 3x at a fixed on-current can be achieved by selecting a beneficial channel shape.
At the 40nm technology node, lithographic effects have a significant impact on the electrical characteristics of CMOS
transistors, which directly affects the performance of circuits containing these devices. Many of these effects are
systematic and intra-cell, and can therefore be accurately modeled by accounting for layout proximity effects during
cell characterization. However, because the final cell placement for real designs is not known at the time of
characterization, inter-cell proximity variations cannot be treated systematically at that time. We present a method to
analyze inter-cell proximity variation statistically, and approximate the effect of context as a random variable during
full chip verification. We then show an example analysis applied to standard logic cells in a 40nm technology.
The Sidewall Spacer Double Patterning (SSDP) technique, also referred to as Self-Aligned Double Patterning (SADP),
has been adopted as the primary double patterning solution for the 32nm technology node and below in flash memory
manufacturing. Many are now looking to migrate the technique to DRAM and random Logic layers. However, DRAM
and especially Logic have far more complex layout requirements than NAND-FLASH, requiring a more sophisticated
use of the SSDP technique. To handle these additional complexities, an automated electronic design tool was used to
calculate optimal splits of a target layout into two or three masks. The tool was programmed with MRC input rules for
both immersion and dry 193nm lithography, and on-wafer performance was tested. We discuss the
patterning needs from the trim-mask and the pad-mask and associated lithography process window requirements and
alignment accuracies necessary to pursue 32nm and 22nm half-pitch designs.
We developed a new contouring technology that executes contour re-alignment based on a matching of the measured
contour with the design data. Through this 'secondary' pattern matching (the 'primary' being the pattern recognition
performed by the SEM during the measurement itself), rotation errors and XY shifts are eliminated, placing the measured
contour at the correct position in the design coordinate system. In a subsequent phase, the method can generate
an averaged contour from multiple SEM images of identical structures, or from several contours that have been
accurately aligned by the algorithm we developed.
Compared with conventional contouring, the developed technology minimizes contouring errors and pattern
roughness effects and yields a contour that is representative of the pattern across the wafer. We call this contour a
"Measurement Based Averaged Contour" (MBAC).
We will show that an OPC model built from these MBACs is more robust than one built from contours that did not
receive this additional re-alignment.
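The re-alignment step described above — removing rotation errors and XY shifts before averaging — can be sketched as a 2-D least-squares rigid fit (Kabsch-style). This is an illustrative stand-in, not the paper's algorithm, and it assumes the measured and design contours have already been resampled into point correspondence:

```python
import numpy as np

def align_rigid(measured, design):
    """Least-squares rotation + translation (2-D Kabsch) mapping a measured
    contour onto the design coordinate system. Assumes row-wise point
    correspondence between the two (n, 2) arrays."""
    mc, dc = measured.mean(0), design.mean(0)
    H = (measured - mc).T @ (design - dc)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return (measured - mc) @ R.T + dc

def averaged_contour(contours, design):
    """MBAC-style average: align each measured contour, then take the
    pointwise mean across all aligned contours."""
    return np.mean([align_rigid(c, design) for c in contours], axis=0)
```

With exact correspondence the fit recovers the design coordinates exactly; with real SEM contours, the averaging step additionally suppresses pattern roughness.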
As VLSI technology scales down to sub-40nm process nodes, the application of EUV is still far from
reality, which forces the 193nm ArF light source to be used at the 32nm/22nm nodes. This large gap causes severe
diffraction effects, and reliable printing becomes a huge challenge. Various resolution enhancement technologies
(RETs) have been introduced to address this manufacturability problem, but facing continuously
shrinking VLSI feature sizes, RETs alone will not be able to overcome the difficulties. Because layout
patterns also have a strong influence on their own printability, a litho-friendly design methodology
that accounts for the process becomes necessary. In the very near future, double patterning technology (DPT) will be
needed at the 32nm/22nm node, and this new process will certainly bring major changes to the circuit design
phase.
In this paper, we try to solve the printability problem at the cell design level. Moving away from the conventional 2-D
standard-cell structure, we analyze the trend toward 1-D cells based on three emerging double
patterning technologies. Focusing on dense-line printing with off-axis illumination, the line-end gap
distribution is studied to guide our methodology for optimal cell design.
Due to the corner rounding effect in the litho process, it is hard to make the wafer image as sharp as the
drawn layout near two-dimensional patterns in IC design [1, 2]. The inevitable gap between the design and
the wafer image makes two-dimensional pattern correction complex and sensitive to the OPC
recipe. There are, moreover, many different kinds of two-dimensional patterns, for example concave
corners, convex corners, jogs, line-ends, and space-ends. For metal layers in particular, many jogs are
created by rule-based OPC, so OPC recipe developers have to spend a great deal of effort handling the
different two-dimensional fragments based on their own experience.
In this paper, a general method is proposed to simplify the correction of two-dimensional structures.
The design is first smoothed, and the simulation sites are then moved from the drawn layer to this new
layer; that is, the smoothed layer is used as the OPC target instead of the drawn Manhattan pattern.
Using this method, OPC recipe tuning becomes easier. In addition, the convergence of
two-dimensional patterns is improved and the runtime is thus reduced.
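The idea of replacing the drawn Manhattan polygon with a corner-rounded target can be sketched as follows. This is only an illustrative stand-in for the paper's smoothing (the densification step and the moving-average window are hypothetical choices):

```python
import numpy as np

def densify(poly, step=1.0):
    """Resample a closed polygon boundary at roughly `step` nm spacing."""
    pts = []
    n = len(poly)
    for i in range(n):
        a, b = np.asarray(poly[i]), np.asarray(poly[(i + 1) % n])
        k = max(1, int(np.hypot(*(b - a)) / step))
        for t in np.arange(k) / k:
            pts.append(a + t * (b - a))
    return np.array(pts)

def smooth_target(poly, step=1.0, window=9):
    """Replace a Manhattan drawn polygon with a corner-rounded OPC target:
    densify the boundary, then apply a circular moving average. `window`
    controls the effective corner radius (illustrative parameter)."""
    pts = densify(poly, step)
    kernel = np.ones(window) / window
    sm = np.empty_like(pts)
    for d in range(2):  # wrap the boundary so the average is circular
        wrapped = np.concatenate([pts[-window:, d], pts[:, d], pts[:window, d]])
        sm[:, d] = np.convolve(wrapped, kernel, "same")[window:-window]
    return sm
```

Simulation sites placed on the smoothed contour then see a target that already resembles what the litho process can print, which is the source of the easier recipe tuning claimed above.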
Rule-based fragmentation has been used for many years in Optical Proximity
Correction (OPC). It breaks polygon edges into small pieces according to
pre-defined rules based on topography and context before model-based OPC.
Although this works well in most cases, it sometimes cannot place fragmentation points at the
proper positions, which are determined by the inherent optical and process requirements.
In this paper, an adaptive fragmentation is proposed. The polygon is first dissected
according to the traditional rules. In the following iterations, the edges are re-fragmented:
some fragments are deleted and some new fragments are created according
to their image properties. Using this method, the dissection points can be placed in the
right positions, improving correction accuracy while eliminating unwanted
fragments.
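One re-fragmentation pass of the kind described — merging redundant fragments and splitting where the image changes quickly — could look like the following sketch. The use of per-fragment edge-placement error (EPE) as the "image property" and the two thresholds are assumptions for illustration:

```python
def refragment(fragments, epe, split_tol=2.0, merge_tol=0.2):
    """One adaptive-fragmentation pass (illustrative): `fragments` are
    (start, end) positions along an edge, `epe` the per-fragment
    edge-placement error from simulation. Fragments whose EPE is nearly
    identical to their neighbor's are merged; fragments across a steep
    EPE change are split at their midpoint."""
    out, out_epe = [fragments[0]], [epe[0]]
    for f, e in zip(fragments[1:], epe[1:]):
        if abs(e - out_epe[-1]) < merge_tol:        # redundant point: merge
            out[-1] = (out[-1][0], f[1])
            out_epe[-1] = (out_epe[-1] + e) / 2
        elif abs(e - out_epe[-1]) > split_tol:      # steep change: split
            mid = (f[0] + f[1]) / 2
            out += [(f[0], mid), (mid, f[1])]
            out_epe += [e, e]
        else:
            out.append(f)
            out_epe.append(e)
    return out, out_epe
```

Iterating such a pass between OPC iterations moves dissection points toward the positions the image actually demands, rather than those fixed by the initial rules.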
As the semiconductor industry moves to the 45nm node and beyond, the tolerable
lithography process window significantly shrinks due to the combined use of high NA
and low k1 factor. This is exacerbated by the fact that the usable depth of focus at 45nm
node for critical layers is 200nm or less. Traditional Optical Proximity Correction (OPC)
computes the optimal pattern layout only at the
nominal process condition (nominal defocus and nominal exposure dose), according to an
OPC model calibrated at that condition; this may put the post-OPC layout at
non-negligible risk of patterning failure due to inevitable process variation (mainly
defocus and dose variations). With a small sacrifice at the nominal condition, process
variation aware OPC can greatly enhance the robustness of post-OPC layout patterning in
the presence of defocus and dose variation. There is also an increasing demand for
through-process-window lithography verification of post-OPC circuit layouts. The cornerstone
of successful process-variation-aware OPC and lithography verification is an
accurately calibrated continuous process window model, a continuous function
of defocus and dose. This calibrated model needs to be able to interpolate and extrapolate
within the usable process window.
Based on Synopsys' OPC modeling software packages ProGen and ProGenPlus, we
developed an automated process window (PW) modeling module, which can build a
process-variation-aware PW OPC model with continuously adjustable
process parameters: defocus and dose. The calibration of this continuous PW model was
performed in a single calibration process using silicon measurement at nominal condition
and off-focus-off-dose conditions. Through the example of several process window
models for layers at the 45nm technology node, we demonstrated that this novel continuous
PW modeling approach can achieve very good performance both at nominal condition
and at interpolated or extrapolated off-focus-off-dose conditions.
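A continuous PW model of the kind described — a single function of defocus and dose fitted once from nominal and off-condition measurements — can be sketched with a small polynomial basis. The basis (even powers of focus, reflecting the usual through-focus symmetry) and all the numbers below are illustrative assumptions, not the ProGen formulation:

```python
import numpy as np

def fit_pw_model(focus, dose, cd):
    """Fit CD(f, d) = c0 + c1*d + c2*f^2 + c3*d*f^2 by least squares
    over all calibration conditions at once (single calibration pass)."""
    A = np.column_stack([np.ones_like(focus), dose, focus**2, dose * focus**2])
    coef, *_ = np.linalg.lstsq(A, cd, rcond=None)
    return coef

def predict(coef, f, d):
    """Evaluate the continuous model at any (defocus, dose) point."""
    return coef @ np.array([1.0, d, f**2, d * f**2])

# Hypothetical calibration data: nominal plus off-focus/off-dose points.
f = np.array([0.0, 0.0, 0.05, -0.05, 0.05])   # um defocus
d = np.array([1.0, 1.05, 1.0, 1.0, 1.05])     # relative dose
cd = 45 + 10 * (d - 1) - 800 * f**2           # nm, synthetic "silicon" data
coef = fit_pw_model(f, d, cd)
```

Once fitted, `predict` interpolates and extrapolates anywhere inside the usable window, which is exactly the property the verification flow above relies on.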
Double patterning is seen as the prime technology for keeping Moore's law on track while EUV technology is
still maturing into production worthiness. As previously seen for alternating phase-shift mask
technology [1], layout compliance for double patterning is not trivial [2,3], and blind shrinks of anything but
the most simplistic existing layouts will not be directly suitable for double patterning. Evaluating a
production worthy double patterning engine with highly non-compliant layouts would put unrealistic
expectations on that engine and provide metrics with poor applicability for eventual large designs. The true
production use-case would be for designs that have at least some significant double patterning compliance
already enforced at the design stage. With this in mind a set of ASIC design blocks of different sizes and
complexities were created that were double patterning compliant. To achieve this, a set of standard cells
were generated, which individually and in isolation were double patterning compliant, for multiple layers
simultaneously. This was done using the automated standard-cell creation tool Cadabra™ [4]. To create a
full ASIC, however, additional constraints were added to make sure compliance would not be broken
across the boundaries between standard cells placed next to each other [5]. These standard cells were
then used to create a variety of double patterning compliant ASICs, using IC Compiler™ to place the cells
correctly. With a compliant layout in hand, checks were made to see whether the constraints imposed at the micro level
really do ensure a fully compliant layout on the whole chip, and whether the coloring engine could cope with such
large datasets. A production worthy double patterning engine is ideally distributable over multiple
processors [6,7] so that fast turn-around time can be achievable on even the largest designs. We
demonstrate the degree of linearity of scaling achievable with our double patterning engine. These results
can be understood together with metrics such as the distribution of the sizes of networks requiring coloring
resulting from these designs.
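The coloring step at the heart of such an engine is, at its simplest, a 2-coloring of the conflict graph: features closer than the single-exposure pitch get an edge and must land on different masks, and an odd cycle means the layout is not decomposable. A minimal BFS sketch (the full production engine is of course distributed and far more elaborate):

```python
from collections import deque

def color_double_patterning(n, conflicts):
    """2-color a conflict graph: nodes are layout features, edges join
    features closer than the single-exposure pitch. Returns a 0/1 mask
    assignment per feature, or None when an odd cycle makes the layout
    non-decomposable."""
    adj = [[] for _ in range(n)]
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)
    color = [None] * n
    for seed in range(n):                 # one BFS per connected network
        if color[seed] is not None:
            continue
        color[seed] = 0
        queue = deque([seed])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None           # odd cycle: not DP-compliant
    return color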
Overlay performance and control requirements have become crucial for achieving high yield and reducing rework.
The increasing discrepancy between hardware solutions and overlay requirements, especially in sub-40nm dynamic random access
memory (DRAM) devices, motivates us to study process budgeting techniques and reasonable validation methods. In this paper, we
introduce SMEM (Statistical process Margin Estimation Method) to design DRAM cell architectures that consider critical
dimension (CD) and overlay variations from the perspectives of both cell architecture and manufacturability. We also propose a
method to determine overlay specifications. Using these methodologies, we successfully obtained optimized sub-40nm DRAM cells,
accurately estimated process tolerances, and determined overlay specifications for all layers.
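A statistical margin estimate of the kind SMEM performs typically combines independent CD and overlay variations by root-sum-square against a geometric budget. The sketch below is an illustrative flavor of that calculation, not the paper's formulas; all numbers are hypothetical:

```python
import numpy as np

def process_margin(nominal_enclosure, sigmas):
    """Combine independent CD and overlay variations (1-sigma values) by
    root-sum-square and report the 3-sigma margin left in the cell.
    A positive result means the statistical budget is met."""
    total_3sigma = 3 * np.sqrt(np.sum(np.square(sigmas)))
    return nominal_enclosure - total_3sigma

# Hypothetical sub-40nm contact-to-gate enclosure budget (nm).
sigmas = [1.5, 1.2, 2.0]                # 1-sigma: contact CD, gate CD, overlay
margin = process_margin(12.0, sigmas)   # positive => spec is met
```

Solving the same relation for the overlay sigma that drives the margin to zero is one way to back out an overlay specification per layer.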
Automatic layout optimization is becoming an important component of the DfM work flow, as the number of
recommended rules and the increasing complexity of the trade-offs between them make manual optimization increasingly
difficult and time-consuming. Automation is rapidly becoming the most consistent way to obtain quantifiable DfM
improvements, with their inherent yield and performance benefits, for standard cells and memory blocks. Takumi auto-fixer
optimization of Common Platform layouts resulted in improved parametric tolerance and improved DfM metrics,
while the cell architecture (size and routability) and the electrical characteristics (speed/power) of the layouts remained
intact. Optimization was performed on both GDS-style layouts for standard cells, and on CDBA (Cadence Data Base
Architecture)-style layout for memory blocks. This paper will show how trade-offs between various DfM requirements
(CAA, recommended rules, and litho) were implemented, and how optimization for memories generated by a compiler
was accomplished. Results from this optimization work were verified on a 45nm design by model- and rule-based DfM
checking and by wafer yields.
We have developed a highly integrated method of mask and silicon metrology. The method adopts a
metrology management system based on DBM (Design Based Metrology), i.e., highly accurate
contouring created by the edge detection algorithms used in mask CD-SEM and silicon CD-SEM. We have
verified its accuracy, stability, and reproducibility in integration experiments; the accuracy
is comparable with that of mask and silicon CD-SEM metrology. In this report, we introduce the
experimental results and the application. As design-rule shrinkage for semiconductor devices advances,
OPC (Optical Proximity Correction) becomes aggressively dense in RET (Resolution Enhancement
Technology). However, from the viewpoint of DFM (Design for Manufacturability), the cost of data
processing for advanced MDP (Mask Data Preparation) and mask production is a problem. This trade-off
between RET and mask production is a big issue in the semiconductor market, especially in the mask business.
Looking at the silicon device production process, information sharing is not well organized between
the design and production sections. Design data created with OPC and MDP should be linked to
process control in production, but design data and process-control data are optimized independently.
Thus, we provide a DFM solution: advanced integration of mask metrology and silicon metrology.
The system we propose here is composed of the following steps.
1) Design based recipe creation:
Patterns on the design data are specified for metrology. This step is fully automated, since it is interfaced
with hot-spot coordinate information detected by various verification methods.
2) Design based image acquisition:
Images of mask and silicon are acquired automatically by a recipe based on the CD-SEM pattern design.
This is a robust automated step because a wide range of design data is used for the image
acquisition.
3) Contour profiling and GDS data generation:
An image profiling process is applied to the acquired image based on the profiling method of the field-proven
CD metrology algorithm. The detected edges are then converted to GDSII format, a
standard format for design data, and utilized by various DFM systems such as simulation.
In other words, by integrating the pattern shapes of mask and silicon formed during the manufacturing process
into GDSII format, it becomes possible to bridge highly accurate pattern profile information over to the
design field of various EDA systems.
These steps are fully integrated into the design data and automated. Bi-directional cross-probing between mask
data and process-control data is enabled by linking them. This method is a solution for total optimization
covering design, MDP, mask production, and silicon device production, and can therefore be regarded
as a strategic DFM approach in semiconductor metrology.
In the state-of-the-art integrated circuit industry, for transistor gate lengths of 45nm and beyond, the sharp distinction between
design and fabrication phases is becoming inadequate for fast product development. Lithographical information along
with design rules has to be passed from foundries to designers, as these effects have to be taken into consideration during
the design stage to ensure a lithographically friendly design. This in turn demands new communication channels
between designers and foundries to provide the needed litho information. In the case of fabless design houses this
requirement faces problems such as incompatible EDA platforms at both ends and confidential information
that cannot be revealed by the foundry back to the design house.
In this paper we propose a framework that demonstrates a systematic approach to match any
lithographical OPC solution from different EDA vendors into Calibre™. The goal is to export how the design will look
on wafer from the foundry to the designers without revealing how, and without requiring installation of the same EDA tools.
In the developed framework, we demonstrate the flow used to match all the steps used in developing OPC, starting from
the lithography modeling and going through the OPC recipe. This is done by means of automated scripts that
characterize the existing OPC foundry solution and identify compatible counterparts in the Calibre™ domain to
generate an encrypted package that can be used on the designers' side.
Finally, the framework is verified using a developed test case.
During the deep-submicron semiconductor manufacturing process, Chemical-Mechanical Polishing (CMP) is
applied to conductor layers to create a planar surface over the wafer. To ensure layer uniformity after CMP and to avoid
metal dishing and erosion effects, dummy metal is usually inserted into the layers either by designers or foundries.
However, adding dummy metal polygons can have an undesirable impact on the capacitance, and hence the timing of the
clock paths and signal paths in the design.
Chartered and Magma jointly developed and validated a methodology combining the router's timing-aware track fill
followed by foundry metal fill, to minimize the timing impact of the metal fill on the design while achieving high-quality
copper uniformity.
In this paper, we show that the proposed metal fill methodology outperforms the conventional approaches of metal fill or
track fill. The proposed metal fill was validated using Static Timing Analysis, and an accurate silicon-calibrated CMP
model was used to compare copper (Cu) thickness distributions. In the 65nm case study, the timing impact
on the design, in terms of the total number of nets with slack degradation, was reduced from 4% to 0.24%, and the
copper uniformity, in terms of the standard deviation of the copper density, improved from 0.192 to 0.142 on
average. The deployment of the proposed metal fill is integrated seamlessly into the reference design flow.
One of the critical challenges facing lithographers is how to optimize the numerical aperture (NA) and
illumination source intensity and polarization distribution to deliver the maximum process window for a
given design in manufacturing. While the maximum NA has topped out at 1.35, the available illuminator
options continue to increase, including the eventual possibility of dynamically programmable pixelized
illumination to deliver nearly any imaginable source shape profile. New approaches to leverage this
capability and simultaneously optimize the source and mask shapes (SMO) on a per-design basis are
actively being developed. Even with the available "standard" illumination source primitive shapes,
however, there exists a huge range of possible choices available to the lithographer. In addition, there are
multiple conceivable cost functions which could be considered when determining which illumination to
utilize for a specified technology and mask layer. These are related to the primary lithographic variables of
exposure dose, focus, and mask size, and include depth of focus (DOF), exposure latitude (EL), normalized
image log slope (NILS), image contrast, and mask error enhancement factor (MEEF). The net result can be
a very large quantity of simulation data which can prove difficult to assess, and often manifest different
extrema, depending upon which cost function is emphasized. We report here on the use of several analysis
methods, including process variability bands, as convenient metrics to optimize full-chip post-OPC CD
control in conjunction with illumination optimization tooling. The result is a more thorough and versatile
statistical analysis capability than what has traditionally been possible with a CD cutline approach. The
method is analogous to conventional process window CD plots used in lithography for many years.
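The process-variability-band metric above boils down to, per measurement site, the spread of printed CD across the whole focus-dose window. A minimal sketch of that statistic (the shape convention, nominal-index lookup, and 10% tolerance are illustrative assumptions):

```python
import numpy as np

def pv_band_stats(cd_grid, nominal_idx, tol=0.1):
    """Per-site process-variability statistics. `cd_grid` has shape
    (sites, focus_conditions, dose_conditions); the PV-band width is the
    max-min spread of printed CD across the whole window, and sites whose
    spread relative to the nominal CD exceeds `tol` are flagged."""
    cd = cd_grid.reshape(cd_grid.shape[0], -1)
    width = cd.max(axis=1) - cd.min(axis=1)
    nominal = cd_grid[:, nominal_idx[0], nominal_idx[1]]
    flagged = np.flatnonzero(width / nominal > tol)
    return width, flagged
```

Aggregating `width` over a full chip gives the kind of statistical distribution that replaces a single-cutline CD comparison when ranking candidate illumination settings.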
Colin Hui, Xian Bin Wang, Haigou Huang, Ushasree Katakamsetty, Laertis Economikos, Mohammed Fayaz, Stephen Greco, Xiang Hua, Subramanian Jayathi, et al.
Proceedings Volume Design for Manufacturability through Design-Process Integration III, 72751R (2009) https://doi.org/10.1117/12.816556
Chemical Mechanical Polishing (CMP) has been used in the manufacturing of the copper (Cu) damascene process.
It is well known that dishing and erosion occur during CMP, and that they strongly depend on metal density and line
width. The inherent thickness and topography variations become an increasing concern for today's designs running
through advanced process nodes (sub-65nm). Excessive thickness and topography variations can have major impacts on
chip yield and performance; as such, they need to be accounted for during the design stage.
In this paper, we will demonstrate an accurate physics based CMP model and its application for CMP-related hotspot
detection. Model based checking capability is most useful to identify highly environment sensitive layouts that are prone
to early process window limitation and hence failure. Model based checking as opposed to rule based checking can
identify more accurately the weak points in a design and enable designers to provide improved layout for the areas with
highest leverage for manufacturability improvement. Further, CMP modeling has the ability to provide information on
interlevel effects such as copper puddling from underlying topography that cannot be captured in Design-for-
Manufacturing (DfM) recommended rules.
The model has been calibrated against silicon produced with the 45nm process from Common Platform
(IBM-Chartered-Samsung) technology. It is one of the earliest 45nm CMP models available today. We will show that the
CMP-related hotspots can often occur around the spaces between analog macros and digital blocks in the SoC designs.
With the help of the CMP model-based prediction, the design, the dummy fill or the placement of the blocks can be
modified to improve planarity and eliminate CMP-related hotspots. The CMP model can be used to pass design
recommendations to designers to improve chip yield and performance.
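The dependence of dishing and erosion on linewidth and density mentioned above can be illustrated with a toy empirical relation. This is not the calibrated 45nm model — the functional form and constants below are purely illustrative:

```python
def dishing_erosion(density, linewidth, k_dish=0.08, k_erode=40.0):
    """Toy empirical CMP post-polish loss (nm): dishing grows with the
    copper linewidth, erosion with the local pattern density. Constants
    are illustrative, not calibrated values."""
    dishing = k_dish * linewidth        # Cu loss in wide lines
    erosion = k_erode * density**2      # oxide loss in dense regions
    return dishing + erosion
```

Evaluating such a relation over a windowed density/linewidth map of the layout is what lets a model-based checker flag hotspots, e.g. at the boundary between a dense digital block and a sparse analog macro.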
To provide fabless designers the same advantage as Integrated Device Manufacturers (IDMs), a design-oriented litho
model has been calibrated and an automated lithography (litho) hotspot detection and fixing flow has been implemented
during final routing optimization.
This paper shows how a design-oriented litho model was built and used to automate a litho hotspot fixing design flow.
The model, calibrated and validated against post-OPC contour data at 99%, was embedded into a Litho Physical
Analyzer (LPA) tech file. It allowed the litho contour of drawn layouts to be simulated at full chip level to detect litho
hotspots and to provide fixing guidelines. Automated hotspot fixing was hence made possible by feeding the guidelines
to the fixing tools in an industry-based integrated flow. Post-fixing incremental checks were also performed to converge
to a clean design.
Rapid-thermal annealing (RTA) is widely used in scaled CMOS fabrication in order to achieve ultra-shallow junctions.
However, recent results report systematic threshold voltage (Vth) change and increased device variation due to the RTA
process [1][2]. The amount of such changes further depends on layout pattern density. In this work, a suite of
thermal/TCAD simulation and compact models to accurately predict the change of transistor parameters is developed.
The modeling results are validated with published silicon data, improving design predictability for advanced
manufacturing processes.
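The layout-pattern-density dependence of the RTA-induced Vth shift can be sketched as a density map blurred by a thermal kernel. This is a hedged illustration of the general idea, not the paper's TCAD/compact model; the Gaussian kernel width and the mV-per-density coefficient are assumptions:

```python
import numpy as np

def rta_vth_shift(density_map, kernel_um=50.0, pitch_um=5.0, k_mv=30.0):
    """Sketch of a density-aware RTA model: local pattern density (1-D
    profile, one sample per `pitch_um`) is blurred by a Gaussian whose
    width mimics lateral thermal diffusion during the anneal, and the Vth
    shift (mV) is taken proportional to the blurred density minus the
    wafer-mean density. All coefficients are illustrative."""
    sigma = kernel_um / pitch_um                       # kernel in grid cells
    x = np.arange(-3 * int(sigma), 3 * int(sigma) + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()                                       # normalize the kernel
    blurred = np.convolve(density_map, g, mode="same")
    return k_mv * (blurred - density_map.mean())       # mV of Vth shift
```

A uniform-density layout produces no shift under this model, while a density step produces opposite-sign shifts on its two sides, which is the qualitative layout dependence the abstract describes.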