As new microelectronic designs are being developed, the demands on image overlay and pattern dimension control are compounded by requirements that pattern edge placement errors (EPEs) be at single-nanometer levels. Scanner performance plays a key role in determining the locations of pattern edges at different device layers, not only through overlay but also through imaging performance. The imaging contributes to edge displacement through variations of the image dimensions and by shifting the images from their target locations. We discuss various aspects of advanced image control relevant to a 10-nm node integrated circuit design. We review a range of pattern edge placement issues directly linked to pattern imaging. We analyze the impact of different pattern design and scanner-related edge displacement drivers. We present two examples of imaging strategies to pattern logic device metal layer cuts. We analyze the EPEs of the cut images resulting from optimized layout design and scanner setup, and we draw conclusions on edge placement control versus imaging performance requirements.
Demand for an ever-increasing level of microelectronics integration continues unabated, driving the reduction of integrated circuit critical dimensions and escalating the requirements for image overlay and pattern dimension control. The challenges to meet these demands are compounded by the requirement that pattern edge placement errors be at single-nanometer levels. Layout design, together with the performance of the patterning tools, plays a key role in determining the locations of pattern edges at different device layers. However, complexities of the layout design often lead to stringent tradeoffs for viable optical proximity correction and imaging strategy solutions. As a result, in addition to scanner overlay performance, pattern imaging plays a key role in pattern edge placement. The imaging contributes to edge displacement by impacting the image dimensions and by shifting the images relative to their target locations. In this report we discuss various aspects of advanced image control at 10 nm integrated circuit design rules. We analyze the impact of pattern design and scanner performance on pattern edges. We present an example of complex, three-step litho-etch patterning involving immersion scanners. We draw conclusions on edge placement control when complex images interact with wafer topography.
This report presents a model to predict, analyze, and monitor pattern edge placement errors occurring during integrated circuit manufacture. The edge placement errors are driven by the overlay and imaging capabilities of scanners and patterning tools. The model can be used to analyze the impact of various imaging strategies on the pattern placement statistics of the layers composing ICs. Such analysis is essential to both IC designers and lithography engineers striving to successfully fabricate complex designs at economical manufacturing yields. The report discusses key contributors to the image edge placement errors and presents examples of edge placement predictions based on scanner records. The edge placement error examples presented in this report are based on scanner overlay and CD uniformity performance for the current generation of integrated circuit designs.
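As a simple illustration of how such a model can combine scanner records, the sketch below estimates an edge placement error budget from overlay and CD-uniformity statistics. The combination rule (half of the CD excursion added in quadrature with overlay, plus a systematic CD-offset term) and the numerical inputs are illustrative assumptions, not the report's actual model.

```python
import numpy as np

def epe_budget(overlay_3s_nm, cdu_3s_nm, cd_offset_nm=0.0):
    """Illustrative single-edge placement error budget (nm, 3-sigma).

    Assumes edge displacement is driven by pattern placement (overlay) plus
    half of the CD excursion, with the random contributors statistically
    independent (RSS combination) and the mean CD offset added linearly.
    """
    random_part = np.sqrt(overlay_3s_nm**2 + (0.5 * cdu_3s_nm)**2)
    return abs(0.5 * cd_offset_nm) + random_part

# Hypothetical scanner records: 2.0 nm overlay and 1.5 nm CDU (both 3-sigma),
# with a 0.4 nm mean CD offset from target.
print(f"EPE budget ~ {epe_budget(2.0, 1.5, 0.4):.2f} nm (3-sigma)")
```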
Of keen interest to the IC industry are advanced computational lithography applications such as Optical Proximity Correction of IC layouts (OPC), scanner matching by optical proximity effect matching (OPEM), and Source Optimization (SO) and Source-Mask Optimization (SMO) used as advanced reticle enhancement techniques. The success of these tasks is strongly dependent on the integrity of the lithographic simulators used in computational lithography (CL) optimizers. Lithographic mask models used by these simulators are key drivers impacting the accuracy of the image predictions and, as a consequence, determine the validity of these CL solutions. Much of the CL work involves Kirchhoff mask models, a.k.a. the thin-mask approximation, simplifying the treatment of the mask near-field images. On the other hand, imaging models for hyper-NA scanners require that the interactions of the illumination fields with the mask topography be rigorously accounted for by numerically solving Maxwell's equations. The simulators used to predict the image formation in the hyper-NA scanners must rigorously treat the mask topography and its interaction with the scanner illuminators. Such imaging models come at a high computational cost and pose challenging accuracy vs. compute time tradeoffs. An additional complication comes from the fact that the performance metrics used in computational lithography tasks show highly non-linear responses to the optimization parameters. Finally, the numbers of patterns used for tasks such as OPC, OPEM, SO, or SMO range from tens to hundreds. These requirements determine the complexity and the workload of the lithography optimization tasks. The tools to build rigorous imaging optimizers based on the first principles governing imaging in scanners are available, but the quantifiable benefits they might provide are not very well understood. To quantify the performance of OPE matching solutions, we have compared the results of various imaging optimization trials obtained with Kirchhoff mask models to those obtained with rigorous models involving solutions of Maxwell's equations. In both sets of trials, we used large numbers of patterns, with specifications representative of CL tasks commonly encountered in hyper-NA imaging. In this report we present OPEM solutions based on various mask models and discuss the models' impact on hyper-NA scanner matching accuracy. We draw conclusions on the accuracy of results obtained with thin-mask models vs. the topographic OPEM solutions. We present various examples representative of scanner image matching for patterns representative of the current generation of IC designs.
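For reference, a Kirchhoff (thin-mask) model replaces the mask near field with the layout's ideal transmission function, which is what makes it so much cheaper than a rigorous Maxwell solution. The minimal sketch below builds such a transmission for a hypothetical 6%-transmission attenuated PSM line/space pattern; the mask type, dimensions, and sampling are assumptions for illustration, and no topography effects are represented by construction.

```python
import numpy as np

def kirchhoff_mask_1d(pitch_nm, line_nm, n_pts=512, t_abs=0.06, phase_deg=180.0):
    """Thin-mask (Kirchhoff) transmission of one period of a line/space pattern
    on an assumed attenuated PSM: the absorber transmits a small fraction of the
    field with a 180-degree phase shift, the opening transmits 1. Mask topography
    and edge scattering are ignored by construction."""
    x = np.linspace(0.0, pitch_nm, n_pts, endpoint=False)
    absorber = np.abs(x - pitch_nm / 2) < line_nm / 2
    t = np.ones(n_pts, dtype=complex)
    t[absorber] = np.sqrt(t_abs) * np.exp(1j * np.deg2rad(phase_deg))
    return x, t

x, t = kirchhoff_mask_1d(pitch_nm=90.0, line_nm=45.0)
orders = np.fft.fft(t) / t.size                 # diffraction-order amplitudes
print("0th / 1st order amplitudes:", abs(orders[0]), abs(orders[1]))
```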
Of keen interest to the IC industry are advanced computational lithography applications such as Optical Proximity Correction, OPC, Optical Proximity Effect matching, OPEM, and Source-Mask Optimization, SMO. Lithographic mask models used by the simulators underlying these applications, and their interactions with scanner illuminator models, are key drivers impacting the accuracy of the image predictions of the computational lithography applications. To construct a topographic mask model for a hyper-NA scanner, the interactions of the fields with the mask topography have to be accounted for by numerically solving Maxwell's equations. The simulators used to predict the image formation in the hyper-NA scanners have to rigorously treat the topographic masks and the interaction of the mask topography with the scanner illuminators. Such mask models come at a high computational cost and pose challenging accuracy vs. compute time tradeoffs. To address the high costs of the computational lithography for hyper-NA scanners, we have adopted the Reduced Basis, RB, method to efficiently extract accurate near-field images from a modest sample of rigorous, Finite Element, FE, solutions of Maxwell's equations for the topographic masks. The combination of RB and FE methods provides the means to efficiently generate near-field images of the topographic masks illuminated at oblique angles representing complex illuminator designs. The RB method's ability to provide reliable results from a small set of pre-computed, rigorous results provides a potentially tremendous computational cost advantage. In this report we present the RB/FE technique and discuss the accuracy vs. compute time tradeoffs of hyper-NA imaging models incorporating topographic mask images obtained with the RB/FE method. The examples we present are representative of the analysis of the optical proximity effects for the current generation of IC designs.
Photolithography simulations are widely used to predict, to analyze and to design imaging
processes in scanners used for IC manufacture. The success of these efforts is strongly dependent
on their ability to accurately capture the key drivers responsible for the image formation. Much
effort has been devoted to understanding the impacts of illuminator and projection lens models on
the accuracy of the lithography simulations [1-3]. However, of equal significance is the role of
the mask models and their interactions with the illuminator models.
Image modeling and simulation are critical to extending the limits of leading edge lithography technologies used
for IC making. Simultaneous source mask optimization (SMO) has become an important objective in the field of
computational lithography. SMO is considered essential to extending immersion lithography beyond the 45nm
node. However, SMO is computationally extremely challenging and time-consuming. The key challenges are due
to run time vs. accuracy tradeoffs of the imaging models used for the computational lithography.
We present a new technique to be incorporated in the SMO flow. This new approach is based on the reduced
basis method (RBM) applied to the simulation of light transmission through the lithography masks. It provides a
rigorous approximation to the exact lithographical problem, based on fully vectorial Maxwell's equations. Using
the reduced basis method, the optimization process is divided into an offline and an online step. In the offline
step, an RBM model with variable geometrical parameters is built self-adaptively using a Finite Element
Method (FEM) based solver. In the online step, the RBM model can be solved very fast for arbitrary illumination
and geometrical parameters, such as dimensions of OPC features, line widths, etc. This approach dramatically
reduces computational costs of the optimization procedure while providing accuracy superior to the approaches
involving simplified mask models. RBM furthermore provides rigorous error estimators, which assure the quality
and reliability of the reduced basis solutions.
We apply the reduced basis method to a 3D SMO example. We quantify performance, computational costs
and accuracy of our method.
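The offline/online split described above can be illustrated generically. The sketch below builds a reduced basis from a few snapshot solutions of a parameterized linear system (standing in for the FEM discretization of Maxwell's equations) and then solves the projected low-dimensional system online; the plain SVD/POD snapshot selection and the toy parameter dependence are simplifying assumptions, whereas the paper's method constructs the basis self-adaptively with rigorous error estimators.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 400                                        # size of the "full" FEM-like system
G = rng.standard_normal((N, N))
A0 = np.eye(N)                                 # parameter-independent part
A1 = 0.3 * (G + G.T) / (2 * np.sqrt(N))        # symmetric parameter-dependent part
b = rng.standard_normal(N)
A = lambda mu: A0 + mu * A1                    # toy affine parameter dependence

# ---- offline: a few expensive full solves (snapshots) and basis construction ----
snapshots = np.column_stack([np.linalg.solve(A(mu), b) for mu in (0.1, 0.5, 0.9)])
V, _, _ = np.linalg.svd(snapshots, full_matrices=False)     # POD basis, shape (N, 3)
A0_rb, A1_rb, b_rb = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ b    # project affine terms once

# ---- online: assemble and solve only a 3x3 system for each new parameter ----
mu_new = 0.37
u_rb = V @ np.linalg.solve(A0_rb + mu_new * A1_rb, b_rb)

u_full = np.linalg.solve(A(mu_new), b)                      # reference (error check only)
print("relative RB error:", np.linalg.norm(u_rb - u_full) / np.linalg.norm(u_full))
```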
This paper presents a consistent and modularized approach to modeling projection optics. The vector nature of light
and polarization effects are considered from the very beginning at the source, through the mask and projection lens, down
into the film stack. High-NA and immersion effects are also included. Of particular interest is the formulation of
a modularized framework for computing optical images that allows various mask models (a thin-mask model,
an empirical approximate mask model, or a rigorous mask 3D solver) to be used. We demonstrate that under
the Kirchhoff thin-mask assumption our formulation is the same as the Smythe formula. A compact film-stack model is
formulated. The formulation is first presented in Abbe's source integration approach and then reformulated in
Hopkins' TCC approach, which allows for an SVD decomposition that is computationally more efficient for a
fixed optical setting.
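The Hopkins/SVD step mentioned above can be illustrated in one dimension: assemble the transmission cross coefficient (TCC) matrix on a discrete frequency grid from an assumed circular pupil and top-hat source, decompose it into coherent kernels by SVD, and sum the kernel images. The scalar treatment, grid sizes, and imaging settings below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

NA, wavelength, sigma = 1.35, 193.0, 0.8      # assumed imaging setting (nm units)
pitch = 180.0                                 # mask pitch (nm), equal lines and spaces
m = np.arange(-3, 4)                          # diffraction orders kept on the grid
f = m / pitch                                 # spatial frequencies (1/nm)
f_cut = NA / wavelength

P = lambda ff: (np.abs(ff) <= f_cut).astype(float)          # ideal pupil (1D cut)
src = np.linspace(-sigma * f_cut, sigma * f_cut, 41)        # 1D top-hat source points

# Hopkins: TCC(f, f') = sum_s S(s) P(f + s) conj(P(f' + s))
TCC = sum(np.outer(P(f + s), P(f + s)) for s in src) / src.size

# SVD -> coherent kernels; image(x) = sum_k s_k |sum_m phi_k(m) a_m exp(i 2 pi f_m x)|^2
U, S, _ = np.linalg.svd(TCC)
a = np.array([0.5 if mm == 0 else np.sin(np.pi * mm / 2) / (np.pi * mm) for mm in m])

x = np.linspace(0.0, pitch, 200)
E = np.exp(2j * np.pi * np.outer(x, f))       # plane-wave phasors, shape (len(x), len(m))
image = sum(s_k * np.abs(E @ (U[:, k] * a))**2 for k, s_k in enumerate(S))
print("significant kernels:", int((S > 1e-3 * S[0]).sum()))
print("image contrast:", (image.max() - image.min()) / (image.max() + image.min()))
```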
Low pass filtering of mask diffraction orders in the projection tools used in the microelectronics
industry leads to a range of optical proximity effects, OPEs, impacting integrated circuit pattern
images. These predictable OPEs can be corrected with various model-based optical proximity
correction methodologies, OPCs, the success of which strongly depends on the completeness of
the imaging models they use.
The image formation in scanners is driven by the illuminator settings and the projection lens
NA, and modified by the scanner engineering impacts due to: 1) the illuminator signature, i.e. the
distributions of illuminator field amplitude and phase, 2) the projection lens signatures
representing projection lens aberration residue and the flare, and 3) the reticle and the wafer scan
synchronization signatures. For 4x nm integrated circuits, these scanner impacts modify the
critical dimensions of the pattern images at a level comparable to the required image tolerances.
Therefore, to reach the required accuracy, the OPC models have to embed the scanner illuminator,
projection lens, and synchronization signatures.
To study their effects on imaging, we set up imaging models without and with scanner
signatures, and we used them to predict OPEs and to conduct the OPC of a poly gate level of 4x
nm flash memory. This report presents analysis of the scanner signature impacts on OPEs and
OPCs of critical patterns in the flash memory gate levels.
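A minimal way to embed a lens signature of this kind in an imaging model is to multiply the ideal pupil by an aberration phase reconstructed from a measured Zernike residue. The sketch below does this for a single assumed coma term (fringe Z7); the coefficient value and pupil sampling are illustrative, not actual scanner signature data.

```python
import numpy as np

def aberrated_pupil(n=256, z7_waves=0.02):
    """Ideal circular pupil multiplied by an x-coma (fringe Z7) phase residue.
    z7_waves is an assumed coefficient in waves, standing in for one entry of
    a measured lens signature."""
    y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]      # normalized pupil coordinates
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    inside = rho <= 1.0
    z7 = (3 * rho**3 - 2 * rho) * np.cos(theta)    # fringe Zernike 7 (x-coma)
    return inside * np.exp(2j * np.pi * z7_waves * z7)

P = aberrated_pupil()
print("rms pupil phase (waves):", np.angle(P[np.abs(P) > 0]).std() / (2 * np.pi))
```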
Source Mask Optimization techniques are gaining increasing attention as RET computational lithography techniques in
sub-32nm design nodes. However, practical use of this technique requires careful consideration of the use of the
obtained pixelated or composite source and mask solutions, along with accurate modeling of mask, resist, and optics,
including scanner scalar and vector aberrations as part of the optimization process. We present here a theory-to-practice
case of applying ILT-based SMO on 22nm design patterns.
EUV exposure tools are the leading contenders for patterning critical layers at the 22nm technology node.
Operating at the wavelength of 13.5nm, with modest projection optics numerical aperture (NA), EUV projectors allow
less stringent image formation conditions. On the other hand, the imaging performance requirements will place high
demands on the mechanical and optical properties of these imaging systems.
A key characteristic of EUV projection optics is the application of a reflective mask, which consists of a reflective
multilayer stack on which the IC layout is represented by the reflectivity discontinuities [1]. Several mask concepts can
provide such characteristics, such as thick absorbers on top of a reflective multi-layer stack, masks with embedded
absorbers, or absorber-free masks with patterns etched in a reflective multilayer.
This report analyzes imaging performance and tradeoffs of such new mask designs. Various mask types and
geometries are evaluated through imaging simulations. The applied mask models take into account the topographic
nature of the mask structures, as well as the fundamental, vectorial characteristics of the EUV imaging process.
Resulting EUV images are compared in terms of their process stability as well as their sensitivities to the EUV-specific
effects, such as pattern shift and image tilt, driven by the reflective design of the exposure system and the mask
topography.
The simulations of images formed in EUV exposure tools are analyzed from the point of view of the EUV mask
users. The fundamental requirements of EUV mask technologies are discussed. These investigations spotlight the
tradeoffs of each mask concept and could serve as guidelines for EUV mask engineering.
We quantify the OPC accuracy improvement obtained by including the stepper signatures in the OPC model. The
analysis takes into account the complete cycle of OPC model calibration, OPC execution, and image verification of the
OPCed photomask. We use the Nikon Scanner Signature File (NSSF) version 1.5 for the NSR-S610C immersion
scanner; and an OPC model that accounts for vectorial imaging, the polarization map of the illumination, and the pupil
Jones matrix map of the projection optics. We verify that the OPC model closely agrees with a commercial lithography
simulator. We use a 42 nm half-pitch NAND-flash layout to illustrate our point. Post-OPC CD errors obtained when
excluding information about the stepper signature are 11.9 nm (max) and 2.8 nm (RMS). These values drop to 1.9 nm
(max) and 0.7 nm (RMS) when the NSSF is included in the OPC model. In practice, OPC models are calibrated using
CD measurements taken on printed test patterns, which are affected by the scanner signature. OPC model calibration
indirectly and partially captures the scanner signature; however, including the NSSF directly in the model increases
accuracy. In addition, the number of edge-placement errors (EPE) exceeding 1 nm dropped by an order of magnitude
when the NSSF was directly included in the OPC model, as compared to capturing the same information incompletely
using the model calibration instead.
Optical imaging of IC critical designs is impacted by optical proximity effects, OPEs,
originating from the finite numerical aperture of projection lenses used in modern projectors. The
OPEs are caused by filtering of pattern diffraction orders falling outside of the lens band pass.
Controlling OPEs is so critical to IC performance that the IC design community implemented optical
proximity correction, OPC, modifying the IC mask patterns to provide wafer images matching the
IC design intent. The mainstream OPC uses optical models representing the fundamental imaging
setup, and it does not capture the impacts of engineering scanner tool constraints.
The OPEs are impacted by scanner lens and illuminator signatures causing CD excursions
large in comparison to the CD error budgets (1). The magnitude of the scanner impacts on OPEs
necessitated a new optical modeling paradigm involving imaging models embedding scanner
signatures representing a population of scanners of a given type. These scanner-type-based models
represent a quantum leap in the accuracy of lithography simulation technology, resulting in OPE and
OPC representing a broad range of realistic scanner characteristics (2).
In this context, a relevant question is: to what degree do the signatures of individual scanners
impact the accuracy of imaging models and OPE predictions? To answer this question, we
analyzed optical proximity responses of hyper-NA scanners represented by their signatures. We
first studied a set of OPEs impacted by the scanner-type signatures. We then generated a set of
corresponding OPEs impacted by the signatures of individual scanners. We compared the two
kinds of OPEs and highlighted the scanner-specific image formation responses.
Low pass filtering taking place in the projection tools used by the IC industry leads to a range
of optical proximity effects, OPEs, resulting in undesired characteristics of patterns projected by
the scanners. Commonly used scanner imaging models are capable of capturing OPEs driven by
the fundamental imaging conditions such as wavelength, illuminator layout, reticle technology,
and lens numerical aperture.
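The low-pass character of the projection pupil can be made concrete with a back-of-the-envelope count of how many diffraction orders of a given pitch reach the wafer; the sketch below does this for an assumed immersion setting, and the NA, wavelength, and partial coherence values are illustrative.

```python
import numpy as np

def orders_captured(pitch_nm, NA=1.35, wavelength_nm=193.0, sigma_out=0.9):
    """Count diffraction orders m of a 1D grating of the given pitch whose
    frequency m/pitch can still pass the pupil for at least one illumination
    point (|m * lambda / pitch| <= NA * (1 + sigma_out)). A coarse, scalar estimate."""
    m_max = int(np.floor(NA * (1 + sigma_out) * pitch_nm / wavelength_nm))
    return 2 * m_max + 1                      # 0th order plus +/- m_max

for pitch in (80, 120, 250, 500):
    print(f"pitch {pitch:4d} nm -> {orders_captured(pitch)} orders captured")
```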
Production optical proximity correction (OPC) tools employ compact optical models to predict
complicated optical lithography systems with good theoretical accuracy. Theoretical accuracy is
not the same as usable prediction accuracy in a real lithographic imaging system. Real lithographic
systems have deviations from ideal behavior in the process, illumination, projection and mechanical
systems as well as in metrology. The deviations from the ideal are small but non-negligible. For this study
we use realistic process variations and scanner values to perform a detailed study of useful OPC model
accuracy vs. the variation from ideal behavior and vs. theoretical OPC accuracy. The study is performed
for different 32nm lithographic processes. The results clearly show that incorporating realistic process,
metrology and imaging tool signatures is significantly more important to predictive accuracy than small
improvements in theoretical accuracy.
The requirement for OPC modeling accuracy becomes increasingly stringent as the semiconductor industry enters the sub-
0.1 um regime. Aiming to capture the IC pattern printing characteristics through the lithography process, an OPC
model usually takes the form of a first-principles optical imaging component, refined by phenomenological
components such as resist and etch. The phenomenological components can be adjusted appropriately in order to fit the
OPC model to the silicon measurement data. The optical imaging component is the backbone for the OPC model, and it
is the key to a stable and physics-centric OPC model.
Scanner systematic signatures such as illuminator pupil-fill, illuminator polarization, lens aberration, lens apodization,
and flare were previously ignored without significant accuracy sacrifice at earlier technology nodes, but they play non-negligible
roles at the 45nm node and beyond. In order to ensure that the OPC modeling tool can accurately model these
important scanner systematic signatures, the core engine (i.e. the optical imaging simulator) of the OPC simulator must be
able to model these signatures with sufficient accuracy.
In this paper, we study the impact of the aforementioned scanner systematic signatures on the optical proximity effect (OPE)
for several 1D (simple line space, doublet line, and doublet space) and 2D (dense line end pullback, isolated line end
pullback, and T-bar line end pullback) OPC test patterns. We demonstrate that the scanner systematic signatures have
a significant OPE impact at the level of several nanometers. The predicted OPEs and impacts from our OPC simulator
match well with results from an industry standard lithography simulator, and this has laid the foundation for an accurate
and physics-centric OPC model with the systematic scanner signatures incorporated.
An accurate optical model is the foundation of an accurate optical proximity correction (OPC) model, which has always been the key to successful implementation of model-based OPC. As critical dimension (CD) control requirements become severe at the 45- and 32-nm device generations, OPC model accuracy and hence optical model accuracy requirements become more stringent. In previous generations, certain optical effects could be safely ignored. For example, the transmission attenuation, particularly at high spatial frequencies, caused by lens apodization effects and organic pellicle films was ignored or not accurately modeled in conventional OPC simulators. These effects are now playing a more important role in OPC modeling as technology scales down. Our simulations indicate these effects can cause CD modeling errors of 5 nm or larger at the 45-nm technology node and beyond. Therefore, they must be accurately modeled in OPC modeling. In our OPC modeling methodology, we propose two novel low-pass-filter models to capture the frequency-dependent transmission attenuation due to lens apodization and to pellicle films. These parameterized low-pass-filter models ensure that lens apodization and pellicle-film-induced transmission attenuation can be appropriately accounted for through regression during the experimental OPC model calibration stage in the case where no measured transmission data are available, thus enabling physics-centric OPC model building with considerably higher accuracy. We can then avoid overfitting the OPC model, which could cause instability in the OPC correction stage. The validity and efficiency of the proposed models are also verified using an industry-standard lithography simulator as well as an experimental OPC model calibration at the 45-nm technology node.
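A parameterized low-pass transmission filter of the kind described here can be sketched as a radially symmetric Gaussian attenuation applied to the pupil amplitude. The functional form and the coefficient below are assumptions for illustration, not the calibrated models proposed in the paper.

```python
import numpy as np

def apodized_pupil(n=256, alpha=0.15):
    """Ideal pupil times a Gaussian amplitude roll-off exp(-alpha * rho^2),
    standing in for frequency-dependent attenuation from lens apodization and
    pellicle films; alpha would be fit during OPC model calibration."""
    y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]   # normalized pupil coordinates
    rho2 = x**2 + y**2
    return (rho2 <= 1.0) * np.exp(-alpha * rho2)

P = apodized_pupil(alpha=0.15)
print("amplitude roll-off at the pupil edge:", np.exp(-0.15))   # exp(-alpha * 1^2)
```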
Low pass filtering taking place in the projection tools used by the IC industry leads to a range of
optical proximity effects resulting in undesired IC characteristics. To correct these predictable
OPEs, the EDA industry developed various model-based correction methodologies. Of course, the
success of this mission is strongly dependent on how complete the imaging models are. To
represent the image formation and to capture the OPEs, the EDA community adopted various
models based on simplified representations of the projection tools. The resulting optical proximity
correction models are capable of correcting OPEs driven by the fundamental imaging conditions
such as wavelength, illuminator layout, reticle technology, and lens numerical aperture, to name
a few.
It is well known in the photolithography community that OPEs are dependent on the scanner
characteristics. Therefore, to reach the level of accuracy required by the leading edge IC designs,
photolithography simulation has to include systematic scanner fingerprint data. These tool
fingerprints capture excursions of the imaging tools from the ideal imaging setup conditions.
They quantify the performance of key projection tool components such as illuminator and lens
signatures. To address the imaging accuracy requirements, the scanner engineering and the EDA
communities developed OPC models capable of correcting for imaging tool engineering
attributes captured by the imaging tool fingerprints.
Deployment of immersion imaging systems has presented the photolithography community
with new opportunities and challenges. These advanced scanners, designed to image in the deep
sub-wavelength regime, incorporate features invoking optical phenomena previously
unexplored in commercial scanners. Most notably, the state-of-the-art scanners incorporate
illuminators with a high degree of polarization control and projection lenses with hyper-NAs. The
image formation in these advanced projectors exploits a wide range of vectorial interactions
originating at the illuminator, on the pattern mask, in the projection lens, and at the wafer. The
presence of these previously subdued phenomena requires that the imaging simulation
methodologies be refined, increasing the complexity of the OPE models and optical proximity
correction methodologies.
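One concrete vectorial effect at hyper-NA is the polarization dependence of two-beam interference contrast: TE (s-polarized) orders interfere fully, while the TM (p-polarized) contrast is reduced by the cosine of the beam crossing angle. The sketch below evaluates this textbook relation for an assumed immersion setting; it is a back-of-the-envelope check of a single vector effect, not a full vector imaging model.

```python
import numpy as np

def two_beam_contrast(pitch_nm, NA=1.35, wavelength_nm=193.0, n_water=1.44):
    """Interference contrast of two symmetric beams forming fringes of the given
    pitch at the wafer. TE fields stay parallel (contrast 1); TM fields cross at
    2*theta, reducing contrast to |cos(2*theta)|, with theta the half-angle in
    the immersion fluid (assumed water)."""
    assert wavelength_nm / (2 * pitch_nm) <= NA, "pitch below the resolution limit"
    sin_theta = wavelength_nm / (2 * n_water * pitch_nm)   # in-water half-angle
    theta = np.arcsin(sin_theta)
    return 1.0, abs(np.cos(2 * theta))                     # (TE, TM) contrast

for pitch in (80, 100, 140):
    te, tm = two_beam_contrast(pitch)
    print(f"pitch {pitch} nm: TE contrast {te:.2f}, TM contrast {tm:.2f}")
```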
To meet the imaging resolution requirements, driven by the evolution of IC design rules, leading-edge
scanners incorporate projection lenses with hyper-NAs. Moreover, immersion scanners are being
introduced into IC manufacture. Both dry and immersion tools explore lens design regimes of
unprecedented complexity.
The need to predict, to analyze and to control the IC pattern CDs is met by various photolithography
simulators. The continuing demand for simulation accuracy is reflected by the requirement to quantify
the scanner projection lens fingerprints, i.e. projection lens infinitesimal excursions from the ideal
performance. The scanner engineering community has been relying on photolithography simulators to
analyze the impact of the projection lens fingerprints on the imaging characteristics.
However small, these excursions are always present in the projection tools and they control important
imaging characteristics such as overlay, CD uniformity, across-field exposure latitude, to name but a
few. Customarily, phase front aberrations and lens pupil apodization signatures have been used to
predict scanner imaging responses. Of course, the need to design, manufacture, and deploy
scanners of ever improving quality resulted in dramatic reductions of these non-ideal imaging
excursions.
The evolution of IC designs and imaging tool complexity escalates the requirements for imaging
simulation accuracy. Simultaneously, predicting scanner imaging response has become a key mission in
the Design For Manufacture arena. In view of these developments, it is necessary to ask whether the
conventional equipment engineering and imaging simulation methodologies predict scanner imaging
responses with the accuracy required by the IC design rules. Put differently, the question is: what is
necessary to provide the simulation accuracy required by the current IC design rules? This report attempts
to address these questions.
A scanner projection lens captures only a finite number of IC pattern diffraction orders. This low
pass filtering leads to a range of optical proximity effects such as pitch-dependent CD variations,
corner rounding and line-end pullback, resulting in imaged IC pattern excursions from the
intended designs. These predictable OPEs are driven by the imaging conditions, such as
wavelength, illuminator layout, reticle technology, and lens numerical aperture. To mitigate the
pattern excursion due to OPEs, the photolithography community developed optical proximity
correction methodologies, adopted and refined by the EDA industry. In the current
implementations, OPC applied to IC designs can correct layouts to compensate for OPEs and to
provide imaged patterns meeting the design requirements.
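The pitch dependence of the printed CD can be illustrated with a coarse aerial-image sketch: form the image of a fixed-width line at several pitches with a scalar, coherent model and read the CD at a constant threshold. The imaging settings, the coherent approximation, and the threshold are illustrative assumptions; real OPE analysis relies on calibrated, partially coherent (or rigorous) models.

```python
import numpy as np

NA, wavelength = 1.35, 193.0            # assumed immersion setting (nm)
line = 45.0                             # drawn (target) line width, nm

def printed_cd(pitch, n_pts=2048, threshold=0.25):
    """Coherent, scalar image of a binary line/space mask; the printed CD is the
    width of the region below an assumed constant intensity threshold."""
    x = np.linspace(0.0, pitch, n_pts, endpoint=False)
    mask = (np.abs(x - pitch / 2) > line / 2).astype(float)      # 1 = clear glass
    spectrum = np.fft.fft(mask)
    f = np.fft.fftfreq(n_pts, d=pitch / n_pts)
    spectrum[np.abs(f) > NA / wavelength] = 0.0                  # pupil low-pass filter
    image = np.abs(np.fft.ifft(spectrum))**2
    return (image < threshold).sum() * pitch / n_pts

for pitch in (160.0, 200.0, 300.0, 600.0):
    print(f"pitch {pitch:5.0f} nm: printed CD ~ {printed_cd(pitch):5.1f} nm (drawn {line:.0f} nm)")
```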
IC manufacture has to meet stringent requirements pushing the imaging tools beyond their limits. The key performance attribute of an imaging tool is the quality of the image projected on the wafer plane. The image quality is controlled by the wavefront aberrations present in the projection lens pupil. Therefore, the quality of the lenses can be represented either by various image quality metrics or by the data on the lens pupil aberration residua. Projection lens quality can be quantified by interferometers capturing the lens pupil residual aberration, leading to estimates of the image quality. These various techniques can be used off-line, testing projection lenses installed on a dedicated test bench, used during or after lens manufacture, or in-situ, testing the lenses installed in the projection tools, often at the IC manufacturing floor. These techniques have inherent tradeoffs in terms of their accuracy, portability, ease-of-use, and completeness of the aberration and imaging metrics. Such tradeoffs determine which technique is the most appropriate for various applications ranging from lens quality control during imaging tool manufacture, to tool qualification during its installation and setup, to tool monitoring and tuning during IC manufacture. It is acknowledged within the scanner engineering community that qualification and maintenance of tools used for critical level patterning requires an in-situ lens monitoring technique. Such a method would also help to select and fine-tune the imaging tools to the design-specific requirements of IC critical patterns. A preferred method of aberration monitoring should be highly compatible with routine scanner operation and should be independent of resist process conditions. This paper presents an aerial image-based technique to monitor and diagnose the quality of projection lenses used in scanners. The method involves an aerial image sensor, AIS. We start with a discussion of the fundamental principles of operation and the key design issues impacting the accuracy of the technique. We follow with examples of the AIS aberration tests. These tests lead to a discussion of the method's capabilities to quantify the performance of the imaging tools.
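The principle behind diagnosing aberrations from measured aerial images can be sketched with the simplest case: an odd (coma-like) pupil phase shifts a two-beam fringe pattern by an amount that depends on where the orders land in the pupil, so shifts measured for several pitches constrain the odd-aberration coefficients. The single-term model, sign convention, and numbers below are illustrative, not the AIS algorithm.

```python
import numpy as np

wavelength, NA = 193.0, 1.35            # assumed imaging setting (nm)
z7_waves = 0.03                         # assumed residual x-coma (fringe Z7), in waves
coma = lambda rho: z7_waves * (3 * rho**3 - 2 * rho)   # Z7 phase along the pupil x-axis

def fringe_shift_nm(fringe_pitch_nm):
    """Shift of a two-beam fringe whose orders sit at pupil radii +/- rho.
    The odd coma phase gives the two orders a phase difference (in waves), which
    translates the fringes by that fraction of the fringe period; the sign
    depends on the phase convention."""
    rho = wavelength / (2 * NA * fringe_pitch_nm)
    dphase_waves = coma(rho) - coma(-rho)
    return dphase_waves * fringe_pitch_nm

for p in (90, 120, 160):
    print(f"fringe pitch {p} nm: pattern shift ~ {fringe_shift_nm(p):+.2f} nm")
```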
IC manufacture often has to meet stringent requirements pushing the imaging tools beyond their limits. Selection and optimization of steppers used to image patterns with critical dimensions at a fraction of the wavelength have to consider the tool's aberration residue and the imaging tradeoffs of the patterned features. This report presents a methodology to select tool-specific, multi-feature optima for imaging tools performing beyond their design points.
IC manufacturing at the 65 nm node requires careful selection of imaging technology. To select an appropriate
approach, a wide range of impacts has to be considered. In particular, imaging, mask, and resist
technologies all contribute to final CD control of the features patterned and their imaging latitude during IC
manufacture.
To select imaging strategy, we conducted simulation analysis of dry ArF, dry F2, and immersion ArF
imaging technologies. During the selection process, each technology has to be evaluated at its imaging
optimum defined in terms of projection lens NA and illuminator design as well as the mask design details;
such analysis has to be specific to the requirements of the IC design critical levels.
One of the key technology characteristics is the imaging tool's impact on the patterned level. This impact can be
quantified by the projection lens aberration residue and its flare, both dependent on the image location.
Introduction of aberration and flare signatures into the imaging analysis enables the definition of a tool
performance metric common to the entire image field, and it spotlights across-field imaging tradeoffs.
In addition to these factors (i.e., the imaging technology- and tool-related impacts), the impact of the wafer
stack on image formation in resist has to be considered. In particular, Fresnel losses, resist photochemistry,
and optical path differences of diffraction orders in the dense medium have to be accounted for. Such an approach
leads to estimates of the impact of resist refraction and contrast on the formation of critical features.
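A first-order account of the wafer-stack effects listed here starts from Snell refraction and the Fresnel transmittance at the top resist surface. The sketch below evaluates both for an assumed immersion-fluid/resist interface at an oblique diffraction-order angle; the refractive indices and incidence angle are illustrative.

```python
import numpy as np

def into_resist(n1=1.44, n2=1.70, sin_theta1=0.85):
    """Snell refraction and Fresnel intensity transmittances (TE/TM) for a plane
    wave entering resist (assumed index n2) from the immersion fluid (index n1)
    at incidence sine sin_theta1."""
    sin_theta2 = n1 * sin_theta1 / n2                 # Snell's law
    c1, c2 = np.sqrt(1 - sin_theta1**2), np.sqrt(1 - sin_theta2**2)
    t_te = 2 * n1 * c1 / (n1 * c1 + n2 * c2)          # field transmission coefficients
    t_tm = 2 * n1 * c1 / (n1 * c2 + n2 * c1)
    T_te = (n2 * c2) / (n1 * c1) * t_te**2            # intensity (power) transmittances
    T_tm = (n2 * c2) / (n1 * c1) * t_tm**2
    return np.degrees(np.arcsin(sin_theta2)), T_te, T_tm

angle_in_resist, T_te, T_tm = into_resist()
print(f"angle in resist: {angle_in_resist:.1f} deg, T_TE = {T_te:.3f}, T_TM = {T_tm:.3f}")
```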
This review presents a comprehensive analysis of all key factors driving the imaging latitude of critical levels at
the 65 nm node. These factors represent the impacts of imaging strategy, mask, and resist technologies.
The analysis presented below spotlights imaging tradeoffs of dry ArF, dry F2, and immersion ArF imaging
technologies.
Photolithography simulation has become a common methodology used in engineering tasks such as critical level patterning analysis and process design, patterning tool qualification to meet the process control requirements, selection of the patterning tools capable of delivering requisite patterning performance, and projection lens tuning for optimum patterning performance. Such diverse use of simulation is motivated by the need to quantify the patterning tradeoffs when the performance margins collapse around the fundamental process constraints. These complex analysis and design tasks rely on various photolithography simulators available as commercial or proprietary software. The diversity of the available simulators poses two issues: what is the role of numerical methodologies in modifying the simulation analysis otherwise limited by the image formation fundamentals, and to what extent are the results obtained with different simulators similar to each other. In this paper, we present the results of a comparison involving three simulators, two of them commercial. The comparison involved image formation simulations of the current generation of critical IC designs. The comparison served as a basis for a judgment on the portability of simulation analyses obtained with various photolithography simulation tools.
Integrated circuit patterning faces escalating demands challenging the fundamental constraints of the photolithography tools. The challenge is to qualify patterning tools beyond their design objectives, to extend their use for future manufacturing requirements. To address these challenges, we have adopted a three-step approach: 1) selection of the patterning strategies appropriate for a given set of design rules, 2) projection tool selection to match its capabilities with the process control requirements, and 3) tool fine-tuning to maximize patterning process latitude. Step 1 is customary exposure strategy optimization. Steps 2 and 3 go beyond common practice. These two steps rely on aberration residue data obtained by an in-situ phase measuring interferometer. The comprehensive, three-step strategy involves all of the key factors impacting the imaging control of critical patterns. In this paper we present the key elements of the patterning strategy and projection lens optimization. We show an example illustrating the three steps of process and tool qualification for aggressive, sub-wavelength design rules. The example presents the selection of an optimum patterning strategy, the patterning tool selection based on their aberration residua, and the projection lens residual aberration fine-tuning. The patterning approach resulting from the methodology presented here is compatible with the IC manufacturing environment. The approach extends the use of the imaging tools beyond their design objective.
As ASIC manufacture continues to evolve towards 0.35 micrometer, photolithography optimization becomes increasingly complex. I-line photolithography at these feature sizes results in proximity effects contributing to CD budgets and dominating the CD control. One of the critical levels of the current generation ASIC devices is the polysilicon gate level containing a set of lines in nesting configurations ranging from dense to isolated. The optical proximity effects of such geometries are pitch-dependent. Thus the key challenge of the gate level exposure is CD control of the features nested on a wide range of pitches. The state-of-the-art photolithography tools used for critical level manufacture are equipped with a wide range of illumination options including conventional, small-sigma, and off-axis. These options expand the exposure capabilities of steppers and complicate the optimization of the photolithography. The complexity of the image formation, coupled with the number of stepper exposure options, vastly expands the parameter space of photolithography optimization. The optimization of the photolithography process has to take into consideration the requirements of IC manufacture. These requirements include the CD tolerance, the depth of focus and the exposure latitude. The numeric value of each represents statistical and systematic factors influencing the yield of manufacture as well as the CD tolerance reflecting the IC performance goals. Our goal was to optimize the CD performance of critical level i-line photolithography. Our strategy combined resist model simulation and proof-of-principle testing. We analyzed a set of features with the nominal, pitch-independent CDs. We analyzed the CD range of variation for different pitches characteristic for the polysilicon gate level. The analysis was performed for a wide range of illumination/exposure conditions representing capabilities of the state-of-the-art, commercial i-line steppers. To qualify the exposure options, we have developed a metric taking into consideration the requirements of IC manufacture. We conducted systematic studies of the CD range versus illumination and exposure conditions. As a result, we identified the exposure strategies leading to the range of CD variation meeting the tolerance requirements of the ASIC manufacture. A methodology combining the resist image simulation and limited resist testing allowed us to find quickly the optimum exposure strategy supportive of manufacturing requirements. It also resulted in a great reduction of resources required to conduct the process characterization and the CD metrology. We applied this methodology to optimize the exposure condition of a current generation ASIC polysilicon gate level. The optimization methodology was verified experimentally. This discussion presents examples of optimization solutions. The report reviews the results of the resist modeling simulation, and reviews the results of the proof-of-principle metrology. We compare the modeling and the metrology and draw conclusions on the quality of the models' predictions. We interpret the model results in terms of CD characteristics of the critical level features exposed and developed in the resist. Finally, we assess the value of anchored resist simulation as a predictor of the CD characteristics.
Critical level photolithography optimization for 0.35 μm design rules faces new challenges threatening the process control and jeopardizing manufacturing yields. To address these challenges we have adopted a methodology combining photolithography simulation and a limited proof-of-principle DOE. It allowed us to analyze the process control requirements and to establish photolithography strategies resulting in optimum manufacturing performance. As manufacturing CDs continue to shrink towards and beyond the wavelength at which photolithography is performed, the lithography process control becomes limited by optical proximity effects. The proximity effects modify the geometry of the reticle features printed in the resist and thus they impact both CD control and overlay performance. The proximity effects can be highly non-linear and thus difficult to quantify unless sophisticated analytical methods are adopted. Another challenge faced by IC manufacturing arose with the advent of illumination options [1, 2]. In particular, off-axis illumination options offered with commercial exposure tools expand the exposure capabilities of steppers. Inevitably, these new illuminators complicate optimization of the photolithography process. If the optimization task is to be executed in a purely empirical fashion, the optimization resource requirements become prohibitive. To simplify the photolithography optimization cycle, we have developed a methodology combining process modeling [3] and limited proof-of-principle testing; the modeling was done with Nikon Corporation's proprietary simulator. Our methodology was employed in photolithography process optimization of a set of critical levels used in manufacture of a DRAM chip. Photolithography modeling in support of the process optimization resulted in a set of illumination and projection optics alternatives. The optimization of illumination strategies involved conventional and off-axis exposure options. The modeling identified illumination tradeoffs in terms of process depth of focus, DOF, exposure latitude, EL, and stepper throughput. The results of the analysis suggested off-axis illumination as a basis for a robust exposure process. The results of the modeling served as a basis for a reduced design-of-experiment, DOE, quantifying the CD characteristics of the levels. We present the results of photolithography optimization of a chosen critical level exposed with an i-line stepper, and discuss the merits of the methodology. The discussion presents examples of optimization solutions. This report reviews the results of the modeling, including aerial image analysis and resist simulation. The report also reviews the results of the proof-of-principle metrology. We compare the modeling and the metrology and draw conclusions on the quality of the models' predictions. We interpret the model results in terms of CD characteristics of the critical level features exposed and developed in the resist. The aerial image analysis and resist simulation allowed us to reduce the optimization space. This, in turn, reduced the resources required to conduct the limited, proof-of-principle verification. We stress the advantage of using the methodology combining process modeling together with limited, proof-of-principle metrology to simplify and to shorten the exposure optimization process.
Acousto-optically mode-locked cw Nd:YAG and Nd:YLF lasers have been efficiently frequency doubled with noncritically phase-matched lithium triborate. LiB3O5 crystals from 6 mm to 15 mm in length were obtained from Castech, People's Republic of China. These were polished and coated for antireflection at both 1064 nm and 532 nm by Coherent Components Group, Auburn, Calif. The coating has a damage threshold in excess of 1 GW/cm2 for mode-locked pulses. More than 11 W of average power at 532 nm has been generated by single pass conversion for a 25 W input at 1064 nm, a conversion efficiency of greater than 45%. Second harmonic generation dependence on laser power and focusing, and on crystal length and temperature, has been measured and modeled. Stable long-term operation and applications for high-power mode-locked 532 nm laser pulses are discussed.
The frequency doubling of laser radiation at 1064 nm is studied in order to characterize efficient harmonic materials capable of delivering second-harmonic average power at the multiwatt level. Three nonlinear materials are considered: Mg:LiNbO3, potassium titanyl phosphate (KTP), and lithium triborate (LBO). No photorefractive damage is observed in Mg:LiNbO3; however, it exhibits broadening of the temperature tuning curves and distortion of the harmonic beam. An average output power in excess of three watts is extracted from KTP, but the material shows optically induced nonuniformities in the n(z) refractive index. LBO as a harmonic converter achieves 2.2 W at 532 nm, though the fundamental beam has to be tightly focused in the crystal.
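For orientation, single-pass conversion figures of this kind can be compared against the undepleted-pump, plane-wave estimate for second-harmonic generation. The sketch below evaluates that textbook expression for assumed NCPM LBO parameters and an assumed peak intensity; it is only an order-of-magnitude check, valid while the conversion is small, and not a model of either reported experiment.

```python
import numpy as np

# Undepleted-pump, plane-wave SHG conversion estimate (perfect phase matching):
#   eta = 2 * w^2 * d_eff^2 * L^2 * I / (n_w^2 * n_2w * eps0 * c^3)
# All parameter values below (d_eff, indices, crystal length, peak intensity)
# are assumed, order-of-magnitude inputs, not the experimental conditions.
eps0, c = 8.854e-12, 2.998e8
lam = 1064e-9                       # fundamental wavelength (m)
w = 2 * np.pi * c / lam             # angular frequency (rad/s)
d_eff = 0.85e-12                    # assumed effective nonlinearity for NCPM LBO (m/V)
L = 15e-3                           # crystal length (m)
n_w, n_2w = 1.605, 1.611            # assumed refractive indices
I_peak = 4.0e11                     # assumed peak intensity of the mode-locked pulses (W/m^2)

eta = 2 * w**2 * d_eff**2 * L**2 * I_peak / (n_w**2 * n_2w * eps0 * c**3)
print(f"undepleted-pump SHG conversion estimate: {eta:.0%}")
```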