Angular scatterometry has recently been deployed as a powerful and effective process-control methodology for measuring etched metal features in a typical complex aluminum stack. With the control of metal process steps taking a more critical role in silicon manufacturing, a fast, reproducible, and accurate methodology for measuring CD and depth is necessary. Because the half-pitch of the metal pattern can be as small as the minimum device feature, etch-rate measurements on above-micron test structures are hardly indicative of the pattern-dependent etch profiles and behavior. Angular scatterometry offers a non-destructive, fast, and powerful approach for measuring the profiles of the yield-relevant array features in metal applications.
In this work we demonstrate the application of angular scatterometry to the qualification of metal etchers. Etch depth is difficult to control and has traditionally been inspected with slow techniques such as contact profilometry. Beyond its slow response time and sparse radial sampling, contact profilometry is susceptible to residual resist and polymer residue, as well as to variations in the TiN ARC layer that affect the measurement of the aluminum etch rate. We show that the choice of a suitable profile model and accurate knowledge of the optical properties allow scatterometry to overcome all of these traditional challenges.
We demonstrate that angular scatterometry is sensitive to the parameters of interest for controlling metal etchers, specifically etch depth, CD and profile. Across an experimental design that introduced intentional variations in these parameters, angular scatterometry results were able to track the variations accurately. In addition, profile results determined through scatterometry compare favorably with cross-sectional SEM images and measurements. Measurement precision results will also be presented.
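As a hedged illustration of the library-matching step that scatterometry measurement typically relies on, the sketch below matches a measured angular signature against a precomputed library of simulated signatures to recover CD and etch depth. The `simulate_signature` function is a deliberately simplified stand-in for a rigorous electromagnetic solver (e.g. RCWA), and all parameter ranges and the angle grid are hypothetical, not taken from the work described above.

```python
import numpy as np

# Assumed detection-angle grid (degrees) for the angular signature
angles = np.linspace(0, 47, 95)

def simulate_signature(cd, depth):
    # Stand-in for a rigorous RCWA solver: a smooth, parameter-dependent
    # function used only to make the example self-contained.
    return np.cos(np.radians(angles) * cd / 50.0) * np.exp(-depth / 500.0)

# Build a coarse library over a hypothetical process window
# (CD 120-180 nm, etch depth 300-500 nm)
library = {(cd, depth): simulate_signature(cd, depth)
           for cd in range(120, 181, 5)
           for depth in range(300, 501, 10)}

def best_fit(measured):
    """Return the (cd, depth) pair whose simulated signature minimizes
    the mean squared error against the measured signature."""
    return min(library, key=lambda p: np.mean((library[p] - measured) ** 2))

# A signature generated from known parameters should be recovered exactly
measured = simulate_signature(150, 400)
print(best_fit(measured))   # -> (150, 400)
```

In production systems the library lookup is usually followed by a local regression refinement between grid points; the coarse-grid match above shows only the core idea.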
The need to improve the overlay and CD budget requirements of current device technologies has driven the introduction of tool dedication schemes in semiconductor manufacturing. Dedication schemes have provided an opportunity to minimize systematic field distortion differences from layer to layer. The cost and manufacturing complexity of dedication schemes can, however, be a burden on the process and the tools required. We will present experimental results of an aberration measurement method used on a front-end-of-line tool set to empirically describe the matching of a series of tools used in a dedicated processing scheme. We will also show simulation results of pattern placement error and CD uniformity effects for the highlighted aberrations. We will use these findings to support product results generated while exercising dedication-break analysis experiments on this tool set.
For 150 nm and smaller half-pitch geometries, many DRAM manufacturers frequently employ a dedicated exposure-tool strategy for processing the most critical layers. Individual die tolerances of less than 40 nm are not uncommon for such compact geometries, and a method is needed to reduce systematic overlay errors. The dedication strategy relies on the premise that a component of the systematic error induced by exposure-tool inefficiencies at a specific layer can be diminished by exposing subsequent layer(s) on the same tool, thus canceling out a large component of this error. In the past this strategy has generally resulted in better overall alignment performance, better exposure-tool modeling, and decreased residual modeling errors. Increased alignment performance due to dedication does not come without its price: wafers are committed to the same tool at subsequent lithographic layers, which decreases manufacturing flexibility and in turn affects cost through increased processing cycle time. Tool down-events and equipment upgrades requiring significant downtime can also have a significant negative impact on factory operations. This paper presents volume results for 140 nm and 110 nm half-pitch geometries, using state-of-the-art 248 nm and 193 nm exposure systems respectively, showing that dedicated processing still produces superior overlay and device-performance results when compared blindly against non-dedicated processing. Results are also shown demonstrating that, at a given point in time, an acceptable tool match may be found that produces nearly equivalent results for non-dedicated processing. Changes in alignment capability are also observed after major equipment maintenance and component replacement.
A point-in-time predictor strategy utilizing residual modeling errors and a set of modified performance specifications is directly compared against measured overlay data after patterning, against within-field AFOV measurements after etching of the pattern, and against final device performance.
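The residual-modeling-error component of such a predictor can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the linear (translation/scale/rotation) overlay model, the synthetic measurement data, and the |mean| + 3-sigma figure of merit are all assumptions introduced here.

```python
import numpy as np

# Hypothetical overlay measurements: site positions (x, y in mm) and the
# misregistration (dx, dy in nm) measured at each site on one wafer.
rng = np.random.default_rng(0)
x = rng.uniform(-100, 100, 25)
y = rng.uniform(-100, 100, 25)
# Synthetic data: translation + scale + rotation, plus measurement noise
dx = 5.0 + 0.08 * x - 0.03 * y + rng.normal(0, 1.0, 25)
dy = -3.0 + 0.08 * y + 0.03 * x + rng.normal(0, 1.0, 25)

def fit_linear_overlay(x, y, dx, dy):
    """Fit the standard linear wafer model (translation, scale, rotation)
    by least squares and return correctables plus per-site residuals."""
    A = np.column_stack([np.ones_like(x), x, -y])   # dx = Tx + Sx*x - R*y
    B = np.column_stack([np.ones_like(y), y, x])    # dy = Ty + Sy*y + R*x
    px, *_ = np.linalg.lstsq(A, dx, rcond=None)
    py, *_ = np.linalg.lstsq(B, dy, rcond=None)
    return px, py, dx - A @ px, dy - B @ py

px, py, res_x, res_y = fit_linear_overlay(x, y, dx, dy)

# A simple point-in-time figure of merit: flag the tool pairing when the
# |mean| + 3*sigma of the model residuals exceeds a modified specification.
metric = max(abs(res_x.mean()) + 3 * res_x.std(),
             abs(res_y.mean()) + 3 * res_y.std())
```

The modeled correctables would be removed by the exposure tool, so the residuals are what remains to limit overlay; comparing `metric` against a modified specification is one plausible form of the point-in-time check.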
Owing to the necessities of semiconductor manufacturing, some of the most critical lithographic layers may utilize only dielectric anti-reflection coatings (DARC), rather than organic anti-reflective coatings, in order to minimize substrate effects on critical dimension (CD) control and to position the process at the best possible node. As a result, stricter limits on the control of the index of refraction and the extinction coefficient are generally imposed on the DARC process. While the DARC process may use a gas-flow adjustment to control the optical constants, one of the biggest obstacles becomes the film-thickness metrology, which most often relies on either the Bruggeman or the harmonic-oscillator model to extract the desired optical coefficients. Unfortunately, control of the optical properties to within a few percent is generally outside the specification window of even the latest generation of film-thickness metrology tools. Furthermore, with each subsequent exposure node, the wavelength of interest for the optical coefficients moves closer to the limit of the lamp or radiation source on the film-thickness metrology tool, creating additional noise and measurement instability. An interesting situation is depicted in this paper in which the metrology variation in the measurement of the optical coefficients for a single-stack DARC film is greater than the variation across twenty process chambers. The metrology variation was found to consist, in major part, of tool-to-tool variation and of tool changes after any work on the ellipsometer. A systematic way of reducing this measurement variation is presented, which allows for the introduction of a floating standard tied to the combined average performance of all metrology tools without requiring a golden tool or a golden set of wafers. At the same time, offsets are applied to each metrology tool, ensuring a much tighter population.
Although the described situation is not ideal, given the current specifications on the measurement of optical coefficients it is one of the few methodologies that permits adequate process control without the expenditure for a new toolset.
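A minimal sketch of the floating-standard idea, under the assumption that each tool repeatedly measures the same monitor wafers: the combined fleet average defines the standard, and each tool receives an additive offset toward it, with no golden tool or golden wafer set. All tool names and readings below are illustrative.

```python
# Hypothetical refractive-index readings of the same monitor wafers by
# four ellipsometers; the values are invented for illustration only.
readings = {
    "tool_A": [2.085, 2.087, 2.086],
    "tool_B": [2.071, 2.073, 2.072],
    "tool_C": [2.094, 2.092, 2.093],
    "tool_D": [2.080, 2.082, 2.081],
}

def floating_standard_offsets(readings):
    """Tie every tool to the combined fleet average (the 'floating
    standard') rather than to a golden tool or golden wafer set."""
    tool_means = {t: sum(v) / len(v) for t, v in readings.items()}
    fleet_mean = sum(tool_means.values()) / len(tool_means)
    # Offset to ADD to each tool's raw reading to align it to the fleet
    return {t: fleet_mean - m for t, m in tool_means.items()}

offsets = floating_standard_offsets(readings)

# After applying the offsets, every tool reports the same mean value,
# collapsing the tool-to-tool component of the metrology variation.
corrected = {t: [v + offsets[t] for v in vals]
             for t, vals in readings.items()}
```

Because the standard floats with the fleet average, it is simply recomputed after any ellipsometer maintenance, which matches the observation that tool work was a major variation component.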
The economics of semiconductor manufacturing have forced process engineers to develop techniques to increase wafer yield. Improvements in process control and uniformity in all areas of the fab have reduced film-thickness variations at the very edge of the wafer surface. This improved uniformity has provided the opportunity to decrease edge exclusions, and the outermost extents of the wafer must now be considered in the yield model and expectations. These changes have increased the requirements on lithography to improve wafer-edge printability in areas that previously were not even coated, taxing all software and hardware components used in defining the optical focal plane at the wafer edge. We have explored techniques to determine the capabilities of extreme wafer-edge printability and the components of the systems that influence it. We will present current capabilities and new detection techniques, along with the influence that individual hardware and software components have on edge printability. We will show the effects of focus-sensor design, wafer layout, utilization of dummy edge fields, the use of non-zero overlay targets, and chemical/optical edge-bead optimization.
Some form of edge bead removal (EBR) is a standard requirement of any lithographic process. Without intervention, resist may accumulate at the edge of the wafer at up to several times its nominal thickness. In addition to this edge bead, the resist is likely to wrap around the wafer, contaminating the backside as well. Needless to say, such a condition presents a significant contamination risk not only for the resist track and the exposure tool but for process equipment outside of lithography as well. Two not-necessarily-exclusive strategies have been used in the past for edge bead removal. One is topside chemical EBR, in which solvent is dispensed at the edge of the rotating wafer immediately after coating; the other forms a ring of exposed resist by subjecting the outer edge of the wafer to a broadband exposure, also known as wafer-edge exposure (WEE). The advantage of the chemical method is that it removes not only the photoresist but also the organic anti-reflective coating (ARC), which is not photosensitive. Its disadvantage is obvious: any latitude in tool tolerances or imperfections on the wafer results in solvent being dispensed onto undesired areas of the wafer. While the optical method is much cleaner, its main disadvantage is that it will not remove the ARC. As feature and die sizes shrink, there is less and less repairable redundancy on modern semiconductor chips. An observed effect in our manufacturing facility has been an increased sensitivity to tool imperfections and a quantifiable level of yield loss due to solvent splashing for the 140 nm generation.
Given that the ARC layer is generally an order of magnitude thinner than the resist layer, a yield-maximizing setup of edge bead removal for one lithographic layer, including the complete elimination of topside chemical EBR, is discussed in detail in this paper, along with the extension of the same principle to maximize yield at other layers.
Minimizing alignment errors has in the past been fairly straightforward. The aim has always been to drive the overlay-model correctables to zero, either instantly or over a number of lots processed in a short time frame, depending on the controller setup. Methods for improving alignment have included minimizing components of variation tied to the exposure tool, the metrology tool, the process setup, or the model itself. Instead of working on these components, a less expensive alternative for improving the final outcome, as represented by device performance, may be not to minimize the overlay correctables but instead to drive them to a specific target defined by the process window around each correctable.
This paper will briefly show that lithography at present geometries is no longer the sole controller of alignment; other areas such as films, etch, and CMP also influence alignment significantly. It will also be shown that in certain instances vertical wafer topography or feature profile may create device asymmetries, which can be partially compensated through the application of non-zero overlay correctables. Strategies for coping with decreased overlay performance and a methodology for controlling overlay biases are also shown.
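One way to drive a correctable toward a non-zero target is a simple EWMA run-to-run update, sketched below. The target value (a deliberate translation offset compensating an assumed etch- or CMP-induced device asymmetry), the smoothing weight, and the lot sequence are all hypothetical, not taken from the work described above.

```python
# Assumed device-optimal translation offset (nm): the controller steers
# the measured correctable toward this value rather than toward zero.
TARGET_TX = 4.0
LAMBDA = 0.3       # EWMA smoothing weight (illustrative)

def update_correctable(current_setting, measured_tx,
                       target=TARGET_TX, lam=LAMBDA):
    """Shift the exposure-tool translation setting so that the measured
    correctable converges to the target instead of to zero."""
    error = measured_tx - target
    return current_setting - lam * error

setting = 0.0
# Simulated lot-to-lot measurements of the translation correctable (nm).
# As the measurements approach the target, the updates shrink to zero,
# leaving the deliberate +4 nm offset in place.
for measured in [6.0, 5.2, 4.8, 4.5, 4.2]:
    setting = update_correctable(setting, measured)
```

Setting `TARGET_TX = 0` recovers the conventional drive-to-zero controller, so the non-zero-target scheme is a strict generalization of existing practice.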
In modern semiconductor manufacturing, control of patterned line widths is an especially important task, as even the smallest deviation from the desired critical dimension (CD) target can result in undesirable electrical behavior. Traditional methods of CD control have included a feedback component in which dose is adjusted over time based on the measured critical dimensions of previous lots. Depending on the process setup, the stack influence on patterned features can often be diminished by introducing an organic anti-reflection coating (ARC) prior to the application of photoresist. Unfortunately, even with an ARC layer, CD influences may be pronounced due to extreme topography and film stack. Most often any such CD influence is exhibited as a lot-to-lot (L2L) component of variation and, to a lesser degree, as a wafer-to-wafer (W2W) component. A simple feedback system can be adjusted to base dose recommendations on a larger number of lots, thus capturing closer to the entire population of stack variation. An improved control system is one in which this feedback component is supplemented by a feed-forward component, where a stack predictor provides a lot-specific recommendation.
Stack information for a lithographic layer used in DRAM manufacturing will be presented, along with the relationship between the critical dimensions of features patterned in photoresist and the top film thickness. A significant reduction in rework cost and an improvement in CD control achieved over a two-month implementation of the complementary feed-forward and feedback system will be compared against the performance of the feedback-only system.
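A hedged sketch of how such a combined controller might be structured: an EWMA feedback loop on the post-develop CD error of finished lots, plus a feed-forward dose term computed from the incoming lot's measured top-film thickness. All sensitivities, targets, and gains below are invented for illustration and are not the values used in the work described above.

```python
CD_TARGET = 140.0    # nm, assumed target CD
DOSE_SENS = -2.5     # nm CD change per mJ/cm^2 (assumed linearization)
THK_SENS = 0.05      # nm CD change per nm of top-film thickness deviation
THK_NOMINAL = 60.0   # nm, assumed nominal top-film thickness
LAMBDA = 0.4         # EWMA weight for the feedback term

class DoseController:
    """Feedback (EWMA on CD error) plus feed-forward (stack predictor)."""

    def __init__(self, base_dose):
        self.dose = base_dose          # mJ/cm^2
        self.cd_error_ewma = 0.0

    def recommend(self, measured_thickness):
        # Feed-forward: predict the CD shift from the incoming lot's
        # measured stack and pre-compensate it with dose.
        predicted_shift = THK_SENS * (measured_thickness - THK_NOMINAL)
        return self.dose - predicted_shift / DOSE_SENS

    def feedback(self, measured_cd):
        # Feedback: fold the post-develop CD error of completed lots
        # into the EWMA and adjust the base dose accordingly.
        error = measured_cd - CD_TARGET
        self.cd_error_ewma = (LAMBDA * error
                              + (1 - LAMBDA) * self.cd_error_ewma)
        self.dose -= self.cd_error_ewma / DOSE_SENS

ctrl = DoseController(base_dose=20.0)
dose_for_lot = ctrl.recommend(measured_thickness=70.0)  # lot-specific dose
ctrl.feedback(measured_cd=142.0)                         # close the loop
```

The feed-forward term handles the L2L stack component before exposure, while the feedback loop removes the slower drifts that the stack predictor cannot see; this division of labor is the motivation for combining the two.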
The Mask Error Enhancement Factor (MEEF), commonly called the Mask Error Factor (MEF), has recently become an important metric in determining process requirements in the SIA roadmap. MEF generally varies inversely with CD and is often significantly above unity, indicating that mask CD errors are in effect magnified during the optical transfer to the wafer. Until recently, the SIA roadmap indicated that mask makers needed to allow for a 1.5X MEF allocation in the reticle budget. Discussion at recent industry workshops has indicated that this allocation may be underestimated. We have generated experimental results for vertical, horizontal, dense, and isolated lines as well as contact holes for feature sizes in the range of 150-250 nm. The dependence of MEF on the lens, and its variation, will be compared across several scanning exposure systems. The roles of numerical aperture (NA) and of different illumination settings, including conventional, annular, and quadrupole, will be measured and compared to simple theoretical expectations. Finally, MEF will be studied across the lens field and correlated to aberrations. A significant difference in mask error factor between horizontal and vertical lines will also be described in terms of feature size and lens aberrations.
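Since MEF is essentially the slope of wafer CD against mask CD expressed at wafer scale, it can be extracted from through-size measurements by a least-squares fit. The sketch below uses invented CD pairs and assumes a 4x reduction system; it is an illustration of the metric, not the measurement data from the work described above.

```python
# MEF = d(wafer CD) / d(mask CD at 1x): a value of 1 means mask errors
# transfer unchanged; values above 1 mean they are magnified.
REDUCTION = 4.0   # scanner reduction ratio (4x assumed)

def meef(mask_cds, wafer_cds, reduction=REDUCTION):
    """Least-squares slope of wafer CD versus wafer-scale mask CD."""
    mask_1x = [m / reduction for m in mask_cds]
    n = len(mask_1x)
    mx = sum(mask_1x) / n
    my = sum(wafer_cds) / n
    num = sum((x - mx) * (y - my) for x, y in zip(mask_1x, wafer_cds))
    den = sum((x - mx) ** 2 for x in mask_1x)
    return num / den

# Invented data: mask CDs (nm, mask scale) with deliberate feature
# biases, and the corresponding measured wafer CDs (nm).
mask = [560, 580, 600, 620, 640]     # 140-160 nm at wafer scale
wafer = [131.0, 139.5, 148.0, 156.5, 165.0]
print(round(meef(mask, wafer), 2))   # -> 1.7, i.e. errors magnified 1.7x
```

Repeating this fit per orientation, pitch, illumination setting, and field position yields the MEF maps discussed above.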