The Design Rule Manual (DRM) is a critical component in the introduction and release of new technology nodes. It is the definitive, detailed reference for all design rules and technology-node design requirements. The DRM is a contract between the foundry and the designer: designs must meet all documented requirements to be accepted for manufacture. The DRM's critical role in process design enablement demands a very high quality standard; the DRM must be accurate, reliable, and free of ambiguity. Qualification of the DRM is crucial as design rules become extremely complex with advancing technology. DRM teams must ensure that all descriptions and figures are correct and clear with respect to target requirements from the beginning of the technology development stage, and the qualification process should cover typical cases as well as corner and unexpected cases. Traditional methods of targeted pattern creation leave gaps in ensuring a high-quality DRM: they often miss complex scenarios, leading to incomplete DRM documentation or ambiguous descriptions. Ambiguity in the DRM leads to improper DRC rule coding and, in turn, erroneous DRC checking. This paper presents a synthetic pattern/layout generation approach to high-quality, high-coverage DRM and DRC qualification. The generated patterns flow into a post-generation analysis-and-fix step that helps discover and analyze issues while the initial design rules and DRC code are being developed. Guided random generation of legal layout patterns produces simple and complex pattern configurations that challenge the accuracy and consistency between the original intent of the complex design rules and the DRC rule deck, and the post-generation analysis-and-fix step helps identify locations of potential discrepancy.
Flushing out these discrepancies and ambiguities drives enhancements that converge on robust DRM documentation and on consistency between design-rule intent and DRC run-set implementation, from early development throughout the life cycle of process node deployment.
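The guided random generation described above can be sketched as rejection sampling against a toy rule set: propose random shapes and keep only those that remain legal against every shape already placed, so the pattern is clean by construction. This is a minimal illustration, not the paper's actual tool; the rule values, grid size, and function names below are assumptions made for the example.

```python
import random

# Toy design rules in arbitrary grid units (illustrative assumptions,
# not values from the paper).
MIN_WIDTH = 3   # minimum feature width
MIN_SPACE = 2   # minimum spacing between features
GRID = 30       # generation window is GRID x GRID

def spacing(a, b):
    """Edge-to-edge spacing between axis-aligned rectangles (x1, y1, x2, y2)."""
    dx = max(a[0] - b[2], b[0] - a[2], 0)
    dy = max(a[1] - b[3], b[1] - a[3], 0)
    return (dx ** 2 + dy ** 2) ** 0.5

def legal(rect, placed):
    """A candidate is legal if it meets min-width and keeps min-space."""
    if rect[2] - rect[0] < MIN_WIDTH or rect[3] - rect[1] < MIN_WIDTH:
        return False
    return all(spacing(rect, other) >= MIN_SPACE for other in placed)

def generate(n_shapes, seed=0, max_attempts=2000):
    """Guided random generation: propose random rectangles, keep only the
    legal ones, so the resulting pattern is rule-clean by construction."""
    rng = random.Random(seed)
    placed = []
    for _ in range(max_attempts):
        if len(placed) == n_shapes:
            break
        x = rng.randrange(GRID - MIN_WIDTH)
        y = rng.randrange(GRID - MIN_WIDTH)
        w = rng.randint(MIN_WIDTH, 2 * MIN_WIDTH)
        h = rng.randint(MIN_WIDTH, 2 * MIN_WIDTH)
        rect = (x, y, min(x + w, GRID), min(y + h, GRID))
        if legal(rect, placed):
            placed.append(rect)
    return placed
```

Feeding such generated patterns to the DRC deck and diffing the deck's verdict against the generator's built-in legality check is one way the analysis step could expose intent-versus-implementation discrepancies.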
At the core of design-technology co-optimization (DTCO) is design space exploration (DSE), in which different design schemes and patterns are systematically analyzed, and design rules and processes are co-optimized for optimal yield and performance before real products are designed. Synthetic layout generation offers a solution: with rules-based synthetic layout generation, engineers write rules that generate the realistic layouts they will later see in real product designs. This paper shows two approaches to generating full coverage of the design space and providing contextual layout. One approach relies on Monte Carlo methods; the other combines systematic and random methods applied to core patterns and their contextual layout. We also present a hierarchical classification system that catalogs layouts based on pattern commonality. The hierarchical classification is based on a novel algorithm that creates a genealogical tree of all the patterns in the design space.
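One way such a genealogical tree could be organized (a hedged sketch under assumptions, not the paper's algorithm) is as a prefix tree over ordered feature sequences: patterns sharing a longer common feature prefix sit deeper under the same ancestor node, so each internal node captures the commonality of its descendants. The feature encoding and all names below are illustrative.

```python
class GenealogyNode:
    """A node in a genealogical classification tree. Each node represents
    the layout features shared by every pattern beneath it."""
    def __init__(self):
        self.children = {}   # feature -> GenealogyNode
        self.patterns = []   # ids of patterns whose feature sequence ends here

def classify(root, features, pattern_id):
    """Walk (and extend) the tree along the pattern's ordered feature sequence."""
    node = root
    for feature in features:
        node = node.children.setdefault(feature, GenealogyNode())
    node.patterns.append(pattern_id)

def family(node):
    """All pattern ids in a subtree: one 'family' sharing a common ancestry."""
    ids = list(node.patterns)
    for child in node.children.values():
        ids.extend(family(child))
    return ids

# Illustrative catalog: each pattern is an ordered tuple of coarse features.
root = GenealogyNode()
classify(root, ("line", "end-to-end"), "P1")
classify(root, ("line", "end-to-end", "jog"), "P2")
classify(root, ("via", "array"), "P3")
```

Under this encoding, querying any internal node yields the family of all cataloged patterns that share that node's feature ancestry.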
Various multi-patterning processes with associated design methodologies have been deployed to address the patterning challenges of ArFi and of alternative solutions such as EUV, DSA, or nanoimprint. Process-variability prediction through compact models is sometimes limited for those multi-patterning processes used to compose a single final target. We may call such a sequential process a representative module for a design target layer that is not derived from a single litho-etch process but rather from the interaction between various layers. Key challenges in extending multiple patterning are managing design and tolerance variation across the multiple patterning steps with proper restrictions, and visualizing interlayer errors (bridging, pinching, and overlap). Additionally, visualization of final target layers and intermediate layers is important for process and design engineers.
We will demonstrate verification flows for different process modules that verify the failure mechanisms and aid in visualization, and then judge the areas for improvement with existing model-based solutions. We will also investigate possible areas for developing accurate residual-error prediction from compact models, since those errors accumulate across multiple process effects into the final CD measurement of the design target layers. This may open new dimensions of modeling process effects that have not been considered before, because those signatures were lumped together from process to process.
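As a minimal sketch of the kind of interlayer check discussed above (a toy rectangle representation with illustrative rule values, not the paper's verification flow), a cross-mask bridge check between two decomposition masks of a double-patterned layer might look like:

```python
def within(a, b, margin):
    """True if rectangles a and b (x1, y1, x2, y2) come closer than
    `margin` in both axes: a conservative Manhattan-style proximity test."""
    return not (a[2] + margin <= b[0] or b[2] + margin <= a[0] or
                a[3] + margin <= b[1] or b[3] + margin <= a[1])

def find_cross_mask_bridges(mask_a, mask_b, min_space=2):
    """Flag pairs of shapes on different decomposition masks whose spacing
    falls below the cross-mask minimum: candidate bridge sites on the
    final target layer, where overlay error could merge them."""
    return [(ra, rb) for ra in mask_a for rb in mask_b
            if within(ra, rb, min_space)]

# Two toy masks of a double-patterned layer (coordinates are illustrative).
mask_a = [(0, 0, 3, 3), (0, 10, 3, 13)]
mask_b = [(4, 0, 7, 3), (20, 10, 23, 13)]
violations = find_cross_mask_bridges(mask_a, mask_b)
```

A pinch check on the composed final target, and overlay-shifted variants of each mask, would follow the same pairwise pattern; visualizing the flagged pairs gives the interlayer-error views the abstract calls for.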