LEOMAX START-UP AGENT
System documentation, individual algorithm demonstrations and user manual for an ISO9XXX-compliant multi-agent artificial intelligence semiconductor start-up simulation tool with error prediction, progress monitoring and results diagnosis
© Nick Atchison 2005, 2006, 2007, 2008, 2009, 2010. Work in progress
E-mail: XXXpostmaster@ holomirage.com (remove the XXX spam blocker from the e-mail address)
Background: The first industrial revolution was enabled by the use of wind and water powered mechanisms to supplement human and animal work. These previously untapped energy sources allowed production efficiency to improve. The second industrial revolution was triggered by the increased mechanization of production enabled by the use of fossil fuels. Increased mechanization put a premium on well documented procedures related to complex manufacturing equipment and processes. The third industrial revolution was enabled by the development of cheap information processing equipment which allowed for automated documentation, concurrent analysis and multi-agent virtual offices. The third industrial revolution will place a premium on flexibility related to utilizing multi-agent cybernetic systems to automate New Product Introduction.
Intent: Present a Computer Aided New Product Introduction (NPI) multi-agent system which uses the Product Data Sheet as the key defining document. Once a boilerplate product data sheet is personalized, an NPI run card is developed, and the company plan needed to support the run card is automatically derived along with a planning simulator. The planning simulator converts into an automated progress monitoring tool capable of self-evaluation and performance diagnostics. It generates all reports, formulates yield predictions and provides expert change advice.
Basis: The ~1000 fabless semiconductor design companies have essentially the same structure, function and intended performance. The main differences between companies are found in their product data sheets.
Method: Use Comparative Systems Theory based Fractal Taxonomic Methods to build fractal-based simulations of NPI start-ups.
Procedure: See Table of Contents. (((Need URL ???)))
Result: A systematic presentation of concepts related to computer-assisted NPI utilizing cutting-edge multi-agent systems that can simulate and automate all company activities related to new product introduction, such as staff planning, specification generation, production planning, status report planning and AI-based yield enhancement analysis.
Table of Contents
I. Multi-Agent Cyber NPI Specifications
A) Specifies Generic Company System:
1. Specifies a company structure with 27 distinct Sections organized as 3 Groups which have 3 Departments each which in turn have 3 Sections each.
2. Specifies functional capability of each group, department and section.
3. Specifies Start-up Sequence of groups, departments and sections.
B) Specifies Required Analytical Procedures
1. Specifies company generator.
2. Specifies analytical algorithms.
3. Specifies Cyber NPI Multi-Agent system operation.
C) Specifies Cyber NPI System:
1. Multi-Agent NPI Specification – Lists the abstracts of the 28 patents related to the Cyber NPI System.
2. Data Requirements.
3. Results Expected – Explains a company organization starting with 3 Groups which have 3 Departments each which in turn have 3 Sections each, for a total of 27 Sections.
II. Demo of Algorithms needed for Automated Company Generator, Analytical Methods and Multi-Agent AI tool:
A) Data Sheet to Company:
1. Data Sheet to Run Card Demo
2. Run Card to Simulators Demo
3. Simulator to Analysis Demo
B) Standard Analytical Methods
1. Company Analysis Algorithm Demo
2. Production Analysis Algorithm Demo
3. Yield Analysis Algorithms Demo
3a Advanced Systematic Analytical methods:
3a1. Spatial Analytical Methods.
3a2. Temporal Analytical Methods.
3a3. Conditional Analytical Methods.
3b Comparative N-dimensional Analytical Methods: The Integrated Yield Management Triangle (IYM)
3b1. The Original IYM Triangle
3b2. The 6 Triangle Stack (STS)
3b3. The N-Dimensional, Holographic Analysis of Yield Variation and Cause
3c Algorithms needed for Automated Multi-Agent NPI systems:
3c1. Spine – A self-generating sequence of commands that extends itself based on the product data sheet, plus skyhook routines that perform the miracle of creation.
3c2. Unconscious Mind – An innate complex of functions that emerges from the ferment of the individual algorithms.
3c3. Conscious Mind – An innate complex of self functions that emerges from the ferment of cache memory. It provides the real time interface including speech recognition driven holographic avatars.
III. Cyber NPI System Operation:
A) Generic Company Generation
1. Load the company generator with the product data sheet.
2. Populate the functional capability of each group, department and section.
3. Initiate start-up sequence of groups, departments and sections.
B) Analytical Procedures Operation
1. Initiate company report system.
2. Initiate analytical algorithms.
3. Initiate the Cyber NPI System operation.
C) Cyber NPI System Operation
1. Real time, On Line Multi-Agent NPI Operations sequence.
2. Real time, On Line Data Uploading sequence.
3. Real time, On Line Cyber Adviser Interaction – Explains a systematic functional capability based on each of the 27 sections having 27 distinct functions.
THIS AREA IS UNDER CONSTRUCTION
MISSING SECTION I.
II-A Standard Analytical Methods
II-A(1) Company & NPI Simulation
II-A(2) Circuit Simulation
II-A(3) Manufacturing Simulation
II-B. Advanced Systematic Analytical methods
II-B(1) Spatial Analytical Methods
II-B(2) Temporal Analytical Methods
II-B(3) Conditional Analytical Methods
II-C. Comparative N-dimensional Analytical methods
II-C(1) The Integrated Yield Management Triangle(IYM)
The original IYM Triangle — Figure 1 — was developed by Nick Atchison and Ron Ross as a teaching tool to present a schematic representation of the data analysis techniques that were in use at the time, ~1996. The diagram was used to organize a hierarchy of analytical procedures capable of predicting the FAB yield and performing root cause analysis of process and design problems.
II-C(2) The 6 Triangle Stack (STS)
To correct limitations of the IYM Triangle, a schematic diagram of yield analysis consisting of a stack of identical, repeating analysis diagrams was developed. Before the stack of diagrams could be made, a simple yet general analysis method that could be used at all levels had to be developed. The elements of the analysis had to be hierarchically arranged so that analysis would move sequentially from the top general level to bottom root-cause level.
II-C(3) The N-Dimensional, Holographic Analysis of Yield Variation and Cause
To correct limitations of the STS Triangle, a schematic diagram of yield analysis consisting of a stack of identical, repeating analysis diagrams that corresponds to the “atomic” production flow chart was developed. Before the stack of diagrams could be made, a revised simple yet general analysis method that could be used at all levels had to be developed. The elements of the analysis had to be hierarchically arranged so that analysis would — at each step of the flow chart — be able to move sequentially from the top general level to bottom root-cause level.
III. Automated Multiagent NPI Planning, Performance Monitoring and Data Analysis System Specification, and explanatory Articles
IA - Fractal Organizational Structure
Fractal Organizational Structure provides a congruent, hierarchical company organization based on repeating cycles within cycles of structure-function-content.
1 Company comprised of 3 Groups with 3 Departments each with 3 Sections each = 27 Sections
An evenly branching explicit organizational structure is the simplest, low entropy method to model, control, and monitor the organization of fabless semiconductor companies. This structure is based on a general, generative structure that is inherent to all fabless startups. The organization departmental sequence is ordered by the sequence in which the activities of the department are required by a new product introduction. This evenly branching structure is reflected in a congruent company data base, specification numbering system, product numbers, manufacturing numbering system and sale order tracking number system.
This systems analysis breaks all economic endeavors into 3 basic activities (directorates A to C) with 3 basic responsibilities each (departments 1 to 9):
(A) Innovation — (1)Organizing, (2)Funding & (3)Predicting
(B) Development — (4)Developing, (5)Designing & (6)Instantiating
(C) Implementation — (7)Building, (8)Selling & (9)Delivering
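The 3-by-3-by-3 breakdown above can be sketched as a short script that enumerates all 27 sections in start-up order. The section-labeling format used here is an assumption for illustration, not part of the source numbering system:

```python
# Enumerate the 27 sections: 3 directorates (A-C), 9 departments, 3 sections each.
directorates = {"A": ["Organizing", "Funding", "Predicting"],
                "B": ["Developing", "Designing", "Instantiating"],
                "C": ["Building", "Selling", "Delivering"]}

sections = []
dept_no = 0
for group in "ABC":
    for dept in directorates[group]:
        dept_no += 1                      # departments are numbered 1 to 9
        for s in range(1, 4):
            sections.append(f"{group}{dept_no}.{s} {dept}")

print(len(sections))   # 27
```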
Each of the 9 departments has three sections, each defined by three basic activities of the department. This means there are 27 sections. Each of the sections has three basic specifications which relate to pre-tapeout activities. This number grows evenly to 9 specifications per section during full qualification, and to 27 per section during full production. This means that initially there are one general company specification, three group specifications, nine departmental specifications, 27 section specifications and 3 specifications for each section, all pre-packaged and similar for all semiconductor companies. The 81 pre-packaged section specifications carry the company up through tape-out. After tape-out, each specification generates three more specifications related to manufacturing. After Full Qualification, each specification again expands, creating three specifications each related to production. All specifications are pre-written as a vertical Excel column flow chart. Each stage of the Excel flow chart is expanded horizontally — annotated with pre and post tasks/bills of materials — to create the specification. The number of steps in the flow chart is fixed; the number of elements in each row can vary.
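As a rough sketch of the arithmetic above, the following assumes three stages (pre-tapeout, full qualification, full production) with 3, 9 and 27 specifications per section; the stage names and the function itself are illustrative assumptions:

```python
def spec_counts(stage):
    """Return (hierarchy specs, specs per section, total specs) for a stage."""
    per_section = {"pre-tapeout": 3, "full-qual": 9, "production": 27}[stage]
    hierarchy = 1 + 3 + 9 + 27   # company + group + departmental + section specs
    return hierarchy, per_section, hierarchy + 27 * per_section

# The 81 pre-packaged specifications are the 27 sections times 3 specs each.
print(spec_counts("pre-tapeout"))   # (40, 3, 121)
```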
Click on the following URLs to see the preliminary papers that lead up to the current topic:
This background paper proposes a hypothetical modular 10 department company structure with 10 operations that are repeated at 10 stages.
This background paper lists the ~1000-step sequence that occurs in most new product introductions and that the above model organization would actually have to perform.
This paper presents an Excel spreadsheet showing a model of 1 company which is comprised of 3 Groups with 3 Departments each with 3 Sections each = 27 Sections. This fractal-modular model company organization begins to merge the department classification with the activity classification system so that the overall system is a holographic organizational system.
Each NPI Operation Sequence is a linear series of ~1000 steps that can be derived from the Master Product Data Sheet.
Each step is described by a specification that is computer generated.
Each specification consists of a 3x3x3 array of elements.
Each element is a form that can be logically derived from the product data sheet and general operation specifications.
Other representations of the Nodes
IB - Automated NPI Specification Generation
This paper proposes a set of modular, evenly branching functions as the simplest, low entropy method to model, control, and monitor the function and specification of fabless semiconductor companies. Since most of the general functions of a startup are the same for all startups, it is possible to specify the general functions that are inherent to all fabless startups.
Note that the product data sheet which is expanded by technology, design and product engineering becomes the “super data sheet” that is used to personalize the general specification system. The super data sheet has an entry for each of the 1092 functions that specify any particular restrictions and requirements related to a given operation. This is less complicated than it seems because the only elements required to be specified for a new product are the deviations from the standard “master super product data sheet”. This list of deviations from the standard master engineering product data sheet is called the “Instantiation Data Sheet”. A computer program first substitutes the modified sections of the Instantiation data sheet into the “super data sheet”. The program then combines the information in the modified “super data sheet” with the general specification format to yield the personalized specification system.
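A minimal sketch of that substitution step, assuming the data sheets can be represented as key/value tables; all field names below are invented for illustration:

```python
master_super_data_sheet = {
    "vdd_max": "3.6 V",
    "package": "QFN-32",
    "esd_hbm": "2 kV",
}

# The Instantiation Data Sheet lists only the deviations from the master.
instantiation_data_sheet = {"package": "QFN-48"}

def personalize(master, deviations):
    """Substitute the deviations into a copy of the master super data sheet."""
    merged = dict(master)
    merged.update(deviations)
    return merged

super_data_sheet = personalize(master_super_data_sheet, instantiation_data_sheet)
print(super_data_sheet["package"])   # QFN-48 (deviation applied)
print(super_data_sheet["vdd_max"])   # 3.6 V (master value retained)
```

The personalized table would then be combined with the general specification format to yield the personalized specification system.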
It is important to note that the modular specification of the testing is important to the modular data analysis.
The master document is the quality plan. It contains the list of all authorized objects, activities and reports. The Quality Plan describes the structure of the specification system, contains the basic specification format and the basic super data sheet format. The quality plan resembles this paper.
The specification system is designed to take advantage of the “Hyper-Deck” technology in which all documents can be “Hyper-Linked” to each other. The specification system is planned with this capability in mind. Because all specs contain (point at) all other specifications, the system is holographic.
The specifications are the result of a generative process in which the basic specification form is “filled out” with information from the data sheet. Because the specifications are generated by the combination of a specification model and a fill-out table which is in turn modified by a differences table, the specification system is considered to be very regular, consistent and error free. Because every statement in the super data sheet is tested, the testing is said to be “congruent” to the data sheet, and the test generation, optimization and reduction can be automated. Because the test generation is automated, the characterization and analysis can be automated.
Because of the extreme congruence and modularity of product specification, design, simulation, fabrication, test and analysis, all data can be directly compared. This is called an anything-to-anything comparator. With some effort it is possible to make an everything-to-everything comparator. It is functionally like a database without keys where all data is held in a giant n-dimensional flat file. Such a totally interrelated relational database is called a holoplex.
Each specification is written like an experiment. Each of the specifications has a Title, a Number, a Method, a Procedure and a Results section. The method section consists of a flow chart or MS Project diagram containing 3 actions. The procedure section consists of an Excel data sheet with the 3 actions listed in column Z and pre-actions and post-actions listed in sequence on either side of the Z column. The results section consists of the reporting forms and analysis requirements. The three actions consist of verifying the preparations, performing the operation and verifying the results.
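One possible in-memory representation of this specification layout; the field names and default action wording are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Specification:
    title: str
    number: str                       # e.g. "XYZ-A00-REV" per the numbering scheme
    actions: tuple = ("verify preparations", "perform operation", "verify results")
    pre_actions: dict = field(default_factory=dict)    # action -> tasks left of column Z
    post_actions: dict = field(default_factory=dict)   # action -> tasks right of column Z
    results: list = field(default_factory=list)        # reporting forms, analysis requirements

spec = Specification(title="Wire Bond", number="C71-A00-REV01")
print(spec.actions[1])   # perform operation
```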
Click on the following URL to see a spreadsheet of the expansion of the number of specs related to the developmental stage of the company:
Click on the following URL to see a preliminary paper:
Click on the following URL to see a preliminary paper on fractal specification structure and taxonomy:
The specification system describes a set of modular, evenly branching functions as the simplest, low entropy method to model, control, and monitor the function and specification of fabless semiconductor companies. Since most of the general functions of a startup are the same for all startups, it is possible to specify the general functions that are inherent to all fabless startups. A spreadsheet column listing the one company prime spec, the 3 group specifications, the 9 departmental master specifications and the 27 section master specifications, each section with three stage-one specifications, each with 3 stage-two sub-specifications, each with 3 stage-three sub-sub-specifications, would have 1092 rows. This spreadsheet with 1092 entries would provide the master company runcard. On either side of this column would be the pre and post activities required by the operation. If the number of pre and post operations averages ~10, then there are ~10,920 atomic-level operations in a new product introduction spreadsheet runcard.
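The 1092-row count can be checked as the sum of the level sizes below the single company prime spec (3 + 9 + 27 + 81 + 243 + 729), a minimal sketch:

```python
levels = [3 ** k for k in range(1, 7)]   # groups ... stage-three sub-sub specs
runcard_rows = sum(levels)
print(levels)              # [3, 9, 27, 81, 243, 729]
print(runcard_rows)        # 1092
print(runcard_rows * 10)   # 10920 atomic operations at ~10 pre/post steps each
```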
In fact, the 1092 steps for New Product Introduction or Semiconductor Startup are generic and the same for all companies.
It is not until the rows related to each of the 1092 stages are filled in that the specification system becomes personalized.
The personalization of the specification system is accomplished by merging the product “super” data sheet with the specification system.
Marketing proposes a product data sheet which is expanded by technology, design and product engineering. This is done by modifying a generic “super product data sheet”. The generic super data sheet has an entry for each of the 1092 functions that specifies all particular restrictions and requirements related to a given product. Note that all testing, qualification and reliability plan requirements are listed in the super data sheet. Note that the specification system is created by a generative fractal program. This ensures the specification-base hyper-deck is consistent, congruent and error free.
Because the specification system is based on an Excel spreadsheet format, it easily becomes the basis of a cybernetic runcard that specifies and tracks the performance of activities. Each specification points at (refers to) the specifications above it in the specification hierarchy.
The basic structure of any specification is as follows:
Number/ Data Base Location — Spec# 000-000-REV
Purpose: Group Requirements Statement — Spec# X00-000-REV
Scope: Department Requirements Statement — Spec# XY0-000-REV
Responsibility: Section Verification Requirements Statement — Spec# XYZ-000-REV
Method: Primary Function Specification - Spec# XYZ-A00-REV
Procedure: Secondary Function Specification - Spec# XYZ-AB0-REV
Results/Metrics: Tertiary Function Specification - Spec# XYZ-ABC-REV
The Marketing Part Number = MK#
The Ordering Part Number = MK# + OPTIONS # ( There is a set list of options which correlates with a given qualified manufacturing flow number.)
The Manufacturing Part Number = MK# + OPTIONS# + Index#1 (There is an index of each MK# + OPTIONS# that contains a list of all 1092 specification rev numbers in sequence. Any change in the REV number of any specification causes an update in the Index#1.)
Lot Number is a 3-character base-99 Index2# that points to a list of all vendor lot numbers associated with the lot build; this list includes the Manufacturing Part#. This number points to all test and qualification test data. When the data is available this number is extended by 3 additional characters: 1 extra character for wafer number and 2 extra characters for die number.
The marking number contains the Ordering Part Number and the extended Lot Number. There may also be additional numbers added to the package, such as the assembly date code. The key number is the Extended Lot Number. It provides a method to reach back through all specifications, BOMs and runcards used during the construction of the part. This is extremely important to automatic performance analysis methods.
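A hedged sketch of how these numbers could compose, assuming simple string concatenation; the field widths, separators and example values are invented, not taken from the source:

```python
def manufacturing_part_number(mk, options, index1):
    """MK# + OPTIONS# + Index#1."""
    return f"{mk}-{options}-{index1}"

def extended_lot_number(lot3, wafer1, die2):
    """3-character lot number extended by 1 wafer character and 2 die characters."""
    assert len(lot3) == 3 and len(wafer1) == 1 and len(die2) == 2
    return lot3 + wafer1 + die2

mpn = manufacturing_part_number("MK1234", "OPT07", "IX0042")
lot = extended_lot_number("A7Q", "5", "13")
print(mpn)   # MK1234-OPT07-IX0042
print(lot)   # A7Q513
```

Because Index#1 and the extended lot number index every specification revision and vendor lot, a marking on the package can be traced back to the exact document set in force during fabrication.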
This next URL presents a simple table discussed above.
II — Automated NPI Monitoring & Performance Analysis Methods
The third segment of this paper proposes modular, evenly branching explicit testing as the simplest, low entropy method to initiate and monitor the performance of fabless semiconductor companies. This performance monitoring and analysis is based on a general approach that is inherent to all fabless startups. The monitoring and analysis methods presented are at the forefront of modern AI capability.
Note that the product data sheet which is expanded by technology, design and operations becomes the “engineering data sheet” that is used to personalize the general specification system. The super data sheet has an entry for each of the 1092 functions that specify any particular restrictions and requirements related to a given operation. It is important to note that the modular specification of the testing is important to the modular data analysis.
The stability of the product is reported as the deviation from the ideal performance stated in the “super product data sheet”. The average wafer FAB has over 2000 SPC data sheets derived from measurements taken on wafers that are updated hourly. Some 50% of the ~1000 units of equipment used to fabricate the wafer may be generating several gigabytes of information a day via their electronic chart recorders. At e-test, 200 to 500 tests are performed on up to 10 sites per wafer on as many as 24 production wafers per lot. At wafer sort, all die on all wafers may be tested as many as 5 times. At package test, each package may be tested up to 5 times. Quality assurance and reliability add additional complexity to the testing process and enormous amounts of data. All of the tests mentioned so far need to be cross-correlated in a meaningful way. This work must be done with automated data analysis systems that can provide early warnings of subtle, aberrant changes that are indicative of pending disaster. When defects do occur, their root cause must be located and corrected quickly.
The key to the advanced AI capability is derived from a built-in “Holographic” production numbering system. In the past I have gone to great lengths to convert widely disparate numbering systems into one “tag” system that allowed me to program data analysis routines using “C” pointer math. This is essentially like machine language — the fastest way to do things on a computer. The manufacturing numbering system and customer ordering numbering system are based on and congruent to the modular organizational format discussed so far. The long manufacturing number has two main parts. The first part is a specification index number pointing to a list of the specifications and the date code of each specification that was active during the fabrication of the part. The second portion of the manufacturing part number is a serial number of an index of all of the vendor part numbers applied to the manufacturing lot. For parts that have wafer and die identification, the index number is distinct for each die.
All vendors now publish their foundry data in Excel data sheets that are available on the Web. Simple AWK or Perl programs ingest the vendor-supplied data into tables that are digested by more complex data analysis programs.
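In Python the same ingest step might look like this, assuming the vendor sheet is exported as CSV with a header row; the file layout and column names are hypothetical:

```python
import csv
import io

# Stand-in for a downloaded vendor export.
vendor_csv = io.StringIO(
    "lot,param,value\n"
    "A7Q,vth_n,0.42\n"
    "A7Q,vth_p,-0.45\n"
)

rows = list(csv.DictReader(vendor_csv))
table = {(r["lot"], r["param"]): float(r["value"]) for r in rows}
print(table[("A7Q", "vth_n")])   # 0.42
```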
IIA. Non-Systematic Data Analysis Tools:
Non-systematic data analysis tools such as SPC charts are in common use. Alarms occur after the fact, once a single measurement value excursion or tendency is detected. Standard procedures will not be discussed here. Appendix I contains a list of articles and example programs related to non-systematic yield analysis.
IIB. Systematic Yield Management Tools:
In the semiconductor industry today it is common to use off-the-shelf commercial data analysis and new product introduction packages which can cost over a million dollars a year to support. The articles presented in this web page provide access to some of the enabling statistical techniques on which the commercial tools are based. The tools can be programmed and maintained by the individual engineer. Many of the tools can be used with Excel. Before going into the actual data analysis methods, it is first important to understand the fundamental structure of Systematic Yield Management as a comprehensive analytical system. Next is a discussion of Spatial, Temporal and Relational Yield Tools.
IIB(1) Spatial Yield Tools:
There is actually a wide range of spatial yield tools such as die map, stepper field map, wafer map, boat map, etc. The next picture shows a wafer zone analysis plot. I developed this method after reviewing 4,000 wafer maps from one product line. The original analysis method used Excel spreadsheets. In later renditions I converted the analysis to RS1, C and C++. The yield of each machine in FAB and Test was monitored by zone. This was done because the different machines in FAB tend to have lower yields in specific regions of the wafer. Taking a simple average of the data values by wafer will decrease the accuracy of the analysis. Most FAB data analysis tools now do this kind of analysis automatically.
WAFER ZONE MAP
This is a wafer map that compares two values to each other in wafer space.
This could be a comparison of one wafer sort value to another, or an e-test value vs. a wafer sort value. When this type of plot is used in a more complex form, where an e-test value is plotted against a wafer sort value and against a post burn-in leakage test, the basis for whole-system holistic data analysis is developed.
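A minimal sketch of the zone idea behind these plots: bin each die into a radial zone and compute yield per zone instead of a single wafer average. The zone boundaries and die data below are invented for illustration:

```python
import math

def zone(x, y, edges=(1.0, 2.0, 3.0)):
    """Return the radial zone index (0 = wafer center) for die coordinates."""
    r = math.hypot(x, y)
    for i, edge in enumerate(edges):
        if r <= edge:
            return i
    return len(edges)

dies = [(0.2, 0.1, 1), (0.5, 0.5, 1), (1.5, 0.0, 1),
        (1.8, 0.4, 0), (2.5, 0.1, 0), (2.6, 0.8, 1)]   # (x, y, pass flag)

by_zone = {}
for x, y, ok in dies:
    z = zone(x, y)
    passed, total = by_zone.get(z, (0, 0))
    by_zone[z] = (passed + ok, total + 1)

for z in sorted(by_zone):
    passed, total = by_zone[z]
    print(z, passed / total)   # per-zone yield
```

A machine with an edge-specific problem shows up as a low yield in the outer zone while the wafer average barely moves.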
IIB(2) Temporal Yield Tools:
Yield as a function of time. (Quad Plot to be added.) Although the Quad Plot contains four different plots, of which only one plots results with a time X axis, they all actually show occurrences that are time dependent.
IIB(3) Relational Yield Tools:
Note that each element of the production flow chart is reviewed and classified as to the type (occurrence pattern) of faults it can cause. In this case the relationship between the values observed is considered in terms of their distribution — their relation to one another. Different equipment units or processes can create different types of distributions.
The review of the type of occurrence pattern observed begins to focus attention on possible causes.
Chart #1 Diagrammatic Analysis of distribution Types
Chart #2 Weighted Fish Diagram Classifying Problem Affect by Distribution Effect
Chart #3 Table of Diagrammatic Analysis Results
1) Initial Bond Strength
Process Settings: Governed by specification — would cause problems to the whole lot
Polyimide trapping: Not seen as a problem at ASE incoming inspect
Pad Metal Damage: Under process control — all die processed pass incoming inspect
Contamination: Under process control — all die processed pass incoming inspect
2) Reduction of Bond Strength
Glue On BP Before Bonding: Under process control — all die processed pass incoming inspect
High Dt during Assembly: DOE testing shows that this is not a problem
Glue Polymer on BP: Under process control — all die processed pass incoming inspect
Purple Plague: DOE testing shows that this is not a problem
3) Increased Snap Forces
Over Temp @ B.I.: Under process control — all die processed pass outgoing inspect
Package Warping: Checked a failing package — was not warped
MSL 3/5 Con B/C: Checked several failing lots
Assembly Damage: Causes gross leakage
4) Freak Catastrophes
Probe Pad Cratering: Fail rate is independent of the number of probes
Glue Touches Ball Bond: Proven to be capable of being a cause that is occurring at this time
Glue Touches Bond Wire: Proven to be capable of being a cause that is occurring at this time
Glue/Die Edge Over Flow: Proven to be capable of being a cause that is occurring at this time
Systematic Yield Methodology: (SYM)
The basic analytical procedure that underlies all SYM methods is an everything-to-everything comparison program, originally written in the RS1 language, which was known as “V0D0”. In the early RS1 format, analysis took 6 days to run and brought the Engineering Alpha VAX work schedule to a standstill. I rewrote the program in C using 3-level pointer math and the run time dropped to 6 seconds. Once this tool was up and running, many complex SYM tools were developed, including Product Wafer Sort Yield Sensitivity to E-test Parameter Value Analysis. This tool became known as Product Sensitivity Analysis — PSA. TI licensed PSA to DataPower, which was bought out by PDF Solutions. The first complex Web-based version of V0D0 used a web page as the GUI which called CGI tools to activate Unix shell programs which in turn called the C code. The graphic output of this system was accomplished when the C code finished the data table and returned control to the Unix shell program, which then called a Perl program using the GD graphics module to generate and return the graphs to the user's web page.
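The core of such an everything-to-everything pass can be sketched in a few lines: correlate every e-test parameter with sort yield and rank by the magnitude of the correlation, which is the essence of the PSA idea. The data and parameter names below are invented for illustration:

```python
import math
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

etest = {                                   # per-lot e-test parameter values
    "vth_n":   [0.40, 0.42, 0.44, 0.46, 0.48],
    "rs_poly": [21.0, 20.5, 22.0, 21.5, 20.0],
}
sort_yield = [0.91, 0.88, 0.84, 0.80, 0.77]

ranked = sorted(etest, key=lambda p: abs(pearson(etest[p], sort_yield)),
                reverse=True)
print(ranked[0])   # the parameter most correlated with sort yield
```

The production tools do this for hundreds of parameters against many responses at once, which is why the pointer-math rewrite mattered.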
The image below is a screen shot of a graphic output developed using “G”, a public domain 3D stereo graphic language I developed with H. Wolderidge, which uses a PC-compatible version of the Caltech Intermediate Format (CIF) language. G will be presented later in the appendix.
PSA - Stochastic Graphical Analysis Using “G”
Articles related to Systematic Semiconductor Data Mining Methods:
Patents related to Systematic Semiconductor Data Mining Algorithms:
Patents by Nick Atchison, Nick Atchison & Ron Ross, or Nick Atchison & Brad Ferrell. You can use the CROSS-REFERENCES TO RELATED APPLICATIONS and Referenced By sections of the patents to locate other related patents by other individuals.
IIC. Holographic Yield Management Tools:
HYM is the next step up from the Systematic Yield Methodology (SYM) in that it does not simply compare two or three variables at once. HYM looks at new product development as an integrated whole. The IYM Triangle is the first attempt to diagram the analytical process as a whole. It presents a hierarchical schematic of FAB/E-Test/W-Sort data and related analysis methods.
A clear understanding of the IYM Triangle is necessary because the choice of the analytical sequence used to find the root cause of a given problem is not simple and varies according to the type of problem. Very often, the power of the tools can and does overwhelm the non-expert user. On the other hand, an expert yield analyst can do the required extractions and analyses using a simple SQL database or — in the case of a start-up — a file based system. In fact, all 42 basic analytical methods now in commercial and home-grown data analysis packages were developed by expert data analysts working for major semiconductor companies. The commercial data analysis package vendors have pulled the diverse methods used by the experts together into more accessible and usable tools.
In-Line Statistical Process Control gathers the data used at the bottom of the IYM Triangle. At each tier of the IYM Triangle more data are added. This means that data from each operation can be analyzed laterally (within tier) and vertically (tier to tier) during root cause analysis. Non-Systematic Statistical Process Control looks at data at each operation. Systematic Yield Analysis looks for correlations between within-tier and tier-to-tier data. Holographic Yield Analysis uses Hypergraphic methods to look at all cross correlations as holistic phenomena. The IYM Triangle provides a schematic for top down diagnosis of the root cause of a yield problem.
The IYM Triangle
The original IYM Triangle — Figure 1 — was developed by Nick Atchison and Ron Ross as a teaching tool to present a schematic representation of the data analysis techniques that were in use at the time, ~1996. The diagram was used to organize a hierarchy of analytical procedures capable of predicting the FAB yield and performing root cause analysis of process and design problems. Note that the IYM Triangle is essentially a binomial hierarchy.
The 6 Triangle Stack
To correct limitations of the IYM, a schematic diagram of yield analysis consisting of a stack of identical, repeating analysis diagrams was developed. Before the stack of diagrams could be made, a simple yet general analysis method that could be used at all levels had to be developed. The elements of the analysis had to be hierarchically arranged so that analysis would move sequentially from the top general level to bottom root-cause level. Figure 2 shows the Six Stack Analytic Triangle that was developed to eliminate the limitations of the IYM method.
A new systematic root cause analysis method has been developed that has proved to be instrumental in finding the root cause of low yield at package test, assembly, wafer sort, e-test and all FAB process stages. In this paper, a simple six-level diagram is presented and compared to the older single-triangle IYM diagram to explain the enhancements related to the new stacked triangle method. The sequence of operations described in the hierarchical structure of the 6-level schematic diagram describes a sequential analytical method that limits the number of analyses that need to be done. By working through the levels correctly, only the critical analyses are done.
Advanced Holographic Yield Prism (HYP) Tool:
To correct limitations of the previous schematic diagrams of yield analysis, a new schematic consisting of a stack of identical, repeating analysis diagrams was developed. Before the stack of diagrams could be made, a simple yet general analysis method that could be used at all levels had to be developed. The elements of the analysis had to be hierarchically arranged so that analysis would move sequentially from the top general level to bottom root-cause level. This diagram is called the Holographic Yield Prism (HYP). The HYP is needed to represent the more complex temporal, spatial and conditional relationships of the entire manufacturing flow. HYP is also known as “Derivational Yield Analysis”.
Note that the HYP is a trinomial hierarchy. The HYP tool set includes extensive use of “Hyper Graphics”. Hyper Graphics are graphs with many axes that show the relationship between Spatial, Temporal and Relational data in one graph. A unique set of analytical tools is used to prepare the graphs, performing calculations using “Compressed Data”.