1

Simulation model simplification in semiconductor manufacturing

Stogniy, Igor 16 November 2021
Although discrete event simulation has existed for more than 50 years, sufficiently large simulation models covering several enterprises are still unknown, and the simulation models that do exist in industry are usually used to test individual scenarios rather than for optimization. The main reason is their high computational cost. Simplified models could be a solution, but this problem has not been investigated sufficiently, and this dissertation is devoted to it. The problem can be stated briefly as a question: how can a simulation model be simplified so that the run time of the simplified model is minimized and its accuracy maximized? The answer is not simple and requires several subproblems to be solved; this thesis formulates them and proposes ways to solve them.

To simplify simulation models under conditions close to real ones, statistical models developed specifically for this purpose are used. Based on experimental data, the thesis analyzes various ways of aggregating process flows. The main simplification method, however, is replacing tool sets with delays. Two ways of using delays are considered: distributed and aggregated delays. The second approach obviously reduces the run time of the simplified model more, but distributed delays make it possible to identify meaningful effects that arise from the simplification, and comparing the two methods is instructive in itself. A significant problem is the criterion for selecting the tool set to be substituted; ten heuristics, each with two variations, are considered for this purpose. Another problem is the calculation of the delays, for which three variants are considered and compared in terms of the accuracy of the simplified simulation models. Two dispatching rules are considered: First In First Out and Critical Ratio. The first provides more predictable behavior of the simplified models and makes the resulting effects easier to understand; the second tests the applicability of the proposed simplification methods under conditions close to the real world.

An important problem is the analysis of the results. Although this issue is well studied in the simulation field, it has its own nuances when the accuracy of simplified models is analyzed. Various recommendations found in the scientific literature were tested experimentally in this thesis. It turned out that not all traditional accuracy measures can be used adequately to analyze simplified models, and that it is additionally worth using techniques that in simulation modeling are usually applied to the analysis of input data. In total, about 500,000 experiments were performed and more than 2,000 reports were generated.
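The two dispatching rules mentioned above reduce to simple selection functions over the lot queue. The following is a generic, illustrative sketch of First In First Out and Critical Ratio with invented lot data and field names; it is not the dissertation's implementation.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Lot:
        lot_id: str
        arrival_time: float    # when the lot joined the queue
        due_date: float        # target completion time for the lot
        remaining_work: float  # remaining processing time on the lot's route

    def next_lot_fifo(queue: List[Lot], now: float) -> Lot:
        """First In First Out: serve the lot that has been waiting longest."""
        return min(queue, key=lambda lot: lot.arrival_time)

    def next_lot_critical_ratio(queue: List[Lot], now: float) -> Lot:
        """Critical Ratio: serve the lot with the smallest
        (time remaining until due date) / (remaining processing time)."""
        return min(queue, key=lambda lot: (lot.due_date - now) / lot.remaining_work)

    # Invented example lots.
    queue = [
        Lot("L1", arrival_time=0.0, due_date=50.0, remaining_work=30.0),
        Lot("L2", arrival_time=5.0, due_date=20.0, remaining_work=18.0),
    ]
    print(next_lot_fifo(queue, now=10.0).lot_id)            # L1 (arrived first)
    print(next_lot_critical_ratio(queue, now=10.0).lot_id)  # L2 (ratio 0.56 < 1.33)

A ratio below one means the lot will be late even if it never waits again, which is why Critical Ratio reorders the queue far more aggressively than FIFO and leads to less predictable behavior of the simplified models.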
Most of the figures presented in this thesis are based on the reports of greatest interest.

Contents:
1 Introduction
  1.1 Preamble
  1.2 Scope of the research
  1.3 Problem definition
2 State of the art
  2.1 Research object
    2.1.1 Tool sets
    2.1.2 Down and PM events
    2.1.3 Process flows
    2.1.4 Products
  2.2 Simplification
    2.2.1 Simplification approaches
    2.2.2 Tool set and process step simplification
    2.2.3 Process flow and product nomenclature simplification
    2.2.4 Product volume and lot release rule simplification
  2.3 Discussion about bottleneck
    2.3.1 Bottleneck definitions
    2.3.2 Why do we consider bottlenecks?
    2.3.3 Bottleneck detection methods
3 Solution
  3.1 Design of experiments
    3.1.1 α – forecast scenario
    3.1.2 β – lot release strategy
    3.1.3 γ – process flow aggregation
    3.1.4 δ – delay position
    3.1.5 ε – dispatching rule
    3.1.6 ζ – sieve functions
    3.1.7 η – delay type
    3.1.8 Experimental environment
  3.2 Experiment analysis tools
    3.2.1 Errors
    3.2.2 Deltas
    3.2.3 Correlation coefficients and autocorrelation function
    3.2.4 T-test, U-test, and F-test
    3.2.5 Accuracy measurements based on the probability density function
    3.2.6 Accuracy measurements based on the cumulative distribution function
    3.2.7 Simple example
    3.2.8 Simulation reports
    3.2.9 Process step reports
    3.2.10 Model runtime characteristics
4 Evaluation
  4.1 Scenario “Present”, static product mix and all process flows (α1, β1, γ1)
    4.1.1 Similarity and difference of errors and deltas for static product mix (β1)
    4.1.2 “Butterfly effect” of Mean Absolute Error (MAE)
    4.1.3 “Strange” behavior of correlation and autocorrelation
    4.1.4 “Pathological” behavior of Mean Absolute Error (MAE)
    4.1.5 Lot cycle time average shift
    4.1.6 Delay type (η) usage analysis
    4.1.7 Introduction to sieve function (ζ) analysis
    4.1.8 Delay position (δ) analysis
    4.1.9 δ2 calibration (improvement of the lot cycle time average shift)
    4.1.10 Using t-test, U-test, and F-test as accuracy measurements
    4.1.11 Using accuracy measurements based on the probability density function
    4.1.12 Using accuracy measurements based on the cumulative distribution function
    4.1.13 X-axes of the accuracy measurements
    4.1.14 Sieve function (ζ) comparison (accuracy comparison)
  4.2 Scenario “Present”, dynamic product mix and all process flows (α1, β2, γ1)
    4.2.1 Modeling time gain, autocorrelation, correlation, and MAE in β2 case
    4.2.2 Errors and deltas in β2 case
    4.2.3 Lot cycle time average shift in β2 case
    4.2.4 Accuracy measurement in β2 case and accuracy comparison
  4.3 Scenario “Past + Future”, dynamic product mix and all process flows (α2, β2, γ1)
    4.3.1 Delays in α2 case
    4.3.2 Deltas, correlation, and MAE in α2 case
    4.3.3 Accuracy comparison
  4.4 Process flow aggregation (α1, β1, γ1) vs. (α1, β1, γ2) vs. (α1, β1, γ3)
    4.4.1 Lot cycle time average
    4.4.2 Lot cycle time standard deviation
    4.4.3 Correlation, Mean Absolute Error, and Hamming distance
    4.4.4 Accuracy comparison
  4.5 Additional experiments of gradual process step merge (α1, β1, γ3)
    4.5.1 Gradual merge experimental results
    4.5.2 Theoretical explanation of the gradual merge experimental results
  4.6 Process flow aggregation (α1, β2, γ1) vs. (α1, β2, γ2) vs. (α1, β2, γ3)
    4.6.1 Correlation, MAE, and deltas
    4.6.2 Accuracy comparison
  4.7 Process flow aggregation (α2, β2, γ1) vs. (α2, β2, γ2) vs. (α2, β2, γ3)
    4.7.1 Delays in {α2, γ2} case
    4.7.2 Accuracy comparison
5 Conclusions and outlook
6 Appendices
  6.1 Appendix A. Simulation reports overview
  6.2 Appendix B. Sieve functions comparison for {α1, β1, γ1, δ2_cal, ε1}
7 References
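To make the tool-set-to-delay substitution concrete, the following is a minimal discrete-event sketch, assuming the SimPy package: a tool set is first modeled explicitly as a multi-server FIFO resource, then replaced by a delay drawn from the cycle times observed in the detailed run, roughly the aggregated-delay idea. All parameters are invented for the illustration; this is not the dissertation's simulation model.

    import random
    import statistics
    import simpy

    # All parameters are invented for this illustration.
    PROCESS_TIME = 2.0   # mean processing hours per lot at the tool set
    INTERARRIVAL = 0.8   # mean hours between lot arrivals
    N_TOOLS = 3          # identical tools in the set
    HORIZON = 2000.0     # simulated hours

    def run_detailed():
        """Detailed model: lots queue FIFO for one of N_TOOLS tools and are processed."""
        random.seed(1)
        env = simpy.Environment()
        tools = simpy.Resource(env, capacity=N_TOOLS)
        cycle_times = []

        def lot(env):
            start = env.now
            with tools.request() as req:
                yield req
                yield env.timeout(random.expovariate(1.0 / PROCESS_TIME))
            cycle_times.append(env.now - start)

        def source(env):
            while True:
                yield env.timeout(random.expovariate(1.0 / INTERARRIVAL))
                env.process(lot(env))

        env.process(source(env))
        env.run(until=HORIZON)
        return cycle_times

    def run_simplified(delay_sample):
        """Simplified model: the whole tool set is replaced by a single delay
        drawn from the cycle times observed in the detailed run."""
        random.seed(2)
        env = simpy.Environment()
        cycle_times = []

        def lot(env):
            start = env.now
            yield env.timeout(random.choice(delay_sample))
            cycle_times.append(env.now - start)

        def source(env):
            while True:
                yield env.timeout(random.expovariate(1.0 / INTERARRIVAL))
                env.process(lot(env))

        env.process(source(env))
        env.run(until=HORIZON)
        return cycle_times

    detailed = run_detailed()
    simplified = run_simplified(detailed)
    print("detailed   mean cycle time:", round(statistics.mean(detailed), 2))
    print("simplified mean cycle time:", round(statistics.mean(simplified), 2))
    print("shift of the mean:", round(statistics.mean(simplified) - statistics.mean(detailed), 2))

Here the shift is small by construction, since the simplified run merely resamples the detailed cycle times; in a full factory model, where the substituted tool set interacts with the rest of the line, such shifts are exactly what the accuracy analysis in the dissertation has to quantify.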
2

Quality based scheduling for an example of semiconductor manufactory

Doleschal, Dirk, Schöttler, Elisa Sophie 30 April 2021
Quality is an important measure in semiconductor manufacturing. Because yield is directly affected by the quality of the manufacturing process, this paper presents a quality-based scheduling approach and compares different methods, such as dispatching, MIP, and CP, with respect to different objectives. To test the methods, a benchmark model of a semiconductor fab is built in which a lithography work center is modeled in detail and the rest of the fabrication is represented only as a delay station. With this model, the repeatability of an example lithography step is investigated. The investigation assumes that each lithography tool has an offset that is transferred to the printed structure, so the quality of a product is best when the offset from one layer to the next layer is minimized.
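One way to read the dispatching variant of such a quality-based approach is as a greedy assignment of waiting lots to free lithography tools that keeps the layer-to-layer overlay mismatch small. The sketch below is only an illustration of that idea with invented tool offsets and lot data; the MIP and CP variants compared in the paper are not shown, and none of the names or numbers come from the benchmark model.

    from dataclasses import dataclass
    from typing import Dict, List, Optional, Tuple

    # Hypothetical overlay offsets (nm) of the lithography tools.
    TOOL_OFFSET: Dict[str, float] = {"litho_A": 1.5, "litho_B": -0.8, "litho_C": 0.2}

    @dataclass
    class Lot:
        lot_id: str
        prev_layer_tool: Optional[str]  # tool that exposed the previous layer, if any

    def mismatch(lot: Lot, tool: str) -> float:
        """Absolute offset difference between the candidate tool and the tool used
        for the lot's previous layer; zero if the lot has no exposed layer yet."""
        if lot.prev_layer_tool is None:
            return 0.0
        return abs(TOOL_OFFSET[tool] - TOOL_OFFSET[lot.prev_layer_tool])

    def dispatch(queue: List[Lot], free_tools: List[str]) -> List[Tuple[str, str]]:
        """Greedy quality-based dispatching: repeatedly pick the (lot, tool) pair
        with the smallest layer-to-layer mismatch until lots or free tools run out."""
        queue, free_tools, assignments = list(queue), list(free_tools), []
        while queue and free_tools:
            lot, tool = min(((l, t) for l in queue for t in free_tools),
                            key=lambda pair: mismatch(pair[0], pair[1]))
            assignments.append((lot.lot_id, tool))
            queue.remove(lot)
            free_tools.remove(tool)
        return assignments

    lots = [Lot("L1", "litho_A"), Lot("L2", "litho_B"), Lot("L3", None)]
    print(dispatch(lots, ["litho_A", "litho_C"]))
    # [('L1', 'litho_A'), ('L3', 'litho_C')] -- L1 returns to its previous tool

A MIP or CP formulation would decide the assignments and their timing jointly rather than greedily, which is essentially the trade-off such a comparison of methods explores.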
3

Development and Simulation Assessment of Semiconductor Production System Enhancements for Fast Cycle Times

Stubbe, Kilian 08 March 2010
Long cycle times in semiconductor manufacturing are an increasing challenge for the industry and create a growing need for breakthrough approaches to reduce them. Small lot sizes and the conversion of batch processes to mini-batch or single-wafer processes are widely regarded as promising means for a step-wise cycle time reduction. Our analysis with discrete-event simulation and queueing theory shows that smaller lot sizes and the replacement of batch tools with mini-batch or single-wafer tools are beneficial, but that lot size reduction loses its effectiveness once the lot size is reduced by more than half. Because these results alone are not fully convincing, we develop a new semiconductor tool type that further reduces cycle time through lot streaming, building on the lot size reduction efforts. We show that this combined approach can lead to a cycle time reduction of more than 80%.
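A back-of-the-envelope queueing calculation illustrates why lot size reduction helps at first but loses effectiveness when pushed too far. The sketch below applies the standard Kingman (VUT) approximation to a single tool with a fixed per-lot overhead; all numbers are invented, and this is not the queueing or simulation model from the thesis.

    # Rough Kingman (VUT) approximation of the time a lot spends at one tool,
    # used to compare lot sizes. All numbers are assumptions for illustration.

    WAFERS_PER_HOUR = 10.0  # required wafer throughput of the tool
    PER_WAFER_TIME = 0.05   # hours of pure process time per wafer
    SETUP_PER_LOT = 0.25    # hours of fixed overhead per lot (load, setup, unload)
    CA2 = CS2 = 1.0         # squared coefficients of variation of arrivals / service

    def lot_cycle_time(lot_size: int) -> float:
        """Approximate queue time plus process time per lot, in hours."""
        te = SETUP_PER_LOT + lot_size * PER_WAFER_TIME   # effective process time per lot
        arrival_rate = WAFERS_PER_HOUR / lot_size        # lots per hour
        u = arrival_rate * te                            # utilization
        if u >= 1.0:
            return float("inf")                          # overloaded: queue grows without bound
        wq = ((CA2 + CS2) / 2.0) * (u / (1.0 - u)) * te  # Kingman queue-time approximation
        return wq + te

    for lot_size in (25, 13, 7, 4, 2, 1):
        print(f"lot size {lot_size:>2}: cycle time per lot ~ {lot_cycle_time(lot_size):6.2f} h")

With these assumed numbers, halving the lot size pays off, but reducing it much further lets the fixed per-lot overhead dominate: utilization climbs and queueing cancels the gain, in line with the diminishing returns the abstract reports.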
