  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Applications de techniques avancées de contrôle des procédés en industrie du semi-conducteur / Applications of advanced process control techniques in the semiconductor industry

Jedidi, Nader 05 October 2009
This thesis concerns the development of advanced process control tools applied to the microelectronics industry. Statistical analyses identified the poly-silicon gate length as the main contributor to the lot-to-lot temporal and spatial variability of the electrical performance of short transistors (saturation current, leakage current, and threshold voltage). A new, better-suited regulation strategy was studied: cooperative control, which relies on a recursive identification algorithm. The performance of several online estimators was simulated and compared. A compensation loop was also developed between the gate-etch and pocket-implantation steps, offsetting gate-length deviations by adjusting the pocket implant dose. Its deployment in production reduced lot-to-lot dispersion by 40%.
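The compensation loop between etch and implantation can be sketched as a simple run-to-run update. Everything numeric below is an illustrative assumption, not a value from the thesis: the target gate length, the sensitivity `k_dose` of the correction to CD deviation, the forgetting factor, and the persistent 1.5 nm etch bias are all placeholders.

```python
import random

random.seed(0)

# Hedged sketch of a run-to-run compensation loop: the pocket-implant dose is
# adjusted each lot to offset the measured poly-silicon gate-length deviation.
# All constants here are illustrative, not values from the thesis.

TARGET_CD = 50.0      # target gate length (nm) -- assumed
NOMINAL_DOSE = 1.0    # nominal pocket-implant dose (arbitrary units)
k_dose = 0.02         # assumed dose correction per nm of CD deviation
lam = 0.3             # EWMA forgetting factor

cd_error_est = 0.0    # recursive (EWMA) estimate of the lot-to-lot CD deviation
doses = []
for lot in range(20):
    # Measured gate length: a persistent etch bias plus lot-to-lot noise.
    measured_cd = TARGET_CD + 1.5 + random.gauss(0, 0.2)
    # Recursive update of the estimated deviation.
    cd_error_est = lam * (measured_cd - TARGET_CD) + (1 - lam) * cd_error_est
    # Feed-forward correction applied at the pocket-implant step.
    dose = NOMINAL_DOSE + k_dose * cd_error_est
    doses.append(dose)

# The dose converges toward the value compensating the assumed ~1.5 nm bias.
print(round(doses[-1], 3))
```

The sign and magnitude of the dose correction are of course process-specific; the point is only the shape of the loop: measure downstream, update a recursive estimate, correct at an earlier step.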
2

Control performance assessment of run-to-run control system used in high-mix semiconductor manufacturing

Jiang, Xiaojing 04 October 2012
Control performance assessment (CPA) is an important tool for realizing high-performance control systems in manufacturing plants. CPA of both continuous and batch processes has attracted much attention from researchers, but few results for semiconductor processes have been reported. This work provides methods for performance assessment and diagnosis of the run-to-run control systems used in high-mix semiconductor manufacturing. First, the sources of output error in processes with a run-to-run EWMA controller are analyzed, and a CPA method (CPA I) is proposed based on closed-loop parameter estimation. In CPA I, ARMAX regression is applied directly to the process output error, and the performance index is defined from the variance of the regression results. The influence of plant-model mismatch in the process gain and disturbance-model parameter on control performance is studied for cases with and without set-point change, and CPA I is applied to diagnose plant-model mismatch when the set point changes. Second, an advanced CPA method (CPA II) is developed to assess control performance degradation when there is no set-point change. An estimated disturbance is generated by a filter, and ARMAX regression is applied to this estimated disturbance to assess the control performance. The influence of plant-model mismatch, improper controller tuning, metrology delay, and high-mix process parameters is studied; the results show that CPA II can quickly identify, diagnose, and correct control performance degradation. CPA II is applied to industrial data from a high-mix photolithography process at Texas Instruments, and the influence of metrology delay and plant-model mismatch is discussed. A control performance optimization (CPO) method based on analysis of the estimated disturbance is proposed, and an optimal EWMA controller tuning factor is suggested.
Finally, the CPA II method is applied to a non-threaded run-to-run controller developed from state estimation with a Kalman filter. Overall process control performance and state estimation behavior are assessed, and the influence of plant-model mismatch and improper selection of controller variables is studied.
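A minimal simulation can illustrate the kind of degradation CPA is meant to detect: gain mismatch in an EWMA run-to-run loop. The gains, tuning factor `lam`, and random-walk disturbance below are assumptions chosen for illustration, not the thesis setup; the controller computes its recipe from the model gain `b` while the plant responds with the true gain `beta`.

```python
import random

random.seed(1)

def run_ewma(beta, b, lam=0.4, n=500, noise=0.1, target=1.0):
    """Simulate n runs of y = beta*u + d under EWMA run-to-run control.

    The controller only knows the model gain `b`; the plant has true gain
    `beta`, so beta/b is the gain mismatch. Returns the output-error variance,
    the quantity a CPA-style performance index would be built from.
    """
    a = 0.0          # EWMA estimate of the disturbance intercept
    d = 0.0          # random-walk process disturbance
    errors = []
    for _ in range(n):
        u = (target - a) / b            # recipe computed from the *model*
        d += random.gauss(0, noise)
        y = beta * u + d                # output produced by the *plant*
        a = lam * (y - b * u) + (1 - lam) * a
        errors.append(y - target)
    m = sum(errors) / n
    return sum((e - m) ** 2 for e in errors) / n

var_matched = run_ewma(beta=1.0, b=1.0)
var_mismatch = run_ewma(beta=0.4, b=1.0)   # severe gain mismatch
print(var_mismatch > var_matched)          # mismatch inflates the variance
```

In this toy loop the closed-loop error follows an AR(1) with pole 1 - lam*(beta/b), so a gain mismatch that pushes the pole toward 1 slows disturbance rejection and inflates the output-error variance, which is exactly the signature a variance-based performance index picks up.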
3

Control-friendly scheduling algorithms for multi-tool, multi-product manufacturing systems

Bregenzer, Brent Constant 27 January 2012
The fabrication of semiconductor devices is a highly competitive and capital-intensive industry. Because of the high cost of building wafer fabrication facilities (fabs), products must be made efficiently with respect to both time and material, and expensive unit operations (tools) must be utilized as much as possible. The process flow is characterized by frequent machine failures, drifting tool states, parallel processing, and reentrant flows. In addition, the competitive nature of the industry requires products to be made quickly and within tight tolerances. All of these factors conspire to make both the scheduling of product flow through the system and the control of product quality metrics extremely difficult. Much research has treated the two problems separately, but interactions between the two systems, which can be detrimental to one another, have mostly been ignored until recently. The research here tackles the scheduling problem with objectives based on control-system parameters, so that the two systems behave in a mutually beneficial manner. A non-threaded control system models the multi-tool, multi-product process in state-space form and estimates the states with a Kalman filter, while the process flow is modeled by a discrete-event simulation; the two models are merged to represent the overall system. Two control-system matrices, the estimate-error covariance matrix from the Kalman filter and a square form of the system observability matrix called the information matrix, are used to generate several control-based scheduling algorithms. These methods are then tested against more traditional approaches from the scheduling literature to determine their effectiveness both in how well they keep the outputs near their targets and in how well they minimize the cycle time of products in the system.
The two metrics are viewed simultaneously through Pareto plots, and the merits of the various scheduling methods are judged on the basis of Pareto optimality for several test cases.
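The Pareto comparison of schedulers reduces to a dominance test over the two metrics. The method names and numbers below are invented placeholders; only the dominance logic is the point: a method survives unless some other method is at least as good on both metrics and strictly better on one.

```python
# Hedged sketch of comparing schedulers on the two metrics from the abstract:
# mean cycle time and RMS deviation of outputs from target (both lower = better).
# All names and values are made up for illustration.

methods = {
    "FIFO":       (12.0, 0.80),
    "control_A":  (13.5, 0.45),
    "control_B":  (12.5, 0.55),
    "shortest_q": (11.8, 0.95),
    "worst_case": (14.0, 0.90),   # dominated by control_A on both metrics
}

def pareto_front(points):
    """Return names whose (cycle_time, deviation) pair no other point dominates."""
    front = []
    for name, (ct, dev) in points.items():
        dominated = any(
            ct2 <= ct and dev2 <= dev and (ct2 < ct or dev2 < dev)
            for other, (ct2, dev2) in points.items() if other != name
        )
        if not dominated:
            front.append(name)
    return sorted(front)

front = pareto_front(methods)
print(front)   # dominated methods (here "worst_case") drop out
```

A plot of these pairs is the Pareto plot the abstract refers to: the non-dominated methods trace the trade-off curve between throughput and control quality, and no single scheduler on the front is "best" without weighting the two metrics.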
4

Optimisation et réduction de la variabilité d’une nouvelle architecture mémoire non volatile ultra basse consommation / Optimization and variability reduction of a new ultra-low-power non-volatile memory architecture

Agharben, El Amine 05 May 2017
The global semiconductor market is growing steadily, driven by the rise of consumer electronics, and the non-volatile memory market is growing in its wake. The importance of these memory products has been accentuated since the early 2000s by the introduction of mobile products such as smartphones and, more recently, Internet-of-Things devices. Thanks to its performance and reliability, Flash technology is currently the reference for non-volatile memory. However, the high cost of microelectronics equipment makes it impossible to amortize over a single technology generation, which pushes manufacturers to adapt older-generation equipment to more demanding fabrication processes. This strategy is not without consequence for the dispersion of the physical (geometric dimensions, thickness, ...) and electrical (current, voltage, ...) characteristics of the devices. In this context, the subject of my thesis is to optimize and reduce the variability of a new ultra-low-power non-volatile memory architecture.

This study continues the work begun by STMicroelectronics on the development, study, and implementation of Run-to-Run (R2R) control loops on a new ultra-low-power memory cell. To put a relevant regulation in place, it is essential to be able to simulate the influence of the manufacturing process steps on the electrical behavior of the cells, relying on statistical tools as well as on detailed electrical characterization.
5

Robust Algorithms for Optimization of Chemical Processes in the Presence of Model-Plant Mismatch

Mandur, Jasdeep Singh 12 June 2014
Process models are always subject to uncertainty, due to either inaccurate model structure or inaccurate identification. Left unaccounted for, these uncertainties can significantly affect model-based decision-making. This thesis addresses model-based optimization in the presence of uncertainties, especially those due to model-structure error. The optimal solution from standard optimization techniques carries a degree of uncertainty, and if the model-plant mismatch is significant, this solution can be strongly biased with respect to the actual process optimum. Accordingly, this thesis develops new strategies to reduce (1) the variability in the optimal solution and (2) the bias between the predicted and true process optima. Robust optimization is a well-established methodology in which the variability of the optimization objective is considered explicitly in the cost function, leading to a solution that is robust to model uncertainties. However, the reported robust formulations have a few limitations, especially for nonlinear models. The standard technique for quantifying the effect of model uncertainties is based on linearization of the underlying model, which may not be valid when measurement noise is high. To address this limitation, uncertainty descriptions based on Bayes' theorem are implemented in this work. Since, for nonlinear models, the resulting Bayesian uncertainty may have a non-standard form with no analytical solution, propagating this uncertainty onto the optimum can become computationally challenging with conventional Monte Carlo techniques. To this end, an approach based on Polynomial Chaos (PC) expansions is developed; in a simulated case study it yields drastic reductions in computational time compared to a standard Monte Carlo sampling technique.
The key advantage of PC expansions is that they provide analytical expressions for statistical moments even when the uncertainty in the variables is non-standard. These expansions are also used to speed up the calculation of the likelihood function within the Bayesian framework; a methodology based on multi-resolution analysis is proposed to build the PC-based approximate model with higher accuracy over the region of parameter space that is most probable given the measurements. For the second objective, reducing the bias between the predicted and true process optima, an iterative optimization algorithm is developed that progressively corrects the model for structural error as it proceeds toward the true process optimum. The standard approach is to calibrate the model at some initial operating conditions and then use it to search for an optimal solution; because the identification and optimization objectives are solved independently, when there is a mismatch between the process and the model the parameter estimates cannot satisfy both objectives simultaneously. In the proposed methodology, corrections are therefore added to the model so that the updated parameter estimates reduce the conflict between the identification and optimization objectives. Unlike the standard estimation technique, which minimizes only the prediction error at a given set of operating conditions, the proposed algorithm also includes in the estimation the differences between the predicted and measured gradients of the optimization objective and/or constraints. In the initial version of the algorithm the correction is based on a linearization of the model outputs; in the second part it is extended to a quadratic approximation of the model, which, for the given case study, converges much faster than the earlier version.
Finally, the methodologies mentioned above were combined to formulate a robust iterative optimization strategy that converges to the true process optimum with minimum variability in the search path. One of the major findings of this thesis is that the robust optimal solutions based on the Bayesian parametric uncertainty are much less conservative than their counterparts based on normally distributed parameters.
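The analytical-moments property of PC expansions mentioned above can be shown on a toy example that is not the thesis case study: once Y = g(X), with X standard normal, is written in probabilists' Hermite polynomials, the mean is the zeroth coefficient and the variance is a weighted sum of squared coefficients, with no sampling at all.

```python
import math
import random

random.seed(2)

# Toy model (not the thesis case study): Y = X^2 + 2X + 1 with X ~ N(0,1).
# In probabilists' Hermite polynomials He0=1, He1=x, He2=x^2-1, this is
# Y = He2(X) + 2*He1(X) + 2, so the PC coefficients are:
coeffs = {0: 2.0, 1: 2.0, 2: 1.0}

# Moments fall straight out of the coefficients:
#   mean = c0,   var = sum_{i>0} c_i^2 * i!
pc_mean = coeffs[0]
pc_var = sum(c ** 2 * math.factorial(i) for i, c in coeffs.items() if i > 0)

# Monte Carlo check of the same moments -- the expensive route PC avoids.
samples = []
for _ in range(100_000):
    x = random.gauss(0, 1)
    samples.append(x * x + 2 * x + 1)
mc_mean = sum(samples) / len(samples)

print(pc_mean, pc_var)   # exact values, no sampling error
print(round(mc_mean, 2))
```

Here the exact answer is easy to verify by hand (Y = (X+1)^2 with X+1 ~ N(1,1), so E[Y] = 2 and Var[Y] = 6), which is the whole appeal: for genuinely non-standard posteriors the Monte Carlo loop is the only alternative, and the PC route replaces it with arithmetic on coefficients.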
