201 |
Fault tree analysis for automotive pressure sensor assembly lines. Antony, Albin. January 2006 (has links)
Thesis (M.S.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Systems Science and Industrial Engineering Department, 2006. / Includes bibliographical references.
|
202 |
Statistical Learning in Logistics and Manufacturing Systems. Wang, Ni. 10 May 2006 (has links)
This thesis focuses on developing statistical methodology in reliability and quality engineering to assist decision-making at the enterprise, process, and product levels.
In Chapter II, we propose a multi-level statistical modeling strategy to characterize data from spatial logistics systems. The model can support business decisions at different levels. The information available from higher hierarchies is incorporated into the multi-level model as constraint functions for lower hierarchies. The key contributions include proposing top-down multi-level spatial models that improve estimation accuracy at lower levels, and applying spatial smoothing techniques to solve facility location problems in logistics.
In Chapter III, we propose methods for modeling system service reliability in a supply chain, which may be disrupted by uncertain contingent events. This chapter applies an approximation technique for developing first-cut reliability analysis models. The approximation relies on multi-level spatial models to characterize patterns of store locations and demands. The key contributions in this chapter are to bring statistical spatial modeling techniques to approximate store location and demand data, and to build system reliability models that accommodate various scenarios of distribution center (DC) location designs and DC capacity constraints.
Chapter IV investigates the power law process, which has proved to be a useful tool in characterizing the failure process of repairable systems. This chapter presents a procedure for detecting and estimating a mixture of conforming and nonconforming systems. The key contributions in this chapter are to investigate the property of parameter estimation in mixture repair processes, and to propose an effective way to screen out nonconforming products.
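The power law process mentioned above has intensity λ(t) = (β/θ)(t/θ)^(β−1), and its parameters can be estimated by maximum likelihood from a failure-truncated repair history. The sketch below is a minimal illustration of that estimation step only; the repair-time data and the comment about conforming versus nonconforming units are assumptions for illustration, not the thesis's actual screening procedure.

```python
import numpy as np

def plp_mle(failure_times):
    """MLE for a failure-truncated power law (Crow/AMSAA) process.

    Intensity: lambda(t) = (beta/theta) * (t/theta)**(beta - 1).
    Returns (beta_hat, theta_hat).
    """
    t = np.sort(np.asarray(failure_times, dtype=float))
    n, t_n = len(t), t[-1]
    beta_hat = n / np.sum(np.log(t_n / t[:-1]))   # shape: >1 suggests deterioration
    theta_hat = t_n / n ** (1.0 / beta_hat)       # scale implied by the truncation time
    return beta_hat, theta_hat

# Hypothetical repair histories (operating hours at each failure).
for label, times in {
    "unit A": [210.0, 350.0, 520.0, 640.0, 810.0, 1010.0],
    "unit B": [400.0, 700.0, 820.0, 890.0, 930.0, 955.0],
}.items():
    beta_hat, theta_hat = plp_mle(times)
    print(f"{label}: beta_hat={beta_hat:.2f}, theta_hat={theta_hat:.1f}")
```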
The key contributions in Chapter V are to propose a new method to analyze heavily censored accelerated life testing data, and to study the asymptotic properties. This approach flexibly and rigorously incorporates distribution assumptions and regression structures into estimating equations in a nonparametric estimation framework. Derivations of asymptotic properties of the proposed method provide an opportunity to compare its estimation quality to commonly used parametric MLE methods in the situation of mis-specified regression models.
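The chapter's estimating-equation approach itself is not reproduced in this abstract. As a hedged illustration of the parametric MLE baseline it is compared against, the sketch below fits a Weibull accelerated-failure-time model to right-censored accelerated life testing data; the data values and the single stress covariate are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical ALT data: stress covariate, failure/censoring time, event flag.
x = np.array([0.0, 0.0, 0.0, 0.5, 0.5, 0.5, 1.0, 1.0, 1.0, 1.0])
t = np.array([900., 1200., 1500., 400., 650., 700., 150., 220., 260., 300.])
observed = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0])  # 0 = right-censored

def neg_loglik(params):
    """Weibull AFT model: log T = b0 + b1*x + sigma*W, W ~ smallest extreme value."""
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    z = (np.log(t) - b0 - b1 * x) / sigma
    log_f = -np.log(sigma) + z - np.exp(z)   # contribution of exact failure times
    log_S = -np.exp(z)                       # contribution of censored observations
    return -np.sum(np.where(observed == 1, log_f, log_S))

fit = minimize(neg_loglik, x0=[7.0, -1.0, 0.0], method="Nelder-Mead")
b0, b1, log_sigma = fit.x
print("intercept:", b0, "stress effect:", b1, "Weibull shape:", 1 / np.exp(log_sigma))
```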
|
203 |
A prognostic health management based framework for fault-tolerant control. Brown, Douglas W. 15 June 2011 (has links)
The emergence of complex and autonomous systems, such as modern aircraft, unmanned aerial vehicles (UAVs) and automated industrial processes, is driving the development and implementation of new control technologies aimed at accommodating incipient failures to maintain system operation during an emergency. The motivation for this research began in the area of avionics and flight control systems, with the purpose of improving aircraft safety. A prognostics health management (PHM) based fault-tolerant control architecture can increase safety and reliability by detecting and accommodating impending failures, thereby minimizing the occurrence of unexpected, costly and possibly life-threatening mission failures; it can also reduce unnecessary maintenance actions and extend system availability and reliability.
Recent developments in failure prognosis and fault tolerant control (FTC) provide a basis for a prognosis based reconfigurable control framework. Key work in this area considers: (1) long-term lifetime predictions as a design constraint using optimal control; (2) the use of model predictive control to retrofit existing controllers with real-time fault detection and diagnosis routines; (3) hybrid hierarchical approaches to FTC taking advantage of control reconfiguration at multiple levels, or layers, enabling the possibility of set-point reconfiguration, system restructuring and path / mission re-planning. Combining these control elements in a hierarchical structure allows for the development of a comprehensive framework for prognosis based FTC.
First, the PHM-based reconfigurable controls framework presented in this thesis is given as one approach within a much larger hierarchical control scheme. This begins with a brief overview of a broader three-tier control architecture with supervisory, intermediate, and low-level layers. The supervisory layer manages high-level objectives. The intermediate layer redistributes component loads among multiple sub-systems. The low-level layer reconfigures the set-points used by the local production controller, thereby trading off system performance for an increase in remaining useful life (RUL).
Next, a low-level reconfigurable controller is defined by a time-varying multi-objective criterion function and appropriate constraints that determine the optimal set-point reconfiguration. A set of necessary conditions is established to ensure the stability and boundedness of the composite system. In addition, the error bounds corresponding to long-term state-space prediction are examined; from these error bounds, the point estimate and corresponding uncertainty boundaries for the RUL estimate can be obtained. The computational efficiency of the controller is also examined, using the average number of floating point operations per iteration as a standard metric of comparison.
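The exact criterion function is not given in this abstract. The sketch below illustrates only the general idea of set-point reconfiguration as a constrained trade-off between tracking performance and remaining useful life; the cost weights, the linear RUL model, and the mission-length constraint are all hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

nominal_setpoint = 100.0      # hypothetical nominal operating set-point
w_perf, w_rul = 1.0, 0.05     # hypothetical trade-off weights

def rul_hours(setpoint):
    # Hypothetical degradation model: a higher set-point shortens remaining useful life.
    return 500.0 - 5.0 * (setpoint - 60.0)

def cost(u):
    setpoint = u[0]
    perf_penalty = (setpoint - nominal_setpoint) ** 2   # loss from derating performance
    rul_penalty = -rul_hours(setpoint)                  # reward for preserving RUL
    return w_perf * perf_penalty + w_rul * rul_penalty

# Constraint: keep enough RUL to finish a 420-hour mission (hypothetical bound).
cons = [{"type": "ineq", "fun": lambda u: rul_hours(u[0]) - 420.0}]
res = minimize(cost, x0=[nominal_setpoint], bounds=[(60.0, 110.0)], constraints=cons)
print("reconfigured set-point:", res.x[0], "predicted RUL:", rul_hours(res.x[0]))
```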
Finally, results are obtained for an avionics-grade triplex-redundant electro-mechanical actuator, with insulation breakdown between winding turns in a brushless DC motor used as the test-case fault mode. A prognostic model is developed relating motor operating conditions to RUL. Standard metrics for determining the feasibility of RUL reconfiguration are defined and used to study the performance of the reconfigured system; more specifically, the effects of the prediction horizon, model uncertainty, operating conditions and load disturbance on the RUL during reconfiguration are simulated using MATLAB and Simulink. Contributions of this work include defining a control architecture, proving stability and boundedness, deriving the control algorithm, and demonstrating feasibility with an example.
|
204 |
Monitoring and analysis system for performance troubleshooting in data centers. Wang, Chengwei. 13 January 2014 (has links)
Not long ago, on Christmas Eve 2012, a war of troubleshooting began in Amazon data centers. It started at 12:24 PM with a mistaken deletion of the state data of the Amazon Elastic Load Balancing Service (ELB for short), which was not realized at the time. The mistake first led to a local issue in which a small number of ELB service APIs were affected. In about six minutes, it evolved into a critical one in which EC2 customers were significantly affected. One example was Netflix, which was using hundreds of Amazon ELB services and experienced an extensive streaming service outage in which many customers could not watch TV shows or movies on Christmas Eve. It took Amazon engineers 5 hours and 42 minutes to find the root cause, the mistaken deletion, and another 15 hours and 32 minutes to fully recover the ELB service. The war ended at 8:15 AM the next day and brought performance troubleshooting in data centers to the world's attention. As this Amazon ELB case shows, troubleshooting runtime performance issues is crucial in time-sensitive multi-tier cloud services because of their stringent end-to-end timing requirements, but it is also notoriously difficult and time consuming.
To address the troubleshooting challenge, this dissertation proposes VScope, a flexible monitoring and analysis system for online troubleshooting in data centers.
VScope provides primitive operations which data center operators can use to troubleshoot various performance issues. Each operation is essentially a series of monitoring and analysis functions executed on an overlay network. We design a novel
software architecture for VScope so that the overlay networks can be generated, executed and terminated automatically, on-demand. From the troubleshooting side, we design novel anomaly detection algorithms and implement them in VScope. By
running anomaly detection algorithms in VScope, data center operators are notified when performance anomalies happen. We also design a graph-based guidance approach, called VFocus, which tracks the interactions among hardware and software components in data centers. VFocus provides primitive operations by which operators can analyze the interactions to find out which components are relevant to the
performance issue.
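VScope's actual primitives and algorithms are not spelled out in this abstract. As a hedged illustration of the general pattern described above (detect an anomalous metric, then follow an interaction graph to rank related components for inspection), consider the sketch below; the sliding-window z-score rule, the component names, and the interaction graph are all hypothetical.

```python
import numpy as np
from collections import deque

def is_anomalous(window, latest, z_threshold=3.0):
    """Flag a metric sample whose z-score against a sliding window is large."""
    mu, sigma = np.mean(window), np.std(window) + 1e-9
    return abs(latest - mu) / sigma > z_threshold

# Hypothetical interaction graph: which components talk to which.
interactions = {
    "web_vm": ["app_vm1", "app_vm2"],
    "app_vm1": ["db_vm"],
    "app_vm2": ["db_vm"],
    "db_vm": [],
}

def candidates(anomalous_node, max_hops=2):
    """Breadth-first walk from the anomalous node to rank components to inspect."""
    seen, order, queue = {anomalous_node}, [], deque([(anomalous_node, 0)])
    while queue:
        node, hops = queue.popleft()
        order.append((node, hops))
        if hops < max_hops:
            for nxt in interactions.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, hops + 1))
    return order

window = [12.0, 11.5, 12.3, 11.9, 12.1, 12.0]   # recent latency samples (ms)
if is_anomalous(window, latest=25.0):
    print(candidates("web_vm"))
```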
VScope’s capabilities and performance are evaluated on a testbed with over 1000 virtual machines (VMs). Experimental results show that the VScope runtime negligibly perturbs system and application performance, and requires mere seconds to deploy monitoring and analytics functions on over 1000 nodes. This demonstrates VScope’s ability to support fast operation and online queries against a comprehensive set of application to system/platform level metrics, and a variety of representative analytics functions. When supporting algorithms with high computation complexity, VScope serves as a ‘thin layer’ that occupies no more than 5% of their total latency. Further, by using VFocus, VScope can locate problematic VMs that cannot be found
via application-level monitoring alone, and in one of the use cases explored in the dissertation, it introduces over 400% less perturbation than brute-force and most sampling-based approaches. We also validate VFocus with real-world data center traces; the experimental results show that VFocus achieves a troubleshooting accuracy of 83% on average.
|
205 |
Development of convective reflow-projection moire warpage measurement system and prediction of solder bump reliability on board assemblies affected by warpage. Tan, Wei. 05 March 2008 (has links)
Out-of-plane displacement (warpage) is one of the major thermomechanical reliability concerns for board-level electronic packaging. Printed wiring board (PWB) and component warpage results from CTE mismatch among the materials that make up the PWB assembly (PWBA). Warpage occurring during surface-mount assembly reflow processes and normal operations may cause serious reliability problems. In this research, a convective reflow and projection moire warpage measurement system was developed. The system is the first real-time, non-contact, and full-field measurement system capable of measuring PWB/PWBA/chip package warpage with the projection moire technique during different thermal reflow processes.
In order to accurately simulate the reflow process and to achieve the ideal heating rate, a convective heating system was designed and integrated with the projection moire system. An advanced feedback controller was implemented to obtain the optimum heating responses. The developed system has the advantages of simulating different types of reflow processes, and reducing the temperature gradients through the PWBA thickness to ensure that the projection moire system can provide more accurate measurements.
Automatic package detection and segmentation algorithms were developed for the projection moire system. The algorithms are used for automatic segmentation of the PWB and assembled packages so that the warpage of the PWB and chip packages can be determined individually.
The effect of initial PWB warpage on the fatigue reliability of solder bumps on board assemblies was investigated using finite element modeling (FEM) and the projection moire system. The 3-D models of PWBAs with varying board warpage were used to estimate the solder bump fatigue life for different chip packages mounted on PWBs. The simulation results were validated and correlated with the experimental results obtained using the projection moire system and accelerated thermal cycling tests. Design of experiments and an advanced prediction model were generated to predict solder bump fatigue life based on the initial PWB warpage, package dimensions and locations, and solder bump materials. This study led to a better understanding of the correlation between PWB warpage and solder bump thermomechanical reliability on board assemblies.
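The thesis's own FEM-based fatigue prediction model is not reproduced here. As a hedged illustration of how a strain-based fatigue relation turns a warpage-induced shear strain range into a cycles-to-failure estimate, the sketch below uses a Coffin-Manson form with placeholder coefficients and hypothetical strain ranges; none of these values come from the thesis.

```python
def cycles_to_failure(shear_strain_range, eps_f=0.325, c=-0.442):
    """Coffin-Manson style low-cycle fatigue estimate for a solder joint.

    N_f = 0.5 * (delta_gamma / (2 * eps_f)) ** (1 / c); eps_f and c are
    placeholder fatigue ductility coefficients, not values from the thesis.
    """
    return 0.5 * (shear_strain_range / (2.0 * eps_f)) ** (1.0 / c)

# Hypothetical pairing: larger initial warpage raises the strain seen by corner bumps.
for warpage_um, strain in [(50, 0.010), (100, 0.014), (150, 0.019)]:
    nf = cycles_to_failure(strain)
    print(f"warpage {warpage_um} um -> strain range {strain} -> Nf ~ {nf:.0f} cycles")
```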
|
206 |
Optimal maintenance of a multi-unit system under dependencies. Sung, Ho-Joon. 17 November 2008 (has links)
The availability, or reliability, of an engineering component greatly influences the operational cost and safety characteristics of a modern system over its life-cycle. Until recently, reliance on past empirical data has been the industry-standard practice for developing maintenance policies that provide the minimum level of system reliability. Because such empirically derived policies are vulnerable to unforeseen or fast-changing external factors, the study of maintenance, known as the optimal maintenance problem, has gained considerable interest as a legitimate area of research. An extensive body of applicable work is available, ranging from work concerned with identifying maintenance policies that provide the required system availability at the minimum possible cost, to topics on imperfect maintenance of multi-unit systems under dependencies.
Nonetheless, these existing mathematical approaches for solving optimal maintenance policies must be treated with caution when considered for broader applications, as they are accompanied by specialized treatments to ease the mathematical derivation of unknown functions in both the objective function and the constraints of a given optimal maintenance problem. These unknown functions are defined as reliability measures in this thesis, and these measures (e.g., expected number of failures, system renewal cycle, expected system up time) often do not possess closed-form formulas. It is thus quite common to impose simplifying assumptions on the input probability distributions of components' lifetimes or repair policies. Simplifying the complex structure of a multi-unit system to a k-out-of-n system by neglecting any sources of dependency is another commonly practiced technique intended to increase the mathematical tractability of a particular model.
This dissertation proposes an alternative methodology for solving optimal maintenance problems that aims to achieve the same end-goals as Reliability Centered Maintenance (RCM). RCM was first introduced in the aircraft industry in an attempt to bridge the gap between the empirically driven and theory-driven approaches to establishing optimal maintenance policies. Under RCM, qualitative processes that prioritize functions based on their criticality and influence are combined with mathematical modeling to obtain the optimal maintenance policies.
Where this thesis work deviates from RCM is its proposal to directly apply quantitative processes to model the reliability measures in optimal maintenance problem. First, Monte Carlo (MC) simulation, in conjunction with a pre-determined Design of Experiments (DOE) table, can be used as a numerical means of obtaining the corresponding discrete simulated outcomes of the reliability measures based on the combination of decision variables (e.g., periodic preventive maintenance interval, trigger age for opportunistic maintenance, etc.). These discrete simulation results can then be regressed as Response Surface Equations (RSEs) with respect to the decision variables. Such an approach to represent the reliability measures with continuous surrogate functions (i.e., the RSEs) not only enables the application of the numerical optimization technique to solve for optimal maintenance policies, but also obviates the need to make mathematical assumptions or impose over-simplifications on the structure of a multi-unit system for the sake of mathematical tractability.
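A minimal sketch of the idea just described, assuming a single-unit system with Weibull failures, age-based preventive maintenance, a single decision variable (the PM interval), and a quadratic response surface; the cost figures and distribution parameters are hypothetical, and a real application would span several decision variables and a full DOE table.

```python
import numpy as np

rng = np.random.default_rng(0)
BETA, ETA = 2.5, 1000.0            # hypothetical Weibull shape / scale (hours)
C_PM, C_CM = 1.0, 8.0              # hypothetical preventive vs corrective costs
HORIZON = 10_000.0                 # simulated operating horizon per run (hours)

def simulate_cost_rate(pm_interval, n_runs=500):
    """Monte Carlo estimate of long-run cost per hour under age-based PM."""
    costs = []
    for _ in range(n_runs):
        t, cost = 0.0, 0.0
        while t < HORIZON:
            life = ETA * rng.weibull(BETA)
            if life < pm_interval:          # failure before the planned PM
                t, cost = t + life, cost + C_CM
            else:                           # preventive replacement at the interval
                t, cost = t + pm_interval, cost + C_PM
        costs.append(cost / t)
    return float(np.mean(costs))

# "DOE" over the decision variable, then a quadratic response surface equation.
doe = np.array([200.0, 400.0, 600.0, 800.0, 1000.0])
responses = np.array([simulate_cost_rate(x) for x in doe])
coeffs = np.polyfit(doe, responses, 2)          # RSE: cost_rate ~ a*x**2 + b*x + c
grid = np.linspace(doe.min(), doe.max(), 401)
best = grid[np.argmin(np.polyval(coeffs, grid))]
print("approximate optimal PM interval (hours):", round(best))
```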
The applicability of the proposed methodology to a real-world optimal maintenance problem is showcased through its application to Time Limited Dispatch (TLD) of a Full Authority Digital Engine Control (FADEC) system. In broader terms, this proof-of-concept exercise can be described as a constrained optimization problem whose objective is to identify the optimal system inspection interval that guarantees a certain level of availability for a multi-unit system. A variety of reputable numerical techniques were used to model the problem as accurately as possible, including algorithms for the MC simulation, an imperfect maintenance model based on quasi-renewal processes, repair time simulation, and state transition rules. Variance Reduction Techniques (VRTs) were also used in an effort to enhance MC simulation efficiency. After accurate MC simulation results are obtained, the RSEs are generated based on goodness-of-fit measures to yield as parsimonious a model as possible for constructing the optimization problem.
Under the assumption of a constant failure rate for the lifetime distributions, the inspection interval from the proposed methodology was found to be consistent with the one from the common industry approach that leverages a Continuous Time Markov Chain (CTMC). While the latter does not consider maintenance cost settings, the proposed methodology enables an operator to consider different types of maintenance cost settings, e.g., inspection cost, system corrective maintenance cost, etc., resulting in more flexible maintenance policies. When the proposed methodology was applied to the same TLD of FADEC example, but under the more general assumption of a strictly Increasing Failure Rate (IFR) for the lifetime distribution, it was shown to successfully capture component wear-out, as well as the economic dependencies among the system components.
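For the constant-failure-rate baseline mentioned above, steady-state availability can be read directly from a small continuous-time Markov chain. The sketch below solves the balance equations πQ = 0 for a hypothetical two-channel system in which dispatch is allowed with one channel failed; the rates are illustrative, not FADEC data.

```python
import numpy as np

lam, mu = 1e-3, 0.5    # hypothetical per-hour failure and repair/inspection rates
# States: 0 = both channels up, 1 = one channel failed (dispatchable), 2 = system down.
Q = np.array([
    [-2 * lam,      2 * lam,     0.0],
    [      mu, -(mu + lam),      lam],
    [     0.0,          mu,      -mu],
])

# Solve pi @ Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print("steady-state availability (not in the down state):", 1.0 - pi[2])
```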
|
207 |
The application of the six sigma quality concept to improve process performance in a continuous processing plant. Nxumalo, G. L. 12 1900 (has links)
Thesis (MScEng)--University of Stellenbosch, 2005. / ENGLISH ABSTRACT: This report presents the application of the six sigma quality concept in solving a true business problem. Six sigma is a quality improvement and business strategy/tool developed by Motorola in the mid 1980s. It aims at delivering products and services that approach levels of near perfection. To achieve this objective a six sigma process must not produce more than 3.4 defects per million opportunities, meaning the process should be at least 99.9997% perfect [Berdebes, 2003]. Motorola's success with six sigma popularised the concept and it has now been adopted by many of the world's top companies, e.g. General Electric, Allied Signal-Honeywell, etc. All the six sigma companies report big financial returns as a result of increased quality levels due to the reduction in the number of defects. 'General Electric reports annual benefits of over $2.5 billion across the organisation from six sigma' [Huag, 2003].
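The 3.4 defects-per-million figure quoted above follows from a normal-distribution model combined with the conventional 1.5 sigma long-term shift; a quick check of that arithmetic (the shift value is the standard six sigma convention, not something derived in this report):

```python
from scipy.stats import norm

sigma_level = 6.0
long_term_shift = 1.5          # conventional allowance for long-term process drift
dpmo = norm.sf(sigma_level - long_term_shift) * 1_000_000
print(round(dpmo, 1))          # ~3.4 defects per million opportunities
```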
The six sigma concept follows a five-step problem-solving methodology known as DMAIC (Define, Measure, Analyse, Improve, Control) to improve existing processes. Each of these steps makes use of a range of tools, which include quality, statistical, engineering, and business tools.
This report first gives a theoretical presentation on quality and six sigma, attempting to answer the question 'What is six sigma'. A step-by-step guide on how to go through the DMAIC problem-solving cycle is also presented.
The six sigma concept was demonstrated by application to the colour removal process of a continuous processing plant manufacturing refined sugar. Colour removal is a very important process in sugar refining since the purpose of a refinery is to remove colour and other impurities from the raw sugar crystals. The colour removal process consists of three unit operations: liming, carbonation and sulphitation. Liming involves the addition of lime (calcium hydroxide) required for the formation of a calcium precipitate in the next unit operations. Carbonation is carried out in two stages, primary and secondary carbonation; both stages involve the formation of a calcium carbonate precipitate, which traps colour bodies and other impurities. Sulphitation occurs in a single step and involves the formation of a calcium sulphite precipitate which also traps impurities. The pH and colour are the main variables that are monitored throughout the colour removal process.
Figure 1: Colour removal process (raw sugar → melting → liming → primary and secondary carbonation → sulphitation → crystallisation → sugar)
The pH control of the two colour removal unit operations, carbonation and sulphitation, is very poor and as a result the colour removal achieved is below expectation. This compromises the final refined sugar quality since colour not removed in the colour removal processes ends up in the sugar. The first carbonation stage (primary) fails to lower the pH to the required specification and the second carbonation stage (secondary) is highly erratic, the pH fluctuating between too high and too low. The sulphitation process adds more sulphur dioxide than required and hence the pH is lowered below the lower specification limit.
The six sigma DMAIC cycle was implemented in order to solve the problem of poor pH control. The Define phase defined the project and identified the process to be improved. The Measure phase measured the current performance of the process by collecting past laboratory data with the corresponding field instrument data. The data was used to draw frequency distribution plots that displayed the actual variation of the process relative to the natural variation of the process (specification width) and to calculate process capability indices. The Analyse phase analysed the data so as to determine the key sources of variation. The Improve phase used the findings of the Analyse phase to propose solutions to improve the colour removal processes. The Control phase proposed a control plan so as to monitor and sustain the improvement gained.
The key findings of the study are presented below:
• Failure of the first carbonation stage to lower the pH to the required level is due to insufficient carbon dioxide gas supply.
• The second carbonation reaction occurs very fast, hence poor control will result in high variability.
• The amount of colour removed is dependent on the input raw melt colour.
• The histograms of the colour removal unit operations are off-centered and display a process variation greater than the specification width, and hence a large proportion of the data falls outside the specification limits.
• The % CaO and CO2 gas addition were found to be the key variables that control the processes' centering on target, with the % CaO having a stronger effect in the liming process and CO2 gas addition in the carbonation process.
• The variation between the field instrument's pH and the laboratory pH is the key variable that controls the processes' spread (standard deviation of the processes).
• The processes' Cpk values are less than Cp (Cpk<Cp), meaning the processes can be improved by controlling the key variables that control centering (% CaO, CO2 gas addition); a capability-index calculation of this kind is sketched after this list.
• The processes' capability indices are low, Cp<1, meaning the processes are not statistically capable of meeting the required specifications at the current conditions.
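As referenced in the findings above, the capability indices compare the specification width to the process spread and centering; a minimal sketch with hypothetical pH readings and specification limits (not the plant's actual data):

```python
import numpy as np

ph = np.array([8.6, 8.9, 9.4, 9.1, 8.3, 9.6, 8.8, 9.0, 9.3, 8.5])  # hypothetical readings
LSL, USL = 8.5, 9.5                                                  # hypothetical pH spec limits

mu, sigma = ph.mean(), ph.std(ddof=1)
Cp = (USL - LSL) / (6 * sigma)                       # potential capability (spread only)
Cpk = min(USL - mu, mu - LSL) / (3 * sigma)          # actual capability (spread and centering)
print(f"Cp = {Cp:.2f}, Cpk = {Cpk:.2f}")             # Cpk < Cp signals an off-centre process
```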
Based on the findings of the study, the following deductions are made for the improvement of the colour removal processes in better meeting the required specifications:
• Increase the CO2 gas supply to at least 4900 m³/hr, calculated based on the fact that at least 140 m³ of gas is required per ton of solids in melt [Sugar Milling Research Institute Course Notes, 2002].
• Control the key variables identified as the key sources of variation: % CaO, CO2 gas addition, and the variation between the field instrument's pH and the laboratory pH. Reducing variation in the % CaO and increasing the CO2 gas supply will improve the processes' ability to maintain centering at the target specification. Maintaining a consistent correlation between the two pH readings (field instrument pH and laboratory pH) will reduce the processes' standard deviation and hence the processes' spread. Reduction in the processes' spread will minimize the total losses outside the specification limits. This will allow better control of the pH by getting rid of high fluctuations.
• Control of the input raw melt colour is essential since it has an impact on the degree of decolourisation. The higher the input colour, the more work required in removing the colour.
In improving the colour removal processes, the starting point should be ensuring process stability. Only once this is achieved may the above adjustments be made to improve the process capability. The process capability will only improve to a certain extent, since from the capability studies it is evident that the processes are not capable of meeting specifications.
To provide better control and to ensure continuous improvement of the processes, the following recommendations are made:
• Statistical process control charts. The colour removal processes are highly unstable; the use of control charts will help in detecting any out-of-control conditions (a minimal chart calculation is sketched after this list). Once an out-of-control condition has been detected, the necessary investigations may be made to determine the source of instability so as to remove its influence. Being able to monitor the processes for out-of-control situations will help in rectifying any problems before they affect the process outputs.
• Evaluation of capability indices (ISO 9000 internal audits). Consider incorporating the assessment of the capability indices as part of the ISO 9000 internal audits so as to measure process improvement. It is good practice to set a target for Cp; the six sigma standard is Cp=2, but this does not mean the goal should be Cp=2, since this depends on the robustness of the process against variation. For instance, the colour removal processes at the current operating conditions can never reach Cp=2. This, however, is not a constraint, since for the colour removal processes to better meet pH specifications it is not critical that they achieve six sigma quality. A visible improvement may be seen in aiming for Cp=1.
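A minimal sketch of the individuals-and-moving-range chart recommended above, assuming hypothetical pH readings and the standard subgroup-of-two constants; a real implementation would follow the plant's own sampling plan.

```python
import numpy as np

readings = np.array([9.1, 8.8, 9.3, 8.7, 9.0, 9.5, 8.6, 9.2, 8.9, 9.8])  # hypothetical pH values
mr = np.abs(np.diff(readings))                 # moving ranges of consecutive points
centre, mr_bar = readings.mean(), mr.mean()

# Standard I-MR constants for subgroups of size 2: d2 = 1.128, D4 = 3.267.
ucl_i = centre + 3 * mr_bar / 1.128
lcl_i = centre - 3 * mr_bar / 1.128
ucl_mr = 3.267 * mr_bar

out_of_control = np.where((readings > ucl_i) | (readings < lcl_i))[0]
print("I-chart limits:", round(lcl_i, 2), round(ucl_i, 2), "MR upper limit:", round(ucl_mr, 2))
print("points signalling an out-of-control condition:", out_of_control)
```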
On studying the effects of CO2 gas addition, the total data points outside the specification limits reduced from 84% to 33%, and by reducing the variation between the field instrument pH and the laboratory pH for the secondary pH, the total data points out of specification reduced from 55% to 48%. These results indicate that by improving Cp to be at least equal to one (Cp=1), the total data points outside specification can reduce significantly, indicating a high ability of the processes to meet specifications. Thus even if six sigma quality is not achieved, visible benefits can be achieved by focussing on process improvement using six sigma tools. / AFRIKAANSE OPSOMMING: This thesis looks at the application of the six sigma quality concept to solve a practical problem. Six sigma, as it is commonly known, is not only a quality improvement technique but also a strategic business approach that was developed and introduced by Motorola in the mid 1980s. Its objective is to deliver products and services that are near perfect. To achieve this goal, the technique aims to design the process so that there are no more than 3.4 defects per million, that is, the process is 99.9997% perfect [Berdebes, 2003]. As a result of the success Motorola achieved with the concept, it gained widespread recognition and is now used by many of the world's leading companies, including General Electric, Allied Signal-Honeywell and others. All these companies report large financial benefits as a result of the reduction in defects; the annual benefits for General Electric, for example, amount to more than $2.5 billion [Huag, 2003].
The six sigma concept follows a five-step problem-solving process, known as DMAIC (Define, Measure, Analyse, Improve, Control), to improve existing processes. In each of these steps there are specific tools and techniques that can be applied, such as quality, statistical, engineering and business techniques.
The report begins with a theoretical overview of quality and the six sigma process, answering the question 'what is six sigma'. This is followed by a detailed step-by-step description of the DMAIC problem-solving cycle.
The six sigma concept is then applied to a specific process in the continuous sugar processing plant, namely the colour removal process. This process is very important because its objective is the removal of not only colour but also all other foreign constituents from the raw sugar crystals. The process consists of three independent but sequential operations that ensure the right quality of sugar is ultimately obtained. During the first two steps the pH control in particular is suspect, so that colour removal does not deliver the desired quality. This in turn affects the quality of the final product, because the unwanted colour ultimately ends up in the sugar. The pH is not only not low enough, but also highly variable, and in principle therefore out of control.
The DMAIC cycle was applied in order to control the pH better. During the Define phase the project was described and the process to be improved was identified. In the Measure phase the necessary data were collected to determine the inherent process variability. The most important sources or variables contributing to the process variability were determined in the third, or Analyse, phase. These findings were used during the Improve phase to make proposals for improving the process. The proposals were implemented and, in the last phase, the Control phase, a plan was drawn up to ensure that the process is continuously monitored so that the improvements remain sustainable.
A number of variables that each contributed to the process variation were identified and are described in detail in the report. Based on the analysis and findings of the investigation, logical recommendations could be made so that the process showed a large improvement in colour removal. The most important finding was that the current process does not have the capability to comply 100% with the specifications or requirements. The main aim of the proposals is therefore to begin by minimising, or at least stabilising, the process variability; only once this goal has been reached can improvements that address the process capability be implemented.
In order to exercise this control and reduce variation, the following proposals were made:
• Statistical control charts. The colour removal process is highly unstable; with the aid of statistical control charts there is early warning of possible out-of-control situations, so that the process can be investigated and adjusted before the final product quality becomes too poor.
• Evaluation of process capability (ISO 9000 internal audits). The assessment of process capability should become part of the internal ISO audit process, so that process improvements are measured regularly and formally. The standard set for Cp should receive continuous attention; it is not good practice simply to use a target of Cp=2 as proposed in six sigma, but rather to adapt it according to the robustness achieved by the process.
Significant benefits were achieved through the application of the DMAIC cycle. For example, the percentage of data points outside specification was reduced from 84% to 33% merely by examining the effect of CO2 gas addition during the process. This clearly shows that, although the process currently does not have the capability to meet the six sigma requirements, it is nevertheless worthwhile to apply the principles and techniques.
|
208 |
Semiconductor Yield Modeling Using Generalized Linear Models. January 2011 (has links)
abstract: Yield is a key process performance characteristic in the capital-intensive semiconductor fabrication process. In an industry where machines cost millions of dollars and cycle times are a number of months, predicting and optimizing yield are critical to process improvement, customer satisfaction, and financial success. Semiconductor yield modeling is essential to identifying processing issues, improving quality, and meeting customer demand in the industry. However, the complicated fabrication process, the massive amount of data collected, and the number of models available make yield modeling a complex and challenging task. This work presents modeling strategies to forecast yield using generalized linear models (GLMs) based on defect metrology data. The research is divided into three main parts. First, the data integration and aggregation necessary for model building are described, and GLMs are constructed for yield forecasting. This technique yields results at both the die and the wafer levels, outperforms existing models found in the literature based on prediction errors, and identifies significant factors that can drive process improvement. This method also allows the nested structure of the process to be considered in the model, improving predictive capabilities and violating fewer assumptions. To account for the random sampling typically used in fabrication, the work is extended by using generalized linear mixed models (GLMMs) and a larger dataset to show the differences between batch-specific and population-averaged models in this application and how they compare to GLMs. These results show some additional improvements in forecasting abilities under certain conditions and show the differences between the significant effects identified in the GLM and GLMM models. The effects of link functions and sample size are also examined at the die and wafer levels. The third part of this research describes a methodology for integrating classification and regression trees (CART) with GLMs. This technique uses the terminal nodes identified in the classification tree to add predictors to a GLM. This method enables the model to consider important interaction terms in a simpler way than with the GLM alone, and provides valuable insight into the fabrication process through the combination of the tree structure and the statistical analysis of the GLM. / Dissertation/Thesis / Ph.D. Industrial Engineering 2011
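A minimal sketch of the kind of defect-based yield GLM described above, assuming die-level pass/fail outcomes and a single defect-count predictor; the simulated data, the logit link, and the single predictor are illustrative assumptions, and the dissertation's models use far richer metrology structure.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
defects = rng.poisson(1.2, size=500)                    # hypothetical die-level defect counts
p_good = 1 / (1 + np.exp(-(2.0 - 1.1 * defects)))       # hypothetical true pass probability
passed = rng.binomial(1, p_good)                        # simulated die pass/fail outcomes

X = sm.add_constant(defects.astype(float))
model = sm.GLM(passed, X, family=sm.families.Binomial())  # binomial GLM, logit link by default
fit = model.fit()
print(fit.summary())

# Predicted yield for dice with 0, 1, and 2 defects:
print(fit.predict(sm.add_constant(np.array([0.0, 1.0, 2.0]))))
```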
|
209 |
Metodologia para a determinação dos índices de confiabilidade em subestações de energia elétrica com ênfase nos impactos sociais de uma falha. Barbosa, Jair Diaz. January 2015 (has links)
Advisor: Prof. Dr. Ricardo Caneloi dos Santos / Master's dissertation - Universidade Federal do ABC, Programa de Pós-Graduação em Energia, 2015. / This research project establishes a methodology for determining the reliability/availability indices of electric power substations, motivated by the need to make maintenance operations more effective while mitigating the environmental, social, economic and technical impacts caused by interruptions of the electricity supply. The methodology is based on two methods normally used individually in reliability studies: the Fault Tree method, which provides a logical model of the possible combinations of failures leading to a top event, and Monte Carlo simulation, which makes it possible to estimate the indices of interest for the electrical system by randomly generating the different states of the system (operation, failure or maintenance). In this context, this research identifies the vulnerable points, the probability of failure and the unavailability of each substation, with the objective of raising the reliability indices, extending the service life of components and providing the utilities with an optimised preventive maintenance scheme. Consequently, this work aims to reduce the frequency of unplanned power cuts and their environmental, social and economic impacts caused by the non-supply of electricity. In this sense, a discussion of the impacts of electrical failures on society is also presented. / This work provides a methodology to determine the levels of reliability/availability of electrical substations, based on the need to improve the efficiency of maintenance operations while reducing the negative environmental, social, economic and technical impacts caused by power outages. The methodology is based on two methods typically used individually in reliability studies: the Fault Tree method, which provides a logical model of possible failure combinations for a major event, and Monte Carlo simulation, used to determine power system indices by randomly generating the different states of the system (operation, failure or maintenance). In this context, this work identifies vulnerable points, the probability of failure and the unavailability of each substation, in order to increase the reliability indices, increase the service life of components and provide a better preventive maintenance schedule. Consequently, this work seeks to decrease the frequency of uncontrolled power cuts and their environmental, social and economic impacts produced by the non-supply of electricity. In this sense, a discussion of the impacts of electrical faults on society is also conducted.
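A minimal sketch of the fault tree / Monte Carlo combination described above, for a hypothetical substation whose top event (loss of supply) occurs when both transformers fail or the single busbar fails; the component unavailabilities are illustrative, not values from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical steady-state unavailabilities of the basic events.
q = {"transformer_1": 0.02, "transformer_2": 0.02, "busbar": 0.005}

n = 500_000
t1 = rng.random(n) < q["transformer_1"]
t2 = rng.random(n) < q["transformer_2"]
bb = rng.random(n) < q["busbar"]

# Fault tree top event: (T1 AND T2) OR busbar -> loss of supply.
top = (t1 & t2) | bb

print("simulated top-event probability:", top.mean())
print("analytical check:",
      1 - (1 - q["transformer_1"] * q["transformer_2"]) * (1 - q["busbar"]))
```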
|
210 |
Aplicação da análise de sobrevivência na estimativa da vida útil de componentes construtivos. Célio Costa Souto Maior. 30 December 2009 (has links)
This work was developed in an attempt to apply the techniques and concepts used in survival analysis in medicine to civil engineering, in the construction area. These techniques go by the name of reliability analysis, which makes it possible to estimate the service life of building components through the probabilistic models most commonly used in this area. These models were developed and applied to the collected data, with a reliability of 95%. This subject has already been studied in the scientific community, but specifically in the civil construction area little is known; the work is still at an embryonic stage and deserves deeper study. Our proposal was therefore to draw, from the analysis of a study sample, some benefit or information that could help future work. Sixty samples were collected for study, of which 10 were censored, the study ending with a predetermined number of samples. The results were obtained using the software R, comparing the Exponential, Weibull and Log-normal models with the Kaplan-Meier estimator and showing, both graphically and through tests, which model best fits the collected data. The model that fitted best was the Log-normal. / This work was developed as an effort to apply the techniques and concepts employed in survival analysis in medicine to civil engineering, in the construction area. These techniques are known as reliability analysis, which allows the lifetime of building components to be estimated through probabilistic models commonly used in that area. These models were developed and applied to the collected data, with a reliability of 95%. This subject has been studied in the scientific community, but in the construction area specifically it is poorly known and still in its beginnings, and it deserves further study. Thus, our proposition was to draw some benefit or information that might help future work from the analysis of a study sample. Sixty samples were collected for study, and 10 of them were censored, so that the study ended with a predetermined number of samples. The results were obtained using the software R, comparing the Exponential, Weibull and Log-normal models with the Kaplan-Meier estimator and showing, both graphically and through tests, which model best fits the collected data. The model that best fitted was the log-normal.
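The thesis carries out this comparison in R; a hedged Python sketch of the same idea, with hypothetical component lifetimes in years and right-censoring flags, is shown below (the data are illustrative, not the 60 samples from the study).

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical component lifetimes (years); event = 0 marks right-censored units.
time = np.array([4.2, 5.1, 6.3, 7.0, 7.8, 8.4, 9.1, 9.7, 10.5, 11.2, 12.0, 12.0])
event = np.array([1,   1,   1,   1,   1,   1,   1,   1,    1,    1,    0,    0])

def kaplan_meier(time, event):
    """Product-limit estimate of the survival function at each observed failure."""
    order = np.argsort(time)
    t, d = time[order], event[order]
    at_risk, surv, out = len(t), 1.0, []
    for ti, di in zip(t, d):
        if di:                      # survival steps down only at observed failures
            surv *= (at_risk - 1) / at_risk
            out.append((ti, surv))
        at_risk -= 1
    return out

def weibull_negloglik(params):
    shape, scale = np.exp(params)   # log-parameterisation keeps both positive
    z = (time / scale) ** shape
    log_f = np.log(shape / scale) + (shape - 1) * np.log(time / scale) - z
    log_S = -z
    return -np.sum(np.where(event == 1, log_f, log_S))

fit = minimize(weibull_negloglik, x0=[0.5, 2.0], method="Nelder-Mead")
shape_hat, scale_hat = np.exp(fit.x)
print("Kaplan-Meier (first steps):", kaplan_meier(time, event)[:3])
print("Weibull shape:", round(shape_hat, 2), "scale (years):", round(scale_hat, 2))
```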
|