201

Monitoring and analysis system for performance troubleshooting in data centers

Wang, Chengwei 13 January 2014 (has links)
It was not long ago. On Christmas Eve 2012, a war of troubleshooting began in Amazon data centers. It started at 12:24 PM with a mistaken deletion of the state data of the Amazon Elastic Load Balancing service (ELB for short), which was not realized at the time. The mistake first led to a local issue in which a small number of ELB service APIs were affected. In about six minutes, it evolved into a critical one in which EC2 customers were significantly affected. One example was Netflix, which was using hundreds of Amazon ELB services and experienced an extensive streaming service outage in which many customers could not watch TV shows or movies on Christmas Eve. It took Amazon engineers 5 hours and 42 minutes to find the root cause, the mistaken deletion, and another 15 hours and 32 minutes to fully recover the ELB service. The war ended at 8:15 AM the next day and brought performance troubleshooting in data centers to the world's attention.

As this Amazon ELB case shows, troubleshooting runtime performance issues is crucial in time-sensitive multi-tier cloud services because of their stringent end-to-end timing requirements, but it is also notoriously difficult and time consuming. To address the troubleshooting challenge, this dissertation proposes VScope, a flexible monitoring and analysis system for online troubleshooting in data centers. VScope provides primitive operations which data center operators can use to troubleshoot various performance issues. Each operation is essentially a series of monitoring and analysis functions executed on an overlay network. We design a novel software architecture for VScope so that the overlay networks can be generated, executed and terminated automatically, on-demand. On the troubleshooting side, we design novel anomaly detection algorithms and implement them in VScope; by running these algorithms, data center operators are notified when performance anomalies happen. We also design a graph-based guidance approach, called VFocus, which tracks the interactions among hardware and software components in data centers. VFocus provides primitive operations by which operators can analyze these interactions to find out which components are relevant to a performance issue.

VScope's capabilities and performance are evaluated on a testbed with over 1000 virtual machines (VMs). Experimental results show that the VScope runtime negligibly perturbs system and application performance, and requires mere seconds to deploy monitoring and analytics functions on over 1000 nodes. This demonstrates VScope's ability to support fast operation and online queries against a comprehensive set of application- to system/platform-level metrics, and a variety of representative analytics functions. When supporting algorithms with high computational complexity, VScope serves as a 'thin layer' that accounts for no more than 5% of their total latency. Further, by using VFocus, VScope can locate problematic VMs that cannot be found via application-level monitoring alone, and in one of the use cases explored in the dissertation, it operates with levels of perturbation over 400% lower than those seen for brute-force and most sampling-based approaches. We also validate VFocus with real-world data center traces. The experimental results show that VFocus has a troubleshooting accuracy of 83% on average.
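The dissertation's own detection algorithms are not reproduced in this listing; purely as a minimal sketch of the kind of online, per-metric anomaly detection a VScope operation might run, a sliding-window z-score test is shown below. The window length, threshold, and simulated latency stream are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from collections import deque

def zscore_anomalies(stream, window=60, threshold=3.0):
    """Flag samples that deviate strongly from a sliding-window baseline."""
    history = deque(maxlen=window)
    flags = []
    for x in stream:
        if len(history) >= window // 2:   # wait until a minimal baseline exists
            mu, sigma = np.mean(history), np.std(history)
            flags.append(bool(sigma > 0 and abs(x - mu) / sigma > threshold))
        else:
            flags.append(False)
        history.append(x)
    return flags

# Example: latency samples (ms) with an injected spike
rng = np.random.default_rng(1)
latency = np.concatenate([rng.normal(20, 2, 300), [80, 85, 90], rng.normal(20, 2, 50)])
print(sum(zscore_anomalies(latency)), "anomalous samples flagged")
```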
202

Development of convective reflow-projection moire warpage measurement system and prediction of solder bump reliability on board assemblies affected by warpage

Tan, Wei 05 March 2008 (has links)
Out-of-plane displacement (warpage) is one of the major thermomechanical reliability concerns for board-level electronic packaging. Printed wiring board (PWB) and component warpage results from coefficient of thermal expansion (CTE) mismatch among the materials that make up the PWB assembly (PWBA). Warpage occurring during surface-mount assembly reflow processes and normal operations may cause serious reliability problems. In this research, a convective reflow and projection moire warpage measurement system was developed. The system is the first real-time, non-contact, and full-field measurement system capable of measuring PWB/PWBA/chip package warpage with the projection moire technique during different thermal reflow processes. In order to accurately simulate the reflow process and to achieve the ideal heating rate, a convective heating system was designed and integrated with the projection moire system. An advanced feedback controller was implemented to obtain the optimum heating responses. The developed system has the advantages of simulating different types of reflow processes and of reducing the temperature gradients through the PWBA thickness to ensure that the projection moire system can provide more accurate measurements. Automatic package detection and segmentation algorithms were developed for the projection moire system. The algorithms are used for automatic segmentation of the PWB and assembled packages so that the warpage of the PWB and chip packages can be determined individually. The effect of initial PWB warpage on the fatigue reliability of solder bumps on board assemblies was investigated using finite element modeling (FEM) and the projection moire system. The 3-D models of PWBAs with varying board warpage were used to estimate the solder bump fatigue life for different chip packages mounted on PWBs. The simulation results were validated and correlated with the experimental results obtained using the projection moire system and accelerated thermal cycling tests. Design of experiments and an advanced prediction model were generated to predict solder bump fatigue life based on the initial PWB warpage, package dimensions and locations, and solder bump materials. This study led to a better understanding of the correlation between PWB warpage and solder bump thermomechanical reliability on board assemblies.
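The FEM-based life prediction itself is not reproduced here; as a hedged illustration of how strain-based solder fatigue life estimates are commonly expressed, a Coffin-Manson-type relation can be inverted for cycles to failure. The material constants below are placeholder assumptions, not values from this study.

```python
def coffin_manson_cycles(delta_eps_p, eps_f_prime=0.325, c=-0.5):
    """Cycles to failure from the plastic strain range, using the Coffin-Manson form:
    delta_eps_p / 2 = eps_f_prime * (2 * Nf) ** c  ->  Nf = 0.5 * (delta_eps_p / (2*eps_f_prime)) ** (1/c)."""
    return 0.5 * (delta_eps_p / (2.0 * eps_f_prime)) ** (1.0 / c)

# A larger warpage-induced strain range implies a shorter predicted life.
for strain_range in (0.005, 0.010, 0.020):
    print(f"plastic strain range {strain_range:.3f} -> ~{coffin_manson_cycles(strain_range):,.0f} cycles")
```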
203

Optimal maintenance of a multi-unit system under dependencies

Sung, Ho-Joon 17 November 2008 (has links)
The availability, or reliability, of an engineering component greatly influences the operational cost and safety characteristics of a modern system over its life cycle. Until recently, reliance on past empirical data has been the industry-standard practice for developing maintenance policies that provide the minimum level of system reliability. Because such empirically derived policies are vulnerable to unforeseen or fast-changing external factors, recent advances in the study of maintenance, known as the optimal maintenance problem, have gained considerable interest as a legitimate area of research. An extensive body of applicable work is available, ranging from studies concerned with identifying maintenance policies that provide the required system availability at the minimum possible cost to topics on imperfect maintenance of multi-unit systems under dependencies. Nonetheless, these existing mathematical approaches to solving for optimal maintenance policies must be treated with caution when considered for broader applications, as they are accompanied by specialized treatments to ease the mathematical derivation of unknown functions in both the objective function and the constraint of a given optimal maintenance problem. These unknown functions are defined as reliability measures in this thesis, and these measures (e.g., expected number of failures, system renewal cycle, expected system up time, etc.) often do not lend themselves to closed-form formulas. It is thus quite common to impose simplifying assumptions on the input probability distributions of components' lifetimes or repair policies. Simplifying the complex structure of a multi-unit system to a k-out-of-n system by neglecting any sources of dependency is another commonly practiced technique intended to increase the mathematical tractability of a particular model. This dissertation proposes an alternative methodology for solving optimal maintenance problems that aims to achieve the same end goals as Reliability Centered Maintenance (RCM). RCM was first introduced in the aircraft industry in an attempt to bridge the gap between the empirically driven and theory-driven approaches to establishing optimal maintenance policies. Under RCM, qualitative processes that enable the prioritizing of functions based on criticality and influence are combined with mathematical modeling to obtain the optimal maintenance policies. Where this thesis deviates from RCM is in its proposal to directly apply quantitative processes to model the reliability measures in the optimal maintenance problem. First, Monte Carlo (MC) simulation, in conjunction with a pre-determined Design of Experiments (DOE) table, can be used as a numerical means of obtaining the corresponding discrete simulated outcomes of the reliability measures for each combination of decision variables (e.g., periodic preventive maintenance interval, trigger age for opportunistic maintenance, etc.). These discrete simulation results can then be regressed as Response Surface Equations (RSEs) with respect to the decision variables. Such an approach to representing the reliability measures with continuous surrogate functions (i.e., the RSEs) not only enables the application of numerical optimization techniques to solve for optimal maintenance policies, but also obviates the need to make mathematical assumptions or impose over-simplifications on the structure of a multi-unit system for the sake of mathematical tractability.
The applicability of the proposed methodology to a real-world optimal maintenance problem is showcased through its application to Time Limited Dispatch (TLD) of a Full Authority Digital Engine Control (FADEC) system. In broader terms, this proof-of-concept exercise can be described as a constrained optimization problem whose objective is to identify the optimal system inspection interval that guarantees a certain level of availability for a multi-unit system. A variety of reputable numerical techniques were used to model the problem as accurately as possible, including algorithms for the MC simulation, an imperfect maintenance model from quasi-renewal processes, repair time simulation, and state transition rules. Variance Reduction Techniques (VRTs) were also used in an effort to enhance MC simulation efficiency. After accurate MC simulation results were obtained, the RSEs were generated based on goodness-of-fit measures to yield as parsimonious a model as possible for constructing the optimization problem. Under the assumption of a constant failure rate for the lifetime distributions, the inspection interval from the proposed methodology was found to be consistent with the one from the common industry approach that leverages a Continuous Time Markov Chain (CTMC). While the latter does not consider maintenance cost settings, the proposed methodology enables an operator to consider different types of maintenance cost settings (e.g., inspection cost, system corrective maintenance cost, etc.), resulting in more flexible maintenance policies. When the proposed methodology was applied to the same TLD of FADEC example, but under the more generalized assumption of a strictly Increasing Failure Rate (IFR) for the lifetime distribution, it was shown to successfully capture component wear-out, as well as the economic dependencies among the system components.
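The thesis's full simulation framework is not shown here; as a minimal sketch of the core idea, assuming a single component with a Weibull lifetime renewed by periodic preventive maintenance, one can estimate availability by Monte Carlo over a grid of PM intervals and regress a quadratic response surface on the results. All parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_availability(pm_interval, shape=2.5, scale=1000.0,
                           repair_time=24.0, pm_time=4.0, horizon=50_000.0, runs=200):
    """Monte Carlo estimate of availability under age-based preventive maintenance (hours)."""
    avail = []
    for _ in range(runs):
        t, downtime = 0.0, 0.0
        while t < horizon:
            life = scale * rng.weibull(shape)
            if life < pm_interval:            # component fails before the scheduled PM
                t += life + repair_time
                downtime += repair_time
            else:                             # PM renews the component first
                t += pm_interval + pm_time
                downtime += pm_time
        avail.append(1.0 - downtime / t)
    return float(np.mean(avail))

# DOE over PM intervals, then a quadratic response surface (RSE) in the decision variable
intervals = np.linspace(200, 1500, 8)
responses = np.array([simulated_availability(x) for x in intervals])
rse = np.polynomial.Polynomial.fit(intervals, responses, deg=2)
fine = np.linspace(200, 1500, 500)
print(f"RSE-suggested PM interval ~ {fine[np.argmax(rse(fine))]:.0f} h")
```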
204

The application of the six sigma quality concept to improve process performance in a continuous processing plant

Nxumalo, G. L 12 1900 (has links)
Thesis (MScEng)--University of Stellenbosch, 2005. / ENGLISH ABSTRACT: This report presents the application of the six sigma quality concept in solving a true business problem. Six sigma is a quality improvement and business strategy/tool developed by Motorola in the mid 1980s. It aims at delivering products and services that approach levels of near perfection. To achieve this objective a six sigma process must not produce more than 3.4 defects per million opportunities, meaning the process should be at least 99.9997% perfect [Berdebes, 2003]. Motorola's success with six sigma popularised the concept and it has now been adopted by many of the world's top companies, e.g. General Electric, Allied Signal-Honeywell, etc. All the six sigma companies report big financial returns as a result of increased quality levels due to the reduction in the number of defects. 'General Electric reports annual benefits of over $2.5 billion across the organisation from six sigma' [Huag, 2003]. The six sigma concept follows a five-step problem-solving methodology known as DMAIC (Define, Measure, Analyse, Improve, Control) to improve existing processes. Each of these steps makes use of a range of tools, which include quality, statistical, engineering, and business tools. This report first gives a theoretical presentation on quality and six sigma, attempting to answer the question 'What is six sigma'. A step-by-step guide on how to go through the DMAIC problem solving cycle is also presented. The six sigma concept was demonstrated by application to the colour removal process of a continuous processing plant manufacturing refined sugar. Colour removal is a very important process in sugar refining since the purpose of a refinery is to remove colour and other impurities from the raw sugar crystals. The colour removal process consists of three unit operations: liming, carbonation and sulphitation. Liming involves the addition of lime (calcium hydroxide) required for the formation of a calcium precipitate in the next unit operations. Carbonation is carried out in two stages: primary and secondary carbonation. Both stages involve the formation of a calcium carbonate precipitate, which traps colour bodies and other impurities. Sulphitation occurs in a single step and involves the formation of a calcium sulphite precipitate which also traps impurities. The pH and colour are the main variables that are being monitored throughout the colour removal process.
[Figure 1: Colour removal process: raw sugar → melting → liming → primary and secondary carbonation → sulphitation → crystallisation → sugar]
The pH control of the two colour removal unit operations, carbonation and sulphitation, is very poor and as a result the colour removal achieved is below expectation. This compromises the final refined sugar quality since colour not removed in the colour removal processes ends up in the sugar. The first carbonation stage (primary) fails to lower the pH to the required specification and the second carbonation stage (secondary) is highly erratic, with the pH fluctuating between too high and too low. The sulphitation process adds more sulphur dioxide than required and hence the pH is lowered below the lower specification limit. The six sigma DMAIC cycle was implemented in order to solve the problem of poor pH control. The Define phase defined the project and identified the process to be improved.
The Measure phase measured the current performance of the process by collecting past laboratory data with the corresponding field instrument data. The data was used to draw frequency distribution plots that displayed the actual variation of the process relative to the specification width and to calculate process capability indices. The Analyse phase analysed the data so as to determine the key sources of variation. The Improve phase used the findings of the Analyse phase to propose solutions to improve the colour removal processes. The Control phase proposed a control plan so as to monitor and sustain the improvement gained. The key findings of the study are presented below:
• Failure of the first carbonation stage to lower the pH to the required level is due to insufficient carbon dioxide gas supply.
• The second carbonation reaction occurs very fast, hence poor control will result in high variability.
• The amount of colour removed is dependent on the input raw melt colour.
• The histograms of the colour removal unit operations are off-centered and display a process variation greater than the specification width, and hence a large proportion of the data falls outside the specification limits.
• The % CaO and CO2 gas addition were found to be the key variables that control the processes' centering on target, with the % CaO having a stronger effect on the liming process and CO2 gas addition on the carbonation process.
• The variation between the field instrument's pH and the laboratory pH is the key variable that controls the processes' spread (standard deviation of the processes).
• The processes' Cpk values are less than Cp (Cpk < Cp), meaning the processes can be improved by controlling the key variables that control centering (% CaO, CO2 gas addition). The process capability indices are low (Cp < 1), meaning the processes are not statistically capable of meeting the required specifications at the current conditions.
Based on the findings of the study, the following deductions are made for the improvement of the colour removal processes in better meeting the required specifications:
• Increase the CO2 gas supply to at least 4900 m³/hr, calculated on the basis that at least 140 m³ of gas is required per ton of solids in melt [Sugar Milling Research Institute Course Notes, 2002].
• Control the key variables identified as the key sources of variation: % CaO, CO2 gas addition and the variation between the field instrument's pH and the laboratory pH. Reducing variation in the % CaO and increasing the CO2 gas supply will improve the processes' ability to maintain centering at the target specification. Maintaining a consistent correlation between the two pH readings (field instrument pH and laboratory pH) will reduce the processes' standard deviation and hence the processes' spread. Reduction in the processes' spread will minimize the total losses outside the specification limits. This will allow better control of the pH by getting rid of high fluctuations.
• Control of the input raw melt colour is essential since it has an impact on the degree of decolourisation. The higher the input colour, the more work is required to remove the colour.
In improving the colour removal processes the starting point should be ensuring process stability. Only once this is achieved may the above adjustments be made to improve the processes' capability.
The processes' capability will only improve to a certain extent, since from the capability studies it is evident that the processes are not capable of meeting specifications. To provide better control and to ensure continuous improvement of the processes, the following recommendations are made:
• Statistical process control charts: The colour removal processes are highly unstable; the use of control charts will help in detecting any out-of-control conditions. Once an out-of-control condition has been detected, the necessary investigations may be made to determine the source of instability so as to remove its influence. Being able to monitor the processes for out-of-control situations will help in rectifying any problems before they affect the process outputs.
• Evaluation of capability indices (ISO 9000 internal audits): Consider incorporating the assessment of the capability indices as part of the ISO 9000 internal audits so as to measure process improvement. It is good practice to set a target for Cp. The six sigma standard is Cp = 2; this however does not mean the goal should be Cp = 2, since this depends on the robustness of the process against variation. For instance, the colour removal processes at the current operating conditions can never reach Cp = 2. This however is not a constraint, since for the colour removal processes to better meet pH specifications it is not critical that they achieve six sigma quality. A visible improvement may be seen in aiming for Cp = 1. On studying the effects of CO2 gas addition, the total data points outside specification limits reduced from 84% to 33%, and by reducing the variation between the field instrument pH and laboratory pH for the secondary pH, the total data points out of specification reduced from 55% to 48%. These results indicate that by improving Cp to be at least equal to one (Cp = 1) the total data points outside specification can reduce significantly, indicating a high ability of the processes to meet specifications. Thus even if six sigma quality is not achieved, by focussing on process improvement using six sigma tools, visible benefits can be achieved. / AFRIKAANSE OPSOMMING: Hierdie tesis kyk na die toepassing van die ses sigma kwaliteitskonsep om 'n praktiese probleem op te los. Ses sigma soos dit algemeen bekend staan is nie slegs 'n kwaliteitverbeteringstegniek nie maar ook 'n strategiese besigheidsbenadering wat in die middel 1980s deur Motorola ontwikkel en bekend gestel is. Die doelstellings is om produkte en dienste perfek af te lewer. Om die doelwit te kan bereik poog die tegniek om die proses so te ontwerp dat daar nie meer as 3.4 defekte per miljoen mag wees nie - dit wil sê die proses is 99,9997% perfek [Berdebes, 2003]. As gevolg van die sukses wat Motorola met die konsep behaal het, het dit algemene bekendheid verwerf, en word dit intussen deur baie van die wêreld se voorste maatskappye gebruik, o.a. General Electric, Allied Signal-Honeywell, ens. Al die maatskappye toon groot finansiële voordele as gevolg van die vermindering in defekte wat teweeg gebring is. So beloop bv. die jaarlikse voordele vir General Electric meer as $2.5 biljoen [Huag, 2003]. Die ses sigma konsep volg 'n vyf-stap probleem oplossings proses (in Engels bekend as DMAIC: Define, Measure, Analyse, Improve, Control), naamlik definieer, meet, analiseer, verbeter, en beheer om bestaande prosesse te verbeter. In elkeen van die stappe is daar spesifieke gereedskap of tegnieke wat aangewend kan word, soos bv. kwaliteits-, statistiese-, ingenieurs- en besigheidstegnieke.
Die verslag begin met 'n teoretiese oorsig oor kwaliteit en die ses sigma proses, waardeur die vraag "wat is ses sigma" beantwoord word. Daarna volg 'n gedetailleerde stap-vir-stap beskrywing van die DMAIC probleem oplossingsiklus. Die toepassing van die ses sigma konsep word dan gedoen aan die hand van 'n spesifieke proses in die kontinue suiker prosesserings aanleg, naamlik die kleurverwyderingsproses. Hierdie proses is baie belangrik omdat die doelstellings daarvan juis draai rondom die verwydering van nie net kleur nie maar ook alle ander vreemde bestanddele van die rou suiker kristalle. Die proses bestaan uit drie onafhanklike maar sekwensiële aktiwiteite waardeur verseker word dat die regte gehalte suiker uiteindelik verkry word. Tydens die eerste twee stappe is veral die pH-beheer onder verdenking, sodat die kleurverwydering nie die gewenste kwaliteit lewer nie. Dit beïnvloed op sy beurt die gehalte van die finale produk, omdat die ongewenste kleur uiteindelik deel is van die suiker. Die pH inhoud is nie net nie laag genoeg nie, maar ook hoogs veranderlik - in beginsel dus buite beheer. Die DMAIC siklus is toegepas ten einde die pH beter te kan beheer. Tydens die definisiefase is die projek beskryf en die proses wat verbeter moet word identifiseer. In die meetfase is die nodige data versamel om sodoende die inherente prosesveranderlikheid te bepaal. Die belangrikste bronne of veranderlikes wat bydra tot die prosesveranderlikheid is in die derde- of analisefase bepaal. Hierdie bevindings is gebruik tydens die verbeteringsfase om voorstelle ter verbetering van die proses te maak. Die voorstelle is implementeer en in die laaste fase, naamlik die beheerfase, is 'n plan opgestel ten einde te verseker dat die proses deurentyd gemonitor word sodat die verbeterings volhoubaar bly. 'n Hele aantal veranderlikes wat elk bygedra het tot die prosesvariasie is identifiseer, en word in detail in die verslag beskryf. Gebaseer op die analise en bevindings van die ondersoek kon logiese aanbevelings gemaak word sodat die proses 'n groot verbetering in kleurverwydering getoon het. Die belangrikste bevinding was dat die huidige proses nie die vermoë het om 100% te voldoen aan die spesifikasies of vereistes nie. Die hoofdoel van die voorstelle is dus om te begin om die prosesveranderlikheid te minimeer of ten minste te stabiliseer - eers nadat die doel bereik is kan daar voortgegaan word om verbeteringe te implementeer wat die prosesvermoë aanspreek. Ten einde hierdie beheer te kan uitoefen en variasie te verminder is die volgende voorstelle gemaak:
• Statistiese beheer kaarte: Die kleurverwyderingsproses is hoogs onstabiel. Met behulp van statistiese beheer kaarte is daar 'n vroegtydige waarskuwing van moontlike buite beheer situasies. Die proses kan dus ondersoek en aangepas word voordat die finale produkkwaliteit te swak word.
• Evaluering van prosesvermoë (ISO 9000 interne oudit): Die assessering van die prosesvermoë behoort deel te word van die interne ISO oudit proses, om sodoende prosesverbeteringe gereeld en amptelik te meet. Die standaard gestel vir Cp behoort gedurig aandag te kry - dit is nie goeie praktyk om bv. slegs 'n doelwit van Cp = 2 soos voorgestel in ses sigma te gebruik nie, maar om dit aan te pas na gelang van die robuustheid van die proses wat bereik is. Daar is beduidende voordele bereik deur die toepassing van die DMAIC siklus.
So het byvoorbeeld die persentasie datapunte buite spesifikasie verminder van 84% tot 33%, bloot deur te kyk na die effek wat die toevoeging van CO2 gas tydens die proses het. Dit toon dus duidelik dat, alhoewel die proses huidiglik nie die vermoë het om te voldoen aan die vereistes van ses sigma nie, dit wel die moeite werd is om die beginsels en tegnieke toe te pas.
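As a quick illustration of the capability indices this study relies on (with invented specification limits and simulated pH readings, not the plant's actual figures), Cp and Cpk can be computed as follows.

```python
import numpy as np

def capability_indices(samples, lsl, usl):
    """Process capability: Cp = (USL - LSL) / (6*sigma); Cpk also accounts for centering."""
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical secondary-carbonation pH readings against an assumed spec of 8.5 to 9.5
rng = np.random.default_rng(7)
ph = rng.normal(9.3, 0.4, 200)            # off-centre and too variable
cp, cpk = capability_indices(ph, 8.5, 9.5)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")  # Cp < 1 and Cpk < Cp, as reported for the plant
```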
205

Semiconductor Yield Modeling Using Generalized Linear Models

January 2011 (has links)
abstract: Yield is a key process performance characteristic in the capital-intensive semiconductor fabrication process. In an industry where machines cost millions of dollars and cycle times are a number of months, predicting and optimizing yield are critical to process improvement, customer satisfaction, and financial success. Semiconductor yield modeling is essential to identifying processing issues, improving quality, and meeting customer demand in the industry. However, the complicated fabrication process, the massive amount of data collected, and the number of models available make yield modeling a complex and challenging task. This work presents modeling strategies to forecast yield using generalized linear models (GLMs) based on defect metrology data. The research is divided into three main parts. First, the data integration and aggregation necessary for model building are described, and GLMs are constructed for yield forecasting. This technique yields results at both the die and the wafer levels, outperforms existing models found in the literature based on prediction errors, and identifies significant factors that can drive process improvement. This method also allows the nested structure of the process to be considered in the model, improving predictive capabilities and violating fewer assumptions. To account for the random sampling typically used in fabrication, the work is extended by using generalized linear mixed models (GLMMs) and a larger dataset to show the differences between batch-specific and population-averaged models in this application and how they compare to GLMs. These results show some additional improvements in forecasting abilities under certain conditions and show the differences between the significant effects identified in the GLM and GLMM models. The effects of link functions and sample size are also examined at the die and wafer levels. The third part of this research describes a methodology for integrating classification and regression trees (CART) with GLMs. This technique uses the terminal nodes identified in the classification tree to add predictors to a GLM. This method enables the model to consider important interaction terms in a simpler way than with the GLM alone, and provides valuable insight into the fabrication process through the combination of the tree structure and the statistical analysis of the GLM. / Dissertation/Thesis / Ph.D. Industrial Engineering 2011
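As a minimal sketch of the kind of defect-metrology-based GLM described here (using simulated data and the statsmodels library; the predictors and link choice are illustrative assumptions, not the dissertation's actual model), die-level pass/fail can be modeled with a binomial GLM.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2000                                              # simulated dice
defects = rng.poisson(1.2, n)                         # defect count per die from metrology
critical_layer = (rng.integers(0, 3, n) == 2).astype(float)  # hypothetical layer indicator
true_logit = 2.0 - 1.1 * defects - 0.3 * critical_layer
passed = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

X = sm.add_constant(np.column_stack([defects, critical_layer]))
model = sm.GLM(passed, X, family=sm.families.Binomial()).fit()
print(model.summary())
print("Predicted yield for a defect-free die:", model.predict([[1.0, 0.0, 0.0]])[0])
```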
206

Metodologia para a determinação dos índices de confiabilidade em subestações de energia elétrica com ênfase nos impactos sociais de uma falha / Methodology for determining reliability indices in electric power substations with emphasis on the social impacts of a failure

Barbosa, Jair Diaz January 2015 (has links)
Orientador: Prof. Dr. Ricardo Caneloi dos Santos / Dissertação (mestrado) - Universidade Federal do ABC. Programa de Pós-Graduação em Energia, 2015. / Este projeto de pesquisa estabelece uma metodologia para determinar os índices de confiabilidade/disponibilidade em subestações de energia elétrica, partindo da necessidade de tornar as operações de manutenção mais eficazes mitigando os impactos ambientais, sociais, econômicos e técnicos provocados pelos cortes de fornecimento de energia elétrica. A metodologia utilizada baseia-se em dois métodos normalmente utilizados individualmente em estudos de confiabilidade. O método denominado Árvore de Falhas que proporciona um modelo lógico de possíveis combinações de falhas para um evento principal, e a simulação de Monte Carlo que possibilita estimar os índices de interesse do sistema elétrico pela geração aleatória dos diferentes estados do sistema (operação, falha ou manutenção). Considerando este contexto, neste trabalho de pesquisa são identificados os pontos vulneráveis, a probabilidade de falha e a indisponibilidade de cada subestação, com o objetivo de elevar os índices de confiabilidade, elevar a vida útil dos componentes e proporcionar um esquema otimizado de manutenção preventiva para as concessionárias. Consequentemente, o resultado desse trabalho visa diminuir a frequência dos cortes de energia não programados e seus respectivos impactos ambientais, sociais e econômicos produzidos pelo não fornecimento de energia elétrica. Nesse sentido, uma discussão sobre os impactos das falhas elétricas para sociedade também é realizada. / This work provides a methodology to determine the levels of reliability/availability in electrical substations, based on the need to make maintenance operations more effective while reducing the negative environmental, social, economic and technical impacts caused by power outages. The methodology is based on two methods typically used individually in reliability studies: the Fault Tree method, which provides a logical model of possible failure combinations for a top event, and Monte Carlo simulation, which estimates the power system indices of interest by random generation of the different states of the system (operation, failure or maintenance). In this context, this work identifies the vulnerable points, the probability of failure and the unavailability of each substation, in order to increase the reliability indices, extend the service life of components and provide an optimized preventive maintenance scheme for the utilities. Consequently, this work seeks to decrease the frequency of unscheduled power cuts and their environmental, social and economic impacts produced by the non-supply of electricity. In this sense, a discussion of the impacts of electrical faults on society is also presented.
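As a minimal sketch of the Monte Carlo side of such a study (a substation reduced to two redundant transformers in parallel, with exponential failure and repair times; all rates are invented for illustration), availability can be estimated by sampling up/down trajectories on a common time grid.

```python
import numpy as np

rng = np.random.default_rng(3)

def up_down_trajectory(mtbf, mttr, horizon, dt):
    """Boolean availability trajectory of one component on a fixed time grid (hours)."""
    n = int(horizon / dt)
    state = np.empty(n, dtype=bool)
    t, up = 0.0, True
    duration = rng.exponential(mtbf)
    for i in range(n):
        state[i] = up
        t += dt
        if t >= duration:                     # switch between operating and repair states
            up = not up
            t = 0.0
            duration = rng.exponential(mtbf if up else mttr)
    return state

# The substation is considered up if either transformer is up.
horizon, dt = 200_000.0, 1.0
a = up_down_trajectory(mtbf=2000.0, mttr=48.0, horizon=horizon, dt=dt)
b = up_down_trajectory(mtbf=2000.0, mttr=48.0, horizon=horizon, dt=dt)
print(f"Estimated substation availability: {(a | b).mean():.5f}")
print(f"Analytical steady-state check:     {1 - (48 / 2048) ** 2:.5f}")
```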
207

Aplicação da análise de sobrevivência na estimativa da vida útil de componentes construtivos / Application of survival analysis to the estimation of the service life of building components

Célio Costa Souto Maior 30 December 2009 (has links)
Este trabalho foi elaborado na tentativa de se aplicar as técnicas e os conceitos que são empregados na Análise de Sobrevivência, na medicina, e aplicá-los na Engenharia Civil, na área de construção. Essas técnicas recebem o nome de Confiabilidade, que permite estimar a vida útil de componentes construtivos através dos modelos probabilísticos mais usados nessa área. Esses modelos foram desenvolvidos e aplicados nos dados colhidos, com uma confiabilidade de 95%. Esse assunto já vem sendo estudado na comunidade científica, mas especificamente na área da construção civil pouco se conhece, encontrando-se ainda na fase embrionária e merecendo um estudo mais aprofundado. Logo, nossa proposição foi a de, a partir da análise feita em uma amostra de estudo, tirar algum proveito ou alguma informação que possa ajudar em trabalhos futuros. Foram coletadas 60 amostras para estudo, sendo que 10 delas foram censuradas, e o estudo terminou com um número pré-determinado de amostras estudadas. Os resultados foram obtidos através do software R, sendo comparados os modelos Exponencial, Weibull e Log-normal com o estimador de Kaplan-Meier, mostrando tanto graficamente como através de testes qual modelo melhor se ajusta aos dados coletados. O modelo que melhor se ajustou foi o Log-normal / This work was developed as an effort to apply the techniques and concepts employed in survival analysis in medicine to civil engineering, in the construction area. These techniques go by the name of reliability analysis, which allows the service life of building components to be estimated through the probabilistic models most commonly used in that field. These models were developed and applied to the collected data, with a reliability of 95%. This subject has been studied in the scientific community, but in the construction area specifically it is poorly known, is still in its infancy, and deserves further study. Thus, our proposition was to draw from the analysis of a study sample some benefit or information that might help future work. Sixty samples were collected for the study, 10 of which were censored, so the study ended with a predetermined number of samples. The results were obtained using the R software, comparing the Exponential, Weibull and Log-normal models with the Kaplan-Meier estimator and showing, both graphically and through tests, which model best fits the collected data. The model that best fitted the data was the log-normal.
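The thesis's R workflow is not reproduced here; as a rough Python stand-in (assuming the lifelines package and made-up right-censored lifetimes, mirroring the 60-samples/10-censored setup), parametric fits can be compared against the Kaplan-Meier estimator by AIC.

```python
import numpy as np
from lifelines import KaplanMeierFitter, ExponentialFitter, WeibullFitter, LogNormalFitter

rng = np.random.default_rng(5)
true_life = rng.lognormal(mean=3.0, sigma=0.4, size=60)   # simulated component lifetimes
censor_at = np.quantile(true_life, 5 / 6)                 # censor roughly the last 10 observations
T = np.minimum(true_life, censor_at)                      # observed durations
E = (true_life <= censor_at).astype(int)                  # 1 = failure observed, 0 = censored

kmf = KaplanMeierFitter().fit(T, E)                       # non-parametric reference curve
for fitter in (ExponentialFitter(), WeibullFitter(), LogNormalFitter()):
    fitter.fit(T, E)
    print(f"{fitter.__class__.__name__:18s} AIC = {fitter.AIC_:.1f}")  # lowest AIC: best fit
```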
208

Análise da capacidade de carga de fundação por sapatas executadas na cidade de São Caetano do Sul/SP / Bearing capacity analysis of shallow foundation in São Caetano do Sul city

Noguchi, Leandro Tomio 20 August 2018 (has links)
Orientador: Paulo José Rocha de Albuquerque / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Civil, Arquitetura e Urbanismo / Made available in DSpace on 2018-08-20T07:37:54Z (GMT). No. of bitstreams: 1 Noguchi_LeandroTomio_M.pdf: 6042585 bytes, checksum: 3b22b2450a9ddc40e2911dbb661a0359 (MD5) Previous issue date: 2012 / Resumo: Este trabalho teve o objetivo de estudar as formulações e teorias de capacidade de carga e previsão de recalques para o caso de uma solução em fundação superficial adotada em uma obra de um edifício comercial de 10 pavimentos e 3 subsolos localizado na cidade de São Caetano do Sul/SP, por meio de análise de quatro provas de carga sobre placa. Para tal, o solo local foi submetido a ensaios de laboratório para caracterização e determinação de parâmetros geotécnicos que alimentaram os métodos propostos e apresentados na literatura. Um modelo do ensaio de carga foi simulado em programa de elementos finitos, com os parâmetros dos ensaios de laboratório e assim determinando a curva carga vs recalque. Foram realizadas análises probabilísticas que forneceram o índice de confiabilidade e a probabilidade de ruína, possibilitando a redução do fator de segurança da fundação e o aumento da tensão admissível / Abstract: This work aims to study the methods and theories of bearing capacity and settlement prediction for a shallow foundation solution adopted in a commercial building with 10 floors and 3 basements in the city of São Caetano do Sul/SP, through the analysis of four plate loading tests. An undisturbed soil sample was collected and submitted to laboratory tests for characterization and determination of the geotechnical parameters that support the theoretical methods proposed in the literature. This made it possible to check the existing calculation methods for bearing capacity and settlement prediction. A model of the plate load test was simulated using a finite element program with the parameters from the laboratory tests, thus determining the load-settlement curve. From the allowable stress obtained, probabilistic analyses were performed, which made possible the calculation of the reliability index and the probability of failure, allowing a reduction of the foundation's safety factor and an increase in the allowable stress / Mestrado / Geotecnia / Mestre em Engenharia Civil
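As a compact illustration of the reliability index and probability of failure referred to here (a first-order second-moment calculation with invented resistance and load statistics, not the thesis's values):

```python
from math import sqrt
from scipy.stats import norm

def fosm_reliability(mu_R, sigma_R, mu_S, sigma_S):
    """FOSM reliability index for independent normal resistance R and load effect S."""
    beta = (mu_R - mu_S) / sqrt(sigma_R ** 2 + sigma_S ** 2)
    return beta, norm.cdf(-beta)              # probability of failure, P(R < S)

# Hypothetical bearing capacity vs applied stress (kPa)
beta, pf = fosm_reliability(mu_R=600.0, sigma_R=90.0, mu_S=300.0, sigma_S=45.0)
print(f"beta = {beta:.2f}, Pf = {pf:.2e}")    # a higher beta means a lower probability of failure
```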
209

Improving the reliability of a chemical process plant

Tomo, Zonwabele Zweli Simon 05 June 2012 (has links)
M.Phil. / In modern society, professional engineers, technologists and technical managers are responsible for the planning, design, manufacture, maintenance and operation of the processes and systems ranging from simple processes to complex systems. The failure of these can often cause effects that range from inconvenience and irritation to severe impact on the society and its environment. Users, customers and society in general expect that products be reliable and safe at all times (Allan & Ballinton 1992). The biggest investment in any plant is, arguably, on individual plant equipment. It is therefore reasonable to give the greatest attention possible to the health and integrity of equipment that form part of the chemical process plant.Most of plant failures occur without warning and this result in equipment breakdowns, huge production losses and expensive maintenance. The reaction to plant failures has, in most cases, been a reactive maintenance which means that the plant equipment must fail before the cause of fault is investigated and the equipment is repaired. Reactive maintenance has shortcomings in that it is successful in solving problems temporarily but does not guarantee prevention of fault recurrence. Equipment and process failures waste money on unreliability problems. The question that arises is. ‘How reliable and safe is the plant during its operating life?’ This question can be answered, in part, by the use of quantitative reliability evaluation. The growing need to achieve high availability for large integrated chemical process systems demands higher levels of reliability at the operational stage. Reliability is the probability of equipment or processes to function without failure when operated correctly for a given interval of time under stated conditions. This research dissertation is aimed at developing equipment optimisation program for the chemical process plant by introducing a logical approach to managing the maintenance of plant equipment. Some relevant reliability theory is discussed and applied to the Short – Path Distillation (SPD) plant of SASOL WAX. An analysis of the failure modes and criticality helps to identify plant equipment that needs special focus during inspection.
210

Otimização de políticas de manutenção em redes de distribuição de energia elétrica por estratégias híbridas baseadas em programação dinâmica / Maintenance policies optimization on electric power distribution networks by hybrid strategies based on dynamic programming

Bacalhau, Eduardo Tadeu, 1982- 27 August 2018 (has links)
Orientadores: Christiano Lyra Filho, Fábio Luiz Usberti / Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Made available in DSpace on 2018-08-27T03:25:56Z (GMT). No. of bitstreams: 1 Bacalhau_EduardoTadeu_D.pdf: 1630102 bytes, checksum: 258db7b09d8ce71b7d9a577e1993a3c2 (MD5) Previous issue date: 2015 / Resumo: Este trabalho explora alternativas para a determinação das melhores políticas de planejamento das ações de manutenção preventiva em redes de distribuição de energia elétrica. O problema é uma extensão de abordagens da área de manutenção centrada em confiabilidade (MCC), que vem sendo objeto de pesquisas ao longo das últimas décadas. Por se tratar de um problema de otimização combinatória de difícil solução, são poucos os artigos publicados que envolvem sistemas de escala real, e a maioria dentre esses utiliza meta-heurísticas como estratégia de solução. A abordagem desenvolvida neste trabalho é baseada na técnica de otimização denominada programação dinâmica. Duas estratégias para a redução do espaço de busca são adotadas: uma delas procura identificar e eliminar soluções dominadas; a segunda estratégia envolve a aplicação do processo de otimização da programação dinâmica em torno de uma vizinhança de uma solução promissora, movendo iterativamente em um espaço de soluções --- uma abordagem inspirada na programação dinâmica diferencial discreta. A combinação dessas duas estratégias é denominada Programação Dinâmica com Reduções de Espaço de Estados (PDREE). O trabalho investiga também a construção de estratégias híbridas. Uma das alternativas utiliza um algoritmo genético híbrido para a construção de planos de manutenção iniciais de boa qualidade, posteriormente otimizados pela PDREE. A segunda estratégia híbrida utiliza a PDREE para a construção de boas populações iniciais de soluções, posteriormente otimizada pelo algoritmo genético híbrido. As abordagens desenvolvidas são aplicadas a problemas de escala real e comparadas à abordagem por algoritmo genético híbrido. Os resultados mostram que as ideias desenvolvidas na tese estendem o estado-da-arte sobre a otimização de políticas de manutenção em redes de distribuição de grande porte / Abstract: This work explores alternatives to determine the best planning policies for preventive maintenance on electric power distribution systems. The problem is an extension of approaches from the reliability-centered maintenance area that has been studied over the last decades. Since this problem is a hard combinatorial optimization problem, there are few works that address real-life systems, and most of these works use metaheuristics as the solution strategy. The approaches proposed in this work are based on the optimization technique named dynamic programming. Two strategies are developed to reduce the search space of dynamic programming: the first strategy seeks to identify and eliminate dominated solutions; the second strategy confines the dynamic programming optimization procedures to the neighborhood of good solutions that move iteratively in the solution space---an approach inspired by the discrete differential dynamic programming method. The combination of both strategies is denominated Dynamic Programming with State Space Reductions (DPSSR). The work also investigates the development of hybrid strategies. One of the alternatives uses a hybrid genetic algorithm to obtain a promising initial maintenance strategy, further optimized by the DPSSR.
The second hybrid strategy uses the DPSSR to construct a good initial population, further optimized by a hybrid genetic algorithm. All the approaches are applied to real-life problems and compared to a pure hybrid genetic algorithm approach. The results show that the ideas developed in the thesis improve the state-of-the-art in obtaining the best maintenance policies for large distribution networks / Doutorado / Automação / Doutor em Engenharia Elétrica
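The DPSSR algorithm itself is not reproduced in this listing; as a minimal sketch of the underlying dynamic-programming idea (a single piece of equipment, discrete periods, maintain-or-not decisions, with invented failure probabilities and costs), a backward recursion over age states could look like this.

```python
from functools import lru_cache

PERIODS = 10
FAIL_COST, PM_COST = 100.0, 15.0
MAX_AGE = 6

def failure_prob(age):
    """Hypothetical failure probability that increases with equipment age."""
    return min(0.05 * (age + 1), 0.6)

@lru_cache(maxsize=None)
def min_expected_cost(period, age):
    """Minimum expected cost from `period` onward, given the current equipment age."""
    if period == PERIODS:
        return 0.0
    age = min(age, MAX_AGE)
    p = failure_prob(age)
    # Option 1: do nothing; pay the expected failure cost (failure renews the equipment)
    skip = p * (FAIL_COST + min_expected_cost(period + 1, 0)) \
         + (1 - p) * min_expected_cost(period + 1, age + 1)
    # Option 2: preventive maintenance now; the equipment is renewed
    pm = PM_COST + min_expected_cost(period + 1, 0)
    return min(skip, pm)

print(f"Minimum expected cost over {PERIODS} periods: {min_expected_cost(0, 0):.1f}")

# Decision at a particular state, e.g. period 4 with equipment age 3
t, age = 4, 3
p = failure_prob(age)
skip = p * (FAIL_COST + min_expected_cost(t + 1, 0)) + (1 - p) * min_expected_cost(t + 1, age + 1)
pm = PM_COST + min_expected_cost(t + 1, 0)
print("Best action at (period 4, age 3):", "preventive maintenance" if pm < skip else "do nothing")
```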
