  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

Single phase to ground fault detection and location in compensated network

Loos, Matthieu 05 November 2013 (has links)
This work is set in the context of distribution power system protection and aims to improve the detection and location of earth faults. The protection problem is vast, and many ideas emerge every year to enhance the reliability of the grid. The author has focused on compensated and isolated network protection in the specific case of the single-phase earth fault. This PhD thesis is divided into two main parts that may be considered independent: the first studies the detection of single-phase earth faults and the second analyzes the location of such faults.

Pragmatism was required during these three years because a product development was necessary, especially for the fault detection problem. The first part of the thesis took 18 months of research and development to obtain a prototype transient protection able to detect single-phase earth faults in compensated and isolated networks. The sensitivity of the algorithm with respect to fault impedance was emphasized, allowing it to detect earth faults up to 5 kOhm depending on the network characteristics. The fault location problem has been treated more theoretically, although the accuracy of the algorithm and its robustness against wrong fault location indications were strongly considered.

Compensated networks, and under some conditions isolated networks, are distribution networks from 12 kV up to 110 kV used mostly in Eastern and Northern Europe but also in China. Other areas also operate such networks, though alongside other systems and not across their whole territory. These networks have the particularity of producing very small fault currents in case of a single-phase earth fault. A low fault current means the difference between a faulty and a sound feeder is not significant, so classic overcurrent protection is useless for protecting the network, forcing the development of more complex algorithms.
One possibility to overcome the problem of the small fault current is to develop a transient protection: the transient occurring at the beginning of the fault carries strong information for distinguishing a faulty from a sound feeder. In this work the author has chosen to use not only the transient but also the steady state to obtain the best sensitivity.

Fault location was then investigated, but the limited information coming from the faulty feeder is not sufficient to obtain a sufficiently precise position of the fault. Therefore, an active system has been suggested for implementation in the grid to increase the fault current and provide enough power for a precise location. Different existing algorithms based on the steady state at the nominal frequency are compared using a tool developed during this work, and recommendations are made depending on the topology, the network parameters, the measurement precision, etc. Due to the complexity of the problem, a simulator has been coded in Matlab. The user of a prospective fault location system can use this tool to understand the fault location precision obtainable from different algorithms on his network. / Doctorat en Sciences de l'ingénieur / info:eu-repo/semantics/nonPublished
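The feeder-selection idea in the abstract above can be illustrated with a minimal sketch: during a single-phase earth fault, the faulty feeder's zero-sequence current carries the summed capacitive contribution of the sound feeders, so a simple transient-energy comparison often singles it out. The waveforms, amplitudes, and feeder names below are synthetic illustrations, not the thesis's combined transient/steady-state algorithm.

```python
import numpy as np

# Synthetic damped zero-sequence transient at fault inception.
t = np.linspace(0, 0.02, 400)
transient = np.exp(-t / 0.002) * np.sin(2 * np.pi * 800 * t)

# Each feeder measures its own zero-sequence current i0.  The faulty
# feeder carries the summed capacitive current of the sound feeders,
# so its transient is the largest (and of opposite polarity).
feeders = {
    "feeder_1": 0.1 * transient,    # sound: small capacitive share
    "feeder_2": 0.15 * transient,   # sound
    "feeder_3": -1.0 * transient,   # faulty: large, opposite polarity
}

# Compare transient energy over the measurement window.
energy = {name: float(np.sum(i0 ** 2)) for name, i0 in feeders.items()}
faulty = max(energy, key=energy.get)
```

In a real relay, this comparison would run on filtered zero-sequence quantities and be combined with a steady-state criterion, as the thesis does for sensitivity up to high fault impedances.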
112

Desenvolvimento de um software para detecção de erros grosseiros e reconciliação de dados estática e dinâmica de processos químicos e petroquímicos / Development of software for static and dynamic gross error detection and data reconciliation of chemical and petrochemical processes

Barbosa, Agremis Guinho 22 August 2018 (has links)
Orientador: Rubens Maciel Filho / Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Química / Previous issue date: 2008 / Abstract: The main goal of this work was the development of software for data reconciliation, gross error detection and identification, parameter estimation, and information quality monitoring in industrial units under steady-state and dynamic operation. The development of this software focused on meeting the criteria of modularity, extensibility, and ease of use. Data reconciliation is a procedure for treating measurements in process plants, made necessary by the inexorable presence of small-magnitude random errors in the values obtained from measurement devices. In addition to random errors, data are sometimes associated with errors of larger magnitude that constitute a trend, or bias. Errors of this nature can be qualified and quantified through gross error detection techniques. It is important for optimization routines that data be reliable and as error-free as possible. The task of removing these errors using previously known models (data reconciliation) is not trivial; it has been studied in the field of chemical engineering for the last 40 years and shows an increasing number of published works. However, part of this published work is devoted to applying reconciliation to isolated equipment, such as tanks, reactors, and distillation columns, or to small sets of such equipment, and few studies use real operation data. This can be attributed to the computational effort associated with the large number of variables.

This work proposes to take advantage of increasing computational capacity and modern development tools to provide an application in which describing higher-dimensional systems is made easier, in order to produce data estimates of superior quality, in a suitable time frame, for control and optimization systems. It is worth emphasizing that data reconciliation and gross error detection are fundamental to the reliability of results from supervisory control and optimization routines, and can also be used for process state reconstruction. / Doutorado / Desenvolvimento de Processos Químicos / Doutor em Engenharia Química
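As a minimal illustration of the reconciliation step the abstract describes, the classic linear steady-state case has a closed-form weighted-least-squares solution: measurements are adjusted as little as possible (in the covariance metric) so that the balance constraints hold exactly. This sketch assumes a linear model and known measurement variances; it is not code from the thesis's software.

```python
import numpy as np

def reconcile(x_m, A, sigma):
    """Adjust measurements x_m so that A @ x = 0 holds exactly,
    minimising the covariance-weighted adjustment (WLS closed form).
    x_m: measured values; A: linear constraint matrix;
    sigma: measurement standard deviations."""
    S = np.diag(sigma ** 2)                    # measurement covariance
    K = S @ A.T @ np.linalg.inv(A @ S @ A.T)   # gain matrix
    return x_m - K @ (A @ x_m)                 # reconciled estimates

# Example: a splitter where stream 1 = stream 2 + stream 3.
A = np.array([[1.0, -1.0, -1.0]])
x_m = np.array([100.0, 60.0, 45.0])            # raw imbalance of -5
sigma = np.array([1.0, 1.0, 1.0])
x_hat = reconcile(x_m, A, sigma)
# x_hat satisfies the mass balance exactly.
```

Gross error detection then typically tests the constraint residuals `A @ x_m` (or the measurement adjustments) against their expected statistical distribution to flag biased instruments.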
113

On improving the performance of parallel fault simulation for synchronous sequential circuits

Tiew, Chin-Yaw 04 March 2009 (has links)
In this thesis, several heuristics that aim to improve the performance of parallel fault simulation for synchronous sequential circuits have been investigated. Three heuristics were incorporated into a well-known parallel fault simulator called PROOFS, and their efficiency was measured in terms of the number of faults simulated in parallel, the number of gate evaluations, and the CPU time. The three heuristics are critical path tracing, dynamic area reduction, and a new heuristic called two-level simulation. Critical path tracing and dynamic area reduction, previously proposed for combinational circuits, are extended to synchronous sequential circuits in this thesis; the two-level simulation investigated here is designed for sequential circuits. Experimental results show that critical path tracing is the most effective of the three heuristics. In addition to the three heuristics, new fault injection and fault ordering methods were suggested to improve the speed of an efficient fault simulator called HOPE. HOPE, which was developed at Virginia Tech, is an improved version of PROOFS. HOPE_NEW, which incorporates the two new methods, performs better than HOPE in both the number of gate evaluations and the CPU time: it is about 1.13 times faster than HOPE on the ISCAS89 benchmark circuits, and for the largest circuit the speedup is about 40 percent. / Master of Science
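Parallel fault simulators in the PROOFS/HOPE family pack one faulty circuit into each bit of a machine word, so a single pass of bitwise logic evaluates the fault-free circuit and many faulty ones at once. The toy below illustrates only that bit-packing idea on a two-gate combinational example; the netlist, masks, and fault list are invented, and none of the heuristics above are implemented.

```python
# Bit-parallel fault simulation: bit 0 of every signal word is the
# fault-free machine, bit i the circuit with fault i injected.
WORD = 0b1111          # 4 parallel machines: fault-free + 3 faults

def inject(value, stuck_at_1_mask, stuck_at_0_mask):
    # Force selected bit positions (i.e. selected faulty machines)
    # to 1 (stuck-at-1) or 0 (stuck-at-0) on this net.
    return (value | stuck_at_1_mask) & ~stuck_at_0_mask

def simulate(a, b):
    # Toy circuit: out = (a AND b) OR (NOT a), with faults on net w:
    #   machine 1: w stuck-at-1, machine 2: w stuck-at-0.
    w = a & b
    w = inject(w, stuck_at_1_mask=0b0010, stuck_at_0_mask=0b0100)
    return (w | (~a & WORD)) & WORD

# Apply pattern a=1, b=0, replicated across all bit positions.
a = WORD   # a = 1 in every machine
b = 0      # b = 0 in every machine
out = simulate(a, b)
good = out & 1                          # fault-free response (bit 0)
detected = [i for i in range(1, 4)
            if ((out >> i) & 1) != good]  # machines differing from good
```

With 32- or 64-bit words this evaluates 31 or 63 faults per gate pass, which is where the gate-evaluation counts measured in the thesis come from.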
114

Using unsupervised machine learning for fault identification in virtual machines

Schneider, C. January 2015 (has links)
Self-healing systems promise operating cost reductions in large-scale computing environments through the automated detection of, and recovery from, faults. However, at present there appears to be little empirical evidence comparing the different approaches, or demonstrating that such implementations reduce costs. This thesis compares previous and current self-healing approaches before demonstrating a new, unsupervised approach that combines artificial neural networks with performance tests to perform fault identification in an automated fashion, i.e. the correct and accurate determination of which computer features are associated with a given performance test failure. Several key contributions are made in the course of this research, including an analysis of the different types of self-healing approaches based on their contextual use, a baseline for future comparisons between self-healing frameworks that use artificial neural networks, and a successful, automated fault identification in cloud infrastructure, more specifically in virtual machines. This approach uses three established machine learning techniques: Naïve Bayes, Baum-Welch, and Contrastive Divergence Learning. The latter minimises human interaction beyond previous implementations by producing a list of potential root causes (i.e. fault hypotheses) in decreasing order of likelihood, which brings the state of the art one step closer to fully self-healing systems. This thesis also examines the impact that different types of faults have on their identification. This helps in understanding the validity of the data presented and how the field is progressing, while examining how identification differs between emulated thread crashes and errant user changes, a contribution believed to be unique to this research.
Lastly, future research avenues and conclusions in automated fault identification are described along with lessons learned throughout this endeavor. This includes the progression of artificial neural networks, how learning algorithms are being developed and understood, and possibilities for automatically generating feature locality data.
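The ranked list of fault hypotheses described above can be caricatured with a far simpler scorer: compare each monitored metric against its healthy baseline and order metrics by deviation. The metric names and numbers below are invented, and a plain z-score stands in for the thesis's neural-network techniques.

```python
# Rank fault hypotheses by how anomalous each monitored VM metric is
# relative to its healthy baseline (mean, std).  All values invented.
baseline = {
    "cpu_user":    (20.0, 5.0),
    "mem_free_mb": (900.0, 50.0),
    "disk_iops":   (120.0, 30.0),
}
observed = {"cpu_user": 22.0, "mem_free_mb": 400.0, "disk_iops": 130.0}

def rank_hypotheses(baseline, observed):
    # z-score per metric; a larger deviation suggests a likelier
    # root cause, giving the decreasing-likelihood list an operator
    # would inspect first.
    scores = {m: abs(observed[m] - mu) / sd
              for m, (mu, sd) in baseline.items()}
    return sorted(scores, key=scores.get, reverse=True)

hypotheses = rank_hypotheses(baseline, observed)
# mem_free_mb deviates by 10 standard deviations and tops the list.
```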
115

Uma metodologia de desenvolvimento de diagnóstico guiado para veículos automotivos

Mori, Fernando Maruyama 18 June 2014 (has links)
External guided diagnostic tools are increasingly important to the aftermarket business of the automotive industry, mainly due to the extensive use of embedded systems in vehicles, which makes them more complex and difficult to diagnose. Currently, the techniques used to develop a guided diagnostic tool are strongly dependent on the designer's experience and are usually centered on the vehicle's parts and subsystems, allowing little flexibility and information reuse. This work proposes a new methodology for the development of a guided diagnostic tool, applied to a case study from the automotive industry.

The methodology is based on a three-tier software architecture composed of the vehicle's parts and components, diagnostic information and strategy, and a presentation layer. It allows great flexibility in designing a guided diagnostic tool for different vehicle models, parts OEMs, and automotive systems. The proposed methodology has been applied to a diagnostic case study at Volvo Trucks, showing the process of adapting the three-tier software architecture to the proposed methodology and its impact on the development cost of the diagnostic tool.
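The three-tier split can be sketched as three loosely coupled classes, one per layer; all class names and the toy diagnosis logic below are illustrative assumptions, not the thesis's implementation.

```python
class PartsLayer:
    """Tier 1: vehicle parts/components and their measurable symptoms."""
    def read_symptom(self, part: str) -> bool:
        return part == "fuel_pump"        # stub standing in for a sensor

class DiagnosticLayer:
    """Tier 2: diagnostic knowledge and strategy, independent of UI."""
    def __init__(self, parts: PartsLayer):
        self.parts = parts

    def diagnose(self, complaint: str) -> str:
        # Invented knowledge base mapping complaints to candidate parts.
        candidates = {"no_start": ["battery", "fuel_pump"]}
        for part in candidates.get(complaint, []):
            if self.parts.read_symptom(part):
                return part
        return "unknown"

class PresentationLayer:
    """Tier 3: guided interaction with the technician."""
    def __init__(self, engine: DiagnosticLayer):
        self.engine = engine

    def guide(self, complaint: str) -> str:
        return f"Check: {self.engine.diagnose(complaint)}"

ui = PresentationLayer(DiagnosticLayer(PartsLayer()))
result = ui.guide("no_start")
```

Because each tier depends only on the one below, the same diagnostic layer can be reused across vehicle models and the presentation layer swapped per market, which is the flexibility argument made in the abstract.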
117

POLYNOMIAL CURVE FITTING INDICES FOR DYNAMIC EVENT DETECTION IN WIDE-AREA MEASUREMENT SYSTEMS

Longbottom, Daniel W. 14 August 2013 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In a wide-area power system, detecting dynamic events is critical to maintaining system stability. Large events, such as the loss of a generator or a fault on a transmission line, can compromise the stability of the system by causing the generator rotor angles to diverge and lose synchronism with the rest of the system. If these events can be detected as they happen, controls can be applied to the system to prevent it from losing synchronous stability. To detect these events, pattern recognition tools can be applied to system measurements. In this thesis, decision trees (DTs) were used as the pattern recognition tool for event detection. A single DT produced rules distinguishing between the event and no-event cases by learning on a training set of simulations of a power system model; the rules were then applied to test cases to determine the accuracy of the event detection. To use a DT to detect events, the variables used to produce the rules must be chosen. These variables can be direct system measurements, such as the phase angles of bus voltages, or indices created from a combination of system measurements. One index used in this thesis was the integral square bus angle (ISBA) index, which provides a measure of the overall activity of the bus angles in the system; others were the variance and the rate of change of the ISBA. Fitting a polynomial curve to a sliding window of these indices and taking the difference between the polynomial and the actual index was found to produce a new index that is non-zero during an event and zero at all other times for most simulations. After the error between the curve and the ISBA indices was chosen as the detection index, a set of power system cases was created as the training data set for the DT. Each of these cases contained one event, either a small or a large power injection at a load bus in the system model.
The DT was then trained to detect the large power injection but not the small one, so that the resulting rules would detect large events that could cause the system to lose synchronous stability while ignoring small events that have no effect on the overall system. This DT was then combined with a second DT that predicted instability, with the second DT deciding whether or not to apply controls only for a short time after the end of each event, when controls are most effective in stabilizing the system.
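The residual index described above is straightforward to sketch: fit a low-order polynomial to a sliding window of the signal, extrapolate one step ahead, and take the difference from the actual sample. The window length, polynomial order, and synthetic signal below are illustrative choices, not the thesis's tuned parameters.

```python
import numpy as np

def residual_index(signal, window=20, order=2):
    """One-step-ahead polynomial prediction residual: near zero in
    quasi-steady conditions, spiking when a dynamic event begins."""
    out = np.zeros(len(signal))
    t = np.arange(window)
    for k in range(window, len(signal)):
        coeffs = np.polyfit(t, signal[k - window:k], order)
        predicted = np.polyval(coeffs, window)   # extrapolate one step
        out[k] = signal[k] - predicted
    return out

# Synthetic ISBA-like index: smooth ramp, then a step change at
# sample 60 standing in for a large disturbance.
sig = np.concatenate([np.linspace(0, 1, 60), np.linspace(3, 3.5, 40)])
res = residual_index(sig)
# The residual stays tiny on the smooth segment and jumps at the event,
# giving the DT a feature that separates event from no-event windows.
```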
118

Modelagem e avaliação da extensão da vida útil de plantas industriais / Modelling and evaluation of industrial plants useful life extension

José Alberto Avelino da Silva 30 May 2008 (has links)
During the useful life of an industrial plant, failure occurrence follows an exponential distribution; however, the aging of the plant increases the number of failures. The failure probability is an indicator of when a maintenance stop should be made. A statistical method, based on non-Markovian theory, is developed for the assessment of the failure probability as a function of operating time. Two maintenance conditions are addressed: in the first, the old parts are kept after the repair, a condition called "as good as old"; in the second, the old parts are replaced by brand-new ones, a condition called "as good as new". The non-Markovian system with a variable source term is modeled by a system of hyperbolic partial differential equations, which is solved using the Lax-Wendroff and fractional-step numerical schemes. Due to the smooth behavior of the solution, the two methods reach the same result with an error smaller than 10⁻³. In the case studied, the main conclusion is that the collapse of the system depends essentially on the initial state of the Markov chain, the other states having little influence on the overall failure probability of the system.
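As a minimal illustration of one of the numerical schemes named above, here is a Lax-Wendroff step for the linear advection equation u_t + a u_x = 0, the simplest hyperbolic PDE; the thesis's source term and non-Markovian probability model are omitted, and the grid and pulse are invented.

```python
import numpy as np

def lax_wendroff(u, a, dx, dt, steps):
    """Second-order Lax-Wendroff update for u_t + a u_x = 0 on a
    periodic grid; stable for Courant number |a dt / dx| <= 1."""
    c = a * dt / dx
    for _ in range(steps):
        up = np.roll(u, -1)   # u[i+1] (periodic boundary)
        um = np.roll(u, 1)    # u[i-1]
        u = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2 * u + um)
    return u

x = np.linspace(0, 1, 100, endpoint=False)
u0 = np.exp(-200 * (x - 0.3) ** 2)        # smooth pulse at x = 0.3
u = lax_wendroff(u0, a=1.0, dx=0.01, dt=0.005, steps=100)
# After t = 0.5 the pulse has advected half the domain, to x ~ 0.8,
# with little distortion because the solution is smooth -- consistent
# with the abstract's remark that both schemes agree closely.
```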
119

Classificação automática de falhas em arquitetura orientada a serviços / Automatic fault classification in a service-oriented architecture

Felix, Kleber Gonçalves 29 August 2017 (has links)
A distributed architecture is composed of many systems that exchange messages with each other. Faults in the integration of these systems may occur, requiring a detailed investigation by support professionals to identify the root cause of the problem. The manual process of identifying faults is difficult and time-consuming, and significant gains can be achieved by automating the fault classification process. This work presents a method, named SOAFaultControl, to support the fault diagnosis process by automatically classifying the faults generated in a service-oriented architecture (SOA). The method benefits from distributed architectures that adopt SOA and an Enterprise Service Bus (ESB).

Using machine learning techniques, it was possible to build a model that classifies fault messages captured in a SOA environment into pre-established categories. To achieve this, the following machine learning algorithms were tested and evaluated: Support Vector Machine, Naive Bayes, and AdaBoost. The Support Vector Machine achieved the best performance on the metrics accuracy, precision, recall, and F1.
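Of the three algorithms evaluated, a multinomial Naive Bayes over message tokens is the easiest to sketch from scratch; the thesis found the SVM performed best, and the fault messages and categories below are invented for illustration.

```python
from collections import Counter, defaultdict
import math

# Tiny training set of (fault message, category) pairs -- invented.
train = [
    ("connection timed out waiting for endpoint", "timeout"),
    ("read timed out on http endpoint", "timeout"),
    ("schema validation failed for element order", "validation"),
    ("invalid xml element in request payload", "validation"),
]

def fit(samples):
    # Count token occurrences per class and class frequencies.
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    vocab = set()
    for text, cls in samples:
        tokens = text.split()
        word_counts[cls].update(tokens)
        class_counts[cls] += 1
        vocab.update(tokens)
    return word_counts, class_counts, vocab

def predict(text, model):
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for cls in class_counts:
        n = sum(word_counts[cls].values())
        lp = math.log(class_counts[cls] / total)   # log prior
        for tok in text.split():
            # Laplace smoothing keeps unseen tokens from zeroing out.
            lp += math.log((word_counts[cls][tok] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = cls, lp
    return best

model = fit(train)
label = predict("soap request timed out", model)
```

In an ESB setting, the classifier's input would be the intercepted fault message payload, and the predicted category would route the incident to the right support queue.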
