191

Análise das variáveis de entrada de uma rede neural usando teste de correlação e análise de correlação canônica / Analysis of input variables of an artificial neural network using bivariate correlation and canonical correlation

Valter Magalhães Costa 21 September 2011 (has links)
The monitoring of variables and the diagnosis of faults are important aspects to consider in nuclear plants and process industries, because an early fault diagnosis allows the problem to be corrected without interrupting production, protecting the operator and avoiding economic losses. The objective of this work is, within the universe of all monitored variables of a process, to build a set of variables, not necessarily minimal, to serve as the input of a neural network and thereby monitor as many variables as possible. The methodology was applied to the IEA-R1 research reactor at IPEN. The variables Reactor power, Primary circuit flow rate, Control/safety rod position and Pressure difference across the reactor core (ΔP) were grouped into a set B, under the hypothesis that almost every variable monitored in a nuclear reactor is related to one of these or results from the interaction of two or more of them. For example, power is related to the rise and fall of several temperatures as well as to the amount of radiation produced by uranium fission; the rods regulate power and consequently influence radiation levels and temperatures; and the primary circuit flow transports energy and removes heat from the core. Taking this set, the correlation between B and each of the other monitored variables was computed. This coefficient of multiple correlation, a tool provided by the theory of canonical correlations, measures how well the set B can predict each monitored variable. Whenever B could not predict a variable satisfactorily, one or more variables highly correlated with that variable were added to improve the quality of prediction. Finally, a neural network was trained with the resulting set, and the monitoring results were very satisfactory: of the 64 variables monitored by the IEA-R1 data acquisition system through sensors and actuators, 51 could be monitored using a set of only 9 input variables.
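As an illustrative sketch of the multiple-correlation screening described above (not the thesis code; the variable names, synthetic data and the 0.9 threshold are assumptions), the multiple correlation between the set B and a single monitored variable can be computed as the square root of the R² of an ordinary least-squares regression of that variable on B:

```python
import numpy as np

def multiple_correlation(X_b, y):
    """Multiple correlation R between the predictor set B (columns of X_b) and y,
    computed as sqrt(R^2) of an ordinary least-squares fit."""
    X = np.column_stack([np.ones(len(y)), X_b])        # add an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # OLS coefficients
    residual = y - X @ beta
    r2 = 1.0 - residual @ residual / ((y - y.mean()) @ (y - y.mean()))
    return np.sqrt(max(0.0, r2))

# Hypothetical usage: B stands in for {power, primary flow, rod position, delta-P}.
rng = np.random.default_rng(0)
B = rng.normal(size=(500, 4))
targets = {
    "core_temperature": B @ np.array([0.8, 0.1, 0.0, 0.2]) + 0.1 * rng.normal(size=500),
    "unrelated_signal": rng.normal(size=500),
}
for name, y in targets.items():
    R = multiple_correlation(B, y)
    verdict = "monitor with B" if R > 0.9 else "augment B with a highly correlated variable"
    print(f"{name}: R = {R:.2f} -> {verdict}")
```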
192

Utilização de redes neurais artificiais na monitoração e detecção de falhas em sensores do Reator IEA-R1 / Development of an artificial neural network for monitoring and diagnosis of sensor fault and detection in the IEA-R1 research reactor at IPEN

Elaine Inacio Bueno 20 June 2006 (has links)
Studies in the area of monitoring and fault diagnosis have been stimulated by the growing demands for quality, reliability and safety in production processes, where an interruption of production caused by an unforeseen anomaly can endanger the operator and cause economic losses by increasing the cost of repairing damaged equipment. Because of these two factors, the economic one and the operator's safety, the implementation of monitoring and fault detection systems becomes necessary. In this work, a monitoring and fault detection system based on the artificial neural network methodology was developed and applied to the IEA-R1 research reactor at IPEN. The development of this system was divided into three stages: the first dedicated to monitoring, the second to fault detection and the third to fault diagnosis. In the first stage, several artificial neural networks were trained to monitor temperature, power and dose-rate variables, using two databases: one containing data generated by a theoretical model of the reactor and another containing data from a typical week of operation. In the second stage, the networks trained for monitoring were tested against a database with faults artificially inserted into the temperature sensor signals. Since the maximum calibration error for special thermocouples is ±0.5 °C, faults of ±1 °C were inserted into the sensors that read the variables T3 and T4. In the third stage, a fuzzy system was developed to carry out the fault diagnosis, considering three possible conditions: normal operation, a fault of −1 °C and a fault of +1 °C; the developed system indicates which temperature sensor is faulty.
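A minimal sketch of the residual-based detection idea (assuming scikit-learn, synthetic data and a made-up sensor model; this is not the thesis implementation): a network trained to reproduce a temperature channel from related channels flags a fault when the prediction residual exceeds a threshold consistent with the ±0.5 °C calibration tolerance.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Hypothetical training data: the deviation of temperature T3 from its nominal value,
# predicted from reactor power, primary flow and dose rate (all synthetic, standardised).
X_train = rng.normal(size=(2000, 3))
t3_train = 2.0 * X_train[:, 0] + 0.5 * X_train[:, 1] + 0.1 * rng.normal(size=2000)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X_train, t3_train)

def detect_fault(x_row, t3_measured, threshold=0.5):
    """Flag a sensor fault when |measured - predicted| exceeds the tolerance (in °C)."""
    t3_predicted = model.predict(x_row.reshape(1, -1))[0]
    residual = t3_measured - t3_predicted
    return abs(residual) > threshold, residual

# Simulate a +1 °C offset fault on the T3 sensor.
x_new = rng.normal(size=3)
t3_true = 2.0 * x_new[0] + 0.5 * x_new[1]
is_faulty, r = detect_fault(x_new, t3_true + 1.0)
print(f"fault detected: {is_faulty}, residual = {r:+.2f} °C")
```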
193

Missing imputation methods explored in big data analytics

Brydon, Humphrey Charles January 2018 (has links)
Philosophiae Doctor - PhD (Statistics and Population Studies) / The aim of this study is to examine the methods and processes involved in imputing missing data and, more specifically, complete missing blocks of data. A further aim is to examine the effect that the imputed data has on the accuracy of various predictive models constructed on it, and hence to determine whether the imputation method used is suitable. Identifying the missingness mechanism present in the data should be the first step towards choosing an imputation method; the choice is easier if the mechanism can be identified as missing completely at random (MCAR), missing at random (MAR) or not missing at random (NMAR). Predictive models constructed on data sets imputed with a hot-deck method are shown to be less accurate, while data sets imputed with single or multiple Markov Chain Monte Carlo (MCMC) or Fully Conditional Specification (FCS) methods are shown to result in more accurate predictive models. The addition of an iterative bagging technique in the modelling procedure is shown to produce highly accurate prediction estimates. The bagging technique is applied to variants of the neural network, a decision tree and a multiple linear regression (MLR) modelling procedure. A stochastic gradient boosted decision tree (SGBT) is also constructed as a comparison to the bagged decision tree. Final models are constructed from 200 iterations of the various modelling procedures using a 60% sampling ratio in the bagging procedure. It is further shown that, under certain conditions, adding the bagging technique to the MLR modelling procedure can produce an MLR model that is more accurate than the other, more advanced modelling procedures. The evaluation of the predictive models constructed on imputed data is shown to vary with the type of fit statistic used. The average squared error reports little difference in accuracy levels, whereas the Mean Absolute Prediction Error (MAPE) magnifies the differences in the reported prediction errors. The Normalized Mean Bias Error (NMBE) results show that all predictive models produced over-predictions, although these varied with the data set and modelling procedure used. The Nash-Sutcliffe efficiency (NSE) was used as a comparison statistic for the accuracy of the predictive models in the context of imputed data. The NSE statistic showed that the estimates of the models constructed on data sets imputed with a multiple imputation method were highly accurate, whereas the estimates from the models constructed on the hot-deck imputed data were inaccurate, such that a mean substitution based on the fully observed data would have been a better method of imputation. The conclusion reached in this study is that the choice of imputation method, as well as that of the predictive model, depends on the data used. Four unique combinations of imputation methods and modelling procedures were identified as suitable for the data considered in this study.
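For reference, the fit statistics mentioned above can be computed as follows (an illustrative sketch using the standard definitions and one common sign convention for NMBE, where positive values indicate over-prediction; not the study's code):

```python
import numpy as np

def mape(obs, pred):
    """Mean Absolute Prediction Error (%): magnifies relative prediction errors."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 100.0 * np.mean(np.abs((obs - pred) / obs))

def nmbe(obs, pred):
    """Normalized Mean Bias Error (%): positive values indicate over-prediction."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 100.0 * np.sum(pred - obs) / (len(obs) * obs.mean())

def nse(obs, pred):
    """Nash-Sutcliffe efficiency: 1 is perfect; 0 is no better than the observed mean."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([10.0, 12.0, 9.5, 11.0])
pred = np.array([10.4, 12.5, 9.9, 11.6])
print(f"MAPE = {mape(obs, pred):.1f}%  NMBE = {nmbe(obs, pred):+.1f}%  NSE = {nse(obs, pred):.3f}")
```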
194

Metody umělé inteligence a jejich využití při predikci / Methods of artificial intelligence and their use in prediction

Šerý, Lubomír January 2012 (has links)
Title: Methods of artificial intelligence and their use in prediction Author: Lubomír Šerý Department: Department of Probability and Mathematical Statistics Supervisor: Ing. Marek Omelka, Ph.D., Department of Probability and Mathematical Statistics Abstract: In the presented thesis we study the field of artificial intelligence, in particular the part dedicated to artificial neural networks. At the beginning, the concept of artificial neural networks is introduced and compared to its biological basis. Afterwards, we also compare neural networks to some generalized linear models. One of the main problems of neural networks is their learning; therefore the biggest part of this work is dedicated to learning algorithms, especially to parameter estimation and specific computational aspects. In this part we attempt to give an overview of the internal structure of a neural network and to propose an enhancement of the learning algorithm. There are many techniques for enhancing and enriching the basic model of neural networks. Some of these improvements are, together with genetic algorithms, introduced at the end of this work. At the very end, simulations are presented in which we attempt to verify some of the introduced theoretical assumptions and conclusions. The main simulation is an application of the concept of neural...
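One way to make the comparison with generalized linear models concrete (an illustrative sketch, not the thesis code): a network consisting of a single sigmoid output unit trained with the cross-entropy loss is mathematically equivalent to logistic regression, so fitting both to the same synthetic data should recover essentially the same coefficients.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 3))
true_w, true_b = np.array([1.5, -2.0, 0.5]), 0.3
y = (rng.uniform(size=1000) < 1.0 / (1.0 + np.exp(-(X @ true_w + true_b)))).astype(int)

# A single sigmoid "neuron" trained with the cross-entropy loss via plain gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # neuron output
    w -= 0.5 * (X.T @ (p - y)) / len(y)        # gradient step on the weights
    b -= 0.5 * np.mean(p - y)                  # gradient step on the bias

glm = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)   # near-unregularised logistic regression
print("single-neuron weights:", np.round(w, 2), round(b, 2))
print("GLM (logit) weights  :", np.round(glm.coef_[0], 2), round(float(glm.intercept_[0]), 2))
```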
195

VIRTUALIZED CLOUD PLATFORM MANAGEMENT USING A COMBINED NEURAL NETWORK AND WAVELET TRANSFORM STRATEGY

Liu, Chunyu 01 March 2018 (has links)
This study focuses on implementing a log-analysis strategy that combines a neural network algorithm with the wavelet transform. The wavelet transform extracts important hidden information and features from the original time-series log data and offers a precise framework for analysing the input information, while the neural network constitutes a powerful nonlinear function approximator that provides detection and prediction capabilities. The combination of the two techniques is based on the idea of using the wavelet transform to denoise the log data by decomposing it into a set of coefficients, and then feeding the denoised data into a neural network. The experimental results show that this strategy identifies patterns among problems in a log dataset more effectively and makes more accurate predictions, which can help platform maintainers take corresponding actions to eliminate risks before serious damage occurs.
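A minimal sketch of the wavelet-then-network pipeline described above (assuming the PyWavelets and scikit-learn libraries; the wavelet, threshold rule and window length are assumptions, not the configuration used in the study):

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
t = np.linspace(0, 8 * np.pi, 1024)
signal = np.sin(t) + 0.3 * np.sin(5 * t)
noisy = signal + 0.4 * rng.normal(size=t.size)        # stand-in for a noisy log metric

# 1) Wavelet denoising: decompose, soft-threshold the detail coefficients, reconstruct.
coeffs = pywt.wavedec(noisy, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from the finest scale
thresh = sigma * np.sqrt(2 * np.log(noisy.size))      # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: noisy.size]

# 2) Feed lagged windows of the denoised series to a neural network for one-step prediction.
window = 16
X = np.array([denoised[i : i + window] for i in range(denoised.size - window)])
y = denoised[window:]
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(X[:-100], y[:-100])
print("test MSE:", np.mean((model.predict(X[-100:]) - y[-100:]) ** 2))
```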
196

Multistructure segmentation of multimodal brain images using artificial neural networks

Kim, Eun Young 01 December 2009 (has links)
A method for simultaneously segmenting multiple anatomical brain structures from multi-modal MR images has been developed. An artificial neural network (ANN) was trained from a set of feature vectors created by a combination of high-resolution registration methods, atlas-based spatial probability distributions, and a training set of 16 expert-traced data sets. The feature vectors were adapted to increase the performance of the ANN segmentation: 1) a modified spatial location exploiting the structural symmetry of the human brain, 2) neighbors along the priors' descent for directional consistency, and 3) candidate vectors based on the priors for the segmentation of multiple structures. The trained neural network was then applied to 8 data sets, and the results were compared with expertly traced structures for validation purposes. Several reliability metrics, including relative overlap, similarity index, and intraclass correlation of the ANN-generated segmentations against the manual traces, are similar to or higher than those reported for previously developed methods. The ANN provides a level of between-subject consistency and a time efficiency, compared with manual labor, that allow it to be used for very large studies.
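As a schematic sketch of the feature-vector idea (not the thesis pipeline; the feature choices, synthetic data and network size below are assumptions): each voxel is described by its intensities, a symmetry-adjusted spatial location and atlas prior probabilities, and a neural network is trained to assign one of several structure labels.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
n_voxels, n_structures = 5000, 3

# Hypothetical per-voxel features: two modal intensities, spatial coordinates folded across
# the midline (to exploit left-right symmetry), and atlas prior probabilities per structure.
intensity = rng.normal(size=(n_voxels, 2))
location = rng.uniform(-1, 1, size=(n_voxels, 3))
location[:, 0] = np.abs(location[:, 0])                             # fold x across the midline
priors = rng.dirichlet(np.ones(n_structures + 1), size=n_voxels)    # +1 class for background
X = np.hstack([intensity, location, priors])
labels = priors.argmax(axis=1)                                      # stand-in for expert traces

clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=500, random_state=0)
clf.fit(X[:4000], labels[:4000])
print("held-out accuracy:", clf.score(X[4000:], labels[4000:]))
```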
197

Biologically Inspired Vision and Control for an Autonomous Flying Vehicle

Garratt, Matthew Adam, m.garratt@adfa.edu.au 17 February 2008 (has links)
This thesis makes a number of new contributions to control and sensing for unmanned vehicles. I begin by developing a non-linear simulation of a small unmanned helicopter and then proceed to develop new algorithms for control and sensing using the simulation. The work is field-tested in successful flight trials of biologically inspired vision and neural network control for an unstable rotorcraft. The techniques are more robust and more easily implemented on a small flying vehicle than previously attempted methods.

Experiments from biology suggest that the sensing of image motion, or optic flow, in insects provides a means of determining the range to obstacles and terrain. This biologically inspired approach is applied to the control of height in a helicopter, leading to the world's first optic-flow-based terrain-following controller for an unmanned helicopter in forward flight. Another novel optic-flow-based controller is developed for the control of velocity in hover. Using measurements of height from other sensors, optic flow provides a measure of the helicopter's lateral and longitudinal velocities relative to the ground plane. Feedback of these velocity measurements enables automated hover with a drift of only a few centimetres per second, which is sufficient to allow a helicopter to land autonomously in gusty conditions with no absolute measurement of position.

New techniques for sensor fusion using Extended Kalman Filtering are developed to estimate attitude and velocity from noisy inertial sensors and optic flow measurements. However, such control and sensor fusion techniques can be computationally intensive, rendering them difficult or impossible to implement on a small unmanned vehicle due to limited computing resources. Since neural networks can perform these functions with minimal computing hardware, a new technique of control using neural networks is presented. First, a hybrid plant model consisting of exactly known dynamics is combined with a black-box representation of the unknown dynamics. Simulated trajectories are then calculated for the plant using an optimal controller. Finally, a neural network is trained to mimic the optimal controller. Flight test results for control of the heave dynamics of a helicopter confirm the neural network controller's ability to operate in high-disturbance conditions and suggest that the neural network outperforms a PD controller. Sensor fusion and control of the lateral and longitudinal dynamics of the helicopter are also shown to be easily achieved using computationally modest neural networks.
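The terrain-following principle can be sketched as a simple proportional loop (an illustrative toy simulation under assumed dynamics and gains, not the controller flown in the thesis): for forward speed V and height h above ground, the optic flow of the terrain is approximately V/h, so regulating the measured flow to a setpoint makes the helicopter hold a height proportional to its speed.

```python
# Illustrative optic-flow terrain-following loop: the commanded climb rate is proportional
# to the error between the measured ground optic flow (~ V / height) and a setpoint.
V = 5.0               # forward speed, m/s (assumed constant)
flow_setpoint = 1.0   # rad/s -> equilibrium height above terrain = V / setpoint = 5 m
k = 4.0               # proportional gain (assumed)
dt, h = 0.02, 12.0    # time step (s) and initial altitude (m)

for step in range(500):
    terrain = 2.0 if step > 250 else 0.0      # a 2 m step in the terrain at t = 5 s
    height_agl = max(h - terrain, 0.1)        # height above ground level
    flow = V / height_agl                     # optic flow seen by a downward-looking sensor
    climb_rate = k * (flow - flow_setpoint)   # too high -> low flow -> descend, and vice versa
    h += climb_rate * dt
    if step % 125 == 0:
        print(f"t = {step*dt:4.1f} s, height above terrain = {height_agl:5.2f} m")
```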
198

Permeability estimation of fracture networks

Jafari, Alireza 06 1900 (has links)
This dissertation aims to propose a new and practical method to obtain equivalent fracture network permeability (EFNP), which represents and replaces all the existing fractures located in each grid block for the reservoir simulation of naturally fractured reservoirs. To achieve this, first the relationship between different geometrical properties of fracture networks and their EFNP was studied. A MATLAB program was written to generate many different realizations of 2-D fracture networks by changing fracture length, density and also orientation. Next, twelve different 2-D fractal-statistical properties of the generated fracture networks were measured to quantify different characteristics. In addition to the 2-D fractal-statistical properties, readily available 1-D and 3-D data were also measured for the models showing variations of fracture properties in the Z-direction. The actual EFNP of each fracture network was then measured using commercial software called FRACA. The relationship between the 1-, 2- and 3-D data and EFNP was analyzed using multivariable regression analysis and based on these analyses, correlations with different number of variables were proposed to estimate EFNP. To improve the accuracy of the predicted EFNP values, an artificial neural network with the back-propagation algorithm was also developed. Then, using the experimental design technique, the impact of each fracture network parameter including fracture length, density, orientation and conductivity on EFNP was investigated. On the basis of the results and the analyses, the conditions to obtain EFNP for practical applications based on the available data (1-D well, 2-D outcrop, and 3-D welltest) were presented. This methodology was repeated for natural fracture patterns obtained mostly from the outcrops of different geothermal reservoirs. The validity of the equations was also tested against the real welltest data obtained from the fields. Finally, the concept of the percolation theory was used to determine whether each fracture network in the domain is percolating (permeable) and to quantify the fracture connectivity, which controls the EFNP. For each randomly generated fracture network, the relationship between the combined fractal-percolation properties and the EFNP values was investigated and correlations for predicting the EFNP were proposed. As before, the results were validated with a new set of fracture networks. / Petroleum Engineering
199

Learning to segment texture in 2D vs. 3D: A comparative study

Oh, Se Jong 15 November 2004 (has links)
Texture boundary detection (or segmentation) is an important capability of the human visual system. Usually, texture segmentation is viewed as a 2D problem, as the definition of the problem itself assumes a 2D substrate. However, an interesting hypothesis emerges when we ask a question regarding the nature of textures: What are textures, and why did the ability to discriminate texture evolve or develop? A possible answer to this question is that textures naturally define physically distinct surfaces or objects; thus, we can hypothesize that 2D texture segmentation may be an outgrowth of the ability to discriminate surfaces in 3D. In this thesis, I investigated the relative difficulty of learning to segment textures in 2D vs. 3D configurations. It turns out that learning is faster and more accurate in 3D, very much in line with what was expected. Furthermore, I have shown that the learned ability to segment texture in 3D transfers well into 2D texture segmentation, but not the other way around, bolstering the initial hypothesis and providing an alternative approach to the texture segmentation problem.
200

On the evolution of autonomous decision-making and communication in collective robotics

Ampatzis, Christos 10 November 2008 (has links)
In this thesis, we use evolutionary robotics techniques to automatically design and synthesise behaviour for groups of simulated and real robots. Our contribution lies in the design of non-trivial individual and collective behaviour; decisions about solitary or social behaviour are temporal and interdependent with communicative acts. In particular, we study time-based decision-making in a social context: how the experiences of robots unfold in time and how these experiences influence their interaction with the rest of the group. We propose three experiments based on non-trivial real-world cooperative scenarios. First, we study social cooperative categorisation; signalling and communication evolve in a task where cooperation among robots is not a priori required. The communication and categorisation skills of the robots are co-evolved from scratch, and the emerging time-dependent individual and social behaviours are successfully tested on real robots. Second, we show, on real hardware, evidence of the success of evolved neuro-controllers when controlling two autonomous robots that have to grip each other (autonomously self-assemble). Our experiment constitutes the first fully evolved approach to such a task, which requires sophisticated and fine sensory-motor coordination, and it highlights the minimal conditions to achieve assembly in autonomous robots by reducing the assumptions a priori made by the experimenter to a functional minimum. Third, we present the first work in the literature to deal with the design of homogeneous control mechanisms for morphologically heterogeneous robots, that is, robots that do not share the same hardware characteristics. We show how artificial evolution designs individual behaviours and communication protocols that allow cooperation between robots of different types, by using dynamical neural networks that specialise on-line, depending on the morphology of each robot. The experiments briefly described above contribute to the advancement of the state of the art in evolving neuro-controllers for collective robotics, both from an application-oriented, engineering point of view and from a more theoretical point of view.
