191.
EEG Interictal Spike Detection Using Artificial Neural Networks. Carey, Howard J, III. 01 January 2016.
Epilepsy is a neurological disease that causes seizures and affects approximately 50 million people worldwide. Successful treatment depends on correctly identifying the origin of the seizures within the brain. To achieve this, electroencephalograms (EEGs) are used to measure a patient's brainwaves. This EEG data must be manually analyzed to identify interictal spikes emanating from the afflicted region of the brain, a process that can take a neurologist more than a week and a half per patient. This thesis presents a method to extract and process the interictal spikes of a patient and use them to reduce the amount of data a neurologist must manually analyze. The effectiveness of multiple neural network implementations is compared, and a data reduction of 3-4 orders of magnitude, or upwards of 99%, is achieved.
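The extraction-and-reduction idea can be illustrated with a deliberately simplified sketch (the threshold, window size, and synthetic signal below are assumptions for illustration only; the thesis's actual pipeline is more involved): samples whose absolute z-score exceeds a threshold are flagged as spike candidates, and only short windows around them are kept for review.

```python
import numpy as np

def spike_candidates(eeg, threshold=4.0, window=32):
    """Flag samples whose |z-score| exceeds `threshold` and return the
    sorted sample indices in windows around them, discarding the rest."""
    z = (eeg - eeg.mean()) / eeg.std()
    idx = np.flatnonzero(np.abs(z) > threshold)
    keep = set()
    for i in idx:
        # keep a +/- `window` sample neighbourhood around each candidate
        keep.update(range(max(0, i - window), min(len(eeg), i + window)))
    return sorted(keep)

# Synthetic example: Gaussian noise with one large injected deflection
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 10_000)
signal[5_000] += 25.0  # injected "spike"
kept = spike_candidates(signal)
reduction = 1 - len(kept) / len(signal)  # fraction of data discarded
```

Even this naive filter discards the overwhelming majority of the record while retaining the candidate event, which is the spirit of the data reduction described above.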
192.
Využití umělých neuronových sítí v klasifikaci land cover / Land cover classification using artificial neural networks. Oubrechtová, Veronika. January 2012.
This diploma thesis deals with the automatic classification of high-spatial-resolution satellite imagery for land cover mapping. The first half of the thesis presents theoretical background on remote sensing and classification methods, with particular attention given to artificial neural networks. In the practical part, these methods are applied to the classification of a SPOT satellite image. Keywords: remote sensing, image classification, artificial neural networks, SPOT
193.
Modelování durací pomocí neuronových sítí / Modelling Durations Using Artificial Neural Networks. Žofka, Martin. January 2014.
The thesis introduces Artificial Neural Networks (ANNs) to the field of financial durations. We begin by reviewing the findings about financial durations and the models applied to analyze them. ANNs are then surveyed, and one of the possible network architectures is selected for forecasting: a feed-forward network with one hidden layer, a sigmoid activation function, and a genetic algorithm for optimization. We use original and diurnally adjusted data for estimation; in contrast to other duration models, ANNs do not require data pre-processing, so forecasts on raw data are estimated in one step, without removing seasonalities. The estimates of the ANN are compared to those of the Autoregressive Conditional Duration (ACD) model, which serves as a benchmark for the forecasting capabilities of the ANNs. The findings confirm that ANNs can model durations with accuracy similar to the ACD model. On raw data the ANN slightly outperforms the ACD model, while the opposite is true for adjusted data, although the difference in forecasting ability is not significant.
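The architecture described above, one hidden layer with a sigmoid activation trained by a genetic algorithm, can be sketched roughly as follows. This is a minimal illustration, not the thesis's implementation: the toy series, network size, population size, and mutation scale are all arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

N_IN, N_HID = 3, 5
N_PARAMS = N_IN * N_HID + N_HID + N_HID + 1  # all weights and biases, flattened

def forward(params, X):
    """One-hidden-layer feed-forward net; `params` is a flat gene vector."""
    w1 = params[:N_IN * N_HID].reshape(N_IN, N_HID)
    b1 = params[N_IN * N_HID : N_IN * N_HID + N_HID]
    w2 = params[N_IN * N_HID + N_HID : N_PARAMS - 1].reshape(N_HID, 1)
    b2 = params[N_PARAMS - 1]
    return (sigmoid(X @ w1 + b1) @ w2).ravel() + b2

def mse(params, X, y):
    return float(np.mean((forward(params, X) - y) ** 2))

# Toy positive "duration" series: predict the next value from the previous three
series = np.abs(np.sin(np.arange(100) * 0.3)) + 0.1
X = np.stack([series[i : i + 3] for i in range(len(series) - 3)])
y = series[3:]

pop = rng.normal(0.0, 1.0, (40, N_PARAMS))
fitness = np.array([mse(p, X, y) for p in pop])
initial_best = fitness.min()

for _ in range(60):
    elite = pop[np.argsort(fitness)[:10]]  # elitism: the 10 best survive unchanged
    children = elite[rng.integers(0, 10, size=30)] + rng.normal(0.0, 0.1, (30, N_PARAMS))
    pop = np.vstack([elite, children])
    fitness = np.array([mse(p, X, y) for p in pop])

best_error = fitness.min()
```

Because the elite individuals are carried over unchanged, the best error is non-increasing across generations, which makes the genetic search a gradient-free alternative to backpropagation.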
194.
Groundwater Management Using Remotely Sensed Data in the High Plains Aquifer. Ghasemian, Davood. January 2016.
Groundwater monitoring at regional scales using conventional methods is challenging, since it requires a dense network of monitoring wells and regular measurements. Satellite measurement of time-variable gravity from the Gravity Recovery and Climate Experiment (GRACE) mission, operating since 2002, has provided an exceptional opportunity to observe variations in Terrestrial Water Storage (TWS) from space. This study is divided into three parts. First, satellite and hydrological-model data are used to validate the TWS measurements derived from GRACE over the High Plains Aquifer (HPA). TWS derived from GRACE was compared to TWS derived from a water budget whose inputs were determined from independent datasets; the results were similar in both magnitude and timing, with a correlation coefficient of 0.55. Seasonal groundwater storage changes were also estimated using GRACE and auxiliary data for the period 2004 to 2009, and the results were compared to local in situ measurements to test the capability of GRACE to detect groundwater changes in this region. The comparison showed good agreement in both magnitude and seasonality, with a correlation coefficient of 0.71. This finding demonstrates the value of GRACE satellite data for detecting groundwater level anomalies and the benefits of using it in regional hydrological modelling. In the second part of the study, the feasibility of using GRACE TWS to predict groundwater level changes is investigated at different locations across the High Plains Aquifer, with Artificial Neural Networks (ANNs) used to predict monthly groundwater level changes.
The input data employed in the ANNs include monthly gridded GRACE TWS based on Release-05 of GRACE Level-3; precipitation and minimum and maximum temperature estimated from the Parameter-elevation Regressions on Independent Slopes Model (PRISM); and soil moisture estimates derived from the Noah Land Surface Model, all for the period January 2004 to December 2009. Values for these datasets are extracted at the locations of 21 selected wells over the study period. The input data are divided into three parts: 60% for training, 20% for validation, and 20% for testing. The output of the developed ANNs is the groundwater level change, which is compared to well data from the US Geological Survey's National Water Information System. Statistical downscaling of GRACE data led to a significant improvement in predicting groundwater level changes, and the trained ensemble multi-layer perceptron shows "good" to "very good" performance based on the obtained Nash-Sutcliffe Efficiency, demonstrating the suitability of these data for downscaling. In the third part of the study, soil moisture from four different land surface models (Noah, VIC, MOSAIC, and CLM), accessible through the NASA Global Land Data Assimilation System (GLDAS), is included in developing the ANNs, and the results are compared to quantify the effect of soil moisture on the downscaling of GRACE. The relative importance of each predictor was estimated using the connection-weight technique, and GRACE TWS was found to be a significant contributor to the performance of the ANN ensembles. Based on the Root Mean Squared Error (RMSE) and the correlation coefficients of the models, using soil moisture from the Noah and CLM Land Surface Models in the downscaling process delivers simulated values that correlate more strongly with the observed values.
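The Nash-Sutcliffe Efficiency used above to grade model performance has a simple closed form, NSE = 1 - Σ(obs - sim)² / Σ(obs - mean(obs))², where 1.0 is a perfect fit and 0.0 means the model is no better than predicting the observed mean. A minimal implementation (the example values are invented for illustration, and the qualitative labels such as "good" vary by author):

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical groundwater-level changes (metres) vs. model output
obs = [0.12, -0.05, 0.30, 0.07, -0.21]
sim = [0.10, -0.02, 0.28, 0.11, -0.18]
score = nash_sutcliffe(obs, sim)  # close to 1.0 for a close fit
```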
195.
Analyse des données en vue du diagnostic des moteurs Diesel de grande puissance / Data analysis for fault diagnosis on high power rated Diesel engines. Khelil, Yassine. 04 October 2013.
This thesis was carried out within an industrial project (BMCI) whose objective is to increase the availability of equipment on board ships. In this work, a data-based method for fault detection is combined with a knowledge-based method for fault isolation. The presented approach is generic: it can be applied to all Diesel engine subsystems and to different kinds of Diesel engines, and it can be extended to other equipment. Moreover, the approach is tolerant of differences in the available instrumentation. It was tested on the detection and isolation of the most frequent and most hazardous faults to which Diesel engines are subject. The diagnosis covers the entire Diesel engine, including all subsystems and the interactions between them. The approach was tested on an engine test bench and on the Diesel engines of the DCNS military vessel "Adroit". Most of the faults introduced on the test bench, and the faults that appeared during operation on some of the Adroit's engines, were successfully detected and isolated. In addition, to deal with the uncertainty and fuzziness of the causal relationships given by maintenance experts, an analytical fault-simulation model was developed to validate the cause-effect relationships used in the isolation part of the diagnosis approach.
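The two-stage idea, data-based detection followed by knowledge-based isolation, can be sketched as follows. The sensor names, nominal values, thresholds, and the cause-effect table are all invented for illustration; the thesis derives them from real instrumentation and expert knowledge.

```python
# Stage 1: data-based detection -- flag sensors whose residual
# (measurement minus nominal value) exceeds its threshold.
NOMINAL = {"coolant_temp": 85.0, "oil_pressure": 5.0, "exhaust_temp": 420.0}
THRESHOLD = {"coolant_temp": 10.0, "oil_pressure": 1.0, "exhaust_temp": 50.0}

# Stage 2: knowledge-based isolation -- an expert cause-effect table
# mapping sets of symptomatic sensors to candidate faults.
FAULT_TABLE = {
    frozenset({"coolant_temp"}): "cooling circuit fault",
    frozenset({"oil_pressure"}): "lubrication circuit fault",
    frozenset({"coolant_temp", "exhaust_temp"}): "combustion/cooling interaction fault",
}

def detect(measurements):
    """Return the set of sensors whose residual exceeds its threshold."""
    return {s for s, v in measurements.items()
            if abs(v - NOMINAL[s]) > THRESHOLD[s]}

def isolate(symptoms):
    """Look up the symptom set in the expert cause-effect table."""
    return FAULT_TABLE.get(frozenset(symptoms),
                           "unknown fault" if symptoms else "no fault")

reading = {"coolant_temp": 99.0, "oil_pressure": 5.2, "exhaust_temp": 430.0}
diagnosis = isolate(detect(reading))  # only the coolant residual (14) exceeds its threshold (10)
```

Separating the two stages is what makes the scheme tolerant of instrumentation changes: removing a sensor alters the detectable symptom sets but leaves the expert table intact.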
196.
[en] ARTIFICIAL NEURAL NETWORK MODELING FOR QUALITY INFERENCE OF A POLYMERIZATION PROCESS / [pt] MODELO DE REDES NEURAIS ARTIFICIAIS PARA INFERÊNCIA DA QUALIDADE DE UM PROCESSO POLIMÉRICO. Fleck, Julia Lima. 26 January 2009.
This work comprises the development of a neural network-based model for quality inference of low-density polyethylene (LDPE). Plant data corresponding to the process variables of a petrochemical company's LDPE reactor were used for model development. The data were preprocessed in the following manner: first, the most relevant process variables were selected, then the data were cleaned and normalized and the training patterns prepared. The neural network-based model was able to accurately predict the value of the polymer melt index as a function of the process variables. The model's performance was compared with that of two existing mechanistic models developed from first principles, using the models' mean absolute percentage error with respect to experimental values of the melt index as the performance measure. The results obtained confirm the neural network model's ability to infer values of quality-related measurements of the LDPE reactor.
197.
Weight parameterizations in deep neural networks / Paramétrisation des poids des réseaux de neurones profonds. Zagoruyko, Sergey. 07 September 2018.
Multilayer neural networks were first proposed more than three decades ago, and various architectures and parameterizations have been explored since. Recently, graphics processing units have enabled very efficient neural network training, allowing much larger networks to be trained on larger datasets and dramatically improving performance on various supervised learning tasks. However, generalization is still far from human level, and it is difficult to understand what the decisions made are based on. To improve generalization and understanding, we revisit the problem of weight parameterization in deep neural networks. We identify what are, to our mind, the most important problems in modern architectures: network depth, parameter efficiency, and learning multiple tasks at the same time, and we try to address them in this thesis. We start with one of the core problems of computer vision, patch matching, and propose to solve it with convolutional neural networks of various architectures instead of hand-crafted descriptors. We then address the task of object detection, where a network should simultaneously learn to predict both the class of an object and its location. In both tasks we find that the number of parameters in the network is the major factor determining its performance, and we explore this phenomenon in residual networks. Our findings show that their original motivation, training deeper networks for better representations, does not fully hold: wider networks with fewer layers can be as effective as deeper ones with the same number of parameters. Overall, we present an extensive study of architectures and weight parameterizations, and of ways of transferring knowledge between them.
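The wide-versus-deep comparison comes down to parameter counting. The sketch below compares a deeper, narrower stack of 3x3 convolutions with a shallower, wider one of nearly the same parameter budget; the layer counts and channel widths are illustrative, not the thesis's actual configurations.

```python
def conv_params(c_in, c_out, k=3):
    """Parameters of a k x k convolution: weights plus one bias per output channel."""
    return c_in * c_out * k * k + c_out

def stack_params(widths):
    """Total parameters of a chain of 3x3 convolutions with the given channel widths."""
    return sum(conv_params(a, b) for a, b in zip(widths, widths[1:]))

deep_narrow = stack_params([16] * 9)   # 8 conv layers, 16 channels each
shallow_wide = stack_params([32] * 3)  # 2 conv layers, 32 channels each
# The two stacks spend an almost identical parameter budget (18,560 vs 18,496),
# so depth and width can be traded against each other at fixed model size.
```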
198.
Machine Learning for Decision-Support in Distributed Networks. Setati, Makgopa Gareth. 14 November 2006.
Student number: 9801145J. MSc dissertation, School of Electrical and Information Engineering, Faculty of Engineering.
This document presents a paper reporting on the optimisation of a system that assists in time series prediction. Daily closing prices of a stock are used as the time series for which the system is optimised. Machine learning concepts, Artificial Neural Networks, Genetic Algorithms, and Agent-Based Modeling are used as tools for this task. Neural networks serve as the prediction engine, and genetic algorithms are used for optimisation tasks as well as for the simulation of a multi-agent trading environment. The simulated trading environment is used to ascertain and optimise the best data, in terms of quality, to use as inputs to the neural network. The results achieved were positive, and a large portion of this work concentrates on the refinement of the predictive capability. From this study it is concluded that AI methods bring a sound scientific approach to time series prediction, regardless of the phenomenon being predicted.
199.
Are we there yet? Predicting bus arrival times with an artificial neural network. Rideg, Johan; Markensten, Max. January 2019.
Public transport authority UL (Upplands Lokaltrafik) aims to reduce emissions, air pollution, and traffic congestion by providing bus journeys as an alternative to car use. To incentivise bus travel, accurate arrival time predictions are critical: they allow passengers to spend less time waiting for the bus and to revise their connection plans when a bus runs late. According to the literature, Artificial Neural Networks (ANNs) can capture nonlinear relationships between the time of day and position of a bus and its arrival time at upcoming stops. Using arrival times of buses on one line from July 2018 to February 2019, a data set for supervised learning was curated and used to train an ANN. The ANN was applied to data from the city buses and compared to one of the models currently in use. Analysis showed that the ANN handled the fluctuations in travel time during the day better, being outperformed only at night. Before the ANN can be deployed, real-time data processing must be added, and its robustness should be investigated further, since the current model depends heavily on static bus routes.
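Curating a supervised data set from raw arrival times, as described above, amounts to pairing features observed at one point of a trip with the arrival time at a later stop. A minimal sketch, where the trip records, feature choice, and target definition are all invented for illustration:

```python
# Each record in a trip: (minute of day, stop index, seconds until arrival at
# the final stop).  Features = (minute of day, stop); target = remaining time.
raw_trips = [
    [(480, 0, 900), (483, 1, 720), (487, 2, 480), (492, 3, 180)],
    [(510, 0, 960), (514, 1, 750), (519, 2, 500), (525, 3, 200)],
]

def curate(trips):
    """Flatten timestamped trips into (features, target) pairs for supervised learning."""
    X, y = [], []
    for trip in trips:
        for minute_of_day, stop, seconds_left in trip:
            X.append((minute_of_day, stop))
            y.append(seconds_left)
    return X, y

X, y = curate(raw_trips)  # one training example per observed (stop, time) pair
```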
200.
Classifying Material Defects with Convolutional Neural Networks and Image Processing. Heidari, Jawid. January 2019.
Remarkable progress has been made in machine learning and deep neural networks over the last decade. Deep convolutional neural networks (CNNs) have been hugely successful in image classification and object detection, and they can automate many industrial processes and increase efficiency. This thesis addressed two different approaches to the same problem. The first approach implemented two CNN models to classify images: a large pre-trained VGG model, retrained via transfer learning with only the top layers of the network updated, and a smaller model with customized layers. The trained models are an end-to-end solution: the input is an image, and the output is a class score. The second strategy implemented several classical image processing algorithms to detect the individual defects present in the images, working as a rule-based object detection algorithm. The Canny edge detection algorithm, combined with two mathematical morphology operations, formed the backbone of this strategy. Sandvik Coromant, a leading producer of high-quality metal cutting tools, gathered the approximately 1000 microscopy images used in this thesis. Unwanted defects occur in the products during the manufacturing process; these are analyzed by taking images with a conventional microscope at 100x and 1000x magnification. The three essential defect types investigated in this thesis are referred to as Por, Macro, and Slits. Experiments conducted during this thesis show that CNN models are a good approach to classifying impurities and defects in the metal industry, and the potential is high. The validation accuracy reached roughly 90 percent, and the final evaluation accuracy was around 95 percent, which is an acceptable result. The pre-trained VGG model reached much higher accuracy than the customized model.
The Canny edge detection algorithm, combined with dilation, erosion, and contour detection, produced good results, detecting the majority of the defects present in the images.
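Dilation and erosion, the two morphology operations named above, are easy to state on binary images: dilation sets a pixel if any neighbour under the structuring element is set, while erosion keeps a pixel only if the whole neighbourhood is set. A small numpy sketch of both (in practice a library such as OpenCV would supply these, along with Canny edge detection and contour extraction; the image here is synthetic):

```python
import numpy as np

def dilate(img, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(img, pad)
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            # OR together every shifted copy of the image
            out |= padded[dy : dy + img.shape[0], dx : dx + img.shape[1]]
    return out

def erode(img, k=3):
    """Binary erosion: a pixel survives only if its whole neighbourhood is set."""
    pad = k // 2
    padded = np.pad(img, pad)
    out = np.ones_like(img)
    for dy in range(k):
        for dx in range(k):
            # AND together every shifted copy of the image
            out &= padded[dy : dy + img.shape[0], dx : dx + img.shape[1]]
    return out

# A single "defect" pixel grows to a 3 x 3 blob under dilation; eroding the
# blob shrinks it back to the original single pixel.
img = np.zeros((7, 7), dtype=np.uint8)
img[3, 3] = 1
blob = dilate(img)
recovered = erode(blob)
```

Chaining the two (dilation then erosion, or the reverse) gives the closing and opening operations typically used to merge fragmented edge responses before contour detection.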