1

A sequential learning method with Kalman filter and Extreme Learning Machine for regression and time series forecasting problems

NÓBREGA, Jarley Palmeira 24 August 2015 (has links)
In machine learning applications, there are situations where the input dataset is not fully available at the beginning of the training phase. A well-known solution for this class of problem is to perform the learning process through the sequential feed of training instances. Among the most recent approaches for sequential learning are methods based on Single Layer Feedforward Networks (SLFN), notably the extensions of the Extreme Learning Machine (ELM) for sequential learning. The sequential version of the ELM algorithm, named Online Sequential Extreme Learning Machine (OS-ELM), uses a recursive least squares solution to update the output weights of the network through a covariance matrix. However, the implementation of OS-ELM and its extensions suffers from multicollinearity in the hidden layer output matrix. This thesis introduces a new method for sequential learning that handles the effects of multicollinearity. The proposed Kalman Learning Machine (KLM) sequentially updates the output weights of an OS-ELM-based network using the Kalman filter iterative procedure. To reduce the computational complexity of training, a new approach for estimating the filter parameters is also presented. Moreover, an extension of the method, named Extended Kalman Learning Machine (EKLM), is presented for problems where the dynamics of the model are nonlinear. The proposed method was evaluated against some of the most recent and effective state-of-the-art methods for handling multicollinearity in sequential learning problems, all based on the original OS-ELM. The experiments show that the proposed method achieves the lowest forecast error compared with most of its counterparts. Moreover, the KLM algorithm achieved the lowest average training time across all experiments, evidence that it reduces the computational complexity of the sequential learning process. A case study applied the proposed method to a financial time series forecasting problem. The results confirm that the KLM algorithm can decrease the forecast error and the average training time simultaneously, compared with the other sequential learning algorithms investigated in this thesis.
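The Kalman-style update of the output weights described above can be illustrated with a small sketch. This is a toy, not the thesis's exact KLM: the sigmoid hidden layer, the number of hidden units, the assumed observation-noise variance `R`, and the initial batch solve are all illustrative assumptions. The output weights play the role of the Kalman state (static, so the predict step is trivial), and each new sample triggers a gain/update/covariance step that mirrors recursive least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_output(X, W, b):
    """Random-feature hidden layer: sigmoid(X W + b). W and b never change."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Toy regression stream: noisy sine, fed one sample at a time after a small batch.
n_hidden, n_in = 20, 1
W = rng.normal(size=(n_in, n_hidden))   # random input weights, fixed after init
b = rng.normal(size=n_hidden)

X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * rng.normal(size=200)

# Initial batch: ridge-regularized least squares for the output weights.
H0 = hidden_output(X[:50], W, b)
P = np.linalg.inv(H0.T @ H0 + 1e-3 * np.eye(n_hidden))  # covariance estimate
beta = P @ H0.T @ y[:50]

# Sequential phase: Kalman-filter update of beta, one sample at a time.
R = 0.05 ** 2                                        # assumed noise variance
for x_t, y_t in zip(X[50:], y[50:]):
    h = hidden_output(x_t.reshape(1, -1), W, b)      # 1 x n_hidden
    S = float(h @ P @ h.T) + R                       # innovation variance
    K = (P @ h.T) / S                                # Kalman gain
    beta = beta + (K * (y_t - float(h @ beta))).ravel()
    P = P - K @ (h @ P)                              # covariance update

pred = hidden_output(X, W, b) @ beta
rmse = np.sqrt(np.mean((pred - np.sin(X).ravel()) ** 2))
```

With a fixed `R`, this coincides with the recursive least squares update used by OS-ELM; the thesis's contribution lies in how the filter parameters are estimated, which this sketch does not attempt.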
2

A robust and reliable data-driven prognostics approach based on Extreme Learning Machine and Fuzzy Clustering

Javed, Kamran 09 April 2014 (has links)
Prognostics and Health Management (PHM) aims at extending the life cycle of engineering assets while reducing exploitation and maintenance costs. For this reason, prognostics is considered a key process with prediction capabilities. Indeed, accurate estimates of the Remaining Useful Life (RUL) of an equipment enable defining further plans of action to increase safety, minimize downtime, and ensure mission completion and efficient production. Recent advances show that data-driven approaches (mainly based on machine learning methods) are increasingly applied for fault prognostics. They can be seen as black-box models that learn system behavior directly from Condition Monitoring (CM) data, use that knowledge to infer the current state of the system, and predict the future progression of failure. However, approximating the behavior of critical machinery is a challenging task that can result in poor prognostics. Some issues of data-driven prognostics modeling are highlighted as follows. 1) How to effectively process raw monitoring data to obtain suitable features that clearly reflect the evolution of degradation? 2) How to discriminate degradation states and define failure criteria (which can vary from case to case)? 3) How to be sure that learned models will be robust enough to show steady performance over uncertain inputs that deviate from learned experiences, and reliable enough to handle unknown data (i.e., operating conditions, engineering variations, etc.)? 4) How to achieve ease of application under industrial constraints and requirements? These issues constitute the problems addressed in this thesis and have led to the development of a novel approach that goes beyond conventional methods of data-driven prognostics.
3

Extreme Learning Machines: novel extensions and application to Big Data

Akusok, Anton 01 May 2016 (has links)
Extreme Learning Machine (ELM) is a recently developed way of training Single Layer Feed-forward Neural Networks with an explicitly given solution, which exists because the input weights and biases are generated randomly and never change. The method generally achieves performance comparable to error back-propagation, but with training times up to five orders of magnitude smaller. Despite the random initialization, the regularization procedures explained in the thesis ensure consistently good results. While the general methodology of ELMs is well developed, the sheer speed of the method enables its unusual use in state-of-the-art techniques based on repeated model re-training and re-evaluation. Three such techniques are explained in the third chapter: a way of visualizing high-dimensional data onto a provided fixed set of visualization points, an approach for detecting samples in a dataset with incorrect labels (mistakenly assigned, mistyped, or low-confidence), and a way of computing confidence intervals for ELM predictions. All three methods prove useful and enable further applications. The ELM method is a promising basis for dealing with Big Data because it naturally handles the problem of large data size. An adaptation of ELM to Big Data problems, and a corresponding toolbox (published and freely available), are described in chapter 4. The adaptation includes an iterative solution of ELM that satisfies limited computer memory constraints and allows for convenient parallelization. Other tools are GPU-accelerated computations and support for a convenient huge-data storage format. The chapter also provides two real-world examples of dealing with Big Data using ELMs, which present other problems of Big Data such as veracity and velocity, and solutions to them in the particular problem context.
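The explicit one-shot solution that makes ELM training fast can be sketched in a few lines. A minimal illustration, assuming a tanh hidden layer and a ridge-regularized solve for the output weights; the function names, the toy target, and the parameter values (`n_hidden`, `alpha`) are illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_train(X, T, n_hidden=50, alpha=1e-2):
    """One-shot ELM training: random hidden layer, ridge solve for output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # input weights: random, never updated
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden layer output matrix
    # Regularized least squares for the output weights (no back-propagation).
    beta = np.linalg.solve(H.T @ H + alpha * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy task: learn y = x1^2 - x2 from 500 random points.
X = rng.uniform(-1, 1, size=(500, 2))
y = X[:, 0] ** 2 - X[:, 1]
W, b, beta = elm_train(X, y)
rmse = np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

The entire training cost is one matrix product and one linear solve, which is why the repeated re-training techniques described in the abstract become affordable.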
4

Advanced Data Mining Methods for Electricity Customer Behaviour Analysis in Power Utility Companies

Ms Anisah Nizar Unknown Date (has links)
No description available.
6

Relative Optical Navigation around Small Bodies via Extreme Learning Machines

Law, Andrew M. January 2015 (has links)
To perform close-proximity operations in a low-gravity environment, relative and absolute positions are vital information for the maneuver; hence navigation is inseparably integrated into space travel. Extreme Learning Machine (ELM) is presented as an optical navigation method around small celestial bodies. Optical navigation uses visual observation instruments such as a camera to acquire useful data and determine spacecraft position. The required input data for operation are merely a single image strip and a nadir image. ELM is a Single Layer Feed-forward Network (SLFN), a type of neural network (NN). The algorithm is built on the premise that input weights and biases can be randomly assigned and do not require back-propagation. The learned model is the set of output layer weights, which are used to calculate a prediction. Together, Extreme Learning Machine Optical Navigation (ELM OpNav) utilizes optical images and the ELM algorithm to train the machine to navigate around a target body. In this thesis the asteroid Vesta is the designated celestial body. The trained ELMs estimate the position of the spacecraft during operation with a single data set. The results show the approach is promising and potentially suitable for on-board navigation.
7

Revisiting the problem of pattern classification in the presence of outliers using robust regression techniques

Ana Luiza Bessa de Paula Barros 09 August 2013 (has links)
This thesis addresses the problem of classifying data contaminated with atypical patterns. These patterns, generically called outliers, are omnipresent in real-world multivariate data sets, but their a priori detection (i.e., before training the classifier) is a difficult task. As a result, the most common approach is a reactive one, in which the presence of outliers is suspected only after a previously trained classifier has achieved low performance. Several strategies can then be carried out to improve the performance of the classifier, such as choosing a more computationally powerful classifier and/or removing the detected outliers from the data, eliminating those patterns which are difficult to categorize properly. Whatever the strategy adopted, the presence of outliers will always require more attention and care during the design of a pattern classifier. Bearing these difficulties in mind, this thesis revisits concepts and techniques from the theory of robust regression, in particular those related to M-estimation, adapting them to the design of pattern classifiers that can automatically handle outliers. This adaptation leads to the proposal of robust versions of two pattern classifiers widely used in the literature, namely, the least squares classifier (LSC) and the extreme learning machine (ELM). Through a comprehensive set of computational experiments using synthetic and real-world data, it is shown that the proposed robust classifiers consistently outperform their original versions.
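The M-estimation idea behind this line of work can be illustrated with iteratively reweighted least squares (IRLS) using Huber weights: samples with large residuals (likely outliers) are progressively down-weighted in the least-squares fit. This is a generic sketch of the technique, not the thesis's exact formulation; the Huber threshold `delta`, the toy data, and the iteration count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def huber_weights(r, delta=1.0):
    """Huber M-estimator weights: 1 for small residuals, delta/|r| for large ones."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))

def robust_lsc(X, t, n_iter=20, delta=1.0):
    """Least squares classifier fit by IRLS with Huber weights (M-estimation)."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias column
    w = np.linalg.lstsq(Xb, t, rcond=None)[0]       # ordinary LS starting point
    for _ in range(n_iter):
        r = t - Xb @ w                              # residuals of current fit
        s = np.sqrt(huber_weights(r, delta))        # row weights (sqrt for LS form)
        w = np.linalg.lstsq(Xb * s[:, None], t * s, rcond=None)[0]
    return w

# Two Gaussian classes with +/-1 targets, plus a few mislabeled points as outliers.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
t_true = np.hstack([-np.ones(100), np.ones(100)])
t = t_true.copy()
t[:5] = 1.0                                         # deliberate label noise

w = robust_lsc(X, t)
Xb = np.hstack([X, np.ones((200, 1))])
acc = np.mean(np.sign(Xb @ w) == t_true)
```

The same reweighting loop applies unchanged when `Xb` is replaced by an ELM hidden layer output matrix, which is essentially how a robust ELM variant can be obtained from a robust linear solver.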
8

Single-Image Super-Resolution via Regularized Extreme Learning Regression for Imagery from Microgrid Polarimeters

Sargent, Garrett Craig 24 May 2017 (has links)
No description available.
9

Design of a Novel Wearable Ultrasound Vest for Autonomous Monitoring of the Heart Using Machine Learning

Goodman, Garrett G. January 2020 (has links)
No description available.
10

A robust & reliable Data-driven prognostics approach based on extreme learning machine and fuzzy clustering.

Javed, Kamran 09 April 2014 (has links) (PDF)
Prognostics and Health Management (PHM) aims to extend the life cycle of a physical asset while reducing operating and maintenance costs. For this reason, prognostics is considered a key process with prediction capabilities. Indeed, accurate estimates of the Remaining Useful Life (RUL) of an equipment enable better definition of a plan of action to increase safety, reduce downtime, and ensure mission completion and production efficiency. Recent studies show that data-driven approaches are increasingly applied for failure prognostics. They can be seen as black-box models that learn the behavior of a system directly from condition monitoring data, in order to determine the current state of the system and predict the future progression of faults. However, approximating the behavior of critical machinery is a difficult task that can lead to poor prognostics. To understand data-driven prognostics modeling, the following points are considered. 1) How to process raw monitoring data to obtain suitable features reflecting the evolution of degradation? 2) How to distinguish degradation states and define failure criteria (which may vary from case to case)? 3) How to be sure that the models will be robust enough to show stable performance with uncertain inputs deviating from the acquired experience, and reliable enough to handle unknown data (i.e., operating conditions, engineering variations, etc.)? 4) How to easily achieve integration under industrial constraints and requirements? These questions are the problems addressed in this thesis.
They have led to the development of a new approach that goes beyond the limits of classical data-driven prognostics methods. The main contributions are as follows.
- The data-processing step is improved by introducing a new feature-extraction approach based on trigonometric and cumulative functions, assessed through three characteristics: monotonicity, trendability, and predictability. The main idea of this development is to transform raw data into indicators that improve long-term prediction accuracy.
- To account for robustness, reliability, and applicability, a new prediction algorithm is proposed: the Summation Wavelet-Extreme Learning Machine (SW-ELM). SW-ELM ensures good prediction performance while reducing learning time. An ensemble of SW-ELMs is also proposed to quantify uncertainty and improve the accuracy of the estimates.
- Prognostics performance is further strengthened by a new health-assessment algorithm: Subtractive-Maximum Entropy Fuzzy Clustering (S-MEFC). S-MEFC is an unsupervised classification approach that uses maximum-entropy inference to represent the uncertainty of multidimensional data. It can automatically determine the number of states, without human intervention.
- The final prognostics model is obtained by integrating SW-ELM and S-MEFC to track the evolution of machine degradation with simultaneous predictions and discrete-state estimation. This scheme also makes it possible to set failure thresholds dynamically and to estimate the RUL of monitored machines.
The developments are validated on real data from three experimental platforms: PRONOSTIA FEMTO-ST (bearings test bed), CNC SIMTech (machining cutters), and C-MAPSS NASA (turbofan engines), as well as other benchmark data. Owing to the realistic nature of the proposed RUL estimation strategy, very promising results are achieved. The main perspective of this work, however, is to further improve the reliability of the prognostics model.
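The feature-extraction step above ranks candidate degradation indicators by monotonicity, trendability, and predictability. As a small illustration, here is one common sign-based monotonicity score; exact definitions vary across the prognostics literature, so this formula is an assumption rather than the thesis's own metric.

```python
import numpy as np

def monotonicity(x):
    """Sign-based monotonicity score in [0, 1]; 1 for a strictly monotone series.

    One common definition: |#positive diffs - #negative diffs| / (n - 1).
    Exact formulas differ between papers in the prognostics literature.
    """
    d = np.diff(np.asarray(x, dtype=float))
    return abs(np.sum(d > 0) - np.sum(d < 0)) / len(d)

m_up = monotonicity([1, 2, 3, 4, 5])   # strictly increasing
m_osc = monotonicity([1, 2, 1, 2, 1])  # oscillating
```

A feature with a score near 1 trends consistently with degradation and is a better input for long-term RUL prediction than an oscillating one.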
