1

Automatic Short-Term Solar Flare Prediction Using Machine Learning and Sunspot Associations.

Qahwaji, Rami S.R., Colak, Tufan January 2007
In this paper, a machine-learning-based system that can provide automated short-term solar flare prediction is presented. The system accepts two sets of inputs: the McIntosh classification of sunspot groups and solar cycle data. To establish a correlation between solar flares and sunspot groups, the system explores the publicly available solar catalogues from the National Geophysical Data Center and associates sunspots with their corresponding flares based on their timing and NOAA numbers. The McIntosh classification of every relevant sunspot is extracted and converted to a numerical format suitable for machine learning algorithms. Using this system, we aim to predict whether a given sunspot class at a given time is likely to produce a significant flare within six hours and, if so, whether that flare will be an X- or M-class flare. Machine learning algorithms such as Cascade-Correlation Neural Networks (CCNNs), Support Vector Machines (SVMs), and Radial Basis Function Networks (RBFNs) are optimised and then compared to determine the learning algorithm that provides the best prediction performance. It is concluded that SVMs provide the best performance for predicting whether a McIntosh-classified sunspot group is going to flare or not, but CCNNs are more capable of predicting the class of the flare that will erupt. A hybrid system that combines an SVM and a CCNN is suggested for future use. / EPSRC
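The two-stage idea in this abstract (first decide whether a sunspot group will flare within six hours, then decide whether the flare would be X- or M-class) can be sketched as follows. This is not the authors' implementation: the numeric encoding of McIntosh classes, the solar-cycle feature, and the training data are hypothetical, and scikit-learn's SVC stands in for both the optimised SVM and the CCNN stage of the suggested hybrid.

```python
# A minimal sketch of two-stage flare prediction, assuming a hypothetical numeric
# encoding of McIntosh classes plus a solar-cycle phase feature and toy labels.
import numpy as np
from sklearn.svm import SVC

# Hypothetical encoding: each McIntosh letter (modified Zurich class, penumbra type,
# compactness) mapped to an integer, plus a solar-cycle phase value in [0, 1].
X_train = np.array([
    [6, 5, 2, 0.8],   # e.g. 'Ekc' near solar maximum
    [2, 1, 1, 0.1],   # e.g. 'Bxo' near solar minimum
    [5, 4, 2, 0.7],
    [1, 1, 1, 0.2],
])
y_flare = np.array([1, 0, 1, 0])  # 1 = significant flare within six hours

# Stage 1: will this sunspot group produce a significant flare at all?
flare_clf = SVC(kernel="rbf", probability=True).fit(X_train, y_flare)

# Stage 2 (only for groups predicted to flare): X-class or M-class?
X_flaring = X_train[y_flare == 1]
y_class = np.array([1, 0])        # 1 = X flare, 0 = M flare
class_clf = SVC(kernel="rbf").fit(X_flaring, y_class)

new_group = np.array([[6, 5, 2, 0.75]])
if flare_clf.predict(new_group)[0] == 1:
    print("X flare" if class_clf.predict(new_group)[0] == 1 else "M flare")
else:
    print("No significant flare expected within six hours")
```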
2

New Insights in Prediction and Dynamic Modeling from Non-Gaussian Mixture Processing Methods

Safont Armero, Gonzalo 29 July 2015
This thesis considers new applications of non-Gaussian mixtures in the framework of statistical signal processing and pattern recognition. The non-Gaussian mixtures were implemented as mixtures of independent component analyzers (ICA). The fundamental hypothesis of ICA is that the observed signals can be expressed as a linear transformation of a set of hidden variables, usually referred to as sources, which are statistically independent. This independence allows the original M-dimensional probability density function (PDF) of the data to be factored as a product of one-dimensional probability densities, greatly simplifying the modeling of the data. ICA mixture models (ICAMM) provide further flexibility by relaxing the independence requirement of ICA, allowing the model to obtain local projections of the data without compromising its generalization capabilities. New possibilities of ICAMM are explored here for the estimation and classification of signals. The thesis makes several contributions to research in non-Gaussian mixtures: (i) a method for maximum-likelihood estimation of missing data, based on the maximization of the PDF of the data given the ICAMM; (ii) a method for Bayesian estimation of missing data that minimizes the mean squared error and can obtain the confidence interval of the prediction; (iii) a generalization of the sequential dependence model for ICAMM to semi-supervised or supervised learning and multiple chains of dependence, thus allowing the use of multimodal data; and (iv) the introduction of ICAMM in several novel applications, both for estimation and for classification. The developed methods were validated through an extensive set of simulations covering multiple scenarios. These tested the sensitivity of the proposed methods with respect to the following parameters: the number of values to estimate; the kinds of source distributions; the correspondence of the data with the assumptions of the model; the number of classes in the mixture model; and unsupervised, semi-supervised, and supervised learning. The performance of the proposed methods was evaluated using several figures of merit and compared with that of multiple classical and state-of-the-art techniques for estimation and classification. Aside from the simulations, the methods were also tested on several sets of real data of different types: data from seismic exploration studies, ground-penetrating radar surveys, and biomedical data. These data correspond to the following applications: reconstruction of damaged or missing data from ground-penetrating radar surveys of historical walls; reconstruction of damaged or missing data from a seismic exploration survey; reconstruction of artifacted or missing electroencephalographic (EEG) data; diagnosis of sleep disorders; modeling of the brain response during memory tasks; and exploration of EEG data from subjects performing a battery of neuropsychological tests. The obtained results demonstrate the capability of the proposed methods to work on problems with real data. Furthermore, the proposed methods are general-purpose and can be used in many signal processing fields. / Safont Armero, G. (2015). New Insights in Prediction and Dynamic Modeling from Non-Gaussian Mixture Processing Methods [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/53913
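The fundamental ICA hypothesis described in this abstract can be illustrated with a short sketch. The mixing matrix and source distributions below are illustrative assumptions, not the thesis's data, and scikit-learn's FastICA stands in for the (more general) ICA mixture machinery developed in the thesis.

```python
# A minimal sketch of the ICA hypothesis: observed signals are a linear mixture of
# statistically independent, non-Gaussian sources, recovered here with FastICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_samples = 2000

# Two independent, non-Gaussian hidden sources (uniform and Laplacian).
s1 = rng.uniform(-1, 1, n_samples)
s2 = rng.laplace(0, 1, n_samples)
S = np.column_stack([s1, s2])

# Observed signals: an unknown linear transformation of the sources.
A = np.array([[1.0, 0.5],
              [0.4, 1.2]])
X = S @ A.T

# FastICA estimates the unmixing that makes the recovered components independent,
# so the joint PDF factors into a product of one-dimensional densities.
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)
print("Estimated mixing matrix:\n", ica.mixing_)
```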
3

A Content Based Movie Recommendation System Empowered By Collaborative Missing Data Prediction

Karaman, Hilal 01 July 2010
The evolution of the Internet has brought us into a world that offers a huge number of information items, such as music, movies, books, and web pages, of varying quality. Faced with this huge universe of items, people become confused and ask themselves, "Which one should I choose?" Recommendation systems address this problem by filtering a specific type of information with an information filtering technique that attempts to present the items most likely to interest the user. A variety of information filtering techniques have been proposed for performing recommendations; content-based and collaborative techniques are the most commonly used approaches in recommendation systems. This thesis introduces ReMovender, a content-based movie recommendation system empowered by collaborative missing data prediction. The distinctive point of this study lies in the methodology used to correlate the users in the system with one another and in the use of the content information of movies. ReMovender lets users rate movies on a scale from one to five. Using these ratings, it finds similarities among users in a collaborative manner to predict the missing rating data. For the content-based part, a set of movie features is used to correlate the movies and produce recommendations for the users.
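The collaborative missing-rating prediction step described above can be sketched as below. The tiny user-movie rating matrix and the cosine-similarity weighting are illustrative assumptions, not ReMovender's actual method, and the content-based movie features are omitted.

```python
# A minimal sketch of collaborative missing-rating prediction: estimate a missing
# entry as a similarity-weighted average of the ratings of users who rated that
# movie (0 marks a missing rating in this toy matrix).
import numpy as np

ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 5, 4],
    [0, 1, 5, 4],
], dtype=float)

def predict(user, movie, R):
    """Predict R[user, movie] from the users who rated that movie."""
    rated = R[:, movie] > 0
    rated[user] = False
    if not rated.any():
        return float("nan")
    # Cosine similarity between the target user and each user who rated the movie.
    u = R[user]
    sims = np.array([
        np.dot(u, R[v]) / (np.linalg.norm(u) * np.linalg.norm(R[v]))
        for v in np.where(rated)[0]
    ])
    neighbour_ratings = R[rated, movie]
    return float(np.dot(sims, neighbour_ratings) / sims.sum())

print(predict(1, 2, ratings))  # estimate user 1's missing rating for movie 2
```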
4

ElHealth: Using the Internet of Things and Computational Prediction for Elastic Management of Human Resources in Smart Hospitals

Fischer, Gabriel Souto 28 February 2019
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Hospitals are extremely important care points for ensuring the proper treatment of human health. One of the main problems to be faced is increasingly overcrowded patient care queues, in which patients wait longer and longer with health problems that go without proper treatment. The allocation of health professionals in hospital environments is not able to adapt to patient demand: there are times when little-used rooms have idle professionals while heavily used rooms have fewer professionals than necessary. Previous works do not solve this problem, since they focus on ways to automate health care rather than on techniques for better allocating the available human resources. Against this background, the present work proposes ElHealth, an IoT-focused model able to identify patients' use of rooms and, through computational prediction techniques, to identify when a room will have a demand that exceeds its capacity of care, proposing actions to move human resources so as to adapt to future patient demand. The main contributions of ElHealth are the definition of Multi-level Predictive Elasticity of Human Resources, an extension of the concept of resource elasticity in Cloud Computing to manage the use of human resources at different levels of a hospital environment, and the definition of Proactive Human Resource Elastic Speedup, an extension of the Speedup concept from parallel computing to quantify the gain in care time obtained from the dynamic parallel use of human resources in a hospital environment. ElHealth was simulated in a hospital environment using data from a Brazilian polyclinic and obtained promising results, decreasing the average number of patients waiting and reducing the waiting time for care in the proposed environment.
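The predictive elasticity idea in this abstract can be sketched as below. All of the specifics are made-up assumptions: a simple moving average stands in for ElHealth's prediction techniques, and the per-professional capacity and rebalancing rule are hypothetical, not the thesis's model.

```python
# A minimal sketch of predictive elasticity of human resources: forecast each room's
# near-term patient demand from recent IoT observations and propose moving
# professionals from under-used rooms to rooms expected to exceed their capacity.
from collections import deque

PATIENTS_PER_PROFESSIONAL = 4  # assumed service capacity per professional

class Room:
    def __init__(self, name, professionals, window=3):
        self.name = name
        self.professionals = professionals
        self.history = deque(maxlen=window)  # recent patient counts

    def observe(self, patients):
        self.history.append(patients)

    def predicted_demand(self):
        return sum(self.history) / len(self.history) if self.history else 0.0

    def capacity(self):
        return self.professionals * PATIENTS_PER_PROFESSIONAL

def rebalance(rooms):
    """Propose staff moves toward rooms whose predicted demand exceeds capacity."""
    overloaded = [r for r in rooms if r.predicted_demand() > r.capacity()]
    idle = [r for r in rooms
            if r.predicted_demand() < r.capacity() - PATIENTS_PER_PROFESSIONAL
            and r.professionals > 1]
    moves = []
    for busy in overloaded:
        for spare in idle:
            if spare.professionals > 1:
                spare.professionals -= 1
                busy.professionals += 1
                moves.append(f"move 1 professional from {spare.name} to {busy.name}")
                break
    return moves

rooms = [Room("triage", 2), Room("pediatrics", 3), Room("orthopedics", 2)]
for counts in [(10, 4, 3), (12, 5, 2), (14, 4, 2)]:   # simulated hourly observations
    for room, patients in zip(rooms, counts):
        room.observe(patients)
print(rebalance(rooms))
```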
