11

Parameter Estimation of Complex Systems from Sparse and Noisy Data

Chu, Yunfei, December 2010
Mathematical modeling is a key component of various disciplines in science and engineering. A mathematical model that represents the important behavior of a real system can be used as a substitute for the real process in many analysis and synthesis tasks. The performance of model-based techniques, e.g., system analysis, computer simulation, controller design, sensor development, state filtering, product monitoring, and process optimization, is highly dependent on the quality of the model used, so it is very important to be able to develop an accurate model from the available experimental data. Parameter estimation is usually formulated as an optimization problem in which the parameter estimate is computed by minimizing the discrepancy between the model prediction and the experimental data. If a simple model and a large amount of data are available, the estimation problem is frequently well-posed and a small error in data fitting automatically results in an accurate model. However, this is not always the case. If the model is complex and only sparse and noisy data are available, the estimation problem is often ill-conditioned and good data fitting does not ensure accurate model predictions. Many challenges that can safely be neglected when estimating simple models must be carefully considered when estimating complex ones. To obtain a reliable and accurate estimate from sparse and noisy data, a set of techniques is developed to address these challenges, including (1) model analysis and simplification, which identifies the important sources of uncertainty and reduces the model complexity; (2) experimental design, which collects information-rich data by setting optimal experimental conditions; (3) regularization of the estimation problem, which solves the ill-conditioned large-scale optimization problem by reducing the number of parameters; (4) nonlinear estimation and filtering, which fits the data with various estimation and filtering algorithms; and (5) model verification, which applies statistical hypothesis tests to the prediction error. The developed methods are applied to different types of models, ranging from models found in the process industries to biochemical networks, some of which are described by ordinary differential equations with dozens of state variables and more than a hundred parameters.
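As a concrete illustration of the least-squares formulation the abstract describes — computing the parameter estimate by minimizing the discrepancy between model prediction and experimental data — here is a minimal sketch for a hypothetical two-state ODE model. The model, rate constants, and noise level are illustrative stand-ins, not taken from the dissertation.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Hypothetical two-state kinetic model x1 -> x2 -> (out); not the thesis model.
def rhs(t, x, k1, k2):
    return [-k1 * x[0], k1 * x[0] - k2 * x[1]]

def residuals(theta, t_obs, y_obs):
    # Discrepancy between the model prediction of x2 and the measured data.
    sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), [1.0, 0.0],
                    t_eval=t_obs, args=tuple(theta))
    return sol.y[1] - y_obs

# Sparse, noisy synthetic observations of the second state only.
t_obs = np.linspace(0.0, 10.0, 8)
theta_true = (0.7, 0.3)
y_clean = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0],
                    t_eval=t_obs, args=theta_true).y[1]
rng = np.random.default_rng(0)
y_obs = y_clean + 0.02 * rng.standard_normal(y_clean.size)

fit = least_squares(residuals, x0=[1.0, 1.0], args=(t_obs, y_obs),
                    bounds=(0.0, np.inf))
print(fit.x)  # estimate of (k1, k2)
```

With only eight noisy points and correlated parameters, the Jacobian at the optimum can be nearly rank-deficient — exactly the ill-conditioning that motivates the regularization and experimental-design techniques listed in the abstract.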
12

A General Model for Continuous Noninvasive Pulmonary Artery Pressure Estimation

Smith, Robert Anthony, 15 December 2011
Elevated pulmonary artery pressure (PAP) is a significant healthcare risk. Continuous monitoring for patients with elevated PAP is crucial for effective treatment, yet the most accurate method is invasive and expensive, and cannot be performed repeatedly. Noninvasive methods exist but are inaccurate, expensive, and cannot be used for continuous monitoring. We present a machine learning model based on heart sounds that estimates pulmonary artery pressure accurately enough to avoid an invasive diagnostic operation, allowing consistent monitoring of heart condition in suspect patients without the cost and risk of invasive monitoring. We conduct a greedy search through 38 candidate features, using cross-validation over 109 patients to find the most predictive ones. Our best general model has a standard error of the estimate (SEE) of 8.28 mmHg, which outperforms the previous best performance reported in the literature on a general set of unseen patient data.
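A greedy forward search of this kind can be sketched in a few lines. The snippet below is a generic reconstruction under stated assumptions: a linear base model and plain k-fold cross-validation stand in for the thesis' actual regression model and 109-patient protocol.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

def see(y, y_hat):
    # Standard error of the estimate: RMS of the prediction residuals.
    return np.sqrt(np.mean((y - y_hat) ** 2))

def greedy_feature_search(X, y, max_features=10, cv=10):
    selected, history = [], []
    remaining = list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        # Score every candidate feature added to the current selection.
        scores = {}
        for j in remaining:
            y_hat = cross_val_predict(LinearRegression(),
                                      X[:, selected + [j]], y, cv=cv)
            scores[j] = see(y, y_hat)
        j_best = min(scores, key=scores.get)
        if history and scores[j_best] >= history[-1]:
            break  # no candidate improves the cross-validated SEE
        selected.append(j_best)
        history.append(scores[j_best])
        remaining.remove(j_best)
    return selected, history
```

For patient data one would group the folds by patient (e.g. scikit-learn's GroupKFold) so that no patient contributes to both training and validation.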
13

Improving armed conflict prediction using machine learning: ViEWS+

Helle, Valeria; Negus, Andra-Stefania; Nyberg, Jakob, January 2018
Our project, ViEWS+, expands the software functionality of the Violence Early-Warning System (ViEWS). ViEWS aims to predict the probabilities of armed conflicts in the next 36 months using machine learning. Governments and policy-makers may use conflict predictions to decide where to deliver aid and resources, potentially saving lives. The predictions use conflict data gathered by ViEWS, which includes variables like past conflicts, child mortality and urban density. The large number of variables raises the need for a selection tool to remove those that are irrelevant for conflict prediction. Before our work, the stakeholders used their experience and some guesswork to pick the variables, as well as the predictive function and its parameters. Our goals were to improve the efficiency, in terms of speed, and the correctness of the ViEWS predictions. Three steps were taken. Firstly, we built an automatic variable selection tool, which helps researchers use fewer, more relevant variables to save time and resources. Secondly, we compared prediction functions and identified the best for the purpose of predicting conflict. Lastly, we tested how parameter values affect the performance of the chosen functions, so as to produce good predictions while also reducing the execution time. The new tools improved both the execution time and the predictive correctness of the system compared to the results obtained prior to our project: it is now nine times faster than before, and its correctness has improved by a factor of three. We believe our work leads to more accurate conflict predictions, and as ViEWS has strong connections to the European Union, we hope that decision-makers can benefit from it when trying to prevent conflicts.
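The abstract does not specify the selection algorithm, but a standard way to build such a variable selection tool is permutation importance on a held-out set. The sketch below is one plausible realization, with the ViEWS variable names, model, and cutoff left as placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

def select_variables(X, y, names, keep=20, seed=0):
    """Rank variables by held-out permutation importance; keep the top ones."""
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=seed)
    model = RandomForestClassifier(n_estimators=300, random_state=seed)
    model.fit(X_tr, y_tr)
    imp = permutation_importance(model, X_val, y_val,
                                 n_repeats=10, random_state=seed)
    order = np.argsort(imp.importances_mean)[::-1]
    return [names[i] for i in order[:keep]]
```

Variables whose permutation barely changes validation performance carry little predictive signal and can be dropped, which is what saves execution time in a pipeline like the one described above.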
14

Classificação automática de modulações mono e multiportadoras utilizando método de extração de características e classificadores SVM / Automatic classification of single-carrier and multicarrier modulations using feature extraction and SVM classifiers

Amoedo, Diego Alves, 19 July 2017
Cognitive radio is a new technology that aims to solve the spectrum underutilization problem through spectrum sensing, whose objective is to detect the so-called spectrum holes. Automatic modulation classification plays an important role in this scenario, since it provides information about primary users with the goal of aiding in spectrum sensing tasks. In this dissertation, we propose a methodology for multiclass and hierarchical classification of modulated signals using support vector machines (SVM) with a set of predefined parameters. In the literature, other works deal with automatic modulation classification with SVM and other classifiers; however, few of them take a deep look at classifier design. SVM is known for its high discrimination capacity, but its performance is very sensitive to the parameters used during classifier design. With the use of a predefined set of parameters, we seek to analyze the behavior of the classifier broadly and to investigate the influence of parameter changes on the constitution of classifiers. In addition, we use one-versus-all and one-versus-one decompositions, error-correcting output codes, and hierarchical decomposition. Finally, nine types of modulations (AM, FM, BPSK, QPSK, 16QAM, 64QAM, GMSK, OFDM and WCDMA) are used. The modulation types and the decomposition techniques used cover almost all decomposition techniques and modulation classes present in the literature.
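To make the decomposition strategies concrete, here is a hedged sketch of one-versus-all and one-versus-one SVM classification with fixed, predefined parameters. Synthetic features stand in for the real signal features, and the kernel parameter values are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

# Synthetic 9-class data standing in for features extracted from the nine
# modulation types (AM, FM, BPSK, QPSK, 16QAM, 64QAM, GMSK, OFDM, WCDMA).
X, y = make_classification(n_samples=900, n_features=20, n_informative=10,
                           n_classes=9, random_state=0)

base = SVC(kernel="rbf", C=10.0, gamma=0.1)  # fixed, predefined parameters

for name, clf in [("one-vs-all", OneVsRestClassifier(base)),
                  ("one-vs-one", OneVsOneClassifier(base))]:
    acc = clf.fit(X[:700], y[:700]).score(X[700:], y[700:])
    print(f"{name}: accuracy {acc:.3f}")
```

One-versus-all trains one binary SVM per class, while one-versus-one trains one per class pair and votes; with fixed parameters the comparison isolates the effect of the decomposition itself.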
15

Estimation non-paramétrique de la densité de variables aléatoires cachées / Nonparametric estimation of the density of hidden random variables.

Dion, Charlotte, 24 June 2016
This thesis develops several nonparametric procedures for estimating a probability density function. In each case, the main difficulty lies in the fact that the variables of interest are not directly observed. The first part deals with a mixed linear model for which repeated observations are available. The second part focuses on stochastic differential equations with random effects; many trajectories are observed continuously on the same time interval. The third part is set in a multiplicative noise framework. The parts of the thesis are connected by the same inverse problem context and by a common goal: estimating the density function of a hidden variable. In the first two parts the density of one or several random effects is estimated. In the third part the goal is to rebuild the density of the original variable from the noisy observations. Different global methods are used and lead to competitive estimators: kernel estimators, projection estimators, and estimators built by deconvolution. Parameter selection yields adaptive estimators, and the integrated quadratic risks are bounded using a Talagrand concentration inequality. A simulation study of each proposed estimator illustrates its performance. A neuronal dataset is investigated with the new procedures developed for stochastic differential equations.
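As a taste of the deconvolution machinery, here is a minimal sketch of the classical density estimator for the additive-noise model Y = X + ε with known Gaussian noise, using a sinc kernel and Fourier inversion of the empirical characteristic function. The bandwidth h is fixed here, whereas the thesis selects such parameters adaptively; the data are synthetic.

```python
import numpy as np

def deconvolution_density(y, x_grid, h, sigma):
    # f_hat(x) = (1/2pi) * Int exp(-i*s*x) * ecf_Y(s) / cf_noise(s) ds over |s| <= 1/h,
    # i.e. a sinc-kernel deconvolution estimator for Y = X + eps, eps ~ N(0, sigma^2).
    s = np.linspace(-1.0 / h, 1.0 / h, 1001)
    ds = s[1] - s[0]
    ecf = np.exp(1j * np.outer(y, s)).mean(axis=0)      # empirical CF of Y
    integrand = ecf * np.exp(0.5 * (sigma * s) ** 2)    # divide by the noise CF
    vals = np.exp(-1j * np.outer(x_grid, s)) * integrand
    f = (vals.sum(axis=1) * ds).real / (2.0 * np.pi)    # Riemann-sum inversion
    return np.clip(f, 0.0, None)

rng = np.random.default_rng(1)
x_hidden = rng.gamma(3.0, 1.0, size=2000)               # density we want to recover
sigma = 0.5
y_obs = x_hidden + sigma * rng.standard_normal(2000)    # all we actually observe
grid = np.linspace(0.0, 10.0, 200)
f_hat = deconvolution_density(y_obs, grid, h=0.4, sigma=sigma)
```

Dividing by the noise characteristic function amplifies high frequencies, which is why the integral must be truncated at 1/h — the inverse problem flavor shared by all three parts of the thesis.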
16

Assessment and Prediction of Cardiovascular Status During Cardiac Arrest Through Machine Learning and Dynamical Time-Series Analysis

Shandilya, Sharad, 2 July 2013
In this work, new methods of feature extraction, feature selection, stochastic data characterization/modeling, variance reduction, and measures for parametric discrimination are proposed. These methods have implications for data mining, machine learning, and information theory. A novel decision-support system is developed to guide intervention during cardiac arrest. The models are built upon knowledge extracted with signal-processing, nonlinear-dynamic, and machine-learning methods. The proposed ECG characterization, combined with information extracted from PetCO2 signals, shows viability for decision support in clinical settings. The approach, which focuses on the integration of multiple features through machine-learning techniques, is well suited to the inclusion of multiple physiologic signals. Ventricular fibrillation (VF) is a common presenting dysrhythmia in the setting of cardiac arrest, whose main treatment is defibrillation through direct-current countershock to achieve return of spontaneous circulation. However, defibrillation is often unsuccessful and may even lead to the transition of VF to more nefarious rhythms such as asystole or pulseless electrical activity. Multiple methods have been proposed for predicting defibrillation success based on examination of the VF waveform, but to date no analytical technique has been widely accepted. For a given desired sensitivity, the proposed model provides significantly higher accuracy and specificity than the state of the art; notably, within the range of 80-90% sensitivity, the method provides about 40% higher specificity. This means that when trained to the same level of sensitivity, the model yields far fewer false positives (unnecessary shocks). Also introduced is a new model that predicts recurrence of arrest after a successful countershock is delivered; to date, no other work has sought to build such a model. I validate the methods by reporting multiple performance metrics calculated on (blind) test sets.
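The sensitivity/specificity trade-off described above comes from thresholding a continuous score. As a small illustration — not the thesis' actual feature set or model — here is how waveform descriptors might be extracted from a VF segment and how a decision threshold can be pinned to a target sensitivity.

```python
import numpy as np

def vf_features(ecg, fs=250.0):
    # Illustrative waveform descriptors; the dissertation's feature set differs.
    amp = np.percentile(np.abs(ecg), 95)                 # amplitude proxy
    median_slope = np.median(np.abs(np.diff(ecg) * fs))  # waveform coarseness
    spec = np.abs(np.fft.rfft(ecg)) ** 2
    freqs = np.fft.rfftfreq(len(ecg), d=1.0 / fs)
    centroid = (freqs * spec).sum() / spec.sum()         # spectral centroid
    return np.array([amp, median_slope, centroid])

def threshold_for_sensitivity(scores, labels, target=0.85):
    # Choose the largest score threshold that still classifies at least
    # `target` of the true positives (e.g. successful shocks) correctly.
    pos = np.sort(scores[labels == 1])
    k = int(np.floor((1.0 - target) * len(pos)))
    return pos[k]
```

Fixing sensitivity first and then comparing specificities is what makes the "40% higher specificity at 80-90% sensitivity" comparison meaningful.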
17

A Signal Processing Approach to Voltage-Sensitive Dye Optical Imaging / Une approche mathématique de l'imagerie optique par colorant potentiométrique

Raguet, Hugo, 22 September 2014
Voltage-sensitive dye optical imaging is a promising recording modality for cortical activity, but its practical potential is limited by many artefacts and interferences in the acquisitions. Inspired by existing models in the literature, we propose a generative model of the signal based on an additive mixture of components, each one constrained within a union of linear spaces determined by its biophysical origin. Motivated by the resulting component separation problem, which is an underdetermined linear inverse problem, we develop: (1) convex, spatially structured regularizations, enforcing in particular sparsity of the solutions; (2) a new first-order proximal algorithm for efficiently minimizing the resulting functional; (3) statistical methods for automatic parameter selection, based on Stein's unbiased risk estimate. We study those methods in a general framework and discuss their potential applications in various fields of applied mathematics, in particular for large-scale inverse problems or regressions. We subsequently develop a software tool for noisy component separation, in an integrated environment adapted to voltage-sensitive dye optical imaging. Finally, we evaluate this software on different data sets, both synthetic and real, showing encouraging perspectives for the observation of complex cortical dynamics.
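The new proximal algorithm itself is beyond a short excerpt, but the basic first-order proximal scheme it builds on can be shown for the simplest sparsity-promoting regularizer. This ISTA sketch for the plain ℓ1 case is a generic illustration, not the algorithm developed in the thesis.

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Proximal gradient descent for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * A.T @ (A @ x - y)     # gradient step on the data fidelity
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox of lam*||.||_1
    return x
```

Each iteration alternates a gradient step on the smooth data-fidelity term with the proximal operator of the regularizer (here, soft-thresholding); the spatially structured regularizers studied in the thesis replace that proximal map.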
18

Incorporating complex cells into neural networks for pattern classification

Bergstra, James
Computational neuroscientists have hypothesized that the visual system from the retina to at least primary visual cortex is continuously fitting a latent variable probability model to its stream of perceptions. It is not known exactly which probability model, nor exactly how the fitting takes place, but known algorithms for fitting such models require conditional estimates of the latent variables. This gives us a strong hint as to why the visual system might be fitting such a model; in the right kind of model those conditional estimates can also serve as excellent features for analyzing the semantic content of images perceived. The work presented here uses image classification performance (accurate discrimination between common classes of objects) as a basis for comparing visual system models, and algorithms for fitting those models as probability densities to images. This dissertation (a) finds that models based on visual area V1's complex cells generalize better from labeled training examples than conventional neural networks whose hidden units are more like V1's simple cells, (b) presents novel interpretations for complex-cell-based visual system models as probability distributions and novel algorithms for fitting them to data, and (c) demonstrates that these models form better features for image classification after they are first trained as probability models.
Visual system models based on complex cells achieve some of the best results to date on the CIFAR-10 image classification benchmark, and samples from their probability distributions indicate that they have learnt to capture important aspects of natural images. Two auxiliary technical innovations that made this work possible are also described: a random search algorithm for selecting hyper-parameters, and an optimizing compiler for matrix-valued mathematical expressions which can target both CPU and GPU devices.
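Of the two auxiliary innovations, the random hyper-parameter search is easy to sketch; the snippet below is a minimal generic version, with the search space and objective as hypothetical placeholders (the matrix-expression compiler is a much larger system and is not sketched).

```python
import random

def random_search(objective, space, n_trials=50, seed=0):
    # Sample each hyper-parameter independently from its prior and keep
    # the configuration with the lowest validation objective.
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(n_trials):
        cfg = {name: draw(rng) for name, draw in space.items()}
        score = objective(cfg)        # e.g. validation error of a trained model
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Hypothetical search space: log-uniform learning rate, categorical layer size.
space = {
    "lr": lambda r: 10.0 ** r.uniform(-5, -1),
    "n_hidden": lambda r: r.choice([64, 128, 256, 512]),
}
best, err = random_search(lambda cfg: (cfg["lr"] - 0.01) ** 2, space)  # dummy objective
```

Independent random sampling covers each hyper-parameter's range more densely than a grid of the same budget, which is why it tends to find good configurations with fewer trials.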
19

Régularisations de faible complexité pour les problèmes inverses / Low Complexity Regularization of Inverse Problems

Vaiter, Samuel, 10 July 2014
This thesis is concerned with recovery guarantees and sensitivity analysis of variational regularization for noisy linear inverse problems. The recovery is cast as a convex optimization problem combining a data fidelity term and a regularizing functional promoting solutions conforming to some notion of low complexity related to their non-smoothness points. Our approach, based on partial smoothness, handles a variety of regularizers including analysis/structured sparsity, antisparsity, and low-rank structure. We first give an analysis of the noise robustness guarantees, both in terms of the distance of the recovered solutions to the original object and of the stability of the promoted model space. We then turn to the sensitivity analysis of these optimization problems under observation perturbations. With random observations, we build an unbiased estimator of the risk which provides a parameter selection scheme.
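The unbiased risk estimator in question is Stein's. For the simplest low-complexity regularizer — soft-thresholding in an orthonormal setting — SURE has a closed form, shown below as an illustrative special case of such a parameter selection scheme, not the thesis' general construction.

```python
import numpy as np

def sure_soft_threshold(y, lam, sigma):
    # Stein Unbiased Risk Estimate of E||x_hat - x0||^2 for the denoiser
    # x_hat_i = sign(y_i) * max(|y_i| - lam, 0), with y = x0 + N(0, sigma^2 I):
    # SURE = -n*sigma^2 + ||x_hat - y||^2 + 2*sigma^2 * divergence(x_hat).
    n = y.size
    return (-n * sigma**2
            + np.minimum(y**2, lam**2).sum()             # ||x_hat - y||^2
            + 2.0 * sigma**2 * (np.abs(y) > lam).sum())  # 2*sigma^2 * divergence

rng = np.random.default_rng(0)
x0 = np.zeros(500); x0[:25] = 5.0                        # sparse ground truth
sigma = 1.0
y = x0 + sigma * rng.standard_normal(500)

# Pick lambda minimizing the risk estimate -- no access to x0 is needed.
grid = np.linspace(0.1, 5.0, 50)
lam_best = min(grid, key=lambda l: sure_soft_threshold(y, l, sigma))
```

Because SURE is computed from the observations alone, minimizing it over a grid of regularization parameters gives a data-driven selection rule of the kind the abstract describes.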
