11

Analyse conjointe de traces oculométriques et d'EEG à l'aide de modèles de Markov cachés couplés / Joint analysis of eye movements and EEGs using coupled hidden Markov models

Olivier, Brice 26 June 2019 (has links)
Cette thèse consiste à analyser conjointement des signaux de mouvement des yeux et d’électroencéphalogrammes (EEG) multicanaux acquis simultanément avec des participants effectuant une tâche de lecture de recueil d'informations afin de prendre une décision binaire - le texte est-il lié à un sujet ou non ? La recherche d'informations textuelles n'est pas un processus homogène dans le temps - ni d'un point de vue cognitif, ni en termes de mouvement des yeux. Au contraire, ce processus implique plusieurs étapes ou phases, telles que la lecture normale, le balayage, la lecture attentive - en termes d'oculométrie - et la création et le rejet d'hypothèses, la confirmation et la décision - en termes cognitifs. Dans une première contribution, nous discutons d'une méthode d'analyse basée sur des chaînes semi-markoviennes cachées sur les signaux de mouvement des yeux afin de mettre en évidence quatre phases interprétables en termes de stratégie d'acquisition d'informations : lecture normale, lecture rapide, lecture attentive et prise de décision. Dans une deuxième contribution, nous lions ces phases aux changements caractéristiques des signaux EEG et des informations textuelles. En utilisant une représentation en ondelettes des EEG, cette analyse révèle des changements de variance et de corrélation des coefficients inter-canaux, en fonction des phases et de la largeur de bande. En utilisant des méthodes de plongement des mots, nous relions l’évolution de la similarité sémantique au sujet tout au long du texte avec les changements de stratégie. Dans une troisième contribution, nous présentons un nouveau modèle dans lequel les EEG sont directement intégrés en tant que variables de sortie afin de réduire l’incertitude des états. Cette nouvelle approche prend également en compte les aspects asynchrones et hétérogènes des données.
/ This PhD thesis consists in jointly analyzing eye-tracking signals and multi-channel electroencephalograms (EEGs) acquired simultaneously from participants performing an information-collection reading task in order to make a binary decision - is the text related to a given topic or not? Textual information search is not a homogeneous process in time - neither from a cognitive point of view, nor in terms of eye movement. On the contrary, this process involves several steps or phases, such as normal reading, scanning, and careful reading - in terms of oculometry - and the creation and rejection of hypotheses, confirmation, and decision - in cognitive terms. In a first contribution, we discuss an analysis method based on hidden semi-Markov chains over the eye-tracking signals in order to highlight four phases interpretable in terms of information-acquisition strategy: normal reading, fast reading, careful reading, and decision making. In a second contribution, we link these phases with characteristic changes in both the EEG signals and the textual information. Using a wavelet representation of the EEGs, this analysis reveals variance and correlation changes in the inter-channel coefficients, according to phase and frequency band. Using word-embedding methods, we link the evolution of semantic similarity to the topic throughout the text with strategy changes. In a third contribution, we present a new model in which the EEGs are directly integrated as output variables in order to reduce state uncertainty. This novel approach also takes into consideration the asynchronous and heterogeneous aspects of the data.
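The phase-segmentation idea in this abstract can be sketched with a plain HMM decoder. Everything below is illustrative: the four phase labels, the Gaussian emission parameters, and the transition matrix are invented for the example, and the thesis uses hidden semi-Markov chains fitted to real eye-tracking features, not this toy model.

```python
import numpy as np

def viterbi(log_pi, log_A, log_lik):
    """Most probable state path; log_lik[t, k] = log p(obs[t] | state k)."""
    T, K = log_lik.shape
    delta = log_pi + log_lik[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A          # (from-state, to-state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_lik[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path

# Toy data: fixation durations (ms) emitted by 4 hypothetical reading phases
# (e.g. normal reading, fast reading, careful reading, decision making).
means = np.array([230.0, 120.0, 330.0, 180.0])   # invented per-phase means
sd = 30.0
obs = np.array([115., 125., 118., 228., 235., 332., 340., 328.])
log_lik = -0.5 * ((obs[:, None] - means[None, :]) / sd) ** 2

K = 4
log_pi = np.log(np.full(K, 1.0 / K))
A = np.where(np.eye(K, dtype=bool), 0.9, 0.1 / 3)  # sticky transitions
path = viterbi(log_pi, np.log(A), log_lik)         # segment into phases
```

With emissions this well separated, the decoded path simply tracks the nearest phase mean while the sticky diagonal discourages spurious switches; a semi-Markov variant would additionally model how long each phase lasts.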
12

Probabilistic Methods for Computational Annotation of Genomic Sequences / Probabilistische Methoden für computergestützte Genom-Annotation

Keller, Oliver 26 January 2011 (has links)
No description available.
13

On some special-purpose hidden Markov models / Einige Erweiterungen von Hidden Markov Modellen für spezielle Zwecke

Langrock, Roland 28 April 2011 (has links)
No description available.
14

Estimations et projections d’indicateurs de santé pour maladies chroniques et prise en compte de l’impact d’interventions / Estimates and projections of health indicators for chronic diseases and impact of interventions

Wanneveich, Mathilde 04 November 2016 (has links)
De nos jours, la Santé Publique porte de plus en plus d’intérêt aux maladies chroniques neuro-dégénératives liées au vieillissement telles que la démence ou la maladie de Parkinson. Ces pathologies ne peuvent ni être évitées ni guéries et occasionnent une détérioration progressive de la santé des malades requérant donc des soins spécifiques. Le contexte démographique actuel laisse entrevoir un vieillissement de la population et un allongement de l’espérance de vie continus, ce qui aura pour conséquence majeure d’aggraver le fardeau économique, social et démographique de ces maladies dans les années à venir. C’est pourquoi, dans l’optique d’anticiper les problèmes à venir, il est important de développer des modèles statistiques permettant de faire des projections et d’évaluer le fardeau futur de ces maladies via divers indicateurs de santé. De plus, il est intéressant de pouvoir donner des projections, en fonction de scénarios potentiels, qui pourraient être mis en place (e.g. un nouveau traitement), afin d’évaluer l’impact qu’aurait une telle intervention de Santé Publique, mais aussi de prendre en compte une variation de l’incidence due, par exemple, à des changements de comportement. Une approche utilisant le modèle illness-death sous une hypothèse Markovienne a été proposée par Joly et al. 2013 pour répondre à cet objectif. Cette approche est particulièrement adaptée dans ce contexte, car elle permet, notamment, de prendre en compte le risque compétitif qui existe entre devenir malade ou décéder (pour un individu non-malade). L’utilisation de ce modèle pour faire des projections a l’avantage, d’une part, de faire intervenir des projections démographiques nationales pour mieux capter l’évolution de la mortalité au cours du temps, et d’autre part, de proposer une modélisation adaptée de la mortalité (en distinguant la mortalité des malades, des non-malades et générale). C’est sur la base de ce modèle que découle tout le travail de cette thèse.
Dans une première partie, les hypothèses du modèle existant ont été améliorées ou modifiées dans le but de considérer l’évolution de l’incidence de la maladie au cours du temps, puis de passer d’un modèle Markovien à un modèle semi-Markovien afin de modéliser la mortalité des malades en fonction de la durée passée avec la maladie. Dans une deuxième partie, la méthode initiale, permettant de considérer l’impact d’une intervention mais avec des hypothèses restrictives, a été développée et généralisée pour prendre en compte des interventions plus flexibles. Puis, les expressions mathématiques/statistiques d’indicateurs de santé pertinents ont été développées dans ce contexte dans le but d’avoir un panel de projections permettant une meilleure évaluation du fardeau futur de la maladie. L’application principale de ce travail a porté sur des projections concernant la démence. Cependant, en appliquant ces modèles à la maladie de Parkinson, nous avons proposé des méthodes permettant d’adapter notre approche à d’autres types de données. / Nowadays, Public Health is increasingly interested in neuro-degenerative chronic diseases related to ageing, such as dementia or Parkinson’s disease. These pathologies can neither be prevented nor cured and cause a progressive deterioration of health, requiring specific care. The current demographic situation suggests a continuous ageing of the population and a rise in life expectancy. As a consequence, the economic, social and demographic burden related to these diseases will worsen in the years to come. That is why developing statistical models that allow projections to be made and the future burden of chronic diseases to be estimated via several health indicators is becoming of paramount importance. Furthermore, it is interesting to give projections according to hypothetical scenarios that could be set up (e.g. a new treatment), to estimate the impact of such a Public Health intervention, but also to take into account modifications in disease incidence due, for example, to behaviour changes. To meet this objective, an approach was proposed by Joly et al. 2013, using the illness-death model under a Markovian hypothesis. This approach is particularly well suited to this context, since it accounts for the competing risk between the risk of death and the risk of developing the disease (for a non-diseased subject). On the one hand, projections made with this model take advantage of national demographic projections to better capture mortality trends over time. On the other hand, the approach carefully models mortality, distinguishing overall, non-diseased and diseased mortality trends. All the work of this thesis is built on this model. In a first part, the hypotheses of the existing model are improved or modified in order both to consider the evolution of disease incidence over time and to move from a Markov to a semi-Markov hypothesis, which allows mortality among diseased subjects to be modelled as a function of the time spent with the disease. In a second part, the initial method, which allowed the impact of an intervention to be taken into account but under restrictive assumptions, is developed and generalized to handle more flexible interventions. Then, the mathematical/statistical expressions of relevant health indicators are developed in this context to provide a panel of projections giving a better assessment of the future disease burden. The main application of this work concerns projections for dementia. However, by applying these models to Parkinson’s disease, we propose methods that allow our approach to be adapted to other types of data.
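The core projection mechanics of an illness-death model can be sketched in a few lines. This is a minimal illustration only: the three constant transition intensities below are invented, whereas the thesis estimates time-varying intensities and plugs in national demographic projections.

```python
import numpy as np

# Illness-death model with states 0 = healthy, 1 = diseased, 2 = dead.
# a01 = incidence, a02 = mortality while healthy, a12 = mortality while
# diseased. Forward Euler integration of the state-occupancy probabilities;
# the competing risk between disease onset and death is explicit in the
# two outflows from state 0.
def occupancy(a01, a02, a12, t_max=40.0, dt=0.01):
    p = np.array([1.0, 0.0, 0.0])            # start healthy
    for _ in range(int(t_max / dt)):
        flow_incid = a01 * p[0] * dt          # healthy -> diseased
        flow_d0 = a02 * p[0] * dt             # healthy -> dead
        flow_d1 = a12 * p[1] * dt             # diseased -> dead
        p = p + np.array([-flow_incid - flow_d0,
                          flow_incid - flow_d1,
                          flow_d0 + flow_d1])
    return p

p = occupancy(a01=0.02, a02=0.01, a12=0.05)   # invented yearly rates
prevalence = p[1] / (p[0] + p[1])             # prevalence among survivors
```

An intervention scenario (e.g. a treatment halving incidence from some calendar time onward) would simply modify a01 inside the loop, which is the spirit of the scenario-based projections described above.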
15

On temporal coherency of probabilistic models for audio-to-score alignment / Modèles probabilistes temporellement cohérents pour l'alignement audio-sur-partition

Cuvillier, Philippe 15 December 2016 (has links)
Cette thèse porte sur l'alignement automatique d'un enregistrement audio avec la partition de musique correspondante. Nous adoptons une approche probabiliste et proposons une démarche théorique pour la modélisation algorithmique de ce problème d'alignement automatique. La question est de modéliser l'évolution temporelle des événements par des processus stochastiques. Notre démarche part d'une spécificité de l'alignement musical : une partition attribue à chaque événement une durée nominale, qui est une information a priori sur la durée probable d'occurrence de l'événement. La problématique qui nous occupe est celle de la modélisation probabiliste de cette information de durée. Nous définissons la notion de cohérence temporelle à travers plusieurs critères de cohérence que devrait respecter tout algorithme d'alignement musical. Ensuite, nous menons une démarche axiomatique autour du cas des modèles de semi-Markov cachés. Nous démontrons que ces critères sont respectés lorsque des conditions mathématiques particulières sont vérifiées par les lois a priori du modèle probabiliste de la partition. Ces conditions proviennent de deux domaines mathématiques jusqu'ici étrangers à la question de l'alignement : les processus de Lévy et la totale positivité d'ordre deux. De nouveaux résultats théoriques sont démontrés sur l'interrelation entre ces deux notions. En outre, les bienfaits pratiques de ces résultats théoriques sont démontrés expérimentalement sur des algorithmes d'alignement en temps réel. / This thesis deals with automatic alignment of audio recordings with corresponding music scores. We study algorithmic solutions for this problem in the framework of probabilistic models which represent hidden evolution on the music score as stochastic process. We begin this work by investigating theoretical foundations of the design of such models. 
To do so, we undertake an axiomatic approach which is based on an application peculiarity: music scores provide a nominal duration for each event, which is a hint at the actual, unknown duration. Thus, modeling this specific temporal structure through stochastic processes is our central problem. We define temporal coherency as compliance with such prior information and refine this abstract notion by stating two criteria of coherency. Focusing on hidden semi-Markov models, we demonstrate that coherency is guaranteed by specific mathematical conditions on the probabilistic design, and that fulfilling these prescriptions significantly improves the precision of alignment algorithms. Such conditions are derived by combining two fields of mathematics, Lévy processes and total positivity of order 2. This is why the second part of this work is a theoretical investigation which extends existing results in the related literature.
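A small numerical illustration of the Lévy-process condition mentioned above, under assumptions chosen for simplicity: one duration family whose marginals are increments of a Lévy process is the Poisson family, so concatenating two events with nominal durations l1 and l2 yields a total duration distributed Poisson(l1 + l2), i.e. exactly the convolution of the per-event priors. The numbers are illustrative, not taken from the thesis.

```python
import math
import numpy as np

def poisson_pmf(lam, d_max):
    """Poisson duration prior truncated at d_max frames."""
    return np.array([math.exp(-lam) * lam ** d / math.factorial(d)
                     for d in range(d_max + 1)])

l1, l2, d_max = 3.0, 5.0, 40   # nominal durations of two score events

# Duration of the two events in sequence = convolution of their priors ...
conv = np.convolve(poisson_pmf(l1, d_max), poisson_pmf(l2, d_max))[:d_max + 1]
# ... which, for this Lévy-compatible family, is again Poisson(l1 + l2).
direct = poisson_pmf(l1 + l2, d_max)
```

This additivity is what lets an alignment model stretch or merge events while keeping the duration priors mutually consistent; families without it can make the decoded alignment depend incoherently on how the score is segmented.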
16

CONTINUOUS TIME MULTI-STATE MODELS FOR INTERVAL CENSORED DATA

Wan, Lijie 01 January 2016 (has links)
Continuous-time multi-state models are widely used in modeling longitudinal data of disease processes with multiple transient states, yet the analysis is complex when subjects are observed periodically, resulting in interval-censored data. Most studies to date have modeled the true disease progression as a discrete-time stationary Markov chain, and only a few studies have addressed non-homogeneous multi-state models in the presence of interval-censored data. In this dissertation, several likelihood-based methodologies were proposed to deal with interval-censored data in multi-state models. First, a continuous-time version of a homogeneous Markov multi-state model with backward transitions was proposed to handle uneven follow-up assessments or skipped visits, which result in interval-censored data. Simulations were used to compare the performance of the proposed model with the traditional discrete-time stationary Markov chain under different types of observation schemes. We applied these two methods to the well-known Nun Study, a longitudinal study of 672 participants aged ≥ 75 years at baseline and followed longitudinally with up to ten cognitive assessments per participant. Second, we constructed a non-homogeneous Markov model for this type of panel data. The baseline intensity was assumed to be Weibull distributed to accommodate the non-homogeneous property. The proportional hazards method was used to incorporate risk factors into the transition intensities. Simulation studies showed that the Weibull assumption does not affect the accuracy of the parameter estimates for the risk factors. We applied our model to data from the BRAiNS study, a longitudinal cohort of 531 subjects, each cognitively intact at baseline. Last, we presented a parametric method of fitting semi-Markov models based on Weibull transition intensities to interval-censored cognitive data with death as a competing risk.
We relaxed the Markov assumption and took interval censoring into account by integrating out all possible unobserved transitions. The proposed model also allows for incorporating time-dependent covariates. We provide a goodness-of-fit assessment for the proposed model by means of prevalence counts. To illustrate the methods, we applied our model to the BRAiNS study.
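The interval-censoring mechanics for the time-homogeneous case can be sketched concretely. In a continuous-time Markov model the probability of observing state j at the next visit given state i at the last one is the (i, j) entry of the matrix exponential of Q times the gap length, which integrates out all unobserved transitions in between. The intensity matrix below is invented for illustration; in practice Q would be estimated by maximizing this likelihood.

```python
import numpy as np

def mexp(Qt, n_terms=60):
    """Truncated power series for expm(Qt); adequate for the small,
    well-scaled matrices used in this sketch."""
    P = np.eye(Qt.shape[0])
    term = np.eye(Qt.shape[0])
    for k in range(1, n_terms):
        term = term @ Qt / k
        P = P + term
    return P

# States: 0 = cognitively intact, 1 = impaired, 2 = dead (absorbing).
# A backward transition 1 -> 0 is allowed, as in the model described above.
Q = np.array([[-0.15, 0.12, 0.03],
              [0.05, -0.25, 0.20],
              [0.00, 0.00, 0.00]])

def panel_log_lik(states, times):
    """Log-likelihood of one subject's visit sequence under intensity Q."""
    ll = 0.0
    for i in range(len(states) - 1):
        P = mexp(Q * (times[i + 1] - times[i]))
        ll += np.log(P[states[i], states[i + 1]])
    return ll

# One subject: intact -> impaired -> intact -> dead over uneven visit gaps.
ll = panel_log_lik([0, 1, 0, 2], [0.0, 1.0, 2.5, 4.0])
```

The non-homogeneous and semi-Markov extensions in the dissertation replace this closed-form transition matrix with integrals over Weibull intensities, but the principle of summing over unobserved paths between visits is the same.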
17

Modeling Complex Forest Ecology in a Parallel Computing Infrastructure

Mayes, John 08 1900 (has links)
Effective stewardship of forest ecosystems makes it imperative to measure, monitor, and predict the dynamic changes of forest ecology. Measuring and monitoring provide a picture of a forest's current state and the data necessary to formulate models for prediction. However, societal and natural events alter the course of a forest's development. A simulation environment that takes these events into account will facilitate forest management. In this thesis, we describe an efficient parallel implementation of a land cover use model, Mosaic, and discuss the development efforts to incorporate spatial interaction and succession dynamics into the model. To evaluate the performance of our implementation, an extensive set of simulation experiments was carried out using a dataset representing the H.J. Andrews Forest in the Oregon Cascades. Results indicate that a significant reduction in simulation execution time can be achieved with our parallel model as compared to uni-processor simulations.
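The parallelization strategy described here can be sketched as domain decomposition over a land-cover grid. Everything below is a toy stand-in (Mosaic's actual rules and its process-level parallelism are not reproduced): the grid is split into row blocks updated concurrently, with threads used only to keep the sketch self-contained.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def succession_step(block):
    """Toy per-cell succession rule: stands age by one year and disturbed
    cells (coded 0) regrow to age 1. Invented for illustration."""
    return np.where(block == 0, 1, block + 1)

def parallel_step(grid, n_blocks=4):
    """One simulation step with the domain decomposed into row blocks."""
    blocks = np.array_split(grid, n_blocks, axis=0)
    with ThreadPoolExecutor(max_workers=n_blocks) as pool:
        return np.vstack(list(pool.map(succession_step, blocks)))

grid = np.zeros((8, 8), dtype=int)
grid[2:4, 2:4] = 5                      # an older stand within bare ground
out = parallel_step(grid)
```

Because this toy rule is purely cell-local, block updates are trivially independent; incorporating spatial interaction, as the thesis does, would require exchanging halo rows between neighbouring blocks at each step, which is where most of the parallel-implementation effort lies.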
18

Bayesian Nonparametric Modeling of Latent Structures

Xing, Zhengming January 2014 (has links)
An unprecedented amount of data has been collected in diverse fields such as social networks, infectious disease, and political science in this era of information explosion. High-dimensional, complex, and heterogeneous data pose tremendous challenges to traditional statistical models. Bayesian nonparametric methods address these challenges by providing models whose complexity can grow with the data. In this thesis, we design novel Bayesian nonparametric models for datasets from three different fields: hyperspectral image analysis, infectious disease, and voting behavior.
First, we consider analysis of noisy and incomplete hyperspectral imagery, with the objective of removing the noise and inferring the missing data. The noise statistics may be wavelength-dependent, and the fraction of data missing (at random) may be substantial, including potentially entire bands, offering the potential to significantly reduce the quantity of data that needs to be measured. We achieve this objective by employing a Bayesian dictionary-learning model, considering two distinct means of imposing sparse dictionary usage and drawing the dictionary elements from a Gaussian process prior, imposing structure on the wavelength dependence of the dictionary elements.
Second, a Bayesian statistical model is developed for analysis of the time-evolving properties of infectious disease, with a particular focus on viruses. The model employs a latent semi-Markovian state process, and the state-transition statistics are driven by three terms: (i) a general time-evolving trend of the overall population, (ii) a semi-periodic term that accounts for effects caused by the days of the week, and (iii) a regression term that relates the probability of infection to covariates (here, specifically, to the Google Flu Trends data).
Third, extensive information on 3 million randomly sampled United States citizens is used to construct a statistical model of constituent preferences for each U.S. congressional district. This model is linked to the legislative voting record of the legislator from each district, yielding an integrated model for constituency data, legislative roll-call votes, and the text of the legislation. The model is used to examine the extent to which legislators' voting records are aligned with constituent preferences, and the implications of that alignment (or lack thereof) on subsequent election outcomes. The analysis is based on a Bayesian nonparametric formalism, with fast inference via stochastic variational Bayesian analysis.
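The missing-data idea behind the first contribution can be illustrated with a much simpler non-Bayesian stand-in: when the data have low-dimensional structure, a substantial fraction of randomly missing entries can be reconstructed by alternating least squares over the observed entries only. The dimensions, sampling rate, and rank below are invented; the thesis instead uses Bayesian dictionary learning with Gaussian-process priors.

```python
import numpy as np

rng = np.random.default_rng(0)
U_true = rng.normal(size=(30, 2))       # toy low-rank "hyperspectral" data
V_true = rng.normal(size=(2, 20))
X = U_true @ V_true
mask = rng.random(X.shape) < 0.7        # observe only ~70% of entries

U = rng.normal(size=(30, 2))            # random initialization
V = rng.normal(size=(2, 20))
for _ in range(100):                    # alternating least squares
    for i in range(X.shape[0]):         # update each row of U from its
        m = mask[i]                     # observed columns only
        U[i] = np.linalg.lstsq(V[:, m].T, X[i, m], rcond=None)[0]
    for j in range(X.shape[1]):         # update each column of V likewise
        m = mask[:, j]
        V[:, j] = np.linalg.lstsq(U[m], X[m, j], rcond=None)[0]

# Reconstruction error on the entries that were never observed.
err = np.abs((U @ V - X)[~mask]).max()
```

The Bayesian dictionary-learning formulation adds what this sketch lacks: automatic selection of the effective rank through sparsity priors, and uncertainty quantification for the imputed values.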
19

Bayesian modelling of recurrent pipe failures in urban water systems using non-homogeneous Poisson processes with latent structure

Economou, Theodoros January 2010 (has links)
Recurrent events are very common in a wide range of scientific disciplines. The majority of statistical models developed to characterise recurrent events are derived from either reliability theory or survival analysis. This thesis concentrates on applications that arise from reliability, which in general involve the study of components or devices where the recurring event is failure. Specifically, interest lies in repairable components that experience a number of failures during their lifetime. The goal is to develop statistical models in order to gain a good understanding of the driving force behind the failures. A particular counting process is adopted, the non-homogeneous Poisson process (NHPP), whose rate of occurrence (failure rate) depends on time. The primary application considered in the thesis is the prediction of underground water pipe bursts, although the methods described have more general scope. First, a Bayesian mixed-effects NHPP model is developed and applied to a network of water pipes using MCMC. The model is then extended to a mixture of NHPPs. Further, a special mixture case, the zero-inflated NHPP model, is developed to cope with data involving a large number of pipes that have never failed. The zero-inflated model is applied to the same pipe network. Quite often, data involving recurrent failures over time are aggregated, where for instance the times of failures are unknown and only the total number of failures is available. Aggregated versions of the NHPP model and its zero-inflated version are developed to accommodate aggregated data, and these are applied to the aggregated version of the earlier data set. Complex devices in random environments often exhibit what may be termed state changes in their behaviour. These state changes may be caused by unobserved and possibly non-stationary processes such as severe weather changes.
A hidden semi-Markov NHPP model is formulated, which is an NHPP modulated by an unobserved semi-Markov process. An algorithm is developed to evaluate the likelihood of this model, and a Metropolis-Hastings sampler is constructed for parameter estimation. Simulation studies are performed to test the implementation, and finally an illustrative application of the model is presented. The thesis concludes with a general discussion and a list of possible generalisations and extensions, as well as possible applications other than the ones considered.
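The basic NHPP building block behind these models can be made concrete with a power-law ("Weibull process") intensity. The burst times below are invented for illustration; the thesis's contribution lies in the Bayesian mixed-effects, mixture, zero-inflated, and hidden semi-Markov extensions built on top of this likelihood, not in this classical piece.

```python
import math

# Power-law NHPP: intensity lam(t) = a * b * t**(b - 1), so the expected
# number of failures in (0, T] is a * T**b. b > 1 means an aging component.
def nhpp_loglik(times, T, a, b):
    """Log-likelihood of exact failure times observed over (0, T]."""
    return sum(math.log(a * b * t ** (b - 1)) for t in times) - a * T ** b

def nhpp_mle(times, T):
    """Closed-form maximum likelihood estimates for (a, b)."""
    n = len(times)
    b_hat = n / sum(math.log(T / t) for t in times)
    a_hat = n / T ** b_hat
    return a_hat, b_hat

times = [2.1, 4.7, 6.0, 8.2, 9.5]   # invented burst times for one pipe (years)
T = 10.0
a_hat, b_hat = nhpp_mle(times, T)   # b_hat > 1: bursts cluster late, pipe ages
```

The zero-inflation discussed above modifies this likelihood by mixing it with a point mass of "never-failing" pipes, so that pipes with zero recorded bursts are not forced to drag the estimated intensity toward zero.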
20

RESOURCE MANAGEMENT FRAMEWORK FOR VOLUNTEER CLOUD COMPUTING

Mengistu, Tessema Mindaye 01 December 2018 (has links)
The need for high computing resources is on the rise, despite the exponential increase in the computing capacity of workstations, the proliferation of mobile devices, and the omnipresence of data centers with massive server farms that house tens (if not hundreds) of thousands of powerful servers. This is mainly due to the unprecedented increase in the number of Internet users worldwide and the Internet of Things (IoT). So far, Cloud Computing has been providing the necessary computing infrastructure for applications, including IoT applications. However, the current cloud infrastructures, based on dedicated data centers, are expensive to set up; running the infrastructure requires expertise, a great deal of electrical power for cooling the facilities, and redundant supplies of everything in a data center to provide the desired resilience. Moreover, the current centralized cloud infrastructures will not suffice for network-intensive IoT applications with very fast response requirements. Alternative cloud computing models that depend on the spare resources of volunteer computers are emerging, including volunteer cloud computing, in addition to the conventional data-center-based clouds. These alternative cloud models have one characteristic in common: they do not rely on dedicated data centers to provide cloud services. Volunteer clouds are opportunistic cloud systems that run over the donated spare resources of volunteer computers. On the one hand, volunteer clouds claim numerous outstanding advantages: affordability, on-premise operation, self-provisioning, greener computing (owing to the consolidated use of existing computers), etc.
On the other hand, a full-fledged implementation of volunteer cloud computing raises unique technical and research challenges: management of highly dynamic and heterogeneous compute resources, Quality of Service (QoS) assurance, meeting Service Level Agreements (SLAs), reliability, and security/trust, all of which are made more difficult by the high dynamics and heterogeneity of the non-dedicated cloud hosts. This dissertation investigates the resource management aspect of volunteer cloud computing. Due to the intermittent availability and heterogeneity of the computing resources involved, resource management is one of the most challenging tasks in volunteer cloud computing. The dissertation specifically focuses on the Resource Discovery and VM Placement tasks of resource management. The resource base on which volunteer cloud computing depends is the scavenged, sporadically available, aggregate computing power of individual volunteer computers. Delivering reliable cloud services over these unreliable nodes is a major challenge in volunteer cloud computing. The fault tolerance of the whole system rests on the reliability and availability of the infrastructure base. This dissertation discusses the modelling of fault-tolerant, prediction-based resource discovery in volunteer cloud computing. It presents a multi-state semi-Markov model to predict the future availability and reliability of nodes in volunteer cloud systems. A volunteer node is modelled as a semi-Markov process whose future state depends only on its current state. This matches a key observation made in analyzing traces of personal computers in enterprises: daily patterns of resource availability are comparable to those of the most recent days. The dissertation illustrates, with empirical evidence, how prediction-based resource discovery enables volunteer cloud systems to provide reliable cloud services over unreliable and non-dedicated volunteer hosts.
VM placement algorithms play a crucial role in Cloud Computing in fulfilling its characteristics and achieving its objectives. In general, VM placement is a challenging problem that has been extensively studied in the conventional Cloud Computing context. Due to its divergent characteristics, volunteer cloud computing needs novel solutions to existing Cloud Computing problems, including VM placement. Intermittent availability of nodes, unreliable infrastructure, and resource-constrained nodes are some of the characteristics of volunteer cloud computing that make the VM placement problem more complicated. In this dissertation, we model the VM placement problem as a Bounded 0-1 Multi-Dimensional Knapsack Problem. Since this formulation is NP-hard, the dissertation discusses heuristic-based algorithms that take the typical characteristics of volunteer cloud computing into consideration to solve the VM placement problem. Three algorithms are developed to meet the objectives and constraints specific to volunteer cloud computing. The algorithms were tested on a real volunteer cloud computing test-bed and showed good performance with respect to their optimization objectives. The dissertation also presents the design and implementation of a real volunteer cloud computing system, cuCloud, which bases its resource infrastructure on the donated computing resources of member computers. The need for the development of cuCloud stems from the lack of an experimentation platform, real or simulated, that specifically targets volunteer cloud computing. cuCloud is a genuine volunteer cloud computing system that manifests the concept of "Volunteer Computing as a Service" (VCaaS), with particular significance for edge computing and related applications. In the course of this dissertation, empirical evaluations show that volunteer clouds can be used to execute a range of applications reliably and efficiently.
Moreover, the physical proximity of volunteer nodes to where applications originate, at the edge of the network, helps reduce the round-trip latency of applications. However, the overall computing capability of volunteer clouds will not suffice to handle highly resource-intensive applications on its own. Based on these observations, the dissertation also proposes, as future work, the use of volunteer clouds as a resource fabric in the emerging Edge Computing paradigm.
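The knapsack view of VM placement can be sketched with a simple greedy heuristic. The data and the scoring rule below are invented for illustration (the dissertation's three algorithms are not reproduced here): each volunteer host is a knapsack with a (cpu, mem) capacity vector, each VM request an item with a demand vector and a value, and predicted host availability is used to break ties toward the most reliable hosts.

```python
def place_vms(vms, hosts, availability):
    """Greedy multi-dimensional knapsack heuristic: place high-value VMs
    first, each on the most-available host that still fits its demand.
    vms: {name: (cpu, mem, value)}; hosts: {name: [cpu, mem]} (mutated);
    availability: {name: predicted availability in [0, 1]}."""
    placement = {}
    for v in sorted(vms, key=lambda v: -vms[v][2]):      # by value, desc.
        cpu, mem, _ = vms[v]
        feasible = [h for h in hosts
                    if hosts[h][0] >= cpu and hosts[h][1] >= mem]
        if not feasible:
            continue                                     # request rejected
        best = max(feasible, key=lambda h: availability[h])
        hosts[best][0] -= cpu                            # consume capacity
        hosts[best][1] -= mem
        placement[v] = best
    return placement

vms = {"vm1": (2, 4, 10), "vm2": (1, 2, 8), "vm3": (4, 8, 3)}
hosts = {"hostA": [4, 8], "hostB": [2, 4]}
availability = {"hostA": 0.9, "hostB": 0.6}
plan = place_vms(vms, hosts, availability)
```

In this toy instance the two high-value VMs both land on the more reliable hostA, exhausting it, and the low-value vm3 is rejected for lack of remaining capacity, mirroring how an availability-aware heuristic trades packing density against reliability on non-dedicated hosts.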
