131

Threat Assessment and Proactive Decision-Making for Crash Avoidance in Autonomous Vehicles

Khattar, Vanshaj 24 May 2021 (has links)
Threat assessment and reliable motion prediction of surrounding vehicles are some of the major challenges encountered in autonomous vehicles' safe decision-making. Predicting a threat in advance can give an autonomous vehicle enough time to avoid crashes or near-crash situations. Most vehicles on roads are human-driven, making it challenging to predict their intentions and movements due to inherent uncertainty in their behaviors. Moreover, different driver behaviors pose different kinds of threats. Various driver behavior predictive models have been proposed in the literature for motion prediction. However, these models cannot be trusted entirely due to the highly uncertain nature of human drivers. This thesis proposes a novel trust-based driver behavior prediction and stochastic reachable-set threat assessment methodology for various dangerous situations on the road. This trust-based methodology allows autonomous vehicles to quantify the degree of trust in their predictions and to generate the probabilistically safest trajectory. This approach can be instrumental in near-crash scenarios where no collision-free trajectory exists. Three different driving behaviors are considered: Normal, Aggressive, and Drowsy. Hidden Markov Models are used for driver behavior prediction. A "trust" value for the detected driver is established by combining four driving features: longitudinal acceleration, lateral acceleration, lane deviation, and velocity. A stochastic reachable-set-based approach is used to model these three driving behaviors. Two measures of threat are proposed: Current Threat and Short-Term Prediction Threat, which quantify the present and future probability of a crash, respectively. The proposed threat assessment methodology resulted in a lower rate of false positives and false negatives. This probabilistic threat assessment methodology is used to address the second challenge in autonomous vehicle safety: crash avoidance decision-making. This thesis presents a fast, proactive decision-making methodology based on Stochastic Model Predictive Control (SMPC). A proactive decision-making approach exploits the surrounding human-driven vehicles' intent to assess the future threat, which helps generate a safe trajectory in advance, unlike reactive decision-making approaches that do not account for the surrounding vehicles' future intent. The crash avoidance problem is formulated as a chance-constrained optimization problem to account for uncertainty in the surrounding vehicles' motion. These chance constraints ensure a minimum probabilistic safety of the autonomous vehicle by keeping the probability of a crash below a predefined risk parameter. This thesis proposes a tractable, deterministic reformulation of these chance constraints using a convex hull formulation for fast real-time implementation. The controller's performance is studied for different risk parameters used in the chance-constraint formulation. Simulation results show that the proposed control methodology can avoid crashes in most hazardous situations on the road. / Master of Science / Unexpected situations frequently arise on the road and lead to crashes. An NHTSA study reported that around 94% of car crashes can be attributed to driver errors and misjudgments, such as drinking and driving, fatigue, or reckless driving. Fully self-driving cars could significantly reduce the frequency of such accidents.
Testing of self-driving cars has recently begun on certain roads, and it is estimated that one in ten cars will be self-driving by the year 2030. This means that these self-driving cars will need to operate in human-driven environments and interact with human-driven vehicles. It is therefore crucial for autonomous vehicles to understand the way humans drive in order to avoid collisions and interact safely with human-driven vehicles. Detecting a threat in advance and generating a safe trajectory for crash avoidance are some of the major challenges faced by autonomous vehicles. We have proposed a reliable decision-making algorithm for crash avoidance in autonomous vehicles. Our framework addresses two core challenges encountered in crash avoidance decision-making: 1. The outside challenge: reliable motion prediction of surrounding vehicles to continuously assess the threat to the autonomous vehicle. 2. The inside challenge: generating a safe trajectory for the autonomous vehicle when a future threat is predicted. The outside challenge is to predict the motion of surrounding vehicles. This requires building a reliable model through which the future evolution of their position states can be predicted. Building these models is not trivial, as the surrounding vehicles' motion depends on human driver intentions and behaviors, which are highly uncertain. Various driver behavior predictive models have been proposed in the literature. However, most do not quantify trust in their predictions. We have proposed a trust-based driver behavior prediction method which combines all sensor measurements to output the probability (trust value) of a certain driver being "drowsy", "aggressive", or "normal". This method allows the autonomous vehicle to choose how much to trust a particular prediction. Once a picture of the surrounding vehicles is painted, we can generate safe trajectories in advance, which is the inside challenge. Most existing approaches use stochastic optimal control methods, which are computationally expensive and impractical for fast real-time decision-making in crash scenarios. We have proposed a fast, proactive decision-making algorithm to generate crash avoidance trajectories based on Stochastic Model Predictive Control (SMPC). We reformulate the SMPC probabilistic constraints as deterministic constraints using a convex hull formulation, allowing for faster real-time implementation. This deterministic SMPC implementation ensures in real time that the vehicle maintains a minimum probabilistic safety.
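
To make the behaviour-prediction step above concrete, the sketch below scores an observed feature sequence (longitudinal acceleration, lateral acceleration, lane deviation, velocity) against one trained HMM per behaviour class and normalises the resulting likelihoods into trust values. It assumes Gaussian emissions and placeholder model parameters; it is an illustrative reading of the approach, not the thesis's implementation.

    import numpy as np
    from scipy.stats import multivariate_normal

    def hmm_log_likelihood(obs, pi, A, means, covs):
        """Scaled forward algorithm: log p(obs | HMM) with Gaussian emissions."""
        pi, A = np.asarray(pi, float), np.asarray(A, float)
        T, n_states = len(obs), len(pi)
        emis = np.array([[multivariate_normal.pdf(o, means[s], covs[s])
                          for s in range(n_states)] for o in obs])      # T x n_states
        alpha = pi * emis[0]
        log_lik = np.log(alpha.sum()); alpha /= alpha.sum()
        for t in range(1, T):
            alpha = (alpha @ A) * emis[t]
            log_lik += np.log(alpha.sum()); alpha /= alpha.sum()
        return log_lik

    def trust_values(obs, behavior_hmms):
        """behavior_hmms: e.g. {'Normal': (pi, A, means, covs), 'Aggressive': ..., 'Drowsy': ...}.
        Returns the normalised likelihood ('trust' value) of each behaviour for the sequence."""
        logs = np.array([hmm_log_likelihood(obs, *params) for params in behavior_hmms.values()])
        w = np.exp(logs - logs.max())
        return dict(zip(behavior_hmms, w / w.sum()))
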
132

Iterative Decoding and Channel Estimation over Hidden Markov Fading Channels

Khan, Anwer Ali 24 May 2000 (has links)
Since the 1950s, hidden Markov models (HMMs) have seen widespread use in electrical engineering. Foremost has been their use in speech processing, pattern recognition, artificial intelligence, queuing theory, and communications theory. However, recent years have witnessed a renaissance in the application of HMMs to the analysis and simulation of digital communication systems. Typical applications have included signal estimation, frequency tracking, equalization, burst error characterization, and transmit power control. Of special significance to this thesis, however, has been the use of HMMs to model fading channels typical of wireless communications. This variegated use of HMMs is fueled by their ability to model time-varying systems with memory, their ability to yield closed-form solutions to otherwise intractable analytic problems, and their ability to facilitate simple hardware- and/or software-based implementations of simulation test-beds. The aim of this thesis is to employ and exploit hidden Markov fading models within an iterative (turbo) decoding framework. Of particular importance is the problem of channel estimation, which is vital for realizing the large coding gains inherent in turbo coded schemes. This thesis shows that a Markov fading channel (MFC) can be conceptualized as a trellis, and that the transmission of a sequence over an MFC can be viewed as a trellis encoding process much like convolutional encoding. The thesis demonstrates that either maximum likelihood sequence estimation (MLSE) algorithms or maximum a posteriori (MAP) algorithms operating over the trellis defined by the MFC can be used for channel estimation. Furthermore, the thesis illustrates sequential and decision-directed techniques for using the aforementioned trellis-based channel estimators en masse with an iterative decoder. / Master of Science
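
The trellis view described above admits a standard Viterbi (MLSE) recursion over the channel states. A minimal sketch, assuming a discrete fading-state model with known per-state gains and Gaussian noise and a known (pilot) transmitted sequence; the parameters are illustrative, not taken from the thesis:

    import numpy as np

    def viterbi_channel_states(rx, tx, A, pi, gains, noise_var):
        """MLSE over the fading-state trellis: most likely channel-state path given
        received samples rx and known transmitted symbols tx (e.g. pilots).
        A: state transition matrix, pi: initial state probabilities,
        gains[s]: channel gain in state s, noise_var[s]: noise variance in state s."""
        A, pi = np.asarray(A, float), np.asarray(pi, float)
        n, T = len(pi), len(rx)
        def log_emis(t, s):
            # rx[t] ~ N(gains[s] * tx[t], noise_var[s]) in state s
            return -0.5 * np.log(2 * np.pi * noise_var[s]) \
                   - (rx[t] - gains[s] * tx[t]) ** 2 / (2 * noise_var[s])
        logA = np.log(A)
        delta = np.log(pi) + np.array([log_emis(0, s) for s in range(n)])
        back = np.zeros((T, n), dtype=int)
        for t in range(1, T):
            cand = delta[:, None] + logA                  # cand[i, j]: arrive in j via i
            back[t] = cand.argmax(axis=0)
            delta = cand.max(axis=0) + np.array([log_emis(t, s) for s in range(n)])
        path = [int(delta.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t][path[-1]]))
        return path[::-1]

A MAP (BCJR-style) channel estimator would replace the max/argmax operations with sums over the same trellis.
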
133

Autoregressive Higher-Order Hidden Markov Models: Exploiting Local Chromosomal Dependencies in the Analysis of Tumor Expression Profiles

Seifert, Michael, Abou-El-Ardat, Khalil, Friedrich, Betty, Klink, Barbara, Deutsch, Andreas 07 May 2015 (has links) (PDF)
Changes in gene expression programs play a central role in cancer. Chromosomal aberrations such as deletions, duplications and translocations of DNA segments can lead to highly significant positive correlations of gene expression levels of neighboring genes. This should be utilized to improve the analysis of tumor expression profiles. Here, we develop a novel model class of autoregressive higher-order Hidden Markov Models (HMMs) that carefully exploit local data-dependent chromosomal dependencies to improve the identification of differentially expressed genes in tumors. Autoregressive higher-order HMMs overcome generally existing limitations of standard first-order HMMs in the modeling of dependencies between genes in close chromosomal proximity by the simultaneous usage of higher-order state transitions and autoregressive emissions as novel model features. We apply autoregressive higher-order HMMs to the analysis of breast cancer and glioma gene expression data and perform in-depth model evaluation studies. We find that autoregressive higher-order HMMs clearly improve the identification of overexpressed genes with underlying gene copy number duplications in breast cancer in comparison to mixture models, standard first- and higher-order HMMs, and other related methods. The performance benefit is attributed to the simultaneous usage of higher-order state transitions in combination with autoregressive emissions. This benefit could not be reached by using each of these two features independently. We also find that autoregressive higher-order HMMs are better able to identify differentially expressed genes in tumors independent of the underlying gene copy number status in comparison to the majority of related methods. This is further supported by the identification of well-known and of previously unreported hotspots of differential expression in glioblastomas, demonstrating the efficacy of autoregressive higher-order HMMs for the analysis of individual tumor expression profiles. Moreover, we reveal interesting novel details of systematic alterations of gene expression levels in known cancer signaling pathways distinguishing oligodendrogliomas, astrocytomas and glioblastomas.
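
A minimal sketch of the two model features highlighted above, under simplifying assumptions: an autoregressive Gaussian emission whose density for the current expression value depends on the previous observation, and second-order state transitions handled by expanding the state space to pairs of states. Parameter containers and values are placeholders rather than the estimates reported in the paper.

    import numpy as np
    from itertools import product
    from scipy.stats import norm

    def ar_emission_logpdf(y_t, y_prev, state, mu, phi, sigma):
        """Autoregressive Gaussian emission: p(y_t | state, y_{t-1})."""
        return norm.logpdf(y_t, loc=mu[state] + phi[state] * y_prev, scale=sigma[state])

    def forward_second_order(y, states, log_pi2, log_A2, mu, phi, sigma):
        """Forward recursion for a second-order HMM on the expanded state space of pairs
        (s_{t-1}, s_t); log_A2[(i, j)][k] = log p(s_t = k | s_{t-2} = i, s_{t-1} = j),
        log_pi2[(i, j)] = log of the initial distribution over the first state pair."""
        pairs = list(product(states, states))
        alpha = {p: log_pi2[p] + ar_emission_logpdf(y[1], y[0], p[1], mu, phi, sigma)
                 for p in pairs}                        # recursion starts at t = 1 given y[0]
        for t in range(2, len(y)):
            alpha = {(j, k): np.logaddexp.reduce(
                        [alpha[(i, j)] + log_A2[(i, j)][k] for i in states])
                     + ar_emission_logpdf(y[t], y[t - 1], k, mu, phi, sigma)
                     for (j, k) in pairs}
        return np.logaddexp.reduce(list(alpha.values()))   # log-likelihood of the profile
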
134

Bayesian Latent Variable Models for Biostatistical Applications

Ridall, Peter Gareth January 2004 (has links)
In this thesis we develop several kinds of latent variable models in order to address three types of biostatistical problem. The three problems are the treatment effect of carcinogens on tumour development, spatial interactions between plant species, and motor unit number estimation (MUNE). The three types of data looked at are: highly heterogeneous longitudinal count data, quadrat counts of species on a rectangular lattice and, lastly, electrophysiological data consisting of measurements of compound muscle action potential (CMAP) area and amplitude. Chapter 1 sets out the structure and the development of the ideas presented in this thesis from the point of view of model structure, model selection, and efficiency of estimation. Chapter 2 is an introduction to the relevant literature that has influenced the development of this thesis. In Chapter 3 we use the EM algorithm for an application of an autoregressive hidden Markov model to describe longitudinal counts. The data are collected from experiments to test the effect of carcinogens on tumour growth in mice. Here we develop forward and backward recursions for calculating the likelihood and for estimation. Chapter 4 is the analysis of a similar kind of data using a more sophisticated model, incorporating random effects, but estimation this time is conducted from the Bayesian perspective. Bayesian model selection is also explored. In Chapter 5 we move to the two-dimensional lattice and construct a model for describing the spatial interaction of tree types. We also compare the merits of directed and undirected graphical models for describing the hidden lattice. Chapter 6 is the application of a Bayesian hierarchical model to MUNE, where the latent variable this time is multivariate Gaussian and dependent on a covariate, the stimulus. Model selection is carried out using the Bayes Information Criterion (BIC). In Chapter 7 we approach the same problem by using the reversible jump methodology (Green, 1995), where this time we use a dual Gaussian-Binary representation of the latent data. We conclude in Chapter 8 with suggestions for the direction of new work. In this thesis, all of the estimation carried out on real data has only been performed once we have been satisfied that estimation is able to retrieve the parameters from simulated data. Keywords: Amyotrophic lateral sclerosis (ALS), carcinogens, hidden Markov models (HMM), latent variable models, longitudinal data analysis, motor neurone disease (MND), partially ordered Markov models (POMMs), the pseudo auto-logistic model, reversible jump, spatial interactions.
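
The forward and backward recursions mentioned for Chapter 3 can be sketched, for a plain (non-autoregressive) Poisson-emission HMM over longitudinal counts, as follows; the autoregressive extension and the EM parameter updates of the thesis are omitted, and all parameters are assumed given.

    import numpy as np
    from scipy.stats import poisson

    def forward_backward(counts, pi, A, rates):
        """Scaled forward-backward recursions for an HMM with Poisson emissions.
        Returns smoothed state probabilities gamma[t, s] and the log-likelihood."""
        pi, A, rates = np.asarray(pi, float), np.asarray(A, float), np.asarray(rates, float)
        T, n = len(counts), len(pi)
        emis = poisson.pmf(np.asarray(counts)[:, None], rates[None, :])     # T x n
        alpha, beta, c = np.zeros((T, n)), np.zeros((T, n)), np.zeros(T)
        alpha[0] = pi * emis[0]
        c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):                                               # forward pass
            alpha[t] = (alpha[t - 1] @ A) * emis[t]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        beta[-1] = 1.0
        for t in range(T - 2, -1, -1):                                      # backward pass
            beta[t] = (A @ (emis[t + 1] * beta[t + 1])) / c[t + 1]
        gamma = alpha * beta                                                # P(state_t | all counts)
        return gamma, np.log(c).sum()
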
135

Algorithmes de restauration bayésienne mono- et multi-objets dans des modèles markoviens / Single and multiple object(s) Bayesian restoration algorithms for Markovian models

Petetin, Yohan 27 November 2013 (has links)
Cette thèse est consacrée au problème d'estimation bayésienne pour le filtrage statistique, dont l'objectif est d'estimer récursivement des états inconnus à partir d'un historique d'observations, dans un modèle stochastique donné. Les modèles stochastiques considérés incluent principalement deux grandes classes de modèles : les modèles de Markov cachés et les modèles de Markov à sauts conditionnellement markoviens. Ici, le problème est abordé sous sa forme générale dans la mesure où nous considérons le problème du filtrage mono- et multi objet(s), ce dernier étant abordé sous l'angle de la théorie des ensembles statistiques finis et du filtre « Probability Hypothesis Density ». Tout d'abord, nous nous intéressons à l'importante classe d'approximations que constituent les algorithmes de Monte Carlo séquentiel, qui incluent les algorithmes d'échantillonnage d'importance séquentiel et de filtrage particulaire auxiliaire. Les boucles de propagation mises en jeux dans ces algorithmes sont étudiées et des algorithmes alternatifs sont proposés. Les algorithmes de filtrage particulaire dits « localement optimaux », c'est à dire les algorithmes d'échantillonnage d'importance avec densité d'importance conditionnelle optimale et de filtrage particulaire auxiliaire pleinement adapté sont comparés statistiquement, en fonction des paramètres du modèle donné. Ensuite, les méthodes de réduction de variance basées sur le théorème de Rao-Blackwell sont exploitées dans le contexte du filtrage mono- et multi-objet(s) Ces méthodes, utilisées principalement en filtrage mono-objet lorsque la dimension du vecteur d'état à estimer est grande, sont dans un premier temps étendues pour les approximations Monte Carlo du filtre Probability Hypothesis Density. D'autre part, des méthodes de réduction de variance alternatives sont proposées : bien que toujours basées sur le théorème de Rao-Blackwell, elles ne se focalisent plus sur le caractère spatial du problème mais plutôt sur son caractère temporel. Enfin, nous abordons l'extension des modèles probabilistes classiquement utilisés. Nous rappelons tout d'abord les modèles de Markov couple et triplet dont l'intérêt est illustré à travers plusieurs exemples pratiques. Ensuite, nous traitons le problème de filtrage multi-objets, dans le contexte des ensembles statistiques finis, pour ces modèles. De plus, les propriétés statistiques plus générales des modèles triplet sont exploitées afin d'obtenir de nouvelles approximations de l'estimateur bayésien optimal (au sens de l'erreur quadratique moyenne) dans les modèles à sauts classiquement utilisés; ces approximations peuvent produire des estimateurs de performances comparables à celles des approximations particulaires, mais ont l'avantage d'être moins coûteuses sur le plan calculatoire / This thesis focuses on the Bayesian estimation problem for statistical filtering which consists in estimating hidden states from an historic of observations over time in a given stochastic model. The considered models include the popular Hidden Markov Chain models and the Jump Markov State Space Systems; in addition, the filtering problem is addressed under a general form, that is to say we consider the mono- and multi-object filtering problems. The latter one is addressed in the Random Finite Sets and Probability Hypothesis Density contexts. First, we focus on the class of particle filtering algorithms, which include essentially the sequential importance sampling and auxiliary particle filter algorithms. 
We explore the recursive loops for computing the filtering probability density function, and alternative particle filtering algorithms are proposed. The "locally optimal" filtering algorithms, i.e. the sequential importance sampling with optimal conditional importance distribution and the fully adapted auxiliary particle filtering algorithms, are statistically compared as a function of the parameters of a given stochastic model. Next, variance reduction methods based on the Rao-Blackwell theorem are exploited in the mono- and multi-object filtering contexts. More precisely, these methods are mainly used in mono-object filtering when the dimension of the hidden state is large; so we first extend them for Monte Carlo approximations of the Probability Hypothesis Density filter. In addition, alternative variance reduction methods are proposed. Although we still use the Rao-Blackwell decomposition, our methods no longer focus on the spatial aspect of the problem but rather on its temporal one. Finally, we discuss the extension of the classical stochastic models. We first recall pairwise and triplet Markov models and we illustrate their interest through several practical examples. We next address the multi-object filtering problem for such models in the random finite sets context. Moreover, the statistical properties of the more general triplet Markov models are used to build new approximations of the optimal Bayesian estimate (in the sense of the mean square error) in Jump Markov State Space Systems. These new approximations can produce estimates with performance comparable to that of particle filters, but with a lower computational cost.
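
As a concrete instance of the sequential Monte Carlo algorithms discussed above, here is a bootstrap (sequential importance resampling) particle filter for a generic state-space model. The transition sampler and likelihood are supplied by the caller, and multinomial resampling at every step is a simplification rather than one of the tuned schemes studied in the thesis.

    import numpy as np

    def bootstrap_particle_filter(ys, n_particles, sample_x0, sample_trans, log_lik, rng=None):
        """Bootstrap particle filter.
        sample_x0(n, rng)      -> (n, d) initial particles
        sample_trans(x, rng)   -> (n, d) particles propagated through the prior kernel
        log_lik(y, x)          -> (n,)   log p(y | x) for each particle
        Returns the filtering mean E[x_t | y_{1:t}] for each observation."""
        rng = rng or np.random.default_rng()
        x = sample_x0(n_particles, rng)
        means = []
        for y in ys:
            x = sample_trans(x, rng)                        # propagate with the prior kernel
            logw = log_lik(y, x)
            w = np.exp(logw - logw.max()); w /= w.sum()     # normalised importance weights
            means.append((w[:, None] * x).sum(axis=0))      # filtering estimate
            idx = rng.choice(n_particles, size=n_particles, p=w)
            x = x[idx]                                      # multinomial resampling
        return np.array(means)
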
137

Estimation du maximum de vraisemblance dans les modèles de Markov partiellement observés avec des applications aux séries temporelles de comptage / Maximum likelihood estimation in partially observed Markov models with applications to time series of counts

Sim, Tepmony 08 March 2016 (has links)
L'estimation du maximum de vraisemblance est une méthode répandue pour l'identification d'un modèle paramétré de série temporelle à partir d'un échantillon d'observations. Dans le cadre de modèles bien spécifiés, il est primordial d'obtenir la consistance de l'estimateur, à savoir sa convergence vers le vrai paramètre lorsque la taille de l'échantillon d'observations tend vers l'infini. Pour beaucoup de modèles de séries temporelles, par exemple les modèles de Markov cachés ou « hidden Markov models » (HMM), la propriété de consistance « forte » peut cependant être difficile à établir. On peut alors s'intéresser à la consistance de l'estimateur du maximum de vraisemblance (EMV) dans un sens faible, c'est-à-dire que lorsque la taille de l'échantillon tend vers l'infini, l'EMV converge vers un ensemble de paramètres qui s'associent tous à la même distribution de probabilité des observations que celle du vrai paramètre. La consistance dans ce sens, qui reste une propriété privilégiée dans beaucoup d'applications de séries temporelles, est dénommée consistance de classe d'équivalence. L'obtention de la consistance de classe d'équivalence exige en général deux étapes importantes : 1) montrer que l'EMV converge vers l'ensemble qui maximise la log-vraisemblance normalisée asymptotique ; et 2) montrer que chaque paramètre dans cet ensemble produit la même distribution du processus d'observation que celle du vrai paramètre. Cette thèse a pour objet principal d'établir la consistance de classe d'équivalence des modèles de Markov partiellement observés, ou « partially observed Markov models » (PMM), comme les HMM et les modèles « observation-driven » (ODM). / Maximum likelihood estimation is a widespread method for identifying a parametrized model of a time series from a sample of observations. Under the framework of well-specified models, it is of prime interest to obtain consistency of the estimator, that is, its convergence to the true parameter as the sample size of the observations goes to infinity. For many time series models, for instance hidden Markov models (HMMs), such a “strong” consistency property can however be difficult to establish. Alternatively, one can show that the maximum likelihood estimator (MLE) is consistent in a weakened sense, that is, as the sample size goes to infinity, the MLE eventually converges to a set of parameters, all of which are associated with the same probability distribution of the observations as the true one. The consistency in this sense, which remains a preferred property in many time series applications, is referred to as equivalence-class consistency. The task of deriving such a property generally involves two important steps: 1) show that the MLE converges to the maximizing set of the asymptotic normalized log-likelihood; and 2) show that any parameter in this maximizing set yields the same distribution of the observation process as the true parameter. In this thesis, our primary aim is to establish the equivalence-class consistency for time series models that belong to the class of partially observed Markov models (PMMs), such as HMMs and observation-driven models (ODMs).
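
Writing $\hat{\theta}_n$ for the MLE based on observations $Y_{1:n}$, the two steps described above can be summarised schematically as follows (a loose restatement, not the precise assumptions or statements of the thesis):

    \ell(\theta) \;:=\; \lim_{n\to\infty} \frac{1}{n}\log p_\theta(Y_{1:n}), \qquad
    \Theta_\star \;:=\; \operatorname*{arg\,max}_{\theta\in\Theta} \ell(\theta),

    \text{Step 1:}\quad d\bigl(\hat{\theta}_n,\,\Theta_\star\bigr) \longrightarrow 0
    \quad\text{as } n\to\infty \text{ (a.s.)},

    \text{Step 2:}\quad \theta\in\Theta_\star \;\Longrightarrow\;
    \mathbb{P}^{Y}_{\theta} = \mathbb{P}^{Y}_{\theta_\star},
    \quad\text{i.e.}\quad \Theta_\star \subseteq
    \bigl\{\theta : \mathbb{P}^{Y}_{\theta} = \mathbb{P}^{Y}_{\theta_\star}\bigr\},

where $\theta_\star$ denotes the true parameter and $\mathbb{P}^{Y}_{\theta}$ the law of the observation process under $\theta$; together the two steps give convergence of $\hat{\theta}_n$ to the equivalence class of $\theta_\star$.
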
138

Statistical approaches for natural language modelling and monotone statistical machine translation

Andrés Ferrer, Jesús 11 February 2010 (has links)
This thesis gathers several contributions to statistical pattern recognition and, more specifically, to several natural language processing tasks. Several well-known statistical techniques are revisited in this thesis, namely: parameter estimation, loss function design, and statistical modelling. These techniques are applied to natural language processing tasks such as document classification, natural language modelling, and statistical machine translation. Regarding parameter estimation, we address the smoothing problem by proposing a new constrained-domain maximum likelihood estimation (CDMLE) technique. The CDMLE technique avoids the need for the smoothing step that causes the loss of the properties of the maximum likelihood estimator. This technique is applied to document classification with the Naive Bayes classifier. Later, the CDMLE technique is extended to leaving-one-out maximum likelihood estimation and applied to language model smoothing. The results obtained on several natural language modelling tasks show an improvement in terms of perplexity. Regarding the loss function, the design of loss functions other than the 0-1 loss is studied carefully. The study focuses on loss functions that, while retaining a decoding complexity similar to that of the 0-1 loss, provide greater flexibility. We analyse and present several loss functions on several machine translation tasks and with several translation models. We also analyse some translation rules that stand out for practical reasons, such as the direct translation rule, and we deepen the understanding of log-linear models, which are in fact particular cases of loss functions. Finally, several monotone translation models based on statistical modelling techniques are proposed. / Andrés Ferrer, J. (2010). Statistical approaches for natural language modelling and monotone statistical machine translation [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/7109
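
For context only, the conventional smoothing step that the CDMLE technique is designed to avoid looks roughly like the add-alpha (Laplace) estimate inside the multinomial Naive Bayes classifier sketched below; this is the standard smoothed baseline, not the constrained-domain estimator proposed in the thesis.

    import numpy as np
    from collections import Counter

    def train_naive_bayes(docs, labels, vocab, alpha=1.0):
        """Multinomial Naive Bayes with add-alpha (Laplace) smoothing.
        docs: list of token lists; labels: list of class labels; vocab: iterable of words."""
        classes = sorted(set(labels))
        log_prior = {c: np.log(labels.count(c) / len(labels)) for c in classes}
        log_cond = {}
        for c in classes:
            counts = Counter(tok for d, l in zip(docs, labels) if l == c for tok in d)
            total = sum(counts[w] for w in vocab) + alpha * len(vocab)
            log_cond[c] = {w: np.log((counts[w] + alpha) / total) for w in vocab}
        return log_prior, log_cond

    def classify(doc, log_prior, log_cond):
        """Pick the class maximising log P(c) + sum over words of log P(w | c)."""
        return max(log_prior, key=lambda c: log_prior[c]
                   + sum(log_cond[c][w] for w in doc if w in log_cond[c]))
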
139

Bird song recognition with hidden Markov models

Van der Merwe, Hugo Jacobus 03 1900 (has links)
Thesis (MScEng (Electrical and Electronic Engineering))--Stellenbosch University, 2008. / Automatic bird song recognition and transcription is a relatively new field. Reliable automatic recognition systems would be of great benefit to further research in ornithology and conservation, as well as commercially in the very large birdwatching subculture. This study investigated the use of Hidden Markov Models and duration modelling for bird call recognition. Through use of more accurate duration modelling, very promising results were achieved with feature vectors consisting of only pitch and volume. An accuracy of 51% was achieved for 47 calls from 39 birds, with the models typically trained from only one or two specimens. The ALS pitch tracking algorithm was adapted to bird song to extract the pitch. Bird song synthesis was employed to subjectively evaluate the features. Compounded Selfloop Duration Modelling was developed as an alternative duration modelling technique. For long durations, this technique can be more computationally efficient than Ferguson stacks. The application of approximate string matching to bird song was also briefly considered.
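
A small numerical illustration of the duration-modelling idea discussed above, under the usual HMM assumptions: a single state with self-loop probability p yields a geometric duration whose mode is one frame, whereas a chain of r such states yields a negative-binomial duration whose mode moves away from one frame. The numbers are arbitrary examples, and this captures only the probabilistic intuition, not the thesis's Compounded Selfloop Duration Modelling implementation.

    import numpy as np
    from scipy.stats import geom, nbinom

    p_stay = 0.8                       # self-loop probability of each sub-state
    durations = np.arange(1, 41)       # duration in frames

    # One state: duration ~ Geometric(1 - p_stay), mode at 1 frame.
    single = geom.pmf(durations, 1 - p_stay)

    # Chain of r identical self-loop states: duration is a sum of r geometrics,
    # i.e. negative binomial shifted by r, so the mode moves away from 1 frame.
    r = 4
    chained = nbinom.pmf(durations - r, r, 1 - p_stay)   # support starts at r frames

    print("mode with 1 state :", durations[single.argmax()], "frames")
    print("mode with %d states:" % r, durations[chained.argmax()], "frames")
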
140

An HMM-based automatic singing transcription platform for a sight-singing tutor

Krige, Willie 03 1900 (has links)
Thesis (MScEng (Electrical and Electronic Engineering))--Stellenbosch University, 2008. / A singing transcription system transforming acoustic input into MIDI note sequences is presented. The transcription system is incorporated into a pronunciation-independent sight-singing tutor system, which provides note-level feedback on the accuracy with which each note in a sequence has been sung. Notes are individually modeled with hidden Markov models (HMMs) using untuned pitch and delta-pitch as feature vectors. A database consisting of annotated passages sung by 26 soprano subjects was compiled for the development of the system, since no existing data was available. Various techniques that allow efficient use of a limited dataset are proposed and evaluated. Several HMM topologies are also compared, in analogy with approaches often used in the field of automatic speech recognition. Context-independent note models are evaluated first, followed by the use of explicit transition models to better identify boundaries between notes. A non-repetitive grammar is used to reduce the number of insertions. Context-dependent note models are then introduced, followed by context-dependent transition models. The aim in introducing context-dependency is to improve transition region modeling, which in turn should increase note transcription accuracy, but also improve the time-alignment of the notes and the transition regions. The final system is found to be able to transcribe sung passages with around 86% accuracy. Finally, a note-level sight-singing tutor system based on the singing transcription system is presented and a number of note sequence scoring approaches are evaluated.
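
The final mapping from decoded pitch to MIDI note numbers can be sketched as follows, assuming an A4 = 440 Hz reference; the HMM decoding and note segmentation themselves are omitted, and the reference frequency and helper names are assumptions rather than details taken from the thesis.

    import math

    def hz_to_midi(freq_hz, a4_hz=440.0):
        """Convert a fundamental frequency in Hz to the nearest MIDI note number
        (MIDI 69 corresponds to A4)."""
        return round(69 + 12 * math.log2(freq_hz / a4_hz))

    def frames_to_notes(frame_midi):
        """Collapse a frame-level MIDI sequence into (note, duration_in_frames) events,
        merging consecutive frames that carry the same note."""
        notes = []
        for m in frame_midi:
            if notes and notes[-1][0] == m:
                notes[-1][1] += 1
            else:
                notes.append([m, 1])
        return [(n, d) for n, d in notes]

    # example: decoded frame pitches (Hz) for a short sung passage
    frames = [262.0, 261.5, 263.0, 294.0, 293.5, 330.0]
    print(frames_to_notes([hz_to_midi(f) for f in frames]))   # C4, D4, E4 -> [(60, 3), (62, 2), (64, 1)]
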
