231

Model Based Speech Enhancement and Coding

Zhao, David Yuheng January 2007 (has links)
In mobile speech communication, adverse conditions, such as noisy acoustic environments and unreliable network connections, may severely degrade the intelligibility and naturalness of the received speech and increase the listening effort. This thesis focuses on countermeasures based on statistical signal processing techniques. The main body of the thesis consists of three research articles, targeting two specific problems: speech enhancement for noise reduction and flexible source coder design for unreliable networks. Papers A and B consider speech enhancement for noise reduction. New schemes based on an extension to the auto-regressive (AR) hidden Markov model (HMM) for speech and noise are proposed. Stochastic models for speech and noise gains (the excitation variance of an AR model) are integrated into the HMM framework in order to improve the modeling of energy variation. The extended model is referred to as a stochastic-gain hidden Markov model (SG-HMM). The speech gain describes the energy variations of the speech phones, typically due to differences in pronunciation and/or different vocalizations of individual speakers. The noise gain improves the tracking of the time-varying energy of non-stationary noise, e.g., due to movement of the noise source. In Paper A, it is assumed that prior knowledge of the noise environment is available, so that a pre-trained noise model is used. In Paper B, the noise model is adaptive and its parameters are estimated on-line from the noisy observations using a recursive estimation algorithm. Based on the speech and noise models, a novel Bayesian estimator of the clean speech is developed in Paper A, and an estimator of the noise power spectral density (PSD) in Paper B. It is demonstrated that the proposed schemes achieve more accurate models of speech and noise than traditional techniques, and as part of a speech enhancement system provide improved speech quality, particularly for non-stationary noise sources.
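The general flavour of HMM-based enhancement can be illustrated with a minimal sketch (this is not the thesis implementation: the state count, PSD values, and priors below are made-up placeholders, and the temporal HMM dynamics and gain modeling are omitted). Each hidden state carries a speech PSD; together with a noise PSD, each state implies a Wiener gain, and the clean-speech estimate averages those gains under the state posterior:

```python
import numpy as np

def hmm_enhance(noisy_psd, speech_psds, noise_psd, state_prior):
    """MMSE-style estimate of the clean PSD under a memoryless state mix."""
    total_psd = speech_psds + noise_psd           # (S, K) per-state noisy PSD
    # Gaussian spectral log-likelihood of the noisy frame under each state
    loglik = -np.sum(np.log(total_psd) + noisy_psd / total_psd, axis=1)
    post = state_prior * np.exp(loglik - loglik.max())
    post /= post.sum()                            # state posteriors
    wiener = speech_psds / total_psd              # per-state Wiener gains
    gain = post @ wiener                          # posterior-averaged gain
    return gain * noisy_psd

# Two toy states over two frequency bins (placeholder numbers)
speech_psds = np.array([[4.0, 1.0], [1.0, 4.0]])
noise_psd = np.array([1.0, 1.0])
est = hmm_enhance(np.array([3.0, 2.0]), speech_psds, noise_psd,
                  np.array([0.5, 0.5]))
```

Because every per-state Wiener gain lies below one, the estimate is always an attenuated version of the noisy spectrum.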
In Paper C, a flexible entropy-constrained vector quantization scheme based on a Gaussian mixture model (GMM), lattice quantization, and arithmetic coding is proposed. The method allows the average rate to be changed in real-time, and facilitates adaptation to the currently available bandwidth of the network. A practical solution to the classical issue of indexing and entropy-coding the quantized code vectors is given. The proposed scheme has a computational complexity that is independent of rate, and quadratic with respect to vector dimension. Hence, the scheme can be applied to the quantization of source vectors in a high-dimensional space. The theoretical performance of the scheme is analyzed under a high-rate assumption. It is shown that, at high rate, the scheme approaches the theoretically optimal performance if the mixture components are located far apart. The practical performance of the scheme is confirmed through simulations on both synthetic and speech-derived source vectors.
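The core GMM-plus-lattice step can be sketched as follows (a hypothetical helper, not Paper C's code: the entropy coder, rate control, and the paper's indexing solution are all omitted, and the component parameters are placeholders). The source vector is assigned to its most probable mixture component, whitened by that component's Cholesky factor, and rounded onto a scaled integer lattice:

```python
import numpy as np

def gmm_lattice_quantize(x, means, chols, weights, step=0.5):
    """Quantize x in the whitened space of its best GMM component."""
    scores = []
    for m, L, w in zip(means, chols, weights):
        z = np.linalg.solve(L, x - m)             # whitened residual
        scores.append(np.log(w) - 0.5 * z @ z - np.log(np.diag(L)).sum())
    k = int(np.argmax(scores))                    # most probable component
    z = np.linalg.solve(chols[k], x - means[k])
    idx = np.round(z / step)                      # Z^n lattice index
    x_hat = means[k] + chols[k] @ (idx * step)    # reconstruction
    return k, idx, x_hat

means = [np.zeros(2), np.array([5.0, 5.0])]
chols = [np.eye(2), np.eye(2)]                    # identity covariances
k, idx, x_hat = gmm_lattice_quantize(np.array([4.8, 5.3]), means, chols,
                                     [0.5, 0.5])
```

The `step` parameter trades rate for distortion: a smaller step means a finer lattice, lower distortion, and more bits spent by the (omitted) arithmetic coder.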
232

PELICAN : a PipELIne, including a novel redundancy-eliminating algorithm, to Create and maintain a topicAl family-specific Non-redundant protein database

Andersson, Christoffer January 2005 (has links)
The increasing number of biological databases today requires that users are able to search efficiently among, as well as within, individual databases. One of the most widespread problems is redundancy, i.e. duplicated information in sets of data. This thesis aims at implementing an algorithm that is distinguished from other related attempts by using the genomic positions of sequences, instead of similarity-based sequence comparisons, when making a sequence data set non-redundant. Through an automatic updating procedure, the algorithm drastically improves the ability to update and maintain the topicality of a non-redundant database. The procedure creates a biologically sound non-redundant data set with accuracy comparable to other algorithms that focus on making data sets non-redundant.
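The position-based idea can be reduced to a very small sketch (hypothetical code, not PELICAN's implementation; the record fields and IDs are invented for illustration): two entries are treated as redundant when they map to the same genomic interval, regardless of sequence similarity.

```python
def deduplicate_by_position(records):
    """Keep one record per (chromosome, strand, start, end) interval."""
    seen, unique = set(), []
    for rec in records:
        key = (rec["chrom"], rec["strand"], rec["start"], rec["end"])
        if key not in seen:          # first record at this locus wins
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"id": "P1", "chrom": "chr2", "strand": "+", "start": 100, "end": 400},
    {"id": "P2", "chrom": "chr2", "strand": "+", "start": 100, "end": 400},
    {"id": "P3", "chrom": "chr5", "strand": "-", "start": 900, "end": 1300},
]
nr = deduplicate_by_position(records)   # P2 dropped as redundant
```

Unlike similarity-based clustering, this check is a constant-time hash lookup per record, which is what makes frequent automatic updates cheap.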
233

Convergence in distribution for filtering processes associated to Hidden Markov Models with densities

Kaijser, Thomas January 2013 (has links)
A Hidden Markov Model generates two basic stochastic processes: a Markov chain, which is hidden, and an observation sequence. The filtering process of a Hidden Markov Model is, roughly speaking, the sequence of conditional distributions of the hidden Markov chain that is obtained as new observations are received. It is well known that the filtering process itself is also a Markov chain. A classical, theoretical problem is to find conditions which imply that the distributions of the filtering process converge towards a unique limit measure. This problem goes back to a paper by D. Blackwell for the case when the Markov chain takes its values in a finite set, and to a paper by H. Kunita for the case when the state space of the Markov chain is a compact Hausdorff space. Recently, due to work by F. Kochman, J. Reeds, P. Chigansky and R. van Handel, a necessary and sufficient condition for the convergence of the distributions of the filtering process has been found for the case when the state space is finite. This condition has since been generalised to the case when the state space is denumerable. In this paper we generalise some of the previous results on convergence in distribution to the case when the Markov chain and the observation sequence of a Hidden Markov Model take their values in complete, separable, metric spaces; it has, though, been necessary to assume that both the transition probability function of the Markov chain and the transition probability function that generates the observation sequence have densities.
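For a finite state space, the filtering process described above is just the normalised forward recursion: predict with the transition matrix, correct with the observation likelihood, renormalise. A minimal sketch (the matrices are illustrative, not taken from the paper):

```python
import numpy as np

def filter_step(p, P, g_y):
    """One filtering update: predict with P, correct with likelihood g_y."""
    q = (p @ P) * g_y          # prediction step, then Bayes correction
    return q / q.sum()         # normalise -> new conditional distribution

P = np.array([[0.9, 0.1],      # transition matrix of the hidden chain
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],      # B[i, y]: observation probabilities per state
              [0.3, 0.7]])
p = np.array([0.5, 0.5])       # initial conditional distribution
for y in [0, 0, 1]:            # an observation sequence
    p = filter_step(p, P, B[:, y])
```

The convergence question in the paper asks when the law of this sequence of distributions `p` settles to a unique limit measure, independently of the initial `p`.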
234

Exploring the Behaviour of the Hidden Markov Model on CpG Island Prediction

2013 April 1900 (has links)
DNA can be represented abstractly as a language with only four nucleotides represented by the letters A, C, G, and T, yet the arrangement of those four letters plays a major role in determining the development of an organism. Understanding the significance of certain arrangements of nucleotides can unlock the secrets of how the genome achieves its essential functionality. Regions of DNA particularly enriched with cytosine (C nucleotides) and guanine (G nucleotides), especially the CpG di-nucleotide, are frequently associated with biological function related to gene expression, and concentrations of CpGs referred to as "CpG islands" are known to collocate with regions upstream from gene coding sequences, within the promoter region. The pattern of occurrence of these nucleotides, relative to adenine (A nucleotides) and thymine (T nucleotides), lends itself to analysis by machine-learning techniques such as Hidden Markov Models (HMMs) to predict the areas of greater enrichment. HMMs have been applied to CpG island prediction before, but often without an awareness of how the outcomes are affected by the manner in which the HMM is applied. Two main findings of this study are: 1. The outcome of an HMM is highly sensitive to the setting of the initial probability estimates. 2. Without the appropriate software techniques, HMMs cannot be applied effectively to large data such as whole eukaryotic chromosomes. Both of these factors are rarely considered by users of HMMs, but are critical to a successful application of HMMs to large DNA sequences. In fact, these shortcomings were discovered through a close examination of published results of CpG island prediction using HMMs, and unless addressed, can lead to an incorrect implementation and application of HMM theory. A first-order HMM is developed and its performance compared to two other historical methods, the Takai and Jones method and the UCSC method from the University of California Santa Cruz.
The HMM is then extended to a second order to acknowledge that pairs of nucleotides, rather than single nucleotides alone, define CpG islands, and the second-order HMM is evaluated in comparison to the other methods. The UCSC method is found to be based on properties that are not related to CpG islands, and thus is not a fair comparison to the other methods. Of the other methods, the first-order HMM method and the Takai and Jones method are comparable in the tests conducted, but the second-order HMM method demonstrates superior predictive capabilities. However, these results are valid only when taking into consideration the highly sensitive outcomes based on initial estimates, and finding a suitable set of estimates that provide the most appropriate results. The first-order HMM is applied to the problem of producing synthetic data that simulates the characteristics of a DNA sequence, including the specified presence of CpG islands, based on the model parameters of a trained HMM. HMM analysis is applied to the synthetic data to explore its fidelity in generating data with similar characteristics, as well as to validate the predictive ability of an HMM. Although this test fails to meet expectations, a second test using a second-order HMM to produce simulated DNA data using frequency distributions of CpG island profiles exhibits highly accurate predictions of the pre-specified CpG islands, confirming that when the synthetic data are appropriately structured, an HMM can be an accurate predictive tool. One outcome of this thesis is a set of software components (CpGID 2.0 and TrackMap) capable of efficient and accurate application of an HMM to genomic sequences, together with visualization that allows quantitative CpG island results to be viewed in conjunction with other genomic data.
CpGID 2.0 is an adaptation of a previously published software component that has been extensively revised, and TrackMap is a companion product that works with the results produced by the CpGID 2.0 program. Executing these components allows one to monitor output aspects of the computational model such as number and size of the predicted CpG islands, including their CG content percentage and level of CpG frequency. These outcomes can then be related to the input values used to parameterize the HMM.
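A toy version of the first-order model makes the sensitivity point concrete: two hidden states (island vs. background) emit A/C/G/T, and the most likely state path is decoded with a log-space Viterbi pass. All probabilities below are made-up placeholders; as the thesis stresses, real results depend heavily on such initial estimates.

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely hidden-state path for an observation sequence."""
    T, S = len(obs), len(log_pi)
    delta = np.zeros((T, S))
    psi = np.zeros((T, S), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A    # (from_state, to_state)
        psi[t] = scores.argmax(axis=0)            # best predecessor
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):                 # backtrack
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

nt = {c: i for i, c in enumerate("ACGT")}
seq = [nt[c] for c in "ATATCGCGCGCGATAT"]
log_pi = np.log([0.5, 0.5])
log_A = np.log([[0.9, 0.1], [0.1, 0.9]])          # sticky states
log_B = np.log([[0.35, 0.15, 0.15, 0.35],         # state 0: background
                [0.15, 0.35, 0.35, 0.15]])        # state 1: CG-rich island
states = viterbi(seq, log_pi, log_A, log_B)
```

On this sequence the CG-rich middle is decoded as the island state and the AT-rich flanks as background; nudging `log_pi` or the transition probabilities can move those boundaries, which is exactly the sensitivity the thesis investigates.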
235

Automated Rehabilitation Exercise Motion Tracking

Lin, Jonathan Feng-Shun January 2012 (has links)
Current physiotherapy practice relies on visual observation of the patient for diagnosis and assessment. The assessment process can potentially be automated to improve accuracy and reliability. This thesis proposes a method to recover patient joint angles and automatically extract movement profiles utilizing small and lightweight body-worn sensors. Joint angles are estimated from sensor measurements via the extended Kalman filter (EKF). Constant-acceleration kinematics is employed as the state evolution model. The forward kinematics of the body is utilized as the measurement model. The state and measurement models are used to estimate the position, velocity and acceleration of each joint, updated based on the sensor inputs from inertial measurement units (IMUs). Additional joint limit constraints are imposed to reduce drift, and an automated approach is developed for estimating and adapting the process noise during on-line estimation. Once joint angles are determined, the exercise data is segmented to identify each of the repetitions. This process of identifying when a particular repetition begins and ends allows the physiotherapist to obtain useful metrics such as the number of repetitions performed, or the time required to complete each repetition. A feature-guided hidden Markov model (HMM) based algorithm is developed for performing the segmentation. In a sequence of unlabelled data, motion segment candidates are found by scanning the data for velocity-based features, such as velocity peaks and zero crossings, which match the pre-determined motion templates. These segment potentials are passed into the HMM for template matching. This two-tier approach combines the speed of a velocity feature based approach, which only requires the data to be differentiated, with the accuracy of the more computationally-heavy HMM, allowing for fast and accurate segmentation. 
The proposed algorithms were verified experimentally on a dataset consisting of 20 healthy subjects performing rehabilitation exercises. The movement data was collected by IMUs strapped onto the hip, thigh and calf. The joint angle estimation system achieves an overall average RMS error of 4.27 cm, when compared against motion capture data. The segmentation algorithm reports 78% accuracy when the template training data comes from the same participant, and 74% for a generic template.
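The first tier of the segmentation approach can be sketched as a scan for velocity zero crossings, which mark candidate repetition boundaries to be handed to the HMM template matcher (not shown here). The signal below is synthetic and the threshold is an illustrative placeholder, not a value from the thesis:

```python
import numpy as np

def zero_crossing_candidates(angle, dt=0.01, min_speed=0.02):
    """Indices where the joint velocity changes sign (candidate boundaries)."""
    vel = np.gradient(angle, dt)                  # differentiate once
    sign = np.sign(vel)
    # sign flips, ignoring samples where the speed is negligible (noise)
    idx = np.where((sign[:-1] * sign[1:] < 0) &
                   (np.abs(vel[:-1]) > min_speed))[0]
    return idx

t = np.arange(0, 4, 0.01)
angle = np.sin(np.pi * t + 0.3)                   # two slow "repetitions"
cands = zero_crossing_candidates(angle)           # one per movement extremum
```

This tier only requires differentiating the signal once, which is what makes the combined scheme fast: the expensive HMM matching runs only on these few candidate segments.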
236

Multivariate Longitudinal Data Analysis with Mixed Effects Hidden Markov Models

Raffa, Jesse Daniel January 2012 (has links)
Longitudinal studies, in which data on study subjects are collected over time, increasingly involve multivariate longitudinal responses. Frequently, the heterogeneity observed in a multivariate longitudinal response can be attributed to underlying unobserved disease states, in addition to any between-subject differences. We propose modeling such disease states using a hidden Markov model (HMM) approach, and expand upon previous work, which incorporated random effects into HMMs for the analysis of univariate longitudinal data, to the setting of a multivariate longitudinal response. Multivariate longitudinal data are modeled jointly using separate but correlated random effects between longitudinal responses of mixed data types, in addition to a shared underlying hidden process. We use a computationally efficient Bayesian approach via Markov chain Monte Carlo (MCMC) to fit such models. We apply this methodology to bivariate longitudinal response data from a smoking cessation clinical trial. Under these models, we examine how to incorporate a treatment effect on the disease states, as well as develop methods to classify observations by disease state and to attempt to understand patient dropout. Simulation studies were performed to evaluate the properties of such models and their applications under a variety of realistic situations.
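The model class can be illustrated by simulating from a simplified univariate version (this is a sketch of the generative structure only, not the paper's MCMC fitting procedure; all parameter values are made up): each subject carries a random effect shared across time, while a hidden two-state chain shifts the response mean.

```python
import numpy as np

rng = np.random.default_rng(7)
P = np.array([[0.85, 0.15],          # hidden disease-state transitions
              [0.10, 0.90]])
state_means = np.array([0.0, 3.0])   # mean response in each hidden state

def simulate_subject(T=50, sigma_b=1.0, sigma_e=0.5):
    b = rng.normal(0.0, sigma_b)     # subject-level random effect
    s = rng.integers(0, 2)           # initial hidden state
    ys = []
    for _ in range(T):
        ys.append(state_means[s] + b + rng.normal(0.0, sigma_e))
        s = rng.choice(2, p=P[s])    # advance the hidden chain
    return np.array(ys)

panel = np.stack([simulate_subject() for _ in range(30)])  # 30 subjects
```

The heterogeneity in `panel` thus has two sources, between-subject offsets `b` and within-subject state switches, which is exactly the decomposition the mixed-effects HMM is designed to recover.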
237

An improved fully connected hidden Markov model for rational vaccine design

Zhang, Chenhong 24 February 2005
Large-scale, in vitro vaccine screening is an expensive and slow process, while rational vaccine design is faster and cheaper. As opposed to the empirical ways of designing vaccines in biology laboratories, rational vaccine design models the structure of vaccines with computational approaches. Building an effective predictive computer model requires extensive knowledge of the process or phenomenon being modelled. Given current knowledge about the steps involved in immune system responses, computer models currently focus on one or two of the most important and best-known steps, for example, presentation of antigens by major histocompatibility complex (MHC) molecules. In this step, the MHC molecule selectively binds to some peptides derived from antigens and then presents them to the T-cell. One current focus in rational vaccine design is prediction of peptides that can be bound by MHC.

Theoretically, predicting which peptides bind to a particular MHC molecule involves discovering patterns in known MHC-binding peptides and then searching for peptides which conform to these patterns in new antigenic protein sequences. According to previous work, hidden Markov models (HMMs), a machine-learning technique, are one of the most effective approaches for this task. Unfortunately, for computer models like HMMs, the number of parameters to be determined is larger than the number that can be estimated from the available training data.

Thus, heuristic approaches have to be developed to determine the parameters. In this research, two heuristic approaches are proposed. The first initializes the HMM transition and emission probability matrices by assigning biological meanings to the states. The second approach tailors the structure of a fully connected HMM (fcHMM) to increase specificity. The effectiveness of these two approaches is tested on two human leukocyte antigen (HLA) alleles, HLA-A*0201 and HLA-B*3501.
The results indicate that these approaches can improve predictive accuracy. Further, the HMM implementation incorporating the above heuristics can outperform a popular profile HMM (pHMM) program, HMMER, in terms of predictive accuracy.
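The prediction task itself can be caricatured in a few lines (a drastic simplification of the thesis's fcHMM, kept here only to show what "discovering patterns in known binders" means operationally; the emission matrix below is a random placeholder, not a real HLA-A*0201 model): score a 9-mer peptide against position-specific residue probabilities by summing per-position log-odds against a uniform background.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"            # the 20 standard amino acids
rng = np.random.default_rng(0)
# 9 positions x 20 residues; in practice estimated from known binders
emissions = rng.dirichlet(np.ones(20), size=9)

def log_odds(peptide, emissions, background=1 / 20):
    """Sum of per-position log-odds of the peptide vs. a uniform background."""
    return sum(np.log(emissions[i, AA.index(a)] / background)
               for i, a in enumerate(peptide))

score = log_odds("ALAKAAAAM", emissions)   # hypothetical 9-mer
```

A high score means the peptide matches the learned positional pattern better than chance; a full HMM additionally models transitions between positions, which is where the initialization and structure-tailoring heuristics of the thesis come in.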
238

Multivariate Poisson hidden Markov models for analysis of spatial counts

Karunanayake, Chandima Piyadharshani 08 June 2007
Multivariate count data are found in a variety of fields. For modeling such data, one may consider the multivariate Poisson distribution. Overdispersion is a problem when modeling the data with the multivariate Poisson distribution. Therefore, in this thesis we propose a new multivariate Poisson hidden Markov model, based on the extension of independent multivariate Poisson finite mixture models, as a solution to this problem. This model, which can take into account the spatial nature of weed counts, is applied to weed species counts in an agricultural field. The distribution of counts depends on the underlying sequence of states, which are unobserved or hidden. These hidden states represent the regions where weed counts are relatively homogeneous. Analysis of these data involves the estimation of the number of hidden states, Poisson means and covariances. Parameter estimation is done using a modified EM algorithm for maximum likelihood estimation.

We extend the univariate Markov-dependent Poisson finite mixture model to the multivariate Poisson case (bivariate and trivariate) to model counts of two or three species. Also, we contribute to the hidden Markov model research area by developing Splus/R code for the analysis of the multivariate Poisson hidden Markov model. Splus/R code is written for the estimation of the multivariate Poisson hidden Markov model using the EM algorithm and the forward-backward procedure, and for the bootstrap estimation of standard errors. The estimated parameters are used to calculate the goodness-of-fit measures of the models.

Results suggest that the multivariate Poisson hidden Markov model, with five states and an independent covariance structure, gives a reasonable fit to this dataset. Since this model deals with overdispersion and spatial information, it will help to give insight into weed distribution for herbicide applications.
This model may also lead researchers to other factors, such as soil moisture and fertilizer level, that determine the states governing the distribution of the weed counts.
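The bivariate Poisson building block behind such models is the standard common-shock construction: X = U + W and Y = V + W with independent Poisson U, V, W gives marginally Poisson counts with Cov(X, Y) = E[W]. A short simulation check (the rates are illustrative, not estimates from the weed data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
u = rng.poisson(2.0, n)    # species-1-specific counts
v = rng.poisson(3.0, n)    # species-2-specific counts
w = rng.poisson(1.5, n)    # shared ("common shock") component
x, y = u + w, v + w        # bivariate Poisson pair

cov = np.cov(x, y)[0, 1]   # should be close to E[W] = 1.5
```

In the hidden Markov extension, the three rates become state-dependent, so each hidden region of the field carries its own mean and covariance structure for the species counts.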
239

An HMM-based segmentation method for traffic monitoring movies

Kato, Jien, Watanabe, Toyohide, Joga, Sebastien, Rittscher, Jens, Blake, Andrew, 加藤, ジェーン, 渡邉, 豊英 09 1900 (has links)
No description available.
240

Gimp Anthropology: Non-Apparent Disabilities and Navigating the Social

Orlando, Rebekah 06 September 2012 (has links)
Individuals with non-apparent physical disabilities face social challenges distinct from those encountered by the more visibly disabled. The absence of visible cues indicating physical impairment creates ambiguity in social situations, leaving the sufferer vulnerable to moral judgments and social sanctions when they are unable to embody and perform to cultural norms. This dynamic generates a closeted status that the individual must learn to navigate. Using Eve Sedgwick's "Epistemology of the Closet," this paper deploys auto-ethnography, traditional ethnographic techniques, and literature review to illuminate a third space of functioning between the outwardly 'healthy' and the visibly disabled.
