51

HIGH QUALITY HUMAN 3D BODY MODELING, TRACKING AND APPLICATION

Zhang, Qing 01 January 2015 (has links)
Geometric reconstruction of dynamic objects is a fundamental task in computer vision and graphics, and high-fidelity modeling of the human body is considered a core part of this problem. Traditional human shape and motion capture techniques require an array of surrounding cameras or require subjects to wear reflective markers, which limits working space and portability. In this dissertation, a complete pipeline is designed, from geometric modeling of a detailed 3D full human body and capture of its shape dynamics over time with a flexible setup, to guiding clothes/person re-targeting with such data-driven models. The mechanical movement of the human body can be treated as articulated motion, which makes it straightforward to drive skin animation but makes the reverse problem, recovering parameters from images without manual intervention, difficult. We present a novel parametric model, GMM-BlendSCAPE, which jointly takes a linear skinning model and the prior art of BlendSCAPE (Blend Shape Completion and Animation for PEople) into consideration, and we develop a Gaussian Mixture Model (GMM) to infer both body shape and pose from incomplete observations. We show the increased accuracy of joint and skin-surface estimation using our model compared to skeleton-based motion tracking. To model the detailed body, we start by capturing high-quality partial 3D scans with a single-view commercial depth camera. Based on GMM-BlendSCAPE, we then reconstruct multiple complete static models with large pose differences via our novel non-rigid registration algorithm. With vertex correspondences established, these models can be further converted into a personalized drivable template and used for robust pose tracking in a similar GMM framework. Moreover, we design a general-purpose real-time non-rigid deformation algorithm to accelerate this registration. Finally, we demonstrate a novel virtual clothes try-on application based on our personalized model that utilizes both image and depth cues to synthesize and re-target clothes for single-view videos of different people.
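The GMM machinery referred to above can be illustrated with a minimal sketch of one EM-style step in which template vertices act as centres of an isotropic Gaussian mixture and scan points are the observations. This is a generic illustration of soft scan-to-model correspondences, not the actual GMM-BlendSCAPE formulation; the bandwidth sigma and outlier weight are illustrative assumptions.

```python
import numpy as np

def gmm_soft_correspondences(scan_pts, model_verts, sigma=0.01, outlier_w=1e-3):
    """One E-step of a GMM-based registration: soft-assign scan points to model vertices.

    scan_pts: (N, 3) points from a partial depth scan.
    model_verts: (M, 3) vertices of the body template, treated as Gaussian centres.
    Returns an (N, M) responsibility matrix; mass not explained by any vertex is
    absorbed by a uniform outlier term, which gives robustness to missing data."""
    d2 = ((scan_pts[:, None, :] - model_verts[None, :, :]) ** 2).sum(axis=-1)
    lik = np.exp(-d2 / (2.0 * sigma ** 2)) / model_verts.shape[0]
    return lik / (lik.sum(axis=1, keepdims=True) + outlier_w)

def weighted_vertex_targets(scan_pts, resp):
    """M-step flavour: weighted scan centroids each template vertex should move toward."""
    w = resp.sum(axis=0) + 1e-12
    return resp.T @ scan_pts / w[:, None]
```

In the dissertation's setting the M-step would update the GMM-BlendSCAPE shape and pose parameters rather than free vertices; the sketch only conveys the soft-correspondence idea that replaces hard nearest-neighbour matching.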
52

  • Automatic speaker recognition using closed-set identification methods

Κεραμεύς, Ηλίας 03 August 2009 (has links)
The goal of an automatic speaker recognition system is inextricably tied to extracting, characterizing, and recognizing information about a speaker's identity. Speaker recognition refers either to speaker identification or to speaker verification. Depending on the form of the decision it returns, an identification system can be characterized as open-set or closed-set. If, given an unknown voice sample, a system responds with a deterministic decision as to whether the sample belongs to a specific speaker or to an unknown speaker, it is characterized as an open-set identification system. If, on the other hand, the system returns the most likely speaker among those already enrolled in the database from whom the voice sample originates, it is characterized as a closed-set system. Closed-set identification can further be characterized as text-dependent or text-independent, depending on whether the system knows the spoken phrase or is able to recognize the speaker from any phrase he or she may utter. This thesis examines and implements automatic speaker recognition algorithms based on closed-set, text-independent identification systems. Specifically, algorithms based on vector quantization, stochastic models, and neural networks are implemented.
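For concreteness, a minimal sketch of closed-set, text-independent identification with per-speaker Gaussian mixture models (the stochastic-model variant; the thesis also implements vector quantization and neural network approaches) is given below. The feature matrices are assumed to be frame-level acoustic features such as MFCCs computed elsewhere, and the mixture size is illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_speaker_models(features_by_speaker, n_components=16):
    """Fit one GMM per enrolled speaker on that speaker's feature frames,
    each an array of shape (n_frames, n_dims)."""
    models = {}
    for speaker, feats in features_by_speaker.items():
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                              max_iter=200, random_state=0)
        models[speaker] = gmm.fit(feats)
    return models

def identify(models, test_feats):
    """Closed-set decision: return the enrolled speaker whose GMM assigns the
    highest average log-likelihood to the test utterance's frames."""
    scores = {spk: gmm.score(test_feats) for spk, gmm in models.items()}
    return max(scores, key=scores.get)
```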
53

Construction of amino acid rate matrices and extensions of the Barry and Hartigan model for phylogenetic inference

Zou, Liwen 09 August 2011 (has links)
This thesis considers two distinct topics in phylogenetic analysis. The first is the construction of empirical rate matrices for amino acid models. The second topic, which constitutes the majority of the thesis, involves analysis of and extensions to the BH model of Barry and Hartigan (1987). A number of rate matrices are used for phylogenetic analysis, including PAM (Dayhoff et al. 1979), JTT (Jones et al. 1992), and WAG (Whelan and Goldman 2001), and the construction of each presents difficulties. To avoid adjusting for multiple substitutions, the PAM and JTT matrices were constructed using only a subset of the data consisting of closely related species. The WAG model used an incomplete maximum likelihood estimation to reduce computational cost. We develop a modification of the pairwise methods first described by Arvestad and Bruno that better adjusts for some of the sparseness difficulties that arise with amino acid data. The BH model is very flexible, allowing separate discrete-time Markov processes along different edges. We show, however, that an identifiability problem arises for the BH model, making it difficult to estimate character-state frequencies at internal nodes. To obtain such frequencies and edge lengths for BH model fits, we define a nonstationary GTR (NSGTR) model along an edge and find the NSGTR model that best approximates the fitted BH model. The NSGTR model is slightly more restrictive but allows for estimation of internal-node frequencies and interpretable edge lengths. While adjusting for rates-across-sites variation is now common practice in phylogenetic analyses, it is widely recognized that, in reality, evolutionary processes can change over both sites and lineages. As an adjustment for this, we introduce a BH mixture model that not only allows completely different models along the edges of a topology but also allows for different site classes whose evolutionary dynamics can take any form.
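As a rough illustration of how an empirical rate matrix can be obtained from substitution counts between closely related sequences, the sketch below row-normalises a count matrix and takes a matrix logarithm. This is a generic construction under simplifying assumptions, not the Arvestad and Bruno pairwise method or the modification developed in the thesis.

```python
import numpy as np
from scipy.linalg import logm

def rate_matrix_from_counts(counts):
    """Generic sketch: derive a rate matrix Q from a matrix of substitution counts
    observed between closely related sequences (K = 20 for amino acids)."""
    C = np.asarray(counts, dtype=float)
    P = C / C.sum(axis=1, keepdims=True)       # empirical transition probabilities
    Q = logm(P).real                           # matrix logarithm of P
    off = ~np.eye(Q.shape[0], dtype=bool)
    Q[off] = np.clip(Q[off], 0.0, None)        # clip any small negative off-diagonal rates
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))        # each row of a rate matrix sums to zero
    pi = C.sum(axis=1) / C.sum()               # empirical amino acid frequencies
    Q /= -(pi * np.diag(Q)).sum()              # scale to one expected substitution per unit time
    return Q, pi
```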
54

Extreme Value Mixture Modelling with Simulation Study and Applications in Finance and Insurance

Hu, Yang January 2013 (has links)
Extreme value theory has been used to develop models for describing the distribution of rare events. Models based on extreme value theory can be used to asymptotically approximate the behaviour of the tail(s) of a distribution function. An important challenge in the application of such extreme value models is the choice of a threshold beyond which the asymptotically justified extreme value models can provide good extrapolation. One approach for determining the threshold is to fit all of the available data with an extreme value mixture model. This thesis reviews most of the existing extreme value mixture models in the literature and implements them in a package for the statistical programming language R, making them more readily usable by practitioners, as they are not commonly available in any software. There are many different forms of extreme value mixture models in the literature (e.g. parametric, semi-parametric and non-parametric), which provide an automated approach for estimating the threshold and for taking into account the uncertainty associated with threshold selection. However, it is not clear how the proportion above the threshold, or tail fraction, should be treated, as there is no consistency in the existing model derivations. This thesis develops some new models by adapting existing ones in the literature and placing them all within a more generalised framework that accounts for how the tail fraction is defined in the model. Various new models are proposed by extending some of the existing parametric mixture models to have a continuous density at the threshold, which has the advantage of using fewer model parameters and being more physically plausible. The generalised framework within which all the mixture models are placed can be used to demonstrate the importance of the specification of the tail fraction. An R package called evmix has been created to enable these mixture models to be more easily applied and further developed. For every mixture model, the density, distribution, quantile, random number generation, likelihood and fitting functions are provided (Bayesian inference via MCMC is also implemented for the non-parametric extreme value mixture models). A simulation study investigates the performance of the various extreme value mixture models under different population distributions with a representative variety of lower and upper tail behaviours. The results show that the kernel density estimator based non-parametric mixture model is able to provide good tail estimation in general, whilst the parametric and semi-parametric mixture models can give a reasonable fit if the distribution below the threshold is correctly specified. Somewhat surprisingly, it is found that including a constraint of continuity at the threshold does not substantially improve the model fit in the upper tail. The hybrid Pareto model performs poorly, as it does not include the tail fraction term. The relevant mixture models are applied to insurance and financial applications, which highlights the practical usefulness of these models.
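A minimal sketch of one simple parametric member of this family, a normal bulk below the threshold with a generalised Pareto tail above it, is given below to make the role of the tail fraction concrete. The function name and parameterisation are illustrative and are not the evmix API.

```python
import numpy as np
from scipy import stats

def normal_gpd_mixture_pdf(x, mu, sigma, u, xi, beta, phi=None):
    """Density of a simple extreme value mixture model: a Normal(mu, sigma) bulk
    truncated below the threshold u and a generalised Pareto tail above it.

    If phi is None, the 'bulk model' tail fraction 1 - F_normal(u) is used;
    otherwise phi is treated as a free parameter (parameterised tail fraction)."""
    x = np.asarray(x, dtype=float)
    F_u = stats.norm.cdf(u, loc=mu, scale=sigma)      # bulk mass below the threshold
    if phi is None:
        phi = 1.0 - F_u
    bulk = (1.0 - phi) * stats.norm.pdf(x, loc=mu, scale=sigma) / F_u
    tail = phi * stats.genpareto.pdf(x, xi, loc=u, scale=beta)
    return np.where(x <= u, bulk, tail)
```

Switching between the two definitions of phi in a single place like this is what allows the effect of the tail fraction specification to be compared directly.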
55

Contaminated Chi-square Modeling and Its Application in Microarray Data Analysis

Zhou, Feng 01 January 2014 (has links)
Mixture modeling has numerous applications; one of particular interest is microarray data analysis. My dissertation research is focused on Contaminated Chi-Square (CCS) modeling and its application to microarrays. A moment-based method and two likelihood-based methods, the Modified Likelihood Ratio Test (MLRT) and the Expectation-Maximization (EM) test, are developed for testing the omnibus null hypothesis of no contamination of a central chi-square distribution by a non-central chi-square distribution. When the omnibus null hypothesis is rejected, we further develop the moment-based test and the EM test for testing for an extra component in the contaminated chi-square model (CCS+EC). The moment-based approach is simple, and no re-sampling or random field theory is needed to obtain critical values. When the statistical models are complicated, such as large mixtures of multidimensional distributions, the MLRT and EM tests may have better power than moment-based approaches, and the MLRT and EM tests developed herein enjoy an elegant asymptotic theory.
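A minimal sketch of the contaminated chi-square density and its first moment, which a moment-based test can compare against the sample mean, is given below; the parameter names are illustrative.

```python
import numpy as np
from scipy import stats

def ccs_pdf(x, p, df, ncp):
    """Contaminated chi-square density: with probability 1 - p the statistic follows a
    central chi-square(df), with probability p a non-central chi-square(df, ncp)."""
    return (1.0 - p) * stats.chi2.pdf(x, df) + p * stats.ncx2.pdf(x, df, ncp)

def ccs_mean(p, df, ncp):
    """First moment of the mixture: E[X] = (1 - p) * df + p * (df + ncp) = df + p * ncp."""
    return df + p * ncp
```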
56

INFORMATION THEORETIC CRITERIA FOR IMAGE QUALITY ASSESSMENT BASED ON NATURAL SCENE STATISTICS

Zhang, Di January 2006 (has links)
Measurement of visual quality is crucial for various image and video processing applications. The goal of objective image quality assessment is to introduce a computational quality metric that can predict image or video quality. Many methods have been proposed in the past decades. Traditionally, measurements convert the spatial data into some other feature domain, such as the Fourier domain, and compute the similarity, such as the mean square distance or Minkowski distance, between the test data and the reference or perfect data; however, only limited success has been achieved. None of the complicated metrics shows any great advantage over other existing metrics. The common idea shared among many proposed objective quality metrics is that human visual error sensitivities vary across spatial and temporal frequency and directional channels. In this thesis, image quality assessment is approached by proposing a novel framework to compute the information lost in each channel, rather than the similarities used in previous methods. Based on natural scene statistics and several image models, an information theoretic framework is designed to compute the perceptual information contained in images and to evaluate image quality in the form of entropy. The thesis is organized as follows. Chapter I gives a general introduction to previous work in this research area and a brief description of the human visual system. In Chapter II, statistical models for natural scenes are reviewed. Chapter III proposes the core ideas for computing the perceptual information contained in images. In Chapter IV, information theoretic criteria for image quality assessment are defined. Chapter V presents the simulation results in detail. In the last chapter, future directions and improvements of this research are discussed.
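As a loose, self-contained illustration of measuring information in a band-pass channel rather than measuring similarity, the sketch below computes the entropy of difference-of-Gaussians coefficient histograms for a reference and a distorted image; it is a toy under stated assumptions, not the metric developed in the thesis.

```python
import numpy as np
from scipy import ndimage

def bandpass_entropy(img, sigma_fine=1.0, sigma_coarse=2.0, bins=256):
    """Entropy (in bits) of a histogram of difference-of-Gaussians (band-pass)
    coefficients, used here as a crude proxy for channel information content."""
    img = np.asarray(img, dtype=float)
    coeffs = ndimage.gaussian_filter(img, sigma_fine) - ndimage.gaussian_filter(img, sigma_coarse)
    hist, _ = np.histogram(coeffs, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def information_loss(reference, distorted):
    """Difference in band-pass entropy, a rough indication of information lost."""
    return bandpass_entropy(reference) - bandpass_entropy(distorted)
```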
57

Statistical methods for species richness estimation using count data from multiple sampling units

Argyle, Angus Gordon 23 April 2012 (has links)
The planet is experiencing a dramatic loss of species. The majority of species are unknown to science, and it is usually infeasible to conduct a census of a region to acquire a complete inventory of all life forms. Therefore, it is important to estimate and conduct statistical inference on the total number of species in a region based on samples obtained from field observations. Such estimates may suggest the number of species new to science and at potential risk of extinction. In this thesis, we develop novel methodology to conduct statistical inference, based on abundance-based data collected from multiple sampling locations, on the number of species within a taxonomic group residing in a region. The primary contribution of this work is the formulation of novel statistical methodology for analysis in this setting, where abundances of species are recorded at multiple sampling units across a region. This particular area has received relatively little attention in the literature. In the first chapter, the problem of estimating the number of species is formulated in a broad context, one that occurs in several seemingly unrelated fields of study. Estimators are commonly developed from statistical sampling models. Depending on the organisms or objects under study, different sampling techniques are used, and consequently, a variety of statistical models have been developed for this problem. A review of existing estimation methods, categorized by the associated sampling model, is presented in the second chapter. The third chapter develops a new negative binomial mixture model. The negative binomial model is employed to account for the common tendency of individuals of a particular species to occur in clusters. An exponential mixing distribution permits inference on the number of species that exist in the region, but were in fact absent from the sampling units. Adopting a classical approach for statistical inference, we develop the maximum likelihood estimator, and a corresponding profile-log-likelihood interval estimate of species richness. In addition, a Gaussian-based confidence interval based on large-sample theory is presented. The fourth chapter further extends the hierarchical model developed in Chapter 3 into a Bayesian framework. The motivation for the Bayesian paradigm is explained, and a hierarchical model based on random effects and discrete latent variables is presented. Computing the posterior distribution in this case is not straight-forward. A data augmentation technique that indirectly places priors on species richness is employed to compute the model using a Metropolis-Hastings algorithm. The fifth chapter examines the performance of our new methodology. Simulation studies are used to examine the mean-squared error of our proposed estimators. Comparisons to several commonly-used non-parametric estimators are made. Several conclusions emerge, and settings where our approaches can yield superior performance are clarified. In the sixth chapter, we present a case study. The methodology is applied to a real data set of oribatid mites (a taxonomic order of micro-arthropods) collected from multiple sites in a tropical rainforest in Panama. We adjust our statistical sampling models to account for the varying masses of material sampled from the sites. The resulting estimates of species richness for the oribatid mites are useful, and contribute to a wider investigation, currently underway, examining the species richness of all arthropods in the rainforest. 
Our approaches are the only existing methods that can make full use of the abundance-based data from multiple sampling units located in a single region. The seventh and final chapter concludes the thesis with a discussion of key considerations related to implementation and modeling assumptions, and describes potential avenues for further investigation.
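As a point of reference for the commonly-used non-parametric estimators that the simulation studies compare against, a sketch of the bias-corrected Chao1 estimator computed from pooled abundance counts is shown below; it is included only as a baseline illustration and is distinct from the negative binomial mixture methodology developed in the thesis.

```python
import numpy as np

def chao1(counts):
    """Bias-corrected Chao1 estimate of species richness.

    counts: per-species abundances pooled over the sampling units (zeros allowed)."""
    counts = np.asarray(counts)
    s_obs = int((counts > 0).sum())    # number of species observed
    f1 = int((counts == 1).sum())      # singletons
    f2 = int((counts == 2).sum())      # doubletons
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))
```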
58

New tools for unsupervised learning

Xiao, Ying 12 January 2015 (has links)
In an unsupervised learning problem, one is given an unlabelled dataset and hopes to find some hidden structure; the prototypical example is clustering similar data. Such problems often arise in machine learning and statistics, but also in signal processing, theoretical computer science, and any number of quantitative scientific fields. The distinguishing feature of unsupervised learning is that there are no privileged variables or labels which are particularly informative, and thus the greatest challenge is often to differentiate between what is relevant or irrelevant in any particular dataset or problem. In the course of this thesis, we study a number of problems which span the breadth of unsupervised learning. We make progress in Gaussian mixtures, independent component analysis (where we solve the open problem of underdetermined ICA), and we formulate and solve a feature selection/dimension reduction model. Throughout, our goal is to give finite sample complexity bounds for our algorithms -- these are essentially the strongest type of quantitative bound that one can prove for such algorithms. Some of our algorithmic techniques turn out to be very efficient in practice as well. Our major technical tool is tensor spectral decomposition: tensors are generalisations of matrices, and often allow access to the "fine structure" of data. Thus, they are often the right tools for unravelling the hidden structure in an unsupervised learning setting. However, naive generalisations of matrix algorithms to tensors run into NP-hardness results almost immediately, and thus to solve our problems, we are obliged to develop two new tensor decompositions (with robust analyses) from scratch. Both of these decompositions are polynomial time, and can be viewed as efficient generalisations of PCA extended to tensors.
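A minimal sketch of the basic tensor spectral machinery, orthogonal decomposition of a symmetric third-order tensor by power iteration with deflation, is given below; the robust decompositions developed in the thesis go well beyond this textbook version.

```python
import numpy as np

def tensor_power_iteration(T, n_components, n_iter=200, seed=0):
    """Approximate an orthogonal decomposition T ~ sum_i lam_i * v_i (x) v_i (x) v_i
    of a symmetric third-order tensor using power iteration with deflation."""
    rng = np.random.default_rng(seed)
    T = T.copy()
    d = T.shape[0]
    lams, vecs = [], []
    for _ in range(n_components):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)
        for _ in range(n_iter):
            u = np.einsum('abc,b,c->a', T, u, u)               # contraction T(I, u, u)
            u /= np.linalg.norm(u)
        lam = float(np.einsum('abc,a,b,c->', T, u, u, u))      # eigenvalue T(u, u, u)
        lams.append(lam)
        vecs.append(u.copy())
        T = T - lam * np.einsum('a,b,c->abc', u, u, u)         # deflate the recovered component
    return np.array(lams), np.array(vecs)
```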
59

Cough Detection and Forecasting for Radiation Treatment of Lung Cancer

Qiu, Zigang Jimmy 06 April 2010 (has links)
In radiation therapy, a treatment plan is designed to make the delivery of radiation to a target more accurate, more effective, and less damaging to surrounding healthy tissues. In lung sites, the tumor is affected by the patient's respiratory motion. Despite tumor motion, current practice still uses a static delivery plan. Unexpected changes due to coughs and sneezes are not taken into account, and as a result the tumor is not treated accurately and healthy tissues are damaged. In this thesis we detail a framework for using an accelerometer device to detect and forecast coughs. The accelerometer measurements are modeled as an ARMA process to make forecasts. We draw from studies of cough physiology and use the amplitudes and durations of the forecasted breathing cycles as features to estimate parameters of Gaussian Mixture Models for the cough and normal-breathing classes. The system was tested on 10 volunteers, where each data set consisted of one 3-5 minute accelerometer measurement to train the system and two 1-3 minute accelerometer measurements for testing.
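A minimal sketch of the two stages described above, forecasting the accelerometer trace with an ARMA model and scoring forecasted breathing-cycle features against per-class Gaussian Mixture Models, is given below; the model order and the (amplitude, duration) feature extraction are illustrative assumptions, not the settings used in the thesis.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.mixture import GaussianMixture

def forecast_trace(signal, steps=50, order=(4, 0, 2)):
    """Fit an ARMA(p, q) model (ARIMA with d = 0) to the accelerometer trace and
    forecast the next `steps` samples."""
    results = ARIMA(signal, order=order).fit()
    return results.forecast(steps=steps)

def train_cycle_classifiers(normal_feats, cough_feats, n_components=2):
    """Fit one GMM per class on (amplitude, duration) features of breathing cycles."""
    normal = GaussianMixture(n_components=n_components, random_state=0).fit(normal_feats)
    cough = GaussianMixture(n_components=n_components, random_state=0).fit(cough_feats)
    return normal, cough

def is_cough(normal, cough, cycle_feats):
    """Flag forecasted cycles as coughs if the cough GMM gives them a higher likelihood."""
    return cough.score(cycle_feats) > normal.score(cycle_feats)
```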
