1

Bayesian Phylogenetic Inference : Estimating Diversification Rates from Reconstructed Phylogenies

Höhna, Sebastian, January 2013
Phylogenetics is the study of the evolutionary relationships between species. Inference of phylogeny relies heavily on statistical models that have, over recent years, been extended and refined into very complex hierarchical models. Paper I introduces probabilistic graphical models to statistical phylogenetics and elaborates on the potential advantages a unified graphical model representation could have for the community, e.g., by facilitating communication and improving the reproducibility of statistical analyses of phylogeny and evolution. Once the phylogeny is reconstructed it is possible to infer the rates of diversification (speciation and extinction). In this thesis I extend the birth-death process model so that it can be applied to incompletely sampled phylogenies, that is, phylogenies containing only a subsample of the presently living species of a group. Previous work only considered the case where every species has the same probability of being included; here I examine two alternative sampling schemes: diversified taxon sampling and cluster sampling. Paper II introduces these sampling schemes under a constant-rate birth-death process and gives the probability density of reconstructed phylogenies. These models are extended in Paper IV to time-dependent diversification rates, again under different sampling schemes, and applied to empirical phylogenies. Paper III focuses on fast and unbiased simulation of reconstructed phylogenies; the efficiency is achieved by deriving the analytical distribution and density function of the speciation times in the reconstructed phylogeny.

At the time of the doctoral defense, the following papers were unpublished: Paper I (manuscript) and Paper IV (accepted).
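As a rough illustration of the constant-rate birth-death model with uniform (incomplete) taxon sampling discussed above, the sketch below forward-simulates the number of lineages and then retains each surviving species with probability rho. It is a naive Gillespie-style simulation, not the efficient analytical approach of Paper III, and the parameter values lam, mu and rho are arbitrary assumptions.

```python
import random

def simulate_birth_death(lam=1.0, mu=0.5, t_max=5.0, rho=0.2, seed=1):
    """Naive forward simulation of a constant-rate birth-death process,
    followed by uniform taxon sampling with probability rho.
    Returns (species alive at t_max, species retained after sampling)."""
    random.seed(seed)
    t, n = 0.0, 1                                 # start from a single lineage
    while n > 0:
        t += random.expovariate(n * (lam + mu))   # waiting time to next event
        if t >= t_max:
            break
        if random.random() < lam / (lam + mu):
            n += 1                                # speciation
        else:
            n -= 1                                # extinction
    sampled = sum(random.random() < rho for _ in range(n))  # uniform sampling
    return n, sampled

if __name__ == "__main__":
    print(simulate_birth_death())
```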
2

A Hierarchical Bayesian Model for the Unmixing Analysis of Compositional Data subject to Unit-sum Constraints

Yu, Shiyong, 15 May 2015
Modeling of compositional data is emerging as an active area in statistics. Compositional data are assumed to represent the convex linear mixing of a definite number of independent sources, usually referred to as end members. A generic problem in practice is to separate the end members and quantify their fractions from compositional data subject to non-negativity and unit-sum constraints. A number of methods, essentially related to polytope expansion, have been proposed, but these deterministic methods have some potential problems. In this study, a hierarchical Bayesian model was formulated, and the algorithms were coded in MATLAB®. Test runs on both a synthetic and a real-world dataset yield scientifically sound and mathematically optimal outputs broadly consistent with other, non-Bayesian methods. The sensitivity of the model to the choice of priors and to the structure of the error covariance matrix is also discussed.
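As a hedged sketch of the kind of model described above, the following function evaluates a log-posterior for linear unmixing of compositional data, with a Gaussian error term and symmetric Dirichlet priors enforcing the non-negativity and unit-sum constraints. The error scale sigma and the concentration parameters alpha_f and alpha_e are illustrative assumptions, and the hierarchy is far simpler than the one formulated in the thesis (which was implemented in MATLAB).

```python
import numpy as np
from scipy.stats import dirichlet, norm

def log_posterior(F, E, X, sigma=0.05, alpha_f=1.0, alpha_e=1.0):
    """Log-posterior for X ~ F @ E + Gaussian noise.

    X : (n, d) observed compositions, rows sum to 1
    F : (n, k) mixing fractions, rows sum to 1 (unit-sum constraint)
    E : (k, d) end-member compositions, rows sum to 1
    """
    if np.any(F <= 0) or np.any(E <= 0):
        return -np.inf                                    # positivity constraints
    loglik = norm.logpdf(X - F @ E, scale=sigma).sum()    # Gaussian error model
    logprior_f = sum(dirichlet.logpdf(f, alpha_f * np.ones(F.shape[1])) for f in F)
    logprior_e = sum(dirichlet.logpdf(e, alpha_e * np.ones(E.shape[1])) for e in E)
    return loglik + logprior_f + logprior_e
```

Such a function could then be handed to any generic MCMC sampler to draw the fractions and end members jointly.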
3

Méthodes quantitatives pour l'étude asymptotique de processus de Markov homogènes et non-homogènes / Quantitative methods for the asymptotic study of homogeneous and non-homogeneous Markov processes

Delplancke, Claire, 28 June 2017
The object of this thesis is the study of some analytical and asymptotic properties of Markov processes, and their applications to Stein's method. The adopted point of view consists in developing functional inequalities in order to obtain upper bounds on the distance between probability distributions. The first part is devoted to the asymptotic study of time-inhomogeneous Markov processes through Poincaré-like inequalities, established by precise estimates on the spectrum of the transition operator. The first investigation takes place within the framework of the Central Limit Theorem, which states the convergence of the renormalized sum of random variables towards the normal distribution. It results in a Berry-Esseen bound quantifying this convergence with respect to the chi-squared distance, a natural quantity that had not been investigated in this setting; it therefore complements similar results relative to other distances (Kolmogorov, total variation, Wasserstein). Staying in the non-homogeneous framework, we then consider a weakly mixing process linked to a stochastic algorithm for median approximation. This process evolves by jumps of two sorts (to the right or to the left) whose size and intensity depend on time. An upper bound on the Wasserstein distance of order 1 between the marginal distribution of the process and the normal distribution is provided when the latter is invariant under the dynamics, and extended to examples where only asymptotic normality holds. The second part concerns intertwining relations between (homogeneous) Markov processes and gradients, which can be seen as a refinement of the Bakry-Émery criterion, and their application to Stein's method, a collection of techniques for bounding the distance between two probability distributions. Second-order intertwinings for birth-death processes are established, going one step further than the existing first-order relations. These relations are then exploited to construct an original and universal method for evaluating the Stein factors of discrete probability distributions, a key component of the Stein-Chen method.
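As a concrete, discrete-time analogue of the median-search process described above (the thesis studies a continuous-time, non-homogeneous jump version), the following Robbins-Monro-type sketch jumps right or left by a time-dependent step size depending on whether a fresh observation falls above or below the current estimate. The step-size schedule c / n is an illustrative assumption.

```python
import random

def stochastic_median(sample_y, n_steps=100_000, x0=0.0, c=1.0):
    """Recursive median estimator: at step n the estimate jumps right or left
    by gamma_n = c / n according to whether a new observation exceeds it."""
    x = x0
    for n in range(1, n_steps + 1):
        y = sample_y()
        gamma = c / n                       # time-dependent jump size
        x += gamma if y > x else -gamma     # jump to the right or to the left
    return x

if __name__ == "__main__":
    random.seed(0)
    # The median of an Exp(1) distribution is ln 2 ~ 0.693.
    print(stochastic_median(lambda: random.expovariate(1.0)))
```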
4

Dimensionamento de equipes de trabalho por meio de modelos probabilísticos / Size of work teams by means of probabilistic models

Freitas, Christiano Michel Fernandes, 18 May 2018
This work models a production system with three manufacturing units in order to determine the optimal number of maintainers for each unit and to carry out a sensitivity analysis assessing the reliability of the results. A Quasi-Birth-and-Death (QBD) process is used to model the production units, and the input probabilities for the developed code are obtained from the infinitesimal generators. Organizations usually size their maintenance teams empirically, which can compromise organizational strategies; the code therefore supports decision-making about these professionals. Three production units, X, Y and Z, were modeled and the minimum number of maintainers each unit requires was determined: unit X needs at least two maintainers to reach a 70% probability of remaining in operation, unit Y needs three to reach 76%, and unit Z reaches 80% with a single maintainer. The sensitivity analysis showed that, when the infinitesimal generator is perturbed, the operating probabilities approach 100% as maintainers are added, although adding a fourth maintainer changes the system very little. When the system is stressed by letting the random variable t grow, however, the reliability of the results tends to decrease: with one maintainer the probability of operation falls considerably over time, whereas with four maintainers the system tends to remain in the operating state.
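The kind of computation described above can be illustrated with a much simpler birth-death (machine-repair) sketch: state i counts failed machines, the infinitesimal generator is built from failure and repair rates, and its stationary distribution gives the probability that the unit is still operating. All rates and the "in operation" criterion below are illustrative assumptions, not the QBD generators used in the dissertation.

```python
import numpy as np

def operation_probability(n_machines=5, n_maintainers=2, lam=0.2, mu=1.0):
    """Machine-repair model: state i = number of failed machines.
    Failures occur at rate (n_machines - i) * lam, repairs complete at rate
    min(i, n_maintainers) * mu.  Builds the generator Q, solves pi Q = 0 with
    sum(pi) = 1, and returns the probability that at least one machine runs."""
    n = n_machines + 1
    Q = np.zeros((n, n))
    for i in range(n):
        if i < n_machines:
            Q[i, i + 1] = (n_machines - i) * lam        # one more failure
        if i > 0:
            Q[i, i - 1] = min(i, n_maintainers) * mu    # one repair completes
        Q[i, i] = -Q[i].sum()                           # generator diagonal
    A = np.vstack([Q.T, np.ones(n)])                    # pi Q = 0 plus normalization
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return 1.0 - pi[n_machines]                         # P(not all machines failed)

if __name__ == "__main__":
    for m in range(1, 5):
        print(m, round(operation_probability(n_maintainers=m), 3))
```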
5

Scalable Estimation and Testing for Complex, High-Dimensional Data

Lu, Ruijin, 22 August 2019
With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, etc. These data provide a rich source of information on disease development, cell evolvement, engineering systems, and many other scientific phenomena. To achieve a clearer understanding of the underlying mechanism, one needs a fast and reliable analytical approach to extract useful information from the wealth of data. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex data, powerful testing of functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a wavelet-based approximate Bayesian computation approach that is likelihood-free and computationally scalable. This approach will be applied to two applications: estimating mutation rates of a generalized birth-death process based on fluctuation experimental data and estimating the parameters of targets based on foliage echoes. The second part focuses on functional testing. We consider using multiple testing in basis-space via p-value guided compression. Our theoretical results demonstrate that, under regularity conditions, the Westfall-Young randomization test in basis space achieves strong control of family-wise error rate and asymptotic optimality. Furthermore, appropriate compression in basis space leads to improved power as compared to point-wise testing in data domain or basis-space testing without compression. The effectiveness of the proposed procedure is demonstrated through two applications: the detection of regions of spectral curves associated with pre-cancer using 1-dimensional fluorescence spectroscopy data and the detection of disease-related regions using 3-dimensional Alzheimer's Disease neuroimaging data. The third part focuses on analyzing data measured on the cortical surfaces of monkeys' brains during their early development, and subjects are measured on misaligned time markers. In this analysis, we examine the asymmetric patterns and increase/decrease trend in the monkeys' brains across time.

Doctor of Philosophy

With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, and biological measurements. These data provide a rich source of information on disease development, engineering systems, and many other scientific phenomena. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex biological and engineering data, powerful testing of high-dimensional functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a computation-based statistical approach that achieves efficient parameter estimation scalable to high-dimensional functional data. 
The second part focuses on developing a powerful testing method for functional data that can be used to detect important regions. We will show nice properties of our approach. The effectiveness of this testing approach will be demonstrated using two applications: the detection of regions of the spectrum that are related to pre-cancer using fluorescence spectroscopy data and the detection of disease-related regions using brain image data. The third part focuses on analyzing brain cortical thickness data, measured on the cortical surfaces of monkeys’ brains during early development. Subjects are measured on misaligned time-markers. By using functional data estimation and testing approach, we are able to: (1) identify asymmetric regions between their right and left brains across time, and (2) identify spatial regions on the cortical surface that reflect increase or decrease in cortical measurements over time.
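As a minimal illustration of likelihood-free (approximate Bayesian computation) estimation of the kind used in the first part, the sketch below implements plain ABC rejection sampling on a toy exponential-rate problem. It is far simpler than the wavelet-based ABC developed in the dissertation; the summary statistic, tolerance, and prior are illustrative assumptions.

```python
import random
import statistics

def abc_rejection(observed_summary, simulate, summarize, prior_sample,
                  n_draws=20_000, tol=0.1):
    """ABC rejection sampler: draw a parameter from the prior, simulate data,
    and keep the draw if its summary statistic is within `tol` of the
    observed summary."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(summarize(simulate(theta)) - observed_summary) < tol:
            accepted.append(theta)
    return accepted

if __name__ == "__main__":
    random.seed(0)
    # Toy example: infer the rate of an exponential from the sample mean.
    simulate = lambda rate: [random.expovariate(rate) for _ in range(200)]
    summarize = statistics.mean
    observed = summarize(simulate(2.0))           # "observed" data, true rate 2
    draws = abc_rejection(observed, simulate, summarize,
                          prior_sample=lambda: random.uniform(0.1, 5.0))
    print(len(draws), statistics.mean(draws))     # posterior mean near 2
```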
6

Využití teorie hromadné obsluhy při návrhu a optimalizaci paketových sítí / Queueing theory utilization in packet network design and optimization process

Rýzner, Zdeněk, January 2011
This master's thesis deals with queueing theory and its application to designing node models in a packet-switched network. The general principles of designing queueing-theory models and their mathematical background are described. A simulator of packet delay in a network was then created; this application implements the two described models, M/M/1 and M/G/1, and can be used to simulate network nodes and obtain basic network characteristics such as packet delay and packet loss. Finally, a lab exercise was created in which students familiarize themselves with the basic concepts of queueing theory and examine both the analytical and the simulation approach to solving queueing systems.
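For reference, the mean packet delay of the two node models mentioned above follows from standard queueing formulas: 1/(mu - lam) for M/M/1 and the Pollaczek-Khinchine formula for M/G/1. The sketch below (in Python rather than the thesis's own simulator) evaluates both; the numerical arrival and service rates are illustrative.

```python
def mm1_mean_delay(lam, mu):
    """Mean sojourn time (waiting + service) in an M/M/1 queue: 1 / (mu - lam)."""
    assert lam < mu, "queue must be stable (lam < mu)"
    return 1.0 / (mu - lam)

def mg1_mean_delay(lam, es, es2):
    """Mean sojourn time in an M/G/1 queue (Pollaczek-Khinchine):
    W = E[S] + lam * E[S^2] / (2 * (1 - rho)),  with rho = lam * E[S]."""
    rho = lam * es
    assert rho < 1, "queue must be stable (rho < 1)"
    return es + lam * es2 / (2.0 * (1.0 - rho))

if __name__ == "__main__":
    # Example: packets arrive at 800 pkt/s, exponential service at 1000 pkt/s.
    print(mm1_mean_delay(800, 1000))               # 0.005 s = 5 ms
    # M/G/1 with deterministic 1 ms service time (E[S^2] = E[S]^2).
    print(mg1_mean_delay(800, 0.001, 0.001 ** 2))  # 0.003 s = 3 ms
```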
