11

Distributed estimation in resource-constrained wireless sensor networks

Li, Junlin. January 2008 (has links)
Thesis (Ph.D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009. / Committee Chair: Ghassan AlRegib; Committee Member: Elliot Moore; Committee Member: Monson H. Hayes; Committee Member: Paul A. Work; Committee Member: Ying Zhang. Part of the SMARTech Electronic Thesis and Dissertation Collection.
12

Calibration of multivariate generalized hyperbolic distributions using the EM algorithm, with applications in risk management, portfolio optimization and portfolio credit risk

Hu, Wenbo. Kercheval, Alec. January 2005 (has links)
Thesis (Ph. D.)--Florida State University, 2005. / Advisor: Alec Kercheval, Florida State University, College of Arts and Sciences, Dept. of Mathematics. Title and description from dissertation home page (viewed Jan. 26, 2006). Document formatted into pages; contains xii, 103 pages. Includes bibliographical references.
13

Dolda Markovmodeller / Hidden Markov Models

Wirén, Anton January 2018 (has links)
This thesis is built around the three classical problems of hidden Markov models. All three problems are described mathematically in full. Among other things, the problem of how to train a model is solved analytically using Lagrange multipliers, and it is motivated why Expectation-Maximization works. An essential part is also to introduce computationally efficient algorithms that make it feasible to solve these problems.
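As a concrete illustration of the efficient recursions these classical problems rely on, here is a minimal sketch of the forward algorithm for the evaluation problem of a discrete hidden Markov model. This is an editorial example, not code from the thesis; the two-state model parameters and the observation sequence are assumptions chosen for demonstration.

```python
import numpy as np

def forward(A, B, pi, obs):
    """Forward recursion: returns P(observations | model) for a discrete HMM.

    A   : (N, N) transition matrix, A[i, j] = P(state j at t+1 | state i at t)
    B   : (N, M) emission matrix, B[i, k] = P(symbol k | state i)
    pi  : (N,)   initial state distribution
    obs : sequence of observed symbol indices
    """
    alpha = pi * B[:, obs[0]]              # initialisation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]      # induction step
    return alpha.sum()                     # termination

# Illustrative two-state model (values are assumptions, not from the thesis)
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([0.6, 0.4])
print(forward(A, B, pi, obs=[0, 1, 1, 0]))
```

For long observation sequences the recursion is normally rescaled at every step to avoid numerical underflow; the same recursion also forms the E-step of Baum-Welch training.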
14

Estimation of individual treatment effect via Gaussian mixture model

Wang, Juan 21 August 2020 (has links)
In this thesis, we investigate the estimation of treatment effects from a Bayesian perspective, through which one can first obtain the posterior distribution of the unobserved potential outcome from observed data, and then obtain the posterior distribution of the treatment effect. We mainly consider how to represent a joint distribution of two potential outcomes - one from the treated group and another from the control group - which gives an indirect impression of their correlation, since the estimation of the treatment effect depends on the correlation between the two potential outcomes. The first part of this thesis illustrates the effectiveness of adapting Gaussian mixture models to the treatment effect problem. We apply the mixture models - Gaussian Mixture Regression (GMR) and Gaussian Mixture Linear Regression (GMLR) - as a potentially simple and powerful tool to investigate the joint distribution of the two potential outcomes. For GMR, we consider a joint distribution of the covariate and the two potential outcomes. For GMLR, we consider a joint distribution of the two potential outcomes, which depend linearly on the covariate. Through developing an EM algorithm for GMLR, we find that GMR and GMLR are effective in estimating means and variances, but not in capturing the correlation between the two potential outcomes. In the second part of this thesis, GMLR is modified to capture the unobserved covariance structure (correlation between outcomes) that can be explained by latent variables introduced through an important model assumption. We propose a much more efficient Pre-Post EM Algorithm to implement the proposed GMLR model with unobserved covariance structure in practice. Simulation studies show that the Pre-Post EM Algorithm performs well not only in estimating means and variances, but also in estimating covariance.
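To make the mixture-model machinery concrete, the sketch below fits a two-component bivariate Gaussian mixture by EM to synthetic paired outcomes. It is a hedged illustration of the basic E- and M-steps only, not the thesis's GMR, GMLR, or Pre-Post EM implementation, and all data and settings are assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Synthetic "potential outcomes" (Y_treated, Y_control); purely illustrative.
Y = np.vstack([rng.multivariate_normal([2.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], 200),
               rng.multivariate_normal([-1.0, 1.0], [[1.0, -0.3], [-0.3, 1.0]], 200)])

K = 2
n, d = Y.shape
w = np.full(K, 1.0 / K)                       # mixture weights
mu = Y[rng.choice(n, K, replace=False)]       # initial means
Sigma = np.array([np.cov(Y.T)] * K)           # initial covariances

for _ in range(100):
    # E-step: responsibilities r[i, k] = P(component k | y_i)
    dens = np.column_stack([w[k] * multivariate_normal.pdf(Y, mu[k], Sigma[k])
                            for k in range(K)])
    r = dens / dens.sum(axis=1, keepdims=True)

    # M-step: responsibility-weighted updates of weights, means and covariances
    Nk = r.sum(axis=0)
    w = Nk / n
    mu = (r.T @ Y) / Nk[:, None]
    for k in range(K):
        diff = Y - mu[k]
        Sigma[k] = (r[:, k, None] * diff).T @ diff / Nk[k]

print("weights:", w)
print("component means:\n", mu)
print("component covariances (incl. outcome-outcome correlation):\n", Sigma)
```

The off-diagonal entries of each fitted covariance matrix are where a correlation between the two potential outcomes would show up, which is exactly the quantity the abstract notes is hard to recover from observational data.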
15

Automatic K-Expectation-Maximization (K-EM) Clustering Algorithm for Data Mining Applications

Harsh, Archit 12 August 2016 (has links)
A non-parametric data clustering technique for achieving efficient data clustering and improving the choice of the number of clusters is presented in this thesis. K-Means and Expectation-Maximization algorithms have been widely deployed in data-clustering applications. Findings in related work reveal that both of these algorithms have shortcomings: K-Means does not guarantee convergence, and the choice of the number of clusters heavily influences the results; Expectation-Maximization's premature convergence does not assure the optimality of results and, as with K-Means, the choice of the number of clusters influences the results. To overcome these shortcomings, a fast automatic K-EM algorithm is developed that provides an optimal number of clusters by employing various internal cluster validity metrics, yielding efficient and unbiased results. The algorithm is implemented on a wide array of data sets to ensure the accuracy of the results and the efficiency of the algorithm.
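The thesis combines several internal cluster validity metrics to select the number of clusters automatically. As a hedged sketch of that general idea (not the K-EM algorithm itself), the example below fits EM-based Gaussian mixtures over a range of K and keeps the K that minimises BIC, one common internal criterion; the data, the K range, and the use of BIC alone are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Illustrative data with three well-separated groups.
X = np.vstack([rng.normal(loc, 0.4, size=(150, 2)) for loc in ([0, 0], [4, 0], [2, 3])])

scores = {}
for k in range(1, 8):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
    scores[k] = gm.bic(X)          # lower BIC = better trade-off of fit vs. complexity

best_k = min(scores, key=scores.get)
print("BIC by K:", {k: round(v, 1) for k, v in scores.items()})
print("selected number of clusters:", best_k)
```

A production version would typically combine several such criteria (e.g. silhouette, Calinski-Harabasz) rather than relying on BIC alone, which is closer in spirit to what the abstract describes.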
16

Dynamic Causal Modeling Across Network Topologies

Zaghlool, Shaza B. 03 April 2014 (has links)
Dynamic Causal Modeling (DCM) uses dynamical systems to represent the high-level neural processing strategy for a given cognitive task. The logical network topology of the model is specified by a combination of prior knowledge and statistical analysis of the neuro-imaging signals. Parameters of this a priori model are then estimated and competing models are compared to determine the most likely model given experimental data. Inter-subject analysis using DCM is complicated by differences in model topology, which can vary across subjects due to errors in the first-level statistical analysis of fMRI data or variations in cognitive processing. This requires considerable judgment on the part of the experimenter to decide on the validity of assumptions used in the modeling and statistical analysis; in particular, the dropping of subjects with insufficient activity in a region of the model and ignoring activation not included in the model. This manual data filtering is required so that the fMRI model's network size is consistent across subjects. This thesis proposes a solution to this problem by treating missing regions in the first-level analysis as missing data, and performing estimation of the time course associated with any missing region using one of four candidate methods: zero-filling, average-filling, noise-filling using a fixed stochastic process, or one estimated using expectation-maximization. The effect of this estimation scheme was analyzed by treating it as a preprocessing step to DCM and observing the resulting effects on model evidence. Simulation studies show that estimation using expectation-maximization yields the highest classification accuracy using a simple loss function and highest model evidence, relative to other methods. This result held for various data set sizes and varying numbers of model choices. In real data, application to Go/No-Go and Simon tasks allowed computation of signals from the missing nodes and the consequent computation of model evidence in all subjects, compared to 62 and 48 percent respectively if no preprocessing was performed. These results demonstrate the face validity of the preprocessing scheme and open the possibility of using single-subject DCM as an individual cognitive phenotyping tool. / Ph. D.
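The four candidate fills for a missing region's time course can be sketched compactly. The illustration below is an editorial example under assumptions, not the thesis code: the synthetic region time courses, the joint-Gaussian treatment, and the use of partial (rather than whole-region) missingness so that the iterative conditional-mean stand-in for the expectation-maximization fill has something to learn from are all simplifications.

```python
import numpy as np

rng = np.random.default_rng(2)
T, R = 200, 4
X = rng.standard_normal((T, R)) @ rng.standard_normal((R, R))  # correlated region time courses
region = 2                                      # the region with missing data in this subject
others = [r for r in range(R) if r != region]

# Knock out part of the region's time course for illustration. (In the thesis the whole
# region can be absent; a purely within-subject Gaussian fill then has no data of its own
# to learn from, so this sketch uses partial missingness to show the mechanics.)
missing_t = rng.random(T) < 0.5
true_vals = X[missing_t, region].copy()

# 1) zero-filling
zero_fill = np.zeros(missing_t.sum())
# 2) average-filling: mean of the other regions at the missing time points
avg_fill = X[missing_t][:, others].mean(axis=1)
# 3) noise-filling with a fixed stochastic process (white noise matched to the observed scale)
noise_fill = rng.normal(0.0, X[~missing_t, region].std(), size=missing_t.sum())
# 4) EM-style filling: iterate conditional-mean imputation under a joint Gaussian model
Xi = X.copy()
Xi[missing_t, region] = avg_fill                # initialise from the average fill
for _ in range(20):
    mu = Xi.mean(axis=0)
    Sigma = np.cov(Xi, rowvar=False)
    beta = np.linalg.solve(Sigma[np.ix_(others, others)], Sigma[region, others])
    Xi[missing_t, region] = mu[region] + (Xi[missing_t][:, others] - mu[others]) @ beta

for name, fill in [("zero", zero_fill), ("average", avg_fill),
                   ("noise", noise_fill), ("EM-style", Xi[missing_t, region])]:
    rmse = np.sqrt(np.mean((fill - true_vals) ** 2))
    print(f"{name:9s} fill RMSE vs. held-out truth: {rmse:.3f}")
```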
17

Modèles de processus stochastiques avec sauts sur arbres : application à l'évolution adaptative sur des phylogénies. / Shifted stochastic processes evolving on trees : application to models of adaptive evolution on phylogenies.

Bastide, Paul 19 October 2017 (has links)
Le projet s'inscrit dans la dynamique de systématisation statistique qui s'opère aujourd'hui dans le champ de l'écologie comparative. Les différents traits quantitatifs d'un jeu d'espèces échantillonné peuvent être vus comme le résultat d'un processus stochastique courant le long d'un arbre phylogénétique, ce qui permet de prendre en compte des corrélations issues d'histoires évolutives communes. Certains changements environnementaux peuvent produire un déplacement de niches évolutive, qui se traduisent par un saut dans la valeur du processus stochastique décrivant l'évolution au cours du temps du trait des espèces concernées. Parce qu'on ne mesure la valeur du processus dynamique qu'à un seul instant, pour les espèces actuelles, certains scénarii d'évolution ne peuvent être reconstruits, ou présentent des problèmes d'identifiabilité, que l'on étudie avec soin. On construit ici un modèle à données incomplètes d'inférence statistique, que l'on implémente efficacement. La position des sauts est détectée de manière automatique, et leur nombre est choisi grâce à une procédure de sélection de modèle adaptée à la structure du problème, et pour laquelle on dispose de certaines garanties théoriques. Un arbre phylogénétique ne prend pas en compte les phénomènes d'hybridation ou de transferts de gènes horizontaux, qui sont fréquents dans certains groupes d'organismes, comme les plantes ou les bactéries. Pour pallier ce problème, on utilise alors un réseau phylogénétique, pour lequel on propose une adaptation du modèle d'évolution de traits quantitatifs décrit précédemment. Ce modèle permet d'étudier l'hétérosis, qui se manifeste lorsqu'un hybride présente un trait d'une valeur exceptionnelle par rapport à celles de ses deux parents. / This project aims to take a step further in the process of systematic statistical modeling that is occurring in the field of comparative ecology. A way to account for correlations between quantitative traits of a set of sampled species due to common evolutionary histories is to see the current state as the result of a stochastic process running on a phylogenetic tree. Due to environmental changes, some ecological niches can shift in time, inducing a shift in the parameter values of the stochastic process modeling trait evolution. Because we only measure the value of the process at a single time point, for extant species, some evolutionary scenarios cannot be reconstructed, or have identifiability issues, which we carefully study. We construct an incomplete-data model for statistical inference, along with an efficient implementation. We perform automatic shift detection, and choose the number of shifts thanks to a model selection procedure specifically crafted to handle the special structure of the problem. Theoretical guarantees are derived in some special cases. A phylogenetic tree cannot take into account hybridization or horizontal gene transfer events, which are widespread in some groups of species, such as plants or bacterial organisms. A phylogenetic network can be used to deal with these events. We develop a new model of trait evolution on this kind of structure, which takes non-linear effects such as heterosis into account. Heterosis, or hybrid vigor or depression, is a well-studied effect that happens when a hybrid species has a trait value that is outside of the range of its two parents.
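To make the generative model concrete, here is a toy simulation (an editorial illustration under assumptions, not the author's implementation) of a quantitative trait evolving by Brownian motion along a small fixed phylogeny, with a single shift on one branch; only the tip values, i.e. the extant species, are observed in practice.

```python
import numpy as np

rng = np.random.default_rng(3)

# A toy 4-tip tree encoded as parent pointers with branch lengths; node 0 is the root.
#        0
#       / \
#      1   2
#     / \ / \
#    A  B C  D
parent = {1: 0, 2: 0, "A": 1, "B": 1, "C": 2, "D": 2}
length = {1: 1.0, 2: 1.0, "A": 0.5, "B": 0.5, "C": 0.5, "D": 0.5}
shift_on = {2: 3.0}          # an assumed niche shift of +3 on the branch leading to node 2
sigma2 = 1.0                 # Brownian motion variance per unit branch length

values = {0: 0.0}            # trait value at the root
for node in [1, 2, "A", "B", "C", "D"]:          # parents before children
    v = values[parent[node]]
    v += rng.normal(0.0, np.sqrt(sigma2 * length[node]))   # Brownian increment
    v += shift_on.get(node, 0.0)                            # shift on this branch, if any
    values[node] = v

tips = {k: round(values[k], 3) for k in ["A", "B", "C", "D"]}
print(tips)   # only these tip values are observed for extant species
```

Shared ancestry (A and B inherit node 1's value, C and D node 2's) is what induces the between-species correlations the abstract mentions, and the inference problem is to recover the shift locations from the tip values alone.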
18

Statistical models for catch-at-length data with birth cohort information

Chung, Sai-ho., 鍾世豪. January 2005 (has links)
published_or_final_version / abstract / Social Sciences / Doctoral / Doctor of Philosophy
19

Computational intelligence techniques for missing data imputation

Nelwamondo, Fulufhelo Vincent 14 August 2008 (has links)
Despite considerable advances in missing data imputation techniques over the last three decades, the problem of missing data remains largely unsolved. Many techniques have emerged in the literature as candidate solutions, including Expectation Maximisation (EM) and the combination of autoassociative neural networks and genetic algorithms (NN-GA). The merits of both of these techniques have been discussed at length in the literature, but they have never been compared to each other. This thesis contributes to knowledge by, firstly, conducting a comparative study of these two techniques. The significance of the difference in performance of the methods is presented. Secondly, predictive analysis methods suitable for the missing data problem are presented. The predictive analysis in this problem is aimed at determining whether the data in question are predictable and hence at helping to choose the estimation techniques accordingly. Thirdly, a novel treatment of missing data for online condition monitoring problems is presented. An ensemble of three autoencoders together with hybrid Genetic Algorithms (GA) and fast simulated annealing was used to approximate missing data. Several significant insights were deduced from the simulation results. It was deduced that, for the problem of missing data using computational intelligence approaches, the choice of optimisation methods plays a significant role in prediction. Although it was observed that hybrid GA and Fast Simulated Annealing (FSA) can converge to the same search space and to almost the same values, they differ significantly in duration. This unique contribution demonstrates that particular attention has to be paid to the choice of optimisation techniques and their decision boundaries. Another unique contribution of this work was not only to demonstrate that dynamic programming is applicable to the problem of missing data, but also to show that it is efficient in addressing it. An NN-GA model was built to impute missing data using the principle of dynamic programming. This approach makes it possible to modularise the problem of missing data for maximum efficiency. With the advancements in parallel computing, the various modules of the problem could be solved by different processors working together in parallel. Furthermore, a method is proposed for imputing missing data in non-stationary time series data that learns incrementally even when there is concept drift. This method works by measuring heteroskedasticity to detect concept drift and explores an online learning technique. New directions for research, in which missing data can be estimated for non-stationary applications, are opened by the introduction of this novel method; thus, this thesis opens the door to research in this area. Many other methods need to be developed so that they can be compared to the approach proposed in this thesis. Another novel technique for dealing with missing data in the online condition monitoring problem was also presented and studied. The problem of classification in the presence of missing data was addressed, where no attempt is made to recover the missing values. The problem domain was then extended to regression. The proposed technique performs better than the NN-GA approach, in both accuracy and time efficiency during testing. The advantage of the proposed technique is that it eliminates the need for finding the best estimate of the data and hence saves time.
Lastly, instead of using complicated techniques to estimate missing values, an imputation approach based on rough sets is explored. Empirical results obtained using both real and synthetic data are given, and they provide a valuable and promising insight into the problem of missing data. The work has significantly confirmed that rough sets can be reliable for missing data estimation in larger and real databases.
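Of the imputation strategies compared above, the EM approach is the most compact to sketch. The example below is a hedged illustration for data assumed multivariate Gaussian, alternating conditional-mean imputation with mean/covariance re-estimation (including the conditional-covariance correction in the M-step); it is not the thesis's EM, NN-GA, or rough-set implementation, and the synthetic data and missingness rate are assumptions.

```python
import numpy as np

def em_impute(X, n_iter=50):
    """EM for a multivariate Gaussian with entries missing at random (NaN)."""
    X = X.copy()
    miss = np.isnan(X)
    n, d = X.shape
    # Initialise: fill with column means, then moment estimates.
    X[miss] = np.take(np.nanmean(X, axis=0), np.where(miss)[1])
    mu, Sigma = X.mean(axis=0), np.cov(X, rowvar=False, bias=True)
    for _ in range(n_iter):
        C_sum = np.zeros((d, d))               # accumulated conditional covariances
        # E-step: conditional means for the missing entries of each row.
        for i in range(n):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            if not o.any():                    # row entirely missing: use the marginal
                X[i] = mu
                C_sum += Sigma
                continue
            B = np.linalg.solve(Sigma[np.ix_(o, o)], Sigma[np.ix_(o, m)]).T
            X[i, m] = mu[m] + B @ (X[i, o] - mu[o])
            C_sum[np.ix_(m, m)] += Sigma[np.ix_(m, m)] - B @ Sigma[np.ix_(o, m)]
        # M-step: mean/covariance of completed data plus the conditional-covariance term.
        mu = X.mean(axis=0)
        Sigma = np.cov(X, rowvar=False, bias=True) + C_sum / n
    return X, mu, Sigma

# Illustrative use: knock out 20% of entries from correlated synthetic data.
rng = np.random.default_rng(4)
Z = rng.multivariate_normal([0, 0, 0], [[1, .8, .3], [.8, 1, .5], [.3, .5, 1]], size=500)
mask = rng.random(Z.shape) < 0.2
Z_hat, mu_hat, Sigma_hat = em_impute(np.where(mask, np.nan, Z))
print("RMSE of imputed entries:", np.sqrt(np.mean((Z_hat[mask] - Z[mask]) ** 2)).round(3))
```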
20

Improved iterative schemes for REML estimation of variance parameters in linear mixed models.

Knight, Emma January 2008 (has links)
Residual maximum likelihood (REML) estimation is a popular method of estimation for variance parameters in linear mixed models, which typically requires an iterative scheme. The aim of this thesis is to review several popular iterative schemes and to develop an improved iterative strategy that will work for a wide class of models. The average information (AI) algorithm is a computationally convenient and efficient algorithm to use when starting values are in the neighbourhood of the REML solution. However when reasonable starting values are not available, the algorithm can fail to converge. The expectation-maximisation (EM) algorithm and the parameter expanded EM (PXEM) algorithm are good alternatives in these situations but they can be very slow to converge. The formulation of these algorithms for a general linear mixed model is presented, along with their convergence properties. A series of hybrid algorithms are presented. EM or PXEM iterations are used initially to obtain variance parameter estimates that are in the neighbourhood of the REML solution, and then AI iterations are used to ensure rapid convergence. Composite local EM/AI and local PXEM/AI schemes are also developed; the local EM and local PXEM algorithms update only the random effect variance parameters, with the estimates of the residual error variance parameters held fixed. Techniques for determining when to use EM-type iterations and when to switch to AI iterations are investigated. Methods for obtaining starting values for the iterative schemes are also presented. The performance of these various schemes is investigated for several different linear mixed models. A number of data sets are used, including published data sets and simulated data. The performance of the basic algorithms is compared to that of the various hybrid algorithms, using both uninformed and informed starting values. The theoretical and empirical convergence rates are calculated and compared for the basic algorithms. The direct comparison of the AI and PXEM algorithms shows that the PXEM algorithm, although an improvement over the EM algorithm, still falls well short of the AI algorithm in terms of speed of convergence. However, when the starting values are too far from the REML solution, the AI algorithm can be unstable. Instability is most likely to arise in models with a more complex variance structure. The hybrid schemes use EM-type iterations to move close enough to the REML solution to enable the AI algorithm to successfully converge. They are shown to be robust to choice of starting values like the EM and PXEM algorithms, while demonstrating fast convergence like the AI algorithm. / Thesis (Ph.D.) - University of Adelaide, School of Agriculture, Food and Wine, 2008
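The switching logic behind the hybrid schemes is straightforward to convey. The sketch below is a generic driver with hypothetical stand-in step functions: a deliberately damped fixed-point map plays the role of the slow, stable EM-type update and a Newton step on a toy one-parameter likelihood plays the role of the fast AI-type update. It is an assumption-laden illustration of the switching strategy only, not the thesis's EM, PXEM, or average information updates for a linear mixed model.

```python
import numpy as np

def hybrid_reml(theta0, em_step, ai_step, switch_tol=1e-2, tol=1e-8, max_iter=200):
    """Run EM-type steps until the relative parameter change drops below `switch_tol`,
    then hand over to AI/Newton-type steps until full convergence."""
    theta = np.asarray(theta0, dtype=float)
    step, phase = em_step, "EM"
    for it in range(max_iter):
        new = step(theta)
        rel = np.max(np.abs(new - theta) / (np.abs(theta) + 1e-12))
        theta = new
        if phase == "EM" and rel < switch_tol:
            step, phase = ai_step, "AI"     # close enough: switch to the fast algorithm
        elif phase == "AI" and rel < tol:
            return theta, it + 1
    return theta, max_iter

# Hypothetical 1-D example (not a real mixed model): estimate sigma^2 of N(0, sigma^2) data.
rng = np.random.default_rng(5)
y = rng.normal(0.0, 2.0, size=500)
s2_mle = np.mean(y ** 2)                                 # closed-form answer, for reference

em = lambda s2: s2 + 0.3 * (s2_mle - s2)                 # slow but stable contraction ("EM")
def ai(s2):
    score = -len(y) / (2 * s2) + np.sum(y ** 2) / (2 * s2 ** 2)
    hess = len(y) / (2 * s2 ** 2) - np.sum(y ** 2) / s2 ** 3
    return s2 - score / hess                             # Newton step on the log-likelihood

theta, iters = hybrid_reml(np.array([1.0]),
                           lambda t: np.array([em(t[0])]),
                           lambda t: np.array([ai(t[0])]))
print("estimate:", theta[0].round(4), "reference MLE:", s2_mle.round(4), "iterations:", iters)
```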
