11

Estimating the Proportion of True Null Hypotheses in Multiple Testing Problems

Oyeniran, Oluyemi 18 July 2016 (has links)
No description available.
12

Novel stochastic and entropy-based Expectation-Maximisation algorithm for transcription factor binding site motif discovery

Kilpatrick, Alastair Morris January 2015 (has links)
The discovery of transcription factor binding site (TFBS) motifs remains an important and challenging problem in computational biology. This thesis presents MITSU, a novel algorithm for TFBS motif discovery which exploits stochastic methods as a means of both overcoming optimality limitations in current algorithms and as a framework for incorporating relevant prior knowledge in order to improve results. The current state of the TFBS motif discovery field is surveyed, with a focus on probabilistic algorithms that typically take the promoter regions of coregulated genes as input. A case is made for an approach based on the stochastic Expectation-Maximisation (sEM) algorithm; its position amongst existing probabilistic algorithms for motif discovery is shown. The algorithm developed in this thesis is unique amongst existing motif discovery algorithms in that it combines the sEM algorithm with a derived data set which leads to an improved approximation to the likelihood function. This likelihood function is unconstrained with regard to the distribution of motif occurrences within the input dataset. MITSU also incorporates a novel heuristic to automatically determine TFBS motif width. This heuristic, known as MCOIN, is shown to outperform current methods for determining motif width. MITSU is implemented in Java and an executable is available for download. MITSU is evaluated quantitatively using realistic synthetic data and several collections of previously characterised prokaryotic TFBS motifs. The evaluation demonstrates that MITSU improves on a deterministic EM-based motif discovery algorithm and an alternative sEM-based algorithm, in terms of previously established metrics. The ability of the sEM algorithm to escape stable fixed points of the EM algorithm, which trap deterministic motif discovery algorithms and the ability of MITSU to discover multiple motif occurrences within a single input sequence are also demonstrated. MITSU is validated using previously characterised Alphaproteobacterial motifs, before being applied to motif discovery in uncharacterised Alphaproteobacterial data. A number of novel results from this analysis are presented and motivate two extensions of MITSU: a strategy for the discovery of multiple different motifs within a single dataset and a higher order Markov background model. The effects of incorporating these extensions within MITSU are evaluated quantitatively using previously characterised prokaryotic TFBS motifs and demonstrated using Alphaproteobacterial motifs. Finally, an information-theoretic measure of motif palindromicity is presented and its advantages over existing approaches for discovering palindromic motifs discussed.
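As a rough illustration of the stochastic E-step described above, the sketch below implements a simplified one-occurrence-per-sequence motif model in Python: rather than soft-assigning motif start positions as deterministic EM would, each iteration samples one start position per sequence from its posterior and rebuilds the position weight matrix from the sampled sites. The function name sem_motif, the fixed background model and the pseudocounts are illustrative assumptions; this is a sketch of the general sEM idea, not the MITSU implementation.

```python
import numpy as np

ALPHABET = "ACGT"

def sem_motif(seqs, W, n_iter=200, seed=0):
    """Stochastic EM for a one-occurrence-per-sequence PWM motif model.
    Illustrative sketch only; not the MITSU algorithm itself."""
    rng = np.random.default_rng(seed)
    X = [np.array([ALPHABET.index(c) for c in s]) for s in seqs]
    bg = np.bincount(np.concatenate(X), minlength=4) + 1.0
    bg = bg / bg.sum()                                    # 0th-order background model
    pwm = rng.dirichlet(np.ones(4), size=W)               # random initial motif model
    for _ in range(n_iter):
        starts = []
        for x in X:
            n_pos = len(x) - W + 1
            # log-likelihood ratio of a motif occurrence at each start position
            scores = np.array([np.sum(np.log(pwm[np.arange(W), x[j:j + W]])
                                      - np.log(bg[x[j:j + W]]))
                               for j in range(n_pos)])
            p = np.exp(scores - scores.max())
            p /= p.sum()
            starts.append(rng.choice(n_pos, p=p))          # stochastic E-step: sample a site
        counts = np.full((W, 4), 0.5)                      # pseudocounts
        for x, j in zip(X, starts):
            for k in range(W):
                counts[k, x[j + k]] += 1.0
        pwm = counts / counts.sum(axis=1, keepdims=True)   # M-step: rebuild the PWM
    return pwm
```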
13

Semiparametric mixture models

Xiang, Sijia January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Weixin Yao / This dissertation consists of three parts that are related to semiparametric mixture models. In Part I, we construct the minimum profile Hellinger distance (MPHD) estimator for a class of semiparametric mixture models where one component has a known distribution with possibly unknown parameters while the other component density and the mixing proportion are unknown. Such semiparametric mixture models have often been used in biology and in sequential clustering algorithms. In Part II, we propose a new class of semiparametric mixture of regression models, where the mixing proportions and variances are constants, but the component regression functions are smooth functions of a covariate. A one-step backfitting estimate and two EM-type algorithms have been proposed to achieve the optimal convergence rate for both the global parameters and nonparametric regression functions. We derive the asymptotic property of the proposed estimates and show that both proposed EM-type algorithms preserve the asymptotic ascent property. In Part III, we apply the idea of the single-index model to the mixture of regression models and propose three new classes of models: the mixture of single-index models (MSIM), the mixture of regression models with varying single-index proportions (MRSIP), and the mixture of regression models with varying single-index proportions and variances (MRSIPV). Backfitting estimates and the corresponding algorithms have been proposed for the new models to achieve the optimal convergence rate for both the parameters and the nonparametric functions. We show that the nonparametric functions can be estimated as if the parameters were known and the parameters can be estimated with the same rate of convergence, n^(-1/2), that is achieved in a parametric model.
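For the setting in Part I, where one component has a known parametric form while the other component density is left unspecified, a simple EM-type scheme alternates between computing responsibilities under the current fit and re-estimating the unknown component by a weighted kernel density estimate. The sketch below illustrates that idea under the assumption of a normal known component; the function sp_mixture_em and its defaults are hypothetical, and this is not the MPHD estimator constructed in the dissertation.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

def sp_mixture_em(x, n_iter=50):
    """EM-type fit of the two-component semiparametric mixture
    pi * N(mu, sigma^2) + (1 - pi) * g, with g re-estimated each
    iteration by a weighted kernel density estimate.  Sketch only."""
    pi, mu, sigma = 0.5, np.median(x), np.std(x)
    g = gaussian_kde(x)                              # initial guess for the unknown density
    for _ in range(n_iter):
        p1 = pi * norm.pdf(x, mu, sigma)
        p2 = (1.0 - pi) * g(x)
        w = p1 / (p1 + p2 + 1e-300)                  # responsibility of the known component
        pi = w.mean()
        mu = np.sum(w * x) / np.sum(w)
        sigma = np.sqrt(np.sum(w * (x - mu) ** 2) / np.sum(w)) + 1e-12
        g = gaussian_kde(x, weights=(1.0 - w) / np.sum(1.0 - w))
    return pi, mu, sigma, g
```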
14

On Convergence Properties of the EM Algorithm for Gaussian Mixtures

Jordan, Michael, Xu, Lei 21 April 1995 (has links)
"Expectation-Maximization'' (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix $P$, and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special properties of $P$ and provide new results analyzing the effect that $P$ has on the likelihood surface. Based on these mathematical results, we present a comparative discussion of the advantages and disadvantages of EM and other algorithms for the learning of Gaussian mixture models.
15

Essays in Dynamic Macroeconometrics

Bańbura, Marta 26 June 2009 (has links)
The thesis contains four essays covering topics in the field of macroeconomic forecasting.

The first two chapters consider factor models in the context of real-time forecasting with many indicators. Using a large number of predictors offers an opportunity to exploit a rich information set and is also considered to be a more robust approach in the presence of instabilities. On the other hand, it poses the challenge of how to extract the relevant information in a parsimonious way. Recent research shows that factor models provide an answer to this problem. The fundamental assumption underlying those models is that most of the co-movement of the variables in a given dataset can be summarized by only a few latent variables, the factors. This assumption seems to be warranted in the case of macroeconomic and financial data. Important theoretical foundations for large factor models were laid by Forni, Hallin, Lippi and Reichlin (2000) and Stock and Watson (2002). Since then, different versions of factor models have been applied for forecasting, structural analysis or the construction of economic activity indicators. Recently, Giannone, Reichlin and Small (2008) have used a factor model to produce projections of U.S. GDP in the presence of a real-time data flow. They propose a framework that can cope with large datasets characterised by staggered and non-synchronous data releases (sometimes referred to as a "ragged edge"). This is relevant as, in practice, important indicators like GDP are released with a substantial delay and, in the meantime, more timely variables can be used to assess the current state of the economy.

The first chapter of the thesis, entitled "A look into the factor model black box: publication lags and the role of hard and soft data in forecasting GDP", is based on joint work with Gerhard Rünstler and applies the framework of Giannone, Reichlin and Small (2008) to the case of the euro area. In particular, we are interested in the role of "soft" and "hard" data in the GDP forecast and how it is related to their timeliness. The soft data include surveys and financial indicators and reflect market expectations; they are usually promptly available. In contrast, the hard indicators on real activity measure certain components of GDP directly (e.g. industrial production) and are published with a significant delay. We propose several measures in order to assess the role of individual series or groups of series in the forecast while taking into account their respective publication lags. We find that surveys and financial data contain important information beyond the monthly real activity measures for the GDP forecasts, once their timeliness is properly accounted for.

The second chapter, entitled "Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data", is based on joint work with Michele Modugno. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data. In contrast to Giannone, Reichlin and Small (2008), we can handle datasets that are not only characterised by a "ragged edge" but can also include, e.g., mixed-frequency or short-history indicators. The latter is particularly relevant for the euro area or other young economies, for which many series have been compiled only recently. We adopt the maximum likelihood approach which, apart from its flexibility with regard to the pattern of missing data, is also more efficient and allows imposing restrictions on the parameters. Applied to small factor models by e.g. Geweke (1977), Sargent and Sims (1977) or Watson and Engle (1983), it has been shown by Doz, Giannone and Reichlin (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections. To circumvent the computational complexity of a direct likelihood maximisation in the case of a large cross-section, Doz, Giannone and Reichlin (2006) propose to use the iterative Expectation-Maximisation (EM) algorithm (used for the small model by Watson and Engle, 1983). Our contribution is to modify the EM steps to the case of missing data and to show how to augment the model in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision and illustrate how this can be used to understand the sources of the latter in the case of simultaneous releases. We use this methodology for short-term forecasting and backdating of euro area GDP on the basis of a large panel of monthly and quarterly data. In particular, we are able to examine the effect of quarterly variables and short-history monthly series like the Purchasing Managers' surveys on the forecast.

The third chapter is entitled "Large Bayesian VARs" and is based on joint work with Domenico Giannone and Lucrezia Reichlin. It proposes an alternative approach to factor models for dealing with the curse of dimensionality, namely Bayesian shrinkage. We study Vector Autoregressions (VARs), which have the advantage over factor models in that they allow structural analysis in a natural way. We consider systems including more than 100 variables; this is the first application in the literature to estimate a VAR of this size. Apart from the forecast considerations, as argued above, the size of the information set can also be relevant for structural analysis, see e.g. Bernanke, Boivin and Eliasz (2005), Giannone and Reichlin (2006) or Christiano, Eichenbaum and Evans (1999) for a discussion. In addition, many problems may require the study of the dynamics of many variables: many countries, sectors or regions. While we use standard priors as proposed by Litterman (1986), an important novelty of the work is that we set the overall tightness of the prior in relation to the model size. In this we follow the recommendation by De Mol, Giannone and Reichlin (2008), who study the case of Bayesian regressions. They show that with increasing size of the model one should shrink more to avoid overfitting, but when data are collinear one is still able to extract the relevant sample information. We apply this principle in the case of VARs. We compare the large model with smaller systems in terms of forecasting performance and structural analysis of the effect of a monetary policy shock. The results show that a standard Bayesian VAR model is an appropriate tool for large panels of data once the degree of shrinkage is set in relation to the model size.

The fourth chapter, entitled "Forecasting euro area inflation with wavelets: extracting information from real activity and money at different scales", proposes a framework for exploiting relationships between variables at different frequency bands in the context of forecasting. This work is motivated by the ongoing debate on whether money provides a reliable signal for future price developments. The empirical evidence on the leading role of money for inflation in an out-of-sample forecast framework is not very strong, see e.g. Lenza (2006) or Fisher, Lenza, Pill and Reichlin (2008). At the same time, e.g. Gerlach (2003) or Assenmacher-Wesche and Gerlach (2007, 2008) argue that money and output could affect prices at different frequencies; however, their analysis is performed in-sample. In this chapter, it is investigated empirically which frequency bands, and for which variables, are the most relevant for the out-of-sample forecast of inflation when the information from prices, money and real activity is considered. To extract different frequency components from a series, a wavelet transform is applied. It provides a simple and intuitive framework for band-pass filtering and allows a decomposition of series into different frequency bands. Its application in multivariate out-of-sample forecasting is novel in the literature. The results indicate that, indeed, different scales of money, prices and GDP can be relevant for the inflation forecast.
16

A comparably robust approach to estimate the left-censored data of trace elements in Swedish groundwater

Li, Cong January 2012 (has links)
The groundwater data in this thesis, taken from the database of Sveriges Geologiska Undersökning, characterize the chemical and quantitative status of groundwater in Sweden. Concentrations below certain thresholds are recorded only as quantification limits, so the data are left-censored, and the thesis is aimed at handling data of this kind. It does so by using the EM algorithm to obtain maximum likelihood estimates, and the estimation of distributions for the censored trace-element data is expounded on. Related simulations show that the estimation is acceptable.
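A minimal sketch of this kind of computation, assuming a normal model for (possibly log-transformed) concentrations with left-censored values below a detection limit: the E-step replaces each censored observation by its conditional mean and variance under the current parameters, and the M-step updates the mean and standard deviation. The function em_left_censored and its interface are hypothetical and only illustrate the approach, not the thesis's exact estimator.

```python
import numpy as np
from scipy.stats import norm

def em_left_censored(y, censored, limit, n_iter=200, tol=1e-8):
    """EM for N(mu, sigma^2) when some observations are left-censored.
    y: values (ignored where censored is True); censored: boolean mask;
    limit: detection limit(s), scalar or array.  Illustrative sketch."""
    L = np.broadcast_to(limit, y.shape).astype(float)
    obs = y[~censored]
    mu, sigma = obs.mean(), obs.std() + 1e-6              # crude starting values
    n = y.size
    for _ in range(n_iter):
        a = (L[censored] - mu) / sigma
        lam = norm.pdf(a) / np.clip(norm.cdf(a), 1e-12, None)
        m = mu - sigma * lam                               # E[Y | Y < limit]
        v = sigma ** 2 * (1.0 - a * lam - lam ** 2)        # Var[Y | Y < limit]
        mu_new = (obs.sum() + m.sum()) / n
        ss = ((obs - mu_new) ** 2).sum() + (v + (m - mu_new) ** 2).sum()
        sigma_new = np.sqrt(ss / n)
        if abs(mu_new - mu) + abs(sigma_new - sigma) < tol:
            mu, sigma = mu_new, sigma_new
            break
        mu, sigma = mu_new, sigma_new
    return mu, sigma
```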
17

Modeling the Bid-Ask Spread by Option Hedging

Lin, Chi-hsien 08 August 2005 (has links)
The bid-ask spread consists of three components: order-processing costs, inventory-holding costs, and adverse-selection costs. In this paper, we model the inventory-holding component of the bid-ask spread by option hedging, with the inventory-holding costs hedged by call or put option positions. Since trades with adverse-selection traders are unobservable, we treat them as a latent variable and apply the Expectation-Maximization (EM) algorithm to estimate the related parameters of the model. Simulation studies are performed for several different models. Empirical results on NYSE high-frequency data show that the proposed model obtains appropriate parameter estimates when the returns satisfy the normality assumption.
18

List-mode SPECT reconstruction using empirical likelihood

Lehovich, Andre January 2005 (has links)
This dissertation investigates three topics related to image reconstruction from list-mode Anger camera data. Our main focus is the processing of photomultiplier-tube (PMT) signals directly into images. First we look at the use of list-mode calibration data to reconstruct a non-parametric likelihood model relating the object to the data list. The reconstructed model can then be combined with list-mode object data to produce a maximum-likelihood (ML) reconstruction, an approach we call double list-mode reconstruction. This trades off reduced prior assumptions about the properties of the imaging system for greatly increased processing time and increased uncertainty in the reconstruction. Second we use the list-mode expectation-maximization (EM) algorithm to reconstruct planar projection images directly from PMT data. Images reconstructed by EM are compared with images produced using the faster and more common technique of first producing ML position estimates, then histogramming to form an image. A mathematical model of the human visual system, the channelized Hotelling observer, is used to compare the reconstructions by performing the Rayleigh task, a traditional measure of resolution. EM is found to produce higher-resolution images than the histogram approach, suggesting that information is lost during the position estimation step. Finally we investigate which linear parameters of an object are estimable, in other words may be estimated without bias from list-mode data. We extend the notion of a linear system operator, familiar from binned-mode systems, to list-mode systems, and show the estimable parameters are determined by the range of the adjoint of the system operator. As in the binned-mode case, the list-mode sensitivity functions define "natural pixels" with which to reconstruct the object.
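As a sketch of the list-mode EM update discussed above, assuming each recorded event is reduced to an index into a discrete set of measurement attributes with a known system matrix: every iteration forward-projects the current image, weights each event's row by the reciprocal of its projection, and normalises by the sensitivity image. The function listmode_em and this data layout are illustrative assumptions, not the dissertation's treatment of raw PMT signals.

```python
import numpy as np

def listmode_em(A, event_rows, n_iter=50):
    """List-mode EM (MLEM) image update.
    A: (n_attributes x n_pixels) detection-probability matrix;
    event_rows: attribute index of each recorded event.  Sketch only."""
    x = np.ones(A.shape[1])                       # flat initial image
    sens = A.sum(axis=0) + 1e-12                  # sensitivity image
    for _ in range(n_iter):
        rows = A[event_rows]                      # one system-matrix row per event
        proj = rows @ x + 1e-12                   # forward projection per event
        x = x / sens * (rows / proj[:, None]).sum(axis=0)
    return x
```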
19

Network Exceptions Modelling Using Hidden Markov Model: A Case Study of Ericsson's Dropped Call Data

Li, Shikun January 2014 (has links)
In telecommunication, the series of mobile network exceptions is a process which exhibits surges and bursts. The bursty part is usually caused by system malfunction. Additionally, the mobile network exceptions are often time dependent. A model that successfully captures these aspects will make troubleshooting much easier for system engineers. The Hidden Markov Model (HMM) is a good candidate as it provides a mechanism to capture both the time dependency and the random occurrence of bursts. This thesis focuses on an application of the HMM to mobile network exceptions, with a case study of Ericsson's Dropped Call data. For estimation purposes, two methods of maximum likelihood estimation for the HMM, namely the EM algorithm and the stochastic EM algorithm, are used.
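A minimal Baum-Welch (EM) sketch for a two-state Poisson hidden Markov model, one natural way to capture a normal regime and a bursty regime in exception counts: the scaled forward-backward pass gives state posteriors, and the M-step updates the initial distribution, the transition matrix and the per-state rates. The Poisson emission model and the function fit_poisson_hmm are assumptions made for illustration; the thesis works out its own EM and stochastic EM estimators.

```python
import numpy as np
from scipy.stats import poisson

def fit_poisson_hmm(x, n_states=2, n_iter=100, seed=0):
    """Baum-Welch (EM) for a Poisson HMM on a 1-D array of counts,
    e.g. dropped calls per time bin.  Returns (pi, A, rates).  Sketch only."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    T = len(x)
    pi = np.full(n_states, 1.0 / n_states)
    A = rng.dirichlet(np.ones(n_states), size=n_states)      # transition matrix
    lam = np.quantile(x, np.linspace(0.2, 0.8, n_states)) + 1e-3
    for _ in range(n_iter):
        B = poisson.pmf(x[:, None], lam[None, :])             # T x K emission probabilities
        # scaled forward pass
        alpha = np.zeros((T, n_states)); c = np.zeros(T)
        alpha[0] = pi * B[0]; c[0] = alpha[0].sum() + 1e-300; alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[t]
            c[t] = alpha[t].sum() + 1e-300
            alpha[t] /= c[t]
        # backward pass using the same scaling constants
        beta = np.ones((T, n_states))
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[t + 1] * beta[t + 1])) / c[t + 1]
        gamma = alpha * beta                                   # P(state_t | data)
        xi = (alpha[:-1, :, None] * A[None] * (B[1:] * beta[1:])[:, None, :]
              / c[1:, None, None])                             # P(state_t, state_{t+1} | data)
        # M-step
        pi = gamma[0]
        A = xi.sum(axis=0) / (gamma[:-1].sum(axis=0)[:, None] + 1e-12)
        lam = (gamma * x[:, None]).sum(axis=0) / (gamma.sum(axis=0) + 1e-12)
    return pi, A, lam
```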
20

Iterative receivers for digital communications via variational inference and estimation

Nissilä, M. (Mauri) 08 January 2008 (has links)
In this thesis, iterative detection and estimation algorithms for digital communications systems in the presence of parametric uncertainty are explored and further developed. In particular, variational methods, which have been extensively applied in other research fields such as artificial intelligence and machine learning, are introduced and systematically used in deriving approximations to the optimal receivers in various channel conditions. The key idea behind the variational methods is to transform the problem of interest into an optimization problem via an introduction of extra degrees of freedom known as variational parameters. This is done so that, for fixed values of the free parameters, the transformed problem has a simple solution, solving approximately the original problem.

The thesis contributes to the state of the art of advanced receiver design in a number of ways. These include the development of new theoretical and conceptual viewpoints of iterative turbo-processing receivers as well as a new set of practical joint estimation and detection algorithms. Central to the theoretical studies is to show that many of the known low-complexity turbo receivers, such as linear minimum mean square error (MMSE) soft-input soft-output (SISO) equalizers and demodulators that are based on the Bayesian expectation-maximization (BEM) algorithm, can be formulated as solutions to the variational optimization problem. This new approach not only provides new insights into the current designs and structural properties of the relevant receivers, but also suggests some improvements on them.

In addition, SISO detection in multipath fading channels is considered with the aim of obtaining a new class of low-complexity adaptive SISOs. As a result, a novel, unified method is proposed and applied in order to derive recursive versions of the classical Baum-Welch algorithm and its Bayesian counterpart, referred to as the BEM algorithm. These formulations are shown to yield computationally attractive soft decision-directed (SDD) channel estimators for both deterministic and Rayleigh fading intersymbol interference (ISI) channels.

Next, by modeling the multipath fading channel as a complex bandpass autoregressive (AR) process, it is shown that the statistical parameters of radio channels, such as frequency offset, Doppler spread, and power-delay profile, can be conveniently extracted from the estimated AR parameters which, in turn, may be conveniently derived via an EM algorithm. Such a joint estimator for all relevant radio channel parameters has a number of virtues, particularly its capability to perform equally well in a variety of channel conditions.

Lastly, adaptive iterative detection in the presence of phase uncertainty is investigated. As a result, novel iterative joint Bayesian estimation and symbol a posteriori probability (APP) computation algorithms, based on the variational Bayesian method, are proposed for both constant-phase channel models and dynamic phase models, and their performance is evaluated via computer simulations.
