71

Contributions to Estimation and Testing Block Covariance Structures in Multivariate Normal Models

Liang, Yuli January 2015 (has links)
This thesis concerns inference problems in balanced random effects models with a so-called block circular Toeplitz covariance structure. This class of covariance structures describes the dependency of certain multivariate two-level data in which compound symmetry and circular symmetry appear simultaneously. We derive two covariance structures under two different invariance restrictions. The obtained covariance structures reflect both the circularity and the exchangeability present in the data. In particular, estimation in balanced random effects models with block circular covariance matrices is considered. The spectral properties of such patterned covariance matrices are provided. Maximum likelihood estimation is performed through the spectral decomposition of the patterned covariance matrices. Existence of explicit maximum likelihood estimators is discussed and sufficient conditions for obtaining explicit and unique estimators of the variance-covariance components are derived. Different restricted models are discussed and the corresponding maximum likelihood estimators are presented. This thesis also deals with hypothesis testing of block covariance structures, especially block circular Toeplitz covariance matrices. We consider both so-called external tests and internal tests. In the external tests, various hypotheses about block covariance structures, as well as mean structures, are considered, while the internal tests concern testing specific covariance parameters given the block circular Toeplitz structure. Likelihood ratio tests are constructed, and the null distributions of the corresponding test statistics are derived.
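A key ingredient behind explicit maximum likelihood estimators of this kind is that the eigenvalues and eigenvectors of a circular (circulant) symmetric covariance block are known in closed form. The following is a minimal numerical sketch, not taken from the thesis, illustrating that spectral property for a single hypothetical circulant block: its eigenvalues equal the discrete Fourier transform of its first row.

```python
import numpy as np
from scipy.linalg import circulant

# Hypothetical covariances for a circular layout of n = 6 positions: variance 2.0
# and correlations that depend only on the circular distance between positions.
first_row = np.array([2.0, 0.8, 0.3, 0.1, 0.3, 0.8])   # symmetric: row[k] == row[n-k]
sigma = circulant(first_row)                             # symmetric circulant covariance matrix

# Spectral property exploited for explicit MLEs: the eigenvalues of a symmetric
# circulant matrix are the (real) discrete Fourier transform of its first row,
# and the eigenvectors form the known Fourier basis.
eig_from_dft = np.sort(np.fft.fft(first_row).real)
eig_numeric = np.linalg.eigvalsh(sigma)

print(eig_from_dft)
print(eig_numeric)    # identical up to floating-point error
```

According to the abstract, the full block circular Toeplitz structure combines such circulant blocks with exchangeable between-block components, and the maximum likelihood estimation proceeds through exactly this kind of known spectral decomposition.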
72

Essays on Estimation Methods for Factor Models and Structural Equation Models

Jin, Shaobo January 2015 (has links)
This thesis, which consists of four papers, is concerned with estimation methods in factor analysis and structural equation models. New estimation methods are proposed and investigated. In Paper I an approximation of the penalized maximum likelihood (ML) is introduced to fit an exploratory factor analysis model. Approximated penalized ML continuously and efficiently shrinks the factor loadings towards zero. It naturally factorizes a covariance matrix or a correlation matrix and is applicable to both orthogonal and oblique structures. Paper II, a simulation study, investigates the properties of approximated penalized ML for an orthogonal factor model. Different combinations of penalty terms and tuning parameter selection methods are examined, and differences between factorizing a covariance matrix and factorizing a correlation matrix are explored. It is shown that the approximated penalized ML frequently improves on the traditional estimation-rotation procedure. In Paper III we focus on pseudo ML for multi-group data. Data from different groups are pooled and normal theory is used to fit the model. It is shown that pseudo ML produces consistent estimators of factor loadings and that it is numerically easier than multi-group ML. However, normal theory is not applicable for estimating standard errors, so a sandwich-type estimator of the standard errors is derived. Paper IV examines properties of the recently proposed polychoric instrumental variable (PIV) estimators for ordinal data through a simulation study. PIV is compared with conventional estimation methods (unweighted least squares and diagonally weighted least squares). PIV produces accurate estimates of factor loadings and factor covariances in the correctly specified confirmatory factor analysis model and accurate estimates of loadings and coefficient matrices in the correctly specified structural equation model. If the model is misspecified, the robustness of PIV depends on model complexity, the underlying distribution, and the choice of instrumental variables.
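To make the idea of penalized ML for exploratory factor analysis concrete, here is a small illustrative sketch, not the estimator from Papers I–II: a two-factor orthogonal model is fitted by minimizing the normal-theory ML discrepancy plus a smooth approximation to an L1 penalty on the loadings, which shrinks small cross-loadings towards zero. The data, penalty form, and tuning value are all hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy data: p = 6 observed variables generated from k = 2 orthogonal factors.
p, k, n = 6, 2, 500
true_L = np.array([[.8, 0], [.7, 0], [.6, 0], [0, .8], [0, .7], [0, .6]])
x = rng.standard_normal((n, k)) @ true_L.T + rng.standard_normal((n, p)) * 0.5
S = np.cov(x, rowvar=False)

eps, tau = 1e-4, 0.05   # smoothing constant and (hypothetical) tuning parameter

def objective(theta):
    L = theta[:p * k].reshape(p, k)
    psi = np.exp(theta[p * k:])                    # positive uniquenesses
    sigma = L @ L.T + np.diag(psi)
    _, logdet = np.linalg.slogdet(sigma)
    fit = logdet + np.trace(np.linalg.solve(sigma, S))      # normal-theory ML discrepancy
    penalty = tau * np.sum(np.sqrt(L ** 2 + eps))            # smooth approximation to the L1 penalty
    return fit + penalty

theta0 = np.concatenate([rng.normal(scale=0.3, size=p * k), np.zeros(p)])
res = minimize(objective, theta0, method="L-BFGS-B")
L_hat = res.x[:p * k].reshape(p, k)
print(np.round(L_hat, 2))   # small cross-loadings are shrunk towards zero
```

Because the penalty is not rotation invariant, the fitted loading matrix comes out in a (near-)simple structure directly, without a separate rotation step, which is the sense in which penalized ML can replace the traditional estimation-rotation procedure.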
73

Maximum Likelihood Estimators for ARMA and ARFIMA Models. A Monte Carlo Study.

Hauser, Michael A. January 1998 (has links) (PDF)
We analyze by simulation the properties of two time-domain and two frequency-domain estimators for low-order autoregressive fractionally integrated moving average Gaussian models, ARFIMA(p,d,q). The estimators considered are the exact maximum likelihood for demeaned data, EML, the associated modified profile likelihood, MPL, and the Whittle estimator with tapered data, WLT, and without, WL. The length of the series is 100. The estimators are compared in terms of pile-up effect, mean square error, bias, and empirical confidence level. The tapered version of the Whittle likelihood turns out to be a reliable estimator for ARMA and ARFIMA models; its small losses in performance for "well-behaved" models are sufficiently compensated in more "difficult" models. The modified profile likelihood is an alternative to the WLT but is computationally more demanding. It is either equivalent to or more favorable than the EML, and for fractionally integrated models in particular it clearly dominates the EML. The WL has serious deficiencies over large ranges of parameters and so cannot be recommended in general. The EML, on the other hand, should be used with care for fractionally integrated models due to its potentially large negative bias in the fractional integration parameter. In general, one should proceed with caution for ARMA(1,1) models with almost canceling roots, and, in particular, for the EML and the MPL, for inference in the vicinity of a moving average root of +1. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
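As a rough illustration of the untapered Whittle estimator (the WL variant) compared in the study, the sketch below simulates an ARFIMA(0,d,0) series of length 100 and minimizes the concentrated Whittle objective over d. It is not the authors' code; the simulation scheme (a truncated MA(∞) expansion) and all settings are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n, d_true, m = 100, 0.3, 2000

# Simulate ARFIMA(0, d, 0) via a truncated MA(inf) expansion: psi_j = psi_{j-1}*(j-1+d)/j.
j = np.arange(1, m + 1)
psi = np.concatenate(([1.0], np.cumprod((j - 1 + d_true) / j)))
eps = rng.standard_normal(n + m)
x = np.convolve(eps, psi)[m:m + n]          # x_t = sum_j psi_j * eps_{t-j}

# Periodogram at the Fourier frequencies 2*pi*j/n, j = 1, ..., n/2 - 1.
freqs = 2 * np.pi * np.arange(1, n // 2) / n
periodogram = np.abs(np.fft.fft(x)[1:n // 2]) ** 2 / (2 * np.pi * n)

def whittle_objective(d):
    # ARFIMA(0, d, 0) spectral shape |2 sin(freq/2)|^(-2d); the innovation variance
    # is concentrated out, giving the scale-free Whittle criterion.
    g = np.abs(2.0 * np.sin(freqs / 2.0)) ** (-2.0 * d)
    return np.log(np.mean(periodogram / g)) + np.mean(np.log(g))

res = minimize_scalar(whittle_objective, bounds=(-0.49, 0.49), method="bounded")
print(f"Whittle estimate of d: {res.x:.3f} (true value {d_true})")
```

Applying a data taper to the series before computing the periodogram would turn this into a tapered-Whittle (WLT-style) estimator, the variant the study finds more reliable.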
74

Statistical Signal Processing of ESI-TOF-MS for Biomarker Discovery

January 2012 (has links)
Signal processing techniques have been used extensively in many engineering problems, and in recent years their application has extended to non-traditional research fields such as biological systems. Many of these applications require extraction of a signal or parameter of interest from degraded measurements. One such application is mass spectrometry immunoassay (MSIA), which has been one of the primary biomarker discovery techniques. MSIA analyzes protein molecules as potential biomarkers using time-of-flight mass spectrometry (TOF-MS). Peak detection in TOF-MS is important for biomarker analysis and many other MS-related applications. Though many peak detection algorithms exist, most of them are based on heuristic models. One way of detecting signal peaks is to deploy stochastic models of the signal and noise observations. The likelihood ratio test (LRT) detector, based on the Neyman-Pearson (NP) lemma, is a uniformly most powerful approach to decision making in the form of a hypothesis test. The primary goal of this dissertation is to develop signal and noise models for electrospray ionization (ESI) TOF-MS data. A new method is proposed for developing the signal model by employing first-principles calculations based on device physics and molecular properties. The noise model is developed by analyzing MS data from careful experiments in the ESI mass spectrometer. A non-flat baseline in MS data is common, and the reasons behind its formation have not been fully understood. A new signal model explaining the presence of the baseline is proposed, though detailed experiments are needed to further substantiate the model assumptions. Signal detection schemes based on these signal and noise models are proposed. A maximum likelihood (ML) method is introduced for estimating the signal peak amplitudes. The performance of the detection methods and of the ML estimation is evaluated with Monte Carlo simulations, which show promising results. An application of these methods is proposed for fractional abundance calculation in biomarker analysis, which is mathematically robust and fundamentally different from the current algorithms. Biomarker panels for type 2 diabetes and cardiovascular disease are analyzed using existing MS analysis algorithms. Finally, a support vector machine based multi-classification algorithm is developed for evaluating the biomarkers' effectiveness in discriminating type 2 diabetes and cardiovascular disease, and is shown to perform better than a linear discriminant analysis based classifier. / Dissertation/Thesis / Ph.D. Electrical Engineering 2012
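As a highly simplified illustration of the ML amplitude estimation and Neyman-Pearson LRT detection ideas described above, and not the dissertation's models, the following sketch assumes a known Gaussian-shaped peak template of unknown amplitude in white Gaussian noise of known variance.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Hypothetical known peak template s[n] (Gaussian shape, centre 100, width 5) and
# a noisy observation containing the peak with unknown amplitude.
t = np.arange(200)
template = np.exp(-0.5 * ((t - 100) / 5.0) ** 2)
sigma = 1.0                                        # known noise standard deviation
y = 2.0 * template + rng.normal(scale=sigma, size=t.size)

# ML estimate of the amplitude: least-squares projection of y onto the template.
a_hat = np.dot(y, template) / np.dot(template, template)

# The Neyman-Pearson LRT for "peak present" vs "noise only" reduces to thresholding
# the matched-filter statistic T = <y, s>; the threshold fixes the false-alarm rate.
p_fa = 1e-3
threshold = norm.ppf(1 - p_fa) * sigma * np.sqrt(np.dot(template, template))
T = np.dot(y, template)

print(f"amplitude ML estimate: {a_hat:.3f}")      # close to the true value 2.0
print(f"peak detected: {T > threshold}")
```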
75

Modeling synthetic aperture radar image data

Matthew Pianto, Donald 31 January 2008 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / In this thesis we study maximum likelihood (ML) estimation of the roughness parameter of the G0A distribution for speckled images (Frery et al., 1997). We find that, when a certain condition on the sample moments is satisfied, the likelihood function is monotone and the ML estimates are infinite, implying a flat region. We implement four bias-correction estimators in an attempt to obtain finite ML estimates. Three of the estimators are taken from the monotone likelihood literature (Firth, 1993; Jeffreys, 1946) and one, based on resampling, is proposed by the author. We carry out Monte Carlo numerical experiments to compare the four estimators and find that there is no clear favourite, except when a parameter (given prior to estimation) takes a specific value. We also apply the estimators to real synthetic aperture radar data. This analysis shows that the estimators should be compared on their ability to correctly classify regions as rough, flat, or intermediate, rather than by their biases and mean squared errors.
76

Parameter Estimation and Hypothesis Testing for the Truncated Normal Distribution with Applications to Introductory Statistics Grades

Hattaway, James T. 09 March 2010 (has links) (PDF)
The normal distribution is a commonly seen distribution in nature, education, and business. Data that are mounded or bell shaped are easily found across various fields of study. Although the normal distribution has high utility, often the full range cannot be observed. The truncated normal distribution accounts for the inability to observe the full range and allows for inference back to the original population. Depending on the amount of truncation, the truncated normal has several distinct shapes. A simulation study evaluating the performance of the maximum likelihood estimators and the method of moments estimators is conducted and their performance is compared. An α-level likelihood ratio test (LRT) is derived for testing the null hypothesis of equal population means for truncated normal data. A simulation study evaluating the power of the LRT to detect absolute standardized differences between the two population means with small sample sizes is conducted and the power curves are approximated. Another simulation study evaluates the power of the LRT to detect absolute differences when testing the hypothesis with large, unequal sample sizes. The α-level LRT is then extended to a k-population test of equal population means, and a simulation study examines its power for detecting absolute standardized differences when one of the population means differs from the others. Stat 221 is the largest introductory statistics course at BYU, serving about 4,500 students a year. Every section of Stat 221 shares common homework assignments and tests, which controls for confounding when making comparisons between sections. Historically, grades have been thought to be bell shaped, but with grade inflation and other factors the upper tail is lost because of the truncation at 100. It is therefore reasonable to assume that grades follow a truncated normal distribution, and inference using the final grades should recognize the truncation. Performance of the different Stat 221 sections is evaluated using the derived LRTs.
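For readers who want to see the estimation step in practice, here is a minimal sketch, not from the thesis, of maximum likelihood estimation for an upper-truncated normal: hypothetical grade-like scores truncated at 100 are fitted by numerically minimizing the truncated-normal negative log-likelihood.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Hypothetical grade-like data: normal scores with the upper tail cut off at 100,
# generated by rejection sampling (keep only draws at or below the truncation point).
upper, mu_true, sigma_true = 100.0, 92.0, 8.0
raw = rng.normal(mu_true, sigma_true, size=2000)
grades = raw[raw <= upper][:300]

def neg_log_lik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)                       # keep sigma positive
    z = (grades - mu) / sigma
    # Upper-truncated normal density: phi(z)/sigma divided by Phi((upper - mu)/sigma).
    return -np.sum(norm.logpdf(z) - np.log(sigma) - norm.logcdf((upper - mu) / sigma))

start = [np.mean(grades), np.log(np.std(grades))]
res = minimize(neg_log_lik, x0=start, method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"MLE: mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}   (true 92.00, 8.00)")
```

Fitting an ordinary normal to the same data would underestimate both the mean and the spread, which is exactly why the truncation needs to be recognized when comparing sections.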
77

Statistical Inferences under a semiparametric finite mixture model

Zhang, Shiju January 2005 (has links)
No description available.
78

Preamble Design for Symbol Timing Estimation from SOQPSK-TG Waveforms

Erkmen, Baris I., Tkacenko, Andre, Okino, Clayton M. 10 1900 (has links)
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Data-aided symbol synchronization for bursty communications utilizes a predetermined modulation sequence, i.e., a preamble, preceding the payload. For effective symbol synchronization, this preamble must be designed in accordance with the modulation format. In this paper, we analyze preambles for shaped offset quadrature phase-shift keying (SOQPSK) waveforms. We compare the performance of several preambles by deriving the Cramér-Rao bound (CRB), and identify a desirable one for the Telemetry Group variant of SOQPSK. We also demonstrate, via simulation, that the maximum likelihood estimator with this preamble approaches the CRB at moderate signal-to-noise ratio.
79

Search for VH → leptons + bb̄ with the ATLAS experiment at the LHC

Debenedetti, Chiara January 2014 (has links)
The search for a Higgs boson decaying to a bb̄ pair is one of the key analyses ongoing at the ATLAS experiment. Despite being the decay with the largest branching ratio for a Standard Model Higgs boson, a large dataset is necessary to perform this analysis because of the very large backgrounds affecting the measurement. To discriminate the electroweak H → bb̄ signal from the large QCD backgrounds, the associated production of the Higgs with a W or a Z boson decaying leptonically is used. Different techniques have been proposed to enhance the signal-over-background ratio in the VH(bb̄) channel, from dedicated kinematic cuts, to a single large-radius jet identifying the two collimated b's in the high transverse momentum Higgs regime, to multivariate techniques. The high-pT approach, using a large-radius jet to identify the b's coming from the Higgs decay, has been tested against an analysis based on kinematic cuts for a dataset of 4.7 fb⁻¹ at √s = 7 TeV, and compatible results were found for the same transverse momentum range. Using a kinematic-cut-based approach, the VH(bb̄) signal search has been performed on the full LHC Run 1 dataset: 4.7 fb⁻¹ at √s = 7 TeV and 20.7 fb⁻¹ at √s = 8 TeV. Several backgrounds to this analysis, such as Wbb̄, have not yet been measured in data, and an accurate study of their theoretical description has been performed, comparing the predictions of various Monte Carlo generators at different orders. The complexity of the analysis requires a profile likelihood fit with several categories and almost 200 parameters, taking into account all the systematic uncertainties coming from experimental or modelling limitations, to extract the result. To validate the fit model, a test of the ability to extract the signal is performed on the resonant VZ(bb̄) background. A 4.8σ excess compatible with the Standard Model rate expectation has been measured, with a best-fit value μVZ = 0.93 +0.22/−0.21. The full LHC Run 1 result for the VH(bb̄) process is an observed (expected) limit of 1.4 (1.3) × SM, with a best-fit value of 0.2 ± 0.5 (stat) ± 0.4 (sys) for a Higgs boson of mass 125 GeV.
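The result above comes from a large profile likelihood fit. Purely as a conceptual toy illustration (a single counting experiment with one Gaussian-constrained background nuisance parameter, nothing like the full ATLAS fit with its ~200 parameters and many categories), the signal-strength extraction can be sketched as follows; all numbers are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy single-bin counting experiment: n observed events, expected signal s_exp per
# unit signal strength mu, and background b_exp known to +/- db from auxiliary data.
n_obs, s_exp, b_exp, db = 132, 30.0, 100.0, 10.0

def nll(mu, b):
    lam = mu * s_exp + b
    # Poisson term (up to constants) plus a Gaussian constraint on the background.
    return lam - n_obs * np.log(lam) + 0.5 * ((b - b_exp) / db) ** 2

def profiled_nll(mu):
    # Profile out the nuisance parameter b at each value of the signal strength.
    return minimize_scalar(lambda b: nll(mu, b), bounds=(1.0, 300.0), method="bounded").fun

fit = minimize_scalar(profiled_nll, bounds=(0.0, 5.0), method="bounded")
mu_hat, nll_min = fit.x, fit.fun

# Approximate 68% interval: signal strengths with 2 * delta(NLL) <= 1.
grid = np.linspace(0.0, 5.0, 501)
q = 2.0 * (np.array([profiled_nll(m) for m in grid]) - nll_min)
inside = grid[q <= 1.0]
print(f"mu_hat = {mu_hat:.2f}, ~68% interval = [{inside.min():.2f}, {inside.max():.2f}]")
```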
80

Evaluation of evidence for autocorrelated data, with an example relating to traces of cocaine on banknotes

Wilson, Amy Louise January 2014 (has links)
Much research in recent years on evidence evaluation in forensic science has focussed on methods for determining the likelihood ratio in various scenarios. One proposition concerning the evidence is put forward by the prosecution and another is put forward by the defence. The likelihood of each of these two propositions is calculated, given the evidence. The likelihood ratio, or value of the evidence, is then given by the ratio of the likelihoods associated with these two propositions. The aim of this research is twofold. Firstly, it is intended to provide methodology for the evaluation of the likelihood ratio for continuous autocorrelated data. The likelihood ratio is evaluated for two such scenarios. The first is when the evidence consists of data which are autocorrelated at lag one. The second, an extension to this, is when the observed evidential data are also believed to be driven by an underlying latent Markov chain. Two models have been developed to take these attributes into account, an autoregressive model of order one and a hidden Markov model, which does not assume independence of adjacent data points conditional on the hidden states. A nonparametric model which does not make a parametric assumption about the data and which accounts for lag one autocorrelation is also developed. The performance of these three models is compared to the performance of a model which assumes independence of the data. The second aim of the research is to develop models to evaluate evidence relating to traces of cocaine on banknotes, as measured by the log peak area of the ion count for cocaine product ion m/z 105, obtained using tandem mass spectrometry. Here, the prosecution proposition is that the banknotes are associated with a person who is involved with criminal activity relating to cocaine and the defence proposition is the converse, which is that the banknotes are associated with a person who is not involved with criminal activity relating to cocaine. Two data sets are available, one of banknotes seized in criminal investigations and associated with crime involving cocaine, and one of banknotes from general circulation. Previous methods for the evaluation of this evidence were concerned with the percentage of banknotes contaminated or assumed independence of measurements of quantities of cocaine on adjacent banknotes. It is known that nearly all banknotes have traces of cocaine on them, and it was found that there was autocorrelation within samples of banknotes, so these methods are not appropriate. The models developed for autocorrelated data are applied to evidence relating to traces of cocaine on banknotes; the results obtained for each of the models are compared using rates of misleading evidence, Tippett plots and scatter plots. It is found that the hidden Markov model is the best choice for the modelling of cocaine traces on banknotes because it has the lowest rate of misleading evidence and it also results in likelihood ratios which are large enough to give support to the prosecution proposition for some samples of banknotes seized from crime scenes. Comparison of the results obtained for models which take autocorrelation into account with the results obtained from the model which assumes independence indicates that not accounting for autocorrelation can result in the overstating of the likelihood ratio.
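To make the autocorrelated likelihood-ratio idea concrete, the sketch below computes a log likelihood ratio for a sequence of log peak areas under two fully specified stationary AR(1) models, one per proposition. In the thesis the population models are learned from the seized and general-circulation data sets; here the parameters are purely hypothetical and the questioned sample is simulated.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

# Hypothetical population parameters for log peak areas on a sample of banknotes:
# "crime-associated" (prosecution, Hp) and "general circulation" (defence, Hd).
params = {"Hp": dict(mu=9.0, sigma=1.2, phi=0.5),
          "Hd": dict(mu=7.5, sigma=1.0, phi=0.3)}

def ar1_log_lik(x, mu, sigma, phi):
    """Exact Gaussian log-likelihood of a stationary AR(1) sequence."""
    z = x - mu
    stat_sd = sigma / np.sqrt(1 - phi ** 2)                          # marginal (stationary) sd
    ll = norm.logpdf(z[0], scale=stat_sd)                            # first observation
    ll += np.sum(norm.logpdf(z[1:] - phi * z[:-1], scale=sigma))     # conditional terms
    return ll

# A simulated questioned sample of 20 banknotes drawn from the Hp population.
p, n = params["Hp"], 20
x = np.empty(n)
x[0] = rng.normal(p["mu"], p["sigma"] / np.sqrt(1 - p["phi"] ** 2))
for t in range(1, n):
    x[t] = p["mu"] + p["phi"] * (x[t - 1] - p["mu"]) + rng.normal(scale=p["sigma"])

log_lr = ar1_log_lik(x, **params["Hp"]) - ar1_log_lik(x, **params["Hd"])
print(f"log likelihood ratio (Hp vs Hd): {log_lr:.2f}")   # > 0 supports the prosecution proposition
```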
