11

The covariance structure of conditional maximum likelihood estimates

Strasser, Helmut 11 1900 (has links) (PDF)
In this paper we consider conditional maximum likelihood (cml) estimates for item parameters in the Rasch model under random subject parameters. We give a simple approximation for the asymptotic covariance matrix of the cml estimates. The approximation is stated as a limit theorem as the number of item parameters goes to infinity, and the results contain precise mathematical information on the order of approximation. They make it possible to analyze the covariance structure of cml estimates when the number of items is large. The rough picture is as follows. The covariance matrix has a dominating main diagonal containing the asymptotic variances of the estimators. These variances are almost equal to the efficient variances under ml estimation when the distribution of the subject parameter is known. Except for very small numbers n of item parameters, the variances are hardly affected by n, and the covariances are more or less negligible when the number of item parameters is large. Although this picture is intuitively unsurprising, it has to be established in precise mathematical terms, which is what the present paper does. The paper builds on previous results [5] of the author concerning conditional distributions of non-identical replications of Bernoulli trials; the mathematical background is Edgeworth expansions for the central limit theorem. Those previous results yield approximations for the Fisher information matrices of cml estimates, while the main results of the present paper concern the approximation of the covariance matrices. Numerical illustrations of the results, and numerical experiments based on them, are presented in Strasser [6].
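For context, the following is a standard sketch of the setup (our notation, not reproduced from the paper): in the Rasch model, conditioning on a subject's raw score removes the subject parameter from the likelihood, which is what makes cml estimation possible.

```latex
% Standard Rasch setup: subject v with ability \theta_v answers item i with
% difficulty \beta_i correctly with probability
\[
  P(X_{vi}=1 \mid \theta_v, \beta_i)
    = \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)}.
\]
% Conditioning on the raw score r_v = \sum_{i=1}^{n} x_{vi} eliminates \theta_v:
\[
  P(x_{v1},\dots,x_{vn} \mid r_v; \beta)
    = \frac{\exp\!\left(-\sum_{i=1}^{n} x_{vi}\,\beta_i\right)}
           {\gamma_{r_v}\!\left(e^{-\beta_1},\dots,e^{-\beta_n}\right)},
\]
% where \gamma_r denotes the elementary symmetric polynomial of order r. The
% cml estimates maximize the product of these conditional probabilities, which
% is free of the subject parameters.
```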
12

Maximum likelihood estimation of phylogenetic tree with evolutionary parameters

Wang, Qiang 19 May 2004 (has links)
No description available.
13

Multiple imputation in the presence of a detection limit, with applications: an empirical approach / Shawn Carl Liebenberg

Liebenberg, Shawn Carl January 2014 (has links)
Scientists often encounter unobserved or missing measurements that are typically reported as less than a fixed detection limit. This occurs especially in the environmental sciences, where detection of low exposures is not possible due to limitations of the measuring instrument; the resulting data are often referred to as type I and type II left-censored data. Observations lying below the detection limit are therefore often ignored or 'guessed', because they cannot be measured accurately. However, reliable estimates of the population parameters are nevertheless required to perform statistical analysis. Dealing with values below a detection limit becomes increasingly complex when a large number of observations fall below the limit. Researchers are therefore interested in developing statistically robust estimation procedures for left- or right-censored data sets (Singh and Nocerino, 2002). This study focuses on several components of the problems mentioned above. The imputation of censored data below a fixed detection limit is studied, particularly using the maximum likelihood procedure of Cohen (1959) and several variants thereof, in combination with four new variations of the multiple imputation concept found in the literature. The focus also falls strongly on estimating the density of the resulting imputed, 'complete' data set by applying various kernel density estimators; bandwidth selection issues are not addressed in this study and are left for further research. The maximum likelihood estimation method of Cohen (1959) is compared with several variant methods to establish which of these procedures for censored data estimates the population parameters of three chosen lognormal distributions most reliably, in terms of well-known discrepancy measures. These methods are implemented in combination with four new multiple imputation procedures, respectively, to assess which of these nonparametric methods is most effective at imputing the censored values below the detection limit, with regard to the global discrepancy measures mentioned above. Several variations of the Parzen-Rosenblatt kernel density estimate are fitted to the complete, filled-in data sets obtained from the previous methods, to establish which is the preferred data-driven method for estimating these densities. The primary focus of the study is therefore the performance of the four chosen multiple imputation methods, together with recommendations on methods and procedural combinations for dealing with data in the presence of a detection limit. An extensive Monte Carlo simulation study was performed to compare the various methods and procedural combinations, and conclusions and recommendations are based on its results. / MSc (Statistics), North-West University, Potchefstroom Campus, 2014
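As a concrete illustration of the likelihood involved (a minimal sketch under assumed names, not the thesis's code or Cohen's exact tabular algorithm): for type I left-censored lognormal data, detected values contribute the log-density and each non-detect contributes the log-probability of falling below the detection limit, and the resulting likelihood can be maximized numerically.

```python
import numpy as np
from scipy import stats, optimize

def fit_censored_lognormal(observed, n_censored, dl):
    """ML estimates (mu, sigma) on the log scale, with n_censored values < dl."""
    log_obs = np.log(observed)
    log_dl = np.log(dl)

    def neg_loglik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)  # parameterize to keep sigma positive
        ll = stats.norm.logpdf(log_obs, mu, sigma).sum()          # detects
        ll += n_censored * stats.norm.logcdf(log_dl, mu, sigma)   # non-detects
        return -ll

    res = optimize.minimize(neg_loglik, x0=[log_obs.mean(), np.log(log_obs.std())])
    return res.x[0], np.exp(res.x[1])

# Example: lognormal(0, 1) data censored at a detection limit of 0.5
rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=1.0, size=500)
dl = 0.5
print(fit_censored_lognormal(x[x >= dl], int((x < dl).sum()), dl))
```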
15

Sensory Integration During Goal Directed Reaches: The Effects of Manipulating Target Availability

Khanafer, Sajida 19 October 2012 (has links)
When using visual and proprioceptive information to plan a reach, it has been proposed that the brain combines these cues to estimate the object's and/or limb's location. Specifically, according to the maximum-likelihood estimation (MLE) model, more reliable sensory inputs are assigned a greater weight (Ernst & Banks, 2002). In this research we examined whether the brain is able to adjust which sensory cue it weights the most; specifically, we asked whether the brain changes how it weights sensory information when the availability of a visual cue is manipulated. Twenty-four healthy subjects reached to visual (V), proprioceptive (P), or visual + proprioceptive (VP) targets under different visual delay conditions (e.g., on V and VP trials the visual target was available for the entire reach, was removed at the go-signal, or was removed 1, 2, or 5 seconds before the go-signal). Subjects completed 5 blocks of trials, with 90 trials per block. For 12 subjects the visual delay was kept consistent within a block of trials, while for the other 12 subjects different visual delays were intermixed within a block. To establish which sensory cue subjects weighted the most, we compared endpoint positions achieved on V and P reaches to those on VP reaches. Results indicated that all subjects weighted sensory cues in accordance with the MLE model across all delay conditions, and that these weights were similar regardless of the visual delay. Moreover, while errors increased with longer visual delays, there was no change in reaching variance. Thus, manipulating the visual environment was not enough to change subjects' weighting strategy.
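To make the MLE weighting rule concrete, here is a toy sketch of the reliability-weighted combination the study tests (illustrative numbers only, not data from the experiment): each cue is weighted by its inverse variance.

```python
def mle_combine(est_v, var_v, est_p, var_p):
    """Reliability-weighted fusion of a visual and a proprioceptive estimate."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_p)   # weight on vision
    w_p = 1 - w_v                                  # weight on proprioception
    combined = w_v * est_v + w_p * est_p
    combined_var = 1 / (1 / var_v + 1 / var_p)     # fused variance is lower
    return combined, combined_var

# Vision twice as reliable as proprioception -> it gets 2/3 of the weight.
print(mle_combine(est_v=10.0, var_v=1.0, est_p=12.0, var_p=2.0))
```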
16

Estimação em modelos funcionais com erro normais e repetições não balanceadas / Estimation in functional models with normal errors and unbalanced replications

Joan Neylo da Cruz Rodriguez 29 April 2008 (has links)
This dissertation studies the efficiency of parameter estimators in the functional errors-in-variables model with constant variances, where the lack of identifiability is resolved by considering replications. Estimation is carried out using maximum likelihood and the corrected score approach; the two approaches, compared on simulated data, lead to similar results.
17

The ACD model with an application to the Brazilian interbank rate futures market

Assunção, Adão Vone Teixeira de 31 March 2016 (has links)
We applied the basic Autoregressive Conditional Duration (ACD) model to the Brazilian interbank rate futures market. The sample was built using contracts in the month prior to expiration, to replicate a one-month bond curve, and the period studied runs from July 2013 to September 2015. We used maximum likelihood estimation based on the most popular probability distributions in the ACD literature (exponential, gamma, and Weibull) and found that estimation based on the exponential distribution was the best option for modeling the data.
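For concreteness, a minimal sketch (assumed setup, not the dissertation's code) of the exponential ACD(1,1) model and its maximum likelihood fit: durations satisfy x_i = psi_i * eps_i with unit-exponential eps_i and conditional mean psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1}.

```python
import numpy as np
from scipy.optimize import minimize

def acd_neg_loglik(params, x):
    """Negative log-likelihood of ACD(1,1) with unit-exponential errors."""
    omega, alpha, beta = params
    psi = np.empty_like(x)
    psi[0] = x.mean()                       # a common initialization choice
    for i in range(1, len(x)):
        psi[i] = omega + alpha * x[i - 1] + beta * psi[i - 1]
    return np.sum(np.log(psi) + x / psi)    # exponential durations given psi

def fit_acd(x):
    res = minimize(acd_neg_loglik, x0=[0.1 * x.mean(), 0.1, 0.8], args=(x,),
                   bounds=[(1e-8, None), (0.0, 1.0), (0.0, 1.0)])
    return res.x

# Simulate durations and check parameter recovery.
rng = np.random.default_rng(0)
omega, alpha, beta, n = 0.1, 0.2, 0.7, 5000
x = np.empty(n)
psi = omega / (1 - alpha - beta)            # unconditional mean duration
for i in range(n):
    x[i] = psi * rng.exponential()
    psi = omega + alpha * x[i] + beta * psi
print(fit_acd(x))                           # approximately (0.1, 0.2, 0.7)
```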
19

Statistical detection with weak signals via regularization

Li, Jinzheng 01 July 2012 (has links)
There has been increasing interest in uncovering smuggled nuclear materials in connection with the War on Terror; detection of special nuclear materials (SNM) hidden in cargo containers is a major challenge for national and international security. We propose a new physics-based method to determine the presence of the spectral signature of one or more nuclides in a poorly resolved spectrum with weak signatures. The method differs from traditional methods that rely primarily on peak-finding algorithms. The new approach considers each of the signatures in the library to be a linear combination of subspectra, obtained by assuming a signature consisting of just one of the unique gamma rays emitted by the nuclide. We propose a Poisson regression model for deducing which nuclides are present in the observed spectrum. Recognizing that a radiation source generally comprises only a few nuclear materials, the underlying Poisson model is sparse, i.e., most of the regression coefficients are zero (positive coefficients correspond to the presence of nuclear materials). We develop an iterative algorithm for a penalized likelihood estimation that promotes sparsity. We illustrate the efficacy of the proposed method by simulations in a variety of poorly resolved, low signal-to-noise ratio (SNR) situations, which show that the proposed approach enjoys excellent empirical performance even with SNR as low as -15 dB. The proposed method is shown to be variable-selection consistent, in the framework of increasing detection time and under mild regularity conditions. We also study the problem of testing for shielding, i.e., the presence of intervening materials that attenuate the gamma-ray signal. We show that, as detection time increases to infinity, the Lagrange multiplier test, the likelihood ratio test, and the Wald test are asymptotically equivalent under the null hypothesis, and that their asymptotic null distribution is chi-squared; we also derive the local power of these tests. Finally, we develop a nonparametric approach for detecting spectra indicative of the presence of SNM. This approach characterizes the change in shape of a spectrum relative to background radiation by means of a dissimilarity function defined over all energy channels. We derive the asymptotic null distributions of the tests in terms of functionals of the Brownian bridge. Simulation results show that the proposed approach is very powerful and promising for detecting weak signals, accurately detecting signals with SNR as low as -37 dB.
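As an illustration of the regularized-likelihood idea (our own sketch, not the authors' algorithm): with a nonnegativity constraint on the coefficients, the L1 penalty reduces to a plain sum, so the penalized Poisson likelihood can be handled directly by a bound-constrained solver. The matrix names and toy dimensions below are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def fit_sparse_poisson(S, y, bg, lam):
    """L1-penalized Poisson regression with nonnegative coefficients.

    Counts y over energy channels are modeled as y ~ Poisson(S @ b + bg),
    with nuclide signatures in the columns of S and known background bg.
    """
    n_nuclides = S.shape[1]

    def neg_penalized_loglik(b):
        mu = S @ b + bg
        # Poisson log-likelihood up to a constant, plus L1 penalty (b >= 0).
        return -(y @ np.log(mu) - mu.sum()) + lam * b.sum()

    res = minimize(neg_penalized_loglik, x0=np.full(n_nuclides, 0.1),
                   bounds=[(0, None)] * n_nuclides)
    return res.x  # near-zero entries -> nuclide judged absent

# Example: 2 of 5 library nuclides present in a noisy 128-channel spectrum.
rng = np.random.default_rng(3)
S = rng.uniform(0, 1, size=(128, 5))          # toy signature subspectra
b_true = np.array([4.0, 0.0, 0.0, 2.0, 0.0])
bg = np.full(128, 1.0)
y = rng.poisson(S @ b_true + bg)
print(np.round(fit_sparse_poisson(S, y, bg, lam=5.0), 2))
```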
20

Extensões do Modelo Potência Normal / Power Normal Model extensions

Siroky, Andressa Nunes 29 March 2019 (has links)
In the analysis of data exhibiting some degree of asymmetry, kurtosis, or bimodality, the assumption of normality is not valid, and models that capture these characteristics of the data are required. In this context, a new class of asymmetric bimodal distributions generated by a mixture mechanism is proposed in this work. Some properties are studied and presented for the particular case that takes the normal distribution as the base family of this class, resulting in the so-called Power Normal Mixture (MPN) model. Two simulation algorithms are developed to generate random variables with this new distribution. The frequentist approach is used for inference on the model parameters, and simulation studies are carried out to assess the behavior of the maximum likelihood estimates. In addition, a regression model for bimodal data is proposed, with the power normal mixture distribution as the response variable within the Generalized Additive Models for Location, Scale and Shape (GAMLSS) framework; simulation studies are also performed for this regression model. In both cases studied, the proposed model is illustrated using a real data set on players' scores in the 2014/2015 Male Brazilian Volleyball Superliga. On this data set, the power normal mixture model fits better than models already existing in the literature for bimodal data.
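As a rough illustration of the mixture-plus-maximum-likelihood mechanics only (explicitly not the thesis's MPN construction: SciPy's power-normal family uses the parameterization c·φ(x)·Φ(−x)^(c−1), which need not match the distribution studied here), a two-component mixture can be fit to bimodal data by direct likelihood maximization:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def neg_loglik(params, x):
    """Negative log-likelihood of a two-component power-normal mixture."""
    p_logit, c1, loc1, c2, loc2, log_scale = params
    p = 1 / (1 + np.exp(-p_logit))        # mixing weight kept in (0, 1)
    scale = np.exp(log_scale)             # shared scale kept positive
    f1 = stats.powernorm.pdf(x, c1, loc=loc1, scale=scale)
    f2 = stats.powernorm.pdf(x, c2, loc=loc2, scale=scale)
    return -np.sum(np.log(p * f1 + (1 - p) * f2 + 1e-300))

# Bimodal toy data: two shifted power-normal components.
rng = np.random.default_rng(7)
x = np.concatenate([
    stats.powernorm.rvs(1.5, loc=-2.0, size=300, random_state=rng),
    stats.powernorm.rvs(1.5, loc=2.0, size=200, random_state=rng),
])
res = minimize(neg_loglik, x0=[0.0, 1.0, -1.5, 1.0, 1.5, 0.0], args=(x,),
               bounds=[(None, None), (0.1, 10), (None, None),
                       (0.1, 10), (None, None), (None, None)])
print(res.x)  # fitted (logit weight, c1, loc1, c2, loc2, log scale)
```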
