  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Simulation method with antithetic variates / Méthode de simulation avec les variables antithétiques

Gatarayiha, Jean Philippe 06 1900 (has links)
In this master's thesis, we study a Monte Carlo simulation method that uses antithetic variates to estimate the integral of a function f(x) over the interval (0,1], where f may be monotone, non-monotone, or otherwise difficult to simulate. The main idea of the proposed method is to subdivide the interval (0,1] into m sections, each of which is further subdivided into l subintervals. The technique proceeds in several stages, and the variance decreases each time we move to the next stage: the variance obtained at the k-th stage is smaller than that obtained at the (k-1)-th stage. This also reduces the estimation error, because the estimator of the integral of f(x) over [0,1] is unbiased. The objective is to find m, the optimal number of sections, that yields this reduction in variance. / The files accompanying this document were prepared with LaTeX, and the simulations were carried out in S-PLUS.
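A minimal sketch of the idea in Python (hypothetical code, not the author's S-PLUS implementation): the interval (0,1] is split into m equal sections, each uniform draw is paired with its antithetic counterpart within the section, and the paired evaluations of f are averaged. The single-stage scheme below only illustrates the stratified antithetic estimator, not the multi-stage refinement described in the abstract.

```python
import numpy as np

def antithetic_stratified_estimate(f, m, n_pairs=1000, seed=None):
    """Estimate the integral of f over (0, 1] by stratifying into m equal
    sections and using antithetic pairs within each section."""
    rng = np.random.default_rng(seed)
    width = 1.0 / m
    total = 0.0
    for j in range(m):
        u = rng.uniform(size=n_pairs)              # uniforms in (0, 1)
        x = j * width + width * u                  # draws in section j
        x_anti = j * width + width * (1.0 - u)     # antithetic counterparts
        total += width * np.mean(0.5 * (f(x) + f(x_anti)))
    return total

# Example: the integral of exp(x) over (0, 1] is e - 1, about 1.7183
print(antithetic_stratified_estimate(np.exp, m=8, seed=0))
```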
42

Location-based estimation of the autoregressive coefficient in ARX(1) models.

Kamanu, Timothy Kevin Kuria January 2006 (has links)
In recent years, two estimators have been proposed to correct the bias exhibited by the least-squares (LS) estimator of the lagged dependent variable (LDV) coefficient in dynamic regression models when the sample is finite. They have been termed the 'mean-unbiased' and 'median-unbiased' estimators. Relative to other similar procedures in the literature, the two location-based estimators have the advantage that they offer an exact and uniform methodology for LS estimation of the LDV coefficient in a first-order autoregressive model with or without exogenous regressors, i.e. ARX(1). However, no attempt has been made to accurately establish and/or compare the statistical properties of these estimators, either among themselves or relative to those of the LS estimator, when the LDV coefficient is restricted to realistic values. Neither has there been an attempt to compare their performance in terms of mean squared error (MSE) when various forms of the exogenous regressors are considered. Furthermore, only implicit confidence intervals have been given for the 'median-unbiased' estimator; explicit confidence bounds that are directly usable for inference are not available for either estimator. In this study a new estimator of the LDV coefficient is proposed: the 'most-probably-unbiased' estimator. Its performance properties vis-à-vis the existing estimators are determined and compared when the parameter space of the LDV coefficient is restricted. In addition, the following new results are established: (1) an explicit computable form for the density of the LS estimator is derived for the first time and an efficient method for its numerical evaluation is proposed; (2) the exact bias, mean, median and mode of the distribution of the LS estimator are determined in three specifications of the ARX(1) model; (3) the exact variance and MSE of the LS estimator are determined; (4) the standard errors associated with determining the same quantities by simulation rather than numerical integration are established, and the two methods are compared in terms of computational time and effort; (5) an exact method of evaluating the density of the three estimators is described; (6) their exact bias, mean, variance and MSE are determined and analysed; and finally, (7) a method of obtaining explicit exact confidence intervals from the distribution functions of the estimators is proposed. The discussion and results show that the estimators are still biased in the usual sense, 'in expectation', but the bias is substantially reduced compared to that of the LS estimator. The findings are important in the specification of time-series regression models, point and interval estimation, decision theory, and simulation.
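As a hedged illustration of the bias the location-based estimators are designed to correct, the Python sketch below (hypothetical code, not from the thesis) estimates by Monte Carlo the finite-sample bias of the LS estimator of the LDV coefficient in a simple AR(1) model without exogenous regressors.

```python
import numpy as np

def ls_lagged_coefficient(y):
    """LS estimate of rho in y_t = rho * y_{t-1} + e_t (no intercept)."""
    y_lag, y_cur = y[:-1], y[1:]
    return np.dot(y_lag, y_cur) / np.dot(y_lag, y_lag)

def ls_bias(rho=0.9, n=30, n_rep=20000, seed=0):
    """Monte Carlo estimate of E[rho_hat] - rho for a sample of size n."""
    rng = np.random.default_rng(seed)
    estimates = np.empty(n_rep)
    for r in range(n_rep):
        e = rng.standard_normal(n)
        y = np.empty(n)
        y[0] = e[0]
        for t in range(1, n):
            y[t] = rho * y[t - 1] + e[t]
        estimates[r] = ls_lagged_coefficient(y)
    return estimates.mean() - rho

print(ls_bias())   # clearly negative: the LS estimator is biased downward
```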
43

Wald tests for IV regression with weak instruments

Vilela, Lucas Pimentel 17 September 2013 (has links)
This dissertation deals with the problem of making inference when there is weak identification in instrumental variables regression models. More specifically, we are interested in one-sided hypothesis testing for the coefficient of the endogenous variable when the instruments are weak. The focus is on conditional tests based on the likelihood ratio, score and Wald statistics. Theoretical and numerical work shows that the conditional t-test based on the two-stage least squares (2SLS) estimator performs well even when the instruments are weakly correlated with the endogenous variable. The conditional approach corrects its size uniformly, and when the population F-statistic is as small as two, its power is near the power envelopes for similar and non-similar tests. This finding is surprising given the poor performance of the two-sided conditional t-tests found in Andrews, Moreira and Stock (2007). Given this counter-intuitive result, we propose novel two-sided t-tests which are approximately unbiased and can perform as well as the conditional likelihood ratio (CLR) test of Moreira (2003).
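To make the weak-instrument setting concrete, here is a hedged Python sketch (hypothetical, not the dissertation's code) of 2SLS with a single weak instrument, together with the first-stage F-statistic that signals weak identification; the conditional tests studied in the dissertation are not implemented here.

```python
import numpy as np

def two_sls(y, x, z):
    """Two-stage least squares for one endogenous regressor x and
    instrument matrix z (no exogenous controls, for simplicity)."""
    pi_hat, *_ = np.linalg.lstsq(z, x, rcond=None)   # first stage
    x_hat = z @ pi_hat
    return np.dot(x_hat, y) / np.dot(x_hat, x_hat)   # second stage

rng = np.random.default_rng(1)
n, beta, pi = 200, 1.0, 0.1                 # small pi -> weak instrument
z = rng.standard_normal((n, 1))
v = rng.standard_normal(n)
u = 0.8 * v + rng.standard_normal(n)        # u correlated with v: endogeneity
x = pi * z[:, 0] + v
y = beta * x + u

print("2SLS estimate:", two_sls(y, x, z))

# First-stage F-statistic (single instrument, no intercept):
# small values indicate weak identification
pi1 = np.dot(z[:, 0], x) / np.dot(z[:, 0], z[:, 0])
resid = x - pi1 * z[:, 0]
f_stat = pi1**2 * np.dot(z[:, 0], z[:, 0]) / (np.dot(resid, resid) / (n - 1))
print("first-stage F:", f_stat)
```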
44

Discrete algebra and geometry applied to the Pauli group and mutually unbiased bases in quantum information theory / Algèbre et géométrie discrètes appliquées au groupe de Pauli et aux bases décorrélées en théorie de l’information quantique

Albouy, Olivier 12 June 2009 (has links)
For d not a power of a prime, the maximal number of mutually unbiased bases (MUBs) in a d-dimensional Hilbert space is still unknown. In this thesis, we begin with an original construction of MUBs by means of Gauss sums, in relation with a family of irreducible representations of the Lie algebra su(2). We then systematically study the possibility of building such bases by means of Pauli operators. (1) The study of the projective line over Zdm shows that, in order to obtain maximal sets of MUBs with Pauli operators, tensor products of these operators must be considered. (2) Lagrangian submodules of Zd2n, of which we give a complete classification, account for maximally commuting sets of Pauli operators. This classification makes it possible to identify which of these sets are likely to yield unbiased bases: they correspond to Lagrangian half-modules, which can also be interpreted as the isotropic points of the projective line (P(Mat(n, Zd)²), ω). We then make explicit an isomorphism between the unbiased bases thus obtained and distant Lagrangian half-modules, which also clarifies the correspondence between Gauss sums and MUBs. (3) Corollaries on the Clifford group and the finite phase space are then developed. Finally, we present some tools inspired by the preceding study: the cross-ratio on the Bloch sphere, projective geometry in higher dimension, Pauli operators with continuous exponents, and a comparison of the von Neumann entropy with a determinantal measure of entanglement.
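For readers unfamiliar with mutual unbiasedness, the hedged Python sketch below (an illustrative example, unrelated to the thesis's Gauss-sum or Pauli constructions) checks the defining property |<e_i|f_j>|^2 = 1/d for the computational and discrete-Fourier bases in a prime dimension.

```python
import numpy as np

def fourier_basis(d):
    """Columns are the discrete-Fourier basis vectors of C^d."""
    omega = np.exp(2j * np.pi / d)
    j, k = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
    return omega ** (j * k) / np.sqrt(d)

def mutually_unbiased(B1, B2, tol=1e-10):
    """True if |<b1_i | b2_j>|^2 = 1/d for every pair of columns."""
    d = B1.shape[0]
    overlaps = np.abs(B1.conj().T @ B2) ** 2
    return np.allclose(overlaps, 1.0 / d, atol=tol)

d = 5                                        # a prime dimension
computational = np.eye(d, dtype=complex)
print(mutually_unbiased(computational, fourier_basis(d)))   # True
```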
45

Optimum Savitzky-Golay Filtering for Signal Estimation

Krishnan, Sunder Ram January 2013 (has links) (PDF)
Motivated by the classic works of Charles M. Stein, we focus on developing risk-estimation frameworks for denoising problems in both one and two dimensions. We assume a standard additive noise model, and formulate the denoising problem as one of estimating the underlying clean signal from noisy measurements by minimizing a risk corresponding to a chosen loss function. Our goal is to incorporate perceptually motivated loss functions wherever applicable, as in the case of speech enhancement, with the squared error loss being considered for the other scenarios. Since the true risks depend on the unknown parameter of interest, we circumvent the roadblock by deriving finite-sample unbiased estimators of the corresponding risks based on Stein's lemma. We establish the link between the multivariate parameter estimation problem addressed by Stein and our denoising problem, and derive estimators of the oracle risks. In all cases, optimum values of the parameters characterizing the denoising algorithm are determined by minimizing Stein's unbiased risk estimator (SURE). The key contribution of this thesis is the development of a risk-estimation approach for choosing the two critical parameters affecting the quality of nonparametric regression, namely, the order and the bandwidth/smoothing parameters. This is a classic problem in statistics, and certain algorithms relying on the derivation of suitable finite-sample risk estimators for minimization have been reported in the literature (note that all these works consider the mean squared error (MSE) objective). We show that a SURE-based formalism is well suited to the regression parameter selection problem, and that the optimum solution guarantees near-minimum-MSE (MMSE) performance. We develop algorithms for both globally and locally choosing the two parameters, the latter referred to as spatially adaptive regression. We observe that the parameters are chosen so as to trade off the squared bias and variance quantities that constitute the MSE. We also indicate the advantages accruing from incorporating a regularization term in the cost function in addition to the data error term. In the more general case of kernel regression, which uses a weighted least-squares (LS) optimization, we consider the applications of image restoration from very few random measurements, in addition to denoising of uniformly sampled data. We show that local polynomial regression (LPR) becomes a special case of kernel regression, and extend our results for LPR on uniform data to non-uniformly sampled data. The denoising algorithms are compared with other standard, performant methods available in the literature both in terms of estimation error and computational complexity. A major perspective provided in this thesis is that the problem of optimum parameter choice in nonparametric regression can be viewed as the selection of optimum parameters of a linear, shift-invariant filter. This interpretation draws motivation from the hallmark paper of Savitzky and Golay and from Schafer's recent article in IEEE Signal Processing Magazine. It is worth noting that Savitzky and Golay showed in their original Analytical Chemistry article that LS fitting of a fixed-order polynomial over a neighborhood of fixed size is equivalent to convolution with an impulse response that is fixed and can be pre-computed.
They provided tables of impulse response coefficients for computing the smoothed function and smoothed derivatives for different orders and neighborhood sizes, the resulting filters being referred to as Savitzky-Golay (S-G) filters. Thus, we provide the new perspective that the regression parameter choice is equivalent to optimizing the filter impulse response length / 3-dB bandwidth, which are inversely related. We observe that the MMSE solution is such that the chosen S-G filter has a longer impulse response (equivalently, a smaller cutoff frequency) at relatively flat portions of the noisy signal, so as to smooth noise, and vice versa at locally fast-varying portions of the signal, so as to capture the signal patterns. We also provide a generalized S-G filtering viewpoint in the case of kernel regression. Building on the S-G filtering perspective, we turn to the problem of dynamic feature computation in speech recognition. We observe that the methodology employed for computing dynamic features from the trajectories of static features is in fact derivative S-G filtering. With this perspective, we note that the filter coefficients can be pre-computed, and the whole problem of delta feature computation becomes efficient. Indeed, we observe an advantage by a factor of 10^4 from using S-G filtering over actual LS polynomial fitting and evaluation. Thereafter, we study experimentally the properties of first- and second-order derivative S-G filters of certain orders and lengths. The derivative filters are bandpass due to the combined effects of LPR and derivative computation, which are lowpass and highpass operations, respectively. The first- and second-order S-G derivative filters are also observed to exhibit an approximately constant-Q property. We perform a TIMIT phoneme recognition experiment comparing the recognition accuracies obtained using S-G filters and the conventional approach followed in HTK, where Furui's regression formula is used. The recognition accuracies for both cases are almost identical, with S-G filters of certain bandwidths and orders registering a marginal improvement. The accuracies are also observed to improve with longer filter lengths, for a particular order. In terms of computation latency, we note that S-G filtering achieves delta and delta-delta feature computation in parallel by linear filtering, whereas they need to be obtained sequentially with the standard regression formulas used in the literature. Finally, we turn to the problem of speech enhancement, where we are interested in denoising using perceptually motivated loss functions such as the Itakura-Saito (IS) distortion. We propose to perform enhancement in the discrete cosine transform domain using risk minimization. The cost functions considered are non-quadratic, and the derivation of the unbiased estimator of the risk corresponding to the IS distortion is achieved using an approximate Taylor-series analysis under a high signal-to-noise-ratio assumption. The exposition is general, since we focus on an additive noise model with the noise density assumed to fall within the exponential class of density functions, which comprises most common densities. The denoising function is assumed to be pointwise linear (a modified James-Stein (MJS) estimator), and parallels between Wiener filtering and the optimum MJS estimator are discussed.
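The equivalence noted above (LS polynomial fitting over a fixed window equals convolution with a precomputed impulse response) can be illustrated with the hypothetical Python sketch below; it does not reproduce the thesis's experiments, and scipy.signal.savgol_filter provides a ready-made version of the same filter, up to edge handling.

```python
import math
import numpy as np

def savgol_coefficients(window, order, deriv=0):
    """Impulse response of a Savitzky-Golay filter: LS fit of a polynomial of
    the given order over an odd-length window, evaluated (or differentiated
    deriv times) at the window centre."""
    half = window // 2
    x = np.arange(-half, half + 1, dtype=float)
    A = np.vander(x, order + 1, increasing=True)      # columns: 1, x, x^2, ...
    return math.factorial(deriv) * np.linalg.pinv(A)[deriv]

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 200))
noisy = clean + 0.2 * rng.standard_normal(200)

h = savgol_coefficients(window=11, order=3)           # smoothing coefficients
smoothed = np.convolve(noisy, h[::-1], mode="same")   # filtering = convolution
```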
47

Location-based estimation of the autoregressive coefficient in ARX(1) models

Kamanu, Timothy Kevin Kuria January 2006 (has links)
Magister Scientiae - MSc / In recent years, two estimators have been proposed to correct the bias exhibited by the least-squares (LS) estimator of the lagged dependent variable (LDV) coefficient in dynamic regression models when the sample is finite. They have been termed the 'mean-unbiased' and 'median-unbiased' estimators. Relative to other similar procedures in the literature, the two location-based estimators have the advantage that they offer an exact and uniform methodology for LS estimation of the LDV coefficient in a first-order autoregressive model with or without exogenous regressors, i.e. ARX(1). However, no attempt has been made to accurately establish and/or compare the statistical properties of these estimators, either among themselves or relative to those of the LS estimator, when the LDV coefficient is restricted to realistic values. Neither has there been an attempt to compare their performance in terms of mean squared error (MSE) when various forms of the exogenous regressors are considered. Furthermore, only implicit confidence intervals have been given for the 'median-unbiased' estimator; explicit confidence bounds that are directly usable for inference are not available for either estimator. In this study a new estimator of the LDV coefficient is proposed: the 'most-probably-unbiased' estimator. Its performance properties vis-à-vis the existing estimators are determined and compared when the parameter space of the LDV coefficient is restricted. In addition, the following new results are established: (1) an explicit computable form for the density of the LS estimator is derived for the first time and an efficient method for its numerical evaluation is proposed; (2) the exact bias, mean, median and mode of the distribution of the LS estimator are determined in three specifications of the ARX(1) model; (3) the exact variance and MSE of the LS estimator are determined; (4) the standard errors associated with determining the same quantities by simulation rather than numerical integration are established, and the two methods are compared in terms of computational time and effort; (5) an exact method of evaluating the density of the three estimators is described; (6) their exact bias, mean, variance and MSE are determined and analysed; and finally, (7) a method of obtaining explicit exact confidence intervals from the distribution functions of the estimators is proposed. The discussion and results show that the estimators are still biased in the usual sense, 'in expectation', but the bias is substantially reduced compared to that of the LS estimator. The findings are important in the specification of time-series regression models, point and interval estimation, decision theory, and simulation. / South Africa
48

Implementing SAE Techniques to Predict Global Spectacles Needs

Zhang, Yuxue January 2023 (has links)
This study delves into the application of Small Area Estimation (SAE) techniques to enhance the accuracy of predicting global needs for assistive spectacles. Leveraging SAE, the research undertakes a comprehensive exploration, employing a range of predictive models including Linear Regression (LR), Empirical Best Linear Unbiased Prediction (EBLUP), hglm (an R package) with Conditional Autoregressive (CAR) effects, and Generalized Linear Mixed Models (GLMM). In the last phase, the prediction of global spectacle needs involves several essential steps, such as simulating random effects, extracting coefficients from the GLMM estimates, and log-linear modeling. The investigation develops a multi-faceted approach, incorporating area-level modeling, spatial correlation analysis, and relative standard error, to assess their impact on predictive accuracy. The GLMM consistently displays the lowest Relative Standard Error (RSE) values, close to zero, indicating precise but potentially overfit results. Conversely, the hglm with CAR model presents a narrower RSE range, typically below 25%, reflecting greater accuracy; however, it is worth noting that it contains a higher number of outliers. LR shows performance similar to EBLUP, with RSE values reaching around 50% in certain scenarios and displaying slight variations across contexts. These findings underscore the trade-offs between precision and robustness across these models, especially for finer geographical levels and for countries not included in the initial sample.
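A hedged sketch of the area-level shrinkage that underlies EBLUP-type SAE predictors (Python; the function name, the moment-style variance estimate, and the simulated data are illustrative assumptions, not the thesis's models): each area prediction is a weighted combination of the direct estimate and a regression-synthetic estimate.

```python
import numpy as np

def area_level_eblup(y_direct, X, D):
    """Fay-Herriot-style EBLUP sketch:
        y_i = x_i' beta + u_i + e_i,  u_i ~ N(0, s2u),  e_i ~ N(0, D_i)
    y_direct: direct survey estimates, X: area covariates, D: known sampling variances."""
    beta_ols, *_ = np.linalg.lstsq(X, y_direct, rcond=None)
    resid = y_direct - X @ beta_ols
    s2u = max(float(np.mean(resid**2 - D)), 0.0)   # crude moment-style estimate

    w = 1.0 / (s2u + D)                            # GLS weights
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y_direct))
    gamma = s2u / (s2u + D)                        # shrinkage weights
    return gamma * y_direct + (1.0 - gamma) * (X @ beta)

rng = np.random.default_rng(0)
n = 20
X = np.column_stack([np.ones(n), rng.uniform(size=n)])
D = rng.uniform(0.2, 1.0, size=n)                  # sampling variances
truth = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, size=n)
y_direct = truth + rng.normal(0, np.sqrt(D))
print(area_level_eblup(y_direct, X, D))
```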
49

A study of prediction problems in mixed linear models / 混合線性模型推測問題之研究

洪可音 Unknown Date (has links)
When a linear model contains random effects, treating them as fixed effects or ignoring them altogether often leads to serious prediction bias, so a mixed linear model should be used instead. If the model contains only one random effect, it has two variance components; with more random effects it has correspondingly more variance components. This thesis mainly presents the derivation and construction of the best linear unbiased predictor (BLUP) of a linear combination of the fixed and random effects, together with its prediction interval, when the model contains at least two variance components. However, the BLUP is a function of the variance ratios. If these ratios are unknown and are replaced by their maximum likelihood or residual maximum likelihood estimates, the result is the empirical best linear unbiased predictor (EBLUP). Since the prediction interval depends on the mean squared error (MSE) of the EBLUP, the thesis first shows how to compute an approximately unbiased estimator of this MSE, m_a, and then how to estimate the degrees of freedom of the prediction error of the EBLUP divided by m_a^{1/2}, which together are used to construct the prediction interval.
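A hedged Python sketch of the shrinkage behind the BLUP/EBLUP discussed above, for the simplest one-way random-effects model (the function name and the use of the overall mean for the fixed effect are simplifying assumptions; the prediction-interval machinery based on the MSE estimator m_a is not shown):

```python
import numpy as np

def blup_group_effects(y, groups, s2_u, s2_e):
    """BLUP of the random effects u_i in y_ij = mu + u_i + e_ij, with the
    variance components s2_u (between) and s2_e (within) taken as known.
    Plugging in ML or REML estimates of s2_u and s2_e yields the EBLUP."""
    groups = np.asarray(groups)
    mu_hat = y.mean()                       # simple estimate of the fixed effect
    blups = {}
    for g in np.unique(groups):
        y_g = y[groups == g]
        n_g = len(y_g)
        shrink = n_g * s2_u / (n_g * s2_u + s2_e)
        blups[g] = shrink * (y_g.mean() - mu_hat)
    return blups

rng = np.random.default_rng(2)
groups = np.repeat(np.arange(5), 8)
u = rng.normal(0.0, 1.0, size=5)            # true random effects (s2_u = 1)
y = 10.0 + u[groups] + rng.normal(0.0, 0.5, size=len(groups))
print(blup_group_effects(y, groups, s2_u=1.0, s2_e=0.25))
```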
50

Statistical Inference

Chou, Pei-Hsin 26 June 2008 (has links)
In this paper, we investigate the important properties of the three major parts of statistical inference: point estimation, interval estimation and hypothesis testing. For point estimation, we consider two methods of finding estimators: moment estimators and maximum likelihood estimators, and three methods of evaluating estimators: mean squared error, best unbiased estimators, and sufficiency and unbiasedness. For interval estimation, we consider the general confidence interval, confidence intervals in one sample, confidence intervals in two samples, sample sizes, and finite population correction factors. For hypothesis testing, we consider the theory of testing hypotheses, testing in one sample, testing in two samples, and three methods of finding tests: the uniformly most powerful test, the likelihood ratio test and the goodness-of-fit test. Many examples are used to illustrate their applications.
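The three parts of the abstract can be illustrated on a single normal sample with the hedged Python sketch below (an illustrative example, not taken from the paper): maximum-likelihood versus unbiased point estimates of the variance, a 95% t-interval for the mean, and a one-sample t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(loc=5.0, scale=2.0, size=40)
n = len(x)

# Point estimation: the MLE of the variance divides by n, the unbiased estimator by n - 1
mean_hat = x.mean()
var_mle = x.var(ddof=0)
var_unbiased = x.var(ddof=1)

# Interval estimation: 95% t-interval for the mean
half_width = stats.t.ppf(0.975, df=n - 1) * np.sqrt(var_unbiased / n)
ci = (mean_hat - half_width, mean_hat + half_width)

# Hypothesis testing: H0: mu = 5 against a two-sided alternative
t_stat, p_value = stats.ttest_1samp(x, popmean=5.0)

print(mean_hat, var_mle, var_unbiased)
print("95% CI:", ci)
print("t =", t_stat, "p =", p_value)
```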
