11 |
Change point estimation in noisy Hammerstein integral equations / Sprungstellen-Schätzer für verrauschte Hammerstein Integral Gleichungen
Frick, Sophie, 02 December 2010 (has links)
No description available.
|
12 |
Détection de l’invalidité et estimation d’un effet causal en présence d’instruments invalides dans un contexte de randomisation mendélienne / Detecting invalidity and estimating a causal effect in the presence of invalid instruments in a Mendelian randomization context
Boucher-Roy, David, 08 1900 (has links)
Mendelian randomization is an instrumentation method that uses genetic instruments
to estimate, via two-stage least squares regression for example, a causal relationship
between an exposure and an outcome when the relationship is confounded by one or more
unmeasured confounders. Mendelian randomization can handle confounding bias provided
that the instruments are valid, i.e., that they meet three key assumptions. While two of
the three assumptions can usually be satisfied, the third assumption is often invalidated
by a genetic phenomenon called pleiotropy. In the presence of invalid instruments, the
estimate of the causal effect of exposure on the outcome may be severely biased. To assess
the potential presence of an invalid instrument in single-instrument studies, Glymour et
al. (2012) proposed a method, hereinafter referred to as the simple difference approach,
which uses the sign of the difference between the ordinary least squares estimator of the
outcome on the exposure and the two-stage least squares estimator calculated using the
instrument. Based on this approach, we introduce three methods applicable to Mendelian
randomization with multiple instruments. The first, the global difference approach, is a direct generalization of the simple difference approach to multiple instruments and aims to detect whether one or more of the instruments used are invalid. Next,
we introduce the individual differences and the grouped differences approaches, two methods
that generalize the simple difference approach to identify potentially invalid instruments
and provide new estimates of the causal effect of the exposure on the outcome. The methods
are evaluated using a theoretical investigation of the impact that invalid instruments have
on the convergence of the ordinary least squares and two-stage least squares estimators as
well as with a simulation study that compares the accuracy of the respective estimators and
the ability of the corresponding methods to detect invalid instruments.
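The contrast the simple difference approach exploits can be sketched numerically. Below is a minimal Python simulation (an illustrative data-generating process, not the thesis code or data) in which a valid genetic instrument yields a consistent two-stage least squares estimate while OLS is biased by the unmeasured confounder; the sign of their difference is the quantity the approach inspects.

```python
import numpy as np

# Illustrative simulation of the simple difference approach.
# Hypothetical data-generating process, not the thesis design:
# u is an unmeasured confounder, z a valid genetic instrument.
rng = np.random.default_rng(0)
n = 100_000
beta = 0.5                                        # true causal effect of x on y

u = rng.normal(size=n)                            # unmeasured confounder
z = rng.binomial(2, 0.3, size=n).astype(float)    # allele-count instrument
x = 0.8 * z + u + rng.normal(size=n)              # exposure
y = beta * x + u + rng.normal(size=n)             # outcome

# OLS of y on x is inconsistent because u affects both x and y.
beta_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# With a single instrument, 2SLS reduces to the Wald ratio.
beta_2sls = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

# The simple difference approach looks at the sign of OLS - 2SLS.
print(f"OLS:        {beta_ols:.3f}")              # ~0.94, biased upward
print(f"2SLS:       {beta_2sls:.3f}")             # ~0.50, consistent
print(f"difference: {beta_ols - beta_2sls:+.3f}")
```

With an invalid (pleiotropic) instrument, e.g. a direct effect of z on y added to the model above, the 2SLS estimate would also drift; detecting that situation with multiple instruments is what the global, individual, and grouped difference approaches address.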
|
13 |
以穩健估計及長期資料分析觀點探討資本資產定價模型 / On the CAPM from the Views of Robustness and Longitudinal Analysis
呂倩如, Lu Chien-ju, Unknown Date (has links)
The Capital Asset Pricing Model (CAPM) of Sharpe (1964), Lintner (1965) and Black (1972) has been widely used in recent years to measure the relationship between the expected return on a security and its risk. Estimation proceeds in two stages: first, betas are estimated from time-series regressions; second, the relationship between mean returns and betas is tested across firms or portfolios. Fama and MacBeth (1973) first used ordinary least squares (OLS) to estimate the betas and then applied a t-test to the time-series averages of the slope coefficients from monthly cross-sectional regressions. It is well known, however, that OLS is sensitive to outliers, so robust estimators are employed here to avoid that problem. Furthermore, longitudinal data analysis is applied to examine whether betas, over time and across securities, are a valid measure of risk in the CAPM. An empirical study compares the different approaches using monthly data on the Information and Electronics industry in the Taiwan stock market from September 1998 to December 2001. In the time-series regressions, the robust methods provide more explanatory power than OLS. A linear mixed-effects model is also used to examine whether security excess returns differ across upstream, midstream, and downstream firms and across companies.
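A compact sketch of the two-pass procedure described above (time-series betas, then cross-sectional regressions whose average slope is tested, as in Fama and MacBeth) is given below. The simulated returns are illustrative stand-ins for the Taiwan data, and the robust variant the abstract advocates would simply replace the first-pass OLS with, e.g., an M-estimator.

```python
import numpy as np

# Sketch of the two-pass CAPM test (Fama-MacBeth style); the simulated
# returns below are illustrative stand-ins for the real data.
rng = np.random.default_rng(1)
n_months, n_firms = 40, 25
market = rng.normal(0.01, 0.05, size=n_months)            # market excess return
true_beta = rng.uniform(0.5, 1.5, size=n_firms)
returns = np.outer(market, true_beta) + rng.normal(0, 0.03, (n_months, n_firms))

# Pass 1: time-series OLS of each firm's excess return on the market.
X = np.column_stack([np.ones(n_months), market])
betas = np.linalg.lstsq(X, returns, rcond=None)[0][1]     # slope per firm

# Pass 2: each month, a cross-sectional regression of returns on the betas;
# under the CAPM the average slope equals the market risk premium.
Xc = np.column_stack([np.ones(n_firms), betas])
slopes = np.linalg.lstsq(Xc, returns.T, rcond=None)[0][1] # slope per month

gamma = slopes.mean()
t_stat = gamma / (slopes.std(ddof=1) / np.sqrt(n_months))
print(f"risk premium estimate {gamma:.4f}, t = {t_stat:.2f}")
```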
|
14 |
Processamento de erros grosseiros através do índice de não-detecção de erros e dos resíduos normalizados / Bad data processing through the undetectability index and the normalized residuals
Vieira, Camila Silva, 20 October 2017 (has links)
This dissertation deals with the problem of gross error processing based on the so-called Undetectability Index (UI). This recently developed index classifies measurements according to how little of their errors is reflected in the residuals of the weighted least squares (WLS) state estimator. Gross errors in measurements with high UIs are very difficult to detect by methods based on residual analysis, as the errors in those measurements are masked, i.e., largely absent from the residuals. The dissertation first demonstrates that a non-detectable gross error (an error in a measurement with a high UI) may degrade the accuracy of the estimated state variables more than a detectable gross error (an error in a measurement with a low UI), which justifies studying how to process gross errors in measurements with high UIs. Several computational simulations are then carried out to analyze the influence of different measurement weights on the UI and on the accuracy of the estimated state variables, and the weighting scheme that stood out as the most appropriate is identified. Finally, the UI studies are extended to a hybrid weighted least squares state estimator.
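A minimal linear example makes the masking effect concrete. The sketch below uses a toy measurement model, not the thesis test systems, and the UI formula is the standard projection-based formulation assumed here: a unit error in measurement i splits into K e_i, absorbed into the estimate, and (I - K) e_i, which is all the residual can see.

```python
import numpy as np

# Toy linear measurement model z = H x + e (not the thesis test systems)
# illustrating WLS state estimation, normalized residuals, and a UI-style
# index computed from the hat-matrix geometry assumed here.
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0, -1.0],
              [1.0,  1.0]])                    # measurement Jacobian
R = np.diag([1e-4] * 4)                        # measurement error covariance
W = np.linalg.inv(R)                           # weights

x_true = np.array([0.1, -0.05])
e = np.array([0.0, 0.0, 0.05, 0.0])            # gross error on measurement 3
z = H @ x_true + e

G = H.T @ W @ H                                # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)        # WLS state estimate
r = z - H @ x_hat                              # residuals

K = H @ np.linalg.solve(G, H.T @ W)            # hat matrix: H @ x_hat = K @ z
S = np.eye(len(z)) - K                         # residual sensitivity matrix
Omega = S @ R                                  # residual covariance
r_norm = np.abs(r) / np.sqrt(np.diag(Omega))   # normalized residuals

# For a unit error in measurement i, K[:, i] is absorbed into the estimate
# and S[:, i] remains visible in the residuals; their norm ratio is the
# undetectability-style index (assumed formulation).
UI = np.linalg.norm(K, axis=0) / np.linalg.norm(S, axis=0)

print("normalized residuals:", np.round(r_norm, 2))
print("UI per measurement:  ", np.round(UI, 2))
```

In this toy case measurements 3 and 4 carry the higher UI, and the gross error planted on measurement 3 shows up only partially in its normalized residual, which is the masking behavior the dissertation studies.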
|
16 |
Approximation of Terrain Data Utilizing Splines
Tomek, Peter, January 2012 (has links)
For the optimization of flight trajectories at very low altitude, terrain features must be taken into account very precisely. Fast and efficient evaluation of terrain data is therefore essential, since the time required for the optimization must be as short as possible. Moreover, flight-trajectory optimization relies on gradient-based methods, so the function approximating the terrain data must be continuous up to a certain order of derivatives. A very promising approach to approximating terrain data is the application of multivariate simplex polynomials. The aim of this thesis is to implement a function that evaluates given terrain data at specified points, together with the gradient, using multivariate splines. The program should evaluate multiple points at once and should work in $n$-dimensional space.
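The interface the thesis asks for, batched evaluation of heights and gradients at arbitrary points, can be sketched with standard tools. The example below uses SciPy's tensor-product cubic splines on a synthetic 2-D grid purely as a stand-in; the thesis targets multivariate simplex splines in $n$ dimensions, which this sketch does not implement.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# 2-D stand-in for the evaluator described above: smooth spline value and
# gradient on gridded terrain heights (synthetic data, illustrative only).
x = np.linspace(0.0, 10.0, 50)
y = np.linspace(0.0, 10.0, 50)
X, Y = np.meshgrid(x, y, indexing="ij")
Z = np.sin(X) * np.cos(Y) + 0.1 * X          # synthetic terrain heights

terrain = RectBivariateSpline(x, y, Z, kx=3, ky=3)  # cubic, smooth inside grid

# Batch evaluation of heights and the gradient components needed by
# gradient-based trajectory optimizers.
pts_x = np.array([1.3, 4.7, 8.2])
pts_y = np.array([2.1, 5.5, 9.0])
h = terrain.ev(pts_x, pts_y)                 # heights
dh_dx = terrain.ev(pts_x, pts_y, dx=1)       # partial derivative w.r.t. x
dh_dy = terrain.ev(pts_x, pts_y, dy=1)       # partial derivative w.r.t. y

for p in zip(pts_x, pts_y, h, dh_dx, dh_dy):
    print("x=%.1f y=%.1f  h=%+.3f  grad=(%+.3f, %+.3f)" % p)
```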
|
17 |
[pt] ESTIMAÇÃO DE MODELOS NÃO-LINEARES BASEADOS EM CONDIÇÕES DE MOMENTO / [en] MOMENT-BASED ESTIMATION OF NONLINEAR MODELS
DANILO CAIANO DELGADO, 10 July 2020 (has links)
[en] The aim of this dissertation is to compare, in a simulation study, different estimators of nonlinear models. We consider the nonlinear two-stage least squares estimator (NL2SLS), the nonlinear limited-information maximum likelihood estimator (LIML), and the control function (CF) estimator. Our results show that the CF and LIML estimators generally outperform NL2SLS for the selected models. In an application with real data, we estimate a nonlinear Phillips curve for the Brazilian economy.
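Of the three estimators, the control function approach is the easiest to sketch. The simulation below uses a hypothetical quadratic model with one endogenous regressor; names and parameters are illustrative, and the thesis's NL2SLS and LIML comparisons are not reproduced. The CF recipe: regress the endogenous variable on the instrument, then add the first-stage residual to the outcome equation as an extra regressor.

```python
import numpy as np

# Illustrative control-function (CF) estimation of a quadratic model with an
# endogenous regressor. Hypothetical data-generating process, not the thesis
# design: u drives both x and y, z is a valid instrument.
rng = np.random.default_rng(2)
n = 50_000
b1, b2 = 1.0, -0.3                        # true structural parameters

z = rng.normal(size=n)                    # instrument
u = rng.normal(size=n)                    # structural error
x = 0.7 * z + 0.8 * u + rng.normal(size=n)    # endogenous regressor
y = b1 * x + b2 * x**2 + u                # nonlinear outcome equation

def ols(X, y):
    """Least-squares coefficients of y on the columns of X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS ignores the endogeneity of x and is inconsistent.
X_naive = np.column_stack([np.ones(n), x, x**2])
print("naive OLS:", np.round(ols(X_naive, y)[1:], 3))

# CF: the first stage regresses x on z; the residual v carries the endogenous
# part of x, and adding it to the second stage absorbs the bias.
X_first = np.column_stack([np.ones(n), z])
v = x - X_first @ ols(X_first, x)
X_cf = np.column_stack([np.ones(n), x, x**2, v])
print("control function:", np.round(ols(X_cf, y)[1:3], 3))  # close to (1.0, -0.3)
```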
|