1 |
Bias approximation and reduction in vector autoregressive models / Brännström, Tomas, January 1995
In the last few decades, vector autoregressive (VAR) models have gained tremendous popularity as an all-purpose tool in econometrics and other disciplines. Some of their most prominent uses are forecasting, causality tests, tests of economic theories, hypothesis seeking, data characterisation, innovation accounting, policy analysis, and cointegration analysis. Their popularity appears to be attributable to their flexibility relative to other models rather than to their virtues per se, and analysts often use VAR models as benchmark models. VAR modelling has not gone uncriticised, though; a list of relevant arguments against it can be found in Section 2.3 of this thesis. One additional problem is rarely mentioned, however: the often heavily biased estimates in VAR models. Although methods to reduce this bias have been available for quite some time, bias reduction has probably not been carried out before, at least not in any systematic way. The present thesis attempts to systematically examine the performance of bias-reduced VAR estimates, using two existing approximations to the bias and one newly derived one. The thesis is organised as follows. After a short introductory chapter, Chapter 2 gives a brief history of VAR modelling together with a review of different representations and a compilation of criticisms of VAR models. Chapter 3 reports the results of very extensive Monte Carlo experiments serving two purposes. First, the simulations reveal whether or not bias really poses a serious problem: if biases appear only by exception or are mainly insignificant, there would be little need to reduce them. Second, the same data are used in Chapter 4 to evaluate the bias approximations, allowing for a direct comparison between bias-reduced and original estimates.
Though Monte Carlo methods have (rightfully) been criticised for being too specific to allow for generalisation, there seems to be no good alternative for analysing the small-sample properties of complicated estimators such as these. Chapter 4 is in a sense the core of the thesis, containing evaluations of three bias approximations, chiefly by means of single regression equations and 3D surfaces. The only truly new research result in this thesis is also found in Chapter 4: a second-order approximation to the bias of the parameter matrix in a VAR(p) model. Its performance is compared with that of two existing first-order approximations, and all three are used to construct bias-reduced estimators, which are then evaluated. Chapter 5 contains an application to US money supply and inflation, in order to find out whether the results of Chapter 4 have any practical impact; unfortunately, bias reduction appears not to make any difference in this particular case. Chapter 6 concludes. / Diss. Stockholm : Handelshögsk.
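The idea of plug-in bias reduction described in the abstract can be illustrated in the simplest possible setting: the univariate, zero-mean AR(1) model, where a classic first-order approximation gives E[phi_hat] - phi ≈ -2*phi/T. This is a minimal sketch of the thesis's general approach (approximate the bias, then subtract the plug-in estimate of it), not the thesis's VAR(p) second-order result; the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(phi, T):
    """Simulate a zero-mean AR(1) series of length T with unit-variance shocks."""
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = phi * y[t - 1] + rng.standard_normal()
    return y

def ols_ar1(y):
    """OLS estimate of the AR(1) coefficient (no intercept)."""
    return y[:-1] @ y[1:] / (y[:-1] @ y[:-1])

phi, T, reps = 0.9, 50, 2000
est = np.array([ols_ar1(simulate_ar1(phi, T)) for _ in range(reps)])

# First-order approximation to the bias in the zero-mean case: -2*phi/T.
approx_bias = -2 * phi / T
# Plug-in bias-reduced estimator: subtract the approximated bias at phi_hat.
corrected = est + 2 * est / T

print(f"mean OLS bias       : {est.mean() - phi:+.4f}")
print(f"first-order approx  : {approx_bias:+.4f}")
print(f"mean corrected bias : {corrected.mean() - phi:+.4f}")
```

The Monte Carlo design mirrors the evaluation logic of Chapters 3 and 4 in miniature: simulate from a known process, measure the estimator's bias, and check how much of it the approximation removes.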
|
2 |
Blood-Oxygen-Level-Dependent Parameter Identification using Multimodal Neuroimaging and Particle Filters / Mundle, Aditya Ramesh, 06 March 2012
The Blood Oxygen Level Dependent (BOLD) signal provides indirect estimates of neural activity. The parameters of this BOLD signal can give information about the pathophysiological state of the brain. Most models for the BOLD signal are overparameterized, which makes unique identification of these parameters difficult.
In this work, we use information from multiple neuroimaging sources to obtain better estimates of these parameters instead of relying on information from the BOLD signal alone. The multimodal neuroimaging setup consisted of Cerebral Blood Volume (CBV) information (from VASO-Fluid-Attenuation-Inversion-Recovery (VASO-FLAIR)) and Cerebral Blood Flow (CBF) information (from Arterial Spin Labelling (ASL)) in addition to the BOLD signal, and the fusion of this information is achieved in a Particle Filter (PF) framework. The trace plots and correlation coefficients of the parameter estimates from the PF reflect the ill-posedness of the BOLD model. The means of the parameter estimates are much closer to the ground truth than the estimates obtained using only the BOLD information, and were also found to be more robust to noise and to the influence of the prior. / Master of Science
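The particle-filter framework the abstract refers to can be sketched with a bootstrap filter on a toy linear-Gaussian state-space model; the actual BOLD (balloon) model is nonlinear and multi-state, so this stand-in only illustrates the propagate-weight-resample cycle. All model constants below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy state-space stand-in for the BOLD model:
#   x_t = a*x_{t-1} + v_t,  v_t ~ N(0, q)    (latent hemodynamic state)
#   y_t = x_t + w_t,        w_t ~ N(0, r)    (noisy measurement)
a, q, r, T, N = 0.95, 0.1, 0.5, 100, 1000

# Simulate ground-truth state and observations.
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + np.sqrt(q) * rng.standard_normal()
y = x + np.sqrt(r) * rng.standard_normal(T)

# Bootstrap particle filter.
particles = rng.standard_normal(N)
means = np.zeros(T)
for t in range(T):
    # Propagate particles through the state dynamics.
    particles = a * particles + np.sqrt(q) * rng.standard_normal(N)
    # Weight by the Gaussian measurement likelihood (log-domain for stability).
    logw = -0.5 * (y[t] - particles) ** 2 / r
    w = np.exp(logw - logw.max())
    w /= w.sum()
    means[t] = w @ particles        # posterior-mean estimate at time t
    # Multinomial resampling to combat weight degeneracy.
    particles = rng.choice(particles, size=N, p=w)

rmse_pf = np.sqrt(np.mean((means - x) ** 2))
rmse_obs = np.sqrt(np.mean((y - x) ** 2))
print(f"RMSE raw observations: {rmse_obs:.3f}")
print(f"RMSE particle filter : {rmse_pf:.3f}")
```

Fusing additional modalities (CBV, CBF), as the thesis does, amounts to multiplying in further likelihood terms at the weighting step, one per measurement channel.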
|
3 |
Increasing Policy Network Size Does Not Guarantee Better Performance in Deep Reinforcement Learning / Berg, Zachery Peter, 25 April 2022
The capacity of deep reinforcement learning policy networks has been found to affect the performance of trained agents: policy networks with more parameters have been observed to train better and generalize better than smaller networks. In this work, we find cases where this does not hold. We observe unimodal variance in the zero-shot test return of policies of varying width, accompanied by a drop in both train and test return. Empirically, we demonstrate mostly monotonically increasing or mostly optimal performance as the width of deep policy networks increases, except near the variance mode. Finally, we find a scenario where larger networks show increasing performance up to a point and decreasing performance thereafter. We hypothesize that these observations align with the theory of double descent in supervised learning, albeit with specific differences.
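Since the abstract relates width to parameter count, a small helper makes the scaling concrete: for a fully connected policy, parameters grow roughly quadratically in the hidden width. The observation/action dimensions and widths below are hypothetical, not taken from the thesis.

```python
def mlp_param_count(obs_dim, act_dim, width, hidden_layers=2):
    """Weights + biases of a fully connected policy network
    with `hidden_layers` hidden layers of the given width."""
    dims = [obs_dim] + [width] * hidden_layers + [act_dim]
    return sum(i * o + o for i, o in zip(dims[:-1], dims[1:]))

# Example: a CartPole-sized policy (4 observations, 2 actions).
for w in (8, 64, 512):
    print(f"width {w:>3}: {mlp_param_count(4, 2, w):>7} parameters")
```

Sweeping `width` over such a grid and recording train and zero-shot test return is the kind of experiment that exposes the variance mode described above.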
|
4 |
Análise de dados categorizados com omissão em variáveis explicativas e respostas / Categorical data analysis with missingness in explanatory and response variables / Poleto, Frederico Zanqueta, 08 April 2011
Nesta tese apresentam-se desenvolvimentos metodológicos para analisar dados com omissão e também estudos delineados para compreender os resultados de tais análises. Escrutinam-se análises de sensibilidade bayesiana e clássica para dados com respostas categorizadas sujeitas a omissão. Mostra-se que as componentes subjetivas de cada abordagem podem influenciar os resultados de maneira não-trivial, independentemente do tamanho da amostra, e que, portanto, as conclusões devem ser cuidadosamente avaliadas. Especificamente, demonstra-se que distribuições a priori comumente consideradas como não-informativas ou levemente informativas podem, na verdade, ser bastante informativas para parâmetros inidentificáveis, e que a escolha do modelo sobreparametrizado também tem um papel importante. Quando há omissão em variáveis explicativas, também é necessário propor um modelo marginal para as covariáveis mesmo se houver interesse apenas no modelo condicional. A especificação incorreta do modelo para as covariáveis ou do modelo para o mecanismo de omissão leva a inferências enviesadas para o modelo de interesse. Trabalhos anteriormente publicados têm-se dividido em duas vertentes: ou utilizam distribuições semiparamétricas/não-paramétricas, flexíveis para as covariáveis, e identificam o modelo com a suposição de um mecanismo de omissão não-informativa, ou empregam distribuições paramétricas para as covariáveis e permitem um mecanismo mais geral, de omissão informativa. Neste trabalho analisam-se respostas binárias, combinando um mecanismo de omissão informativa com um modelo não-paramétrico para as covariáveis contínuas, por meio de uma mistura induzida pela distribuição a priori de processo de Dirichlet. No caso em que o interesse recai apenas em momentos da distribuição das respostas, propõe-se uma nova análise de sensibilidade sob o enfoque clássico para respostas incompletas que evita suposições distribucionais e utiliza parâmetros de sensibilidade de fácil interpretação.
O procedimento tem, em particular, grande apelo na análise de dados contínuos, campo que tradicionalmente emprega suposições de normalidade e/ou utiliza parâmetros de sensibilidade de difícil interpretação. Todas as análises são ilustradas com conjuntos de dados reais. / We present methodological developments to conduct analyses with missing data and also studies designed to understand the results of such analyses. We examine Bayesian and classical sensitivity analyses for data with missing categorical responses and show that the subjective components of each approach can influence results in non-trivial ways, irrespectively of the sample size, concluding that they need to be carefully evaluated. Specifically, we show that prior distributions commonly regarded as slightly informative or non-informative may actually be too informative for non-identifiable parameters, and that the choice of over-parameterized models may drastically impact the results. When there is missingness in explanatory variables, we also need to consider a marginal model for the covariates even if the interest lies only on the conditional model. An incorrect specification of either the model for the covariates or of the model for the missingness mechanism leads to biased inferences for the parameters of interest. Previously published works are commonly divided into two streams: either they use semi-/non-parametric flexible distributions for the covariates and identify the model via a non-informative missingness mechanism, or they employ parametric distributions for the covariates and allow a more general informative missingness mechanism. We consider the analysis of binary responses, combining an informative missingness model with a non-parametric model for the continuous covariates via a Dirichlet process mixture. 
When the interest lies only in moments of the response distribution, we consider a new classical sensitivity analysis for incomplete responses that avoids distributional assumptions and employs easily interpreted sensitivity parameters. The procedure is particularly useful for analyses of missing continuous data, an area where normality is traditionally assumed and/or relies on hard-to-interpret sensitivity parameters. We illustrate all analyses with real data sets.
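The core difficulty the abstract describes, that the distribution of the missing responses is not identified from the data, can be shown with a toy pattern-mixture sensitivity analysis for a binary response. This is a deliberately simple sketch of the general idea of sweeping an easily interpreted sensitivity parameter, not the thesis's Dirichlet-process or informative-missingness machinery; the data are made up.

```python
# Toy binary responses; None marks a missing value.
y = [1, 0, 1, 1, None, 0, 1, None, None, 1]

obs = [v for v in y if v is not None]
p_mis = sum(v is None for v in y) / len(y)      # observed fraction missing
p_obs_mean = sum(obs) / len(obs)                # E[Y | observed], identified

# Pattern-mixture decomposition:
#   E[Y] = (1 - p_mis) * E[Y | observed] + p_mis * E[Y | missing].
# E[Y | missing] is NOT identified, so treat it as a sensitivity
# parameter delta and report E[Y] over its plausible range.
for delta in (0.0, p_obs_mean, 1.0):
    overall = (1 - p_mis) * p_obs_mean + p_mis * delta
    print(f"E[Y|missing] = {delta:.3f}  ->  E[Y] = {overall:.3f}")
```

Setting delta equal to the observed mean reproduces the missing-at-random answer, while the endpoints 0 and 1 give worst-case bounds; distributional assumptions enter only through how wide a delta range one is willing to entertain.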
|