1

Volatility Forecasting of Crude Oil Futures Under Normal Mixture Model and NIG Mixture Model

Wu, Chia-ying 30 May 2012 (has links)
This study attempts to capture the behavior of volatility in the commodity futures market by introducing the normal mixture GARCH model and the NIG mixture GARCH model (normal-inverse Gaussian mixture GARCH model). The normal mixture GARCH model (hereafter the NM-GARCH model) mixes two or more normal components with a specified set of weights, with each component variance following a GARCH process. The NM-GARCH model captures the leptokurtosis and fat tails of financial data better than the normal GARCH model and the Student's t GARCH model. In addition, the component with the lower weight in an NM-GARCH model usually has the higher variance, while the component with the higher weight has the lower volatility; this reflects the behavior of real markets, where large fluctuations (shocks) occur with small probability and small fluctuations occur with high probability. In general, the volatility that prevails under ordinary conditions is relatively flat, while shocks have large impacts but occur less frequently. The NIG mixture distribution mixes two or more weighted components, each following an NIG distribution. Compared with the normal mixture distribution, the NIG mixture distribution inherits the advantages of the NIG distribution: it accounts not only for leptokurtosis and skewness but also describes the fat-tail phenomenon more completely, because both tails of the NIG distribution decay slowly. This study applies the NM-GARCH model and the NIG mixture GARCH model to volatility forecasting of returns in the crude oil futures market and, through parameter estimation, forecasting, loss functions, and tests of statistical significance, concludes that the predictive ability of these two models is significantly better than that of other volatility models.
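For concreteness, a two-component NM(2)-GARCH(1,1) specification of the kind described above can be sketched as follows (standard textbook notation; the weights and parameter symbols are illustrative, not the thesis's exact notation):

$$
\epsilon_t \mid \mathcal{F}_{t-1} \sim p\, N\!\left(0, \sigma_{1,t}^2\right) + (1-p)\, N\!\left(0, \sigma_{2,t}^2\right),
$$
$$
\sigma_{i,t}^2 = \omega_i + \alpha_i\, \epsilon_{t-1}^2 + \beta_i\, \sigma_{i,t-1}^2, \qquad i = 1, 2,
$$

where typically the low-weight component carries the larger variance, capturing rare shocks. The NIG mixture GARCH variant replaces the normal component densities with normal-inverse Gaussian densities, whose slowly decaying tails give additional flexibility in the extremes.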
2

Evaluating and Reducing the Effects of Misclassification in a Sequential Multiple Assignment Randomized Trial (SMART)

He, Jun 01 January 2018 (has links)
SMART designs tailor individual treatment by re-randomizing patients to subsequent therapies based on their response to initial treatment. However, the classification of patients as responders/non-responders can be inaccurate and thus lead to inappropriate treatment assignment. In a two-step SMART design, assuming equal randomization and equal variances for misclassified and correctly classified patients, we evaluated the effects of misclassification on the mean, variance, and type I error/power of single sequential treatment outcomes (SSTs), dynamic treatment regimes (DTRs), and overall outcomes. The results showed that misclassification can introduce bias into estimates of treatment effect for all types of outcome. Although the magnitude of the bias varied across settings, a few conclusions held throughout: 1) for any fixed sensitivity, the bias of the mean for SST responders always approached 0 as specificity increased to 1, and for any fixed specificity, the bias of the mean for SST non-responders always approached 0 as sensitivity increased to 1; 2) for any fixed specificity there was a monotonic nonlinear relationship between the bias of the mean for SST responders and sensitivity, and for any fixed sensitivity there was likewise a monotonic nonlinear relationship between the bias of the mean for SST non-responders and specificity; 3) the bias of the variance of SSTs was always a non-monotone nonlinear function; 4) the variance of SSTs under misclassification was always over-estimated; 5) the maximum absolute relative bias of the variance of SSTs was always ¼ of the squared mean difference between misclassified and correctly classified patients divided by the true variance, although this maximum might not be attained within the sensitivity and specificity range (0,1); 6) as functions of sensitivity and specificity, the bias of the mean of DTRs or overall outcomes was always linear, while the bias of their variance was always non-monotone and nonlinear; 7) the relative bias of the mean/variance of DTRs or overall outcomes could approach 0 even when sensitivity or specificity was not 1. Furthermore, the results showed that misclassification can affect statistical inference. Power could be smaller or larger than the planned 80% under misclassification and showed either a monotonic or a non-monotonic pattern as sensitivity or specificity decreased. To mitigate these adverse effects, patient observations can be weighted by the likelihood that their response was correctly classified. We investigated both normal-mixture-model (NM) and k-nearest-neighbor (KNN) strategies that attempt to reduce the bias of the mean and variance and improve inference for the final-stage outcome. The NM approach estimated each patient's early-stage probability of being a responder by maximizing the likelihood function with the EM algorithm, while KNN estimated these probabilities from the classifications of the k nearest observations. Simulations were used to compare the performance of these approaches.
The results showed that 1) KNN and NM produced modest reductions in the bias of point estimates of SSTs; 2) both strategies reduced the bias of point estimates of DTRs when misclassified and correctly classified patients from the same initial treatment had unequal means; 3) NM reduced the bias of point estimates of the overall outcome more than KNN; 4) in general, there was little effect on power; 5) the type I error should always be preserved at 0.05 regardless of misclassification when the same response rate and the same treatment effects among responders or among non-responders are assumed, but the observed type I error tended to be less than 0.05; 6) KNN preserved the type I error at 0.05, but NM could inflate the type I error rate. Even though both the KNN and NM strategies improved point estimates most of the time when misclassification was suspected in a SMART design, the tradeoffs were an increased type I error rate and little effect on power. Our work shows that misclassification should be considered in SMART designs because it introduces bias, but KNN or NM strategies applied at the final stage cannot completely remove the bias of point estimates or improve power. In future work, adjusting for covariates may allow these two strategies to improve classification accuracy in the early-stage outcomes.
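A minimal sketch of the KNN weighting idea described above, assuming an array of early-stage outcomes used for classification and a binary observed responder label; the function name, the choice of k, and the use of scikit-learn are illustrative assumptions, not the thesis's exact estimator:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_response_weights(early_outcome, observed_responder, k=10):
    """Estimate, for each patient, the probability that the observed
    responder/non-responder label is correct, for use as an observation weight."""
    X = np.asarray(early_outcome, dtype=float).reshape(-1, 1)
    y = np.asarray(observed_responder)

    # Fit KNN on the observed labels; note that each point counts itself
    # among its neighbours here, which is acceptable for a rough sketch.
    knn = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    proba = knn.predict_proba(X)  # columns ordered by knn.classes_

    # Probability assigned to the patient's own observed label: a proxy for
    # the likelihood that the classification is correct.
    idx = np.searchsorted(knn.classes_, y)
    return proba[np.arange(len(y)), idx]

# Example usage (hypothetical variable names):
# w = knn_response_weights(stage1_outcome, responder_label)
# resp = responder_label == 1
# est = np.average(final_outcome[resp], weights=w[resp])  # weighted final-stage mean
```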
3

Deconvolution in Random Effects Models via Normal Mixtures

Litton, Nathaniel A. 2009 August 1900 (has links)
This dissertation describes a minimum distance method for density estimation when the variable of interest is not directly observed. It is assumed that the underlying target density can be well approximated by a mixture of normals. The method compares a density estimate of the observable data with the density of the observable data induced by assuming the target density can be written as a mixture of normals. The goal is to choose the parameters in the normal mixture that minimize the distance between the density estimate of the observable data and the induced density from the model. The method is applied to the deconvolution problem to estimate the density of $X_{i}$ when the variable $Y_{i}=X_{i}+Z_{i}$, $i=1,\ldots ,n$, is observed, and the density of $Z_{i}$ is known. Additionally, it is applied to a location random effects model to estimate the density of $Z_{ij}$ when the observable quantities are $p$ data sets of size $n$ given by $X_{ij}=\alpha_{i}+\gamma Z_{ij},~i=1,\ldots ,p,~j=1,\ldots ,n$, where the densities of $\alpha_{i}$ and $Z_{ij}$ are both unknown. The performance of the minimum distance approach in the measurement error model is compared with the deconvoluting kernel density estimator of Stefanski and Carroll (1990). In the location random effects model, the minimum distance estimator is compared with the explicit characteristic function inversion method from Hall and Yao (2003). In both models, the methods are compared using simulated and real data sets. In the simulations, performance is evaluated using an integrated squared error criterion. Results indicate that the minimum distance methodology is comparable to the deconvoluting kernel density estimator and outperforms the explicit characteristic function inversion method.
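As a sketch of the minimum distance idea in the deconvolution setting (the notation and the choice of integrated squared distance are illustrative, not necessarily the dissertation's exact criterion): if the target density is modeled as a $K$-component normal mixture $f_X(x)=\sum_{k=1}^{K}\pi_k\,\phi(x;\mu_k,\sigma_k^2)$ and the error density $f_Z$ is known, then the induced density of the observable $Y=X+Z$ is the convolution $f_X * f_Z$, and the mixture parameters are chosen to bring this induced density close to a kernel density estimate $\hat{f}_Y$ built from the observed $Y_i$:

$$
\hat{\theta} \;=\; \arg\min_{\{\pi_k,\mu_k,\sigma_k\}} \int \left\{ \hat{f}_Y(y) \;-\; \int f_Z(y-x)\sum_{k=1}^{K}\pi_k\,\phi(x;\mu_k,\sigma_k^2)\,dx \right\}^{2} dy .
$$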
4

Spurious Heavy Tails / Falska tunga svansar

Segerfors, Ted January 2015 (has links)
Since the financial crisis that started in 2007, risk awareness in the financial sector is greater than ever. Financial institutions such as banks and insurance companies are heavily regulated in order to create a harmonious and resilient global economic environment. Sufficiently large capital buffers may protect institutions from bankruptcy due to adverse financial events with undesirable outcomes for the company. Under many regulatory frameworks, institutions are obliged to estimate high quantiles of their loss distributions. This is relatively unproblematic when large samples of relevant historical data are available; serious statistical problems appear when only small samples of relevant data are available. One possible solution is to pool two or more samples that appear to have the same distribution in order to create a larger sample. This thesis identifies the advantages and risks of pooling small samples. For some mixtures of normally distributed samples with what are considered to be the same variances, the pooled data may indicate heavy tails. Since a finite mixture of normally distributed samples has light tails, this is an example of spurious heavy tails. Even though two samples may appear to have the same distribution function, it is not necessarily better to pool them in order to obtain a larger sample size with the aim of more accurate quantile estimation. For two normally distributed samples of sizes m and n and standard deviations s and v, we find that when v/s is approximately 2, n+m is less than 100 and m/(m+n) is approximately 0.75, there is a considerable risk of believing that the two samples have equal variance and that the pooled sample has heavy tails. / After the financial crisis that began in 2007, risk awareness within the financial sector has increased. Financial institutions such as banks and insurance companies are closely regulated and supervised in order to create a strong and stable world economy. Because banks and insurance companies must, according to the regulations, hold capital buffers that protect against bankruptcy in the event of unexpected and unwanted events, a more harmonious financial market is created. These regulatory frameworks often mean that those responsible must estimate high quantiles of the institution's expected loss function. Building a reliable model and then estimating high quantiles is easy when plenty of relevant data is available. When there is not enough historical data, statistical problems arise. One solution to the problem is to pool two or more groups of data that appear to come from the same distribution function, thereby creating a larger set of historical data. This thesis reviews the advantages and risks of pooling data when there is not enough relevant historical data to build a reliable model. A certain mix of normally distributed data groups that appear to have the same variance can be perceived as coming from heavy-tailed distributions. Since the normal distribution is not a heavy-tailed distribution, this misconception can create problems; it is an example of spurious heavy tails. Even though two data groups appear to come from the same distribution function, it is not necessarily better to pool these groups to create a larger sample. For two normally distributed data groups with sizes m and n and standard deviations s and v, the most dangerous scenario is when v/s is approximately 2, n+m is less than 100 and m/(m+n) is approximately 0.75. When this occurs, there is a significant risk that the two data groups appear to come from the same distribution function and that the pooled data exhibits heavy-tailed properties.
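A minimal simulation sketch of the pooling scenario described above, in the stated parameter regime (the specific tests, thresholds, sample sizes, and seed are illustrative assumptions, not the thesis's experiment):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

m, n = 60, 20          # m/(m+n) = 0.75, n+m < 100
s, v = 1.0, 2.0        # v/s = 2

n_sim, flagged = 10_000, 0
for _ in range(n_sim):
    a = rng.normal(0.0, s, size=m)
    b = rng.normal(0.0, v, size=n)

    # Step 1: test equality of variances (two-sided F-test); a non-significant
    # result tempts the analyst to pool the two samples.
    f = a.var(ddof=1) / b.var(ddof=1)
    p_equal_var = 2 * min(stats.f.cdf(f, m - 1, n - 1),
                          1 - stats.f.cdf(f, m - 1, n - 1))

    # Step 2: the pooled sample is a finite normal mixture (light-tailed by
    # definition), yet it can show clearly positive excess kurtosis --
    # the "spurious heavy tails" of the title.
    pooled = np.concatenate([a, b])
    looks_heavy = stats.kurtosis(pooled, fisher=True) > 1.0

    flagged += (p_equal_var > 0.05) and looks_heavy

print(f"fraction of runs that look poolable yet heavy-tailed: {flagged / n_sim:.3f}")
```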
5

Essays on Trade Agreements, Agricultural Commodity Prices and Unconditional Quantile Regression

Li, Na 03 January 2014 (has links)
My dissertation consists of three essays in three different areas: international trade, agricultural markets, and nonparametric econometrics. The first and third essays are theoretical papers, while the second essay is empirical. In the first essay, I develop a political economy model of trade agreements in which the set of policy instruments is endogenously determined, providing a rationale for countervailing duties (CVDs). Trade-related policy intervention is assumed to be largely shaped in response to rent-seeking demand, as is often shown empirically. Consequently, the uncertain circumstances during the lifetime of a trade agreement involve both economic and rent-seeking conditions. The latter approximate actual trade policy decisions more closely than the externality hypothesis does and thus provide scope for empirical testing. The second essay tests whether normal mixture (NM) generalized autoregressive conditional heteroscedasticity (GARCH) models adequately capture the relevant properties of agricultural commodity prices. Volatility series were constructed from weekly cash prices of ten agricultural commodities. NM-GARCH models allow for heterogeneous volatility dynamics among different market regimes. Both in-sample fit and out-of-sample forecasting tests confirm that the two-state NM-GARCH approach performs significantly better than the traditional normal GARCH model. For each commodity, it is found that an expected negative price change corresponds to higher volatility persistence, while an expected positive price change arises in conjunction with a greater responsiveness of volatility. In the third essay, I propose an estimator for a nonparametric additive unconditional quantile regression model. Unconditional quantile regression can assess the possibly different impacts of covariates on different unconditional quantiles of a response variable. The proposed estimator does not require d-dimensional nonparametric regression and therefore does not suffer from the curse of dimensionality. In addition, the estimator has an oracle property in the sense that the asymptotic distribution of each additive component is the same as when all other components are known. Both numerical simulations and an empirical application suggest that the new estimator performs much better than the alternatives. / the Canadian Agricultural Trade Policy and Competitiveness Research Network, the Structure and Performance of Agriculture and Agri-products Industry Network, and the Institute for the Advanced Study of Food and Agricultural Policy.
6

NORMAL MIXTURE AND CONTAMINATED MODEL WITH NUISANCE PARAMETER AND APPLICATIONS

Fan, Qian 01 January 2014 (has links)
This paper intends to find the proper hypothesis and test statistic for testing the existence of bilateral contamination when a nuisance parameter is present. The test statistic is based on method-of-moments estimators. A union-intersection test is used to test whether the population distribution can be described by a bilaterally contaminated normal model with unknown variance. This paper also develops a hierarchical normal mixture model (HNM) and applies it to birth weight data. The EM algorithm is employed for parameter estimation, and a singular Bayesian information criterion (sBIC) is applied to choose the number of components. We also propose a singular flexible information criterion that additionally involves a data-driven penalty.
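As an illustration of the EM step mentioned above, here is a minimal EM fit of a plain two-component normal mixture (a generic textbook sketch; it is not the paper's hierarchical HNM model, its contamination test, or its sBIC selection):

```python
import numpy as np
from scipy.stats import norm

def em_normal_mixture(x, n_iter=200, tol=1e-8):
    """Fit a two-component normal mixture by EM; returns (weights, means, sds)."""
    x = np.asarray(x, dtype=float)
    # Crude initialisation from the data quartiles.
    w = np.array([0.5, 0.5])
    mu = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])
    sd = np.array([x.std(), x.std()])

    ll_old = -np.inf
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for every observation.
        dens = w * norm.pdf(x[:, None], mu, sd)          # shape (n, 2)
        resp = dens / dens.sum(axis=1, keepdims=True)

        # M-step: update weights, means, and standard deviations.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

        # Stop once the (monotonically increasing) log-likelihood stabilises.
        ll = np.log(dens.sum(axis=1)).sum()
        if ll - ll_old < tol:
            break
        ll_old = ll
    return w, mu, sd
```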
7

Implementace a aplikace statistických metod ve výzkumu, výrobní technologii a řízení jakosti / Implementation and Application of Statistical Methods in Research, Manufacturing Technology and Quality Control

Kupka, Karel January 2012 (has links)
This thesis deals with modern statistical approaches and their application, focusing on robust methods and neural network modelling. Selected methods are analyzed and applied to frequent practical problems in Czech industry and technology. The chosen topics and methods are intended to be beneficial in real applications compared with the classical methods currently in use. The applicability and effectiveness of the algorithms are verified and demonstrated on real studies and problems in Czech industrial and research bodies. The thesis highlights the great and largely unexploited potential of modern theoretical and computational capacity and of new approaches to statistical modelling and methods. A significant result of this thesis is also an environment for developing data analysis software applications with its own programming language, DARWin (Data Analysis Robot for Windows), for implementing efficient numerical algorithms that extract information from data. The thesis should be an incentive for broader use of robust and computationally intensive methods, such as neural networks, for process modelling, quality control and, more generally, a better understanding of natural phenomena.
