1

Wavelet thresholding for unequally time-spaced data

Kovac, Arne January 1999 (has links)
No description available.
2

Efficacy of robust regression applied to fractional factorial treatment structures.

McCants, Michael January 1900 (has links)
Master of Science / Department of Statistics / James J. Higgins / Completely randomized and randomized block designs involving n factors, each at two levels, are used to screen for the effects of a large number of factors. With such designs it may not be possible, because of cost or time constraints, to run each treatment combination more than once, and in some cases only a fraction of all the treatments may be run. With a large number of factors and limited observations, even one outlier can adversely affect the results. Robust regression methods are designed to down-weight the adverse effects of outliers. However, to our knowledge practitioners do not routinely apply robust regression methods in the context of fractional replication of 2^n factorial treatment structures. The purpose of this report is to examine how robust regression methods perform in this context.
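
The report itself does not ship code; as an illustrative sketch of the setting (the design, effect sizes, and the use of statsmodels' Huber M-estimator are assumptions, not taken from the report), one can compare OLS and a robust fit on a simulated 2^(4-1) fraction with a single planted outlier:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# 2^(4-1) fractional factorial in coded -1/+1 levels, defining relation I = ABCD
base = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
X = np.column_stack([base, base[:, 0] * base[:, 1] * base[:, 2]])  # D = ABC

beta = np.array([3.0, -2.0, 0.0, 1.5])            # illustrative main effects
y = X @ beta + rng.normal(scale=0.5, size=8)
y[0] += 10.0                                      # plant one outlier

X1 = sm.add_constant(X)
ols = sm.OLS(y, X1).fit()
rlm = sm.RLM(y, X1, M=sm.robust.norms.HuberT()).fit()  # Huber M-estimation

print("OLS effect estimates:   ", np.round(ols.params[1:], 2))
print("Robust effect estimates:", np.round(rlm.params[1:], 2))
```

With only 8 runs and 5 parameters there are very few residual degrees of freedom, which is exactly why a single outlier can distort the OLS screening so badly.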
3

Robust linear regression

Bai, Xue January 1900 (has links)
Master of Science / Department of Statistics / Weixin Yao / In practice, when applying a statistical method it often occurs that some observations deviate from the usual model assumptions. Least-squares (LS) estimators are very sensitive to outliers; even a single atypical value can have a large effect on the regression parameter estimates. The goal of robust regression is to develop methods that are resistant to the possibility that one or several unknown outliers may occur anywhere in the data. In this paper, we review various robust regression methods, including the M-estimate, LMS estimate, LTS estimate, S-estimate, τ-estimate, MM-estimate, GM-estimate, and REWLS estimate. Finally, we compare these robust estimates in terms of robustness and efficiency through a simulation study. An application to a real data set is also provided to compare the robust estimates with the traditional least-squares estimator.
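
None of the estimators is spelled out in the abstract; as one concrete instance, here is a minimal iteratively reweighted least squares (IRLS) implementation of the Huber M-estimate (a sketch with the conventional tuning constant c = 1.345 for roughly 95% Gaussian efficiency; function and variable names are my own):

```python
import numpy as np

def huber_m_estimate(X, y, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate via iteratively reweighted least squares.

    Weights are w(u) = min(1, c/|u|) for standardized residuals u = r/s,
    with the scale s re-estimated each iteration from the MAD.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # least-squares start
    for _ in range(max_iter):
        r = y - X @ beta
        s = max(1.4826 * float(np.median(np.abs(r - np.median(r)))), 1e-12)
        u = np.abs(r) / s
        w = np.minimum(1.0, c / np.maximum(u, 1e-12))  # Huber weights
        beta_new = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```

The high-breakdown alternatives in the list (LMS, LTS, S) replace this smooth reweighting with a combinatorial search over data subsets, trading computation for resistance to larger contamination fractions.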
4

Robust Analysis of M-Estimators of Nonlinear Models

Neugebauer, Shawn Patrick 16 August 1996 (has links)
Estimation of nonlinear models finds applications in every field of engineering and the sciences. Much work has been done to build solid statistical theories for its use and interpretation. However, there has been little analysis of how tolerant estimators of nonlinear models are to deviations from assumptions, including normality. We focus on the robustness properties of M-estimators of nonlinear models by studying the effects of such deviations on these estimators. We discuss St. Laurent and Cook's Jacobian Leverage and identify the relationship of the technique to the robustness concept of influence. We derive influence functions for M-estimators of nonlinear models and show that influence of position becomes, more generally, influence of model. The result shows that, for M-estimators, we must bound not only influence of residual but also influence of model. Several examples highlight the unique problems of nonlinear model estimation and demonstrate the utility of the influence function. / Master of Science
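
The abstract states the result without formulas. In standard M-estimation theory (a generic sketch, not copied from the thesis) the influence function of a regression M-estimator factors into exactly the two terms named above:

```latex
% M-estimator defined by the estimating equation
%   \sum_i \psi\big(y_i - f(x_i;\theta)\big)\, \nabla_\theta f(x_i;\theta) = 0
\mathrm{IF}(x, y; \hat\theta) \;=\; M^{-1}\,
  \underbrace{\psi\big(y - f(x;\theta)\big)}_{\text{influence of residual}}\,
  \underbrace{\nabla_\theta f(x;\theta)}_{\text{influence of model}},
\qquad
M \;=\; \mathbb{E}\!\left[\psi'(r)\, \nabla_\theta f \,\nabla_\theta f^{\top}\right].
```

In the linear case f(x; θ) = xᵀθ the model factor reduces to x itself, i.e. influence of position; bounding ψ alone therefore bounds only the residual factor, which is the thesis's point about nonlinear models.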
5

Approximate replication of high-breakdown robust regression techniques

Zeileis, Achim, Kleiber, Christian January 2008 (has links) (PDF)
This paper demonstrates that even regression results obtained by techniques close to the standard ordinary least squares (OLS) method can be difficult to replicate if a stochastic model fitting algorithm is employed. / Series: Research Report Series / Department of Statistics and Mathematics
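
The point is easy to reproduce in miniature: high-breakdown estimators such as least trimmed squares (LTS) are usually computed by randomized subset search, so rerunning the fit with a different seed, or a different implementation, can change the coefficients. A toy example of such a stochastic fitting algorithm (a naive elemental-set search written for illustration, not the algorithm audited in the paper):

```python
import numpy as np

def lts_random_search(X, y, h, n_subsets, seed):
    """Naive LTS approximation: fit random p-point elemental sets, take one
    concentration step, keep the fit minimizing the trimmed sum of squares."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    best_crit, best_beta = np.inf, None
    for _ in range(n_subsets):
        idx = rng.choice(n, size=p, replace=False)
        try:
            b = np.linalg.solve(X[idx], y[idx])
        except np.linalg.LinAlgError:
            continue                                    # singular elemental set
        keep = np.argsort((y - X @ b) ** 2)[:h]         # concentration step
        b = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
        crit = np.sum(np.sort((y - X @ b) ** 2)[:h])
        if crit < best_crit:
            best_crit, best_beta = crit, b
    return best_beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.3, size=50)
y[:5] += 15.0                                           # 10% contamination
print(lts_random_search(X, y, h=35, n_subsets=10, seed=1))
print(lts_random_search(X, y, h=35, n_subsets=10, seed=2))  # need not match
```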
6

Study on Ramsay Fuzzy Neural Networks

Wu, Tzung-Han 23 June 2008 (has links)
In this thesis, M-estimators with Ramsay's function used in robust regression theory for linear parametric regression problems will be generalized to nonparametric Ramsay fuzzy neural networks (RFNNs) for nonlinear regression problems. Emphasis is put particularly on the robustness against outliers. This provides alternative learning machines when faced with general nonlinear learning problems. Simple weight updating rules based on incremental gradient descent and iteratively reweighted least squares (IRLS) will be derived. Some numerical examples will be provided to compare the robustness against outliers for usual fuzzy neural networks (FNNs) and the proposed RFNNs. Simulation results show that the RFNNs proposed in this thesis have good robustness against outliers.
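
Ramsay's E_a function corresponds to ψ(u) = u·exp(−a|u|), i.e. an IRLS weight w(u) = exp(−a|u|), with a ≈ 0.3 a common tuning choice. A minimal sketch of one such weight update in the linear case (the thesis embeds the same weighting in the RFNN learning rules; the code below is illustrative, not from the thesis):

```python
import numpy as np

def ramsay_weights(residuals, scale, a=0.3):
    """Ramsay E_a weights w(u) = exp(-a|u|), u = r/scale: large residuals
    are down-weighted smoothly rather than trimmed outright."""
    return np.exp(-a * np.abs(residuals / scale))

def irls_step(X, y, beta, a=0.3):
    """One iteratively reweighted least squares step with Ramsay weights."""
    r = y - X @ beta
    s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12   # MAD scale
    w = ramsay_weights(r, s, a)
    return np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
```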
7

A Comparison of Five Robust Regression Methods with Ordinary Least Squares: Relative Efficiency, Bias and Test of the Null Hypothesis

Anderson, Cynthia, 1962- 08 1900 (has links)
A Monte Carlo simulation was used to generate data for a comparison of five robust regression estimation methods with ordinary least squares (OLS) under 36 different outlier data configurations. Two of the robust estimators, Least Absolute Value (LAV) estimation and MM estimation, are commercially available. Three author-modified variations on MM were also included (MM1, MM2, and MM3). Design parameters that were varied include sample size (n = 60 and n = 180), number of independent predictor variables (2, 3 and 6), outlier density (0%, 5% and 15%) and outlier location (2x,2y s; 8x,8y s; 4x,8y s; and 8x,4y s). Criteria on which the regression methods were measured are relative efficiency, bias and a test of the null hypothesis. Results indicated that MM2 was the best-performing robust estimator on relative efficiency, MM1 was the best on bias, and MM2 was the best on the test of the null hypothesis. Overall, the MM-type robust regression methods outperformed OLS and LAV on relative efficiency, bias, and the test of the null hypothesis.
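
One cell of such a study is easy to sketch. Below, LAV is computed as median regression via statsmodels' QuantReg, and OLS and LAV are compared on summed squared estimation error under 5% y-direction contamination (the MM variants are omitted and every setting is illustrative, not taken from the dissertation):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n, reps, true = 60, 200, np.array([1.0, 2.0, -1.0])
sse = {"OLS": 0.0, "LAV": 0.0}

for _ in range(reps):
    X = sm.add_constant(rng.normal(size=(n, 2)))
    y = X @ true + rng.normal(size=n)
    out = rng.choice(n, size=3, replace=False)       # 5% outliers
    y[out] += 8.0                                    # shifted in the y direction
    sse["OLS"] += np.sum((sm.OLS(y, X).fit().params - true) ** 2)
    sse["LAV"] += np.sum((sm.QuantReg(y, X).fit(q=0.5).params - true) ** 2)

print({k: round(v / reps, 4) for k, v in sse.items()})  # LAV should win here
```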
8

Regression-Based Monte Carlo For Pricing High-Dimensional American-Style Options / Regressionsbaserad Monte Carlo För Att Prissätta Högdimensionella Amerikanska Optioner

Andersson, Niklas January 2016 (has links)
Pricing financial derivatives is an essential part of the financial industry. For some derivatives a closed-form solution exists, but pricing high-dimensional American-style derivatives remains a challenging problem. This project focuses on the derivative called an option, and especially on the pricing of American-style basket options, i.e. options with both an early-exercise feature and multiple underlying assets. In high-dimensional problems, which is definitely the case for American-style options, Monte Carlo methods are advantageous. Therefore, in this thesis, regression-based Monte Carlo has been used to determine early-exercise strategies for the option. The well-known Least Squares Monte Carlo (LSM) algorithm of Longstaff and Schwartz (2001) has been implemented and compared to Robust Regression Monte Carlo (RRM) by C. Jonen (2011). The difference between these methods is that robust regression, rather than least-squares regression, is used to calculate continuation values of American-style options. Since robust regression is more stable against outliers, Jonen claims that this approach gives better estimates of the option price. The techniques were hard to compare without the duality approach of Andersen and Broadie (2004), so this method was added. The numerical tests then indicate that the exercise strategy determined using RRM produces a higher lower bound and a tighter upper bound compared to LSM; the gap between the upper and lower bounds could be up to 4 times smaller using RRM. Importance sampling and quasi-Monte Carlo have also been used to reduce the variance in the estimation of the option price and to speed up the convergence rate.
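
For reference, the regression step at the heart of LSM looks roughly as follows on a single underlying (the thesis treats multi-asset baskets; this stripped-down sketch prices a one-dimensional American put under Black-Scholes dynamics, and all parameter values are illustrative). RRM's essential change sits in one place: the ordinary polynomial regression is replaced by a robust fit.

```python
import numpy as np

def lsm_american_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     steps=50, paths=100_000, seed=0):
    """Longstaff-Schwartz: regress discounted continuation values on a
    polynomial in the asset price; exercise when the immediate payoff
    exceeds the fitted continuation value."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    z = rng.normal(size=(steps, paths))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=0))
    cash = np.maximum(K - S[-1], 0.0)            # exercise value at maturity
    for t in range(steps - 2, -1, -1):
        cash *= np.exp(-r * dt)                  # discount one step back
        itm = K - S[t] > 0.0                     # regress in-the-money paths only
        if itm.sum() > 3:
            coef = np.polyfit(S[t, itm], cash[itm], 2)  # OLS; RRM uses a robust fit
            exercise = (K - S[t, itm]) > np.polyval(coef, S[t, itm])
            cash[np.where(itm)[0][exercise]] = (K - S[t, itm])[exercise]
    return np.exp(-r * dt) * cash.mean()

print(round(lsm_american_put(), 3))   # roughly 6.0-6.1 for these parameters
```

Outliers enter through the simulated continuation values, so a robust continuation-value regression changes the estimated exercise boundary, which is what drives the tighter bounds reported above.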
9

Robust mixture regression models using t-distribution

Wei, Yan January 1900 (has links)
Master of Science / Department of Statistics / Weixin Yao / In this report, we propose a robust mixture of regressions model based on the t-distribution, extending the mixture of t-distributions proposed by Peel and McLachlan (2000) to the regression setting. This new mixture of regressions model is robust to outliers in the y direction but not to outliers at high-leverage points. To combat this, we also propose a modified version of the method, which fits the t-based mixture of regressions after adaptively trimming high-leverage points. We further propose adaptively choosing the degrees of freedom of the t-distribution using the profile likelihood. The proposed robust mixture regression estimate has high efficiency due to the adaptive choice of the degrees of freedom. We demonstrate the effectiveness of the proposed method and compare it with some existing methods through a simulation study.
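
In symbols, the model swaps the usual normal component densities for t densities (a sketch of the standard formulation; the report's exact notation may differ):

```latex
% Mixture of m linear regressions with t-distributed errors:
f(y \mid \boldsymbol{x}) \;=\; \sum_{j=1}^{m} \pi_j\,
  t_{\nu}\!\left(y;\; \boldsymbol{x}^{\top}\boldsymbol{\beta}_j,\; \sigma_j^2\right),
\qquad \pi_j \ge 0,\quad \sum_{j=1}^{m} \pi_j = 1,
```

where t_ν(·; μ, σ²) is a t density with location μ, scale σ and ν degrees of freedom. Small ν gives heavy tails that down-weight aberrant y values, which is exactly why the fit resists y-direction outliers yet says nothing about extreme x values, hence the leverage-trimming modification.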
10

Penalized methods and algorithms for high-dimensional regression in the presence of heterogeneity

Yi, Congrui 01 December 2016 (has links)
In fields such as statistics, economics and biology, heterogeneity is an important topic concerning the validity of data inference and the discovery of hidden patterns. This thesis focuses on penalized methods for regression analysis in the presence of heterogeneity in a potentially high-dimensional setting. Two possible strategies for dealing with heterogeneity are: robust regression methods that provide heterogeneity-resistant coefficient estimation, and direct detection of heterogeneity while estimating coefficients accurately at the same time. We pursue the first strategy for two robust regression methods, Huber loss regression and quantile regression with Lasso or Elastic-Net penalties, which have been studied theoretically but lack efficient algorithms. We propose a new algorithm, Semismooth Newton Coordinate Descent, to solve them. The algorithm is a novel combination of the Semismooth Newton Algorithm and Coordinate Descent that applies to penalized optimization problems with both a nonsmooth loss and a nonsmooth penalty. We prove its convergence properties and show its computational efficiency through numerical studies. We also propose a nonconvex penalized regression method, Heterogeneity Discovery Regression (HDR), as a realization of the second idea. We establish theoretical results that guarantee statistical precision for any local optimum of the objective function with high probability. We also compare the numerical performance of HDR with competitors including Huber loss regression, quantile regression and least squares through simulation studies and a real data example. In these experiments, the HDR methods detect heterogeneity accurately and largely outperform the competitors in terms of coefficient estimation and variable selection.
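
The penalized problems the algorithm targets have the generic form below (a sketch of the standard objectives; the thesis's exact formulation may differ). Both pieces can be nonsmooth, which is what rules out plain Newton or plain coordinate descent:

```latex
\min_{\beta_0,\,\boldsymbol{\beta}}\;
  \frac{1}{n}\sum_{i=1}^{n}
    \ell\big(y_i - \beta_0 - \boldsymbol{x}_i^{\top}\boldsymbol{\beta}\big)
  \;+\; \lambda\!\left(\alpha\,\|\boldsymbol{\beta}\|_1
    + \frac{1-\alpha}{2}\,\|\boldsymbol{\beta}\|_2^2\right)
```

where ℓ is either the Huber loss, ℓ(r) = r²/2 for |r| ≤ γ and γ|r| − γ²/2 otherwise, or the quantile check loss ℓ_τ(r) = r(τ − 1{r < 0}); α = 1 gives the Lasso penalty and 0 < α < 1 the Elastic-Net.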
