11

穩健迴歸轉換與區域影響分析 / Robust Regression Transformation and Diagnostics Using Local Influence

黃逸勤 Unknown Date
12

A Comparison Of Some Robust Regression Techniques

Avci, Ezgi 01 September 2009
Robust regression is a commonly required approach in industrial studies such as data mining, quality control and improvement, and finance. Among the robust regression methods, Least Median of Squares, Least Trimmed Squares, M-regression, the MM-method, Least Absolute Deviations, Locally Weighted Scatter Plot Smoothing and Multivariate Adaptive Regression Splines are compared with each other and with Ordinary Least Squares under contaminated normal distributions, with respect to multiple-outlier detection performance measures. In this comparison, a simulation study is performed by varying parameters such as outlier density, outlier location on the x-axis, sample size and the number of independent variables. Multiple-outlier detection is evaluated with respect to the performance measures detection capability, false alarm rate, improved mean square error and the ratio of improved mean square error. Based on the simulation results, the three most competitive methods are then compared on an industrial data set with respect to the coefficient of multiple determination and mean square error.
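The detection-performance measures in this abstract can be illustrated with a small sketch: M-regression (one of the compared methods) fitted by iteratively reweighted least squares on contaminated-normal data, with outliers flagged from robust standardized residuals. The simulation settings and cutoff below are illustrative, not those of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Contaminated-normal simulation: y = 2 + 3x with 10% vertical outliers.
n = 100
x = rng.uniform(0, 10, n)
y = 2 + 3 * x + rng.normal(0, 1, n)
out_idx = rng.choice(n, size=10, replace=False)
y[out_idx] += 15

X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary least squares

def huber_irls(X, y, c=1.345, n_iter=50):
    """M-regression with the Huber function via iteratively reweighted LS."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # MAD scale
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)  # Huber weights: downweight large residuals
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

beta_rob = huber_irls(X, y)

# Flag outliers from robust standardized residuals, then score the flags
# with two of the measures named above.
r = y - X @ beta_rob
s = np.median(np.abs(r - np.median(r))) / 0.6745
flagged = np.abs(r / s) > 2.5
truth = np.zeros(n, dtype=bool)
truth[out_idx] = True
detection_capability = (flagged & truth).sum() / truth.sum()
false_alarm_rate = (flagged & ~truth).sum() / (~truth).sum()
```

On this setup the robust fit stays near the true coefficients while the OLS intercept is pulled toward the contaminated points.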
13

Statistical Analysis of Operational Data for Manufacturing System Performance Improvement

Wang, Zhenrui January 2013
The performance of a manufacturing system relies on four types of elements: operators, machines, the computer system and the material handling system. To ensure the performance of these elements, operational data covering various aspects of the system are collected for monitoring and analysis. This dissertation focuses on operator performance evaluation and machine failure prediction. The proposed research is motivated by the following challenges in analyzing operational data: (i) the complex relationships between the variables, (ii) the implicit information important to failure prediction, and (iii) data with outliers and missing or erroneous measurements. To overcome these challenges, the following research has been conducted. To compare operator performance, a methodology combining regression modeling and a multiple-comparisons technique is proposed. The regression model quantifies and removes the complex effects of other impacting factors on operator performance. A robust zero-inflated Poisson (ZIP) model is developed to reduce the impact of the excessive zeros and outliers in the performance metric, i.e. the number of defects (NoD), on the regression analysis. The model residuals are plotted in non-parametric statistical charts for performance comparison, and the estimated model coefficients are also used to identify under-performing machines. To detect temporal patterns in an operational data sequence, an algorithm is proposed for detecting interval-based asynchronous periodic patterns (APP). The algorithm detects patterns effectively and efficiently through a modified clustering step and a convolution-based template matching method. To predict machine failures based on covariates with erroneous measurements, a new method is proposed for statistical inference of the proportional hazards model under a mixture of classical and Berkson errors. The method estimates the model coefficients with an expectation-maximization (EM) algorithm in which the expectation step is carried out by Monte Carlo simulation, improving the accuracy of the inference on machine failure probability. The research presented in this dissertation provides a package of solutions to improve manufacturing system performance. The effectiveness and efficiency of the proposed methodologies have been demonstrated and justified with both numerical simulations and real-world case studies.
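The dissertation's robust ZIP model includes covariates and outlier protection; as a minimal sketch of only the zero-inflated Poisson likelihood it builds on, here is maximum-likelihood estimation of an intercept-only ZIP model on simulated counts. The setup and parameter values are illustrative, not from the dissertation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(1)

# Simulate counts with excess zeros: a structural zero with probability 0.3,
# otherwise a Poisson(2.5) draw.
n = 2000
pi_true, lam_true = 0.3, 2.5
y = np.where(rng.random(n) < pi_true, 0, rng.poisson(lam_true, n))

def zip_nll(params):
    """Negative log-likelihood of an intercept-only zero-inflated Poisson."""
    pi = 1.0 / (1.0 + np.exp(-params[0]))  # zero-inflation probability (logit scale)
    lam = np.exp(params[1])                # Poisson mean (log scale)
    # P(Y=0) = pi + (1-pi) e^{-lam};  P(Y=k) = (1-pi) Poisson(k; lam) for k > 0
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    ll_pos = np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

res = minimize(zip_nll, x0=np.zeros(2))
pi_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
lam_hat = np.exp(res.x[1])
```

The unconstrained logit/log parameterization keeps the optimizer inside the valid parameter region without explicit bounds.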
14

Robust techniques for regression models with minimal assumptions / M.M. van der Westhuizen

Van der Westhuizen, Magdelena Marianna January 2011
Good quality management decisions often rely on the evaluation and interpretation of data. One of the most popular ways to investigate possible relationships in a given data set is to follow a process of fitting models to the data. Regression models are often employed to assist with decision making. In addition to decision making, regression models can also be used for the optimization and prediction of data. The success of a regression model, however, relies heavily on assumptions made by the model builder. In addition, the model may also be influenced by the presence of outliers; a more robust model, which is not as easily affected by outliers, is necessary in making more accurate interpretations about the data. In this research study robust techniques for regression models with minimal assumptions are explored. Mathematical programming techniques such as linear programming, mixed integer linear programming, and piecewise linear regression are used to formulate a nonlinear regression model. Outlier detection and smoothing techniques are included to address the robustness of the model and to improve predictive accuracy. The performance of the model is tested by applying it to a variety of data sets and comparing the results to those of other models. The results of the empirical experiments are also presented in this study. / Thesis (M.Sc. (Computer Science))--North-West University, Potchefstroom Campus, 2011.
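One way to see how linear programming yields a robust regression fit, in the spirit of the mathematical-programming techniques above: least absolute deviations (LAD) regression can be written as a linear program by introducing nonnegative variables bounding the residual magnitudes. This is a generic sketch, not the thesis's own formulation.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

# Line y = 1 + 2x with small noise and a few gross vertical outliers.
n = 60
x = rng.uniform(0, 5, n)
y = 1 + 2 * x + rng.normal(0, 0.2, n)
y[:5] += 10  # outliers

X = np.column_stack([np.ones(n), x])
p = X.shape[1]

# LAD as an LP: minimize sum(u) subject to -u <= y - X b <= u,
# with decision variables [b (free), u (nonnegative)].
c = np.concatenate([np.zeros(p), np.ones(n)])
A_ub = np.block([[X, -np.eye(n)],    # X b - u <= y
                 [-X, -np.eye(n)]])  # -X b - u <= -y
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * p + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
beta_lad = res.x[:p]
```

At the optimum each u_i equals |y_i - x_i'b|, so the LP objective is exactly the sum of absolute residuals, which a handful of large outliers cannot dominate the way squared residuals can.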
16

Revisitando o problema de classificação de padrões na presença de outliers usando técnicas de regressão robusta / Revisiting the problem of pattern classification in the presence of outliers using robust regression techniques

Ana Luiza Bessa de Paula Barros 09 August 2013
This thesis addresses the problem of data classification when the data are contaminated with atypical patterns. These patterns, generically called outliers, are omnipresent in real-world multivariate data sets, but their a priori detection (i.e. before training the classifier) is a difficult task. As a result, the most common approach is a reactive one, in which the presence of outliers is suspected only after a previously trained classifier has achieved low performance. Several strategies can then be carried out to improve the performance of the classifier, such as choosing a more computationally powerful classifier and/or cleaning the data by removing those patterns that are difficult to categorize properly. Whatever the strategy adopted, the presence of outliers will always require more attention and care during the design of a pattern classifier. Bearing these difficulties in mind, this thesis revisits concepts and techniques from the theory of robust regression, in particular those related to M-estimation, adapting them to the design of pattern classifiers that can automatically handle outliers. This adaptation leads to the proposal of robust versions of two pattern classifiers widely used in the literature, namely the least squares classifier (LSC) and the extreme learning machine (ELM). Through a comprehensive set of computer experiments using synthetic and real-world data, it is shown that the proposed robust classifiers consistently outperform their original versions.
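A minimal sketch of the M-estimation idea applied to an extreme learning machine: the random hidden layer is fixed and the output weights are re-estimated by iteratively reweighted least squares with Huber weights, downweighting training points (here, points with flipped labels) that produce large residuals. The architecture and weighting choices are illustrative, not the exact estimators proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two well-separated Gaussian classes in 2-D, with 10% of labels flipped
# to act as outliers in the +-1 regression targets.
n_per = 100
X = np.vstack([rng.normal([-2, 0], 1.0, (n_per, 2)),
               rng.normal([2, 0], 1.0, (n_per, 2))])
y = np.concatenate([-np.ones(n_per), np.ones(n_per)])
flip = rng.choice(2 * n_per, 20, replace=False)
y_noisy = y.copy()
y_noisy[flip] *= -1

# ELM: fixed random hidden layer, output weights fitted to the targets.
L = 40
W = rng.normal(0, 1, (2, L))
b = rng.normal(0, 1, L)
H = np.tanh(X @ W + b)

beta_ls = np.linalg.lstsq(H, y_noisy, rcond=None)[0]  # ordinary ELM

def huber_weights(r, c=1.345):
    s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # MAD scale
    u = np.abs(r) / s
    return np.where(u <= c, 1.0, c / u)

# Robust ELM: IRLS on the output weights with Huber weighting.
beta = beta_ls.copy()
for _ in range(30):
    w = huber_weights(y_noisy - H @ beta)
    sw = np.sqrt(w)
    beta = np.linalg.lstsq(H * sw[:, None], y_noisy * sw, rcond=None)[0]

acc = np.mean(np.sign(H @ beta) == y)  # accuracy against the clean labels
```

Because the hidden layer is untouched, robustification costs only a few extra weighted least-squares solves.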
17

Vybrané aspekty robustní regrese a srovnání metod robustní regrese / Selected aspects of robust regression and comparison of robust regression methods

Černý, Jindřich January 2006
This dissertation examines robust regression methods. Its primary purpose is to propose an extension, derivation and summary (including a computational algorithm) of the Theil-Sen regression estimator (in some literature also referred to as the Passing-Bablok regression method) for multi-dimensional space, and to compare this method with other robust regression methods. The combination of these two objectives is the primary and original contribution of the dissertation; based on the available literature, it is unknown whether anyone has discussed this problem in greater depth and solved it completely. This work therefore provides a summary overview of the issue and offers a new variant of this multidimensional, nonparametric, robust regression method. Secondary goals include a clear summary of other robust methods and of findings related to them, and a comparison of the robust methods with each other, placing emphasis on comparisons with the proposed Theil-Sen estimator and with the least squares method, including the relevant mathematical context and the interchangeability of the methods. These secondary objectives are a further contribution of the dissertation to the field of robust regression, in particular toward a unified view of robust regression methods and estimates in general.
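For the simple two-dimensional case, the Theil-Sen estimate discussed above is just the median of all pairwise slopes, with the intercept taken as a median offset; a sketch on contaminated data:

```python
import numpy as np
from itertools import combinations

def theil_sen(x, y):
    """Theil-Sen estimator: median of pairwise slopes, then median intercept."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2)
              if x[j] != x[i]]
    slope = np.median(slopes)
    intercept = np.median(y - slope * x)
    return slope, intercept

rng = np.random.default_rng(4)
x = np.arange(30, dtype=float)
y = 0.5 * x + 1 + rng.normal(0, 0.1, 30)
y[::7] += 20  # gross outliers at every 7th point

slope, intercept = theil_sen(x, y)
```

The median over pairwise slopes gives the estimator its high breakdown point (about 29%), which is precisely the property that becomes hard to generalize cleanly to multi-dimensional space.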
18

Minimax D-optimal designs for regression models with heteroscedastic errors

Yzenbrandt, Kai 20 April 2021
Minimax D-optimal designs for regression models with heteroscedastic errors are studied and constructed. These designs are robust against possible misspecification of the error variance in the model. We propose a flexible assumption for the error variance and use a minimax approach to define robust designs. As usual, it is hard to find robust designs analytically, since the associated design problem is not a convex optimization problem. However, the minimax D-optimal design problem has an objective function that is a difference of two convex functions. An effective algorithm is developed to compute minimax D-optimal designs under the least squares estimator and the generalized least squares estimator. The algorithm can be applied to construct minimax D-optimal designs for any linear or nonlinear regression model with heteroscedastic errors. In addition, several theoretical results are obtained for the minimax D-optimal designs.
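As background to the design problems above, a standard (non-minimax) D-optimal approximate design can be computed with the classical multiplicative algorithm, which reweights candidate design points in proportion to their variance function; for the simple linear model on [-1, 1] the optimal design puts equal mass on the two endpoints. This sketch is generic and does not implement the thesis's minimax algorithm.

```python
import numpy as np

# Design space: a grid on [-1, 1]; regression functions f(x) = (1, x).
xs = np.linspace(-1, 1, 41)
F = np.column_stack([np.ones_like(xs), xs])
p = F.shape[1]

# Multiplicative algorithm for D-optimality: maximize log det M(w),
# where M(w) = sum_i w_i f(x_i) f(x_i)^T over design weights w.
w = np.full(len(xs), 1 / len(xs))  # start from the uniform design
for _ in range(500):
    M = F.T @ (w[:, None] * F)                             # information matrix
    d = np.einsum("ij,jk,ik->i", F, np.linalg.inv(M), F)   # variance function d(x_i)
    w = w * d / p                                          # reweight toward high-variance points
```

The update leaves the weights summing to one (since the weighted average of d equals p) and drives the mass onto the support of the D-optimal design, here the two endpoints with weight 1/2 each.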
19

Robustní lineární regrese / Robust linear regression

Rábek, Július January 2021
Regression analysis is one of the most extensively used statistical tools applied across different fields of science, with linear regression being its most well-known method. However, the traditional procedure to obtain the linear model estimates, the least squares approach, is highly sensitive to even slight departures from the assumed modelling framework. This is especially pronounced when atypical values occur in the observed data. This lack of stability of the least squares approach is a serious problem in applications. Thus, the focus of this thesis lies in assessing the available robust alternatives to least squares estimation, which are not so easily affected by any outlying values. First, we introduce the linear regression model theory and derive the least squares method. Then, we characterise different types of unusual observations and outline some fundamental robustness measures. Next, we define and examine the robust alternatives to classical estimation in linear regression models. Finally, we conduct a comprehensive simulation study comparing the performance of the robust methods under different scenarios.
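A toy version of such a simulation study, using scikit-learn's robust estimators (assumed available): for a fixed contamination fraction, the mean squared error of the intercept estimate is averaged over replications for ordinary least squares and two robust alternatives. The scenario (vertical outliers only) and all settings are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, HuberRegressor, TheilSenRegressor

rng = np.random.default_rng(5)

def simulate(eps, n=100, reps=20):
    """Mean squared error of the intercept estimate under contamination fraction eps."""
    errs = {"OLS": [], "Huber": [], "Theil-Sen": []}
    for _ in range(reps):
        x = rng.uniform(0, 10, n)
        y = 1 + 2 * x + rng.normal(0, 1, n)
        k = int(eps * n)
        if k:
            idx = rng.choice(n, k, replace=False)
            y[idx] += 20  # vertical outliers, which mainly bias the intercept
        X = x.reshape(-1, 1)
        for name, est in [("OLS", LinearRegression()),
                          ("Huber", HuberRegressor()),
                          ("Theil-Sen", TheilSenRegressor(random_state=0))]:
            est.fit(X, y)
            errs[name].append((est.intercept_ - 1) ** 2)
    return {name: float(np.mean(v)) for name, v in errs.items()}

mse = simulate(eps=0.2)
```

Sweeping `eps` over a grid of contamination levels (and varying the outlier placement) reproduces, in miniature, the kind of scenario comparison the thesis carries out.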
20

Bayesian Restricted Likelihood Methods

Lewis, John Robert January 2014
No description available.
