  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

The shrinkage least absolute deviation estimator in large samples and its application to the Treynor-Black model /

Kim, Tae-Hwan, January 1998 (has links)
Thesis (Ph. D.)--University of California, San Diego, 1998. / Vita. Includes bibliographical references (leaves 114-116).
2

Robust mixture regression model fitting by Laplace distribution

Xing, Yanru January 1900 (has links)
Master of Science / Department of Statistics / Weixing Song / A robust estimation procedure for mixture linear regression models is proposed in this report by assuming that the error terms follow a Laplace distribution. An EM algorithm is implemented to handle the missing information, exploiting the fact that the Laplace distribution is a scale mixture of a normal distribution with a latent scaling distribution. The finite-sample performance of the proposed algorithm is evaluated in extensive simulation studies, with comparisons to other existing procedures in the literature. A sensitivity study based on a real data example illustrates the application of the proposed method.
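A minimal sketch of the scale-mixture idea behind the report's EM procedure, restricted here to a single-component Laplace regression rather than the full mixture model: conditioning on the latent scales turns each EM iteration into a weighted least-squares fit with weights inversely proportional to the absolute residuals. The data, starting values, and tolerances below are invented for illustration.

```python
import numpy as np

def laplace_regression_irls(X, y, n_iter=50, eps=1e-6):
    """LAD / Laplace-error regression via the normal scale-mixture idea.

    Treating Laplace errors as a normal scale mixture, the E-step yields
    observation weights proportional to 1/|residual| and the M-step is a
    weighted least-squares fit, i.e. iteratively reweighted least squares.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS starting value
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)             # E-step: expected inverse scales
        WX = w[:, None] * X
        beta_new = np.linalg.solve(X.T @ WX, X.T @ (w * y))  # M-step: weighted LS
        if np.max(np.abs(beta_new - beta)) < 1e-8:
            return beta_new
        beta = beta_new
    return beta

# Toy usage: outlier-contaminated line; the Laplace/LAD fit is less affected than OLS.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2.0 + 0.5 * x + rng.laplace(scale=1.0, size=100)
y[:5] += 20.0                                            # a few gross outliers
X = np.column_stack([np.ones_like(x), x])
print(laplace_regression_irls(X, y))
```

The `eps` guard only prevents division by zero at near-interpolated points; it is a numerical convenience rather than part of the formal EM derivation.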
3

Development and validation of early prediction for neurological outcome at 90 days after return of spontaneous circulation in out-of-hospital cardiac arrest / 自己心拍再開後の院外心停止における90日後神経学的転帰の早期予後予測の開発と検証

Nishioka, Norihiro 23 March 2022 (has links)
Kyoto University / New system, course-based doctorate / Doctor of Medical Science / 甲第23798号 / 医博第4844号 / 新制||医||1058 (University Library) / Kyoto University Graduate School of Medicine, Major in Medicine / (Chief examiner) Professor 佐藤 俊哉, Professor 黒田 知宏, Professor 永井 洋士 / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
4

Numerical Methods for Wilcoxon Fractal Image Compression

Jau, Pei-Hung 28 June 2007 (has links)
In this thesis, the Wilcoxon approach to linear regression problems is combined with fractal image compression to form a novel Wilcoxon fractal image compression. When the original image is corrupted by noise, we argue that the fractal image compression scheme should be insensitive to the outliers present in the corrupted image. This leads to the new concept of robust fractal image compression. The proposed Wilcoxon fractal image compression is the first attempt toward the design of robust fractal image compression. Four different numerical methods, i.e., steepest descent, line minimization based on quadratic interpolation, line minimization based on cubic interpolation, and least absolute deviation, are proposed to solve the associated linear Wilcoxon regression problem. The simulation results show that, compared with traditional fractal image compression, Wilcoxon fractal image compression has very good robustness against outliers caused by salt-and-pepper noise. However, it shows little improvement in robustness against outliers caused by Gaussian noise.
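As a rough, hedged illustration of the Wilcoxon regression component mentioned above (not of the thesis's numerical methods for the fractal coding itself), Jaeckel's rank-based dispersion with Wilcoxon scores can be minimized for a simple linear model; the slope is found by minimizing the dispersion and the intercept is recovered as the median residual. The simulated data are an assumption of this sketch.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

def wilcoxon_dispersion(slope, x, y):
    """Jaeckel's dispersion with Wilcoxon scores a(i) = sqrt(12) * (i/(n+1) - 1/2)."""
    e = y - slope * x
    n = len(e)
    scores = np.sqrt(12.0) * (rankdata(e) / (n + 1) - 0.5)
    return np.sum(scores * e)

# Heavy-tailed toy data (invented for the sketch).
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 1.0 + 3.0 * x + rng.standard_t(df=2, size=200)

# The dispersion is invariant to the intercept, so minimize over the slope only
# and recover the intercept as the median residual.
res = minimize_scalar(wilcoxon_dispersion, bounds=(-10, 10), method="bounded", args=(x, y))
slope = res.x
intercept = np.median(y - slope * x)
print(slope, intercept)
```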
5

Variable Selection and Function Estimation Using Penalized Methods

Xu, Ganggang 2011 December 1900 (has links)
Penalized methods are becoming more and more popular in statistical research. This dissertation research covers two major applications of penalized methods: variable selection and nonparametric function estimation. The following two paragraphs give a brief introduction to each topic.

Infinite-variance autoregressive models are important for modeling heavy-tailed time series. We use a penalty method to conduct model selection for autoregressive models with innovations in the domain of attraction of a stable law indexed by α ∈ (0, 2). We show that by combining the least absolute deviation loss function with the adaptive lasso penalty, we can consistently identify the true model. At the same time, the resulting coefficient estimator converges at a rate of n^(-1/α). The proposed approach gives a unified variable selection procedure for both finite- and infinite-variance autoregressive models.

While automatic smoothing parameter selection for nonparametric function estimation has been extensively researched for independent data, it is much less so for clustered and longitudinal data. Although leave-subject-out cross-validation (CV) has been widely used, its theoretical properties are unknown and its minimization is computationally expensive, especially when there are multiple smoothing parameters. Focusing on penalized modeling methods, we show that leave-subject-out CV is optimal in that its minimization is asymptotically equivalent to the minimization of the true loss function. We develop an efficient Newton-type algorithm to compute the smoothing parameters that minimize the CV criterion. Furthermore, we derive a simplification of the leave-subject-out CV, which leads to a more efficient algorithm for selecting the smoothing parameters. We show that the simplified CV criterion is asymptotically equivalent to the unsimplified one and thus enjoys the same optimality property. This CV criterion also provides a completely data-driven approach to selecting the working covariance structure when using generalized estimating equations in longitudinal data analysis. Our results are applicable to additive, linear varying-coefficient, and nonlinear models with data from exponential families.
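A hedged sketch of the leave-subject-out CV criterion discussed above, using ridge regression on a polynomial basis as a stand-in for the penalized smoothers treated in the dissertation; the grid search is the brute-force version rather than the Newton-type algorithm the work develops, and the clustered toy data are invented.

```python
import numpy as np
from sklearn.linear_model import Ridge

def leave_subject_out_cv(X, y, subjects, lambdas):
    """Leave-subject-out CV: hold out all observations of one subject at a time."""
    scores = []
    for lam in lambdas:
        err = 0.0
        for s in np.unique(subjects):
            train, test = subjects != s, subjects == s
            model = Ridge(alpha=lam).fit(X[train], y[train])
            err += np.sum((y[test] - model.predict(X[test])) ** 2)
        scores.append(err / len(y))
    return lambdas[int(np.argmin(scores))], scores

# Toy clustered data: 20 subjects, 10 observations each, with subject-level random intercepts.
rng = np.random.default_rng(2)
subjects = np.repeat(np.arange(20), 10)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.5, 200) + rng.normal(0, 0.7, 20)[subjects]
X = np.column_stack([x ** k for k in range(1, 6)])      # crude polynomial basis
best_lam, _ = leave_subject_out_cv(X, y, subjects, np.logspace(-4, 2, 20))
print(best_lam)
```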
6

Covariate Model Building in Nonlinear Mixed Effects Models

Ribbing, Jakob January 2007 (has links)
Population pharmacokinetic-pharmacodynamic (PK-PD) models can be fitted using nonlinear mixed effects modelling (NONMEM). This is an efficient way of learning about drugs and diseases from data collected in clinical trials. Identifying covariates which explain differences between patients is important to discover patient subpopulations at risk of sub-therapeutic or toxic effects and for treatment individualization. Stepwise covariate modelling (SCM) is commonly used to this end. The aim of the current thesis work was to evaluate SCM and to develop alternative approaches. A further aim was to develop a mechanistic PK-PD model describing fasting plasma glucose, fasting insulin, insulin sensitivity and beta-cell mass.

The lasso is a penalized estimation method performing covariate selection simultaneously to shrinkage estimation. The lasso was implemented within NONMEM as an alternative to SCM and is discussed in comparison with that method. Further, various ways of incorporating information and propagating knowledge from previous studies into an analysis were investigated. In order to compare the different approaches, investigations were made under varying, replicated conditions. In the course of the investigations, more than one million NONMEM analyses were performed on simulated data. Due to selection bias the use of SCM performed poorly when analysing small datasets or rare subgroups. In these situations, the lasso method in NONMEM performed better, was faster, and additionally validated the covariate model. Alternatively, the performance of SCM can be improved by propagating knowledge or incorporating information from previously analysed studies and by population optimal design.

A model was also developed on a physiological/mechanistic basis to fit data from three phase II/III studies on the investigational drug, tesaglitazar. This model described fasting glucose and insulin levels well, despite heterogeneous patient groups ranging from non-diabetic insulin resistant subjects to patients with advanced diabetes. The model predictions of beta-cell mass and insulin sensitivity were well in agreement with values in the literature.
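For orientation only, the sketch below shows the generic lasso covariate-selection idea on simulated individual parameter values; it is not the NONMEM implementation described in the thesis, and the covariate layout and effect sizes are assumptions of the example.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# Simulated individual log-clearance values driven by two of five candidate covariates.
rng = np.random.default_rng(3)
n = 150
covariates = rng.normal(size=(n, 5))      # e.g. weight, age, creatinine, ... (hypothetical)
log_cl = 0.4 * covariates[:, 0] - 0.3 * covariates[:, 2] + rng.normal(0, 0.2, n)

X = StandardScaler().fit_transform(covariates)
fit = LassoCV(cv=5).fit(X, log_cl)
print(fit.coef_)   # irrelevant covariates are shrunk to (or near) zero
```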
7

Development Of Algorithms For Bad Data Detection In Power System State Estimation

Musti, S S Phaniram 07 1900 (has links)
Power system state estimation (PSSE) is an energy management system function responsible for computing the most likely values of the state variables, viz. bus voltage magnitudes and angles. The state estimate for a network at a given instant is obtained by solving a system of mostly non-linear equations whose parameters are the redundant measurements, both static (such as transformer/line parameters) and dynamic (such as the status of circuit breakers/isolators, transformer tap positions, active/reactive power flows, generator active/reactive power outputs, etc.). PSSE involves solving an overdetermined set of nonlinear equations by minimizing a weighted norm of the measurement residuals; typically, the L1 and L2 norms are employed. The use of the L2 norm leads to state estimation based on the weighted least squares (WLS) criterion. This method is known to exhibit efficient filtering capability when the errors are Gaussian, but fails in the presence of bad data. The method of hypothesis testing identification can be incorporated into the WLS estimator to detect and identify bad data; nevertheless, it is prone to failure when the bad measurement is a leverage point. On the other hand, state estimation based on the weighted least absolute value (WLAV) criterion, using the L1 norm, has superior bad-data suppression capability, but it too fails to reject bad measurements associated with leverage points. Leverage points are highly influential measurements that attract the state estimator solution towards them. Consequently, much recent research effort has focused on producing a LAV estimator that remains robust in the presence of bad leverage measurements. This problem is addressed in the thesis work.

Two methods aimed at developing robust estimators that are insensitive to bad leverage points are proposed: (i) an approach whose objective function is obtained by linearizing the L2 norm of the error function, where constraints corresponding to bounds on the state variables are added to those corresponding to the measurement set and the linear programming (LP) optimization is carried out using the upper-bound optimization technique; and (ii) a hybrid optimization algorithm, combining the upper-bound optimization technique with an improved algorithm for discrete l1 linear approximation, which restricts the state variables from leaving the basis during the optimization process. LP optimization, with bounds on the state variables as additional constraints, is carried out using the proposed hybrid algorithm.

The proposed state estimator algorithms are tested on the 24-bus EHV equivalent of the southern power network, the 36-bus EHV equivalent of the western grid, the 205-bus interconnected grid system of the southern region, and the IEEE 39-bus New England system. The performance of the two proposed methods is compared with that of the WLAV estimator in the presence of bad data associated with leverage points, and the effect of bad leverage measurements on interacting non-leverage bad data is also examined. Results show that the proposed state estimator algorithms efficiently reject bad data associated with leverage points.
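A minimal numerical sketch of the WLAV criterion itself (not of the two proposed algorithms): on a linearized measurement model, minimizing the weighted sum of absolute residuals can be posed as a linear program by splitting each residual into positive and negative parts. The 3-state example, measurement matrix, and bad-data value below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def wlav_state_estimate(H, z, w):
    """Weighted least-absolute-value estimate: min sum_i w_i |z_i - H_i x| via LP."""
    m, p = H.shape
    c = np.concatenate([np.zeros(p), w, w])               # cost only on the residual splits
    A_eq = np.hstack([H, np.eye(m), -np.eye(m)])          # H x + u_plus - u_minus = z
    bounds = [(None, None)] * p + [(0, None)] * (2 * m)   # x free, splits non-negative
    res = linprog(c, A_eq=A_eq, b_eq=z, bounds=bounds, method="highs")
    return res.x[:p]

# Invented 3-state linearized example: 6 measurements, one gross error in z[4].
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0],
              [1.0, 0.0, -1.0]])
x_true = np.array([0.0, -0.05, -0.10])
z = H @ x_true + np.array([0.001, -0.002, 0.001, 0.002, 0.30, -0.001])   # bad datum
w = np.ones(6)
print(wlav_state_estimate(H, z, w))   # stays close to x_true despite the bad measurement
```

Splitting each residual keeps the problem linear: at the optimum only one of each pair of split variables is nonzero, so the objective equals the weighted sum of absolute residuals.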
8

Lasso顯著性檢定與向前逐步迴歸變數選取方法之比較 / A Comparison between Lasso Significance Test and Forward Stepwise Selection Method

鄒昀庭, Tsou, Yun Ting Unknown Date (has links)
Variable selection for regression models is an essential topic. In 1996, Tibshirani proposed the Lasso (Least Absolute Shrinkage and Selection Operator), which performs variable selection while estimating the parameters. However, the original version of the Lasso does not provide a way of making inferences, so the significance test for the lasso proposed by Lockhart et al. in 2014 is an important breakthrough. Based on the similarity between the construction of the Lasso significance test statistic and forward stepwise selection, and continuing the comparison of the two methods in Lockhart et al. (2014), we propose an improved forward selection method based on the bootstrap. In the second half of this research we compare the variable selection results of five methods: the Lasso, the Lasso significance test, forward stepwise selection, forward selection by AIC, and forward selection by bootstrap. We find that although the Type I error probability of the Lasso significance test is small, the test is too conservative in including new variables. The Type I error probability of forward selection by bootstrap is also small, yet it is more aggressive in including new variables; based on our simulation results, the bootstrap-improved forward selection is therefore an attractive variable selection method.
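As a hedged sketch of one of the five compared procedures, forward stepwise selection driven by AIC can be written as a greedy loop; this is not the bootstrap-improved variant proposed in the thesis, and the simulated covariates and stopping rule are assumptions of the example.

```python
import numpy as np
import statsmodels.api as sm

def forward_selection_aic(X, y, names):
    """Greedy forward selection: at each step add the variable that lowers AIC the most."""
    selected, remaining = [], list(range(X.shape[1]))
    current_aic = sm.OLS(y, np.ones((len(y), 1))).fit().aic   # intercept-only model
    while remaining:
        trials = []
        for j in remaining:
            design = sm.add_constant(X[:, selected + [j]])
            trials.append((sm.OLS(y, design).fit().aic, j))
        best_aic, best_j = min(trials)
        if best_aic >= current_aic:
            break                                             # no AIC improvement: stop
        current_aic = best_aic
        selected.append(best_j)
        remaining.remove(best_j)
    return [names[j] for j in selected]

# Toy data: only x1 and x3 matter among five candidates.
rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=200)
print(forward_selection_aic(X, y, [f"x{i + 1}" for i in range(5)]))
```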
9

Comparison Of Regression Techniques Via Monte Carlo Simulation

Can Mutan, Oya 01 June 2004 (has links) (PDF)
Ordinary least squares (OLS) is one of the most widely used methods for modelling the functional relationship between variables. However, this estimation procedure relies on several assumptions, and violation of these assumptions may lead to non-robust estimates. In this study, the simple linear regression model is investigated for conditions in which the distribution of the error terms is Generalised Logistic. Some robust and nonparametric methods, such as modified maximum likelihood (MML), least absolute deviations (LAD), Winsorized least squares, least trimmed squares (LTS), Theil and weighted Theil, are compared via computer simulation. In order to evaluate estimator performance, the mean, variance, bias, mean square error (MSE) and relative mean square error (RMSE) are computed.
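A hedged miniature of the kind of Monte Carlo comparison described above, restricted to two estimators (OLS and the Theil slope) and using a standard logistic error distribution as a stand-in for the Generalised Logistic; the sample size, replication count, and true parameter values are invented.

```python
import numpy as np
from scipy.stats import theilslopes

def monte_carlo_slopes(n=50, reps=2000, seed=5):
    """Compare OLS and Theil slope estimates under heavy-ish-tailed logistic errors."""
    rng = np.random.default_rng(seed)
    ols, theil = [], []
    for _ in range(reps):
        x = rng.uniform(0, 10, n)
        e = rng.logistic(loc=0, scale=2, size=n)     # stand-in error distribution
        y = 1.0 + 0.5 * x + e
        ols.append(np.polyfit(x, y, 1)[0])           # OLS slope
        theil.append(theilslopes(y, x)[0])           # Theil (median of pairwise slopes)
    for name, est in [("OLS", np.array(ols)), ("Theil", np.array(theil))]:
        bias = est.mean() - 0.5
        mse = np.mean((est - 0.5) ** 2)
        print(f"{name}: bias={bias:+.4f}  variance={est.var():.4f}  MSE={mse:.4f}")

monte_carlo_slopes()
```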
