131 |
Regularized Autoregressive Approximation in Time Series
Chen, Bei January 2008 (has links)
In applications, the true underlying model of an observed time series is typically unknown or has a complicated structure. A common approach is to approximate the true model by an autoregressive (AR) equation whose order is chosen by information criteria such as AIC, BIC and Parzen's CAT and whose parameters are estimated by the least squares (LS), the Yule-Walker (YW) or other methods. However, as the sample size increases, the model order often has to be refined and the parameters recalculated. To avoid such shortcomings, we propose the Regularized AR (RAR) approximation and illustrate its applications in frequency detection and long memory process forecasting. The idea of the RAR approximation is to utilize a "long" AR model whose order significantly exceeds the order suggested by information criteria, and to estimate the AR parameters by the Regularized LS (RLS) method, which allows the parameters to be estimated with different levels of accuracy and their number to grow linearly with the sample size. Repeated model selection and parameter estimation are therefore avoided as the observed sample grows.
We apply the RAR approach to estimate the unknown frequencies in periodic processes by approximating their generalized spectral densities, which significantly reduces the computational burden and improves the accuracy of the estimates. Our theoretical findings indicate that the RAR estimates of the unknown frequencies are strongly consistent and normally distributed. In practice, we may encounter spurious frequency estimates due to the high model order. Therefore, we further propose a robust trimming algorithm (RTA) for RAR frequency estimation. Our simulation studies indicate that the RTA can effectively eliminate spurious roots and outliers, and therefore noticeably increase the accuracy. Another application discussed in this thesis is the modeling and forecasting of long memory processes using the RAR approximation. We demonstrate via simulation studies that the RAR is useful in long-range prediction of general ARFIMA(p,d,q) processes with p > 1 and q > 1.
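To make the idea concrete, here is a minimal sketch (not the estimator developed in the thesis): a deliberately over-long AR model is fitted to a noisy sinusoid by ridge-regularized least squares, and a frequency estimate is read off the angle of the AR-polynomial root closest to the unit circle. The signal, model order, and penalty below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy periodic signal with one hidden frequency (radians per sample).
n, true_freq = 500, 0.31
t = np.arange(n)
x = np.cos(true_freq * t) + 0.3 * rng.standard_normal(n)

# "Long" AR model: order far beyond what AIC/BIC would typically pick.
p, lam = 40, 1.0                      # AR order and ridge penalty (arbitrary)
X = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])   # lagged regressors
y = x[p:]

# Regularized (ridge) least-squares estimate of the AR coefficients.
a = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Roots of the AR polynomial 1 - a_1 z^{-1} - ... - a_p z^{-p};
# roots close to the unit circle correspond to periodic components.
roots = np.roots(np.concatenate(([1.0], -a)))
root = roots[np.argmin(np.abs(np.abs(roots) - 1.0))]
print("estimated frequency:", abs(np.angle(root)), "true:", true_freq)
```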
|
132 |
Robust Image Registration for Improved Clinical Efficiency: Using Local Structure Analysis and Model-Based Processing
Forsberg, Daniel January 2013 (has links)
Medical imaging plays an increasingly important role in modern healthcare. In medical imaging, it is often relevant to relate different images to each other, which can prove challenging, since there rarely exists a pre-defined mapping between the pixels in different images. Hence, there is a need to find such a mapping/transformation, a procedure known as image registration. Over the years, image registration has proved useful in a number of clinical situations. Despite this, the current use of image registration in clinical practice is rather limited, typically restricted to image fusion. The limited use is, to a large extent, caused by excessive computation times, a lack of established validation methods/metrics, and a general skepticism toward the trustworthiness of the estimated transformations in deformable image registration. This thesis aims to overcome some of the issues limiting the use of image registration by proposing a set of technical contributions and two clinical applications targeted at improved clinical efficiency. The contributions are made in the context of a generic framework for non-parametric image registration and an image registration method known as the Morphon. In image registration, regularization of the estimated transformation forms an integral part in controlling the registration process, and in this thesis, two regularizers are proposed and their applicability demonstrated. Although the regularizers are similar in that they rely on local structure analysis, they differ in implementation: one is implemented by applying a set of filter kernels, and the other by solving a global optimization problem. Furthermore, it is proposed to use a set of quadrature filters with parallel scales when estimating the phase difference driving the registration, a proposal that brings both accuracy and robustness to the registration process, as shown on a set of challenging image sequences. Computational complexity, in general, is addressed by porting the employed Morphon algorithm to the GPU, by which a performance improvement of 38-44x is achieved compared to a single-threaded CPU implementation. The suggested clinical applications are based upon the concept of paint on priors, which was formulated in conjunction with the initial presentation of the Morphon, and which denotes the notion of assigning a model a set of properties (local operators) guiding the registration process. In this thesis, this is taken one step further: properties of a model are assigned to the patient data after completed registration. Based upon this, an application using the concept of anatomical transfer functions is presented, in which different organs can be visualized with separate transfer functions. This has been implemented for both 2D slice visualization and 3D volume rendering. A second application is proposed, in which landmarks, relevant for determining various measures describing the anatomy, are transferred to the patient data. In particular, this is applied to idiopathic scoliosis and used to obtain various measures relevant for assessing spinal deformity. In addition, a data analysis scheme is proposed, useful for quantifying the linear dependence between the different measures used to describe spinal deformities.
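As a much simpler stand-in for the filter-kernel style of regularization mentioned above (the thesis' regularizers adapt to local structure, which this sketch does not), one can smooth each component of an estimated displacement field with an isotropic Gaussian kernel; the field size and kernel width below are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def regularize_displacement(disp, sigma=2.0):
    """Smooth each component of a 2D displacement field with a Gaussian kernel.

    disp: array of shape (2, H, W) holding the y- and x-displacements.
    Returns the regularized field of the same shape.
    """
    return np.stack([gaussian_filter(disp[c], sigma=sigma) for c in range(2)])

# Toy usage: a noisy displacement field that should vary smoothly.
rng = np.random.default_rng(1)
disp = rng.standard_normal((2, 64, 64))
smooth = regularize_displacement(disp, sigma=3.0)
print(disp.std(), smooth.std())   # the regularized field has much lower variance
```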
|
133 |
Regularization Using a Parameterized Trust Region Subproblem
Grodzevich, Oleg January 2004 (has links)
We present a new method for the regularization of ill-conditioned problems that extends the traditional trust-region approach. Ill-conditioned problems arise, for example, in image restoration or mathematical processing of medical data, and involve matrices that are severely ill-conditioned. The method makes use of the L-curve and the L-curve maximum curvature criterion, a recently proposed strategy for finding a good regularization parameter. We describe the method and show its application to an image restoration problem. We also provide MATLAB code for the algorithm. Finally, a comparison with the CGLS approach is given and analyzed, and future research directions are proposed.
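A generic sketch of the L-curve maximum-curvature criterion (using plain Tikhonov solutions on an invented Hilbert-matrix test problem, rather than the parameterized trust-region subproblem of the thesis) might look as follows:

```python
import numpy as np

# Ill-conditioned test problem: Hilbert matrix with a noisy right-hand side.
n = 20
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

lams = np.logspace(-10, 1, 200)
res_norm, sol_norm = [], []
for lam in lams:
    x = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)   # Tikhonov solution
    res_norm.append(np.linalg.norm(A @ x - b))
    sol_norm.append(np.linalg.norm(x))

# L-curve in log-log coordinates and its curvature as a function of log(lambda).
rho, eta, t = np.log(res_norm), np.log(sol_norm), np.log(lams)
drho, deta = np.gradient(rho, t), np.gradient(eta, t)
d2rho, d2eta = np.gradient(drho, t), np.gradient(deta, t)
kappa = (drho * d2eta - d2rho * deta) / (drho**2 + deta**2) ** 1.5

lam_star = lams[np.argmax(kappa)]
print("lambda at maximum curvature:", lam_star)
```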
|
134 |
Regularized Autoregressive Approximation in Time Series
Chen, Bei January 2008 (has links)
In applications, the true underlying model of an observed time series is typically unknown or has a complicated structure. A common approach is to approximate the true model by an autoregressive (AR) equation whose order is chosen by information criteria such as AIC, BIC and Parzen's CAT and whose parameters are estimated by the least squares (LS), the Yule-Walker (YW) or other methods. However, as the sample size increases, the model order often has to be refined and the parameters recalculated. To avoid such shortcomings, we propose the Regularized AR (RAR) approximation and illustrate its applications in frequency detection and long memory process forecasting. The idea of the RAR approximation is to utilize a "long" AR model whose order significantly exceeds the order suggested by information criteria, and to estimate the AR parameters by the Regularized LS (RLS) method, which allows the parameters to be estimated with different levels of accuracy and their number to grow linearly with the sample size. Repeated model selection and parameter estimation are therefore avoided as the observed sample grows.
We apply the RAR approach to estimate the unknown frequencies in periodic processes by approximating their generalized spectral densities, which significantly reduces the computational burden and improves the accuracy of the estimates. Our theoretical findings indicate that the RAR estimates of the unknown frequencies are strongly consistent and normally distributed. In practice, we may encounter spurious frequency estimates due to the high model order. Therefore, we further propose a robust trimming algorithm (RTA) for RAR frequency estimation. Our simulation studies indicate that the RTA can effectively eliminate spurious roots and outliers, and therefore noticeably increase the accuracy. Another application discussed in this thesis is the modeling and forecasting of long memory processes using the RAR approximation. We demonstrate via simulation studies that the RAR is useful in long-range prediction of general ARFIMA(p,d,q) processes with p > 1 and q > 1.
|
135 |
Regularization of an autoconvolution problem occurring in measurements of ultra-short laser pulses
Gerth, Daniel 17 July 2012 (has links) (PDF)
While introducing a new method for measuring ultra-short laser pulses, the research group "Solid State Light Sources" of the Max Born Institute for Nonlinear Optics and Short Pulse Spectroscopy, Berlin, encountered a new type of autoconvolution problem. The so-called SD-SPIDER method aims at reconstructing the real-valued phase of a complex-valued laser pulse from noisy measurements. The measurements are also complex-valued and additionally influenced by a device-related kernel function. Although the autoconvolution equation has been examined intensively in the context of inverse problems, results for complex-valued functions occurring as solutions and right-hand sides of the autoconvolution equation and for nontrivial kernels were missing. This thesis is a first step toward bridging this gap. In the first chapter, the physical background is explained and, in particular, the autoconvolution effect is pointed out. From this, the mathematical model is derived, leading to the final autoconvolution equation. Analytical results are given in the second chapter. The numerical treatment of the problem follows in chapter three. A regularization approach is presented and tested with artificial data. In particular, a new parameter choice rule making use of a specific property of the SD-SPIDER method is proposed and numerically verified. / While developing a new method for measuring ultra-short laser pulses, the research group "Solid State Light Sources" ("Festkörper-Lichtquellen") of the Max Born Institute for Nonlinear Optics and Short Pulse Spectroscopy, Berlin, encountered a novel autoconvolution problem. The so-called SD-SPIDER method serves to reconstruct the real-valued phase of a complex-valued laser pulse from error-prone measurements. The measured values are likewise complex-valued and additionally influenced by a kernel function generated by the measurement principle. Although autoconvolution equations have been studied intensively in the context of inverse problems, results for complex-valued solutions and right-hand sides, as well as for nontrivial kernel functions, have been missing. This diploma thesis is a first step toward closing this gap. The first chapter explains the physical background and, in particular, the autoconvolution effect. Building on this, the mathematical model is formulated. Chapter two deals with the analysis of the equation. The numerical treatment of the problem follows in chapter three. A regularization method is presented and tested on artificial data. In particular, a new rule for choosing the regularization parameter, based on a specific property of the SD-SPIDER method, is proposed and numerically verified.
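A heavily simplified, real-valued toy version of a regularized autoconvolution fit (ignoring the complex-valued data and the device-related kernel, with invented pulse, noise level, and regularization parameter) could look like this:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy problem: recover x from noisy observations of its discrete
# autoconvolution y = (x * x), with a Tikhonov penalty alpha * ||x||^2.
rng = np.random.default_rng(0)
n = 40
t = np.linspace(0, 1, n)
x_true = np.exp(-30 * (t - 0.5) ** 2)                    # smooth "pulse"
y = np.convolve(x_true, x_true) + 1e-3 * rng.standard_normal(2 * n - 1)

alpha = 1e-4                                              # regularization parameter (arbitrary)

def residuals(x):
    # Stacked residuals so that least_squares minimizes
    # ||conv(x, x) - y||^2 + alpha * ||x||^2.
    return np.concatenate([np.convolve(x, x) - y, np.sqrt(alpha) * x])

sol = least_squares(residuals, x0=np.full(n, 0.5))
print("max reconstruction error:", np.max(np.abs(sol.x - x_true)))
```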
|
136 |
Efficient Tools For Reliability Analysis Using Finite Mixture Distributions
Cross, Richard J. (Richard John) 02 December 2004 (has links)
The complexity of many failure mechanisms and variations in component manufacture often make standard probability distributions inadequate for reliability modeling. Finite mixture distributions provide the necessary flexibility for modeling such complex phenomena but add considerable difficulty to the inference. This difficulty is overcome by drawing an analogy to neural networks. With appropriate modifications, a neural network can represent a finite mixture CDF or PDF exactly. Training with Bayesian Regularization gives an efficient empirical Bayesian inference of the failure time distribution. Training also yields an effective number of parameters from which the number of components in the mixture can be estimated. Credible sets for functions of the model parameters can be estimated using a simple closed-form expression. Complete, censored, and inspection samples can be considered by appropriate choice of the likelihood function.
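As a bare-bones illustration of how complete and censored observations enter a mixture likelihood (a plain maximum-likelihood fit, not the mixture-network and Bayesian-regularization machinery of the thesis), consider a two-component Weibull mixture with right censoring; the data, censoring time, and optimizer below are invented:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# Negative log-likelihood of a two-component Weibull mixture for a sample with
# right censoring: exact failure times contribute the mixture PDF, censored
# times contribute the mixture survival function.
def neg_log_lik(theta, t, censored):
    w = 1.0 / (1.0 + np.exp(-theta[0]))                 # mixing weight in (0, 1)
    k1, s1, k2, s2 = np.exp(theta[1:])                  # shapes/scales > 0
    pdf = w * weibull_min.pdf(t, k1, scale=s1) + (1 - w) * weibull_min.pdf(t, k2, scale=s2)
    sf = w * weibull_min.sf(t, k1, scale=s1) + (1 - w) * weibull_min.sf(t, k2, scale=s2)
    contrib = np.where(censored, sf, pdf)
    return -np.sum(np.log(contrib + 1e-300))

# Toy data: two failure modes, with observations censored at time 4.
rng = np.random.default_rng(0)
t = np.concatenate([weibull_min.rvs(1.5, scale=1.0, size=150, random_state=rng),
                    weibull_min.rvs(4.0, scale=3.0, size=150, random_state=rng)])
censored = t > 4.0
t = np.minimum(t, 4.0)

res = minimize(neg_log_lik, x0=np.zeros(5), args=(t, censored), method="Nelder-Mead",
               options={"maxiter": 5000})
print("fitted parameters (weight, k1, s1, k2, s2):",
      1 / (1 + np.exp(-res.x[0])), *np.exp(res.x[1:]))
```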
In this work, architectures for Exponential, Weibull, Normal, and Log-Normal mixture networks have been derived. The capabilities of mixture networks have been demonstrated for complete, censored, and inspection samples from Weibull and Log-Normal mixtures. Furthermore, the ability of mixture networks to model arbitrary failure distributions has been demonstrated. A sensitivity analysis has been performed to determine how mixture network estimator errors are affected by mixture component spacing and sample size. It is shown that mixture network estimators are asymptotically unbiased and that their errors decay with sample size at least as well as those of the MLE.
|
137 |
Principal Components Analysis for Binary Data
Lee, Seokho May 2009 (has links)
Principal components analysis (PCA) has been widely used as a statistical tool for the dimension reduction of multivariate data in various application areas and has been extensively studied in the long history of statistics. One of the limitations of the PCA machinery is that it can be applied only to continuous variables. Recent advances of information technology in various applied areas have created numerous large, diverse data sets with a high-dimensional feature space, including high-dimensional binary data. In spite of such great demand, only a few methodologies tailored to such binary data sets have been suggested. The methodology we develop is a model-based approach that generalizes PCA to binary data. We develop a statistical model for binary PCA and propose two stable estimation procedures based on the MM algorithm and a variational method. By incorporating a regularization technique, the selection of important variables is achieved automatically. We also propose an efficient algorithm for model selection, including the choice of the number of principal components and of the regularization parameter.
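A minimal sketch of the model-based idea (not the MM or variational estimators of the thesis): each entry is modeled as Bernoulli with a low-rank logit matrix, fitted by penalized gradient descent. Dimensions, step size, and penalty below are arbitrary.

```python
import numpy as np

def binary_pca(X, k=2, n_iter=1000, lr=0.1, lam=0.05):
    """Toy binary PCA: model P(X_ij = 1) = sigmoid(Theta_ij) with a rank-k
    logit matrix Theta = mu + U @ V.T, fitted by penalized gradient descent
    on the Bernoulli negative log-likelihood."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    mu = np.zeros(d)
    U = 0.01 * rng.standard_normal((n, k))
    V = 0.01 * rng.standard_normal((d, k))
    for _ in range(n_iter):
        P = 1.0 / (1.0 + np.exp(-(mu + U @ V.T)))        # fitted probabilities
        R = X - P                                         # residuals = -d(NLL)/d(logits)
        U += lr * (R @ V / d - lam * U)                   # averaged gradient steps with
        V += lr * (R.T @ U / n - lam * V)                 # a ridge penalty on the factors
        mu += lr * R.mean(axis=0)
    return mu, U, V

# Toy binary data with a one-dimensional latent structure.
rng = np.random.default_rng(1)
scores = rng.standard_normal((200, 1))
loadings = rng.standard_normal((10, 1))
X = (rng.random((200, 10)) < 1 / (1 + np.exp(-scores @ loadings.T))).astype(float)
mu, U, V = binary_pca(X, k=1)
print("recovered loading direction:", np.round(V[:, 0] / np.linalg.norm(V), 2))
```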
|
138 |
Stability Analysis of Method of Fundamental Solutions for Laplace's Equations
Huang, Shiu-ling 21 June 2006 (has links)
This thesis consists of two parts. In the first part, to solve boundary value problems of homogeneous equations, fundamental solutions (FS) satisfying the homogeneous equations are chosen, and their linear combination is forced to satisfy the exterior and interior boundary conditions. To avoid the logarithmic singularity, the source points of the FS are located outside of the solution domain S. This method is called the method of fundamental solutions (MFS). The MFS was first used by Kupradze in 1963. Since then, there have appeared numerous reports on the MFS for computation, but only a few for analysis. Part one of this thesis derives the eigenvalues for the Neumann and the Robin boundary conditions in the simple case, and estimates the bounds of the condition number for mixed boundary conditions in some non-disk domains; the same exponential growth rates of the condition number are obtained. Numerical results are reported for two kinds of cases: (I) MFS for Motz's problem by adding singular functions, and (II) MFS for Motz's problem by local refinements of collocation nodes. The values of the traditional condition number are huge, while those of the effective condition number are only moderately large. However, the expansion coefficients obtained by the MFS are oscillatingly large, causing another kind of instability: subtractive cancellation errors in the final harmonic solutions. Hence, for practical applications, the errors and the ill-conditioning must be balanced against each other. To mitigate the ill-conditioning, it is suggested that the number of FS should not be large, and that the distance between the source circle and the boundary ∂S should not be far, either.
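A toy numerical sketch of the MFS for the Laplace equation on the unit disk (with invented Dirichlet data, node counts, and source-circle radius) illustrates both the collocation system and the huge condition numbers discussed above:

```python
import numpy as np

# Toy MFS setup for the 2D Laplace equation on the unit disk with Dirichlet
# data u = x^2 - y^2 on the boundary.  The approximate solution is a linear
# combination of fundamental solutions ln|p - s_j|, with source points s_j
# placed on a circle of radius R > 1, outside the solution domain.
n_coll, n_src, R = 80, 40, 1.8
tc = 2 * np.pi * np.arange(n_coll) / n_coll
ts = 2 * np.pi * np.arange(n_src) / n_src
coll = np.column_stack([np.cos(tc), np.sin(tc)])           # collocation nodes on |p| = 1
src = R * np.column_stack([np.cos(ts), np.sin(ts)])        # source points on |s| = R

A = np.log(np.linalg.norm(coll[:, None, :] - src[None, :, :], axis=2))
g = coll[:, 0] ** 2 - coll[:, 1] ** 2                       # boundary data

coef, *_ = np.linalg.lstsq(A, g, rcond=None)                # least-squares collocation
print("condition number of A:", np.linalg.cond(A))

# Check the approximation at an interior point.
p = np.array([0.3, 0.2])
u = coef @ np.log(np.linalg.norm(p - src, axis=1))
print("MFS value vs exact:", u, p[0] ** 2 - p[1] ** 2)
```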
In the second part, to reduce the severe instability of the MFS, the truncated singular value decomposition (TSVD) and Tikhonov regularization (TR) are employed. The computational formulas of the condition number and the effective condition number are derived, and their analysis is explored in detail. Besides, an error analysis of TSVD and TR is also given. Moreover, the combination of TSVD and TR is proposed and called truncated Tikhonov regularization in this thesis, to better remove some effects of the infinitesimal sigma_min and of the high-frequency eigenvectors.
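For a generic ill-conditioned linear system, TSVD and Tikhonov regularization can be written directly from the SVD. The sketch below uses an arbitrary Hilbert-matrix test problem, noise level, truncation index, and regularization parameter, and contrasts both methods with a naive solve:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated SVD solution: keep only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

def tikhonov_solve(A, b, alpha):
    """Tikhonov-regularized solution via the SVD filter factors s / (s^2 + alpha^2)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ ((s / (s**2 + alpha**2)) * (U.T @ b))

# Ill-conditioned toy system with a noisy right-hand side.
rng = np.random.default_rng(0)
n = 30
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)   # Hilbert matrix
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 1e-8 * rng.standard_normal(n)

for name, x in [("naive", np.linalg.solve(A, b)),
                ("TSVD (k=7)", tsvd_solve(A, b, 7)),
                ("Tikhonov", tikhonov_solve(A, b, 1e-6))]:
    print(f"{name:12s} error = {np.linalg.norm(x - x_true):.3e}")
```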
|
139 |
Evaluation Of Spatial And Spatio-temporal Regularization Approaches In Inverse Problem Of Electrocardiography
Onal, Murat 01 August 2008 (has links)
Conventional electrocardiography (ECG) is an essential tool for investigating cardiac disorders such as arrhythmias or myocardial infarction. It consists of the interpretation of potentials recorded at the body surface that occur due to the electrical activity of the heart. However, electrical signals originating at the heart suffer from attenuation and smoothing within the thorax; therefore the ECG signal measured on the body surface lacks some important details. The goal of the forward and inverse ECG problems is to recover these lost details by estimating the heart's electrical activity non-invasively from body surface potential measurements. In the forward problem, one calculates the body surface potential distribution (i.e., torso potentials) using an appropriate source model for the equivalent cardiac sources. In the inverse problem of ECG, one estimates cardiac electrical activity based on measured torso potentials and a geometric model of the torso. Due to the attenuation and spatial smoothing that occur within the thorax, the inverse ECG problem is ill-posed and the forward model matrix is badly conditioned. Thus, small disturbances in the measurements lead to amplified errors in the inverse solutions. It is difficult to solve this problem for effective cardiac imaging due to the ill-posed nature and high dimensionality of the problem. Tikhonov regularization, Truncated Singular Value Decomposition (TSVD) and Bayesian MAP estimation are some of the methods proposed in the literature to cope with the ill-posedness of the problem. The most common approach in these methods is to ignore the temporal relations of epicardial potentials and to solve the inverse problem at every time instant independently (the column-sequential approach). This is the fastest and easiest approach; however, it does not include temporal correlations. The goal of this thesis is to include temporal constraints as well as spatial constraints in solving the inverse ECG problem. For this purpose, two methods are used. In the first method, we solve the augmented problem directly. Alternatively, we solve the problem with the column-sequential approach after applying temporal whitening. The performance of each method is evaluated.
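The column-sequential baseline described above can be sketched as a zero-order Tikhonov solve applied independently to each time instant; the toy forward matrix, epicardial potentials, noise level, and regularization parameter below are invented, and the thesis' spatio-temporal methods are not shown:

```python
import numpy as np

# Toy "inverse ECG" setup: torso potentials Y are related to epicardial
# potentials X by a badly conditioned forward matrix A, Y = A X + noise,
# with one column per time instant.
rng = np.random.default_rng(0)
n_torso, n_epi, n_time = 60, 40, 100
A = rng.standard_normal((n_torso, n_epi)) @ np.diag(0.5 ** np.arange(n_epi))  # rapidly decaying singular values
t = np.linspace(0, 1, n_time)
X_true = np.sin(2 * np.pi * np.outer(np.arange(1, n_epi + 1) / n_epi, t))     # smooth epicardial potentials
Y = A @ X_true + 1e-3 * rng.standard_normal((n_torso, n_time))

# Column-sequential (spatial-only) zero-order Tikhonov: solve each time
# instant independently, ignoring temporal correlations.
lam = 1e-2
X_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_epi), A.T @ Y)

print("relative error:", np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```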
|
140 |
Parameter Estimation In Generalized Partial Linear Models With Conic Quadratic Programming
Celik, Gul 01 September 2010 (has links)
In statistics, regression analysis is a technique used to understand and model the relationship between a dependent variable and one or more independent variables. Multivariate Adaptive Regression Splines (MARS) is a form of regression analysis. It is a non-parametric regression technique and can be seen as an extension of linear models that automatically models non-linearities and interactions. MARS is very important in both classification and regression, with an increasing number of applications in many areas of science, economy and technology.
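To illustrate the MARS building blocks, one can form pairs of hinge (truncated linear) basis functions max(0, x - t) and max(0, t - x) at fixed knots and combine them by least squares; a real MARS/CMARS implementation selects knots and interactions adaptively, and the data and knots below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)        # nonlinear target

knots = np.array([2.0, 4.0, 6.0, 8.0])
B = [np.ones_like(x)]                                     # intercept column
for t in knots:
    B.append(np.maximum(0.0, x - t))                      # right hinge
    B.append(np.maximum(0.0, t - x))                      # left hinge
B = np.column_stack(B)

coef, *_ = np.linalg.lstsq(B, y, rcond=None)              # least-squares combination
y_hat = B @ coef
print("residual standard deviation:", np.std(y - y_hat))
```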
In our study, we analyze Generalized Partial Linear Models (GPLMs), which are particular semiparametric models. GPLMs separate the input variables into two parts and additively integrate a classical linear model with a nonlinear model part. In order to smooth this nonparametric part, we use Conic Multivariate Adaptive Regression Splines (CMARS), a modified form of MARS. MARS is very beneficial for high-dimensional problems and does not require any particular class of relationship between the regressor variables and the outcome variable of interest. This technique offers a great advantage for fitting nonlinear multivariate functions. Also, the contribution of the basis functions can be estimated by MARS, so that both the additive and the interaction effects of the regressors are allowed to determine the dependent variable. There are two steps in the MARS algorithm: the forward and backward stepwise algorithms. In the first step, the model is constructed by adding basis functions until a maximum level of complexity is reached. Conversely, in the second step, the backward stepwise algorithm reduces the complexity by removing the least significant basis functions from the model.
In this thesis, we suggest not using the backward stepwise algorithm; instead, we employ a Penalized Residual Sum of Squares (PRSS). We construct the PRSS for MARS as a Tikhonov regularization problem. We treat this problem using continuous optimization techniques, which we consider an important complementary technology and alternative to the concept of the backward stepwise algorithm. In particular, we apply the elegant framework of Conic Quadratic Programming (CQP), an area of convex optimization that is very well-structured, hereby resembling linear programming and, therefore, permitting the use of interior point methods.
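As a hedged sketch of how a Tikhonov-type PRSS can be cast as a conic quadratic program (using the cvxpy modeling package, which is an assumption of this example and is not mentioned in the thesis), one can minimize the residual norm subject to a second-order cone bound on the penalty term; the design matrix, penalty matrix, and bound below are invented toy data rather than CMARS basis functions:

```python
import cvxpy as cp
import numpy as np

# Conic quadratic form of a Tikhonov-type penalized least-squares problem:
#   minimize t  subject to  ||B c - y||_2 <= t  and  ||L c||_2 <= M,
# where B is a basis-function design matrix and L a penalty matrix.
rng = np.random.default_rng(0)
n, p = 50, 10
B = rng.standard_normal((n, p))                 # toy design matrix
c_true = rng.standard_normal(p)
y = B @ c_true + 0.1 * rng.standard_normal(n)   # toy response
L = np.eye(p)                                   # simple ridge-type penalty matrix
M = 0.9 * np.linalg.norm(L @ c_true)            # bound on the penalty term

c = cp.Variable(p)
t = cp.Variable()
constraints = [cp.SOC(t, B @ c - y),            # second-order cone: ||B c - y|| <= t
               cp.norm(L @ c, 2) <= M]          # second-order cone: ||L c|| <= M
prob = cp.Problem(cp.Minimize(t), constraints)
prob.solve()
print("residual norm:", prob.value, "penalty norm:", np.linalg.norm(L @ c.value))
```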
At the end of this study, we compare CQP with the Tikhonov regularization problem for two different data sets, one with and one without interaction effects. Moreover, using two further data sets, we make a comparison between CMARS and two other classification methods, Infinite Kernel Learning (IKL) and Tikhonov Regularization, whose results are obtained from a thesis still in progress.
|