1 |
Low-rank Matrix Estimation
Fan, Xing, 01 January 2024
The first part of this dissertation focuses on matrix-covariate regression models. Although these models have been studied in many existing works, classical statistical and computational methods for estimating the regression coefficients are strongly affected by high-dimensional matrix-valued covariates. To address these issues, we propose a framework of matrix-covariate regression models based on a low-rank constraint and an additional regularization for structured signals, covering models with both continuous and binary responses. In the second part, we examine a Mixture Multilayer Stochastic Block Model (MMLSBM), where layers can be grouped into sets of similar networks and each group of networks is endowed with its own Stochastic Block Model. The objective is to partition the multilayer network into clusters of similar layers and to identify communities within those layers. We present an approach called the Alternating Minimization Algorithm (ALMA), which aims to simultaneously recover the layer partition and estimate the matrices of connection probabilities for the distinct layers. In the last part, we demonstrate the effectiveness of the projected gradient descent algorithm. First, its local convergence rate is independent of the condition number. Second, when the objective function is rank-2r restricted L-smooth and μ-strongly convex with L/μ < 3, projected gradient descent with an appropriate step size converges linearly to the solution. Moreover, a perturbed version of this algorithm effectively escapes saddle points, converging to an approximate solution or a second-order local minimizer across a wide range of step sizes. Furthermore, we establish that there are no spurious local minima in estimating asymmetric low-rank matrices when the objective function satisfies L/μ < 3.
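As a minimal sketch of the projected-gradient idea described above (not the dissertation's actual objective or algorithm): take a toy objective f(X) = ½‖X − M‖²_F and enforce the rank constraint by projecting each gradient step onto the set of rank-r matrices via truncated SVD. The names `pgd_low_rank` and `project_rank_r` are illustrative, not from the thesis.

```python
import numpy as np

def project_rank_r(X, r):
    # Project onto the set of rank-<=r matrices via truncated SVD
    # (the Eckart-Young best rank-r approximation in Frobenius norm).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def pgd_low_rank(M, r, step=1.0, iters=50):
    # Projected gradient descent on the toy objective 0.5*||X - M||_F^2
    # subject to rank(X) <= r.
    X = np.zeros_like(M)
    for _ in range(iters):
        grad = X - M                      # gradient of the toy objective
        X = project_rank_r(X - step * grad, r)
    return X

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 20))  # rank-5 target
X_hat = pgd_low_rank(M, r=5)
print(np.linalg.norm(X_hat - M))  # essentially zero: the rank-5 target is recovered
```

The same step-then-project structure carries over to the matrix-sensing and regression settings the dissertation studies; only the gradient of the data-fit term changes.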
|
2 |
Joint Estimation and Calibration for Motion Sensor
Liu, Peng, January 2020
In this thesis, a calibration method for the position of each accelerometer in an inertial measurement unit (IMU) sensor array is designed and implemented. To model the motion of the sensor array in the real world, we build a state-space model; the problem is then to estimate the parameters of this model. This problem is solved in a Maximum Likelihood (ML) framework, and two methods are implemented and analyzed: one based on Expectation Maximization (EM) and one that optimizes the cost function directly by Gradient Descent (GD). In the EM algorithm, an ill-conditioned problem arises in the M-step, which degrades the performance of the algorithm, especially when the initial error is small; in that case the final Mean Square Error (MSE) curve diverges. With enough data samples, the EM algorithm works well when the initial error is large. In the Gradient Descent method, a reformulation of the problem avoids the ill-conditioning. After the parameter estimation, we analyze the MSE curves of the parameters through Monte Carlo simulation. The final MSE curves show that the Gradient Descent based method is more robust to the numerical issues of the parameter estimation, and the simulations also show that it is robust to the noise level.
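As a toy illustration of the ML-via-gradient-descent route described above (not the thesis's state-space model): estimate the mean of i.i.d. Gaussian samples by descending the negative log-likelihood. The function names and step size are illustrative assumptions.

```python
import numpy as np

def neg_log_lik(theta, data):
    # Negative log-likelihood of i.i.d. N(theta, 1) samples, up to a constant.
    return 0.5 * np.sum((data - theta) ** 2)

def ml_gradient_descent(data, theta0=0.0, step=0.002, iters=500):
    # Maximize the likelihood by gradient descent on the negative log-likelihood.
    theta = theta0
    for _ in range(iters):
        grad = -np.sum(data - theta)   # d/dtheta of neg_log_lik
        theta -= step * grad
    return theta

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.0, size=200)
theta_hat = ml_gradient_descent(data)
print(theta_hat)  # converges to the sample mean, the ML estimate
```

For this quadratic objective the iteration contracts the error by |1 − step·n| per step, so the step size must be small relative to the sample size; in the thesis's setting the analogous conditioning issue is what the reformulation for gradient descent addresses.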
|