1

Non-convex methods for spectrally sparse signal reconstruction via low-rank Hankel matrix completion

Wang, Tianming 01 May 2018 (has links)
Spectrally sparse signals arise in many applications of signal processing. A spectrally sparse signal is a mixture of a few undamped or damped complex sinusoids. An important problem in practice is to reconstruct such a signal from partial time-domain samples. Previous convex methods have the drawback that their computation and storage costs do not scale well with the signal length. This common drawback restricts their applicability to large and high-dimensional signals. The reconstruction of a spectrally sparse signal from partial samples can be formulated as a low-rank Hankel matrix completion problem. We develop two fast and provable non-convex solvers, FIHT and PGD. FIHT is based on Riemannian optimization, while PGD is based on Burer-Monteiro factorization with projected gradient descent. Suppose the underlying spectrally sparse signal is of model order r and length n. We prove that O(r^2 log^2(n)) and O(r^2 log(n)) random samples are sufficient for FIHT and PGD, respectively, to achieve exact recovery with overwhelming probability. At every iteration, the computation and storage costs of both methods are linear in the signal length n. They are therefore suitable for handling spectrally sparse signals of large size, which may be prohibitive for previous convex methods. Extensive numerical experiments verify their recovery ability as well as their computational efficiency, and also show that the algorithms are robust to noise and to mis-specification of the model order. Comparing the two solvers, FIHT is faster on easier problems, while PGD has better recovery ability.
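To make the Hankel formulation concrete, the following NumPy sketch builds a spectrally sparse signal, forms its Hankel lift, and checks that the rank of the lifted matrix equals the model order. The signal length, model order, damping range, and sampling ratio here are illustrative assumptions rather than the thesis's experimental settings, and the solvers themselves (FIHT, PGD) are not reproduced.

```python
# Minimal sketch of the low-rank Hankel structure behind the reconstruction problem.
import numpy as np

rng = np.random.default_rng(0)
n, r = 128, 3                                   # signal length and model order (illustrative)

# Spectrally sparse signal: x[t] = sum_k c_k * exp((2*pi*1j*f_k - d_k) * t)
freqs = rng.uniform(0.0, 1.0, r)                # normalized frequencies f_k
damps = rng.uniform(0.0, 0.01, r)               # damping factors d_k (0 => undamped)
coeffs = rng.standard_normal(r) + 1j * rng.standard_normal(r)
t = np.arange(n)
x = (coeffs * np.exp((2j * np.pi * freqs - damps) * t[:, None])).sum(axis=1)

# Hankel lift: H[i, j] = x[i + j]. For a mixture of r complex sinusoids this
# matrix has rank r, which is what the low-rank completion formulation exploits.
p = n // 2 + 1
H = np.array([[x[i + j] for j in range(n - p + 1)] for i in range(p)])
print("rank of Hankel matrix:", np.linalg.matrix_rank(H))   # expected: r

# Partial observation: keep only a subset of time-domain samples; the solvers
# complete H, and hence x, from these observed entries.
observed = np.sort(rng.choice(n, size=n // 3, replace=False))
y = x[observed]
```

Completing H from the entries indexed by `observed` recovers x; the thesis's contribution is doing this completion with per-iteration cost linear in n.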
2

Feature Selection under Multicollinearity & Causal Inference on Time Series

Bhattacharya, Indranil January 2017 (has links) (PDF)
In this work, we study and extend algorithms for Sparse Regression and Causal Inference problems. Both problems are fundamental in the area of Data Science. The goal of the regression problem is to find the "best" relationship between an output variable and input variables, given samples of the input and output values. We consider sparse regression under a high-dimensional linear model with strongly correlated variables, a situation which cannot be handled well by many existing model selection algorithms. We study the performance of popular feature selection algorithms such as LASSO, Elastic Net, BoLasso, and Clustered Lasso, as well as Projected Gradient Descent algorithms, under this setting in terms of their running time, stability, and consistency in recovering the true support. We also propose a new feature selection algorithm, BoPGD, which first clusters the features based on their sample correlation and then performs sparse estimation using a bootstrapped variant of the projected gradient descent method with projection onto the non-convex L0 ball. We attempt to characterize the efficiency and consistency of our algorithm by performing a host of experiments on both synthetic and real-world datasets.

Discovering causal relationships, beyond mere correlation, is widely recognized as a fundamental problem. Causal Inference problems use observations to infer the underlying causal structure of the data-generating process. The input to these problems is either a multivariate time series or i.i.d. sequences, and the output is a Feature Causal Graph in which the nodes correspond to the variables and the edges capture the direction of causality. For high-dimensional datasets, determining the causal relationships becomes a challenging task because of the curse of dimensionality. Graphical modeling of temporal data based on the concept of "Granger Causality" has gained much attention in this context. The blend of Granger methods with model selection techniques, such as LASSO, enables efficient discovery of a "sparse" subset of causal variables in high-dimensional settings. However, these temporal causal methods use an input parameter, L, the maximum time lag. This parameter is the maximum gap in time between the occurrence of the output phenomenon and the causal input stimulus. However, in many situations of interest the maximum time lag is not known, and indeed, finding the range of causal effects is an important problem. In this work, we propose and evaluate a data-driven and computationally efficient method for Granger causality inference in the Vector Auto Regressive (VAR) model without foreknowledge of the maximum time lag. We present two algorithms, Lasso Granger++ and Group Lasso Granger++, which not only construct the hypothesis feature causal graph but also simultaneously estimate a value of the maximum lag L for each variable by balancing the trade-off between "goodness of fit" and "model complexity".
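The projection onto the non-convex L0 ball that BoPGD relies on is just hard thresholding. The sketch below shows plain projected gradient descent with that projection for sparse least squares; the clustering and bootstrapping stages of BoPGD are omitted, and the step size, iteration count, and toy collinear design are illustrative assumptions, not the thesis's setup.

```python
# Minimal sketch: projected gradient descent with projection onto the L0 ball.
import numpy as np

def l0_projection(beta, k):
    """Project onto {b : ||b||_0 <= k}: keep the k largest-magnitude entries."""
    out = np.zeros_like(beta)
    idx = np.argsort(np.abs(beta))[-k:]
    out[idx] = beta[idx]
    return out

def pgd_l0(X, y, k, step=None, n_iter=200):
    """Minimize ||y - X beta||^2 subject to ||beta||_0 <= k."""
    _, p = X.shape
    if step is None:
        # 1 / Lipschitz constant of the least-squares gradient
        step = 1.0 / np.linalg.norm(X, 2) ** 2
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        beta = l0_projection(beta - step * grad, k)
    return beta

# Toy usage: strongly correlated design with a 3-sparse ground truth,
# mimicking the multicollinear setting studied in the thesis.
rng = np.random.default_rng(1)
Z = rng.standard_normal((200, 1))
X = 0.9 * Z + 0.1 * rng.standard_normal((200, 20))   # highly collinear columns
beta_true = np.zeros(20)
beta_true[[0, 5, 12]] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(200)
print("recovered support:", np.nonzero(pgd_l0(X, y, k=3))[0])
```

Under strong collinearity the plain projection step can pick the wrong columns, which is the failure mode the clustering and bootstrapping stages of BoPGD are designed to mitigate.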
