
High-Dimensional Analysis of Regularized Convex Optimization Problems with Application to Massive MIMO Wireless Communication Systems

In the past couple of decades, the amount of available data has increased dramatically. Thus, in modern large-scale inference problems, the dimension of the signal to be estimated is comparable to, or even larger than, the number of available observations. Yet the desired properties of the signal typically lie in some low-dimensional structure, such as sparsity, low-rankness, or a finite alphabet. Recently, non-smooth regularized convex optimization has emerged as a powerful tool for the recovery of such structured signals from noisy linear measurements in an assortment of applications in signal processing, wireless communications, machine learning, computer vision, and beyond. With the advent of Compressed Sensing (CS), a large body of theoretical results has considered the estimation performance of non-smooth convex optimization in this high-dimensional setting.
In this thesis, we focus on precisely analyzing the high-dimensional error performance of such regularized convex optimization problems in the presence of impairments (such as uncertainties) in the measurement matrix, which has independent Gaussian entries. The precise nature of our analysis allows performance comparison between different types of these estimators and enables us to optimally tune the involved hyper-parameters. In particular, we study the performance of some of the most popular cases in linear inverse problems, such as the LASSO, Elastic Net, Least Squares (LS), Regularized Least Squares (RLS), and their box-constrained variants.
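For concreteness, the estimators named above can all be written as instances of a single regularized least-squares program; the notation below is a standard form chosen for illustration and is not taken verbatim from the thesis:

\hat{\mathbf{x}} = \arg\min_{\mathbf{x} \in \mathcal{C}} \; \tfrac{1}{2}\|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2^2 + \lambda f(\mathbf{x}),

where \mathbf{y} = \mathbf{A}\mathbf{x}_0 + \mathbf{z} denotes the noisy linear measurements. Taking f(\mathbf{x}) = \|\mathbf{x}\|_1 gives the LASSO, f(\mathbf{x}) = \|\mathbf{x}\|_2^2 gives Regularized (ridge) Least Squares, a weighted sum of the two gives the Elastic Net, \lambda = 0 gives ordinary Least Squares, and restricting \mathcal{C} to a box such as [-1, 1]^n gives the box-constrained variants.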

In each context, we define appropriate performance measures and sharply analyze them in the high-dimensional statistical regime. We use our results in a concrete application: designing efficient decoders for modern massive multi-input multi-output (MIMO) wireless communication systems and optimally allocating their power.
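A minimal sketch of one decoder of this kind, box-constrained least squares for BPSK symbols over an i.i.d. Gaussian channel, is given below; the dimensions, noise level, and the use of scipy.optimize.lsq_linear are illustrative assumptions and not the thesis's implementation.

import numpy as np
from scipy.optimize import lsq_linear

# Illustrative setup (assumed values): n receive antennas, m transmit antennas
rng = np.random.default_rng(0)
n, m = 128, 64
H = rng.standard_normal((n, m)) / np.sqrt(n)     # i.i.d. Gaussian channel matrix
x_true = rng.choice([-1.0, 1.0], size=m)         # BPSK symbols
sigma = 0.1
y = H @ x_true + sigma * rng.standard_normal(n)  # noisy linear measurements

# Box-constrained least squares: minimize ||y - Hx||^2 subject to -1 <= x <= 1
res = lsq_linear(H, y, bounds=(-1.0, 1.0))
x_hat = np.sign(res.x)                           # hard decisions by rounding to the alphabet
ber = np.mean(x_hat != x_true)
print("bit error rate:", ber)

The bit error rate of such relax-and-round decoders is the kind of performance measure that the precise high-dimensional analysis is designed to characterize.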
The framework used for the analysis is based on Gaussian process methods, in particular a recently developed strong and tight version of the classical Gordon Comparison Inequality called the Convex Gaussian Min-max Theorem (CGMT). We also use some results from Random Matrix Theory (RMT) in our analysis.
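In its standard formulation from the literature (stated here for reference, with notation that is ours rather than the thesis's), the CGMT relates a primary optimization

\Phi(\mathbf{G}) = \min_{\mathbf{w} \in \mathcal{S}_w} \max_{\mathbf{u} \in \mathcal{S}_u} \; \mathbf{u}^\top \mathbf{G} \mathbf{w} + \psi(\mathbf{w}, \mathbf{u})

to a simpler auxiliary optimization

\phi(\mathbf{g}, \mathbf{h}) = \min_{\mathbf{w} \in \mathcal{S}_w} \max_{\mathbf{u} \in \mathcal{S}_u} \; \|\mathbf{w}\|_2 \, \mathbf{g}^\top \mathbf{u} + \|\mathbf{u}\|_2 \, \mathbf{h}^\top \mathbf{w} + \psi(\mathbf{w}, \mathbf{u}),

where \mathbf{G}, \mathbf{g}, \mathbf{h} have independent standard Gaussian entries. For any threshold c, \mathbb{P}(\Phi(\mathbf{G}) < c) \le 2\, \mathbb{P}(\phi(\mathbf{g}, \mathbf{h}) \le c), and when \mathcal{S}_w, \mathcal{S}_u are convex compact sets and \psi is convex-concave, also \mathbb{P}(\Phi(\mathbf{G}) > c) \le 2\, \mathbb{P}(\phi(\mathbf{g}, \mathbf{h}) \ge c), so the primary problem concentrates wherever the auxiliary one does.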

Identifier: oai:union.ndltd.org:kaust.edu.sa/oai:repository.kaust.edu.sa:10754/668170
Date: 03 1900
Creators: Alrashdi, Ayed
Contributors: Al-Naffouri, Tareq Y., Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division, Alouini, Mohamed-Slim, Shihada, Basem, Kammoun, Abla, Al-Dhahir, Naofal, Davidson, Tim
Source Sets: King Abdullah University of Science and Technology
Language: English
Detected Language: English
Type: Dissertation
Rights: 2022-03-20, At the time of archiving, the student author of this dissertation opted to temporarily restrict access to it. The full text of this dissertation will become available to the public after the expiration of the embargo on 2022-03-20.
