About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

31

Regularization Techniques for Linear Least-Squares Problems

Suliman, Mohamed Abdalla Elhag 04 1900 (has links)
Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is the linear least-squares criterion. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings. Alternative optimization criteria have therefore been proposed. These new criteria allow, in one way or another, the incorporation of further prior information into the problem at hand. Among these alternative criteria is the regularized least-squares (RLS) criterion. In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and in COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix. As a result, the modified model is expected to provide a more stable solution when used to estimate the original signal by minimizing the worst-case residual error function. Unlike many other regularization algorithms that seek to minimize the estimated data error, the two proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian with zero-mean, unit-variance (standard), independent and identically distributed (i.i.d.) entries. The second proposed COPRA method deals with discrete ill-posed problems in which the singular values of the linear transformation matrix decay very fast to significantly small values. For both proposed algorithms, the regularization parameter is obtained as the solution of a non-linear characteristic equation. We provide a detailed study of the general properties of these equations and address the existence and uniqueness of the root. To demonstrate the performance of the derivations, the first proposed COPRA method is applied to estimate different signals with various characteristics, while the second proposed COPRA method is applied to a large set of real-world discrete ill-posed problems. Simulation results demonstrate that the two proposed methods outperform a set of benchmark regularization algorithms in most cases. In addition, the algorithms are shown to have the lowest run time.
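For readers unfamiliar with the RLS setup, the short sketch below shows plain Tikhonov-regularized least-squares and how the choice of regularization parameter affects the MSE against the true signal. It is not an implementation of COPRA; the matrix sizes, noise level, and parameter grid are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 80, 50                                   # measurements, unknowns
A = rng.standard_normal((n, m)) / np.sqrt(n)    # i.i.d. Gaussian model matrix
x_true = rng.standard_normal(m)
y = A @ x_true + 0.3 * rng.standard_normal(n)   # noisy observations

def rls_estimate(A, y, gamma):
    """Regularized least-squares: minimize ||y - A x||^2 + gamma * ||x||^2."""
    m = A.shape[1]
    return np.linalg.solve(A.T @ A + gamma * np.eye(m), A.T @ y)

# Sweep the regularization parameter and track the resulting MSE
for gamma in [0.0, 0.01, 0.1, 1.0]:
    x_hat = rls_estimate(A, y, gamma)
    mse = np.mean((x_hat - x_true) ** 2)
    print(f"gamma = {gamma:5.2f}   MSE = {mse:.4f}")
```

In COPRA the regularization parameter would instead be obtained by solving the characteristic equation described above; the sweep here only illustrates how sensitive the MSE is to that choice.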
32

Graph Matrices under the Multivariate Setting

Hossain, Imran 23 May 2022 (has links)
No description available.
33

On Some Universality Problems in Combinatorial Random Matrix Theory

Meehan, Sean 02 October 2019 (has links)
No description available.
34

Two-Sample Testing of High-Dimensional Covariance Matrices

Sun, Nan, 0000-0003-0278-5254 January 2021 (has links)
Testing the equality of two high-dimensional covariance matrices is challenging. As the most efficient way to measure evidential discrepancies in observed data, the likelihood ratio test is expected to be powerful when the null hypothesis is violated. However, when the data dimensionality becomes large and potentially exceeds the sample size by a substantial margin, likelihood-ratio-based approaches face practical and theoretical challenges. To solve this problem, this study proposes a method in which we first randomly project the original high-dimensional data into a lower-dimensional space and then apply corrected likelihood ratio tests developed with random matrix theory. We show that testing with a single random projection is consistent under the null hypothesis. Through evaluating the power function, which is challenging in this context, we provide evidence that the test based on a single random projection matrix with a reasonable number of columns is more powerful when the two covariance matrices are unequal but the component-wise discrepancy is small -- a weak and dense signal setting. To utilize the data information more efficiently, we propose combined tests built from multiple random projections using meta-analysis techniques. We establish the foundation of the combined tests through our theoretical analysis showing that the p-values from multiple random projections are asymptotically independent in the high-dimensional covariance testing problem. We then show that the combined tests from multiple random projections are consistent under the null hypothesis. In addition, our theory presents the merit of certain meta-analysis approaches over testing with a single random projection. A numerical evaluation of the power function of the combined tests from multiple random projections is also provided, building on the numerical evaluation of the power function for a single random projection. Extensive simulations and two real genetic data analyses confirm the merits and potential applications of our test. / Statistics
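A minimal sketch of the projection-then-test idea, assuming Gaussian random projections, the classical (uncorrected) likelihood ratio statistic with a chi-square reference in the projected space, and Fisher's method for combining p-values; the RMT-corrected test and the specific meta-analysis choices of the dissertation are not reproduced here, and all sizes are illustrative.

```python
import numpy as np
from scipy import stats

def projected_lrt_pvalue(X, Y, k, rng):
    """Project both samples to k dimensions with a shared Gaussian random
    matrix, then apply the classical two-sample LRT for equal covariances
    (chi-square reference; a simplified stand-in for the RMT-corrected test)."""
    p = X.shape[1]
    R = rng.standard_normal((p, k)) / np.sqrt(k)   # random projection matrix
    Xp, Yp = X @ R, Y @ R
    n1, n2 = len(Xp), len(Yp)
    S1 = np.cov(Xp, rowvar=False)
    S2 = np.cov(Yp, rowvar=False)
    Sp = ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)   # pooled covariance
    # -2 log Lambda for H0: Sigma1 = Sigma2 (Gaussian likelihood ratio)
    stat = ((n1 + n2 - 2) * np.linalg.slogdet(Sp)[1]
            - (n1 - 1) * np.linalg.slogdet(S1)[1]
            - (n2 - 1) * np.linalg.slogdet(S2)[1])
    df = k * (k + 1) / 2
    return stats.chi2.sf(stat, df)

rng = np.random.default_rng(1)
p, n = 200, 100                          # dimension exceeds the sample size
X = rng.standard_normal((n, p))
Y = rng.standard_normal((n, p)) * 1.1    # slightly inflated covariance
pvals = [projected_lrt_pvalue(X, Y, k=20, rng=rng) for _ in range(10)]

# Fisher's method to combine p-values from the repeated projections
fisher_stat = -2 * np.sum(np.log(pvals))
print("combined p-value:", stats.chi2.sf(fisher_stat, 2 * len(pvals)))
```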
35

Gap Probabilities in Random Matrix Ensembles

Bäcklin, Oskar January 2023 (has links)
In this degree project we look at the eigenvalue statistics of two random matrix ensembles, the Gaussian and the circular ensembles. We begin with their definitions and discuss the joint probability distribution of their entries and eigenvalues. In addition, we introduce two sparse matrix models which enable us to numerically compute some eigenvalue statistics for these ensembles via simulation. In particular, we focus on the gap probability. Lastly, we present and discuss the results of the numerical simulations.
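As a rough companion to the simulations described above, the sketch below estimates a bulk gap probability for the GOE by Monte Carlo: the probability that no eigenvalue falls in an interval of about one mean spacing around zero. The dense (non-sparse) matrix model, the matrix size, and the interval are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def goe_eigenvalues(n, rng):
    """Sample eigenvalues of an n x n GOE matrix (symmetrized Gaussian)."""
    G = rng.standard_normal((n, n))
    H = (G + G.T) / np.sqrt(2 * n)   # scaled so the limiting spectrum is the semicircle on [-2, 2]
    return np.linalg.eigvalsh(H)

# Monte Carlo estimate of the gap probability: no eigenvalue within (-half, half)
n, trials = 100, 1000
s = 1.0                         # gap length in units of the mean spacing at zero
half = 0.5 * s * np.pi / n      # mean spacing near zero is approximately pi / n
hits = 0
for _ in range(trials):
    ev = goe_eigenvalues(n, rng)
    if not np.any(np.abs(ev) < half):
        hits += 1
print(f"estimated P(no eigenvalue within {half:.4f} of 0) = {hits / trials:.3f}")
```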
36

Limiting Behavior of the Largest Eigenvalues of Random Toeplitz Matrices

Modée, Samuel January 2019 (has links)
We consider random symmetric Toeplitz matrices of size n. Assuming that the entries on the diagonals are independent centered random variables with finite γ-th moment (γ > 2), a law of large numbers is established for the largest eigenvalue. Following the approach of Sen and Virág (2013), in the limit of large n the largest rescaled eigenvalue is shown to converge to the limit 0.8288... . The background theory is explained, and some symmetry results on the eigenvectors of the Toeplitz matrix and an auxiliary matrix are presented. A numerical investigation illustrates the rate of convergence and the oscillatory nature of the eigenvectors of the Toeplitz matrix. Finally, the possibility of proving a limiting distribution for the largest eigenvalue is discussed, and suggestions for future research are made.
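A small numerical sketch of the statement above, assuming the sqrt(2 n log n) normalization under which the 0.8288... limit is reported in Sen and Virág (2013); the matrix sizes and repetition counts are illustrative, and convergence in n is slow.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(3)

def top_toeplitz_eigenvalue(n, rng):
    """Largest eigenvalue of a random symmetric Toeplitz matrix whose
    symbols a_0, ..., a_{n-1} are i.i.d. standard normal."""
    a = rng.standard_normal(n)
    T = toeplitz(a)               # symmetric: T[i, j] = a[|i - j|]
    return np.linalg.eigvalsh(T)[-1]

# Rescale by sqrt(2 n log n), the normalization assumed here for the 0.8288... limit
for n in [200, 400, 800]:
    samples = [top_toeplitz_eigenvalue(n, rng) for _ in range(20)]
    scaled = np.mean(samples) / np.sqrt(2 * n * np.log(n))
    print(f"n = {n:4d}   mean rescaled top eigenvalue = {scaled:.4f}")
```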
37

Enhanced energy detection based spectrum sensing in cognitive radio networks using Random Matrix Theory

Ahmed, A., Hu, Yim Fun, Noras, James M. January 2014 (has links)
Opportunistic secondary usage of underutilised radio spectrum is currently of great interest and the use of TV White Spaces (TVWS) has been considered for Long Term Evolution (LTE) broadband services. However, wireless microphones operating in TV bands pose a challenge to TVWS opportunistic access. Efficient and proactive spectrum sensing could prevent harmful interference between collocated devices, but existing spectrum sensing schemes such as energy detection and schemes based on Random Matrix Theory (RMT) have performance limitations. We propose a new blind spectrum sensing scheme with higher performance based on RMT supported by a new formula for the estimation of noise variance. The performance of the proposed scheme has been evaluated through extensive simulations on wireless microphone signals. The proposed scheme has also been compared to energy detection schemes, and shows higher performance in terms of the probability of false alarm (Pfa) and probability of detection (Pd).
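For orientation, the sketch below contrasts two textbook detectors on simulated multi-antenna samples: plain energy detection and a maximum-to-minimum eigenvalue ratio test on the sample covariance, the flavour of RMT-based sensing this paper builds on. It is not the proposed scheme or its noise-variance estimator; the signal model, thresholds, and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def sense(samples, energy_threshold, ratio_threshold):
    """Two textbook detectors on an (antennas x samples) observation block:
    energy detection and the max/min eigenvalue ratio of the sample covariance.
    Thresholds are illustrative, not calibrated values from the paper."""
    energy = np.mean(np.abs(samples) ** 2)
    R = samples @ samples.conj().T / samples.shape[1]
    eig = np.linalg.eigvalsh(R)
    return energy > energy_threshold, eig[-1] / eig[0] > ratio_threshold

M, N = 4, 1000                      # receive branches, samples per sensing block
noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
tone = np.exp(2j * np.pi * 0.12 * np.arange(N))   # narrowband "microphone-like" signal
channel = rng.standard_normal(M)                  # per-branch channel gains
occupied = noise + np.outer(channel, tone)

for name, X in [("noise only", noise), ("signal present", occupied)]:
    e_flag, r_flag = sense(X, energy_threshold=1.1, ratio_threshold=1.5)
    print(f"{name:15s}  energy detector: {e_flag}   eigenvalue-ratio detector: {r_flag}")
```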
38

Stock Market Network Topology Analysis Based on a Minimum Spanning Tree Approach

Zhang, Yinghua 31 July 2009 (has links)
No description available.
39

Application of Random Matrix Theory for Financial Market Systems

Witte, Michael Jonathan 10 April 2014 (has links)
No description available.
40

Convergence Rates of Spectral Distribution of Random Inner Product Kernel Matrices

Kong, Nayeong January 2018 (has links)
This dissertation has two parts. In the first part, we focus on random inner product kernel matrices. Under various assumptions, many authors have proved that the limiting empirical spectral distribution (ESD) of such matrices A converges to the Marchenko-Pastur distribution. Here, we establish the corresponding rate of convergence. The strategy is as follows. First, we show that for z = u + iv ∈ C, v > 0, the distance between the Stieltjes transform m_A(z) of the ESD of A and the Stieltjes transform m(z) of the Marchenko-Pastur distribution is of order O(log n / (nv)). Next, we prove that the Kolmogorov distance between the ESD of A and the Marchenko-Pastur distribution is of order O((log n / n)^{1/3}). This is a less sharp rate, but it holds for a much more general class of matrices. The proof uses a Berry-Esseen type bound that has been employed for similar purposes for other families of random matrices. In the second part, random geometric graphs on the unit sphere are considered. Observing that the adjacency matrices of these graphs can be viewed as random inner product matrices, we use an idea of Cheng-Singer to establish the limiting spectral distribution of these adjacency matrices. / Mathematics
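As a simple illustration of the objects involved, the sketch below compares the ESD of a plain sample covariance (Wishart) matrix with the Marchenko-Pastur law and reports an empirical Kolmogorov distance. The inner product kernel matrices and the convergence-rate analysis of the dissertation are not reproduced; the dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Sample covariance S = X X^T / n with p/n -> c; its ESD converges to the
# Marchenko-Pastur law. (Plain Wishart case, used here as a stand-in for the
# inner product kernel matrices studied in the dissertation.)
p, n = 500, 1000
c = p / n
X = rng.standard_normal((p, n))
S = X @ X.T / n
eigs = np.sort(np.linalg.eigvalsh(S))

def mp_cdf(x, c, grid=20000):
    """Numerically integrate the Marchenko-Pastur density on its support."""
    lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
    t = np.linspace(lo, hi, grid)
    dens = np.sqrt(np.maximum((hi - t) * (t - lo), 0)) / (2 * np.pi * c * t)
    cdf = np.cumsum(dens) * (t[1] - t[0])
    return np.interp(x, t, cdf / cdf[-1])

# Kolmogorov distance between the empirical spectral distribution and MP
F = mp_cdf(eigs, c)
ecdf = np.arange(1, p + 1) / p
kolmogorov = np.max(np.maximum(np.abs(ecdf - F), np.abs(ecdf - 1.0 / p - F)))
print(f"p/n = {c:.2f}   empirical Kolmogorov distance = {kolmogorov:.4f}")
```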
