1 | From Wishart to Jacobi ensembles: statistical properties and applications. Vivo, Pierpaolo, January 2008.
Sixty years after the works of Wigner and Dyson, Random Matrix Theory remains a very active and challenging area of research, with countless applications in mathematical physics, statistical mechanics and beyond. In this thesis, we focus on rotationally invariant models, for which the requirement of independence of the matrix elements is dropped. Classical examples are the Jacobi and Wishart-Laguerre (or chiral) ensembles, which constitute the core of the present work. The Wishart-Laguerre ensemble contains covariance matrices of random data and is a very important tool in multivariate data analysis, with recent applications to finance and telecommunications. We first consider large deviations of the maximum eigenvalue, providing new analytical results for its large-N behavior, and then a power-law deformation of the classical Wishart-Laguerre ensemble, with possible applications to covariance matrices of financial data. For the Jacobi matrices, which arise naturally in the quantum conductance problem, we provide analytical formulas for quantities of interest in experiments.
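As a quick empirical companion to the large-N statements above, the following sketch (our illustration, not taken from the thesis; it assumes only numpy) samples the largest eigenvalue of Wishart-Laguerre matrices and checks that it concentrates near the soft edge of the Marchenko-Pastur law:

```python
import numpy as np

def wishart_max_eigenvalue(N, M, trials=500, seed=0):
    """Sample the largest eigenvalue of N x N Wishart matrices W = X X^T / M,
    where X is N x M with i.i.d. standard Gaussian entries."""
    rng = np.random.default_rng(seed)
    lmax = np.empty(trials)
    for t in range(trials):
        X = rng.standard_normal((N, M))
        W = X @ X.T / M
        lmax[t] = np.linalg.eigvalsh(W)[-1]  # eigvalsh returns ascending order
    return lmax

# For N = M the Marchenko-Pastur law puts the right spectral edge at 4, so
# the sampled maxima should concentrate near 4 as N grows; large-deviation
# results of the kind studied here quantify how rare excursions from it are.
samples = wishart_max_eigenvalue(N=100, M=100)
print(samples.mean(), samples.std())
```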
2 | Solvable Particle Models Related to the Beta-Ensemble. Shum, Christopher, 03 October 2013.
For $\beta > 0$, the beta-ensemble corresponds to the joint probability density on the real line proportional to
$$\prod_{n > m}^{N} |x_n - x_m|^{\beta} \prod_{n = 1}^{N} w(x_n),$$
where $w$ is the weight of the system. This density is the Boltzmann factor for a configuration of $N$ charge-one particles interacting logarithmically on an infinite wire inside an external field $Q = -\log w$ at inverse temperature $\beta$. Similarly, the circular beta-ensemble has joint probability density proportional to
$$\prod_{n > m}^{N} \left| e^{i\theta_n} - e^{i\theta_m} \right|^{\beta} \quad \text{for } \theta_n \in [-\pi, \pi),$$
and can be interpreted as $N$ charge-one particles on the unit circle interacting logarithmically with no external field. When $\beta = 1$, $2$, and $4$, both ensembles are said to be solvable in that their correlation functions can be expressed in a form which allows for asymptotic calculations. It is not known, however, whether the general beta-ensemble is solvable.
We present four families of particle models which are solvable point processes related to the beta-ensemble. Two of the examples interpolate between the circular beta-ensembles for $\beta = 1$, $2$, and $4$. These give alternate ways of connecting the classical beta-ensembles besides simply changing the value of $\beta$. The other two examples are "mirrored" particle models, where each particle has a paired particle reflected about some point or axis of symmetry.
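For $\beta$ outside $\{1, 2, 4\}$ one can still sample a beta-ensemble exactly thanks to the Dumitriu-Edelman tridiagonal model; the sketch below (our illustration, for the Gaussian weight $w(x) = e^{-x^2/2}$, not part of the abstract) makes the "continuous dial in $\beta$" point concrete:

```python
import numpy as np

def hermite_beta_ensemble(N, beta, seed=None):
    """Sample eigenvalues of the Gaussian (Hermite) beta-ensemble via the
    Dumitriu-Edelman tridiagonal model, valid for any beta > 0."""
    rng = np.random.default_rng(seed)
    diag = rng.standard_normal(N)                    # N(0, 1) diagonal
    df = beta * np.arange(N - 1, 0, -1)              # chi degrees of freedom
    off = np.sqrt(rng.chisquare(df)) / np.sqrt(2.0)  # chi_{beta k} / sqrt(2)
    T = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(T)

# beta acts as a continuous dial: beta = 1, 2, 4 recover the classical
# orthogonal, unitary and symplectic ensembles.
eigs = hermite_beta_ensemble(N=200, beta=2.5, seed=1)
print(eigs.min(), eigs.max())
```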
3 | Regularized Discriminant Analysis: A Large Dimensional Study. Yang, Xiaoke, 28 April 2018.
In this thesis, we study the performance of general regularized discriminant analysis (RDA) classifiers. The data used for the analysis are assumed to follow a Gaussian mixture model with different means and covariances. RDA offers a rich class of regularization options, covering as special cases the regularized linear discriminant analysis (RLDA) and the regularized quadratic discriminant analysis (RQDA) classifiers. We analyze RDA under the double asymptotic regime where the data dimension and the training size grow proportionally, which allows the application of fundamental results from random matrix theory. Under this regime and some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that depends only on the statistical parameters and dimensions of the data. This result not only makes explicit the relation between the misclassification error and the class statistics, but can also be leveraged to select the parameters that minimize the classification error, thus yielding the optimal classifier. Validation on synthetic data shows good accuracy of our theoretical findings. We also construct a general consistent estimator that approximates the true classification error when the underlying statistics are unknown, and benchmark it against a classical estimator on synthetic data. The results demonstrate that the general estimator outperforms the alternatives in terms of mean squared error (MSE).
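For concreteness, here is a minimal sketch of the RDA family under study, using Friedman-style regularization parameters lam and gam (our choice of parametrization; the thesis's exact regularization scheme may differ):

```python
import numpy as np

def rda_fit(X0, X1, lam, gam):
    """Two-class RDA in the spirit of Friedman (1989): lam blends each class
    covariance with the pooled covariance (lam=1 gives RLDA, lam=0 RQDA),
    and gam shrinks the result toward a scaled identity."""
    p = X0.shape[1]
    mus = [X0.mean(axis=0), X1.mean(axis=0)]
    Ss = [np.cov(X0, rowvar=False), np.cov(X1, rowvar=False)]
    pooled = ((len(X0) - 1) * Ss[0] + (len(X1) - 1) * Ss[1]) \
             / (len(X0) + len(X1) - 2)
    covs = []
    for S in Ss:
        Sk = (1 - lam) * S + lam * pooled
        Sk = (1 - gam) * Sk + gam * (np.trace(Sk) / p) * np.eye(p)
        covs.append(Sk)
    return mus, covs

def rda_predict(x, mus, covs):
    """Assign x to the class with the smaller quadratic discriminant score."""
    scores = []
    for mu, S in zip(mus, covs):
        d = x - mu
        sign, logdet = np.linalg.slogdet(S)
        scores.append(d @ np.linalg.solve(S, d) + logdet)
    return int(np.argmin(scores))

# Synthetic Gaussian mixture in the comparable-dimensions regime (p ~ n).
rng = np.random.default_rng(1)
p, n = 100, 200
X0 = rng.standard_normal((n, p))
X1 = rng.standard_normal((n, p)) + 0.3
mus, covs = rda_fit(X0, X1, lam=0.5, gam=0.5)
x = rng.standard_normal(p) + 0.3
print(rda_predict(x, mus, covs))  # expect class 1 most of the time
```

The large dimensional analysis in the thesis replaces the synthetic validation step: instead of averaging errors over test draws, the deterministic limit predicts the error directly from the class statistics.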
4 | Aspects of random matrix theory: concentration and subsequence problems. Xu, Hua, 17 November 2008.
The present work studies some aspects of random matrix theory. Its first part is devoted to the asymptotics of random matrices with infinitely divisible, and in particular heavy-tailed, entries. Its second part focuses on relations between the limiting laws of subsequence problems and the spectra of random matrices.
5 | Robust Estimation of Scatter Matrix, Random Matrix Theory and an Application to Spectrum Sensing. Liu, Zhedong, 05 May 2019.
Covariance estimation is one of the most critical tasks in multivariate statistical analysis, and many applications require a reliable estimate of the covariance matrix, or of the scatter matrix in general. The performance of the classical maximum likelihood method relies a great deal on the validity of the model assumptions. Since these assumptions are often only approximately correct, many robust statistical methods have been proposed to guard against deviations from them; M-estimators are an important class of robust scatter estimators. Understanding the behavior of these estimators in the high dimensional setting, where the number of dimensions is of the same order of magnitude as the number of observations, is therefore desirable, and random matrix theory is a key tool for this study. Building on the high dimensional properties of robust estimators, we introduce a new method for blind spectrum sensing in cognitive radio networks.
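A concrete member of the M-estimator class is Tyler's estimator of scatter, computable by fixed-point iteration; the sketch below (our illustration in plain numpy, using the standard Tyler iteration, not code from the thesis) shows why such estimators matter for heavy-tailed data:

```python
import numpy as np

def tyler_scatter(X, n_iter=100, tol=1e-8):
    """Tyler's M-estimator of scatter: a distribution-free robust alternative
    to the sample covariance, computed by fixed-point iteration.
    X is n x p with (roughly) zero-mean rows."""
    n, p = X.shape
    Sigma = np.eye(p)
    for _ in range(n_iter):
        inv = np.linalg.inv(Sigma)
        # weight each observation by its inverse Mahalanobis quadratic form
        q = np.einsum('ij,jk,ik->i', X, inv, X)      # x_i^T Sigma^{-1} x_i
        Sigma_new = (p / n) * (X.T * (1.0 / q)) @ X
        Sigma_new *= p / np.trace(Sigma_new)         # fix the scale (trace = p)
        if np.linalg.norm(Sigma_new - Sigma, 'fro') < tol:
            Sigma = Sigma_new
            break
        Sigma = Sigma_new
    return Sigma

# Heavy-tailed samples (multivariate t_3): the sample covariance is inflated
# by outliers, while Tyler's estimator recovers the shape of the true scatter.
rng = np.random.default_rng(0)
n, p = 500, 10
g = rng.standard_normal((n, p))
tails = np.sqrt(3 / rng.chisquare(3, size=(n, 1)))  # t_3 radial part
X = g * tails
print(np.round(tyler_scatter(X)[:3, :3], 2))
```

In a spectrum sensing application, a decision statistic such as the ratio of extreme eigenvalues would then be computed from the robust scatter estimate rather than the sample covariance.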
6 | Applications of large random matrices to high-dimensional statistical signal processing. Pham, Gia-Thuy, 28 February 2017.
To be defined.
7 | Invariant Measures on Projective Space. Chao, Chihyi, 13 June 2002.
In the 2×2 case, we discuss the uniqueness of the μ-invariant measure on projective space. Under the condition that |det M| = 1 for every M in G_μ and G_μ is not compact, we have the following:
(1) For any x in P(R^2), if #{M·x | M in G_μ} > 2, then the μ-invariant measure is unique.
(2) For some x in P(R^2), there exist x1, x2 such that {M·x | M in G_μ} is contained in {x1, x2}. If x1 and x2 are both fixed, then the μ-invariant measure ν is not unique; otherwise, if μ has mass only on x1 and x2, then the μ-invariant measure is unique.
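The μ-invariant measure in results like these is the stationary distribution of the projective action x → Mx/|Mx|; a minimal simulation (our illustration, with an arbitrarily chosen pair of determinant-one matrices generating a non-compact group, as in the hypotheses above) approximates it by a long random orbit:

```python
import numpy as np

def simulate_invariant_measure(mats, probs, n_steps=100000, seed=0):
    """Approximate a mu-invariant measure on P(R^2) by iterating the projective
    action x -> Mx/|Mx| with M drawn from `mats` according to `probs`,
    recording the angle of the line through x (mod pi)."""
    rng = np.random.default_rng(seed)
    x = np.array([1.0, 0.0])
    angles = np.empty(n_steps)
    for t in range(n_steps):
        M = mats[rng.choice(len(mats), p=probs)]
        x = M @ x
        x /= np.linalg.norm(x)                       # stay on the unit circle
        angles[t] = np.arctan2(x[1], x[0]) % np.pi   # a point of P(R^2)
    return angles

# Two SL(2,R) matrices (|det M| = 1, non-compact generated group).
A = np.array([[2.0, 0.0], [0.0, 0.5]])
B = np.array([[1.0, 1.0], [0.0, 1.0]])
angles = simulate_invariant_measure([A, B], [0.5, 0.5])
hist, _ = np.histogram(angles, bins=20, range=(0, np.pi), density=True)
print(np.round(hist, 2))
```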
8 | A New Asset Pricing Model based on the Zero-Beta CAPM: Theory and Evidence. Liu, Wei, 03 October 2013.
This work utilizes the zero-beta CAPM to derive an alternative form dubbed the ZCAPM. The ZCAPM posits that asset prices are a function of market risk with two components: average market returns and cross-sectional market volatility. Market risk associated with average market returns, as in the CAPM market model, is known as beta risk; we refer to market risk related to cross-sectional market volatility as zeta risk. Using U.S. stock returns from January 1965 to December 2010, out-of-sample cross-sectional asset pricing tests show that the ZCAPM better predicts stock returns than popular three- and four-factor models. These and other empirical tests lead us to conclude that the ZCAPM holds promise as a robust asset pricing model.
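A heavily simplified sketch of the two-component idea (our illustration: plain OLS on the realized market mean and cross-sectional volatility; the published ZCAPM estimates a latent sign on the zeta term via an EM algorithm, which is omitted here):

```python
import numpy as np

def zcapm_style_loadings(r_asset, r_panel, rf=0.0):
    """OLS estimates of beta (loading on the average market excess return) and
    zeta (loading on cross-sectional market volatility) for one asset.
    r_panel is a T x K matrix of individual stock returns: its row means proxy
    the market return, its row standard deviations the cross-sectional
    volatility."""
    mkt = r_panel.mean(axis=1) - rf          # average market excess return
    csv = r_panel.std(axis=1)                # cross-sectional volatility
    X = np.column_stack([np.ones_like(mkt), mkt, csv])
    coef = np.linalg.lstsq(X, r_asset - rf, rcond=None)[0]
    return coef[1], coef[2]                  # (beta, zeta)

# Toy panel: 500 periods, 50 stocks, zero riskless rate.
rng = np.random.default_rng(2)
panel = 0.01 * rng.standard_normal((500, 50))
asset = 1.2 * panel.mean(axis=1) + 0.5 * panel.std(axis=1) \
        + 0.002 * rng.standard_normal(500)
print(zcapm_style_loadings(asset, panel))   # roughly (1.2, 0.5)
```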
9 | Large-Scale Microarray Data Analysis Using GPU-Accelerated Linear Algebra Libraries. Zhang, Yun, 01 August 2012.
Biological datasets produced by high-throughput genomic research, such as microarrays, contain vast amounts of information about entire genomes and their expression relationships. Gene clustering from such data is a challenging task due to the huge data size, the high complexity of the algorithms, and the visualization needs. Most existing analysis methods for genome-wide gene expression profiles are sequential programs using greedy algorithms and require subjective human decisions. Recently, Zhu et al. proposed a parallel random matrix theory (RMT) based approach for generating transcriptional networks that is much more resistant to high levels of noise in the data and requires no human intervention [9]. Nowadays, GPUs are designed to be used efficiently for general purpose computing [1] and are vastly superior to CPUs [6] in terms of threading performance. Our kernel functions running on the GPU utilize functions from both the Compute Unified Basic Linear Algebra Subroutines (CUBLAS) library and the Compute Unified Linear Algebra (CULA) library, which implements the Linear Algebra Package (LAPACK). Our experimental results show that the GPU program achieves an average speed-up of 2-3x on some simulated datasets.
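The RMT step can be sketched in a few lines; here numpy's corrcoef/eigvalsh stand in for the CUBLAS/CULA calls that run on the GPU (our illustration, with a crude polynomial unfolding of the spectrum; not the paper's code):

```python
import numpy as np

def spacing_distribution(corr, threshold, deg=9):
    """Hard-threshold a gene-gene correlation matrix, then return the
    nearest-neighbour spacings of the unfolded eigenvalues. In the RMT approach
    the threshold is raised until the spacings switch from GOE statistics
    (Wigner surmise) to Poisson, signalling that only genuine modules remain."""
    C = np.where(np.abs(corr) >= threshold, corr, 0.0)
    np.fill_diagonal(C, 1.0)
    eigs = np.sort(np.linalg.eigvalsh(C))  # the eigensolve is the GPU-heavy step
    # crude unfolding: fit a smooth curve to the spectral staircase so the
    # transformed eigenvalues have unit mean spacing
    staircase = np.arange(1, len(eigs) + 1)
    unfolded = np.polyval(np.polyfit(eigs, staircase, deg), eigs)
    return np.diff(unfolded)

# Toy expression data: 200 genes x 50 conditions of pure noise.
rng = np.random.default_rng(3)
corr = np.corrcoef(rng.standard_normal((200, 50)))
s = spacing_distribution(corr, threshold=0.0)
print(round(s.mean(), 2))  # ~1 by construction; compare histogram to Wigner surmise
```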
10 | Random Matrix Theory: Selected Applications from Statistical Signal Processing and Machine Learning. Elkhalil, Khalil, 06 1900.
Random matrix theory is an outstanding mathematical tool that has demonstrated its usefulness in many areas, ranging from wireless communication to finance and economics. The main motivation behind its use comes from the fundamental role that random matrices play in modeling unknown and unpredictable physical quantities. In many situations, meaningful metrics expressed as scalar functionals of these random matrices arise naturally. Along this line, the present work leverages tools from random matrix theory to answer fundamental questions in statistical signal processing and machine learning. In the first part, this thesis develops analytical tools for computing the inverse moments of random Gram matrices with one-sided correlation, a question mainly driven by applications in signal processing and wireless communications wherein such matrices naturally arise. In particular, we derive closed-form expressions for the inverse moments and show that the obtained results can help approximate several performance metrics of common estimation techniques. Then, we carry out a large dimensional study of discriminant analysis classifiers. Under mild assumptions, we show that the asymptotic classification error approaches a deterministic quantity that depends only on the means and covariances associated with each class as well as the problem dimensions. This result permits a better understanding of the underlying classifiers in practical, large but finite dimensions, and can be used to optimize their performance. Finally, we revisit kernel ridge regression and study a centered version of it, which we call centered kernel ridge regression (CKRR). Relying on recent advances on the asymptotic properties of random kernel matrices, we carry out a large dimensional analysis of CKRR under the assumption that both the data dimension and the training size grow simultaneously large at the same rate. We show that both the empirical and prediction risks converge to a limiting risk that relates the performance to the data statistics and the parameters involved. This result is important as it permits a better understanding of kernel ridge regression and allows us to efficiently optimize the performance.
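As an illustration of the last contribution, a minimal centered kernel ridge regression can be written as follows (our sketch: an RBF kernel and the standard feature-space centering; the thesis's exact estimator may differ in details):

```python
import numpy as np

def rbf(A, B, gamma):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def ckrr_fit_predict(Xtr, ytr, Xte, lam=1.0, gamma=0.1):
    """Centered kernel ridge regression: train and predict with the kernel
    centered in feature space (subtracting the mean feature map)."""
    n = len(Xtr)
    H = np.eye(n) - np.ones((n, n)) / n              # centering projector
    K = rbf(Xtr, Xtr, gamma)
    Kc = H @ K @ H                                   # centered Gram matrix
    alpha = np.linalg.solve(Kc + n * lam * np.eye(n), ytr - ytr.mean())
    k = rbf(Xte, Xtr, gamma)
    kc = (k - k.mean(axis=1, keepdims=True)          # center the test kernel
            - K.mean(axis=0) + K.mean())
    return ytr.mean() + kc @ alpha

# Toy regression in the regime where dimension and sample size are comparable.
rng = np.random.default_rng(4)
p, n = 50, 100
Xtr = rng.standard_normal((n, p))
ytr = Xtr @ rng.standard_normal(p) / np.sqrt(p) + 0.1 * rng.standard_normal(n)
Xte = rng.standard_normal((5, p))
print(np.round(ckrr_fit_predict(Xtr, ytr, Xte), 3))
```

The large dimensional analysis then predicts the prediction risk of exactly this kind of estimator as p and n grow together, without Monte Carlo averaging.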