1 |
Matrix Variate and Kernel Density Methods for Applications in Telematics. Pocuca, Nikola. January 2019.
In the last few years, telemetric data arising from embedded vehicle sensors has brought an overwhelming abundance of information to companies, and there is no indication that this will abate in the future. This information about driving behaviour presents an opportunity for analysis. The merging of telemetric data and informatics gives rise to a sub-field of data science known as telematics. This work encompasses matrix variate and kernel density methods for analysing telemetric data. These methods expand the current literature by alleviating the issues that arise with high-dimensional data. / Thesis / Master of Science (MSc)
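As a toy illustration of the kernel density half of this toolkit, the sketch below evaluates a plain Gaussian kernel density estimate on one-dimensional readings; the variable names (such as speeds) and the fixed bandwidth are hypothetical, and this is not the estimator developed in the thesis.

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth=1.0):
    """Plain Gaussian kernel density estimate evaluated on a grid.

    A minimal sketch: the thesis develops richer density methods; this
    only illustrates the basic kernel idea on univariate data.
    """
    samples = np.asarray(samples, dtype=float)
    grid = np.asarray(grid, dtype=float)
    # One Gaussian bump per observation, averaged over the sample.
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    kernel_vals = np.exp(-0.5 * diffs**2) / np.sqrt(2.0 * np.pi)
    return kernel_vals.mean(axis=1) / bandwidth

# Hypothetical telemetric readings (vehicle speeds in km/h).
speeds = np.array([42.0, 55.3, 61.2, 48.7, 90.1, 57.4, 63.0])
grid = np.linspace(30, 100, 8)
density = gaussian_kde(speeds, grid, bandwidth=5.0)
```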
|
2 |
Clustering Matrix Variate Data Using Finite Mixture Models with Component-Wise Regularization. Tait, Peter A. 11 1900.
Matrix variate distributions present an innate way to model random matrices. Realizations of random matrices are created by concurrently observing variables in different locations or at different time points. We use a finite mixture model composed of matrix variate normal densities to cluster matrix variate data. The matrix variate data were generated by accelerometers worn by children in a clinical study conducted at McMaster. Their acceleration along the three planes of motion over the course of seven days forms their matrix variate data. We use the resulting clusters to verify existing group membership labels derived from a test of motor-skills proficiency used to assess the children’s locomotion. / Thesis / Master of Science (MSc)
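For concreteness, a minimal sketch of the matrix variate normal log-density that serves as the component density in such a mixture is given below; the dimensions (3 axes by 7 days) and the identity covariance matrices are illustrative assumptions, and the mixture fitting itself (EM over memberships and parameters) is omitted.

```python
import numpy as np

def matrix_normal_logpdf(X, M, U, V):
    """Log-density of the matrix variate normal distribution.

    X, M : n x p observation and mean matrices
    U    : n x n among-row covariance
    V    : p x p among-column covariance
    Density: exp(-0.5 * tr[V^{-1} (X - M)' U^{-1} (X - M)])
             / ((2*pi)^{np/2} |U|^{p/2} |V|^{n/2})
    """
    n, p = X.shape
    R = X - M
    U_inv = np.linalg.inv(U)
    V_inv = np.linalg.inv(V)
    quad = np.trace(V_inv @ R.T @ U_inv @ R)
    _, logdet_U = np.linalg.slogdet(U)
    _, logdet_V = np.linalg.slogdet(V)
    return -0.5 * (n * p * np.log(2 * np.pi)
                   + p * logdet_U + n * logdet_V + quad)

# Hypothetical 3 x 7 "acceleration" matrix: 3 axes observed over 7 days.
X = np.random.default_rng(1).normal(size=(3, 7))
logp = matrix_normal_logpdf(X, M=np.zeros((3, 7)),
                            U=np.eye(3), V=np.eye(7))
```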
|
3 |
Analysis of Three-Way Data and Other Topics in Clustering and Classification. Gallaugher, Michael Patrick Brian. January 2020.
Clustering and classification are methods for finding underlying group structure in heterogeneous data. With the rise of the “big data” phenomenon, more complex data structures have made traditional clustering methods often inadvisable or infeasible. This thesis presents methodology for analyzing three different examples of these more complex data types. The first is three-way (matrix variate) data, or data that come in the form of matrices; a large emphasis is placed on clustering skewed three-way data and high-dimensional three-way data. The second is clickstream data, which considers a user’s internet search patterns. Finally, co-clustering methodology is discussed for very high-dimensional two-way (multivariate) data. Parameter estimation for all of these methods is based on the expectation-maximization (EM) algorithm. Both simulated and real data are used for illustration. / Thesis / Doctor of Philosophy (PhD)
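All of these mixture-based methods share the same EM skeleton. In generic mixture notation (not the thesis's exact parameterization), the E-step computes the expected group memberships

```latex
\hat{z}_{ig} \;=\;
\frac{\pi_g \, f(\mathbf{x}_i \mid \boldsymbol{\theta}_g)}
     {\sum_{h=1}^{G} \pi_h \, f(\mathbf{x}_i \mid \boldsymbol{\theta}_h)}
```

and the M-step re-estimates the mixing proportions pi_g and the component parameters theta_g by maximizing the expected complete-data log-likelihood, iterating the two steps until convergence.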
|
4 |
An Evolutionary Algorithm for Matrix-Variate Model-Based Clustering. Flynn, Thomas J. January 2023.
Model-based clustering is the use of finite mixture models to identify underlying group structures in data. Estimating parameters for mixture models is notoriously difficult, with the expectation-maximization (EM) algorithm being the predominant method. An alternative approach is the evolutionary algorithm (EA), which emulates natural selection on a population of candidate solutions. By leveraging a fitness function and genetic operators like crossover and mutation, EAs offer a distinct way to search the likelihood surface. EAs have been developed for model-based clustering in the multivariate setting; however, there is growing interest in matrix-variate distributions for three-way data applications. In this context, we propose an EA for finite mixtures of matrix-variate distributions. / Thesis / Master of Science (MSc)
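A heavily simplified sketch of the EA loop appears below. In it, candidate solutions are cluster-label vectors, the fitness function is the negative within-cluster sum of squares, and crossover and mutation act directly on labels; these choices are assumptions made for illustration and are not the operators or fitness function proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(labels, X, G):
    """Negative within-cluster sum of squares (higher is better)."""
    total = 0.0
    for g in range(G):
        members = X[labels == g]
        if len(members) > 0:
            total += ((members - members.mean(axis=0)) ** 2).sum()
    return -total

def crossover(a, b):
    """Single-point crossover of two label vectors."""
    cut = rng.integers(1, len(a))
    return np.concatenate([a[:cut], b[cut:]])

def mutate(labels, G, rate=0.05):
    """Randomly reassign a small fraction of the observations."""
    labels = labels.copy()
    flip = rng.random(len(labels)) < rate
    labels[flip] = rng.integers(0, G, flip.sum())
    return labels

def evolve(X, G=2, pop_size=20, generations=50):
    """Generic EA loop: truncation selection, then crossover + mutation."""
    n = len(X)
    population = [rng.integers(0, G, n) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda lab: fitness(lab, X, G), reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a = parents[rng.integers(len(parents))]
            b = parents[rng.integers(len(parents))]
            children.append(mutate(crossover(a, b), G))
        population = parents + children
    return max(population, key=lambda lab: fitness(lab, X, G))

# Two well-separated hypothetical clusters in the plane.
X = rng.normal(size=(60, 2)) + np.repeat([[0.0, 0.0], [4.0, 4.0]], 30, axis=0)
best_labels = evolve(X)
```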
|
5 |
Moments and Quadratic Forms of Matrix Variate Skew Normal Distributions. Zheng, Shimin; Knisley, Jeff; Wang, Kesheng. 01 February 2016.
In 2007, Domínguez-Molina et al. obtained the moment generating function (mgf) of the matrix variate closed skew normal distribution. In this paper, we use their mgf to obtain the first two moments and some additional properties of quadratic forms for the matrix variate skew normal distributions. The quadratic forms are particularly interesting because they are essentially correlation tests that introduce a new type of orthogonality condition.
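The route from mgf to moments is the standard one. Writing x = vec(X) and suppressing the closed skew normal specifics, the first two moments are obtained as

```latex
\mathbb{E}[\mathbf{x}]
  = \left.\frac{\partial M_{\mathbf{x}}(\mathbf{t})}{\partial \mathbf{t}}\right|_{\mathbf{t}=\mathbf{0}},
\qquad
\mathbb{E}\!\left[\mathbf{x}\mathbf{x}^{\top}\right]
  = \left.\frac{\partial^{2} M_{\mathbf{x}}(\mathbf{t})}{\partial \mathbf{t}\,\partial \mathbf{t}^{\top}}\right|_{\mathbf{t}=\mathbf{0}}
```

and the covariance follows as E[x x'] - E[x]E[x]'; the paper carries out this differentiation for the closed skew normal mgf of Domínguez-Molina et al.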
|
6 |
Decision Theory Classification of High-Dimensional Vectors Based on Small Samples. Bradshaw, David. 01 January 2005.
In this paper, we review existing classification techniques and suggest an entirely new procedure for the classification of high-dimensional vectors on the basis of a few training samples. The proposed method is based on the Bayesian paradigm and provides posterior probabilities that a new vector belongs to each of the classes, so it adapts naturally to any number of classes. Our classification technique is based on a small vector that is related to the projection of the observation onto the space spanned by the training samples. This is achieved by employing matrix-variate distributions in classification, which is an entirely new idea. In addition, our method mimics time-tested classification techniques based on the assumption of normally distributed samples. By assuming that the samples have a matrix-variate normal distribution, we are able to replace classification on the basis of a large covariance matrix with classification on the basis of a smaller matrix that describes the relationship of the sample vectors to each other.
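A rough sketch of the projection idea, under simplifying assumptions (a plain least-squares projection onto the span of the pooled training vectors and a nearest-centroid rule in the reduced coordinates, rather than the full matrix-variate posterior of the paper), might look like this:

```python
import numpy as np

def project_coords(x_new, T):
    """Coordinates of x_new projected onto the span of the training
    vectors (columns of T), via least squares. The p-dimensional
    observation is replaced by an n-dimensional coefficient vector,
    where n is the number of training samples and typically n << p."""
    coef, *_ = np.linalg.lstsq(T, x_new, rcond=None)
    return coef

def classify(x_new, class_samples):
    """Nearest-centroid rule in the projected coordinates.
    class_samples: dict label -> (p x n_k) matrix of training vectors."""
    T = np.hstack(list(class_samples.values()))  # pooled training matrix
    z = project_coords(x_new, T)
    best, best_dist = None, np.inf
    for label, S in class_samples.items():
        centroid = project_coords(S.mean(axis=1), T)
        d = np.linalg.norm(z - centroid)
        if d < best_dist:
            best, best_dist = label, d
    return best

# Hypothetical high-dimensional data: p = 500, a handful of samples per class.
rng = np.random.default_rng(2)
classes = {"A": rng.normal(0.0, 1.0, size=(500, 5)),
           "B": rng.normal(0.5, 1.0, size=(500, 5))}
label = classify(rng.normal(0.5, 1.0, size=500), classes)
```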
|
7 |
The Inverse Problem of Multivariate and Matrix-Variate Skew Normal Distributions. Zheng, Shimin; Hardin, J. M.; Gupta, A. K. 01 June 2012.
In this paper, we prove that the joint distribution of the random vectors Z_1 and Z_2, and the distribution of Z_2, are skew normal, provided that Z_1 is skew normally distributed and Z_2 conditional on Z_1 follows a closed skew normal distribution. We also extend the main results to the matrix variate case.
|
8 |
Stochastic Representations of the Matrix Variate Skew Elliptically Contoured Distributions. Zheng, Shimin; Zhang, Chunming; Knisley, Jeff. 01 January 2013.
Matrix variate skew elliptically contoured distributions generalize several classes of important distributions. This paper defines and explores these distributions; in particular, we discuss two of their stochastic representations.
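In the simplest special case, a stochastic representation of this kind reduces to the classical convolution-type representation of the univariate skew normal; the sketch below samples from it and is only meant to convey the idea, since the paper's representations are for the matrix variate, elliptically contoured setting.

```python
import numpy as np

def rvs_skew_normal(delta, size, rng=None):
    """Sample the standard skew normal via the classical representation
    Z = delta*|U0| + sqrt(1 - delta**2)*U1,
    with U0, U1 independent N(0, 1) and -1 < delta < 1.
    The corresponding shape parameter is lambda = delta / sqrt(1 - delta**2)."""
    rng = rng or np.random.default_rng()
    u0 = rng.standard_normal(size)
    u1 = rng.standard_normal(size)
    return delta * np.abs(u0) + np.sqrt(1.0 - delta**2) * u1

# Positive delta should produce right-skewed draws.
z = rvs_skew_normal(delta=0.9, size=100_000, rng=np.random.default_rng(3))
empirical_skew = ((z - z.mean())**3).mean() / z.std()**3   # > 0
```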
|
9 |
Lois de Wishart sur les cônes convexes / Wishart laws on convex cones. Mamane, Salha. 20 March 2017.
In high-dimensional multivariate data analysis, the Wishart laws defined in the context of graphical models are of great importance because they provide parsimony and modularity. In the context of Gaussian graphical models governed by a graph G, Wishart laws can be defined on two alternative restrictions of the cone of symmetric positive definite matrices: the cone P_G of symmetric positive definite matrices x satisfying x_ij = 0 for all non-adjacent vertices i and j, and its dual cone Q_G. In this thesis, we propose a harmonious construction of exponential families of Wishart laws on the cones P_G and Q_G. It focuses on nearest-neighbour interaction graphical models, which have the advantage of being relatively simple while including examples of all the interesting special cases: the univariate case, a symmetric cone case, a non-symmetric homogeneous cone case, and an infinite number of non-homogeneous cone cases. Our simple method is based on analysis on convex cones. The Wishart laws on Q_An are defined through the gamma function on Q_An, and the Wishart laws on P_An are defined as the conjugate Diaconis-Ylvisaker family. The methods developed are then used to solve the Letac-Massam conjecture on the set of parameters of the Wishart law on Q_An. This thesis also studies the sub-models, parametrized by a segment in M, of an exponential family parametrized by its domain of means M. / In the framework of Gaussian graphical models governed by a graph G, Wishart distributions can be defined on two alternative restrictions of the cone of symmetric positive definite matrices: the cone P_G of symmetric positive definite matrices x satisfying x_ij = 0 for all non-adjacent vertices i and j, and its dual cone Q_G. In this thesis, we provide a harmonious construction of Wishart exponential families in graphical models. Our simple method is based on analysis on convex cones. The focus is on nearest-neighbour interaction graphical models, governed by a graph A_n, which have the advantage of being relatively simple while including all particular cases of interest, such as the univariate case, a symmetric cone case, a non-symmetric homogeneous cone case, and an infinite number of non-homogeneous cone cases. The Wishart distributions on Q_An are constructed as the exponential family generated from the gamma function on Q_An. The Wishart distributions on P_An are then constructed as the Diaconis-Ylvisaker conjugate family for the exponential family of Wishart distributions on Q_An. The developed methods are then used to solve the Letac-Massam Conjecture on the set of parameters of type I Wishart distributions on Q_An. Finally, we introduce and study exponential families of distributions parametrized by a segment of means, with an emphasis on their Fisher information. The focus is on distributions with matrix parameters. The particular cases of Gaussian and Wishart exponential families are further examined.
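For reference, the classical Wishart density on the full cone of symmetric positive definite p x p matrices, which the constructions above restrict to the graph-dependent cones P_G and Q_G, is, for X ~ W_p(n, Sigma) with n > p - 1,

```latex
f(X) \;=\;
\frac{|X|^{(n-p-1)/2}\,\exp\!\left(-\tfrac{1}{2}\operatorname{tr}\!\left(\Sigma^{-1}X\right)\right)}
     {2^{np/2}\,|\Sigma|^{n/2}\,\Gamma_{p}\!\left(\tfrac{n}{2}\right)},
\qquad X \text{ symmetric positive definite},
```

where Gamma_p denotes the multivariate gamma function.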
|
10 |
A Matrix Variate Generalization of the Skew Pearson Type VII and Skew T Distribution. Zheng, Shimin; Gupta, A. K.; Liu, Xuefeng. 01 January 2012.
We define and study multivariate and matrix variate skew Pearson type VII and skew t-distributions. We derive the marginal and conditional distributions, linear transformations, and stochastic representations of the multivariate and matrix variate skew Pearson type VII and skew t-distributions. We also study the limiting distributions.
|