61 |
An automatic controller tuning algorithm. Christodoulou, Michael A. January 1991 (has links)
A project report submitted to the Faculty of Engineering, University of the
Witwatersrand, Johannesburg, in partial fulfillment of the requirements
for the degree of Master of Science in Engineering. Johannesburg 1991. / The report describes the design of an algorithm which can be used for automatic
controller tuning purposes. It uses an on-line parameter estimator and a pole assignment
design method. The resulting control law is formulated to approximate a
proportional-integral (PI) industrial controller. The development of the algorithm
is based on the delta-operator. Some implementation aspects such as covariance resetting, dead zone, and signal conditioning are also discussed. Robust stability and
performance are two issues that govern the design approach. Additionally, transient
and steady-state system response criteria from the time and frequency
domains are utilized. The design work is substantiated with simulation and real plant
tests. / AC2017
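As a rough sketch of the kind of on-line parameter estimator such a tuner relies on, the following is a generic recursive least-squares identifier with a crude covariance reset; it is illustrative only and not the report's delta-operator formulation. The plant parameters and noise level are invented for the example.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One recursive least-squares update with forgetting factor lam."""
    denom = lam + phi @ P @ phi
    K = P @ phi / denom                      # gain vector
    theta = theta + K * (y - phi @ theta)    # correct estimate by prediction error
    P = (P - np.outer(K, phi @ P)) / lam     # update covariance
    return theta, P

# Identify y[k] = a*y[k-1] + b*u[k-1] for a simulated first-order plant.
rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5
theta = np.zeros(2)
P = 1e3 * np.eye(2)            # large initial covariance
y_prev = 0.0
for k in range(500):
    u = rng.standard_normal()
    y = a_true * y_prev + b_true * u + 0.01 * rng.standard_normal()
    phi = np.array([y_prev, u])
    theta, P = rls_step(theta, P, phi, y)
    y_prev = y
    if np.trace(P) < 1e-6:     # crude covariance resetting, in the spirit of the report
        P = 1e2 * np.eye(2)

print(theta)   # should approach [0.9, 0.5]
```

The estimated parameters would then feed a pole-assignment design step to update the PI gains at each sample.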
|
62 |
The Joint Modeling of Longitudinal Covariates and Censored Quantile Regression for Health Applications. Hu, Bo. January 2022 (has links)
The overall theme of this thesis is the joint modeling of longitudinal covariates and a censored survival outcome, where the survival outcome is modeled using conditional quantile regression. In traditional joint modeling approaches, the survival outcome is usually modeled with a Cox regression. Censored quantile regression can model a survival outcome without pre-specifying a parametric likelihood function or assuming proportional hazards. Existing censored quantile methods are mostly limited to fixed cross-sectional covariates, while in many longitudinal studies, researchers wish to investigate the associations between longitudinal covariates and a survival outcome.
The first part considers the problem of joint modeling with a survival outcome under a mixture of censoring types: left censoring, interval censoring, or right censoring. We pose a linear mixed effect model for a longitudinal covariate and a conditional quantile regression for a censored survival outcome, assuming that the longitudinal covariate and the survival outcome are conditionally independent given individual-level random effects. We propose a Gibbs sampling approach, extending a censored-quantile-based data augmentation algorithm to allow for a longitudinal covariate process. We also propose an iterative algorithm that alternately updates individual-level random effects and model parameters, where the censored survival outcome is handled by re-weighting. Both of our methods are illustrated by application to the LEGACY Girls Study cohort to understand the influence of individual genetic profiles on pubertal development (i.e., the onset of breast development) while adjusting for BMI growth trajectories.
The second part considers the problem of joint modeling with a randomly right-censored survival outcome. We pose a linear mixed effect model for a longitudinal covariate and a conditional quantile regression for the censored survival outcome, again assuming that the longitudinal covariate and the survival outcome are conditionally independent given individual-level random effects. We propose a Gibbs sampling approach, extending a censored-quantile-based data augmentation algorithm to allow for a longitudinal covariate process. Theoretical properties of the resulting parameter estimates are established. We also propose an iterative algorithm that alternately updates individual-level random effects and model parameters, where the censored survival outcome is handled by re-weighting. Both of our methods are illustrated by application to the Mayo Clinic Primary Biliary Cholangitis data to assess the effect of the drug D-penicillamine on the risk of liver transplantation or death, while controlling for age at registration and the serum bilirubin (serBilir) marker.
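The conditional quantile regressions above are built on the check (pinball) loss. A minimal, hypothetical sketch, ignoring censoring, random effects, and the longitudinal structure entirely, shows that minimizing the average check loss recovers the tau-th sample quantile; the exponential "survival times" are invented for illustration.

```python
import numpy as np

def pinball(u, tau):
    """Check (pinball) loss: tau*u on u >= 0, (tau - 1)*u otherwise."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

rng = np.random.default_rng(1)
y = rng.exponential(scale=2.0, size=50_000)   # stand-in "survival times"
tau = 0.5

# The q minimizing the average check loss is the tau-th sample quantile.
grid = np.linspace(0.0, 5.0, 1001)
losses = np.array([pinball(y - q, tau).mean() for q in grid])
q_hat = grid[losses.argmin()]
print(q_hat, np.quantile(y, tau))   # both near the true median 2*ln(2)
```

Censored quantile methods replace this plain average with re-weighted or augmented versions of the same objective, which is where the Gibbs sampling and re-weighting schemes of the thesis come in.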
|
63 |
Who Carries the Burden of Strength? The Impact of Colorism on Perceptions of Strong Black Women. Jean-Ceide, Cassandre Jennie. 05 1900 (has links)
Using intersectionality as a guiding framework, the current study examined how gendered and racialized perceptions of Black women as "strong Black women" may be shaped by colorism. This experimental study sampled 314 Black and White participants from the community. Participants were presented with a vignette that described a Black woman coping with workplace stress in one of two ways, one congruent with strong Black womanhood (emotional restriction) and one incongruent with strong Black womanhood (emotional vulnerability), alongside the image of a light-skinned or dark-skinned Black woman. Then, participants were asked to rate how "strong" they perceived the woman in the vignette to be. A factorial ANCOVA was conducted to test how perceptions of the woman in the vignette varied based on her emotional response to workplace stress and her skin tone, while controlling for perceptions of likability and competence. As hypothesized, we observed that participants perceived the woman responding to workplace stress with emotional restriction as stronger than the woman who responded with emotional vulnerability. However, neither skin tone nor the interaction between emotional response and skin tone had a bearing on participants' perceptions. There were also no differences in perceptions based on participant race. Through its intersectional framing, this study challenges scholars and practitioners to consider how the interplay between racism, sexism, and colorism shapes how Black women are seen by others and, in turn, how they may see themselves as strong Black women. Implications of the findings, limitations, and future directions are discussed.
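A factorial ANCOVA of this kind can be sketched as nested least-squares model comparisons. The data, coding, and effect sizes below are invented for illustration and do not come from the study; they merely mimic the reported pattern (an emotion effect, no interaction effect).

```python
import numpy as np

def rss(X, y):
    """Residual sum of squares from an ordinary least squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

rng = np.random.default_rng(2)
n = 300
emotion = rng.integers(0, 2, n)       # 1 = emotional restriction (hypothetical coding)
skin = rng.integers(0, 2, n)          # 1 = dark-skinned (hypothetical coding)
likability = rng.standard_normal(n)
competence = rng.standard_normal(n)
strength = (3.0 + 1.0 * emotion + 0.4 * likability + 0.4 * competence
            + rng.standard_normal(n))  # simulated: emotion effect only

covs = np.column_stack([np.ones(n), likability, competence])
main = np.column_stack([covs, emotion, skin])
full = np.column_stack([main, emotion * skin])

# F-statistics from nested model comparisons (1 numerator df each).
F_int = (rss(main, strength) - rss(full, strength)) / (rss(full, strength) / (n - full.shape[1]))
reduced = np.column_stack([covs, skin])
F_emotion = (rss(reduced, strength) - rss(main, strength)) / (rss(main, strength) / (n - main.shape[1]))
print(F_emotion, F_int)   # emotion F large; interaction F small
```

Comparing each F to the appropriate F distribution would give the p-values reported in such an analysis.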
|
64 |
Online machine learning methods for visual tracking / Algorithmes d'apprentissage en ligne pour le suivi visuel. Qin, Lei. 05 May 2014 (has links)
Nous étudions le problème de suivi de cible dans une séquence vidéo sans aucune connaissance préalable autre qu'une référence annotée dans la première image. Pour résoudre ce problème, nous proposons une nouvelle méthode de suivi temps-réel se basant à la fois sur une représentation originale de l'objet à suivre (descripteur) et sur un algorithme adaptatif capable de suivre la cible même dans les conditions les plus difficiles, comme le cas où la cible disparaît et réapparaît dans la scène (ré-identification). Tout d'abord, pour la représentation d'une région de l'image à suivre dans le temps, nous proposons des améliorations au descripteur de covariance. Ce nouveau descripteur est capable d'extraire des caractéristiques spécifiques à la cible, tout en ayant la capacité à s'adapter aux variations de l'apparence de la cible. Ensuite, l'étape algorithmique consiste à mettre en cascade des modèles génératifs et des modèles discriminatoires afin d'exploiter conjointement leurs capacités à distinguer la cible des autres objets présents dans la scène. Les modèles génératifs sont déployés dans les premières couches afin d'éliminer les candidats les plus faciles, alors que les modèles discriminatoires sont déployés dans les couches suivantes afin de distinguer la cible des autres objets qui lui sont très similaires. L'analyse discriminante des moindres carrés partiels (AD-MCP) est employée pour la construction des modèles discriminatoires. Enfin, un nouvel algorithme d'apprentissage en ligne AD-MCP est proposé pour la mise à jour incrémentale des modèles discriminatoires. / We study the challenging problem of tracking an arbitrary object in video sequences with no prior knowledge other than a template annotated in the first frame. To tackle this problem, we build a robust tracking system consisting of the following components. First, for image region representation, we propose some improvements to the region covariance descriptor.
Characteristics of a specific object are taken into consideration before constructing the covariance descriptor. Second, for building the object appearance model, we propose to combine the merits of both generative models and discriminative models by organizing them in a detection cascade. Specifically, generative models are deployed in the early layers for eliminating most easy candidates, whereas discriminative models are in the later layers for distinguishing the object from a few similar "distracters". Partial Least Squares Discriminant Analysis (PLS-DA) is employed for building the discriminative object appearance models. Third, for updating the generative models, we propose a weakly-supervised model updating method based on cluster analysis using the mean-shift gradient density estimation procedure. Fourth, a novel online PLS-DA learning algorithm is developed for incrementally updating the discriminative models. The final tracking system that integrates all these building blocks exhibits good robustness against most challenges in visual tracking. Comparative experiments conducted on challenging video sequences show that the proposed tracking system performs favorably with respect to a number of state-of-the-art methods.
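A plain region covariance descriptor can be sketched as follows; this is the generic construction (per-pixel features: coordinates, intensity, gradient magnitudes), not the thesis's improved, object-specific feature set, and the feature choice here is an assumption for illustration.

```python
import numpy as np

def region_covariance(img, top, left, h, w):
    """Covariance descriptor of a rectangular image region.

    Per-pixel feature vector: (x, y, intensity, |dI/dx|, |dI/dy|).
    The region is summarized by the 5x5 covariance of these features.
    """
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    gy, gx = np.gradient(img.astype(float))          # gradients along rows, cols
    feats = np.stack([xs, ys, img, np.abs(gx), np.abs(gy)], axis=-1)
    region = feats[top:top + h, left:left + w].reshape(-1, 5)
    return np.cov(region, rowvar=False)              # symmetric 5x5 descriptor

img = np.random.default_rng(3).uniform(0, 255, (64, 64))
C = region_covariance(img, 10, 10, 20, 20)
print(C.shape)   # (5, 5)
```

Because the descriptor is a small symmetric positive semi-definite matrix, regions can be compared with matrix distances regardless of the region's size.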
|
65 |
Abnormal detection in video streams via one-class learning methods / Algorithmes d'apprentissage mono-classe pour la détection d'anomalies dans les flux vidéo. Wang, Tian. 06 May 2014 (has links)
La vidéosurveillance représente l’un des domaines de recherche privilégiés en vision par ordinateur. Le défi scientifique dans ce domaine comprend la mise en œuvre de systèmes automatiques pour obtenir des informations détaillées sur le comportement des individus et des groupes. En particulier, la détection de mouvements anormaux de groupes d’individus nécessite une analyse fine des frames du flux vidéo. Dans le cadre de cette thèse, la détection de mouvements anormaux est basée sur la conception d’un descripteur d’image efficace ainsi que des méthodes de classification non linéaires. Nous proposons trois caractéristiques pour construire le descripteur de mouvement : (i) le flux optique global, (ii) les histogrammes de l’orientation du flux optique (HOFO) et (iii) le descripteur de covariance (COV) fusionnant le flux optique et d’autres caractéristiques spatiales de l’image. Sur la base de ces descripteurs, des algorithmes de machine learning (machines à vecteurs de support (SVM)) mono-classe sont utilisés pour détecter des événements anormaux. Deux stratégies en ligne de SVM mono-classe sont proposées : la première est basée sur le SVDD (online SVDD) et la deuxième est basée sur une version « moindres carrés » des algorithmes SVM (online LS-OC-SVM) / One of the major research areas in computer vision is visual surveillance. The scientific challenge in this area includes the implementation of automatic systems for obtaining detailed information about the behavior of individuals and groups. Particularly, detection of abnormal individual movements requires sophisticated image analysis. This thesis focuses on the problem of the abnormal events detection, including feature descriptor design characterizing the movement information and one-class kernel-based classification methods. 
In this thesis, three different image features have been proposed: (i) global optical flow features, (ii) the histogram of optical flow orientations (HOFO) descriptor, and (iii) the covariance matrix (COV) descriptor. Based on these proposed descriptors, one-class support vector machines (SVM) are proposed in order to detect abnormal events. Two online strategies of one-class SVM are proposed: the first is based on support vector data description (online SVDD) and the second is based on online least squares one-class support vector machines (online LS-OC-SVM).
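A HOFO-style descriptor can be sketched directly, assuming the optical flow field has already been computed; the binning and weighting details below are illustrative assumptions, not the thesis's exact construction.

```python
import numpy as np

def hofo(flow_u, flow_v, n_bins=8):
    """Histogram of optical flow orientations, magnitude-weighted and normalized."""
    mag = np.hypot(flow_u, flow_v)
    ang = np.arctan2(flow_v, flow_u) % (2 * np.pi)                 # angles in [0, 2*pi)
    bins = np.minimum((ang / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    s = hist.sum()
    return hist / s if s > 0 else hist

# A flow field moving uniformly to the right puts all mass in the first bin.
u = np.ones((32, 32))
v = np.zeros((32, 32))
h = hofo(u, v)
print(h)
```

Frames summarized this way become fixed-length vectors, which is what the one-class SVM variants then classify as normal or abnormal.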
|
66 |
Physically constrained maximum likelihood method for snapshot deficient adaptive array processing. Kraay, Andrea L. (Andrea Lorraine), 1976- January 2003 (has links)
Thesis (Elec.E. and S.M. in Electrical Engineering)--Joint Program in Applied Ocean Physics and Engineering (Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science; and the Woods Hole Oceanographic Institution), 2003. / "February 2003." / Includes bibliographical references (leaves 139-141). / by Andrea L. Kraay. / Elec.E. and S.M. in Electrical Engineering
|
67 |
Variation Simulation of Fixtured Assembly Processes for Compliant Structures Using Piecewise-Linear Analysis. Stewart, Michael L. 09 October 2004 (has links) (PDF)
While variation analysis methods for compliant assemblies are not new, little has been done to include the effects of multi-step, fixtured assembly processes. This thesis introduces a new method to statistically analyze compliant part assembly processes using fixtures. This method, consistent with the FASTA method developed at BYU, yields both a mean and a variance solution. The method, called Piecewise-Linear Elastic Analysis, or PLEA, is developed for predicting the residual stress, deformation, and springback variation in compliant assemblies. A comprehensive, step-by-step analysis map is provided. PLEA is validated on a simple laboratory assembly and a more complex production assembly. Significant modeling findings are reported, as well as a comparison of the analytical and physical results.
|
68 |
Some questions in risk management and high-dimensional data analysis. Wang, Ruodu. 04 May 2012 (has links)
This thesis addresses three topics in the area of statistics and
probability, with applications in risk management. First, for the
testing problems in high-dimensional (HD) data analysis, we
present a novel method to formulate empirical likelihood tests and
jackknife empirical likelihood tests by splitting the sample into
subgroups. New tests are constructed to test the equality of two HD
means, the coefficients in HD linear models, and HD covariance
matrices. Second, we propose jackknife empirical likelihood methods
to formulate interval estimations for important quantities in
actuarial science and risk management, such as the risk-distortion
measures, Spearman's rho and parametric copulas. Lastly, we
introduce the theory of completely mixable (CM) distributions. We
give properties of the CM distributions, show that a few classes of
distributions are CM and use the new technique to find the bounds
for the sum of individual risks with given marginal distributions
but unspecified dependence structure. The result partially solves a
problem that had been a challenge for decades, and directly leads to
the bounds on quantities of interest in risk management, such as the
variance, the stop-loss premium, the price of the European options
and the Value-at-Risk associated with a joint portfolio.
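Complete mixability can be illustrated in its simplest case: a symmetric distribution such as U(0,1) is 2-CM, since the antithetic pair (U, 1-U) has both coordinates uniform yet a constant sum, driving the variance of the sum to its lower bound of zero. A minimal numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
u = rng.uniform(size=100_000)

# Antithetic coupling: both coordinates are U(0,1), yet the sum is constant.
s_mixable = u + (1.0 - u)                  # identically 1, variance ~ 0
s_indep = u + rng.uniform(size=100_000)    # independent benchmark, variance 1/6

print(s_mixable.var(), s_indep.var())
```

The thesis's CM theory generalizes this idea to n marginals and non-uniform distributions, which is what yields the bounds on the variance, stop-loss premium, and Value-at-Risk of a joint portfolio with unspecified dependence.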
|
69 |
Application of Distance Covariance to Time Series Modeling and Assessing Goodness-of-Fit. Fernandes, Leon. January 2024 (has links)
The overarching goal of this thesis is to use distance covariance based methods to extend asymptotic results from the i.i.d. case to general time series settings. Accounting for dependence may make already difficult statistical inference all the more challenging. The distance covariance is an increasingly popular measure of dependence between random vectors that goes beyond linear dependence as described by correlation. It is defined by a squared integral norm of the difference between the joint and marginal characteristic functions with respect to a specific weight function. Distance covariance has the advantage of being able to detect dependence even for uncorrelated data. The energy distance is a closely related quantity that measures distance between distributions of random vectors. These statistics can be used to establish asymptotic limit theory for stationary ergodic time series. The asymptotic results are driven by the limit theory for the empirical characteristic functions.
In this thesis we apply the distance covariance to three problems in time series modeling: (i) Independent Component Analysis (ICA), (ii) multivariate time series clustering, and (iii) goodness-of-fit using residuals from a fitted model. The underlying statistical procedures for each topic use the distance covariance function as a measure of dependence. The distance covariance arises in a different way in each of these topics: first as a measure of independence among the components of a vector, second as a measure of similarity of joint distributions, and third for assessing serial dependence among the fitted residuals. In each of these cases, limit theory is established for the corresponding empirical distance covariance statistics when the data come from a stationary ergodic time series.
For Topic (i) we consider an ICA framework, which is a popular tool for blind source separation and has found application in fields such as financial time series, signal processing, feature extraction, and brain imaging. The Structural Vector Autoregression (SVAR) model is often the basic model used for macroeconomic time series. The residuals in such a model are given by e_t = A S_t, the classical ICA model. In certain applications, one of the components of S_t has infinite variance, which differs from the standard ICA model. Furthermore, the e_t's are not observed directly but are only estimated from the SVAR modeling. Many ICA procedures require the existence of a finite second or even fourth moment. We derive consistency when using the distance covariance to measure independence of residuals in the infinite variance case. Extensions to the ICA model with noise, which have a direct application to SVAR models when testing independence of residuals based on their estimated counterparts, are also considered.
In Topic (ii) we propose a novel methodology for clustering multivariate time series data using energy distance. Specifically, a dissimilarity matrix is formed using the energy distance statistic to measure separation between the finite dimensional distributions for the component time series. Once the pairwise dissimilarity matrix is calculated, a hierarchical clustering method is then applied to obtain the dendrogram. This procedure is completely nonparametric as the dissimilarities between stationary distributions are directly calculated without making any model assumptions. In order to justify this procedure, asymptotic properties of the energy distance estimates are derived for general stationary and ergodic time series.
Topic (iii) considers the fundamental and often final step in time series modeling, assessing the quality of fit of a proposed model to the data. Since the underlying distribution of the innovations that generate a model is often not prescribed, goodness-of-fit tests typically take the form of testing the fitted residuals for serial independence. However, these fitted residuals are inherently dependent since they are based on the same parameter estimates, and thus standard tests of serial independence, such as those based on the autocorrelation function (ACF) or auto-distance correlation function (ADCF) of the fitted residuals, need to be adjusted. We apply sample splitting in the time series setting to perform tests of serial dependence of fitted residuals using the sample ACF and ADCF. Here the first f_n of the n data points in the time series are used to estimate the parameters of the model. Tests for serial independence are then based on all n residuals. With f_n = n/2, the ACF and ADCF tests of serial independence often have the same limit distributions as though the underlying residuals were indeed i.i.d. That is, if the first half of the data is used to estimate the parameters and the estimated residuals are computed for the entire data set based on these parameter estimates, then the ACF and ADCF can have the same limit distributions as though the residuals were i.i.d. This procedure obviates the need for adjustment in the construction of confidence bounds for both the ACF and ADCF, based on the fitted residuals, in goodness-of-fit testing. We also show that if f_n < n/2 then the asymptotic distributions of the tests stochastically dominate the corresponding asymptotic distributions for true i.i.d. noise; the stochastic order is reversed when f_n > n/2.
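The sample distance covariance itself is straightforward to compute from double-centered distance matrices. A minimal sketch (squared V-statistic form with Euclidean distances) shows it detecting a dependence that correlation misses:

```python
import numpy as np

def dcov2(x, y):
    """Squared sample distance covariance (V-statistic form)."""
    a = np.abs(x[:, None] - x[None, :])                       # pairwise distances in x
    b = np.abs(y[:, None] - y[None, :])                       # pairwise distances in y
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    return (A * B).mean()

rng = np.random.default_rng(5)
x = rng.standard_normal(1000)
y = x ** 2                        # uncorrelated with x, but fully dependent
corr = np.corrcoef(x, y)[0, 1]
print(corr, dcov2(x, y))          # correlation near 0; distance covariance clearly positive
```

It is this ability to register nonlinear dependence that makes the statistic useful for the ICA, clustering, and residual-diagnostic problems above.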
|
70 |
The identification and application of common principal components. Pepler, Pieter Theo. 12 1900 (has links)
Thesis (PhD)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: When estimating the covariance matrices of two or more populations,
the covariance matrices are often assumed to be either equal or completely
unrelated. The common principal components (CPC) model provides an
alternative which is situated between these two extreme assumptions: The
assumption is made that the population covariance matrices share the same
set of eigenvectors, but have different sets of eigenvalues.
An important question in the application of the CPC model is to determine
whether it is appropriate for the data under consideration. Flury (1988)
proposed two methods, based on likelihood estimation, to address this question.
However, the assumption of multivariate normality is untenable for
many real data sets, making the application of these parametric methods
questionable. A number of non-parametric methods, based on bootstrap
replications of eigenvectors, are proposed to select an appropriate common
eigenvector model for two population covariance matrices. Using simulation
experiments, it is shown that the proposed selection methods outperform the
existing parametric selection methods.
If appropriate, the CPC model can provide covariance matrix estimators
that are less biased than when assuming equality of the covariance matrices,
and of which the elements have smaller standard errors than the elements of
the ordinary unbiased covariance matrix estimators. A regularised covariance
matrix estimator under the CPC model is proposed, and Monte Carlo simulation
results show that it provides more accurate estimates of the population
covariance matrices than the competing covariance matrix estimators.
Covariance matrix estimation forms an integral part of many multivariate
statistical methods. Applications of the CPC model in discriminant analysis,
biplots and regression analysis are investigated. It is shown that, in cases
where the CPC model is appropriate, CPC discriminant analysis provides
significantly smaller misclassification error rates than both ordinary quadratic
discriminant analysis and linear discriminant analysis. A framework for the
comparison of different types of biplots for data with distinct groups is developed,
and CPC biplots constructed from common eigenvectors are compared
to other types of principal component biplots using this framework.
A subset of data from the Vermont Oxford Network (VON), of infants admitted to participating neonatal intensive care units in South Africa and
Namibia during 2009, is analysed using the CPC model. It is shown that
the proposed non-parametric methodology offers an improvement over the
known parametric methods in the analysis of this data set which originated
from a non-normally distributed multivariate population.
CPC regression is compared to principal component regression and partial least squares regression in the fitting of models to predict neonatal mortality
and length of stay for infants in the VON data set. The fitted regression
models, using readily available day-of-admission data, can be used by medical
staff and hospital administrators to counsel parents and improve the
allocation of medical care resources. Predicted values from these models can
also be used in benchmarking exercises to assess the performance of neonatal
intensive care units in the Southern African context, as part of larger quality
improvement programmes. / AFRIKAANSE OPSOMMING: Wanneer die kovariansiematrikse van twee of meer populasies beraam
word, word dikwels aanvaar dat die kovariansiematrikse of gelyk, of heeltemal
onverwant is. Die gemeenskaplike hoofkomponente (GHK) model verskaf
'n alternatief wat tussen hierdie twee ekstreme aannames geleë is: Die
aanname word gemaak dat die populasie kovariansiematrikse dieselfde versameling
eievektore deel, maar verskillende versamelings eiewaardes het.
'n Belangrike vraag in die toepassing van die GHK model is om te bepaal
of dit geskik is vir die data wat beskou word. Flury (1988) het twee metodes,
gebaseer op aanneemlikheidsberaming, voorgestel om hierdie vraag aan te
spreek. Die aanname van meerveranderlike normaliteit is egter ongeldig vir
baie werklike datastelle, wat die toepassing van hierdie metodes bevraagteken.
'n Aantal nie-parametriese metodes, gebaseer op skoenlus-herhalings van
eievektore, word voorgestel om 'n geskikte gemeenskaplike eievektor model
te kies vir twee populasie kovariansiematrikse. Met die gebruik van simulasie
eksperimente word aangetoon dat die voorgestelde seleksiemetodes beter vaar
as die bestaande parametriese seleksiemetodes.
Indien toepaslik, kan die GHK model kovariansiematriks beramers verskaf
wat minder sydig is as wanneer aanvaar word dat die kovariansiematrikse
gelyk is, en waarvan die elemente kleiner standaardfoute het as die elemente
van die gewone onsydige kovariansiematriks beramers. 'n Geregulariseerde
kovariansiematriks beramer onder die GHK model word voorgestel, en Monte
Carlo simulasie resultate toon dat dit meer akkurate beramings van die populasie
kovariansiematrikse verskaf as ander mededingende kovariansiematriks
beramers.
Kovariansiematriks beraming vorm 'n integrale deel van baie meerveranderlike
statistiese metodes. Toepassings van die GHK model in diskriminantanalise,
bi-stippings en regressie-analise word ondersoek. Daar word
aangetoon dat, in gevalle waar die GHK model toepaslik is, GHK diskriminantanalise
betekenisvol kleiner misklassifikasie foutkoerse lewer as beide
gewone kwadratiese diskriminantanalise en lineêre diskriminantanalise. 'n
Raamwerk vir die vergelyking van verskillende tipes bi-stippings vir data
met verskeie groepe word ontwikkel, en word gebruik om GHK bi-stippings
gekonstrueer vanaf gemeenskaplike eievektore met ander tipe hoofkomponent
bi-stippings te vergelyk. 'n Deelversameling van data vanaf die Vermont Oxford Network (VON),
van babas opgeneem in deelnemende neonatale intensiewe sorg eenhede in
Suid-Afrika en Namibië gedurende 2009, word met behulp van die GHK
model ontleed. Daar word getoon dat die voorgestelde nie-parametriese
metodiek 'n verbetering op die bekende parametriese metodes bied in die ontleding van hierdie datastel wat afkomstig is uit 'n nie-normaal verdeelde
meerveranderlike populasie.
GHK regressie word vergelyk met hoofkomponent regressie en parsiële
kleinste kwadrate regressie in die passing van modelle om neonatale mortaliteit
en lengte van verblyf te voorspel vir babas in die VON datastel. Die
gepaste regressiemodelle, wat maklik bekombare dag-van-toelating data gebruik,
kan deur mediese personeel en hospitaaladministrateurs gebruik word
om ouers te adviseer en die toewysing van mediese sorg hulpbronne te verbeter.
Voorspelde waardes vanaf hierdie modelle kan ook gebruik word in
normwaarde oefeninge om die prestasie van neonatale intensiewe sorg eenhede
in die Suider-Afrikaanse konteks, as deel van groter gehalteverbeteringprogramme,
te evalueer.
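The CPC assumption described in the English abstract above, common eigenvectors with population-specific eigenvalues, can be sketched numerically. The construction below is illustrative only and does not implement Flury's estimation (FG) algorithm: two covariance matrices built from one shared eigenvector matrix are simultaneously diagonalized by those common eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(6)
p = 4

# A common orthogonal eigenvector matrix Q (random rotation via QR).
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))

# Two population covariance matrices sharing Q but with different eigenvalues.
Sigma1 = Q @ np.diag([4.0, 2.0, 1.0, 0.5]) @ Q.T
Sigma2 = Q @ np.diag([1.0, 3.0, 0.5, 2.0]) @ Q.T

# Under the CPC model, Q simultaneously diagonalizes both matrices.
D1 = Q.T @ Sigma1 @ Q
D2 = Q.T @ Sigma2 @ Q
print(np.allclose(D1, np.diag(np.diag(D1))), np.allclose(D2, np.diag(np.diag(D2))))
```

In practice only sample covariance matrices are available, and the thesis's selection methods decide whether such a common-eigenvector structure is tenable before the CPC estimators are applied.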
|