491

Ionic Characterization of Laundry Detergents: Implications for Consumer Choice and Inland Freshwater Salinization

Mendoza, Kent Gregory 11 April 2024
Increased salinity in freshwater systems – also called the Freshwater Salinization Syndrome (FSS) – can have far-ranging implications for the natural and built environment, agriculture, and public health at large. Such risks are clearly on display in the Occoquan Reservoir – a drinking water source for roughly one million people in the northern Virginia/National Capital Region. Sodium concentrations in the Occoquan Reservoir are approaching levels that can affect taste and health. The Reservoir is also noteworthy as a flagship example of indirect potable reuse, which further adds complexity to understanding the sources of rising levels of sodium and other types of salinity. To help understand the role residential discharges might play in salinization of the Occoquan Reservoir, a suite of laundry detergent products was identified based upon survey data collected in the northern Virginia region. The ionic compositions of these products were then characterized using ion chromatography and inductively coupled plasma-mass spectrometry to quantify select ionic and elemental analytes. Sodium, chloride, and sulfate were consistently found in appreciable amounts. To comparatively characterize the laundry detergents, principal component analysis was employed to identify clusters of similar products. The physical formulation of the products was identified as a marker for their content, with dry formulations (free-flowing and encapsulated powders) being more enriched in sodium and sulfate. This result was corroborated by comparing nonparametric bootstrap intervals for individual analytes. The study's findings suggest an opportunity wherein consumer choice can play a role in mediating residential salt inputs in receiving bodies such as the Occoquan Reservoir. / Master of Science / Many streams, rivers, and other freshwater systems have become increasingly salty in recent decades. A rise in salinity can be problematic, stressing aquatic life, corroding pipes, and even enhancing the release of more pollutants into the water. This phenomenon, called Freshwater Salinization Syndrome, can threaten such systems' ability to serve as sources of drinking water, as is the case for the Occoquan Reservoir in northern Virginia. Serving roughly one million people, the Reservoir is notable for being one of the first in the country to purposely incorporate highly treated wastewater upstream of a drinking water supply. Despite the Reservoir's prominence, the reasons behind its rising salt levels are not well understood. This study sought to understand the role that individual residences could play when household products travel down the drain and are ultimately discharged into the watershed. Laundry detergents are potentially high-salt products. A survey of northern Virginians' laundry habits was conducted to understand local tastes and preferences. Informed by the survey, a suite of laundry detergents was chemically characterized to measure salt and element concentrations. The detergents were found to have notable amounts of sodium, chloride, and sulfate in particular, with sodium being the most abundant analyte in every detergent. However, not all detergents were equally salty; statistical tools revealed that dry formulations (such as powdered and powder-filled pak detergents) contributed more sodium and sulfate, among other things.
This study's findings suggest that laundry detergents could be contributing to Freshwater Salinization Syndrome in the Occoquan Reservoir, and that local consumers' choice of detergents could make a difference.
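The statistical comparison described above is straightforward to reproduce in outline. The following Python sketch uses entirely synthetic analyte concentrations (the thesis's actual measurements are not reproduced here) to show a percentile bootstrap interval for a mean analyte concentration and a PCA projection of standardized ionic profiles of the kind used to find clusters of similar products.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical analyte concentrations (mg/g of product) for a suite of
# detergents: rows = products, columns = [sodium, chloride, sulfate].
X = rng.lognormal(mean=2.0, sigma=0.5, size=(30, 3))

# Percentile bootstrap interval for one analyte's mean concentration.
def bootstrap_ci(x, n_boot=10_000, alpha=0.05):
    means = np.array([rng.choice(x, size=x.size, replace=True).mean()
                      for _ in range(n_boot)])
    return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

lo, hi = bootstrap_ci(X[:, 0])
print(f"sodium mean 95% CI: [{lo:.2f}, {hi:.2f}] mg/g")

# PCA on standardized ionic profiles; clusters of similar products can
# then be read off the first two principal component scores.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print(scores[:5])
```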
492

Statistical Methods for Multivariate Functional Data Clustering, Recurrent Event Prediction, and Accelerated Degradation Data Analysis

Jin, Zhongnan 12 September 2019
In this dissertation, we introduce three projects in machine learning and reliability applications after the general introductions in Chapter 1. The first project concentrates on multivariate sensory data, the second on a bivariate recurrent event process, and the third on thermal index (TI) estimation for accelerated destructive degradation test (ADDT) data, for which an R package is developed. All three projects are related to, and can be used to solve, certain reliability problems. Specifically, in Chapter 2, we introduce a clustering method for multivariate functional data. In order to cluster customized events extracted from multivariate functional data, we apply functional principal component analysis (FPCA) and use a model-based clustering method on a transformed matrix. A penalty term is imposed on the likelihood so that variable selection is performed automatically. In Chapter 3, we propose a covariate-adjusted model to predict the next event in a bivariate recurrent event system. Inspired by geyser eruptions in Yellowstone National Park, we consider two event types and model the relationship between their event gap times. External systematic conditions are taken into account through covariates. The proposed covariate-adjusted recurrent process (CARP) model is applied to the Yellowstone National Park geyser data. In Chapter 4, we compare estimation methods for the TI. In ADDT, when the accelerating variable is temperature, the TI is an important index of the reliability of materials. Three estimation methods are introduced: the least-squares method, a parametric model, and a semi-parametric model. An R package implements all three methods. Applications of the R functions are introduced in Chapter 5 with publicly available ADDT datasets. Chapter 6 includes conclusions and areas for future work. / Doctor of Philosophy / This dissertation focuses on three projects that are all related to machine learning and reliability. Specifically, in the first project, we propose a clustering method designed for events extracted from multivariate sensory data. When the customized events correspond to reliability issues, such as aging processes, clustering results can help us learn different event characteristics by examining events belonging to the same group. Applications include driving behavior segmentation based on vehicle sensory data, where multiple sensors measure vehicle conditions simultaneously and events are defined as vehicle stoppages. We also propose to conduct sensor selection through three different penalizations: individual, variable, and group. Our method can be applied to multi-dimensional sensory data clustering when optimal sensor design is also an objective. The second project introduces a covariate-adjusted model for a bivariate recurrent event process system. In such systems, events can occur repeatedly, and occurrences of each event type can affect the other with a certain dependence. Events in the system can be mechanical failures, which relate to reliability, and predictions of the next event time and type are usually of interest. Precise predictions of the next event time and type can help prevent serious safety and economic consequences of the upcoming event. We propose two CARP models that characterize both the marginal behaviors and the dependence structure of the bivariate system. We incorporate external information into the model so that model results are enhanced; a sketch of the clustering step from the first project follows below. The proposed model is evaluated in simulation studies and applied to geyser data from Yellowstone National Park. In the third project, we comprehensively discuss three estimation methods for the thermal index: the least-squares method, a parametric model, and a semi-parametric model. When temperature is the accelerating variable, the thermal index indicates the temperature at which a material can hold up to a certain time. In practice, estimating the thermal index precisely can prolong a product's lifetime by guiding the choice of usage temperature. The methods are evaluated in simulation studies and applied to publicly available datasets.
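As a rough illustration of the Chapter 2 pipeline, the sketch below applies ordinary PCA to discretized curves as a stand-in for FPCA and clusters the resulting score matrix with a Gaussian mixture. The dissertation's penalized likelihood for automatic variable selection is omitted, and all data are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Hypothetical events: 200 events, each a curve observed at 50 time
# points from one sensor (the real data stack multiple sensors).
t = np.linspace(0, 1, 50)
curves = (np.sin(2 * np.pi * np.outer(rng.uniform(0.5, 2.0, 200), t))
          + 0.1 * rng.standard_normal((200, 50)))

# FPCA surrogate on the discretized curves: scores form the
# transformed matrix that is then clustered.
fpca_scores = PCA(n_components=4).fit_transform(curves)

# Model-based clustering of the score matrix; the dissertation adds a
# penalty on the likelihood for automatic sensor selection, omitted here.
labels = GaussianMixture(n_components=3, random_state=1).fit_predict(fpca_scores)
print(np.bincount(labels))
```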
493

A New Hands-free Face-to-Face Video Communication Method: Profile-based frontal face video reconstruction

LI, Songyu January 2018
This thesis proposes a method to reconstruct a frontal facial video based on encoding done with the facial profile of another video sequence. The reconstructed facial video will have similar facial expression changes as the changes in the profile video. First, the profiles for both the reference video and the test video are captured by edge detection. Then, asymmetrical principal component analysis is used to model the correspondence between the profile and the frontal face. This allows encoding from a profile and decoding of the frontal face of another video. Another solution is to use dynamic time warping to match the profiles and select the best-matching corresponding frontal face frame for reconstruction. With this method, we can reconstruct the test frontal video so that it has similar changes in facial expression as the reference video. To improve the quality of the resulting video, Locally Linear Embedding is used to give it a smoother transition between frames.
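The dynamic time warping step can be illustrated with a minimal distance computation. The sketch below implements the classic DTW recursion over two hypothetical 1-D profile feature sequences; the thesis's actual profile descriptors and frame-selection logic are not reproduced.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) dynamic time warping distance
    between two 1-D profile feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical per-frame profile descriptors for a reference clip and a
# test clip; in the thesis, the best-matching reference frame would be
# selected for reconstruction, simplified here to a whole-sequence match.
ref = np.sin(np.linspace(0, 3, 40))
test = np.sin(np.linspace(0.2, 3.2, 35))
print(f"DTW distance: {dtw_distance(ref, test):.3f}")
```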
494

類典型相關分析及其在免試入學上採計成績之研究 / A canonical correlation analysis type approach to model a criterion for enrolling high school students

卓惠敏, Cho, Hui Min Unknown Date
The purpose of implementing twelve-year compulsory education is to promote students' balanced development, taking into account both their learning quality in junior high school and their daily performance. Because grading standards and methods vary among schools, achieving fairness in how in-school grades are counted has become an important issue. Dai (2011) considered the correlation between a composite in-school academic score and the total score of the BCTEST for junior high school students in order to determine the weights of the individual school subjects. This study extends that concept and method by also taking the BCTEST scale scores of the individual subjects into account: we analyze how the subject weights should be chosen so that the correlation between the composite in-school score and the composite BCTEST scale score is maximized. We hope to find a better model of students' overall learning performance and achievement over their three years in school, to serve as a reference for counting in-school grades in examination-free enrollment. The research method is based on the theory of canonical correlation analysis; however, because the weight constraints differ from the requirements of conventional canonical correlation analysis, we name the approach the "canonical correlation analysis type approach." In this approach, we prove that the optimal weights for the school subject scores and the BCTEST scale scores can be obtained by first computing the canonical correlation vectors via canonical correlation analysis, amending them with the Rao-Ghangurad method if necessary, and finally normalizing the resulting nonnegative canonical correlation vectors. This is an extremely convenient way to obtain the optimal weight vectors. In the case study, we found an interesting phenomenon: the optimal weight vectors for the school subject scores and the BCTEST scale scores are very close, i.e., a school subject and the test subject of the same name have almost the same weight. After comparing several composite in-school scores with different weight allocations, we also found that the equal-weighting model commonly used in schools performs quite well.
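A minimal sketch of the weighting recipe, using scikit-learn's CCA on synthetic grade data. The Rao-Ghangurad correction is omitted; negative entries are simply clipped before normalizing, which is an illustrative simplification, not the thesis's procedure.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(2)

# Hypothetical grades: 500 students, 5 in-school subject scores (X) and
# 5 BCTEST scale scores (Y), driven by a shared ability factor.
ability = rng.standard_normal((500, 1))
X = ability + 0.5 * rng.standard_normal((500, 5))
Y = ability + 0.5 * rng.standard_normal((500, 5))

cca = CCA(n_components=1).fit(X, Y)
wx, wy = cca.x_weights_[:, 0], cca.y_weights_[:, 0]

# Fix the arbitrary sign, then normalize so the nonnegative weights sum
# to one. (The thesis applies the Rao-Ghangurad correction when the raw
# canonical vectors have negative entries; clipping stands in for it.)
if wx.sum() < 0:
    wx, wy = -wx, -wy
wx, wy = np.clip(wx, 0, None), np.clip(wy, 0, None)
print("subject weights:", wx / wx.sum())
print("test weights:   ", wy / wy.sum())
```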
495

Outlier detection with ensembled LSTM auto-encoders on PCA-transformed financial data / Avvikelse-detektering med ensemble LSTM auto-encoders på PCA-transformerad finansiell data

Stark, Love January 2021
Financial institutions today generate large amounts of data, data that can contain interesting information to investigate in order to further the institution's economic growth. There is an interest in analyzing these data points, especially those that are anomalous relative to normal day-to-day activity. However, finding these outliers is not an easy task and is not possible to do manually, given the massive amounts of data generated daily. Previous work has explored the use of machine learning to find outliers in such financial datasets, and previous studies have shown that pre-processing usually accounts for a large share of the information loss. This work studies whether there is a proper balance in how the pre-processing is carried out: retaining as much information as possible while not leaving the data too complex for the machine learning models. The dataset used consisted of foreign exchange transactions supplied by the host company and was pre-processed using Principal Component Analysis (PCA). The main purpose of this work is to test whether an ensemble of Long Short-Term Memory recurrent neural networks (LSTM), configured as autoencoders, can be used to detect outliers in the data, and whether the ensemble is more accurate than a single LSTM autoencoder. Previous studies have shown that ensemble autoencoders can be more accurate than a single autoencoder, especially when SkipCells have been implemented (a configuration that skips over LSTM cells to make the model perform with more variation). A data point is considered an outlier if the LSTM model has trouble properly recreating it, i.e. a pattern that is hard to reconstruct, which flags it for further manual investigation. The results show that the ensembled LSTM model was more accurate than a single LSTM model at reconstructing the dataset and, by our definition of an outlier, more accurate at outlier detection. The results from the pre-processing experiments reveal methods for obtaining an optimal number of components for the data; one is to study the retained variance and accuracy of the PCA transformation against model performance for a given number of components. One conclusion from the work is that ensembled LSTM networks can prove very powerful, but that alternatives to the pre-processing, such as categorical embedding instead of PCA, should be explored.
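The component-selection idea in the conclusion can be sketched as follows. The data are a synthetic stand-in for the FX transactions, and the 95% retained-variance threshold is an illustrative choice, not the thesis's.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Hypothetical stand-in for the FX transaction features: 5000 rows,
# 20 correlated numeric columns.
latent = rng.standard_normal((5000, 5))
X = latent @ rng.standard_normal((5, 20)) + 0.3 * rng.standard_normal((5000, 20))

# Retained variance as a function of the number of components: pick the
# smallest count whose cumulative explained variance clears a threshold,
# then compare downstream model (e.g. LSTM autoencoder) performance.
pca = PCA().fit(StandardScaler().fit_transform(X))
cumvar = np.cumsum(pca.explained_variance_ratio_)
n_components = int(np.searchsorted(cumvar, 0.95) + 1)
print(f"components for 95% retained variance: {n_components}")
```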
496

Why Does Cash Still Exist? / Varför Existerar Fortfarande Kontanter?

Asplund, Oscar, Tzobras, Othon January 2018
With non-cash transactions on the rise and the debate about a future cashless society raging, cash is still used around the world to varying degrees. This thesis studies the behavioral determinants of consumers with regard to cash usage. Existing research has found several determinants of consumer behavior, and this study aims to combine that knowledge into one model that might explain people's payment-medium behavior. The method chosen was factor analysis, in order to validate the hypothesized model. The data set was analyzed using IBM SPSS Statistics, version 24.0.0.0. First, the data were deemed suitable after checking their adequacy through the KMO and Bartlett's tests, and further by inspecting the MSA table, where the variables scored quite high. Second, the factor analysis extracted eight components, some of them uncorrelated but the majority correlated. Finally, we found that our theoretical model holds, but we recommend further research on how locations determine cash usage. Moreover, we noted that some components are uncorrelated within certain socio-demographic groups, and we therefore recommend further research into the statistical validity of the model.
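A hedged sketch of the adequacy check and extraction: Bartlett's test of sphericity is implemented from its standard formula, and scikit-learn's FactorAnalysis stands in for the SPSS run. The KMO/MSA statistics are omitted, and the survey data are simulated.

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(4)

# Hypothetical survey responses: 400 respondents, 16 Likert-type items.
F = rng.standard_normal((400, 4))
X = F @ rng.standard_normal((4, 16)) + rng.standard_normal((400, 16))

# Bartlett's test of sphericity: tests whether the correlation matrix
# is an identity matrix, i.e. whether the data are factorable at all.
def bartlett_sphericity(X):
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    statistic = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    dof = p * (p - 1) / 2
    return statistic, stats.chi2.sf(statistic, dof)

chi2, pval = bartlett_sphericity(X)
print(f"Bartlett chi2={chi2:.1f}, p={pval:.3g}")

# Extract eight components, mirroring the thesis's eight-factor output.
fa = FactorAnalysis(n_components=8, random_state=4).fit(X)
print(fa.components_.shape)  # (8, 16) loading matrix
```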
497

Unsupervised Anomaly Detection and Root Cause Analysis in HFC Networks: A Clustering Approach

Forsare Källman, Povel January 2021
Following the significant transition from the traditional production industry to an information-based economy, the telecommunications industry was faced with an explosion of innovation, resulting in continuous change in user behaviour. The industry has made efforts to adapt to a more data-driven future, which has given rise to larger and more complex systems. Troubleshooting systems such as anomaly detection and root cause analysis are therefore essential for maintaining service quality and facilitating daily operations. This study explores the possibilities, benefits, and drawbacks of implementing cluster analysis for anomaly detection in hybrid fiber-coaxial (HFC) networks. Based on the literature on unsupervised anomaly detection and an assumption about the anomalous behaviour in HFC network data, the k-means, Self-Organizing Map, and Gaussian Mixture Model algorithms were implemented both with and without Principal Component Analysis. Analysis of the results demonstrated an increase in performance for all models when Principal Component Analysis was applied, with k-means outperforming both the Self-Organizing Map and the Gaussian Mixture Model. On this basis, it is recommended to apply Principal Component Analysis for clustering-based anomaly detection. Further research is necessary to identify whether cluster analysis is the most appropriate unsupervised anomaly detection approach.
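A minimal sketch of the recommended pipeline (PCA, then k-means, scoring points by distance to the nearest centroid) on synthetic telemetry. The cluster count, component count, and flagging threshold are illustrative assumptions, not the thesis's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)

# Hypothetical HFC telemetry: 2000 normal rows plus 20 injected anomalies.
X = rng.standard_normal((2000, 12))
X = np.vstack([X, rng.standard_normal((20, 12)) * 4 + 6])

# PCA before clustering, since the thesis found it improved every model.
Z = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(X))

# k-means anomaly score: distance to the nearest cluster centroid; the
# top fraction of scores is flagged for root cause analysis.
km = KMeans(n_clusters=8, n_init=10, random_state=5).fit(Z)
dist = np.linalg.norm(Z - km.cluster_centers_[km.labels_], axis=1)
threshold = np.quantile(dist, 0.99)
print(f"flagged {int((dist > threshold).sum())} suspected anomalies")
```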
498

Modelling Credit Spread Risk with a Focus on Systematic and Idiosyncratic Risk / Modellering av Kredit Spreads Risk med Fokus på Systematisk och Idiosynkratisk Risk

Korac Dalenmark, Maximilian January 2023
This thesis presents an application of Principal Component Analysis (PCA) and hierarchical PCA (HPCA) to credit spreads. The aim is to identify the underlying factors that drive the behavior of credit spreads, as well as the leftover idiosyncratic risk, which is crucial for risk management and for pricing credit derivatives. The study employs a dataset of credit spreads from the Swedish market for different maturities and ratings, split into covered bonds and corporate bonds, and performs PCA to extract the dominant factors that explain the variation in the covered-bond data. The results show that most of the systematic movement in Swedish covered bonds can be extracted using a mean that coincides with the first principal component. The report further explores the idiosyncratic risk of the credit spreads to deepen the understanding of credit spread dynamics and to improve risk management in credit portfolios, specifically with regard to new regulation in the form of the Fundamental Review of the Trading Book (FRTB). The thesis also explores a more general model of corporate bonds using HPCA and k-means clustering. Owing to data issues it is explored less fully, but there are useful findings, particularly regarding the feasibility of combining clustering with HPCA.
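The finding that the first principal component coincides with a cross-sectional mean can be illustrated on synthetic spread data with near-equal loadings on a common level factor; none of the numbers below come from the thesis.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)

# Hypothetical daily spread changes for 10 covered-bond buckets
# (maturity x rating), driven mostly by one common level factor.
common = rng.standard_normal((750, 1))
spreads = common @ np.full((1, 10), 1.0) + 0.2 * rng.standard_normal((750, 10))

pca = PCA(n_components=3).fit(spreads)
pc1 = pca.components_[0]
pc1 = pc1 if pc1.sum() > 0 else -pc1  # fix the arbitrary sign

# If PC1 carries the systematic movement, its loadings are nearly equal,
# so the PC1 score tracks the cross-sectional mean; the residual after
# removing PC1 proxies the idiosyncratic risk.
print("PC1 loadings:", np.round(pc1, 3))
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
corr = np.corrcoef(spreads @ pc1, spreads.mean(axis=1))[0, 1]
print(f"corr(PC1 score, cross-sectional mean) = {corr:.3f}")
```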
499

A Multilinear (Tensor) Algebraic Framework for Computer Graphics, Computer Vision and Machine Learning

Vasilescu, M. Alex O. 09 June 2014
This thesis introduces a multilinear algebraic framework for computer graphics, computer vision, and machine learning, particularly for the fundamental purposes of image synthesis, analysis, and recognition. Natural images result from the multifactor interaction between the imaging process, the scene illumination, and the scene geometry. We assert that a principled mathematical approach to disentangling and explicitly representing these causal factors, which are essential to image formation, is through numerical multilinear algebra, the algebra of higher-order tensors. Our new image modeling framework is based on (i) a multilinear generalization of principal components analysis (PCA), (ii) a novel multilinear generalization of independent components analysis (ICA), and (iii) a multilinear projection for use in recognition that maps images to the multiple causal factor spaces associated with their formation. Multilinear PCA employs a tensor extension of the conventional matrix singular value decomposition (SVD), known as the M-mode SVD, while our multilinear ICA method involves an analogous M-mode ICA algorithm. As applications of our tensor framework, we tackle important problems in computer graphics, computer vision, and pattern recognition; in particular, (i) image-based rendering, specifically introducing the multilinear synthesis of images of textured surfaces under varying view and illumination conditions, a new technique that we call "TensorTextures", as well as (ii) the multilinear analysis and recognition of facial images under variable face shape, view, and illumination conditions, a new technique that we call "TensorFaces". In developing these applications, we introduce a multilinear image-based rendering algorithm and a multilinear appearance-based recognition algorithm. As a final, non-image-based application of our framework, we consider the analysis, synthesis and recognition of human motion data using multilinear methods, introducing a new technique that we call "Human Motion Signatures".
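A compact sketch of the M-mode SVD (also known as the higher-order SVD) on a small synthetic data tensor. The helper names `unfold` and `m_mode_svd` are illustrative, not the thesis's code, and the tensor dimensions are invented.

```python
import numpy as np

def unfold(T: np.ndarray, mode: int) -> np.ndarray:
    """Mode-n unfolding: move `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def m_mode_svd(T: np.ndarray):
    """M-mode SVD (HOSVD): one matrix SVD per mode gives the mode
    factor matrices; the core tensor is T contracted with their
    transposes along each mode."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
         for m in range(T.ndim)]
    core = T
    for m, Um in enumerate(U):
        core = np.moveaxis(
            np.tensordot(Um.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core, U

# Hypothetical image data tensor: people x views x illuminations x pixels.
rng = np.random.default_rng(7)
T = rng.standard_normal((5, 4, 3, 64))
core, factors = m_mode_svd(T)
print(core.shape, [u.shape for u in factors])
```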
