401 |
Unsupervised Gaussian mixture models for the classification of outdoor environments using 3D terrestrial lidar data. Fernandes Maligo, Artur Otavio, 28 January 2016 (has links)
The processing of 3D lidar point clouds enables terrestrial autonomous mobile robots to build semantic models of the outdoor environments in which they operate. Such models are interesting because they encode qualitative information, and thus give a robot the ability to reason at a higher level of abstraction. At the core of a semantic modelling system lies the capacity to classify the sensor observations. We propose a two-layer classification model which relies strongly on unsupervised learning. The first, intermediary layer consists of a Gaussian mixture model. This model is determined in a training step in an unsupervised manner, and defines a set of intermediary classes which form a fine-grained partition of the classes present in the environment. The second, final layer consists of a grouping of the intermediary classes into final classes that are interpretable in the context of the target task. This grouping is determined by an expert during the training step, in a process which is supervised yet guided by the intermediary classes. The evaluation is based on two datasets acquired with different lidars and possessing different characteristics; it is quantitative for one of the datasets, and qualitative for the other. The system is designed following the standard learning procedure, based on training, validation and test steps, and operates as a standard classification pipeline. The system is simple, and requires no pre-processing or post-processing stages.
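The two-layer design described above can be sketched with off-the-shelf tools: an unsupervised Gaussian mixture provides the intermediary classes, and an expert-chosen grouping maps them to final classes. The features and the component-to-class mapping below are hypothetical stand-ins for illustration, not the thesis's actual ones.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy stand-in for per-point lidar features (e.g. local planarity, height).
X_train = rng.normal(size=(500, 3))

# Intermediary layer: unsupervised GMM with more components than final classes.
gmm = GaussianMixture(n_components=8, random_state=0).fit(X_train)

# Final layer: an expert groups intermediary components into final classes.
# This particular mapping is hypothetical; in the thesis it is chosen by
# inspecting what each intermediary class captures in the environment.
component_to_class = {0: 0, 1: 0, 2: 1, 3: 1, 4: 1, 5: 2, 6: 2, 7: 2}

def classify(X):
    comp = gmm.predict(X)  # intermediary class per point
    return np.array([component_to_class[c] for c in comp])  # final class

labels = classify(rng.normal(size=(100, 3)))
```

The training of the GMM needs no labels; only the small grouping table is supervised, which matches the division of labour the abstract describes.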
|
402 |
Computational studies of the structures, reactions, and energetics of selected cyclic and sterically crowded species. January 2003 (has links)
Cheng Mei-Fun. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references. / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgements --- p.iii / Table of Contents --- p.iv / List of Tables --- p.vi / List of Figures --- p.viii / Chapter Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- The Gaussian-3 Method --- p.1 / Chapter 1.2 --- The G3 Method with Reduced Møller-Plesset Order and Basis Set --- p.2 / Chapter 1.3 --- Density Functional Theory (DFT) --- p.3 / Chapter 1.4 --- Calculation of Thermodynamical Data --- p.3 / Chapter 1.5 --- Remark on the Location of Transition Structures --- p.3 / Chapter 1.6 --- Natural Bond Orbital (NBO) Analysis --- p.4 / Chapter 1.7 --- Scope of the Thesis --- p.4 / Chapter 1.8 --- References --- p.5 / Chapter Chapter 2 --- Heats of Formation for the Azine Series: A Gaussian-3 Study --- p.7 / Chapter 2.1 --- Introduction --- p.7 / Chapter 2.2 --- Methods of Calculation and Results --- p.8 / Chapter 2.3 --- Discussion --- p.8 / Chapter 2.4 --- Conclusion --- p.9 / Chapter 2.5 --- Publication Note --- p.10 / Chapter 2.6 --- References --- p.10 / Chapter Chapter 3 --- Heats of Formation for Some Boron Hydrides: A Gaussian-3 Study --- p.16 / Chapter 3.1 --- Introduction --- p.16 / Chapter 3.2 --- Methods of Calculation and Results --- p.18 / Chapter 3.3 --- Discussion --- p.19 / Chapter 3.4 --- Conclusion --- p.21 / Chapter 3.5 --- Publication Note --- p.21 / Chapter 3.6 --- References --- p.21 / Chapter Chapter 4 --- Structural and Energetics Studies of Tri- and Tetra-tert-butylmethane --- p.30 / Chapter 4.1 --- Introduction --- p.30 / Chapter 4.2 --- Methods of Calculation and Results --- p.32 / Chapter 4.3 --- Discussion --- p.34 / Chapter 4.3.1 --- Mono-tert-butylmethane --- p.34 / Chapter 4.3.2 --- Di-tert-butylmethane --- p.35 / Chapter 4.3.3 --- Tri-tert-butylmethane --- p.37 / Chapter 4.3.4 --- Tetra-tert-butylmethane --- p.38 / Chapter 4.4 --- Conclusion --- p.39 /
Chapter 4.5 --- Publication Note --- p.40 / Chapter 4.6 --- References --- p.40 / Chapter Chapter 5 --- A Computational Study of the Diels-Alder Reactions Involving Acenes: Reactivity and Aromaticity --- p.49 / Chapter 5.1 --- Introduction --- p.49 / Chapter 5.2 --- Methods of Calculation and Results --- p.50 / Chapter 5.3 --- Discussion --- p.51 / Chapter 5.4 --- Conclusion --- p.53 / Chapter 5.5 --- Publication Note --- p.53 / Chapter 5.6 --- References --- p.53 / Chapter Chapter 6 --- A Computational Study of the Charge-Delocalized and Charge-Localized Forms of the Croconate and Rhodizonate Dianions --- p.65 / Chapter 6.1 --- Introduction --- p.65 / Chapter 6.2 --- Methods of Calculation and Results --- p.67 / Chapter 6.3 --- Discussion --- p.68 / Chapter 6.3.1 --- Charge-Localized Forms of C5O52− (C2v) and C6O62− (C2v) --- p.68 / Chapter 6.3.2 --- Charge-Delocalized Forms of C5O52− (D5h) and C6O62− (D6h) --- p.71 / Chapter 6.4 --- Conclusion --- p.72 / Chapter 6.5 --- Publication Note --- p.73 / Chapter 6.6 --- References --- p.74 / Chapter Chapter 7 --- Conclusion --- p.89 / Appendix A --- p.90 / Appendix B --- p.92
|
403 |
Probabilistic machine learning for circular statistics : models and inference using the multivariate Generalised von Mises distribution. Wu Navarro, Alexandre Khae, January 2018 (has links)
Probabilistic machine learning and circular statistics—the branch of statistics concerned with data in the form of angles and directions—are two research communities that have grown mostly in isolation from one another. On the one hand, the probabilistic machine learning community has developed powerful frameworks for problems whose data live on Euclidean spaces, such as Gaussian processes, but has generally neglected the other topologies studied by circular statistics. On the other hand, the approximate inference frameworks from probabilistic machine learning have only recently started to reach the circular statistics landscape. This thesis aims to bridge the gap between these two fields by contributing models and approximate inference algorithms to both. In particular, we introduce the multivariate Generalised von Mises distribution (mGvM), which allows the use of kernels in circular statistics akin to Gaussian processes, and an augmented representation. These models cover a vast number of applications, comprising both latent variable modelling and regression of circular data. We then propose methods to conduct approximate inference in these models. In particular, we investigate the use of variational inference, Expectation Propagation and Markov chain Monte Carlo methods. The variational inference route taken is a mean-field approach that efficiently leverages the tractable conditionals of the mGvM and creates a baseline for comparison with other methods. An Expectation Propagation approach is then presented, drawing on the Expectation Consistent framework for Ising models and connecting the approximations used to the augmented model presented. In the final MCMC chapter, efficient Gibbs and Hamiltonian Monte Carlo samplers are derived for the mGvM and the augmented model.
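The univariate von Mises distribution, the building block that the mGvM generalises, can be illustrated with SciPy; the parameter values here are arbitrary, and the circular mean is used as a simple consistent estimator of the mean direction:

```python
import numpy as np
from scipy.stats import vonmises

# Univariate von Mises with concentration kappa and mean direction mu
# (radians); these values are chosen only for illustration.
kappa, mu = 2.0, 0.5
theta = vonmises.rvs(kappa, loc=mu, size=1000, random_state=0)

# Wrap samples to (-pi, pi], then estimate the mean direction via the
# circular mean of the embedded unit vectors.
theta = np.angle(np.exp(1j * theta))
circ_mean = np.angle(np.mean(np.exp(1j * theta)))
```

The need for the complex-exponential embedding, rather than a plain arithmetic mean, is exactly the kind of topological issue that separates circular statistics from Euclidean machine learning.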
|
404 |
Probabilistic modelling of cellular development from single-cell gene expression. Svensson, Valentine, January 2017 (has links)
The recent technology of single-cell RNA sequencing can be used to investigate molecular, transcriptional changes in cells as they develop. I reviewed the literature on the technology, and made a large-scale quantitative comparison of the different implementations of single-cell RNA sequencing to identify their technical limitations. I then investigated how to model transcriptional changes during cellular development. The general forms of expression changes with respect to development lead to nonparametric regression models, in the form of Gaussian processes. I used Gaussian process models to investigate expression patterns in early embryonic development, and compared the development of mice and humans. When using in vivo systems, the ground-truth time for each cell cannot be known; only a snapshot of cells, all at different stages of development, can be obtained. In an experiment measuring the transcriptome of zebrafish blood precursor cells undergoing the development from hematopoietic stem cells to thrombocytes, I used a Gaussian process latent variable model (GPLVM) to align the cells along the developmental trajectory. This way I could investigate which genes were driving the development, and characterise the different patterns of expression. With the latent variable strategy in mind, I designed an experiment to study a rare event of murine embryonic stem cells entering a state similar to very early embryos. The GPLVM can take advantage of the nonlinear expression patterns involved in this process. The results showed multiple activation events of genes as cells progress towards the rare state. An essential feature of cellular biology is that precursor cells can give rise to multiple types of progenitor cells through differentiation. In the immune system, naive T-helper cells differentiate into different sub-types depending on the infection. In an experiment where mice were infected by malaria, the T-helper cells develop into two cell types, Th1 and Tfh. I model this branching development using an Overlapping Mixture of Gaussian Processes, which lets me identify both which cells belong to which branch, and learn which genes are involved in the different branches. Researchers have now started performing high-throughput experiments in which the spatial context of gene expression is recorded. Similar to how I identify temporal expression patterns, spatial expression patterns can be identified nonparametrically. To enable researchers to make use of this technique, I developed a very fast method to perform a statistical test for spatial dependence, and illustrate the result on multiple data sets.
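A minimal sketch of the kind of nonparametric regression described above: a Gaussian process with an RBF-plus-noise kernel fit to a hypothetical one-gene expression profile over developmental time (the data below are synthetic, not from the thesis):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 10, 60))[:, None]        # developmental time
y = np.sin(t).ravel() + 0.1 * rng.normal(size=60)   # one gene's log-expression

# RBF kernel encodes "smooth change over development"; WhiteKernel absorbs
# measurement noise.  No parametric form for the trend is assumed.
gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.1)).fit(t, y)

t_new = np.linspace(0, 10, 100)[:, None]
mean, std = gp.predict(t_new, return_std=True)      # posterior mean and std
```

The posterior standard deviation is what makes the model useful for snapshot data: uncertainty widens where few cells were observed.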
|
405 |
Machine learning for materials science. Rouet-Leduc, Bertrand, January 2017 (has links)
Machine learning is a branch of artificial intelligence that uses data to automatically build inferences and models designed to generalise and make predictions. In this thesis, the use of machine learning in materials science is explored for two different problems: the optimisation of gallium nitride optoelectronic devices, and the prediction of material failure in the setting of laboratory earthquakes. Light-emitting diodes based on III-nitride quantum wells have become ubiquitous as a light source, owing to their direct band-gap, which covers UV, visible and infra-red light, and their very high quantum efficiency. This efficiency originates from the fact that most electronic transitions across the band-gap lead to the emission of a photon. At high currents, however, this efficiency drops sharply. In chapters 3 and 4, simulations are shown to provide an explanation for experimental results, shedding new light on this drop in efficiency. Chapter 3 provides a simple yet accurate model that explains the experimentally observed beneficial effect that silicon doping has on light-emitting diodes. Chapter 4 provides a model for the experimentally observed detrimental effect that certain V-shaped defects have on light-emitting diodes. These results pave the way for the association of simulations with detailed multi-microscopy. In the following chapters 5 to 7, it is shown that machine learning can leverage device simulations by replacing, in a targeted and efficient way, the very labour-intensive tasks of ensuring that the numerical parameters of the simulations lead to convergence and that the physical parameters reproduce experimental results. It is then shown that machine learning coupled with simulations can find optimal light-emitting-diode structures that have a greatly enhanced theoretical efficiency. These results demonstrate the power of machine learning for leveraging and automating the exploration of device structures in simulations. Material failure is a very broad problem encountered in a variety of fields, ranging from engineering to Earth sciences. The phenomenon stems from complex and multi-scale physics, and failure experiments can provide a wealth of data that can be exploited by machine learning. In chapter 8, it is shown that by recording the acoustic waves emitted during the failure of a laboratory fault, an accurate predictive model of failure can be built. The machine learning algorithm that is used retains the link with the physics of the experiment, and a new signal is thus discovered in the sound emitted by the fault. This new signal announces an upcoming laboratory earthquake, and is a signature of the stress state of the material. These results show that machine learning can help discover new signals in experiments where the amount of data is very large, and demonstrate a new method for the prediction of material failure.
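The failure-prediction setup described in chapter 8 can be sketched as a regression from statistical features of acoustic-emission windows to the time remaining before failure. Everything below is a synthetic stand-in: the features, the target, and the choice of a random-forest regressor are illustrative, not the thesis's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical features computed over windows of the acoustic signal
# (e.g. variance, kurtosis, percentiles) and a synthetic time-to-failure
# target that depends on one of them.
features = rng.normal(size=(300, 4))
time_to_failure = np.abs(features[:, 0]) + 0.1 * rng.normal(size=300)

# Tree ensembles keep a link to the physics via feature importances:
# one can inspect which statistics of the sound carry the signal.
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(features[:200], time_to_failure[:200])
pred = model.predict(features[200:])
```

Inspecting `model.feature_importances_` on real data is the step that, in the abstract's terms, "discovers a new signal" in the acoustic emissions.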
|
406 |
An incremental Gaussian mixture network for data stream classification in non-stationary environments. Diaz, Jorge Cristhian Chamby, January 2018 (has links)
Data stream classification poses many challenges for the data mining community when the environment is non-stationary. The greatest challenge in learning classifiers from data streams relates to adaptation to concept drifts, which occur as a result of changes in the underlying concepts. Two main ways to develop adaptive approaches are ensemble methods and incremental algorithms. Ensemble methods play an important role due to their modularity, which provides a natural way of adapting to change. Incremental algorithms are faster and have better anti-noise capacity than ensemble algorithms, but have more restrictions on concept-drifting data streams. Thus, it is a challenge to combine the flexibility and adaptation of an ensemble classifier in the presence of concept drift with the simplicity of use found in a single classifier with incremental learning. With this motivation, in this dissertation we propose an incremental, online and probabilistic algorithm for classification as an effort to tackle concept drift. The algorithm is called IGMN-NSE and is an adaptation of the IGMN algorithm. The two main contributions of IGMN-NSE in relation to IGMN are: improved predictive power for classification tasks, and adaptation to achieve good performance in non-stationary environments. Extensive studies on both synthetic and real-world data demonstrate that the proposed algorithm can track changing environments very closely, regardless of the type of concept drift.
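The core single-pass operation behind an incremental Gaussian mixture approach is updating a component's mean and covariance one sample at a time. A simplified sketch for a single component follows; it is not IGMN-NSE's full update rule (which also handles component creation and drift), just the running-statistics kernel such methods rely on:

```python
import numpy as np

class IncrementalGaussian:
    """Online mean/covariance estimation for one mixture component.

    A single pass over the stream suffices; no sample is stored."""

    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.cov = np.eye(dim)

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n  # running mean
        if self.n > 1:
            # Running (biased) covariance via a rank-1 correction.
            self.cov += (np.outer(delta, x - self.mean) - self.cov) / self.n

g = IncrementalGaussian(2)
rng = np.random.default_rng(0)
for x in rng.normal(loc=[1.0, -1.0], size=(1000, 2)):
    g.update(x)
```

Because each update costs O(d²) and discards the sample, this is the kind of primitive that makes single-pass stream learning feasible.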
|
407 |
Continuous reinforcement learning with incremental Gaussian mixture models. Pinto, Rafael Coimbra, January 2017 (has links)
This thesis' original contribution is a novel algorithm which integrates a data-efficient function approximator with reinforcement learning in continuous state spaces. The complete research includes the development of a scalable, online and incremental algorithm capable of learning from a single pass through the data. This algorithm, called Fast Incremental Gaussian Mixture Network (FIGMN), was employed as a sample-efficient function approximator for the state space of continuous reinforcement learning tasks, which, combined with linear Q-learning, results in competitive performance. Then, this same function approximator was employed to model the joint space of states and Q-values, all in a single FIGMN, resulting in a concise and data-efficient algorithm, i.e., a reinforcement learning algorithm that learns from very few interactions with the environment. A single episode is enough to learn the investigated tasks in most trials. The results are analysed in order to explain the properties of the obtained algorithm, and it is observed that the FIGMN function approximator brings some important advantages to reinforcement learning relative to conventional neural networks.
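Linear Q-learning over Gaussian state features, the combination the abstract describes, can be sketched as follows. Here the Gaussian basis is fixed rather than learned incrementally as in FIGMN, and the single transition is a toy example; both are illustrative assumptions.

```python
import numpy as np

# Fixed Gaussian radial features over a 1-D state space: a simplified
# stand-in for the learned FIGMN state model feeding a linear Q-learner.
centers = np.linspace(0, 1, 10)

def phi(s):
    return np.exp(-((s - centers) ** 2) / 0.02)

n_actions = 2
W = np.zeros((n_actions, centers.size))  # one weight row per action
alpha, gamma = 0.1, 0.95                 # learning rate, discount

def q(s):
    return W @ phi(s)                    # Q-values for all actions at s

# One toy transition (s, a, r, s'): standard Q-learning update on the
# linear weights of the taken action.
s, a, r, s_next = 0.3, 1, 1.0, 0.4
td_error = r + gamma * q(s_next).max() - q(s)[a]
W[a] += alpha * td_error * phi(s)
```

Because the features are localised, each update only meaningfully changes Q-values near the visited state, which is part of what makes such approximators sample-efficient.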
|
408 |
Finite Gaussian mixture and finite mixture-of-expert ARMA-GARCH models for stock price prediction. January 2003 (has links)
Tang Him John. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaves 76-80). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgment --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Background --- p.2 / Chapter 1.1.1 --- Linear Time Series --- p.2 / Chapter 1.1.2 --- Mixture Models --- p.3 / Chapter 1.1.3 --- EM algorithm --- p.6 / Chapter 1.1.4 --- Model Selection --- p.6 / Chapter 1.2 --- Main Objectives --- p.7 / Chapter 1.3 --- Outline of this thesis --- p.7 / Chapter 2 --- Finite Gaussian Mixture ARMA-GARCH Model --- p.9 / Chapter 2.1 --- Introduction --- p.9 / Chapter 2.1.1 --- "AR, MA, and ARMA" --- p.10 / Chapter 2.1.2 --- Stationarity --- p.11 / Chapter 2.1.3 --- ARCH and GARCH --- p.12 / Chapter 2.1.4 --- Gaussian mixture --- p.13 / Chapter 2.1.5 --- EM and GEM algorithms --- p.14 / Chapter 2.2 --- Finite Gaussian Mixture ARMA-GARCH Model --- p.16 / Chapter 2.3 --- Estimation of Gaussian mixture ARMA-GARCH model --- p.17 / Chapter 2.3.1 --- Autocorrelation and Stationarity --- p.20 / Chapter 2.3.2 --- Model Selection --- p.24 / Chapter 2.4 --- Experiments: First Step Prediction --- p.26 / Chapter 2.5 --- Chapter Summary --- p.28 / Chapter 2.6 --- Notations and Terminologies --- p.30 / Chapter 2.6.1 --- White Noise Time Series --- p.30 / Chapter 2.6.2 --- Lag Operator --- p.30 / Chapter 2.6.3 --- Covariance Stationarity --- p.31 / Chapter 2.6.4 --- Wold's Theorem --- p.31 / Chapter 2.6.5 --- Multivariate Gaussian Density function --- p.32 / Chapter 3 --- Finite Mixture-of-Expert ARMA-GARCH Model --- p.33 / Chapter 3.1 --- Introduction --- p.33 / Chapter 3.1.1 --- Mixture-of-Expert --- p.34 / Chapter 3.1.2 --- Alternative Mixture-of-Expert --- p.35 / Chapter 3.2 --- ARMA-GARCH Finite Mixture-of-Expert Model --- p.36 / Chapter 3.3 --- Estimation of Mixture-of-Expert ARMA-GARCH Model --- p.37 / Chapter 3.3.1 --- Model Selection --- p.38 / Chapter 3.4 --- Experiments: First Step Prediction --- p.41 / Chapter 3.5 --- Second Step and Third Step Prediction --- p.44 / Chapter 3.5.1 --- Calculating Second Step Prediction --- p.44 / Chapter 3.5.2 --- Calculating Third Step Prediction --- p.45 / Chapter 3.5.3 --- Experiments: Second Step and Third Step Prediction --- p.46 / Chapter 3.6 --- Comparison with Other Models --- p.50 / Chapter 3.7 --- Chapter Summary --- p.57 / Chapter 4 --- Stable Estimation Algorithms --- p.58 / Chapter 4.1 --- Stable AR(1) estimation algorithm --- p.59 / Chapter 4.2 --- Stable AR(2) Estimation Algorithm --- p.60 / Chapter 4.2.1 --- Real p1 and p2 --- p.61 / Chapter 4.2.2 --- Complex p1 and p2 --- p.61 / Chapter 4.2.3 --- Experiments for AR(2) --- p.63 / Chapter 4.3 --- Experiment with Real Data --- p.64 / Chapter 4.4 --- Chapter Summary --- p.65 / Chapter 5 --- Conclusion --- p.66 / Chapter 5.1 --- Further Research --- p.69 / Chapter A --- Equation Derivation --- p.70 / Chapter A.1 --- First Derivatives for Gaussian Mixture ARMA-GARCH Estimation --- p.70 / Chapter A.2 --- First Derivatives for Mixture-of-Expert ARMA-GARCH Estimation --- p.71 / Chapter A.3 --- First Derivatives for BYY Harmony Function --- p.72 / Chapter A.4 --- First Derivatives for stable estimation algorithms --- p.73 / Chapter A.4.1 --- AR(1) --- p.74 / Chapter A.4.2 --- AR(2) --- p.74 / Bibliography --- p.80
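The finite Gaussian mixture component of the models above can be illustrated in isolation (without the ARMA-GARCH dynamics) by fitting a two-component mixture to synthetic two-regime returns with scikit-learn; the regime parameters are invented for the sketch:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "returns" drawn from a calm and a volatile regime: a simple
# way to get the heavy tails that a single Gaussian cannot capture.
returns = np.concatenate([rng.normal(0, 0.01, 800),
                          rng.normal(0, 0.05, 200)])[:, None]

# EM (inside fit) recovers the two regimes' weights, means and variances.
gm = GaussianMixture(n_components=2, random_state=0).fit(returns)
stds = np.sqrt(gm.covariances_.ravel())
```

The fitted component standard deviations should separate into a small (calm) and a large (volatile) value, which is exactly the extra flexibility the mixture brings to return modelling.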
|
409 |
Value-at-risk analysis of portfolio return model using independent component analysis and Gaussian mixture model. January 2004 (has links)
Sen Sui. / Thesis submitted in: August 2003. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2004. / Includes bibliographical references (leaves 88-92). / Abstracts in English and Chinese. / Abstract --- p.ii / Acknowledgement --- p.iv / Dedication --- p.v / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Motivation and Objective --- p.1 / Chapter 1.2 --- Contributions --- p.4 / Chapter 1.3 --- Thesis Organization --- p.5 / Chapter 2 --- Background of Risk Management --- p.7 / Chapter 2.1 --- Measuring Return --- p.8 / Chapter 2.2 --- Objectives of Risk Measurement --- p.11 / Chapter 2.3 --- Simple Statistics for Measurement of Risk --- p.15 / Chapter 2.4 --- Methods for Value-at-Risk Measurement --- p.16 / Chapter 2.5 --- Conditional VaR --- p.18 / Chapter 2.6 --- Portfolio VaR Methods --- p.18 / Chapter 2.7 --- Coherent Risk Measure --- p.20 / Chapter 2.8 --- Summary --- p.22 / Chapter 3 --- Selection of Independent Factors for VaR Computation --- p.23 / Chapter 3.1 --- Mixture Convolution Approach Restated --- p.24 / Chapter 3.2 --- Procedure for Selection and Evaluation --- p.26 / Chapter 3.2.1 --- Data Preparation --- p.26 / Chapter 3.2.2 --- ICA Using JADE --- p.27 / Chapter 3.2.3 --- Factor Statistics --- p.28 / Chapter 3.2.4 --- Factor Selection --- p.29 / Chapter 3.2.5 --- Reconstruction and VaR Computation --- p.30 / Chapter 3.3 --- Result and Comparison --- p.30 / Chapter 3.4 --- Problem of Using Kurtosis and Skewness --- p.40 / Chapter 3.5 --- Summary --- p.43 / Chapter 4 --- Mixture of Gaussians and Value-at-Risk Computation --- p.45 / Chapter 4.1 --- Complexity of VaR Computation --- p.45 / Chapter 4.1.1 --- Factor Selection Criteria and Convolution Complexity --- p.46 / Chapter 4.1.2 --- Sensitivity of VaR Estimation to Gaussian Components --- p.47 / Chapter 4.2 --- Gaussian Mixture Model --- p.52 / Chapter 4.2.1 --- Concept and Justification --- p.52 / Chapter 4.2.2 --- Formulation and Method --- p.53 / Chapter 4.2.3 --- Result and Evaluation of Fitness --- p.55 / Chapter 4.2.4 --- Evaluation of Fitness using Z-Transform --- p.56 / Chapter 4.2.5 --- Evaluation of Fitness using VaR --- p.58 / Chapter 4.3 --- VaR Estimation using Convoluted Mixtures --- p.60 / Chapter 4.3.1 --- Portfolio Returns by Convolution --- p.61 / Chapter 4.3.2 --- VaR Estimation of Portfolio Returns --- p.64 / Chapter 4.3.3 --- Result and Analysis --- p.64 / Chapter 4.4 --- Summary --- p.68 / Chapter 5 --- VaR for Portfolio Optimization and Management --- p.69 / Chapter 5.1 --- Review of Concepts and Methods --- p.69 / Chapter 5.2 --- Portfolio Optimization Using VaR --- p.72 / Chapter 5.3 --- Contribution of the VaR by ICA/GMM --- p.76 / Chapter 5.4 --- Summary --- p.79 / Chapter 6 --- Conclusion --- p.80 / Chapter 6.1 --- Future Work --- p.82 / Chapter A --- Independent Component Analysis --- p.83 / Chapter B --- Gaussian Mixture Model --- p.85 / Bibliography --- p.88
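Once returns are modelled as a Gaussian mixture, Value-at-Risk can be read off as a quantile of the mixture distribution. A sketch with illustrative (invented) mixture parameters, inverting the mixture CDF by bisection for the 95% VaR:

```python
import numpy as np
from scipy.stats import norm

# Two-component Gaussian mixture return model; weights and scales are
# chosen only for illustration (a calm and a volatile component).
w = np.array([0.8, 0.2])
mu = np.array([0.0, 0.0])
sigma = np.array([0.01, 0.05])

def mixture_cdf(x):
    # The mixture CDF is the weighted sum of the component Gaussian CDFs.
    return float(np.sum(w * norm.cdf(x, mu, sigma)))

# 95% VaR = minus the 5% quantile of the return distribution; the CDF is
# monotonic, so bisection converges reliably.
lo, hi = -1.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mixture_cdf(mid) < 0.05:
        lo = mid
    else:
        hi = mid
var_95 = -lo
```

Because the mixture CDF is available in closed form, no simulation is needed: the quantile inversion itself is cheap, which is one motivation for fitting mixtures rather than sampling.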
|
410 |
Extensions of independent component analysis: towards applications. / CUHK electronic theses & dissertations collection. January 2005 (has links)
Independent component analysis (ICA) is a recent and powerful technique for recovering latent independent sources given only their mixtures. The basic ICA model assumes that sources are linearly mixed and mutually independent. In practice, the application and extension of the ICA model depend on the problem and the data to be investigated.
In this thesis, first we consider the problem of source separation of post-nonlinear (PNL) mixtures, which is an extension of ICA to the nonlinear mixing case. With a large number of parameters, existing methods are computationally demanding and may be prone to local optima. Based on the fact that linear mixtures of independent variables tend to be Gaussian, we develop a simple and efficient method for this problem, namely extended Gaussianization. With Gaussianization as preprocessing, this method approximates each linear mixture of independent sources by the Cornish-Fisher expansion with only two parameters. Inspired by the relationship between the PNL mixing model and the Wiener system, extended Gaussianization is also proposed for blind inversion of Wiener systems.
Next, we study the subband decomposition ICA (SDICA) model, which extends the basic ICA model to allow dependence between sources by assuming that only some narrow-band source sub-components are independent. In SDICA, it is difficult to determine the subbands of the independent source sub-components. We discuss the feasibility of performing SDICA in an adaptive manner. An adaptive method, called band selective ICA, is then proposed for this task. We also investigate the relationship between overcomplete ICA and SDICA and show that band selective ICA can solve overcomplete ICA problems with sources having specific frequency localizations. Experimental results on separating images of human faces as well as artificial data are presented to verify the power of band selective ICA.
Finally, we focus on GARCH models in finance, and show that estimation of univariate or multivariate GARCH models is actually a nonlinear ICA problem; maximizing the likelihood is equivalent to minimizing the statistical dependence in standardized residuals. ICA can then be used for factor extraction in multivariate factor GARCH models. We also develop some extensions of ICA for this task. These techniques for extracting factors from multivariate return series are compared both theoretically and experimentally. We find that the one based on conditional decorrelation between factors behaves best. / Zhang Kun. / "July 2005." / Adviser: Lai-Wan Chan. / Source: Dissertation Abstracts International, Volume: 67-07, Section: B, page: 3925. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (p. 218-234). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract in English and Chinese. / School code: 1307.
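The basic linear ICA model that the thesis extends can be illustrated with scikit-learn's FastICA on a synthetic mixture of two non-Gaussian sources; the sources and mixing matrix below are invented for the sketch:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
# Two independent, clearly non-Gaussian sources: a square wave and
# Laplacian noise.  Non-Gaussianity is what makes ICA identifiable.
S = np.c_[np.sign(np.sin(3 * t)), rng.laplace(size=2000)]
A = np.array([[1.0, 0.5], [0.5, 1.0]])  # unknown-to-the-algorithm mixing
X = S @ A.T                             # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)            # recovered sources (up to
                                        # permutation, sign and scale)
```

The permutation/scale ambiguity in `S_hat` is inherent to the linear model; the PNL and SDICA extensions in the abstract relax its linear-mixing and full-independence assumptions respectively.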
|