121 |
Application of shifted delta cepstral features for GMM language identification / Lareau, Jonathan. January 2006 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 2006. / Typescript. Includes bibliographical references (leaves 80-82).
|
122 |
Algorithms and data structures for cache-efficient computation : theory and experimental evaluation / Chowdhury, Rezaul Alam. January 1900 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2007. / Vita. Includes bibliographical references.
|
123 |
On portfolio construction through functional generation / Vervuurt, Alexander. January 2016 (has links)
One of the main research questions in financial mathematics is that of portfolio construction: how should one systematically invest one's wealth in a financial market? This problem has been tackled in numerous ways, typically through the modeling of market prices and the optimization of an investment objective. A recent approach to portfolio construction is that offered by Stochastic Portfolio Theory, in which a relatively general market model is assumed, and the portfolio selection criterion is to outperform a benchmark with probability one. In order to achieve this, Robert Fernholz developed the method of functional generation, which allows one to explicitly construct and study portfolios that depend deterministically on the currently observable prices. The typical example of such a strategy is the diversity-weighted portfolio, which we extend in the first chapter of this work with a negative-parameter variation. We show that several modifications of this portfolio outperform the market index in theory, under certain assumptions on the market, and we perform an empirical study that confirms this. In our second chapter, we develop a data-driven portfolio construction method that goes beyond functional generation, allowing for the inclusion of factors other than current prices. We empirically show that this Bayesian nonparametric approach, which utilizes Gaussian processes, leads to drastically improved performance compared to benchmark portfolios. Next, we establish a formal equivalence between the method of functional generation and the mathematical field of optimal transport. Our results fortify known relations between the two, and extend this connection to additive functional generation, a recent variation of the method. In Chapter 4, we apply our results to derive new properties and characterizations of functionally-generated wealth processes in very general market models. Finally, we develop methods for incorporating defaults into functional generation, improving its real-world implementability.
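For reference, the diversity-weighted portfolio mentioned above assigns to stock i the weight pi_i = mu_i^p / (mu_1^p + ... + mu_n^p), where mu_i is the stock's market weight and p is a fixed parameter; the extension studied in the first chapter takes p negative. The sketch below is a minimal illustration of this weighting rule with made-up capitalizations, not code from the thesis.

```python
import numpy as np

def diversity_weighted(market_caps, p):
    """Diversity-weighted portfolio weights pi_i = mu_i^p / sum_j mu_j^p,
    where mu_i are the market weights. Illustrative sketch only."""
    mu = np.asarray(market_caps, dtype=float)
    mu = mu / mu.sum()              # market weights mu_i
    w = mu ** p                     # raise each market weight to the power p
    return w / w.sum()              # normalise so the portfolio weights sum to one

# Hypothetical example with three stocks: p = 0.5 tilts the portfolio towards
# smaller stocks relative to the market portfolio, and the negative-parameter
# variant (p = -0.5) does so even more aggressively.
caps = [600.0, 300.0, 100.0]
print(diversity_weighted(caps, 0.5))
print(diversity_weighted(caps, -0.5))
```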
|
124 |
Kriging-based black-box global optimization : analysis and new algorithms / Mohammadi, Hossein. 11 April 2016 (has links)
Efficient Global Optimization (EGO) is regarded as the state-of-the-art algorithm for the global optimization of costly black-box functions. Nevertheless, the method has some difficulties, such as the ill-conditioning of the Gaussian process (GP) covariance matrix and slow convergence to the global optimum. The choice of the GP parameters is critical, as it controls the functional family of surrogates used by EGO, and their effect on EGO's performance needs further investigation. Finally, it is not clear that the way the GP is learned from data points in EGO is the most appropriate in the context of optimization. This work deals with the analysis and the treatment of these different issues. Firstly, this dissertation contributes to a better theoretical and practical understanding of the impact of regularization strategies on GPs, presents a new regularization approach based on distribution-wise GPs, and gives practical guidelines for choosing a regularization strategy in GP regression. Secondly, a new optimization algorithm is introduced that combines EGO with CMA-ES, a global yet convergent search method. The new algorithm, called EGO-CMA, uses EGO for early exploration and then CMA-ES for final convergence, and improves the performance of both EGO and CMA-ES. Thirdly, the effect of the GP parameters on EGO's performance is carefully analyzed, allowing a deeper understanding of their influence on the EGO iterates. Finally, a new self-adaptive EGO is presented, introducing a novel approach in which the parameters are learned directly from their contribution to the optimization itself.
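For concreteness, here is a minimal sketch of a single EGO iteration under standard assumptions: a GP surrogate with a Matérn kernel, a small diagonal "nugget" as a simple regularization of the covariance matrix, and the expected-improvement acquisition maximized over a candidate grid. It is illustrative only, not the thesis's implementation or its distribution-wise regularization.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def ego_step(X, y, candidates, nugget=1e-8):
    """One illustrative EGO iteration: fit a GP surrogate to the costly
    evaluations (X, y), then return the candidate maximising expected
    improvement. The 'alpha' term is the nugget added to the diagonal of
    the covariance matrix to keep it well conditioned."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=nugget,
                                  normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.min()                                  # current best observation
    z = (best - mu) / np.maximum(sigma, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    return candidates[np.argmax(ei)]

# Toy usage on a hypothetical 1-D objective.
f = lambda x: np.sin(3 * x) + 0.1 * x**2
X = np.array([[0.5], [2.0], [4.0]])
y = f(X).ravel()
cand = np.linspace(0.0, 5.0, 200).reshape(-1, 1)
print(ego_step(X, y, cand))   # next point to evaluate
```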
|
125 |
Gaussian process tools for modelling stellar signals and studying exoplanets / Rajpaul, Vinesh Maguire. January 2017 (has links)
The discovery of exoplanets represents one of the greatest scientific revolutions in history, and exoplanetary science has rapidly become uniquely positioned to address profound questions about the origins of life, and about humanity's place (and future) in the cosmos. Since the discovery of the first exoplanet over two decades ago, the radial velocity (RV) method has been one of the most productive techniques for discovering new planets. It has also become indispensable for characterising exoplanets detected via other techniques, notably transit photometry. Unfortunately, signals intrinsic to stars themselves - especially magnetic activity signals - can induce RV variations that can drown out or even mimic planetary signals. Modelling and thus mitigating these signals is notoriously difficult, which represents a major obstacle to using next-generation instruments to detect lower-mass planets, planets with longer periods, and planets around more magnetically active stars. Enter Gaussian processes (GPs), which have a number of features that make them very well suited to the joint modelling of stochastic activity processes and dynamical (e.g. planetary) signals. In this thesis, I leverage GPs to enable the study of smaller planets around a wider variety of stars than has previously been possible. In particular, I develop a principled and sophisticated Bayesian framework, based on GPs, for modelling RV time series jointly with ancillary activity-sensitive proxies, thus allowing activity signals to be constrained and disentangled from genuine planetary signals. I show that my framework succeeds even in cases where existing techniques would fail to detect planets, e.g. the case of a weak planetary signal with period identical to its host star's rotation period. In a first application of the framework, I demonstrate that Alpha Centauri Bb - until 2016, thought to be the closest exoplanet to Earth, and also the lowest minimum-mass exoplanet around a Sun-like star - was, in fact, an astrophysical false positive. Next, I use the framework to re-characterise the well-studied Kepler-10 system, thereby resolving a mystery surrounding the mass of planet Kepler-10c. I also use the framework to help discover or characterise various exoplanets. Finally, the activity modelling framework aside, I also present in outline form a few promising applications of GPs in the context of modelling stellar signals and studying exoplanets, viz. GPs for (i) enhanced characterisation of stellar rotation; (ii) generating realistic synthetic observations, and modelling in a systematic way the effects of an observing window function; and (iii) ultra-precise extraction of RV shifts directly from observed spectra, without requiring template cross-correlation.
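As an illustration of the kind of covariance function commonly used for stellar activity signals, the sketch below builds a quasi-periodic GP kernel and draws one synthetic signal from the corresponding prior. It is a generic, hypothetical example rather than the joint RV-plus-activity-proxies framework developed in the thesis, and all parameter values are made up.

```python
import numpy as np

def quasi_periodic_kernel(t1, t2, amp, evol, period, smooth):
    """Quasi-periodic covariance often used for stellar activity signals:
    k(t, t') = amp^2 * exp(-(t - t')^2 / (2 evol^2)
                           - sin^2(pi (t - t') / period) / (2 smooth^2)).
    'evol' sets the active-region evolution timescale, 'period' the stellar
    rotation period, and 'smooth' the harmonic complexity. Sketch only."""
    dt = t1[:, None] - t2[None, :]
    return amp**2 * np.exp(-dt**2 / (2 * evol**2)
                           - np.sin(np.pi * dt / period)**2 / (2 * smooth**2))

# Draw one synthetic 'activity-like' time series from the GP prior.
t = np.linspace(0.0, 100.0, 300)                       # days (hypothetical)
K = quasi_periodic_kernel(t, t, amp=3.0, evol=30.0, period=25.0, smooth=0.5)
rng = np.random.default_rng(0)
rv = rng.multivariate_normal(np.zeros(len(t)), K + 1e-8 * np.eye(len(t)))
print(rv[:5])   # first few samples of the smooth, quasi-periodic curve
```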
|
126 |
Development of a methodology for analysing the crystalline quality of single crystals [Desenvolvimento de uma metodologia de analise da qualidade cristalina de monocristais] / METAIRON, SABRINA. 09 October 2014 (has links)
Dissertation (M.Sc.)--Instituto de Pesquisas Energeticas e Nucleares, IPEN/CNEN-SP.
|
127 |
Probing the expansion history of the universe using Supernovae and Baryon Acoustic Oscillations / Ali, Sahba Yahya Hamid. January 2016 (has links)
Philosophiae Doctor - PhD / The standard model of cosmology (the ΛCDM model) has been very successful and is compatible with all observational data up to now. However, it remains an important task to develop and apply null tests of this model. These tests are based on observables that probe cosmic distances and cosmic evolution history. Supernovae observations use the so-called 'standard candle' property of SNIa to probe cosmic distances D(z). The evolution of the expansion rate H(z) is probed by the baryon acoustic oscillation (BAO) feature in the galaxy distribution, which serves as an effective 'standard ruler'. The observables D(z) and H(z) are used in various consistency tests of ΛCDM that have been developed. We review the consistency tests, also looking for possible new tests. Then the tests are applied, first using existing data, and then using mock data from future planned experiments. In particular, we use data from the recently commissioned Dark Energy Survey (DES) for SNIa. Gaussian Processes, and possibly other non-parametric methods, are used to reconstruct the derivatives of D(z) and H(z) that are needed to apply the null tests of the standard cosmological model. This allows us to estimate the current and future power of observations to probe the ΛCDM model, which is the foundation of modern cosmology. In addition, we present an improved model of the HI galaxy number counts and bias from semi-analytic simulations, and we use it to calculate the expected yield of HI galaxies from surveys with a variety of phase 1 and 2 SKA configurations. We illustrate the relative performance of the different surveys by forecasting errors on the radial and transverse scales of the BAO feature. We use the Fisher matrix method to estimate the error bars on the cosmological parameters from future SKA HI galaxy surveys. We find that the SKA phase 1 galaxy surveys will not be competitive with surveys such as the Baryon Oscillation Spectroscopic Survey (BOSS), whereas the full "billion galaxy survey" with SKA phase 2 will deliver the largest dark energy Figure of Merit of any current or future large-scale structure survey. / South African Square Kilometre Array Project (SKA) and German Academic Exchange Service (DAAD)
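One familiar null test of this kind, given here only as an illustration and not necessarily the specific diagnostic adopted in the thesis, is the Om(z) statistic built from H(z): for flat ΛCDM it is constant and equal to Ωm, so any reconstructed redshift dependence signals a deviation from the standard model. A minimal sketch with mock data:

```python
import numpy as np

def om_diagnostic(z, H, H0):
    """Om(z) = (H(z)^2/H0^2 - 1) / ((1+z)^3 - 1). For flat LCDM this is
    constant and equal to Omega_m, so any measured redshift dependence
    signals a deviation from the standard model. Illustrative sketch."""
    E2 = (np.asarray(H) / H0) ** 2
    return (E2 - 1.0) / ((1.0 + np.asarray(z)) ** 3 - 1.0)

# Mock consistency check: H(z) generated from a flat LCDM model with
# Omega_m = 0.3 should return roughly 0.3 at every redshift.
z = np.linspace(0.1, 2.0, 20)
H0, om = 70.0, 0.3                                    # hypothetical values
H = H0 * np.sqrt(om * (1 + z) ** 3 + 1 - om)
print(om_diagnostic(z, H, H0))                        # ~0.3 everywhere
```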
|
129 |
Stochastic volatility modeling of the Ornstein-Uhlenbeck type : pricing and calibration / Marshall, Jean-Pierre. 23 February 2010 (has links)
M.Sc.
|
130 |
Gaussian processes for state space models and change point detection / Turner, Ryan Darby. January 2012 (has links)
This thesis details several applications of Gaussian processes (GPs) for enhanced time series modeling. We first cover different approaches for using Gaussian processes in time series problems. These are extended to the state space approach to time series in two different problems. We also combine Gaussian processes with Bayesian online change point detection (BOCPD) to increase the generality of the Gaussian process time series methods. These methodologies are evaluated on predictive performance on six real-world data sets: three environmental, one financial, one biological, and one from industrial well drilling. Gaussian processes are capable of generalizing standard linear time series models. We cover two approaches: the Gaussian process time series model (GPTS) and the autoregressive Gaussian process (ARGP). We cover a variety of methods that greatly reduce the computational and memory complexity of Gaussian process approaches, which are generally cubic in computational complexity. Two different improvements to state space based approaches are covered. First, Gaussian process inference and learning (GPIL) generalizes linear dynamical systems (LDS), on which the Kalman filter is based, to general nonlinear systems for nonparametric system identification. Second, we address pathologies in the unscented Kalman filter (UKF): we use Gaussian process optimization (GPO) to learn UKF settings that minimize the potential for sigma point collapse. We show how to embed the aforementioned Gaussian process approaches to time series into a change point framework, in which old data from an old regime that hinders predictive performance is automatically and elegantly phased out. The computational improvements for Gaussian process time series approaches are of even greater use in the change point framework. We also present a supervised framework for learning a change point model when change point labels are available in training.
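As a small illustration of the autoregressive Gaussian process idea, the sketch below regresses x_t on its lagged values with an off-the-shelf GP; the function name and settings are hypothetical, and it uses exact inference rather than the reduced-complexity approximations covered in the thesis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_argp(series, order=2):
    """Autoregressive GP sketch: regress x_t on (x_{t-order}, ..., x_{t-1})
    with a GP, generalising a linear AR model to a nonparametric one.
    Illustrative only; exact GP inference scales cubically in the number
    of training points."""
    n = len(series)
    # Row j holds the 'order' values preceding series[j + order], oldest first.
    X = np.column_stack([series[i:n - order + i] for i in range(order)])
    y = series[order:]
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
    return GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# One-step-ahead prediction on a toy nonlinear series.
rng = np.random.default_rng(1)
x = np.zeros(200)
for t in range(1, 200):
    x[t] = 0.9 * np.sin(x[t - 1]) + 0.05 * rng.standard_normal()
gp = fit_argp(x, order=2)
print(gp.predict(x[-2:].reshape(1, -1)))   # hypothetical next-value forecast
```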
|