
Integrating local information for inference and optimization in machine learning

Zhu, Zhanxing, January 2016
In practice, machine learners often care about two key issues: how to obtain a more accurate answer with limited data, and how to handle large-scale data (often referred to as “Big Data” in industry) for efficient inference and optimization. One solution to the first issue is to aggregate learned predictions from diverse local models. For the second issue, integrating the information from subsets of the large-scale data is a proven way of reducing computation. In this thesis, we develop novel frameworks and schemes to handle several scenarios arising under each of these two salient issues. For aggregating diverse models – in particular, aggregating probabilistic predictions from different models – we introduce a spectrum of compositional methods, Rényi divergence aggregators, which are maximum-entropy distributions subject to biases from the individual models, with the Rényi divergence parameter depending on the bias. Experiments on various simulated and real-world datasets verify the findings. We also establish theoretical connections between Rényi divergence aggregators and machine learning markets with isoelastic utilities.

The second issue involves inference and optimization with large-scale data. We consider two important scenarios: optimizing a large-scale Convex-Concave Saddle Point problem with a Separable structure, referred to as Sep-CCSP, and large-scale Bayesian posterior sampling. Two settings of the Sep-CCSP problem are considered: strongly convex functions and non-strongly convex functions. We develop efficient stochastic coordinate descent methods for both cases, which allow fast parallel processing of large-scale data. Both theoretically and empirically, the developed methods are demonstrated to perform comparably with, and often better than, state-of-the-art methods.
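The aggregation idea above can be glimpsed through its simplest relative, the logarithmic opinion pool, which combines predictive distributions by a weighted geometric mean; the Rényi divergence aggregators of the thesis generalise this with a divergence parameter tied to each model's bias. The function name, weights, and toy predictions below are illustrative, not from the thesis.

```python
import numpy as np

def log_opinion_pool(preds, weights):
    """Aggregate probabilistic predictions by a weighted geometric mean.

    preds: array of shape (n_models, n_classes), each row a distribution.
    weights: nonnegative model weights summing to one.
    """
    log_agg = weights @ np.log(preds)   # weighted sum of log-probabilities
    agg = np.exp(log_agg)
    return agg / agg.sum()              # renormalise to a distribution

# Two models' predictions over three classes (illustrative numbers).
preds = np.array([[0.7, 0.2, 0.1],
                  [0.5, 0.3, 0.2]])
weights = np.array([0.6, 0.4])
aggregated = log_opinion_pool(preds, weights)
print(aggregated)
```

Because the pool multiplies probabilities, classes that any confident model deems unlikely are down-weighted more sharply than under simple averaging.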
To handle the scalability issue in Bayesian posterior sampling, we employ the stochastic approximation technique, i.e., touching only a small mini-batch of data items to approximate the full likelihood or its gradient. To deal with the subsampling error introduced by stochastic approximation, we propose a covariance-controlled adaptive Langevin thermostat that can effectively dissipate parameter-dependent noise while maintaining the desired target distribution. This method achieves a substantial speedup over popular alternative schemes for large-scale machine learning applications.
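The thermostat idea can be sketched with a plain stochastic-gradient Nosé-Hoover (adaptive Langevin) sampler on a one-dimensional standard normal target: the auxiliary variable `xi` adapts the friction so the kinetic energy matches its target, dissipating the unknown gradient noise. This is a minimal sketch only; the covariance-controlled variant proposed in the thesis additionally estimates the parameter-dependent noise covariance, which is omitted here, and all constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(theta, noise_std=0.5):
    # Gradient of -log N(theta | 0, 1), plus additive noise standing in
    # for mini-batch subsampling error.
    return theta + noise_std * rng.standard_normal()

def sgnht(n_steps=50_000, h=0.01, A=1.0, mu=1.0):
    """Stochastic-gradient Nose-Hoover thermostat (sketch)."""
    theta, p, xi = 0.0, 0.0, A
    samples = np.empty(n_steps)
    for t in range(n_steps):
        p += (-h * xi * p - h * noisy_grad(theta)
              + np.sqrt(2.0 * A * h) * rng.standard_normal())
        theta += h * p
        xi += (h / mu) * (p * p - 1.0)   # drive <p^2> towards 1
        samples[t] = theta
    return samples

samples = sgnht()
print(samples.mean(), samples.var())   # should be near 0 and 1
```

Note that `xi` settles around whatever total friction is needed to balance both the injected noise and the gradient noise, which is what lets the sampler hold the target temperature without knowing the noise level in advance.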

Toward the next generation of earthquake source models: a stochastic approach involving multiple data-types

Gombert, Baptiste, 23 March 2018
The explosion in the amount and variety of available geodetic, tsunami, and seismological observations offers an outstanding opportunity to develop new seismic source models. But these data are sensitive to different sources of uncertainty and provide heterogeneous information, which makes the solution of the inverse problem non-unique. In this thesis, we use a Bayesian sampling method to propose new slip models, which benefit from an objective weighting of the various datasets by combining observational and modelling errors. These models are less affected by data overfit and allow a realistic assessment of posterior uncertainties.
We apply this method to the study of slip processes occurring in three different tectonic contexts: the Landers earthquake (1992, Mw=7.3), the Ecuador-Colombia subduction zone which hosted the Pedernales earthquake (2016, Mw=7.8), and the intraslab Tehuantepec earthquake (2017, Mw=8.2). Through these analyses, we demonstrate how the study of the seismic cycle can benefit from rigorous uncertainty estimates and Bayesian sampling.
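The Bayesian sampling approach can be illustrated with a toy slip inversion: a two-parameter linear forward model d = Gm + noise, a Gaussian likelihood weighted by the data covariance, a positivity prior on slip, and a Metropolis sampler drawing from the posterior rather than returning a single best-fit model. The matrix `G`, the true slip, and the noise level are made-up stand-ins, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear forward model: data = G @ slip + noise (illustrative values).
G = np.array([[1.0, 0.5],
              [0.3, 1.2],
              [0.8, 0.8]])
true_slip = np.array([1.0, 2.0])
sigma = 0.1
data = G @ true_slip + sigma * rng.standard_normal(3)
Cd_inv = np.eye(3) / sigma**2          # inverse data covariance

def log_posterior(m):
    # Gaussian likelihood weighted by the data covariance,
    # flat prior restricted to non-negative slip.
    if np.any(m < 0.0):
        return -np.inf
    r = data - G @ m
    return -0.5 * r @ Cd_inv @ r

def metropolis(n_steps=20_000, step=0.05):
    m = np.array([0.5, 0.5])
    lp = log_posterior(m)
    chain = np.empty((n_steps, 2))
    for t in range(n_steps):
        prop = m + step * rng.standard_normal(2)
        lp_prop = log_posterior(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            m, lp = prop, lp_prop
        chain[t] = m
    return chain

chain = metropolis()
print(chain[5_000:].mean(axis=0))   # posterior mean, close to true_slip
```

The spread of the chain, not just its mean, is the point: the posterior samples quantify which slip combinations the data cannot distinguish, which is the realistic uncertainty assessment the abstract refers to.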
