1

Gaussian Processes for Power System Monitoring, Optimization, and Planning

Jalali, Mana 26 July 2022 (has links)
The proliferation of renewables, electric vehicles, and power electronic devices calls for innovative approaches to learn, optimize, and plan the power system. The uncertain and volatile nature of the integrated components necessitates swift and probabilistic solutions. Gaussian process regression is a machine learning paradigm that provides closed-form predictions with quantified uncertainties. A key property of Gaussian processes is their natural ability to integrate the sensitivity of the labels with respect to the features, yielding improved accuracy. This dissertation tailors Gaussian process regression to three applications in power systems. First, a physics-informed approach is introduced to infer the grid dynamics from synchrophasor data with minimal network information. The suggested method is useful for a wide range of applications, including prediction, extrapolation, and anomaly detection, and the proposed framework accommodates heterogeneous noisy measurements with missing entries. Second, a learn-to-optimize scheme is presented in which Gaussian process regression predicts the optimal power flow minimizers given the grid conditions. The main contribution is leveraging sensitivities to expedite learning and achieve data efficiency without compromising computational efficiency. Third, Bayesian optimization is applied to solve a bi-level minimization used for strategic investment in electricity markets. This method models the cost of the outer problem as a Gaussian process and is applicable to non-convex and hard-to-evaluate objective functions. The designed algorithm shows a significant improvement in speed while attaining a lower cost than existing methods.

/ Doctor of Philosophy /

The proliferation of renewables, electric vehicles, and power electronic devices calls for innovative approaches to learn, optimize, and plan the power system. The uncertain and volatile nature of the integrated components necessitates swift and probabilistic solutions. This dissertation focuses on three practically important problems stemming from power system modernization. First, a novel approach is proposed that improves power system monitoring, the first and necessary step for stable operation of the network. The suggested method applies to a wide range of applications and can use heterogeneous and noisy measurements with missing entries. The second problem focuses on predicting the minimizers of an optimization task, and a computationally efficient framework is put forth to expedite this process. The third part of this dissertation identifies investment portfolios for electricity markets that yield maximum revenue and minimum cost.
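The closed-form predictions with quantified uncertainties that this abstract refers to can be illustrated by a minimal sketch of vanilla Gaussian process regression in plain NumPy. This is a generic illustration only, with a squared-exponential kernel and invented hyper-parameters; the dissertation's physics-informed kernels and sensitivity terms are not reproduced here.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between the rows of A and B."""
    sq_dist = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return variance * np.exp(-0.5 * sq_dist / lengthscale**2)

def gp_predict(X, y, Xs, noise=1e-2):
    """Closed-form GP posterior mean and variance at test points Xs."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xs, X)
    Kss = rbf_kernel(Xs, Xs)
    L = np.linalg.cholesky(K)                     # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks @ alpha                             # posterior mean
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss - v.T @ v)                  # posterior variance
    return mean, var

# Toy data: noisy observations of sin(x).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30)
Xs = np.linspace(-3, 3, 50)[:, None]
mean, var = gp_predict(X, y, Xs)
```

Both the mean and the variance come out of a single linear solve, which is why GP-based monitoring can attach error bars to every prediction at no extra cost.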
2

A Study of Adaptive Random Features Models in Machine Learning based on Metropolis Sampling

Bai, Bing January 2021 (has links)
An artificial neural network (ANN) is a machine learning model whose parameters, namely frequency parameters and amplitude parameters, are learned during training. A random features model is a special case of an ANN: its structure is the same as an ANN's, but the parameters are learned differently. In a random features model, the amplitude parameters are learned during training while the frequency parameters are sampled from some distribution. If this frequency distribution is well chosen, both models can approximate data well. Adaptive random Fourier features with Metropolis sampling is an enhanced random Fourier features model that selects an appropriate frequency distribution adaptively. This thesis studies Rectified Linear Unit (ReLU) and sigmoid features and combines them with the adaptive idea to obtain two further adaptive random features models. The results show that, with a particular set of hyper-parameters, the adaptive random ReLU features model can also approximate the data relatively well, though the adaptive random Fourier features model performs slightly better.
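The distinction the abstract draws — amplitudes fitted, frequencies merely sampled — can be sketched as follows. This is a hypothetical minimal example, not the thesis code: the frequency distribution is fixed to a standard normal rather than adapted by Metropolis sampling, and all names and hyper-parameters are invented for illustration.

```python
import numpy as np

def fit_random_features(X, y, n_features=300, activation="fourier", seed=0):
    """Sample frequencies from a fixed distribution, then fit only amplitudes."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal((X.shape[1], n_features))   # frequencies: sampled, never trained
    bias = rng.uniform(0, 2 * np.pi, n_features)

    def features(Z):
        pre = Z @ omega + bias
        if activation == "fourier":
            return np.cos(pre)
        if activation == "relu":
            return np.maximum(pre, 0.0)
        return 1.0 / (1.0 + np.exp(-pre))                   # sigmoid features

    # Amplitudes: the only trained parameters, found by linear least squares.
    amps, *_ = np.linalg.lstsq(features(X), y, rcond=None)
    return lambda Z: features(Z) @ amps

# Fit noiseless samples of sin(2x) and measure the training error.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(2 * X[:, 0])
model = fit_random_features(X, y, activation="fourier")
err = np.max(np.abs(model(X) - y))
```

Because only the amplitudes are trained, fitting reduces to a single linear least-squares problem; the adaptive methods of the thesis instead resample `omega` with a Metropolis step to improve the frequency distribution.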
3

Sketching for large-scale learning of mixture models

Keriven, Nicolas 12 October 2017 (has links)
Learning parameters from voluminous data can be prohibitive in terms of memory and computational requirements. Furthermore, new challenges arise from modern database architectures, such as the requirement that learning methods be amenable to streaming, parallel, and distributed computing. In this context, an increasingly popular approach is to first compress the database into a representation called a linear sketch, which satisfies all of the above requirements, and then learn the desired information using only this sketch, which can be significantly faster than using the full data if the sketch is small. In this thesis, we introduce a generic methodology to fit a mixture of probability distributions to the data using only a sketch of the database. The sketch is defined by combining two notions from the reproducing kernel literature, namely kernel mean embeddings and Random Features expansions. It is seen to correspond to linear measurements of the underlying probability distribution of the data, and the estimation problem is thus analyzed under the lens of Compressive Sensing (CS), in which a (traditionally finite-dimensional) signal is randomly measured and recovered. We extend CS results to our infinite-dimensional framework, give generic conditions for successful estimation, and apply this analysis to many problems, with a focus on mixture model estimation.

We base our method on the construction of random sketching operators that satisfy a Restricted Isometry Property (RIP) condition in the Banach space of finite signed measures with high probability. In a second part, we introduce a flexible heuristic greedy algorithm to estimate mixture models from a sketch. We apply it to synthetic and real data on three problems: the estimation of centroids from a sketch, for which it is seen to be significantly faster than k-means; Gaussian Mixture Model estimation, for which it is more efficient than Expectation-Maximization; and the estimation of mixtures of multivariate stable distributions, for which, to our knowledge, it is the only algorithm capable of performing such a task.
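The core object of the abstract — a sketch that is a linear measurement of the data's underlying distribution, built from a kernel mean embedding at random frequencies — can be illustrated with a toy sketch operator. This is a hypothetical minimal example, not the thesis implementation; it only shows that two samples from the same distribution yield close sketches while a different distribution does not.

```python
import numpy as np

def sketch(X, Omega):
    """Empirical kernel mean embedding at random frequencies.

    Averages the random Fourier feature exp(i * w^T x) over the whole
    dataset: one complex number per frequency, regardless of data size.
    """
    return np.exp(1j * X @ Omega).mean(axis=0)

rng = np.random.default_rng(0)
Omega = rng.standard_normal((2, 50))          # 50 random frequencies in R^2

A = rng.standard_normal((5000, 2))            # sample 1 from N(0, I)
B = rng.standard_normal((5000, 2))            # sample 2 from N(0, I)
C = rng.standard_normal((5000, 2)) + 3.0      # sample from a shifted distribution

same = np.linalg.norm(sketch(A, Omega) - sketch(B, Omega))
diff = np.linalg.norm(sketch(A, Omega) - sketch(C, Omega))
```

Each dataset of 5,000 points is compressed to 50 complex numbers, yet the sketch still separates the two distributions; the thesis's contribution is proving when such measurements preserve enough information (an RIP over measures) and recovering the mixture parameters from them.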
