1

Accelerating longitudinal spin fluctuation theory for iron at high temperature using a machine learning method

Arale Brännvall, Marian January 2020
In the development of materials, understanding their properties is crucial. For magnetic materials, magnetism is a key property that needs to be accounted for. There are multiple factors behind the phenomenon of magnetism, one being the effect of atomic vibrations on longitudinal spin fluctuations. This effect can be investigated through density functional theory simulations that calculate energy landscapes. Through such simulations, the energy landscapes have been found to depend on the magnetic background and the positions of the atoms. However, when simulating a supercell of many atoms, calculating energy landscapes for all atoms consumes many hours on a supercomputer. In this thesis, the possibility of using machine learning models to accelerate the approximation of energy landscapes is investigated. The material under investigation is body-centered cubic iron in the paramagnetic state at 1043 K. Machine learning enables statistical predictions on new data based on patterns found in a previous set of data. Kernel ridge regression is used as the machine learning method. An important issue when training a machine learning model is the representation of the data in the so-called descriptor (feature vector representation) or, more specifically in this case, how the environment of an atom in a supercell is accounted for and represented properly. Four different descriptors are developed and compared to investigate which one yields the best result and why. Apart from comparing the descriptors, the results obtained with machine learning models are compared to those obtained with other methods of approximating the energy landscapes. The machine learning models are also tested in a combined atomistic spin dynamics and ab initio molecular dynamics (ASD-AIMD) simulation, where they are used to approximate energy landscapes and, from those, magnetic moment magnitudes at 1043 K. The results of these simulations are compared to two other cases: one where the magnetic moment magnitudes are set to a constant value and one where they are set to their magnitudes at 0 K. From these investigations it is found that using machine learning methods to approximate the energy landscapes decreases the errors to a large degree compared to the other approximation methods investigated. Some weaknesses of the respective descriptors were detected; if these are accounted for in future work, the errors could be lowered further.
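As a rough illustration of the workflow this abstract describes (not the thesis code), the sketch below fits a kernel ridge regression model that maps per-atom descriptor vectors to sampled energy-landscape values; the descriptor contents, array shapes and hyperparameters are placeholder assumptions.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Placeholder data: each row stands in for a descriptor of one atom's local
# magnetic and structural environment; each target is an energy-landscape
# value that would come from a DFT calculation.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))   # hypothetical descriptor vectors
y = rng.normal(size=500)         # hypothetical energies (eV)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Kernel ridge regression, the method named in the abstract; the RBF kernel
# and the alpha/gamma values are assumptions that would normally be tuned.
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.1)
model.fit(X_train, y_train)

print("test MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```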
2

Predicting Reactor Instability Using Neural Networks

Hubert, Hilborn January 2022
The study of instabilities in boiling water reactors is of significant importance to the safety with which they can be operated, as these instabilities can cause damage to the reactor, posing risks to both equipment and personnel. The instabilities that concern this paper are progressive growths in the oscillating power of boiling water reactors. As the thermal power is oscillatory, it is important to be able to identify whether or not the power amplitude is stable. The main focus of this paper has been the development of a neural network estimator of these instabilities, fitting a non-linear model function to data by estimating its parameters. In doing this, the ambition was to optimize the networks to the point that they can deliver near "best-guess" estimations of the parameters which define these instabilities, evaluating the usefulness of such networks when applied to problems like this. The goal was to design both MLP (Multi-Layer Perceptron) and SVR/KRR (Support Vector Regression/Kernel Ridge Regression) models and improve them to the point that they provide reliable and useful information about the waves in question. This goal was accomplished only in part, as the SVR/KRR models proved to have some difficulty in ascertaining the phase shift of the waves. Overall, however, these networks prove very useful in this kind of task, calculating the different parameters of the studied waves with a reasonable degree of confidence.
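To make the parameter-estimation idea concrete, here is a minimal sketch (not the thesis setup) in which an MLP is trained on synthetic waveforms of an assumed form A·exp(b·t)·sin(ω·t + φ) and outputs the four parameters; the model function, parameter ranges and noise level are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)

def wave(A, b, w, phi):
    # Assumed instability model: an oscillation with exponentially growing
    # (or decaying) amplitude.
    return A * np.exp(b * t) * np.sin(w * t + phi)

# Synthetic training set: waveforms with known parameters plus measurement noise.
params = np.column_stack([
    rng.uniform(0.5, 2.0, 2000),    # amplitude A
    rng.uniform(-0.2, 0.3, 2000),   # growth/decay rate b
    rng.uniform(1.0, 5.0, 2000),    # angular frequency w
    rng.uniform(0.0, np.pi, 2000),  # phase shift phi
])
X = np.array([wave(*p) for p in params]) + rng.normal(scale=0.05, size=(2000, t.size))

# Multi-output MLP: input is the sampled waveform, output is (A, b, w, phi).
mlp = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500, random_state=1)
mlp.fit(X, params)

test_wave = wave(1.2, 0.1, 3.0, 0.7)
print("estimated (A, b, w, phi):", mlp.predict(test_wave.reshape(1, -1))[0])
```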
3

Comparison of different models for forecasting of Czech electricity market

Kunc, Vladimír January 2017
There is a demand for decision support tools that can model the electricity markets and allow forecasting of the hourly electricity price. Many different approaches, such as artificial neural networks or support vector regression, are used in the literature. This thesis provides a comparison of several different estimators under one setting using available data from the Czech electricity market. The resulting comparison of over 5000 different estimators led to a selection of several best-performing models. The role of historical weather data (temperature, dew point and humidity) is also assessed within the comparison, and it was found that while the inclusion of weather data might lead to overfitting, it is beneficial under the right circumstances. The best-performing approach was Lasso regression estimated using modified LARS.
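A hedged sketch of the winning model family, Lasso estimated with LARS, here via scikit-learn's LassoLars on hypothetical lagged-price and weather features; the feature construction and the synthetic series are placeholders, not the thesis pipeline or data.

```python
import numpy as np
from sklearn.linear_model import LassoLars
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)
n_hours = 2000

# Placeholder hourly series standing in for price and weather covariates.
price = 30.0 + np.cumsum(rng.normal(size=n_hours))
temperature = rng.normal(10.0, 5.0, n_hours)
dew_point = temperature - rng.uniform(0.0, 5.0, n_hours)
humidity = rng.uniform(30.0, 100.0, n_hours)

# Features: price 24 h and 168 h earlier plus current weather; target: current price.
X = np.column_stack([price[144:-24], price[:-168],
                     temperature[168:], dew_point[168:], humidity[168:]])
y = price[168:]

split = int(0.8 * len(y))
model = LassoLars(alpha=0.01)   # Lasso solved with the LARS algorithm
model.fit(X[:split], y[:split])

print("test MAE:", mean_absolute_error(y[split:], model.predict(X[split:])))
print("coefficients:", model.coef_)   # zeroed coefficients mark dropped features
```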
4

Machine Learning of Crystal Formation Energies with Novel Structural Descriptors / Maskininlärning av kristallers formationsenergier

Bratu, Claudia January 2017
To assist technological advancement, it is important to continue the search for new materials. The stability of a crystal structure is closely connected to its formation energy. By calculating the formation energies of theoretical crystal structures it is possible to find new stable materials. However, the number of possible structures is so large that traditional methods relying on quantum mechanics, such as Density Functional Theory (DFT), require too much computational time to be viable in such a project. A presented alternative to such calculations is machine learning. Machine learning is an umbrella term for algorithms that can use information gained from one set of data to predict properties of new, similar data. Feature vector representations (descriptors) are used to present data to the machine in an appropriate manner. Thus far, no combination of machine learning method and feature vector representation has been established as general and accurate enough to be of practical use for accelerating the phase diagram calculations necessary for predicting material stability. It is important that the method predicts all types of structures equally well, regardless of stability, composition, or geometrical structure. In this thesis, the performances of different feature vector representations were compared to each other. The machine learning method used was primarily Kernel Ridge Regression, implemented in Python. The training and validation were performed on two different datasets and subsets of these. The representation which consistently yielded the lowest cross-validated error was one using the Voronoi tessellation of the structure, by Ward et al. [Phys. Rev. B 96, 024104 (2017)]. The next best was an experimental representation called SLATM, presented by Huang and von Lilienfeld [arXiv:1707.04146], which is partially based on the radial distribution function. The Voronoi representation achieved an MAE of 0.16 eV/atom at a training set size of 3534 for one of the sets, and 0.28 eV/atom at a training set size of 10086 for the other. The effect of separating linear and non-linear energy contributions was evaluated using the sinusoidal and Coulomb representations. Separating these improved the error for small training set sizes, but the effect diminishes as the training set size increases. The results from this thesis indicate that further work is still required for machine learning to be used effectively in the search for new materials.
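To make the descriptor-comparison procedure concrete, the sketch below cross-validates kernel ridge regression on a placeholder feature matrix standing in for one of the representations (Voronoi, SLATM, sinusoidal or Coulomb); the descriptor values, kernel choice and hyperparameters are assumptions, not the thesis settings.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Placeholder dataset: each row stands in for a fixed-length descriptor of a
# crystal structure; each target is its formation energy (eV/atom).
X = rng.normal(size=(1000, 50))
y = rng.normal(size=1000)

# Kernel ridge regression; the Laplacian kernel and regularization are assumptions.
model = KernelRidge(kernel="laplacian", alpha=1e-4, gamma=1e-2)

# Cross-validated MAE, the kind of error measure used to compare descriptors.
scores = cross_val_score(model, X, y, scoring="neg_mean_absolute_error", cv=5)
print(f"cross-validated MAE: {-scores.mean():.3f} eV/atom")
```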
5

Forecasting hourly electricity consumption for sets of households using machine learning algorithms

Linton, Thomas January 2015
To address inefficiency, waste, and the negative consequences of electricity generation, companies and government entities are looking to behavioural change among residential consumers. To drive behavioural change, consumers need better feedback about their electricity consumption. A monthly or quarterly bill provides the consumer with almost no useful information about the relationship between their behaviours and their electricity consumption. Smart meters are now widely dispersed in developed countries and they are capable of providing electricity consumption readings at an hourly resolution, but this data is mostly used as a basis for billing and not as a tool to assist the consumer in reducing their consumption. One component required to deliver innovative feedback mechanisms is the capability to forecast hourly electricity consumption at the household scale. The work presented by this thesis is an evaluation of the effectiveness of a selection of kernel-based machine learning methods at forecasting the hourly aggregate electricity consumption for different-sized sets of households. The work of this thesis demonstrates that k-Nearest Neighbour Regression and Gaussian Process Regression are the most accurate methods within the constraints of the problem considered. In addition to accuracy, the advantages and disadvantages of each machine learning method are evaluated, and a simple comparison of each algorithm's computational performance is made. / För att ta itu med ineffektivitet, avfall, och de negativa konsekvenserna av elproduktion så vill företag och myndigheter se beteendeförändringar bland hushållskonsumenter. För att skapa beteendeförändringar så behöver konsumenterna bättre återkoppling när det gäller deras elförbrukning. Den nuvarande återkopplingen i en månads- eller kvartalsfaktura ger konsumenten nästan ingen användbar information om hur deras beteenden relaterar till deras konsumtion. Smarta mätare finns nu överallt i de utvecklade länderna och de kan ge en mängd information om bostäders konsumtion, men denna data används främst som underlag för fakturering och inte som ett verktyg för att hjälpa konsumenterna att minska sin konsumtion. En komponent som krävs för att leverera innovativa återkopplingsmekanismer är förmågan att förutse elförbrukningen på hushållsskala. Arbetet som presenteras i denna avhandling är en utvärdering av noggrannheten hos ett urval av kärnbaserade maskininlärningsmetoder för att förutse den sammanlagda förbrukningen för olika stora uppsättningar av hushåll. Arbetet i denna avhandling visar att "k-Nearest Neighbour Regression" och "Gaussian Process Regression" är de mest exakta metoderna inom problemets begränsningar. Förutom noggrannhet, så görs en utvärdering av fördelar, nackdelar och prestanda hos varje maskininlärningsmetod.
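A minimal sketch of the comparison described in this abstract, assuming hourly aggregate consumption with simple lag and hour-of-day features; the synthetic load series and feature choices are placeholders rather than the thesis dataset.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(4)
hours = np.arange(24 * 90)  # 90 days of hourly readings

# Placeholder aggregate consumption for a set of households: daily cycle plus noise.
load = 5.0 + 2.0 * np.sin(2 * np.pi * hours / 24) + rng.normal(scale=0.5, size=hours.size)

# Features: consumption 24 h and 168 h earlier, plus hour of day.
X = np.column_stack([load[144:-24], load[:-168], hours[168:] % 24])
y = load[168:]
split = int(0.8 * len(y))

models = {
    "k-NN regression": KNeighborsRegressor(n_neighbors=10),
    "Gaussian process regression": GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                                            normalize_y=True),
}
for name, model in models.items():
    model.fit(X[:split], y[:split])
    print(name, "MAE:", mean_absolute_error(y[split:], model.predict(X[split:])))
```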
6

Accelerating bulk material property prediction using machine learning potentials for molecular dynamics : predicting physical properties of bulk Aluminium and Silicon / Acceleration av materialegenskapers prediktion med hjälp av maskininlärda potentialer för molekylärdynamik

Sepp Löfgren, Nicholas January 2021
In this project machine learning (ML) interatomic potentials are trained and used in molecular dynamics (MD) simulations to predict the physical properties of total energy, mean squared displacement (MSD) and specific heat capacity for systems of bulk Aluminium and Silicon. The interatomic potentials investigated are potentials trained using the ML models kernel ridge regression (KRR) and moment tensor potentials (MTPs). The simulations using these ML potentials are then compared with results obtained from ab-initio simulations using the gold-standard method of density functional theory (DFT), as implemented in the Vienna ab-initio simulation package (VASP). The results show that the MTP simulations reach accuracy comparable to the DFT simulations for the total energy and MSD for Aluminium, with errors on the order of meV and 10⁻⁵ Å², respectively. The specific heat capacity is not reasonably replicated for Aluminium. The MTP simulations do not reasonably replicate the studied properties for the Silicon system. The KRR models are implemented in the most direct way, and do not yield reasonably low errors even when trained on all available 10000 time steps of DFT training data. On the other hand, the MTPs only require training on approximately 100 time steps to replicate the physical properties of Aluminium with accuracy comparable to DFT. After being trained on 100 time steps, the trained MTPs achieve mean absolute errors on the order of 10⁻³ and 10⁻¹ for the energy per atom and force magnitude predictions, respectively, for Aluminium, and 10⁻³ and 10⁻² for Silicon. At the same time, the MTP simulations require fewer core hours to simulate the same number of time steps as the DFT simulations. In conclusion, MTPs could very likely play a role in accelerating both materials simulations themselves and, subsequently, the emergence of the data-driven materials design and informatics paradigm.
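One of the compared properties, the mean squared displacement, can be extracted from any MD trajectory in the same way regardless of whether the forces came from DFT, a KRR model or an MTP; below is a minimal sketch with a placeholder trajectory array standing in for a real simulation output.

```python
import numpy as np

def mean_squared_displacement(positions):
    """MSD(t) averaged over atoms, relative to the initial configuration.

    positions: array of shape (n_steps, n_atoms, 3) with unwrapped coordinates in Å.
    """
    displacements = positions - positions[0]               # (n_steps, n_atoms, 3)
    return (displacements ** 2).sum(axis=2).mean(axis=1)   # (n_steps,)

# Placeholder trajectory: a random walk standing in for an MD run driven by a
# DFT, KRR or MTP potential.
rng = np.random.default_rng(5)
trajectory = np.cumsum(rng.normal(scale=0.01, size=(1000, 64, 3)), axis=0)

msd = mean_squared_displacement(trajectory)
print("MSD at the final step (Å²):", msd[-1])
```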
7

Analysis of the human corneal shape with machine learning

Bouazizi, Hala 01 1900
Cette thèse cherche à examiner les conditions optimales dans lesquelles les surfaces cornéennes antérieures peuvent être efficacement pré-traitées, classifiées et prédites en utilisant des techniques de modélisation géométrique (MG) et d’apprentissage automatique (AU). La première étude (Chapitre 2) examine les conditions dans lesquelles la modélisation géométrique peut être utilisée pour réduire la dimensionnalité des données utilisées dans un projet d’apprentissage automatique. Quatre modèles géométriques ont été testés pour leur précision et leur rapidité de traitement : deux modèles polynomiaux (P) – polynômes de Zernike (PZ) et polynômes d’harmoniques sphériques (PHS) – et deux modèles de fonctions rationnelles (R) : fonctions rationnelles de Zernike (RZ) et fonctions rationnelles d’harmoniques sphériques (RSH). Il est connu que les modèles PHS et RZ sont plus précis que les modèles PZ pour un même nombre de coefficients (J), mais on ignore si les modèles PHS performent mieux que les modèles RZ, et si, de manière plus générale, les modèles HS sont plus précis que les modèles R, ou l’inverse. Et, en prenant en compte leur temps de traitement, est-ce que les modèles les plus précis demeurent les plus avantageux? Considérant des valeurs de J (nombre de coefficients du modèle) relativement basses pour respecter les contraintes de dimensionnalité propres aux tâches d’apprentissage automatique, nous avons établi que les modèles HS (PHS et RSH) étaient tous deux plus précis que les modèles Z correspondants (PZ et RZ), et que l’avantage de précision conféré par les modèles HS était plus important que celui octroyé par les modèles R. Par ailleurs, les courbes de temps de traitement en fonction de J démontrent qu’alors que les modèles P sont traités en temps quasi-linéaires, les modèles R le sont en temps polynomiaux. Ainsi, le modèle RSH est le plus précis, mais aussi le plus lent (un problème qui peut en partie être remédié en appliquant une procédure de pré-optimisation). Le modèle PZ était de loin le plus rapide, et il demeure une option intéressante pour le développement de projets. Le modèle PHS constitue le meilleur compromis entre la précision et la rapidité. La classification des cornées selon des paramètres cliniques a une longue tradition, mais la visualisation des effets moyens de ces paramètres sur la forme de la cornée par des cartes topographiques est plus récente. Dans la seconde étude (Chapitre 3), nous avons construit un atlas de cartes d’élévations moyennes pour différentes variables cliniques qui pourrait s’avérer utile pour l’évaluation et l’interprétation des données d’entrée (bases de données) et de sortie (prédictions, clusters, etc.) dans des tâches d’apprentissage automatique, entre autres. Une base de données constituée de plusieurs milliers de surfaces cornéennes antérieures normales enregistrées sous forme de matrices d’élévation de 101 × 101 points a d’abord été traitée par modélisation géométrique pour réduire sa dimensionnalité à un nombre de coefficients optimal dans une optique d’apprentissage automatique. Les surfaces ainsi modélisées ont été regroupées en fonction de variables cliniques de forme, de réfraction et de démographie. Puis, pour chaque groupe de chaque variable clinique, une surface moyenne a été calculée et représentée sous forme de carte d’élévations faisant référence à sa SMA (sphère la mieux ajustée).
Après avoir validé la conformité de la base de données avec la littérature par des tests statistiques (ANOVA), l’atlas a été vérifié cliniquement en examinant si les transformations de formes cornéennes présentées dans les cartes pour chaque variable étaient conformes à la littérature. C’était le cas. Les applications possibles d’un tel atlas sont discutées. La troisième étude (Chapitre 4) traite de la classification non-supervisée (clustering) de surfaces cornéennes antérieures normales. Le clustering cornéen est un domaine récent en ophtalmologie. La plupart des études font appel aux techniques d’extraction des caractéristiques pour réduire la dimensionnalité de la base de données cornéennes. Le but est généralement d’automatiser le processus de diagnostic cornéen, en particulier en ce qui a trait à la distinction entre les cornées normales et les cornées irrégulières (kératocônes, Fuchs, etc.), et dans certains cas, de distinguer différentes sous-classes de cornées irrégulières. L’étude de clustering proposée ici se concentre plutôt sur les cornées normales afin de mettre en relief leurs regroupements naturels. Elle a recours à la modélisation géométrique pour réduire la dimensionnalité de la base de données, utilisant des polynômes de Zernike, connus pour leur interprétativité transparente (chaque terme polynomial est associé à une caractéristique cornéenne particulière) et leur bonne précision pour les cornées normales. Des méthodes de différents types ont été testées lors de prétests (méthodes de clustering dur (hard) ou souple (soft), linéaires ou non-linéaires). Ces méthodes ont été testées sur des surfaces modélisées naturelles (non-normalisées) ou normalisées, avec ou sans traitement d’extraction de traits, à l’aide de différents outils d’évaluation (scores de séparabilité et d’homogénéité, représentations par cluster des coefficients de modélisation et des surfaces modélisées, comparaisons statistiques des clusters sur différents paramètres cliniques). Les résultats obtenus par la meilleure méthode identifiée, k-means sans extraction de traits, montrent que les clusters produits à partir de surfaces cornéennes naturelles se distinguent essentiellement en fonction de la courbure de la cornée, alors que ceux produits à partir de surfaces normalisées se distinguent en fonction de l’axe cornéen. La dernière étude présentée dans cette thèse (Chapitre 5) explore différentes techniques d’apprentissage automatique pour prédire la forme de la cornée à partir de données cliniques. La base de données cornéennes a d’abord été traitée par modélisation géométrique (polynômes de Zernike) pour réduire sa dimensionnalité à de courts vecteurs de 12 à 20 coefficients, une fourchette de valeurs potentiellement optimales pour effectuer de bonnes prédictions selon des prétests. Différentes méthodes de régression non-linéaires, tirées de la bibliothèque scikit-learn, ont été testées, incluant gradient boosting, Gaussian process, kernel ridge, random forest, k-nearest neighbors, bagging, et multi-layer perceptron. Les prédicteurs proviennent des variables cliniques disponibles dans la base de données, incluant des variables géométriques (diamètre horizontal de la cornée, profondeur de la chambre cornéenne, côté de l’œil), des variables de réfraction (cylindre, sphère et axe) et des variables démographiques (âge, genre).
Un test de régression a été effectué pour chaque modèle de régression, défini comme la sélection d’une des 256 combinaisons possibles de variables cliniques (les prédicteurs), d’une méthode de régression, et d’un vecteur de coefficients de Zernike d’une certaine taille (entre 12 et 20 coefficients, les cibles). Tous les modèles de régression testés ont été évalués à l’aide de scores RMSE établissant la distance entre les surfaces cornéennes prédites (les prédictions) et vraies (les topographies cornéennes brutes). Les meilleurs d’entre eux ont été validés sur l’ensemble de données randomisé 20 fois pour déterminer avec plus de précision lequel d’entre eux est le plus performant. Il s’agit de gradient boosting utilisant toutes les variables cliniques comme prédicteurs et 16 coefficients de Zernike comme cibles. Les prédictions de ce modèle ont été évaluées qualitativement à l’aide d’un atlas de cartes d’élévations moyennes élaborées à partir des variables cliniques ayant servi de prédicteurs, qui permet de visualiser les transformations moyennes d’un groupe à l’autre pour chaque variable. Cet atlas a permis d’établir que les cornées prédites moyennes sont remarquablement similaires aux vraies cornées moyennes pour toutes les variables cliniques à l’étude. / This thesis aims to investigate the best conditions in which the anterior corneal surface of normal corneas can be preprocessed, classified and predicted using geometric modeling (GM) and machine learning (ML) techniques. The focus is on the anterior corneal surface, which is mainly responsible for the refractive power of the cornea. Dealing with preprocessing, the first study (Chapter 2) examines the conditions in which GM can best be applied to reduce the dimensionality of a dataset of corneal surfaces to be used in ML projects. Four types of geometric models of corneal shape were tested regarding their accuracy and processing time: two polynomial (P) models – Zernike polynomial (ZP) and spherical harmonic polynomial (SHP) models – and two corresponding rational function (R) models – Zernike rational function (ZR) and spherical harmonic rational function (SHR) models. SHP and ZR are both known to be more accurate than ZP as corneal shape models for the same number of coefficients, but which type of model is the most accurate between SHP and ZR? And is an SHR model, which is both an SH model and an R model, even more accurate? Also, does modeling accuracy come at the cost of processing time, an important issue for testing large datasets as required in ML projects? Focusing on low J values (number of model coefficients) to address these issues in consideration of dimensionality constraints that apply in ML tasks, it was found, based on a number of evaluation tools, that SH models were both more accurate than their Z counterparts, that R models were both more accurate than their P counterparts and that the SH advantage was more important than the R advantage. Processing time curves as a function of J showed that P models were processed in quasilinear time, R models in polynomial time, and that Z models were faster than SH models. Therefore, while SHR was the most accurate geometric model, it was the slowest (a problem that can partly be remedied by applying a preoptimization procedure). ZP was the fastest model, and with normal corneas, it remains an interesting option for testing and development, especially for clustering tasks due to its transparent interpretability.
The best compromise between accuracy and speed for ML preprocessing is SHP. The classification of corneal shapes with clinical parameters has a long tradition, but the visualization of their effects on the corneal shape with group maps (average elevation maps, standard deviation maps, average difference maps, etc.) is relatively recent. In the second study (Chapter 3), we constructed an atlas of average elevation maps for different clinical variables (including geometric, refraction and demographic variables) that can be instrumental in the evaluation of ML task inputs (datasets) and outputs (predictions, clusters, etc.). A large dataset of normal adult anterior corneal surface topographies recorded in the form of 101×101 elevation matrices was first preprocessed by geometric modeling to reduce the dimensionality of the dataset to a small number of Zernike coefficients found to be optimal for ML tasks. The modeled corneal surfaces of the dataset were then grouped in accordance with the clinical variables available in the dataset transformed into categorical variables. An average elevation map was constructed for each group of corneal surfaces of each clinical variable in their natural (non-normalized) state and in their normalized state by averaging their modeling coefficients to get an average surface and by representing this average surface in reference to the best-fit sphere in a topographic elevation map. To validate the atlas thus constructed in both its natural and normalized modalities, ANOVA tests were conducted for each clinical variable of the dataset to verify their statistical consistency with the literature before verifying whether the corneal shape transformations displayed in the maps were themselves visually consistent. This was the case. The possible uses of such an atlas are discussed. The third study (Chapter 4) is concerned with the use of a dataset of geometrically modeled corneal surfaces in an ML task of clustering. The unsupervised classification of corneal surfaces is recent in ophthalmology. Most of the few existing studies on corneal clustering resort to feature extraction (as opposed to geometric modeling) to achieve the dimensionality reduction of the dataset. The goal is usually to automate the process of corneal diagnosis, for instance by distinguishing irregular corneal surfaces (keratoconus, Fuchs, etc.) from normal surfaces and, in some cases, by classifying irregular surfaces into subtypes. Complementary to these corneal clustering studies, the proposed study resorts mainly to geometric modeling to achieve dimensionality reduction and focuses on normal adult corneas in an attempt to identify their natural groupings, possibly in combination with feature extraction methods. Geometric modeling was based on Zernike polynomials, known for their interpretative transparency and sufficiently accurate for normal corneas. Different types of clustering methods were evaluated in pretests to identify the most effective at producing neatly delimited clusters that are clearly interpretable. Their evaluation was based on clustering scores (to identify the best number of clusters), polar charts and scatter plots (to visualize the modeling coefficients involved in each cluster), average elevation maps and average profile cuts (to visualize the average corneal surface of each cluster), and statistical cluster comparisons on different clinical parameters (to validate the findings in reference to the clinical literature).
K-means, applied to geometrically modeled surfaces without feature extraction, produced the best clusters, both for natural and normalized surfaces. While the clusters produced with natural corneal surfaces were based on the corneal curvature, those produced with normalized surfaces were based on the corneal axis. In each case, the best number of clusters was four. The importance of curvature and axis as grouping criteria in corneal data distribution is discussed. The fourth study presented in this thesis (Chapter 5) explores the ML paradigm to verify whether accurate predictions of normal corneal shapes can be made from clinical data, and how. The database of normal adult corneal surfaces was first preprocessed by geometric modeling to reduce its dimensionality into short vectors of 12 to 20 Zernike coefficients, found to be in the range of appropriate numbers to achieve optimal predictions. The nonlinear regression methods examined from the scikit-learn library were gradient boosting, Gaussian process, kernel ridge, random forest, k-nearest neighbors, bagging, and multilayer perceptron. The predictors were based on the clinical variables available in the database, including geometric variables (best-fit sphere radius, white-to-white diameter, anterior chamber depth, corneal side), refraction variables (sphere, cylinder, axis) and demographic variables (age, gender). Each possible combination of regression method, set of clinical variables (used as predictors) and number of Zernike coefficients (used as targets) defined a regression model in a prediction test. All the regression models were evaluated based on their mean RMSE score (establishing the distance between the predicted corneal surfaces and the raw topographic true surfaces). The best model identified was further qualitatively assessed based on an atlas of predicted and true average elevation maps by which the predicted surfaces could be visually compared to the true surfaces on each of the clinical variables used as predictors. It was found that the best regression model was gradient boosting using all available clinical variables as predictors and 16 Zernike coefficients as targets. The most explicative predictor was the best-fit sphere radius, followed by the side and refractive variables. The average elevation maps of the true anterior corneal surfaces and the predicted surfaces based on this model were remarkably similar for each clinical variable.
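A minimal sketch of the clustering step described above, assuming each cornea has already been reduced to a short vector of Zernike coefficients; the coefficient values are synthetic placeholders, and the choice of four clusters mirrors the number reported in the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(7)

# Placeholder dataset: each row is a cornea modeled as a short vector of Zernike coefficients.
coefficients = rng.normal(size=(3000, 12))

# K-means without feature extraction, the setup reported to work best;
# four clusters matches the best number found in the study.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=7)
labels = kmeans.fit_predict(coefficients)

# A separability score of the kind used to compare candidate numbers of clusters.
print("silhouette score:", silhouette_score(coefficients, labels))
print("cluster sizes:", np.bincount(labels))
```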
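And a hedged sketch of the best-performing prediction setup: gradient boosting mapping clinical predictors to a 16-element vector of Zernike coefficients, wrapped for multi-output regression. The clinical feature values are synthetic placeholders, and the RMSE here is computed directly on the coefficient vectors rather than on reconstructed corneal surfaces as in the thesis.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(6)
n = 2000

# Placeholder clinical predictors: geometric, refraction and demographic variables.
X = np.column_stack([
    rng.normal(7.8, 0.25, n),     # best-fit sphere radius (mm)
    rng.normal(11.8, 0.4, n),     # white-to-white diameter (mm)
    rng.normal(3.1, 0.3, n),      # anterior chamber depth (mm)
    rng.integers(0, 2, n),        # eye side (0 = left, 1 = right)
    rng.normal(0.0, 2.0, n),      # sphere (D)
    rng.normal(-0.5, 0.5, n),     # cylinder (D)
    rng.uniform(0.0, 180.0, n),   # axis (degrees)
    rng.integers(18, 80, n),      # age (years)
    rng.integers(0, 2, n),        # gender
])
Y = rng.normal(size=(n, 16))      # placeholder targets: 16 Zernike coefficients

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=6)

# GradientBoostingRegressor is single-output, so one model is fitted per coefficient.
model = MultiOutputRegressor(GradientBoostingRegressor(random_state=6))
model.fit(X_train, Y_train)

rmse = np.sqrt(mean_squared_error(Y_test, model.predict(X_test)))
print("RMSE over the Zernike coefficients:", rmse)
```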
