1

Restructuring Controllers to Accommodate Plant Nonlinearities

Sahare, Kushal 21 March 2018 (has links)
This thesis explores controller restructuring for improved closed-loop performance of nonlinear plants using a gradient-based method of symbolic adaptation, the Model Structure Adaptation Method (MSAM). The adaptation starts from a linear controller designed for the linearized model of the nonlinear plant. This controller is then restructured into a series of nonlinear candidate controllers and adapted iteratively toward a desired closed-loop response. A notable feature of the adaptation method is its ability to quantify structural perturbations to the controllers; this quantification is important in scaling the structural Jacobian used in gradient-based adaptation of the candidate controllers. To investigate this, two plants with unknown nonlinearities, viz. a nonlinear valve and a nonlinear inverted pendulum, are chosen. Furthermore, the properties of the restructured controllers obtained for the two systems (stability, the effect of measurement noise, reachability, and scalability), together with algorithmic issues of MSAM, are studied and compared against the starting controller.
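The gradient-based adaptation loop described above can be sketched in miniature. Everything below is invented for illustration (the plant dynamics, the cubic controller term, the gains, and the finite-difference scheme): it adapts a single structural coefficient of a candidate controller toward a desired closed-loop response, whereas MSAM quantifies and scales full structural perturbations via a structural Jacobian.

```python
# Hypothetical plant with a cubic nonlinearity; the candidate controller
# u = kp*e + c*e**3 is a linear controller restructured with one nonlinear
# term, and c is adapted by gradient descent on a response-matching cost.
def closed_loop(c, steps=50, kp=1.2):
    x, out = 0.0, []
    for _ in range(steps):
        e = 1.0 - x                      # unit-step reference
        u = kp * e + c * e ** 3          # restructured (nonlinear) controller
        x += 0.1 * (u - 0.5 * x ** 3)    # invented nonlinear plant update
        out.append(x)
    return out

desired = closed_loop(0.8)               # pretend this is the target response

def cost(c):
    return sum((a - b) ** 2 for a, b in zip(closed_loop(c), desired))

# Finite-difference gradient descent on the structural coefficient c,
# with backtracking so each accepted step reduces the cost.
c, lr, h = 0.0, 0.2, 1e-4
for _ in range(200):
    g = (cost(c + h) - cost(c - h)) / (2 * h)
    step = lr * g
    while abs(step) > 1e-12 and cost(c - step) > cost(c):
        step *= 0.5
    c -= step
print(round(c, 3))
```

The adapted coefficient moves toward the value used to generate the target response, illustrating only the gradient-matching step of the method.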
2

Neuro-Symbolic Distillation of Reinforcement Learning Agents

Abir, Farhan Fuad 01 January 2024 (has links) (PDF)
In the past decade, reinforcement learning (RL) has achieved breakthroughs across various domains, from surpassing human performance in strategy games to enhancing the training of large language models (LLMs) with human feedback. However, RL has yet to gain widespread adoption in mission-critical fields such as healthcare and autonomous vehicles. This is primarily attributed to the inherent lack of trust, explainability, and generalizability of neural networks in deep reinforcement learning (DRL) agents. While neural DRL agents leverage the power of neural networks to solve specific tasks robustly and efficiently, this often comes at the cost of explainability and generalizability. In contrast, pure symbolic agents maintain explainability and trust but often underperform on high-dimensional data. In this work, we developed a method to distill explainable and trustworthy agents using neuro-symbolic AI. Neuro-symbolic distillation combines the strengths of symbolic reasoning and neural networks, creating a hybrid framework that leverages the structured knowledge representation of symbolic systems alongside the learning capabilities of neural networks. The key steps of neuro-symbolic distillation involve training traditional DRL agents, followed by extracting, selecting, and distilling their learned policies into symbolic forms using symbolic regression and tree-based models. These symbolic representations are then employed instead of the neural agents to make interpretable decisions with comparable accuracy. The approach is validated through experiments on Lunar Lander and Pong, demonstrating that symbolic representations can effectively replace neural agents while enhancing transparency and trustworthiness. Our findings suggest that this approach mitigates the black-box nature of neural networks, providing a pathway toward more transparent and trustworthy AI systems.
The implications of this research are significant for fields requiring both high performance and explainability, such as autonomous systems, healthcare, and financial modeling.
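The extract-then-distill pipeline can be illustrated with a toy stand-in. The "neural" policy, the 2-D state, and the one-parameter symbolic template below are all invented; a real distillation would fit symbolic regression or tree-based models to rollouts of a trained DRL agent rather than grid-search a known formula.

```python
import random

# Hypothetical stand-in for a trained DRL policy: maps a 2-D state
# (e.g., lander angle and angular velocity) to a discrete action.
def neural_policy(angle, ang_vel):
    return 1 if angle + 0.5 * ang_vel > 0 else 0

# Step 1: roll out the agent to collect (state, action) pairs.
random.seed(0)
states = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]
actions = [neural_policy(a, w) for a, w in states]

# Step 2: distill into a transparent symbolic rule of the form
# "action = 1 if angle + c * ang_vel > 0", scanning c over a coarse grid
# and scoring each candidate by fidelity to the neural policy.
def fidelity(c):
    preds = [1 if a + c * w > 0 else 0 for a, w in states]
    return sum(p == t for p, t in zip(preds, actions)) / len(actions)

best_c = max((c / 10 for c in range(-20, 21)), key=fidelity)
print(best_c, fidelity(best_c))
```

The distilled rule is a single inspectable inequality; the interpretability gain is exactly that the decision logic can be read off, unlike the weights of the original network.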
3

Reverse Engineering the Human Brain: An Evolutionary Computation Approach to the Analysis of fMRI

Allgaier, Nicholas 01 January 2015 (has links)
The field of neuroimaging has truly become data rich, and as such, novel analytical methods capable of gleaning meaningful information from large stores of imaging data are in high demand. Those methods that might also be applicable on the level of individual subjects, and thus potentially useful clinically, are of special interest. In this dissertation we introduce just such a method, called nonlinear functional mapping (NFM), and demonstrate its application in the analysis of resting state fMRI (functional Magnetic Resonance Imaging) from a 242-subject subset of the IMAGEN project, a European study of risk-taking behavior in adolescents that includes longitudinal phenotypic, behavioral, genetic, and neuroimaging data. Functional mapping employs a computational technique inspired by biological evolution to discover and mathematically characterize interactions among ROI (regions of interest), without making linear or univariate assumptions. Statistics of the resulting interaction relationships comport with recent independent work, constituting a preliminary cross-validation. Furthermore, nonlinear terms are ubiquitous in the models generated by NFM, suggesting that some of the interactions characterized here are not discoverable by standard linear methods of analysis. One such nonlinear interaction is discussed in the context of a direct comparison with a procedure involving pairwise correlation, designed to be an analogous linear version of functional mapping. Another such interaction suggests a novel distinction in brain function between drinking and non-drinking adolescents: a tighter coupling of ROI associated with emotion, reward, and interoceptive processes such as thirst, among drinkers. Finally, we outline many improvements and extensions of the methodology to reduce computational expense, complement other analytical tools like graph-theoretic analysis, and possibly allow for voxel-level functional mapping to eliminate the necessity of ROI selection.
4

Evolving Spatially Aggregated Features for Regional Modeling and its Application to Satellite Imagery

Kriegman, Sam 01 January 2016 (has links)
Satellite imagery and remote sensing provide explanatory variables at relatively high resolutions for modeling geospatial phenomena, yet regional summaries are often desirable for analysis and actionable insight. In this paper, we propose a novel method of inducing spatial aggregations as a component of the statistical learning process, yielding regional model features whose construction is driven by model prediction performance rather than prior assumptions. Our results demonstrate that Genetic Programming is particularly well suited to this type of feature construction because it can automatically synthesize appropriate aggregations, as well as better incorporate them into predictive models compared to other regression methods we tested. In our experiments we consider a specific problem instance and real-world dataset relevant to predicting snow properties in high-mountain Asia.
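The idea of induced spatial aggregations can be sketched on a toy grid. The pixel values, the region mask, and the two candidate aggregations below are invented; in the paper, both the aggregation and the way it enters the predictive model are discovered by Genetic Programming rather than fixed by hand.

```python
# Toy "satellite" raster and a regional mask. A spatially aggregated
# feature summarizes the pixels falling inside the region.
grid = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
region = [(0, 0), (0, 1), (1, 0), (1, 1)]  # upper-left 2x2 block

def mean(vals):
    vals = list(vals)
    return sum(vals) / len(vals)

def aggregate(grid, cells, agg):
    # Apply an aggregation function over the pixels of one region.
    return agg(grid[r][c] for r, c in cells)

# Two candidate regional features a GP search might construct:
feat_max = aggregate(grid, region, max)
feat_mean = aggregate(grid, region, mean)
print(feat_max, feat_mean)
```

Because the aggregation is just another function in the search space, its choice (max, mean, or something learned) is driven by downstream prediction performance, which is the paper's central point.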
5

Applications of Artificial Neural Networks (ANNs) in exploring materials property-property correlations

Cheng, Xiaoyu January 2014 (has links)
The discovery of materials property-property correlations usually requires prior knowledge or serendipity, a process that can be time-consuming, costly, and labour-intensive. On the other hand, artificial neural networks (ANNs) are intelligent and scalable modelling techniques that have been used extensively to predict properties from materials' composition or processing parameters, but are seldom used to explore materials property-property correlations. The work presented in this thesis has employed ANN-based combinatorial searches to explore correlations between different materials properties, through which 'known' correlations are verified and 'unknown' correlations are revealed. An evaluation criterion is proposed and shown to be useful in identifying nontrivial correlations. The work also extends the application of ANNs to data correction, property prediction, and the identification of variables' contributions. A systematic ANN protocol has been developed and tested against known correlating equations for elastic properties and against experimental data, and is found to be reliable and effective in correcting suspect data in complicated situations where no prior knowledge exists. Moreover, the hardness increments of pure metals due to high-pressure torsion (HPT) are accurately predicted from the shear modulus, melting temperature, and Burgers vector, with the first two variables identified as having the largest impact on hardening. Finally, a combined ANN-SR (symbolic regression) method is proposed to yield parsimonious correlating equations by ruling out redundant variables through the partial derivatives method and the connection weight approach, both based on analysis of the ANN weight vectors. By applying this method, two simple equations are obtained that are at least as accurate as other models in providing a rapid estimation of the enthalpies of vaporization of compounds.
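The connection weight approach mentioned above can be sketched for a tiny network. The weights and the three-input layout are invented for illustration (they are not from the thesis); the method, commonly attributed to Olden and Jackson, scores each input by summing the products of its input-to-hidden weights with the corresponding hidden-to-output weights.

```python
# Hypothetical trained 3-input, 2-hidden-unit network:
# w_ih[i][h] is the input->hidden weight, w_ho[h] the hidden->output weight.
w_ih = [[0.8, -0.6],   # shear modulus
        [0.7,  0.5],   # melting temperature
        [0.1, -0.1]]   # Burgers vector
w_ho = [0.9, -0.4]

inputs = ["shear modulus", "melting temperature", "Burgers vector"]

# Connection weight approach: importance of input i is
# sum over hidden units h of w_ih[i][h] * w_ho[h].
importance = {name: sum(w_ih[i][h] * w_ho[h] for h in range(len(w_ho)))
              for i, name in enumerate(inputs)}
ranked = sorted(importance, key=lambda k: abs(importance[k]), reverse=True)
print(ranked)
```

With these illustrative weights, the ranking mirrors the thesis's finding that shear modulus and melting temperature dominate; in the ANN-SR method such scores are what justify ruling a variable out before symbolic regression.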
6

Synthesis of Local Thermo-Physical Models Using Genetic Programming

Zhang, Ying 11 December 2009 (has links)
Local thermodynamic models are practical alternatives to computationally expensive rigorous models that involve implicit computational procedures, and they often complement such models to accelerate computation for real-time optimization and control. Human-centered strategies for developing these models are based on approximation of theoretical models. A Genetic Programming (GP) system can extract knowledge from given data in the form of symbolic expressions. This research describes a fully data-driven, self-evolving algorithm that automatically builds appropriate approximating formulae for local models using genetic programming. No a priori information on the type of mixture (ideal, non-ideal, etc.) and no other assumptions are necessary. The approach involves synthesizing models from a given set of variables and the mathematical operators that may relate them. The selection of variables is automated through principal component analysis and heuristics. For each candidate model, the model parameters are optimized in an integrated inner loop. The trade-off between accuracy and model complexity is addressed by incorporating the Minimum Description Length (MDL) principle into the fitness (objective) function. Statistical tools, including residual analysis, are used to evaluate model performance: adjusted R-squared tests a model's accuracy, and an F-test checks whether the terms in the model are necessary. Analysis of the models generated by the data-driven approach reproduces the theoretically expected range of compositional dependence of partition coefficients, as well as the limits of ideal-gas and ideal-solution behavior. Finally, a model built by GP was integrated into steady-state and dynamic flowsheet simulators to show the benefits of using such models in simulation. The test systems were propane-propylene for ideal solutions and acetone-water for non-ideal ones.
The results show that the generated models are accurate over the whole range of data and that their performance is tunable. The generated local models can indeed be used as empirical models, going beyond the elimination of local model-updating procedures to further enhance the utility of the approach for deployment in real-time applications.
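The MDL-style trade-off between accuracy and complexity can be sketched as a fitness function. The toy data, the two candidate formulae, their node counts, and the penalty weight are all invented; the point is only that a small complexity penalty lets the simpler, equally accurate model win, which is what keeps GP from bloating its expressions.

```python
import math

# Toy data generated by y = 2x, and two candidate symbolic models of
# different structural complexity (node counts are illustrative).
xs = [x / 10 for x in range(-20, 21)]
ys = [2 * x for x in xs]

candidates = {
    "2*x":            (lambda x: 2 * x, 3),
    "2*x + 0.01*x*x": (lambda x: 2 * x + 0.01 * x * x, 9),
}

# MDL-style fitness: a description length for the residuals plus a
# penalty proportional to model size (LAM chosen arbitrarily here).
LAM = 0.05
def fitness(f, nodes):
    sse = sum((f(x) - y) ** 2 for x, y in zip(xs, ys))
    return math.log1p(sse) + LAM * nodes

best = min(candidates, key=lambda k: fitness(*candidates[k]))
print(best)
```

The quadratic candidate fits almost as well but pays for its extra nodes, so the linear model gets the better (lower) fitness.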
7

Temporal Feature Selection with Symbolic Regression

Fusting, Christopher Winter 01 January 2017 (has links)
Building and discovering useful features when constructing machine learning models is the central task for the machine learning practitioner. Good features are useful not only for increasing the predictive power of a model but also for illuminating the underlying drivers of a target variable. In this research we propose a novel feature learning technique in which symbolic regression is endowed with a "Range Terminal" that allows it to explore functions of the aggregate of a variable over time. We test the Range Terminal on a synthetic data set and on a real-world data set in which we predict seasonal greenness using satellite-derived temperature and snow data over a portion of the Arctic. On the synthetic data set we find that symbolic regression with the Range Terminal outperforms standard symbolic regression and Lasso regression. On the Arctic data set it outperforms standard symbolic regression and fails to beat Lasso regression, but finds useful features describing the interaction between land surface temperature, snow, and seasonal vegetative growth in the Arctic.
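A Range Terminal can be sketched as a primitive that evaluates an aggregate of a time series over a window. The synthetic series, the fixed window, and the mean aggregation below are invented; in the thesis the window bounds and the expression surrounding the terminal are what symbolic regression evolves.

```python
# Hypothetical daily temperature series (a synthetic weekly signal).
temps = [t % 7 + 10 for t in range(60)]

def range_terminal(series, start, end, agg=lambda w: sum(w) / len(w)):
    # Evaluate an aggregate of the variable over the window [start, end).
    return agg(series[start:end])

# One candidate feature the search might discover: the mean temperature
# over the 30 time steps preceding the prediction date.
feature = range_terminal(temps, 30, 60)
print(feature)
```

In an evolved model this value would appear as a leaf inside a larger expression, so the search is simultaneously choosing what to aggregate, over which window, and how the aggregate enters the model.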
8

Prediction of self-compacting concrete elastic modulus using two symbolic regression techniques

Golafshani, E.M., Ashour, Ashraf 28 December 2015 (has links)
This paper introduces a novel symbolic regression approach, namely biogeographical-based programming (BBP), for predicting the elastic modulus of self-compacting concrete (SCC). The BBP model was constructed directly from a comprehensive dataset of experimental results for SCC available in the literature. For comparison purposes, another new symbolic regression model, namely artificial bee colony programming (ABCP), was also developed. Furthermore, several available formulas for predicting the elastic modulus of SCC were assessed against the collected database. The results show that the proposed BBP model gives results slightly closer to the experiments than the ABCP model and the existing formulas. A sensitivity analysis of BBP parameters also shows that the BBP model's predictions improve as habitat size, colony size, and maximum tree depth increase. In addition, among all considered empirical and design-code equations, those of Leemann and Hoffmann and of ACI 318-08 exhibit reasonable performance, while those of Persson and of Felekoglu et al. are highly inaccurate for predicting the SCC elastic modulus.
9

Towards algorithmic theory construction in physics

Möller, Hampus January 2023 (has links)
This master's thesis explores the challenge of algorithmic hypothesis generation and its connection to potential inductive biases. In the scientific method, hypotheses are formulated and tested against data for validity; the process of generating hypotheses, however, is not explicitly formalized. A structured approach to hypothesis generation would allow for a near-algorithmic process that could scale far beyond the current capabilities of science. The thesis explores entropy, symmetry, and minimum description length as candidate inductive biases. Two algorithms were implemented and evaluated: one for symmetry finding and one for symbolic regression. The theoretical results show a strong connection between entropy and minimum description length, with a weaker connection to symmetry. Both implementations indicate that paths exist to partial or full automation of the hypothesis generation process. In general, there do not seem to be fundamental obstacles to automating the process beyond the challenge of implementation. Thus, the thesis demonstrates a clear path toward partial or complete automation of the scientific method for physics research.
10

Koevoluční algoritmus pro úlohy založené na testu / Coevolutionary Algorithm for Test-Based Problems

Hulva, Jiří January 2014 (has links)
This thesis deals with the use of coevolution in the task of symbolic regression. Symbolic regression is used to obtain a mathematical formula that approximates measured data; it can be carried out by genetic programming, a method from the family of evolutionary algorithms inspired by natural evolutionary processes. Coevolution runs multiple evolutionary processes simultaneously, with each influencing the others. This work covers the design and implementation of an application that performs symbolic regression using coevolution on test-based problems. The test set was generated by a new method that allows its size to be adjusted dynamically. The functionality of the application was verified on a set of five test tasks, and the results were compared with a coevolutionary algorithm using a fixed-size test set. In three cases the new method needed fewer generations to find a solution of the desired quality; however, in most cases more data-point evaluations were required.
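The dynamic test-set idea can be sketched as follows. The target function, the two candidates, and the grow-when-non-discriminating rule below are invented simplifications of the thesis's method; the point is that a small sampled test set can often separate candidates without evaluating the full pool of data points.

```python
import random

# Target function for symbolic regression and the full pool of test points.
f = lambda x: x * x + x
pool = [(x / 10, f(x / 10)) for x in range(-30, 31)]

def error(candidate, tests):
    return sum(abs(candidate(x) - y) for x, y in tests) / len(tests)

# Coevolutionary idea: evaluate candidates on a small sampled test subset
# and grow it only when it no longer discriminates between candidates
# (a simplified stand-in for the thesis's dynamic sizing rule).
random.seed(1)
subset = random.sample(pool, 5)
cand_a = lambda x: x * x          # misses the linear term
cand_b = lambda x: x * x + x      # exact

while error(cand_a, subset) == error(cand_b, subset) and len(subset) < len(pool):
    subset += random.sample(pool, 5)

print(error(cand_a, subset) > error(cand_b, subset))
```

Here five sampled points already rank the exact candidate above the flawed one, which is the trade-off the thesis measures: fewer generations against the extra data-point evaluations the subset management can incur.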
