  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Une approche Bayésienne pour l'optimisation multi-objectif sous contraintes / A Bayesian approach to constrained multi-objective optimization

Feliot, Paul 12 July 2017 (has links)
In this thesis, we address the problem of the derivative-free multi-objective optimization of real-valued functions subject to multiple inequality constraints. In particular, we consider a setting where the objectives and constraints of the problem are evaluated simultaneously using a potentially time-consuming computer program. To solve this problem, we propose a Bayesian optimization algorithm called BMOO. This algorithm implements a new expected improvement sampling criterion crafted to apply to potentially heavily constrained problems and to many-objective problems. This criterion stems from the use of the hypervolume of the dominated region as a loss function, where the dominated region is defined using an extended domination rule that applies jointly to the objectives and constraints. Several criteria from the Bayesian optimization literature are recovered as special cases. The criterion takes the form of an integral over the space of objectives and constraints for which no closed-form expression exists in the general case; moreover, it has to be optimized at every iteration of the algorithm. To address these difficulties, dedicated sequential Monte Carlo algorithms are also proposed. The effectiveness of BMOO is demonstrated on academic test problems and on four real-life design optimization problems.
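As a companion to the extended domination rule mentioned in the abstract, here is a minimal sketch of one plausible reading of that rule: feasible points (all constraints c <= 0) are compared on their objectives, infeasible points are compared on their constraint violations, and feasible points dominate infeasible ones. The function name, conventions, and tie-handling are illustrative assumptions, not the thesis's exact definitions.

    import numpy as np

    def extended_dominates(y1, c1, y2, c2):
        """Return True if solution 1 dominates solution 2 under an extended rule
        that compares feasible points by objectives (to be minimized) and
        infeasible points by constraint violation (illustrative reading)."""
        y1, c1, y2, c2 = map(np.asarray, (y1, c1, y2, c2))
        feas1, feas2 = np.all(c1 <= 0), np.all(c2 <= 0)
        if feas1 and not feas2:
            return True                    # a feasible point dominates an infeasible one
        if feas2 and not feas1:
            return False
        if feas1 and feas2:                # both feasible: usual Pareto domination on objectives
            return bool(np.all(y1 <= y2) and np.any(y1 < y2))
        v1, v2 = np.maximum(c1, 0), np.maximum(c2, 0)   # both infeasible: compare violations
        return bool(np.all(v1 <= v2) and np.any(v1 < v2))

The loss function referred to in the abstract is then the volume (hypervolume) of the region dominated, in this extended sense, by the current observations.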
12

Satellite Constellation Optimization for In-Situ Sampling and Reconstruction of Tides in the Thermospheric Gap

Lane, Kayton Anne 04 January 2024 (has links)
Earth's atmosphere is a dynamic region with a complex interplay of energetic inputs, outputs, and transport mechanisms. A complete understanding of the atmosphere and how various fields within it interact is essential for predicting atmospheric shifts relevant to spaceflight, the evolution of Earth's climate, radio communications, and other practical applications. In-situ observations of a critical altitude region within Earth's atmosphere from 100-200 km, a subset of a larger 90-400 km altitude region deemed the "Thermospheric Gap", are required for constraining atmospheric models of wind, temperature, and density perturbations caused by atmospheric tides. Observations within this region that are sufficient to fully reconstruct and understand the evolution of tides therein are nonexistent. Certain missions have sought to fill portions of this observation gap, including Daedalus, which was selected as a candidate for the Earth Explorer program by the European Space Agency in 2018. This study focuses on the design and optimization of a two-satellite, highly elliptical satellite constellation to perform in-situ observations and reconstruction of tidal features in the 100-200 km region. The model atmosphere for retrieving sample data is composed of DE3 and DE2 tidal features from the Climatological Model of the Thermosphere (CTMT) and background winds from the Thermosphere-Ionosphere-Electrodynamic General Circulation Model (TIEGCM). BoTorch, a Bayesian optimization package for Python, is integrated with the Ansys Systems Tool Kit (STK) to model the constellation's propagation and simulated atmospheric sampling. A least-squares fitting algorithm is used to fit the sampled data to a known tidal function form. Key results include 14 Pareto-optimal solutions for the satellite constellation based on a set of 7 objective functions, 3 constellation input parameters, and a sample set of n = 86. Four of these solutions are discussed in more detail. The first two are the best and second-best options on the Pareto front for sampling and reconstruction of the input tidal fields. The third is the best solution for latitudinal tidal-fitting coverage. The fourth is a compromise solution that nearly minimizes delta-v expenditure while sacrificing some quality in tidal fitting and fitting coverage. / Master of Science / Earth's atmosphere, the envelope of gaseous material surrounding the planet from an altitude of 0 km to approximately 10,000 km, is a dynamic system with a diverse set of energy inputs, outputs, and transfer mechanisms. A complete understanding of the atmosphere and how various fields within it interact is essential for predicting atmospheric shifts relevant to spaceflight, the evolution of Earth's climate, radio communications, and other practical applications. The atmosphere that life breathes at Earth's surface evolves in its physical and chemical properties, such as temperature, pressure, and composition, as distance from Earth increases. In addition, the atmosphere varies temporally, with shifts in its properties occurring on several timescales, some as short as a few minutes and some on the order of the age of the planet itself. This thesis project seeks to study the optimization of a satellite system to further understand an important source of atmospheric variability – atmospheric tides.
Just as the forces of gravity from the moon and sun cause tides in the oceans, the Earth's rotation and the periodic absorption of heat into the atmosphere from the sun cause atmospheric tides. A model atmosphere with a few tides and a background wind is generated to perform simulated tidal sampling. The latitude, longitude, and altitude coordinates of the satellites as they propagate through the atmosphere are used to model samples of the northward and southward atmospheric winds and to determine how well the constellation reconstructs the input tidal data. The integration of several software tools and a Bayesian optimization algorithm automates the process of finding a range of constellation options that best perform the tidal fitting, minimize satellite fuel consumption, and cover as many latitude bands of the atmosphere as possible.
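The least-squares fitting step mentioned above can be illustrated with a minimal sketch that fits a single diurnal tidal component of assumed zonal wavenumber s to sampled wind values by ordinary linear least squares; the single-component tidal form, variable names, and wavenumber default are illustrative assumptions rather than the thesis's actual fitting procedure.

    import numpy as np

    def fit_tide(u, t_hours, lon_deg, s=3):
        """Fit u ~ A*cos(omega*t + s*lon - phase) + mean by linear least squares.
        u: sampled wind values; t_hours: time in hours; lon_deg: longitude (deg);
        s: assumed zonal wavenumber."""
        omega = 2.0 * np.pi / 24.0                         # diurnal frequency (rad/hour)
        arg = omega * np.asarray(t_hours) + s * np.deg2rad(lon_deg)
        X = np.column_stack([np.cos(arg), np.sin(arg), np.ones_like(arg)])
        (a, b, mean_wind), *_ = np.linalg.lstsq(X, np.asarray(u), rcond=None)
        amplitude = np.hypot(a, b)
        phase = np.arctan2(b, a)                           # u ~ A*cos(arg - phase) + mean
        return amplitude, phase, mean_wind

The fitted amplitude and phase can then be compared against the input tidal field to score how well a given constellation geometry recovers the tide.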
13

Power Dispatch and Storage Configuration Optimization of an Integrated Energy System using Deep Reinforcement Learning and Hyperparameter Tuning

Katikaneni, Sravya January 2022 (has links)
No description available.
14

Physics-informed Machine Learning for Digital Twins of Metal Additive Manufacturing

Gnanasambandam, Raghav 07 May 2024 (has links)
Metal additive manufacturing (AM) is an emerging technology for producing parts with virtually no constraint on the geometry. AM builds a part by depositing materials in a layer-by-layer fashion. Despite the benefits in several critical applications, quality issues are one of the primary concerns for the widespread adoption of metal AM. Addressing these issues starts with a better understanding of the underlying physics and includes monitoring and controlling the process in a real-world manufacturing environment. Digital Twins (DTs) are virtual representations of physical systems that enable fast and accurate decision-making. DTs rely on Artificial Intelligence (AI) to process complex information from multiple sources in a manufacturing system at multiple levels. This information typically comes from partially known process physics, in-situ sensor data, and ex-situ quality measurements for a metal AM process. Most current AI models cannot handle such ill-structured information from metal AM. Thus, this work proposes three novel machine-learning methods for improving the quality of metal AM processes. These methods enable DTs to control quality in several processes, including laser powder bed fusion (LPBF) and additive friction stir deposition (AFSD). The three proposed methods are as follows:
1. Process improvement requires mapping the process parameters to ex-situ quality measurements. These mappings often tend to be non-stationary, with limited experimental data. This work utilizes a novel Deep Gaussian Process-based Bayesian optimization (DGP-SI-BO) method for sequential process design. A DGP can model non-stationarity better than a traditional Gaussian Process (GP), but it is challenging to use within BO. The proposed DGP-SI-BO provides a bagging procedure for the acquisition function with a DGP surrogate model inferred via Stochastic Imputation (SI). For a fixed time budget, the proposed method gives 10% better quality for the LPBF process than the widely used BO method while being three times faster than the state-of-the-art method.
2. For metal AM, process physics information is usually available in the form of Partial Differential Equations (PDEs). Though the PDEs, along with in-situ data, can be handled through Physics-informed Neural Networks (PINNs), the activation functions in NNs are traditionally not designed to handle multi-scale PDEs. This work proposes a novel activation function, the Self-scalable tanh (Stan) function, for PINNs. The proposed activation function modifies the traditional tanh function. The Stan function is smooth, non-saturating, and has a trainable parameter. It allows an easy flow of gradients and enables systematic scaling of the input-output mapping during training. Apart from solving the heat transfer equations for LPBF and AFSD, this work provides applications in areas including quantum physics and solid and fluid mechanics. The Stan function also accelerates the notoriously hard and ill-posed inverse discovery of process physics.
3. PDE-based simulations typically need to be much faster for in-situ process control. This work proposes a Fourier Neural Operator (FNO) for near-instantaneous predictions (a roughly 1000-times speed-up) of quality in metal AM. The FNO is a data-driven method that maps the process parameters to a high-dimensional quality tensor (such as the thermal distribution in LPBF). Training the FNO with simulated data from the PINN ensures a quick response to alter the course of the manufacturing process. Once trained, a DT can readily deploy the model for real-time process monitoring.
The proposed methods combine complex information to provide reliable machine-learning models and improve understanding of metal AM processes. Though these models can be used independently, they complement each other to build DTs and achieve quality assurance in metal AM. / Doctor of Philosophy / Metal 3D printing, technically known as metal additive manufacturing (AM), is an emerging technology for making virtually any physical part at the click of a button. For instance, one of the most common AM processes, Laser Powder Bed Fusion (L-PBF), melts metal powder using a laser to build it into any desired shape. Despite its attractiveness, the quality of the built part is often not satisfactory for its intended use. For example, a metal plate built for a fractured bone may not adhere to the required dimensions. Improving the quality of metal AM parts starts with a better understanding of the underlying mechanisms at a fine length scale (the size of the powder or even smaller). Collecting data during the process and leveraging the known physics can help adjust the AM process to improve quality. Digital Twins (DTs) are exactly suited for this task, as they combine the process physics and the data obtained from sensors on metal AM machines to inform an AM machine about process settings and adjustments. This work develops three specific methods that utilize the known information from metal AM to improve the quality of the parts built by metal AM machines. These methods combine different types of known information to alter the process settings of metal AM machines so that they produce high-quality parts.
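A minimal PyTorch-style sketch of the Self-scalable tanh (Stan) activation described in the second method above, assuming the form tanh(x) + beta * x * tanh(x) with one trainable beta per neuron; the per-neuron parameterization and unit initialization are assumptions, not the thesis's exact specification.

    import torch
    import torch.nn as nn

    class Stan(nn.Module):
        """Self-scalable tanh: tanh(x) + beta * x * tanh(x), with trainable beta."""
        def __init__(self, num_features: int):
            super().__init__()
            # one trainable scaling parameter per neuron (assumed initialization to 1)
            self.beta = nn.Parameter(torch.ones(num_features))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return torch.tanh(x) + self.beta * x * torch.tanh(x)

In a PINN, a layer like this would replace the usual tanh activation so that the input-output mapping can be rescaled systematically during training.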
15

Sensor-Enabled Accelerated Engineering of Soft Materials

Liu, Yang 24 May 2024 (has links)
Many grand societal challenges are rooted in the need for new materials, such as those related to energy, health, and the environment. However, the traditional way of discovering new materials is essentially trial and error. This time-consuming and expensive approach cannot meet the rapidly growing demand for new materials. To meet this challenge, the United States government launched the Materials Genome Initiative (MGI) in 2011. MGI aims to accelerate the pace and reduce the cost of discovering new materials. Its success requires a materials innovation infrastructure that includes data tools, computation tools, and experimental tools. The last decade has witnessed significant progress for MGI, especially with respect to hard materials. However, relatively less attention has been paid to soft materials. One important reason is the lack of experimental tools, especially characterization tools for high-throughput experimentation. This dissertation aims to enrich the toolbox by applying new sensor tools to the high-throughput characterization of hydrogels. Piezoelectric-excited millimeter-sized cantilever (PEMC) sensors were used in this dissertation to characterize hydrogels. Their capability to investigate hydrogels was first demonstrated by monitoring the synthesis and stimuli-response of composite hydrogels. The PEMC sensors enabled in-situ study of how the manufacturing process, i.e., bulk vs. layer-by-layer, affects the structure and properties of hydrogels. Afterwards, the PEMC sensors were integrated with robots to develop a method of high-throughput experimentation. Various hydrogels were formulated in a well-plate format and characterized by the sensor tools in an automated manner. High-throughput characterization, especially multi-property characterization, enabled optimizing the formulation to achieve a tradeoff between different properties. Finally, the sensor-based high-throughput experimentation was combined with active learning for accelerated material discovery. A collaborative learning approach was used to guide the high-throughput formulation and characterization of hydrogels, which demonstrated rapid discovery of mechanically optimized composite glycogels. Through this dissertation, we hope to provide a new tool for high-throughput characterization of soft materials to accelerate the discovery and optimization of materials.
Hydrogels have wide applications in our daily lives, such as drugs and biomedical devices. One significant barrier to accelerated discovery and engineering of hydrogels is the lack of tools that can rapidly characterize the material's properties. In this dissertation, a sensor-based approach was created to characterize the mechanical properties and stimuli-response of soft materials using low sample volumes. The sensor was integrated with a robot to test materials in high-throughput formats in a rapid and automated measurement format. In combination with machine learning, the high-throughput characterization method was demonstrated to accelerate the engineering and optimization of several hydrogels. Through this dissertation, we hope to provide new tools and methods for rapid engineering of soft materials.
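As a sketch of the kind of active-learning loop implied above, the snippet below uses a Gaussian-process surrogate and an expected-improvement rule to propose the next hydrogel formulation to prepare and measure; the two-component composition variables, the stiffness target, and the expected-improvement criterion are generic illustrations, not the collaborative-learning method used in the dissertation.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def propose_next(X_obs, y_obs, candidates):
        """Suggest the candidate formulation with the largest expected improvement."""
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X_obs, y_obs)
        mu, sigma = gp.predict(candidates, return_std=True)
        best = y_obs.max()
        z = (mu - best) / np.maximum(sigma, 1e-9)
        ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
        return candidates[np.argmax(ei)]

    # Illustrative use: columns = fractions of two assumed gel components
    X_obs = np.array([[0.1, 0.2], [0.3, 0.1], [0.2, 0.4]])
    y_obs = np.array([1.2, 2.0, 1.6])          # e.g., measured stiffness (assumed units)
    candidates = np.random.default_rng(0).uniform(0, 0.5, size=(200, 2))
    next_formulation = propose_next(X_obs, y_obs, candidates)

Each proposed formulation would be dispensed and characterized by the robot-integrated sensor platform, and the new measurement appended to the observation set before the next round.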
16

Computational and Data-Driven Design of Perturbed Metal Sites for Catalytic Transformations

Huang, Yang 23 May 2024 (has links)
We integrate theoretical, computational, and data-driven approaches for the understanding, design, and discovery of metal-based catalysts. First, we develop theoretical frameworks for predicting electronic descriptors of transition and noble metal alloys, including a physics model of the d-band center and a tight-binding theory of d-band moments, to systematically elucidate the distinct electronic structures of novel catalysts. Within this framework, the hybridization of semi-empirical theories with graph neural networks and attribution analysis enables accurate prediction equipped with mechanistic insights. In addition, a novel physics effect controlling surface reactivity beyond conventional understanding is uncovered. Second, we develop a computational and data-driven framework to model high-entropy alloy (HEA) catalysis, incorporating thermodynamic descriptor-based phase-stability evaluation, surface segregation modeling by deep-learning-potential-driven molecular simulation, and activity prediction through a machine-learning-embedded electrokinetic model. With this framework, we successfully elucidate the experimentally observed improved activity of PtPdCuNiCo HEA in the oxygen reduction reaction. Third, a Bayesian optimization framework is employed to optimize racemic lactide polymerization by searching for stereoselective aluminum (Al) complex catalysts. We identified multiple new Al-complex molecules that catalyzed either isoselective or heteroselective polymerization. In addition, feature attribution analysis uncovered mechanistically meaningful ligand descriptors that provide access to quantitative and predictive models for catalyst development. / Doctor of Philosophy / In addressing the critical issues of climate change, energy scarcity, and pollution, the drive towards a sustainable economy has made catalysis a key area of focus. Computational chemistry has revolutionized our understanding of catalysts, especially in identifying and analyzing their active sites. Furthermore, the integration of open-access data and advanced computing has elevated data science as a crucial component in catalysis research. This synergy of computational and data-driven approaches is advancing the development of innovative catalytic materials, marking a significant stride in tackling environmental challenges. In my PhD research, I mainly work on the development of computational and data-driven methods for better understanding, design, and discovery of catalytic materials. First, I develop physics models that allow people to intuitively understand the reactivity of transition and noble metal catalysts. I then embed the physics models into deep learning models for accurate and insightful predictions. Second, for a class of complex metal catalysts called high-entropy alloys (HEAs), which are hard to model, I develop a modeling framework by hybridizing computational and data-driven approaches. With this framework, we successfully elucidate the experimentally observed improved activity of a PtPdCuNiCo HEA in the oxygen reduction reaction, a key reaction in fuel cell technology. Third, I develop a framework to virtually screen catalyst molecules to optimize a polymerization reaction and provide potential candidates for our experimental collaborators to synthesize. Our collaboration led to the discovery of novel high-performance molecular catalysts.
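For orientation on the first part of the abstract: the d-band center is commonly defined as the first moment of the d-projected density of states, and the higher d-band moments targeted by the tight-binding theory follow the same pattern with higher powers of the energy. A minimal sketch under that standard definition; the energy grid and DOS array are assumed inputs, e.g., from an electronic-structure calculation.

    import numpy as np

    def d_band_center(energies, dos_d, e_fermi=0.0, e_max=None):
        """First moment of the d-projected DOS relative to the Fermi level.
        energies: energy grid (eV); dos_d: d-projected DOS on that grid."""
        energies = np.asarray(energies, dtype=float) - e_fermi
        dos_d = np.asarray(dos_d, dtype=float)
        if e_max is not None:                   # optionally truncate the integration window
            mask = energies <= e_max
            energies, dos_d = energies[mask], dos_d[mask]
        return np.trapz(energies * dos_d, energies) / np.trapz(dos_d, energies)

Higher moments (band width, skewness) are obtained the same way by weighting with (E - eps_d)**n instead of E.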
17

Bayesian-based Multi-Objective Hyperparameter Optimization for Accurate, Fast, and Efficient Neuromorphic System Designs

Maryam Parsa (9412388) 16 December 2020 (has links)
Neuromorphic systems promise a novel alternative to the standard von Neumann architectures that are computationally expensive for analyzing big data and are not efficient for learning and inference. This novel generation of computing aims at "mimicking" the human brain by deploying neural networks on event-driven hardware architectures. A key bottleneck in designing such brain-inspired architectures is the complexity of co-optimizing the algorithm's speed and accuracy along with the hardware's performance and energy efficiency. This complexity stems from numerous intrinsic hyperparameters in both software and hardware that need to be optimized for an optimum design.

In this work, we present a versatile hierarchical pseudo agent-based multi-objective hyperparameter optimization approach for automatically tuning the hyperparameters of several training algorithms (such as traditional artificial neural networks (ANNs), and evolutionary-based, binary, back-propagation-based, and conversion-based techniques in spiking neural networks (SNNs)) on digital and mixed-signal neural accelerators. By utilizing the proposed hyperparameter optimization approach, we achieve improved performance over the previous state-of-the-art for those training algorithms and close some of the performance gaps that exist between SNNs and standard deep learning architectures.

We demonstrate a >2% improvement in accuracy and more than a 5X reduction in training/inference time for a back-propagation-based SNN algorithm on the dynamic vision sensor (DVS) gesture dataset. In the case of ANN-SNN conversion-based techniques, we demonstrate a 30% reduction in time-steps while surpassing the accuracy of state-of-the-art networks on an image classification dataset (CIFAR10) with a simpler and shallower architecture. Further, our analysis shows that in some cases even a seemingly minor change in hyperparameters may change the accuracy of these networks by 5-6X. From the application perspective, we show that the optimum set of hyperparameters can drastically improve performance (from 52% to 71% for a pole-balance control application). In addition, we demonstrate the resiliency of different input/output encoding, training neural network, and underlying accelerator modules in a neuromorphic system to changes in the hyperparameters.
18

Physics-Based Modelling and Simulation Framework for Multi-Objective Optimization of Lithium-Ion Cells in Electric Vehicle Applications

Gaonkar, Ashwin 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In recent years, lithium-ion batteries (LIBs) have become the most important energy storage system for consumer electronics, electric vehicles, and smart grids. LIB development based on current practice allows an energy density increase estimated at 10% per year. However, the required power for portable electronic devices is predicted to increase at a much faster rate, namely 20% per year. Similarly, global electric vehicle battery capacity is expected to increase from around 170 GWh per year today to 1.5 TWh per year in 2030, an increase of 125% per year. Without a breakthrough in battery design technology, it will be difficult to keep up with the increasing energy demand. To that end, a design methodology to accelerate LIB development is needed. This can be achieved through the integration of electrochemical numerical simulations and machine learning algorithms. To this end, this study develops a design and optimization methodology and framework for 18650-type cylindrical cells using Simcenter Battery Design Studio® (BDS) and Bayesian optimization. The cathode materials are Nickel-Cobalt-Aluminum (NCA) and Nickel-Manganese-Cobalt-Aluminum (NMCA), the anode is graphite, and the electrolyte is lithium hexafluorophosphate (LiPF6). Bayesian optimization has emerged as a powerful gradient-free optimization methodology for problems that involve the evaluation of expensive black-box functions; here, the black-box functions are simulations of the cyclic performance test in Simcenter Battery Design Studio. The physics model used in this study is based on the full system model described by Fuller and Newman. It uses the Butler-Volmer equation for ion transport across an interface and a solvent diffusion model (Ploehn model) for the aging of lithium-ion battery cells. The BDS model considers the effects of the SEI, cell electrode and microstructure dimensions, and charge-discharge rates to simulate battery degradation. Two objectives are optimized: maximization of the specific energy and minimization of the capacity fade. A global sensitivity analysis shows that the thickness and porosity of the electrode coatings affect the objective functions the most. As such, the design variables selected for this study are the thickness and porosity of the electrodes; the thickness is restricted to vary from 22 to 240 microns and the porosity from 0.22 to 0.54. Two case studies are carried out using the above objective functions and parameters. In the first, cycling tests of 18650 NCA-cathode Li-ion cells are simulated; the cells are charged and discharged at a constant 0.2C rate for 500 cycles. In the second, a cathode active material more relevant to the electric vehicle industry, Nickel-Manganese-Cobalt-Aluminum (NMCA), is used, and the cells are cycled under 5 different charge-discharge scenarios to replicate those that an EV battery module experiences. The results show that the design and optimization methodology can identify cells that satisfy the design objectives, extending and improving the Pareto front beyond the original sampling plan for several practical charge-discharge scenarios while maximizing energy density and minimizing capacity fade.
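As a small illustration of the two-objective trade-off described above (maximize specific energy, minimize capacity fade), here is a sketch of extracting the non-dominated designs from a set of evaluated samples; the numerical values are placeholders, not results from the study.

    import numpy as np

    def pareto_mask(specific_energy, capacity_fade):
        """Boolean mask of non-dominated designs: higher energy and lower fade are better."""
        n = len(specific_energy)
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            for j in range(n):
                if (specific_energy[j] >= specific_energy[i]
                        and capacity_fade[j] <= capacity_fade[i]
                        and (specific_energy[j] > specific_energy[i]
                             or capacity_fade[j] < capacity_fade[i])):
                    keep[i] = False        # design i is dominated by design j
                    break
        return keep

    # Placeholder evaluations for a handful of (thickness, porosity) designs
    energy = np.array([210.0, 235.0, 228.0, 240.0])   # Wh/kg (illustrative)
    fade = np.array([0.08, 0.12, 0.09, 0.15])         # relative capacity loss (illustrative)
    front = pareto_mask(energy, fade)

The Bayesian optimization loop then proposes new (thickness, porosity) designs intended to push this front outward.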
19

Bayesian Topology Optimization for Efficient Design of Origami Folding Structures

Shende, Sourabh 15 June 2020 (has links)
No description available.
20

Optimisation des lois de commande d’un imageur sur critère optronique. Application à un imageur à deux étages de stabilisation. / Line of Sight controller global tuning based on a high-level optronic criterion. Application to a double-stage stabilization platform

Frasnedo, Sophie 06 December 2016 (has links)
This work on the Line of Sight stabilization of an optronic device addresses the heightened stabilization-performance requirements that come with a reduction of the time allowed for controller tuning. It first targets the intrinsic improvement of the system's stabilization: the proposed solution adds a second stabilization stage to an existing single-stage system. The new architecture is specified, the new components are chosen among existing technologies and experimentally characterized, and a complete model of the double-stage system is then proposed. A second goal is the simplification of the controller-tuning process. The designed cost function F includes a high-level optronic criterion, the Modulation Transfer Function (which quantifies the blur introduced into the image by the residual motion of the platform), instead of the usual low-level and potentially conservative criterion. Because F is costly to evaluate, a Bayesian optimization algorithm adapted to a reduced budget of evaluations is implemented in order to tune the controller parameters within industrial time constraints. The controllers of both stabilization stages are tuned simultaneously using the previously developed system model.
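As background for the MTF-based cost function described above, one common textbook model treats residual line-of-sight jitter as Gaussian, giving MTF_jitter(f) = exp(-2 * (pi * sigma * f)^2) for jitter standard deviation sigma and spatial frequency f. The sketch below uses this approximation, which is an assumption here rather than the thesis's actual optronic criterion.

    import numpy as np

    def jitter_mtf(freq_cyc_per_mrad, sigma_mrad):
        """Gaussian-jitter MTF: exp(-2 * (pi * sigma * f)^2).
        freq_cyc_per_mrad: spatial frequency (cycles/mrad); sigma_mrad: jitter std (mrad)."""
        f = np.asarray(freq_cyc_per_mrad, dtype=float)
        return np.exp(-2.0 * (np.pi * sigma_mrad * f) ** 2)

    # Smaller residual jitter keeps the MTF, and hence image sharpness, higher
    print(jitter_mtf([0.5, 1.0, 2.0], sigma_mrad=0.1))

A cost function of this kind rewards controllers that reduce the residual motion sigma, which is how an image-quality criterion can drive the tuning instead of a low-level stabilization-error bound.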
