11

Bayesian Optimization for Neural Architecture Search using Graph Kernels

Krishnaswami Sreedhar, Bharathwaj January 2020
Neural architecture search is a popular method for automating architecture design. Bayesian optimization is a widely used approach for hyperparameter optimization and can estimate a function from limited samples. However, Bayesian optimization methods are not preferred for architecture search because they expect vector inputs, while architectures are high-dimensional graphs. This thesis presents a Bayesian approach with Gaussian priors that uses graph kernels specifically targeted to work in the higher-dimensional graph space. We implemented three different graph kernels and show that, on the NAS-Bench-101 dataset, an untrained graph convolutional network kernel significantly outperforms previous methods in terms of the best network found and the number of samples required to find it. We follow the AutoML guidelines to make this work reproducible.
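The approach hinges on replacing the GP's vector kernel with a kernel defined directly on architecture graphs. Below is a minimal sketch of one such kernel, the Weisfeiler-Lehman subtree kernel; the function names, data layout, and two-iteration default are illustrative assumptions, not the thesis implementation (which also includes an untrained GCN kernel):

from collections import Counter

def wl_features(adj, labels, iterations=2):
    """Weisfeiler-Lehman subtree feature histogram for one labeled graph.
    adj: dict node -> list of neighbours; labels: dict node -> op label."""
    feats = Counter(labels.values())
    current = dict(labels)
    for _ in range(iterations):
        # Relabel every node by its own label plus the sorted multiset
        # of its neighbours' labels, then count the new compound labels.
        current = {v: (current[v], tuple(sorted(current[u] for u in nbrs)))
                   for v, nbrs in adj.items()}
        feats.update(current.values())
    return feats

def wl_kernel(g1, g2, iterations=2):
    """Kernel value = dot product of the two graphs' WL histograms."""
    f1 = wl_features(*g1, iterations)
    f2 = wl_features(*g2, iterations)
    return sum(count * f2[key] for key, count in f1.items())

# Two toy cells as (adjacency, node-op labels); k(g1, g2) fills the GP Gram matrix.
g1 = ({0: [1], 1: [0, 2], 2: [1]}, {0: "in", 1: "conv3x3", 2: "out"})
g2 = ({0: [1], 1: [0, 2], 2: [1]}, {0: "in", 1: "conv1x1", 2: "out"})
print(wl_kernel(g1, g2))

The Gram matrix of such kernel values over the sampled architectures plays the role the covariance matrix plays in ordinary vector-input GP regression.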
12

Bayesovská optimalizace hyperparametrů pomocí Gaussovských procesů / Bayesian Optimization of Hyperparameters Using Gaussian Processes

Arnold, Jakub January 2019
The goal of this thesis was to implement a practical tool for optimizing hyperparameters of neural networks using Bayesian optimization. We show the theoretical foundations of Bayesian optimization, including the necessary mathematical background for Gaussian Process regression, and some extensions to Bayesian optimization. In order to evaluate the performance of Bayesian optimization, we performed multiple real-world experiments with different neural network architectures. In our comparison to a random search, Bayesian optimization usually obtained a higher objective function value and achieved lower variance in repeated experiments. Furthermore, in three out of four experiments, the hyperparameters discovered by Bayesian optimization outperformed the manually designed ones. We also show how the underlying Gaussian Process regression can be a useful tool for visualizing the effects of each hyperparameter, as well as possible relationships between multiple hyperparameters.
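For readers wanting the mechanics, here is a compact sketch of the GP-plus-expected-improvement loop the abstract describes, built on scikit-learn; the random candidate search, the Matérn kernel choice, and all names are illustrative assumptions, not the thesis tool:

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(X_cand, gp, y_best, xi=0.01):
    """EI acquisition: expected amount by which each candidate beats y_best."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu - xi) / sigma      # minimization convention
    return (y_best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(objective, bounds, n_init=5, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(n_init, dim))
    y = np.array([objective(x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = rng.uniform(lo, hi, size=(2048, dim))  # cheap random acquisition search
        x_next = cand[np.argmax(expected_improvement(cand, gp, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))
    return X[np.argmin(y)], y.min()

# e.g. tune (log10 learning rate, dropout) of a training run returning validation loss:
# best_x, best_loss = bayes_opt(train_and_eval, bounds=[(-5, -1), (0.0, 0.7)])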
13

Algoritmo de otimização bayesiano com detecção de comunidades / Bayesian optimization algorithm with community detection

Crocomo, Márcio Kassouf 02 October 2012
Estimation of Distribution Algorithms (EDAs) represent a research area showing promising results, especially in dealing with complex large-scale problems. In this context, the Bayesian Optimization Algorithm (BOA) uses a multivariate probabilistic model (represented by a Bayesian network) to generate new solutions at each iteration. Based on BOA and on the study of community detection algorithms (used to improve the constructed multivariate models), two new algorithms are proposed, named CD-BOA and StrOp. Both are shown to have significant advantages over BOA. CD-BOA is more flexible, being more robust to variations in input parameter values, which makes it easier to deal with a greater diversity of real-world problems. Unlike CD-BOA and BOA, StrOp shows that detecting communities in a Bayesian network can model decomposable problems more adequately, restructuring them into simpler subproblems that can be solved by a greedy search; this yields a solution to the original problem that is optimal for perfectly decomposable problems and a fair approximation otherwise. A new resampling technique for EDAs (called REDA) is also proposed. This technique produces more representative probabilistic models, significantly improving the performance of CD-BOA and StrOp. In general, it is shown that, for the scenarios tested, CD-BOA and StrOp require less running time than BOA, demonstrated both experimentally and by analysis of the algorithms' computational complexity. The main features of these algorithms are evaluated in solving various problems, thus identifying their contributions to the field of Evolutionary Computation.
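The key step, extracting communities from a learned dependency structure to decompose the problem, can be sketched as follows. Using pairwise correlation as the dependency measure and networkx's modularity communities is an illustrative simplification (BOA proper learns a full Bayesian network), and all names are assumptions:

import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def dependency_graph(population, threshold=0.05):
    """Weighted graph over variables; edge weight = absolute pairwise correlation
    estimated from the current population (2D 0/1 array, one row per solution).
    Assumes each variable column has some variation in the population."""
    n_vars = population.shape[1]
    corr = np.corrcoef(population, rowvar=False)
    g = nx.Graph()
    g.add_nodes_from(range(n_vars))
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            w = abs(corr[i, j])
            if w > threshold:
                g.add_edge(i, j, weight=w)
    return g

def strop_style_decomposition(population):
    """Split the problem into variable clusters, one per detected community."""
    g = dependency_graph(population)
    return [sorted(c) for c in greedy_modularity_communities(g, weight="weight")]

# Each cluster can then be optimized greedily (e.g., enumerating its few variables
# while holding the rest of the best-so-far solution fixed), in the spirit of StrOp
# on decomposable problems.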
15

Une approche Bayésienne pour l'optimisation multi-objectif sous contraintes / A Bayesian approach to constrained multi-objective optimization

Feliot, Paul 12 July 2017
In this thesis, we address the problem of the derivative-free multi-objective optimization of real-valued functions subject to multiple inequality constraints. In particular, we consider a setting where the objectives and constraints of the problem are evaluated simultaneously using a potentially time-consuming computer program. To solve this problem, we propose a Bayesian optimization algorithm called BMOO. This algorithm implements a new expected improvement sampling criterion crafted to apply to potentially heavily constrained problems and to many-objective problems. The criterion stems from using the hypervolume of the dominated region as a loss function, where the dominated region is defined by an extended domination rule that applies jointly to the objectives and constraints. Several criteria from the Bayesian optimization literature are recovered as special cases. The criterion takes the form of an integral over the space of objectives and constraints for which no closed-form expression exists in the general case, and it has to be optimized at every iteration of the algorithm. To address these difficulties, dedicated sequential Monte Carlo algorithms are also proposed. The effectiveness of BMOO is demonstrated on academic test problems and on four real-life design optimization problems.
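One plausible reading of an extended domination rule is sketched below in the style of Deb-style constrained domination: compare constraint violations first, then objectives. The thesis defines its rule jointly over an extended objective-and-constraint space, so treat this as an approximation rather than the BMOO definition:

import numpy as np

def extended_dominates(y1, c1, y2, c2):
    """Does solution 1 dominate solution 2? (sketch, minimization convention)
    y*: objective vectors; c*: constraint values, with c <= 0 meaning satisfied."""
    v1, v2 = np.maximum(c1, 0.0), np.maximum(c2, 0.0)  # violation magnitudes
    feas1, feas2 = not v1.any(), not v2.any()
    if feas1 and feas2:                # both feasible: Pareto-compare objectives
        return bool(np.all(y1 <= y2) and np.any(y1 < y2))
    if feas1 != feas2:                 # a feasible point dominates an infeasible one
        return feas1
    # both infeasible: Pareto-compare the violation vectors instead
    return bool(np.all(v1 <= v2) and np.any(v1 < v2))

Such a rule lets feasible and infeasible points be ranked in one sweep, which is what allows the hypervolume-based improvement criterion to handle heavily constrained problems.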
16

Satellite Constellation Optimization for In-Situ Sampling and Reconstruction of Tides in the Thermospheric Gap

Lane, Kayton Anne 04 January 2024
Earth's atmosphere is a dynamic region with a complex interplay of energetic inputs, outputs, and transport mechanisms. A complete understanding of the atmosphere and how various fields within it interact is essential for predicting atmospheric shifts relevant to spaceflight, the evolution of Earth's climate, radio communications, and other practical applications. In-situ observations of a critical altitude region within Earth's atmosphere from 100-200 km, a subset of a larger 90–400 km region deemed the "Thermospheric Gap", are required for constraining atmospheric models of wind, temperature, and density perturbations caused by atmospheric tides. Observations within this region sufficient to fully reconstruct and understand the evolution of tides are nonexistent. Certain missions have sought to fill portions of this observation gap, including Daedalus, which was selected as a candidate for the Earth Explorer program by the European Space Agency in 2018. This study focuses on the design and optimization of a two-satellite, highly elliptical constellation to perform in-situ observations and reconstruction of tidal features in the 100-200 km region. The model atmosphere for retrieving sample data is composed of DE3 and DE2 tidal features from the Climatological Model of the Thermosphere (CTMT) and background winds from the Thermosphere-Ionosphere-Electrodynamics General Circulation Model (TIEGCM). BoTorch, a Bayesian optimization package for Python, is integrated with the Ansys Systems Tool Kit (STK) to model the constellation's propagation and simulated atmospheric sampling. A least-squares fitting algorithm fits the sampled data to a known tidal function form. Key results include 14 Pareto-optimal solutions for the constellation based on a set of 7 objective functions, 3 constellation input parameters, and a sample set of n = 86. Four of these solutions are discussed in more detail: the first two are the best and second-best options on the Pareto front for sampling and reconstruction of the input tidal fields; the third is the best solution for latitudinal tidal-fitting coverage; the fourth is a compromise that nearly minimizes delta-v expenditure while sacrificing some quality in tidal fitting and fitting coverage. / Master of Science / Earth's atmosphere, the envelope of gaseous material surrounding the planet from an altitude of 0 km to approximately 10,000 km, is a dynamic system with a diverse set of energy inputs, outputs, and transfer mechanisms. A complete understanding of the atmosphere and how various fields within it interact is essential for predicting atmospheric shifts relevant to spaceflight, the evolution of Earth's climate, radio communications, and other practical applications. The atmosphere that life breathes at Earth's surface evolves in physical and chemical properties, such as temperature, pressure, and composition, as distance from Earth increases. In addition, the atmosphere varies temporally, with shifts in its properties occurring on several timescales, some as short as a few minutes and some on the order of the age of the planet itself. This thesis project studies the optimization of a satellite system to further understand an important source of atmospheric variability: atmospheric tides.
Just as the gravity of the moon and sun causes tides in the oceans, the Earth's rotation and the periodic absorption of solar heat into the atmosphere cause atmospheric tides. A model atmosphere with a few tides and a background wind is generated to perform simulated tidal sampling. The latitude, longitude, and altitude coordinates of the satellites as they propagate through the atmosphere are used to model samples of the northward and southward atmospheric winds and determine how well the constellation does at regenerating the input tidal data. The integration of several software tools and a Bayesian optimization algorithm automates the process of finding a range of options for the constellation to best perform the tidal fitting, minimize satellite fuel consumption, and cover as many latitude bands of the atmosphere as possible.
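The BoTorch-to-STK coupling can be sketched as a standard BoTorch loop wrapped around an external evaluation function. The sketch below is single-objective for brevity (the study optimizes 7 objectives and reports a Pareto front, for which BoTorch's multi-objective acquisitions would be used instead); evaluate_constellation is a hypothetical stand-in for the STK propagation plus tidal least-squares fit:

import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

def evaluate_constellation(x: torch.Tensor) -> torch.Tensor:
    # Hypothetical placeholder: the real pipeline propagates the two satellites
    # in STK, samples CTMT/TIEGCM winds along track, least-squares fits the
    # tidal form, and returns a scalar fit-quality score to maximize.
    return -((x - 0.5) ** 2).sum()

bounds = torch.tensor([[0.0] * 3, [1.0] * 3], dtype=torch.double)  # 3 normalized orbit inputs
X = torch.rand(8, 3, dtype=torch.double)
Y = torch.stack([evaluate_constellation(x) for x in X]).unsqueeze(-1)

for _ in range(30):
    gp = SingleTaskGP(X, Y)
    fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))
    acq = ExpectedImprovement(gp, best_f=Y.max())
    x_next, _ = optimize_acqf(acq, bounds=bounds, q=1, num_restarts=8, raw_samples=128)
    X = torch.cat([X, x_next])
    Y = torch.cat([Y, evaluate_constellation(x_next.squeeze(0)).reshape(1, 1)])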
17

Power Dispatch and Storage Configuration Optimization of an Integrated Energy System using Deep Reinforcement Learning and Hyperparameter Tuning

Katikaneni, Sravya January 2022
No description available.
18

Physics-informed Machine Learning for Digital Twins of Metal Additive Manufacturing

Gnanasambandam, Raghav 07 May 2024
Metal additive manufacturing (AM) is an emerging technology for producing parts with virtually no constraint on the geometry. AM builds a part by depositing material in a layer-by-layer fashion. Despite the benefits in several critical applications, quality issues are one of the primary concerns for the widespread adoption of metal AM. Addressing these issues starts with a better understanding of the underlying physics, and includes monitoring and controlling the process in a real-world manufacturing environment. Digital Twins (DTs) are virtual representations of physical systems that enable fast and accurate decision-making. DTs rely on Artificial Intelligence (AI) to process complex information from multiple sources in a manufacturing system at multiple levels. This information typically comes from partially known process physics, in-situ sensor data, and ex-situ quality measurements for a metal AM process. Most current AI models cannot handle such ill-structured information. This work therefore proposes three novel machine-learning methods for improving the quality of metal AM processes, enabling DTs to control quality in several processes, including laser powder bed fusion (LPBF) and additive friction stir deposition (AFSD). The three proposed methods are as follows:
1. Process improvement requires mapping the process parameters to ex-situ quality measurements. These mappings tend to be non-stationary, with limited experimental data. This work utilizes a novel Deep Gaussian Process-based Bayesian optimization (DGP-SI-BO) method for sequential process design. A DGP can model non-stationarity better than a traditional Gaussian Process (GP) but is challenging to use in BO. The proposed DGP-SI-BO provides a bagging procedure for the acquisition function with a DGP surrogate model inferred via Stochastic Imputation (SI). For a fixed time budget, the proposed method gives 10% better quality for the LPBF process than the widely used BO method while being three times faster than the state-of-the-art method.
2. For metal AM, process physics information usually comes in the form of Partial Differential Equations (PDEs). Though the PDEs, along with in-situ data, can be handled through Physics-Informed Neural Networks (PINNs), the activation functions in NNs are traditionally not designed to handle multi-scale PDEs. This work proposes a novel activation function for PINNs, the Self-scalable tanh (Stan), which modifies the traditional tanh function. Stan is smooth, non-saturating, and has a trainable parameter; it allows an easy flow of gradients and enables systematic scaling of the input-output mapping during training (a sketch is given after this entry). Apart from solving the heat-transfer equations for LPBF and AFSD, this work provides applications in areas including quantum physics and solid and fluid mechanics. Stan also accelerates the notoriously hard and ill-posed inverse discovery of process physics.
3. PDE-based simulations are typically far too slow for in-situ process control. This work proposes a Fourier Neural Operator (FNO) for near-instantaneous predictions (a 1000x speed-up) of quality in metal AM. FNO is a data-driven method that maps the process parameters to a high-dimensional quality tensor (such as the thermal distribution in LPBF). Training the FNO with simulated data from the PINN ensures a quick response to alter the course of the manufacturing process. Once trained, a DT can readily deploy the model for real-time process monitoring.
The proposed methods combine complex information to provide reliable machine-learning models and improve understanding of metal AM processes. Though these models can stand alone, they complement each other to build DTs and achieve quality assurance in metal AM. / Doctor of Philosophy / Metal 3D printing, technically known as metal additive manufacturing (AM), is an emerging technology for making virtually any physical part with the click of a button. For instance, one of the most common AM processes, Laser Powder Bed Fusion (LPBF), melts metal powder using a laser to build up any desired shape. Despite its attractiveness, the quality of the built part is often not satisfactory for its intended use. For example, a metal plate built for a fractured bone may not adhere to the required dimensions. Improving the quality of metal AM parts starts with a better understanding of the underlying mechanisms at a fine length scale (the size of the powder or even smaller). Collecting data during the process and leveraging the known physics can help adjust the AM process to improve quality. Digital Twins (DTs) are exactly suited to the task, as they combine the process physics and the data obtained from sensors on metal AM machines to inform an AM machine of process settings and adjustments. This work develops three specific methods that utilize the known information from metal AM to improve the quality of parts built by metal AM machines. These methods combine different types of known information to alter the process settings of metal AM machines so that they produce high-quality parts.
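The Stan activation from method 2 is simple to state. Here is a sketch in PyTorch following the form given in the Stan paper, stan(x) = tanh(x) + beta * x * tanh(x), with one trainable beta per neuron; the initialization to one and the class layout are assumptions:

import torch
from torch import nn

class Stan(nn.Module):
    """Self-scalable tanh (sketch): stan(x) = tanh(x) + beta * x * tanh(x)."""
    def __init__(self, width: int):
        super().__init__()
        self.beta = nn.Parameter(torch.ones(width))  # trainable per-neuron scale (assumed init)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        t = torch.tanh(x)
        return t + self.beta * x * t  # non-saturating: grows like beta*x for large |x|

# e.g. a small PINN trunk:
# net = nn.Sequential(nn.Linear(2, 64), Stan(64),
#                     nn.Linear(64, 64), Stan(64),
#                     nn.Linear(64, 1))

Because the output scales with x for large inputs, gradients do not vanish the way they do for plain tanh, which is what enables the systematic input-output scaling described in the abstract.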
19

Bayesian Optimization of PCB-Embedded Electric-Field Grading Geometries for a 10 kV SiC MOSFET Power Module

Cairnie, Mark A. Jr. 28 April 2021
A finite element analysis (FEA)-driven, automated numerical optimization technique is used to design electric-field grading structures in a PCB-integrated bus bar for a 10 kV bondwire-less silicon-carbide (SiC) MOSFET power module. Due to the ultra-high density of the power module, careful design of field-grading structures inside the bus bar is required to mitigate the high electric-field strength in air. Using Bayesian optimization and a new weighted point-of-interest (POI) cost function, the highly non-uniform electric field is efficiently optimized without the use of field integration or finite-difference derivatives. The proposed optimization technique is used to efficiently characterize the performance of the embedded field-grading structure, providing insights into the fundamental limitations of the system. The characterization results are used to streamline the design and optimization of the bus bar and the high-density module interface. The high-density interface experimentally demonstrated a partial discharge inception voltage (PDIV) of 11.6 kV rms. When compared to a state-of-the-art descent-based optimization technique, the proposed algorithm converges 3x faster and with 7x smaller error, making both the field-grading structure and the design technique widely applicable to other high-density, high-voltage design problems. / M.S. / Innovation trends in electrical engineering, such as the electrification of consumer and commercial vehicles, renewable energy, and the widespread adoption of personal electronics, have spurred the development of new semiconductor materials to replace conventional silicon technology. To take full advantage of the better efficiency and faster speeds of these new materials, innovation is required at the system level to reduce the size of power conversion systems and develop converters with higher levels of integration. As the size of these systems decreases and operating voltages rise, the design of the insulation systems that protect them becomes more critical. Historically, the design of a high-density insulation system requires time-consuming design iteration, where the designer simulates a case, assesses its performance, modifies the design, and repeats until adequate performance is achieved. The process is computationally expensive and time-consuming, and the results are not easily applied to other insulation design problems. This work proposes an automated design process that allows for the streamlined optimization of high-density insulation systems. The process is applied to a 10 kV power module and experimentally demonstrates a 38% performance improvement over manual design techniques, while providing an 8x reduction in design cycle time.
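A weighted point-of-interest cost can be sketched as a handful of FEA probe values reduced to one scalar; the exact form used in the thesis may differ, so the relative-overshoot penalty and names below are assumptions:

import numpy as np

def weighted_poi_cost(e_field, weights, e_limit):
    """Weighted POI cost (sketch): penalize the FEA-computed field magnitude at a
    few critical probe points rather than integrating the field everywhere.
    e_field: |E| at each point of interest (V/m); weights: per-point importance."""
    excess = np.maximum(e_field / e_limit - 1.0, 0.0)  # relative overshoot, 0 if within limit
    return float(np.sum(weights * excess))

Because the cost depends only on a few probe points, each Bayesian-optimization iteration needs a single FEA solve and no field integration or finite-difference derivatives, consistent with the claim above.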
20

Sensor-Enabled Accelerated Engineering of Soft Materials

Liu, Yang 24 May 2024
Many grand societal challenges are rooted in the need for new materials, such as those related to energy, health, and the environment. However, the traditional way of discovering new materials is essentially trial and error. This time-consuming and expensive method cannot meet the quickly growing requirements for material discovery. To meet this challenge, the government of the United States started the Materials Genome Initiative (MGI) in 2011, which aims at accelerating the pace and reducing the cost of discovering new materials. The success of MGI needs a materials-innovation infrastructure including data tools, computation tools, and experiment tools. The last decade has witnessed significant progress for MGI, especially with respect to hard materials, but relatively less attention has been paid to soft materials. One important reason is the lack of experimental tools, especially characterization tools for high-throughput experimentation. This dissertation aims to enrich the toolbox with new sensor tools for high-throughput characterization of hydrogels. Piezoelectric-excited millimeter-sized cantilever (PEMC) sensors were used to characterize hydrogels. Their capability to investigate hydrogels was first demonstrated by monitoring the synthesis and stimuli-response of composite hydrogels. The PEMC sensors enabled in-situ study of how the manufacturing process, i.e., bulk vs. layer-by-layer, affects the structure and properties of hydrogels. Afterwards, the PEMC sensors were integrated with robots to develop a method of high-throughput experimentation: various hydrogels were formulated in a well-plate format and characterized by the sensor tools in an automated manner. High-throughput characterization, especially multi-property characterization, enabled optimizing the formulation to achieve trade-offs between different properties. Finally, the sensor-based high-throughput experimentation was combined with active learning for accelerated material discovery: a collaborative-learning scheme was used to guide the high-throughput formulation and characterization of hydrogels, demonstrating rapid discovery of mechanically optimized composite glycogels. Through this dissertation, we hope to provide a new tool for high-throughput characterization of soft materials to accelerate the discovery and optimization of materials. / Doctor of Philosophy / Many grand societal challenges, including those associated with energy and healthcare, are driven by the need for new materials. However, the traditional way of discovering new materials is based on trial and error, using low-throughput computational and experimental methods; it often takes several years, even decades, to discover and commercialize new materials, the lithium-ion battery being a good example. Traditional time-consuming and expensive methods cannot meet the fast-growing requirements of modern material discovery. With the development of computer science and automation, the idea of using data, artificial intelligence, and robots for accelerated materials discovery has attracted more and more attention. Significant progress has been made in metals and inorganic non-metallic materials (e.g., semiconductors) in the past decade under the guidance of machine learning and the assistance of automated robots. However, relatively less progress has been made in materials with complex structures and dynamic properties, such as hydrogels.
Hydrogels have wide applications in our daily lives, such as in drugs and biomedical devices. One significant barrier to accelerated discovery and engineering of hydrogels is the lack of tools that can rapidly characterize a material's properties. In this dissertation, a sensor-based approach was created to characterize the mechanical properties and stimuli-response of soft materials using low sample volumes. The sensor was integrated with a robot to test materials in high-throughput formats in a rapid, automated measurement workflow. In combination with machine learning, the high-throughput characterization method was demonstrated to accelerate the engineering and optimization of several hydrogels. Through this dissertation, we hope to provide new tools and methods for rapid engineering of soft materials.
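The abstract does not spell out the collaborative-learning scheme, but a generic sketch of how a GP-guided loop can fill one well plate per round is a greedy "constant liar" batch selection; the UCB acquisition, plate size, and all names below are assumptions:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def propose_plate(X_tried, y, candidates, plate_size=24, kappa=2.0):
    """Pick one well-plate's worth of formulations. After each pick, pretend its
    outcome equals the GP mean (the 'lie') so the next pick moves to a
    different region of formulation space instead of clustering."""
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    X_aug, y_aug, batch = X_tried.copy(), y.copy(), []
    for _ in range(plate_size):
        gp.fit(X_aug, y_aug)
        mu, sd = gp.predict(candidates, return_std=True)
        best = int(np.argmax(mu + kappa * sd))    # upper confidence bound
        batch.append(candidates[best])
        X_aug = np.vstack([X_aug, candidates[best]])
        y_aug = np.append(y_aug, mu[best])        # constant-liar update
    return np.array(batch)

After the robot formulates and the PEMC sensors measure the proposed plate, the real property values replace the lies and the loop repeats.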
