141

Computer Model Emulation and Calibration using Deep Learning

Bhatnagar, Saumya January 2022 (has links)
No description available.
142

BRANCHING PROCESSES FOR EPIDEMICS STUDY

JOAO PEDRO XAVIER FREITAS 26 October 2023 (has links)
This work models an epidemic's spreading over time with a stochastic approach. The number of new infections per infector is modeled as a discrete random variable, named here the contagion. The evolution of the disease over time is therefore a stochastic process. More specifically, the propagation is modeled as the Bienaymé-Galton-Watson process, a kind of branching process with a discrete parameter. In this process, for a given time, the number of infected members, i.e., a generation of infected members, is a random variable. In the first part of this dissertation, given that the mass function of the contagion random variable is known, four methodologies for finding the mass functions of the generations of the stochastic process are compared: probability generating functions with and without polynomial identities, Markov chains, and Monte Carlo simulation. The first and third methodologies provide analytical expressions relating the contagion random variable to the generation-size random variable. These analytical expressions are used in the second part of the dissertation, where a classical inverse problem of Bayesian parametric inference is studied. With the help of Bayes' rule, parameters of the contagion random variable are inferred from realizations of the stochastic process. The analytical expressions obtained in the first part of the work are used to build appropriate likelihood functions. To solve the inverse problem, two different ways of using data from the Bienaymé-Galton-Watson process are developed and compared: data as realizations of a single generation of the branching process, and data as a single realization of the branching process observed over a certain number of generations. The criterion used to stop the update process in the Bayesian parametric inference uses the L2-Wasserstein distance, a metric based on optimal mass transport. All numerical and symbolic routines developed for this work are written in MATLAB.
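A minimal sketch of the Monte Carlo methodology described above, assuming a Poisson contagion law purely for illustration (the thesis's actual routines are in MATLAB; this Python version only shows the mechanics of propagating a Bienaymé-Galton-Watson process):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_bgw(offspring_sampler, n_generations, n_runs=20_000):
    """Monte Carlo estimate of generation sizes of a Bienaymé-Galton-Watson
    process started from a single infected member (generation 0)."""
    sizes = np.ones(n_runs, dtype=np.int64)
    history = [sizes.copy()]
    for _ in range(n_generations):
        # each current member independently draws its number of new infections
        sizes = np.array([offspring_sampler(z).sum() if z > 0 else 0
                          for z in sizes], dtype=np.int64)
        history.append(sizes.copy())
    return history

# Hypothetical contagion law: each infected member infects Poisson(1.5) others.
contagion = lambda z: rng.poisson(1.5, size=z)

for g, s in enumerate(simulate_bgw(contagion, n_generations=5)):
    print(f"generation {g}: mean size {s.mean():.2f}, "
          f"P(extinct by now) {np.mean(s == 0):.3f}")
```

With a mean offspring count of 1.5 > 1 the process is supercritical: the mean generation size grows geometrically while a fraction of realizations still go extinct, which is exactly the behavior the analytical mass-function methods must capture.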
143

Computational and Machine Learning-Reinforced Modeling and Design of Materials under Uncertainty

Hasan, Md Mahmudul 05 July 2023 (has links)
The component-level performance of materials is fundamentally determined by the underlying microstructural features. Therefore, designing high-performance materials using multi-scale models plays a significant role in improving the predictability, reliability, proper functioning, and longevity of components for a wide range of applications in the fields of aerospace, electronics, energy, and structural engineering. This thesis aims to develop new methodologies to design microstructures under inherent material uncertainty by incorporating machine learning techniques. To achieve this objective, the study addresses gradient-based and machine learning-driven design optimization methods to enhance homogenized linear and non-linear properties of polycrystalline microstructures. However, variations arising from the thermo-mechanical processing of materials affect microstructural features and properties by propagating over multiple length scales. To quantify this inherent microstructural uncertainty, this study introduces a linear programming-based analytical method. When this analytical uncertainty quantification formulation is not applicable (e.g., uncertainty propagation on non-linear properties), a machine learning-based inverse design approach is presented to quantify the microstructural uncertainty. Example design problems are discussed for different polycrystalline systems (e.g., Titanium, Aluminum, and Galfenol). Though conventional machine learning performs well when used for designing microstructures or modeling material properties, its predictions may still fail to satisfy design constraints associated with the physics of the system. Therefore, a physics-informed neural network (PINN) is developed to incorporate the problem physics in the machine learning formulation. In this study, a PINN model is built and integrated into materials design to study the deformation processes of Copper and a Titanium-Aluminum alloy. / Doctor of Philosophy / Microstructure-sensitive design is a high-throughput computational approach for materials design, where material performance is improved through the control and design of microstructures. It enhances component performance and, subsequently, the overall system's performance at the application level. This thesis aims to design microstructures of polycrystalline materials such as Galfenol, Titanium-Aluminum alloys, and Copper to obtain the mechanical properties desired for certain applications. The advantage of the microstructure-sensitive design approach is that multiple microstructures can be suggested that provide a similar value of the design parameters; manufacturers can therefore follow any of these microstructure designs to fabricate materials with the desired properties. Moreover, the microstructure uncertainty arising from variations in thermo-mechanical processing and in the measurement of experimental data is quantified. It is necessary to address the resulting randomness of the microstructure because it can alter the expected mechanical properties. To check the manufacturability of the proposed microstructure designs, a physics-informed machine learning model is developed to build a relation between the process, the microstructure, and the material properties. This model can be used to solve the process design problem of identifying the processing parameters that achieve a given desired microstructure.
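As a hedged illustration of the PINN idea (not the thesis's crystal-plasticity models), the sketch below trains a network in PyTorch so that a physics residual, here the toy ODE u' + u = 0 with u(0) = 1, is minimized alongside the boundary condition:

```python
import torch

torch.manual_seed(0)

# Toy physics: du/dx = -u on [0, 1] with u(0) = 1 (exact solution exp(-x)).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.linspace(0, 1, 64).reshape(-1, 1).requires_grad_(True)
x0 = torch.zeros(1, 1)  # boundary point

for step in range(3000):
    u = net(x)
    # du/dx via automatic differentiation, kept in the graph for training
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    residual = du + u                                  # physics residual
    loss = (residual**2).mean() + (net(x0) - 1.0).pow(2).mean()  # physics + BC
    opt.zero_grad(); loss.backward(); opt.step()

print(float(net(torch.tensor([[1.0]]))))  # should be close to exp(-1) ~ 0.368
```

The same pattern, a data/constraint loss plus a physics-residual loss, is what lets a PINN-style model respect design constraints that a purely data-driven network may violate.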
144

DIMENSION REDUCTION, OPERATOR LEARNING AND UNCERTAINTY QUANTIFICATION FOR PROBLEMS OF DIFFERENTIAL EQUATIONS

Shiqi Zhang (12872678) 26 July 2022 (has links)
In this work, we focus on dimension reduction, operator learning, and uncertainty quantification for problems of differential equations. The supervised machine learning methods introduced here belong to a newly booming field compared to traditional numerical methods. The building blocks for our work are mainly Gaussian processes and neural networks.

The first work focuses on supervised dimension reduction problems. A new framework based on rotated multi-fidelity Gaussian process regression is introduced. It can effectively solve high-dimensional problems when the data are insufficient for traditional methods, and it yields an accurate surrogate Gaussian process model of the original problem. The second work is a physics-assisted Gaussian process framework with active learning for forward and inverse problems of partial differential equations (PDEs). Here, a Gaussian process regression model is combined with given physical information to find solutions or discover unknown coefficients of given PDEs; three different models are introduced and their performance is compared and discussed. Lastly, we propose an attention-based MultiAuto-DeepONet for operator learning in stochastic problems, targeting operator learning problems related to time-dependent stochastic differential equations (SDEs). The work builds on MultiAuto-DeepONet, and attention mechanisms are applied to improve model performance on specific types of problems; three different attention mechanisms are presented and compared. Numerical experiments illustrate the effectiveness of the proposed models.
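A minimal Gaussian process regression sketch in Python/NumPy, the shared building block of the three works above; the RBF kernel, length scale, and toy data are illustrative assumptions, not the thesis's multi-fidelity or physics-assisted variants:

```python
import numpy as np

def rbf(X1, X2, ell=0.3, sig=1.0):
    """Squared-exponential kernel between two sets of points."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :])**2, axis=-1)
    return sig**2 * np.exp(-0.5 * d2 / ell**2)

# Toy data: noisy observations of an underlying function.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (15, 1))
y = np.sin(6 * X[:, 0]) + 0.05 * rng.standard_normal(15)

noise = 1e-2
K = rbf(X, X) + noise * np.eye(len(X))
L = np.linalg.cholesky(K)                       # stable solve via Cholesky
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

Xs = np.linspace(0, 1, 5).reshape(-1, 1)        # test points
Ks = rbf(Xs, X)
mean = Ks @ alpha                               # posterior mean
v = np.linalg.solve(L, Ks.T)
var = np.diag(rbf(Xs, Xs)) - np.sum(v**2, axis=0)  # posterior variance
print(np.c_[mean, np.sqrt(var)])                # prediction +/- uncertainty
```

The posterior variance is what the active-learning and multi-fidelity extensions exploit: new samples are requested where the surrogate is most uncertain.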
145

Multidisciplinary Design Under Uncertainty Framework of a Spacecraft and Trajectory for an Interplanetary Mission

Siddhesh Ajay Naidu (18437880) 28 April 2024 (has links)
Design under uncertainty (DUU) for spacecraft is crucial in ensuring mission success, especially given the criticality of their failure. To obtain a more realistic understanding of space systems, it is beneficial to holistically couple the modeling of the spacecraft and its trajectory as a multidisciplinary analysis (MDA). In this work, an MDA model is developed for an Earth-Mars mission by employing the General Mission Analysis Tool (GMAT) to model the mission trajectory and Rocket Propulsion Analysis (RPA) to design the engines. Using this direct MDA model, a deterministic optimization (DO) of the system is performed first and yields a design that completes the mission in 307 days while requiring 475 kg of fuel. The direct MDA model is also integrated into a Monte Carlo simulation (MCS) to investigate the uncertainty quantification (UQ) of the spacecraft and trajectory system. When considering the combined uncertainty in the launch date over a 20-day window and in the specific impulses, the time of flight ranges from 275 to 330 days and the total fuel consumption ranges from 475 to 950 kg. The spacecraft velocity exhibits deviations ranging from 2 to 4 km/s at any given instant in the Earth inertial frame. The fuel consumed during the trajectory correction maneuver (TCM) ranges from 1 to 250 kg, while the fuel consumed during the Mars orbit insertion (MOI) ranges from 350 to 810 kg. Using the direct MDA model for optimization and uncertainty quantification of the system can be computationally prohibitive for DUU. To address this challenge, the effectiveness of surrogate-based approaches for performing UQ is demonstrated, resulting in significantly lower computational costs. Gaussian process (GP) models trained on data from the MDA model were implemented in the UQ framework and their results were compared to those of the direct MDA method. When considering the combined uncertainty from both sources, the surrogate-based method had a mean error of 1.67% and required only 29% of the computational time; the time-of-flight range matched the direct MDA well, while the TCM and MOI fuel consumption ranges were smaller by 5 kg. These GP models were integrated into the DUU framework to make reliability-based design optimization (RBDO) feasible for the spacecraft and trajectory system. For the combined uncertainty, the DO design yielded a poor reliability of 54%, underscoring the necessity of performing RBDO. The DUU framework obtained a design with a significantly improved reliability of 99%, which required an additional 39.19 kg of fuel and also reduced the time of flight by 0.55 days.
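A toy sketch of the Monte Carlo UQ loop described above; the linear response surfaces below are invented stand-ins for the GMAT/RPA model, purely to show how launch-window and specific-impulse uncertainty is propagated to time of flight and fuel mass:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical input uncertainties (stand-ins, not the thesis's values):
launch_offset = rng.uniform(0, 20, n)   # launch date within a 20-day window
isp = rng.normal(320.0, 5.0, n)         # engine specific impulse [s]

# Invented toy surrogates for the mission responses (illustrative only):
tof = 300 + 1.2 * (launch_offset - 10) - 0.1 * (isp - 320)          # [days]
fuel = 700 - 2.0 * (isp - 320) + 8.0 * np.abs(launch_offset - 10)   # [kg]

for name, q in [("time of flight [days]", tof), ("fuel [kg]", fuel)]:
    print(f"{name}: mean {q.mean():.1f}, 95% interval "
          f"[{np.percentile(q, 2.5):.1f}, {np.percentile(q, 97.5):.1f}]")
```

In the thesis, each sample would instead require a full trajectory/propulsion evaluation, which is exactly why the GP surrogate is substituted for the direct MDA model.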
146

Ensemble for Deterministic Sampling with positive weights : Uncertainty quantification with deterministically chosen samples

Sahlberg, Arne January 2016 (has links)
Knowing the uncertainty of a calculated result is always important, but especially so when performing calculations for safety analysis. A traditional way of propagating the uncertainty of input parameters is Monte Carlo (MC) methods. A quicker alternative to MC, especially useful when computations are heavy, is Deterministic Sampling (DS). DS works by hand-picking a small set of samples, rather than randomizing a large set as in MC methods. The samples and their corresponding weights are chosen to represent the uncertainty one wants to propagate by encoding the first few statistical moments of the parameters' distributions. Finding a suitable ensemble for DS is not easy, however. Given a large enough set of samples, one can always calculate weights to encode the first couple of moments, but there is good reason to want an ensemble with only positive weights. How to choose the ensemble for DS so that all weights are positive is the problem investigated in this project. Several methods for generating such ensembles have been derived, and an algorithm for calculating weights while forcing them to be positive has been found. The methods and generated ensembles have been tested for use in uncertainty propagation in many different cases, and the ensemble sizes have been compared. In general, encoding two or four moments in an ensemble seems to be enough to get a good result for the propagated mean value and standard deviation. Regarding size, the most favorable case is when the parameters are independent and have symmetrical distributions. In short, DS can work as a quicker alternative to MC methods in uncertainty propagation as well as in other applications.
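A minimal sketch of the weight-calculation step, assuming SciPy. Non-negative least squares (NNLS) is one way to force positive weights for a hand-picked ensemble so that its first few moments match a target distribution; the sample points, moment count, and NNLS choice are illustrative assumptions, not necessarily the thesis's algorithm:

```python
import numpy as np
from scipy.optimize import nnls

# Target: standard normal; raw moments m_0..m_4 are (1, 0, 1, 0, 3).
target_moments = np.array([1.0, 0.0, 1.0, 0.0, 3.0])

# Candidate ensemble: hand-picked symmetric sample points.
x = np.array([-3.0, -1.5, -0.5, 0.0, 0.5, 1.5, 3.0])

# Row k of A encodes the condition sum_i w_i * x_i^k = m_k, for k = 0..4.
A = np.vander(x, N=5, increasing=True).T
w, residual = nnls(A, target_moments)        # w >= 0 by construction

print("weights:", np.round(w, 4), " residual:", residual)
print("achieved moments:", np.round(A @ w, 6))
```

A near-zero residual means the positive-weight ensemble reproduces the target mean, variance, skewness, and kurtosis exactly, which is the property DS relies on when propagating uncertainty through a heavy computation with only a handful of runs.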
147

Predicting multibody assembly of proteins

Rasheed, Md. Muhibur 25 September 2014 (has links)
This thesis addresses the multi-body assembly (MBA) problem in the context of protein assemblies. [...] In this thesis, we chose the protein assembly domain because accurate and reliable computational modeling, simulation, and prediction of such assemblies would clearly accelerate discoveries in understanding the complexities of metabolic pathways, identifying the molecular basis of normal health and disease, and designing new drugs and other therapeutics. [...] [We developed] F²Dock (Fast Fourier Docking), which includes a multi-term scoring function combining a statistical-thermodynamic approximation of molecular free energy with several knowledge-based terms. Parameters of the scoring model were learned from a large set of positive/negative examples; when tested on 176 protein complexes of various types, the model showed excellent accuracy in ranking correct configurations higher (F²Dock ranks the correct solution as the top-ranked one in 22/176 cases, which is better than other unsupervised prediction software on the same benchmark). Most protein-protein interaction scoring terms can be expressed as integrals of distance-dependent decaying kernels over the occupied volume, the boundary, or a set of discrete points (atom locations). We developed a dynamic adaptive grid (DAG) data structure which computes smooth surface and volumetric representations of a protein complex in O(m log m) time, where m is the number of atoms, assuming that the smallest feature size h is Θ(r_max), where r_max is the radius of the largest atom; supports updates in O(log m) time; and uses O(m) memory. We also developed the dynamic packing grids (DPG) data structure, which supports quasi-constant-time updates (O(log w)) and spherical neighborhood queries (O(log log w)), where w is the word size of the RAM. Together, DPG and DAG yield an O(k)-time approximation of the scoring terms, where k << m is the size of the contact region between proteins. [...] [W]e consider the symmetric spherical shell assembly case, where multiple copies of identical proteins tile the surface of a sphere. Though this is a restricted subclass of MBA, it is an important one, since it would accelerate the development of drugs and antibodies that prevent viruses from forming capsids, which have such spherical symmetry in nature. We proved that it is possible to characterize the space of possible symmetric spherical layouts using a small number of representative local arrangements (called tiles) and their global configurations (tilings). We further show that the tilings, and the mapping of proteins to tilings on shells of arbitrary size, are parameterized by 3 discrete parameters and 6 continuous degrees of freedom, and that the 3 discrete DOF can be restricted to a constant number of cases if the size of the shell is known (in terms of the number of proteins n). We also consider the case where a coarse model of the whole protein complex is available. We show that even when such coarse models do not resolve atomic positions, they can be sufficient to identify a general location for each protein and its neighbors, and thereby restrict the configurational space. We developed an iterative refinement search protocol that leverages such multi-resolution structural data to predict accurate high-resolution models of protein complexes, and successfully applied the protocol to model gp120, a protein on the spike of HIV and currently the most feasible target for anti-HIV drug design.
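The core idea behind the DPG data structure, fast spherical neighborhood queries via spatial bucketing, can be sketched with a toy spatial hash. This is an illustrative stand-in, not the thesis's implementation, and it shows only the expected-constant-time query pattern, not the word-size-based O(log log w) bounds:

```python
from collections import defaultdict
from itertools import product
import math

class PackingGrid:
    """Toy spatial hash: atoms are bucketed into cubic cells so that a
    spherical query of radius r <= cell only inspects the 27 cells around
    the query point. Illustrative stand-in for a dynamic packing grid."""
    def __init__(self, cell=2.0):
        self.cell = cell
        self.cells = defaultdict(list)

    def _key(self, p):
        return tuple(int(math.floor(c / self.cell)) for c in p)

    def insert(self, p):                      # O(1) expected update
        self.cells[self._key(p)].append(p)

    def neighbors(self, p, r):                # requires r <= self.cell
        k = self._key(p)
        out = []
        for dk in product((-1, 0, 1), repeat=3):
            for q in self.cells.get(tuple(a + b for a, b in zip(k, dk)), []):
                if math.dist(p, q) <= r:
                    out.append(q)
        return out

g = PackingGrid(cell=2.0)
for p in [(0.0, 0.0, 0.0), (1.0, 1.0, 0.5), (5.0, 5.0, 5.0)]:
    g.insert(p)
print(g.neighbors((0.5, 0.5, 0.5), r=2.0))    # finds the two nearby atoms
```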
148

  • Quantification of parametric uncertainty effects in structural failure criteria

Yanik, Yasar January 2019 (has links)
Advisor: Samuel Silva / Failure theory studies and predicts the circumstances under which solid materials fail under the action of external loads. The theories of failure are expressed as different failure criteria, such as von Mises and Tresca, which are the most famous for certain materials. This master's dissertation presents a comparison between the Tresca and von Mises failure criteria, taking into account the underlying uncertainties in the constitutive equations and in the stress analysis. To exemplify the comparison, numerical simulations are performed using a simple plate, a simple deflection problem, and the frame of a Formula SAE car. Due to the complexity of the frame, different probabilistic techniques are used, such as response surface methods and parameter correlation. Results show that the random input variables affect the random output variables in various ways, and that there is no large difference between the von Mises and Tresca failure criteria when uncertainties are assumed in the stress analysis formulation. / Master
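A short sketch of the kind of comparison described, assuming a plane-stress state with normally distributed loads (invented numbers, not the dissertation's finite element models): both equivalent stresses are computed from the same sampled principal stresses, and the resulting failure probabilities are compared.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Hypothetical uncertain plane-stress components [MPa]:
sx = rng.normal(200.0, 20.0, n)
sy = rng.normal(80.0, 10.0, n)
txy = rng.normal(50.0, 8.0, n)

# Principal stresses for plane stress (third principal stress is zero).
center = 0.5 * (sx + sy)
radius = np.sqrt((0.5 * (sx - sy))**2 + txy**2)
s1, s2 = center + radius, center - radius

von_mises = np.sqrt(s1**2 - s1 * s2 + s2**2)
tresca = np.maximum.reduce([np.abs(s1 - s2), np.abs(s1), np.abs(s2)])

yield_strength = 250.0  # MPa, assumed
print("P(fail), von Mises:", np.mean(von_mises > yield_strength))
print("P(fail), Tresca:   ", np.mean(tresca > yield_strength))
```

Since the Tresca criterion is at least as conservative as von Mises, its failure probability is never smaller; with uncertain inputs the two typically stay close, consistent with the dissertation's conclusion.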
149

BAYESIAN OPTIMAL DESIGN OF EXPERIMENTS FOR EXPENSIVE BLACK-BOX FUNCTIONS UNDER UNCERTAINTY

Piyush Pandita (6561242) 10 June 2019 (has links)
Researchers and scientists across various areas face the perennial challenge of selecting experimental conditions or inputs for computer simulations in order to achieve promising results. The aim of conducting these experiments could be to study the production of a material with great applicability, or to accurately model and analyze a simulation of a physical process through a high-fidelity computer code. The presence of noise in the experimental observations or simulator outputs, called aleatory uncertainty, is usually accompanied by a limited amount of data due to budget constraints, which gives rise to what is known as epistemic uncertainty. This problem of designing experiments with a limited number of allowable experiments or simulations under aleatory and epistemic uncertainty needs to be treated in a Bayesian way. The aim of this thesis is to extend the state of the art in Bayesian optimal design of experiments, where one can optimize and infer statistics of the expensive experimental observation(s) or simulation output(s) under uncertainty.
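A hedged sketch of one standard acquisition rule used in Bayesian optimal design, expected improvement for minimization; the posterior means and standard deviations below are hypothetical stand-ins for a trained GP surrogate of the expensive black-box function:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best, xi=0.01):
    """EI for minimization: expected amount by which sampling a point with
    posterior N(mu, sigma^2) improves on the current best observed value."""
    sigma = np.maximum(sigma, 1e-12)          # guard against zero variance
    z = (best - mu - xi) / sigma
    return (best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# Hypothetical GP posterior over five candidate experimental conditions:
mu = np.array([1.2, 0.8, 1.0, 0.5, 1.5])
sigma = np.array([0.1, 0.4, 0.05, 0.6, 0.2])
best_so_far = 0.9

ei = expected_improvement(mu, sigma, best_so_far)
print("EI per candidate:", ei.round(4))
print("next experiment -> candidate", int(np.argmax(ei)))
```

EI balances exploitation (low posterior mean) against exploration (high posterior uncertainty), which is how a limited experimental budget is spent under both aleatory and epistemic uncertainty.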
150

  • Virtual damping for the vibroacoustic design of future launchers

Krifa, Mohamed 19 May 2017 (has links)
In the design of space launchers, controlling damping is a major issue. In the absence of very expensive tests on the real structure before the final qualification phase, damping modeling can lead to over-sizing of the structure, whereas the goal is to reduce the cost of launching a rocket while guaranteeing the vibratory comfort of the payload. Our contributions are the following. First, we propose a method for predicting, by computation, the vibration levels in launcher structures using a virtual-testing strategy that predicts low-frequency damping. This method is based on meta-models built from numerical designs of experiments using detailed models of the joints. These meta-models can be obtained through dedicated computations using a 3D finite element resolution that accounts for contact. Using these meta-models, the modal damping over a vibration cycle can be computed as the ratio between the dissipated energy and the strain energy. This approach gives an accurate and inexpensive approximation of the solution: the global nonlinear computation, which is out of reach for complex structures, becomes accessible through the meta-model-based virtual approach. Second, a validation of the virtual tests on the Ariane 5 launcher structure is carried out, taking into account the bolted joints between stages, to illustrate the proposed approach. When the generalized damping matrix is not diagonal (because of localized dissipation), modal methods cannot compute or estimate the off-diagonal generalized damping terms; the question is then to quantify, with a good accuracy-to-cost ratio, the error committed when these off-diagonal terms are neglected in computing vibration levels. Third, the validity of the diagonality assumption for the generalized damping matrix is examined, and a very inexpensive method for a posteriori quantification of the modal damping estimation error, based on a perturbation method, is proposed. Finally, the last contribution of this thesis is a decision-support tool that quantifies the impact of the lack of knowledge about joint damping on the global behavior of launchers, using the info-gap method.
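A minimal sketch of the energy-ratio damping estimate mentioned above, using the standard equivalent-viscous relation ξ = E_d / (4π E_s); the per-mode energies are hypothetical stand-ins for outputs of the detailed 3D contact computations of a bolted joint:

```python
import numpy as np

# Hypothetical per-cycle energies for three modes (stand-ins, not thesis data):
dissipated = np.array([0.8, 2.5, 1.1])    # energy dissipated per cycle [J]
strain = np.array([120.0, 310.0, 95.0])   # peak strain energy [J]

# Equivalent viscous modal damping ratio: xi = E_d / (4 * pi * E_s).
xi = dissipated / (4 * np.pi * strain)
for i, x in enumerate(xi, start=1):
    print(f"mode {i}: xi = {x:.4f} ({100 * x:.2f} % of critical)")
```

In the virtual-testing strategy, a meta-model of the joint supplies the dissipated energy for each loading level, so this ratio can be evaluated without rerunning the full nonlinear contact simulation.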
