551

Predictive Turbulence Modeling with Bayesian Inference and Physics-Informed Machine Learning

Wu, Jinlong 25 September 2018 (has links)
Reynolds-averaged Navier–Stokes (RANS) simulations are widely used for engineering design and analysis involving turbulent flows. In RANS simulations, the Reynolds stress requires a closure model, and existing models carry large model-form uncertainties. RANS simulations are therefore known to be unreliable in many flows of engineering relevance, including flows with three-dimensional structures, swirl, pressure gradients, or curvature. This lack of accuracy in complex flows has diminished the utility of RANS simulations as a predictive tool for engineering design, analysis, optimization, and reliability assessment. Recently, data-driven methods have emerged as a promising alternative for developing Reynolds stress models for RANS simulations. In this dissertation I explore two physics-informed, data-driven frameworks to improve RANS-modeled Reynolds stresses. First, a Bayesian inference framework is proposed to quantify and reduce the model-form uncertainty of the RANS-modeled Reynolds stress by leveraging online sparse measurement data together with empirical prior knowledge. Second, a machine-learning-assisted framework is proposed to utilize offline high-fidelity simulation databases. Numerical results show that the data-driven RANS models yield better predictions of the Reynolds stress and other quantities of interest for several canonical flows. Two metrics are also presented for an a priori assessment of the prediction confidence of the machine-learning-assisted RANS model. The proposed data-driven methods are also applicable to the computational study of other physical systems whose governing equations contain unresolved physics to be modeled. / Ph. D.
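As a hedged illustration of the offline, machine-learning-assisted idea (a sketch with synthetic data, not the dissertation's actual implementation), a regressor can be trained on a high-fidelity database to map mean-flow features to the Reynolds-stress discrepancy and then used to correct a baseline RANS prediction. The random-forest choice, feature count, and all data here are illustrative assumptions:

```python
# Hedged sketch: learn a mapping from mean-flow features to the Reynolds-stress
# discrepancy using an offline "high-fidelity" database, then correct a new
# RANS prediction. All names and data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

n_cells, n_features = 5000, 6
X_train = rng.normal(size=(n_cells, n_features))       # mean-flow features per cell
tau_rans = rng.normal(size=n_cells)                    # baseline RANS stress component
tau_hifi = tau_rans + 0.3 * np.tanh(X_train[:, 0])     # synthetic high-fidelity "truth"
y_train = tau_hifi - tau_rans                          # model-form discrepancy

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Prediction flow: correct the baseline stress with the learned discrepancy.
X_new = rng.normal(size=(100, n_features))
tau_rans_new = rng.normal(size=100)
tau_corrected = tau_rans_new + model.predict(X_new)
```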
552

Linear Parameter Uncertainty Quantification using Surrogate Gaussian Processes

Macatula, Romcholo Yulo 21 July 2020 (has links)
We consider uncertainty quantification using surrogate Gaussian processes. We take a previous sampling algorithm and provide a closed-form expression for the resulting posterior distribution. We extend the method to weighted least squares and to a Bayesian approach, both with closed-form expressions for the resulting posterior distributions. We test the methods on 1D deconvolution and 2D tomography. Our new methods improve on the previous algorithm but still fall short of a typical Bayesian inference method in some respects. / Master of Science / Parameter uncertainty quantification seeks to determine both estimates of model parameters and the uncertainty associated with those estimates. Examples of model parameters include physical properties such as density, growth rates, or even deblurred images. Previous work has shown that replacing data with a surrogate model can provide promising estimates with low uncertainty. We extend the previous methods to the specific setting of linear models. Theoretical results are tested on simulated computed tomography problems.
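For context, the kind of closed-form posterior this abstract refers to exists in the linear-Gaussian case: for y = Ax + e with Gaussian noise and a Gaussian prior on x, the posterior mean and covariance have exact expressions. Below is a minimal sketch on a toy 1D deconvolution problem; the blurring operator and variances are assumptions, not the thesis's setup:

```python
# Conjugate linear-Gaussian posterior: y = A x + e, e ~ N(0, s2*I), x ~ N(0, l2*I)
# => posterior x | y ~ N(mu, Sigma) with the closed forms below.
import numpy as np

rng = np.random.default_rng(1)
n = 50
t = np.arange(n)
# Illustrative 1D blurring (convolution) matrix: a row-normalized Gaussian kernel.
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)

x_true = np.sin(2 * np.pi * t / n)                     # signal to recover
s2, l2 = 0.01, 1.0                                     # noise and prior variances
y = A @ x_true + rng.normal(scale=np.sqrt(s2), size=n)

Sigma = np.linalg.inv(A.T @ A / s2 + np.eye(n) / l2)   # posterior covariance
mu = Sigma @ A.T @ y / s2                              # posterior mean
post_std = np.sqrt(np.diag(Sigma))                     # pointwise uncertainty
```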
553

Diffusion Quantification in Spatially Heterogeneous Materials

Dustin M Harmon (11267964) 08 April 2024 (has links)
<p dir="ltr">Spatial heterogeneity is ubiquitous across life and the universe; the same is true for phase-separating pharmaceutical formulations, cells, and tissues. To interrogate these spatially-varying complicated samples, simple analysis techniques such as fluorescence recovery after photobleaching (FRAP) can provide information on molecular transport. Conventional FRAP approaches localize analysis to small spots, which may not be representative of trends across the full field of view.</p><p dir="ltr">Taking advantage of strategies used for structures illumination, an approach has been developed to use patterned illumination in combination with FRAP for probing large fields of view while representatively sampling. Patterned illumination is used to establish a concentration gradient across a sample by irreversibly photobleaching fluorophores, such as with the simple comb pattern photobleach presented in Chapters 1 and 4. Patterned photobleaching allows spatial Fourier-domain analysis of multiple spatial harmonics simultaneously. In the spatial FT-domain the real-space photobleach signal is integrated into puncta, greatly increasing the signal to noise ratio compared to conventional point-bleach FRAP. The order of the spatial harmonic is directly related to the length-scale of translational diffusion measured, with a series of harmonics accessing diffusion over many length scales in a single experiment. Measurements of diffusion at multiple length scales informs on the diffusion mechanism by sensitively reporting on deviations away from normal diffusion.</p><p dir="ltr">Complementing the physical hardware for inducing patterned illumination, this dissertation introduces novel algorithms for reconstructing spatially-resolved diffusion maps in heterogeneous materials by combining Fourier domain analysis with patterned photobleaching. FT-FRAP is introduced in Chapter 1 for interrogating phase-separating samples using beam-scanning instrumentation for comb-bleach illumination. This analysis allowed disentangling separate contributions to diffusion from normal bulk diffusion and an interfacial exchange mechanism only available due to multi-harmonic analysis. The introduction of a dot-array bleach pattern using widefield microscopy is presented in Chapter 2 for high-throughput detection of mobility in simple binary systems as well as for segmentation in phase-separating pharmaceutical formulations. The analysis becomes more complicated as more components are added to the system such as a surfactant. Introduced in chapter 3, FT-FRAP with dot-array photobleaching was shown to be useful for characterizing diffusion of phase-separating micro-domain smaller than a single pixel of the camera. Supported by simulations, a biexponential fitting model was developed for quantification of diffusion by multiple species simultaneously. Chapter 4 introduces imaging inside of 3D particles comprised of an active pharmaceutical ingredient (API) in microencapsulated agglomerates which exhibited strong interfacial exchange. Multi-photon excited fluorescence enabled imaging a small focal volume within the particles.</p>
554

Computational Reconstruction and Quantification of Aerospace Materials

Long, Matthew Thomas 14 May 2024 (has links)
Microstructure reconstruction is a necessary tool for multi-scale modeling, as it allows for the analysis of the microstructure of a material without the cost of measuring all of the data required for the analysis. For microstructure reconstruction to be effective, the synthetic microstructure needs to predict what a small sample of measured data would look like on a larger domain. The Markov Random Field (MRF) algorithm is a method of generating statistically similar microstructures for this process. In this work, two key factors of the MRF algorithm are analyzed. The first is how the base features of the microstructure, related to orientation and grain/phase topology information, influence the selection of the MRF parameters used to perform the reconstruction. The second is the analysis of the numerical (epistemic) uncertainty that arises from the use of the MRF algorithm. This is done by first removing the material (aleatoric) uncertainty, which is the noise inherent in the original image representing the experimental data. The epistemic uncertainty arising from the MRF algorithm is analyzed through the percentage of isolated pixels and the difference in average grain size between the initial image and the reconstructed image. This research focuses mainly on two microstructures, B4C-TiB2 and Ti-7Al, a ceramic composite and a metallic alloy, respectively. Both are candidate materials for many aerospace systems owing to their desirable mechanical performance under large thermo-mechanical stresses. / Master of Science
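The two epistemic-uncertainty metrics named above are simple to compute on a two-phase image. The sketch below is a hedged illustration on a synthetic binary microstructure (a smoothed random field standing in for an MRF reconstruction); the isolated-pixel and region-size definitions are plausible assumptions, not necessarily the thesis's exact formulas:

```python
# Hedged sketch of two reconstruction-quality metrics on a binary microstructure:
# (1) fraction of isolated pixels (no same-phase 4-neighbor), and
# (2) a mean "grain" size proxy from connected-component labeling.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
img = ndimage.gaussian_filter(rng.normal(size=(256, 256)), 3) > 0  # fake two-phase image

def isolated_pixel_fraction(a):
    # Count same-phase 4-neighbors; np.roll implies periodic boundaries (for brevity).
    same = np.zeros(a.shape, dtype=int)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        same += (np.roll(a, shift, axis=(0, 1)) == a)
    return float(np.mean(same == 0))

def mean_region_size(a):
    # Average connected-component size (in pixels) over both phases.
    sizes = []
    for phase in (a, ~a):
        labels, num = ndimage.label(phase)
        if num:
            sizes.append(phase.sum() / num)
    return float(np.mean(sizes))

# Comparing these metrics between the measured and reconstructed images gives
# a simple handle on the algorithm's epistemic uncertainty.
print("isolated pixels:", isolated_pixel_fraction(img))
print("mean region size:", mean_region_size(img))
```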
555

Physics-informed Machine Learning with Uncertainty Quantification

Daw, Arka 12 February 2024 (has links)
Physics-informed machine learning (PIML) has emerged at the forefront of research in scientific machine learning, with the key motivation of systematically coupling machine learning (ML) methods with prior domain knowledge, often available in the form of physics supervision. Uncertainty quantification (UQ) is an important goal in many scientific use cases, where obtaining reliable ML model predictions and assessing the potential risks associated with them are crucial. In this thesis, we propose novel methodologies in three key areas for improving uncertainty quantification in PIML. First, we propose to explicitly infuse the physics prior, in the form of monotonicity constraints, through architectural modifications of neural networks for quantifying uncertainty. Second, we demonstrate a more general framework for quantifying uncertainty with PIML that is compatible with generic forms of physics supervision such as PDEs and closed-form equations. Lastly, we study the limitations of physics-based losses in the context of physics-informed neural networks (PINNs) and develop an efficient sampling strategy to mitigate their failure modes. / Doctor of Philosophy / Owing to the success of deep learning in computer vision and natural language processing, there is growing interest in applying deep learning to scientific applications, where knowledge is available in the form of closed-form equations, partial differential equations, and labeled data. My work focuses on developing deep learning methods that integrate these forms of supervision. In particular, it focuses on building methods that can quantify uncertainty in deep learning models, an important goal for high-stakes applications.
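One idea mentioned above, sampling to mitigate PINN failure modes, can be sketched generically: draw collocation points preferentially where the current network's PDE residual is large, so the physics loss concentrates training effort where it is most violated. The residual function and all numbers below are placeholder assumptions, not the thesis's method:

```python
# Hedged sketch of residual-based collocation sampling for a PINN: candidate
# points are drawn with probability proportional to the current |PDE residual|.
import numpy as np

rng = np.random.default_rng(3)

def residual_fn(x):
    # Placeholder for the magnitude of the current PINN's PDE residual at x.
    return np.abs(np.sin(8 * x)) + 0.05

def sample_collocation(n_pool=10_000, n_pick=512):
    pool = rng.uniform(0.0, 1.0, size=n_pool)   # candidate points in the domain
    r = residual_fn(pool)
    p = r / r.sum()                             # residual-proportional probabilities
    return rng.choice(pool, size=n_pick, replace=False, p=p)

pts = sample_collocation()                      # feed these to the physics loss
```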
556

Model Integration in Data Mining: From Local to Global Decisions

Bella Sanjuán, Antonio 31 July 2012 (has links)
Machine learning is a research area that provides algorithms and techniques capable of learning automatically from past experience. These techniques are essential in the area of knowledge discovery in databases (KDD), whose main phase is typically known as data mining. The KDD process can be viewed as learning a model from past data (model generation) and applying that model to new data (model deployment). The model deployment phase is very important, because users and, especially, organizations make decisions based on the output of the models. In general, each model is learned independently, trying to obtain the best (local) result. However, when several models are used jointly, some of them may depend on one another (for example, the outputs of one model may be the inputs of another) and constraints appear. In this scenario, the best local decision for each problem treated individually may not give the best global result, or the result obtained may not be valid if it does not satisfy the problem's constraints. The area of customer relationship management (CRM) has given rise to real problems where data mining and (global) optimization must be used together. For example, product prescription problems deal with distinguishing or ranking the products to be offered to each customer (or, symmetrically, choosing the customers to whom a product should be offered). These areas (KDD, CRM) lack tools for a more complete view of such problems and a better integration of the models according to their interdependencies and the global and local constraints. The classical application of data mining to product prescription problems has usually… / Bella Sanjuán, A. (2012). Model Integration in Data Mining: From Local to Global Decisions [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/16964
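The local-versus-global point above has a compact illustration: offering each customer their individually best product can violate a shared constraint (e.g., stock), while a global assignment respects it. The probabilities, stock levels, and the use of an assignment solver below are illustrative assumptions, not the dissertation's formulation:

```python
# Hedged sketch: local per-customer argmax vs. a global assignment that
# maximizes total expected sales subject to per-product stock limits.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(4)
n_customers, n_products = 8, 3
prob = rng.uniform(size=(n_customers, n_products))   # P(customer buys product)
stock = np.array([2, 3, 3])                          # units available per product

# Local decision: every customer gets their individually best product
# (may exceed the stock of a popular product).
local = prob.argmax(axis=1)

# Global decision: duplicate each product column once per unit of stock and
# solve a one-offer-per-customer assignment (maximize => negate the matrix).
cols = np.repeat(np.arange(n_products), stock)
rows, picks = linear_sum_assignment(-prob[:, cols])
global_choice = cols[picks]

print("local offers per product :", np.bincount(local, minlength=n_products))
print("global offers per product:", np.bincount(global_choice, minlength=n_products))
```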
557

Study of the flow field through the wall of a Diesel particulate filter using Lattice Boltzmann Methods

García Galache, José Pedro 03 November 2017 (has links)
Air pollution is becoming an important problem in large metropolitan areas, and a large share of the pollutants is emitted by the vehicle fleet. At the European level, as in other economic areas, regulation is becoming increasingly restrictive; the Euro standards are the best example of this tendency. Especially important are the emissions of nitrogen oxides (NOx) and particulate matter (PM). Two different strategies exist to reduce the emission of pollutants. The first is to avoid their creation, typically by modifying the combustion process through different fuel-injection laws or by controlling the charge renewal. The second set of strategies focuses on eliminating the pollutants: NOx is reduced by means of catalysis and/or a reducing atmosphere, usually created by injection of urea, while particulate matter is eliminated using filters. This thesis focuses on the latter. Most strategies to reduce emissions penalise fuel consumption, and the particle filter is no exception. Installed in the exhaust duct, it restricts the passage of air, increasing the pressure along the whole exhaust line upstream of the filter and reducing engine performance. Optimising the filter is therefore an important task: its efficiency has to be good enough to comply with the emission regulations, while the pressure drop has to be as low as possible to preserve fuel consumption and performance. The objective of the thesis is to find the relation between the micro-structure and the macroscopic properties; with this knowledge, optimisation of the micro-structure becomes possible. The micro-structure of the filter mimics acicular mullite and is created by procedural generation using random parameters. The relation between the micro-structure and macroscopic properties such as porosity and permeability is studied in detail. The flow field is solved using LabMoTer, a software package developed during this thesis, whose formulation is based on Lattice Boltzmann Methods, a relatively new approach to simulating fluid dynamics. In addition, the Walberla framework, developed by Friedrich-Alexander University Erlangen-Nürnberg, is also used to solve the flow field. The second part of the thesis focuses on the particles immersed in the fluid. The properties of the particles are given as a function of the aerodynamic diameter, which is sufficient for macroscopic approximations. However, the discretization of the porous medium has the same order of magnitude as the particle size, so realistic particle geometry is necessary. Diesel particles are aggregates of spheres, and a simulation tool is developed to create these aggregates using ballistic collision; the results are analysed in detail. The second step is to characterise the aggregates' aerodynamic properties. Because the particle size is of the same order of magnitude as the separation between air molecules, the fluid cannot be approximated as a continuous medium, and a different approach is needed: Direct Simulation Monte Carlo (DSMC) is the appropriate tool, and a solver based on this formulation is developed. Unfortunately, boundary conditions for complex geometries could not be implemented in time. The thesis has been fruitful in several respects. A new model based on procedural generation has been developed to create a micro-structure that mimics acicular mullite. A new CFD solver based on Lattice Boltzmann Methods, LabMoTer, has been implemented and validated, and a technique to optimise the simulation setup is proposed. The ballistic agglomeration process is studied in detail thanks to a new simulator developed ad hoc for this task, and the results are analysed to find correlations between particle properties and their evolution in time, with uncertainty quantification used to model the dispersion of the data. Finally, a new Direct Simulation Monte Carlo solver has been developed and validated to compute rarefied flows. / García Galache, JP. (2017). Study of the flow field through the wall of a Diesel particulate filter using Lattice Boltzmann Methods [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90413
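A minimal lattice Boltzmann sketch conveys how such a solver relates micro-structure to permeability. The following is a hedged toy, not LabMoTer: a D2Q9 BGK scheme with a simple body force and full-way bounce-back on a random solid field, with permeability estimated from Darcy's law. Geometry, forcing scheme, and parameters are all assumptions:

```python
# Hedged D2Q9 BGK sketch: body-force-driven flow through a random porous slab,
# full-way bounce-back at solid nodes, permeability via Darcy's law k = nu*<u>/F.
import numpy as np

nx, ny, tau, force = 64, 64, 0.8, 1e-6
w = np.array([4/9] + [1/9]*4 + [1/36]*4)               # D2Q9 weights
cx = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])          # lattice velocities
cy = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])            # opposite directions

rng = np.random.default_rng(5)
solid = rng.random((nx, ny)) < 0.25                    # ~25% random solid fraction

f = np.ones((9, nx, ny)) * w[:, None, None]            # start at rest, rho = 1
for step in range(2000):
    rho = f.sum(axis=0)
    ux = (f * cx[:, None, None]).sum(axis=0) / rho
    uy = (f * cy[:, None, None]).sum(axis=0) / rho
    f_pre = f.copy()
    # BGK collision with a crude body force along x (sketch-level forcing).
    for i in range(9):
        cu = cx[i] * ux + cy[i] * uy
        feq = w[i] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))
        f[i] += -(f[i] - feq) / tau + 3 * w[i] * cx[i] * force
    # Full-way bounce-back: solid nodes reverse their pre-collision populations.
    f[:, solid] = f_pre[opp][:, solid]
    for i in range(9):                                 # periodic streaming
        f[i] = np.roll(f[i], (cx[i], cy[i]), axis=(0, 1))

nu = (tau - 0.5) / 3                                   # lattice viscosity
k = nu * np.mean(ux[~solid]) / force                   # Darcy permeability estimate
print("estimated permeability (lattice units):", k)
```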
558

Computational Framework for Uncertainty Quantification, Sensitivity Analysis and Experimental Design of Network-based Computer Simulation Models

Wu, Sichao 29 August 2017 (has links)
When capturing a real-world, networked system in a simulation model, features are usually omitted or represented by probability distributions. Verification and validation (V&V) of such models is an inherent and fundamental challenge. Central to V&V, but also to model analysis and prediction, are uncertainty quantification (UQ), sensitivity analysis (SA), and design of experiments (DOE). In addition, network-based computer simulation models, compared with models based on ordinary and partial differential equations (ODEs and PDEs), typically involve a significantly larger volume of more complex data. Efficient use of such models is challenging, since it requires a broad set of skills ranging from domain expertise to in-depth knowledge of modeling, programming, algorithmics, high-performance computing, statistical analysis, and optimization. On top of this, the need to support reproducible experiments necessitates complete data tracking and management. Finally, the lack of standardization of simulation model configuration formats presents an extra challenge when developing technology intended to work across models. While there are tools and frameworks that address parts of these challenges, to the best of our knowledge none of them accomplishes all of this in a model-independent and scientifically reproducible manner. In this dissertation, we present a computational framework called GENEUS that addresses these challenges. Specifically, it incorporates (i) a standardized model configuration format, (ii) a data-flow management system with digital library functions to help ensure scientific reproducibility, and (iii) a model-independent, expandable plugin-type library for efficiently conducting UQ/SA/DOE for network-based simulation models. This framework has been applied to systems ranging from fundamental graph dynamical systems (GDSs) to large-scale socio-technical simulation models, with a broad range of analyses such as UQ and parameter studies for various scenarios. Graph dynamical systems provide a theoretical framework for network-based simulation models and are studied theoretically in this dissertation, including a broad range of stability and sensitivity analyses offering insight into how GDSs respond to perturbations of their key components. This stability-focused, structure-to-function theory motivated the design and implementation of GENEUS. Rooted in the framework of GDSs, GENEUS provides modelers, experimentalists, and research groups access to a variety of UQ/SA/DOE methods with robust, tested implementations, without requiring detailed expertise in statistics, data management, and computing. Even for research teams having all these skills, GENEUS can significantly increase research productivity. / Ph. D. / Uncertainties are ubiquitous in computer simulation models, especially in network-based models whose underlying mechanisms are difficult to characterize explicitly through mathematical formalization. Quantifying uncertainties is challenging because of either a lack of knowledge or their inherently indeterminate nature. Models cannot include every detail of real systems, so verification and validation will remain a fundamental task in modeling. Many tools have been developed to support uncertainty quantification, sensitivity analysis, and experimental design; however, few of them are domain-independent or support the data management and complex simulation workflows of network-based simulation models. In this dissertation, we present a computational framework called GENEUS, which incorporates a multitude of functions including uncertain parameter specification, experimental design, model execution management, data access and registration, sensitivity analysis, surrogate modeling, and model calibration. This framework has been applied to systems ranging from fundamental graph dynamical systems (GDSs) to large-scale socio-technical simulation models with a broad range of analyses for various scenarios. GENEUS provides researchers access to uncertainty quantification, sensitivity analysis, and experimental design methods with robust, tested implementations without requiring detailed expertise in modeling, statistics, or computing. Even for groups having all these skills, GENEUS can help save time, guard against mistakes, and improve productivity.
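The UQ/SA/DOE loop that a framework like GENEUS automates for a black-box simulation can be sketched generically: design an experiment over the uncertain parameters, run the model in batch, and rank parameter sensitivities. The `run_model` stand-in, bounds, and the standardized-regression-coefficient measure below are assumptions for illustration, not GENEUS's API:

```python
# Hedged sketch of a DOE + sensitivity-analysis loop around a black-box model:
# Latin hypercube design, batch evaluation, and a cheap global sensitivity
# ranking via standardized regression coefficients.
import numpy as np
from scipy.stats import qmc

def run_model(theta):
    # Placeholder for a network-based simulation returning a scalar output.
    return theta[0] ** 2 + 0.5 * theta[1] + 0.1 * np.sin(theta[2])

lo = np.array([0.0, 0.0, 0.0])
hi = np.array([1.0, 2.0, 3.1416])

sampler = qmc.LatinHypercube(d=3, seed=6)
X = qmc.scale(sampler.random(n=256), lo, hi)     # design of experiments
y = np.array([run_model(x) for x in X])          # batch model runs

# Standardized regression coefficients: regress standardized output on
# standardized inputs and rank parameters by |coefficient|.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print("sensitivity ranking (most to least):", np.argsort(-np.abs(src)))
```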
559

Implementation of highly sensitive small extracellular vesicle (sEV) quantification method in the identification of novel sEV production modulators and the evaluation of sEV pharmacokinetics / 高感度定量法を利用した細胞外小胞の産生モジュレーターの探索と体内動態解析

Yamamoto, Aki 24 September 2021 (has links)
Kyoto University / New-system doctoral program / Doctor of Pharmaceutical Sciences / Kō No. 23473 / Yaku-haku No. 849 / 新制||薬||242 (University Library) / Department of Pharmaceutical Sciences, Graduate School of Pharmaceutical Sciences, Kyoto University / (Chief examiner) Prof. Yoshinobu Takakura; Prof. Fumiyoshi Yamashita; Prof. Masahiro Ono / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Pharmaceutical Sciences / Kyoto University / DFAM
560

Physics-Informed, Data-Driven Framework for Model-Form Uncertainty Estimation and Reduction in RANS Simulations

Wang, Jianxun 05 April 2017 (has links)
Computational fluid dynamics (CFD) has been widely used to simulate turbulent flows. Although the increased availability of computational resources has enabled high-fidelity simulations (e.g., large eddy simulation and direct numerical simulation) of turbulent flows, models based on the Reynolds-averaged Navier–Stokes (RANS) equations are still the dominant tools for industrial applications. However, the predictive capability of RANS models is limited by potential inaccuracies driven by hypotheses in the Reynolds stress closure. With the ever-increasing use of RANS simulations in mission-critical applications, the estimation and reduction of model-form uncertainties in RANS models have attracted attention in the turbulence modeling community. In this work, I focus on estimating uncertainties stemming from the RANS turbulence closure and calibrating discrepancies in the modeled Reynolds stresses to improve the predictive capability of RANS models. Both online and offline data are utilized to achieve this goal. The main contributions of this dissertation can be summarized as follows. First, a physics-based, data-driven Bayesian framework is developed for estimating and reducing model-form uncertainties in RANS simulations. An iterative ensemble Kalman method is employed to assimilate sparse online measurement data and empirical prior knowledge for a full-field inversion, and the merits of incorporating prior knowledge and physical constraints in calibrating RANS model discrepancies are demonstrated and discussed. Second, a random-matrix-theoretic framework is proposed for estimating model-form uncertainties in RANS simulations. The maximum entropy principle is employed to identify the probability distribution that satisfies the given constraints without introducing artificial information, and objective prior perturbations of RANS-predicted Reynolds stresses in physical projections are provided based on comparisons between the physics-based and random-matrix-theoretic approaches. Finally, a physics-informed machine learning framework towards predictive RANS turbulence modeling is proposed. The functional forms of the model discrepancies with respect to mean-flow features are extracted from an offline database of closely related flows using machine learning algorithms, and the RANS-modeled Reynolds stresses of prediction flows can be significantly improved by the trained discrepancy function, an important step towards predictive turbulence modeling. / Ph. D. / Turbulence modeling is a critical component in computational fluid dynamics (CFD) simulations of industrial flows. Despite the significant growth in computational resources over the past two decades, time-resolved high-fidelity simulations (e.g., large eddy simulation and direct numerical simulation) are not feasible for engineering applications, so the small-scale turbulent velocity fluctuations have to be represented by time-averaged models. Turbulence models based on the Reynolds-averaged Navier–Stokes (RANS) equations describe the averaged flow quantities of turbulent flows and are expected to remain the dominant tools for industrial applications in the coming decades. However, for many practical flows, the predictive accuracy of RANS models is largely limited by the model-form uncertainties stemming from potential inaccuracies in the Reynolds stress closure.
As RANS models are used in the design and safety evaluation of many mission-critical systems, such as airplanes and nuclear power plants, properly estimating and reducing these model uncertainties is of significant importance. In this work, I focus on estimating uncertainties stemming from the RANS turbulence closure and calibrating discrepancies in the modeled Reynolds stresses to improve the predictive capability of RANS models. Several data-driven approaches based on state-of-the-art data assimilation and machine learning algorithms are proposed to achieve this goal by leveraging online and offline high-fidelity data. Numerical simulations of several canonical flows are used to demonstrate the merits of the proposed approaches. Moreover, the proposed methods also have implications for many fields in which the governing equations are well understood but the model uncertainties come from unresolved physical processes.
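A single ensemble Kalman analysis step, the building block of the iterative method named in the first contribution, can be sketched compactly. Everything below is a toy stand-in: a smooth synthetic state replaces the RANS fields, and a linear operator `H` replaces the sparse-observation map; the dissertation's iterative method additionally re-runs the forward model between updates:

```python
# Hedged sketch of one ensemble Kalman analysis step with sparse observations:
# the ensemble is nudged toward the data through sample covariances.
import numpy as np

rng = np.random.default_rng(7)
n_state, n_obs, n_ens = 100, 5, 50

H = np.zeros((n_obs, n_state))                   # observe 5 sparse locations
H[np.arange(n_obs), [3, 21, 47, 68, 90]] = 1.0
R = 0.01 * np.eye(n_obs)                         # observation-error covariance

s = np.linspace(0, 3 * np.pi, n_state)
x_true = np.sin(s)
y_obs = H @ x_true + rng.multivariate_normal(np.zeros(n_obs), R)

# Smooth prior ensemble: random combinations of a few sine modes (columns = members).
basis = np.sin(np.outer(s, np.arange(1, 6)))
X = basis @ rng.normal(size=(5, n_ens))

A = X - X.mean(axis=1, keepdims=True)            # state anomalies
HX = H @ X
HA = HX - HX.mean(axis=1, keepdims=True)         # observation-space anomalies
Pxy = A @ HA.T / (n_ens - 1)
Pyy = HA @ HA.T / (n_ens - 1) + R
K = Pxy @ np.linalg.inv(Pyy)                     # Kalman gain from sample covariances
Yp = y_obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
X = X + K @ (Yp - HX)                            # analysis with perturbed observations

print("posterior mean at observed points:", H @ X.mean(axis=1))
```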
