241

Multistability in microbeams: Numerical simulations and experiments in capacitive switches and resonant atomic force microscopy systems

Devin M Kalafut (11013732) 23 July 2021
Microelectromechanical systems (MEMS) depend on mechanical deformation to sense their environment, enhance electrical circuitry, or store data. Nonlinear forces arising from multiphysics phenomena at the micro- and nanoscale -- van der Waals forces, electrostatic fields, dielectric charging, capillary forces, surface roughness, asperity interactions -- lead to challenging problems for the analysis, simulation, and measurement of deforming device elements. Herein, a foundation for the study of mechanical deformation is provided through computational and experimental studies of MEMS microcantilever capacitive switches. Numerical techniques are built to capture deformation equilibria expediently. A compact analytical model is developed from the principal multiphysics governing operation. Experimental measurements support the phenomena predicted by the analytical model, and finite element method (FEM) simulations confirm device-specific performance. Altogether, the static multistability and quasistatic performance of the electrostatically actuated switches are confirmed across analysis, simulation, and experimentation.

The nonlinear multiphysics forces present in the devices are critical to the switching behavior exploited for novel applications, but they are also a culprit in a common failure mode: when the attractive forces overcome the restorative and repulsive forces, two elements stick together. Quasistatic operation suffices for switching between multistable states under normal conditions, but not under such stiction failure. For many system configurations, dynamic methods are the only option for stiction release. But how and when is release achieved? To investigate the fundamental mechanism of dynamic release, an atomic force microscopy (AFM) system -- a microcantilever with a motion-controlled base and a single-asperity probe tip, measured and actuated via lasers -- is configured to replicate elements of a stiction-failed MEMS device. Through this surrogate, observable dynamic signatures of microcantilever deflection indicate the onset of detachment between the probe and a sample.
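The coexistence of equilibria described above can be illustrated with a lumped one-degree-of-freedom sketch: a linear spring restoring force balanced against a parallel-plate electrostatic attraction. The model, parameter values, and function names below are illustrative assumptions, not the thesis's device model; a bisection scan over the gap locates every equilibrium where the net force changes sign.

```python
# Lumped one-degree-of-freedom sketch: linear spring restoring force vs.
# parallel-plate electrostatic attraction across a gap g. All parameter
# values are illustrative, not taken from the thesis.

def net_force(x, k=1.0, g=1.0, beta=0.1):
    """Net tip force at deflection x toward the electrode; beta lumps
    0.5 * eps * A * V^2 of the parallel-plate model."""
    return -k * x + beta / (g - x) ** 2

def bisect(f, a, b, tol=1e-12):
    """Plain bisection; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
        if b - a < tol:
            break
    return 0.5 * (a + b)

def equilibria(f, g=1.0, n=2000):
    """Scan the gap for sign changes of the net force; each sign-change
    bracket holds one equilibrium (the scan stops short of the singular
    point x = g where the electrodes touch)."""
    xs = [i * g / n for i in range(1, n)]
    return [bisect(f, x0, x1)
            for x0, x1 in zip(xs, xs[1:]) if f(x0) * f(x1) < 0]

roots = equilibria(net_force)
print(roots)  # below pull-in: a stable equilibrium near 0, an unstable one deeper in the gap
```

Below the pull-in voltage the scan finds two equilibria (the bistability discussed above); raising beta past pull-in removes the sign change, leaving only contact.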
242

Langevinized Ensemble Kalman Filter for Large-Scale Dynamic Systems

Peiyi Zhang (11166777) 26 July 2021
<p>The Ensemble Kalman filter (EnKF) has achieved great success in data assimilation in the atmospheric and oceanic sciences, but its failure to converge to the right filtering distribution precludes its use for uncertainty quantification. Other existing methods, such as the particle filter or sequential importance sampler, do not scale well with the dimension of the system and the sample size of the dataset. In this dissertation, we address these difficulties in a coherent way.</p><p>In the first part of the dissertation, we reformulate the EnKF under the framework of Langevin dynamics, which leads to a new particle filtering algorithm, the so-called Langevinized EnKF (LEnKF). The LEnKF algorithm inherits the forecast-analysis procedure from the EnKF and the use of mini-batch data from stochastic gradient Langevin-type algorithms, which make it scalable with respect to both dimension and sample size. We prove that the LEnKF converges to the right filtering distribution in Wasserstein distance under the big-data scenario in which the dynamic system consists of a large number of stages and has a large number of samples observed at each stage, so it can be used for uncertainty quantification. We also reformulate the Bayesian inverse problem as a dynamic state estimation problem based on the techniques of subsampling and the Langevin diffusion process. We illustrate the performance of the LEnKF on a variety of examples, including the Lorenz-96 model, high-dimensional variable selection, Bayesian deep learning, and Long Short-Term Memory (LSTM) network learning with dynamic data.</p><p>In the second part of the dissertation, we focus on two extensions of the LEnKF algorithm. Like the EnKF, the LEnKF algorithm was developed for Gaussian dynamic systems containing no unknown parameters. We propose the so-called stochastic approximation LEnKF (SA-LEnKF) for simultaneously estimating the states and parameters of dynamic systems, where the parameters are estimated on the fly, based on the state variables simulated by the LEnKF, under the framework of stochastic approximation. Under mild conditions, we prove the consistency of the resulting parameter estimator and the ergodicity of the SA-LEnKF. For non-Gaussian dynamic systems, we extend the LEnKF algorithm (extended LEnKF) by introducing a latent Gaussian measurement variable into the dynamic system. Both extensions inherit the scalability of the LEnKF algorithm with respect to dimension and sample size. Numerical results indicate that they outperform existing methods in both state/parameter estimation and uncertainty quantification.</p>
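As a point of reference for the forecast-analysis procedure the LEnKF inherits, the sketch below implements one analysis step of the classical stochastic (perturbed-observation) EnKF on a toy linear-Gaussian problem. It is not the Langevinized algorithm, and all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_analysis(X, y, H, R):
    """One stochastic (perturbed-observation) EnKF analysis step.
    X: ensemble matrix (d x n), y: observation (m,),
    H: observation operator (m x d), R: observation covariance (m x m)."""
    n = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    P = A @ A.T / (n - 1)                            # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n).T
    return X + K @ (Y - H @ X)                       # perturbed-observation update

# Toy linear-Gaussian check: prior N(0, I) on a 2-d state, one noisy
# observation of the first coordinate; the analysis mean moves toward y.
X = rng.normal(size=(2, 500))
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])
Xa = enkf_analysis(X, np.array([1.0]), H, R)
print(Xa.mean(axis=1))  # first coordinate near the conjugate posterior mean 0.8
```

The Langevinized variant replaces this single batch update with mini-batch, Langevin-noise-injected updates, which is what makes it scalable in sample size.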
243

[en] ANALYSIS OF THE COMPUTATIONAL COST OF THE MONTE CARLO METHOD: A STOCHASTIC APPROACH APPLIED TO A VIBRATION PROBLEM WITH STICK-SLIP / [pt] ANÁLISE DO CUSTO COMPUTACIONAL DO MÉTODO DE MONTE CARLO: UMA ABORDAGEM ESTOCÁSTICA APLICADA A UM PROBLEMA DE VIBRAÇÕES COM STICK-SLIP

MARIANA GOMES DIAS DOS SANTOS 20 June 2023
[en] One of the objectives of this thesis is to analyze the computational cost of the Monte Carlo method applied to a toy problem concerning the dynamics of a mechanical system with uncertainties in the friction force. The system is composed of an oscillator placed over a moving belt, with dry friction between the two elements in contact. Due to a discontinuity in the friction force, the resulting dynamics can be divided into two alternating phases, called stick and slip. In this study, a parameter of the dynamic friction force is modeled as a random variable. Uncertainty propagation is analyzed by applying the Monte Carlo method, considering three different strategies to compute approximations to the initial value problems that model the system's dynamics: NV) numerical approximations computed with the Runge-Kutta method of 4th and 5th orders, with a variable integration time-step; NF) numerical approximations computed with the Runge-Kutta method of 4th order, with a fixed integration time-step; AN) an analytical approximation obtained with the method of multiple scales. In the NV and NF strategies, a numerical approximation was calculated for each parameter value, whereas in the AN strategy a single analytical approximation was calculated and then evaluated at the different parameter values considered. The run time and the disk space consumed are among the random variables of interest associated with the computational cost of the Monte Carlo method.
Due to uncertainty propagation, the system response is a stochastic process given by a random sequence of stick and slip phases. This sequence can be characterized by the following random variables: the transition instants between the stick and slip phases, their durations, and the number of phases. To study the random process and the variables related to the computational cost, statistical models, normalized histograms, and scatterplots were built. Afterwards, a joint analysis was performed to study the dependence between the variables of the random process and the computational cost. The construction of these analyses is not a simple task, however, due to the dimension of the problem and the impossibility of visualizing the joint distributions of random vectors of three or more dimensions.
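A minimal version of the NF-style experiment can be sketched as follows: a fixed-step RK4 integration of an oscillator dragged by a moving belt, repeated over random draws of the kinetic friction coefficient while the wall-clock run time of each realization is recorded. The friction here is regularized to sidestep the discontinuity, unlike the discontinuous treatment in the thesis, and all parameter values are made up for illustration.

```python
import random, time

def friction(v_rel, mu_k, normal=1.0, eps=1e-3):
    """Regularized Coulomb friction: the sign function is smoothed over a
    small velocity scale eps to sidestep the discontinuity (the thesis
    treats the discontinuous stick-slip dynamics directly)."""
    return -mu_k * normal * v_rel / (abs(v_rel) + eps)

def simulate(mu_k, v_belt=0.1, k=1.0, m=1.0, dt=1e-3, t_end=20.0):
    """Fixed-step RK4 integration (NF-style strategy) of an oscillator on a
    moving belt; returns the number of stick phases detected from the
    relative-velocity signal."""
    def f(state):
        x, v = state
        return (v, (-k * x + friction(v - v_belt, mu_k)) / m)
    x, v = 0.0, 0.0
    sticking, phases = False, 0
    for _ in range(int(t_end / dt)):
        k1 = f((x, v))
        k2 = f((x + 0.5 * dt * k1[0], v + 0.5 * dt * k1[1]))
        k3 = f((x + 0.5 * dt * k2[0], v + 0.5 * dt * k2[1]))
        k4 = f((x + dt * k3[0], v + dt * k3[1]))
        x += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        v += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
        now_stick = abs(v - v_belt) < 1e-2       # crude stick detector
        if now_stick and not sticking:
            phases += 1
        sticking = now_stick
    return phases

# Monte Carlo over a random kinetic friction coefficient, recording the
# number of stick phases and the run time of each realization.
random.seed(1)
results = []
for _ in range(20):
    mu_k = random.uniform(0.2, 0.6)
    t0 = time.perf_counter()
    results.append((mu_k, simulate(mu_k), time.perf_counter() - t0))
print(len(results))
```

Collecting the per-realization run times and phase counts is exactly the kind of paired data from which the joint histograms and scatterplots described above would be built.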
244

Efficient Sequential Sampling for Neural Network-based Surrogate Modeling

Pavankumar Channabasa Koratikere (15353788) 27 April 2023
<p>Gaussian Process Regression (GPR) is a widely used surrogate model in efficient global optimization (EGO) due to its capability to provide uncertainty estimates along with its predictions. The cost of creating a GPR model for large data sets, however, is high. Neural network (NN) models, on the other hand, scale better than GPR as the number of samples increases; unfortunately, uncertainty estimates for NN predictions are not readily available. In this work, a scalable algorithm is developed for EGO using NN-based prediction and uncertainty (EGONN). Initially, two different NNs are created using two different data sets: the first NN models the output based on the input values in the first data set, while the second NN models the prediction error of the first NN using the second data set. The next infill point is added to the first data set based on criteria such as expected improvement or prediction uncertainty. EGONN is demonstrated on the optimization of the Forrester function and a constrained Branin function and is compared with EGO. In both cases the convergence criterion is the maximum number of infill points, and the algorithm reaches the optimum within the given budget. EGONN is then extended to handle constraints explicitly and is applied to aerodynamic shape optimization of the RAE 2822 airfoil in transonic viscous flow at a free-stream Mach number of 0.734 and a Reynolds number of 6.5 million. The results are compared with those of gradient-based optimization (GBO) using adjoints. The optimum shape obtained from EGONN is comparable to the shape obtained from GBO and eliminates the shock; the drag coefficient is reduced from 200 drag counts to 114, close to the 110 drag counts obtained from GBO. EGONN is also extended to handle uncertainty quantification (uqEGONN), using prediction uncertainty as the infill criterion. Its convergence criterion is the relative change of summary statistics, such as the mean and standard deviation, of an uncertain quantity. uqEGONN is tested on the Ishigami function with an initial sample size of 100; the algorithm terminates after 70 infill points, and the statistics obtained (using only 170 function evaluations) are close to the values obtained from directly evaluating the function one million times. uqEGONN is then demonstrated by quantifying the uncertainty in airfoil performance due to geometric variations: the algorithm terminates within 100 computational fluid dynamics (CFD) analyses, and the statistics obtained are close to those from 1000 direct CFD-based evaluations.</p>
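The expected-improvement infill criterion mentioned above has a closed form given a predicted mean and an uncertainty estimate; in EGONN those would come from the two networks, though the sketch below is just the standard formula evaluated on illustrative inputs.

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Closed-form expected improvement for minimization, given a surrogate
    mean mu and uncertainty sigma at a candidate point. In EGONN, mu would
    come from the first network and sigma from the error network; the
    formula itself is the standard EI expression."""
    if sigma <= 0.0:
        return 0.0
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # N(0,1) density
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # N(0,1) CDF
    return (f_best - mu) * cdf + sigma * pdf

# A candidate predicted worse than the incumbent can still earn infill
# priority if its uncertainty is high; with sigma = 0 there is no improvement.
print(expected_improvement(mu=1.2, sigma=0.5, f_best=1.0))  # positive
print(expected_improvement(mu=1.2, sigma=0.0, f_best=1.0))  # 0.0
```

Maximizing this quantity over candidate inputs selects the next infill point, balancing exploitation (low mu) against exploration (high sigma).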
245

MULTI-LEVEL DEEP OPERATOR LEARNING WITH APPLICATIONS TO DISTRIBUTIONAL SHIFT, UNCERTAINTY QUANTIFICATION AND MULTI-FIDELITY LEARNING

Rohan Moreshwar Dekate (18515469) 07 May 2024
<p dir="ltr">Neural operator learning is emerging as a prominent technique in scientific machine learning for modeling complex nonlinear systems with multi-physics and multi-scale applications. A common drawback of such operators is that they are data-hungry: the results depend strongly on the quality and quantity of the training data provided to the models, and obtaining high-quality data in sufficient quantity can be computationally prohibitive. Faster surrogate models are therefore required that can be learned from datasets of variable fidelity and can also quantify uncertainty. In this work, we propose a Multi-Level Stacked Deep Operator Network (MLSDON) which can learn from datasets of different fidelity and is not dependent on the input function. Through various experiments, we demonstrate that the MLSDON can approximate the high-fidelity solution operator with better accuracy than a Vanilla DeepONet when sufficient high-fidelity data is unavailable. We also extend MLSDON to build robust confidence intervals by making conformalized predictions. This technique guarantees trajectory coverage of the predictions irrespective of the input distribution. Various numerical experiments demonstrate the applicability of MLSDON to multi-fidelity, multi-scale, and multi-physics problems.</p>
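The conformalized predictions mentioned above can be illustrated with the basic split-conformal recipe: take roughly the ceil((n+1)(1-alpha))-th smallest held-out absolute residual as an interval half-width. The thesis's trajectory-coverage construction for operator networks is more elaborate; this sketch only shows the coverage mechanism on synthetic scalar data.

```python
import math, random

def conformal_halfwidth(residuals, alpha=0.1):
    """Split-conformal interval half-width: the ceil((n+1)*(1-alpha))-th
    smallest held-out absolute residual. Added around any point prediction,
    it gives ~(1-alpha) marginal coverage with no distributional assumptions."""
    q = sorted(abs(r) for r in residuals)
    n = len(q)
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    return q[k]

# Synthetic illustration: a surrogate that always predicts 0 for targets
# drawn from N(0, 1), so calibration residuals are the targets themselves.
random.seed(0)
calib = [random.gauss(0.0, 1.0) for _ in range(500)]
width = conformal_halfwidth(calib, alpha=0.1)

test_ys = [random.gauss(0.0, 1.0) for _ in range(2000)]
coverage = sum(abs(y) <= width for y in test_ys) / len(test_ys)
print(round(coverage, 3))  # close to the nominal 0.9
```

The coverage guarantee is marginal and distribution-free, which is why conformal wrappers are attractive for operator networks whose inputs may shift.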
246

Adaptation strategies of dam safety management to new climate change scenarios informed by risk indicators

Fluixá Sanmartín, Javier 21 December 2020
Thesis by compendium / [EN] Large dams as well as protective dikes and levees are critical infrastructures whose failure has major economic and social consequences. Risk assessment approaches and decision-making strategies have traditionally assumed the stationarity of climatic conditions, including the persistence of historical patterns of natural variability and the likelihood of extreme events. However, climate change has a major impact on the world's water systems and is endangering dam safety, leading to potentially damaging impacts in terms of economic, social and environmental costs. Owners and operators of dams must adapt their mid- and long-term management and adaptation strategies to new climate scenarios. This thesis proposes a comprehensive approach to incorporate climate change impacts into dam safety management and decision-making support. The goal is to design adaptation strategies that incorporate the non-stationarity of future risks as well as the uncertainties associated with new climate scenarios. Based on an interdisciplinary review of the state-of-the-art research on its potential effects, the global impact of climate change on dam safety is structured using risk models. This allows a time-dependent approach to be established that considers the potential evolution of risk with time. Consequently, a new indicator is defined to support the quantitative assessment of the long-term efficiency of risk reduction measures. Additionally, in order to integrate the uncertainty of future scenarios, the approach is enhanced with a robust decision-making strategy that helps establish the consensus sequence of measures to be implemented for climate change adaptation.
Despite the difficulty of assigning probabilities to specific events, such a framework allows for a systematic and objective analysis, considerably reducing subjectivity. The methodology is applied to a real case study of a Spanish dam subjected to the effects of climate change. The analysis focuses on hydrological scenarios, where floods are the main load to which the dam is subjected. The results provide valuable new information, with respect to the previously existing analyses of the dam, regarding the evolution of future risks and how to cope with them. In general, risks are expected to increase with time and, as a result, new adaptation measures that are not justifiable in the present situation are recommended. This is the first documented application of a comprehensive analysis of climate change impacts on dam failure risk, and it serves as a reference benchmark for the definition of long-term adaptation strategies and the evaluation of their efficiency. / Fluixá Sanmartín, J. (2020). Adaptation strategies of dam safety management to new climate change scenarios informed by risk indicators [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/157634 / Compendio
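The long-term efficiency indicator is defined within the thesis's risk-model framework and is not reproduced here; as a loose illustration of the time-dependent idea, the sketch below discounts the yearly risk reduction achieved by a corrective measure over a planning horizon, using made-up numbers.

```python
def discounted_risk_reduction(risk_base, risk_measure, rate=0.03):
    """Discounted cumulative risk reduction of a corrective measure over a
    planning horizon (yearly risk values, e.g. expected annualized cost).
    Dividing this by the measure's cost would give a simple efficiency
    ratio for ranking measures; the thesis's actual indicator is richer
    than this sketch."""
    return sum((rb - rm) / (1.0 + rate) ** t
               for t, (rb, rm) in enumerate(zip(risk_base, risk_measure)))

# Made-up numbers: baseline risk grows under climate change, while a
# corrective measure holds it lower and flatter over a 50-year horizon.
base = [1.0 + 0.05 * t for t in range(50)]
with_measure = [0.6 + 0.01 * t for t in range(50)]
print(round(discounted_risk_reduction(base, with_measure), 1))
```

Because the baseline risk grows with time, most of the measure's benefit accrues late in the horizon, which is exactly why a time-dependent indicator can justify measures that a present-day snapshot would not.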
247

Experimental and Modelling Study of Interfacial Phenomena in Annular Flow with Uncertainty Quantification

Rivera Durán, Yago 03 July 2023
Thesis by compendium / [EN] Annular flow is one of the most important two-phase flow regimes and is characterized by a very small liquid fraction, known as the liquid film, travelling close to the wall, and a gas core. Annular flow can be observed during the operation of nuclear plants, in different transient scenarios, and in many other industrial applications. The liquid film is decisive in many of them, as it has a high mass, momentum and energy transfer capacity. Many of these properties are due to the film exhibiting nonlinear interfacial behavior with the generation of interfacial waves. In addition, in certain facilities where the liquid film acts as a coolant, it is essential to know its behavior both for optimization and safety reasons. In order to study the fundamentals of the liquid film, a series of experiments have been carried out in a facility designed to generate air-water annular flow in a vertical circular pipe. In this facility, the time evolution of the liquid film thickness has been measured under different conditions and sub-regimes, such as free-fall flow and upward and downward cocurrent flow. The measurement system was designed and built for this application and consists of 3-electrode conductance probes mounted flush to the wall and arranged at different distances from the entrance of the test section. Both the electronics and the calibration device were specifically designed to work with these conductance probes. The facility has two different pipe diameters, allowing the effect of diameter to be compared and increasing the range of measurements available in databases. One of the main characteristics of the liquid film is its interfacial waves. The two primary types of waves that can be distinguished are the disturbance waves, which are large coherent waves, and the ripple waves, small, non-coherent waves that are constantly generated before disappearing when absorbed by other waves.
The main variables of the liquid film analyzed in the experimental setup are the mean film thickness, the height and frequency of the disturbance waves, the height of the ripple waves and the height of the unperturbed liquid. Different experimental studies have been carried out to add value to the measurements. For downward annular flow, the development of the film through the different measuring zones has been studied, and the different test section diameters have been compared. In addition, multiple correlations have been proposed, and the results have been compared with similar studies by other authors. For upward annular flow, the effect of surface tension on the liquid film variables has also been studied by adding small amounts of 1-butanol. The modelling of annular flow by numerical analysis is also a subject of this thesis. Computational Fluid Dynamics (CFD) codes are computational tools that allow the analysis of fluid behavior. They have undergone rapid evolution over the last few years thanks to technological advances, and the results obtained from their correct use are very promising. However, multiphase flow remains challenging to model, and it is necessary to contrast the predictions of CFD codes with experimental measurements. Therefore, the phenomenology of developed annular flow has also been studied using the ANSYS CFX code. There is a significant knowledge gap in the uncertainty quantification of CFD codes. Some methodologies are available, although many are in the early stages or have not yet been fully explored. All applications of CFD codes in nuclear safety require extensive knowledge of the uncertainty of the predictions, so developing these methodologies is crucial. This thesis presents the fundamentals of Polynomial Chaos Expansion (PCE) as a method to calculate the uncertainty of simulation results by propagation. 
The PCE by Gauss-Hermite quadrature has also been applied to the simulations of two experiments: the experimental setup of this thesis, and an international benchmark. / I would like to acknowledge the support provided by the Ministerio de Economía, Industria y Competitividad and the Agencia Nacional de Investigación under the FPI grant BES-2017-080031, which provided funding for my research. / Rivera Durán, Y. (2023). Experimental and Modelling Study of Interfacial Phenomena in Annular Flow with Uncertainty Quantification [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/194606 / Compendio
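As a hedged illustration of the propagation method named in this abstract, the sketch below implements a non-intrusive PCE for a single Gaussian input, with the projection integrals evaluated by Gauss-Hermite quadrature. The `model` callable is a hypothetical stand-in for a simulation run, not the thesis' ANSYS CFX setup.

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermgauss
from numpy.polynomial.hermite_e import hermeval

def pce_gauss_hermite(model, mu, sigma, order=4, nquad=8):
    """Non-intrusive PCE for a scalar model with one Gaussian input
    X ~ N(mu, sigma^2): project the response onto probabilists' Hermite
    polynomials He_k via Gauss-Hermite quadrature and return the
    PCE mean and variance."""
    x, w = hermgauss(nquad)        # physicists' nodes/weights, weight e^{-x^2}
    xi = sqrt(2.0) * x             # standard-normal sample points
    y = model(mu + sigma * xi)     # model evaluated at the mapped nodes
    coeffs = []
    for k in range(order + 1):
        e_k = np.zeros(order + 1)
        e_k[k] = 1.0
        he_k = hermeval(xi, e_k)   # He_k at the quadrature points
        # c_k = E[Y He_k(xi)] / E[He_k^2], with E[He_k^2] = k!
        coeffs.append(np.sum(w * y * he_k) / sqrt(pi) / factorial(k))
    mean = coeffs[0]
    var = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
    return mean, var
```

For a polynomial test model such as `lambda x: x**2` with a standard-normal input, the quadrature is exact and the routine recovers the analytical mean of 1 and variance of 2.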
248

Feed-and-bleed transient analysis of OSU APEX facility using the modern Code Scaling, Applicability, and Uncertainty method

Hallee, Brian Todd 05 March 2013 (has links)
The nuclear industry has long relied upon bounding parametric analyses in predicting the safety margins of reactor designs undergoing design-basis accidents. These methods have been known to return highly conservative results, limiting the operating conditions of the reactor. The Best-Estimate Plus Uncertainty (BEPU) method, using a modernized version of the Code Scaling, Applicability, and Uncertainty (CSAU) methodology, has been applied to more accurately predict the safety margins of the Oregon State University Advanced Plant Experiment (APEX) facility experiencing a Loss-of-Feedwater Accident (LOFA). The statistical advantages of the Bayesian paradigm of probability were utilized to incorporate prior knowledge when determining the analysis required to justify the safety margins. RELAP5 Mod 3.3 was used to predict the thermal-hydraulics of a primary Feed-and-Bleed response to the accident, with assumptions appropriate to the lumped-parameter calculation approach. A novel coupling of thermal-hydraulic and statistical software was accomplished using the Symbolic Nuclear Analysis Package (SNAP). Uncertainty in Peak Cladding Temperature (PCT) was calculated at the 95/95 probability/confidence levels under a series of four separate sensitivity studies. / Graduation date: 2013
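The abstract does not spell out how the 95/95 tolerance statement is obtained; a common non-parametric route in BEPU analyses is Wilks' order-statistics formula, so the following is only an illustrative sketch of how the required number of code runs would be chosen under that approach.

```python
import math

def wilks_sample_size(p=0.95, conf=0.95, m=1):
    """Smallest number of random code runs n such that the m-th largest
    computed value (e.g. of PCT) is a one-sided upper tolerance bound
    covering the p-quantile of the output with confidence `conf`."""
    n = m
    while True:
        # P(at least m of the n samples exceed the p-quantile)
        beta = 1.0 - sum(math.comb(n, j) * (1.0 - p) ** j * p ** (n - j)
                         for j in range(m))
        if beta >= conf:
            return n
        n += 1

print(wilks_sample_size())       # first-order 95/95 bound: 59 runs
print(wilks_sample_size(m=2))    # second-order 95/95 bound: 93 runs
```

The first-order case reduces to the familiar condition 1 - 0.95^n >= 0.95, giving the classical 59-run criterion.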
249

ENSURING FATIGUE PERFORMANCE VIA LOCATION-SPECIFIC LIFING IN AEROSPACE COMPONENTS MADE OF TITANIUM ALLOYS AND NICKEL-BASE SUPERALLOYS

Ritwik Bandyopadhyay (8741097) 21 April 2020 (has links)
<div>In this thesis, the role of location-specific microstructural features in the fatigue performance of safety-critical aerospace components made of Nickel (Ni)-base superalloys and linear friction welded (LFW) Titanium (Ti) alloys has been studied using crystal plasticity finite element (CPFE) simulations, energy dispersive X-ray diffraction (EDD), backscatter electron (BSE) images and digital image correlation (DIC).</div><div><br></div><div>In order to develop a microstructure-sensitive fatigue life prediction framework, first, it is essential to build trust in the quantitative predictions from CPFE analysis by quantifying uncertainties in the mechanical response from CPFE simulations. Second, it is necessary to construct a unified fatigue life prediction metric applicable to multiple material systems, together with a strategy for calibrating the unified fatigue life model parameter that accounts for uncertainties originating from CPFE simulations and inherent in the experimental calibration dataset. To achieve the first task, a genetic algorithm framework is used to obtain the statistical distributions of the crystal plasticity (CP) parameters. Subsequently, these distributions are used in a first-order, second-moment method to compute the mean and the standard deviation of the stress along the loading direction (σ_load), the plastic strain accumulation (PSA), and the stored plastic strain energy density (SPSED). The results suggest that a ~10% variability in σ_load and a 20%-25% variability in the PSA and SPSED values may exist due to the uncertainty in the CP parameter estimation. Further, the contribution of a specific CP parameter to the overall uncertainty is path-dependent and varies based on the load step under consideration. To accomplish the second goal, in this thesis, it is postulated that a critical value of the SPSED is associated with fatigue failure in metals and is independent of the applied load. 
Unlike the classical approach of estimating the (homogenized) SPSED as the cumulative area enclosed within the macroscopic stress-strain hysteresis loops, CPFE simulations are used to compute the (local) SPSED at each material point within polycrystalline aggregates of 718Plus, an additively manufactured Ni-base superalloy. A Bayesian inference method is utilized to calibrate the critical SPSED, which is subsequently used to predict fatigue lives at nine different strain ranges, including strain ratios of 0.05 and -1, using nine statistically equivalent microstructures. For each strain range, the predicted lives from all simulated microstructures follow a log-normal distribution; for a given strain ratio, the predicted scatter increases with decreasing strain amplitude and is indicative of the scatter observed in the fatigue experiments. Further, the log-normal mean lives at each strain range are in good agreement with the experimental evidence. Since the critical SPSED captures the experimental data with reasonable accuracy across various loading regimes, it is hypothesized to be a material property and sufficient to predict the fatigue life.</div><div><br></div><div>Inclusions are unavoidable in Ni-base superalloys and lead to two competing failure modes, namely inclusion- and matrix-driven failures. Each factor related to the inclusion that may contribute to crack initiation is isolated and systematically investigated within RR1000, a powder metallurgy produced Ni-base superalloy, using CPFE simulations. Specifically, the roles of the inclusion stiffness, loading regime, loading direction, a debonded region in the inclusion-matrix interface, microstructural variability around the inclusion, inclusion size, dissimilar coefficient of thermal expansion (CTE), temperature, residual stress, and distance of the inclusion from the free surface are studied in the emergence of the two failure modes. 
The CPFE analysis indicates that the emergence of a failure mode is an outcome of the complex interaction between the aforementioned factors. However, a higher probability of failure due to inclusions is observed with increasing temperature if the CTE of the inclusion is higher than that of the matrix, and vice versa. No overall correlation between inclusion size and propensity for damage is found for inclusions on the order of the mean grain size. Further, the CPFE simulations indicate that surface inclusions are more damaging than interior inclusions for similar surrounding microstructures. These observations are utilized to instantiate twenty realistic statistically equivalent microstructures of RR1000 – ten containing inclusions and the remaining ten without. Using CPFE simulations with these microstructures at four different temperatures and three strain ranges for each temperature, the critical SPSED is calibrated as a function of temperature for RR1000. The results suggest that the critical SPSED decreases almost linearly with increasing temperature and is appropriate to predict the realistic emergence of the competing failure modes as a function of applied strain range and temperature.</div><div><br></div><div>The LFW process leads to the development of significant residual stress in the components, and the role of residual stress in the fatigue performance of materials cannot be overstated. Hence, to ensure the fatigue performance of LFW Ti alloys, residual strains in LFW joints of similar (Ti-6Al-4V welded to Ti-6Al-4V, or Ti64-Ti64) and dissimilar (Ti-6Al-4V welded to Ti-5Al-5V-5Mo-3Cr, or Ti64-Ti5553) Ti alloys have been characterized using EDD. For each type of LFW, one sample is chosen in the as-welded (AW) condition and another sample is selected after a post-weld heat treatment (HT). 
Residual strains have been studied separately in the alpha and beta phases of the material, and five components (three axial and two shear) have been reported in each case. The in-plane axial components of the residual strains show a smooth and symmetric behavior about the weld center for the Ti64-Ti64 LFW samples in the AW condition, whereas these components in the Ti64-Ti5553 LFW sample show a symmetric trend with jump discontinuities. Such jump discontinuities, observed in both the AW and HT conditions of the Ti64-Ti5553 samples, suggest different strain-free lattice parameters in the weld region and the parent material. In contrast, the results from the Ti64-Ti64 LFW samples in both AW and HT conditions suggest nearly uniform strain-free lattice parameters throughout the weld region. The observed trends in the in-plane axial residual strain components have been rationalized by the corresponding microstructural changes and variations across the weld region via BSE images. </div><div><br></div><div>In the literature, fatigue crack initiation in LFW Ti-6Al-4V specimens does not usually take place in the seemingly weakest location, i.e., the weld region. From the BSE images, the Ti-6Al-4V microstructure at the distance from the weld center typically associated with crack initiation in the literature is identified in both AW and HT samples and found to be identical: equiaxed alpha grains with beta phases present at the alpha grain boundaries and triple points. Hence, subsequent fatigue performance in LFW Ti-6Al-4V is analyzed considering the equiaxed alpha microstructure.</div><div><br></div><div>The LFW components made of Ti-6Al-4V are often designed for high cycle fatigue performance under high mean stress or high R ratios. In engineering practice, mean stress corrections are employed to assess the fatigue performance of a material or structure, though this is problematic for Ti-6Al-4V, which experiences anomalous behavior at high R ratios. 
To address this problem, high cycle fatigue analyses are performed on two Ti-6Al-4V specimens with equiaxed alpha microstructures at a high R ratio. In one specimen, two micro-textured regions (MTRs), having their c-axes near-parallel and perpendicular to the loading direction, are identified. High-resolution DIC is performed in the MTRs to study grain-level strain localization. In the other specimen, DIC is performed on a larger area, and crack initiation is observed in a random-textured region. To accompany the experiments, CPFE simulations are performed to investigate the mechanistic aspects of crack initiation and the relative activity of different families of slip systems as a function of R ratio. A critical soft-hard-soft grain combination is associated with crack initiation, indicating a possible dwell effect at high R ratios, which could be attributed to the high applied mean stress and the high creep sensitivity of Ti-6Al-4V at room temperature. Further, the simulations indicate more heterogeneous deformation, specifically the activation of multiple families of slip systems with fewer grains being plasticized, at higher R ratios. Such behavior is exacerbated within MTRs, especially the MTR composed of grains with their c-axes near-parallel to the loading direction. These features of micro-plasticity make the high R ratio regime more vulnerable to fatigue damage accumulation and explain the anomalous mean stress behavior experienced by Ti-6Al-4V at high R ratios.</div><div><br></div>
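The first-order, second-moment propagation mentioned in this abstract admits a compact sketch: sensitivities of a scalar response to each uncertain crystal-plasticity parameter are taken by central finite differences and combined with the parameter standard deviations. The `response` callable below is a hypothetical stand-in for a CPFE quantity such as σ_load, not the thesis' actual model.

```python
import numpy as np

def fosm_propagate(response, mean, std, h=1e-6):
    """First-order, second-moment estimate of the mean and standard
    deviation of a scalar response under independent parameter
    uncertainties, using central finite-difference sensitivities."""
    mean = np.asarray(mean, dtype=float)
    std = np.asarray(std, dtype=float)
    mu_y = response(mean)
    var_y = 0.0
    for i in range(mean.size):
        step = np.zeros_like(mean)
        step[i] = h * max(1.0, abs(mean[i]))
        # dY/dtheta_i by central difference
        grad = (response(mean + step) - response(mean - step)) / (2.0 * step[i])
        var_y += (grad * std[i]) ** 2
    return mu_y, float(np.sqrt(var_y))
```

For a linear response the central differences are exact, so a model like `lambda t: 2*t[0] + 3*t[1]` recovers the analytical propagated standard deviation sqrt((2·σ₀)² + (3·σ₁)²).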
