221

Data-driven Uncertainty Analysis in Neural Networks with Applications to Manufacturing Process Monitoring

Bin Zhang (11073474) 12 August 2021 (has links)
Artificial neural networks, including deep neural networks, play a central role in data-driven science due to their superior learning capacity and adaptability to different tasks and data structures. However, although quantitative uncertainty analysis is essential for training and deploying reliable data-driven models, the uncertainties in neural networks are often overlooked or underestimated in many studies, mainly due to the lack of a high-fidelity, computationally efficient uncertainty quantification approach. In this work, a novel uncertainty analysis scheme is developed. The Gaussian mixture model is used to characterize probability distributions of uncertainties in arbitrary forms, which yields higher fidelity than presumed distribution forms, such as the Gaussian, when the underlying uncertainty is multimodal, and is more compact and efficient than large-scale Monte Carlo sampling. The fidelity of the Gaussian mixture is refined through adaptive scheduling of the width of each Gaussian component, based on active assessment of the factors that could deteriorate the quality of the uncertainty representation, such as the nonlinearity of the activation functions in the neural network.

Following this idea, an adaptive Gaussian mixture scheme for nonlinear uncertainty propagation is proposed to effectively propagate the probability distributions of uncertainties through the layers of deep neural networks or through time in recurrent neural networks. An adaptive Gaussian mixture filter (AGMF) is then designed based on this uncertainty propagation scheme. By approximating the dynamics of a highly nonlinear system with a feedforward neural network, the adaptive Gaussian mixture refinement is applied at both the state prediction and Bayesian update steps to closely track the distribution of unmeasurable states. As a result, this new AGMF exhibits state-of-the-art accuracy at a reasonable computational cost on highly nonlinear state estimation problems subject to large uncertainties. Next, a probabilistic neural network with Gaussian-mixture-distributed parameters (GM-PNN) is developed. The adaptive Gaussian mixture scheme is extended to refine the intermediate layer states and ensure the fidelity of both the linear and nonlinear transformations within the network, so that the predictive distribution of the output target can be inferred directly, without sampling or approximate integration. The derivatives of the loss function with respect to all probabilistic parameters in the network are derived explicitly, so the GM-PNN can be trained with any backpropagation method to address practical data-driven problems subject to uncertainties.

The GM-PNN is applied to two data-driven condition-monitoring schemes for manufacturing processes. For tool wear monitoring in the turning process, a systematic feature normalization and selection scheme is proposed to engineer optimal feature sets extracted from sensor signals. Predictive tool wear models are established using two methods: a type-2 fuzzy network for interval-type uncertainty quantification, and the GM-PNN for probabilistic uncertainty quantification. For porosity monitoring in laser additive manufacturing processes, a convolutional neural network (CNN) is used to learn patterns directly from melt-pool images and predict porosity. Classical CNN models that do not account for uncertainty are compared with CNN models in which the GM-PNN is embedded as an uncertainty quantification module. For both monitoring schemes, experimental results show that the GM-PNN not only achieves higher prediction accuracy for process conditions than the classical models but also provides more effective uncertainty quantification to facilitate process-level decision-making in the manufacturing environment.

Based on the developed uncertainty analysis methods and their demonstrated success in practical applications, several directions for future study are suggested. Closed-loop control systems may be synthesized by combining the AGMF with data-driven controller design. The AGMF can also be extended from state estimation to parameter estimation in data-driven models. In addition, the GM-PNN scheme may be expanded to directly build more complicated models, such as convolutional or recurrent neural networks.
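To make the core idea concrete, here is a minimal illustrative sketch (not the author's implementation) of propagating a one-dimensional Gaussian mixture through a tanh activation by linearizing each component about its mean. The adaptive refinement step described in the abstract, which would split components too wide for the linearization to hold, is omitted here:

```python
import numpy as np

def propagate_gm_tanh(weights, means, variances):
    """Propagate a 1-D Gaussian mixture through a tanh activation by
    linearizing each component about its mean (first-order moment
    matching). An adaptive scheme would additionally split components
    whose width makes this linearization inaccurate."""
    out_means = np.tanh(means)
    slopes = 1.0 - out_means ** 2          # d/dx tanh(x) = 1 - tanh(x)^2
    out_vars = slopes ** 2 * variances     # first-order variance transport
    return weights, out_means, out_vars

# A bimodal input uncertainty: two equally weighted components.
w, mu, var = np.array([0.5, 0.5]), np.array([-1.5, 1.5]), np.array([0.2, 0.2])
print(propagate_gm_tanh(w, mu, var))
```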
222

DEVELOPMENT OF IMAGE-BASED DENSITY DIAGNOSTICS WITH BACKGROUND-ORIENTED SCHLIEREN AND APPLICATION TO PLASMA INDUCED FLOW

Lalit Rajendran (8960978) 07 May 2021 (has links)
There is growing interest in the use of nanosecond surface dielectric barrier discharge (ns-SDBD) actuators for high-speed (supersonic/hypersonic) flow control. A plasma discharge is created using a nanosecond-duration pulse of several kilovolts and leads to a rapid heat release and a complex three-dimensional flow field. Past work has been limited to qualitative visualizations such as schlieren imaging, and detailed measurements of the induced flow are required to develop a mechanistic model of actuator performance.

Background-Oriented Schlieren (BOS) is a quantitative variant of schlieren imaging that measures density gradients in a flow field by tracking the apparent distortion of a target dot pattern. The distortion is estimated by cross-correlation, and the density gradients can be integrated spatially to obtain the density field. Owing to its simple setup and ease of use, BOS has been applied widely and is becoming the preferred density measurement technique. However, several unaddressed limitations leave room for improvement, especially for application to complex flow fields such as those induced by plasma actuators.

This thesis presents a series of developments aimed at improving various aspects of the BOS measurement chain to provide an overall improvement in accuracy, precision, spatial resolution, and dynamic range. In brief, the contributions are:

1) a synthetic image generation methodology to perform error and uncertainty analysis for PIV/BOS experiments;
2) an uncertainty quantification methodology to report local, instantaneous, a posteriori uncertainty bounds on the density field by propagating displacement uncertainties through the measurement chain;
3) an improved displacement uncertainty estimation method using a meta-uncertainty framework, whereby uncertainties estimated by different methods are combined based on their sensitivities to image perturbations;
4) a weighted-least-squares density integration methodology to reduce the sensitivity of the density estimation procedure to measurement noise (see the sketch after this list);
5) a tracking-based processing algorithm to improve the accuracy, precision, and spatial resolution of the measurements;
6) a theoretical model of the measurement process that demonstrates the effect of density gradients on position uncertainty, together with an uncertainty quantification methodology for tracking-based BOS.

These improvements to BOS are then applied to a detailed characterization of the flow induced by a filamentary surface plasma discharge, in order to develop a reduced-order model for the length and time scales of the induced flow. The measurements show that the induced flow consists of a hot gas kernel, with vorticity concentrated in a vortex ring, that expands and cools over time. A reduced-order model is developed to describe the induced flow, and applying the model to the experimental data reveals that the vortex ring's properties govern the time scale associated with the kernel dynamics. The model predictions for the length and time scales of the actuator-induced flow can guide the choice of filament spacing and pulse frequencies for practical multi-pulse ns-SDBD configurations.
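Contribution 4) can be illustrated with a small sketch. Assuming a uniform grid and uniform weights (the thesis's method weights each equation, e.g. by local displacement uncertainty, which this toy version omits), the density field can be recovered from measured gradients by stacking finite-difference equations and solving them in a least-squares sense:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def integrate_density(gx, gy, dx, dy, rho_ref=1.2):
    """Recover a density field from measured gradient fields gx, gy by
    solving the finite-difference equations in a least-squares sense;
    one row per gradient equation plus one anchor row."""
    ny, nx = gx.shape
    idx = lambda i, j: i * nx + j
    n_rows = ny * (nx - 1) + (ny - 1) * nx + 1
    A = lil_matrix((n_rows, nx * ny))
    b = np.zeros(n_rows)
    r = 0
    for i in range(ny):                       # x-gradient equations
        for j in range(nx - 1):
            A[r, idx(i, j + 1)] = 1.0
            A[r, idx(i, j)] = -1.0
            b[r] = 0.5 * (gx[i, j] + gx[i, j + 1]) * dx
            r += 1
    for i in range(ny - 1):                   # y-gradient equations
        for j in range(nx):
            A[r, idx(i + 1, j)] = 1.0
            A[r, idx(i, j)] = -1.0
            b[r] = 0.5 * (gy[i, j] + gy[i + 1, j]) * dy
            r += 1
    A[r, idx(0, 0)] = 1.0                     # pin one point to a reference
    b[r] = rho_ref
    return lsqr(A.tocsr(), b)[0].reshape(ny, nx)

# Toy check: a linear density ramp rho = 1.2 + 0.1*x.
ny, nx, dx, dy = 8, 10, 0.01, 0.01
rho = integrate_density(np.full((ny, nx), 0.1), np.zeros((ny, nx)), dx, dy)
print(rho[0, :3])                             # ~ [1.2, 1.201, 1.202]
```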
223

Towards robust prediction of the dynamics of the Antarctic ice sheet: Uncertainty quantification of sea-level rise projections and grounding-line retreat with essential ice-sheet models

Bulthuis, Kevin 29 January 2020 (has links) (PDF)
Recent progress in the modelling of the dynamics of the Antarctic ice sheet has led to a paradigm shift in the perception of the Antarctic ice sheet in a changing climate. New understanding of its dynamics now suggests that the response of the Antarctic ice sheet to climate change will be driven by instability mechanisms in marine sectors. As concerns have grown about the response of the ice sheet in a warming climate, interest has grown simultaneously in predicting its evolution with quantified uncertainty and in clarifying the role played by uncertainties in predicting its response to climate change. Essential ice-sheet models have recently emerged as computationally efficient ice-sheet models for large-scale and long-term simulations of ice-sheet dynamics and for integration into Earth system models. Essential ice-sheet models, such as the fast Elementary Thermomechanical Ice Sheet (f.ETISh) model developed at the Université Libre de Bruxelles, achieve computational tractability by representing essential mechanisms and feedbacks of ice-sheet thermodynamics through reduced-order models and appropriate parameterisations. Given their computational tractability, essential ice-sheet models combined with methods from the field of uncertainty quantification provide opportunities for more comprehensive analyses of the impact of uncertainty in ice-sheet models and for expanding the range of uncertainty quantification methods employed in ice-sheet modelling. The main contributions of this thesis are twofold. On the one hand, we contribute a new assessment and new understanding of the impact of uncertainties on the multicentennial response of the Antarctic ice sheet. On the other hand, we contribute new methods for uncertainty quantification of geometrical characteristics of the spatial response of physics-based computational models, motivated in glaciology by the goal of predicting, with quantified uncertainty, the retreat of the grounded region of the Antarctic ice sheet. For the first contribution, we carry out new probabilistic projections of the multicentennial response of the Antarctic ice sheet to climate change using the f.ETISh model. We apply methods from the field of uncertainty quantification to the f.ETISh model to investigate the influence of several sources of uncertainty, namely atmospheric forcing, basal sliding, the grounding-line flux parameterisation, calving, sub-shelf melting, ice-shelf rheology, and bedrock relaxation, on the continental response of the Antarctic ice sheet. We provide new probabilistic projections of the contribution of the Antarctic ice sheet to future sea-level rise; we carry out stochastic sensitivity analysis to determine the most influential sources of uncertainty; and we provide new probabilistic projections of the retreat of the grounded portion of the ice sheet. For the second contribution, we address uncertainty quantification of geometrical characteristics of the spatial response of physics-based computational models within the probabilistic context of random set theory. We contribute to the development of the concept of confidence sets that either contain, or are contained within, an excursion set of the spatial response with a specified probability level. We propose a new multifidelity quantile-based method for the estimation of such confidence sets and demonstrate its performance on an application concerned with predicting, with quantified uncertainty, the retreat of the Antarctic ice sheet. In addition to these two main contributions, we contribute to two further pieces of research, pertaining to the computation of Sobol indices for global sensitivity analysis in small-data settings using the recently introduced probabilistic learning on manifolds (PLoM), and to a multi-model comparison of projections of the contribution of the Antarctic ice sheet to global mean sea-level rise. / Doctorat en Sciences
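The confidence-set idea admits a compact single-fidelity illustration (the thesis develops a multifidelity quantile-based estimator; the nested candidate family and toy data below are simplifying assumptions). Given an ensemble of simulated spatial fields, an inner confidence set is one contained in the excursion set with at least the target probability:

```python
import numpy as np

def inner_confidence_set(samples, threshold, alpha=0.05):
    """Return a spatial set contained in the excursion set
    {response >= threshold} with empirical probability >= 1 - alpha,
    using a nested family of candidate sets indexed by the pointwise
    exceedance probability."""
    p = (samples >= threshold).mean(axis=0)     # pointwise exceedance prob.
    excursions = samples >= threshold           # per-member excursion sets
    for u in np.linspace(0.0, 1.0, 201):
        candidate = p >= u
        # Fraction of ensemble members whose excursion set contains it:
        contained = np.mean([(candidate <= e).all() for e in excursions])
        if contained >= 1.0 - alpha:
            return candidate
    return np.zeros_like(p, dtype=bool)

# Toy ensemble of 1-D "fields" with a randomly located bump:
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 121)
fields = np.stack([np.exp(-(x - rng.normal(0, 0.4)) ** 2) for _ in range(500)])
cs = inner_confidence_set(fields, threshold=0.5)
print("inner confidence set size:", cs.sum(), "grid points")
```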
224

Shape optimization of lightweight structures under blast loading

Israel, Joshua James 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Structural optimization of vehicle components for blast mitigation seeks to counteract the damaging effects of an impulsive threat on occupants and critical components. The strong and urgent need for improved protection from blast events has made blast-mitigating component design an active research subject. Standard up-armoring of ground vehicles can significantly increase the mass of the vehicle. Without concurrent modifications to the power train, suspension, braking, and steering components, up-armored vehicles suffer from degraded stability and mobility. For these reasons, there is a critical need for effective methods to generate lightweight components for blast mitigation. The overall objective of this research is to advance structural design methods for the optimization of lightweight blast-mitigating systems. This thesis investigates the automated design of isotropic plates to mitigate the effects of blast loading, addressing the design of blast-protective structures from a design optimization perspective. The general design problem is stated as finding the optimum shape of a protective shell of minimum mass satisfying deformation and envelope constraints. The research was conducted through three primary projects. The first investigated the design of lightweight structures under deterministic loading conditions, subject to the same objective function and constraints, in order to compare feasible design methodologies by expanding the problem dimension toward the limits of performance. The second applied recently developed uncertainty quantification methods, the univariate dimensional reduction method and the performance moment integration method, to structures under stochastic loading conditions. The third applied these uncertainty quantification methods to problems of design optimization under uncertainty, in order to develop a methodology for the generation of lightweight, reliable structures. This research has resulted in a computational framework, incorporating uncertainty quantification methods and various optimization techniques, that can be used to generate lightweight structures for blast mitigation under uncertainty. Applied to practical structural design problems, the results demonstrate that the methodologies provide a practical tool to aid the design engineer in generating design concepts for blast-mitigating structures. These methods can be used to advance research into the generation of reliable structures under the uncertain loading conditions inherent to blast events.
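As a hedged illustration of one of the methods named above, a minimal univariate dimensional reduction (UDR) estimate of response moments for independent Gaussian inputs might look like the following (the response function and input statistics are arbitrary placeholders, not taken from the thesis):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def udr_moments(g, mu, sigma, n_quad=5):
    """Estimate mean and variance of g(X) for independent Gaussian
    inputs X_i ~ N(mu_i, sigma_i^2) via univariate dimensional
    reduction: g is evaluated only along each coordinate axis through
    the mean point, using 1-D Gauss-Hermite quadrature."""
    nodes, weights = hermegauss(n_quad)        # probabilists' Hermite
    weights = weights / np.sqrt(2.0 * np.pi)   # normalize to N(0, 1)
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    n = len(mu)
    g0 = g(mu)
    m1, m2 = np.zeros(n), np.zeros(n)
    for i in range(n):
        vals = []
        for z in nodes:
            x = mu.copy()
            x[i] = mu[i] + sigma[i] * z        # vary only coordinate i
            vals.append(g(x))
        vals = np.asarray(vals)
        m1[i] = np.sum(weights * vals)         # axis-wise first moment
        m2[i] = np.sum(weights * vals ** 2)    # axis-wise second moment
    mean = m1.sum() - (n - 1) * g0             # mean of additive surrogate
    var = np.sum(m2 - m1 ** 2)                 # variance of the surrogate
    return mean, var

# Placeholder response function and input statistics:
g = lambda x: x[0] ** 2 + 3.0 * x[1]
print(udr_moments(g, mu=[0.0, 0.0], sigma=[1.0, 1.0]))  # ~ (1.0, 11.0)
```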
225

Probabilistic Ensemble-based Streamflow Forecasting Framework

Darbandsari, Pedram January 2021 (has links)
Streamflow forecasting is a fundamental component of various water resources management systems, ranging from flood control and mitigation to long-term planning of irrigation and hydropower systems. In the context of floods, a probabilistic forecasting system is required for proper and effective decision-making. The primary goal of this research is therefore the development of an advanced ensemble-based streamflow forecasting framework that better quantifies predictive uncertainty and generates enhanced probabilistic forecasts. This research started by comprehensively evaluating the performance of various lumped conceptual models in data-poor watersheds and comparing various Bayesian Model Averaging (BMA) modifications for probabilistic streamflow simulation. Then, using the concept of BMA, two novel probabilistic post-processing approaches were developed to enhance streamflow forecasting performance. The combination of entropy theory and the BMA method leads to an entropy-based Bayesian Model Averaging (En-BMA) approach for enhanced probabilistic streamflow and precipitation forecasting. Also, the integration of the Hydrologic Uncertainty Processor (HUP) and BMA methods is proposed for probabilistic post-processing of multi-model streamflow forecasts. Results indicated that the MACHBV and GR4J models are highly competent in simulating hydrological processes within data-scarce watersheds; however, the presence of lower-skill hydrologic models is still beneficial for ensemble-based streamflow forecasting. Comprehensive verification of the BMA approach for streamflow prediction identified the merits of implementing some previously recommended modifications and showed the importance of possessing a mutually exclusive and collectively exhaustive ensemble. By targeting the remaining limitations of the BMA approach, the proposed En-BMA method can improve probabilistic streamflow forecasting, especially under high-flow conditions. The proposed HUP-BMA approach takes advantage of both the HUP and BMA methods to better quantify hydrologic uncertainty. Moreover, the applicability of the modified En-BMA as a post-processing approach for precipitation forecasting that is more robust than BMA has been demonstrated. / Thesis / Doctor of Philosophy (PhD) / Possessing a reliable streamflow forecasting framework is of special importance in various fields of operational water resources management, non-structural flood mitigation in particular. Accurate and reliable streamflow forecasts lead to the best possible in-advance flood-control decisions, which can significantly reduce the consequent loss of lives and property. The main objective of this research is to develop an enhanced ensemble-based probabilistic streamflow forecasting approach through proper quantification of predictive uncertainty using an ensemble of streamflow forecasts. The key contributions are: (1) implementing multiple diverse forecasts, with full coverage of future possibilities, in the Bayesian ensemble-based forecasting method to produce more accurate and reliable forecasts; and (2) developing an ensemble-based Bayesian post-processing approach that enhances hydrologic uncertainty quantification by taking advantage of multiple forecasts and initial flow observations. The findings of this study are expected to benefit streamflow forecasting, flood control and mitigation, and water resources management and planning.
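For readers unfamiliar with BMA post-processing, a minimal sketch of the classical EM fit for BMA weights and a shared Gaussian predictive variance follows. This is a simplification: operational schemes, including the variants studied in the thesis, typically bias-correct members first and may use per-member or non-Gaussian variances. The toy data are invented for illustration:

```python
import numpy as np
from scipy.stats import norm

def bma_em(forecasts, obs, n_iter=200):
    """Fit BMA weights and a shared Gaussian predictive variance for an
    ensemble of point forecasts (shape (K, T)) against observations
    (shape (T,)) using the classical EM iteration."""
    K, T = forecasts.shape
    w = np.full(K, 1.0 / K)                       # initial equal weights
    s2 = np.var(obs - forecasts.mean(axis=0))     # initial variance guess
    for _ in range(n_iter):
        dens = norm.pdf(obs, loc=forecasts, scale=np.sqrt(s2))  # (K, T)
        z = w[:, None] * dens
        z /= z.sum(axis=0, keepdims=True)         # E-step: responsibilities
        w = z.mean(axis=1)                        # M-step: weights
        s2 = np.sum(z * (obs - forecasts) ** 2) / T
    return w, s2

# Toy data: three biased/noisy "models" of a synthetic truth.
rng = np.random.default_rng(0)
truth = 10.0 + 3.0 * np.sin(np.linspace(0, 6, 200))
obs = truth + rng.normal(0, 0.3, truth.size)
fc = np.stack([truth + rng.normal(b, s, truth.size)
               for b, s in [(0.0, 0.5), (0.5, 1.0), (-1.0, 2.0)]])
w, s2 = bma_em(fc, obs)
print("weights:", w, "sigma:", np.sqrt(s2))
```

The BMA predictive density at any time is then the weight-mixture of the member densities, which is what yields a probabilistic rather than point forecast.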
226

Numerical Simulation of Short Fibre Reinforced Composites

Springer, Rolf 09 November 2023 (has links)
Lightweight structures have become increasingly important in recent years. One special class of such structures are short fibre reinforced composites, produced by injection moulding. To avoid expensive experiments for testing the mechanical behaviour of these composites, proper material models are needed. The main difficulty is the stochastic nature of the fibre orientation. This thesis considers the simulation of such materials in a linear thermoelastic setting. This means the material is described by its heat conduction tensor κ(p), its thermal expansion tensor T(p), and its stiffness tensor C(p). Due to the production process, the internal fibre orientation p has to be understood as a random variable. As a consequence, the previously mentioned material quantities also become random. The classical approach is to average these quantities and solve the linear thermoelastic deformation problem with the averaged expressions. This thesis shows how to incorporate this approach in a time- and memory-efficient manner into existing finite element software, highlighting several implementation aspects of the underlying software that enable these efficiency improvements. Numerical results are shown for both the classical material simulation and the time-efficient improvement of the software. Furthermore, the classical approach is extended within this thesis to the simulation of thermal stresses by exploiting the stochastic nature of the heat conduction. This is done by expanding it into a series with respect to the underlying stochastic quantities, applying known results from uncertainty quantification, and developing the temperature in a Taylor series about a suitably chosen expansion point. This series is then incorporated into the computation of the thermal stresses. The advantage of this approach is shown in numerical experiments.
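The classical averaging step can be illustrated for the heat-conduction tensor. Assuming the standard transversely isotropic form κ(p) = k⊥ I + (k∥ − k⊥) p pᵀ (the conductivity values below are placeholders, not material data from the thesis), the average over sampled fibre directions reduces to the second-order orientation tensor:

```python
import numpy as np

def averaged_conductivity(p_samples, k_par=5.0, k_perp=0.5):
    """Orientation-average a transversely isotropic heat-conduction
    tensor kappa(p) = k_perp*I + (k_par - k_perp) * p p^T over sampled
    fibre directions p (rows normalized to unit vectors)."""
    P = p_samples / np.linalg.norm(p_samples, axis=1, keepdims=True)
    A2 = np.einsum('ni,nj->ij', P, P) / len(P)  # 2nd-order orientation tensor
    return k_perp * np.eye(3) + (k_par - k_perp) * A2

rng = np.random.default_rng(0)
p = rng.normal(size=(10_000, 3))     # isotropic orientations, for illustration
print(averaged_conductivity(p))      # ~ (k_par + 2*k_perp)/3 on the diagonal
```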
227

Mean square solutions of random linear models and computation of their probability density function

Jornet Sanz, Marc 05 March 2020 (has links)
This thesis concerns the analysis of differential equations with uncertain input parameters, in the form of random variables or stochastic processes with any type of probability distribution. In modeling, the input coefficients are set from experimental data, which often involve uncertainties from measurement errors. Moreover, the behavior of the physical phenomenon under study does not follow strict deterministic laws. It is thus more realistic to consider mathematical models with randomness in their formulation. The solution, considered in the sample-path or the mean square sense, is a smooth stochastic process, whose uncertainty has to be quantified. Uncertainty quantification is usually performed by computing the main statistics (expectation and variance) and, if possible, the probability density function. In this dissertation, we study random linear models, based on ordinary differential equations with and without delay and on partial differential equations. The linear structure of the models makes it possible to seek certain probabilistic solutions and even approximate their probability density functions, which is a difficult goal in general. A very important part of the dissertation is devoted to random second-order linear differential equations, where the coefficients of the equation are stochastic processes and the initial conditions are random variables. The study of this class of differential equations in the random setting is mainly motivated by their important role in Mathematical Physics. We start by solving the randomized Legendre differential equation in the mean square sense, which allows the approximation of the expectation and the variance of the stochastic solution. The methodology is extended to general random second-order linear differential equations with analytic (expressible as random power series) coefficients, by means of the so-called Fröbenius method. A comparative case study is performed with spectral methods based on polynomial chaos expansions. On the other hand, the Fröbenius method together with Monte Carlo simulation are used to approximate the probability density function of the solution. Several variance reduction methods based on quadrature rules and multilevel strategies are proposed to speed up the Monte Carlo procedure. The last part on random second-order linear differential equations is devoted to a random diffusion-reaction Poisson-type problem, where the probability density function is approximated using a finite difference numerical scheme. The thesis also studies random ordinary differential equations with discrete constant delay. We study the linear autonomous case, when the coefficient of the non-delay component and the parameter of the delay term are both random variables while the initial condition is a stochastic process. It is proved that the deterministic solution constructed with the method of steps that involves the delayed exponential function is a probabilistic solution in the Lebesgue sense. Finally, the last chapter is devoted to the linear advection partial differential equation, subject to stochastic velocity field and initial condition. We solve the equation in the mean square sense and provide new expressions for the probability density function of the solution, even in the non-Gaussian velocity case.
/ This work has been supported by the Spanish Ministerio de Economía y Competitividad grant MTM2017-89664-P. I acknowledge the doctorate scholarship granted by Programa de Ayudas de Investigación y Desarrollo (PAID), Universitat Politècnica de València. / Jornet Sanz, M. (2020). Mean square solutions of random linear models and computation of their probability density function [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/138394
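The Monte Carlo route to the probability density function described above can be sketched on a toy first-order problem (the thesis treats second-order equations via the Fröbenius series and adds variance-reduction and multilevel strategies; the input distributions below are placeholders):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Sample the random inputs, solve the (here trivially solvable) linear
# model along each sample path, and estimate the PDF of the solution at
# a fixed time by kernel density estimation.
rng = np.random.default_rng(1)
n = 100_000
a = rng.normal(-1.0, 0.2, size=n)     # random coefficient in x' = a*x
x0 = rng.uniform(0.9, 1.1, size=n)    # random initial condition
t = 2.0
x_t = x0 * np.exp(a * t)              # exact sample-path solution x(t)

print("mean:", x_t.mean(), "variance:", x_t.var())
pdf = gaussian_kde(x_t)               # approximate density of x(t)
print("estimated pdf at the mean:", pdf(x_t.mean()))
```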
228

Uncertainty Quantification and Optimization Under Uncertainty Using Surrogate Models

Boopathy, Komahan 05 June 2014 (has links)
No description available.
229

Informing Industry End-Users on the Credibility of Model Predictions for Design Decisions

Jakob T Hartl (13145352) 25 July 2022 (has links)
Many industrial organizations invest heavily in modeling and simulation (M&S) to support the design process. The primary business motivation for M&S is as a cheaper and faster alternative for obtaining information towards a better understanding of system behavior or to help with decision making. However, M&S predictions are known to be inexact because models and simulations are mathematical approximations of reality. To ensure that models are applicable for their intended use, organizations must collect evidence that the M&S is credible. Verification, validation, and uncertainty quantification (VVUQ) are the established methods for collecting this evidence. Structured frameworks for building credibility in M&S through VVUQ methods exist in the scientific literature, but these frameworks and methods are generally not well developed, nor well implemented, in industrial environments. The core motivation of this work is to help make existing VVUQ frameworks more suitable for industry.

As part of this objective, this work proposes a new credibility assessment that turns VVUQ results into an intuitive, numerical decision-making metric. This credibility assessment, called the Credibility Index, identifies the important aspects of credibility, extracts the relevant VVUQ results, and converts the results into an overall Credibility Index score (CRED). This CRED score is unique to each specific prediction scenario and serves as an easy-to-digest measure of credibility. The Credibility Index builds upon widely accepted definitions of credibility, well-established VVUQ frameworks, and decision theory.

The Credibility Index has been applied to several prediction scenarios for two publicly available benchmark problems and one Rolls-Royce funded subsystem case; all examples relate to the aerodynamic design of turbine-engine compressors. The results from these studies show how the Credibility Index serves as a decision-making metric, supplements traditional M&S outputs, and guides VVUQ efforts. A product feedback study involving model end-users in industry compared the Credibility Index to three other established credibility assessments; the study provides evidence that CRED consistently captures all key aspects of information quality when informing end-users on the credibility of model predictions. Due to the industry partnership, this research already has multiple avenues of practical impact, including implementation of the structured VVUQ and credibility framework in an industrial toolkit and workflow.
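The general shape of such a metric can be sketched as follows. This is a hypothetical illustration only: the actual CRED aggregation, its credibility aspects, and its weighting are defined in the thesis, and the aspect names and numbers below are invented placeholders:

```python
# Hypothetical sketch: turn per-aspect VVUQ maturity scores (0-4) into a
# single weighted score in [0, 1]. Aspect names, scores, and weights are
# placeholders, not the thesis's actual CRED definition.
VVUQ_SCORES = {
    "code_verification": 3,
    "solution_verification": 2,
    "validation": 2,
    "uncertainty_quantification": 1,
}
WEIGHTS = {  # relative importance for one specific prediction scenario
    "code_verification": 0.20,
    "solution_verification": 0.20,
    "validation": 0.35,
    "uncertainty_quantification": 0.25,
}

cred = sum(WEIGHTS[k] * VVUQ_SCORES[k] / 4 for k in VVUQ_SCORES)
print(f"CRED = {cred:.2f}")  # 0 (no evidence) .. 1 (fully credible)
```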
230

Parametric Optimal Design Of Uncertain Dynamical Systems

Hays, Joseph T. 02 September 2011 (has links)
This research effort develops a comprehensive computational framework to support the parametric optimal design of uncertain dynamical systems. Uncertainty comes from various sources, such as system parameters, initial conditions, sensor and actuator noise, and external forcing. Treatment of uncertainty in design is of paramount practical importance because all real-life systems are affected by it; not accounting for uncertainty may result in poor robustness, sub-optimal performance, and higher manufacturing costs. Contemporary methods for the quantification of uncertainty in dynamical systems are computationally intensive, which has so far made a robust design optimization methodology prohibitive. Some existing algorithms address uncertainty in sensors and actuators during optimal design; however, a comprehensive design framework that can treat all kinds of uncertainty, with diverse distribution characteristics, in a unified way is currently unavailable. The computational framework uses the Generalized Polynomial Chaos methodology to quantify the effects of various sources of uncertainty found in dynamical systems; a least-squares collocation method is used to solve the corresponding uncertain differential equations. This technique is significantly faster computationally than traditional sampling methods and makes the construction of a parametric optimal design framework for uncertain systems feasible. The novel framework allows uncertainty to be treated directly in the parametric optimal design process. Specifically, the following design problems are addressed: motion planning of fully-actuated and under-actuated systems; multi-objective robust design optimization; and optimal uncertainty apportionment concurrent with robust design optimization. The framework advances the state of the art and enables engineers to produce more robust and optimally performing designs at an optimal manufacturing cost. / Ph. D.
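A minimal sketch of generalized polynomial chaos with least-squares collocation, on a scalar decay equation with one uniform random parameter (a toy stand-in for the dynamical systems treated in the thesis; the chaos order, collocation grid, and parameter range are assumptions):

```python
import numpy as np
from numpy.polynomial.legendre import legval
from scipy.integrate import solve_ivp

# Expand x(t; k) in Legendre polynomials of xi ~ Uniform(-1, 1), with
# the uncertain decay rate k = 1 + 0.3*xi, and fit the chaos
# coefficients at every output time by least-squares collocation.
P, n_coll = 5, 20                       # chaos order, collocation points
xi = np.linspace(-1, 1, n_coll)
t_eval = np.linspace(0.0, 2.0, 50)

def solve_one(xi_i):
    k = 1.0 + 0.3 * xi_i
    return solve_ivp(lambda t, x: -k * x, (0.0, 2.0), [1.0],
                     t_eval=t_eval).y[0]

sols = np.array([solve_one(v) for v in xi])                # (n_coll, n_t)
V = np.array([legval(xi, np.eye(P + 1)[j]) for j in range(P + 1)]).T
coeffs, *_ = np.linalg.lstsq(V, sols, rcond=None)          # (P+1, n_t)

# For uniform xi the Legendre basis is orthogonal with E[P_j^2] = 1/(2j+1),
# so the statistics of x(t) follow directly from the coefficients.
mean = coeffs[0]
var = np.sum(coeffs[1:] ** 2
             / (2.0 * np.arange(1, P + 1)[:, None] + 1.0), axis=0)
print(mean[-1], var[-1])               # moments of x(t=2)
```

A handful of deterministic solves replaces the thousands of runs a sampling method would need, which is what makes optimization loops over such models tractable.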
