  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Validation and Uncertainty Quantification of Doublet Lattice Flight Loads using Flight Test Data

Olson, Nicholai Kenneth Keeney 19 July 2018 (has links)
This paper presents a framework for tuning, validating, and quantifying uncertainties for flight loads. The flight loads are computed using a Nastran doublet-lattice model and are validated using measured data from a flight loads survey for a Cessna Model 525B business jet equipped with Tamarack® Aerospace Group's active winglet modification, ATLAS® (Active Technology Load Alleviation System). ATLAS® allows for significant aerodynamic improvements to be realized by reducing loads to below the values of the original, unmodified airplane. Flight loads are measured using calibrated strain gages and are used to tune and validate a Nastran doublet-lattice flight loads model. Methods used to tune and validate the model include uncertainty quantification of the Nastran model form and lead to an uncertainty-quantified model which can be used to estimate flight loads at any given flight condition within the operating envelope of the airplane. The methods presented herein improve the efficiency of the loads process and reduce conservatism in design loads through improved prediction techniques. Regression techniques and uncertainty quantification methods are presented to more accurately assess the complexities in comparing models to flight test results. / Master of Science / This paper presents a process for correlating analytical airplane loads models to flight test data and validating the results. The flight loads are computed using Nastran, a structural modeling tool coupled with an aerodynamic loads solver. The flight loads models are correlated to flight test data and are validated using measured data from a flight loads survey for a Cessna Model 525B business jet equipped with Tamarack® Aerospace Group's active winglet modification, ATLAS® (Active Technology Load Alleviation System). ATLAS® allows for significant aerodynamic improvements and efficiency gains to be realized by reducing loads to below the values of the original, unmodified airplane.
Flight loads are measured using a series of strain gage sensors mounted on the wing. These sensors are calibrated to measure aerodynamic loads and are used to tune and validate the Nastran flight loads model. Methods used to tune and validate the model include quantification of error and uncertainties in the model. These efforts lead to a substantially increased understanding of the model limitations and uncertainties, which is especially valuable at the corners of the operating envelope of the airplane. The methods presented herein improve the efficiency of the loads process and reduce conservatism in design loads through improved prediction techniques. The results provide a greater amount of guidance for decision making throughout the design and certification of a load alleviation system and similar airplane aerodynamic improvements.
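The tune-then-quantify workflow described in this abstract can be illustrated with a minimal regression sketch. All numbers below (the loads data, the single multiplicative correction factor, and the 95% band) are invented for illustration and are not from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: doublet-lattice model predictions vs. strain-gage
# measurements of wing bending moment at several flight conditions.
predicted = np.array([120.0, 185.0, 240.0, 310.0, 405.0])   # model (kN*m)
measured = predicted * 0.94 + rng.normal(0.0, 4.0, 5)        # "flight test"

# Tune the model with a single multiplicative correction factor k,
# chosen by least squares: minimize ||measured - k * predicted||^2.
k = (predicted @ measured) / (predicted @ predicted)

# Quantify residual model-form uncertainty from the calibration residuals.
residuals = measured - k * predicted
sigma = residuals.std(ddof=1)

# Tuned prediction with an approximate 95% uncertainty band
# at a new flight condition.
new_pred = 275.0
load, band = k * new_pred, 1.96 * sigma
print(f"k = {k:.3f}, tuned load = {load:.1f} +/- {band:.1f} kN*m")
```

A real loads survey would fit many gages and flight conditions at once; the single scale factor is the simplest instance of the regression idea.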
12

Cooperative Prediction and Planning Under Uncertainty for Autonomous Robots

Nayak, Anshul Abhijit 11 October 2024 (has links)
Autonomous robots are set to become ubiquitous in the future, with applications ranging from autonomous cars to assistive household robots. These systems must operate in close proximity to dynamic and static objects, including humans and other non-autonomous systems, adding complexity to their decision-making processes. The behaviour of such objects is often stochastic and hard to predict. Making robust decisions under such uncertain scenarios can be challenging for these autonomous robots. In the past, researchers have used deterministic approaches to predict the motion of surrounding objects. However, these approaches can be over-confident and do not capture the stochastic behaviour of surrounding objects necessary for safe decision-making. In this dissertation, we show the importance of probabilistic prediction of surrounding dynamic objects and its incorporation into planning for safety-critical decision making. We utilise Bayesian inference models such as Monte Carlo dropout and deep ensembles to probabilistically predict the motion of surrounding objects. Our probabilistic trajectory forecasting model showed improvement over standard deterministic approaches and could handle adverse scenarios such as sensor noise and occlusion during prediction. The uncertainty-inclusive prediction of surrounding objects has been incorporated into planning. The inclusion of predicted states of surrounding objects with associated uncertainty enables the robot to make proactive decisions while avoiding collisions. / Doctor of Philosophy / In the future, humans will rely greatly on the assistance of autonomous robots for everyday tasks: drones delivering packages, cars driving autonomously, and household robots helping with day-to-day activities. In all such scenarios, the robot may have to interact with its surroundings, in particular with humans.
Robots working in close proximity to humans must be intelligent enough to make safe decisions that do not affect or intrude on the human. Humans in particular make abrupt decisions, and their motion can be unpredictable. The robot must understand human intent in order to navigate safely without interfering with the human. Therefore, the robot must capture the uncertain human behaviour and predict its future motion so that it can make proactive decisions. We propose to capture the stochastic behaviour of humans using deep-learning-based prediction models that learn motion patterns from real human trajectories. Our method not only predicts the future trajectory of humans but also captures the associated uncertainty during prediction. In this thesis, we also propose how to predict human motion under adverse scenarios such as bad weather leading to noisy sensing, as well as under occlusion. Further, we integrate the predicted stochastic behaviour of surrounding humans into the planning of the robot for safe navigation among humans.
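The Monte Carlo dropout idea mentioned above (keep dropout active at test time and read uncertainty off the spread of repeated stochastic forward passes) can be sketched in plain NumPy. The tiny network and its weights are made-up stand-ins for a trained trajectory model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-layer network predicting a pedestrian's next x-position from the
# last two observed positions. Weights are fixed random stand-ins for a
# trained model; all names and values here are illustrative.
W1 = rng.normal(0.0, 0.5, (2, 16))
W2 = rng.normal(0.0, 0.5, (16, 1))

def predict_with_dropout(x, p_drop=0.2):
    """One stochastic forward pass with dropout kept ON at test time."""
    h = np.maximum(x @ W1, 0.0)                 # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop         # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)               # inverted dropout scaling
    return (h @ W2).item()

# Monte Carlo dropout: repeat T stochastic passes and treat the spread
# of the outputs as the predictive uncertainty.
x = np.array([1.0, 1.2])                        # last two positions
samples = np.array([predict_with_dropout(x) for _ in range(200)])
mean, std = samples.mean(), samples.std()
print(f"predicted position: {mean:.2f} +/- {std:.2f}")
```

A deep ensemble replaces the repeated dropout passes with one pass per independently trained network, but the mean/spread readout is the same.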
13

Investigating Validation of a Simulation Model for Development and Certification of Future Fighter Aircraft Fuel Systems

Vilhelmsson, Markus, Strömberg, Isac January 2016 (has links)
In this thesis a method for verification, validation and uncertainty quantification (VV&UQ) has been tested and evaluated on a fuel transfer application in the fuel rig currently used at Saab. A simplified model has been developed for the limited part of the fuel system in the rig that is affected by the transfer, and VV&UQ has been performed on this model. The scope of the thesis has been to investigate if and how simulation models can be used for certification of the fuel system in a fighter aircraft. The VV&UQ analysis was performed with the limitation that no probability distributions for uncertainties were considered; instead, all uncertainties were described using intervals (so-called epistemic uncertainties). Simulations were performed at five different operating points in terms of fuel flow to the engine, with five different initial conditions for each, resulting in 25 different operating modes. For each of the 25 cases, the VV&UQ resulted in a minimum and maximum limit for how much fuel could be transferred. Six cases were chosen for validation measurements, and the resulting amount of fuel transferred fell within the corresponding epistemic intervals. Performing VV&UQ is a time-demanding and computationally heavy task, which quickly grows as the model becomes more complex. Our conclusion is that a pilot study, in which time and costs are evaluated, is necessary before choosing to use a simulation model and perform VV&UQ for certification. Further investigation of different methods for increasing confidence in simulation models is also needed, for which VV&UQ is one suitable option.
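The interval (epistemic) treatment of uncertainty described above can be sketched as follows. The transfer model and the interval bounds are invented stand-ins, not Saab's rig model:

```python
import itertools

# Illustrative stand-in for a fuel-transfer model: transferred mass is
# pump flow * transfer time * efficiency. The intervals below are invented.
def transferred_fuel(flow, time, eta):
    return flow * time * eta  # kg

# Epistemic uncertainties as intervals, with no probability distributions.
intervals = {
    "flow": (0.95, 1.05),   # kg/s
    "time": (118.0, 122.0), # s
    "eta":  (0.90, 0.98),   # -
}

# For a model monotone in each input, sweeping every corner of the
# interval box brackets the output; in general a dense sampling of the
# box would be needed instead.
corners = itertools.product(*intervals.values())
outputs = [transferred_fuel(*c) for c in corners]
lo, hi = min(outputs), max(outputs)
print(f"transferred fuel in [{lo:.1f}, {hi:.1f}] kg")
```

Validation then amounts to checking that the measured quantity falls inside [lo, hi], as the thesis reports for its six validation cases.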
14

Asymptotic theory for Bayesian nonparametric procedures in inverse problems

Ray, Kolyan Michael January 2015 (has links)
The main goal of this thesis is to investigate the frequentist asymptotic properties of nonparametric Bayesian procedures in inverse problems and the Gaussian white noise model. In the first part, we study the frequentist posterior contraction rate of nonparametric Bayesian procedures in linear inverse problems in both the mildly and severely ill-posed cases. This rate provides a quantitative measure of the quality of statistical estimation of the procedure. A theorem is proved in a general Hilbert space setting under approximation-theoretic assumptions on the prior. The result is applied to non-conjugate priors, notably sieve and wavelet series priors, as well as in the conjugate setting. In the mildly ill-posed setting, minimax optimal rates are obtained, with sieve priors being rate adaptive over Sobolev classes. In the severely ill-posed setting, oversmoothing the prior yields minimax rates. Previously established results in the conjugate setting are obtained using this method. Examples of applications include deconvolution, recovering the initial condition in the heat equation and the Radon transform. In the second part of this thesis, we investigate Bernstein--von Mises type results for adaptive nonparametric Bayesian procedures in both the Gaussian white noise model and the mildly ill-posed inverse setting. The Bernstein--von Mises theorem details the asymptotic behaviour of the posterior distribution and provides a frequentist justification for the Bayesian approach to uncertainty quantification. We establish weak Bernstein--von Mises theorems in both a Hilbert space and multiscale setting, which have applications in $L^2$ and $L^\infty$ respectively. This provides a theoretical justification for plug-in procedures, for example the use of certain credible sets for sufficiently smooth linear functionals. We use this general approach to construct optimal frequentist confidence sets using a Bayesian approach. 
We also provide simulations to numerically illustrate our approach and obtain a visual representation of the different geometries involved.
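A minimal conjugate sketch of posterior contraction in the Gaussian white noise (sequence) model: as the noise level 1/sqrt(n) decreases, the posterior mean concentrates around the truth. The prior decay, the true signal, and the truncation at 200 coefficients are all illustrative choices, not those of the thesis:

```python
import numpy as np

rng = np.random.default_rng(5)

# Sequence-space white noise model: Y_i = theta_i + eps_i / sqrt(n),
# with a Gaussian prior theta_i ~ N(0, i^(-1-2a)).
a = 1.0                                               # prior smoothness
theta = np.array([i ** -1.5 for i in range(1, 201)])  # true signal

def posterior_l2_error(n):
    i = np.arange(1, 201)
    y = theta + rng.normal(0.0, 1.0, 200) / np.sqrt(n)
    prior_var = i ** (-1.0 - 2 * a)
    # Conjugate posterior mean: coordinatewise shrinkage toward 0.
    shrink = prior_var / (prior_var + 1.0 / n)
    return np.linalg.norm(shrink * y - theta)

# The posterior contracts: the L2 estimation error shrinks as n grows.
errs = [posterior_l2_error(n) for n in (10, 1000, 100000)]
print([f"{e:.3f}" for e in errs])
```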
15

Cross-scale model validation with aleatory and epistemic uncertainty

Blumer, Joel David 08 June 2015 (has links)
Nearly every decision must be made with a degree of uncertainty regarding the outcome. Decision making based on modeling and simulation predictions needs to incorporate and aggregate uncertain evidence. To validate multiscale simulation models, it may be necessary to consider evidence collected at a length scale that is different from the one at which a model predicts. In addition, traditional methods of uncertainty analysis do not distinguish between two types of uncertainty: uncertainty due to inherently random inputs, and uncertainty due to lack of information about the inputs. This thesis examines and applies a Bayesian approach for model parameter validation that uses generalized interval probability to separate these two types of uncertainty. A generalized interval Bayes’ rule (GIBR) is used to combine the evidence and update belief in the validity of parameters. The sensitivity of completeness and soundness for interval range estimation in GIBR is investigated. Several approaches to represent complete ignorance of probabilities’ values are tested. The result from the GIBR method is verified using Monte Carlo simulations. The method is first applied to validate the parameter set for a molecular dynamics simulation of defect formation due to radiation. Evidence is supplied by the comparison with physical experiments. Because the simulation includes variables whose effects are not directly observable, an expanded form of GIBR is implemented to incorporate the uncertainty associated with measurement in belief update. In a second example, the proposed method is applied to combining the evidence from two models of crystal plasticity at different length scales.
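The interval-probability flavour of the generalized interval Bayes' rule can be illustrated with a two-hypothesis sketch: ordinary Bayes' rule evaluated over an interval box. This is a simplification of GIBR, and the numbers are invented:

```python
import itertools

# Interval-valued inputs (illustrative numbers, not from the thesis):
prior = (0.6, 0.8)        # P(parameter set is valid)
like_v = (0.7, 0.9)       # P(evidence | valid)
like_n = (0.1, 0.3)       # P(evidence | not valid)

def posterior(p, lv, ln):
    """Ordinary Bayes' rule for a two-hypothesis belief update."""
    return p * lv / (p * lv + (1.0 - p) * ln)

# Bayes' rule is monotone in each argument here, so evaluating every
# corner of the interval box brackets the interval-valued posterior.
vals = [posterior(p, lv, ln)
        for p, lv, ln in itertools.product(prior, like_v, like_n)]
lo, hi = min(vals), max(vals)
print(f"P(valid | evidence) in [{lo:.3f}, {hi:.3f}]")
```

The width of the resulting interval reflects epistemic ignorance about the probabilities themselves, which is exactly what a point-valued posterior cannot express.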
16

Reliability methods in dynamic system analysis

Munoz, Brad Ernest 26 April 2013 (has links)
Standard techniques for analyzing a system's response under uncertain system parameters or inputs are generally importance sampling methods. Sampling methods require a large number of simulation runs before the system output statistics can be analyzed. As model fidelity increases, sampling techniques become computationally infeasible, and reliability methods have gained popularity as an analysis approach that requires significantly fewer simulation runs. Reliability analysis is an analytic technique that finds a particular point in the design space that can accurately be related to the probability of system failure. However, application to dynamic systems has remained limited. In the following thesis a First Order Reliability Method (FORM) is used to determine the failure probability of a dynamic system due to system/input uncertainties. A pendulum cart system is used as a case study to demonstrate the FORM on a dynamic system. Three failure modes are discussed, corresponding to the maximum pendulum angle, the maximum system velocity, and a combined requirement that neither the maximum pendulum angle nor the maximum system velocity is exceeded. An explicit formulation is generated from the implicit formulation using a Response Surface Methodology, and the FORM is performed using the explicit estimate. Although the analysis converges with minimal simulation computations, attempts to verify the FORM results illuminate current limitations of the methodology. The results of this initial study conclude that, currently, sampling techniques are necessary to verify the FORM results, which restricts the potential applications of the FORM methodology. Suggested future work focuses on result verification without the use of importance sampling, which would allow reliability methods to have widespread applicability.
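A minimal sketch of FORM using the standard Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration on a toy linear limit state in standard normal space. This is not the pendulum-cart model; the limit state is invented so the answer can be checked analytically:

```python
import math
import numpy as np

# Limit state in standard normal space: failure when g(u) <= 0.
# Illustrative stand-in for e.g. "maximum pendulum angle exceeded".
def g(u):
    return 3.0 - u[0] - u[1]

def grad_g(u, h=1e-6):
    """Central finite-difference gradient of g."""
    out = np.zeros_like(u)
    for i in range(len(u)):
        e = np.zeros_like(u)
        e[i] = h
        out[i] = (g(u + e) - g(u - e)) / (2 * h)
    return out

# HL-RF iteration: find the most probable failure point u*, i.e. the
# point on the surface g(u) = 0 closest to the origin.
u = np.zeros(2)
for _ in range(50):
    grad = grad_g(u)
    u = (grad @ u - g(u)) / (grad @ grad) * grad

beta = np.linalg.norm(u)                               # reliability index
pf = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))    # Phi(-beta)
print(f"beta = {beta:.3f}, P(failure) ~ {pf:.2e}")
```

For this linear g, the exact answer is beta = 3/sqrt(2); the one failure-probability estimate comes from a single optimization rather than thousands of samples, which is the appeal of the method for expensive dynamic simulations.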
17

Parametric uncertainty and sensitivity methods for reacting flows

Braman, Kalen Elvin 09 July 2014 (has links)
A Bayesian framework for quantification of uncertainties has been used to quantify the uncertainty introduced by chemistry models. This framework adopts a probabilistic view to describe the state of knowledge of the chemistry model parameters and simulation results. Given experimental data, this method updates the model parameters' values and uncertainties and propagates that parametric uncertainty into simulations. This study focuses on syngas, a mixture of H2 and CO in various ratios, which is the product of coal gasification. Coal gasification promises to reduce emissions by replacing the burning of coal with the less polluting burning of syngas. Despite the simplicity of syngas chemistry models, they nonetheless fail to accurately predict burning rates at high pressure. Three syngas models have been calibrated using laminar flame speed measurements. After calibration, the resulting uncertainty in the parameters is propagated forward into the simulation of laminar flame speeds. The model evidence is then used to compare candidate models. Sensitivity studies, in addition to Bayesian methods, can be used to assess chemistry models. Sensitivity studies provide a measure of how responsive target quantities of interest (QoIs) are to changes in the parameters. The adjoint equations have been derived for laminar, incompressible, variable-density reacting flow and applied to hydrogen flame simulations. From the adjoint solution, the sensitivity of the QoI to the chemistry model parameters has been calculated. The results identify the parameters to which the flame tip temperature and NOx emissions are most sensitive. Such information can be used in the development of new experiments by pointing out the critical chemistry model parameters. Finally, a broader goal for chemistry model development is set through the adjoint methodology. A new quantity, termed field sensitivity, is introduced to guide chemistry model development.
Field sensitivity describes how information about perturbations in flowfields propagates to specified QoIs. The field sensitivity, shown mathematically to be equivalent to finding the adjoint of the primal governing equations, is obtained for laminar hydrogen flame simulations using three different chemistry models. Results show that even when the primal solutions are sufficiently close for the three mechanisms, the field sensitivity can vary.
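The adjoint idea (one extra linear solve yields the sensitivity of a QoI to arbitrarily many parameters) can be sketched on a small discrete stand-in for the governing equations. The matrix system below is invented for illustration:

```python
import numpy as np

# Discrete stand-in for a reacting-flow solve: A(p) x = b, where p is a
# "chemistry" parameter and J = c^T x is the quantity of interest (QoI).
b = np.array([1.0, 2.0])
c = np.array([1.0, 1.0])

def A(p):
    return np.array([[4.0 + p, 1.0],
                     [1.0, 3.0]])

def J(p):
    return c @ np.linalg.solve(A(p), b)

p0 = 0.5
x = np.linalg.solve(A(p0), b)          # primal solve

# Adjoint equation: A^T lam = c. A single extra linear solve then gives
# the sensitivity to any number of parameters via dJ/dp = -lam^T (dA/dp) x.
lam = np.linalg.solve(A(p0).T, c)
dA_dp = np.array([[1.0, 0.0], [0.0, 0.0]])      # dA/dp for this A(p)
dJ_dp = -lam @ dA_dp @ x                        # adjoint sensitivity
print(f"dJ/dp (adjoint) = {dJ_dp:.6f}")
```

A finite-difference check, (J(p0+h) - J(p0-h)) / 2h, reproduces the same value; with many parameters the adjoint route needs one extra solve where finite differences need one per parameter.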
18

Continuous Model Updating and Forecasting for a Naturally Fractured Reservoir

Almohammadi, Hisham 16 December 2013 (has links)
Recent developments in instrumentation, communication and software have enabled the integration of real-time data into the decision-making process of hydrocarbon production. Applications of real-time data integration in drilling operations and horizontal-well lateral placement are becoming industry common practice. In reservoir management, the use of real-time data has been shown to be advantageous in tasks such as improving smart-well performance and in pressure-maintenance programs. Such capabilities allow for a paradigm change in which reservoir management can be looked at as a strategy that enables a semi-continuous process of model updates and decision optimizations instead of being periodic or reactive. This is referred to as closed-loop reservoir management (CLRM). Due to the complexity of the dynamic physical processes, the large model sizes, and the large uncertainties associated with reservoir description, continuous model updating is a large-scale problem with a high-dimensional parameter space and high computational costs. The need for an algorithm that is both feasible for practical applications and capable of generating reliable estimates of reservoir uncertainty is a key element in CLRM. This thesis investigates the validity of Markov Chain Monte Carlo (MCMC) sampling used in a Bayesian framework as an uncertainty quantification and model-updating tool suitable for real-time applications. A 3-phase, dual-porosity, dual-permeability reservoir model is used in a synthetic experiment. Continuous probability density functions of cumulative oil production for two cases with different model updating frequencies and reservoir maturity levels are generated and compared to a case with a known geology, i.e., truth case. Results show continuously narrowing ranges for cumulative oil production, with mean values approaching the truth case as model updating advances and the reservoir becomes more mature.
To deal with MCMC sampling sensitivity to increasing numbers of observed measurements, as in the case of real-time applications, a new formulation of the likelihood function is proposed. Changing the likelihood function significantly improved chain convergence, chain mixing and forecast uncertainty quantification. Further, methods to validate the sampling quality and to judge the prior model for the MCMC process in real applications are advised.
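The MCMC model-updating loop can be sketched with a random-walk Metropolis sampler on a deliberately simple one-parameter "history matching" problem. The data and model are invented; the real application replaces the likelihood with runs of a dual-porosity reservoir simulator:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative stand-in for observed production data, generated from a
# true rate of 5.0 with unit measurement noise.
data = rng.normal(5.0, 1.0, 40)

def log_post(theta):
    """Log-posterior: N(0, 10^2) prior times N(theta, 1) likelihood."""
    log_prior = -0.5 * (theta / 10.0) ** 2
    log_like = -0.5 * np.sum((data - theta) ** 2)
    return log_prior + log_like

# Random-walk Metropolis: accept each proposal with probability
# min(1, posterior ratio); the chain then samples the posterior.
theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.5)
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)

post = np.array(chain[1000:])                   # drop burn-in
print(f"posterior mean {post.mean():.2f} +/- {post.std():.2f}")
```

As more observations arrive the likelihood term dominates and the posterior narrows, which is the mechanism behind the "continuously narrowing ranges" reported above; it is also why the likelihood formulation matters so much for chain mixing.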
19

Bayesian Estimation of Material Properties in Case of Correlated and Insufficient Data

Giugno, Matteo 02 October 2013 (has links)
Identification of material properties has been widely discussed in recent times thanks to better technology availability and its application to the field of experimental mechanics. Bayesian approaches such as Markov chain Monte Carlo (MCMC) methods have been demonstrated to be reliable and suitable tools for processing data, describing probability distributions and uncertainty bounds for the investigated parameters in the absence of explicit inverse analytical expressions. Although it is necessary to repeat experiments multiple times for good estimation, this might not always be feasible due to practical limitations: the thesis addresses the problem of material property estimation in the presence of correlated and insufficient data, resulting in multivariate error modeling and high sample covariance matrix instability. To recover from the lack of information about the true covariance, we analyze two different methodologies: first, hierarchical covariance modeling is investigated; then a method based on covariance shrinkage is employed. A numerical study comparing both approaches and employing finite element analysis within the MCMC iterations is presented, showing that the method based on covariance shrinkage is more suitable for post-processing data for the range of problems under investigation.
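The covariance-shrinkage remedy for an unstable sample covariance can be sketched as follows. The fixed shrinkage intensity below is hand-picked for illustration, whereas Ledoit-Wolf-style estimators choose it from the data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Few repeated experiments (n) relative to the dimension (p): the sample
# covariance is unstable and ill-conditioned. Numbers are illustrative.
n, p = 12, 10
X = rng.normal(0.0, 1.0, (n, p))
S = np.cov(X, rowvar=False)                      # sample covariance

# Shrink toward a scaled-identity target: a simple shrinkage estimator
# with a fixed, hand-picked intensity alpha.
alpha = 0.3
mu = np.trace(S) / p                             # average variance
S_shrunk = (1.0 - alpha) * S + alpha * mu * np.eye(p)

cond = np.linalg.cond(S)
cond_shrunk = np.linalg.cond(S_shrunk)
print(f"condition number: {cond:.1f} -> {cond_shrunk:.1f}")
```

The shrunk matrix is a convex combination of the noisy sample covariance and a well-conditioned target, so its eigenvalue spread (and hence its condition number) is strictly reduced, stabilizing the multivariate error model inside the MCMC loop.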
20

Uncertainty Quantification of Thermo-acoustic Instabilities in Gas Turbine Combustors

Ndiaye, Aïssatou 18 April 2017 (has links)
Thermoacoustic instabilities result from the interaction between acoustic pressure oscillations and flame heat release rate fluctuations. These combustion instabilities are of particular concern due to their frequent occurrence in modern, low-emission gas turbine engines. Their major undesirable consequence is a reduced time of operation due to large-amplitude oscillations of the flame position and structural vibrations within the combustor. Computational Fluid Dynamics (CFD) has now become a key approach to understanding and predicting these instabilities at industrial readiness level. Still, predicting this phenomenon remains difficult due to modelling and computational challenges; this is even more true when physical parameters of the modelling process are uncertain, which is always the case in practical situations. Introducing Uncertainty Quantification for thermoacoustics is the only way to study and control the stability of gas turbine combustors operated under realistic conditions; this is the objective of this work. First, a laboratory-scale combustor (with only one injector and flame) as well as two industrial helicopter engines (with N injectors and flames) are investigated. Calculations based on a Helmholtz solver and a quasi-analytical low-order tool provide suitable estimates of the frequency and modal structures for each geometry. The analysis suggests that the flame response to acoustic perturbations plays the predominant role in the dynamics of the combustor. Accounting for the uncertainties of the flame representation is thus identified as a key step towards a robust stability analysis. Second, the notion of Risk Factor, that is to say the probability for a particular thermoacoustic mode to be unstable, is introduced in order to provide a more general description of the system than the classical binary (stable/unstable) classification.
Monte Carlo and surrogate modelling approaches are then combined to perform an uncertainty quantification analysis of the laboratory-scale combustor with two uncertain parameters (amplitude and time delay of the flame response). It is shown that the use of algebraic surrogate models drastically reduces the number of state computations, and thus the computational load, while providing accurate estimates of the modal risk factor. To deal with the curse of dimensionality, a strategy to reduce the number of uncertain parameters is further introduced in order to properly handle the two industrial helicopter engines. The active subspace algorithm, used together with a change of variables, identifies three dominant directions (instead of the N initial uncertain parameters) which are sufficient to describe the dynamics of the industrial systems. Combined with appropriate surrogate model construction, this enables computationally efficient uncertainty quantification analysis of complex thermoacoustic systems. Third, the perspective of using the adjoint method for sensitivity analysis of thermoacoustic systems represented by 3D Helmholtz solvers is examined. The results obtained for 2D and 3D test cases are promising and suggest further exploring the potential of this method on even more complex thermoacoustic problems.
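The Risk Factor computation (Monte Carlo over a cheap surrogate instead of repeated Helmholtz solves) can be sketched as follows. The growth-rate surrogate and the parameter ranges are invented for illustration and are not the thesis' flame model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative surrogate for the modal growth rate as a function of the
# flame response amplitude n and time delay tau.
def growth_rate(n, tau):
    return 50.0 * (n - 1.0) + 30.0 * np.sin(2 * np.pi * tau / 2e-3)

# Uncertain flame parameters: amplitude and time delay.
n_samples = rng.uniform(0.8, 1.2, 100_000)
tau_samples = rng.uniform(0.0, 2e-3, 100_000)

# Risk factor = probability that the mode is unstable (growth rate > 0),
# estimated by cheap Monte Carlo on the surrogate; each surrogate call
# replaces one expensive Helmholtz-solver evaluation.
growth = growth_rate(n_samples, tau_samples)
risk_factor = np.mean(growth > 0.0)
print(f"risk factor ~ {risk_factor:.3f}")
```

A risk factor near 0 or 1 recovers the classical stable/unstable classification, while intermediate values quantify how sensitive the verdict is to the flame-response uncertainty.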