11

Vehicle Sprung Mass Parameter Estimation Using an Adaptive Polynomial-Chaos Method

Shimp, Samuel Kline III 14 May 2008 (has links)
The polynomial-chaos expansion (PCE) approach to modeling provides an estimate of the probabilistic response of a dynamic system with uncertainty in its parameters. A novel adaptive parameter estimation method exploiting the polynomial-chaos representation of a general quarter-car model is presented. Because the uncertainty was assumed to be concentrated in the sprung mass parameter, a novel pseudo mass matrix was developed for generating the state-space PCE model. To implement the PCE model in a real-time adaptation routine, a novel technique for representing PCE output equations was also developed. A simple parameter estimation law based on the output error between measured accelerations and PCE acceleration estimates was developed and evaluated through simulation and experiment. Simulation results demonstrate the adaptation algorithm's convergence properties as well as its limitations, and are further verified by a real-time experimental implementation on a quarter-car test rig. This work presents the first truly real-time implementation of a PCE model. The experimental implementation of the adaptive PCE estimation method shows promising results through its ability to converge to, and maintain, a stable estimate of the unknown parameter. / Master of Science
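To make the idea concrete, here is a minimal, hypothetical sketch of output-error adaptation against a polynomial-chaos surrogate; the quarter-car stand-in `accel_gain`, the Gaussian sprung-mass uncertainty, and all gains are invented for illustration and do not reproduce the thesis's actual model.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Toy "truth": steady-state acceleration gain of a quarter car as a
# function of sprung mass m (an invented stand-in for the real model).
def accel_gain(m, k=20000.0):
    return k / m

# PCE surrogate in the standardized variable xi, with m = m_mean + m_std*xi
# (Gaussian uncertainty assumed, hence a probabilists'-Hermite basis).
m_mean, m_std = 300.0, 30.0
xi_train = np.linspace(-3.0, 3.0, 50)
coeffs = He.hermefit(xi_train, accel_gain(m_mean + m_std * xi_train), deg=4)

# Output-error adaptation: drive xi_hat so the surrogate output matches
# the measured output produced by the true (unknown) sprung mass.
m_true = 340.0
y_meas = accel_gain(m_true)
xi_hat, gamma = 0.0, 2e-3                 # initial guess, adaptation gain
for _ in range(300):
    err = He.hermeval(xi_hat, coeffs) - y_meas
    grad = He.hermeval(xi_hat, He.hermeder(coeffs))
    xi_hat -= gamma * err * grad          # gradient output-error law
print("estimated sprung mass:", m_mean + m_std * xi_hat)
```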
12

Statistical contribution to the virtual multicriteria optimisation of combinatorial molecule libraries and to the validation and application of QSAR models

Le Bailly de Tilleghem, Céline 07 January 2008 (has links)
This thesis develops an integrated methodology, based on the desirability index and QSAR models, for virtually optimising molecules. Statistical and algorithmic tools are proposed to search huge collections of compounds obtained by combinatorial chemistry for the most promising ones. First, once the drugability properties of interest have been precisely defined, QSAR models are developed to mimic the relationship between those properties and chemical descriptors of the molecules. The literature on QSAR models is reviewed, and the statistical tools used to validate the models and analyse their fit and predictive power are detailed. Even when a QSAR model has been validated and appears highly predictive, we emphasise the importance of measuring extrapolation, by defining the model's applicability domain, and of quantifying the prediction error for a given molecule. Indeed, QSAR models are often applied massively to predict drugability properties for libraries of new compounds without regard for the reliability of each individual prediction. Next, a desirability index measures the compromise between the multiple estimated drugability properties and allows the molecules in the combinatorial library to be ranked in order of preference. The propagation of the models' prediction error onto the desirability index is quantified by a confidence interval that can be constructed under general conditions for linear regression, PLS regression, or regression tree models. This fills an important gap in the desirability-index literature, which treats the index as exact. Finally, a new efficient algorithm (WEALD) is proposed to virtually screen the combinatorial library and retain the molecules with the highest desirability indexes. For each explored molecule, it is checked whether the molecule belongs to the applicability domain of each QSAR model. In addition, the uncertainty of each explored molecule's desirability index is taken into account by gathering molecules that cannot be distinguished from the optimal one owing to the propagation of the QSAR models' prediction error: those molecules do not have a significantly smaller desirability than the optimal molecule found by WEALD. This constitutes another important improvement in the use of the desirability index as a tool for comparing solutions in a multicriteria optimisation problem. The integrated methodology was developed in the context of lead optimisation and is illustrated on a real combinatorial library provided by Eli Lilly and Company, which is the main application of the thesis. Nevertheless, since the results on desirability-index uncertainty hold under general conditions, they can be applied to any multicriteria optimisation problem, as often arises in industry.
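As an illustration of the compromise measure the thesis builds on, the following is a minimal sketch of a Derringer-Suich-style desirability index; the property names, target ranges, and values are invented, and the thesis's own index and its confidence intervals are more elaborate.

```python
import numpy as np

def desirability_larger_is_better(y, lo, hi, s=1.0):
    """Map a 'larger is better' property onto [0, 1] (invented ranges)."""
    d = np.clip((y - lo) / (hi - lo), 0.0, 1.0)
    return d ** s

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    ds = np.asarray(ds, dtype=float)
    return ds.prod() ** (1.0 / len(ds))

# Example: three predicted drugability properties for one molecule.
d1 = desirability_larger_is_better(7.2, lo=5.0, hi=9.0)   # e.g. potency score
d2 = desirability_larger_is_better(0.6, lo=0.2, hi=0.9)   # e.g. solubility score
d3 = desirability_larger_is_better(0.8, lo=0.5, hi=1.0)   # e.g. permeability score
print(overall_desirability([d1, d2, d3]))
```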
13

Methods for sensitivity analysis and backward propagation of uncertainty applied to mathematical models in engineering applications

Alhossen, Iman 11 December 2017 (has links)
Approaches for studying and quantifying the influence of uncertain data have become a necessity in many disciplines. While the forward propagation of uncertainty has been investigated extensively, backward propagation remains a broad subject of study with no standardized method. In this thesis, a new method for the backward propagation of uncertainty is presented; its aim is to determine the input uncertainty from output data that are considered uncertain. In parallel, sensitivity analysis methods are widely used to reveal the influence of the inputs on the output in any modeling process. These approaches isolate the most significant, i.e., the most influential, inputs, which are the ones that need to be examined in an uncertainty study. This work first examines the Sobol method, one of the most efficient global sensitivity analysis methods. It relies on the computation of sensitivity indices, called Sobol indices, which represent the effect of the input data (viewed as continuous random variables) on the output. The Sobol method is then shown to give reliable results even when applied in the discrete case. Next, the framework of the Sobol method is extended to address the backward propagation of uncertainty. Finally, a new use of the Sobol method is proposed for studying how the sensitivity indices vary with respect to certain factors of the model or certain experimental conditions. The conclusions drawn from this variation illustrate different characteristics of the input data and, moreover, indicate the best experimental conditions under which parameter estimation can be performed effectively.
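For readers unfamiliar with Sobol indices, the sketch below shows a standard pick-freeze (Saltelli-style) Monte Carlo estimator of first-order indices on an invented toy model; it is not the thesis's code, and the thesis's models and input distributions differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):                         # illustrative model only
    return x[:, 0] + 2.0 * x[:, 1]**2 + 0.5 * x[:, 0] * x[:, 2]

n, d = 100_000, 3
A = rng.uniform(-1.0, 1.0, (n, d))    # two independent input samples
B = rng.uniform(-1.0, 1.0, (n, d))
yA, yB = model(A), model(B)
var_y = yA.var()

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]               # "pick" column i, "freeze" the rest
    S_i = np.mean(yB * (model(ABi) - yA)) / var_y
    print(f"S_{i + 1} ~ {S_i:.3f}")
```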
14

Uncertainty of biomechanical data: modeling and propagation in diagnostic models of diseases of the musculoskeletal system

Hoang, Tuan Nha 16 December 2014 (has links)
Musculoskeletal pathologies involving bone and muscle deformities or abnormalities (e.g., cerebral palsy) have a strong impact on the quality of life of the people concerned. The objective of this thesis is to extend previous studies by integrating the modeling of uncertainty in biomechanical and biomedical data (medical images, kinematic/kinetic/EMG data, etc.) into predictive diagnosis models for disorders of the musculoskeletal system. Intervals were chosen to represent the uncertainty of biomechanical data; this formalism is simple and computationally inexpensive. Physiological, morphological, mechanical, and motion-analysis data were collected from the literature (i.e., research papers from Science Direct and PubMed) to establish an uncertainty space representing data fused from multiple sources. A new semi-supervised clustering method (US-ECM) is then proposed; it uses a credal partition to represent partial knowledge about the clusters, and the use of belief functions to represent these elements of knowledge allows them to be combined in a flexible and robust manner. Extending this method with a multidimensional uncertainty space yielded better performance than the original version. The reliability of biomechanical data was evaluated through a fusion of expert opinions: the reliability criteria for a data source (i.e., a published scientific paper) focus on the technique used, the acquisition and measurement protocol, and the amount of data, and a questionnaire system was developed to collect the expert opinions. The theory of belief functions was then applied to merge these opinions and establish a confidence level for each data source, allowing sources to be included or excluded according to their reliability. Finally, the developed clustering model (US-ECM) will be applied to a new database to assess the impact of data reliability on diagnostic performance.
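As a pointer to the machinery involved, the following is a minimal sketch of Dempster's rule of combination, the classical way belief functions from two experts can be merged; the frame ('R' = reliable, 'N' = not reliable) and the mass values are invented, and the thesis's fusion scheme may differ.

```python
def dempster(m1, m2):
    """Combine two mass functions over frozenset focal elements."""
    combined, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            c = a & b
            if c:
                combined[c] = combined.get(c, 0.0) + pa * pb
            else:
                conflict += pa * pb            # mass on empty intersections
    return {c: v / (1.0 - conflict) for c, v in combined.items()}

R, N = frozenset({"R"}), frozenset({"N"})
RN = R | N                                     # total ignorance
expert1 = {R: 0.6, RN: 0.4}                    # invented expert opinions
expert2 = {R: 0.5, N: 0.2, RN: 0.3}
for focal, mass in dempster(expert1, expert2).items():
    print(set(focal), round(mass, 3))
```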
15

Orbital Perturbations for Space Situational Awareness

Smriti Nandan Paul (9178595) 29 July 2020 (has links)
Because of the growing population of space objects, there is an increasing need to monitor and predict the status of the near-Earth space environment, especially of critical regions like the geosynchronous Earth orbit (GEO) and low Earth orbit (LEO) regions, for a sustainable future. Space Situational Awareness (SSA), however, is a challenging task because of the requirement for dynamically insightful, fast orbit propagation models, the presence of dynamical uncertainties, and limitations in sensor resources. Since initial parameters are often not known exactly and since many SSA applications require long-term orbit propagation, the long-term effects of initial uncertainties on orbital evolution are examined in this work. To obtain a long-term perspective in a fast and efficient manner, this work uses analytical propagation techniques. Existing analytical theories for orbital perturbations are investigated, and modifications are made to them to improve accuracy. While conservative perturbation forces are often studied, of particular interest here is the orbital perturbation due to non-conservative forces. Building on these findings and the developments in this thesis, two SSA applications are investigated. In the first, a sensor-tasking algorithm is designed for the detection of new classes of GEO space objects; in the second, near-GEO objects are categorized by combining knowledge of orbit dynamics with machine learning techniques.
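As one canonical example of the analytical (averaged) perturbation theories the abstract refers to, the sketch below evaluates the textbook secular J2 drift rates of the node and perigee; the near-GEO orbit values are illustrative only.

```python
import numpy as np

mu = 398600.4418        # km^3/s^2, Earth gravitational parameter
J2 = 1.08263e-3         # Earth oblateness coefficient
Re = 6378.137           # km, Earth equatorial radius

def j2_secular_rates(a, e, i):
    """Secular rates (rad/s) of RAAN and argument of perigee under J2."""
    n = np.sqrt(mu / a**3)                    # mean motion
    p = a * (1.0 - e**2)                      # semi-latus rectum
    f = 1.5 * J2 * (Re / p)**2 * n
    raan_dot = -f * np.cos(i)
    argp_dot = f * (2.0 - 2.5 * np.sin(i)**2)
    return raan_dot, argp_dot

# Example: a near-GEO orbit with slight inclination (invented values).
raan_dot, argp_dot = j2_secular_rates(a=42164.0, e=0.001, i=np.radians(5.0))
print(np.degrees(raan_dot) * 86400, "deg/day RAAN drift")
```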
16

Forensic Validation of 3D models

Lindberg, Mimmi January 2020 (has links)
3D reconstruction can be used in forensic science to reconstruct crime scenes and objects so that measurements and further information can be acquired off-site. Image-based reconstruction methods are desirable, but there is currently no procedure available for determining the uncertainty of such reconstructions. In this thesis the uncertainty of Structure from Motion is investigated. This is done by exploring the literature available on the subject and compiling the relevant information into a literature summary. In addition, Monte Carlo simulations are conducted to study how feature-position uncertainty affects the uncertainty of the parameters estimated by bundle adjustment. The experimental results show that the poses of cameras that contain few image correspondences are estimated with higher uncertainty; such poses are estimated with lower uncertainty if the cameras share feature correspondences with cameras that contain a higher number of projections.
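The following is a minimal sketch of the Monte Carlo idea on a two-view stand-in for full bundle adjustment: feature positions are perturbed with Gaussian noise and re-triangulated, and the spread of the recovered 3D point measures its uncertainty. The cameras, point, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two calibrated projection matrices P = [R | t] (identity intrinsics).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # 1 m baseline
X_true = np.array([0.3, -0.2, 5.0, 1.0])                        # homogeneous

def project(P, X):
    x = P @ X
    return x[:2] / x[2]

def triangulate(P1, x1, P2, x2):
    """Linear (DLT) triangulation of one point from two views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

x1, x2 = project(P1, X_true), project(P2, X_true)
sigma = 1e-3                        # feature noise, normalized image units
samples = np.array([
    triangulate(P1, x1 + sigma * rng.standard_normal(2),
                P2, x2 + sigma * rng.standard_normal(2))
    for _ in range(2000)
])
print("std of X, Y, Z estimates:", samples.std(axis=0))
```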
17

Component-Based Transfer Path Analysis and Hybrid Substructuring at High Frequencies: A Treatise on Error Modelling in Transfer Path Analysis

Venugopal, Harikrishnan January 2020 (has links)
The field of modal testing and analysis is currently seeing a surge of interest in error modelling. Several errors that occur during testing campaigns are modelled analytically or numerically and propagated effectively to various system-coupling and interface-reduction routines. This study aims to propagate human errors, such as position and orientation measurement errors, together with random noise-based errors in the measured Frequency Response Functions (FRFs), through the interface-reduction algorithm called Virtual Point Transformation (VPT) and then through a substructure-coupling method called Frequency-Based Substructuring (FBS). These methods form the cornerstone of Transfer Path Analysis (TPA). Furthermore, common sources of error such as the sensor mass-loading effect and sensor misalignment have also been investigated. Lastly, a new method has been devised to calculate the sensor positions and orientations after a measurement, based on the rigid-body properties of the system and the characteristics of the applied forces. The error propagation was performed using a computationally efficient first-order moment method and later validated using Monte Carlo simulations. The results show that the orientation measurement error is the most significant, followed by the FRF error and the position measurement error. The mass-loading effect is compensated using the Structural Modification Using Response Functions (SMURF) method, and the sensor misalignment is corrected using a coordinate transformation. The sensor positions and orientations are accurately estimated from the rigid-body properties and applied-force characteristics, individually using matrix algebra and simultaneously using an optimization-based non-linear least-squares solver.
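The comparison of a first-order moment method against Monte Carlo can be sketched generically as follows; the response function, nominal values, and input variances are invented stand-ins for the VPT/FBS quantities used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def response(x, theta):
    """Toy stand-in for an FRF-derived quantity; illustrative only."""
    return np.cos(theta) * x + 0.1 * x**2

p0 = np.array([1.0, 0.05])                    # nominal position, angle
cov = np.diag([1e-4, 1e-4])                   # assumed input variances

# First-order (Taylor) propagation: var_y = J cov J^T, with a
# central-difference Jacobian.
eps = 1e-6
J = np.array([
    (response(p0[0] + eps, p0[1]) - response(p0[0] - eps, p0[1])) / (2 * eps),
    (response(p0[0], p0[1] + eps) - response(p0[0], p0[1] - eps)) / (2 * eps),
])
var_first_order = J @ cov @ J

# Monte Carlo validation of the same variance.
draws = rng.multivariate_normal(p0, cov, size=200_000)
var_mc = response(draws[:, 0], draws[:, 1]).var()
print(var_first_order, var_mc)
```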
18

Uncertainty quantification for offshore wind turbines

Wang, Ziming January 2022 (has links)
Wind energy is a field with a large number of uncertainties. The random nature of the weather conditions, including wind speed, wind direction, and turbulence intensity, influences the energy output and the structural safety of a wind farm, making its performance fluctuate and difficult to predict. The uncertainties in the energy output and structure lifetime lead to increased investment risk. This risk can be reduced by optimizing the design of the farm or the wind turbine with respect to the stochastic parameters. The goal of this project is to improve the wind-farm optimization problem by providing accurate and computationally efficient annual energy production (AEP) estimates, an uncertainty quantification that is required at every optimization step. Uncertainty quantification is recognized as a challenge in the wind energy industry, as the chaotic nature of the weather complicates the prediction of energy production. High-fidelity wind-farm models usually employ advanced approaches such as large-eddy simulation or the Reynolds-averaged Navier-Stokes equations for better accuracy. However, the long computation time of these high-fidelity models makes traditional uncertainty quantification approaches, such as Monte Carlo simulation or other integration techniques, infeasible for larger wind farms. To overcome this limitation, the report proposes the use of generalized polynomial chaos expansion (PCE) to characterize the AEP as a function of wind speed and wind direction. PCE is a technique that approximates a random variable using a series of orthogonal polynomials, where the polynomials are chosen based on the target distribution. This report explains how a surrogate model of the AEP can be constructed using PCE and then used in optimization or model analysis. The objective of the thesis work is to minimize the number of model evaluations required to obtain an accurate energy response surface. Several non-intrusive PCE formulations are implemented and explored in this project. The report demonstrates that multi-element polynomial chaos fitted by point collocation, with a dependent polynomial basis, not only makes accurate and stable (with respect to the placement of the measurements) energy predictions, but also produces a realistic energy response surface.
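A minimal sketch of point-collocation PCE for an AEP-style quantity is given below, assuming (for illustration only) an invented power function and uniform wind-speed and wind-direction distributions, hence a Legendre basis; the thesis's multi-element, dependent-basis construction is more sophisticated.

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(3)

def farm_power(ws, wd):                      # MW; toy wake-like dips
    return 3.0 * (ws / 12.0)**3 * (1.0 - 0.2 * np.cos(np.radians(2 * wd)))

# Map inputs to [-1, 1] for a Legendre (uniform-measure) basis.
ws_lo, ws_hi, deg = 4.0, 16.0, 4
ws = rng.uniform(ws_lo, ws_hi, 200)          # collocation points
wd = rng.uniform(0.0, 360.0, 200)
u = 2 * (ws - ws_lo) / (ws_hi - ws_lo) - 1
v = 2 * wd / 360.0 - 1

# Tensor-product Legendre design matrix, fit by least squares
# (non-intrusive point collocation).
A = np.stack([L.legval(u, np.eye(deg + 1)[i]) * L.legval(v, np.eye(deg + 1)[j])
              for i in range(deg + 1) for j in range(deg + 1)], axis=1)
coef, *_ = np.linalg.lstsq(A, farm_power(ws, wd), rcond=None)

# Under the uniform measure, the expectation of an orthogonal expansion
# is the coefficient of the constant basis function.
aep_mwh = coef[0] * 8760.0
print(f"estimated AEP ~ {aep_mwh:.0f} MWh")
```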
19

Verification and Validation of a Transient Heat Exchanger Model

Carper, Jayme Lee 01 September 2015 (has links)
No description available.
20

Efficient and Adaptive Decentralized Sparse Gaussian Process Regression for Environmental Sampling Using Autonomous Vehicles

Norton, Tanner A. 27 June 2022 (has links)
In this thesis, I present a decentralized sparse Gaussian process regression (DSGPR) model with event-triggered, adaptive inducing points. This DSGPR model brings the advantages of sparse Gaussian process regression to a decentralized implementation. Being decentralized and sparse provides advantages that are ideal for multi-agent systems (MASs) performing environmental modeling. In this case, MASs need to model large amounts of information while having potentially intermittent communication connections. Additionally, the model needs to correctly perform uncertainty propagation between autonomous agents and ensure high accuracy of the predictions. For the model to meet these requirements, a bounded and efficient real-time sparse Gaussian process regression (SGPR) model is needed. I improve real-time SGPR models in these regards by introducing an adaptation of the mean-shift and fixed-width clustering algorithms called radial clustering. Radial clustering enables real-time SGPR models to have an adaptive number of inducing points through an efficient inducing-point selection process. I show how this clustering approach scales better than other seminal Gaussian process regression (GPR) and SGPR models for real-time purposes while attaining similar prediction accuracy and uncertainty-reduction performance. Furthermore, this thesis addresses common issues inherent in decentralized frameworks, such as high computation costs, inter-agent message bandwidth restrictions, and data fusion integrity. These challenges are addressed in part by performing maximum consensus between local agent models, which enables the MAS to gain the advantages of decentralization while keeping data fusion integrity. The inter-agent communication restrictions are addressed through the contribution of two message-passing heuristics, the covariance reduction heuristic and the Bhattacharyya distance heuristic, which enable users to reduce message-passing frequency and message size through the Bhattacharyya distance and properties of spatial kernels. The entire DSGPR framework is evaluated on multiple simulated random vector fields. The results show that the framework effectively estimates vector fields using multiple autonomous agents. The vector field is assumed to be a wind field; however, the framework may be applied to the estimation of other scalar or vector fields (e.g., fluids, magnetic fields, electricity, etc.).
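As a taste of the underlying machinery, here is a minimal sketch of a sparse GP predictive mean with a fixed set of inducing points (a projected-process/DTC-style approximation); the thesis selects inducing points adaptively via radial clustering, whereas here they are simply a coarse grid, and the data are an invented 1-D field.

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf(A, B, ell=0.5, sf=1.0):
    """Squared-exponential kernel between 1-D point sets."""
    d2 = (A[:, None] - B[None, :])**2
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

# Training data from a toy 1-D "field".
X = rng.uniform(0.0, 5.0, 300)
y = np.sin(2 * X) + 0.1 * rng.standard_normal(300)

Z = np.linspace(0.0, 5.0, 15)       # inducing points (coarse grid here)
sn2 = 0.1**2                        # noise variance

Kzz = rbf(Z, Z) + 1e-8 * np.eye(len(Z))
Kzx = rbf(Z, X)

# Posterior mean over the inducing outputs (projected-process form).
S = Kzz + Kzx @ Kzx.T / sn2
mu_u = Kzz @ np.linalg.solve(S, Kzx @ y / sn2)

# Predictive mean at test locations via the inducing set only.
Xs = np.linspace(0.0, 5.0, 7)
mean = rbf(Xs, Z) @ np.linalg.solve(Kzz, mu_u)
print(mean)
```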
