131 |
Efficient Global Optimization of Multidisciplinary System using Variable Fidelity Analysis and Dynamic Sampling Method. Park, Jangho, 22 July 2019.
Work in this dissertation is motivated by reducing design cost at the early design stage while maintaining high design accuracy throughout all design stages. It presents four key design methods to improve the performance of Efficient Global Optimization (EGO) for multidisciplinary problems. First, a fidelity-calibration method is developed and applied to lower-fidelity samples: function values obtained from lower-fidelity analysis methods are updated to have accuracy equivalent to that of the highest-fidelity samples, and the calibrated data sets are used to construct a variable-fidelity Kriging model. For the design of experiments (DOE), a dynamic sampling method is developed that filters and infills data based on mathematical criteria for model accuracy. In the sample-infilling process, a multi-objective optimization balancing exploitation and exploration of the design space is carried out. To select the fidelity of the function analysis for additional samples in the variable-fidelity Kriging model, a dynamic fidelity indicator based on the overlapping coefficient is proposed. For multidisciplinary design problems, where multiple physics are tightly coupled with different coupling strengths, a multi-response Kriging model is introduced that utilizes iterative Maximum Likelihood Estimation (iMLE). Through the iMLE process, the large number of hyper-parameters in multi-response Kriging can be calculated with high accuracy and improved numerical stability. The optimization methods developed in this study are validated on analytic functions and show considerable performance improvement. Subsequently, three practical design optimization problems are solved: the NACA0012 airfoil, the multi-element NLR 7301 airfoil, and the all-moving-wingtip control surface of a tailless aircraft. The results are compared with those of existing methods, and it is concluded that these methods deliver equivalent design accuracy at significantly reduced computational cost. / Doctor of Philosophy / In recent years, as the cost of aircraft design has grown rapidly and the aviation industry has become increasingly interested in saving time and cost, an accurate design result during the early design stages has become particularly important for reducing overall life-cycle cost. The purpose of this work is to reduce design cost at the early design stage while achieving design accuracy as high as that of the detailed design stage. A method of Efficient Global Optimization (EGO) with variable-fidelity analysis and multidisciplinary design is proposed. Using variable-fidelity analysis for function evaluation, high-fidelity function evaluations can be replaced by low-fidelity analyses of equivalent accuracy, which leads to considerable cost reduction. As an aircraft system has sub-disciplines coupled by multiple physics, including aerodynamics, structures, and thermodynamics, the accuracy of an individual discipline affects that of all the others, and thus the design accuracy during the early design stages. Four distinct design methods are developed and implemented into the standard EGO framework: 1) variable-fidelity analysis based on error approximation and calibration of low-fidelity samples, 2) dynamic sampling criteria for both filtering and infilling samples, 3) a dynamic fidelity indicator (DFI) for selecting the analysis fidelity of infilled samples, and 4) a multi-response Kriging model with iterative Maximum Likelihood Estimation (iMLE).
The methods are validated on analytic functions, and comparison with existing design methods shows improved cost efficiency through the overall design process while maintaining design accuracy. For practical applications, the methods are applied to the design optimization of airfoils and of a complete aircraft configuration. The design results are compared with those of existing methods, and it is found that the methods yield design results of accuracy equivalent to or higher than designs based on high-fidelity analysis alone, at a cost reduced by orders of magnitude.
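To ground the framework being extended, the sketch below implements a plain single-fidelity EGO loop in Python: a Kriging (Gaussian process) surrogate is refitted after each sample and the next design point maximizes expected improvement, here on the standard Forrester test function. The kernel, hyperparameters, and iteration budget are illustrative assumptions; the dissertation's variable-fidelity, dynamic-sampling, and multi-response machinery is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def kriging_fit(X, y, length=0.15, noise=1e-6):
    """Simple Kriging/GP surrogate; prior variance fixed to 1 for brevity
    (a real Kriging model would also estimate an amplitude)."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(X, X) + noise * np.eye(X.size)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(K, y)
    def predict(Xs):
        Ks = k(X, Xs)
        var = np.clip(1.0 - np.sum(np.linalg.solve(L, Ks)**2, axis=0), 1e-12, None)
        return Ks.T @ alpha, np.sqrt(var)
    return predict

def expected_improvement(mu, sigma, y_best):
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

f = lambda x: (6 * x - 2)**2 * np.sin(12 * x - 4)      # Forrester test function
X = np.array([0.0, 0.5, 1.0]); y = f(X)                # initial design of experiments
grid = np.linspace(0, 1, 501)
for _ in range(10):                                    # EGO infill iterations
    mu, sd = kriging_fit(X, y)(grid)
    x_new = grid[np.argmax(expected_improvement(mu, sd, y.min()))]
    X, y = np.append(X, x_new), np.append(y, f(x_new))
print("best design found:", X[np.argmin(y)], y.min())
```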
|
132 |
Prediction of homing pigeon flight paths using Gaussian processes. Mann, Richard Philip, January 2010.
Studies of avian navigation are making increasing use of miniature Global Positioning System (GPS) devices to regularly record the position of birds in flight with high spatial and temporal resolution. I suggest a novel approach to analysing the data sets produced in these experiments, focussing on studies of the domesticated homing pigeon (Columba livia) in the local, familiar area. Using Gaussian processes and Bayesian inference as a mathematical foundation I develop and apply a statistical model to make quantitative predictions of homing pigeon flight paths. Using this model I show that pigeons, when released repeatedly from the same site, learn and follow a habitual route back to their home loft. The model reveals the rate of route learning and provides a quantitative estimate of the habitual route complete with associated spatio-temporal covariance. Furthermore I show that this habitual route is best described by a sequence of isolated waypoints rather than as a continuous path, and that these waypoints are preferentially found in certain terrain types, being especially rare within urban and forested environments. As a corollary I demonstrate an extension of the flight path model to simulate experiments where pigeons are released in pairs, and show that this can account for observed large-scale patterns in such experiments based only on the individual birds' previous behaviour in solo flights, making a successful quantitative prediction of the critical value associated with a non-linear behavioural transition.
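As a minimal sketch of the underlying regression idea (not the thesis' calibrated model), the snippet below fits independent Gaussian processes to x- and y-position against normalised time along the route; the posterior mean plays the role of the habitual-route estimate and the posterior covariance of its uncertainty. The kernel, noise level, and synthetic path are assumptions for illustration.

```python
import numpy as np

def gp_posterior(t_obs, y_obs, t_new, length=0.1, noise=0.05):
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(t_obs, t_obs) + noise**2 * np.eye(t_obs.size)
    Ks = k(t_obs, t_new)
    mu = Ks.T @ np.linalg.solve(K, y_obs)
    cov = k(t_new, t_new) - Ks.T @ np.linalg.solve(K, Ks)
    return mu, cov

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 1, 80))                      # time along route, flights pooled
truth = np.c_[np.cos(np.pi * t), np.sin(np.pi * t)]     # hypothetical "true" route
obs = truth + 0.05 * rng.standard_normal(truth.shape)   # noisy GPS-like fixes
t_grid = np.linspace(0, 1, 200)
route_x, _ = gp_posterior(t, obs[:, 0], t_grid)
route_y, cov_y = gp_posterior(t, obs[:, 1], t_grid)
print("habitual-route estimate at mid-flight:", route_x[100], route_y[100])
print("pointwise std of y at mid-flight:",
      np.sqrt(np.clip(np.diag(cov_y), 0, None))[100])
```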
|
133 |
Prédiction de l'attrition en date de renouvellement en assurance automobile avec processus gaussiens / Prediction of attrition at renewal date in automobile insurance with Gaussian processes. Pannetier Lebeuf, Sylvain, 08 1900.
Le domaine de l'assurance automobile fonctionne par cycles présentant des phases de profitabilité et d'autres de non-profitabilité. Dans les phases de non-profitabilité, les compagnies d'assurance ont généralement le réflexe d'augmenter le coût des primes afin de tenter de réduire les pertes. Par contre, de très grandes augmentations peuvent avoir pour effet de massivement faire fuir la clientèle vers les compétiteurs. Un trop haut taux d'attrition pourrait avoir un effet négatif sur la profitabilité à long terme de la compagnie. Une bonne gestion des augmentations de taux se révèle donc primordiale pour une compagnie d'assurance. Ce mémoire a pour but de construire un outil de simulation de l'allure du portefeuille d'assurance détenu par un assureur en fonction du changement de taux proposé à chacun des assurés. Une procédure utilisant des régressions à l'aide de processus gaussiens univariés est développée. Cette procédure offre une performance supérieure à la régression logistique, le modèle généralement utilisé pour effectuer ce genre de tâche. / The auto insurance industry operates in cycles, with phases of profitability and phases of non-profitability. In the non-profitable phases, insurance companies generally react by increasing premiums in an attempt to reduce losses. However, very large increases can drive customers away to competitors en masse, and too high an attrition rate could harm the long-term profitability of the company. Proper management of rate increases is therefore crucial for an insurance company. This thesis aims to build a tool that simulates the evolution of the insurance portfolio held by an insurer as a function of the rate change proposed to each policyholder. A procedure using univariate Gaussian process regression is developed; it outperforms logistic regression, the model typically used for this kind of task.
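The sketch below illustrates, on invented data, why a univariate GP regression can outperform logistic regression for this task: simulated retention depends non-monotonically on the proposed rate change, which a linear-in-x logistic model cannot represent but a GP regression (here, on binned empirical renewal frequencies) can. The data-generating process, kernel width, and binning are illustrative assumptions, not the procedure developed in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.uniform(-0.1, 0.4, 2000)                       # proposed relative rate change
p_true = 1 / (1 + np.exp(-(2 - 25 * x + 40 * x**2)))   # non-monotone retention prob.
y = (rng.uniform(size=x.size) < p_true).astype(float)  # 1 = the insured renews

# Logistic regression, linear in x, fitted by direct likelihood maximisation.
nll = lambda w: np.sum(np.logaddexp(0, w[0] + w[1] * x) - y * (w[0] + w[1] * x))
w = minimize(nll, np.zeros(2)).x

# Univariate GP regression on binned empirical renewal frequencies.
bins = np.linspace(-0.1, 0.4, 26)
mid = 0.5 * (bins[:-1] + bins[1:])
freq = np.array([y[(x >= a) & (x < b)].mean() for a, b in zip(bins[:-1], bins[1:])])
k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :])**2 / 0.05**2)
gp_pred = k(mid, mid) @ np.linalg.solve(k(mid, mid) + 0.01 * np.eye(mid.size), freq)

print("logistic slope (monotone by construction):", w[1])
print("rate change with lowest predicted retention:", mid[np.argmin(gp_pred)])
```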
|
134 |
Non-parametric workspace modelling for mobile robots using push broom lasers. Smith, Michael, January 2011.
This thesis is about the intelligent compression of large 3D point cloud datasets. The non-parametric method that we describe simultaneously generates a continuous representation of the workspace surfaces from discrete laser samples and decimates the dataset, retaining only locally salient samples. Our framework attains decimation factors in excess of two orders of magnitude without significant degradation in fidelity. The work presented here has a specific focus on gathering and processing laser measurements taken from a moving platform in outdoor workspaces. We introduce a somewhat unusual parameterisation of the problem and look to Gaussian Processes as the fundamental machinery in our processing pipeline. Our system compresses laser data in a fashion that is naturally sympathetic to the underlying structure and complexity of the workspace: in geometrically complex areas, compression is lower than in geometrically bland areas. We focus on this property in detail and it leads us well beyond a simple application of non-parametric techniques. Indeed, towards the end of the thesis we develop a non-stationary GP framework whereby our regression model adapts to the local workspace complexity. Throughout we construct our algorithms so that they may be efficiently implemented. In addition, we present a detailed analysis of the proposed system and investigate model parameters, metric errors and data compression rates. Finally, we note that this work is predicated on a substantial amount of robotics engineering which has allowed us to produce a high-quality, peer-reviewed dataset, the first of its kind.
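A minimal sketch of the decimation idea on a 1D scan line, under assumed kernel and tolerance settings: starting from the endpoints, the sample worst predicted by the current GP is added until every residual falls below a tolerance, so geometrically complex regions naturally retain more samples than bland ones. The thesis' pipeline, parameterisation, and non-stationary kernels are substantially richer than this.

```python
import numpy as np

def gp_mean(Xtr, ytr, Xte, length=0.5, noise=1e-4):
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    return k(Xtr, Xte).T @ np.linalg.solve(k(Xtr, Xtr) + noise * np.eye(Xtr.size), ytr)

x = np.linspace(0, 10, 1000)                           # dense samples along one scan line
y = np.where(x < 5, 0.1 * x, np.sin(2 * x))            # bland half, then complex half
keep = [0, x.size - 1]                                 # always retain the endpoints
while True:
    resid = np.abs(y - gp_mean(x[keep], y[keep], x))   # reconstruction error everywhere
    worst = int(np.argmax(resid))
    if resid[worst] < 0.05:                            # fidelity tolerance met
        break
    keep.append(worst)                                 # retain the most salient sample
kept = np.sort(np.array(keep))
print(f"kept {kept.size} of {x.size} samples ({x.size / kept.size:.0f}x compression)")
print("samples kept in the complex half:", int(np.sum(x[kept] >= 5)))
```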
|
135 |
Mapping and understanding the distributions of potential vector mosquitoes in the UK: new methods and applications. Golding, Nicholas, January 2013.
A number of emerging vector-borne diseases have the potential to be transmitted in the UK by native mosquitoes. Human infection by some of these diseases requires the presence of communities of multiple vector mosquito species. Mitigating the risk posed by these diseases requires an understanding of the spatial distributions of the UK mosquito fauna. Little empirical data is available from which to determine the distributions of mosquito species in the UK, so identifying areas at risk from mosquito-borne disease requires statistical modelling to investigate and predict mosquito distributions. This thesis investigates the distributions of potential vector mosquitoes in the UK at landscape to national scales. A number of new methodological approaches for species distribution modelling are developed. These methods are then used to map and understand the distributions of mosquito communities with the potential to transmit diseases to humans. Chapter 2 reports the establishment of substantial populations of the West Nile virus (WNV) vector mosquito Culex modestus in wetlands in southern England. This represents a drastic shift in the species' known range and an increase in the risk of WNV transmission where Cx. modestus is present. Chapter 3 develops and applies a new species interaction distribution model which identifies fish and ditch shrimp of the genus Palaemonetes as predators that may restrict the distribution of the potential WNV vector community in these wetlands. Chapter 4 develops a number of methods to make robust predictions of the probability of presence of a species from presence-only data, by eliciting and applying estimates of the species' prevalence. Chapter 5 introduces a new Bayesian species distribution modelling approach which outperforms existing methods and has a number of useful features for dealing with poor-quality data. Chapter 6 applies the methods developed in the previous two chapters to produce the first high-resolution distribution maps of potential vector mosquitoes in the UK. These maps identify several wetland areas where vector communities exist that could maintain WNV transmission in birds and transmit it to humans. This thesis makes significant contributions to our understanding of the distributions of UK mosquito species. It also provides methods for species distribution modelling which could be widely applied in ecology and epidemiology.
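As one simple way to realise the prevalence idea attributed to Chapter 4 above, the sketch below rescales relative suitability scores from a presence-only model so that the mean predicted probability of presence matches an elicited prevalence. The logistic link, bisection search, and synthetic scores are illustrative assumptions rather than the chapter's actual method.

```python
import numpy as np

def calibrate(suitability, prevalence, lo=-20.0, hi=20.0):
    """Find an intercept a so that mean(sigmoid(a + suitability)) == prevalence.
    The mean is monotone in a, so bisection converges."""
    for _ in range(60):
        a = 0.5 * (lo + hi)
        if np.mean(1 / (1 + np.exp(-(a + suitability)))) < prevalence:
            lo = a
        else:
            hi = a
    return a

rng = np.random.default_rng(3)
scores = rng.normal(0, 2, 10000)                       # hypothetical model output
a = calibrate(scores, prevalence=0.05)                 # elicited: 5% of sites occupied
prob = 1 / (1 + np.exp(-(a + scores)))                 # calibrated presence probability
print("mean predicted presence:", prob.mean())         # ~0.05 by construction
```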
|
136 |
Stochastické integrály řízené isonormálními gaussovskými procesy a aplikace / Stochastic Integrals Driven by Isonormal Gaussian Processes and Applications. Čoupek, Petr, January 2013.
In this thesis, we introduce a stochastic integral of deterministic Hilbert space valued functions driven by a Gaussian process of the Volterra form $\beta_t = \int_0^t K(t,s)\,\mathrm{d}W_s$, where $W$ is a Brownian motion and $K$ is a square integrable kernel. Such processes generalize the fractional Brownian motion $B^H$ of Hurst parameter $H \in (0,1)$. Two sets of conditions on the kernel $K$ are introduced, the singular case and the regular case, and, in particular, the regular case is studied. The main result is that the space $\mathcal{H}$ of $\beta$-integrable functions can be, in the strictly regular case, embedded in $L^{\frac{2}{1+2\alpha}}([0,T]; V)$, which corresponds to the space $L^{1/H}([0,T])$ for the fractional Brownian motion. Further, the cylindrical Gaussian Volterra process is introduced and a stochastic integral of deterministic operator-valued functions, driven by this process, is defined. These results are used in the theory of stochastic differential equations (SDE); in particular, measurability of a mild solution of a given SDE is proven.
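To make the Volterra form concrete, the sketch below discretises $\beta_t = \int_0^t K(t,s)\,\mathrm{d}W_s$ as a kernel-weighted sum of Brownian increments, using the Riemann-Liouville kernel $K(t,s) = (t-s)^{H-1/2}$, a standard square-integrable Volterra example related to, but not identical with, the fractional Brownian motion discussed in the thesis; the step count and index $H$ are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)
n, T, H = 2000, 1.0, 0.3                     # time steps, horizon, Hurst-type index
dt = T / n
t = np.linspace(dt, T, n)
dW = rng.standard_normal(n) * np.sqrt(dt)    # Brownian increments

# beta[i] ~ sum_{j <= i} K(t_i, s_j) dW_j; the clip keeps the singular
# diagonal finite, and tril zeroes the non-causal (s > t) entries.
diff = np.clip(t[:, None] - t[None, :] + dt, dt, None)
K = np.tril(diff ** (H - 0.5))
beta = K @ dW
print("one sample path simulated; beta_T =", beta[-1])
```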
|
137 |
Méthodes avancées d'optimisation par méta-modèles – Application à la performance des voiliers de compétition / Advanced surrogate-based optimization methods - Application to racing yacht performance. Sacher, Matthieu, 10 September 2018.
L'optimisation de la performance des voiliers est un problème difficile en raison de la complexité du système mécanique (couplage aéro-élastique et hydrodynamique) et du nombre important de paramètres à optimiser (voiles, gréement, etc.). Malgré le fait que l'optimisation des voiliers est empirique dans la plupart des cas aujourd'hui, les approches numériques peuvent maintenant devenir envisageables grâce aux dernières améliorations des modèles physiques et des puissances de calcul. Les calculs aéro-hydrodynamiques restent cependant très coûteux car chaque évaluation demande généralement la résolution d'un problème non linéaire d'interaction fluide-structure. Ainsi, l'objectif central de cette thèse est de proposer et développer des méthodes originales dans le but de minimiser le coût numérique de l'optimisation de la performance des voiliers. L'optimisation globale par méta-modèles gaussiens est utilisée pour résoudre différents problèmes d'optimisation. La méthode d'optimisation par méta-modèles est étendue aux cas d'optimisations sous contraintes, incluant de possibles points non évaluables, par une approche de type classification. L'utilisation de méta-modèles à fidélités multiples est également adaptée à la méthode d'optimisation globale. Les applications concernent des problèmes d'optimisation originaux où la performance est modélisée expérimentalement et/ou numériquement. Ces différentes applications permettent de valider les développements des méthodes d'optimisation sur des cas concrets et complexes, incluant des phénomènes d'interaction fluide-structure. / Sailing yacht performance optimization is a difficult problem due to the high complexity of the mechanical system (aero-elastic and hydrodynamic coupling) and the large number of parameters to optimize (sails, rigs, etc.). Although sailboat optimization is still empirical in most cases today, numerical optimization is now becoming feasible thanks to the latest advances in physical models and computing power. These numerical optimizations nevertheless remain very expensive, as each evaluation usually requires solving a non-linear fluid-structure interaction problem. Thus, the central objective of this thesis is to propose and develop original methods aimed at minimizing the numerical cost of sailing yacht performance optimization. Efficient Global Optimization (EGO) is therefore applied to solve various optimization problems. The original EGO method is extended to optimization under constraints, including possibly non-computable points, using a classification-based approach. The use of multi-fidelity surrogates is also adapted to the EGO method. The applications treated in this thesis concern original optimization problems in which the performance is modelled experimentally and/or numerically. These various applications allow for the validation of the developed optimization methods on real and complex problems, including fluid-structure interaction phenomena.
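The constraint-handling idea above can be sketched as follows: an EGO acquisition that multiplies expected improvement by an estimated probability that the solver returns a value at all, the latter kernel-smoothed from past success/failure labels. The toy solver, kernel widths, and one-dimensional setting are illustrative assumptions, not the thesis' actual classification formulation.

```python
import numpy as np
from scipy.stats import norm

def gp_fit(X, y, length, noise=1e-6):
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(X, X) + noise * np.eye(X.size)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(K, y)
    def predict(Xs):
        Ks = k(X, Xs)
        var = np.clip(1 - np.sum(np.linalg.solve(L, Ks)**2, axis=0), 1e-12, None)
        return Ks.T @ alpha, np.sqrt(var)
    return predict

def solver(x):                                         # hypothetical solver: fails for x <= 0.2
    return np.sin(5 * x) + x if x > 0.2 else None

X = np.array([0.1, 0.5, 0.9])
y, ok = [], []
for xi in X:
    out = solver(xi)
    ok.append(out is not None)
    y.append(out if out is not None else 0.0)

grid = np.linspace(0, 1, 401)
for _ in range(8):
    good = np.array(ok)
    mu, sd = gp_fit(X[good], np.array(y)[good], length=0.2)(grid)
    z = (np.array(y)[good].min() - mu) / sd
    ei = sd * (z * norm.cdf(z) + norm.pdf(z))          # expected improvement
    w = np.exp(-0.5 * (grid[:, None] - X[None, :])**2 / 0.15**2)
    p_ok = (w @ good) / w.sum(axis=1)                  # smoothed P(point is computable)
    x_new = grid[np.argmax(ei * p_ok)]                 # EI damped by computability
    out = solver(x_new)
    X = np.append(X, x_new)
    ok.append(out is not None)
    y.append(out if out is not None else 0.0)

good = np.array(ok)
print("best feasible point:", X[good][np.argmin(np.array(y)[good])])
```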
|
138 |
Sur l'évaluation statistique des risques pour les processus spatiaux / On statistical risk assessment for spatial processes. Ahmed, Manaf, 29 June 2017.
La modélisation probabiliste des événements climatiques et environnementaux doit prendre en compte leur nature spatiale. Cette thèse porte sur l'étude de mesures de risque pour des processus spatiaux. Dans une première partie, nous introduisons des mesures de risque à même de prendre en compte la structure de dépendance des processus spatiaux sous-jacents pour traiter de données environnementales. Une deuxième partie est consacrée à l'estimation des paramètres de processus de type max-mélange. La première partie de la thèse est dédiée aux mesures de risque. Nous étendons les travaux réalisés dans [44] d'une part à des processus gaussiens, d'autre part à d'autres processus max-stables et à des processus max-mélange ; d'autres structures de dépendance sont ainsi considérées. Les mesures de risque considérées sont basées sur la moyenne $L(A,D)$ de pertes ou de dommages $D$ sur une région d'intérêt $A$. Nous considérons alors l'espérance et la variance de ces dommages normalisés. Dans un premier temps, nous nous intéressons aux propriétés axiomatiques des mesures de risque, à leur calcul et à leur comportement asymptotique (lorsque la taille de la région $A$ tend vers l'infini). Nous calculons les mesures de risque dans différents cas. Pour un processus gaussien $X$, on considère la fonction d'excès : $D^+_{X,u} = (X-u)_+$ où $u$ est un seuil fixé. Pour des processus max-stables et max-mélange $X$, on considère la fonction puissance : $D^\nu_X = X^\nu$. Dans certains cas, des formules semi-explicites pour les mesures de risque correspondantes sont données. Une étude sur simulations permet de tester le comportement des mesures de risque par rapport aux nombreux paramètres en jeu et aux différentes formes de noyau de corrélation. Nous évaluons aussi la performance calculatoire des différentes méthodes proposées ; celle-ci est satisfaisante. Enfin, nous avons utilisé une étude précédente sur des données de pollution dans le Piémont italien ; celles-ci peuvent être considérées comme gaussiennes. Nous étudions la mesure de risque associée au seuil légal de pollution donné par la directive européenne 2008/50/EC. Dans une deuxième partie, nous proposons une procédure d'estimation des paramètres d'un processus max-mélange, alternative à la méthode d'estimation par maximum de vraisemblance composite. Cette méthode plus classique d'estimation par maximum de vraisemblance composite est surtout performante pour estimer les paramètres de la partie max-stable du mélange (et moins performante pour estimer les paramètres de la partie asymptotiquement indépendante). Nous proposons une méthode de moindres carrés basée sur le F-madogramme : minimisation de l'écart quadratique entre le F-madogramme théorique et le F-madogramme empirique. Cette méthode est évaluée par simulation et comparée à la méthode par maximum de vraisemblance composite. Les simulations indiquent que la méthode par moindres carrés du F-madogramme est plus performante pour estimer les paramètres de la partie asymptotiquement indépendante. / When dealing with environmental or climatic changes, a natural spatial dependence aspect appears. This thesis is dedicated to the study of risk measures in this spatial context. In the first part (Chapters 3 and 4), we study risk measures which include the natural spatial dependence structure in order to assess the risks due to extreme environmental events, and in the last part (Chapter 5) we propose estimation procedures for the underlying processes, such as isotropic and stationary max-mixture processes.
In the first part, dedicated to risk measures, we extend the work in [44] in order to obtain spatial risk measures for various spatial processes and different dependence structures. We base these risk measures on the mean loss over a region $A$ of interest; risk measures are then defined as the expectation $E[L(A,D)]$ and variance $\mathrm{Var}(L(A,D))$ of the normalized loss. In the study of these measures, we focus on their axiomatic properties, their computation, and their asymptotic behavior (as the size of the region of interest goes to infinity). We calculate two risk measures: a risk measure for Gaussian processes based on the excess damage function $D^+_{X,u}$, and a risk measure for extreme processes based on the power damage function $D^\nu_X$. A simulation study illustrates the theoretical asymptotic results for various model parameters and different kernels for the correlation function, and we also evaluate the computational performance of these risk measures; the results are encouraging. Finally, we apply the risk measure corresponding to the Gaussian case to real pollution data from Piemonte, Italy, assessing the risk associated with pollution exceeding the legal level set by the European directive 2008/50/EC. With respect to estimation, we propose a semi-parametric procedure to estimate the parameters of a max-mixture model, and also of a max-stable (or inverse max-stable) model, as an alternative to composite likelihood. Good estimation requires a dependence measure able to detect all the dependence structures in the model, especially for the max-mixture model; we address this challenge by using the F-madogram. The semi-parametric estimation is then based on a quasi least squares method, minimizing the squared difference between the theoretical F-madogram and an empirical one. We evaluate the performance of this estimator through a simulation study, which shows that, on average, the estimation performs well, although difficulties arise in some cases.
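A minimal Monte Carlo sketch of the normalised-loss quantities above: a stationary Gaussian field with an exponential correlation kernel is simulated on a grid over the region $A$, the excess damage $D^+_{X,u} = (X-u)_+$ is averaged over the grid, and $E[L(A,D)]$ and $\mathrm{Var}(L(A,D))$ are estimated across replicates. Grid size, kernel, and threshold are illustrative choices, not the thesis' semi-explicit formulas.

```python
import numpy as np

rng = np.random.default_rng(5)
n, u = 20, 1.0                                         # grid points per side, threshold
pts = np.stack(np.meshgrid(np.linspace(0, 1, n),
                           np.linspace(0, 1, n)), -1).reshape(-1, 2)
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
C = np.exp(-dist / 0.2)                                # exponential correlation kernel
Lc = np.linalg.cholesky(C + 1e-10 * np.eye(n * n))

losses = np.empty(2000)
for i in range(losses.size):                           # Monte Carlo replicates
    X = Lc @ rng.standard_normal(n * n)                # one Gaussian field over A
    losses[i] = np.mean(np.maximum(X - u, 0.0))        # normalised loss L(A, D)
print("E[L(A,D)] ~", losses.mean(), "  Var(L(A,D)) ~", losses.var())
```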
|
139 |
Sequential Design of Experiments to Estimate a Probability of Failure / Planification d'expériences séquentielle pour l'estimation de probabilités de défaillance. Li, Ling, 16 May 2012.
Cette thèse aborde le problème de l'estimation de la probabilité de défaillance d'un système à partir de simulations informatiques. Lorsqu'on dispose seulement d'un modèle du système coûteux à simuler, le budget de simulations est généralement très limité, ce qui est incompatible avec l'utilisation de méthodes Monte Carlo classiques. En fait, l'estimation d'une petite probabilité de défaillance à partir de simulations très coûteuses, comme on peut en rencontrer dans certains problèmes industriels complexes, est un sujet particulièrement difficile. Une approche classique consiste à remplacer le modèle coûteux à simuler par un modèle de substitution nécessitant de faibles ressources informatiques. À partir d'un tel modèle de substitution, deux opérations peuvent être réalisées. La première opération consiste à choisir des simulations, en nombre aussi petit que possible, pour apprendre les régions de l'espace des paramètres du système qui mènent à une défaillance. La seconde consiste à construire de bons estimateurs de la probabilité de défaillance. Cette thèse propose deux contributions. Premièrement, nous proposons des stratégies de type SUR (Stepwise Uncertainty Reduction) à partir d'une formulation bayésienne du problème d'estimation d'une probabilité de défaillance. Deuxièmement, nous proposons un nouvel algorithme, appelé Bayesian Subset Simulation, qui prend le meilleur de l'algorithme Subset Simulation et des approches séquentielles bayésiennes utilisant la modélisation du système par processus gaussiens. Ces nouveaux algorithmes sont illustrés par des résultats numériques concernant plusieurs exemples de référence dans la littérature de la fiabilité. Les méthodes proposées montrent de bonnes performances par rapport aux méthodes concurrentes. / This thesis deals with the problem of estimating the probability of failure of a system from computer simulations. When only an expensive-to-simulate model of the system is available, the budget for simulations is usually severely limited, which is incompatible with the use of classical Monte Carlo methods. In fact, estimating a small probability of failure with very few simulations, as required in some complex industrial problems, is a particularly difficult topic. A classical approach consists in replacing the expensive-to-simulate model with a surrogate model that uses little computer resources. Using such a surrogate model, two operations can be achieved. The first operation consists in choosing a number, as small as possible, of simulations to learn the regions in the parameter space of the system that lead to a failure of the system. The second operation is about constructing good estimators of the probability of failure. The contributions of this thesis consist of two parts. First, we derive SUR (stepwise uncertainty reduction) strategies from a Bayesian-theoretic formulation of the problem of estimating a probability of failure. Second, we propose a new algorithm, called Bayesian Subset Simulation, that takes the best from the Subset Simulation algorithm and from sequential Bayesian methods based on Gaussian process modeling. The new strategies are supported by numerical results from several benchmark examples in reliability analysis, and the proposed methods show good performance compared to methods from the literature.
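Of the two ingredients combined by Bayesian Subset Simulation, the classical Subset Simulation half is sketched below on a toy reliability problem with a known answer: the small failure probability $P(g(X) \le 0)$ is factored into larger conditional probabilities, each estimated with a short random-walk Metropolis chain. The limit state, level fraction, and chain settings are illustrative; the Gaussian-process/SUR ingredient of the thesis is deliberately omitted.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
g = lambda x: 4.5 - x.sum(axis=-1) / np.sqrt(2)        # failure when g(x) <= 0
N, p0, d = 3000, 0.1, 2                                # samples, level fraction, dim
x = rng.standard_normal((N, d))
p_f, level = 1.0, np.inf
while level > 0:
    y = g(x)
    level = max(np.quantile(y, p0), 0.0)               # next intermediate threshold
    p_f *= np.mean(y <= level)                         # conditional probability factor
    seeds = x[y <= level]
    x = seeds[rng.integers(len(seeds), size=N)]        # restart chains from seeds
    for _ in range(5):                                 # crude random-walk Metropolis
        prop = x + 0.8 * rng.standard_normal(x.shape)
        ratio = np.exp(0.5 * ((x**2).sum(-1) - (prop**2).sum(-1)))
        accept = (rng.uniform(size=N) < ratio) & (g(prop) <= level)
        x[accept] = prop[accept]
print("subset-simulation estimate:", p_f, "  exact:", norm.sf(4.5))
```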
|
140 |
Oculométrie Numérique Economique : modèle d'apparence et apprentissage par variétés / Eye tracking system: appearance-based model and manifold learning. Liang, Ke, 13 May 2015.
L'oculométrie est un ensemble de techniques dédié à enregistrer et analyser les mouvements oculaires. Dans cette thèse, je présente l'étude, la conception et la mise en œuvre d'un système oculométrique numérique, non intrusif, permettant d'analyser les mouvements oculaires en temps réel avec une webcam à distance et sans lumière infra-rouge. Dans le cadre de la réalisation, le système oculométrique proposé se compose de quatre modules : l'extraction des caractéristiques, la détection et le suivi des yeux, l'analyse de la variété des mouvements des yeux à partir des images et l'estimation du regard par l'apprentissage. Nos contributions reposent sur le développement des méthodes autour de ces quatre modules : la première réalise une méthode hybride pour détecter et suivre les yeux en temps réel à partir des techniques du filtre particulaire, du modèle à formes actives et des cartes des yeux (EyeMap) ; la seconde réalise l'extraction des caractéristiques à partir de l'image des yeux en utilisant les techniques des motifs binaires locaux ; la troisième méthode classifie les mouvements oculaires selon la variété générée par le Laplacian Eigenmaps et forme un ensemble de données d'apprentissage ; enfin, la quatrième méthode calcule la position du regard à partir de cet ensemble d'apprentissage. Nous proposons également deux méthodes d'estimation : une méthode de régression par processus gaussiens et apprentissage semi-supervisé, et une méthode de catégorisation par classification spectrale (spectral clustering). Il en résulte un système complet, générique et économique pour des applications diverses dans le domaine de l'oculométrie. / Gaze tracking offers a powerful tool for diverse fields of study, in particular eye movement analysis. In this thesis, we present a new appearance-based real-time gaze tracking system that uses only a remote webcam, without infra-red illumination. Our proposed gaze tracking model has four components: eye localization, eye feature extraction, eye manifold learning, and gaze estimation. Our research focuses on the development of methods for each component of the system. Firstly, we propose a hybrid method to localize the eye region in real time in the frames captured by the webcam: the eye is detected in the first frame where it appears, using an Active Shape Model and EyeMap, and then tracked with a stochastic method, the particle filter. Secondly, we extract eye features by applying Center-Symmetric Local Binary Patterns to the detected eye region, which is divided into blocks. Thirdly, we introduce a manifold learning technique, Laplacian Eigenmaps, to learn different eye movements from a set of collected eye images; this unsupervised learning helps to construct an automatic and correct calibration phase. Finally, for gaze estimation, we propose two models: a semi-supervised Gaussian process regression model to estimate the coordinates of the eye direction, and a spectral clustering model to classify different eye movements. Our system, with a 5-point calibration, not only reduces run-time cost but also estimates the gaze accurately. Our experimental results show that our gaze tracking model has fewer hardware constraints and can be applied efficiently in different real-time applications.
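As an illustration of the feature-extraction stage, the sketch below computes Center-Symmetric LBP codes (each interior pixel compares the four centre-symmetric pairs of its 8-neighbourhood, giving 16 possible codes) and concatenates per-block histograms into a feature vector. The threshold, block grid, and random stand-in image are illustrative choices, not the thesis' exact settings.

```python
import numpy as np

def cs_lbp(img, thresh=0.01):
    """4-bit Center-Symmetric LBP codes for the interior pixels of a 2D image."""
    h, w = img.shape
    # centre-symmetric neighbour pairs in a 3x3 window: (top, bottom), two
    # diagonals, and (right, left), each contributing one bit of the code
    pairs = [((0, 1), (2, 1)), ((0, 2), (2, 0)), ((1, 2), (1, 0)), ((2, 2), (0, 0))]
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, ((r1, c1), (r2, c2)) in enumerate(pairs):
        a = img[r1:r1 + h - 2, c1:c1 + w - 2]          # one neighbour of each pixel
        b = img[r2:r2 + h - 2, c2:c2 + w - 2]          # its centre-symmetric partner
        code |= (a - b > thresh).astype(np.uint8) << bit
    return code

def block_histograms(code, blocks=4):
    """Concatenate normalised 16-bin histograms over a blocks x blocks grid."""
    h, w = code.shape
    feats = []
    for i in range(blocks):
        for j in range(blocks):
            blk = code[i * h // blocks:(i + 1) * h // blocks,
                       j * w // blocks:(j + 1) * w // blocks]
            feats.append(np.bincount(blk.ravel(), minlength=16) / blk.size)
    return np.concatenate(feats)

eye = np.random.default_rng(7).uniform(size=(32, 64))  # stand-in for an eye image
print("CS-LBP feature length:", block_histograms(cs_lbp(eye)).size)  # 4*4*16 = 256
```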
|