111

Efficient Global Optimization of Multidisciplinary System using Variable Fidelity Analysis and Dynamic Sampling Method

Park, Jangho 22 July 2019 (has links)
Work in this dissertation is motivated by reducing the design cost at the early design stage while maintaining high design accuracy throughout all design stages. It presents four key design methods to improve the performance of Efficient Global Optimization for multidisciplinary problems. First, a fidelity-calibration method is developed and applied to lower-fidelity samples. Function values obtained from lower-fidelity analysis methods are updated to have accuracy equivalent to that of the highest-fidelity samples, and these calibrated data sets are used to construct a variable-fidelity Kriging model. For the design of experiments (DOE), a dynamic sampling method is developed that filters and infills data based on mathematical criteria for model accuracy. In the sample-infilling process, a multi-objective optimization balancing exploitation and exploration of the design space is carried out. To indicate the fidelity of the function analysis for additional samples in the variable-fidelity Kriging model, a dynamic fidelity indicator based on the overlapping coefficient is proposed. For multidisciplinary design problems, where multiple physics are tightly coupled with different coupling strengths, a multi-response Kriging model is introduced that utilizes an iterative Maximum Likelihood Estimation (iMLE) method. Through the iMLE process, the large number of hyper-parameters in multi-response Kriging can be calculated with high accuracy and improved numerical stability. The optimization methods developed in the study are validated with analytic functions and show considerable performance improvement. Finally, three practical design optimization problems are solved: the NACA0012 airfoil, the multi-element NLR 7301 airfoil, and the all-moving wingtip control surface of a tailless aircraft. The results are compared with those of existing methods, and it is concluded that these methods deliver equivalent design accuracy at a significantly reduced computational cost. / Doctor of Philosophy / In recent years, as the cost of aircraft design has grown rapidly and the aviation industry has sought to save design time and cost, accurate results during the early design stages have become particularly important for reducing the overall life-cycle cost. The purpose of this work is to reduce the cost of the early design stage while achieving design accuracy as high as that of the detailed design. A method of efficient global optimization (EGO) with variable-fidelity analysis and multidisciplinary design is proposed. Using variable-fidelity analysis for the function evaluation, high-fidelity function evaluations can be replaced by low-fidelity analyses of equivalent accuracy, which leads to considerable cost reduction. As the aircraft system has sub-disciplines coupled by multiple physics, including aerodynamics, structures, and thermodynamics, the accuracy of an individual discipline affects that of all the others, and thus the design accuracy during the early design stages. Four distinct design methods are developed and implemented into the standard Efficient Global Optimization (EGO) framework: 1) variable-fidelity analysis based on error approximation and calibration of low-fidelity samples, 2) dynamic sampling criteria for both filtering and infilling samples, 3) a dynamic fidelity indicator (DFI) for selecting the analysis fidelity of infilled samples, and 4) a multi-response Kriging model with iterative Maximum Likelihood Estimation (iMLE).
The methods are validated with analytic functions, and a comparison with existing design methods shows improved cost efficiency over the overall design process while the design accuracy is maintained. For practical applications, the methods are applied to the design optimization of an airfoil and of a complete aircraft configuration. The design results are compared with those of existing methods, and it is found that the methods produce designs of accuracy equivalent to or higher than designs based on high-fidelity analysis alone, at a cost reduced by orders of magnitude.
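As a point of reference for the EGO framework this abstract builds on, the following is a minimal, hedged sketch of a standard single-fidelity EGO loop with an expected-improvement infill criterion and a Gaussian-process (Kriging) surrogate. It does not reproduce the dissertation's variable-fidelity calibration, dynamic sampling, fidelity indicator, or multi-response iMLE extensions; the function names, test function, and settings are illustrative only.

```python
# Illustrative single-fidelity EGO loop with expected-improvement infilling;
# a sketch of the standard baseline, not the dissertation's extended method.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(x, gp, f_best):
    """Expected improvement of candidate points x under the GP/Kriging model."""
    mu, sigma = gp.predict(x, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def ego(objective, bounds, n_init=5, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(n_init, dim))        # initial DOE
    y = np.array([objective(x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = rng.uniform(lo, hi, size=(2000, dim))   # crude EI maximization by sampling
        x_new = cand[np.argmax(expected_improvement(cand, gp, y.min()))]
        X = np.vstack([X, x_new])
        y = np.append(y, objective(x_new))
    return X[np.argmin(y)], y.min()

# Example on a 1-D analytic test function
x_best, f_best = ego(lambda x: np.sin(3 * x[0]) + 0.5 * x[0] ** 2, [(-3.0, 3.0)])
print(x_best, f_best)
```

In the dissertation's setting, the infill step would additionally choose the analysis fidelity of each new sample and calibrate low-fidelity values before updating the surrogate.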
112

Some Population Set-Based Methods for Unconstrained Global Optimization

Kaelo, Professor 16 November 2006 (has links)
Student Number : 0214677F - PhD thesis - School of Computational and Applied Mathematics - Faculty of Science / Many real-life problems are formulated as global optimization problems with continuous variables. These problems are in most cases nonsmooth, nonconvex and often simulation-based, making gradient-based methods unusable for solving them. Therefore, efficient, reliable and derivative-free global optimization methods for solving such problems are needed. In this thesis, we focus on improving the efficiency and reliability of some global optimization methods. In particular, we concentrate on improving some population set-based methods for unconstrained global optimization, mainly through hybridization. Hybridization has widely been recognized as one of the most attractive areas of unconstrained global optimization. Experiments have shown that through hybridization, new methods can be formed that inherit the strengths of the original elements but not their weaknesses. We suggest a number of new hybridized population set-based methods based on differential evolution (de), controlled random search (crs2) and the real coded genetic algorithm (ga). We propose five new versions of de. In the first version, we introduce a localization, called random localization, in the mutation phase of de. In the second version, we propose a localization in the acceptance phase of de. In the third version, we form a de hybrid algorithm by probabilistically combining the point generation scheme of crs2 with that of de in the de algorithm. The fourth and fifth versions are also de hybrids. These versions hybridize the mutation of de with the point generation rule of the electromagnetism-like (em) algorithm. We also propose five new versions of crs2. The first version modifies the point generation scheme of crs2 by introducing a local mutation technique. In the second and third modifications, we probabilistically combine the point generation scheme of crs2 with the linear interpolation scheme of a trust-region based method. The fourth version is a crs hybrid that probabilistically combines the quadratic interpolation scheme with the linear interpolation scheme in crs2. In the fifth version, we form a crs2 hybrid algorithm by probabilistically combining the point generation scheme of crs2 with that of de in the crs2 algorithm. Finally, we propose five new versions of the real coded genetic algorithm (ga) with arithmetic crossover. In the first version of ga, we introduce a local technique. We propose, in the second version, an integrated crossover rule that generates two children at a time using two different crossover rules. We introduce a local technique in the second version to obtain the third version. The fourth and fifth versions are based on the probabilistic adaptation of crossover rules. The efficiency and reliability of the new methods are evaluated through numerical experiments using a large test suite of both simple and difficult problems from the literature. Results indicate that the new hybrids are much better than their original counterparts both in reliability and efficiency. Therefore, the new hybrids proposed in this study offer an alternative to many currently available stochastic algorithms for solving global optimization problems in which gradient information is not readily available.
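For readers unfamiliar with the baseline being hybridized, the sketch below shows a plain DE/rand/1/bin differential evolution loop. The thesis's random localization, crs2 and electromagnetism-like hybrids are not reproduced; the parameter values (population size, F, CR) are illustrative assumptions.

```python
# Minimal DE/rand/1/bin differential evolution sketch (the un-hybridized baseline).
import numpy as np

def de(objective, bounds, pop_size=30, F=0.7, CR=0.9, n_gen=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([objective(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)      # DE/rand/1 mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                # at least one coordinate inherited
            trial = np.where(cross, mutant, pop[i])        # binomial crossover
            f_trial = objective(trial)
            if f_trial <= fit[i]:                          # greedy acceptance
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]

# Example: minimize the 2-D Rastrigin-like test function
obj = lambda x: 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))
print(de(obj, [(-5.12, 5.12)] * 2))
```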
113

Optimisation Globale basée sur l'Analyse d'Intervalles : relaxation Affine et Limitation de la Mémoire / Global Optimization based on Interval Analysis : affine Relaxation and Limited Memory

Ninin, Jordan 08 December 2010 (has links)
For some twenty years, interval Branch and Bound algorithms have been increasingly used to solve constrained global optimization problems, in particular non-convex ones, in a deterministic way. Such algorithms are reliable, i.e., they provide an optimal solution and its value with guaranteed bounds on the error, or a proof that the problem under study is infeasible, with a level of certainty that can reach machine precision. Other approaches to global optimization, while useful and often less time-consuming than interval methods, do not provide such a guarantee. However, the exponential time and memory complexity of interval Branch and Bound algorithms implies an intrinsic limitation, so it is always necessary to improve these methods. In this thesis, we have developed new arithmetics based on interval arithmetic and affine arithmetic to compute better lower and upper bounds of a factorable function over an interval. An automatic method for constructing linear relaxations of constrained global optimization problems is then proposed.
Such a construction is based on affine and interval arithmetics and uses operator overloading. The resulting linear programs have exactly the same numbers of variables and inequality constraints as the original problems; each equality constraint is replaced by two inequalities. This new procedure computes reliable lower bounds and certificates of infeasibility for each sub-domain at every iteration and is inserted into a classical interval Branch and Bound algorithm. Extensive computational experiments on a sample of test problems from the COCONUT database confirm its effectiveness. Another aspect of this thesis is the study of an extension of this kind of algorithm when the available memory is limited. The main idea of this approach is to propose a process inverse to the usual metaheuristic one: rather than improving local solutions by using metaheuristics such as Tabu search or VNS, we start from an exact method and modify it into a heuristic one. In this way, the quality of the solution found can be evaluated, and a study of the complexity of this metaheuristic principle has also been carried out. Finally, we applied our algorithm to solve open problems in plane geometry and a design problem for an electric motor. The results confirm the usefulness of this kind of algorithm, solving open problems on convex polygons and proposing innovative structures in electrical engineering.
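To make the branch-and-bound principle concrete, here is a toy, hedged sketch of certified minimization of a single factorable one-dimensional function using only a natural interval extension. Real interval solvers such as the one described above add affine-arithmetic relaxations, constraint handling, and contraction operators, none of which appears here; the function f and the tolerance are illustrative.

```python
# Toy interval branch-and-bound: certified enclosure of the minimum of a
# 1-D factorable function via a natural interval extension (illustration only).
import heapq

def interval_pow(lo, hi, n):
    """Enclosure of x**n over [lo, hi] for integer n >= 1."""
    cands = [lo ** n, hi ** n]
    if n % 2 == 0 and lo < 0 < hi:
        cands.append(0.0)
    return min(cands), max(cands)

def f(x):
    return x ** 4 - 4.0 * x ** 2 + x

def f_enclosure(lo, hi):
    """Natural interval extension of f over [lo, hi]."""
    l4, h4 = interval_pow(lo, hi, 4)
    l2, h2 = interval_pow(lo, hi, 2)
    return l4 - 4.0 * h2 + lo, h4 - 4.0 * l2 + hi

def interval_bb(lo, hi, tol=1e-6):
    upper = f(0.5 * (lo + hi))                          # incumbent upper bound
    heap = [(f_enclosure(lo, hi)[0], lo, hi)]           # boxes ordered by lower bound
    while heap and heap[0][0] < upper - tol:            # stop when the gap is below tol
        _, a, b = heapq.heappop(heap)
        m = 0.5 * (a + b)
        upper = min(upper, f(a), f(m), f(b))            # improve the incumbent
        for c, d in ((a, m), (m, b)):                   # bisect and keep promising boxes
            lb = f_enclosure(c, d)[0]
            if lb < upper - tol:
                heapq.heappush(heap, (lb, c, d))
    best_lb = heap[0][0] if heap else upper - tol
    return best_lb, upper                               # enclosure of the global minimum

print(interval_bb(-3.0, 3.0))                           # global minimum value is about -5.44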
114

Metode promena formulacija i okolina za problem maksimalne klike grafa / Variable Formulation and Neighborhood Search Methods for the Maximum Clique Problem in Graph

Janićijević Stefana 29 September 2016 (has links)
This Ph.D. thesis addresses approaches to solving NP-hard problems in combinatorial optimization, with the maximum clique problem highlighted as a representative of certain structures in graphs. The maximum clique problem and its related problems are formulated as nonlinear functions and solved with the aim of discovering new methods that find good approximate solutions in reasonable time. Several variants of the Variable Neighborhood Search method are proposed for solving the maximum clique problem in a graph. Related graph problems have applications in information retrieval, scheduling, signal processing, classification theory, coding theory, etc. All algorithms are implemented and successfully tested on numerous different instances.
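As an orientation for the Variable Neighborhood Search scheme mentioned above, the sketch below shows a minimal VNS skeleton for maximum clique: shake the incumbent by dropping k vertices, greedily re-extend, and move on improvement. The neighborhood structures, shaking rule, and parameters are illustrative and do not reproduce the thesis's formulations.

```python
# Minimal VNS skeleton for the maximum clique problem (illustrative only).
import random

def greedy_extend(clique, adj, vertices):
    """Greedily add vertices adjacent to every member of the current clique."""
    cand = [v for v in vertices if v not in clique]
    random.shuffle(cand)
    for v in cand:
        if all(v in adj[u] for u in clique):
            clique.add(v)
    return clique

def vns_max_clique(adj, k_max=3, iters=200, seed=0):
    random.seed(seed)
    vertices = list(adj)
    best = greedy_extend(set(), adj, vertices)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            shaken = set(best)
            for _ in range(min(k, len(shaken))):        # shaking: drop k vertices
                shaken.remove(random.choice(sorted(shaken)))
            cand = greedy_extend(shaken, adj, vertices) # local search: greedy re-extension
            if len(cand) > len(best):
                best, k = cand, 1                       # improvement: restart neighborhoods
            else:
                k += 1
    return best

# Tiny example: edges 0-1, 0-2, 0-4, 1-2, 2-3, 3-4; the maximum clique is {0, 1, 2}
adj = {0: {1, 2, 4}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {0, 3}}
print(vns_max_clique(adj))
```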
115

Heuristiques optimisées et robustes de résolution du problème de gestion d'énergie pour les véhicules électriques et hybrides / Optimized and robust heuristics for solving the problem of energy management for hybrid electric vehicles

Guemri, Mouloud 16 December 2013 (has links)
The system studied in this thesis is a hybrid electric vehicle with two energy sources (a fuel cell system and a supercapacitor). The first goal is to minimize fuel consumption while satisfying the requested power at each instant, taking into account constraints on the availability and state of charge of the storage element. The system is modeled as a global optimization problem. The heuristics developed to obtain the best power split between the two sources, and the proposed computation of a lower bound on consumption, provide better results than those found in the literature. The second goal of the thesis is the study of the robustness of the solutions, in order to minimize the worst-case consumption when a perturbation occurs or when uncertainty affects the input data; here the uncertainty concerns the power required for traction. The objective is to take the perturbations into account when the solutions are constructed, so as to maintain feasibility and limit the worst consumption that can result from a fluctuation in demand. Dedicated heuristics are proposed for solving the resulting robust variant of the problem, modeled as a minimax problem; the solutions they provide are less sensitive to perturbations than the non-robust ones.
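As a concrete illustration of the power-split problem described above, the following toy dynamic-programming sketch divides a demanded power profile between a fuel cell and a supercapacitor under storage bounds so as to minimize total fuel use. The fuel-rate model, grids, and limits are illustrative assumptions and bear no relation to the heuristics, bounds, or robust formulation proposed in the thesis.

```python
# Toy backward dynamic program for the fuel-cell / supercapacitor power split.
import numpy as np

def fuel_rate(p_fc):
    """Assumed convex fuel consumption (g/s) of the fuel cell at power p_fc (kW)."""
    return 0.05 * p_fc + 0.002 * p_fc ** 2

def optimal_split(p_demand, dt=1.0, e_max=100.0, p_fc_max=50.0, n_soc=201):
    soc_grid = np.linspace(0.0, e_max, n_soc)           # stored energy grid (kJ)
    cost = np.zeros(n_soc)                              # terminal cost-to-go
    policy = []
    for p_d in reversed(p_demand):                      # backward dynamic programming
        new_cost = np.full(n_soc, np.inf)
        best_fc = np.zeros(n_soc)
        for i, e in enumerate(soc_grid):
            for p_fc in np.linspace(0.0, p_fc_max, 26): # candidate fuel-cell powers
                p_sc = p_d - p_fc                       # supercapacitor covers the rest
                e_next = e - p_sc * dt                  # negative p_sc recharges it
                if 0.0 <= e_next <= e_max:
                    j = int(round(e_next / e_max * (n_soc - 1)))   # nearest grid point
                    c = fuel_rate(p_fc) * dt + cost[j]
                    if c < new_cost[i]:
                        new_cost[i], best_fc[i] = c, p_fc
        cost, policy = new_cost, [best_fc] + policy
    return cost, policy                                 # cost[i]: min fuel from soc_grid[i]

demand = [20.0, 35.0, 10.0, 40.0]                       # requested power (kW), four steps
cost, policy = optimal_split(demand)
print(cost[100])                                        # minimum fuel from a half-full store
```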
116

Técnicas de programação matemática para a análise e projeto de sistemas biotecnológicos. / Mathematical programming techniques for analysis and design of biotechnological systems.

Martínez Ríascos, Carlos Arturo 02 September 2005 (has links)
The complexity of some biotechnological systems does not allow their study without the use of advanced mathematical programming techniques. Metabolic flux quantification and the optimal synthesis and design of multiproduct plants are problems with this characteristic, and both are addressed in this thesis. Metabolic flux quantification using labeling balances is formulated as a nonlinear optimization problem that is solved by minimizing the difference between experimental measurements and the predictions of the metabolic network model. This problem arises from the need to characterize the metabolism by estimating the rates of its biochemical reactions. The mathematical model for this class of problems is composed of metabolite and isotope balances; the former are linear, whereas the latter are nonlinear and, in this work, are modeled by a modification of the atom mapping matrix technique. A spatial branch & bound algorithm was developed to quantify the metabolic fluxes while accounting for the existence of local optima; in this algorithm, the global search proceeds by dividing the search region (branching) and generating sequences of bounds (bounding) that converge to the global solution. As a case study, the fluxes in the central metabolism of Saccharomyces cerevisiae were estimated. The results confirm the existence of local solutions and the need for a global search strategy; the central fluxes of the global solution obtained are similar to those of the best solution found by an evolutionary algorithm. For problems of synthesis and design of multiproduct biotechnological systems, the most common approaches are the sequential selection and sizing of the unit operations, and the fixing of sizing and operating-time parameters (using values from laboratory or pilot plants); however, both approaches yield suboptimal solutions. On the other hand, the simultaneous solution of the synthesis and design of multiproduct biotechnological systems generates large-scale mixed-integer nonlinear programming (MINLP) models, due to the combination of discrete decisions, linked to the existence of process alternatives, with the nonlinear constraints arising from the operation models. As a case study, a plant for the production of insulin, hepatitis B vaccine, tissue plasminogen activator and superoxide dismutase was considered, using three different hosts: yeast (S. cerevisiae) with extra- or intracellular expression, Escherichia coli, and mammalian cells. The design must satisfy the production target for each product while minimizing capital cost and selecting the hosts, the operations, and the equipment arrangement in each stage. The results show that formulating the decisions with the big-M approach allows the resulting MINLP model to be solved, and that considering several products with different processing sequences and conditions generates substantial equipment idleness and increases the total cost of the design. In the case study it was observed that allocating intermediate storage tanks has a limited effect on cost reduction, but the simultaneous implementation of flexible scheduling, the design of auxiliary equipment, and intermediate storage tanks allows satisfactory designs to be obtained.
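For reference, the big-M device mentioned above replaces a disjunctive decision by linear constraints driven by a binary variable. A generic, hedged example (the symbols $y_j$, $M_j$, $U_j$, $g_j$ and $x_j$ are illustrative, not the thesis's actual model): if $y_j = 1$ the unit or host $j$ is selected and its design constraint must hold, while $y_j = 0$ relaxes the constraint and forces the associated sizing variables to zero:

$$
g_j(x) \le M_j\,(1 - y_j), \qquad 0 \le x_j \le U_j\, y_j, \qquad y_j \in \{0, 1\},
$$

where $M_j$ is a sufficiently large constant; the tighter $M_j$ can be chosen, the stronger the continuous relaxation of the MINLP generally becomes.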
117

Propagation des ondes sismiques dans les milieux multiphasiques hétérogènes : modélisation numérique, sensibilité et inversion des paramètres poroélastiques / Seismic wave propagation in heterogeneous multiphasic media : numerical modelling, sensibility and inversion of poroelastic parameters

Dupuy, Bastien 25 November 2011 (has links)
Seismic wave propagation in multiphasic porous media has numerous environmental (natural hazards, geotechnics, groundwater pollution...) and resource-related (aquifers, oil and gas, CO2 storage...) applications. Using seismic waves to study these media is justified by the fact that, as they propagate, the waves are distorted by the medium they cross and thus carry information, at the receivers, on the fluid and solid phases and on the porous skeleton.
This work focuses on the characteristics of seismic waves propagating in multiphasic media (several fluid and solid phases), from the complex physical description to the characterization of the constitutive parameters by inversion, including 2D numerical modelling of the wave propagation. The first part consists in describing the physics of multiphasic media (each phase and their dynamic interactions), using several upscaling methods, in order to obtain an equivalent medium defined by seven parameters. Thus, in simple-porosity saturated media and in more complex media (double porosity, patchy saturation, visco-poroelasticity), the seismic wave propagation can be computed without approximation. Indeed, the numerical method works in the frequency-space domain, which allows all frequency-dependent terms to be taken into account without approximation. The spatial discretization uses a discontinuous finite element method (discontinuous Galerkin), which makes it possible to handle heterogeneous media and complex interfaces. The computed seismic attributes (velocities and attenuations) of complex porous media are strongly dispersive, and the full waveforms, computed without approximation, depend strongly on the physical description of the medium. The last part of this work deals with the characterization of the poroelastic parameters by inversion, for which a two-step method is developed: the first step is a classical inversion (tomography, full waveform inversion) of the seismogram data to obtain macro-scale parameters (seismic attributes); the second step recovers the micro-scale poroelastic properties from these macro-scale parameters. This downscaling step uses a semi-global optimization method (the neighbourhood algorithm), which allows the full model space to be sampled thanks to the low numerical cost of the analytic forward model. A sensitivity analysis shows that, with a-priori knowledge of some parameters, the skeleton parameters or the type of saturating fluid can be inverted precisely from the velocities alone; to recover the fluid saturation, however, it is preferable to use the attenuations. The two-step procedure is applied to two realistic cases (reservoir monitoring and near-surface hydrogeophysics) and shows that some constitutive poroelastic parameters can be recovered from data estimated by classical imaging methods.
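Below is a hedged sketch of what such a downscaling step can look like: micro-scale properties (porosity and fluid bulk modulus) are recovered from macro-scale velocities through a simplified Gassmann-type forward model and a stochastic global search. The toy dry-rock relations, parameter values, bounds, and the use of differential evolution stand in for the thesis's full poroelastic model and neighbourhood algorithm.

```python
# Hedged downscaling sketch: invert (phi, K_f) from (Vp, Vs) with a toy
# Gassmann-type forward model; values and relations are illustrative only.
import numpy as np
from scipy.optimize import differential_evolution

K_MIN, MU_MIN = 37e9, 44e9        # assumed mineral moduli (Pa)
RHO_S, RHO_F = 2650.0, 1000.0     # assumed solid and fluid densities (kg/m^3)

def forward(phi, k_f):
    """Toy forward model: (porosity, fluid bulk modulus) -> (Vp, Vs)."""
    k_dry = K_MIN * (1.0 - phi) ** 3          # illustrative dry-rock trend
    mu = MU_MIN * (1.0 - phi) ** 3
    gassmann = (1.0 - k_dry / K_MIN) ** 2 / (
        phi / k_f + (1.0 - phi) / K_MIN - k_dry / K_MIN ** 2)
    k_sat = k_dry + gassmann                  # Gassmann fluid substitution
    rho = (1.0 - phi) * RHO_S + phi * RHO_F
    return np.sqrt((k_sat + 4.0 * mu / 3.0) / rho), np.sqrt(mu / rho)

vp_obs, vs_obs = forward(0.20, 2.25e9)        # synthetic "macro-scale" data

def misfit(m):
    vp, vs = forward(m[0], m[1])
    return ((vp - vp_obs) / vp_obs) ** 2 + ((vs - vs_obs) / vs_obs) ** 2

res = differential_evolution(misfit, bounds=[(0.05, 0.40), (1e8, 4e9)], seed=0)
print(res.x)                                  # recovers approximately (0.20, 2.25e9)
```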
118

Méthodes à intervalles et stratégies de parcours d'arbre pour l'optimisation globale / Interval methods and node selection strategies for constrained global optimisation

Sans, Olivier 19 November 2018 (has links)
In recent years, the interval Branch and Bound method has been used more and more to solve constrained global optimization problems, especially non-convex ones. Unlike many of its competitors, this method makes it possible to prove the optimality of a solution to a given level of accuracy. On the other hand, its tree exploration process implies exponential time and memory complexity in the worst case. This is why the development of acceleration techniques is an important line of research. A first contribution concerns the interval filtering (contraction) operators designed to reduce the search space. We propose a new interval filtering operator, named TEC, that generalizes to several dimensions the constructive disjunction principle used by the existing CID operator. TEC constructs a bounded subtree using a branch-and-contract process and returns the parallel-to-axes hull of the leaf domains/boxes. We propose two variants of TEC that exploit its tree structure. The first variant, named Graham-TEC, learns implicit linear constraints that are used to improve the contraction. The second one, named TEC-UB, searches for a good feasible point inside a leaf box of the TEC subtree. A second contribution concerns the node selection strategies of the exploration tree. We study a recent strategy that makes a compromise between best-first search and depth-first search, and we propose variants of this algorithm that favor the exploration of feasible regions. The proposed algorithms, tested on a benchmark suite recognized by the community, obtain running times comparable to the state of the art.
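To illustrate the constructive interval disjunction principle that TEC generalizes, here is a hedged toy sketch: one variable's domain is cut into slices, each slice is filtered by a basic contractor, and the hull of the surviving slices is returned. The "contractor" below is only an interval feasibility test on an assumed constraint x**2 + y**2 <= 4; it stands in for a real operator such as HC4 and is not the thesis's TEC operator.

```python
# Toy constructive interval disjunction: slice, filter, hull (illustration only).
def interval_sq(lo, hi):
    """Enclosure of x**2 over [lo, hi]."""
    cands = [lo * lo, hi * hi]
    if lo < 0.0 < hi:
        cands.append(0.0)
    return min(cands), max(cands)

def may_be_feasible(box):
    """Keep the box iff the lower bound of x**2 + y**2 does not exceed 4."""
    (xl, xu), (yl, yu) = box
    return interval_sq(xl, xu)[0] + interval_sq(yl, yu)[0] <= 4.0

def cid(box, var, n_slices=8):
    lo, hi = box[var]
    width = (hi - lo) / n_slices
    kept = []
    for k in range(n_slices):
        slice_box = list(box)
        slice_box[var] = (lo + k * width, lo + (k + 1) * width)
        if may_be_feasible(slice_box):                  # filter each slice
            kept.append(slice_box[var])
    if not kept:
        return None                                     # whole box proved infeasible
    new_box = list(box)
    new_box[var] = (min(l for l, _ in kept), max(u for _, u in kept))   # hull
    return new_box

# x in [-5, 5], y in [1.5, 3]: the x-domain contracts to [-2.5, 2.5]
print(cid([(-5.0, 5.0), (1.5, 3.0)], var=0))
```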
119

Otimização de funções contínuas usando algoritmos quânticos / Quantum continuous function optimization algorithms

Lara, Pedro Carlos da Silva 22 April 2015 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / Optimization algorithms are known to have a wide range of applications in many areas of knowledge, so any improvement in their performance has a large impact on the solution of many problems. This work introduces the area of quantum algorithms for the global optimization (maximization/minimization) of continuous functions, combining different quantum search methods with classical local optimization algorithms. Here, the appeal of quantum search algorithms lies directly in their performance relative to the classical method: a quantum computer can find an element in an unsorted database using only $O(\sqrt{N})$ queries.
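The $O(\sqrt{N})$ query count quoted above comes from Grover-style amplitude amplification. As a hedged illustration, the sketch below classically simulates the Grover iteration on a statevector and shows that roughly $\frac{\pi}{4}\sqrt{N}$ iterations drive the probability of the marked item close to one; it is a simulation of the search primitive, not a quantum speedup, and the problem size and marked index are arbitrary choices.

```python
# Classical statevector simulation of Grover amplitude amplification.
import numpy as np

def grover_probability(n_qubits, marked, n_iters):
    N = 2 ** n_qubits
    state = np.full(N, 1.0 / np.sqrt(N))          # uniform superposition
    for _ in range(n_iters):
        state[marked] *= -1.0                     # oracle: flip the marked amplitude
        state = 2.0 * state.mean() - state        # diffusion: inversion about the mean
    return state[marked] ** 2                     # probability of measuring the marked item

N = 2 ** 10
k_opt = int(round(np.pi / 4 * np.sqrt(N)))        # about 25 iterations for N = 1024
print(k_opt, grover_probability(10, marked=3, n_iters=k_opt))   # probability close to 1
```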
120

Méthodes avancées d'optimisation par méta-modèles – Application à la performance des voiliers de compétition / Advanced surrogate-based optimization methods - Application to racing yachts performance

Sacher, Matthieu 10 September 2018 (has links)
Sailing yacht performance optimization is a difficult problem because of the high complexity of the mechanical system (aero-elastic and hydrodynamic coupling) and the large number of parameters to optimize (sails, rig, etc.). Although sailing yacht optimization remains largely empirical today, numerical approaches are now becoming viable thanks to the latest advances in physical models and computing power. These numerical optimizations nevertheless remain very expensive, as each evaluation usually requires solving a nonlinear fluid-structure interaction problem. The central objective of this thesis is therefore to propose and develop original methods that minimize the numerical cost of sailing yacht performance optimization. Efficient Global Optimization (EGO) based on Gaussian-process surrogates is used to solve various optimization problems. The original EGO method is extended to constrained optimization, including possibly non-computable points, through a classification-based approach. The use of multi-fidelity surrogates is also adapted to the EGO method. The applications concern original optimization problems in which the performance is modelled experimentally and/or numerically. These applications validate the developed optimization methods on real and complex cases, including fluid-structure interaction phenomena.
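A hedged sketch of the classification idea mentioned above: the expected improvement of a Gaussian-process surrogate is weighted by a classifier's estimate of the probability that a candidate point is evaluable, so the infill search avoids regions where the solver fails. The toy objective, the "non-evaluable" region, and all settings are illustrative assumptions, not the thesis's formulation.

```python
# Classification-assisted infill: expected improvement times P(evaluable).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessClassifier, GaussianProcessRegressor

X = np.linspace(-2.0, 2.0, 30).reshape(-1, 1)          # initial design
evaluable = X[:, 0] < 1.0                              # pretend evaluations fail for x >= 1
y = np.sin(3.0 * X[evaluable, 0])                      # objective values where available

gp = GaussianProcessRegressor(normalize_y=True).fit(X[evaluable], y)
clf = GaussianProcessClassifier().fit(X, evaluable.astype(int))

cand = np.linspace(-2.0, 2.0, 400).reshape(-1, 1)      # candidate infill points
mu, sigma = gp.predict(cand, return_std=True)
sigma = np.maximum(sigma, 1e-12)
z = (y.min() - mu) / sigma
ei = (y.min() - mu) * norm.cdf(z) + sigma * norm.pdf(z)    # expected improvement
p_ok = clf.predict_proba(cand)[:, 1]                   # probability of being evaluable
x_next = cand[np.argmax(ei * p_ok)]                    # next point to evaluate
print(x_next)
```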
