1 |
Minimalizace času průjezdu vozidla zadanou trajektorií / Time minimization for vehicles passing a given trajectory. Suja, Jerguš. January 2019 (has links)
The diploma thesis deals with vehicle movement dynamics and defines a theoretical model for software simulation of a vehicle passing a given trajectory, the main aim being a time-minimizing driving mode. The simulation (an algorithm computing the speed profile) is then applied to an experimental vehicle passing along the Masaryk circuit in Brno. Finally, we optimize chosen vehicle parameters with the derivative-free algorithms Multilevel Coordinate Search and Particle Swarm.
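As background for the second of the two optimizers mentioned, a minimal global-best particle swarm can be sketched as follows. This is a generic pure-Python sketch with assumed default coefficients (w=0.7, c1=c2=1.5), not the thesis implementation, and the objective here is a hypothetical stand-in for a lap-time function:

```python
import random

def particle_swarm(f, bounds, n_particles=20, iters=100,
                   w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over a box with a basic global-best particle swarm."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical stand-in for a lap-time objective: a shifted sphere.
best, best_val = particle_swarm(lambda x: sum((xi - 1.0) ** 2 for xi in x),
                                [(-5.0, 5.0), (-5.0, 5.0)])
```

Because the update uses only function values, the same loop works for any black-box objective such as a lap-time simulator.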
|
2 |
Quantum Speed-ups for Boolean Satisfiability and Derivative-Free Optimization. Arunachalam, Srinivasan. January 2014 (has links)
In this thesis, we consider two important problems, Boolean satisfiability (SAT) and derivative-free optimization, in the context of large-scale quantum computers. In the first part, we survey well-known classical techniques for solving satisfiability. We compute the approximate time it would take to solve SAT instances using quantum techniques and compare it with the state-of-the-art classical heuristics employed annually in SAT competitions. In the second part of the thesis, we consider a few classically well-known algorithms for derivative-free optimization which are ubiquitously employed in engineering problems. We propose a quantum speedup to these classical algorithms by using techniques from the quantum minimum finding algorithm. In the third part of the thesis, we consider practical applications in the fields of bioinformatics, petroleum refining and civil engineering which involve solving either satisfiability or derivative-free optimization. We investigate whether using known quantum techniques to speed up these algorithms translates directly into a benefit for the industries that invest in technology to solve these problems. In the last section, we propose a few open problems which we feel are the immediate hurdles, from either an algorithmic or an architectural perspective, to obtaining a convincing speedup for the practical problems considered.
|
3 |
Gradient-Based Sensitivity Analysis with Kernels. Wycoff, Nathan Benjamin. 20 August 2021
Emulation of computer experiments via surrogate models can be difficult when the number of input parameters determining the simulation grows beyond a few dozen. In this dissertation, we explore dimension reduction in the context of computer experiments. The active subspace method is a linear dimension reduction technique which uses the gradients of a function to determine important input directions. Unfortunately, we cannot expect to always have access to the gradients of our black-box functions. We thus begin by developing an estimator for the active subspace of a function using kernel methods to indirectly estimate the gradient. We then demonstrate how to deploy the learned input directions to improve the predictive performance of local regression models by "undoing" the active subspace. Finally, we develop notions of sensitivity which are local to certain parts of the input space, which we then use to develop a Bayesian optimization algorithm that can exploit locally important directions. / Doctor of Philosophy / Increasingly, scientists and engineers developing new understanding or products rely on computers to simulate complex phenomena. Sometimes, these computer programs are so detailed that the amount of time they take to run becomes a serious issue. Surrogate modeling is the problem of trying to predict a computer experiment's result without actually running it, on the basis of having observed the behavior of similar simulations. Typically, computer experiments have different settings which induce different behavior. When there are many settings to tweak, typical surrogate modeling approaches can struggle. In this dissertation, we develop a technique for deciding which input settings, or even which combinations of input settings, we should focus on when trying to predict the output of the computer experiment.
We then deploy this technique both to predicting computer experiment outputs and to finding which input settings yield a particular desired result.
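For context, the classical gradient-based active subspace estimator that this line of work builds on can be sketched as below. This is a generic Monte Carlo version (eigendecomposition of the empirical matrix C = E[∇f ∇fᵀ]), not the kernel-based gradient-free estimator developed in the dissertation:

```python
import numpy as np

def active_subspace(grads, k):
    """Estimate a k-dimensional active subspace from sampled gradients.

    grads: (n_samples, dim) array of gradient evaluations of f.
    Returns a (dim, k) matrix whose columns span the active subspace,
    plus the eigenvalues of the empirical matrix C = E[grad grad^T],
    sorted in descending order.
    """
    grads = np.asarray(grads, dtype=float)
    C = grads.T @ grads / grads.shape[0]     # Monte Carlo estimate of E[grad grad^T]
    eigvals, eigvecs = np.linalg.eigh(C)     # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]        # re-sort descending
    return eigvecs[:, order[:k]], eigvals[order]
```

As a sanity check, a ridge function f(x) = (a·x)² has all gradients parallel to a, so the one-dimensional active subspace recovered should be span(a), with the remaining eigenvalues near zero.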
|
4 |
Otimização sem derivadas em conjuntos magros / Derivative-free optimization on thin domains. Sobral, Francisco Nogueira Calmon, 1984-. 20 August 2018 (has links)
Advisor: José Mario Martínez Pérez / Doctoral thesis (doutorado), Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica, 2012 / Abstract: Derivative-free optimization problems arise from models in which the derivatives of some of the functions involved are not available. This information may be unavailable because the functions are extremely complex black boxes originating from simulation procedures, or simply because the user is unable or unwilling to program the derivatives. Following the growth in the number of applications, the number of derivative-free algorithms has increased in recent years. However, few algorithms are able to handle thin feasible domains efficiently, for example in the presence of nonlinear equality constraints. In the present work, we describe the theory and implementation of two algorithms capable of dealing with thin-constrained derivative-free problems. Their design assumes that evaluating the objective function is the most expensive part of the problem. Based on this principle, the process of solving a problem is split into two phases. In the restoration phase, we try to improve feasibility without evaluating the objective function. In the minimization phase, the aim is to decrease the objective function value by using well-established algorithms for derivative-free problems with simple constraints. The first algorithm uses Inexact Restoration ideas together with a decreasing infeasibility tolerance. Under the usual hypotheses of direct-search methods, we show global minimization results. The second algorithm extends to the derivative-free case all the theoretical results obtained for a recent line-search Inexact Restoration algorithm; in this approach, only the derivatives of the objective function are unavailable. We perform numerical experiments to show the advantages of each algorithm, in particular in comparison with penalty-based algorithms / Doctorate in Applied Mathematics
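The two-phase restoration/minimization idea can be illustrated with a deliberately small sketch: a compass search stands in for the well-established subproblem solvers, and the single equality constraint is kept trivial so the phases are easy to follow. All names, tolerances and the shrink schedule are illustrative, not the thesis algorithms:

```python
def pattern_min(obj, x, accept, step=0.5, tol=1e-6, max_sweeps=1000):
    """Coordinate pattern search; a trial move is taken only when `accept`
    allows it and it strictly decreases `obj`."""
    x = list(x)
    v = obj(x)
    for _ in range(max_sweeps):
        if step <= tol:
            break
        improved = False
        for d in range(len(x)):
            for s in (step, -step):
                y = x[:]
                y[d] += s
                if accept(y):
                    vy = obj(y)
                    if vy < v:
                        x, v, improved = y, vy, True
        if not improved:
            step *= 0.5
    return x

def inexact_restoration(f, h, x0, outer=8, eps=1.0, shrink=0.25):
    """Two-phase loop: the restoration phase reduces the infeasibility |h(x)|
    without ever evaluating f; the minimization phase then decreases f among
    points whose infeasibility stays below a shrinking tolerance eps."""
    x = list(x0)
    for _ in range(outer):
        # Restoration phase: f is never called here.
        x = pattern_min(lambda z: abs(h(z)), x, accept=lambda z: True)
        # Minimization phase: decrease f subject to |h(z)| <= eps.
        x = pattern_min(f, x, accept=lambda z, e=eps: abs(h(z)) <= e)
        eps *= shrink
    return x
```

On a toy problem with f(x) = (x0 - 3)² + x1² and h(x) = x1 - 1, the iterates approach the constrained solution (3, 1) as the infeasibility tolerance shrinks.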
|
5 |
Derivative Free Multilevel Optimization Methods. Pekmen, Bengisen. 01 August 2009 (has links) (PDF)
Derivative-free optimization algorithms are designed to minimize smooth functions whose derivatives are not available or are costly to compute; the implementations considered here are trust-region methods based on multivariate polynomial interpolation. Trust-region-based multilevel optimization algorithms for large-scale unconstrained problems arising from the discretization of partial differential equations (PDEs) make use of different discretization levels to reduce the computational cost. In this thesis, a derivative-free multilevel optimization algorithm is derived and its convergence behavior is analyzed. The effectiveness of the algorithm is demonstrated on a shape optimization problem.
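For context, the core loop of an interpolation-based trust-region method can be sketched as below. This is a generic single-level version using a regression quadratic model and a simple Cauchy step, not the multilevel algorithm of the thesis, and all parameter values are illustrative:

```python
import numpy as np

def quad_basis(s):
    """Monomial basis [1, s_i, s_i*s_j (i <= j)] at displacement s."""
    n = len(s)
    feats = [1.0] + list(s)
    for i in range(n):
        for j in range(i, n):
            feats.append(s[i] * s[j])
    return np.array(feats)

def dfo_trust_region(f, x0, radius=1.0, iters=50, seed=0):
    """Derivative-free trust-region sketch: fit a quadratic model to sampled
    points by least squares, take a Cauchy step on the model inside the
    trust region, and update the radius via the actual-vs-predicted test."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = len(x)
    n_basis = 1 + n + n * (n + 1) // 2
    for _ in range(iters):
        # Sample displacements inside the trust region and fit the model.
        S = rng.uniform(-radius, radius, size=(2 * n_basis, n))
        A = np.array([quad_basis(s) for s in S])
        y = np.array([f(x + s) for s in S])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        g = coef[1:1 + n]                       # model gradient at s = 0
        if np.linalg.norm(g) < 1e-12:
            break
        H = np.zeros((n, n))                    # model Hessian from c_ij terms
        idx = 1 + n
        for i in range(n):
            for j in range(i, n):
                c = coef[idx]
                idx += 1
                if i == j:
                    H[i, i] = 2.0 * c
                else:
                    H[i, j] = H[j, i] = c
        # Cauchy step: minimize the model along -g, clipped to the region.
        d = -g / np.linalg.norm(g)
        kappa = d @ H @ d
        t = radius if kappa <= 0 else min(np.linalg.norm(g) / kappa, radius)
        step = t * d
        pred = -(g @ step + 0.5 * step @ H @ step)
        f_new = f(x + step)
        rho = (fx - f_new) / max(pred, 1e-16)
        if rho > 0.1:                           # accept the step
            x, fx = x + step, f_new
            if rho > 0.7:
                radius *= 2.0
        else:                                   # reject and shrink
            radius *= 0.5
    return x, fx
```

On a quadratic objective the fitted model is essentially exact, so the loop converges in a couple of accepted steps; a multilevel variant would additionally build cheap models on coarser PDE discretizations.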
|
6 |
Methods for Network Optimization and Parallel Derivative-free Optimization. Olsson, Per-Magnus. January 2014 (has links)
This thesis is divided into two parts, each concerned with a specific problem. The problem under consideration in the first part is to find suitable graph representations, abstractions, cost measures and algorithms for calculating placements of unmanned aerial vehicles (UAVs) such that they can keep one or several static targets under constant surveillance. Each target is kept under surveillance by a surveillance UAV, which transmits information, typically real-time video, to a relay UAV. The role of the relay UAV is to retransmit the information to another relay UAV, which retransmits it again to yet another UAV. This chain of retransmission continues until the information eventually reaches an operator at a base station. When there is a single target, all Pareto-optimal solutions, i.e. all relevant compromises between quality and the number of UAVs required, can be found using an efficient new algorithm. If there are several targets, the problem becomes a variant of the Steiner tree problem, and to solve it we adapt an existing algorithm to find an initial tree. Once it is found, we can further improve it using a new algorithm presented in this thesis. The second problem is optimization of time-consuming problems where the objective function is seen as a black box to which input parameters are sent and from which a function value is returned. This has the important implication that no gradient or Hessian information is available. Such problems are common when simulators are used to perform advanced calculations such as crash-test simulations of cars, dynamic multibody simulations, etc., and it is common for a single function evaluation to take several hours. Algorithms for solving such problems can be broadly divided into direct-search algorithms and model-building algorithms.
The first kind evaluates the objective function directly, whereas the second kind builds a model of the objective function, which is then optimized in order to find a new point where the objective function is believed to have a good value; the objective function is then evaluated at that point. Since the objective function is very time-consuming, it is common to focus on minimizing the number of function evaluations. However, this completely disregards the possibility of performing calculations in parallel, and to exploit this we investigate different ways in which parallelization can be used in model-building algorithms. These include using several starting points, generating several new points in each iteration, and new ways of predicting a point's value. We have implemented the parallel extensions in one of the state-of-the-art algorithms for derivative-free optimization and report results from testing on synthetic benchmarks as well as from solving real industrial problems.
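The batch-parallel idea can be sketched with a toy example: each iteration proposes several candidate points and evaluates them concurrently, so expensive evaluations overlap in time. A thread pool stands in here for what would in practice be a process pool or a cluster of simulators; the proposal rule is a simple shrinking neighborhood, not the model-building algorithm of the thesis:

```python
from concurrent.futures import ThreadPoolExecutor
import random

def parallel_batch_search(f, bounds, batch=4, iters=25, shrink=0.9, seed=0):
    """Toy batch-parallel derivative-free search: each iteration proposes
    `batch` candidates around the incumbent and evaluates them concurrently,
    mimicking how model-building methods can generate several trial points
    per iteration to keep parallel simulators busy."""
    rng = random.Random(seed)
    dim = len(bounds)
    x = [(lo + hi) / 2.0 for lo, hi in bounds]       # start at the box centre
    fx = f(x)
    radius = [(hi - lo) / 2.0 for lo, hi in bounds]
    with ThreadPoolExecutor(max_workers=batch) as pool:
        for _ in range(iters):
            cands = [[min(max(x[d] + rng.uniform(-radius[d], radius[d]),
                              bounds[d][0]), bounds[d][1])
                      for d in range(dim)]
                     for _ in range(batch)]
            vals = list(pool.map(f, cands))          # whole batch in flight
            best = min(range(batch), key=lambda i: vals[i])
            if vals[best] < fx:
                x, fx = cands[best], vals[best]      # move to the best candidate
            else:
                radius = [r * shrink for r in radius]
    return x, fx
```

With hour-long simulations, evaluating a batch of four candidates concurrently costs roughly the same wall-clock time as evaluating one, which is the motivation for the parallel extensions described above.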
|
7 |
Paramétrisation et optimisation sans dérivées pour le problème de calage d'historique / Parametrization and derivative-free optimization for the history matching problem. Marteau, Benjamin. 04 February 2015 (has links)
We worked in this thesis on a classical inverse problem in the petroleum industry: history matching. We propose a new geostatistical parameterization technique as well as a new derivative-free optimization algorithm adapted to the specificities of the problem.
The parameterization method is based on two approaches found in the literature, the local gradual deformation method and the domain deformation method. Similarly to the local gradual deformation method, our method combines two or more model realizations inside previously defined zones. Moreover, our method adds the possibility to dynamically update the shape of the zones during the optimization process. This property substantially improves its robustness with regard to the initial choice of the zones, and the greater flexibility brought by our method allowed us to develop an initialization methodology which guarantees a good initial point for the optimization. To reduce the number of evaluations needed to minimize the objective function, we made the assumption that a local parameter does not influence the production data of a distant well. With this hypothesis, the objective function is considered partially separable. The optimization algorithm we developed, called DFO-PSOF, is a trust-region algorithm based on quadratic interpolation models which exploits this partial separability property. Numerical results obtained on several reservoir test cases validate both the hypothesis and the quality of our algorithm for the history matching problem. Moreover, a theoretical convergence result towards a first-order critical point is proved for this new optimization method.
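The partial-separability assumption can be made concrete with a small sketch: if the objective is a sum of element functions, each depending on a few variables (e.g. one mismatch term per well), then after a local parameter change only the affected elements need to be re-evaluated. The interface below is hypothetical, for illustration only; the caller is responsible for declaring which variables changed:

```python
def make_partially_separable(elements):
    """Build f(x) = sum_i f_i(x[S_i]) with caching: when only the variables
    listed in `changed` were modified, only the element functions touching
    them are re-evaluated. `elements` is a list of (index_tuple, func) pairs.
    Note: the caller must report every modified index, or the cache goes stale."""
    vals = [None] * len(elements)

    def f(x, changed=None):
        for k, (idx, g) in enumerate(elements):
            if changed is None or vals[k] is None or any(i in changed for i in idx):
                vals[k] = g(*(x[i] for i in idx))
        return sum(vals)

    return f
```

In a history-matching setting each element would be the (expensive) simulated-vs-observed mismatch at one well, so this caching is where the evaluation savings come from.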
|
8 |
Programação não linear sem derivadas / Derivative-free nonlinear programming. Pedroso, Lucas Garcia. 14 August 2018 (has links)
Advisors: José Mario Martínez, Maria Aparecida Diniz Ehrhardt / Doctoral thesis (doutorado), Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica, 2009 / Abstract: We propose in this work a derivative-free Augmented Lagrangian algorithm for the general optimization problem. We consider the method due to Andreani, Birgin, Martínez and Schuverdt, eliminating the derivative computations in the algorithm by making suitable modifications to the stopping criterion. The good theoretical results of the method are maintained, such as convergence under the CPLD constraint qualification and boundedness of the penalty parameter. Numerical experiments are presented, the most relevant of which is an example of a derivative-free problem based on simulating the areas of figures in the plane / Doctorate in Applied Mathematics
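A minimal sketch of the derivative-free Augmented Lagrangian idea for a single equality constraint is shown below; a compass search stands in for the derivative-free subproblem solver. The update rules and parameter values are simplified illustrations, not the algorithm of the thesis:

```python
def compass_search(f, x, step=0.5, tol=1e-8, max_iter=500):
    """Simple derivative-free inner solver: compass/pattern search."""
    x = list(x)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        improved = False
        for d in range(len(x)):
            for s in (step, -step):
                y = x[:]
                y[d] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
    return x, fx

def derivative_free_auglag(f, h, x0, lam=0.0, rho=10.0, outer=10):
    """Sketch of a derivative-free Augmented Lagrangian loop for one equality
    constraint h(x) = 0: minimize the augmented Lagrangian with a pattern
    search, then apply the first-order multiplier update."""
    x = list(x0)
    for _ in range(outer):
        L = lambda z, lam=lam, rho=rho: f(z) + lam * h(z) + 0.5 * rho * h(z) ** 2
        x, _ = compass_search(L, x)
        lam = lam + rho * h(x)        # multiplier update
        if abs(h(x)) > 1e-6:
            rho *= 2.0                # tighten feasibility if still infeasible
    return x, lam
```

On the toy problem min x0² + x1² subject to x0 + x1 = 1, the iterates approach the solution (0.5, 0.5) and the multiplier approaches its ideal value -1.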
|
9 |
An active-set trust-region method for bound-constrained nonlinear optimization without derivatives applied to noisy aerodynamic design problems / Une méthode de région de confiance avec ensemble actif pour l'optimisation non linéaire sans dérivées avec contraintes de bornes appliquée à des problèmes aérodynamiques bruités. Tröltzsch, Anke. 07 June 2011 (has links)
Derivative-free optimization (DFO) has enjoyed renewed interest over the past years, mostly motivated by the ever-growing need to solve optimization problems defined by functions whose values are computed by simulation (e.g. engineering design, medical image restoration or groundwater supply). In the last few years, a number of derivative-free optimization methods have been developed, and model-based trust-region methods in particular have been shown to perform well.
In this thesis, we present a new interpolation-based trust-region algorithm which is shown to be efficient and globally convergent (in the sense that convergence to a stationary point is guaranteed from arbitrary starting points). The new algorithm relies on the technique of self-correcting geometry proposed by Scheinberg and Toint [128] in 2009. In their theory, they advanced the understanding of the role of geometry in model-based DFO methods; in our work, we improve the efficiency of their method while maintaining its good theoretical convergence properties. We further examine the influence of different types of interpolation models on the performance of the new algorithm.
Furthermore, we extend this method to handle bound constraints by applying an active-set strategy. Considering an active-set method in bound-constrained model-based optimization creates the opportunity to save a substantial number of function evaluations: it allows the algorithm to maintain smaller interpolation sets while carrying out the optimization in lower-dimensional subspaces. The resulting algorithm is shown to be numerically highly competitive. We present results on a test set of smooth problems from the CUTEr collection and compare our method to well-known state-of-the-art packages from different classes of DFO methods. To report numerical experiments incorporating noise, we create a test set of noisy problems by adding perturbations to the set of smooth problems; the choice of noisy problems was guided by a desire to mimic simulation-based optimization problems. Finally, we present results on a real-life application, a wing-shape design problem provided by Airbus.
|
10 |
Advances in adaptive control theory: gradient- and derivative-free approaches. Yucelen, Tansel. 29 September 2011
In this dissertation, we present new approaches to improve standard designs in adaptive control theory, and novel adaptive control architectures.
We first present a novel Kalman filter based approach for approximately enforcing a linear constraint in standard adaptive control design. One application is that this leads to alternative forms for well-known modification terms such as e-modification. In addition, it leads to smaller tracking errors without incurring significant oscillations in the system response and without requiring a high modification gain. We derive alternative forms of the e- and adaptive loop recovery (ALR-) modifications.
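For background on the modification terms mentioned above, a scalar model-reference adaptive control loop with a standard e-modification term can be simulated with explicit Euler integration as below. The plant, gains and simulation settings are all assumed values for illustration, not from the dissertation:

```python
def simulate_mrac(a=2.0, gamma=2.0, sigma=0.05, r=1.0, dt=0.001, T=30.0):
    """Scalar model-reference adaptive control with an e-modification term.
    Plant:      xdot  = a*x + u         (a unknown to the controller)
    Reference:  xmdot = -xm + r
    Control:    u = theta*x + r, ideal gain theta* = -(a + 1)
    Adaptation: thetadot = -gamma*e*x - sigma*|e|*theta  (e-modification)
    All numerical values are illustrative."""
    x = xm = theta = 0.0
    for _ in range(int(T / dt)):
        e = x - xm                                   # tracking error
        u = theta * x + r
        xdot = a * x + u
        xmdot = -xm + r
        thetadot = -gamma * e * x - sigma * abs(e) * theta
        x += dt * xdot                               # explicit Euler step
        xm += dt * xmdot
        theta += dt * thetadot
    return x, xm, theta
```

The e-modification damping term vanishes as the error goes to zero, so for this example the adaptive gain settles near the ideal value -(a + 1) = -3 while the plant tracks the reference model.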
Next, we show how to use Kalman filter optimization to derive a novel adaptation law. This results in an optimization-based time-varying adaptation gain that reduces the need for adaptation gain tuning.
A second major contribution of this dissertation is the development of a novel derivative-free, delayed weight update law for adaptive control. The assumption of constant unknown ideal weights is relaxed to the existence of time-varying weights, such that fast and possibly discontinuous variations in the weights are allowed. This approach is particularly advantageous for applications to systems that can undergo a sudden change in dynamics, such as due to reconfiguration, deployment of a payload, docking, or structural damage, and for the rejection of external disturbance processes.
As a third and final contribution, we develop a novel approach for extending all the methods developed in this dissertation to the case of output feedback. The approach is developed only for the case of derivative-free adaptive control, and the extension of the other approaches developed previously for the state feedback case to output feedback is left as a future research topic.
The proposed approaches of this dissertation are illustrated in both simulation and flight test.
|