11

Discrete Tomographic Reconstruction Methods From The Theories Of Optimization And Inverse Problems: Application In Vlsi Microchip Production

Ozgur, Osman, 01 January 2006
Optimization theory is a key technology for inverse reconstruction problems in science, engineering and economics. Discrete tomography is a modern research field dealing with the reconstruction of finite objects in, e.g., VLSI chip design, on which this thesis focuses. In this work, a framework with its supplementary algorithms and a new problem reformulation are introduced to approximately solve this NP-hard problem. The framework is modular, so that other reconstruction methods, optimization techniques, and optimal experimental design methods can be incorporated within it. The problem is revisited with a new optimization formulation, and interpretations of known methods in accordance with the framework are also given. Supplementary algorithms are combined or incorporated to improve the solution or to reduce the computational cost in time and space.
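The simplest instance of the discrete tomography problem this abstract refers to is reconstructing a binary matrix from its row and column sums (two projections). As a rough, hypothetical illustration of the problem class, not of the thesis's framework, a classical Ryser-style greedy handles this case:

```python
def reconstruct_binary(row_sums, col_sums):
    """Greedy (Ryser-style) reconstruction of a 0/1 matrix from its row and
    column sums. Illustrative sketch only; the full discrete tomography
    problem with more projections is NP-hard."""
    if sum(row_sums) != sum(col_sums):
        return None
    n_rows, n_cols = len(row_sums), len(col_sums)
    remaining = list(col_sums)
    matrix = [[0] * n_cols for _ in range(n_rows)]
    # process rows by decreasing row sum (the classical greedy order)
    for i in sorted(range(n_rows), key=lambda i: -row_sums[i]):
        # place this row's ones in the columns with largest remaining demand
        picked = sorted(range(n_cols), key=lambda j: -remaining[j])[:row_sums[i]]
        if any(remaining[j] == 0 for j in picked):
            return None  # infeasible instance
        for j in picked:
            matrix[i][j] = 1
            remaining[j] -= 1
    return matrix if all(r == 0 for r in remaining) else None
```

With three or more projection directions this greedy no longer suffices, which is where optimization-based reformulations such as the one in this thesis come in.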
12

Algorithmes d'optimisation sans dérivées à caractère probabiliste ou déterministe : analyse de complexité et importance en pratique / Derivative-free optimization methods based on probabilistic and deterministic properties : complexity analysis and numerical relevance

Royer, Clément, 04 November 2016
Randomization has had a major impact on the latest developments in the field of numerical optimization, partly due to the outbreak of machine learning applications. In this increasingly popular context, classical nonlinear programming algorithms have indeed been outperformed by variants relying on randomness. The cost of these variants is usually lower than for the traditional schemes; however, theoretical guarantees may not carry over straightforwardly from the deterministic to the randomized setting. Complexity analysis is a useful tool in the latter case, as it helps provide estimates on the convergence speed of a given scheme, which implies some form of convergence. Such a technique has also gained attention from the deterministic optimization community thanks to recent findings in the nonconvex case, as it brings supplementary indicators on the behavior of an algorithm. In this thesis, we investigate the practical enhancement of deterministic optimization algorithms through the introduction of random elements within those frameworks, as well as the numerical impact of their complexity results. We focus on direct-search methods, one of the main classes of derivative-free algorithms, yet our analysis applies to a wide range of derivative-free methods.
We propose probabilistic variants of the classical properties required to ensure convergence of the studied methods, then highlight the practical efficiency gains induced by their lower consumption of function evaluations. First-order concerns form the basis of our analysis, which we apply to address unconstrained and linearly constrained problems. The observed gains incite us to additionally take second-order considerations into account. Using complexity properties of derivative-free schemes, we develop several frameworks in which second-order information is exploited. Both a deterministic and a probabilistic analysis can be performed on these schemes; the latter is an opportunity to introduce supplementary probabilistic properties, together with their impact on numerical efficiency and robustness.
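The directional direct search with randomly generated polling directions described in this abstract can be sketched roughly as follows; this is a hypothetical minimal version (function and parameter names are ours, not the thesis's):

```python
import random

def direct_search(f, x0, alpha=1.0, tol=1e-6, max_iter=500, n_dirs=2):
    """Directional direct search with random polling directions.

    At each iteration a few random unit directions are polled (together with
    their negatives); the step size is expanded on success and contracted on
    failure. Illustrative sketch only.
    """
    x, n = list(x0), len(x0)
    fx = f(x)
    for _ in range(max_iter):
        if alpha < tol:
            break
        improved = False
        for _ in range(n_dirs):
            # draw a random unit direction
            d = [random.gauss(0.0, 1.0) for _ in range(n)]
            norm = sum(di * di for di in d) ** 0.5
            d = [di / norm for di in d]
            for sgn in (1.0, -1.0):
                y = [xi + sgn * alpha * di for xi, di in zip(x, d)]
                fy = f(y)
                # sufficient-decrease (forcing function) acceptance test
                if fy < fx - 1e-4 * alpha * alpha:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        alpha = 2.0 * alpha if improved else 0.5 * alpha
    return x, fx
```

The appeal of random polling sets is visible here: a couple of random directions per iteration can replace a full positive spanning set, cutting the number of function evaluations per iteration.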
13

Um metodo de região de confiança para minimização irrestrita sem derivadas / A trust-region method for derivative-free unconstrained minimization

Jimenez Urrea, Liliana, 12 August 2018
Advisor: Vera Lucia da Rocha Lopes. Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica. Previous issue date: 2008.
Abstract: In this work we study numerical methods for unconstrained nonlinear programming problems that use neither the gradient of the objective function nor approximations to it. A method due to Powell (2002), UOBYQA, which approximates the objective function F by a quadratic model and minimizes that model within a trust region, was implemented, together with a variant of it in which we established our own choice of some parameters. A new version of NEWUOA, introduced by Powell in 2006, was also implemented, and commentaries on the implementations are given. Numerical tests of these implementations on problems from the Hock-Schittkowski collection are presented at the end of the work. The Powell methods are compared among themselves and against a pattern search method due to Virginia Torczon, which at each iteration defines a pattern of search directions from the current point, seeking better values of F. (Master's in Applied Mathematics)
14

Sobre métodos de busca padrão para minimização de funções com restrições lineares / On pattern search methods for linearly constrained minimization

Ferreira, Deise Gonçalves (1988-), 03 April 2013
Advisor: Maria Aparecida Diniz Ehrhardt. Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica. Previous issue date: 2013.
Abstract: In this work, our interest lies in derivative-free optimization methods. Among these, our aim is to study a pattern search method for linearly constrained minimization. We studied an algorithm proposed by Lewis and Torczon, whose general idea is that the pattern must contain search directions along which feasible iterates can be computed. The algorithm has global convergence results. We carried out its computational implementation, and we propose new search strategies and a new updating rule for the step-length control parameter, as well as a new pattern of search directions. We performed numerical experiments in order to analyze the performance of our proposals and to compare the performance of the pattern we introduce with the one proposed by Lewis and Torczon. (Master's in Applied Mathematics)
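For contrast with the linearly constrained variant studied in this dissertation, a minimal unconstrained coordinate pattern search, with the classical shrink-on-failure step-size rule, might look like this (a sketch under our own naming, not the Lewis-Torczon algorithm itself):

```python
def pattern_search(f, x0, step=1.0, shrink=0.5, tol=1e-8, max_iter=1000):
    """Coordinate-direction pattern search (unconstrained sketch).

    Polls +/- each coordinate direction; the step is shrunk only after a
    fully unsuccessful poll, as in classical pattern search. The constrained
    variant would instead build the pattern from the nearby constraint
    normals so that feasible steps exist.
    """
    x = list(x0)
    fx = f(x)
    n = len(x)
    for _ in range(max_iter):
        if step < tol:
            break
        success = False
        for i in range(n):
            for sgn in (1.0, -1.0):
                y = list(x)
                y[i] += sgn * step
                fy = f(y)
                if fy < fx:  # simple decrease suffices on the pattern lattice
                    x, fx, success = y, fy, True
                    break
            if success:
                break
        if not success:
            step *= shrink
    return x, fx
```
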
15

Simulation-based optimization of Hybrid Systems Using Derivative Free Optimization Techniques

Jayakumar, Adithya, 27 December 2018
No description available.
16

Some Population Set-Based Methods for Unconstrained Global Optimization

Kaelo, Professor, 16 November 2006
Student Number: 0214677F - PhD thesis - School of Computational and Applied Mathematics - Faculty of Science. Many real-life problems are formulated as global optimization problems with continuous variables. These problems are in most cases nonsmooth, nonconvex and often simulation-based, making it impossible to solve them with gradient-based methods. Therefore, efficient, reliable and derivative-free global optimization methods for solving such problems are needed. In this thesis, we focus on improving the efficiency and reliability of some global optimization methods. In particular, we concentrate on improving some population set-based methods for unconstrained global optimization, mainly through hybridization. Hybridization has widely been recognized as one of the most attractive areas of unconstrained global optimization. Experiments have shown that through hybridization, new methods can be formed that inherit the strengths of the original elements but not their weaknesses. We suggest a number of new hybridized population set-based methods based on differential evolution (de), controlled random search (crs2) and the real coded genetic algorithm (ga). We propose five new versions of de. In the first version, we introduce a localization, called random localization, in the mutation phase of de. In the second version, we propose a localization in the acceptance phase of de. In the third version, we form a de hybrid algorithm by probabilistically combining the point generation scheme of crs2 with that of de in the de algorithm. The fourth and fifth versions are also de hybrids. These versions hybridize the mutation of de with the point generation rule of the electromagnetism-like (em) algorithm. We also propose five new versions of crs2. The first version modifies the point generation scheme of crs2 by introducing a local mutation technique.
In the second and third modifications, we probabilistically combine the point generation scheme of crs2 with the linear interpolation scheme of a trust-region based method. The fourth version is a crs hybrid that probabilistically combines the quadratic interpolation scheme with the linear interpolation scheme in crs2. In the fifth version, we form a crs2 hybrid algorithm by probabilistically combining the point generation scheme of crs2 with that of de in the crs2 algorithm. Finally, we propose five new versions of the real coded genetic algorithm (ga) with arithmetic crossover. In the first version of ga, we introduce a local technique. We propose, in the second version, an integrated crossover rule that generates two children at a time using two different crossover rules. We introduce a local technique in the second version to obtain the third version. The fourth and fifth versions are based on the probabilistic adaptation of crossover rules. The efficiency and reliability of the new methods are evaluated through numerical experiments using a large test suite of both simple and difficult problems from the literature. Results indicate that the new hybrids are much better than their original counterparts in both reliability and efficiency. Therefore, the new hybrids proposed in this study offer an alternative to many currently available stochastic algorithms for solving global optimization problems in which gradient information is not readily available.
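The baseline that the thesis's de hybrids build on is the plain DE/rand/1/bin iteration, which can be sketched as follows (an illustrative toy implementation with hypothetical parameter defaults, none of the hybridizations included):

```python
import random

def differential_evolution(f, bounds, np_=20, F=0.7, CR=0.9, gens=200, seed=1):
    """Plain DE/rand/1/bin sketch: mutation v = a + F*(b - c), binomial
    crossover, greedy one-to-one acceptance. Bounds are used only to
    initialize the population."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(p) for p in pop]
    for _ in range(gens):
        for i in range(np_):
            # mutation: three distinct members, all different from i
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            v = [pop[a][k] + F * (pop[b][k] - pop[c][k]) for k in range(dim)]
            # binomial crossover; index jr guarantees at least one mutant gene
            jr = rng.randrange(dim)
            u = [v[k] if (rng.random() < CR or k == jr) else pop[i][k]
                 for k in range(dim)]
            fu = f(u)
            if fu <= fit[i]:  # greedy acceptance
                pop[i], fit[i] = u, fu
    best = min(range(np_), key=fit.__getitem__)
    return pop[best], fit[best]
```

The hybrids described above would modify exactly the mutation line (random localization, em-style point generation) or the acceptance test.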
17

Derivative Free Optimization Methods: Application In Stirrer Configuration And Data Clustering

Akteke, Basak, 01 July 2005
Recent developments show that derivative-free methods are in high demand among researchers for solving optimization problems in various practical contexts. Although well-known optimization methods that employ derivative information can be very efficient, a derivative-free method will be more efficient in cases where the objective function is nondifferentiable or the derivative information is unavailable or unreliable. Derivative Free Optimization (DFO) was developed for solving small-dimensional problems (fewer than 100 variables) in which the computation of the objective function is relatively expensive and its derivatives are not available. Problems of this nature arise more and more in modern physical, chemical and econometric measurements and in engineering applications, where computer simulation is employed for the evaluation of the objective functions. In this thesis, we give an example of the implementation of DFO in an approach for optimizing stirrer configurations, comprising a parametrized grid generator, a flow solver, and DFO. A derivative-free method, i.e., DFO, is preferred because the gradient of the objective function with respect to the stirrer's design variables is not directly available. This nonlinear objective function is obtained from the flow field by the flow solver. We present and interpret numerical results of this implementation. Moreover, a contribution is made to a survey and classification of DFO research directions, together with an analysis and discussion of these. We also present a derivative-free algorithm used within a clustering algorithm, in combination with nonsmooth optimization techniques, to demonstrate the effectiveness of derivative-free methods in computations. This algorithm is applied to data sets from various sources in public life and medicine. We compare various methods and their practical backgrounds, and conclude with a summary and outlook. This work may serve as preparation for possible future research.
18

Optimisation sans dérivées sous incertitudes appliquées à des simulateurs coûteux / Derivative-free optimization under uncertainty applied to costly simulators

Pauwels, Benoît, 10 March 2016
The modeling of complex phenomena encountered in industrial issues can lead to the study of numerical simulation codes. These simulators may require extensive execution time (from hours to days), involve uncertain parameters and even be intrinsically stochastic. Importantly, within the context of simulation-based optimization, the derivatives of the outputs with respect to the inputs may be nonexistent, inaccessible or too costly to approximate reasonably. This thesis is organized in four chapters. The first chapter discusses the state of the art in derivative-free optimization and uncertainty modeling. The next three chapters introduce three independent, although connected, contributions to the field of derivative-free optimization in the presence of uncertainty. The second chapter addresses the emulation of costly stochastic simulation codes, stochastic in the sense that simulations run with the same input parameters may lead to distinct outputs. Such was the matter of the CODESTOCH project carried out at the Summer mathematical research center on scientific computing and its applications (CEMRACS) during the summer of 2013, together with two Ph.D. students from Electricity of France (EDF) and the Atomic Energy and Alternative Energies Commission (CEA). We designed four methods to build emulators for functions whose values are probability density functions. These methods were tested on two toy functions and applied to industrial simulation codes concerned with three complex phenomena: the spatial distribution of molecules in a hydrocarbon system (IFPEN), the life cycle of large electric transformers (EDF) and the repercussions of a hypothetical accident in a nuclear plant (CEA). Emulation was a preliminary step towards optimization in the first two cases.
In the third chapter we consider the influence of inaccurate objective function evaluations on direct search, a classical derivative-free optimization method. In real settings inaccuracy may never vanish; however, users usually apply direct search algorithms disregarding inaccuracy. We raise three questions. What precision can we hope to achieve, given the inaccuracy? How fast can this precision be attained? What stopping criteria can guarantee this precision? We answer these three questions for directional direct search applied to objective functions whose evaluation inaccuracy, stochastic or not, is uniformly bounded. We also derive from our results an adaptive algorithm for dealing efficiently with several oracles having different levels of accuracy. The theory and algorithm are validated with numerical tests and two industrial applications: surface minimization in mechanical design and oil well placement in reservoir engineering.
The fourth chapter considers optimization problems with imprecise parameters, whose imprecision is modeled with fuzzy set theory. A number of methods have been published to solve linear programs involving fuzzy parameters, but only a few for nonlinear programs. We propose an algorithm to address a large class of fuzzy optimization problems by iterative non-dominated sorting. The distributions of the fuzzy parameters are assumed to be only partially known. We also provide a criterion to assess the precision of the solutions and make comparisons with other methods found in the literature. We show that our algorithm guarantees solutions whose level of precision at least equals the precision of the available data.
19

Otimização sem derivadas : sobre a construção e a qualidade de modelos quadráticos na solução de problemas irrestritos / Derivative-free optimization : on the construction and quality of quadratic models for unconstrained optimization problems

Nascimento, Ivan Xavier Moura do (1989-), 25 August 2018
Advisor: Sandra Augusta Santos. Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica. Previous issue date: 2014.
Abstract: Trust-region methods are a class of iterative algorithms widely applied to nonlinear unconstrained optimization problems for which derivatives of the objective function are unavailable or inaccurate. One of the classical approaches involves the optimization of a polynomial model for the objective function, built at each iteration from a sample set of points. In a recent work, Scheinberg and Toint [SIAM Journal on Optimization, 20 (6) (2010), pp. 3512-3532] proved that, despite being essential for convergence results, the improvement of the geometry (poisedness) of the sample set may occur only in the final stage of the algorithm. Based on these ideas and incorporating them into a theoretical algorithmic framework, the authors investigate analytically an interesting self-correcting geometry mechanism of the interpolating set, which becomes evident at unsuccessful iterations. Global convergence of the new algorithm is then proved as a consequence of this self-correcting property. In this work we study the positioning of the sample points within interpolation-based methods that rely on quadratic models, and we investigate the computational performance of the theoretical algorithm proposed by Scheinberg and Toint, whose parameters are based either on choices from previous works or on numerical experiments. (Master's in Applied Mathematics)
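The model-building step underlying such interpolation-based methods amounts to solving a small linear system for the coefficients of the quadratic; the sample set is poised exactly when that system is nonsingular. A minimal two-dimensional sketch (a hypothetical helper of ours, not the authors' code):

```python
def quadratic_model_2d(points, fvals):
    """Fit m(x, y) = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2 through six
    interpolation points by Gaussian elimination with partial pivoting.
    The six points must be poised (nonsingular Vandermonde-type matrix)."""
    A = [[1.0, x, y, x * x, x * y, y * y] for x, y in points]
    b = list(fvals)
    n = 6
    # forward elimination with partial pivoting
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, n):
            m = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= m * A[k][c]
            b[r] -= m * b[k]
    # back substitution
    coef = [0.0] * n
    for k in range(n - 1, -1, -1):
        s = b[k] - sum(A[k][c] * coef[c] for c in range(k + 1, n))
        coef[k] = s / A[k][k]
    return coef
```

If the sample points drift toward an affine subspace the pivot elements degenerate, which is precisely the poisedness issue that the self-correcting geometry mechanism addresses.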
20

Sobre um método de minimização irrestrita baseado em derivadas simplex / On an unconstrained minimization method based on simplex derivatives

Cervelin, Bruno Henrique (1988-), 04 August 2013
Advisor: Maria Aparecida Diniz Ehrhardt. Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica. Previous issue date: 2013.
Abstract: The aim of this work is to present some derivative-free methods for unconstrained minimization problems, such as Nelder-Mead, pattern search and SID-PSM, and to compare them. We also present the problem of optimal algorithmic parameters, and apply the SID-PSM method to find optimal parameters for SID-PSM itself with respect to the number of function evaluations the method performs. The numerical experiments show that SID-PSM is more robust and more efficient than the classical derivative-free methods (pattern search and Nelder-Mead). Further experiments show the potential of the optimal algorithmic parameters problem to improve both the efficiency and the robustness of the methods. (Master's in Applied Mathematics)
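A bare-bones version of the Nelder-Mead simplex method compared in this dissertation might read as follows; this is an illustrative sketch with simplified contraction logic, not the implementation used in the experiments:

```python
def nelder_mead(f, x0, step=0.5, tol=1e-10, max_iter=500):
    """Minimal Nelder-Mead: reflection, expansion, (inside) contraction and
    shrink, with a function-spread stopping test. Sketch only."""
    n = len(x0)
    # initial simplex: x0 plus a step along each coordinate axis
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    fvals = [f(p) for p in simplex]
    for _ in range(max_iter):
        order = sorted(range(n + 1), key=fvals.__getitem__)
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        if abs(fvals[-1] - fvals[0]) < tol:
            break
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        refl = [2 * centroid[j] - simplex[-1][j] for j in range(n)]
        fr = f(refl)
        if fr < fvals[0]:
            # try to expand past the reflected point
            exp = [3 * centroid[j] - 2 * simplex[-1][j] for j in range(n)]
            fe = f(exp)
            simplex[-1], fvals[-1] = (exp, fe) if fe < fr else (refl, fr)
        elif fr < fvals[-2]:
            simplex[-1], fvals[-1] = refl, fr
        else:
            contr = [0.5 * (centroid[j] + simplex[-1][j]) for j in range(n)]
            fc = f(contr)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = contr, fc
            else:  # shrink all vertices toward the best one
                for i in range(1, n + 1):
                    simplex[i] = [0.5 * (simplex[0][j] + simplex[i][j])
                                  for j in range(n)]
                    fvals[i] = f(simplex[i])
    return simplex[0], fvals[0]
```

SID-PSM augments pattern search with exactly the kind of simplex-gradient information that this simplex of points makes available.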
