1

A Class of Direct Search Methods for Nonlinear Integer Programming

Sugden, Stephen J Unknown Date (has links)
This work extends recent research in the development of a number of direct search methods in nonlinear integer programming. The various algorithms use an extension of the well-known FORTRAN MINOS code of Murtagh and Saunders as a starting point. MINOS is capable of solving quite large problems in which the objective function is nonlinear and the constraints linear. The original MINOS code has been extended in various ways by Murtagh, Saunders and co-workers since the original 1978 landmark paper. Extensions have dealt with methods to handle both nonlinear constraints (most notably MINOS/AUGMENTED) and integer requirements on a subset of the variables (MINTO). The starting point for the present thesis is the MINTO code of Murtagh. MINTO is a direct descendant of MINOS in that it extends the capabilities to general nonlinear constraints and integer restrictions. The overriding goal for the work described in this thesis is to obtain a good integer-feasible or near-integer-feasible solution to the general NLIP problem while trying to avoid, or at least minimize, the use of the ubiquitous branch-and-bound techniques. In general, we assume a small number of nonlinearities and a small number of integer variables.

Some initial ideas motivating the present work are summarised in an invited paper presented by Murtagh at the 1989 CTAC (Computational Techniques and Applications) conference in Brisbane, Australia. The approach discussed there was to start a direct search procedure at the solution of the continuous relaxation of a nonlinear mixed-integer problem by first removing integer variables from the simplex basis, then adjusting integer-infeasible superbasic variables, and finally checking for local optimality by trial unit steps in the integers. This may be followed by a reoptimization with the latest point as the starting point, but with integer variables held fixed. We describe ideas for the further development of Murtagh's direct search method. Both the old and new approaches aim to attain an integer-feasible solution from an initially relaxed (continuous) solution. Techniques such as branch-and-bound or Scarf's neighbourhood search [84] may then be used to obtain a locally optimal solution. The present range of direct search methods differs significantly from that described by Murtagh, both in the heuristics used and in the major and minor steps of the procedures. Chapter 5 summarizes Murtagh's original approach while Chapter 6 describes the new methods in detail.

A feature of the new approach is that some degree of user interaction (MINTO/INTERACTIVE) has been provided, so that a skilled user can "drive" the solution towards optimality if this is desired. Alternatively, the code can still be run in "automatic" mode, where one of five available direct search methods may be specified in the customary SPECS file. A selection of nonlinear integer programming problems taken from the literature has been solved and the results are presented in the latter chapters. Further, a new communications network topology and allocation model devised by Berry and Sugden has been successfully solved by the direct search methods presented herein. The results are discussed in Chapter 14, where the approach is compared with the branch-and-bound heuristic.
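The core move described above — start from the continuous relaxation and seek integer feasibility through trial unit steps — can be illustrated with a short sketch. This is not the MINOS/MINTO code; it is a minimal Python illustration under stated assumptions: the black-box objective `f`, the feasibility test `feasible`, the relaxed solution `x_relaxed`, and the index set `int_idx` of integer variables are hypothetical stand-ins, and the basis/superbasic bookkeeping of the actual method is omitted.

```python
def direct_search_integerise(f, feasible, x_relaxed, int_idx, max_passes=20):
    """Illustrative sketch only: round the relaxed solution, then take trial
    unit steps on each integer variable, keeping any feasible move that
    lowers the objective."""
    x = list(x_relaxed)
    for j in int_idx:                        # start from the rounded relaxation
        x[j] = round(x[j])
    if not feasible(x):
        return None                          # a real code would repair feasibility here
    best = f(x)
    for _ in range(max_passes):
        improved = False
        for j in int_idx:                    # trial unit steps in each integer variable
            for step in (1, -1):
                trial = list(x)
                trial[j] += step
                if feasible(trial):
                    ft = f(trial)
                    if ft < best:
                        x, best, improved = trial, ft, True
        if not improved:                     # locally optimal w.r.t. unit steps
            break
    return x, best
```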
2

Deterministic Parallel Global Parameter Estimation for a Model of the Budding Yeast Cell Cycle

Panning, Thomas D. 18 August 2006 (has links)
Two parallel deterministic direct search algorithms are combined to find improved parameters for a system of differential equations designed to simulate the cell cycle of budding yeast. Comparing the model simulation results to experimental data is difficult because most of the experimental data is qualitative rather than quantitative. An algorithm to convert simulation results to mutant phenotypes is presented. Vectors of the 143 parameters defining the differential equation model are rated by a discontinuous objective function. Parallel results on a 2200-processor supercomputer are presented for a global optimization algorithm (DIRECT), a local optimization algorithm (MADS), and a hybrid of the two. A second formulation is presented that uses a system of smooth inequalities to evaluate the phenotype of a mutant. Preliminary results of this formulation are given. / Master of Science
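The rating step described here — converting a simulation of each mutant into a qualitative phenotype and penalizing disagreement with experiment — is what makes the objective discontinuous. A hedged sketch of such an objective follows; the `simulate` and `classify` callables, the mutant records, and the weights are hypothetical placeholders, not the thesis code.

```python
def rate_parameters(params, mutants, simulate, classify, weights):
    """Illustrative discontinuous objective: simulate each mutant under the
    candidate parameter vector, classify the trajectory as a phenotype, and
    add a penalty whenever the prediction disagrees with observation."""
    score = 0.0
    for mutant in mutants:
        trajectory = simulate(params, mutant.gene_knockouts)   # ODE integration
        predicted = classify(trajectory)                       # e.g. "viable", "G1 arrest"
        if predicted != mutant.observed_phenotype:
            score += weights.get(mutant.name, 1.0)             # 0/1 jumps make f discontinuous
    return score
```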
3

Optimization of Pile Groups: A practical study using Genetic Algorithm and Direct Search with four different objective functions

Bengtlars, Ann, Väljamets, Erik January 2014 (has links)
Piling is expensive but often necessary when building large structures, for example bridges. Some pile types, such as steel core piles, are very costly, and it is therefore of great interest to keep the number of piles in a pile group to a minimum. This thesis deals with optimization of pile groups with respect to placement, batter and angle of rotation in order to minimize the number of piles. A program has been developed in which two optimization algorithms, a Genetic Algorithm and Direct Search, and four objective functions have been used. These have been tested and compared to find the most suitable combination for pile group optimization. Three real cases, two bridge supports and one culvert, have been studied using the program. It has been difficult to draw any clear conclusions since the results have been ambiguous. This is probably because only three cases have been tested and the results are very problem-dependent. The outcome depends, for example, on the starting guess and the settings for the optimization. However, the results show that the Genetic Algorithm is somewhat more robust than Direct Search in its ability to remove piles and is therefore preferable for pile group optimization.
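One plausible way to encode such a goal as a single objective — minimize the pile count while penalizing violated capacity checks — is sketched below. The variable layout, the capacity check, and the penalty weight are illustrative assumptions, not the four objective functions actually compared in the thesis.

```python
def pile_group_objective(layout, capacity_violations, penalty_weight=1e3):
    """Illustrative objective: each active pile adds 1 to the cost, and every
    violated capacity check adds a weighted penalty.  `layout` is assumed to
    be a list of (active, x, y, batter, rotation) tuples and
    `capacity_violations(layout)` a hypothetical structural check returning
    violation magnitudes (values <= 0 mean the check is satisfied)."""
    n_active = sum(1 for pile in layout if pile[0])
    violation = sum(max(v, 0.0) for v in capacity_violations(layout))
    return n_active + penalty_weight * violation
```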
4

Direct Search Methods for Nonsmooth Problems using Global Optimization Techniques

Robertson, Blair Lennon January 2010 (has links)
This thesis considers the practical problem of constrained and unconstrained local optimization. This subject has been well studied when the objective function f is assumed to be smooth. However, nonsmooth problems occur naturally and frequently in practice. Here f is assumed to be nonsmooth or discontinuous, without forcing smoothness assumptions near, or at, a potential solution. Various methods have been presented by others to solve nonsmooth optimization problems; however, only partial convergence results are possible for these methods. In this thesis, an optimization method which uses a series of local and localized global optimization phases is proposed. The local phase searches for a local minimum and gives the method its numerical performance on the parts of f which are smooth. The localized global phase exhaustively searches for points of descent in a neighborhood of cluster points. It is the localized global phase which provides strong theoretical convergence results on nonsmooth problems. Algorithms are presented for solving bound constrained, unconstrained and constrained nonlinear nonsmooth optimization problems. These algorithms use direct search methods in the local phase, as they can be applied directly to nonsmooth problems because gradients are not explicitly required. The localized global optimization phase uses a new partitioning random search algorithm to direct random sampling into promising subsets of ℝⁿ. The partition is formed using classification and regression trees (CART) from statistical pattern recognition. The CART partition defines desirable subsets where f is relatively low, based on previous sampling, from which further samples are drawn directly. For each algorithm, convergence to an essential local minimizer of f is demonstrated under mild conditions; that is, a point x* for which the set of all feasible points with lower f values has Lebesgue measure zero for all sufficiently small neighborhoods of x*. Stopping rules are derived for each algorithm, giving practical convergence to estimates of essential local minimizers. Numerical results are presented on a range of nonsmooth test problems in 2 to 10 dimensions, showing the methods are effective in practice.
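The two-phase structure described above can be sketched schematically: a direct search local phase run until no polling step improves, then localized random sampling around the incumbent to hunt for further descent. The sketch below replaces the CART-guided partition with plain uniform sampling in a shrinking neighbourhood, so it only illustrates the control flow, not the thesis algorithms.

```python
import random

def two_phase_minimize(f, x0, radius=1.0, step=0.25, n_samples=100,
                       tol=1e-4, seed=0):
    """Schematic of the local + localized-global structure: a coordinate
    pattern-search local phase, then uniform sampling in a neighbourhood of
    the incumbent to look for further descent (the thesis drives this
    sampling with a CART-built partition; plain uniform sampling is used here)."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    while radius > tol:
        # local phase: coordinate pattern search from the incumbent point
        improved = True
        while improved:
            improved = False
            for i in range(len(x)):
                for d in (step, -step):
                    trial = list(x)
                    trial[i] += d
                    ft = f(trial)
                    if ft < fx:
                        x, fx, improved = trial, ft, True
        # localized global phase: sample around the cluster point x
        best, fbest = x, fx
        for _ in range(n_samples):
            s = [xi + rng.uniform(-radius, radius) for xi in x]
            fs = f(s)
            if fs < fbest:
                best, fbest = s, fs
        if fbest < fx:
            x, fx = best, fbest            # descent found: rerun the local phase
        else:
            radius *= 0.5                  # no descent: shrink the neighbourhood
            step *= 0.5
    return x, fx
```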
5

Efficient Solution Of Optimization Problems With Constraints And/or Cost Functions Expensive To Evaluate

Kurtdere, Ahmet Gokhan 01 January 2010 (has links) (PDF)
There are many optimization problems motivated by engineering applications whose constraints and/or cost functions are computationally expensive to evaluate. What is more, derivative information is usually unavailable, or available only at considerable cost. For that reason, classical optimization methods based on derivatives are not applicable. This study presents a framework, based on methods available in the literature, to overcome this important problem. First, a penalized model is constructed in which the violation of the constraints is added to the cost function. The model is optimized with the help of stochastic approximation algorithms until a point satisfying the constraints is obtained. Then, a sample point set satisfying the constraints is obtained by taking advantage of sampling strategies based on direct search algorithms. In this context, two search-direction estimation methods are applicable: one based on the convex hull of the sample point set and one based on its estimated radius of curvature. The point set is used to create a barrier which imposes a large cost on points near the boundary. The aim is to obtain convergence to local optima by following the most promising direction with the help of direct search methods. As regards the evaluation of the cost function, there are two directions to follow: (a) gradient-based methods, and (b) non-gradient methods. In gradient-based methods, the gradient is approximated using so-called stochastic approximation algorithms. In the latter case, a sampling strategy based on direct search algorithms is used. This study is concluded by applying all of these ideas to the solution of complicated test problems where the cost and constraint functions are costly to evaluate.
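A hedged sketch of the first two ingredients mentioned — a penalized model that folds constraint violations into the cost, and a stochastic-approximation (SPSA-style) gradient estimate that needs only two cost evaluations — is given below. The penalty weight, the g_i(x) <= 0 feasibility convention, and the perturbation size are illustrative assumptions.

```python
import random

def penalized_cost(f, constraints, x, mu=100.0):
    """Illustrative penalized model: the original (expensive) cost plus a
    weighted sum of constraint violations; `constraints` is assumed to be a
    list of functions g_i with g_i(x) <= 0 meaning feasible."""
    violation = sum(max(g(x), 0.0) for g in constraints)
    return f(x) + mu * violation

def spsa_gradient(cost, x, c=0.1, seed=None):
    """Simultaneous-perturbation gradient estimate, one form of stochastic
    approximation: two cost evaluations regardless of the dimension."""
    rng = random.Random(seed)
    delta = [rng.choice((-1.0, 1.0)) for _ in x]
    x_plus = [xi + c * di for xi, di in zip(x, delta)]
    x_minus = [xi - c * di for xi, di in zip(x, delta)]
    diff = (cost(x_plus) - cost(x_minus)) / (2.0 * c)
    return [diff / di for di in delta]
```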
6

Tiesioginio paieškos metodo lyderio ir pasekėjo uždaviniui realizacija / Implementation of Direct Search Method for Leader – Follower Problem

Jurgaitytė, Eugenija 16 July 2014 (has links)
The aim of this bachelor's thesis is to implement the direct search method for the leader–follower problem. To this end, the mathematical formulation of the direct search method and the algorithm presented in the paper by Dali Zhang and Gui-Hua Lin, "Bilevel Direct Search Method for Leader–Follower Equilibrium Problems and Applications", were analyzed. A program implementing the method was developed, and the method was applied to solving an oligopolistic market model (deterministic, non-deterministic, stochastic), as well as to the oligopolistic market model in which the parameters of the algorithm are generated randomly.
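The bilevel structure implied above — every leader decision evaluated by the direct search requires solving the follower's problem first — can be expressed as a single outer objective, roughly as in the hedged sketch below; `solve_follower` and `leader_profit` are hypothetical stand-ins for the oligopoly model, and this is not the Zhang–Lin algorithm itself.

```python
def leader_objective(leader_x, solve_follower, leader_profit):
    """For a candidate leader decision, compute the follower's best response by
    solving the lower-level problem, then evaluate the leader's payoff at the
    resulting pair.  Any derivative-free direct search can then minimize this
    function over the leader's variables."""
    follower_y = solve_follower(leader_x)            # lower-level (follower) problem
    return -leader_profit(leader_x, follower_y)      # minimize the negative profit
```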
7

An Investigation of MADS for the Solution of Non-convex Control Co-Design Problems

Dandawate, Sushrut Laxmikant January 2021 (has links)
No description available.
8

Global Optimization of Transmitter Placement for Indoor Wireless Communication Systems

He, Jian 30 August 2002 (has links)
The DIRECT (DIviding RECTangles) algorithm of Jones et al., a variant of Lipschitzian methods for bound constrained global optimization, has been applied to optimal transmitter placement for indoor wireless systems. Power coverage and BER (bit error rate) are considered as two criteria for optimizing the locations of a specified number of transmitters across the feasible region of the design space. The performance of a DIRECT implementation in such applications depends on the characteristics of the objective function, the problem dimension, and the desired solution accuracy. Implementations with static data structures often fail in practice because of unpredictable memory requirements. This is especially critical in S⁴W (Site-Specific System Simulator for Wireless communication systems), where the DIRECT optimization is just one small component connected to a parallel 3D propagation ray tracing modeler running on a 200-node Beowulf cluster of Linux workstations, and surrogate functions for a WCDMA (wideband code division multiple access) simulator are also used to estimate the channel performance. Any component failure of this large computation would abort the entire design process. To make the DIRECT global optimization algorithm efficient and robust, a set of dynamic data structures is proposed here to balance the memory requirements with execution time, while simultaneously adapting to arbitrary problem size. The focus is on design issues of the dynamic data structures, related memory management strategies, and application issues of the DIRECT algorithm to transmitter placement optimization for wireless communication systems. Results for two indoor systems are presented to demonstrate the effectiveness of the present work. / Master of Science
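The memory issue highlighted above — static arrays of hyperrectangles failing unpredictably — motivates storage that grows only as the search actually expands. The sketch below groups boxes by diameter in lazily allocated lists; it is an illustration of that idea, not the S⁴W implementation.

```python
from collections import defaultdict

class BoxStore:
    """Illustrative dynamic storage for DIRECT-style hyperrectangles: boxes are
    grouped by (rounded) diameter, and a group is allocated only when first
    needed, so memory tracks the actual search rather than a static bound."""

    def __init__(self, ndigits=12):
        self.groups = defaultdict(list)   # diameter -> list of (f_value, center, widths)
        self.ndigits = ndigits

    def add(self, f_value, center, widths):
        diam = round(sum(w * w for w in widths) ** 0.5, self.ndigits)
        self.groups[diam].append((f_value, center, widths))

    def best_per_diameter(self):
        """One candidate per diameter class (the lowest f value), as needed when
        selecting potentially optimal boxes; returned sorted by diameter."""
        return sorted((d, min(boxes)) for d, boxes in self.groups.items())
```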
9

Algorithmes d'optimisation sans dérivées à caractère probabiliste ou déterministe : analyse de complexité et importance en pratique / Derivative-free optimization methods based on probabilistic and deterministic properties : complexity analysis and numerical relevance

Royer, Clément 04 November 2016 (has links)
Randomization has had a major impact on the latest developments in the field of numerical optimization, partly due to the outbreak of machine learning applications. In this increasingly popular context, classical nonlinear programming algorithms have indeed been outperformed by variants relying on randomness. The cost of these variants is usually lower than for the traditional schemes; however, theoretical guarantees may not be straightforward to carry over from the deterministic to the randomized setting. Complexity analysis is a useful tool in the latter case, as it helps in providing estimates on the convergence speed of a given scheme, which implies some form of convergence. Such a technique has also gained attention from the deterministic optimization community thanks to recent findings in the nonconvex case, as it brings supplementary indicators on the behavior of an algorithm.
In this thesis, we investigate the practical enhancement of deterministic optimization algorithms through the introduction of random elements within those frameworks, as well as the numerical impact of their complexity results. We focus on direct-search methods, one of the main classes of derivative-free algorithms, yet our analysis applies to a wide range of derivative-free methods. We propose probabilistic variants of the classical properties required to ensure convergence of the studied methods, then highlight the practical efficiency induced by their lower consumption of function evaluations. First-order concerns form the basis of our analysis, which we apply to address unconstrained and linearly-constrained problems. The observed gains incite us to additionally take second-order considerations into account. Using complexity properties of derivative-free schemes, we develop several frameworks in which information of order two is exploited. Both a deterministic and a probabilistic analysis can be performed on these schemes. The latter is an opportunity to introduce supplementary probabilistic properties, together with their impact on numerical efficiency and robustness.
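One concrete example of the probabilistic variants discussed — polling along a small number of randomly drawn directions with a sufficient-decrease test, instead of a full deterministic positive basis — is sketched below as a generic illustration rather than any of the thesis algorithms; the decrease constant and the step-update rule are arbitrary choices.

```python
import math
import random

def probabilistic_direct_search(f, x0, n_dirs=2, step=1.0, tol=1e-6,
                                max_iter=1000, seed=0):
    """Generic sketch of direct search with random polling: draw a few random
    unit directions, poll each direction and its opposite, accept the first
    point giving sufficient decrease, and expand or shrink the step size."""
    rng = random.Random(seed)
    x, fx, n = list(x0), f(x0), len(x0)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for _ in range(n_dirs):
            v = [rng.gauss(0.0, 1.0) for _ in range(n)]
            norm = math.sqrt(sum(vi * vi for vi in v)) or 1.0
            d = [vi / norm for vi in v]
            for sign in (1.0, -1.0):                     # poll along d and -d
                trial = [xi + sign * step * di for xi, di in zip(x, d)]
                ft = f(trial)
                if ft < fx - 1e-4 * step * step:         # sufficient decrease test
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        step = step * 2.0 if improved else step * 0.5    # expand on success, shrink otherwise
    return x, fx
```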
