21

Generalized hill climbing algorithms for discrete optimization problems

Johnson, Alan W. 06 June 2008 (has links)
Generalized hill climbing (GHC) algorithms are introduced as a tool to address difficult discrete optimization problems. Particular formulations of GHC algorithms include simulated annealing (SA), local search, and threshold accepting (TA), among others. A proof of convergence of GHC algorithms is presented that relaxes the sufficient conditions for the most general proof of convergence for stochastic search algorithms in the literature (Anily and Federgruen [1987]). Proofs of convergence for SA are based on the concept that deteriorating (hill climbing) transitions between neighboring solutions are accepted by comparing a deterministic function of both the solution change cost and a temperature parameter to a uniform (0,1) random variable. GHC algorithms represent a more general model, whereby deteriorating moves are accepted according to a general random variable. Computational results are reported that illustrate relationships between the GHC algorithm's finite-time performance on three problems and the general random variable formulations used. The dissertation concludes with suggestions for further research. / Ph. D.
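
To make the acceptance rule concrete, here is a minimal Python sketch of a GHC-style loop, assuming a minimization problem; the neighborhood function and the hill-climbing random variable are placeholders chosen for illustration, not the specific formulations studied in the dissertation.

```python
import math
import random

def generalized_hill_climb(x0, cost, neighbors, hill_climb_rv, iterations=10_000):
    """Generic GHC loop: a deteriorating move is accepted when the cost
    increase does not exceed a draw from a user-supplied random variable."""
    x, fx = x0, cost(x0)
    best, f_best = x, fx
    for k in range(iterations):
        y = random.choice(neighbors(x))
        fy = cost(y)
        # Improving moves always pass; hill-climbing moves pass only if the
        # cost increase is at most the value drawn for iteration k.
        if fy - fx <= hill_climb_rv(k):
            x, fx = y, fy
            if fx < f_best:
                best, f_best = x, fx
    return best, f_best

# Simulated annealing corresponds to R_k = -T_k * ln(U) with U ~ Uniform(0,1),
# threshold accepting to a deterministic R_k = T_k, and local search to R_k = 0.
def sa_acceptance_rv(k, t0=10.0, alpha=0.999):
    return -t0 * (alpha ** k) * math.log(1.0 - random.random())
```
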
22

« Resolution Search » et problèmes d’optimisation discrète / Resolution Search and Discrete Optimization Problems

Posta, Marius 03 February 2012 (has links)
Les problèmes d’optimisation discrète sont pour beaucoup difficiles à résoudre, de par leur nature combinatoire. Citons par exemple les problèmes de programmation linéaire en nombres entiers. Une approche couramment employée pour les résoudre exactement est l’approche de Séparation et Évaluation Progressive. Une approche différente appelée « Resolution Search » a été proposée par Chvátal en 1997 pour résoudre exactement des problèmes d’optimisation à variables 0-1, mais elle reste mal connue et n’a été que peu appliquée depuis. Cette thèse tente de remédier à cela, avec un succès partiel. Une première contribution consiste en la généralisation de Resolution Search à tout problème d’optimisation discrète, tout en introduisant de nouveaux concepts et définitions. Ensuite, afin de confirmer l’intérêt de cette approche, nous avons essayé de l’appliquer en pratique pour résoudre efficacement des problèmes bien connus. Bien que notre recherche n’ait pas abouti sur ce point, elle nous a amené à de nouvelles méthodes pour résoudre exactement les problèmes d’affectation généralisée et de localisation simple. Après avoir présenté ces méthodes, la thèse conclut avec un bilan et des perspectives sur l’application pratique de Resolution Search. / The combinatorial nature of discrete optimization problems often makes them difficult to solve. Consider for instance integer linear programming problems, which are commonly solved using a Branch-and-Bound approach. An alternative approach, Resolution Search, was proposed by Chvátal in 1997 for solving 0-1 optimization problems, but remains little known to this day and as such has seen few practical applications. This thesis attempts to remedy this state of affairs, with partial success. Its first contribution consists in the generalization of Resolution Search to any discrete optimization problem, while introducing new definitions and concepts. Next, we tried to validate this approach by attempting to solve well-known problems efficiently with it. Although our research did not succeed in this respect, it led us to new methods for solving the generalized assignment and uncapacitated facility location problems. After presenting these methods, this thesis concludes with a summary of our attempts at practical application of Resolution Search, along with further perspectives on this matter.
23

A Branch And Bound Algorithm For Resource Leveling Problem

Mutlu, Mustafa Cagdas 01 August 2010 (has links) (PDF)
The Resource Leveling Problem (RLP) aims to minimize undesired fluctuations in resource distribution curves, which cause several practical problems. Many studies conclude that commercial project management software packages cannot effectively deal with the RLP. In this study a branch and bound algorithm is presented for solving the RLP for single- and multi-resource, small-size networks. The algorithm adopts a depth-first strategy and stores the start times of non-critical activities in the nodes of the search tree. Optimal resource distributions for four different types of resource leveling metrics can be obtained via the developed procedure. To prune more of the search tree and thereby reduce computation time, several lower bound calculation methods are employed. Experimental results from 20 problems showed that the suggested algorithm can successfully locate optimal solutions for networks with up to 20 activities. The algorithm presented in this study contributes to the literature in two ways. First, the new lower bound improvement method (the maximum allowable daily resources method) introduced in this study reduces the computation time required to reach the optimal solution of the RLP. Second, optimal solutions of several small-sized problems have been obtained by the algorithm for some traditional and recently suggested leveling metrics. Among these metrics, Resource Idle Day (RID) has been utilized in an exact method for the first time. All these solutions may form a basis for performance evaluation of heuristic and metaheuristic procedures for the RLP. Limitations of the developed branch and bound procedure are discussed and possible further improvements are suggested.
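
The overall search strategy described above can be pictured with a schematic depth-first branch-and-bound over the start times of non-critical activities; this is a generic sketch, not the thesis's implementation, and the leveling metric and lower-bound routine are left as placeholders.

```python
def branch_and_bound(activities, windows, leveling_metric, lower_bound):
    """Schematic depth-first branch and bound for resource leveling.

    activities      -- non-critical activity ids, in branching order
    windows[a]      -- feasible start times of activity a (ES..LS from CPM)
    leveling_metric -- metric value of a complete schedule (to be minimized)
    lower_bound     -- optimistic estimate for a partial schedule
    """
    best_value, best_schedule = float("inf"), None

    def visit(idx, partial):
        nonlocal best_value, best_schedule
        if idx == len(activities):                  # leaf: complete schedule
            value = leveling_metric(partial)
            if value < best_value:
                best_value, best_schedule = value, dict(partial)
            return
        a = activities[idx]
        for start in windows[a]:                    # branch on candidate start times
            partial[a] = start
            if lower_bound(partial) < best_value:   # prune dominated subtrees
                visit(idx + 1, partial)
            del partial[a]

    visit(0, {})
    return best_schedule, best_value
```
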
24

Optimum Design Of Steel Structures Via Differential Evolution Algorithm And Application Programming Interface Of Sap2000

Dedekarginoglu, Ozgur 01 March 2012 (has links) (PDF)
The objective of this study is to investigate the use and efficiency of the Differential Evolution (DE) method in structural optimization. The solution algorithm developed with DE is implemented in software called SOP2011 using VB.NET. SOP2011 is automated to achieve size-optimum design of steel structures consisting of 1-D elements such as trusses and frames subject to design provisions according to ASD-AISC (2010) and LRFD-AISC (2010). SOP2011 works simultaneously with the structural analysis and design software SAP2000 in order to find global or near-optimum designs for real-size truss and frame structures, where the optimization problem is classified as constrained, discrete size optimization. The software interacts with SAP2000 through the Open Application Programming Interface (OAPI), which provides access to SAP2000 inputs and outputs. It is programmed to find reasonable, optimized results for steel truss and frame structures by choosing appropriate ready sections for structural members on a minimum-weight basis via the DE technique. Based on the comparison of the obtained results with the literature, the DE algorithm with a penalty function implementation proves to be an efficient optimization technique among the several major methods used for discrete constrained size optimization of real-size steel structures. It has also been shown that, by using optimized designs obtained with DE, the weight of the structures can be reduced by up to 67.9% for steel truss structures and 41.7% for steel frame structures compared to the SAP2000 auto design procedure, hence resulting in significant savings of material, cost, work hours and energy required for the project.
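
As a rough illustration of how DE can be applied to discrete size optimization with a penalty function, the sketch below evolves vectors of section-list indices; the weight and constraint-violation callbacks are placeholders standing in for the SAP2000/OAPI analysis, and the parameter values are illustrative only.

```python
import random

def discrete_de(n_members, n_sections, weight, violation,
                pop_size=30, F=0.5, CR=0.8, generations=200, penalty=1e6):
    """Differential evolution over discrete section indices with a penalty term.

    weight(design)    -- total structure weight for a list of section indices
    violation(design) -- summed normalized constraint violations (0 if feasible)
    """
    def fitness(d):
        return weight(d) + penalty * violation(d)

    def clip(v):                       # round mutant values back to valid indices
        return min(max(int(round(v)), 0), n_sections - 1)

    pop = [[random.randrange(n_sections) for _ in range(n_members)]
           for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            # DE/rand/1 mutation and binomial crossover, mapped to discrete indices.
            trial = [clip(a[k] + F * (b[k] - c[k])) if random.random() < CR
                     else pop[i][k] for k in range(n_members)]
            if fitness(trial) <= fitness(pop[i]):   # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=fitness)
```
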
25

A multi-fidelity analysis selection method using a constrained discrete optimization formulation

Stults, Ian Collier 17 August 2009 (has links)
The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies; when it is considered, it is treated in only a limited fashion, which calls into question the validity of selections made on the basis of these results. Neglecting model uncertainty can cause costly redesigns of concepts later in the design process, or even program cancellation. If one were instead to quantify the model uncertainty of the tools being used, and to use this information when selecting the tools for a contributing analysis, studies could be conducted more efficiently and the trust placed in results could be quantified. Existing methods for doing this are generally neither rigorous nor traceable, and in many cases the improvement and additional time spent performing enhanced calculations are washed out by less accurate calculations performed downstream. The intent of this research is to resolve this issue by providing a method that minimizes the time spent conducting computer simulations while meeting accuracy and concept resolution requirements for results.
26

Problems, Models and Algorithms in One- and Two-Dimensional Cutting / Probleme, Modelle und Algorithmen in ein- und zweidimensionalem Zuschnitt

Belov, Gleb 20 January 2004 (has links) (PDF)
Within such disciplines as Management Science, Information and Computer Science, Engineering, Mathematics and Operations Research, problems of cutting and packing (C&P) of concrete and abstract objects appear under various specifications (cutting problems, knapsack problems, container and vehicle loading problems, pallet loading, bin packing, assembly line balancing, capital budgeting, changing coins, etc.), although they all have essentially the same logical structure. In cutting problems, a large object must be divided into smaller pieces; in packing problems, small items must be combined into large objects. Most of these problems are NP-hard. Since the pioneering work of L.V. Kantorovich in 1939, which first appeared in the West in 1960, there has been a steadily growing number of contributions in this research area. In 1961, P. Gilmore and R. Gomory presented a linear programming relaxation of the one-dimensional cutting stock problem. The best-performing algorithms today are based on their relaxation. It was, however, more than three decades before the first 'optimum' algorithms appeared in the literature, and they even proved to perform better than heuristics. They were of two main kinds: enumerative algorithms working by separation of the feasible set, and cutting plane algorithms which cut off infeasible solutions. For many other combinatorial problems, these two approaches have been successfully combined. In this thesis we do so for one-dimensional stock cutting and two-dimensional two-stage constrained cutting. For the two-dimensional problem, the combined scheme mostly provides better solutions than other methods, especially on large-scale instances, in little time. For the one-dimensional problem, the integration of cuts into the enumerative scheme improves the results of the latter only in exceptional cases. While the main optimization goal is to minimize material input or trim loss (waste), in a real-life cutting process there are further criteria, e.g., the number of different cutting patterns (setups) and the number of open stacks. Some new methods and models are proposed. Then, an approach combining both objectives is presented, to our knowledge, for the first time. We believe this approach will be highly relevant for industry.
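
For orientation, the Gilmore-Gomory relaxation referred to above can be stated in its standard pattern-based form (notation chosen here, not taken from the thesis). With stock length L, piece lengths l_i, demands b_i, and x_j the number of times pattern j is cut:

\[
\min \sum_{j \in J} x_j
\quad \text{s.t.} \quad
\sum_{j \in J} a_{ij}\, x_j \ge b_i \quad (i = 1,\dots,m), \qquad x_j \ge 0,
\]

where each pattern column \(a^j = (a_{1j},\dots,a_{mj})\) satisfies \(\sum_i l_i a_{ij} \le L\) with \(a_{ij}\) nonnegative integers; the integrality requirement on \(x_j\) is dropped in the relaxation, which is typically solved by column generation with a knapsack pricing subproblem.
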
27

Designing and Probing Open Quantum Systems: Quantum Annealing, Excitonic Energy Transfer, and Nonlinear Fluorescence Spectroscopy

Perdomo, Alejandro 27 July 2012 (has links)
The 20th century saw the first revolution of quantum mechanics, setting the rules for our understanding of light, matter, and their interaction. The 21st century is focused on using these quantum mechanical laws to develop technologies which allow us to solve challenging practical problems. One of these directions is the use of quantum devices which promise to surpass the best computers and best known classical algorithms for solving certain tasks. Crucial to the design of realistic devices and technologies is to account for the open nature of quantum systems and to cope with their interactions with the environment. In the first part of this dissertation, we show how to tackle classical optimization problems of interest in the physical sciences within one of these quantum computing paradigms, known as quantum annealing (QA). We present the largest implementation of QA on a biophysical problem (six different experiments with up to 81 superconducting quantum bits). Although the cases presented here can be solved on a classical computer, we present the first implementation of lattice protein folding on a quantum device under the Miyazawa-Jernigan model. This is the first step towards studying optimization problems in biophysics and statistical mechanics using quantum devices. In the second part of this dissertation, we focus on the problem of excitonic energy transfer. We provide an intuitive platform for engineering exciton transfer dynamics and we show that careful consideration of the properties of the environment leads to opportunities to engineer the transfer of an exciton. Since excitons in nanostructures are proposed for use in quantum information processing and artificial photosynthetic designs, our approach paves the way for engineering a wide range of desired exciton dynamics. Finally, we develop the theory for a two-dimensional electronic spectroscopic technique based on fluorescence (2DFS) and challenge previous theoretical results claiming its equivalence to the two-dimensional photon echo (2DPE) technique, which is based on polarization. Experimental realization of this technique confirms our theoretical predictions. The new technique is more sensitive than 2DPE as a tool for conformational determination of excitonically coupled chromophores and offers the possibility of applying two-dimensional electronic spectroscopy to single molecules.
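
For context, quantum annealing on superconducting-qubit hardware is conventionally described as an interpolation between a transverse-field driver and a classical (Ising) problem Hamiltonian; the standard form below is given for orientation only and is not the dissertation's specific Miyazawa-Jernigan encoding.

\[
H(s) = -A(s) \sum_i \sigma^x_i + B(s)\left( \sum_i h_i \sigma^z_i + \sum_{i<j} J_{ij}\, \sigma^z_i \sigma^z_j \right), \qquad s = t/t_f \in [0,1],
\]

with \(A(0) \gg B(0)\) and \(A(1) \ll B(1)\), so that if the evolution stays near the instantaneous ground state, the final state encodes the minimum of the classical objective \(E(\mathbf{z}) = \sum_i h_i z_i + \sum_{i<j} J_{ij} z_i z_j\) over \(z_i \in \{-1,+1\}\).
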
28

Discrete optimization via simulation with stochastic constraints

Park, Chuljin 20 September 2013 (has links)
In this thesis, we first develop a new method called the penalty function with memory (PFM). PFM consists of a penalty parameter and a measure of constraint violation, and it converts a discrete optimization via simulation (DOvS) problem with stochastic constraints into a series of DOvS problems without stochastic constraints. PFM determines the penalty of a visited solution based on past results of feasibility checks on that solution. Specifically, assuming a minimization problem, the penalty parameter of PFM, namely the penalty sequence, diverges to infinity for an infeasible solution but converges to zero almost surely for any strictly feasible solution under certain conditions. For a feasible solution located on the boundary of the feasible and infeasible regions, the sequence converges to zero either with high probability or almost surely. As a result, a DOvS algorithm combined with PFM performs well even when optimal solutions are tight or nearly tight. Second, we design an optimal water quality monitoring network for river systems. The problem is to find the optimal locations of a finite number of monitoring devices that minimize the expected detection time of a contaminant spill event while guaranteeing good detection reliability. When uncertainties in spill and rain events are considered, both the expected detection time and the detection reliability need to be estimated by stochastic simulation. This problem is formulated as a stochastic DOvS problem with the objective of minimizing expected detection time and with a stochastic constraint on detection reliability, and it is solved by a DOvS algorithm combined with PFM. Finally, we improve PFM by combining it with an approximate budget allocation procedure. We revise an existing optimal budget allocation procedure so that it can handle active constraints and satisfy necessary conditions for the convergence of PFM.
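
A rough sketch of the penalty-with-memory idea as described in this abstract is given below; the actual penalty-sequence update rules and the conditions guaranteeing convergence are specified in the thesis, and the class and method names here are hypothetical.

```python
class PenaltyWithMemory:
    """Rough sketch: track feasibility-check history per visited solution and
    penalize the objective accordingly (not the thesis's exact update rule)."""

    def __init__(self):
        self.history = {}   # solution -> list of observed constraint violations

    def penalized_objective(self, solution, objective_sample, violation_sample):
        self.history.setdefault(solution, []).append(violation_sample)
        observations = self.history[solution]
        mean_violation = sum(observations) / len(observations)
        # The penalty grows with the number of visits for solutions that keep
        # looking infeasible, and shrinks toward zero for solutions whose
        # observed violation averages out to (nearly) zero.
        penalty = len(observations) * max(mean_violation, 0.0)
        return objective_sample + penalty
```
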
29

Towards Visualization of Discrete Optimization Problems and Search Algorithms

Volke, Sebastian 24 July 2019 (has links)
Diskrete Optimierung beschäftigt sich mit dem Identifizieren einer Kombination oder Permutation von Elementen, die im Hinblick auf ein gegebenes quantitatives Kriterium optimal ist. Anwendungen dafür entstehen aus Problemen in der Wirtschaft, der industriellen Fertigung, den Ingenieursdisziplinen, der Mathematik und Informatik. Dazu gehören unter anderem maschinelles Lernen, die Planung der Reihenfolge und Terminierung von Fertigungsprozessen oder das Layout von integrierten Schaltkreisen. Häufig sind diskrete Optimierungsprobleme NP-hart. Dadurch kommt der Erforschung effizienter, heuristischer Suchalgorithmen eine große Bedeutung zu, um für mittlere und große Probleminstanzen überhaupt gute Lösungen finden zu können. Dabei wird die Entwicklung von Algorithmen dadurch erschwert, dass Eigenschaften der Probleminstanzen aufgrund von deren Größe und Komplexität häufig schwer zu identifizieren sind. Ebenso herausfordernd ist die Analyse und Evaluierung von gegebenen Algorithmen, da das Suchverhalten häufig schwer zu charakterisieren ist. Das trifft besonders im Fall von emergentem Verhalten zu, wie es in der Forschung der Schwarmintelligenz vorkommt. Visualisierung zielt auf das Nutzen des menschlichen Sehens zur Datenverarbeitung ab. Das Gehirn hat enorme Fähigkeiten optische Reize von den Sehnerven zu analysieren, Formen und Muster darin zu erkennen, ihnen Bedeutung zu verleihen und dadurch ein intuitives Verstehen des Gesehenen zu ermöglichen. Diese Fähigkeit kann im Speziellen genutzt werden, um Hypothesen über komplexe Daten zu generieren, indem man sie in einem Bild repräsentiert und so dem visuellen System des Betrachters zugänglich macht. Bisher wurde Visualisierung kaum genutzt um speziell die Forschung in diskreter Optimierung zu unterstützen. Mit dieser Dissertation soll ein Ausgangspunkt geschaffen werden, um den vermehrten Einsatz von Visualisierung bei der Entwicklung von Suchheuristiken zu ermöglichen. Dazu werden zunächst die zentralen Fragen in der Algorithmenentwicklung diskutiert und daraus folgende Anforderungen an Visualisierungssysteme abgeleitet. Mögliche Forschungsrichtungen in der Visualisierung, die konkreten Nutzen für die Forschung in der Optimierung ergeben, werden vorgestellt. Darauf aufbauend werden drei Visualisierungssysteme und eine Analysemethode für die Erforschung diskreter Suche vorgestellt. Drei wichtige Aufgaben von Algorithmendesignern werden dabei adressiert. Zunächst wird ein System für den detaillierten Vergleich von Algorithmen vorgestellt. Auf der Basis von Zwischenergebnissen der Algorithmen auf einer Probleminstanz wird der Suchverlauf der Algorithmen dargestellt. Der Fokus liegt dabei dem Verlauf der Qualität der Lösungen über die Zeit, wobei die Darstellung durch den Experten mit zusätzlichem Wissen oder Klassifizierungen angereichert werden kann. Als zweites wird ein System für die Analyse von Suchlandschaften vorgestellt. Auf Basis von Pfaden und Abständen in der Landschaft wird eine Karte der Probleminstanz gezeichnet, die strukturelle Merkmale intuitiv erfassbar macht. Der zweite Teil der Dissertation beschäftigt sich mit der topologischen Analyse von Suchlandschaften, aufbauend auf einer Schwellwertanalyse. Ein Visualisierungssystem wird vorgestellt, dass ein topologisch equivalentes Höhenprofil der Suchlandschaft darstellt, um die topologische Struktur begreifbar zu machen. 
Dieses System ermöglicht zudem, den Suchverlauf eines Algorithmus direkt in der Suchlandschaft zu beobachten, was insbesondere bei der Untersuchung von Schwarmintelligenzalgorithmen interessant ist. Die Berechnung der topologischen Struktur setzt eine vollständige Aufzählung aller Lösungen voraus, was aufgrund der Größe der Suchlandschaften im allgemeinen nicht möglich ist. Um eine Anwendbarkeit der Analyse auf größere Probleminstanzen zu ermöglichen, wird eine Methode zur Abschätzung der Topologie vorgestellt. Die Methode erlaubt eine schrittweise Verfeinerung der topologischen Struktur und lässt sich heuristisch steuern. Dadurch können Wissen und Hypothesen des Experten einfließen um eine möglichst hohe Qualität der Annäherung zu erreichen bei gleichzeitig überschaubarem Berechnungsaufwand. / Discrete optimization deals with the identification of combinations or permutations of elements that are optimal with regard to a specific, quantitative criterion. Applications arise from problems in economy, manufacturing, engineering, mathematics and computer sciences. Among them are machine learning, scheduling of production processes, and the layout of integrated electrical circuits. Typically, discrete optimization problems are NP-hard. Thus, the investigation of efficient, heuristic search algorithms is of high relevance in order to find good solutions for medium- and large-sized problem instances at all. The development of such algorithms is complicated because the properties of problem instances are often hard to identify due to the size and complexity of the instances. Likewise, the analysis and evaluation of given algorithms is challenging, because the search behavior of an algorithm is hard to characterize, especially in the case of emergent behavior as investigated in swarm intelligence research. Visualization aims to take advantage of human vision for data processing. The visual brain possesses tremendous capabilities to analyse optical stimulation received through the visual nerves, perceive shapes and patterns, assign meaning to them and thus facilitate an intuitive understanding of the seen. In particular, this can be used to generate hypotheses about complex data by representing them in a well-designed depiction and making them accessible to the visual system of the viewer. So far, visualization has seen little use in support of discrete optimization research. This thesis is meant as a starting point to allow for an increased application of visualization throughout the process of developing discrete search heuristics. For this, we discuss the central questions that arise from the development of heuristics as well as the resulting requirements on visualization systems. Possible directions of visualization research that yield a specific benefit for optimization research are described. Based on this, three visualization systems and one analysis method are presented. These address three important tasks of algorithm designers. First, a system for the fine-grained comparison of algorithms is introduced. Based on the intermediate results of algorithm runs on a given problem instance, the search process is visualized. The focus is on the progress of the solution quality over time, while allowing the algorithm expert to augment the depiction with additional domain knowledge and classifications of individual solutions. Second, a system for the analysis of search landscapes is presented. Based on paths and distances in the landscape, a map of the problem instance is drawn that facilitates an intuitive grasp of structural properties. The second part of this thesis focuses on the topological analysis of search landscapes, based on barriers. A visualization system is presented that shows a topologically equivalent height profile of the search landscape. Further, the system makes it possible to observe the search process of an algorithm directly within the search landscape, which is of particular interest when researching swarm intelligence algorithms. The computation of the topological structure requires a complete enumeration of all solutions, which is not possible in the general case due to the size of the search landscapes. In order to enable an application to larger problem instances, we introduce a method to approximate the topological structure. The method allows for an incremental refinement of the topological approximation that can be controlled using a heuristic. Thus, the domain expert can introduce her knowledge and also hypotheses about the problem instance into the analysis so that an approximation of good quality is achieved with reasonable computational effort.
30

A Fitness Function Elimination Theory For Blackbox Optimization And Problem Class Learning

Anil, Gautham 01 January 2012 (has links)
The modern view of optimization is that optimization algorithms are not designed in a vacuum, but can make use of information regarding the broad class of objective functions from which a problem instance is drawn. Using this knowledge, we want to design optimization algorithms that execute quickly (efficiency), solve the objective function with minimal samples (performance), and are applicable over a wide range of problems (abstraction). However, we present a new theory for blackbox optimization from which we conclude that, of these three desired characteristics, only two can be maximized by any algorithm. We put forward an alternate view of optimization where we use knowledge about the problem class and samples from the problem instance to identify which problem instances from the class are being solved. From this Elimination of Fitness Functions approach, an idealized optimization algorithm that minimizes sample counts over any problem class, given complete knowledge about the class, is designed. This theory allows us to learn more about the difficulty of various problems, and we are able to use it to develop problem complexity bounds. We present general methods to model this algorithm over a particular problem class and gain efficiency at the cost of specifically targeting that class. This is demonstrated over the Generalized Leading-Ones problem and a generalization called LO∗∗, and efficient algorithms with optimal performance are derived and analyzed. We also tighten existing bounds for LO∗∗. Additionally, we present a probabilistic framework based on our Elimination of Fitness Functions approach that clarifies how one can ideally learn about the problem class we face from the objective functions. This problem learning increases the performance of an optimization algorithm at the cost of abstraction. In the context of this theory, we re-examine the blackbox framework as an algorithm design framework and suggest several improvements to existing methods, including incorporating problem learning, not being restricted to the blackbox framework, and building parametrized algorithms. We feel that this theory and our recommendations will help a practitioner make substantially better use of all that is available in typical practical optimization algorithm design scenarios.
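
The elimination idea can be illustrated on a toy instance of the Leading-Ones class mentioned above: the problem class is the set of all hidden target strings, and each sample rules out every candidate inconsistent with the observed fitness. This brute-force sketch only shows the mechanics; it makes no claim about the sample-efficiency or complexity results derived in the dissertation.

```python
from itertools import product

def leading_ones(target, x):
    """Generalized Leading-Ones: length of the longest prefix of x matching target."""
    k = 0
    for t_bit, x_bit in zip(target, x):
        if t_bit != x_bit:
            break
        k += 1
    return k

def eliminate_and_identify(n, query):
    """Toy elimination loop over the class of all 2**n Leading-Ones targets.

    query(x) evaluates the hidden instance at x; every observed fitness value
    discards the candidate targets that are inconsistent with the sample.
    """
    candidates = [t for t in product((0, 1), repeat=n)]
    samples = 0
    while len(candidates) > 1:
        x = candidates[0]                        # probe any still-plausible point
        observed = query(x)
        samples += 1
        candidates = [t for t in candidates if leading_ones(t, x) == observed]
    return candidates[0], samples

# Usage: hidden = (1, 0, 1, 1)
#        eliminate_and_identify(4, lambda x: leading_ones(hidden, x))
```
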
