201

Topology sensitive algorithms for large scale uncapacitated covering problem

Sabbir, Tarikul Alam Khan January 2011
NP-hard facility location problems arise routinely in wireless network planning. In our research, we study the Covering problem, a well-known facility location problem with applications in wireless network deployment, focusing on networks with a sparse structure. First, we analyze two heuristics for building tree decompositions, based on vertex separators and perfect elimination orders, and extend the vertex separator heuristic to improve its running time. Second, we propose a dynamic programming algorithm over the tree decomposition that solves the Covering problem on the network optimally, along with several heuristic techniques to speed the algorithm up. Experimental results show that one variant of the dynamic programming algorithm outperforms state-of-the-art commercial mathematical optimization software on several instances.
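The flavor of dynamic programming over a tree decomposition is easiest to see in the treewidth-1 special case. Below is a minimal Python sketch (ours, not the thesis's algorithm) of bottom-up DP for minimum vertex cover on a tree; the approach the abstract describes generalizes this kind of two-state recursion to the bags of a bounded-width tree decomposition.

```python
# Minimal sketch (not the thesis's algorithm): bottom-up dynamic programming
# for minimum vertex cover on a tree -- the treewidth-1 special case of the
# tree-decomposition DP style described in the abstract.

def tree_vertex_cover(adj, root=0):
    """adj: dict mapping node -> list of neighbours of an undirected tree."""
    import sys
    sys.setrecursionlimit(10000)

    def solve(v, parent):
        # incl = size of a minimum cover of v's subtree that includes v
        # excl = size of a minimum cover of v's subtree that excludes v
        incl, excl = 1, 0
        for u in adj[v]:
            if u == parent:
                continue
            ci, ce = solve(u, v)
            incl += min(ci, ce)   # child may be in or out if v covers the edge
            excl += ci            # v is out, so every child must be in
        return incl, excl

    return min(solve(root, None))

# Example: the path 0-1-2-3 has minimum vertex cover {1, 2} of size 2.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(tree_vertex_cover(path))  # -> 2
```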
202

Parallel algorithm design and implementation of regular/irregular problems: an in-depth performance study on graphics processing units

Solomon, Steven 16 January 2012
Recently, interest in the Graphics Processing Unit (GPU) for general-purpose parallel application development and research has grown. Much of the current research on the GPU focuses on the acceleration of regular problems, as irregular problems typically do not achieve the same level of performance on the hardware. We explore the potential of the GPU by investigating four problems with regular and/or irregular properties: lookback option pricing (regular), single-source shortest path (irregular), maximum flow (irregular), and the task matching problem using multi-swarm particle swarm optimization (regular with elements of irregularity). We investigate the design, implementation, optimization, and performance of these algorithms on the GPU and compare the results. Our results show that the regular problem achieves greater performance and requires less development effort than the irregular problems. However, we find that the GPU is still capable of providing high levels of acceleration for irregular problems.
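The regular/irregular distinction the abstract draws can be made concrete in a few lines. The Python sketch below (ours, not the thesis's GPU code) contrasts a regular computation, with identical data-independent work per element, against an irregular frontier-based shortest-path step, where the work per iteration depends on the input graph — the property that causes divergence and load imbalance on GPU hardware.

```python
# Illustrative sketch (not the thesis's GPU code): regular vs. irregular work.

import numpy as np

# Regular: one uniform arithmetic op per element; maps cleanly onto GPU threads.
prices = np.random.rand(1 << 20)
discounted = prices * np.exp(-0.05)      # identical work for every element

# Irregular: frontier-based single-source shortest path (unit edge weights).
# Frontier size, and hence work per iteration, is data-dependent, which causes
# load imbalance and thread divergence on GPU hardware.
def sssp_unit(adj, source):
    dist = {source: 0}
    frontier = [source]
    while frontier:
        nxt = []
        for v in frontier:              # work per vertex varies with degree
            for u in adj.get(v, []):
                if u not in dist:
                    dist[u] = dist[v] + 1
                    nxt.append(u)
        frontier = nxt                  # frontier size varies per iteration
    return dist

print(sssp_unit({0: [1, 2], 1: [3], 2: [3], 3: []}, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```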
203

Online Resource Management

Tiedemann, Morten 16 April 2015
No description available.
204

TSP - Infrastructure for the Traveling Salesperson Problem

Hahsler, Michael; Hornik, Kurt January 2006
The traveling salesperson or salesman problem (TSP) is a well-known and important combinatorial optimization problem. The goal is to find the shortest tour that visits each city in a given list exactly once and then returns to the starting city. Despite this simple problem statement, solving the TSP is difficult, since it belongs to the class of NP-complete problems. The importance of the TSP arises, besides its theoretical appeal, from the variety of its applications. In addition to vehicle routing, many other applications, e.g., computer wiring, cutting wallpaper, job sequencing, or several data visualization techniques, require the solution of a TSP. In this paper we introduce the R package TSP, which provides a basic infrastructure for handling and solving the traveling salesperson problem. The package features S3 classes for specifying a TSP and its (possibly optimal) solution, as well as several heuristics to find good solutions. In addition, it provides an interface to Concorde, one of the best exact TSP solvers currently available.
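As a rough illustration of the kind of construction heuristic such a package wraps — sketched here in Python rather than R, and not the TSP package's own code — consider the classic nearest-neighbour heuristic:

```python
# A minimal nearest-neighbour construction heuristic for the symmetric TSP,
# sketched in Python to illustrate the heuristic style the package provides
# (this is not the TSP package's own code).

import math

def nearest_neighbor_tour(coords, start=0):
    unvisited = set(range(len(coords))) - {start}
    tour, current = [start], start
    while unvisited:
        # Greedily hop to the closest city not yet on the tour.
        nxt = min(unvisited, key=lambda c: math.dist(coords[current], coords[c]))
        unvisited.remove(nxt)
        tour.append(nxt)
        current = nxt
    return tour  # the tour implicitly returns to the start city

def tour_length(coords, tour):
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

cities = [(0, 0), (0, 1), (1, 1), (1, 0)]
t = nearest_neighbor_tour(cities)
print(t, tour_length(cities, t))  # e.g. [0, 1, 2, 3] with length 4.0
```

Heuristics like this give good but generally suboptimal tours quickly; an exact solver such as Concorde is needed when optimality must be certified.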
205

Topics in discrete optimization: models, complexity and algorithms

He, Qie 13 January 2014
In this dissertation we examine several discrete optimization problems through the perspectives of modeling, complexity, and algorithms. We first provide a probabilistic comparison of split and type 1 triangle cuts for mixed-integer programs with two rows and two integer variables, in terms of cut coefficients and volume cutoff. Under a specific probabilistic model of the problem parameters, we show that for the above measure, the probability that a split cut is better than a type 1 triangle cut is higher than the probability that a type 1 triangle cut is better than a split cut. The analysis also suggests guidelines on when type 1 triangle cuts are likely to be more effective than split cuts and vice versa. We next study a minimum concave-cost network flow problem over a grid network. We give a polynomial-time algorithm to solve this problem when the number of echelons is fixed, and show that the problem is NP-hard when the number of echelons is an input parameter. We also extend our result to grid networks with backward and upward arcs. Our result unifies the complexity results for several models in production planning and green recycling, including the lot-sizing model, and gives the first polynomial-time algorithms for some problems whose complexity was previously unknown. Finally, we examine how much complexity randomness brings to a simple combinatorial optimization problem. We study the sell or hold problem (SHP): sell k out of n indivisible assets over two stages, with known first-stage prices and random second-stage prices, to maximize the total expected revenue. Although the deterministic version of SHP is trivial to solve, we show that SHP is NP-hard when the second-stage prices are realized as a finite set of scenarios, and that it is polynomially solvable when the number of second-stage scenarios is constant. A max{1/2, k/n}-approximation algorithm is presented for the scenario-based SHP.
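To make the scenario-based SHP concrete, here is a brute-force Python sketch. It is not the dissertation's approximation algorithm, and the second-stage rule used here — once the scenario is revealed, sell the best-priced held assets until k assets are sold in total — is our assumption about the model, made only for illustration.

```python
# Brute-force sketch of a scenario-based sell-or-hold problem (SHP). This is
# NOT the dissertation's approximation algorithm; the adaptive second-stage
# rule below is an assumption made to keep the example concrete.

from itertools import combinations

def shp_brute_force(p1, scenarios, probs, k):
    """p1: first-stage prices; scenarios: list of second-stage price vectors;
    probs: scenario probabilities; k: total number of assets to sell."""
    n = len(p1)
    best_value, best_set = float("-inf"), None
    for m in range(k + 1):                       # how many assets to sell now
        for sell_now in combinations(range(n), m):
            revenue_now = sum(p1[i] for i in sell_now)
            held = [i for i in range(n) if i not in sell_now]
            expected_later = 0.0
            for p2, q in zip(scenarios, probs):  # adaptive second stage
                later = sorted((p2[i] for i in held), reverse=True)[: k - m]
                expected_later += q * sum(later)
            value = revenue_now + expected_later
            if value > best_value:
                best_value, best_set = value, sell_now
    return best_set, best_value

p1 = [4, 3, 1]
scenarios = [[2, 6, 5], [2, 1, 1]]               # two equally likely scenarios
print(shp_brute_force(p1, scenarios, [0.5, 0.5], k=2))  # -> ((0, 1), 10.0)
```

The enumeration over first-stage subsets is exponential in n, which is consistent with the NP-hardness result for a general number of scenarios.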
207

Application of Combinatorial Optimization Techniques in Genomic Median Problems

Haghighi, Maryam 13 December 2011
Constructing the genomic median of several given genomes is crucial in developing evolutionary trees, since the genomic median provides an estimate of the gene ordering in a common ancestor of the given genomes. This is because the gene content of DNA molecules is often similar; the difference lies mainly in the order in which the genes appear in the various genomes. The mutations that affect this ordering are called genome rearrangements, and many structural differences between genomes can be studied through them. In this thesis our main focus is on applying combinatorial optimization techniques to genomic median problems, with particular emphasis on the breakpoint distance as a measure of the difference between two genomes. We study different variations of the breakpoint median problem: signed and unsigned, unichromosomal and multichromosomal, and linear, circular, and mixed. We show how these median problems can be formulated as problems in combinatorial optimization and bring well-known combinatorial optimization techniques to bear on them. Some of these median problems are polynomial and many are NP-hard; we find efficient algorithms and approximation methods based on well-known combinatorial optimization structures. The focus is on the algorithmic and combinatorial aspects of genomic medians, and how they can be utilized to obtain optimal median solutions.
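The breakpoint distance at the heart of these median problems is simple to compute in the easiest of the variants above. The Python sketch below (ours, not the thesis's code) handles the unsigned, linear, unichromosomal case: it counts the adjacent gene pairs of one genome that are not adjacent in the other. The genomic median then asks for a genome minimizing the sum of such distances to all input genomes.

```python
# A minimal sketch of the breakpoint distance for unsigned, linear,
# unichromosomal genomes (the simplest of the variants studied in the thesis).

def breakpoint_distance(a, b):
    """a, b: permutations of the same gene set, as lists."""
    def adjacencies(genome):
        # Unordered pairs, since an unsigned genome can be read either way.
        return {frozenset(pair) for pair in zip(genome, genome[1:])}
    return len(adjacencies(a) - adjacencies(b))

a = [1, 2, 3, 4, 5]
b = [1, 2, 4, 3, 5]
print(breakpoint_distance(a, b))  # 2: adjacencies (2,3) and (4,5) are broken
```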
208

Sequential optimal design of neurophysiology experiments

Lewi, Jeremy 31 March 2009
For well over 200 years, scientists and doctors have been poking and prodding brains in every which way in an effort to understand how they work. The earliest pokes were quite crude, often involving permanent forms of brain damage. Though neural injury continues to be an active area of research within neuroscience, technology has given neuroscientists a number of tools for stimulating and observing the brain in very subtle ways. Nonetheless, the basic experimental paradigm remains the same: poke the brain and see what happens. For example, neuroscientists studying the visual or auditory system can easily generate any image or sound they can imagine to see how an organism or neuron will respond. Since neuroscientists can now easily design more pokes than they could ever deliver, a fundamental question is "What pokes should they actually use?" The complexity of the brain means that only a small number of the pokes scientists can deliver will produce any information about the brain. One of the fundamental challenges of experimental neuroscience is finding the right stimulus parameters to produce an informative response in the system being studied. This thesis addresses this problem by developing algorithms to sequentially optimize neurophysiology experiments. Every experiment we conduct contains information about how the brain works. Before conducting the next experiment, we should use what we have already learned to decide which experiment to perform next; in particular, we should design the experiment that will reveal the most information about the brain. At a high level, neuroscientists already perform this type of sequential, optimal experimental design: for example, crude experiments which knock out entire regions of the brain have given rise to modern experimental techniques which probe the responses of individual neurons using finely tuned stimuli. The goal of this thesis is to develop automated and rigorous methods for optimizing neurophysiology experiments efficiently and at a much finer time scale. In particular, we present methods for near-instantaneous optimization of the stimulus being used to drive a neuron.
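The "design the most informative next poke" idea can be sketched on a toy model. The Python sketch below is ours, not the thesis's method: it assumes a Bernoulli (spike / no-spike) response with a sigmoid dependence on a single parameter, and a gridded parameter space — all toy choices, far simpler than the neural models treated in the thesis. It scores each candidate stimulus by the mutual information between the unknown parameter and the response.

```python
# Toy sketch of info-max stimulus selection (not the thesis's method): keep a
# posterior over a model parameter on a grid, and pick the stimulus that
# maximizes mutual information between parameter and binary response.

import numpy as np

thetas = np.linspace(-2, 2, 41)          # candidate tuning parameters
stimuli = np.linspace(-3, 3, 61)         # candidate stimuli
posterior = np.full(len(thetas), 1 / len(thetas))

def p_spike(theta, x):                   # assumed Bernoulli response model
    return 1 / (1 + np.exp(-(x - theta)))

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def best_stimulus(posterior):
    gains = []
    for x in stimuli:
        p1 = p_spike(thetas, x)                       # P(spike | theta, x)
        m1 = np.dot(posterior, p1)                    # marginal P(spike | x)
        # Mutual information = marginal response entropy
        #                      minus expected conditional response entropy.
        h_marginal = entropy(np.array([m1, 1 - m1]))
        h_conditional = np.dot(posterior,
                               [entropy(np.array([p, 1 - p])) for p in p1])
        gains.append(h_marginal - h_conditional)
    return stimuli[int(np.argmax(gains))]

print(best_stimulus(posterior))          # the most informative first probe
```

After each observed response, the posterior is updated by Bayes' rule and the selection step is re-run, which is the "sequential" part of the design loop.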
209

Intractability results for problems in computational learning and approximation

Saket, Rishi 29 June 2009
In this thesis we prove intractability results for well-studied problems in computational learning and approximation. Let ε, μ > 0 be arbitrarily small constants and t be an arbitrary constant positive integer. We show an almost optimal hardness factor of d^{1-ε} for computing an equivalent DNF expression with minimum terms for a boolean function on d variables, given its truth table. In the study of weak learnability, we prove an optimal 1/2 + ε inapproximability for the accuracy of learning an intersection of two halfspaces with an intersection of t halfspaces. Further, we study the learnability of small DNF formulas, and prove optimal 1/2 + ε inapproximability for the accuracy of learning (i) a two-term DNF by a t-term DNF, and (ii) an AND under adversarial μ-noise by a t-CNF. In addition, we show a 1 - 2^{-d} + ε inapproximability for accurately learning parities (over GF(2)), under adversarial μ-noise, by degree-d polynomials, where d is a constant positive integer. We also provide negative answers to the possibility of stronger semi-definite programming (SDP) relaxations yielding much better approximations for graph partitioning problems such as Maximum Cut and Sparsest Cut by constructing integrality gap examples for them. For Maximum Cut and Sparsest Cut we construct examples -- with gaps α^{-1} - ε (where α is the Goemans-Williamson constant) and Ω((log log log n)^{1/13}) respectively -- for the standard SDP relaxations augmented with O((log log log n)^{1/6}) rounds of Sherali-Adams constraints. The construction for Sparsest Cut also implies that an n-point negative type metric may incur a distortion of Ω((log log log n)^{1/13}) to embed into ℓ_1, even if the induced submetric on every subset of O((log log log n)^{1/6}) points is isometric to ℓ_1. We also construct an integrality gap of Ω(log log n) for the SDP relaxation of the Uniform Sparsest Cut problem augmented with triangle inequalities, disproving a well-known conjecture of Arora, Rao and Vazirani.
210

An evolutionary method for synthesizing technological planning and architectural advance

Cole, Bjorn Forstrom 18 May 2009
There are many times at which a critical choice between proposed system architectures must be made. Two situations in particular motivate this dissertation: a "Cambrian explosion," when no dominant architecture has yet arisen, and times in which new developments enable challenges to a dominant incumbent. In each situation, the advance of core technologies is key. This dissertation features a new computational technique to systematically explore the interaction of technological progress with architectural choices. The technique is founded upon a graph-theoretic formulation of architecture, which enables the consideration of multifunctional components and modularity-versus-synergy trades. It utilizes a genetic algorithm formulated for graphs, and a solver that automatically constrains and optimizes component design variables. The use of quantitative technology models, the graph-theoretic formulation, and optimization algorithms together enables a systematic exploration of both the time and combinatorial spaces. The quantitative results of this exploration enhance the strategic view of technology planners.
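The core loop of a genetic algorithm over graph-structured genomes can be sketched compactly. The toy Python sketch below is ours, not the dissertation's implementation: architectures are encoded as upper-triangular adjacency bit-vectors, and the fitness function is a hypothetical stand-in (reward connectivity, penalize edge count) rather than a real technology model.

```python
# Toy genetic algorithm over graph-encoded architectures (not the
# dissertation's formulation; the fitness function is a hypothetical stand-in).

import random

N = 6                                       # number of components
GENES = [(i, j) for i in range(N) for j in range(i + 1, N)]  # possible edges

def connected(bits):
    adj = {i: set() for i in range(N)}
    for g, (i, j) in zip(bits, GENES):
        if g:
            adj[i].add(j); adj[j].add(i)
    seen, stack = {0}, [0]
    while stack:                            # depth-first reachability check
        v = stack.pop()
        for u in adj[v] - seen:
            seen.add(u); stack.append(u)
    return len(seen) == N

def fitness(bits):                          # hypothetical objective
    return (10 if connected(bits) else 0) - sum(bits)

def evolve(pop_size=40, generations=100, mut=0.05):
    pop = [[random.randint(0, 1) for _ in GENES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]    # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(len(GENES))          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < mut) for g in child]  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), sum(best))  # ideally a spanning tree: fitness 5, 5 edges
```

Coupling such a loop to quantitative technology models, as the abstract describes, amounts to replacing the stand-in fitness with component-level performance models evaluated at each epoch of technological progress.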
