
Models and algorithms for assignment and cache allocation problems in content distribution networks

Haghi, Narges January 2016
This thesis considers two difficult combinatorial optimization problems arising in request routing and client assignment in content distribution networks. The aim is to derive lower and upper bounds that estimate optimal solutions; existing solution methods and techniques for similar problems are reviewed. The first problem consists of minimizing the total network cost for request routing with no origin server, taking the delay function into account. The second is a cache allocation problem. Lagrangian relaxation and two perspective cuts are used to linearize the first problem and to obtain valid lower bounds, while two different linearization methods are implemented for the cache allocation problem. Iterated Variable Neighborhood Descent and Tabu Search are the two solution methods suggested to find the best upper bounds. Several local search operators are introduced to improve objective function values: swap, remove-insert, insert from the origin server to a proxy, insert from one proxy to another, swap between the origin server and a proxy, swap between two proxies, and cyclic exchange. All computational results are reported on randomly generated instances.
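For illustration, the swap and remove-insert moves listed in this abstract can be sketched as neighbourhood generators over a client-to-proxy assignment vector. This is a hypothetical toy model in Python, not the thesis code; the cost function is supplied by the caller.

```python
import itertools

def swap_neighbours(assign):
    """Yield assignments obtained by swapping the proxies of two clients."""
    for i, j in itertools.combinations(range(len(assign)), 2):
        if assign[i] != assign[j]:
            n = list(assign)
            n[i], n[j] = n[j], n[i]
            yield n

def remove_insert_neighbours(assign, proxies):
    """Yield assignments obtained by moving one client to a different proxy."""
    for i in range(len(assign)):
        for p in proxies:
            if p != assign[i]:
                n = list(assign)
                n[i] = p
                yield n

def local_search(assign, proxies, cost):
    """Greedy descent over both neighbourhoods for a caller-supplied cost."""
    improved = True
    while improved:
        improved = False
        for n in itertools.chain(swap_neighbours(assign),
                                 remove_insert_neighbours(assign, proxies)):
            if cost(n) < cost(assign):
                assign, improved = n, True
                break
    return assign
```

With a load-balancing toy cost (sum of squared proxy loads), the descent spreads clients across proxies until no single move improves the objective.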

The vehicle routing problem with release and due dates : formulations, heuristics and lower bounds

Shelbourne, Benjamin January 2016
A novel extension of the classical vehicle routing and scheduling problem is proposed that integrates aspects of machine scheduling into vehicle routing. Associated with each customer is a release date, which defines the earliest time that the order is available to leave the depot for delivery, and a due date, which indicates the time by which the order should ideally be delivered. The objective is to minimise a convex combination of operational costs and customer service level, measured as total distance travelled and total weighted tardiness, respectively. A formal definition of the problem is presented, and a variety of benchmark instances are generated to analyse the problem experimentally and to evaluate the performance of the solution approaches developed. Both experimental and theoretical contributions are made in this thesis, arising from the development of mixed integer linear programming (MIP) formulations, efficient heuristics, and a Dantzig-Wolfe decomposition with an associated column generation (CG) algorithm. The MIP formulations extend commodity flow formulations of the capacitated vehicle routing problem and are generally related by aggregation or disaggregation of the variables, although a set of constraints is presented that is valid only for m-commodity flow formulations. A path-relinking algorithm (PRA) is proposed that exploits the efficiency and aggressive improvement of neighbourhood search, but relies on a new path-relinking procedure and advanced population management strategies to navigate the search space effectively. To provide a comparator algorithm to the PRA, we embed the neighbourhood search into a standard iterated local search algorithm. The Dantzig-Wolfe decomposition of the problem yields a traditional set-partitioning formulation, where the pricing problem (PP) is an elementary shortest path problem with resource constraints and weighted tardiness.
Two dynamic programming (DP) formulations of the PP are presented, modelling the weighted tardiness of customers in a path as a pointwise function of the release dates, or decomposing the states over the release dates. The CG algorithm relies on a multi-phase pricing algorithm that utilises DP heuristics, and on a decremental state-space relaxation algorithm that solves an ng-route relaxation at each iteration. Extensive computational experiments on the benchmark instances show that the newly defined features have a significant and varied impact on the problem. As a result, finding tight lower bounds and eventually optimal solutions is highly complex, but tight upper bounds can be found efficiently using advanced heuristics.
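The convex-combination objective described in this abstract can be sketched for a single route as follows. This is an illustrative Python model under assumed conventions (travel time equals distance, and the vehicle waits at the depot for the latest release date of the orders it carries); it is not the thesis formulation.

```python
def route_objective(route, dist, release, due, weight, alpha):
    """Evaluate one route under a convex combination of distance and
    weighted tardiness. route: customer sequence starting and ending at
    depot 0; dist: distance matrix; release/due/weight: per-customer dicts;
    alpha trades off distance (alpha) against tardiness (1 - alpha)."""
    start = max(release[c] for c in route[1:-1])  # wait for all orders on board
    t, total_dist, tardiness = start, 0.0, 0.0
    for prev, cur in zip(route, route[1:]):
        total_dist += dist[prev][cur]
        t += dist[prev][cur]  # travel time assumed equal to distance
        if cur != 0:
            tardiness += weight[cur] * max(0.0, t - due[cur])
    return alpha * total_dist + (1 - alpha) * tardiness
```

The release dates couple routing and scheduling: a late-released order delays every delivery on the same route, which is exactly the machine-scheduling flavour the abstract describes.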

Decomposition based evolutionary methods for constrained multiobjective optimization

Jan, Muhammad Asif January 2011
No description available.

Combinatorial data analysis for data ordering

Kostopoulos, A. January 2016
Seriation is a combinatorial optimisation problem that aims to sequence a set of objects so that a natural ordering is created. A large variety of applications exist, ranging from archaeology to bioinformatics and text mining. Initially, a thorough quantitative analysis compares different seriation algorithms using the positional proximity coefficient (PPC). This analysis helps the practitioner understand how similar two algorithms are on a given set of datasets. The first contribution is consensus seriation. This method uses the principles of other consensus-based methods to combine different seriation solutions according to the PPC. As it creates a solution that no individual algorithm can create, its usefulness comes from combining different structural elements of each original algorithm. In particular, it is possible to create a solution that combines the local characteristics of one algorithm with the global characteristics of another. Experimental results show that, compared to consensus ranking methods using the Hamming, Spearman and Kendall coefficients, consensus seriation using the PPC gives generally superior results according to the independent accumulated relative generalised anti-Robinson events measure. The second contribution is a metaheuristic for creating good approximate solutions to very large seriation problems; as for all combinatorial optimisation problems, there is a need for metaheuristics that can produce better solutions more quickly. The adapted harmony search algorithm makes use of modified crossover operators taken from the genetic algorithm literature to optimise the least-squares criterion commonly used in seriation.
Results show that the algorithm consistently outperforms existing metaheuristics such as the genetic algorithm, particle swarm optimisation, simulated annealing and tabu search, as well as the genetic algorithm using the modified crossover operators, with the main advantage of producing a much superior result within a very short number of iterations. These two major contributions offer practitioners and academics new tools for tackling seriation-related problems, and the thesis concludes with suggested directions for future work.
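As a hedged illustration of the least-squares criterion mentioned in this abstract: one common form sums, over object pairs, the squared gap between the dissimilarity and the positional distance in the ordering (the thesis may use a different variant).

```python
def least_squares_criterion(perm, d):
    """Least-squares seriation loss for an ordering: sum over object pairs
    of (d[i][j] - |pos(i) - pos(j)|)^2, where pos is the position of each
    object in the permutation. Lower is better: dissimilar objects should
    sit far apart in the sequence."""
    pos = {obj: k for k, obj in enumerate(perm)}
    n = len(perm)
    return sum((d[i][j] - abs(pos[i] - pos[j])) ** 2
               for i in range(n) for j in range(i + 1, n))
```

For a perfectly Robinson-structured dissimilarity matrix, the natural order attains loss zero, and any permutation away from it is penalised.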

Scheduling models with additional features : synchronization, pliability and resiliency

Weiss, Christian January 2016
In this thesis we study three new extensions of scheduling models with both practical and theoretical relevance, namely synchronization, pliability and resiliency. Synchronization has previously been studied for flow shop scheduling and we now apply the concept to open shop models for the first time. Here, as opposed to the traditional models, operations that are processed together all have to be started at the same time. Operations that are completed are not removed from the machines until the longest operation in their group is finished. Pliability is a new approach to model flexibility in flow shops and open shops. In scheduling with pliability, parts of the processing load of the jobs can be re-distributed between the machines in order to achieve better schedules. This is applicable, for example, if the machines represent cross-trained workers. Resiliency is a new measure for the quality of a given solution if the input data are uncertain. A resilient solution remains better than some given bound, even if the original input data are changed. The more we can perturb the input data without the solution losing too much quality, the more resilient the solution is. We also consider the assignment problem, as it is the traditional combinatorial optimization problem underlying many scheduling problems. Particularly, we study a version of the assignment problem with a special cost structure derived from the synchronous open shop model and obtain new structural and complexity results. Furthermore we study resiliency for the assignment problem. The main focus of this thesis is the study of structural properties, algorithm development and complexity. For synchronous open shop we show that for a fixed number of machines the makespan can be minimized in polynomial time. All other traditional scheduling objectives are at least as hard to optimize as in the traditional open shop model. 
As a first study of pliability, we focus on the most general case of the model as well as two relevant special cases, and deliver a fairly complete complexity study for all three versions of the model. Finally, for resiliency, we investigate two different questions: `how to compute the resiliency of a given solution?' and `how to find a most resilient solution?'. We focus on the assignment problem and on single machine scheduling to minimize the total sum of completion times, and present a number of positive results for both questions. The main goal is to make a case that the concept deserves further study.
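Under the synchronous open shop model described in this abstract, the operations of a group start together and the machines are not released until the group's longest operation finishes; in this simplified reading the makespan is the sum of group maxima. A toy Python sketch under that assumption (not the thesis code):

```python
def sync_open_shop_makespan(groups, p):
    """Makespan of a synchronous open shop schedule.

    groups: list of groups, each a list of (job, machine) operations that
    are started simultaneously; p[job][machine] is the processing time.
    Each group blocks all machines until its longest operation completes,
    so the makespan is the sum of the per-group maxima."""
    return sum(max(p[j][m] for j, m in g) for g in groups)
```

Minimising this quantity amounts to partitioning the operations into well-balanced groups, which is where the connection to the assignment problem with a special cost structure comes from.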

Vehicle routing on real road networks

Dehghan Nasiri, Saeideh January 2014
The vehicle routing problem (VRP) has received particular attention in the field of transportation and logistics. Producing good solutions for the problem is of interest both commercially and theoretically, and reliable solutions to real-life applications require an approach based on realistic assumptions that resemble real-world conditions. In that respect, this thesis studies vehicle routing problems on real road networks, addressing aspects of the problem that need to be modelled on the original road network graph, and aims to provide appropriate modelling techniques for solving them. As a preliminary step, chapter 2 studies the travelling salesman problem (TSP) on real road networks, referred to as the Steiner TSP (STSP), and proposes alternative integer programming formulations for the problem and some other related routing problems. The performance of the formulations is examined both theoretically and computationally. Chapter 3 highlights the fact that travel speeds on road networks are correlated and uses a real traffic dataset to explore the structure of this correlation. It is shown that significant spatial correlations remain between speeds on roads up to twenty links apart in our congested road network. Chapter 4 extends chapter 2 and incorporates the findings of chapter 3 into a modelling framework for the VRP. The STSP with correlated costs is defined as a potentially useful variant of the VRP in which the costs of the STSP are correlated random variables. The problem is formulated as a single-objective problem, with eight different integer programming formulations presented, and it is shown how to account for three different correlation structures in each of the formulations.
Chapter 5 considers VRPs with time windows and shows that most of the exact algorithms proposed for them might not be applicable when the problem is defined on the original road network graph, because these algorithms assume that the cheapest path between a pair of customers is the same as the quickest path. This assumption is not always true on a real road network. Instead, alternative pricing routines are proposed that can solve the problem directly on the original graph.
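The cheapest-versus-quickest distinction discussed in this abstract can be illustrated with a toy road network where the two criteria select different paths. The Dijkstra routine and the network below are standard illustrative material, not thesis code.

```python
import heapq

def dijkstra(adj, src, key):
    """Single-source shortest paths under the chosen edge attribute
    ('cost' or 'time'), using a binary-heap priority queue."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, attrs in adj[u]:
            nd = d + attrs[key]
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# A toy network: the cheapest A->C route (via B, toll-free but slow)
# differs from the quickest one (direct, tolled but fast).
adj = {
    'A': [('B', {'cost': 1, 'time': 10}), ('C', {'cost': 5, 'time': 3})],
    'B': [('C', {'cost': 1, 'time': 10})],
    'C': [],
}
```

An exact algorithm that prices columns assuming the cheapest inter-customer path is also the quickest would be wrong on this graph, which is exactly the failure mode the chapter addresses.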

Applications of copulas in optimisation

Kakouris, Iakovos January 2013
The methods for modelling uncertainty and assessing the risk of financial markets were placed under scrutiny after the 2008 crisis. To protect against the worst possible scenario in an asset allocation problem, robust optimisation is required. Still, within this framework, assumptions about the uncertainty set have to be made. In our work, we expand the possible options for describing uncertainty sets through the use of copulas. Copulas are a useful tool for describing uncertainty because of the modelling flexibility that they provide: they can easily describe asymmetric dependence structures and tail risk, both of which are vital for emulating the behaviour of financial markets during periods of extreme shocks and comovements. Copulas are also associated with robust measures of dependence. We introduce copulas into the robust optimisation framework by following two different approaches. First, we derive a Worst Case Conditional Value at Risk optimisation problem in which the uncertainty set consists of a selection of copulas, and we reformulate it as a convex optimisation problem. The advantages of such a model are supported by numerical examples using real data. In the second approach, copulas are used as a means of creating non-symmetric, convex uncertainty sets in the form of domains. We present examples where these sets can be used in a robust optimisation problem.
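As background to the Worst Case CVaR model mentioned in this abstract, the plain empirical Conditional Value at Risk of a set of scenario losses can be sketched as follows. This is illustrative only; the thesis works with copula-based uncertainty sets rather than a fixed scenario set.

```python
def cvar(losses, beta):
    """Empirical Conditional Value at Risk at level beta: the mean of the
    worst (1 - beta) fraction of scenario losses. CVaR is the tail-risk
    measure that the worst-case (robust) variant then maximises over an
    uncertainty set of distributions."""
    losses = sorted(losses, reverse=True)
    k = max(1, int(round((1 - beta) * len(losses))))
    return sum(losses[:k]) / k
```

Because CVaR averages over the tail rather than reading off a single quantile, it stays convex in the portfolio weights, which is what makes the convex reformulation in the abstract possible.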

Hybrid meta-heuristic frameworks : a functional approach

Senington, Richard James January 2013
Problems requiring combinatorial optimisation are routinely encountered in research and applied computing. Though polynomial-time algorithms are known for certain problems, for many practical problems, from mundane tasks in scheduling through to exotic tasks such as sequence alignment in bioinformatics, the only effective approach is to use heuristic methods. In contrast to complete strategies that locate globally optimal solutions through (in the worst case) the enumeration of all solutions to a problem, heuristics are based upon rules of thumb about specific problems, which guide the search down promising avenues. Work in the field of Operations Research has gone further, developing generic metaheuristics: abstract templates which may be adapted to tackle many different problems. Metaheuristic researchers have created a variety of algorithms, each with their own strengths and weaknesses, and the development of metaheuristics now often tries to combine concepts from a number of existing strategies to balance the advantages of the originals, a practice known as hybridisation. This interest in hybridisation has led to the creation of a number of frameworks in imperative languages to assist programmers in the rapid creation of, and experimentation upon, such algorithms. However, these existing frameworks have struggled to enable hybridisation of the two major classes of metaheuristic, point based and population based, while being large and complicated to use. This Thesis investigates a functional approach to hybridisation. Despite superficial analogies between hybridisation and function composition, there are substantial challenges: unlike global search methods, which can be explained elegantly in terms of graph traversal, prior work on local search has struggled to articulate a common model, let alone one that can accommodate more esoteric optimisation techniques such as ant colony optimisation.
At the same time, these implementations cannot ignore the fact that the development of these techniques is driven by large-scale problems, so computational efficiency cannot be ignored. Given this background, this Thesis makes three substantial contributions. It decomposes metaheuristic search methods into a set of finer-grained concepts and tools that can be reassembled to describe both standard search strategies and more specialised approaches. It resolves problems found in implementing these abstractions in the widely used language Haskell, developing a novel approach based on dataflow networks. The value of functional abstraction in the practice of metaheuristic development and tuning is demonstrated through case studies, including a substantial problem in bioinformatics.
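The analogy between hybridisation and function composition can be caricatured in a few lines (Python rather than the thesis's Haskell, and far simpler than its dataflow networks): each search component becomes a function from solution to solution, and a hybrid is their composition driven by a generic iterator.

```python
def compose(*steps):
    """Hybridise search components by function composition: the hybrid
    applies each component step in turn to the current solution."""
    def hybrid(s):
        for step in steps:
            s = step(s)
        return s
    return hybrid

def iterate(step, start, n):
    """Drive any step function (atomic or composed) for n iterations."""
    s = start
    for _ in range(n):
        s = step(s)
    return s
```

The point of the caricature is that `compose(descend, perturb)` and `compose(descend, descend)` are built and driven by exactly the same machinery; the thesis's contribution is making this composability work for realistic point-based and population-based methods without sacrificing efficiency.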

Robust and optimal control of disturbed population dynamics

Lloyd, Stephanie Jane January 2015
We use control theory to explore the management of populations affected by disturbances and uncertainty, considering five related topics. Chapter 2 uses linear programming to find optimal translocation strategies between wild and captive populations. To allow comparison of the solutions, we classify the optimal strategy according to which stage classes are kept in captivity, and find that, depending on the species, different stages are targeted when the available resource is limited. In Chapter 3 we use linear programming to create management strategies for an invading population affected by disturbance. For a sinusoidal disturbance, the final population under control is bounded between a transfer function approximation and a feedback control solution. We then assume a worst-case disturbance, which creates a 2-player game; in this linear programming context it is possible that minimax < maximin. Chapter 4 considers a 2-player linear-quadratic problem and introduces the use of disturbance attenuation into ecology; disturbance attenuation shows how a disturbance is amplified or attenuated by the system. In Chapter 5 we consider an invading population and explore the effect that stochasticity has on the relationship between the Allee effect and the population inertia needed for successful invasion. We find that for small population densities, demographic stochasticity dramatically reduces the likelihood of invasion and of survival of the resident.

Multi-agent based cooperative search in combinatorial optimisation

Martin, Simon January 2013
Cooperative search provides a class of strategies for designing more effective search methodologies by combining (meta-)heuristics for solving combinatorial optimisation problems. This area has been little explored in operational research. This thesis proposes a general agent-based distributed framework in which each agent implements a (meta-)heuristic. An agent continuously adapts itself during the search process using a cooperation protocol based on reinforcement learning and pattern matching; good patterns which make up improving solutions are identified and shared by the agents. A theoretical approach to understanding the potential of agent-based systems is also proposed. The agent-based system aims to raise the level of generality by providing a flexible framework that can deal with a variety of different problem domains. The framework has so far been tested on Permutation Flow-shop Scheduling, the Travelling Salesman Problem and Nurse Rostering, and these instances have yielded some promising results. As part of the nurse rostering work, a novel approach to modelling fairer nurse rosters is proposed.
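At a very coarse level, the cooperation idea in this abstract can be sketched as agents that each apply their own heuristic and periodically broadcast the incumbent best solution. This is hypothetical toy code; the thesis's protocol, with reinforcement learning and pattern sharing, is far richer than a simple broadcast.

```python
def cooperative_search(heuristics, start, cost, rounds):
    """Run one heuristic move per agent per round; after each round the
    agents share the best solution found so far, so good material found by
    one agent is available to all of them."""
    best = start
    agents = [start] * len(heuristics)
    for _ in range(rounds):
        agents = [h(s) for h, s in zip(heuristics, agents)]
        round_best = min(agents, key=cost)
        if cost(round_best) < cost(best):
            best = round_best
        agents = [best] * len(agents)  # broadcast the shared incumbent
    return best
```

Even this crude scheme shows the appeal: an agent whose heuristic stalls is rescued by the incumbent found by a stronger agent, so the team can progress where any single method would stop.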
