311 |
Algorithms for Matching Problems Under Data Accessibility Constraints / Hanguir, Oussama, January 2022 (has links)
Traditionally, optimization problems in operations research have been studied in a complete information setting; the input/data is collected and made fully accessible to the user, before an algorithm is sequentially run to generate the optimal output. However, the growing magnitude of treated data and the need to make immediate decisions are increasingly shifting the focus to optimizing under incomplete information settings. The input can be partially inaccessible to the user either because it is generated continuously, contains some uncertainty, is too large and cannot be stored on a single machine, or has a hidden structure that is costly to unveil. Many problems providing a context for studying algorithms when the input is not entirely accessible emanate from the field of matching theory, where the objective is to pair clients and servers or, more generally, to group clients in disjoint sets. Examples include ride-sharing and food delivery platforms, internet advertising, combinatorial auctions, and online gaming.
In this thesis, we study three novel problems from the theory of matchings. These problems correspond to situations where the input is hidden, spread across multiple processors, or revealed in two stages with some uncertainty. Chapter 1 presents the necessary definitions and terminology for the concepts and problems we cover. In Chapter 2, we consider a two-stage robust optimization framework that captures matching problems where one side of the input is subject to future demand uncertainty. We propose two models to capture this uncertainty: explicit and implicit scenarios.
In Chapters 3 and 4, we turn our attention to matchings in hypergraphs. In Chapter 3, we consider the problem of learning hidden hypergraph matchings through membership queries. Finally, in Chapter 4, we study the problem of finding matchings in uniform hypergraphs in the massively parallel computation (MPC) model, where the data (e.g., vertices and edges) is distributed across machines; in each round, a machine performs local computation on its fragment of the data and then sends messages to other machines for the next round.
|
312 |
COMBINATORIAL OPTIMIZATION APPROACHES TO DISCRETE PROBLEMS / LIU, MIN JING, 10 1900 (has links)
As stressed by the Society for Industrial and Applied Mathematics (SIAM), applied mathematics, in partnership with computational science, is essential in solving many real-world problems. Combinatorial optimization focuses on problems arising from discrete structures such as graphs and polyhedra. This thesis deals with extremal graphs and strings and focuses on two problems: Erdős's problem on the multiplicities of complete subgraphs and the maximum number of distinct squares in a string.

The first part of the thesis deals with strengthening the bounds on the minimum proportion of monochromatic t-cliques and t-cocliques over all 2-colourings of the edges of the complete graph on n vertices. Denote by k_t(G) the number of cliques of order t in a graph G. Let k_t(n) = min{k_t(G) + k_t(\overline{G})}, where \overline{G} denotes the complement of a graph G of order n. Let c_t(n) = k_t(n) / \tbinom{n}{t} and let c_t be the limit of c_t(n) as n goes to infinity. A 1962 conjecture of Erdős stating that c_t = 2^{1-\tbinom{t}{2}} was disproved by Thomason in 1989 for all t > 3. Tighter counterexamples have been constructed by Jagger, Šťovíček and Thomason in 1996, by Thomason for t < 7 in 1997, and by Franek for t = 6 in 2002. We present a computational framework to investigate tighter upper bounds for small t, yielding the following improved upper bounds for t = 6, 7 and 8: c_6 \leq 0.7445 \times 2^{1-\tbinom{6}{2}}, c_7 \leq 0.6869 \times 2^{1-\tbinom{7}{2}}, and c_8 \leq 0.7002 \times 2^{1-\tbinom{8}{2}}. The constructions are based on a large but highly regular variant of Cayley graphs for which the numbers of cliques and cocliques can be expressed in closed form.
Considering the quantity e_t = 2^{\tbinom{t}{2}-1} c_t, the new upper bound of 0.687 for e_7 is the first bound for any e_t smaller than the lower bound of 0.695 for e_4 due to Giraud in 1979.

The second part of the thesis deals with extremal periodicities in strings: we consider the problem of the maximum number of distinct squares in a string. The importance of considering both the length n and the size d of the alphabet as key variables is stressed. Let a (d,n)-string denote a string of length n with exactly d distinct symbols. We investigate the function \sigma_d(n) = max{ s(x) : x is a (d,n)-string }, where s(x) denotes the number of distinct primitively rooted squares in a (d,n)-string x. We discuss a computational framework for computing \sigma_d(n) based on the notion of density and exploiting the tightness of the available lower bound. The computational results obtained substantiate the hypothesized upper bound of n - d for \sigma_d(n). The structural similarities with the approach used for investigating the Hirsch bound on the diameter of a polytope of dimension d having n facets are underlined. For example, the role played by (d,2d)-polytopes was presented in 1967 by Klee and Walkup, who showed the equivalence between the Hirsch conjecture and the d-step conjecture. / Doctor of Philosophy (PhD)
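For small graphs, the quantity k_t(G) + k_t(\overline{G}) from the first part can be computed by brute force. A minimal sketch (not the thesis's Cayley-graph machinery), using the classic 2-colouring of K_5 by a 5-cycle, which witnesses k_3(5) = 0:

```python
from itertools import combinations

def count_cliques(n, edges, t):
    """Count t-cliques by checking every t-subset of {0, ..., n-1}."""
    edge_set = {frozenset(e) for e in edges}
    return sum(
        all(frozenset(p) in edge_set for p in combinations(s, 2))
        for s in combinations(range(n), t)
    )

def monochromatic(n, edges, t):
    """k_t(G) + k_t(complement of G): monochromatic t-cliques of the
    2-colouring of K_n induced by G."""
    edge_set = {frozenset(e) for e in edges}
    comp = [e for e in combinations(range(n), 2) if frozenset(e) not in edge_set]
    return count_cliques(n, edges, t) + count_cliques(n, comp, t)

# The 5-cycle and its complement (also a 5-cycle) are both triangle-free,
# so this 2-colouring of K_5 has no monochromatic triangle at all.
c5 = [(i, (i + 1) % 5) for i in range(5)]
print(monochromatic(5, c5, 3))  # 0
```

Ratios such as c_t(n) follow by dividing by \tbinom{n}{t}; the thesis's constructions make this count tractable in closed form rather than by enumeration.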
|
313 |
A MULTI-AGENT BASED APPROACH FOR SOLVING THE REDUNDANCY ALLOCATION PROBLEM / Li, Zhuo, January 2011 (has links)
The Redundancy Allocation Problem (RAP) is a well-known mathematical problem for modeling series-parallel systems. It is a combinatorial optimization problem that focuses on determining an optimal assignment of components in a system design. Due to the diverse possible selections of components, the RAP has been proved to be NP-hard. Therefore, many algorithms, especially heuristic algorithms, have been proposed and implemented over the past several decades, committed to providing innovative methods or better solutions. In recent years, multi-agent systems (MAS) have been proposed for modeling complex systems and solving large-scale problems. A MAS is a relatively new programming concept with capabilities such as self-organization, self-adaptation, and autonomous administration. These features of MAS inspire us to look at the RAP from another point of view: an RAP can be divided into multiple smaller problems that are solved by multiple agents, and the agents can collaboratively find optimal RAP solutions quickly and efficiently. In this research, we propose to solve the RAP using a MAS. This approach, to the best of our knowledge, has not been proposed before, although multi-agent approaches have been widely used for solving other large and complex nonlinear problems. To demonstrate it, we analyze and evaluate four benchmark RAP instances from the literature. The results show the MAS approach to be an effective and extendable method for solving RAP instances. / Electrical and Computer Engineering
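As a concrete illustration of the RAP formulation itself (not of the thesis's MAS method), a brute-force sketch on an invented three-subsystem instance: maximize series-system reliability, where a subsystem with k parallel components of reliability r has reliability 1 - (1 - r)^k, subject to a cost budget.

```python
from itertools import product

# Toy RAP instance (numbers invented for illustration): three subsystems in
# series; subsystem i uses k identical parallel components, each with
# reliability r and unit cost c, so its reliability is 1 - (1 - r)**k.
subsystems = [(0.90, 3), (0.80, 2), (0.85, 4)]  # (component reliability, unit cost)
budget = 20

def system_reliability(levels):
    rel = 1.0
    for (r, _), k in zip(subsystems, levels):
        rel *= 1 - (1 - r) ** k  # k components in parallel
    return rel

def total_cost(levels):
    return sum(c * k for (_, c), k in zip(subsystems, levels))

# Exhaustive search over redundancy levels 1..3 per subsystem.
best = max(
    (lv for lv in product(range(1, 4), repeat=len(subsystems))
     if total_cost(lv) <= budget),
    key=system_reliability,
)
print(best, round(system_reliability(best), 4))  # (2, 3, 2) is optimal here
```

Exhaustive search is exponential in the number of subsystems, which is exactly why heuristic and agent-based decompositions such as the one proposed here are of interest.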
|
314 |
A Convergence Analysis of Generalized Hill Climbing Algorithms / Sullivan, Kelly Ann, 21 April 1999 (has links)
Generalized hill climbing (GHC) algorithms provide a unifying framework for describing several local search heuristics for discrete optimization problems, including simulated annealing and tabu search. A necessary and a sufficient convergence condition for GHC algorithms are presented.
The convergence conditions presented in this dissertation are based upon a new iteration classification scheme for GHC algorithms. The convergence theory for particular formulations of GHC algorithms is presented and the implications discussed. Examples are provided to illustrate the relationship between the new convergence conditions and previously existing convergence conditions in the literature. The contributions of the necessary and the sufficient convergence conditions for GHC algorithms are discussed and future research endeavors are suggested. / Ph. D.
|
315 |
Assessing the Finite-Time Performance of Local Search Algorithms / Henderson, Darrall, 18 April 2001 (has links)
Identifying a globally optimal solution for an intractable discrete optimization problem is often cost prohibitive. Therefore, solutions that are within a predetermined threshold are often acceptable in practice. This dissertation introduces the concept of B-acceptable solutions where B is a predetermined threshold for the objective function value.
It is difficult to assess a priori the effectiveness of local search algorithms, which makes the process of choosing parameters to improve their performance difficult. This dissertation introduces the B-acceptable solution probability, defined in terms of B-acceptable solutions, as a finite-time performance measure for local search algorithms. The B-acceptable solution probability reflects both how effectively an algorithm has performed to date and how effectively it can be expected to perform in the future. It is also used to obtain necessary asymptotic convergence (with probability one) conditions. Upper and lower bounds for the B-acceptable solution probability are presented. These expressions assume particularly simple forms when applied to specific local search strategies such as Monte Carlo search and threshold accepting. Moreover, they provide guidelines on how to manage the execution of local search algorithm runs. Computational experiments are reported to estimate the probability of reaching a B-acceptable solution within a fixed number of iterations. Logistic regression is applied as a tool to estimate the probability of reaching a B-acceptable solution for values of B close to the objective function value of a globally optimal solution, as well as to estimate this objective function value. Computational experiments with logistic regression are reported for pure local search, simulated annealing, and threshold accepting applied to instances of the TSP with known optimal solutions. / Ph. D.
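A minimal sketch of the performance measure on an invented toy objective (the dissertation's bounds and logistic-regression machinery are not reproduced here): estimate by Monte Carlo the probability that pure random search reaches a B-acceptable solution within a fixed number of iterations.

```python
import random

def run_search(n_iters, rng):
    """Pure random search on a toy objective f(x) = x**2 over the integers
    in [-50, 50]; returns the best objective value reached within n_iters."""
    best = float("inf")
    for _ in range(n_iters):
        x = rng.randint(-50, 50)
        best = min(best, x * x)
    return best

def b_acceptable_probability(B, n_iters=30, runs=2000, seed=0):
    """Empirical B-acceptable solution probability: the fraction of runs
    whose best objective value within n_iters iterations is <= B."""
    rng = random.Random(seed)
    return sum(run_search(n_iters, rng) <= B for _ in range(runs)) / runs

# The probability is nondecreasing in the threshold B, as expected.
for B in (0, 25, 400):
    print(B, b_acceptable_probability(B))
```

Fixing the seed reuses the same sample of runs for every threshold, so the estimates are nested and monotone in B by construction, mirroring the monotonicity of the true measure.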
|
316 |
Generalized hill climbing algorithms for discrete optimization problems / Johnson, Alan W., 06 June 2008 (has links)
Generalized hill climbing (GHC) algorithms are introduced as a tool to address difficult discrete optimization problems. Particular formulations of GHC algorithms include simulated annealing (SA), local search, and threshold accepting (TA), among others. A proof of convergence of GHC algorithms is presented that relaxes the sufficient conditions of the most general proof of convergence for stochastic search algorithms in the literature (Anily and Federgruen [1987]).
Proofs of convergence for SA are based on the concept that deteriorating (hill climbing) transitions between neighboring solutions are accepted by comparing a deterministic function of both the solution change cost and a temperature parameter to a uniform (0,1) random variable. GHC algorithms represent a more general model, whereby deteriorating moves are accepted according to a general random variable.
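The acceptance rule just described can be sketched directly (a hypothetical mini-implementation, not the dissertation's formulation): a deteriorating move with cost increase delta is accepted when delta <= R_k for a general random variable R_k; choosing R_k = -T ln(U) with U uniform on (0,1) recovers simulated annealing, and R_k = 0 recovers pure local search.

```python
import math
import random

def ghc_accept(delta, sample_R):
    """Generalized hill-climbing acceptance: a move with objective change
    delta is accepted iff delta <= R_k, where R_k is drawn from a
    user-supplied random variable. Improving moves are always accepted."""
    return delta <= sample_R()

rng = random.Random(42)

# Simulated annealing at temperature T: R_k = -T * ln(U) with U ~ uniform(0,1),
# so P(accept | delta > 0) = P(U <= exp(-delta / T)) = exp(-delta / T).
def sa_R(T):
    return lambda: -T * math.log(1.0 - rng.random())  # 1 - U avoids log(0)

# Pure local search: R_k = 0, so only non-deteriorating moves are accepted.
local_search_R = lambda: 0.0

# Empirical acceptance rate for a deteriorating move (delta = 1) at T = 2:
accept_rate = sum(ghc_accept(1.0, sa_R(2.0)) for _ in range(10000)) / 10000
print(round(accept_rate, 2))  # near exp(-0.5), about 0.61
```

The point of the GHC framework is that convergence can be analyzed for the general R_k, with SA, threshold accepting, and local search falling out as special cases of the sampling distribution.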
Computational results are reported that illustrate relationships between the GHC algorithms' finite-time performance on three problems and the general random variable formulations used. The dissertation concludes with suggestions for further research. / Ph. D.
|
317 |
Constructing Covering Arrays using Parallel Computing and Grid Computing / Avila George, Himer, 10 September 2012 (has links)
A good strategy to test a software component involves generating the whole set of cases that participate in its operation. While testing only individual values may not be enough, exhaustive testing of all possible combinations is not always feasible. An alternative technique to accomplish this goal is called combinatorial testing. Combinatorial testing is a method that can reduce cost and increase the effectiveness of software testing for many applications. It is based on constructing functional test-suites of economical size, which provide coverage of the most prevalent configurations. Covering arrays are combinatorial objects that have been applied to perform functional tests of software components. The use of covering arrays makes it possible to test all the interactions, of a given size, among the input parameters using the minimum number of test cases.

For software testing, the fundamental problem is finding a covering array with the minimum possible number of rows, thus reducing the number of tests, the cost, and the time expended on the software testing process. Because of the importance of constructing (near) optimal covering arrays, much research has been carried out on developing effective methods for constructing them. Several methods for constructing these combinatorial models have been reported, among them: (1) algebraic methods, (2) recursive methods, (3) greedy methods, and (4) metaheuristic methods.

Metaheuristic methods, particularly simulated annealing, have provided the most accurate results in several instances to date. Simulated annealing is a general-purpose stochastic optimization method that has proved to be an effective tool for approximating globally optimal solutions to many optimization problems. However, one of its major drawbacks is the time it requires to obtain good solutions.

In this thesis, we propose the development of an improved simulated annealing algorithm. / Avila George, H. (2012). Constructing Covering Arrays using Parallel Computing and Grid Computing [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17027
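The coverage property described above can be checked directly. A minimal sketch, assuming a strength-2 array over three symbols built from a standard orthogonal-array construction over GF(3) (an illustrative array, not one from the thesis):

```python
from itertools import combinations

def is_covering_array(rows, t, v):
    """Check that, for every choice of t columns, all v**t symbol tuples
    appear in some row: i.e., all t-way interactions are covered."""
    k = len(rows[0])
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in rows}
        if len(seen) < v ** t:
            return False
    return True

# A strength-2 array on 4 columns over {0, 1, 2}: rows (x, y, x+y, x+2y)
# over GF(3) form an orthogonal array, hence a covering array with 9 rows.
ca = [(x, y, (x + y) % 3, (x + 2 * y) % 3)
      for x in range(3) for y in range(3)]

print(is_covering_array(ca, t=2, v=3))       # True: all pairwise interactions covered
print(is_covering_array(ca[:-1], t=2, v=3))  # False: dropping a row loses a pair
```

A checker like this is the feasibility test inside any search-based construction method, including the simulated annealing approach proposed in the thesis: the optimization is over arrays passing this test with as few rows as possible.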
|
318 |
[pt] ALGORITMOS DE APROXIMAÇÃO PARA ÁRVORES DE DECISÃO / [en] APPROXIMATION ALGORITHMS FOR DECISION TREES / Saettler, Aline Medeiros, 13 December 2021 (has links)
[en] Decision tree construction is a central problem in several areas of computer science, for example, database theory and computational learning. This problem can be viewed as the problem of evaluating a discrete function, where to check the value of each variable of the function we have to pay a cost, and the points where the function is defined are associated with a probability distribution. The goal of the problem is to evaluate the function minimizing the cost spent (in the worst case or in expectation). In this thesis, we present four contributions related to this problem. The first one is an algorithm that achieves an O(log(n)) approximation with respect to both the expected and the worst costs. The second one is a procedure that combines two trees, one with worst cost W and another with expected cost E, and produces a tree with worst cost at most (1+p)W and expected cost at most (1/(1 - e^{-p}))E, where p is a given parameter. We also prove that this is a sharp characterization of the best possible trade-off attainable, showing that there are infinitely many instances for which we cannot obtain a decision tree with both worst cost smaller than
(1+p)OPTW(I) and expected cost smaller than (1/(1 - e^{-p}))OPTE(I), where OPTW(I) (resp. OPTE(I)) denotes the cost of the decision tree that minimizes the worst cost (resp. expected cost) for an instance I of the problem. The third contribution is an O(log(n)) approximation algorithm for the minimization of the worst cost for a variant of the problem where the cost of reading a variable depends on its value. Our final contribution is a randomized rounding algorithm that, given an instance of the problem (with an additional integer k > 0) and a parameter 0 < ε < 1/2, builds an oblivious decision tree with cost at most (3/(1 - 2ε))ln(n)OPT(I) and produces at most (k/ε) errors, where OPT(I) denotes the cost of the oblivious decision tree with minimum cost among all oblivious decision trees for instance I that make at most k classification errors.
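As a toy illustration of the two objectives being traded off, the sketch below computes the worst-case and expected evaluation cost of a decision tree for f(x0, x1) = x0 OR x1; the tree shape, variable costs, and distribution are invented for illustration.

```python
# A decision tree is either a leaf (the function's value, an int) or an
# internal node (variable index, cost of reading it, {value: subtree}).
def worst_cost(tree):
    if not isinstance(tree, tuple):  # leaf: nothing left to pay
        return 0
    var, cost, children = tree
    return cost + max(worst_cost(c) for c in children.values())

def expected_cost(tree, prob):
    """prob maps each point (tuple of variable values) to its probability."""
    total = 0.0
    for point, p in prob.items():
        node, spent = tree, 0
        while isinstance(node, tuple):
            var, cost, children = node
            spent += cost
            node = children[point[var]]
        total += p * spent
    return total

# Evaluate x0 OR x1: read x0 (cost 2) first; if x0 = 0, read x1 (cost 1).
tree = (0, 2, {0: (1, 1, {0: 0, 1: 1}), 1: 1})
prob = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
print(worst_cost(tree))           # 3
print(expected_cost(tree, prob))  # 2.5
```

Different trees for the same function generally score differently on the two measures, which is exactly the tension the (1+p)W versus (1/(1 - e^{-p}))E trade-off quantifies.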
|
319 |
Discrete flower pollination algorithm for resource constrained project scheduling problem / Bibiks, Kirils; Li, Jian-Ping; Hu, Yim Fun, 07 1900 (has links)
In this paper, a new population-based and nature-inspired metaheuristic algorithm, the Discrete Flower Pollination Algorithm (DFPA), is presented to solve the Resource Constrained Project Scheduling Problem (RCPSP). The DFPA is a modification of the existing Flower Pollination Algorithm, adapted for solving combinatorial optimization problems by changing some of the algorithm's core concepts, such as the flower, global pollination, Lévy flight, and local pollination. The proposed DFPA is then tested on sets of benchmark instances and its performance is compared against other existing metaheuristic algorithms. The numerical results show that the proposed algorithm is efficient and outperforms several other popular metaheuristic algorithms, both in terms of solution quality and execution time. Being discrete, the proposed algorithm can also be used to solve other combinatorial optimization problems. / Innovate UK / Awarded 'Best paper of the Month'
|
320 |
The polyhedral structure of certain combinatorial optimization problems with application to a naval defense problem / Lee, Youngho, 06 June 2008 (has links)
This research deals with a study of the polyhedral structure of three important combinatorial optimization problems, namely, the generalized upper bounding (GUB) constrained knapsack problem, the set partitioning problem, and the quadratic zero-one programming problem, and applies related techniques to solve a practical combinatorial naval defense problem.
In Part I of this research effort, we present new results on the polyhedral structure of the foregoing combinatorial optimization problems. First, we characterize a new family of facets for the GUB constrained knapsack polytope. This family of facets is obtained by sequential and simultaneous lifting procedures applied to minimal GUB cover inequalities. Second, we develop a new family of cutting planes for the set partitioning polytope that delete any fractional basic feasible solution of its underlying linear programming relaxation. We also show that all the known classes of valid inequalities belong to this family of cutting planes, and hence this provides a unifying framework for a broad class of such valid inequalities. Finally, we present a new class of facets for the boolean quadric polytope, obtained by applying a simultaneous lifting procedure.
The strong valid inequalities developed in Part I, such as facets and cutting planes, can be implemented to obtain exact and approximate solutions for various combinatorial optimization problems in the context of a branch-and-cut procedure. In particular, the facets and valid cutting planes developed for the GUB constrained knapsack polytope and the set partitioning polytope can be directly used to generate tight linear programming relaxations for a certain scheduling polytope that arises from a combinatorial naval defense problem. Furthermore, these tight formulations are intended not only to develop exact solution algorithms, but also to design powerful heuristics that provide good quality solutions within a reasonable amount of computational effort.
Accordingly, in Part II of this dissertation, we present an application of such polyhedral results in order to construct effective approximate and exact algorithms for solving a naval defense problem. In this problem, the objective is to schedule a set of illuminators in order to strike a given set of targets using surface-to-air missiles in naval battle-group engagement scenarios. The problem is conceptualized as a production floor shop scheduling problem of minimizing the total weighted flow time subject to time-window job availability and machine-downtime unavailability side constraints. A polynomial-time algorithm is developed for the case when all the job processing times are equal (and unity without loss of generality) and the data are all integer. For the general case of scheduling jobs with unequal processing times, we develop three alternative formulations and analyze their relative strengths by comparing their respective linear programming relaxations. The special structures inherent in a particular strong zero-one integer programming model of the problem enable us to derive some classes of strong valid inequalities from the facets of the GUB constrained knapsack polytope and the set-packing polytope. Furthermore, these special structures enable us to construct several effective approximate and exact algorithms that provide solutions within specified tolerances of optimality, with an effort that admits real-time processing in the naval battle-group engagement scenario. Computational results are presented using suitable realistic test data. / Ph. D.
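A toy sketch of the unit-processing-time special case (instance data invented; the dissertation gives a polynomial-time algorithm, while this sketch simply enumerates): assign each unit-time job to a distinct integer time slot inside its availability window so as to minimize total weighted flow time.

```python
from itertools import permutations

# Toy instance: each unit-time job j has a weight w and an admissible slot
# window [r, d]; finishing in slot t means completion time C_j = t + 1.
jobs = [(3, 0, 2), (1, 0, 3), (2, 1, 3)]  # (weight, release, deadline)

def best_schedule(jobs):
    """Brute-force minimizer of sum of w_j * C_j over feasible slot assignments."""
    slots = range(max(d for _, _, d in jobs) + 1)
    best, best_val = None, float("inf")
    for perm in permutations(slots, len(jobs)):
        if all(r <= t <= d for (_, r, d), t in zip(jobs, perm)):
            val = sum(w * (t + 1) for (w, _, _), t in zip(jobs, perm))
            if val < best_val:
                best, best_val = perm, val
    return best, best_val

print(best_schedule(jobs))  # the heaviest job lands in the earliest slot
```

Even this tiny example shows the structure the polyhedral machinery exploits: feasible schedules are assignments constrained by time windows, and the objective is linear in the completion times.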
|