501 |
Getting Things in Order: An Introduction to the R package seriation / Hahsler, Michael; Hornik, Kurt; Buchta, Christian / January 2007
Seriation, i.e., finding a linear order for a set of objects given data and a loss or merit function, is a basic problem in data analysis. Due to the problem's combinatorial nature, it is hard to solve exactly for all but very small sets. Nevertheless, both exact solution methods and heuristics are available. In this paper we present the package seriation, which provides the infrastructure for seriation with R. The infrastructure comprises data structures to represent linear orders as permutation vectors, a wide array of seriation methods using a consistent interface, a method to calculate the value of various loss and merit functions, and several visualization techniques which build on seriation. To illustrate how easily the package can be applied to a variety of applications, a comprehensive collection of examples is presented. / Series: Research Report Series / Department of Statistics and Mathematics
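As a worked illustration of the core abstraction above (a linear order represented as a permutation vector and scored by a loss function), here is a minimal, hypothetical sketch in Python rather than R; it is not code from the seriation package. It scores an order by the Hamiltonian path length criterion and brute-forces the optimum, which is only feasible for the very small sets the abstract mentions.

```python
# Minimal sketch: seriation as permutation search over a loss function.
# Loss used here: Hamiltonian path length, the sum of dissimilarities
# between neighbours in the linear order.
from itertools import permutations

def path_length(dist, order):
    """Total dissimilarity between consecutive objects in the order."""
    return sum(dist[a][b] for a, b in zip(order, order[1:]))

def brute_force_seriation(dist):
    """Exhaustive search: only workable for tiny n (n! permutations)."""
    n = len(dist)
    return min(permutations(range(n)), key=lambda p: path_length(dist, p))

# Toy symmetric dissimilarity matrix for 4 objects.
D = [[0, 3, 1, 9],
     [3, 0, 7, 2],
     [1, 7, 0, 8],
     [9, 2, 8, 0]]
print(brute_force_seriation(D))  # permutation vector with minimal loss
```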
|
502 |
Lace tessellations: a mathematical model for bobbin lace and an exhaustive combinatorial search for patterns / Irvine, Veronika / 29 August 2016
Bobbin lace is a 500-year-old art form in which threads are braided together in an alternating manner to produce a lace fabric. A key component in its construction is a small pattern, called a bobbin lace ground, that can be repeated periodically to fill a region of any size. In this thesis we present a mathematical model for bobbin lace grounds, representing the structure as the pair (Δ(G), ζ(v)), where Δ(G) is a topological embedding of a 2-regular digraph G on a torus and ζ(v) is a mapping from the vertices of G to a set of braid words. We explore in depth the properties that Δ(G) must possess in order to produce workable lace patterns. Having developed a solid, logical foundation for bobbin lace grounds, we enumerate and exhaustively generate patterns that conform to that model. We start by specifying an equivalence relation and defining what makes a pattern prime so that we can identify unique representatives. We then prove that there are infinitely many prime workable patterns. One of the key properties identified in the model is that it must be possible to partition Δ(G) into a set of osculating circuits such that each circuit has a wrapping index of (1,0); that is, the circuit wraps once around the meridian of the torus and does not wrap around the longitude. We use this property to exhaustively generate workable patterns for increasing numbers of vertices in G by gluing together lattice paths in an osculating manner. Using a backtracking algorithm to process the lattice paths, we identify over 5 million distinct prime patterns, well in excess of the roughly 1,000 found in lace ground catalogues. The lattice paths used in our approach are members of a family of partially directed lattice paths that have not been previously reported. We explore these paths in detail, develop a recurrence relation and generating function for their enumeration, and present a bijection between these paths and a subset of Motzkin paths. Finally, to draw out of the extremely large number of patterns some of the more aesthetically interesting cases for lacemakers to work on, we look for examples that have a high degree of symmetry. We demonstrate, by computational generation, that there are lace ground representatives from each of the 17 planar periodic symmetry groups. / Graduate
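For reference, the Motzkin paths mentioned above satisfy a classical recurrence; the sketch below computes the standard Motzkin numbers that count them. This is a textbook result, not the thesis's own partially directed family, which maps to a subset of these paths.

```python
# Classical Motzkin-number recurrence:
#   M_0 = 1,  M_n = M_{n-1} + sum_{k=0}^{n-2} M_k * M_{n-2-k},
# counting paths from (0,0) to (n,0) with steps (1,1), (1,-1), (1,0)
# that never go below the x-axis.
def motzkin(n):
    m = [1] * (n + 1)
    for i in range(1, n + 1):
        m[i] = m[i - 1] + sum(m[k] * m[i - 2 - k] for k in range(i - 1))
    return m

print(motzkin(8))  # [1, 1, 2, 4, 9, 21, 51, 127, 323]
```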
|
503 |
Solving combinatorial optimization problems with the Ant Colony Optimization metaheuristic method / Chu, Andrej / January 2005
Ant Colony Optimization (ACO) belongs to the category of metaheuristic methods and has been developed relatively recently. It has already shown its ability to outperform other metaheuristic methods in the quality of the solutions found. This work analyses possible applications of the method to classical combinatorial optimization problems: the traveling salesman problem, the vehicle routing problem, the knapsack problem, the generalized assignment problem and the maximal clique problem. It also presents practical experiments applying the method to several optimization problems, together with an analysis of the time and memory complexity of such algorithms. The last part of the work is dedicated to parallelizing the algorithm that resulted from applying the ACO method to the traveling salesman problem. It analyses the crucial operations and data synchronization issues, and gives a practical demonstration of the parallelized version of the algorithm.
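A minimal sketch of the ACO scheme applied to the traveling salesman problem may help fix ideas. The structure (probabilistic tour construction biased by pheromone and inverse distance, evaporation, reinforcement along short tours) follows the standard ACO template; all parameter names and values here are illustrative assumptions, not taken from the thesis.

```python
# Hypothetical minimal ACO for the TSP. Conventional parameters:
# alpha weights pheromone, beta weights heuristic (1/distance),
# rho is the evaporation rate.
import random

def tour_length(dist, tour):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def aco_tsp(dist, n_ants=10, n_iters=100, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]            # pheromone levels
    best, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):                    # each ant builds a tour
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                cands = list(unvisited)
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in cands]
                j = rng.choices(cands, weights=weights)[0]
                tour.append(j)
                unvisited.remove(j)
            tours.append(tour)
        for i in range(n):                         # evaporation
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for tour in tours:                         # deposit: shorter => more
            L = tour_length(dist, tour)
            if L < best_len:
                best, best_len = tour, L
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / L
                tau[b][a] += 1.0 / L
    return best, best_len

# Example: best_tour, length = aco_tsp(D) for a symmetric distance matrix D.
```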
|
504 |
The Dirichlet drawer principle - problems and applications / Pacifico, Thiago Mauricio / 31 May 2019
The Dirichlet drawer principle, also known as the pigeonhole principle, is a mathematical result based on a relatively simple proposition: if we wish to distribute N+1 objects among N drawers, some drawer must contain at least 2 objects. Although it seems insignificant due to its obviousness, this result is a very important tool in proving other mathematical results. The present work proves the principle in two versions, one simpler and the other more general, and exhibits some applications that show its importance as a tool of proof, while at the same time using its simplicity to motivate the study of the result itself as well as of other mathematical concepts. The set of problems, separated by difficulty level, and the lesson plan are intended to support the teacher's work in developing this interesting mathematical result.
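The two versions of the principle can be stated compactly; the following is a standard formulation, reconstructed here rather than quoted from the thesis.

```latex
% Simple and general pigeonhole principles (standard statements).
\begin{itemize}
  \item Simple version: if $N+1$ objects are placed in $N$ drawers,
        some drawer contains at least $2$ objects.
  \item General version: if $N$ objects are placed in $k$ drawers,
        some drawer contains at least $\lceil N/k \rceil$ objects.
\end{itemize}
% Example of the general form: among any 25 people, at least
% $\lceil 25/12 \rceil = 3$ were born in the same month.
```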
|
505 |
A new hybrid meta-heuristic algorithm for solving single machine scheduling problems / Zlobinsky, Natasha / January 2017
A dissertation submitted in partial fulfilment of the degree of Master of Science in Engineering (Electrical) (50/50) in the Faculty of Engineering and the Built Environment, Department of Electrical and Information Engineering, May 2017 / Numerous applications in a wide variety of fields have resulted in a rich history of research into optimisation for scheduling. Although it is a fundamental form of the problem, the single machine scheduling problem with two or more objectives is known to be NP-hard. For this reason we consider the single machine problem a good test bed for solution algorithms. While there is a plethora of research into various aspects of scheduling problems, little has been done in evaluating the performance of the Simulated Annealing algorithm for the fundamental problem, or using it in combination with other techniques. Specifically, this has not been done for minimising total weighted earliness and tardiness, which is the optimisation objective of this work. If we consider a mere ten jobs for scheduling, this results in over 3.6 million (10! = 3,628,800) possible solution schedules. It is thus of definite practical necessity to reduce the search space in order to find an optimal or acceptable suboptimal solution in a shorter time, especially when scaling up the problem size. This is of particular importance in the application area of packet scheduling in wireless communications networks, where the tolerance for computational delays is very low. The main contribution of this work is to investigate the hypothesis that inserting a step of pre-sampling by Markov Chain Monte Carlo methods before running the Simulated Annealing algorithm on the pruned search space can result in overall reduced running times. The search space is divided into a number of sections and Metropolis-Hastings Markov Chain Monte Carlo is performed over the sections in order to reduce the search space for Simulated Annealing by a factor of 20 to 100. Trade-offs are found between the run time and number of sections of the pre-sampling algorithm, and the run time of Simulated Annealing, for minimising the percentage deviation of the final result from the optimal solution cost. Algorithm performance is determined both by computational complexity and by the quality of the solution (i.e. the percentage deviation from the optimal). We find that the running time can be reduced by a factor of 4.5 while ensuring a 2% deviation from the optimal, as compared to the basic Simulated Annealing algorithm on the full search space. More importantly, we are able to reduce the complexity of finding the optimal from O(n·n!) for a complete search, to O(n·N_S) for Simulated Annealing, to O(n(N_M·r + N_S) + m), for the input variables n jobs, N_S Simulated Annealing iterations, N_M Metropolis-Hastings iterations, r inner samples and m sections.
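The baseline Simulated Annealing component described above can be sketched as follows. This is a hypothetical minimal version for total weighted earliness and tardiness with a random-swap neighbourhood and geometric cooling; it omits the dissertation's Metropolis-Hastings pre-sampling contribution, and all parameter values are illustrative.

```python
# Minimal SA sketch for single-machine total weighted earliness/tardiness.
# A schedule is a permutation of job indices; cost sums weighted earliness
# and tardiness against each job's due date.
import math, random

def wet_cost(jobs, order):
    """jobs: list of (processing_time, due_date, w_early, w_tardy)."""
    t, cost = 0, 0.0
    for j in order:
        p, d, we, wt = jobs[j]
        t += p                        # completion time C_j
        cost += we * max(0, d - t) + wt * max(0, t - d)
    return cost

def simulated_annealing(jobs, T0=100.0, cooling=0.995, n_iters=20000, seed=0):
    rng = random.Random(seed)
    order = list(range(len(jobs)))
    rng.shuffle(order)
    cost, T = wet_cost(jobs, order), T0
    best, best_cost = order[:], cost
    for _ in range(n_iters):
        i, k = rng.sample(range(len(order)), 2)     # random pairwise swap
        order[i], order[k] = order[k], order[i]
        new_cost = wet_cost(jobs, order)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / T):
            cost = new_cost                          # accept move
            if cost < best_cost:
                best, best_cost = order[:], cost
        else:
            order[i], order[k] = order[k], order[i]  # reject: undo swap
        T *= cooling                                 # geometric cooling
    return best, best_cost
```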
|
506 |
On combinatorial approximation algorithms in geometry / Jartoux, Bruno / 12 September 2018
The analysis of approximation techniques is a key topic in computational geometry, for both practical and theoretical reasons. In this thesis we discuss sampling tools for geometric structures and geometric approximation algorithms in combinatorial optimization. Part I focuses on the combinatorics of geometric set systems. We start by discussing packing problems in set systems, including extensions of a lemma of Haussler, mainly the so-called shallow packing lemma. For said lemma we also give an optimal lower bound that had been conjectured but not established in previous work on the topic. Then we use this lemma, together with the recently introduced polynomial partitioning technique, to study a combinatorial analogue of the Macbeath regions from convex geometry: Mnets, for which we unify previous existence results and upper bounds, and also give some lower bounds. We highlight their connection with epsilon-nets, staples of computational and combinatorial geometry, for example by observing that the unweighted epsilon-net bound of Chan et al. (SODA 2012) or Varadarajan (STOC 2010) follows directly from our results on Mnets. Part II deals with local-search techniques applied to geometric restrictions of classical combinatorial optimization problems. Over the last ten years such techniques have produced the first polynomial-time approximation schemes for various problems, such as that of computing a minimum-sized hitting set for a collection of input disks from a set of input points. In fact, it was shown that for many of these problems, local search with radius Θ(1/epsilon²) gives a (1 + epsilon)-approximation with running time n^{O(1/epsilon²)}. However, the question of whether the exponent of n could be decreased to o(1/epsilon²) was left open. We answer it in the negative: the approximation guarantee of local search cannot be improved for any of these problems. The key ingredient is a new lower bound on locally expanding planar graphs, which is then used to show the impossibility results.
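The local-search paradigm in Part II can be illustrated with a small sketch for the disks-and-points hitting set problem mentioned above. This hypothetical version uses only drop moves and 2-for-1 swaps, far below the Θ(1/epsilon²) swap radius the quoted analyses require, so it shows the mechanics rather than the approximation guarantee.

```python
# Local search for hitting input disks with input points.
# Points are (x, y) tuples; disks are (cx, cy, r) tuples.
from itertools import combinations

def hits(point, disk):
    (px, py), (cx, cy, r) = point, disk
    return (px - cx) ** 2 + (py - cy) ** 2 <= r ** 2

def feasible(points, disks):
    return all(any(hits(p, d) for p in points) for d in disks)

def local_search_hitting_set(points, disks):
    sol = set(points)                        # trivially feasible start
    assert feasible(sol, disks), "input points must hit every disk"
    improved = True
    while improved:
        improved = False
        for p in list(sol):                  # drop moves
            if feasible(sol - {p}, disks):
                sol.remove(p)
                improved = True
        for a, b in combinations(list(sol), 2):   # 2-for-1 swap moves
            for q in points:
                if q not in sol and feasible(sol - {a, b} | {q}, disks):
                    sol -= {a, b}
                    sol.add(q)
                    improved = True
                    break
            if improved:
                break
    return sol
```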
|
507 |
Massive parallelism for combinatorial problems by hardware acceleration with an application to the label switching problem / Steere, Edward / January 2016
A dissertation submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, in fulfilment of the requirements for the degree of Master of Science in Engineering. / This dissertation proposes an approach to solving hard combinatorial problems on massively parallel architectures using parallel metaheuristics. Combinatorial problems are common in many scientific fields. Scientific progress is constrained by the fact that, even using state-of-the-art algorithms, solving hard combinatorial problems can take days or weeks. This is the case with the Label Switching Problem (LSP) in the field of Bioinformatics. In this field, prior work to solve the LSP has resulted in the program CLUMPP (CLUster Matching and Permutation Program). CLUMPP focuses solely on the use of a sequential, classical heuristic, and has had success on smaller, low-complexity problems. By contrast, this dissertation proposes the Parallel Solvers model for the acceleration of hard combinatorial problems. This model draws on the commonalities evident in algorithms and strategies in metaheuristics. After investigating the effectiveness of the mechanisms apparent in the Parallel Solvers model with regard to the LSP, the author developed DePermute, an algorithm which can be used to solve the LSP significantly faster. Results were generated from time-based testing of simulated data, as well as of data freely available on the Internet as part of various projects. An investigation into the effectiveness of DePermute was carried out on a CPU (Central Processing Unit) based computer. The time-based testing was carried out on a CPU-based computer and on a Graphics Processing Unit (GPU) attached to a CPU host computer. The dissertation also proposes the design of a Field Programmable Gate Array (FPGA) based implementation of DePermute. Using Parallel Solvers, in the DePermute algorithm, the time taken for population group sizes K ranging from K = 5 to 20 was improved by up to two orders of magnitude using the GPU implementation, compared with CLUMPP on aggressive settings. The CPU implementation, while slower than the GPU implementation, still outperforms CLUMPP on aggressive settings, marginally and usually with better quality. In addition, it outperforms CLUMPP by at least an order of magnitude when CLUMPP is set to use higher-quality settings. Combinatorial problems can be very difficult. Parallel Solvers has been effective in the field of Bioinformatics in solving the LSP. This dissertation proposes that it might assist in the reasoning about and design of algorithms in other fields.
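To make the Label Switching Problem concrete: cluster labels from independent runs are arbitrary, so runs must be aligned by relabelling before their results can be compared or averaged. The hypothetical sketch below brute-forces the best permutation for hard label assignments; the K! blow-up it suffers over the K = 5 to 20 range above is precisely what motivates faster, parallel approaches such as DePermute. (CLUMPP and DePermute actually operate on membership-coefficient matrices rather than hard labels; hard labels keep the sketch short.)

```python
# Align one run's cluster labels to a reference run by exhaustively
# searching label permutations — feasible only for small K.
from itertools import permutations

def align_labels(reference, labels, K):
    """Return labels relabelled by the permutation maximising agreement."""
    best_perm, best_score = None, -1
    for perm in permutations(range(K)):          # K! candidates
        score = sum(1 for r, l in zip(reference, labels) if r == perm[l])
        if score > best_score:
            best_perm, best_score = perm, score
    return [best_perm[l] for l in labels], best_score

ref = [0, 0, 1, 1, 2, 2]
run = [2, 2, 0, 0, 1, 1]                         # same clustering, relabelled
print(align_labels(ref, run, K=3))               # ([0, 0, 1, 1, 2, 2], 6)
```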
|
508 |
The existence of minimal logarithmic signatures for classical groups / Unknown Date
A logarithmic signature (LS) for a finite group G is an ordered tuple α = [A_1, A_2, ..., A_n] of subsets A_i of G, such that every element g ∈ G can be expressed uniquely as a product g = a_1 a_2 ... a_n, where a_i ∈ A_i. Logarithmic signatures were defined by Magliveras in the late 1970s for arbitrary finite groups in the context of cryptography. They were also studied for abelian groups by Hajós in the 1930s. The length of an LS is defined to be ℓ(α) = Σ_{i=1}^{n} |A_i|. It can easily be seen that for a group G of order ∏_{j=1}^{k} p_j^{m_j}, the length of any LS for G satisfies ℓ(α) ≥ Σ_{j=1}^{k} m_j p_j. An LS for which this lower bound is achieved is called a minimal logarithmic signature (MLS). The MLS conjecture states that every finite simple group has an MLS. If the conjecture is true then every finite group will have an MLS. The conjecture was shown to be true by a number of researchers for a few classes of finite simple groups. However, the problem is still wide open. This dissertation addresses the MLS conjecture for the classical simple groups. In particular, it is shown that MLSs exist for the symplectic groups Sp_{2n}(q), the orthogonal groups O_{2n}(q_0) and the corresponding simple groups PSp_{2n}(q) and Ω_{2n}(q_0) for all n ∈ ℕ, prime power q and even prime power q_0. The existence of an MLS is also shown for all unitary groups GU_n(q) for all odd n and q = 2^s, under the assumption that an MLS exists for GU_{n-1}(q). The methods used are very general and algorithmic in nature and may be useful for studying all finite simple groups of Lie type and possibly also the sporadic groups. The blocks of logarithmic signatures constructed in this dissertation have cyclic structure and provide a sort of cyclic decomposition for these classical groups. / by Nikhil Singhi. / Thesis (Ph.D.)--Florida Atlantic University, 2011. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2011. Mode of access: World Wide Web.
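The defining property of a logarithmic signature is easy to check computationally for a toy group. The sketch below is illustrative only and unrelated to the dissertation's constructions: it verifies an MLS for the cyclic group Z_6 under addition, whose blocks have sizes 2 and 3, meeting the lower bound Σ m_j p_j for |G| = 2·3.

```python
# Check the defining LS property: every group element must factor uniquely
# as a product a_1 * a_2 * ... * a_n with a_i drawn from block A_i.
from itertools import product

def is_logarithmic_signature(blocks, group, op):
    """True iff each g in group has exactly one factorisation across blocks."""
    counts = {g: 0 for g in group}
    for factors in product(*blocks):
        g = factors[0]
        for a in factors[1:]:
            g = op(g, a)
        counts[g] += 1
    return all(c == 1 for c in counts.values())

Z6 = range(6)
blocks = [[0, 3], [0, 1, 2]]                     # lengths 2 + 3 = 5, minimal
print(is_logarithmic_signature(blocks, Z6, lambda x, y: (x + y) % 6))  # True
```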
|
509 |
On the minimal logarithmic signature conjecture / Unknown Date
The minimal logarithmic signature conjecture states that in any finite simple group there are subsets A_i, 1 ≤ i ≤ s, such that the size |A_i| of each A_i is a prime or 4, and each element of the group has a unique expression as a product ∏_{i=1}^{s} a_i of elements a_i ∈ A_i. Logarithmic signatures have been used in the construction of several cryptographic primitives since the late 1970s [3, 15, 17, 19, 16]. The conjecture is shown to be true for various families of simple groups, including the cyclic groups, A_n, PSL_n(q) when gcd(n, q-1) is 1, 4 or a prime, and several sporadic groups [10, 9, 12, 14, 18]. This dissertation is devoted to proving that the conjecture is true for a large class of simple groups of Lie type called the classical groups. The methods developed use the structure of these groups as isometry groups of bilinear or quadratic forms. A large part of the construction is also based on the Bruhat and Levi decompositions of parabolic subgroups of these groups. In this dissertation the conjecture is shown to be true for the following families of simple groups: the projective special linear groups PSL_n(q), the projective symplectic groups PSp_{2n}(q) for all n and q a prime power, and the projective orthogonal groups of positive type Ω^+_{2n}(q) for all n and q an even prime power. During the process, the existence of minimal logarithmic signatures (MLSs) is also proven for the linear groups GL_n(q), PGL_n(q) and SL_n(q), the symplectic groups Sp_{2n}(q) for all n and q a prime power, and the orthogonal groups of plus type O^+_{2n}(q) for all n and q an even prime power. The constructions in most of these cases provide cyclic MLSs. Using the relationship between finite groups of Lie type and groups with a split BN-pair, it is also shown that every finite group of Lie type can be expressed as a disjoint union of sets, each of which has an MLS. / by Nidhi Singhi. / Thesis (Ph.D.)--Florida Atlantic University, 2011. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2011. Mode of access: World Wide Web.
|
510 |
Collaborative filtering using machine learning and statistical techniques / Unknown Date
Collaborative filtering (CF), a very successful recommender system technique, is one of the applications of data mining to incomplete data. The main objective of CF is to make accurate recommendations from highly sparse user rating data. My contributions to this research topic include proposing the frameworks of imputation-boosted collaborative filtering (IBCF) and imputed neighborhood-based collaborative filtering (INCF). We also proposed a model-based CF technique, TAN-ELR CF, and two hybrid CF algorithms, sequential mixture CF and joint mixture CF. Empirical results show that our proposed CF algorithms have very good predictive performance. In investigating the application of imputation techniques to mining incomplete data, we proposed imputation-helped classifiers and VCI predictors (voting on classifications from imputed learning sets), both of which result in significant improvements in classification performance on incomplete data over conventional machine-learned classifiers, including kNN, neural network, one rule, decision table, SVM, logistic regression, decision tree (C4.5), random forest and decision list (PART), as well as the well-known Bagging predictors. The main imputation techniques involved in these algorithms include EM (expectation maximization) and BMI (Bayesian multiple imputation). / by Xiaoyuan Su. / Vita. / Thesis (Ph.D.)--Florida Atlantic University, 2008. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2008. Mode of access: World Wide Web.
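The IBCF framework mentioned above composes two stages: impute the sparse rating matrix, then run a standard neighbourhood-based CF on the densified data. The sketch below is a minimal, hypothetical rendering of that pipeline using plain item-mean imputation and cosine-similarity user neighbours; the dissertation's imputers (EM, BMI) and its TAN-ELR and mixture models are not reproduced here.

```python
# Two-stage sketch: (1) impute missing ratings, (2) neighbourhood-based CF.
import numpy as np

def ibcf_predict(R, user, item, k=2):
    """R: rating matrix with np.nan for missing entries; predict R[user, item]."""
    col_means = np.nanmean(R, axis=0)              # stage 1: item-mean imputation
    filled = np.where(np.isnan(R), col_means, R)
    norms = np.linalg.norm(filled, axis=1)         # stage 2: cosine-kNN CF
    sims = filled @ filled[user] / (norms * norms[user])
    sims[user] = -np.inf                           # exclude the target user
    neighbours = np.argsort(sims)[-k:]             # k most similar users
    return filled[neighbours, item].mean()         # neighbourhood average

# Toy 4-user x 4-item rating matrix with missing entries.
R = np.array([[5, 4, np.nan, 1],
              [4, np.nan, 5, 1],
              [1, 2, 1, np.nan],
              [np.nan, 1, 2, 5]], dtype=float)
print(ibcf_predict(R, user=0, item=2))
```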
|