551

Extremal Results for Peg Solitaire on Graphs

Gray, Aaron D. 01 December 2013
In a 2011 paper by Beeler and Hoilman, the game of peg solitaire is generalized to arbitrary boards. These boards are treated as graphs in the combinatorial sense. An open problem from that paper is to determine the minimum number of edges necessary for a graph with a fixed number of vertices to be solvable. This thesis provides new bounds on this number. It also provides necessary and sufficient conditions for two families of graphs to be solvable, along with criticality results, and the maximum number of pegs that can be left in each of the two graph families.
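To make the move rules concrete, here is a minimal brute-force solvability check for graph peg solitaire under the Beeler–Hoilman jump (pegs on adjacent vertices u and v jump into an empty vertex w adjacent to v, removing the pegs on u and v and placing one on w). The representation and function name are illustrative, not from the thesis; the state space is 2^n, so this only works on small graphs.

```python
def solvable(adj, n):
    """Brute-force check that a graph is solvable in graph peg solitaire:
    from some starting state with a single empty vertex, a sequence of
    jumps reduces the board to a single peg.
    adj: dict mapping vertex -> set of neighbours; n: number of vertices."""
    full = (1 << n) - 1
    for empty in range(n):
        start = full & ~(1 << empty)          # one hole at `empty`
        seen, stack = {start}, [start]
        while stack:
            state = stack.pop()
            if bin(state).count("1") == 1:
                return True                   # a single peg remains
            for v in range(n):                # v is the vertex jumped over
                if not state & (1 << v):
                    continue
                for u in adj[v]:              # u holds the jumping peg
                    if not state & (1 << u):
                        continue
                    for w in adj[v]:          # w is the empty landing vertex
                        if w != u and not state & (1 << w):
                            nxt = state & ~(1 << u) & ~(1 << v) | (1 << w)
                            if nxt not in seen:
                                seen.add(nxt)
                                stack.append(nxt)
    return False

# e.g. solvable({0: {1}, 1: {0, 2}, 2: {1}}, 3) -> True (the path P3 is solvable)
```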
552

Interaction Testing, Fault Location, and Anonymous Attribute-Based Authorization

January 2019
abstract: This dissertation studies three classes of combinatorial arrays with practical applications in testing, measurement, and security. Covering arrays are widely studied in software and hardware testing to indicate the presence of faulty interactions. Locating arrays extend covering arrays to achieve identification of the interactions causing a fault by requiring additional conditions on how interactions are covered in rows. This dissertation introduces a new class, the anonymizing arrays, to guarantee a degree of anonymity by bounding the probability that a particular row is identified by the interaction presented. Similarities among these arrays lead to common algorithmic techniques for their construction, which this dissertation explores. Differences arising from their application domains lead to the unique features of each class, requiring the techniques to be tailored to the specifics of each problem. One contribution of this work is a conditional expectation algorithm to build covering arrays via an intermediate combinatorial object. Conditional expectation efficiently finds intermediate-sized arrays that are particularly useful as ingredients for additional recursive algorithms. A cut-and-paste method creates large arrays from small ingredients; performing transformations on the copies makes further improvements by reducing redundancy in the composed arrays and leads to fewer rows. This work contains the first algorithm for constructing locating arrays for general values of $d$ and $t$. A randomized computational search framework verifies whether a candidate array is $(\bar{d},t)$-locating by partitioning the search space, and performs random resampling if a candidate fails. Algorithmic parameters determine which columns to resample and when to add additional rows to the candidate array. Additionally, the performance of the algorithmic parameters is analyzed to provide guidance on how to tune them to prioritize speed, accuracy, or a combination of both. This work proposes anonymizing arrays as a class related to covering arrays with a higher coverage requirement and additional constraints. The algorithms for covering and locating arrays are tailored to anonymizing array construction. An additional property, homogeneity, is introduced to meet the needs of attribute-based authorization. Two metrics, local and global homogeneity, are designed to compare anonymizing arrays with the same parameters. Finally, a post-optimization approach reduces the homogeneity of an anonymizing array. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2019
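As a sketch of the conditional-expectation idea for strength t = 2 (the dissertation handles general t and builds via intermediate combinatorial objects; this simplified one-row-at-a-time version is ours), each row is derandomized column by column so that it covers at least the expected number of still-uncovered pairs, which guarantees progress:

```python
import itertools

def cover_expectation(v, fixed, pairs):
    """Expected number of uncovered pairs covered by a row whose first
    len(fixed) columns are fixed and whose remaining columns are filled
    uniformly at random from {0, ..., v-1}."""
    exp = 0.0
    for (c1, s1), (c2, s2) in pairs:
        p = 1.0
        for c, s in ((c1, s1), (c2, s2)):
            if c < len(fixed):
                p *= 1.0 if fixed[c] == s else 0.0
            else:
                p *= 1.0 / v
        exp += p
    return exp

def greedy_ca(k, v):
    """Conditional-expectation construction of a strength-2 covering array
    with k columns over a v-ary alphabet."""
    uncovered = {((c1, s1), (c2, s2))
                 for c1, c2 in itertools.combinations(range(k), 2)
                 for s1 in range(v) for s2 in range(v)}
    rows = []
    while uncovered:
        fixed = []
        for _ in range(k):   # fix each column to maximize the expectation
            best = max(range(v),
                       key=lambda s: cover_expectation(v, fixed + [s], uncovered))
            fixed.append(best)
        uncovered -= {pp for pp in uncovered
                      if fixed[pp[0][0]] == pp[0][1] and fixed[pp[1][0]] == pp[1][1]}
        rows.append(fixed)
    return rows

# greedy_ca(4, 2) returns a small strength-2 covering array over {0, 1}
```

Because the greedy column choice never decreases the conditional expectation, each completed row covers at least one uncovered pair, so the loop terminates.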
553

Subtraction Games: Range and Strict Periodicity

Blackham, Bryce Emerson 01 April 2018
In this paper I introduce some background for subtraction games and explore the Sprague-Grundy functions defined on them. I exhibit some subtraction games where the functions are guaranteed to be strictly periodic. I also exhibit a class of subtraction games which have bounded range, and show there are uncountably many of these.
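For concreteness, the Sprague-Grundy values of a subtraction game follow directly from the mex recurrence; this short sketch is illustrative, not taken from the paper:

```python
def grundy_sequence(S, n):
    """Sprague-Grundy values g(0..n) for the subtraction game with
    subtraction set S: from a heap of size h you may remove any s in S
    with s <= h; g(h) is the mex of the values of the reachable heaps."""
    g = []
    for h in range(n + 1):
        reachable = {g[h - s] for s in S if s <= h}
        m = 0
        while m in reachable:
            m += 1                    # mex: least excluded non-negative integer
        g.append(m)
    return g

# S = {1, 2, 3} gives the strictly periodic sequence 0, 1, 2, 3, 0, 1, 2, 3, ...
print(grundy_sequence({1, 2, 3}, 11))
```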
554

A Combinatorial Approach to $r$-Fibonacci Numbers

Heberle, Curtis 31 May 2012
In this paper we explore generalized “$r$-Fibonacci Numbers” using a combinatorial “tiling” interpretation. This approach allows us to provide simple, intuitive proofs of several identities involving $r$-Fibonacci Numbers presented by F.T. Howard and Curtis Cooper in the August 2011 issue of the Fibonacci Quarterly. We also explore a connection between the generalized Fibonacci numbers and a generalized form of binomial coefficients.
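As an illustration, assuming the common r-step recurrence F(m) = F(m-1) + ... + F(m-r), which counts tilings of a length-m strip by tiles of lengths 1 through r (Howard and Cooper's exact conventions may differ), the recurrence and its tiling interpretation can be checked against each other:

```python
def r_fibonacci(r, n):
    """F(0..n) for the r-step Fibonacci recurrence F(m) = F(m-1) + ... + F(m-r),
    with F(0) = 1 counting the empty tiling."""
    F = [1] + [0] * n
    for m in range(1, n + 1):
        F[m] = sum(F[m - j] for j in range(1, r + 1) if j <= m)
    return F

def count_tilings(r, n):
    """Direct recursive tiling count: the first tile has some length j <= r,
    leaving a strip of length n - j."""
    if n == 0:
        return 1
    return sum(count_tilings(r, n - j) for j in range(1, r + 1) if j <= n)

# r = 2 recovers the ordinary Fibonacci numbers 1, 1, 2, 3, 5, 8, ...
assert r_fibonacci(2, 10) == [count_tilings(2, m) for m in range(11)]
```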
555

The use of geometric structures in graphics and optimization / L'utilisation des structures géométriques pour synthèse d'image et optimisation

Bus, Norbert 07 October 2015
Real-world data has a large geometric component, showing significant geometric patterns. How to use the geometric nature of data to design efficient methods has become an important topic in several scientific fields, e.g., computational geometry, discrete geometry, computer graphics, and computer vision. In this thesis we use geometric structures to design efficient algorithms for problems in two domains, computer graphics and combinatorial optimization. Part I focuses on a geometric data structure called the well-separated pair decomposition and its usage for one of the most challenging problems in computer graphics, namely efficient photo-realistic rendering. One solution is the family of many-lights methods that approximate global illumination by individually computing illumination from a large number of virtual point lights (VPLs) placed on surfaces. Considering each VPL individually results in a vast number of calculations. One successful strategy to reduce computations is to group the VPLs into a small number of clusters that are treated as individual lights with respect to each point to be shaded. We use the well-separated pair decomposition of points as the basis for a data structure for pre-computing and compactly storing a set of view-independent candidate VPL clusterings, showing that a suitable clustering of the VPLs can be efficiently extracted from this data structure. We show that instead of clustering points and/or VPLs independently, what is required is to cluster the product space of the set of points to be shaded and the set of VPLs, based on the induced pairwise illumination. Additionally, we propose an adaptive sampling technique to reduce the number of visibility queries for each product-space cluster. Our method handles any light source that can be approximated with virtual point lights (VPLs), handles highly glossy materials, and outperforms previous state-of-the-art methods. Part II focuses on developing new approximation algorithms for a fundamental NP-complete problem in computational geometry, namely the minimum hitting set problem, with particular focus on the case where, given a set of points and a set of disks, we wish to compute the minimum-sized subset of the points that hits all disks. It turns out that efficient algorithms for geometric hitting set rely on a key geometric structure called the epsilon-net. We give an algorithm that uses only Delaunay triangulations to construct epsilon-nets of size 13.4/epsilon, and we provide a practical implementation of a technique to calculate hitting sets in near-linear time using small-sized epsilon-nets. Our results yield a 13.4-approximation for the hitting set problem with an algorithm that runs efficiently even on large data sets. For smaller data sets, we present an implementation of the local search technique along with tight bounds on its approximation factor, yielding an (8 + epsilon)-approximation algorithm with running time O(n^{2.34}).
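For orientation, here is the classic greedy point/disk hitting-set heuristic; this is the textbook logarithmic-factor greedy, not the epsilon-net-based 13.4-approximation or the local search developed in the thesis:

```python
def greedy_hitting_set(points, disks):
    """Greedy hitting set: repeatedly pick the point contained in the most
    not-yet-hit disks. points: list of (x, y); disks: list of (cx, cy, r)."""
    def hits(p, d):
        (px, py), (cx, cy, r) = p, d
        return (px - cx) ** 2 + (py - cy) ** 2 <= r * r

    remaining, chosen = list(disks), []
    while remaining:
        best = max(points, key=lambda p: sum(hits(p, d) for d in remaining))
        if not any(hits(best, d) for d in remaining):
            raise ValueError("some disk contains no input point")
        chosen.append(best)
        remaining = [d for d in remaining if not hits(best, d)]
    return chosen
```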
556

Graph-theoretic studies of combinatorial optimization problems

Mirghorbani Nokandeh, Seyed Mohammad S. 01 May 2013
Graph theory is a fascinating branch of mathematics. Leonhard Euler introduced the concept in his paper on the seven bridges of Königsberg, published in 1736. In a nutshell, graph theory is the study of pairwise relationships between objects: each object is represented by a vertex, and a relationship between a pair of vertices is represented by an edge connecting them. In this dissertation, graph theory is used to study several important combinatorial optimization problems. In chapter 2, we study the multi-dimensional assignment problem (MAP) using its underlying hypergraphs. It is shown how the MAP can be represented by a k-partite graph and how any solution to the MAP is associated with a k-clique in the respective k-partite graph. Two heuristics are proposed to solve the MAP, and computational studies are performed to compare the performance of the proposed methods with exact solutions. In chapter 3, a new branch-and-bound method, BitCLQ, is proposed to solve the problem of finding all k-cliques in a k-partite graph. The new method uses bitsets as the data structure to represent graph data. A new pruning method is introduced in BitCLQ, and CPU instructions are used to improve the performance of the branch-and-bound method. BitCLQ gains up to 300% speedup over existing methods. In chapter 4, two new heuristics to solve decomposable-cost MAPs are proposed. The proposed heuristics are based on partitioning the underlying graph representing the MAP. In the first heuristic, MAPs are partitioned into several smaller MAPs with the same dimensionality and smaller cardinality; the solution to the original MAP is constructed incrementally, using the solutions obtained from each of the smaller MAPs. The second heuristic works in the same fashion, but instead of partitioning the graph along the cardinality, graphs are divided into smaller graphs with the same cardinality but smaller dimensionality. The result of each heuristic is then compared with a well-known existing heuristic. An important problem in graph theory is the maximum clique problem (MCQ). A clique in a graph is a complete subgraph, and the MCQ problem entails finding the size of the largest clique contained in a graph. General branch-and-bound methods for MCQ use graph coloring to find an upper bound on the size of the maximum clique. Recently, a new MAX-SAT-based branch-and-bound method for MCQ was proposed that improves the quality of the upper bound obtained from graph coloring. In chapter 5, a branch-and-bound algorithm using bitsets as the data structure is proposed for the maximum clique problem, and the results of computational studies comparing the proposed method with existing MCQ methods are provided. Chapter 6 contains an application of graph theory to a risk management problem. A new mixed-integer mathematical model of a risk-based network is provided, and it is shown that an optimal solution of the model is a maximal clique in the underlying graph representing the network. The model is then solved using a graph-theoretic approach and the results are compared to CPLEX.
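To illustrate the bitset idea, a minimal branch-and-bound maximum-clique routine can encode each neighbourhood as an integer bitset and prune with a popcount bound; BitCLQ's pruning and data layout are more sophisticated, so this is only a sketch:

```python
def max_clique(adj_bits, n):
    """Branch-and-bound maximum clique where adj_bits[v] is the integer
    bitset of v's neighbours; a simple popcount bound prunes the search."""
    best = [0]

    def expand(clique_size, candidates):
        if candidates == 0:
            best[0] = max(best[0], clique_size)
            return
        if clique_size + bin(candidates).count("1") <= best[0]:
            return                     # even taking every candidate cannot win
        while candidates:
            v = candidates.bit_length() - 1
            candidates &= ~(1 << v)    # try v, then exclude it
            if clique_size + bin(candidates).count("1") + 1 <= best[0]:
                return
            expand(clique_size + 1, candidates & adj_bits[v])

    expand(0, (1 << n) - 1)
    return best[0]

# triangle on vertices {0, 1, 2}: max_clique([0b110, 0b101, 0b011], 3) -> 3
```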
557

Combinatorial-Based Prioritization For User-Session-Based Test Suites

Manchester, Schuyler 01 May 2012
Software defects caused by inadequate software testing can cost billions of dollars. Further, web application defects can be costly due to the fact that most web applications handle constant user interaction. However, software testing is often under time and budget constraints. By improving the time efficiency of software testing, many of the costs associated with defects can be saved. Current methods for web application testing can take too long to generate test suites. In addition, studies have shown that user-session-based test suites often find faults missed by other testing techniques. This project addresses this problem by utilizing existing user sessions for web application testing. The software testing method provided within this project utilizes previous knowledge about combinatorial coverage testing and improves time and computer memory efficiency by only considering test cases that exist in a user-session based test suite. The method takes the existing test suite and prioritizes the test cases based on a specific combinatorial criterion. In addition, this project presents an empirical study examining the application of the newly proposed combinatorial prioritization algorithm on an existing web application.
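A minimal sketch of greedy 2-way (pairwise) prioritization of an existing suite, illustrative of the approach rather than the project's exact criterion or implementation:

```python
from itertools import combinations

def prioritize_by_pairwise_coverage(suite):
    """Reorder an existing test suite so each next test covers the most
    parameter-value pairs not yet covered by earlier tests.
    suite: list of tests, each a tuple of parameter values."""
    def pairs(test):
        return {((i, test[i]), (j, test[j]))
                for i, j in combinations(range(len(test)), 2)}

    remaining, ordered, covered = list(suite), [], set()
    while remaining:
        best = max(remaining, key=lambda t: len(pairs(t) - covered))
        remaining.remove(best)
        ordered.append(best)
        covered |= pairs(best)
    return ordered
```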
558

Modèles et algorithmes pour l'optimisation robuste dans les Self-Organizing Network (SON) des réseaux mobiles 4G (LTE) / Models and algorithms for robust optimization in self-Organizing Networks (SON) of 4G mobile networks (LTE)

Tabia, Nourredine 13 December 2013
The 3G/UMTS standard enabled the first multimedia applications for mobile phones and tablets. The new 4G/LTE (Long Term Evolution) standard targets very-high-speed mobile broadband. In this standard, a large effort has been made on automatic network reconfiguration driven by variations in customer demand, in a process called Self-Organizing Network (SON). The work of this thesis lies in this direction. Network reconfiguration is understood mainly in the sense of models, methods, and tools to analyze the Key Performance Indicators reported by the network and to configure its settings automatically. We mainly worked on air-interface parameters such as frequency assignment, transmitted power, and vertical antenna tilt. In this context, given the high variability of the optimization input data reported by the network, this thesis focuses on models and algorithms for robust optimization under constraints. Robust optimization refers to a set of techniques for solving combinatorial problems with uncertain data and scenarios that vary over time. The first part is dedicated to the state of the art and presents the principles of Self-Organizing Networks (SON). The second part covers the state of the art in robust optimization methods. The third part presents the mathematical model of the optimization problem, in which the traffic data (distribution of customers over the service area and their respective demands) take values that vary over time. The fourth part presents a data-driven diagnostic phase on network operation, together with a sensitivity analysis of the solutions with respect to variations in the realized data, using local search algorithms. The fifth part presents the design, development, and scenario testing of a Tabu Search method, along with a thorough analysis of the control methodologies proposed for SON in 4G.
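As a sketch of the kind of Tabu Search used for such parameter tuning (the interface and parameter names here are illustrative assumptions, not the thesis's implementation):

```python
def tabu_search(initial, neighbours, cost, tenure=10, iters=200):
    """Generic tabu-search skeleton: move to the best non-tabu neighbour,
    keep recent moves tabu for `tenure` iterations, remember the best
    solution seen. `neighbours(x)` yields (move_id, candidate) pairs."""
    current = best = initial
    tabu = {}                                   # move_id -> iteration it expires
    for it in range(iters):
        candidates = [(m, x) for m, x in neighbours(current)
                      if tabu.get(m, -1) < it or cost(x) < cost(best)]  # aspiration
        if not candidates:
            break
        move, current = min(candidates, key=lambda mx: cost(mx[1]))
        tabu[move] = it + tenure
        if cost(current) < cost(best):
            best = current
    return best
```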
559

Régulation coopérative des intersections : protocoles et politiques / Cooperative Intersection Management : Protocols and policies

Perronnet, Florent 27 May 2015
The objective of this work is to use the potential offered by wireless communication and cooperative autonomous vehicles to improve traffic flow, first at an isolated intersection and then in a network of intersections. We define a protocol called CVAS (Cooperative Vehicle Actuator System) for managing an isolated intersection. CVAS distributes the right of way to each vehicle individually, according to a precise order given by a computed passing sequence. To optimize the sequence, we define a DCP (Distributed Clearing Policy) that improves the total evacuation time of the intersection. The construction of the passing sequence is studied through two modeling approaches: a graph-based model for computing the optimal solution when the arrival times of all vehicles are known, and a timed Petri net model for real-time control. Tests with real, equipped vehicles were carried out to study the feasibility of CVAS, and simulations with realistic traffic show its ability to improve traffic flow compared with conventional traffic-light control. Managing a network of intersections raises the issues of gridlock and traffic rerouting. We propose the CVAS-NI protocol (Cooperative Vehicle Actuator System for Networks of Intersections), an extension of CVAS that prevents deadlock in the network a priori through occupancy and reservation constraints on space and time. With a deadlock-free network, we also study improving traffic flow through vehicle routing, taking advantage of CVAS-NI. Finally, we generalize the proposed control system to synchronize vehicle velocities at intersections.
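As a toy illustration of optimizing a passing sequence for total evacuation time (a brute-force stand-in for the thesis's graph and timed Petri net formulations; all modelling choices here are our assumptions):

```python
from itertools import permutations

def best_passing_sequence(arrivals, conflicts, crossing=2.0):
    """Brute-force the vehicle order minimizing total evacuation time at
    one intersection: conflicting vehicles must enter at least `crossing`
    seconds apart. arrivals: list of arrival times; conflicts: set of
    (u, v) vehicle-index pairs whose trajectories intersect."""
    best_seq, best_t = None, float("inf")
    for order in permutations(range(len(arrivals))):
        entry, evac = {}, 0.0
        for v in order:
            t = arrivals[v]
            for u in entry:                      # wait for conflicting predecessors
                if (u, v) in conflicts or (v, u) in conflicts:
                    t = max(t, entry[u] + crossing)
            entry[v] = t
            evac = max(evac, t + crossing)       # time v has cleared the zone
        if evac < best_t:
            best_seq, best_t = order, evac
    return best_seq, best_t

# best_passing_sequence([0.0, 0.5, 1.0], {(0, 1), (1, 2)})
```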
560

Extremal and probabilistic problems in order types / Problemas extremais e probabilísticos em o-tipos

Sales, Marcelo Tadeu de Sá Oliveira 15 June 2018
A configuration is a finite set of points in the plane. Two configurations have the same order type if there exists a bijection between them that preserves the orientation of every ordered triple. A configuration A contains a copy of a configuration B if some subset of A has the same order type as B; we denote this by B \subset A. For a configuration B and an integer N, the extremal number ex(N,B) = max{|A| : B \not\subset A \subset [N]^2} is the maximum size of a subset of [N]^2 without a copy of B. We give upper bounds for the general and convex cases. A random N-set is a set obtained by choosing N points uniformly and independently at random in the unit square. A configuration is n-universal if it contains all order types in general position of size n. We obtain the threshold for the n-universal property up to a log log factor; that is, we obtain integers N_0 and N_1 with log log N_1 = O(log log N_0) such that if N >> N_1 (N << N_0), then a random N-set is n-universal with probability tending to 1 (tending to 0). We also determine a bound on the probability that a random set contains no copy of a fixed configuration.
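The orientation predicate underlying order types is the sign of a cross product. As a small illustrative sketch (ours, not from the thesis), the following checks whether the identity bijection A[i] -> B[i] preserves the orientation of every triple; deciding whether some orientation-preserving bijection exists requires searching over bijections:

```python
from itertools import combinations

def orientation(p, q, r):
    """Sign of the cross product (q - p) x (r - p): +1 for a left turn,
    -1 for a right turn, 0 for collinear points."""
    d = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (d > 0) - (d < 0)

def same_order_type_under_identity(A, B):
    """True if the bijection A[i] -> B[i] preserves the orientation of
    every ordered triple, i.e. it witnesses equal order types."""
    return all(orientation(A[i], A[j], A[k]) == orientation(B[i], B[j], B[k])
               for i, j, k in combinations(range(len(A)), 3))
```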
