171 |
Fast Algorithms for Routing and Scheduling Problems / Toussaint, Hélène 23 July 2010 (has links) (PDF)
In this thesis, we study the modeling and solution of several vehicle routing and scheduling problems. We propose approximate methods designed to solve these problems quickly and effectively. We address five problems. The first is a resource-constrained project scheduling problem (RCPSP), which we solve using a multiflow formulation. We also consider solution methods for extensions of this problem (temporal or financial constraints). The second is a two-dimensional packing problem, which we tackle with an original approach based on its relaxation into an RCPSP. The third is the Stacker Crane Problem (SCP). It belongs to the family of pickup-and-delivery problems, in which goods must be transported from origins to destinations by a fleet of vehicles; in the SCP, a single vehicle of unit capacity is available. We propose an original tree-based method for the preemptive case. The fourth is a dial-a-ride problem with financial constraints, which we solve with an insertion heuristic and a constraint propagation technique. The fifth combines routing with two-dimensional packing: the 2L-CVRP, in which parcels must be delivered to customers. We propose a GRASPxELS scheme for this problem. Experimental results show the relevance of the proposed approaches.
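The insertion heuristic mentioned for the dial-a-ride problem can be illustrated in its simplest form: each new stop is placed at the position in the current route where it increases total travel cost the least. This is only a generic sketch on a toy Euclidean instance, not the thesis's method (which also handles financial constraints via constraint propagation); all names and the distance function here are assumptions.

```python
def route_cost(route, dist):
    """Total length of a route visited in order."""
    return sum(dist(a, b) for a, b in zip(route, route[1:]))

def cheapest_insertion(route, stop, dist):
    """Insert `stop` at the position that increases route_cost the least."""
    best_cost, best_route = None, None
    for i in range(1, len(route) + 1):
        cand = route[:i] + [stop] + route[i:]
        cost = route_cost(cand, dist)
        if best_cost is None or cost < best_cost:
            best_cost, best_route = cost, cand
    return best_route

# Toy Euclidean instance: depot at (0, 0), three stops inserted one by one.
dist = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
route = [(0, 0)]
for stop in [(4, 0), (2, 1), (0, 3)]:
    route = cheapest_insertion(route, stop, dist)
```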
|
172 |
Towards Adaptive and Generic Solutions for Mining Interesting Patterns in Data / Flouvat, Frédéric 08 December 2006 (has links) (PDF)
Frequent pattern mining is one of the fundamental problems in data mining. To better understand how the data influence the algorithms, we present an experimental study of the datasets commonly used by the community. This study leads to a new classification of datasets based on their borders, one that is stable and consistent with observed algorithm performance. Despite the large body of work and a theoretical framework for interesting-pattern mining problems, the use of these algorithms to solve "equivalent" problems remains rare and delicate. Given these limitations, we propose a generic algorithm for discovering the borders of interesting patterns, called ABS (Adaptive Borders Search), which dynamically adapts its strategy to the data. In addition, a generic library of C++ components is proposed to ease the development of software solutions for this family of problems.
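The "borders" studied here can be made concrete on a toy example: the positive border of the frequent itemsets is the set of maximal frequent itemsets. The naive enumeration below is illustrative only; ABS's point is precisely to avoid such exhaustive search by adapting its strategy to the data.

```python
from itertools import combinations

def positive_border(transactions, minsup):
    """Maximal frequent itemsets of a tiny dataset (naive full enumeration)."""
    items = sorted(set().union(*transactions))
    frequent = []
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            s = set(cand)
            # support = number of transactions containing the candidate
            if sum(s <= t for t in transactions) >= minsup:
                frequent.append(s)
    # keep only itemsets not strictly contained in another frequent itemset
    return [f for f in frequent if not any(f < g for g in frequent)]

T = [{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'c'}]
border = positive_border(T, 2)
# with minsup = 2 the maximal frequent itemsets are {a, b} and {a, c}
```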
|
173 |
Upper and Lower Bounds for Text Indexing Data Structures / Golynski, Alexander 10 December 2007 (has links)
The main goal of this thesis is to investigate the complexity of a variety of problems related to text indexing and text searching. We present new data structures that can be used as building blocks for full-text indices that occupy minute space (FM-indexes) and for wavelet trees. These data structures can also be used to represent labeled trees and posting lists; labeled trees arise in XML documents, and posting lists in search engines.
The main emphasis of this thesis is on lower bounds for time-space tradeoffs for the following problems: the rank/select problem, the problem of representing a string of balanced parentheses, the text retrieval problem, the problem of computing a permutation and its inverse, and the problem of representing a binary relation. These results are divided into two groups: lower bounds in the cell probe model and lower bounds in the indexing model.
The cell probe model is the most natural and widely accepted framework for studying data structures. In this model, we are concerned with the total space used by a data structure and the total number of accesses (probes) it performs to memory, while computation is free of charge. The indexing model imposes an additional restriction on the storage: the object in question must be stored in its raw form together with a small index that facilitates an efficient implementation of a given set of queries, e.g. finding rank, select, matching parenthesis, or an occurrence of a given pattern in a given text (for the text retrieval problem).
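The rank and select queries at the heart of these bounds can be stated concretely. The naive scans below only define the semantics; they have none of the succinctness or query-time guarantees the thesis is concerned with.

```python
def rank1(bits, i):
    """Number of 1-bits in bits[0:i] (naive O(i) scan)."""
    return sum(bits[:i])

def select1(bits, j):
    """Position of the j-th 1-bit, 1-indexed; None if absent (naive scan)."""
    seen = 0
    for pos, b in enumerate(bits):
        seen += b
        if b and seen == j:
            return pos
    return None

B = [1, 0, 1, 1, 0, 0, 1]
# rank1(B, 4) counts the ones among the first four bits -> 3
# select1(B, 3) returns the index of the third one -> 3
```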
We propose a new technique for proving lower bounds in the indexing model and use it to obtain lower bounds for the rank/select problem and the balanced parentheses problem. We also improve the existing techniques of Demaine and Lopez-Ortiz using compression and present stronger lower bounds for the text retrieval problem in the indexing model.
The most important result of this thesis is a new technique for cell probe lower bounds. We demonstrate its strength by proving new lower bounds for the problem of representing permutations, the text retrieval problem, and the problem of representing binary relations. (Previously, there were no non-trivial results known for these problems.) In addition, we note that the lower bounds for the permutations problem and the binary relations problem are tight for a wide range of parameters, e.g. the running time of queries, the size and density of the relation.
|
174 |
Algorithmic Properties of Linear Extensions / Bouchitte, Vincent 03 April 1987 (has links) (PDF)
We study the behavior of linear extensions through two comparability invariants: the jump number and the dimension. Recognizing Dilworth orders is shown to be NP-complete; we give polynomial algorithms that solve this problem on two subclasses. We define the notions of greedy dimension and depth-first greedy dimension, and study the cases where they equal the classical dimension. We show the very close relationship between depth-first greedy linear extensions and depth-first traversals. Two problems concerning depth-first greedy linear extensions are shown to be NP-hard.
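The jump number invariant can be stated concretely: in a given linear extension, a jump is a consecutive pair of elements that are incomparable in the underlying order, and the jump number of the order is the minimum jump count over all its linear extensions. A small sketch with a hypothetical four-element poset (the relation below is assumed to be transitively closed):

```python
def jumps(linear_extension, less_than):
    """Count consecutive pairs of the extension that are incomparable."""
    return sum(1 for x, y in zip(linear_extension, linear_extension[1:])
               if (x, y) not in less_than)

# Poset on {a, b, c, d}: a < c, a < d, b < d (already transitively closed).
relation = {('a', 'c'), ('a', 'd'), ('b', 'd')}
# The extension a, c, b, d jumps only at (c, b); the extension a, b, c, d
# jumps at every consecutive pair, since none of them are comparable.
```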
|
175 |
Single and Twin-Heaps as Natural Data Structures for Percentile Point Simulation Algorithms / Hatzinger, Reinhold, Panny, Wolfgang January 1993 (has links) (PDF)
Sometimes percentile points cannot be determined analytically. In such cases one has to resort to Monte Carlo techniques. In order to provide reliable and accurate results it is usually necessary to generate rather large samples. Thus the proper organization of the relevant data is of crucial importance. In this paper we investigate the appropriateness of heap-based data structures for the percentile point estimation problem. Theoretical considerations and empirical results give evidence of the good performance of these structures regarding their time and space complexity. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
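The heap-based organization the authors advocate can be sketched for a running percentile estimate: a max-heap holds the smallest ⌈p·n⌉ observations and a min-heap the rest, so the current estimate is always at the top of the lower heap. This is a generic sketch of the idea, with assumed class and method names, not the authors' exact twin-heap layout.

```python
import heapq
import math

class StreamingPercentile:
    """Running p-th percentile via twin heaps (a sketch, not the paper's layout)."""

    def __init__(self, p):
        self.p = p
        self.lower = []  # max-heap (values negated): the smallest ceil(p*n) values
        self.upper = []  # min-heap: the remaining values

    def add(self, x):
        if self.lower and x > -self.lower[0]:
            heapq.heappush(self.upper, x)
        else:
            heapq.heappush(self.lower, -x)
        n = len(self.lower) + len(self.upper)
        target = max(1, math.ceil(round(self.p * n, 9)))  # round guards fp noise
        while len(self.lower) > target:
            heapq.heappush(self.upper, -heapq.heappop(self.lower))
        while len(self.lower) < target:
            heapq.heappush(self.lower, -heapq.heappop(self.upper))

    def value(self):
        return -self.lower[0]  # the ceil(p*n)-th smallest value seen so far
```

Each insertion costs O(lg n), so even the large Monte Carlo samples the paper considers stay manageable.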
|
177 |
Path Queries in Weighted Trees / Zhou, Gelin January 2012 (has links)
Trees are fundamental structures in computer science, widely used to model and represent many types of data in numerous applications. In many cases, properties of the objects being modeled are stored as weights or labels on the nodes of a tree. Researchers have therefore studied the preprocessing of weighted trees, in which each node is assigned a weight, to support various path queries, which compute a certain function over the weights of the nodes along a given query path in the tree [3, 14, 22, 26].
In this thesis, we consider the problem of supporting several path queries over a tree on n weighted nodes, where the weights are drawn from a set of σ distinct values. One query we support is the path median query, which asks for the median weight on the path between two given nodes. For this and the more general path selection query, we present a linear space data structure that answers queries in O(lg σ) time under the word RAM model. This greatly improves previous results on the same problem: previous data structures achieving O(lg n) query time use O(n lg^2 n) space, and previous linear space data structures require O(n^ε) time to answer a query for any positive constant ε [26].
We also consider the path counting query and the path reporting query, where a path counting query asks for the number of nodes on a query path whose weights lie in a query range, and a path reporting query asks to report those nodes. Our linear space data structure supports path counting queries in O(lg σ) query time. This matches the result of Chazelle [14] when σ is close to n, and performs better when σ is significantly smaller than n. The same data structure can also support path reporting queries in O(lg σ + occ lg σ) time, where occ is the size of the output. In addition, we present a data structure that answers path reporting queries in O(lg σ + occ lg lg σ) time, using O(n lg lg σ) words of space. These are the first data structures that answer path reporting queries.
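A path median query can always be answered in O(n) time with no preprocessing by walking the path explicitly; that trivial baseline, sketched below on a hypothetical parent-pointer tree, is what the O(lg σ) structures above improve on.

```python
def path_to_root(u, parent):
    """Nodes from u up to the root, inclusive."""
    out = [u]
    while parent[u] is not None:
        u = parent[u]
        out.append(u)
    return out

def path_nodes(u, v, parent):
    """Nodes on the u-v path: u up to the LCA, then down to v."""
    au, av = path_to_root(u, parent), path_to_root(v, parent)
    lca = next(x for x in av if x in set(au))
    return au[:au.index(lca) + 1] + av[:av.index(lca)][::-1]

def path_median(u, v, parent, weight):
    """Lower median of the weights on the u-v path (O(n) walk, no preprocessing)."""
    ws = sorted(weight[x] for x in path_nodes(u, v, parent))
    return ws[(len(ws) - 1) // 2]

# Hypothetical tree: 0 is the root with children 1 and 2; 1 has children 3 and 4.
parent = {0: None, 1: 0, 2: 0, 3: 1, 4: 1}
weight = {0: 5, 1: 2, 2: 7, 3: 9, 4: 1}
```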
|
178 |
Designing Efficient Geometric Search Algorithms Using Persistent Binary-Binary Search Trees / INAGAKI, Yasuyoshi, HIRATA, Tomio, TAN, Xuehou 20 April 1994 (has links)
No description available.
|
179 |
Verifiable and redactable medical documents / Brown, Jordan Lee 16 July 2012 (has links)
The objective of this research is to answer the question of how to provide verifiability and redactability for medical documents at a manageable computational cost to all parties involved. The approach examines the use of Merkle Hash Trees to provide the required redaction and verification characteristics. Using the Merkle Hash Tree, the elements of various Continuity of Care Documents are extracted for storage in the signature scheme. An analysis of the approach, and of the characteristics that make it a likely candidate for success, is provided within. A description of a framework implementation and a sample application demonstrates potential uses of the system. Finally, results from experiments with the framework provide concrete evidence of a solution to the research question.
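The role of the Merkle Hash Tree can be sketched minimally: the signer signs only the root, and a holder who withholds an element reveals just that leaf's hash, leaving the root (and thus the signature) verifiable. This is a generic illustration with assumed field names and an assumed odd-node promotion rule, not the document scheme developed in the thesis.

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(hashes):
    """Fold a level of hashes pairwise; an odd last node is promoted unchanged."""
    level = list(hashes)
    while len(level) > 1:
        nxt = [h(level[i] + level[i + 1]) for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0]

# Hypothetical document elements; the signer signs merkle_root over their hashes.
doc = [b'name: Alice', b'dob: 1980-01-01', b'diagnosis: ...', b'physician: Dr. B']
root = merkle_root(h(e) for e in doc)

# Redaction: element 2 is withheld; the verifier receives its leaf hash instead,
# recomputes the other leaf hashes from the revealed text, and checks the root.
disclosed = {0: doc[0], 1: doc[1], 3: doc[3]}
redacted_hash = h(doc[2])  # disclosed in place of the plaintext
leaves = [h(disclosed[i]) if i in disclosed else redacted_hash for i in range(4)]
assert merkle_root(leaves) == root  # verifies without seeing the diagnosis
```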
|
180 |
Extending Modelica with High-Level Data Structures: Design and Implementation in OpenModelica / Björklén, Simon January 2008 (has links)
Modelica is an equation-based object-oriented (EOO) language. PELAB at Linköping University, together with the OpenModelica development group, is developing a metamodeling extension to this language, MetaModelica, along with a compiler called the OpenModelica Compiler (OMC).
The goal of this thesis was to analyze the compiler, extend it with union type support, and then report on the extension, with union types in particular and high-level data structures in general, to facilitate further development.
The implementation carried out in this thesis work aimed to keep the current structure intact and to extend case-clauses where possible. The main parts of the extension are implemented by this thesis work, but some parts concerning the pattern-matching algorithms remain to be extended. The ultimate aim is to bootstrap the OpenModelica Compiler, making it able to compile itself, although this remains a goal for the future.
With this thesis I also introduce guidelines for implementing a new high-level data structure in the compiler, indicating which modules need extension.
|