191 |
Application of kinetic data structures to problems in computational geometry. Tsima, Alexandra 29 August 2008 (has links)
Kinetic data structures (KDSs) are a framework for designing and analyzing algorithms for geometric objects (line segments, polygons, disks, etc.) in motion. Our goal is to maintain an attribute of a set of moving objects, for example its convex hull or closest pair. The attribute is maintained through a set of conditions that guarantee the validity of the structure at every moment; this set changes over time because of the motion. The conditions are stored in a queue, ordered chronologically. Whenever the attribute of interest changes, we update both the structure and the queue.
The first chapter is an introduction to KDSs. We present their basic notions and ideas, such as the configuration function, certificates, and critical events, and we discuss their measures of performance.
In the second chapter we deal with binary space partitions (BSPs), first in a static and then in a kinetic setting. Specifically, we present three algorithms for maintaining the BSP of a set of moving segments in the plane. The first known algorithm proposed for efficiently maintaining the BSP of a set S of non-intersecting segments in the plane within the KDS framework constructs a BSP for S with the segments considered stationary and then maintains it as they move. The second algorithm is essentially an extension of the first: it addresses the same problem but for intersecting segments, which changes both the set of certificates and the ways in which the BSP can change. The third algorithm uses a different technique for constructing and maintaining the BSP of S, improving on the first.
In the third chapter we deal with maintaining the Voronoi diagram (VD) of a set of moving, possibly intersecting disks in the plane, and a compact Voronoi-like diagram of a set of non-intersecting convex polygons in the plane (the compact VD is dual to the VD, except that its size is a function of the number of polygons rather than the number of vertices). In both cases the problem reduces to maintaining the dual of the VD, the Delaunay triangulation (DT). Maintenance of the DT rests on the fact that a set of local conditions (InCircle tests) certifies the global correctness of the structure, and local repairs always suffice. Thus, as the objects move, we have a valid DT, and consequently a valid VD, at every moment.
Finally, we describe a KDS for detecting collisions between two simple polygons in motion. The algorithm maintains a subdivision of the free space between the polygons, called an external relative geodesic triangulation, which certifies their disjointness.
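The certificate-and-event-queue mechanism described above can be illustrated on the simplest kinetic attribute: the sorted order of points moving on a line. The sketch below is our own generic illustration of the KDS idea under the assumption of linear motion; it is not one of the BSP or Voronoi algorithms of the thesis, and all names are ours. Certificates assert that adjacent points in the current order have not swapped; each certificate's failure time is an event in a priority queue, and when an event fires we swap the pair and reschedule only the neighbouring certificates.

```python
import heapq
from dataclasses import dataclass

@dataclass
class Point:
    x0: float  # position at t = 0
    v: float   # constant velocity: x(t) = x0 + v * t

def pos(p, t):
    return p.x0 + p.v * t

def kinetic_sort(points, t_end):
    """Maintain the sorted order of linearly moving points from t = 0
    to t_end, processing certificate failures (swap events) in time
    order. Returns the order (indices into `points`) at t_end."""
    order = sorted(range(len(points)), key=lambda i: pos(points[i], 0))
    events = []  # heap of (failure_time, slot, left_index, right_index)

    def schedule(slot, now):
        # Certificate: order[slot] stays left of order[slot + 1].
        if slot < 0 or slot + 1 >= len(order):
            return
        a, b = points[order[slot]], points[order[slot + 1]]
        if a.v > b.v:  # left point is faster, so the certificate fails
            t = (b.x0 - a.x0) / (a.v - b.v)
            if t > now:
                heapq.heappush(events, (t, slot, order[slot], order[slot + 1]))

    for s in range(len(order) - 1):
        schedule(s, 0.0)

    while events:
        t, slot, i, j = heapq.heappop(events)
        if t > t_end:
            break
        # Skip stale events: the pair may no longer occupy these slots.
        if slot + 1 >= len(order) or order[slot] != i or order[slot + 1] != j:
            continue
        order[slot], order[slot + 1] = order[slot + 1], order[slot]
        # Only certificates touching the swapped pair can change.
        for s in (slot - 1, slot, slot + 1):
            schedule(s, t)
    return order
```

The key KDS properties are visible here: at every moment the certificates prove the maintained attribute correct, and each event triggers only local repairs to the structure and the queue.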
|
192 |
Space-Efficient Data Structures in the Word-RAM and Bitprobe Models. Nicholson, Patrick 06 August 2013 (has links)
This thesis studies data structures in the word-RAM and bitprobe models, with an emphasis on space efficiency. In the word-RAM model of computation the space cost of a data structure is measured in terms of the number of w-bit words stored in memory, and the cost of answering a query is measured in terms of the number of read, write, and arithmetic operations that must be performed. In the bitprobe model, like the word-RAM model, the space cost is measured in terms of the number of bits stored in memory, but the query cost is measured solely in terms of the number of bit accesses, or probes, that are performed.
First, we examine the problem of succinctly representing a partially ordered set, or poset, in the word-RAM model with word size
Theta(lg n) bits. A succinct representation of a combinatorial object is one that occupies space matching the information theoretic lower bound to within lower order terms. We show how to represent a poset on n vertices using a data structure that occupies n^2/4 + o(n^2) bits, and can answer precedence (i.e., less-than) queries in
constant time. Since the transitive closure of a directed acyclic graph is a poset, this implies that we can support reachability
queries on an arbitrary directed graph in the same space bound. As far as we are aware, this is the first representation of an arbitrary directed graph that supports reachability queries in constant time,
and stores less than n choose 2 bits. We also consider several additional query operations.
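For contrast with the succinct n^2/4 + o(n^2)-bit structure, a naive baseline stores the entire transitive closure as an n-by-n bit matrix and answers a precedence query with a single bit lookup. The sketch below is our own illustrative baseline, not the thesis's structure; it uses Theta(n^2) bits and a Floyd-Warshall-style closure for preprocessing.

```python
class NaivePoset:
    """Naive poset representation: the full transitive closure as an
    n x n bit matrix (Theta(n^2) bits, versus the n^2/4 + o(n^2) of
    the succinct structure described above). Precedence queries cost
    one bit lookup."""
    def __init__(self, n, edges):
        self.n = n
        # bits[i] is a Python int used as a bitset of i's successors
        self.bits = [0] * n
        for u, v in edges:
            self.bits[u] |= 1 << v
        # Closure: if u -> k and k -> v, then u -> v.
        for k in range(n):
            for u in range(n):
                if (self.bits[u] >> k) & 1:
                    self.bits[u] |= self.bits[k]

    def precedes(self, u, v):
        # Constant-time precedence (less-than) query: one bit read.
        return bool((self.bits[u] >> v) & 1)
```

Because the transitive closure of a DAG is a poset, the same matrix also answers reachability queries on the underlying DAG, which is exactly the connection the abstract draws.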
Second, we examine the problem of supporting range queries on strings
of n characters (or, equivalently, arrays of
n elements) in the word-RAM model with word size Theta(lg n) bits. We focus on the specific problem of answering range majority queries: i.e., given a range, report the
character that is the majority among those in the range, if one exists. We show that these queries can be supported in constant time
using a linear space (in words) data structure. We generalize this
result in several directions, considering various frequency thresholds, geometric variants of the problem, and dynamism. These
results are in stark contrast to recent work on the similar range mode problem, in which the query asks for the mode (i.e., most frequent) character in a given range: among linear space data structures, the current best for range mode take soft-Oh(n^(1/2)) time per query.
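As a point of reference for the constant-time result, a naive baseline answers a range majority query by scanning the range with the Boyer-Moore majority vote. The sketch below is ours, not the thesis's linear-space structure; it takes time linear in the range length per query and uses no preprocessing.

```python
def range_majority(s, i, j):
    """Report the majority character of s[i..j] (inclusive), or None if
    no character occupies more than half the range. One Boyer-Moore
    majority-vote pass selects a candidate, then one verification pass
    confirms it really is a majority."""
    candidate, count = None, 0
    for k in range(i, j + 1):          # vote: a true majority survives
        if count == 0:
            candidate, count = s[k], 1
        elif s[k] == candidate:
            count += 1
        else:
            count -= 1
    occurrences = sum(1 for k in range(i, j + 1) if s[k] == candidate)
    return candidate if 2 * occurrences > (j - i + 1) else None
```

The verification pass is essential: the vote alone only guarantees that the true majority, if one exists, is the surviving candidate.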
Third, we examine the deterministic membership (or dictionary) problem in the bitprobe model. This problem asks us to store a set of n elements drawn from a universe [1,u] such that membership queries
can be always answered in t bit probes. We present several new fully explicit results for this problem, in particular for the case
when n = 2, answering an open problem posed by Radhakrishnan, Shah, and Shannigrahi [ESA 2010]. We also present a general strategy for the membership problem that can be used to solve many related fundamental problems, such as rank, counting, and emptiness queries.
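The bitprobe tradeoff can be anchored by the trivial scheme: store the characteristic bit vector of the set, using u bits of space, and answer any membership query with exactly one probe. The schemes studied in the thesis trade space far below u bits against t > 1 probes. A minimal sketch of the trivial end of the tradeoff (indices taken from [0, u) for convenience):

```python
class CharacteristicVector:
    """Trivial bitprobe dictionary: u bits of space, one bit probe per
    membership query. Illustrative baseline only -- the interesting
    schemes use far fewer bits at the cost of more probes."""
    def __init__(self, u, elements):
        self.bits = bytearray((u + 7) // 8)
        for x in elements:               # elements drawn from [0, u)
            self.bits[x // 8] |= 1 << (x % 8)

    def member(self, x):
        # One probe: read a single bit of the stored vector.
        return bool(self.bits[x // 8] & (1 << (x % 8)))
```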
Finally, we conclude with a list of open problems and avenues for future work.
|
193 |
Data views for a programming environment. Robson, R. January 1988 (has links)
A data structure editor is presented for use in an integrated, fragment-based programming environment. This editor employs high resolution computer graphics to present the user with an iconic representation of the internal storage of a running program. / The editor allows the creation, modification, and deletion of data structures. These abilities allow the user to quickly sketch data structures with which to test incomplete program fragments, alleviating the need for driver routines. / To keep the user cognizant of events inside his program, a technique for automated display management is presented allowing the user to keep the most important objects in the viewport at all times. A history facility permits the user to see the former values of all variables. / Execution controls are provided allowing the user to control the scope and speed of execution, manipulate frames on the run-time stack, set breakpoints, and profile the executing algorithm.
|
194 |
Debugging and repair of description logic ontologies. Moodley, Kodylan. January 2010 (has links)
In logic-based Knowledge Representation and Reasoning (KRR), ontologies are used to
represent knowledge about a particular domain of interest in a precise way. The building
blocks of ontologies include concepts, relations and objects. These can be combined to
form logical sentences which explicitly describe the domain. With this explicit knowledge
one can perform reasoning to derive knowledge that is implicit in the ontology. Description
Logics (DLs) are a group of knowledge representation languages with such capabilities that
are suitable to represent ontologies. The process of building ontologies has been greatly
simplified with the advent of graphical ontology editors such as SWOOP, Protégé and
OntoStudio. The result of this is that there are a growing number of ontology engineers
attempting to build and develop ontologies. It is frequently the case that errors are
introduced while constructing the ontology, resulting in undesirable pieces of implicit
knowledge that follow from it. As such, there is a need to extend current
ontology editors with tool support to aid these ontology engineers in correctly designing
and debugging their ontologies. Errors such as unsatisfiable concepts and inconsistent
ontologies frequently occur during ontology construction. Ontology Debugging and Repair
is concerned with helping the ontology developer to eliminate these errors from the ontology.
Much emphasis, in current tools, has been placed on giving explanations as to why these
errors occur in the ontology. Less emphasis has been placed on using this information to
suggest efficient ways to eliminate the errors. Furthermore, these tools focus mainly on the
errors of unsatisfiable concepts and inconsistent ontologies. In this dissertation we fill an
important gap in the area by contributing an alternative approach to ontology debugging
and repair for the more general error of a list of unwanted sentences. Errors such as
unsatisfiable concepts and inconsistent ontologies can be represented as unwanted sentences
in the ontology. Our approach not only considers the explanation of the unwanted sentences
but also the identification of repair strategies to eliminate these unwanted sentences from
the ontology. / Thesis (M.Sc.)-University of KwaZulu-Natal, Westville, 2010.
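A standard way in the literature to turn explanations into repair strategies, in the spirit of (though not identical to) the approach contributed here, treats each justification of an unwanted sentence (a minimal set of axioms entailing it) as a set that must be "hit": a repair has to remove at least one axiom from every justification. The brute-force sketch below is ours, with hypothetical axiom names, and is an illustration rather than an efficient algorithm.

```python
from itertools import chain, combinations

def minimal_repairs(justifications):
    """Given the justifications (minimal axiom sets, each entailing an
    unwanted sentence), return all minimum-size hitting sets: the
    smallest axiom sets whose removal eliminates every justification.
    Brute force over subsets, for illustration only."""
    axioms = sorted(set(chain.from_iterable(justifications)))
    hitting_sets = []
    for r in range(1, len(axioms) + 1):
        for cand in combinations(axioms, r):
            # cand is a repair iff it intersects every justification
            if all(set(cand) & j for j in justifications):
                hitting_sets.append(set(cand))
        if hitting_sets:
            break  # all hitting sets of minimum size have been found
    return hitting_sets
```

For example, with justifications {A1, A2} and {A2, A3}, removing A2 alone eliminates both, so {A2} is the single minimal repair.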
|
195 |
Preserving Privacy in Transparency Logging. Pulls, Tobias January 2015 (has links)
The subject of this dissertation is the construction of privacy-enhancing technologies (PETs) for transparency logging, a technology at the intersection of privacy, transparency, and accountability. Transparency logging facilitates the transportation of data from service providers to users of services and is therefore a key enabler for ex-post transparency-enhancing tools (TETs). Ex-post transparency provides information to users about how their personal data have been processed by service providers, and is a prerequisite for accountability: you cannot hold a controller accountable for what is unknown. We present three generations of PETs for transparency logging to which we contributed.
We start with early work that defined the setting as a foundation and build upon it to increase both the privacy protections and the utility of the data sent through transparency logging. Our contributions include the first provably secure privacy-preserving transparency logging scheme and a forward-secure append-only persistent authenticated data structure tailored to the transparency logging setting. Applications of our work range from notifications and deriving data disclosures for the Data Track tool (an ex-post TET) to secure evidence storage.
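The append-only idea underlying authenticated logging can be sketched with a simple hash chain: each entry's digest commits to its predecessor, so given a trusted head digest, truncation or modification of earlier entries is detectable. This toy sketch is ours and illustrates only the append-only property; it is not the provably secure, forward-secure scheme contributed by the thesis.

```python
import hashlib

class HashChainLog:
    """Minimal append-only log: each entry's digest covers the previous
    digest, so any change to past entries invalidates the head digest.
    A toy illustration of append-only authenticated structures."""
    def __init__(self):
        self.entries = []            # list of (message, digest) pairs
        self.head = b"\x00" * 32     # digest representing the empty log

    def append(self, message: bytes):
        self.head = hashlib.sha256(self.head + message).digest()
        self.entries.append((message, self.head))
        return self.head

    def verify(self, trusted_head: bytes) -> bool:
        """Recompute the chain and compare against a trusted head."""
        h = b"\x00" * 32
        for message, digest in self.entries:
            h = hashlib.sha256(h + message).digest()
            if h != digest:
                return False
        return h == trusted_head
```

A user who remembers only the latest head can later detect any rewriting of the history that produced it.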
|
196 |
Real-time Simulation and Rendering of Large-scale Crowd Motion. Li, Bo January 2013 (has links)
Crowd simulations are attracting increasing attention from both academia and industry and are used across a vast range of applications, from scientific demonstrations to video games and films. As such, the demand for greater realism, both in their appearance and in the number of agents involved, keeps growing. A successful crowd simulation must simulate the behaviour of large numbers of pedestrians as realistically as possible in real time. This thesis looks at two important aspects of crowd simulation and real-time animation.
First, this thesis introduces a new data structure called Extended Oriented Bounding Box (EOBB) and related methods for fast collision detection and obstacle avoidance in the simulation of crowd motion in virtual environments. The EOBB is extended to contain a region whose size is defined based on the instantaneous velocity vector, thus allowing a bounding volume representation of both geometry and motion. Such a representation is also found to be highly effective in motion planning using the location of vertices of bounding boxes in the immediate neighbourhood of the current crowd member.
Second, we present a detailed analysis of the effectiveness of spatial subdivision data structures, specifically for large-scale crowd simulation. In large-scale crowd simulation, the computational cost of collision detection is substantial, and many studies use spatial partitioning data structures to reduce it, describing their individual strengths and weaknesses, but few compare multiple methods in an effort to identify the best solution. This thesis attempts to address this by implementing and comparing four popular spatial partitioning data structures together with the EOBB.
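The EOBB's velocity-extended region can be illustrated with a simplified, axis-aligned analogue: grow each agent's bounding box along its instantaneous velocity, and use box overlap as a conservative broad-phase test. The sketch below is ours, under our own simplifying assumptions (axis-aligned boxes and constant velocity over the step); the thesis's EOBB is oriented, not axis-aligned.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    x: float      # position
    y: float
    vx: float     # instantaneous velocity
    vy: float
    half: float   # half-width of the agent's bounding box

def swept_box(a, dt):
    """Axis-aligned box covering agent a's geometry plus its motion over
    the next dt seconds: a bounding volume of both geometry and motion,
    in the spirit of the velocity-extended region described above."""
    x2, y2 = a.x + a.vx * dt, a.y + a.vy * dt
    return (min(a.x, x2) - a.half, min(a.y, y2) - a.half,
            max(a.x, x2) + a.half, max(a.y, y2) + a.half)

def may_collide(a, b, dt):
    """Conservative broad-phase test: True if the swept boxes overlap.
    A False result safely rules out a collision during the step."""
    ax0, ay0, ax1, ay1 = swept_box(a, dt)
    bx0, by0, bx1, by1 = swept_box(b, dt)
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1
```

Pairs that pass this cheap test can then be handed to an exact narrow-phase check, which is the usual division of labour in crowd collision detection.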
|
197 |
Development and implementation of a multi-block clustering method: application to OQAI data. Ouattara, Mory 18 March 2014 (has links) (PDF)
The multiplication of information sources and the development of new technologies have produced complex databases, often characterized by a number of variables that is high relative to the number of individuals. In particular, in environmental studies of indoor air pollution, information on individuals is collected across several themes, yielding high-dimensional data with a multi-block structure defined by those themes. The goal of this work was to develop clustering methods suited to such high-dimensional data sets structured in blocks of variables. The first part of this work surveys clustering methods, both in general and in the high-dimensional case. In the second part, three new approaches for clustering individuals described by block-structured variables are proposed: 2S-SOM (Soft Subspace-Self Organizing Map), a subspace-clustering approach based on a modification of the cost function of the self-organizing map algorithm through a double system of adaptive weights defined on the blocks and on the variables; and the CSOM (Consensus SOM) and Rv-CSOM approaches, which seek a consensus of self-organizing maps based on a system of weights determined from the initial partitions. Finally, the third part presents an application of these methods to the real data set of the national housing survey (campagne nationale logement, CNL) conducted by the OQAI, in order to define a typology of dwellings with respect to the themes: indoor air quality, building structure, household composition, and occupants' habits.
|
198 |
Algorithmic Contributions to Computational Molecular Biology. Vialette, Stéphane 01 June 2010 (has links) (PDF)
We present here our research results since 2001 on the algorithmics of linear graphs, motif search (with or without topology) in graphs, and comparative genomics.
|
199 |
Analyses of the Gauss algorithm, with applications to the analysis of the LLL algorithm. Vera, Antonio 17 July 2009 (has links) (PDF)
This thesis is devoted to the probabilistic analysis of algorithms for reducing Euclidean lattices. A Euclidean lattice is the set of integer linear combinations of a basis (b_1, ..., b_n) \subset R^n. Reducing a lattice consists in finding, from a basis given as input, a basis formed of vectors that are fairly short and fairly orthogonal. The celebrated LLL algorithm solves this problem efficiently in arbitrary dimension. It is widely used, but poorly understood. We concentrate on its analysis in the case n = 2, where LLL becomes the Gauss algorithm, since this instance is a basic building block for the case n >= 3. We analyze the Gauss algorithm precisely, both from the point of view of its execution (number of iterations, bit complexity, "additive" costs) and of the geometry of the output basis (Hermite defect, first minimum, and orthogonalized second minimum). We work in a very general probabilistic model, which allows us to study easy instances as well as hard ones. This model also lets us study the transition towards the Euclid algorithm, which corresponds to the case where the vectors of the input basis are collinear. We use dynamical methods: the algorithms are viewed as dynamical systems, and the relevant generating functions are expressed in terms of the transfer operator. These very precise results in dimension 2 are a first step towards the analysis of the LLL algorithm in the general case.
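The Gauss algorithm analyzed here is short enough to state in full: repeatedly subtract from the longer basis vector the nearest-integer multiple of the shorter one, then swap, much as the Euclid algorithm does on integers. A sketch for two-dimensional integer bases (ours, for illustration; it covers the standard formulation, not the cost measures analyzed in the thesis):

```python
def gauss_reduce(u, v):
    """Gauss reduction of a 2D lattice basis (u, v) with integer
    coordinates: subtract from the longer vector the nearest-integer
    multiple of the shorter one, swap, and repeat until no progress.
    The output basis contains a shortest nonzero lattice vector."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]

    if dot(u, u) < dot(v, v):
        u, v = v, u                      # keep u the longer vector
    while True:
        # m = nearest integer to <u, v> / <v, v>, as in Euclid's division
        m = round(dot(u, v) / dot(v, v))
        u = (u[0] - m * v[0], u[1] - m * v[1])
        if dot(u, u) >= dot(v, v):
            return v, u                  # reduced: shorter vector first
        u, v = v, u                      # progress was made; continue
```

For example, the basis (3, 4), (1, 1) spans Z^2, and the algorithm reduces it to a pair of unit vectors; the absolute determinant (the lattice volume) is preserved throughout.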
|
200 |
Study of seismic image restoration algorithms based on nonlinear shape optimization, with application to sedimentary reconstruction. Gilardet, Mathieu 19 December 2013 (links) (PDF)
We present a new method for the restoration of seismic images. As observed, a seismic image is the result of an initial deposition system that has been transformed by a series of successive geological deformations (folding, fault slip, etc.) occurring over a long period of time. The goal of seismic restoration is to invert these deformations so as to produce an image that represents the geological deposition system as it was in an earlier state. Classically, this process is used to test the consistency of the interpretation hypotheses that geophysicists formulate on the initial images. Our contribution is a tool that can generate restored images quickly, helping geophysicists recognize and identify geological features that may have been strongly deformed and are therefore hard to identify in the original observed image. The tool thus assists geophysicists in formulating interpretation hypotheses for seismic images. The approach we introduce is based on a minimization process that expresses the geological deformations as geometric constraints. We use an iterative Gauss-Newton approach, which converges quickly, to solve the system. In the second part of our work, we show various results obtained in concrete cases, in order to illustrate the seismic-image restoration process on real data and to show how the restored version can be used in a geological-interpretation setting.
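The Gauss-Newton iteration used for the constraint minimization can be sketched generically: linearize the residual around the current estimate, solve the resulting normal equations, and update. The example below is our own toy one-parameter least-squares fit, a schematic stand-in for the solver, not the actual seismic-restoration formulation.

```python
import math

def gauss_newton_1d(residual, derivative, x0, iters=25, tol=1e-12):
    """Gauss-Newton for a single parameter x minimizing sum_i r_i(x)^2.
    With one parameter, the normal equations J^T J dx = -J^T r reduce
    to a scalar division."""
    x = float(x0)
    for _ in range(iters):
        r = residual(x)           # list of residuals r_i(x)
        J = derivative(x)         # list of derivatives dr_i/dx
        dx = -sum(ri * ji for ri, ji in zip(r, J)) / sum(ji * ji for ji in J)
        x += dx
        if abs(dx) < tol:         # converged: the update is negligible
            break
    return x

# Toy example: recover a in y = exp(a * t) from exact samples (a = 0.7).
ts = [i / 4 for i in range(5)]                # t = 0, 0.25, ..., 1
ys = [math.exp(0.7 * t) for t in ts]
res = lambda a: [math.exp(a * t) - y for t, y in zip(ts, ys)]
der = lambda a: [t * math.exp(a * t) for t in ts]
a_hat = gauss_newton_1d(res, der, 0.0)
```

Near the solution the residual is nearly linear in the parameter, which is what gives Gauss-Newton the fast convergence mentioned above.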
|