291

Verification of sequential and concurrent libraries

Deshmukh, Jyotirmoy Vinay 02 August 2011 (has links)
The goal of this dissertation is to present new and improved techniques for fully automatic verification of sequential and concurrent software libraries. In most cases, automatic software verification is plagued by undecidability, while in many others it suffers from prohibitively high computational complexity. Model checking -- a highly successful technique for verifying finite-state hardware circuits against logical specifications -- has been less widely adopted for software, as software verification tends to involve reasoning about potentially infinite state spaces. Two of the biggest culprits that make software model checking hard are heap-allocated data structures and concurrency.

In the first part of this dissertation, we study the problem of verifying shape properties of sequential data structure libraries. Such libraries are implemented as collections of methods that manipulate the underlying data structure. Examples of such methods include: methods to insert, delete, and update data values of nodes in linked lists, binary trees, and directed acyclic graphs; methods to reverse linked lists; and methods to rotate balanced trees. Well-written methods are accompanied by documentation that specifies their observational behavior in terms of pre/post-conditions. A pre-condition φ for a method M characterizes the state of a data structure before the method acts on it, and the post-condition ψ characterizes the state of the data structure after the method has terminated. In this sense, we can view the method as a function that operates on an input data structure, producing an output data structure. Examples of such pre/post-conditions include shape properties such as acyclicity, sortedness, tree-ness, and reachability of particular data or pointer values, as well as data-structure-specific properties such as "no red node has a red child" and "there is no node with data value 'a' in the data structure". Moreover, methods are often expected not to violate certain safety properties, such as the absence of dangling pointers, null pointer dereferences, and memory leaks. We treat such specifications as implicit, and say that a method is incorrect if it violates them. We model data structures as directed graphs, and use the two terms interchangeably.

Verifying the correctness of methods operating on graphs is an instance of the parameterized verification problem: for every input graph that satisfies φ, we wish to ensure that the corresponding output graph satisfies ψ. Control structures such as loops and recursion allow an arbitrary method to simulate a Turing machine; hence, the parameterized verification problem for arbitrary methods is undecidable. One of the main contributions of this dissertation is identifying mathematical conditions on a programming-language fragment for which parameterized verification is not only decidable, but also efficient from a complexity perspective. The decidable fragment we consider can be broadly subdivided into two categories: iterative methods, which use loops as a control-flow construct to traverse a data structure, and recursive methods, which use recursion to traverse it.
We show that if an iterative method operating on a directed graph is guaranteed to terminate and performs a number of destructive updates bounded by a constant (i.e., O(1)), then the correctness of the method can be checked in time polynomial in the size of the method and its specifications. Further, we provide a well-defined syntactic fragment for recursive methods operating on tree-like data structures, which ensures that any method in this fragment can be verified in time polynomial in the size of the method and its specifications. Our approach draws on the theory of tree automata: we show that parameterized correctness can be reduced to the emptiness of finite-state, nondeterministic tree automata that operate on infinite trees, and we then leverage efficient algorithms for checking the emptiness of such automata to obtain a tractable verification framework. Our prototype tool bears out the low theoretical complexity of our technique by efficiently verifying common methods that operate on data structures.

In the second part of the dissertation, we tackle another obstacle to tractable software verification: concurrency. In particular, we apply a static analysis technique based on interprocedural dataflow analysis to predict and document deadlocks in concurrent libraries, and to analyze deadlocks in clients that use such libraries. The deadlocks we focus on result from circular dependencies in the acquisition of shared resources (such as locks). Well-written applications that use several locks implicitly assume a certain partial order in which locks are acquired by threads; a cycle in the lock acquisition order indicates a possible deadlock within the application. Methods in object-oriented concurrent libraries often encapsulate internal synchronization details, and as a result of this information hiding, clients may cause thread-safety violations by invoking methods in a manner that violates the partial ordering between lock acquisitions that is implicit within the library. Given a concurrent library, we present a technique for inferring interface contracts that specify permissible concurrent method calls and patterns of aliasing among method arguments that guarantee deadlock-free execution of the library's methods. The contracts also help client developers by documenting required assumptions about the library methods; alternatively, they can be statically enforced in client code to detect potential deadlocks. Our technique combines static analysis with a symbolic encoding for tracking lock dependencies, allowing us to synthesize contracts using a satisfiability modulo theories (SMT) solver. Additionally, we investigate extensions of our technique to reason about deadlocks in libraries that employ signalling primitives such as wait/notify for cooperative synchronization. We demonstrate its scalability and efficiency with a prototype tool that analyzed over a million lines of code from some widely used open-source Java libraries in less than 50 minutes. Furthermore, the contracts inferred by our approach have been able to pinpoint real bugs, i.e., deadlocks reported by users of these libraries.
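To make the lock-order criterion concrete, here is a minimal sketch in Python of a lock-order graph with cycle detection. It illustrates only the "cycle in the acquisition order implies a possible deadlock" idea from the abstract; it is not the dissertation's SMT-based contract inference, and all names in it are illustrative.

```python
from collections import defaultdict

class LockOrderGraph:
    """Edge a -> b means: some thread acquired lock b while holding lock a."""

    def __init__(self):
        self.edges = defaultdict(set)

    def record_acquisition(self, held_locks, requested_lock):
        # Record the ordering constraints implied by this acquisition.
        for held in held_locks:
            if held != requested_lock:
                self.edges[held].add(requested_lock)

    def has_cycle(self):
        # Three-colour DFS; a back edge to a GRAY node closes a cycle,
        # i.e. a circular lock dependency.
        WHITE, GRAY, BLACK = 0, 1, 2
        color = defaultdict(int)

        def visit(node):
            color[node] = GRAY
            for succ in self.edges[node]:
                if color[succ] == GRAY:
                    return True
                if color[succ] == WHITE and visit(succ):
                    return True
            color[node] = BLACK
            return False

        return any(color[n] == WHITE and visit(n) for n in list(self.edges))

g = LockOrderGraph()
g.record_acquisition({"A"}, "B")   # thread 1: holds A, requests B
g.record_acquisition({"B"}, "A")   # thread 2: holds B, requests A
print(g.has_cycle())               # True: a potential deadlock
```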
292

Aggregation and knowledge extraction in inter-vehicle networks

Zekri, Dorsaf 17 January 2013 (has links) (PDF)
The work in this thesis deals with data management in vehicular ad hoc networks (VANETs). These networks consist of a set of mobile objects that communicate with one another over wireless networks such as IEEE 802.11, Bluetooth, or Ultra Wide Band (UWB). With such communication mechanisms, a vehicle can receive information from nearby neighbours or from more distant ones, thanks to multi-hop techniques that use intermediate objects as relays. Much information can be exchanged in the context of VANETs, notably to alert drivers when an event occurs (an accident, emergency braking, a vehicle leaving a parking space and wishing to inform others, and so on). As they travel, vehicles are progressively "contaminated" by the information transmitted by others. In this work, we exploit the data in a way that differs substantially from existing work, which uses the exchanged data to produce alerts for drivers; once used, those data become obsolete and are destroyed. Here, we seek to dynamically generate, from the data collected by vehicles along their routes, a summary (or aggregate) that provides information to drivers, even when no communicating vehicle is nearby. To this end, we first propose a spatio-temporal aggregation structure that allows a vehicle to summarize all the events it has observed. We then define a protocol for exchanging summaries between vehicles without any supporting infrastructure, allowing a vehicle to improve its local knowledge base through exchanges with its neighbours. Finally, we define strategies for exploiting the summaries to help the driver make decisions. We validated all of our proposals using the VESPA simulator, extended to take summaries into account. The simulation results show that our approach effectively helps drivers make good decisions, without resorting to a centralizing infrastructure.
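A minimal sketch of the spatio-temporal aggregation idea, assuming a simple grid-and-time-bucket discretization; the thesis's actual summary structure and exchange protocol are richer, and the cell size, bucket length, and daily recurrence below are assumptions of this sketch only.

```python
from collections import Counter

CELL_SIZE = 100.0       # metres per grid cell (illustrative choice)
BUCKET_SECONDS = 300    # five-minute time buckets (illustrative choice)

def key(x, y, t, event_type):
    # Discretize an observation in space and time-of-day; bucketing by
    # time of day assumes events (e.g. congestion) recur daily.
    return (int(x // CELL_SIZE), int(y // CELL_SIZE),
            int((t % 86400) // BUCKET_SECONDS), event_type)

class Summary:
    def __init__(self):
        self.counts = Counter()

    def observe(self, x, y, t, event_type):
        # Record one observed event (accident, emergency braking, ...).
        self.counts[key(x, y, t, event_type)] += 1

    def merge(self, other):
        # Exchange step: fold a neighbour's summary into the local one,
        # improving the local knowledge base without any infrastructure.
        self.counts.update(other.counts)

    def frequency(self, x, y, t, event_type):
        # Decision aid: how often has this event been seen here, at this
        # time of day, across everything the vehicle has learned so far?
        return self.counts[key(x, y, t, event_type)]
```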
293

Probabilistic methods for reactive motion planning

Jaillet, Léonard 19 December 2005 (has links) (PDF)
Despite the clear success of motion-planning techniques over the last two decades, their adaptation to scenes containing both static and mobile obstacles has so far remained limited. One reason is the cost of updating the precomputed data structures that capture the connectivity of the free space. Our main contribution is a new planner capable of handling these partially dynamic environments composed of both static and mobile obstacles.
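To illustrate the update cost the abstract refers to, here is a minimal sketch of lazy edge revalidation over a precomputed roadmap, assuming a black-box collision checker against the current obstacle positions. This is a generic illustration, not the planner proposed in the thesis.

```python
from heapq import heappush, heappop

def plan(roadmap, start, goal, edge_cost, edge_is_free):
    """Dijkstra over a precomputed roadmap, validating each edge against
    the mobile obstacles only when the search actually relaxes it."""
    dist = {start: 0.0}
    parent = {}
    frontier = [(0.0, start)]
    while frontier:
        d, u = heappop(frontier)
        if u == goal:                      # reconstruct and return the path
            path = [u]
            while path[-1] in parent:
                path.append(parent[path[-1]])
            return path[::-1]
        if d > dist.get(u, float("inf")):  # stale queue entry
            continue
        for v in roadmap[u]:
            if not edge_is_free(u, v):     # lazy collision check
                continue
            nd = d + edge_cost(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heappush(frontier, (nd, v))
    return None  # mobile obstacles currently disconnect start from goal

roadmap = {"s": ["a"], "a": ["s", "g"], "g": ["a"]}
print(plan(roadmap, "s", "g",
           edge_cost=lambda u, v: 1.0,
           edge_is_free=lambda u, v: True))   # -> ['s', 'a', 'g']
```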
294

Effective techniques for understanding and improving data structure usage

Jung, Changhee 20 September 2013 (has links)
Turing Award winner Niklaus Wirth famously noted, `Algorithms + Data Structures = Programs', and it follows that data structures should be carefully considered for effective application development. In fact, data structures are the main focus of program understanding, performance engineering, bug detection, and security enhancement, etc. Our research is aimed at providing effective techniques for analyzing and improving data structure usage in fundamentally new approaches: First, detecting data structures; identifying what data structures are used within an application is a critical step toward application understanding and performance engineering. Second, selecting efficient data structures; analyzing data structures' behavior can recognize improper use of data structures and suggest alternative data structures better suited for the current situation where the application runs. Third, detecting memory leaks for data structures; tracking data accesses with little overhead and their careful analysis can enable practical and accurate memory leak detection. Finally, offloading time-consuming data structure operations; By leveraging a dedicated helper thread that executes the operations on the behalf of the application thread, we can improve the overall performance of the application.
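A minimal sketch of the "selecting efficient data structures" idea: wrap a container, count its operation mix at runtime, and suggest a better-suited alternative. The counters and the threshold are illustrative assumptions, not the thesis's actual models.

```python
class ProfiledList(list):
    """A list that records its operation mix to guide structure selection."""

    def __init__(self, *args):
        super().__init__(*args)
        self.stats = {"contains": 0, "append": 0}

    def __contains__(self, item):
        self.stats["contains"] += 1
        return super().__contains__(item)

    def append(self, item):
        self.stats["append"] += 1
        super().append(item)

    def suggestion(self):
        # Dominated by membership tests -> a set (or dict) turns O(n)
        # scans into O(1) expected-time lookups. Threshold is arbitrary.
        if self.stats["contains"] > 10 * max(1, self.stats["append"]):
            return "set"
        return "list"

xs = ProfiledList()
for i in range(100):
    xs.append(i)
for i in range(5000):
    _ = i in xs            # membership-heavy workload
print(xs.suggestion())     # -> "set"
```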
295

Adaptive Bounding Volume Hierarchies for Efficient Collision Queries

Larsson, Thomas January 2009 (has links)
The need for efficient interference detection frequently arises in computer graphics, robotics, virtual prototyping, surgery simulation, computer games, and visualization. To prevent bodies passing directly through each other, the simulation system must be able to track touching or intersecting geometric primitives. In interactive simulations, in which millions of geometric primitives may be involved, highly efficient collision detection algorithms are necessary. For these reasons, new adaptive collision detection algorithms for rigid and different types of deformable polygon meshes are proposed in this thesis. The solutions are based on adaptive bounding volume hierarchies. For deformable body simulation, different refit and reconstruction schemes to efficiently update the hierarchies as the models deform are presented. These methods permit the models to change their entire shape at every time step of the simulation. The types of deformable models considered are (i) polygon meshes that are deformed by arbitrary vertex repositioning, but with the mesh topology preserved, (ii) models deformed by linear morphing of a fixed number of reference meshes, and (iii) models undergoing completely unstructured relative motion among the geometric primitives. For rigid body simulation, a novel type of bounding volume, the slab cut ball, is introduced, which improves the culling efficiency of the data structure significantly at a low storage cost. Furthermore, a solution for even tighter fitting heterogeneous hierarchies is outlined, including novel intersection tests between spheres and boxes as well as ellipsoids and boxes. The results from the practical experiments indicate that significant speedups can be achieved by using these new methods for collision queries as well as for ray shooting in complex deforming scenes.
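A minimal sketch of bottom-up AABB refitting after a deformation step, assuming a binary hierarchy whose leaves reference triangles of the mesh. Refitting keeps the tree topology and only recomputes bounds, one of the update schemes the abstract contrasts with reconstruction; the slab cut ball and the heterogeneous hierarchies are not modeled here.

```python
class Node:
    def __init__(self, left=None, right=None, tri=None):
        self.left, self.right, self.tri = left, right, tri
        self.lo = self.hi = None  # AABB corners, set by refit()

def refit(node, vertices):
    if node.tri is not None:  # leaf: bound the triangle's current vertices
        pts = [vertices[i] for i in node.tri]
        node.lo = tuple(min(p[k] for p in pts) for k in range(3))
        node.hi = tuple(max(p[k] for p in pts) for k in range(3))
    else:                     # internal node: union of the children's boxes
        refit(node.left, vertices)
        refit(node.right, vertices)
        node.lo = tuple(map(min, node.left.lo, node.right.lo))
        node.hi = tuple(map(max, node.left.hi, node.right.hi))

def overlap(a, b):
    # AABB test used while traversing two hierarchies for collision.
    return all(a.lo[k] <= b.hi[k] and b.lo[k] <= a.hi[k] for k in range(3))

# After arbitrary vertex repositioning, only refit() runs; no rebuild.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (2, 2, 2), (3, 2, 2), (2, 3, 2)]
root = Node(left=Node(tri=(0, 1, 2)), right=Node(tri=(3, 4, 5)))
refit(root, verts)
print(root.lo, root.hi)  # bounds of the whole deformed mesh
```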
296

Algorithms, words, and random texts

Clément, Julien 12 December 2011 (has links) (PDF)
In this thesis, I examine different aspects of an object that is simple but ubiquitous in computer science: the sequence of symbols (called, depending on the context, a word or a string). The notion of a word lies at the crossroads of fields such as information theory and language theory. Simple as it is, it remains fundamental: at the lowest level it is all we have, since there always comes a point where data must be encoded as symbols that can be stored in memory. The growing amount of data that is made available and can be stored, for example individual genomes or digitized documents, justifies optimizing the algorithms and data structures that manipulate them. Consequently, analysis is needed to guide the choice and design of the programs that handle these data. Average-case analysis is particularly well suited here, since the data reach such variety and volume that it is the typical case, rather than the worst case, that best reflects the complexity. This of course raises the problem of data modeling, which remains very thorny. Indeed, we want two contradictory things: a model as close as possible to the data, one that truly reflects their specificities, but also a model from which results can be derived, that is, from which performance can be predicted (and one quickly understands that the model must therefore remain relatively simple for there to be any hope of handling it!). The methods are most often those of analytic combinatorics and rely on a mathematical object, generating functions, to carry out the analyses.
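As a concrete illustration of the generating-function method (standard textbook material, not a result taken from this thesis):

```latex
% Counting words over a k-letter alphabet with an ordinary generating
% function, then reading off an average.
\[
  W(z) \;=\; \sum_{n \ge 0} k^n z^n \;=\; \frac{1}{1 - kz},
\]
% and, for a fixed pattern w of length p, the expected number of its
% occurrences in a uniform random text of length n is
\[
  \mathbb{E}[X_n] \;=\; \frac{n - p + 1}{k^{p}},
\]
% the kind of closed form that average-case analysis aims to extract.
```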
297

Storage and aggregation for fast analytics systems

Amur, Hrishikesh 13 January 2014 (has links)
Computing in the last decade has been characterized by the rise of data-intensive scalable computing (DISC) systems. In particular, recent years have witnessed a rapid growth in the popularity of fast analytics systems. These systems exemplify a trend where queries that previously involved batch processing (e.g., running a MapReduce job) on a massive amount of data are increasingly expected to be answered in near real-time with low latency. This dissertation addresses the problem that existing designs for various components used in the software stack for DISC systems do not meet the requirements demanded by fast analytics applications. In this work, we focus specifically on two components.

1. Key-value storage: Recent work has focused primarily on supporting reads with high throughput and low latency. However, fast analytics applications require that new data entering the system (e.g., newly crawled web pages, currently trending topics) be quickly made available to queries and analysis codes. This means that along with supporting reads efficiently, these systems must also support writes with high throughput, which current systems fail to do. In the first part of this work, we solve this problem by proposing a new key-value storage system, called the WriteBuffer (WB) Tree, that provides up to 30× higher write performance and similar read performance compared to current high-performance systems.

2. GroupBy-Aggregate: Fast analytics systems require support for fast, incremental aggregation of data with low-latency access to results. Existing techniques are memory-inefficient and do not support incremental aggregation efficiently when aggregate data overflows to disk. In the second part of this dissertation, we propose a new data structure called the Compressed Buffer Tree (CBT) to implement memory-efficient in-memory aggregation. We also show how the WB Tree can be modified to support efficient disk-based aggregation.
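A minimal sketch of the buffering idea behind write-optimized storage and incremental GroupBy-Aggregate: absorb writes in a memory buffer, combine duplicate keys eagerly, and spill sorted runs when the buffer fills. This is generic log-structured batching under assumed parameters, not the WB Tree's or CBT's actual design.

```python
import operator

class AggregatingBuffer:
    def __init__(self, combine, capacity=3):
        self.combine = combine   # e.g. operator.add for counters
        self.capacity = capacity # tiny, to demonstrate a spill below
        self.buffer = {}
        self.runs = []           # stand-in for sorted on-disk runs

    def write(self, key, value):
        if key in self.buffer:   # incremental aggregation in memory
            self.buffer[key] = self.combine(self.buffer[key], value)
        else:
            self.buffer[key] = value
        if len(self.buffer) >= self.capacity:
            self.runs.append(sorted(self.buffer.items()))  # "spill" to disk
            self.buffer = {}

    def read(self, key):
        # Merge the buffered value with every spilled run's value.
        vals = [v for run in self.runs for k, v in run if k == key]
        if key in self.buffer:
            vals.append(self.buffer[key])
        out = None
        for v in vals:
            out = v if out is None else self.combine(out, v)
        return out

buf = AggregatingBuffer(operator.add)
for word in ["a", "b", "a", "c", "a", "b"]:
    buf.write(word, 1)
print(buf.read("a"))  # -> 3, aggregated across the buffer and spilled runs
```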
298

Designing multi-sensory displays for abstract data

Nesbitt, Keith January 2003 (has links)
The rapid increase in available information has led to many attempts to automatically locate patterns in large, abstract, multi-attributed information spaces. These techniques are often called data mining and have met with varying degrees of success. An alternative approach to automatic pattern detection is to keep the user in the exploration loop by developing displays for perceptual data mining. This approach allows a domain expert to search the data for useful relationships, and can be effective when automated rules are hard to define. However, designing models of the abstract data and defining appropriate displays are critical tasks in building a useful system.

Designing displays of abstract data is especially difficult when multi-sensory interaction is considered. New technology, such as Virtual Environments, enables such multi-sensory interaction: interfaces can be designed that immerse the user in a 3D space and provide visual, auditory, and haptic (tactile) feedback. It has been a goal of Virtual Environments to use multi-sensory interaction to increase the human-to-computer bandwidth, which may assist the user in understanding large information spaces and finding patterns in them. While the motivation is simple enough, actually designing appropriate mappings between the abstract information and the human sensory channels is quite difficult. Designing intuitive multi-sensory displays of abstract data is complex and must carefully consider human perceptual capabilities; yet we interact with the real world every day in a multi-sensory way, and metaphors can describe mappings between the natural world and an abstract information space.

This thesis develops a division of the multi-sensory design space called the MS-Taxonomy. The MS-Taxonomy provides a concept map of the design space based on temporal, spatial, and direct metaphors. The detailed concepts within the taxonomy allow discussion of low-level design issues, and the concepts also abstract to higher levels, allowing general design issues to be compared and discussed across the different senses. The MS-Taxonomy thus provides a categorisation of multi-sensory design options. However, designing effective multi-sensory displays requires more than a thorough understanding of design options: it is also useful to have guidelines to follow and a process that describes the design steps. This thesis therefore uses the structure of the MS-Taxonomy to develop the MS-Guidelines and the MS-Process. The MS-Guidelines capture design recommendations and the problems associated with different design choices, while the MS-Process integrates the MS-Guidelines into a methodology for developing and evaluating multi-sensory displays.

A detailed case study is used to validate the MS-Taxonomy, the MS-Guidelines, and the MS-Process. The case study explores the design of multi-sensory displays within a domain where users wish to explore abstract data for patterns: Technical Analysis, which involves the interpretation of patterns in stock market data. Following the MS-Process and using the MS-Guidelines, some new multi-sensory displays are designed for pattern detection in stock market data. The outcome of the case study includes some novel haptic-visual and auditory-visual designs that are prototyped and evaluated.
300

Constituents and their expectations: towards a critical-pragmatic theory of information systems project management

Brook, Phillip William James. January 2004 (has links)
Thesis (Ph.D.) -- University of Western Sydney, 2004. "Submitted as fulfilling the requirements for the Doctor of Philosophy Degree" -- title page, March 2004. Includes bibliographic references.
