161

On improving the ease of use of the software transactional memory abstraction / Faciliter l'utilisation des mémoires transactionnelles logicielles

Crain, Tyler 06 March 2013 (has links)
Multicore architectures are changing the way we write programs, and writing concurrent programs is well known to be a difficult task. Traditionally, the use of locks, allowing code to execute in mutual exclusion, has been the most widely used abstraction for writing concurrent programs. Unfortunately, it is difficult to write correct and efficient concurrent programs using locks, and locks present other problems as well, such as scalability issues. Transactional memory has been proposed as a promising solution to these difficulties. Transactions can be viewed as a high-level abstraction, or methodology, for writing concurrent programs, allowing the programmer to declare which sections of code should be executed atomically, without having to worry about synchronization details. Unfortunately, although arguably easier to use than locks, transactional memory still suffers from performance and ease-of-use problems; in fact, many concepts surrounding the usage and semantics of transactions have no widely agreed-upon standards. This thesis focuses on these ease-of-use problems, discussing how previous research has dealt with them and proposing new solutions that put ease of use first. The thesis starts with a chapter giving a brief overview of software transactional memory (STM) and a discussion of the ease-of-use problem addressed in the later chapters. The research contributions are then divided into four main chapters, each exploring a different approach to making transactional memory easier to use.
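To make the transactional abstraction concrete, here is a minimal sketch of an optimistic STM in Python. All names (`TVar`, `Transaction`, `atomically`) are illustrative, not taken from the thesis, and the commit step is not itself atomic here — a real multi-threaded STM must guarantee that with locking or compare-and-swap.

```python
class TVar:
    """A transactional variable: a value plus a version number."""
    def __init__(self, value):
        self.value = value
        self.version = 0

class Transaction:
    """Buffers reads and writes; changes become visible only on commit."""
    def __init__(self):
        self.reads = {}   # TVar -> version observed at first read
        self.writes = {}  # TVar -> pending new value

    def read(self, tvar):
        if tvar in self.writes:          # read-your-own-writes
            return self.writes[tvar]
        self.reads.setdefault(tvar, tvar.version)
        return tvar.value

    def write(self, tvar, value):
        self.writes[tvar] = value

    def commit(self):
        # Validate: every variable read must be unchanged since we read it.
        # NOTE: in a real STM this whole method must execute atomically.
        for tvar, seen in self.reads.items():
            if tvar.version != seen:
                return False             # conflict: caller retries
        for tvar, value in self.writes.items():
            tvar.value = value
            tvar.version += 1
        return True

def atomically(fn):
    """Run fn(tx) until its commit succeeds, retrying on conflict."""
    while True:
        tx = Transaction()
        result = fn(tx)
        if tx.commit():
            return result
```

The programmer only declares what should happen atomically (the body of `fn`); validation and retry are the runtime's job, which is exactly the ease-of-use argument made in the abstract.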
162

Estrutura de dados persistentes / Persistent data structures

Couto, Yan Soares 08 January 2019 (has links)
Data structures (DSs) allow access and update operations; access operations only consult the value of one or more fields of the DS, while update operations may modify the fields of the structure. We say that, whenever an update operation is performed, a new version of the DS is created. A DS is partially persistent if it allows access operations on previous versions of the structure but update operations only on the newest version, and fully persistent if it also allows update operations on all versions. This dissertation presents the description and implementation of fully or partially persistent versions of several data structures: stacks, queues, deques, and red-black trees. General techniques for making certain classes of DSs persistent are also discussed. Finally, a solution to the point location problem, using a persistent binary search tree, is presented.
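As a minimal illustration of persistence, here is a hypothetical Python sketch of a persistent stack (the classic node-sharing idea; the names are illustrative, not the dissertation's implementation). Because `push` and `pop` allocate rather than mutate, every version ever created remains readable, and since updates can start from any version, this small example is in fact fully persistent:

```python
class PersistentStack:
    """Persistent stack: each operation returns a new version and never
    mutates existing nodes, so all previous versions stay accessible."""

    class _Node:
        __slots__ = ("value", "below")
        def __init__(self, value, below):
            self.value = value
            self.below = below

    def __init__(self, head=None, size=0):
        self.head = head
        self.size = size

    def push(self, value):
        """Return a new version with `value` on top; self is unchanged."""
        return PersistentStack(self._Node(value, self.head), self.size + 1)

    def pop(self):
        """Return (top value, new version without it); self is unchanged."""
        if self.head is None:
            raise IndexError("pop from empty stack")
        return self.head.value, PersistentStack(self.head.below, self.size - 1)
```

Note the space efficiency: versions share nodes, so a sequence of k operations uses O(k) total space even though it creates k distinct versions.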
163

Constituents and their expectation : towards a critical-pragmatic theory of information systems project management

Brook, Phillip, University of Western Sydney, College of Law and Business, School of Marketing and International Business January 2004 (has links)
This dissertation presents a theoretical model of information systems (IS) project management that aims to improve the rate of project success, estimated at the time of writing to be less than 50% despite over thirty years of experience. The inquiry into IS management and the development of the CED Model of IS Project Management presented in this dissertation were informed by critical social theory and pragmatics. IS projects are conceptualised as a collaborative undertaking by everyone affected by the project: the constituents. The model identifies the central role of constituents' expectations and their critical examination of the intended, desired and feared outcomes. By being grounded in the political, economic, technical and social dimensions of IS projects, this dissertation makes a theoretical contribution to the IS discipline. Furthermore, by establishing a set of prescriptions for how a project should be conducted, identifying who should take part (the constituents), how they should interact (engaging in the project discourse) and how the processes should be managed (driven by the constituents' expectations), the model provides guidance for IS practitioners to increase the likelihood of successfully implementing a project. From a research perspective, this dissertation presents an example of empirically grounded IS research, informed by critical-pragmatic theoretic concerns, that is highly relevant for IS practice. / Doctor of Philosophy (PhD)
164

Improving the efficiency of graph-based data mining with application to public health data

Zhang, Yan, January 2007 (has links) (PDF)
Thesis (M.S. in computer science)--Washington State University, December 2007. / Includes bibliographical references (p. 77-79).
165

Reconnaissance de codes correcteurs d'erreurs / Recognition of error-correcting codes

Côte, Maxime 22 March 2010 (has links) (PDF)
In this thesis I studied the recognition of error-correcting codes from a noisy observation. Among such codes, we chose to focus on convolutional codes and turbo codes. The transmission channel considered in this work is the binary symmetric channel. Building on the work of E. Filiol and J. Barbier, I developed an algorithm, designed jointly with N. Sendrier. We created a new generic method for recognizing (n, k) convolutional codes (k inputs and n outputs). This method improves on the state of the art through the exclusive use of binary linear-algebra operations in the algorithm. The implementation gives good results, both in execution time and in noise tolerance, for all types of convolutional codes. The second part consists of developing a method for recognizing turbo codes. This method rests on two assumptions: that we can recover the first convolutional code using our convolutional-code recognition method, and that the second convolutional code (following the interleaver) has a systematic generator matrix defined by P(D)/Q(D) (where P(D) and Q(D) are the polynomials of the convolutional encoder) with a nonzero constant term. This last assumption, strong but realistic, allows us to build a method and an algorithm capable of recovering both the interleaver and the polynomials P(D) and Q(D) of the convolutional code. This algorithm is very fast but reaches its limits as the error rate grows. Moreover, our assumption makes it impossible to reconstruct punctured turbo codes without modifying the algorithm.
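The "exclusive use of binary linear-algebra operations" mentioned above can be illustrated in miniature: recovering parity-check relations satisfied by observed codewords is a nullspace computation over GF(2). The sketch below is illustrative only — it is noiseless and is not the thesis's actual algorithm, which must cope with errors from the binary symmetric channel. Each row is stored as a Python integer used as a bit vector, so XOR implements addition over GF(2):

```python
def gf2_nullspace(rows, n):
    """Basis of the nullspace of a binary matrix over GF(2).
    Each row is an n-bit Python int; bit j is the entry in column j."""
    pivots = {}  # pivot column -> row whose highest set bit is that column
    for r in rows:
        while r:
            top = r.bit_length() - 1
            if top not in pivots:
                pivots[top] = r
                break
            r ^= pivots[top]  # clears bit `top` (XOR = addition over GF(2))
    # Back-substitute so each pivot column appears in exactly one row.
    cols = sorted(pivots, reverse=True)
    for i, c in enumerate(cols):
        for higher in cols[:i]:
            if (pivots[higher] >> c) & 1:
                pivots[higher] ^= pivots[c]
    # One basis vector per free (non-pivot) column.
    basis = []
    for free in range(n):
        if free in pivots:
            continue
        v = 1 << free
        for c, row in pivots.items():
            if (row >> free) & 1:
                v |= 1 << c
        basis.append(v)
    return basis
```

Everything here is word-level binary arithmetic, consistent with the efficiency argument the abstract makes for restricting the algorithm to binary linear algebra.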
166

Aspects théoriques et algorithmiques de l'optimisation semidéfinie / Theoretical and algorithmic aspects of semidefinite optimization

Ramirez-Cabrera, Hector 13 January 2005 (has links) (PDF)
The goal of this thesis is to study various topics in nonlinear semidefinite programming (SDP). In the first two chapters we present some algorithmic aspects, while in Chapters 3 and 4 we work on theoretical aspects such as the perturbation analysis of this problem. The first chapter develops a global algorithm that extends the local S-SDP algorithm. This algorithm is based on Han's penalty function and a line-search strategy. The second chapter is devoted to the study of penalty and barrier methods for solving convex semidefinite problems. We prove the convergence of the primal and dual sequences obtained by this method. Moreover, we study the two-parameter algorithm, extending the results known in the restricted setting of ordinary convex programming. In a second part, consisting of Chapters 3 and 4, we are interested in characterizing the strong regularity of solutions in terms of certain second-order optimality conditions. Thus, in the third chapter we focus on the second-order cone problem, a particular case of the SDP problem, for which we obtain this characterization. Finally, in Chapter 4 we give necessary and sufficient conditions for strong regularity in the SDP case; its full characterization, however, remains an open problem.
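For reference, the nonlinear semidefinite program studied here can be stated in standard textbook notation (this formulation is the conventional one, not copied from the thesis):

```latex
\min_{x \in \mathbb{R}^n} \; f(x)
\qquad \text{subject to} \qquad
G(x) \succeq 0,
```

where $f:\mathbb{R}^n \to \mathbb{R}$, $G$ maps $x$ to a symmetric $m \times m$ matrix, and $G(x) \succeq 0$ means positive semidefiniteness. A barrier method of the kind studied in the second chapter replaces the constraint by a term such as $-\mu \log\det G(x)$ and drives $\mu \downarrow 0$.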
167

Hotlinks and Dictionaries

Douïeb, Karim 29 September 2008 (has links)
Knowledge has always been a decisive factor in humankind's social evolution, and collecting the world's knowledge is one of the greatest challenges of our civilization. Knowledge involves the use of information, but information is not knowledge: knowledge is a way of acquiring and understanding information. Improving the visibility and accessibility of information requires organizing it efficiently, and this thesis addresses that general purpose. A fundamental objective of computer science is to store and retrieve information efficiently. This is known as the dictionary problem: a dictionary asks for a data structure that essentially supports the search operation. In general, information that is important and popular at a given time has to be accessed faster than less relevant information. This can be achieved by periodically reorganizing the data structure so that relevant information lies closer to the search starting point. The second part of this thesis is devoted to the development and understanding of such self-adjusting dictionaries in various models of computation. In particular, we focus on dictionaries that have no knowledge of future accesses; these dictionaries must adapt themselves to remain competitive with dictionaries specifically tuned for a given access sequence. This approach, which transforms the information structure, is not always feasible, for instance when the structure is based on the semantics of the information, such as a categorization. In that context, the search procedure is tied to the structure itself, and modifying the structure affects how a search is performed. A solution developed to improve search in static structures is hotlink assignment: a way to enhance a structure without altering its original design, speeding up search by creating shortcuts in the structure. The first part of this thesis is devoted to this approach.
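The benefit of a hotlink can be quantified with a small model: weight each leaf by its access probability, take the expected search cost to be the probability-weighted depth, and observe that a shortcut from the root to a deep but popular subtree shortens exactly the searches passing through it. The sketch below uses illustrative names and a simplified model (a search for anything under the hotlink's target always takes the shortcut), not the thesis's formal setting:

```python
def expected_cost(tree, depth=0):
    """tree = (prob, children); prob is this node's access probability
    (nonzero only on leaves here). A search costs the depth of its target."""
    prob, children = tree
    cost = prob * depth
    for child in children:
        cost += expected_cost(child, depth + 1)
    return cost

def cost_with_hotlink(root, subtree, subtree_depth):
    """Expected cost after adding one hotlink root -> subtree:
    searches ending under `subtree` now reach it in a single step."""
    detoured = expected_cost(subtree, subtree_depth)  # old cost of those searches
    shortcut = expected_cost(subtree, 1)              # new cost via the hotlink
    return expected_cost(root) - detoured + shortcut
```

Choosing where to place the hotlink is then an optimization over subtrees, which is the hotlink assignment problem studied in the first part of the thesis.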
168

Algorithmes auto-stabilisants pour la construction d'arbres couvrants et la gestion d'entités autonomes / Self-stabilizing algorithms for spanning tree construction and the management of autonomous entities

Blin, Lélia 01 December 2011 (has links) (PDF)
In the context of large-scale networks, coping with failures is an obvious necessity. This document focuses on the self-stabilizing approach, which aims to design algorithms that "repair themselves" after transient faults, that is, failures that arbitrarily corrupt the state of processes. It covers two different settings, spanning most of my research work in recent years. The first part of the document is devoted to self-stabilizing algorithms for networks of processes. The second part is devoted to self-stabilizing algorithms for autonomous entities (software agents, robots, etc.) moving through a network.
169

Compact connectivity representation for triangle meshes

Gurung, Topraj 05 April 2013 (has links)
Many digital models used in entertainment, medical visualization, material science, architecture, Geographic Information Systems (GIS), and mechanical Computer Aided Design (CAD) are defined in terms of their boundaries. These boundaries are often approximated using triangle meshes. The complexity of models, which can be measured by triangle count, increases rapidly with the precision of scanning technologies and with the need for higher resolution. An increase in mesh complexity results in an increase of storage requirement, which in turn increases the frequency of disk access or cache misses during mesh processing, and hence decreases performance. For example, in a test application involving a mesh with 55 million triangles, run on a machine with 4GB of memory versus a machine with 1GB of memory, performance decreases by a factor of about 6000 because of memory thrashing. To help reduce memory thrashing, we focus on decreasing the average storage requirement per triangle, measured in 32-bit integer references per triangle (rpt). This thesis covers compact connectivity representations for triangle meshes and discusses four data structures:

1. Sorted Opposite Table (SOT), which uses 3 rpt and has been extended to support tetrahedral meshes.
2. Sorted Quad (SQuad), which uses about 2 rpt and has been extended to support streaming.
3. Laced Ring (LR), which uses about 1 rpt and offers an excellent compromise between storage compactness and performance of mesh traversal operators.
4. Zipper, an extension of LR, which uses about 6 bits per triangle (equivalently 0.19 rpt) and is therefore the most compact representation.

The triangle mesh data structures proposed in this thesis support the standard set of mesh connectivity operators introduced by the previously proposed Corner Table at an amortized constant time complexity. They can be constructed in linear time and space from the Corner Table or any equivalent representation.
If geometry is stored as 16-bit coordinates, using Zipper instead of the Corner Table increases the size of the mesh that can be stored in core memory by a factor of about 8.
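For context, the Corner Table that these four structures compress can be sketched as follows. This is the standard formulation (V and O arrays, with triangle t owning corners 3t, 3t+1, 3t+2, so within-triangle moves are pure arithmetic); details such as boundary handling vary between implementations, and this sketch is not code from the thesis:

```python
def nxt(c):  # next corner within the same triangle
    return c - c % 3 + (c + 1) % 3

def prv(c):  # previous corner within the same triangle
    return c - c % 3 + (c + 2) % 3

class CornerTable:
    """V[c]: vertex id of corner c.  O[c]: the corner in the adjacent
    triangle facing the same edge as c (-1 on a boundary edge)."""
    def __init__(self, V, O):
        self.V, self.O = V, O

    def triangle(self, c):
        return c // 3

    def swing(self, c):
        """Next corner around vertex V[c] (undefined across a boundary)."""
        return nxt(self.O[nxt(c)])

# Two triangles sharing edge (1, 2): t0 = (0, 1, 2), t1 = (1, 3, 2).
ct = CornerTable(V=[0, 1, 2, 1, 3, 2],
                 O=[4, -1, -1, -1, 0, -1])
```

Only the O array costs the 3 rpt quoted for SOT; SQuad, LR, and Zipper progressively compress or implicitly encode this adjacency information.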
170

Space-Efficient Data Structures in the Word-RAM and Bitprobe Models

Nicholson, Patrick 06 August 2013 (has links)
This thesis studies data structures in the word-RAM and bitprobe models, with an emphasis on space efficiency. In the word-RAM model of computation, the space cost of a data structure is measured by the number of w-bit words stored in memory, and the cost of answering a query by the number of read, write, and arithmetic operations that must be performed. In the bitprobe model, the space cost is measured by the number of bits stored in memory, but the query cost is measured solely by the number of bit accesses, or probes, performed.

First, we examine the problem of succinctly representing a partially ordered set, or poset, in the word-RAM model with word size Theta(lg n) bits. A succinct representation of a combinatorial object is one that occupies space matching the information-theoretic lower bound to within lower-order terms. We show how to represent a poset on n vertices using a data structure that occupies n^2/4 + o(n^2) bits and can answer precedence (i.e., less-than) queries in constant time. Since the transitive closure of a directed acyclic graph is a poset, this implies that we can support reachability queries on an arbitrary directed graph in the same space bound. As far as we are aware, this is the first representation of an arbitrary directed graph that supports reachability queries in constant time and stores less than n choose 2 bits. We also consider several additional query operations.

Second, we examine the problem of supporting range queries on strings of n characters (or, equivalently, arrays of n elements) in the word-RAM model with word size Theta(lg n) bits. We focus on the specific problem of answering range majority queries: given a range, report the character that is the majority among those in the range, if one exists. We show that these queries can be supported in constant time using a linear-space (in words) data structure. We generalize this result in several directions, considering various frequency thresholds, geometric variants of the problem, and dynamism. These results are in stark contrast to recent work on the similar range mode problem, in which the query asks for the mode (i.e., most frequent) character in a given range; the current best linear-space data structures for the range mode problem take soft-Oh(n^(1/2)) time per query.

Third, we examine the deterministic membership (or dictionary) problem in the bitprobe model. This problem asks us to store a set of n elements drawn from a universe [1,u] such that membership queries can always be answered in t bit probes. We present several new fully explicit results for this problem, in particular for the case when n = 2, answering an open problem posed by Radhakrishnan, Shah, and Shannigrahi [ESA 2010]. We also present a general strategy for the membership problem that can be used to solve many related fundamental problems, such as rank, counting, and emptiness queries. Finally, we conclude with a list of open problems and avenues for future work.
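To make the range majority query concrete, here is the textbook linear-scan baseline: a Boyer-Moore voting pass followed by a verification pass, costing O(j - i) per query. This is illustrative only; the thesis's contribution is a structure answering the same query in constant time with linear-space preprocessing.

```python
def range_majority(s, i, j):
    """Return the majority character of s[i:j] (occurring strictly more
    than (j - i) / 2 times), or None if there is no majority."""
    # Voting pass: the majority element, if any, survives pairwise cancellation.
    candidate, count = None, 0
    for c in s[i:j]:
        if count == 0:
            candidate, count = c, 1
        elif c == candidate:
            count += 1
        else:
            count -= 1
    # Verification pass: the vote can elect a non-majority candidate,
    # so confirm it actually exceeds half the range length.
    if candidate is not None and s[i:j].count(candidate) * 2 > j - i:
        return candidate
    return None
```

The verification pass is what makes the answer exact; dropping it is a common bug, since the voting pass alone only guarantees correctness when a majority exists.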
