71

Data Distribution Management In Large-scale Distributed Environments

Gu, Yunfeng 15 February 2012 (has links)
Data Distribution Management (DDM) deals with two basic problems: how to distribute data generated at the application layer among the underlying nodes of a distributed system, and how to retrieve that data whenever it is needed. This thesis explores DDM in two different network environments: peer-to-peer (P2P) overlay networks and cluster-based network environments. In P2P overlay networks, DDM goes beyond a simple data-fetching scheme to encompass building and maintaining the overlay architecture itself, and is closely related to the more commonly known associative searching, or queries. In the cluster-based network environment, DDM is one of the important services provided by simulation middleware to support real-time distributed interactive simulations. The only feature the two settings share is that both are built to provide a data indexing service; because of their fundamental differences, each calls for its own solution. For P2P overlay networks we have designed and developed a novel distributed data structure, the Hierarchically Distributed Tree (HD Tree), to support range queries. All the relevant concerns of a distributed data structure, including scalability, self-organization, fault tolerance, and load balancing, have been studied. Both theoretical analysis and experimental results show that the HD Tree gives a complete view of system state when processing multi-dimensional range queries at different levels of selectivity and in various error-prone routing environments. For the cluster-based network environment, a novel DDM scheme, the Adaptive Grid-based DDM scheme, is proposed to improve DDM performance. This scheme estimates the input size of a simulation using probability models; optimum DDM performance is then approached by running the simulation in the mode best suited to that size.
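As a rough illustration of the kind of grid-based matching such DDM schemes build on (a minimal sketch only: the cell size, region format, and matching rule below are illustrative assumptions, not the thesis's Adaptive Grid-based scheme):

```python
from collections import defaultdict

CELL = 10.0  # grid cell size (illustrative assumption)

def cells(region):
    """Yield the (i, j) grid cells overlapped by an axis-aligned region (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = region
    for i in range(int(x1 // CELL), int(x2 // CELL) + 1):
        for j in range(int(y1 // CELL), int(y2 // CELL) + 1):
            yield (i, j)

def match(update_regions, subscription_regions):
    """Grid-based DDM matching: an update is routed to a subscriber
    whenever their regions share at least one grid cell."""
    index = defaultdict(set)                      # cell -> subscriber ids
    for sub_id, region in subscription_regions.items():
        for c in cells(region):
            index[c].add(sub_id)
    routes = defaultdict(set)                     # update id -> subscriber ids
    for upd_id, region in update_regions.items():
        for c in cells(region):
            routes[upd_id] |= index[c]
    return routes

if __name__ == "__main__":
    updates = {"u1": (2, 2, 8, 8), "u2": (35, 35, 38, 38)}
    subs = {"s1": (5, 5, 25, 25), "s2": (60, 60, 70, 70)}
    print(dict(match(updates, subs)))             # u1 -> {'s1'}, u2 -> no match
```

Cell-level matching can report false positives (regions that share a cell without actually overlapping), which is one reason the choice of grid resolution drives DDM performance and motivates adapting it to the size of the simulation.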
73

Partial persistent sequences and their applications to collaborative text document editing and processing

Wu, Qinyi 08 July 2011 (has links)
In a variety of text document editing and processing applications, it is necessary to keep track of the revision history of text documents by recording changes and the metadata of those changes (e.g., user names and modification timestamps). Recent Web 2.0 document editing and processing applications, such as real-time collaborative note taking and wikis, require fine-grained shared access to collaborative text documents as well as efficient retrieval of metadata associated with different parts of those documents. Current revision control techniques only support coarse-grained shared access and are inefficient at retrieving metadata of changes at the sub-document granularity. In this dissertation, we design and implement partial persistent sequences (PPSs) to support real-time collaboration and to manage metadata of changes at fine granularity for collaborative text document editing and processing applications. As a persistent data structure, PPSs have two important features. First, items in the data structure are never removed. We maintain the timestamp information needed to keep track of both inserted and deleted items, and use it to reconstruct the state of a document at any point in time. Second, PPSs create unique, persistent, and ordered identifiers for items of a document at fine granularities (e.g., a word or a sentence). As a result, we are able to support consistent, fine-grained shared access to collaborative text documents by detecting and resolving editing conflicts based on the revision history, and to efficiently index and retrieve metadata associated with different parts of those documents. We demonstrate the capabilities of PPSs through two important problems in collaborative text document editing and processing: data consistency control and fine-grained document provenance management. The first problem studies how to detect and resolve editing conflicts in collaborative text document editing systems. We approach it in two steps. First, we use PPSs to capture data dependencies between different editing operations and define a consistency model better suited to real-time collaborative editing systems. Second, we extend our work to the entire spectrum of collaborations and adapt transactional techniques to build a flexible framework for the development of various collaborative editing systems. The generality of this framework is demonstrated by its ability to specify three different types of collaboration, as exemplified by RCS, MediaWiki, and Google Docs respectively. We precisely specify the programming interfaces of this framework and describe a prototype implementation over Oracle Berkeley DB High Availability, a replicated database management engine. The second problem, fine-grained document provenance management, studies how to efficiently index and retrieve fine-grained metadata for different parts of collaborative text documents. We use PPSs to design techniques that are both disk-economic and computation-efficient for indexing provenance data for millions of Wikipedia articles. Our approach is disk-economic because we save only a few full versions of a document and keep only the delta changes between those full versions. It is computation-efficient because it avoids parsing the revision history of collaborative documents to retrieve fine-grained metadata. Compared to MediaWiki, the revision control system for Wikipedia, our system uses less than 10% of the disk space and retrieves fine-grained metadata at least an order of magnitude faster for documents with thousands of revisions.
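A toy sketch of the two PPS features described above, assuming floating-point midpoints as the persistent identifiers and a single global integer clock (both are illustrative simplifications, not the dissertation's actual identifier scheme):

```python
from bisect import insort
from dataclasses import dataclass, field
from typing import Optional

@dataclass(order=True)
class Item:
    pos: float                      # persistent, ordered identifier (illustrative: a float key)
    text: str = field(compare=False)
    inserted: int = field(compare=False)
    deleted: Optional[int] = field(default=None, compare=False)

class PPS:
    """Toy partial persistent sequence: items are never physically removed;
    deletion only stamps a timestamp, so any past state can be replayed."""
    def __init__(self):
        self.items = []
        self.clock = 0

    def insert(self, left_pos, right_pos, text):
        self.clock += 1
        pos = (left_pos + right_pos) / 2.0   # identifier ordered between its neighbours
        insort(self.items, Item(pos, text, self.clock))
        return pos

    def delete(self, pos):
        self.clock += 1
        for it in self.items:
            if it.pos == pos and it.deleted is None:
                it.deleted = self.clock

    def state_at(self, t):
        """Reconstruct the visible document at timestamp t."""
        return [it.text for it in self.items
                if it.inserted <= t and (it.deleted is None or it.deleted > t)]

if __name__ == "__main__":
    doc = PPS()
    a = doc.insert(0.0, 1.0, "Hello")
    b = doc.insert(a, 1.0, "world")
    doc.delete(a)
    print(doc.state_at(2))   # ['Hello', 'world']
    print(doc.state_at(3))   # ['world']
```

Because identifiers stay stable across edits, metadata (authors, timestamps) can be attached to them once and retrieved later without replaying the whole revision history.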
75

Shell Modeling with Parametric Intersection

LUIZ CRISTOVAO GOMES COELHO 26 July 2002 (has links)
We present a methodology for modeling finite-element shell meshes defined on parametric surfaces. The methodology consists of creating curves and generating meshes over the parametric patches built from those curves, which are also used to connect adjacent meshes. The final model is a representation of all meshes combined into a single data structure. The basic tools for generating such meshes are a user interface for modeling space curves and the geometric algorithms for constructing the mappings of the elementary domains. The central problem in composite modeling is how to handle meshes on surfaces that intersect each other. We present an algorithm that models the intersection curves precisely and adjusts both meshes to the newly created constraints. The algorithm is part of a complete interactive shell modeling program, which has been used in the design of large floating systems for deep-water oil exploitation. A variant of the DCEL data structure, which stores topological entities in spatial-ordering trees instead of lists or arrays, allows even highly refined meshes to be rebuilt in times compatible with interactive work. These trees speed up the intersection computations needed to determine the interpolation points of the trimming curves, and also allow the meshes to be reconstructed using only local queries.
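As a hedged illustration of why spatial indexing pays off here (a generic bounding-volume tree in Python, not the thesis's DCEL variant or its actual spatial-ordering trees):

```python
from dataclasses import dataclass

@dataclass
class Node:
    box: tuple          # (xmin, ymin, xmax, ymax)
    tri: int = None     # leaf: index of the triangle it stores
    left: "Node" = None
    right: "Node" = None

def bbox(tri):
    xs = [p[0] for p in tri]; ys = [p[1] for p in tri]
    return (min(xs), min(ys), max(xs), max(ys))

def union(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def overlaps(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def build(indices, tris):
    """Build a bounding-volume tree by splitting triangles at the x-median."""
    if len(indices) == 1:
        return Node(bbox(tris[indices[0]]), tri=indices[0])
    indices = sorted(indices, key=lambda i: bbox(tris[i])[0])
    mid = len(indices) // 2
    lo, hi = build(indices[:mid], tris), build(indices[mid:], tris)
    return Node(union(lo.box, hi.box), left=lo, right=hi)

def query(node, box, out):
    """Collect triangles whose bounding boxes overlap `box` (intersection candidates)."""
    if not overlaps(node.box, box):
        return
    if node.tri is not None:
        out.append(node.tri)
        return
    query(node.left, box, out)
    query(node.right, box, out)

if __name__ == "__main__":
    tris = [((0, 0), (1, 0), (0, 1)), ((5, 5), (6, 5), (5, 6)), ((0.5, 0.5), (2, 0), (0, 2))]
    tree = build(list(range(len(tris))), tris)
    cand = []; query(tree, bbox(tris[2]), cand)
    print(cand)   # candidate triangles for exact intersection tests with triangle 2
```

Only the candidates returned by such a query need exact (and expensive) intersection tests, which is what keeps mesh rebuilds local enough for interactive use.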
76

PaVo: An Adaptive Parallel Sorting Algorithm

Durand, Marie 25 October 2013 (has links)
Demanding gamers buy, as soon as they can, graphics cards capable of satisfying their appetite for games whose precision, realism, and interactivity keep increasing over time. Since the advent of general-purpose computing on graphics cards, gamers are no longer their only customers. We first examine the value of these specific parallel architectures for large-scale physics simulations. This study highlights one bottleneck in particular that limits simulation performance. Consider a typical case: cracks in a complex reinforced-concrete structure such as a dam can be modeled by a set of particles, whose interactions simulate the cohesion of the material. Each particle is represented in memory by a set of physical parameters that must be read for every force computation between two particles. For these computations to be fast, the data of particles that are close in space must therefore be close in memory. Otherwise, the number of cache misses rises and the memory bandwidth limit may be reached, especially in parallel, capping performance. The challenge is to maintain this data organization throughout the simulation despite particle movements. Standard sorting algorithms are not suitable because they systematically sort all elements; moreover, they work on dense structures, which implies many data movements in memory. We propose PaVo, an adaptive sort, meaning that it exploits the pre-existing order in a sequence. In addition, PaVo maintains gaps in the structure, distributed so as to reduce the number of memory movements required. We present an extensive experimental study and compare the results with several well-known sorts. Reducing memory accesses matters even more for large-scale simulations on parallel architectures. We describe a parallel version of PaVo and evaluate its benefit. To account for application irregularity, the workload is balanced dynamically by work stealing. We distribute data in memory automatically to take advantage of hierarchical architectures; tasks are pre-assigned to cores according to this distribution, and the work-stealing engine is adapted to favor steals of tasks whose data are close in memory.
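A sequential toy showing the gap idea on which an adaptive, gapped sort relies (this is not PaVo itself; the gap placement and growth policy below are illustrative assumptions):

```python
def spread_with_gaps(sorted_vals):
    """Lay out already-sorted data with an empty slot (None) after every value,
    so later insertions rarely need long shifts."""
    slots = []
    for v in sorted_vals:
        slots.extend([v, None])
    return slots

def gapped_insert(slots, x):
    """Insert x in order, shifting elements only up to the nearest gap."""
    # locate the insertion point (first occupied slot holding a value >= x)
    i = 0
    while i < len(slots) and (slots[i] is None or slots[i] < x):
        i += 1
    # find the nearest free slot at or after the insertion point
    j = i
    while j < len(slots) and slots[j] is not None:
        j += 1
    if j == len(slots):
        slots.append(None)          # out of gaps: grow (a real scheme would rebalance)
    while j > i:                    # shift only the short run between i and the gap
        slots[j] = slots[j - 1]
        j -= 1
    slots[i] = x

if __name__ == "__main__":
    slots = spread_with_gaps([1, 3, 5, 7, 9])
    gapped_insert(slots, 4)         # lands next to 3/5, shifting at most one element
    gapped_insert(slots, 6)
    print([v for v in slots if v is not None])   # [1, 3, 4, 5, 6, 7, 9]
```

Because particles drift only slightly between simulation steps, most elements are already near their sorted position, so a dense re-sort would move far more data than such localized, gap-absorbed insertions.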
77

Efficient algorithms for de novo assembly of alternative splicing events from RNA-seq data

Tominaga Sacomoto, Gustavo Akio 06 March 2014 (has links)
In this thesis, we address the problem of identifying and quantifying variants (alternative splicing and genomic polymorphism) in RNA-seq data when no reference genome is available, without assembling the full transcripts. Based on the idea that each variant corresponds to a recognizable pattern, a bubble, in a de Bruijn graph constructed from the RNA-seq reads, we propose a general model for all variants in such graphs. We then introduce an exact method, called KisSplice, to extract alternative splicing events, and show that it outperforms general-purpose transcriptome assemblers. We put extra effort into making KisSplice as scalable as possible. To improve the running time, we propose a new polynomial-delay algorithm to enumerate bubbles, and show that it is several orders of magnitude faster than previous approaches. To reduce memory consumption, we propose a new compact way to build and represent a de Bruijn graph; our approach uses 30% to 40% less memory than the state of the art, with an insignificant impact on construction time. Additionally, we apply the techniques developed for listing bubbles to two classical problems: cycle enumeration and the K-shortest paths problem. We give the first optimal algorithm to list cycles in undirected graphs, improving over Johnson's algorithm; this is the first improvement to this problem in almost 40 years. We then consider a different parameterization of the K-shortest (simple) paths problem: instead of bounding the number of st-paths, we bound their weight. We present new algorithms that use exponentially less memory than previous approaches.
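For illustration, a brute-force Python sketch of what a bubble is in this setting, i.e., two internally vertex-disjoint paths between the same pair of nodes (KisSplice's polynomial-delay enumeration is far more efficient; the toy graph and node labels are assumptions):

```python
from itertools import combinations

def all_paths(graph, src, dst, path=None):
    """Enumerate simple paths from src to dst in a small directed graph (DFS)."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, []):
        if nxt not in path:
            yield from all_paths(graph, nxt, dst, path)

def bubbles(graph):
    """Yield (s, t, path1, path2): two internally vertex-disjoint s->t paths.
    In a de Bruijn graph built from RNA-seq reads, such a pattern is the footprint
    of a variant (e.g., an alternative-splicing event)."""
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    for s in nodes:
        for t in nodes:
            if s == t:
                continue
            for p, q in combinations(all_paths(graph, s, t), 2):
                if set(p[1:-1]).isdisjoint(q[1:-1]):
                    yield s, t, p, q

if __name__ == "__main__":
    # the two a->d paths, through 'b' or through 'c', mimic two transcripts that diverge and rejoin
    g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["e"]}
    for s, t, p, q in bubbles(g):
        print(s, "->", t, ":", p, "vs", q)
```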
78

Platt Hierarki (Flat Hierarchy): Methods for Converting Relational Data to Hierarchical Data

Grönblad, Carl, Eker, Magnus January 2011 (has links)
The relational database model was defined in the 1970s and is the dominating database type today. The main difference between data in a relational database and a hierarchical data structure is that the relational database stores records in tables. The records have no particular order, but can include links in the form of relationships to other records. A hierarchical structure organizes data as a tree and is found, for example, in organizational structures in which different levels carry different responsibilities. If data stored in a relational database is to be presented hierarchically, the data structure must be converted. The intention of this paper is to describe how such a conversion can be performed. To investigate conversion methods, case studies were conducted based on a specific organization's hierarchical structure. Web-based prototypes were developed in Silverlight to evaluate the conversion into a hierarchical structure of an organization represented in a relational database. Existing tools were used to extract data from the database and to transfer data in a client-server architecture. The result is a framework for the conversion of relational data into a hierarchical structure, describing the process step by step: the design of the source database, the extraction and transfer of data to a web client, and the algorithm that performs the conversion into a tree structure.
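A minimal sketch of the core conversion step, turning flat parent-child rows into a nested tree (Python is used here for brevity; the column names id, parent_id, and name and the null-parent-as-root convention are illustrative assumptions, not the thesis's schema):

```python
def rows_to_tree(rows):
    """Convert flat relational rows (id, parent_id, name) into a nested tree."""
    nodes = {r["id"]: {"name": r["name"], "children": []} for r in rows}
    roots = []
    for r in rows:
        node = nodes[r["id"]]
        parent = r["parent_id"]
        if parent is None:
            roots.append(node)           # no parent: top of the hierarchy
        else:
            nodes[parent]["children"].append(node)
    return roots

if __name__ == "__main__":
    rows = [
        {"id": 1, "parent_id": None, "name": "Company"},
        {"id": 2, "parent_id": 1,    "name": "Engineering"},
        {"id": 3, "parent_id": 1,    "name": "Sales"},
        {"id": 4, "parent_id": 2,    "name": "Platform team"},
    ]
    import json
    print(json.dumps(rows_to_tree(rows), indent=2))
```

Building a dictionary of nodes first makes the conversion a single linear pass over the rows, regardless of how deep the resulting hierarchy is.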
79

Geometrical Modeling of Structures Using CAD Techniques

REYNALDO COSATI MEDEIROS 05 September 2001 (has links)
Mankind has built roads since antiquity. The road-design techniques used today are the result of a long process of refinement that culminated in a methodology seeking the best technical indexes at the lowest possible execution cost. One can say that this technology has been well established since the early 1940s. To be optimized, however, the methodology requires an enormous amount of calculation that can only be carried out in reasonable time with the aid of a computer. The work that follows is an attempt to build an application that assists the designer in the geometric design of roads, simply by implementing a technique that has been established for more than 50 years.
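As a hedged aside, not taken from the thesis: the kind of repetitive calculation this methodology automates can be illustrated by the standard elements of a simple circular horizontal curve, given a radius and a deflection angle.

```python
import math

def circular_curve(radius, deflection_deg):
    """Elements of a simple circular horizontal curve -- the repetitive computation
    that classical geometric road design requires at every curve.
    radius in metres, deflection (central) angle in degrees."""
    d = math.radians(deflection_deg)
    return {
        "tangent":         radius * math.tan(d / 2),           # PI to curve start/end
        "curve_length":    radius * d,                          # arc length
        "long_chord":      2 * radius * math.sin(d / 2),
        "external":        radius * (1 / math.cos(d / 2) - 1),  # PI to mid-curve
        "middle_ordinate": radius * (1 - math.cos(d / 2)),
    }

if __name__ == "__main__":
    for name, value in circular_curve(radius=300.0, deflection_deg=40.0).items():
        print(f"{name:>15}: {value:8.2f} m")
```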
