21

Efficient data structures for discovery in high level architecture (HLA)

Rahmani, Hibah 01 January 2000 (has links)
The High Level Architecture (HLA) is a prototype architecture for constructing distributed simulations. HLA is a standard adopted by the Department of Defense (DoD) for the development of simulation environments. An important goal of the HLA is to reduce the amount of data routing between simulations during run-time. The Runtime Infrastructure (RTI) is an operating system that is responsible for data routing between the simulations in HLA. The data routing service is provided by the Data Distribution Manager of the RTI. Several methods have been proposed and used for the implementation of data distribution services. The grid-based filtering method, the interval tree method, and the quad-tree method are examples. This thesis analyzes and compares two such methods, the grid and the quad-tree, with regard to their use in discovering intersections between publications and subscriptions. The number of false positives and the CPU time of each method are determined for typical cases. For most cases, the quad-tree method produces fewer false positives. This method is best suited for large simulations where the cost of maintaining false positives, or non-relevant entities, may be prohibitive. For most cases, the grid method is faster than the quad-tree method. This method may be better suited for small simulations where the host has the capacity to accommodate false positives. The results of this thesis can be used to decide which of the two methods is better suited to a particular type of simulation exercise.
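
To illustrate the trade-off the abstract describes, here is a minimal sketch (region coordinates, names, and grid size are invented for illustration and are not from the thesis) of grid-based matching: publication and subscription regions are matched whenever they share a grid cell, which can report pairs that do not actually intersect — the false positives a finer decomposition such as a quad-tree reduces.

```python
# Sketch: grid-based matching of publication/subscription regions.
# Region coordinates, grid size, and names are illustrative only.

def cells(region, cell_size):
    """Return the set of grid cells overlapped by an axis-aligned region."""
    (x1, y1, x2, y2) = region
    return {(cx, cy)
            for cx in range(int(x1 // cell_size), int(x2 // cell_size) + 1)
            for cy in range(int(y1 // cell_size), int(y2 // cell_size) + 1)}

def intersects(a, b):
    """Exact axis-aligned rectangle intersection test."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

publications = {"P1": (0, 0, 10, 10), "P2": (40, 40, 55, 55)}
subscriptions = {"S1": (12, 12, 20, 20), "S2": (50, 0, 60, 10)}

CELL = 25  # coarse grid: P1 and S1 both fall into cell (0, 0)
matches, false_positives = [], []
for p, pr in publications.items():
    for s, sr in subscriptions.items():
        if cells(pr, CELL) & cells(sr, CELL):      # grid says "possible match"
            if intersects(pr, sr):
                matches.append((p, s))
            else:
                false_positives.append((p, s))     # grid match, no real overlap

print("true matches:", matches)
print("false positives:", false_positives)
```
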
22

Resource-constraint And Scalable Data Distribution Management For High Level Architecture

Gupta, Pankaj 01 January 2007 (has links)
In this dissertation, we present an efficient algorithm, called the P-Pruning algorithm, for the data distribution management problem in High Level Architecture. High Level Architecture (HLA) presents a framework for modeling and simulation within the Department of Defense (DoD) and forms the basis of the IEEE 1516 standard. The goal of this architecture is to interoperate multiple simulations and facilitate the reuse of simulation components. Data Distribution Management (DDM) is one of the six components in HLA; it is responsible for limiting and controlling the data exchanged in a simulation and reducing the processing requirements of federates. DDM is also an important problem in the parallel and distributed computing domain, especially in large-scale distributed modeling and simulation applications where control over data exchange among the simulated entities is required. We present a performance-evaluation simulation study of the P-Pruning algorithm against three techniques: the region-matching, fixed-grid, and dynamic-grid DDM algorithms. The P-Pruning algorithm is faster than the region-matching, fixed-grid, and dynamic-grid DDM algorithms because it avoids the quadratic computation step involved in the other algorithms. The simulation results show that the P-Pruning DDM algorithm uses memory at run-time more efficiently and requires fewer multicast groups than the three other algorithms. To increase the scalability of the P-Pruning algorithm, we develop a resource-efficient enhancement. We also present a performance evaluation study of this resource-efficient algorithm in a memory-constrained environment. The Memory-Constraint P-Pruning algorithm deploys I/O-efficient data structures for optimized memory access at run-time. The simulation results show that the Memory-Constraint P-Pruning DDM algorithm is faster than the P-Pruning algorithm and uses run-time memory more efficiently. It is suitable for high-performance distributed simulation applications, as it improves the scalability of the P-Pruning algorithm by several orders of magnitude in terms of the number of federates. We analyze the computational complexity of the P-Pruning algorithm using average-case analysis. We have also extended the P-Pruning algorithm to a three-dimensional routing space. In addition, we present the P-Pruning algorithm for dynamic conditions where the distribution of federates changes at run-time. The dynamic P-Pruning algorithm detects changes among federate regions and rebuilds all the affected multicast groups. We have also integrated the P-Pruning algorithm with FDK, an implementation of the HLA architecture. The integration involves the design and implementation of the communicator module for mapping federate interest regions. We provide a modular overview of the P-Pruning algorithm components and describe the functional flow for creating multicast groups during simulation. We investigate the deficiencies in the DDM implementation under FDK and suggest an approach to overcome them using the P-Pruning algorithm. We have enhanced FDK from its existing HLA 1.3 specification by using the IEEE 1516 standard for the DDM implementation. We provide the system setup instructions and communication routines for running the integrated system on a network of machines. We also describe the implementation details involved in integrating the P-Pruning algorithm with FDK and report the results of our experience.
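
For context, the quadratic step mentioned above corresponds to brute-force region matching, which tests every publication region against every subscription region. The sketch below (illustrative data; this is a generic sort-based sweep, not the P-Pruning algorithm itself) shows the baseline and one simple way such pairwise tests can be pruned along a single routing-space dimension.

```python
# Sketch: brute-force O(n*m) region matching versus a sort-based sweep
# that prunes non-overlapping pairs along one dimension.
# Intervals are (id, low, high) extents along a single routing-space axis;
# data and pruning rule are illustrative, not the thesis's P-Pruning.

def brute_force(pubs, subs):
    return [(p, s) for p, pl, ph in pubs
                   for s, sl, sh in subs
                   if pl <= sh and sl <= ph]          # every pair is tested

def sweep(pubs, subs):
    """Sort both lists by lower bound; stop scanning publications as soon
    as their extents start beyond the subscription's upper bound."""
    pubs = sorted(pubs, key=lambda r: r[1])
    subs = sorted(subs, key=lambda r: r[1])
    out = []
    for s, sl, sh in subs:
        for p, pl, ph in pubs:
            if pl > sh:          # later publications start even further right
                break
            if ph >= sl:         # overlap on this axis
                out.append((p, s))
    return out

pubs = [("P1", 0, 5), ("P2", 10, 20), ("P3", 30, 35)]
subs = [("S1", 3, 12), ("S2", 33, 40)]
assert sorted(brute_force(pubs, subs)) == sorted(sweep(pubs, subs))
print(sorted(sweep(pubs, subs)))
```
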
23

Model distribuiranja geopodataka u komunalnim sistemima / Model of Spatial Data Distribution in Municipal Systems

Bulatović Vladimir 14 May 2011 (has links)
A short review of Open Geospatial Consortium (OGC) web services is given in this work from the perspective of server and client applications. The problems of exchanging spatial data in complex systems, with an emphasis on municipal city services, are analysed. Based on the analysis of data exchange, a model is proposed that improves communication and advances the whole system by implementing distributed OGC web services. The described model of spatial data distribution can be applied to all complex systems, but also within smaller systems such as companies consisting of several sectors or subsystems.
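
As a rough illustration of the kind of OGC web-service request a client in such a distributed model would issue, the sketch below sends a standard WFS 2.0.0 GetFeature request over HTTP. The endpoint URL, layer name, and bounding box are hypothetical, and JSON output depends on the server supporting that format.

```python
# Sketch: a client-side OGC WFS 2.0.0 GetFeature request.
# The endpoint and typeNames value are placeholders, not from the thesis.
import requests

WFS_ENDPOINT = "https://example.org/geoserver/wfs"   # hypothetical server

params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "municipal:water_network",          # hypothetical layer
    "bbox": "19.78,45.23,19.90,45.30,EPSG:4326",     # area of interest
    "outputFormat": "application/json",              # if the server supports it
}

response = requests.get(WFS_ENDPOINT, params=params, timeout=30)
response.raise_for_status()
features = response.json().get("features", [])
print(f"received {len(features)} features")
```
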
24

PATiM: Proximity Aware Time Management

Okutanoglu, Aydin 01 October 2008 (has links) (PDF)
Logical time management is used to synchronize the executions of distributed simulation elements. In existing time management systems, such as the High Level Architecture (HLA), the logical times of the simulation elements are synchronized. However, in some cases synchronization can unnecessarily decrease the performance of the system. In the proposed HLA-based time management mechanism, federates are clustered into logically related groups. The relevance of federates is taken to be a function of proximity, defined as the distance between them in the virtual space. Thus, each federate cluster is composed of relatively close federates according to the calculated distances. When federate clusters are sufficiently far from each other, there is no need to synchronize them, as they are not related to each other. So, in the PATiM mechanism, inter-cluster logical times are not synchronized when clusters are sufficiently distant. However, if distant federate clusters get close to each other, they need to resynchronize their logical times. This temporal partitioning is aimed at reducing network traffic and time management calculations, and also at increasing the concurrency between federates. The results obtained from case applications have verified that clustering improves local performance once federates become unrelated.
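
A minimal sketch of the clustering idea, assuming invented federate positions and a hypothetical distance threshold (the actual PATiM mechanism builds on HLA time management services): federates are grouped by proximity in the virtual space, and logical-time synchronization is performed only within each cluster.

```python
# Sketch: group federates into proximity clusters and synchronize logical
# time only inside each cluster. Positions and threshold are illustrative.
import math

federates = {"F1": (0.0, 0.0), "F2": (3.0, 4.0), "F3": (100.0, 100.0)}
THRESHOLD = 10.0   # federates closer than this are considered related

def distance(a, b):
    return math.dist(a, b)

# Simple single-linkage grouping: merge clusters that contain any pair
# of federates closer than the threshold.
clusters = [{name} for name in federates]
merged = True
while merged:
    merged = False
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            if any(distance(federates[a], federates[b]) < THRESHOLD
                   for a in clusters[i] for b in clusters[j]):
                clusters[i] |= clusters.pop(j)
                merged = True
                break
        if merged:
            break

for cluster in clusters:
    # Inter-cluster logical times are left unsynchronized; within a cluster
    # a conventional HLA-style time advance would be coordinated.
    print("synchronize together:", sorted(cluster))
```
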
25

Sécurité et disponibilité des données stockées dans les nuages / Data availability and security in cloud storage

Relaza, Théodore Jean Richard 12 February 2016 (has links)
With the development of the Internet, information technology has come to rest essentially on communications between servers, user stations, networks and data centers. Two trends emerged in the early 2000s: making applications available as services and virtualizing infrastructure. The convergence of these two trends gave rise to a unifying concept, Cloud Computing. Data storage appears as a central component of the problems associated with moving processes and resources into the cloud. Whether it is a simple externalization of storage for backup purposes, the use of hosted software services, or the virtualization of a company's computing infrastructure at a third-party provider, data security is crucial. This security falls along three axes: data availability, integrity and confidentiality. The context of our work is storage virtualization dedicated to Cloud Computing. This work was carried out within the SVC (Secured Virtual Cloud) project, financed by the French National Fund for the Digital Society under the "Investment for the Future" programme. It led to the development of a storage virtualization middleware, named CloViS (Cloud Virtualized Storage), which is now entering a valorization phase driven by SATT Toulouse-Tech-Transfer. CloViS is a data management middleware developed within the IRIT laboratory. It virtualizes distributed and heterogeneous storage resources and makes them accessible in a uniform and transparent manner. A distinctive feature of CloViS is that it matches user needs with system availability through qualities of service defined on virtual volumes. Our contribution to this field concerns data distribution techniques to improve data availability and the reliability of I/O operations in CloViS. Indeed, faced with the explosion in the volume of data, replication cannot be a lasting solution. Erasure-resilient codes and threshold schemes then appear as valid alternatives for keeping storage volumes under control. However, no data consistency protocol is, to date, adapted to these new data distribution methods. We therefore propose data consistency protocols adapted to these different data distribution techniques. We then analyse these protocols to highlight their respective advantages and disadvantages. Indeed, the choice of a data distribution technique and of the associated data consistency protocol is based on performance criteria, notably read and write availability, the use of system resources (such as the storage space used) and the average number of messages exchanged during read and write operations.
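
A back-of-the-envelope comparison of the storage overheads discussed above, with illustrative parameters only: three-way replication versus a (k, n) erasure-coding or threshold scheme in which any k of n fragments suffice to rebuild the data.

```python
# Sketch: storage overhead of replication versus a (k, n) threshold /
# erasure-coding scheme. Parameter values are illustrative.

data_gb = 100.0

# Three-way replication: any 2 of the 3 copies may be lost.
replicas = 3
replication_storage = data_gb * replicas           # 300 GB stored

# (k, n) scheme: data split into n fragments of size data/k;
# any k fragments suffice to rebuild, so n - k fragments may be lost.
k, n = 4, 6
threshold_storage = data_gb * n / k                # 150 GB stored

print(f"replication       : {replication_storage:.0f} GB, tolerates {replicas - 1} losses")
print(f"(k={k}, n={n}) scheme : {threshold_storage:.0f} GB, tolerates {n - k} losses")
```
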
26

Synchronizace databází MySQL / MySQL Database Synchronization

Dluhoš, Ondřej January 2010 (has links)
This thesis deals with MySQL database synchronization. The goal of this work was to become acquainted with database synchronization in a broader context, to choose tools appropriate for real usage, and then to implement, evaluate and analyze these tools. Of these techniques, MySQL replication was selected as the one that best solves the synchronization task for a distributed database system keeping records of implantable medical devices. The replication was implemented on this database system and, after testing, was put into use at the company Timplant Ltd.
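
For context, a minimal monitoring sketch for the kind of MySQL replication setup described above, using the mysql-connector-python client; the host name and credentials are placeholders, and the classic SHOW SLAVE STATUS statement shown matches MySQL 5.x-era replicas (newer servers use SHOW REPLICA STATUS).

```python
# Sketch: query a MySQL replica for its replication status.
# Connection details are placeholders; requires mysql-connector-python.
import mysql.connector

conn = mysql.connector.connect(
    host="replica.example.com",   # hypothetical replica host
    user="monitor",
    password="secret",
)
cur = conn.cursor(dictionary=True)
cur.execute("SHOW SLAVE STATUS")       # SHOW REPLICA STATUS on MySQL >= 8.0.22
status = cur.fetchone()

if status is None:
    print("this server is not configured as a replica")
else:
    print("IO thread running :", status["Slave_IO_Running"])
    print("SQL thread running:", status["Slave_SQL_Running"])
    print("seconds behind    :", status["Seconds_Behind_Master"])

cur.close()
conn.close()
```
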
27

Integrating Data Distribution Service in an Existing Software Architecture: Evaluation of the performance with different Quality of Service configurations

Domanos, Kyriakos January 2020 (has links)
The Data Distribution Service (DDS) is a flexible, decentralized, peer-to-peer communication middleware. This thesis presents a performance analysis of the use of DDS in the Toyota Smartness platform, which is used in Toyota’s Autonomous Guided Vehicles (AGVs). The purpose is to determine whether DDS is suitable for internal communication between modules that reside within the Smartness platform and for external communication between AGVs connected to the same network. An introduction to the main concepts of DDS and the Toyota Smartness platform architecture is given, together with a presentation of earlier research on DDS. A number of different approaches to integrating DDS into the Smartness platform are explored, and a set of the different configurations that DDS provides is evaluated. The tests performed to evaluate the use of DDS are described in detail, and the collected results are presented, compared and discussed. The advantages and disadvantages of using DDS are listed, and some ideas for future work are proposed.
28

Data Distribution Management In Large-scale Distributed Environments

Gu, Yunfeng 15 February 2012 (has links)
Data Distribution Management (DDM) deals with two basic problems: how to distribute data generated at the application layer among underlying nodes in a distributed system, and how to retrieve that data whenever necessary. This thesis explores DDM in two different network environments: peer-to-peer (P2P) overlay networks and cluster-based network environments. DDM in P2P overlay networks is treated as the broader problem of building and maintaining a P2P overlay architecture rather than a simple data fetching scheme, and is closely related to the more commonly known associative searching or querying. DDM in the cluster-based network environment is one of the important services provided by simulation middleware to support real-time distributed interactive simulations. The only feature shared by DDM in both environments is that both are built to provide a data indexing service. Because of these fundamental differences, we have designed and developed a novel distributed data structure, the Hierarchically Distributed Tree (HD Tree), to support range queries in P2P overlay networks. All the relevant problems of a distributed data structure, including scalability, self-organization, fault tolerance, and load balancing, have been studied. Both theoretical analysis and experimental results show that the HD Tree is able to give a complete view of system states when processing multi-dimensional range queries at different levels of selectivity and in various error-prone routing environments. On the other hand, a novel DDM scheme, the Adaptive Grid-based DDM scheme, is proposed to improve DDM performance in the cluster-based network environment. This new DDM scheme evaluates the input size of a simulation based on probability models. Optimum DDM performance is best approached by adapting the simulation to run in the mode most appropriate to its size.
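
To make the multi-dimensional range queries mentioned above concrete, the sketch below runs a naive range query over a handful of keyed points (illustrative data); a distributed index such as the HD Tree must answer exactly this kind of query, while the linear scan here only serves as a reference for the expected result.

```python
# Sketch: a naive multi-dimensional range query used as a reference for
# what a distributed index such as an HD Tree must answer.
# Keys and coordinates are illustrative.

points = {
    "a": (2, 7),
    "b": (5, 5),
    "c": (9, 1),
    "d": (4, 8),
}

def range_query(points, low, high):
    """Return keys whose coordinates fall inside [low, high] in every dimension."""
    return [key for key, coords in points.items()
            if all(lo <= c <= hi for c, lo, hi in zip(coords, low, high))]

# Query selectivity depends on how much of the space the box covers.
print(range_query(points, low=(1, 4), high=(6, 9)))   # -> ['a', 'b', 'd']
```
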
30

Automatic data distribution for massively parallel processors

García Almiñana, Jordi 16 April 1997 (has links)
Massively Parallel Processor systems provide the computational power required to solve most large-scale High Performance Computing applications. Machines with physically distributed memory provide a cost-effective way to achieve this performance; however, these systems are very difficult to program and tune. In a distributed-memory organization each processor has direct access to its local memory and indirect access to the remote memories of other processors, but accessing a local memory location can be more than an order of magnitude faster than accessing a remote memory location. In these systems, the choice of a good data distribution strategy can dramatically improve performance, although different parts of the data distribution problem have been proved to be NP-complete. The selection of an optimal data placement depends on the program structure, the program's data sizes, the compiler capabilities, and some characteristics of the target machine. In addition, there is often a trade-off between minimizing interprocessor data movement and balancing the load on processors. Automatic data distribution tools can assist the programmer in the selection of a good data layout strategy. These are typically source-to-source tools that annotate the original program with data distribution directives. Crucial aspects such as data movement, parallelism, and load balance have to be taken into consideration in a unified way to solve the data distribution problem efficiently. In this thesis a framework for automatic data distribution is presented, in the context of a parallelizing environment for massively parallel processor (MPP) systems. The applications considered for parallelization are usually regular problems, in which the data structures are dense arrays. The data mapping strategy generated is optimal for a given problem size and target MPP architecture, according to our current cost and compilation model. A single data structure, named the Communication-Parallelism Graph (CPG), which holds symbolic information related to the data movement and parallelism inherent in the whole program, is the core of our approach. This data structure allows the estimation of the data movement and parallelism effects of any data distribution strategy supported by our model. Assuming that some program characteristics have been obtained by profiling and that some specific target machine features have been provided, the symbolic information included in the CPG can be replaced by constant values, expressed in seconds, representing the data movement time overhead and the time saved due to parallelization. The CPG is then used to model a minimal path problem which is solved by a general-purpose linear 0-1 integer programming solver. Linear programming techniques guarantee that the solution provided is optimal, and they are highly efficient at solving this kind of problem. The data mapping capabilities provided by the tool include alignment of the arrays, one- or two-dimensional distribution in BLOCK or CYCLIC fashion, a set of remapping actions to be performed between phases if profitable, plus the associated parallelization strategy. The effects of control flow statements between phases are taken into account in order to improve the accuracy of the model. The novelty of the approach resides in handling all stages of the data distribution problem, which have traditionally been treated in several independent phases, in a single step, and in providing an optimal solution according to our model.
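
To give the flavour of the minimal-path formulation, the sketch below chooses a BLOCK or CYCLIC distribution for each phase of a toy program so that estimated execution cost plus remapping cost is minimized. All cost values are invented, and a dynamic-programming shortest path stands in for the general-purpose 0-1 integer programming solver used in the thesis.

```python
# Sketch: choose BLOCK or CYCLIC per phase minimizing execution + remapping
# cost, as a shortest path over phases. All costs are invented; the thesis
# solves the corresponding problem with a 0-1 integer linear program.

LAYOUTS = ("BLOCK", "CYCLIC")

# exec_cost[phase][layout]: estimated time of the phase under that layout.
exec_cost = [
    {"BLOCK": 10.0, "CYCLIC": 14.0},
    {"BLOCK": 12.0, "CYCLIC": 6.0},
    {"BLOCK": 9.0,  "CYCLIC": 9.5},
]
REMAP_COST = 3.0   # cost of redistributing arrays between phases

# Dynamic programming: best[layout] = cheapest cost of reaching the current
# phase ending in `layout`; path[layout] records the layouts chosen so far.
best = {l: exec_cost[0][l] for l in LAYOUTS}
path = {l: [l] for l in LAYOUTS}
for phase in exec_cost[1:]:
    new_best, new_path = {}, {}
    for layout in LAYOUTS:
        candidates = [(best[prev] + (0.0 if prev == layout else REMAP_COST)
                       + phase[layout], prev) for prev in LAYOUTS]
        cost, prev = min(candidates)
        new_best[layout] = cost
        new_path[layout] = path[prev] + [layout]
    best, path = new_best, new_path

final = min(LAYOUTS, key=lambda l: best[l])
print("chosen layouts per phase:", path[final], "total cost:", best[final])
```
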
