11
Synchronizace databází MySQL / MySQL Database Synchronization. Dluhoš, Ondřej. January 2010.
This thesis deals with MySQL database synchronization. The goal of this work was to become acquainted with database synchronization in a broader context, to choose appropriate tools for real-world usage, and then to implement, evaluate, and analyze these tools. Of these techniques, MySQL replication was selected as the one that best solves the synchronization task for a distributed records database system for implantable medical devices. Replication was implemented on this database system and, after testing, put into use at the company Timplant Ltd.
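A minimal sketch of the kind of master/replica setup the thesis applies, assuming MySQL 5.x-era statement syntax; the host names, credentials, and binlog coordinates below are placeholders, not values from the thesis.

```python
# Hedged sketch: point a MySQL replica at its master, assuming the master
# already has binary logging enabled and a dedicated replication account.
# All connection details and binlog coordinates are illustrative placeholders.
import mysql.connector  # pip install mysql-connector-python

replica = mysql.connector.connect(host="replica.example.com",
                                  user="admin", password="secret")
cur = replica.cursor()
cur.execute("""
    CHANGE MASTER TO
        MASTER_HOST = 'master.example.com',
        MASTER_USER = 'repl',
        MASTER_PASSWORD = 'repl-password',
        MASTER_LOG_FILE = 'mysql-bin.000001',
        MASTER_LOG_POS = 4
""")
cur.execute("START SLAVE")  # the replica begins applying the master's binlog
```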
12
Integrated Mobility and Service Management for Future All-IP Based Wireless Networks. He, Weiping. 24 April 2009.
Mobility management addresses the issues of how to track and locate a mobile node (MN) efficiently. Service management addresses the issues of how to efficiently deliver services to MNs. This dissertation aims to design and analyze integrated mobility and service management schemes for future all-IP based wireless systems. We propose and analyze per-user regional registration schemes, extending Mobile IP Regional Registration and Hierarchical Mobile IPv6, for integrated mobility and service management, with the goal of minimizing the network signaling and packet delivery cost in future all-IP based wireless networks.
If access routers in future all-IP based wireless networks are restricted to perform network layer functions only, we investigate the design of intelligent routers, called dynamic mobility anchor points (DMAPs), to implement per-user regional management in IP wireless networks. These DMAPs are access routers (ARs) chosen by individual MNs to act as regional routers to reduce the signaling overhead for intra-regional movements. The DMAP domain size is based on a MN's mobility and service characteristics. A MN optimally determines when and where to launch a DMAP to minimize the network cost in serving the user's mobility and service management operations. We show that there exists an optimal DMAP domain size for each individual MN. We also demonstrate that the DMAP design can easily support failure recovery because of the flexibility of allowing a MN to choose any AR to be the DMAP for mobility and service management.
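The trade-off behind the optimal domain size can be illustrated with a toy cost model (the dissertation's actual formulation is more detailed): enlarging the domain amortizes expensive home registrations over more moves but lengthens the packet detour through the DMAP.

```python
# Illustrative cost model for choosing a DMAP domain size K; the constants
# and cost shapes are assumptions for the example, not the dissertation's.
def total_cost(K, mobility_rate, packet_rate,
               c_home=10.0, c_local=1.0, c_forward=0.2):
    signaling = mobility_rate * (c_home / K + c_local)  # home updates amortized over K subnets
    delivery = packet_rate * c_forward * K              # detour via the DMAP grows with K
    return signaling + delivery

def best_domain_size(mobility_rate, packet_rate, k_max=64):
    return min(range(1, k_max + 1),
               key=lambda K: total_cost(K, mobility_rate, packet_rate))

# High service-to-mobility ratio -> small domain; the reverse -> larger domain.
print(best_domain_size(mobility_rate=2.0, packet_rate=50.0))  # -> 1
print(best_domain_size(mobility_rate=20.0, packet_rate=5.0))  # -> 14
```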
If access routers in future all-IP based networks are powerful and flexible enough to perform network-layer and application-layer functions, we propose the use of per-user proxies that can run on access routers. The user proxies can carry service context information such as cached data items and Web processing objects, and perform context-aware functions such as content adaptation for services engaged by the MN to support application execution. We investigate a proxy-based integrated mobility and service management architecture (IMSA) under which a client-side proxy is created on a per-user basis to serve as a gateway between a MN and all services engaged by the MN. Leveraging Mobile IP with route optimization, the proxy runs on an access router and cooperates with the home agent and foreign agent of the MN to maintain the location information of the MN and facilitate data delivery by the services it engages. Further, the proxy optimally determines when to move with the MN so as to minimize the network cost associated with the user's mobility and service management operations.
Finally, we investigate a proxy-based integrated cache consistency and mobility management scheme called PICMM to support client-server, query-based mobile applications. To improve query performance, the MN stores frequently used data in its cache. The MN's proxy receives invalidation reports or updated data objects from application servers, i.e., correspondent nodes (CNs), for the cached data objects stored at the MN. If the MN is connected, the proxy forwards invalidation reports or fresh data objects to the MN. If the MN is disconnected, the proxy stores them and, once the MN reconnects, forwards the latest cache invalidation report or data objects to the MN. We show that there is an optimal "service area" under which the overall cost due to query processing, cache consistency management and mobility management is minimized. To further reduce network traffic, we develop a threshold-based hybrid cache consistency management policy: whenever a data object is updated at the server, the server sends an invalidation report through the proxy to invalidate the MN's cached copy only if the size of the data object exceeds a given threshold; otherwise, the server sends a fresh copy of the data object through the proxy to the MN. We identify the best threshold value that minimizes the overall network cost.
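The threshold policy described above is simple enough to sketch; the class and method names here are illustrative, not the dissertation's interfaces.

```python
# Sketch of PICMM's threshold-based hybrid policy: relay an invalidation
# report for large objects, push the fresh copy for small ones, and buffer
# messages while the MN is disconnected.
class ProxyCache:
    def __init__(self, threshold_bytes):
        self.threshold = threshold_bytes
        self.pending = []        # messages held while the MN is disconnected
        self.mn_connected = True

    def on_server_update(self, object_id, data):
        if len(data) > self.threshold:
            message = ("INVALIDATE", object_id)     # cheap report; MN refetches on demand
        else:
            message = ("REFRESH", object_id, data)  # small object: push the fresh copy
        if self.mn_connected:
            self.forward_to_mn(message)
        else:
            self.pending.append(message)            # replayed on reconnect

    def on_mn_reconnect(self):
        self.mn_connected = True
        for message in self.pending:
            self.forward_to_mn(message)
        self.pending.clear()

    def forward_to_mn(self, message):
        print("to MN:", message)  # stand-in for the actual delivery path
```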
We develop mathematical models to analyze performance characteristics of DMAP, IMSA and PICMM developed in the dissertation research and demonstrate that they outperform existing schemes that do not consider integrated mobility and service management or that use static regional routers to serve all MNs in the system. The analytical results obtained are validated through extensive simulation. We conclude that integrated mobility and service management can greatly reduce the overall network cost for mobile multimedia and database applications, especially when the application's data service rate is high compared with the MN's mobility rate. / Ph. D.
13
Partial persistent sequences and their applications to collaborative text document editing and processing. Wu, Qinyi. 08 July 2011.
In a variety of text document editing and processing applications, it is necessary to keep track of the revision history of text documents by recording changes and the metadata of those changes (e.g., user names and modification timestamps). Recent Web 2.0 document editing and processing applications, such as real-time collaborative note-taking and wikis, require fine-grained shared access to collaborative text documents as well as efficient retrieval of metadata associated with different parts of those documents. Current revision control techniques support only coarse-grained shared access and are inefficient at retrieving metadata of changes at sub-document granularity.
In this dissertation, we design and implement partial persistent sequences (PPSs) to support real-time collaborations and manage metadata of changes at fine granularities for collaborative text document editing and processing applications. As a persistent data structure, PPSs have two important features. First, items in the data structure are never removed. We maintain necessary timestamp information to keep track of both inserted and deleted items and use the timestamp information to reconstruct the state of a document at any point in time. Second, PPSs create unique, persistent, and ordered identifiers for items of a document at fine granularities (e.g., a word or a sentence). As a result, we are able to support consistent and fine-grained shared access to collaborative text documents by detecting and resolving editing conflicts based on the revision history as well as to efficiently index and retrieve metadata associated with different parts of collaborative text documents.
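One plausible way to realize such identifiers is to place each new item at a fresh rational position between its neighbors, as in this toy sketch (the thesis' actual identifier scheme may differ):

```python
# Toy partial persistent sequence: items are never removed, deletes only
# stamp an end time, and any past state is reconstructible from timestamps.
from fractions import Fraction

class PPS:
    def __init__(self):
        self.items = []  # (position_id, value, t_inserted, t_deleted or None)

    def insert(self, index, value, t):
        left = self.items[index - 1][0] if index > 0 else Fraction(0)
        right = self.items[index][0] if index < len(self.items) else left + 2
        pos = (left + right) / 2  # persistent identifier ordered between neighbors
        self.items.insert(index, (pos, value, t, None))
        return pos

    def delete(self, index, t):
        pos, value, t_ins, _ = self.items[index]
        self.items[index] = (pos, value, t_ins, t)  # tombstone, never physically removed

    def state_at(self, t):
        """Reconstruct the visible document at time t."""
        return [v for _, v, t_ins, t_del in self.items
                if t_ins <= t and (t_del is None or t_del > t)]

doc = PPS()
doc.insert(0, "Hello", t=1)
doc.insert(1, "world", t=2)
doc.delete(0, t=3)
print(doc.state_at(2))  # ['Hello', 'world']
print(doc.state_at(3))  # ['world']
```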
We demonstrate the capabilities of PPSs through two important problems in collaborative text document editing and processing applications: data consistency control and fine-grained document provenance management. The first problem studies how to detect and resolve editing conflicts in collaborative text document editing systems. We approach this problem in two steps. In the first step, we use PPSs to capture data dependencies between different editing operations and define a consistency model more suitable for real-time collaborative editing systems. In the second step, we extend our work to the entire spectrum of collaborations and adapt transactional techniques to build a flexible framework for the development of various collaborative editing systems. The generality of this framework is demonstrated by its capability to specify three different types of collaborations, as exemplified by RCS, MediaWiki, and Google Docs respectively. We precisely specify the programming interfaces of this framework and describe a prototype implementation over Oracle Berkeley DB High Availability, a replicated database management engine. The second problem, fine-grained document provenance management, studies how to efficiently index and retrieve fine-grained metadata for different parts of collaborative text documents. We use PPSs to design both disk-economic and computation-efficient techniques to index provenance data for millions of Wikipedia articles. Our approach is disk-economic because we save only a few full versions of a document and keep only the delta changes between those full versions. It is also computation-efficient because we avoid having to parse the revision history of collaborative documents to retrieve fine-grained metadata. Compared to MediaWiki, the revision control system for Wikipedia, our system uses less than 10% of the disk space and achieves at least an order-of-magnitude speed-up in retrieving fine-grained metadata for documents with thousands of revisions.
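The snapshot-plus-delta storage idea can be sketched as follows, with difflib standing in for whatever delta encoding the actual system uses:

```python
# Store only the differing segments between consecutive revisions and
# reconstruct a revision by replaying the delta against its predecessor.
import difflib

def make_delta(old, new):
    sm = difflib.SequenceMatcher(None, old, new)
    return [(i1, i2, new[j1:j2])  # replace old[i1:i2] with the stored segment
            for tag, i1, i2, j1, j2 in sm.get_opcodes() if tag != "equal"]

def apply_delta(old, delta):
    out, cursor = [], 0
    for i1, i2, segment in delta:
        out.append(old[cursor:i1])  # unchanged prefix
        out.append(segment)         # inserted or replacement text
        cursor = i2                 # skip the replaced/deleted span of `old`
    out.append(old[cursor:])
    return "".join(out)

v1 = "Partial persistent sequences track revisions."
v2 = "Partial persistent sequences track document revisions."
delta = make_delta(v1, v2)
assert apply_delta(v1, delta) == v2  # v2 rebuilt from v1 plus a small delta
```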
14
Sûreté de fonctionnement dans le nuage de stockage / Dependability in cloud storage. Obame Meye, Pierre. 01 December 2016.
The quantity of data in the world is steadily increasing, which challenges storage service providers to find ways to handle data scalably, dependably, and cost-effectively. We have been interested in cloud storage, a growing trend in data storage solutions. For instance, the International Data Corporation (IDC) predicts that by 2020, nearly 40% of the world's data will be stored or processed in a cloud. This thesis addresses challenges around data access latency and dependability in cloud storage. We propose Mistore, a distributed storage system designed to ensure data availability, durability, and low access latency by leveraging the Digital Subscriber Line (xDSL) infrastructure of an Internet Service Provider (ISP). Mistore uses the available storage resources of a large number of home gateways, Points of Presence (POPs), and data centers for content storage and caching. Mistore also targets data consistency by providing multiple consistency criteria on content and a versioning system. We also consider data security and confidentiality in the context of storage systems that apply data deduplication, one of the most promising techniques for reducing storage and network bandwidth costs, and we design a two-phase data deduplication scheme that is secure against malicious clients while remaining efficient in terms of network bandwidth and storage space savings.
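The two-phase idea can be illustrated with a toy exchange (the concrete Mistore protocol is not reproduced here): phase one checks for a duplicate by fingerprint, and phase two makes the client prove it actually holds the bytes before the server links the existing copy to it, which defeats a malicious client that only learned the hash.

```python
# Hedged sketch of two-phase, client-side deduplication with a simple
# proof-of-ownership challenge; names and message formats are invented.
import hashlib
import secrets

class DedupServer:
    def __init__(self):
        self.store = {}  # fingerprint -> content

    def phase1_is_duplicate(self, fingerprint):
        return fingerprint in self.store  # duplicate: client can skip the upload

    def phase2_challenge(self):
        self.nonce = secrets.token_bytes(16)
        return self.nonce  # client must hash content + nonce

    def phase2_verify(self, fingerprint, proof):
        expected = hashlib.sha256(self.store[fingerprint] + self.nonce).digest()
        return secrets.compare_digest(proof, expected)

server = DedupServer()
data = b"backup block"
fp = hashlib.sha256(data).hexdigest()
server.store[fp] = data  # an earlier client already uploaded this block

if server.phase1_is_duplicate(fp):
    nonce = server.phase2_challenge()
    proof = hashlib.sha256(data + nonce).digest()  # requires the real bytes
    assert server.phase2_verify(fp, proof)  # deduplicated without re-sending data
```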
15
Coalla : Un modèle pour l'édition collaborative d'un contenu géographique et la gestion de sa cohérence / Coalla: a model for collaborative editing and consistency management of geographic content. Brando Escobar, Carmen. 05 April 2013.
The production and maintenance of geographic content is often done by pooling diverse contributions. Updating IGN geographic data, for instance, relies on integrating partner data and taking field-change alerts into account. This is also the case for free content produced by community projects such as OpenStreetMap. A key problem is quality management of collaboratively produced geographic content, in particular consistency management, so that decisions can be based on this content. Consistency is tied to how homogeneous the representation of space is, and to the preservation of important implicit information that can be recovered from the geometries of the described entities. This thesis proposes a model, named Coalla, for collaborative editing of geographic content with consistency management. The model makes three contributions: 1) the identification and definition of the elements a formal vocabulary should include to facilitate the construction of collaborative geographic content; 2) a process that assists users in building a formal vocabulary on the fly from the formal specifications of IGN databases and from existing collaborative vocabularies; and 3) a strategy for evaluating and reconciling contributions so that they are integrated coherently into the central content. The Coalla model has been implemented in a prototype.
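As a toy illustration of what a reconciliation check might look like (the actual Coalla strategy is vocabulary-driven and far richer; the geometries and the rule below are invented for the example):

```python
# Reject an edited building footprint that is geometrically invalid or that
# violates an implicit constraint recoverable from geometry: buildings
# should not sit on the road network. Requires the shapely package.
from shapely.geometry import LineString, Polygon

def contribution_is_consistent(new_building, road_axes, road_halfwidth=4.0):
    if not new_building.is_valid:
        return False
    for axis in road_axes:
        if new_building.intersects(axis.buffer(road_halfwidth)):
            return False  # the contribution conflicts with a road
    return True

building = Polygon([(0, 0), (10, 0), (10, 8), (0, 8)])
road = LineString([(20, -5), (20, 15)])
print(contribution_is_consistent(building, [road]))  # True: no conflict
```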
16
Modélisation et construction des bases de données géographiques floues et maintien de la cohérence de modèles pour les SGBD SQL et NoSQL / Modeling and construction of fuzzy geographic databases with supporting models consistency for SQL and NoSQL database systems. Soumri Khalfi, Besma. 12 June 2017.
Today, research on the storage and integration of spatial data is an important strand that is revitalizing research on data quality. Taking the imperfection of geographic data into account, particularly imprecision, adds real complexity. Alongside rising data-centered quality requirements (accuracy, completeness, currency), the need for intelligible, logically consistent information keeps growing. From this perspective, we are interested in Imprecise Geographic Databases (IGDBs) and their logical consistency. This work proposes solutions for modeling and building consistent IGDBs on SQL and NoSQL database systems. Existing conceptual design methods for imprecise geographic data do not satisfactorily meet the modeling needs of the real world. We present an extended version of the F-Perceptory approach for IGDB design. To generate a coherent definition of imprecise geographic objects and build the IGDB in a relational system, we present a set of rules for automatic model transformation; based on these rules, we develop a process that generates the physical model from the fuzzy conceptual model. We implement these solutions in a prototype called FPMDSG. For document-oriented NoSQL systems, we present a logical model called Fuzzy GeoJSON to better express the structure of imprecise geographic data. In addition, such systems lack support for data consistency, so we present a validation methodology for consistent storage. The proposed solutions are implemented as a schema-driven validation pipeline based on a Fuzzy GeoJSON schema and semantic constraints.
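Since the thesis' actual schema is not reproduced here, the sketch below only illustrates the general idea with hypothetical field names: an imprecise region bounded by a certain core and a broader possible extent, with a validation step checking that the document keeps the core inside the extent.

```python
# Hypothetical "Fuzzy GeoJSON"-style feature plus a minimal consistency
# check; the field names (geometry_min/geometry_max) are assumptions for
# the example. Requires the shapely package.
from shapely.geometry import shape

fuzzy_feature = {
    "type": "FuzzyFeature",
    "properties": {"name": "flood-prone area"},
    "geometry_min": {  # zone that certainly belongs to the object
        "type": "Polygon",
        "coordinates": [[[2, 2], [8, 2], [8, 8], [2, 8], [2, 2]]],
    },
    "geometry_max": {  # zone that possibly belongs to the object
        "type": "Polygon",
        "coordinates": [[[0, 0], [10, 0], [10, 10], [0, 10], [0, 0]]],
    },
}

def validate(feature):
    core = shape(feature["geometry_min"])
    support = shape(feature["geometry_max"])
    return core.is_valid and support.is_valid and support.contains(core)

print(validate(fuzzy_feature))  # True: internally consistent document
```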