61

On a Self-Organizing MANET Event Routing Architecture with Causal Dependency Awareness

Pei, Guanhong 07 January 2010
Publish/subscribe (P/S) is a communication paradigm of growing popularity for information dissemination in large-scale distributed systems. The weak coupling between information producers and consumers in P/S systems is attractive for loosely coupled and dynamic network infrastructures such as ad hoc networks. However, achieving end-to-end timeliness and reliability properties when P/S events are causally dependent is an open problem in ad hoc networks. In this thesis, we present an architecture design that can effectively support timely and reliable delivery of events, including causally related events, in ad hoc environments, and especially in mobile ad hoc networks (MANETs); we evaluate its benefits and compare it with past work. With observations from both a realistic application model and simulation experiments, we reveal causal dependencies among events and their significance in a typical notional system. We also examine and propose engineering methodologies to further tailor an event-based system to facilitate its self-reorganizing and self-reconfiguration capabilities. Our design features a two-layer structure, including novel distributed algorithms and mechanisms for P/S tree construction and maintenance. Trace-based simulation studies illustrate our design's effectiveness both with and without causal dependencies. / Master of Science
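
The causal-dependency handling sketched in the abstract can be illustrated with a small example. The following Python sketch is not the thesis's algorithm; it assumes a single in-process broker and uses vector clocks (one plausible mechanism) to hold back an event until every event it causally depends on has been delivered.

# Illustrative sketch (not the thesis's algorithm): causal delivery of
# pub/sub events in a single in-process broker using vector clocks.
from collections import defaultdict

class CausalBroker:
    def __init__(self, node_ids):
        self.clock = {n: 0 for n in node_ids}      # broker's view of each publisher
        self.pending = []                          # events waiting on missing causes
        self.subscribers = defaultdict(list)       # topic -> callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload, sender, vclock):
        # vclock is the publisher's vector clock at send time.
        self.pending.append((topic, payload, sender, dict(vclock)))
        self._deliver_ready()

    def _deliverable(self, sender, vclock):
        # Deliverable when it is the next event from `sender` and every
        # other entry is already covered by what has been delivered.
        if vclock[sender] != self.clock[sender] + 1:
            return False
        return all(vclock[n] <= self.clock[n] for n in vclock if n != sender)

    def _deliver_ready(self):
        progress = True
        while progress:
            progress = False
            for ev in list(self.pending):
                topic, payload, sender, vclock = ev
                if self._deliverable(sender, vclock):
                    self.pending.remove(ev)
                    self.clock[sender] += 1
                    for cb in self.subscribers[topic]:
                        cb(payload)
                    progress = True

# Usage: event B from node "n2" depends on event A from node "n1".
broker = CausalBroker(["n1", "n2"])
broker.subscribe("alerts", print)
broker.publish("alerts", "B (depends on A)", "n2", {"n1": 1, "n2": 1})  # held back
broker.publish("alerts", "A", "n1", {"n1": 1, "n2": 0})                 # releases both

In the usage lines, event B causally depends on event A, so the broker buffers B until A arrives and then delivers both in causal order.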
62

Scalable view-based techniques for web data: algorithms and systems

Katsifodimos, Asterios 03 July 2013
XML was recommended by the W3C in 1998 as a markup language for device- and system-independent representation of information. XML is nowadays used as a data model for storing and querying large volumes of data in database systems. In spite of significant research and systems development, many performance problems are raised by processing very large amounts of XML data. Materialized views have long been used in databases to speed up queries. Materialized views can be seen as precomputed query results that can be re-used to evaluate (part of) another query, and they have been a topic of intensive research, in particular in the context of relational data warehousing. This thesis investigates the applicability of materialized view techniques to optimize the performance of Web data management tools, in particular in distributed settings, considering XML data and queries. We make three contributions. We first consider the problem of choosing the best views to materialize within a given space budget in order to improve the performance of a query workload. Our work is the first to address the view selection problem for a rich subset of XQuery. The challenges we face stem from the expressive power and features of both the query and view languages and from the size of the search space of candidate views to materialize. While the general problem has prohibitive complexity, we propose and study a heuristic algorithm and demonstrate its superior performance compared to the state of the art. Second, we consider the management of large XML corpora in peer-to-peer networks based on distributed hash tables (DHTs, in short). We consider a platform leveraging distributed materialized XML views, defined by arbitrary XML queries, filled in with data published anywhere in the network, and exploited to efficiently answer queries issued by any network peer. This thesis has contributed important scalability-oriented optimizations, as well as a comprehensive set of experiments deployed in a country-wide WAN. These experiments exceed similar competitor systems by orders of magnitude in terms of data volumes and data dissemination throughput, and they are thus the most advanced study to date of the performance behavior of DHT-based XML content management in real settings. Finally, we present a novel approach for scalable content-based publish/subscribe (pub/sub, in short) in the presence of constraints on the available computational resources of data publishers. We achieve scalability by off-loading subscriptions from the publisher and leveraging view-based query rewriting to feed these subscriptions from the data accumulated in others. Our main contribution is a novel algorithm for organizing subscriptions in a multi-level dissemination network in order to serve large numbers of subscriptions, respect capacity constraints, and minimize latency. The efficiency and effectiveness of our algorithm are confirmed through extensive experiments and a large deployment in a WAN.
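
To make the view-selection problem concrete, here is a deliberately simplified Python sketch. It is not the heuristic proposed in the thesis: it ignores interactions between views and rewriting, and simply picks candidate views greedily by estimated benefit per unit of storage until a space budget is exhausted. The candidate names and numbers are made up for illustration.

# Illustrative sketch only (not the thesis's heuristic): a greedy
# view-selection pass. Each candidate view has an estimated size and an
# estimated benefit (query time saved, summed over the workload); views
# are picked by benefit density until the space budget is exhausted.
from dataclasses import dataclass

@dataclass
class CandidateView:
    name: str
    size_mb: float      # estimated materialization size
    benefit: float      # estimated workload cost reduction

def select_views(candidates, budget_mb):
    chosen, used = [], 0.0
    # Benefit per unit of storage, best first.
    for v in sorted(candidates, key=lambda v: v.benefit / v.size_mb, reverse=True):
        if used + v.size_mb <= budget_mb:
            chosen.append(v)
            used += v.size_mb
    return chosen

candidates = [
    CandidateView("//item[@id]/name", 120.0, 900.0),
    CandidateView("//person//address", 300.0, 1500.0),
    CandidateView("//closed_auction/price", 40.0, 500.0),
]
print([v.name for v in select_views(candidates, budget_mb=200.0)])

With a 200 MB budget the greedy pass selects the small, high-density views and skips the large one, which is the basic trade-off any view-selection algorithm has to navigate.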
63

A Social Semantic Web System for Coordinating Communication in the Architecture, Engineering & Construction Industry

Zhang, Jinyue 08 March 2011
The AEC industry has long been in need of effective modes of information exchange and knowledge sharing, but practice in the industry is still far from satisfactory. In order to maintain their competence in a highly competitive environment and a globalized market, many organizations in the AEC industry have aimed to move towards becoming learning organizations. Knowledge management has been seen as an effective way to have every member of an organization engaged in learning at all levels. At the very centre of knowledge management and learning is knowledge sharing through effective communication. Unfortunately, there is a big gap in the AEC industry between existing practice and the ideal in this area. In order to effectively coordinate information and knowledge flow in the AEC industry, the present research has developed a framework for an information system – a Construction Information and Knowledge Protocol/Portal (CIKP) – which integrates a publish/subscribe system, Semantic Web technology, and Social Web concepts. Publish/subscribe is an appropriate many-to-many, people-to-people communication paradigm for handling a highly fragmented industry such as construction. In order to enrich the expressiveness of publications and subscriptions, Semantic Web technology has been incorporated into the system through the development of ontologies as a formal and interoperable form of knowledge representation. This research first involved the development of a domain-level ontology (AR-Onto) to encapsulate knowledge about actors, roles, and their attributes in the AEC industry. AR-Onto was then extended and tailored to create an application-level ontology (CIKP-Onto), which is used to support the semantics in the CIKP framework. Social Web concepts have been introduced to enrich the description of publications and subscriptions; the aim is to break down linear communication through social involvement and encourage a culture of sharing. In the end, the CIKP framework specifies the desired services for communicating information and knowledge, applicable technical approaches, and, more importantly, the functions required to satisfy the needs of a variety of service scenarios.
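
The role played by the ontologies can be pictured with a tiny matching example. The Python sketch below uses hypothetical class names rather than the actual AR-Onto/CIKP-Onto vocabularies: a subscription to a role matches any publication tagged with that role or with one of its subclasses.

# Illustrative sketch (hypothetical class names, not AR-Onto/CIKP-Onto):
# semantic matching where a subscription to a role also matches
# publications tagged with any subclass of that role.
SUBCLASS_OF = {
    "StructuralEngineer": "Engineer",
    "Engineer": "Actor",
    "Architect": "Actor",
}

def is_a(cls, ancestor):
    # Walk up the subclass hierarchy until the ancestor is found or the top is reached.
    while cls is not None:
        if cls == ancestor:
            return True
        cls = SUBCLASS_OF.get(cls)
    return False

def matches(subscription_role, publication_role):
    return is_a(publication_role, subscription_role)

# A subscriber interested in Engineers also receives items tagged
# StructuralEngineer, but not Architect.
print(matches("Engineer", "StructuralEngineer"))  # True
print(matches("Engineer", "Architect"))           # False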
65

A Content-Oriented Architecture for Publish/Subscribe Systems

Chen, Jiachen 16 March 2015
No description available.
66

Hardware Architecture of an XML/XPath Broker/Router for Content-Based Publish/Subscribe Data Dissemination Systems

El-Hassan, Fadi 25 February 2014
The dissemination of various types of data faces ongoing challenges with the growing need of accessing manifold information. Since interest in content is what drives data networks, some new technologies and ideas attempt to cope with these challenges by developing content-based rather than address-based architectures. The publish/subscribe paradigm is a promising approach toward content-based data dissemination, especially since it provides total decoupling between publishers and subscribers. However, in content-based publish/subscribe systems, subscriptions are expressive and information is delivered based on matching that expressive content, which raises considerable performance challenges. This dissertation explores a hardware solution for disseminating data in content-based publish/subscribe systems. This solution consists of an efficient hardware architecture of an XML/XPath broker that can route information based on content to other XML/XPath brokers or to end users. A network of such brokers represents an overlay structure for XML content-based publish/subscribe data dissemination systems. Each broker can simultaneously process many XPath subscriptions, efficiently parse XML publications, and subsequently forward notifications that result from high-performance matching processes. At the core of the broker architecture lies an XML parser that utilizes a novel Skeleton CAM-Based XML Parsing (SCBXP) technique, in addition to an XPath processor and a high-performance matching engine. Moreover, the broker employs effective mechanisms for content-based routing, so that subscriptions, publications, and notifications are routed through the network based on content. The inherent reconfigurability of the broker's hardware allows the system architecture to reside in any FPGA device of moderate logic density. Furthermore, such a system-on-chip architecture is upgradable if any future hardware add-ons are needed. Nevertheless, the current architecture is mature and can effectively be implemented on an ASIC device. Finally, this thesis presents and analyzes the experiments conducted on an FPGA prototype implementation of the proposed broker/router. The experiments cover the SCBXP alone as well as two development phases of the whole broker. The results indicate the high performance that the parsing, storing, matching, and routing processes involved can achieve.
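
Although the broker described here is implemented in hardware, the content-based matching it performs can be illustrated with a small software analogue. The Python sketch below only shows matching XPath subscriptions against an XML publication using ElementTree's limited XPath subset; it says nothing about SCBXP, the hardware matching engine, or the FPGA implementation, and the subscriptions and document are invented for the example.

# Illustrative software analogue only (the thesis describes a hardware
# broker): content-based matching of XPath subscriptions against an
# XML publication. ElementTree supports a limited XPath subset, which
# is sufficient for the example paths below.
import xml.etree.ElementTree as ET

subscriptions = {
    "sub-1": ".//quote[symbol='ACME']",
    "sub-2": ".//quote[symbol='XYZ']",
}

publication = ET.fromstring(
    "<feed><quote><symbol>ACME</symbol><price>12.3</price></quote></feed>"
)

def match(doc, subs):
    # Return the ids of subscriptions whose XPath selects at least one node.
    return [sid for sid, xpath in subs.items() if doc.findall(xpath)]

print(match(publication, subscriptions))  # ['sub-1']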
67

Spasiba: a context-aware adaptive mobile advisor

Rudkovskiy, Alexey 01 April 2010
This thesis presents the design and analysis of Spasiba, a context-aware mobile advisor. We argue that current context-aware mobile applications exhibit significant flaws with respect to (1) limited use of context information, (2) incomplete or irrelevant content generation, and (3) low usability. The proposed model attempts to tackle these limitations by advancing the usage and manipulation of context information, automating the back-end systems in terms of self-management and seamless extensibility, and shifting the logic away from the client side. A distinguishing characteristic of Spasiba is its proactive approach to notifying the user of information of interest: the user subscribes to the service and receives content updates as the context changes. The proposed model is realised in a proof-of-concept prototype that uses a Nokia Web Runtime widget as the client application. The widget, which sports an elegant, touch-optimised interface, collects multiple context parameters to deliver high-quality results. The server-side architecture employs the publish/subscribe paradigm for managing active users and Comet for proactively notifying clients of updated information of interest. IRS-III, a Semantic Web Services broker, handles the process of content generation. The prototype employs nine data sources, seven of which are open-API web services and two of which are regular web pages, to deliver diverse and complete results. A simple autonomic element, implemented with the help of aspect-oriented programming, ensures partial self-management of the back-end systems. Spasiba is evaluated by means of a case study involving a tourist couple visiting Victoria; the application assists the couple with finding attractions, relevant stores, and places serving food.
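
The proactive notification pattern described above can be reduced to a few lines. The Python sketch below is not Spasiba's actual stack (which uses Comet, a Nokia Web Runtime widget, and IRS-III); it only shows the idea that a published context change triggers subscribed advisor logic without an explicit user request. The topic key, handler, and coordinates are hypothetical.

# Illustrative sketch only (not Spasiba's implementation): a context
# change is published, and subscribed advisor logic pushes fresh
# content proactively.
from collections import defaultdict

class ContextBus:
    def __init__(self):
        self.handlers = defaultdict(list)   # context key -> callbacks

    def subscribe(self, key, handler):
        self.handlers[key].append(handler)

    def publish(self, key, value):
        for handler in self.handlers[key]:
            handler(value)

def recommend_nearby(location):
    # Placeholder for content generation from several data sources.
    print(f"Pushing attractions near {location} to the client")

bus = ContextBus()
bus.subscribe("user-42/location", recommend_nearby)

# The client reports a context change; subscribers react proactively.
bus.publish("user-42/location", (48.4284, -123.3656))  # Victoria, BC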
69

Architectural Evolution of Intelligent Transport Systems (ITS) using Cloud Computing

Nasim, Robayet January 2015
With the advent of Smart Cities, the Intelligent Transport System (ITS) has become an efficient way of offering an accessible, safe, and sustainable transportation system. Utilizing advances in Information and Communication Technology (ICT), ITS can maximize the capacity of existing transportation systems without building new infrastructure. However, in spite of these technical feasibilities and significant performance-cost ratios, the deployment of ITS is limited in the real world because of several challenges associated with its architectural design. This thesis studies how to design a highly flexible and deployable architecture for ITS that can utilize recent technologies such as cloud computing and the publish/subscribe communication model. In particular, our aim is to offer an ITS infrastructure which provides the opportunity for transport authorities to allocate on-demand computing resources through virtualization technology, and which supports a wide range of ITS applications. We propose to use an Infrastructure as a Service (IaaS) model to host large-scale ITS applications for transport authorities in the cloud, which reduces infrastructure cost, improves management flexibility, and also ensures better resource utilization. Moreover, we use a publish/subscribe system as a building block for developing a low-latency ITS application, a promising technology for designing scalable and distributed applications within the ITS domain. Although cloud-based architectures provide the flexibility of adding, removing, or moving ITS services within the underlying physical infrastructure, it may be difficult to provide the required quality of service (QoS), which decreases application productivity and customer satisfaction, leading to revenue losses. Therefore, we investigate the impact of service mobility on related QoS in the cloud-based infrastructure. We investigate different strategies to improve the performance of a low-latency ITS application during service mobility, such as utilizing multiple paths to spread network traffic or deploying recent queue management schemes. Evaluation results from a private cloud testbed using OpenStack show that our proposed architecture is suitable for hosting ITS applications that have stringent performance requirements in terms of scalability, QoS, and latency. / Back-cover text: Intelligent Transport Systems (ITS) can utilize advances in Information and Communication Technology (ICT) and maximize the capacity of existing transportation systems without building new infrastructure. However, in spite of these technical feasibilities and significant performance-cost ratios, the deployment of ITS is limited in the real world because of several challenges associated with its architectural design. This thesis studies how to design an efficient deployable architecture for ITS that can utilize the advantages of cloud computing and the publish/subscribe communication model. In particular, our aim is to offer an ITS infrastructure which provides the opportunity for transport authorities to allocate on-demand computing resources through virtualization technology, and which supports a wide range of ITS applications. We propose to use an Infrastructure as a Service (IaaS) model to host large-scale ITS applications, and to use a publish/subscribe system as a building block for developing a low-latency ITS application. We investigate different strategies to improve the performance of an ITS application during service mobility, such as utilizing multiple paths to spread network traffic or deploying recent queue management schemes. / Article 4, "Network Centric Performance Improvement for Live VM Migration", is included in the thesis as a manuscript; it has now been published as a conference paper.
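
The publish/subscribe building block mentioned for the low-latency ITS application can be illustrated with topic-based matching. The Python sketch below is not the thesis's OpenStack testbed; it only shows MQTT-style topic wildcards applied to hypothetical ITS topic names.

# Illustrative sketch only: topic-based pub/sub with MQTT-style
# wildcards ('+' matches one level, '#' matches the rest), the kind
# of building block a low-latency ITS application can rely on.
def topic_matches(pattern, topic):
    p, t = pattern.split("/"), topic.split("/")
    for i, part in enumerate(p):
        if part == "#":
            return True
        if i >= len(t) or (part != "+" and part != t[i]):
            return False
    return len(p) == len(t)

subscriptions = [
    ("its/region-3/+/position", lambda m: print("traffic mgmt:", m)),
    ("its/#",                   lambda m: print("archiver:", m)),
]

def publish(topic, message):
    for pattern, handler in subscriptions:
        if topic_matches(pattern, topic):
            handler(message)

publish("its/region-3/bus-1042/position", {"lat": 59.38, "lon": 13.50})
publish("its/region-7/roadside-12/status", "ok")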
70

Evaluating a publish/subscribe proxy for HTTP

Zhang, Yuanhui January 2013
With the increasingly high speed of the Internet and its widespread usage, the current Internet architecture exhibits some problems. The publish/subscribe paradigm has been developed to support one of the most common patterns of communication. It makes "information" the center of communication and removes the "location-identity split" (i.e., the assumption that objects reside at specific locations with which you must communicate in order to access them). In this thesis project a publish/subscribe network is built and then used in the design, implementation, and evaluation of a publish/subscribe proxy for today's HTTP-based communication. By using this proxy, users are able to use their existing web browser to send both HTTP requests and Publish/Subscribe Internet Routing Paradigm (PSIRP) requests. A publish/subscribe overlay is responsible for maintaining PSIRP content. The proxy enables web browser clients to benefit from the publish/subscribe network without requiring them to change their behavior or even be aware of the fact that the content they want to access is being provided via the publish/subscribe overlay. The use of the overlay enables a user's request to be satisfied by any copy of the content, potentially decreasing latency, reducing backbone network traffic, and reducing the load on the original content server. One of the aims of this thesis is to make more PSIRP content available. This is done by introducing a proxy that handles both HTTP and PSIRP requests and, having received content as an HTTP response, publishes this data as PSIRP-accessible content. The purpose is to foster the introduction and spread of content-based access.
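
The proxy behaviour described above can be sketched in a few lines. The Python code below is a simplification, not the thesis's implementation: PSIRP rendezvous and scope identifiers are omitted, the overlay is a plain dictionary, and content is named by a hash of its URL (a hypothetical naming scheme). It only illustrates serving a request from already-published content and re-publishing freshly fetched HTTP responses.

# Illustrative sketch only (PSIRP details omitted): serve a request
# from the publish/subscribe overlay when the content is already
# published there; otherwise fetch it over HTTP and re-publish it
# under a content identifier for later requests.
import hashlib
import urllib.request

overlay = {}   # stand-in for the pub/sub overlay: content id -> bytes

def content_id(url):
    # Hypothetical naming scheme: hash of the URL (real PSIRP uses
    # rendezvous and scope identifiers).
    return hashlib.sha256(url.encode()).hexdigest()

def proxy_get(url):
    cid = content_id(url)
    if cid in overlay:                        # satisfied by any published copy
        return overlay[cid]
    with urllib.request.urlopen(url) as resp: # fall back to plain HTTP
        body = resp.read()
    overlay[cid] = body                       # publish for future requests
    return body

# First call fetches over HTTP and publishes; the second is served
# from the overlay without touching the origin server.
page = proxy_get("http://example.com/")
page_again = proxy_get("http://example.com/")

Note that the example fetches http://example.com/ and therefore needs network access when run.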
