  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
441

An Atomic Transaction Service for Web Services

Ivan Bittencourt de Araujo e Silva Neto 21 September 2007 (has links)
Computing systems consist of a multitude of hardware and software components that may fail. For this reason, the transaction mechanism has always been essential for the development of robust systems. Transactional support for the Web services technology was defined in August 2005, in a set of three specifications, namely WS-Coordination, WS-AtomicTransaction, and WS-BusinessActivity. Together, these specifications provide a foundation on which robust Web services applications can be built. In this dissertation we studied atomic transactions in the Web services realm. In particular, we added Web services atomic transaction support to the existing JBoss application server transaction manager. Furthermore, we evaluated the performance of this transaction manager when it employs each of the following remote invocation mechanisms: Web services/SOAP, CORBA/IIOP, and JBoss Remoting. Finally, we performed scalability and interoperability experiments.
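WS-AtomicTransaction is built around the classic two-phase commit (2PC) protocol. As a rough illustration of the coordination logic such a transaction manager implements (a toy sketch, not the WS-AT wire protocol; class and method names are invented here):

```python
class Participant:
    """One resource enlisted in the transaction."""

    def __init__(self, name, will_commit=True):
        self.name = name
        self.will_commit = will_commit
        self.state = "active"

    def prepare(self):
        # Phase 1: the participant votes after durably logging its intent.
        self.state = "prepared" if self.will_commit else "aborted"
        return self.will_commit

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "aborted"


class Coordinator:
    """Drives 2PC across all enlisted participants."""

    def __init__(self, participants):
        self.participants = participants

    def run(self):
        # Phase 1: collect votes; a single "no" aborts the whole transaction.
        if all(p.prepare() for p in self.participants):
            # Phase 2: every participant voted yes, so commit everywhere.
            for p in self.participants:
                p.commit()
            return "committed"
        for p in self.participants:
            p.rollback()
        return "aborted"
```

A single "no" vote in phase one forces every participant to roll back, which is what makes the outcome atomic across services.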
442

RiTE: Providing On-Demand Data for Right-Time Data Warehousing

Lehner, Wolfgang, Thomsen, Christian, Bach Pedersen, Torben 20 June 2022 (has links)
Data warehouses (DWs) have traditionally been loaded with data at regular intervals, e.g., monthly, weekly, or daily, using fast bulk-loading techniques. Recently, the trend is to insert all (or only some) new source data into the DW very quickly, yielding near-real-time DWs (right-time DWs). This is done using regular INSERT statements, resulting in insert speeds that are far too low. There is thus a great need for a solution that makes inserted data available quickly while still providing bulk-load insert speeds. This paper presents RiTE ('Right-Time ETL'), a middleware system that provides exactly that. A data producer (ETL) can insert data that becomes available to data consumers on demand. RiTE includes an innovative main-memory based catalyst that provides fast storage and offers concurrency control. A number of policies controlling the bulk movement of data based on user requirements for persistency, availability, freshness, etc. are supported. The system works transparently to both producer and consumers. It is integrated with an open source DBMS, and experiments show that it provides 'the best of both worlds', i.e., INSERT-like data availability with bulk-load speeds (up to 10 times faster).
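The core idea of the catalyst, cheap in-memory inserts that consumers can read on demand while data moves to the table in bulk, can be sketched as follows (a toy model; the flush policy and all names are assumptions, not RiTE's actual design):

```python
class MemoryCatalyst:
    """Toy RiTE-style buffer: inserts are cheap, readers see buffered
    rows on demand, and data reaches the table in bulk moves."""

    def __init__(self, flush_threshold=1000):
        self.buffer = []          # fresh rows, memory only
        self.table = []           # stands in for the DBMS table
        self.flush_threshold = flush_threshold

    def insert(self, row):
        # INSERT-like availability: no disk I/O on the hot path.
        self.buffer.append(row)
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # Bulk movement: one large write instead of row-by-row inserts.
        self.table.extend(self.buffer)
        self.buffer.clear()

    def read(self):
        # Consumers see persisted rows plus still-buffered fresh rows.
        return self.table + self.buffer
```

The flush threshold plays the role of RiTE's policies: it trades memory and persistence risk against bulk-load speed, while `read` keeps the fresh rows visible throughout.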
443

Secure collection and data management system for WSNs

Drira, Wassim 10 December 2012 (has links)
Nowadays, each user or organization is already connected to a large number of sensor nodes, which generate a substantial amount of data, making their management a non-trivial issue. In addition, these data can be confidential. For these reasons, developing a secure system for managing the data of heterogeneous sensor nodes is a real need. In the first part, we developed a composite-based middleware for wireless sensor networks that communicates with the physical sensors to store, process, index, and analyze sensor data and to generate alerts on it. Each physical node communicating with the middleware is set up as a composite, and composites can also aggregate data from other composites. The middleware was used in the context of the European project Mobesens to manage data from a sensor network monitoring water quality. In the second part of the thesis, we proposed a new hybrid authentication and key establishment scheme between sensor nodes (SN), gateways (MN), and the middleware (SS). Given the performance gap between these parties, the scheme uses identity-based cryptography between the gateway and the storage server and symmetric cryptography on the sensor side. It is based on two protocols: the first mutually authenticates SS and MN, provides MN with an asymmetric key pair, and establishes a pairwise key between them; the second authenticates all three parties and establishes a group key as well as pairwise keys between SN and the other two. In the third part, the middleware was generalized so that each organization or user has a private space to manage its sensor data using cloud computing. Finally, we extended the composite model with gadgets to share sensor data securely, providing a secure social sensor network.
444

DIPBench: An Independent Benchmark for Data-Intensive Integration Processes

Lehner, Wolfgang, Böhm, Matthias, Habich, Dirk, Wloka, Uwe 12 August 2022 (has links)
The integration of heterogeneous data sources is one of the main challenges within the area of data engineering. Due to the absence of an independent and universal benchmark for data-intensive integration processes, we propose a scalable benchmark, called DIPBench (Data-Intensive integration Process Benchmark), for evaluating the performance of integration systems. The benchmark applies to subscription systems, such as replication servers and distributed or federated DBMSs, as well as to message-oriented middleware platforms like Enterprise Application Integration (EAI) servers and Extraction-Transformation-Loading (ETL) tools. To achieve the intended universal view of integration processes, the benchmark is designed in a conceptual, process-driven way and comprises 15 integration process types. We specify the source and target data schemas and provide a tool suite for initializing the external systems, executing the benchmark, and monitoring the integration system's performance. The core benchmark execution can be influenced by three scale factors. Finally, we discuss the metric used to evaluate the measured performance of an integration system, and we illustrate our reference benchmark implementation for federated DBMSs.
445

The design and development of multi-agent based RFID middleware system for data and devices management

Massawe, Libe Valentine January 2012 (has links)
Thesis (D. Tech. (Electrical Engineering)) - Central University of Technology, Free State, 2012 / Radio frequency identification (RFID) technology has emerged as a key technology for automatic identification and promises to revolutionize business processes. While RFID adoption is improving rapidly, reliable and widespread deployment of this technology still faces many significant challenges. The key deployment challenges include how to use the simple, unreliable raw data generated by RFID deployments to make business decisions, and how to manage a large number of deployed RFID devices. In this thesis, a multi-agent based RFID middleware which addresses some of these RFID data and device management challenges was developed. The middleware abstracts auto-identification applications from physical RFID device-specific details and provides necessary services such as device management, data cleaning, event generation, query capabilities, and event persistence. The use of software agent technology offers a more scalable and distributed system architecture for the proposed middleware. As part of the multi-agent system, an application-independent domain ontology for RFID devices was developed; it can be used or extended by any application concerned with the RFID domain. To address event processing within the proposed middleware, a temporal RFID data model was developed that incorporates the application's temporal and spatial granules into the model itself for efficient event processing. The model extends the conventional Entity-Relationship constructs by adding a time attribute, and by maintaining the history of events and state changes it captures the fundamental RFID application logic within the data model. Hence, this new data model supports efficient generation of application-level events, as well as updating, querying, and analysis of both recent and historical events.
As part of the RFID middleware, an adaptive sliding-window based data cleaning scheme for reducing missed readings from RFID data streams (called WSTD) was also developed. The WSTD scheme models the unreliability of the RFID readings by viewing RFID streams as a statistical sample of tags in the physical world, and exploits techniques grounded in sampling theory to drive its cleaning processes. The WSTD scheme is capable of efficiently coping with both environmental variations and tag dynamics by automatically and continuously adapting its cleaning window size, based on observed readings.
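The adaptive-window idea behind WSTD can be illustrated with a toy smoother for a single tag: widen the window when reads look unreliable so a few missed epochs are not mistaken for the tag leaving (thresholds and the adaptation rule here are illustrative, not the thesis's algorithm):

```python
from collections import deque


class WindowCleaner:
    """Toy sliding-window cleaner for one RFID tag: per-epoch reads are
    treated as a noisy sample, and the window widens when the observed
    read rate is low."""

    def __init__(self, min_window=2, max_window=10):
        self.min_window = min_window
        self.max_window = max_window
        self.window = min_window
        self.reads = deque()      # 1 = tag read this epoch, 0 = missed

    def observe(self, seen):
        self.reads.append(1 if seen else 0)
        while len(self.reads) > self.window:
            self.reads.popleft()
        rate = sum(self.reads) / len(self.reads)
        # Adapt: unreliable (low read rate) streams get a wider window;
        # reliable streams shrink it to stay responsive to tag movement.
        if rate < 0.5 and self.window < self.max_window:
            self.window += 1
        elif rate > 0.8 and self.window > self.min_window:
            self.window -= 1
        # Report "present" if the tag was read at least once in the window.
        return sum(self.reads) > 0
```

A dropped reading between two successful ones is smoothed over, while a long run of misses eventually reports the tag as absent.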
446

TENA in a Telemetry Network System

Saylor, Kase J., Malatesta, William A., Abbott, Ben A. 10 1900 (has links)
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California / The integrated Network Enhanced Telemetry (iNET) and Test and Training Enabling Architecture (TENA) projects are working to understand how TENA will perform in a Telemetry Network System. This paper discusses a demonstration prototype that is being used to investigate the use of TENA across a constrained test environment simulating iNET capabilities. Some of the key elements being evaluated are throughput, latency, memory utilization, memory footprint, and bandwidth. The results of these evaluations will be presented. Additionally, the paper briefly discusses modeling and metadata requirements for TENA and iNET.
447

Publish Subscribe on Large-Scale Dynamic Topologies: Routing and Overlay Management

Frey, Davide 18 May 2006 (has links) (PDF)
Content-based publish-subscribe is emerging as a communication paradigm able to meet the demands of highly dynamic distributed applications, such as those made popular by mobile computing and peer-to-peer networks. Nevertheless, the available systems implementing this communication model are still unable to cope efficiently with dynamic changes to the topology of their distributed dispatching infrastructure. This hampers their applicability in the aforementioned scenarios. This thesis addresses this problem and presents a complete approach to the reconfiguration of content-based publish-subscribe systems. In Part I, it proposes a layered architecture for reconfigurable publish-subscribe middleware consisting of an overlay, a routing, and an event-recovery layer. This architecture allows the same routing components to operate in different types of dynamic network environments, by exploiting different underlying overlays. Part II addresses the routing layer with new protocols to manage the reconfiguration of the routing information enabling the correct delivery of events to subscribers. When the overlay changes as a result of nodes joining or leaving the network or as a result of mobility, this information is updated so that routing can adapt to the new environment. Our protocols manage to achieve this with as little overhead as possible. Part III addresses the overlay layer and proposes two novel approaches for building and maintaining a connected topology in highly dynamic network scenarios. The protocols we present achieve this goal, while managing node degree and keeping reconfigurations localized when possible. These properties allow our overlay managers to be applied not only in the context of publish-subscribe middleware but also as enabling technologies for other communication paradigms like application-level multicast.
Finally, the thesis integrates the overlay and routing layers into a single framework and evaluates their combined performance both in wired and in wireless scenarios. Results show that the optimizations provided by our routing reconfiguration protocols allow the middleware to achieve very good performance in such networks. Moreover, they highlight that our overlay layer is able to optimize this performance even further, significantly reducing the network traffic generated by the routing layer. The protocols presented in this thesis are implemented in the REDS middleware framework developed at Politecnico di Milano. Their use enables REDS to operate efficiently in dynamic network scenarios ranging from large-scale peer-to-peer to mobile ad hoc networks.
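Content-based matching, the step at the heart of such middleware, can be sketched in a few lines (a single-broker toy; real systems like REDS distribute this matching across a dispatching tree, and the names here are invented):

```python
class Broker:
    """Toy content-based broker: subscriptions are predicates over
    event attributes, and an event is delivered to every subscriber
    whose predicate it satisfies."""

    def __init__(self):
        self.subscriptions = []   # (subscriber, predicate) pairs

    def subscribe(self, subscriber, predicate):
        self.subscriptions.append((subscriber, predicate))

    def publish(self, event):
        # Content-based routing: match on the event's content,
        # not on a pre-declared topic name.
        return [s for s, pred in self.subscriptions if pred(event)]
```

The reconfiguration problem the thesis tackles is exactly what this toy omits: keeping such matching state correct while the brokers' interconnection topology changes underneath it.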
448

Reliable peer to peer grid middleware

Leslie, Matthew John January 2011 (has links)
Grid computing systems are suffering from reliability and scalability problems caused by their reliance on centralised middleware. In this thesis, we argue that peer to peer middleware could help alleviate these problems. We show that peer to peer techniques can be used to provide reliable storage systems, which can be used as the basis for peer to peer grid middleware. We examine and develop new methods of providing reliable peer to peer storage, giving a new algorithm for this purpose, and assessing its performance through a combination of analysis and simulation. We then give an architecture for a peer to peer grid information system based on this work. Performance evaluation of this information system shows that it improves scalability when compared to the original centralised system, and that it withstands the failure of participant nodes without a significant reduction in quality of service. New contributions include dynamic replication, a new method for maintaining reliable storage in a Distributed Hash Table, which we show allows for the creation of more reliable, higher performance systems with lower bandwidth usage than current techniques. A new analysis of the reliability of distributed storage systems is also presented, which shows for the first time that replica placement has a significant effect on reliability. A simulation of the performance of distributed storage systems provides for the first time a quantitative performance comparison between different placement patterns. Finally, we show how these reliable storage techniques can be applied to grid computing systems, giving a new architecture for a peer to peer grid information service for the SAM-Grid system. We present a thorough performance evaluation of a prototype implementation of this architecture. Many of these contributions have been published at peer reviewed conferences.
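Replica placement in a Distributed Hash Table, the setting whose reliability the thesis analyses, commonly follows a successor-list pattern under consistent hashing, sketched below (a simplified model with a small identifier space; this is not the thesis's dynamic replication algorithm):

```python
import hashlib


def node_for(key, ring):
    """Consistent-hashing lookup: the first node clockwise from the key."""
    h = int(hashlib.sha1(key.encode()).hexdigest(), 16) % 2**16
    ring = sorted(ring)
    for node_id in ring:
        if node_id >= h:
            return node_id
    return ring[0]  # wrap around the ring


def replica_set(key, ring, n_replicas=3):
    """Successor-list placement: replicas live on the key's successor
    and the nodes that follow it on the ring, so a lookup that finds
    the successor also finds every replica."""
    ring = sorted(ring)
    start = ring.index(node_for(key, ring))
    return [ring[(start + i) % len(ring)]
            for i in range(min(n_replicas, len(ring)))]
```

Placing replicas on consecutive ring positions versus spreading them is precisely the kind of choice whose reliability impact the thesis quantifies.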
449

Toward an autonomic engine for scientific workflows and elastic Cloud infrastructure

Croubois, Hadrien 16 October 2018 (has links)
The constant development of scientific and industrial computation infrastructures requires the concurrent development of scheduling and deployment mechanisms to manage such infrastructures. Throughout the last decade, the emergence of the Cloud paradigm raised many hopes, but achieving full platform autonomicity is still an ongoing challenge. Work undertaken during this PhD aimed at building a workflow engine that integrates both workflow-execution logic and Cloud deployment management on its own. More precisely, we focus on Cloud solutions with a dedicated Data as a Service (DaaS) data management component. Our objective was to automate the execution of workflows submitted by many users on elastic Cloud resources. This contribution proposes a modular middleware infrastructure and details the implementation of the underlying modules:
• A workflow clustering algorithm that optimises data locality in the context of DaaS-centered communications;
• A dynamic scheduler that executes clustered workflows on Cloud resources;
• A deployment manager that handles the allocation and deallocation of Cloud resources according to the workload characteristics and users' requirements.
All these modules have been implemented in a simulator to analyse their behaviour and measure their effectiveness when running both synthetic and real scientific workflows, the latter from the LBMC (Laboratoire de Biologie et Modélisation de la Cellule). We also implemented these modules in the Diet middleware to give it new features and prove the versatility of this approach. Simulation running the WASABI workflow (waves analysis based inference, a framework for the reconstruction of gene regulatory networks) showed that our approach can decrease the deployment cost by up to 44% while meeting the required deadlines.
450

A Fog Computing approach for the context-awareness and adaptation subsystem of the EXEHDA middleware

CARDOSO, Anderson Afonso 24 February 2017 (has links)
Recent surveys indicate that in the near future billions of smart devices will be interconnected via the Internet, attracting the attention of industry and directing the research of the academic community; this synergy of investment has contributed to the materialization of the scenario known as the Internet of Things (IoT). From the IoT perspective, computing provides information about all the "things", at all times and independently of location, constituting a highly distributed, heterogeneous, and dynamic environment with strong interaction between humans and machines. To this end, IoT devices need to be aware of the contextual data that concern them and, where appropriate, react to it, interoperating autonomously and with as little human management intervention as possible. For processing contextual data in the IoT, cloud-based strategies have been employed, which have proven effective in handling aspects important to the IoT, such as ease of access and availability. However, these strategies are vulnerable for systems with limited Internet connectivity, systems that require low response latency, or systems facing a high chance of disconnection. Given this motivation, the central objective of this dissertation is the design of an architecture capable of providing the acquisition and processing of distributed contextual events. The proposed architecture, called EXEHDA-FOG, extends the middleware Execution Environment for Highly Distributed Applications (EXEHDA) with Fog Computing support, employing distributed event processing at the edge as a strategy for extending cloud computing. The results obtained with the case studies conducted are promising and motivate the continuation of this research.
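The fog pattern described here, processing contextual events at the edge and forwarding only the significant ones to the cloud, can be illustrated minimally (the threshold rule and all names are invented for the sketch, not part of EXEHDA-FOG):

```python
class EdgeNode:
    """Toy fog node: evaluates a local rule over incoming contextual
    readings and forwards only significant events upstream, cutting
    latency and uplink traffic compared to shipping everything."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.forwarded = []   # stands in for the cloud uplink

    def ingest(self, reading):
        # Routine data is handled (and dropped) at the edge; only
        # context changes that matter cross the constrained link.
        if reading > self.threshold:
            self.forwarded.append(("alert", reading))
            return True
        return False
```

Because the rule runs locally, the node keeps reacting even while the Internet channel is slow or disconnected, which is the failure mode the dissertation's motivation highlights.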
