51 |
Um middleware reconfigurável para redes de sensores sem fio / A reconfigurable middleware for wireless sensor networks
Souza Vieria, Mardoqueu January 2006 (has links)
Previous issue date: 2006 / The appeal of Wireless Sensor Networks (WSNs) for monitoring environmental conditions and for serving as a bridge between the physical and the virtual world has been growing thanks to advances in microelectronics, which have made it possible to produce many types of sensors (light, humidity, temperature, smoke, radiation, acoustic, seismic, etc.) on the same chip that processes the signal and performs the communication. WSNs can be regarded as distributed computing environments with severe constraints on processing speed, memory size, energy, and bandwidth. Individually, sensor network nodes are typically unreliable, and the network topology may change dynamically. Sensor networks are also distinguished by their close interaction with the physical environment through sensors and actuators. Because of all these differences, many solutions developed for general-purpose computing platforms and for ad-hoc networks cannot be applied to WSNs. Nevertheless, sensor network nodes also exhibit characteristics of both general-purpose and embedded systems.
Middleware systems for WSNs have goals similar to those of traditional middleware systems such as CORBA, RMI, JINI, DCOM, and PVM (e.g., communication), but operate under different constraints. Traditional middleware systems generally consume too many resources, such as processing, memory, and bandwidth, whereas in WSNs these resources are scarce, which makes developing middleware for these networks difficult.
The development of middleware for WSNs is the central theme of this dissertation. The middleware developed in this dissertation must have the following characteristics: adaptation of application behavior to resource availability and to the characteristics of the physical environment; communication between network nodes, including asynchronous communication, which is better suited to the information-dissemination model required by WSN applications; combination, or fusion, of data from different sources, eliminating redundancy, minimizing the number of transmissions, and thus saving energy; and management of groups of nodes to support applications such as object tracking, fault tolerance, security, clock synchronization, and power management.
To realize these characteristics, we present the design, implementation, and validation of a middleware for WSNs. This middleware is viewed as a collection of (middleware) services provided through an API (Application Programming Interface) and comprises the following services: communication, which provides broadcast and publish-subscribe communication channels; reconfiguration, responsible for reconfiguring application components and middleware services; group management, which provides a model for managing groups of network nodes; and aggregation, which combines data to reduce the amount of data sent over the network.
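The publish-subscribe channel provided by the communication service can be illustrated with a minimal topic-based broker (the names `Broker`, `subscribe`, and `publish` are illustrative assumptions, not the dissertation's actual API):

```python
from collections import defaultdict

class Broker:
    """Minimal topic-based publish-subscribe broker (illustrative only)."""
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every callback registered for the topic.
        for callback in self._subscribers[topic]:
            callback(message)

# A sensor node publishes a reading; a sink node receives it asynchronously.
broker = Broker()
readings = []
broker.subscribe("temperature", readings.append)
broker.publish("temperature", {"node": 7, "celsius": 21.5})
print(readings)  # [{'node': 7, 'celsius': 21.5}]
```

Publishers and subscribers never reference each other directly, which is the decoupling that makes the model attractive for sensor networks.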
|
52 |
A Trip Planner for the Itract System supporting real-time updates
Liden, Natalie January 2014 (has links)
Mobile applications and real-time data are excellent tools for rapidly sharing information. Such information may concern public transportation, such as timetables and traffic delays. This project has involved the development of a trip planner, which can subscribe to real-time data in order to inform the end user about the position of transit vehicles and trip updates. A trip planner is an application which, after having been given a start and a destination by the user, generates possible trips between these two locations. The route is displayed on a map, along with information about how the trip is travelled. The real-time data, which is pushed to the application, informs the user if vehicles are delayed and if the trip needs to be updated due to a missed bus or train. The trip planner for Itract developed in this project uses the graphical interface and some necessary Java classes from the open source application Open Trip Planner. The new trip planner, developed in this project, is compatible with the API of Itract, has some additional functionality, and can subscribe to real-time information. To subscribe to real-time information, a database called Redis has been set up in connection with Itract. Another database, known as MongoDB, is used for persistent storage. / Itract
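The core of any trip planner, computing a route between a start and a destination over a transit graph, can be sketched with Dijkstra's algorithm (a generic sketch; Open Trip Planner's routing is considerably more elaborate):

```python
import heapq

def shortest_trip(graph, start, destination):
    """Dijkstra's algorithm over a transit graph: node -> [(neighbor, minutes)].

    Returns (total_minutes, path), or None if no route exists. Illustrative
    of how a trip planner derives a route; not Open Trip Planner's code.
    """
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == destination:
            return minutes, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (minutes + cost, neighbor, path + [neighbor]))
    return None

stops = {
    "Central": [("Museum", 4), ("Harbor", 11)],
    "Museum": [("Harbor", 5)],
    "Harbor": [],
}
print(shortest_trip(stops, "Central", "Harbor"))  # (9, ['Central', 'Museum', 'Harbor'])
```

Real-time updates would then re-run the search with adjusted edge costs when a delay is pushed to the client.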
|
53 |
Revisiter les grilles de PCs avec des technologies du Web et le Cloud computing / Re-examining Desktop Grids with Web Technologies and Cloud Computing
Abidi, Leila 03 March 2015 (has links)
The context of this thesis lies at the intersection of computing grids, new Web technologies, and Clouds and on-demand services. Since their advent in the 1990s, distributed platforms, more specifically grid computing systems, have continued to evolve, prompting numerous research efforts. Desktop grids were proposed as an alternative to supercomputers through the federation of thousands of desktop computers. The implementation details of such a grid architecture, in terms of resource-pooling mechanisms, remain very hard to pin down. Meanwhile, the Web has completely changed the way we access information and is now an essential part of our daily lives. Devices, in turn, have evolved from desktop or laptop computers to tablets, media players, game consoles, smartphones, and NetPCs. This evolution requires adapting and rethinking the desktop-grid applications and middleware developed over recent years. Our contribution is a desktop-grid middleware that we call RedisDG. In its operation, RedisDG remains similar to most grid-computing middleware: it can execute applications as "bags of tasks" in a distributed environment, monitor nodes, and validate and certify results. The innovation of RedisDG lies in the integration of formal modeling and verification into its design phase, which is unconventional but highly relevant in our field. Our approach is to rethink desktop grids from a reflection and a formal framework that allows them to be developed rigorously and to better master future technological evolutions.
/ The context of this work is at the intersection of grid computing, new Web technologies, and the Clouds and services-on-demand contexts. Desktop Grids have been proposed as an alternative to supercomputers through the federation of thousands of desktops. The details of the implementation of such an architecture, in terms of resource-sharing mechanisms, remain very hard. Meanwhile, the Web has completely changed the way we access information. Devices, in turn, have evolved from desktops or laptops to tablets, smartphones, and NetPCs. Our approach is to rethink Desktop Grids from a reflection and a formal framework to develop them rigorously and better control future technological developments. We have reconsidered the interactions between the traditional components of a Desktop Grid based on Web technology, giving birth to RedisDG, a new Desktop Grid middleware capable of operating on small devices, i.e., smartphones and tablets, as well as more traditional devices (PCs). Our system is entirely based on the publish-subscribe paradigm. RedisDG is developed with Python and uses Redis as an advanced key-value cache and store.
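The "bag of tasks" execution model mentioned above can be sketched in a few lines of Python (a hypothetical in-process sketch using a thread-safe queue; RedisDG itself distributes tasks over Redis publish/subscribe across machines):

```python
import queue
import threading

def worker(tasks, results):
    """Pull tasks from a shared bag until it is empty and report results.

    Illustrative of the bag-of-tasks model only; the actual work function
    (squaring a number here) is a stand-in.
    """
    while True:
        try:
            n = tasks.get_nowait()
        except queue.Empty:
            return
        results.put((n, n * n))  # stand-in for real computation

tasks, results = queue.Queue(), queue.Queue()
for n in range(6):
    tasks.put(n)

# Three independent workers drain the bag concurrently.
threads = [threading.Thread(target=worker, args=(tasks, results)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results.queue))  # [(0, 0), (1, 1), (2, 4), (3, 9), (4, 16), (5, 25)]
```

Because tasks are independent, workers need no coordination beyond the shared bag, which is what makes the model a natural fit for loosely coupled desktop grids.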
|
54 |
Organizational Rhetoric in the Academy: Junior Faculty Perceptions and Roles
Gordon, Cynthia K. 12 1900 (has links)
The purpose of this project was to examine junior faculty members' perceptions of the roles and expectations tied to the tenure process. The study utilized a mixed methods approach to gain a multifaceted perspective on this complex process. I employed a quantitative and qualitative survey to explore junior faculty perceptions of the roles defined by promotion and tenure policies. In addition, I conducted fantasy theme analysis (FTA) to explore the organizational rhetoric surrounding these policies. Findings from the study illustrate the continued presence of the "publish or perish" paradigm, as well as issues of role conflict within the context of organizational rhetoric.
|
55 |
Flexibilní bezdrátový systém pro měření CO2 v budově / Indoor flexible wireless CO2 measurement system
Válek, Vít January 2021 (links)
Monitoring of the carbon dioxide concentration in the building is carried out for several reasons. One is to ensure hygiene conditions. With the advent of Bluetooth 5.0 came the support of mesh network technology, which is defined by the Bluetooth Mesh standard. By implementing this standard, we can create an extensive network of devices monitoring the concentration of carbon dioxide in the building. Based on the monitored concentration, we can control the air conditioning and ventilation of the spaces, ensuring that the hygiene conditions are met. Thanks to the compatibility of Bluetooth Mesh with Bluetooth Low Energy, it is possible to access individual nodes, e.g. from a mobile phone. The aim of this work is to design and implement a measuring system whose elements will communicate with each other using Bluetooth Mesh wireless technology.
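The control loop implied above, ventilating according to the monitored concentration, can be sketched as a simple threshold mapping (the 800/1200 ppm thresholds are illustrative assumptions, not values from the thesis):

```python
def ventilation_level(co2_ppm):
    """Map a CO2 concentration (ppm) to a ventilation setting.

    The 800/1200 ppm thresholds are illustrative assumptions only.
    """
    if co2_ppm < 800:
        return "off"   # acceptable indoor air quality
    if co2_ppm < 1200:
        return "low"   # concentration rising, ventilate gently
    return "high"      # hygiene limit exceeded, ventilate strongly

print(ventilation_level(650), ventilation_level(1000), ventilation_level(1500))
# off low high
```

In the actual system each mesh node would report its reading over Bluetooth Mesh, and a controller node would apply a mapping like this per room.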
|
56 |
Pay with Bytes : A Collaborative and Anonymous Storage Service
Santa Cruz Cosp, Juan Ignacio 05 September 2014 (links)
No description available.
|
57 |
Community-Based Intrusion Detection
Weigert, Stefan 06 February 2017 (links) (PDF)
Today, virtually every company world-wide is connected to the Internet. This wide-spread connectivity has given rise to sophisticated, targeted, Internet-based attacks. For example, between 2012 and 2013 security researchers counted an average of about 74 targeted attacks per day. These attacks are motivated by economic, financial, or political interests and are commonly referred to as "Advanced Persistent Threat (APT)" attacks. Unfortunately, many of these attacks are successful and the adversaries manage to steal important data or disrupt vital services. The preferred victims are companies from vital industries, such as banks, defense contractors, or power plants. Given that these industries are well-protected, often employing a team of security specialists, the question is: How can these attacks be so successful?
Researchers have identified several properties of APT attacks which make them so efficient. First, they are adaptable: they can change the way they attack, and the tools they use for this purpose, at any given moment in time. Second, they conceal their actions and communication, for example by using encryption. This renders many defense systems useless, as those systems assume complete access to the actual communication content. Third, their actions are stealthy, either by keeping communication to the bare minimum or by mimicking legitimate users. This makes them "fly below the radar" of defense systems which check for anomalous communication. And finally, with the goal of increasing their impact or monetisation prospects, their attacks are targeted against several companies from the same industry. Since months can pass between the first attack, its detection, and comprehensive analysis, it is often too late to deploy appropriate counter-measures at business peers. Instead, it is much more likely that they have already been attacked successfully.
This thesis tries to answer the question whether the last property (industry-wide attacks) can be used to detect such attacks. It presents the design, implementation and evaluation of a community-based intrusion detection system, capable of protecting businesses at industry-scale. The contributions of this thesis are as follows. First, it presents a novel algorithm for community detection which can detect an industry (e.g., energy, financial, or defense industries) in Internet communication. Second, it demonstrates the design, implementation, and evaluation of a distributed graph mining engine that is able to scale with the throughput of the input data while maintaining an end-to-end latency for updates in the range of a few milliseconds. Third, it illustrates the usage of this engine to detect APT attacks against industries by analyzing IP flow information from an Internet service provider.
Finally, it introduces a detection algorithm- and input-agnostic intrusion detection engine which supports not only intrusion detection on IP flow but any other intrusion detection algorithm and data-source as well.
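The first contribution, detecting communities (industries) in a communication graph, can be illustrated with a generic label-propagation sketch (a standard community-detection technique used here for illustration; the thesis's own algorithm may differ):

```python
from collections import Counter

def label_propagation(graph, rounds=10):
    """Detect communities by repeated majority voting among neighbors.

    `graph` maps each node to a list of its neighbors. Nodes that mostly
    talk to each other converge to the same label.
    """
    labels = {node: node for node in graph}  # each node starts alone
    for _ in range(rounds):
        changed = False
        for node in sorted(graph):
            if not graph[node]:
                continue
            # Adopt the most common label among this node's neighbors.
            majority = Counter(labels[n] for n in graph[node]).most_common(1)[0][0]
            if labels[node] != majority:
                labels[node] = majority
                changed = True
        if not changed:
            break
    return labels

# Two clusters of hosts that communicate mostly among themselves.
g = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"],
    "x": ["y", "z"], "y": ["x", "z"], "z": ["x", "y"],
}
labels = label_propagation(g)
assert labels["a"] == labels["b"] == labels["c"]
assert labels["x"] == labels["y"] == labels["z"]
```

Applied to IP-flow data, each cluster of endpoints that communicate heavily with one another would correspond to a candidate industry community.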
|
58 |
XSiena: The Content-Based Publish/Subscribe System
Jerzak, Zbigniew 29 September 2009 (links) (PDF)
Just as packet-switched networks constituted a major breakthrough in our perception of information exchange in computer networks, so have the decoupling properties of publish/subscribe systems revolutionized the way we look at networking in the context of large-scale distributed systems. The decoupling of the components of publish/subscribe systems in time, space, and synchronization has created an appealing platform for asynchronous information exchange among anonymous information producers and consumers. Moreover, the content-based nature of publish/subscribe systems provides a great degree of flexibility and expressiveness as far as the construction of data flows is concerned.
However, a number of challenges and as-yet-unaddressed issues still exist in the area of publish/subscribe systems. One active area of research is directed toward the problem of efficient content delivery in content-based publish/subscribe networks. Routing information based on the information itself, instead of on explicit source and destination addresses, poses challenges as far as efficiency and processing times are concerned. Simultaneously, due to their decoupled nature, publish/subscribe systems introduce new challenges with respect to dependability and fail-awareness.
This thesis seeks to advance the field of research in both directions. First, it shows the design and implementation of routing algorithms based on the end-to-end systems design principle. The proposed routing algorithms eliminate the need to perform content-based routing within the publish/subscribe network, pushing this task to the edge of the system. Moreover, this thesis presents a fail-aware approach to the construction of content-based publish/subscribe systems, along with its application to the creation of a soft-state publish/subscribe system. A soft-state publish/subscribe system exhibits self-stabilizing behavior in the face of transient timing, link, and node failures. The result of this thesis is a family of XSiena content-based publish/subscribe systems, implementing the proposed concepts and algorithms. The family of XSiena content-based publish/subscribe systems has been subject to rigorous evaluation, which confirms the claims made in this thesis.
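The content-based routing described above, where messages are matched on their content rather than on destination addresses, can be illustrated with a minimal matching sketch (hypothetical predicate format; not XSiena's actual implementation):

```python
# Content-based matching: a subscription is a list of attribute predicates,
# and an event matches if every predicate holds on the event's content.
OPS = {
    "=": lambda a, b: a == b,
    "<": lambda a, b: a < b,
    ">": lambda a, b: a > b,
}

def matches(subscription, event):
    """Return True if the event satisfies every (attribute, op, value) predicate."""
    return all(
        attr in event and OPS[op](event[attr], value)
        for attr, op, value in subscription
    )

sub = [("type", "=", "stock-quote"), ("price", "<", 100.0)]
assert matches(sub, {"type": "stock-quote", "price": 87.5})
assert not matches(sub, {"type": "stock-quote", "price": 120.0})
```

A broker holding many such subscriptions forwards each incoming event only to the subscribers whose predicates it satisfies, with no addresses involved.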
|
59 |
An Efficient, Extensible, Hardware-aware Indexing Kernel
Sadoghi Hamedani, Mohammad 20 June 2014 (links)
Modern hardware has the potential to play a central role in scalable data management systems. A realization of this potential arises in the context of indexing queries, a recurring theme in real-time data analytics, targeted advertising, algorithmic trading, and data-centric workflows, and of indexing data, a challenge in multi-version analytical query processing. To enhance query and data indexing, in this thesis, we present an efficient, extensible, and hardware-aware indexing kernel. This indexing kernel rests upon novel data structures and (parallel) algorithms that utilize the capabilities offered by modern hardware, especially abundance of main memory, multi-core architectures, hardware accelerators, and solid state drives.
This thesis focuses on presenting our query indexing techniques for processing queries in data-intensive applications that face ever-increasing data volume and velocity. At the core of our query indexing kernel lies the BE-Tree family of memory-resident indexing structures, which scales by overcoming the curse of dimensionality through a novel two-phase space-cutting technique, effective Top-k processing, and adaptive parallel algorithms that operate directly on compressed data (exploiting the multi-core architecture). Furthermore, we achieve line-rate processing by harnessing the unprecedented degrees of parallelism and pipelining only available through low-level logic design using FPGAs. Finally, we present a comprehensive evaluation that establishes the superiority of BE-Tree in comparison with state-of-the-art algorithms.
In this thesis, we further expand the scope of our indexing kernel and describe how to accelerate analytical queries on (multi-version) databases by enabling indexes on the most recent data. Our goal is to reduce the overhead of index maintenance, so that indexes can be used effectively for analytical queries without being a heavy burden on transaction throughput. To achieve this end, we re-design the data structures in the storage hierarchy to employ an extra level of indirection over solid state drives. This indirection layer dramatically reduces the number of magnetic disk I/Os needed for updating indexes and localizes index maintenance. As a result, by rethinking how data is indexed, we eliminate the dilemma between update and query performance and substantially reduce index maintenance and query processing costs.
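The extra level of indirection described above can be sketched as a simple mapping from stable logical identifiers to current physical locations (a toy illustration of the idea, not the thesis's storage engine):

```python
class IndirectionLayer:
    """Map logical record IDs to their current physical location.

    An index stores stable logical IDs; when a new record version is
    written elsewhere, only this mapping changes, so index entries never
    need rewriting. Simplified sketch of the indirection idea.
    """
    def __init__(self):
        self._map = {}  # logical id -> physical address (e.g., page, slot)

    def install(self, logical_id, physical_addr):
        self._map[logical_id] = physical_addr

    def resolve(self, logical_id):
        return self._map[logical_id]

layer = IndirectionLayer()
layer.install("order:42", ("page", 7, 3))
# An update writes a new version elsewhere; the index entry for "order:42"
# is untouched because only the indirection mapping changes.
layer.install("order:42", ("page", 19, 0))
print(layer.resolve("order:42"))  # ('page', 19, 0)
```

This is why index maintenance becomes localized: one small mapping update replaces many scattered index-page writes.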
|