61 |
MHNCS: um middleware para o desenvolvimento de aplicações móveis cientes de contexto com requisitos de QoC / MHNCS: a middleware for the development of context-aware mobile applications with QoC requirementsPinheiro, Dejailson Nascimento 06 August 2014 (has links)
Made available in DSpace on 2016-08-17T14:53:29Z (GMT). No. of bitstreams: 1
DISSERTACAO Dejailson Nascimento Pinheiro.pdf: 1433962 bytes, checksum: 4173dad207f09fa2033a834f86a5d4b7 (MD5)
Previous issue date: 2014-08-06 / Mobile Social Networks (MSNs) are social structures whose members relate in groups and interact through information
and communication technologies, using portable devices and wireless network technologies. Healthcare is one of the many possible application areas of MSNs.
The MobileHealthNet project, developed in partnership by UFMA and PUC-Rio, aims to develop a middleware that allows access to social networks and facilitates the
development of collaborative services targeting the health domain, the exchange of experiences and communication between patients and health professionals, as well
as better management of health resources by government agencies. An important aspect in the development of the MobileHealthNet middleware is the infrastructure necessary for the gathering, distribution and processing of context data. In this master's thesis we propose a software infrastructure, incorporated into
the MobileHealthNet middleware, that allows the specification, acquisition, validation and distribution of context data under quality requirements, making them available to context-aware applications. The distribution of context data is based on
a data-centric publish/subscribe model, using the OMG-DDS specification.
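The data-centric publish/subscribe distribution with quality-of-context (QoC) requirements described above can be sketched in plain Python. All names and QoC parameters here (topic strings, freshness, accuracy) are illustrative assumptions, not the MHNCS/MobileHealthNet API; an actual deployment would sit on an OMG DDS implementation.

```python
import time

class ContextBroker:
    """Topic-based broker that drops context samples violating a
    subscriber's QoC bounds before delivery. Illustrative sketch only."""

    def __init__(self):
        self.subscribers = {}  # topic -> list of (qoc_policy, callback)

    def subscribe(self, topic, callback, max_age_s=60.0, min_accuracy=0.0):
        policy = {"max_age_s": max_age_s, "min_accuracy": min_accuracy}
        self.subscribers.setdefault(topic, []).append((policy, callback))

    def publish(self, topic, value, timestamp, accuracy):
        """Deliver only to subscribers whose QoC requirements the sample meets."""
        now = time.time()
        for policy, callback in self.subscribers.get(topic, []):
            fresh = (now - timestamp) <= policy["max_age_s"]
            accurate = accuracy >= policy["min_accuracy"]
            if fresh and accurate:
                callback(topic, value)

received = []
broker = ContextBroker()
broker.subscribe("patient/42/heart_rate", lambda t, v: received.append(v),
                 max_age_s=30.0, min_accuracy=0.9)

broker.publish("patient/42/heart_rate", 72, timestamp=time.time(), accuracy=0.95)  # delivered
broker.publish("patient/42/heart_rate", 80, timestamp=time.time() - 120, accuracy=0.95)  # too old
broker.publish("patient/42/heart_rate", 75, timestamp=time.time(), accuracy=0.5)   # too inaccurate
```

Validation against the policy happens at distribution time, so context-aware applications only ever see samples that satisfy the QoC contract they declared.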
|
62 |
Community-Based Intrusion DetectionWeigert, Stefan 11 April 2016 (has links)
Today, virtually every company world-wide is connected to the Internet. This wide-spread connectivity has given rise to sophisticated, targeted, Internet-based attacks. For example, between 2012 and 2013 security researchers counted an average of about 74 targeted attacks per day. These attacks are motivated by economic, financial, or political interests and commonly referred to as “Advanced Persistent Threat (APT)” attacks. Unfortunately, many of these attacks are successful and the adversaries manage to steal important data or disrupt vital services. Victims are typically companies from vital industries, such as banks, defense contractors, or power plants. Given that these industries are well-protected, often employing a team of security specialists, the question is: How can these attacks be so successful?
Researchers have identified several properties of APT attacks which make them so efficient. First, they are adaptable. This means that they can change the way they attack and the tools they use for this purpose at any given moment in time. Second, they conceal their actions and communication by using encryption, for example. This renders many defense systems useless as they assume complete access to the actual communication content. Third, their
actions are stealthy — either by keeping communication to the bare minimum or by mimicking legitimate users. This makes them “fly below the radar” of defense systems which check for anomalous communication. And finally, with the goal to increase their impact or monetisation prospects, their attacks are targeted against several companies from the same industry. Since months can pass between the first attack, its detection, and comprehensive analysis, it is often too late to deploy appropriate counter-measures at business peers. Instead, it is much more likely that they have already been attacked successfully.
This thesis tries to answer the question whether the last property (industry-wide attacks) can be used to detect such attacks. It presents the design, implementation and evaluation of a community-based intrusion detection system, capable of protecting businesses at industry-scale. The contributions of this thesis are as follows. First, it presents a novel algorithm for community detection which can detect an industry (e.g., energy, financial, or defense industries) in Internet communication. Second, it demonstrates the design, implementation, and evaluation of a distributed graph mining engine that is able to scale with the throughput of the input data while maintaining an end-to-end latency for updates in the range of a few milliseconds. Third, it illustrates the usage of this engine to detect APT attacks against industries by analyzing IP flow information from an Internet service provider.
Finally, it introduces an intrusion detection engine that is agnostic to both detection algorithm and input, supporting not only intrusion detection on IP flows but other detection algorithms and data sources as well.
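The first contribution — detecting an industry community in Internet communication — can be illustrated with a toy clustering: hosts whose sets of communication peers overlap strongly are grouped into the same community. The flow data, host names, and similarity threshold below are invented for illustration; this is not the thesis's detection algorithm.

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity of two peer sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def communities(peers_of, threshold=0.5):
    """Union-find clustering: hosts whose communication-peer sets are
    similar enough (Jaccard >= threshold) land in the same community."""
    parent = {h: h for h in peers_of}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in combinations(peers_of, 2):
        if jaccard(peers_of[a], peers_of[b]) >= threshold:
            parent[find(a)] = find(b)
    groups = {}
    for h in peers_of:
        groups.setdefault(find(h), set()).add(h)
    return sorted(groups.values(), key=min)

# Hypothetical flow data: two banks share clearing infrastructure,
# the retailer talks to unrelated hosts.
flows = {
    "bank-a": {"swift-gw", "regulator", "clearing"},
    "bank-b": {"swift-gw", "regulator", "ad-server"},
    "shop-x": {"cdn", "ad-server", "analytics"},
}
print(communities(flows))  # [{'bank-a', 'bank-b'}, {'shop-x'}]
```

Once such a community is known, an attack observed at one member can raise the alert level for every other member of the same industry cluster.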
|
63 |
XSiena: The Content-Based Publish/Subscribe SystemJerzak, Zbigniew 28 September 2009 (has links)
Just as packet-switched networks constituted a major breakthrough in our perception of information exchange in computer networks, so have the decoupling properties of publish/subscribe systems revolutionized the way we look at networking in the context of large-scale distributed systems. The decoupling of the components of publish/subscribe systems in time, space and synchronization has created an appealing platform for asynchronous information exchange among anonymous information producers and consumers. Moreover, the content-based nature of publish/subscribe systems provides a great degree of flexibility and expressiveness as far as the construction of data flows is concerned.
However, a number of challenges and not-yet-addressed issues still exist in the area of publish/subscribe systems. One active area of research is directed toward the problem of efficient content delivery in content-based publish/subscribe networks. Routing information based on the information itself, instead of on explicit source and destination addresses, poses challenges as far as efficiency and processing times are concerned. Simultaneously, due to their decoupled nature, publish/subscribe systems introduce new challenges with respect to dependability and fail-awareness.
This thesis seeks to advance the field of research in both directions. First, it shows the design and implementation of routing algorithms based on the end-to-end systems design principle. The proposed routing algorithms eliminate the need to perform content-based routing within the publish/subscribe network, pushing this task to the edge of the system. Moreover, this thesis presents a fail-aware approach to the construction of content-based publish/subscribe systems, along with its application to the creation of a soft-state publish/subscribe system. A soft-state publish/subscribe system exhibits self-stabilizing behavior under transient timing, link and node failures. The result of this thesis is the XSiena family of content-based publish/subscribe systems, implementing the proposed concepts and algorithms. The XSiena family has been subject to rigorous evaluation, which confirms the claims made in this thesis.
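Content-based routing delivers events based on the event content itself rather than on addresses. A minimal sketch of the matching step, assuming subscriptions are conjunctions of attribute constraints (the general content-based model, not XSiena's own implementation):

```python
import operator

# Supported comparison operators for subscription constraints.
OPS = {"=": operator.eq, "<": operator.lt, ">": operator.gt,
       "<=": operator.le, ">=": operator.ge}

def matches(subscription, event):
    """A subscription is a conjunction of (attribute, op, value)
    constraints; an event (attribute dict) matches if every
    constraint holds on the event's attributes."""
    return all(
        attr in event and OPS[op](event[attr], value)
        for attr, op, value in subscription
    )

sub = [("type", "=", "quote"), ("symbol", "=", "XYZ"), ("price", "<", 100)]
assert matches(sub, {"type": "quote", "symbol": "XYZ", "price": 95.0})
assert not matches(sub, {"type": "quote", "symbol": "XYZ", "price": 120.0})
```

Performing this matching inside every broker is what the thesis's end-to-end routing algorithms avoid, by pushing the evaluation to the edge of the system.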
|
64 |
Publish Subscribe on Large-Scale Dynamic Topologies: Routing and Overlay ManagementFrey, Davide 18 May 2006 (has links) (PDF)
Content-based publish-subscribe is emerging as a communication paradigm able to meet the demands of highly dynamic distributed applications, such as those made popular by mobile computing and peer-to-peer networks. Nevertheless, the available systems implementing this communication model are still unable to cope efficiently with dynamic changes to the topology of their distributed dispatching infrastructure. This hampers their applicability in the aforementioned scenarios. This thesis addresses this problem and presents a complete approach to the reconfiguration of content-based publish-subscribe systems. In Part I, it proposes a layered architecture for reconfigurable publish-subscribe middleware consisting of an overlay, a routing, and an event-recovery layer. This architecture allows the same routing components to operate in different types of dynamic network environments, by exploiting different underlying overlays. Part II addresses the routing layer with new protocols to manage the reconfiguration of the routing information enabling the correct delivery of events to subscribers. When the overlay changes as a result of nodes joining or leaving the network or as a result of mobility, this information is updated so that routing can adapt to the new environment. Our protocols manage to achieve this with as little overhead as possible. Part III addresses the overlay layer and proposes two novel approaches for building and maintaining a connected topology in highly dynamic network scenarios. The protocols we present achieve this goal, while managing node degree and keeping reconfigurations localized when possible. These properties allow our overlay managers to be applied not only in the context of publish-subscribe middleware but also as enabling technologies for other communication paradigms like application-level multicast.
Finally, the thesis integrates the overlay and routing layers into a single framework and evaluates their combined performance both in wired and in wireless scenarios. Results show that the optimizations provided by our routing reconfiguration protocols allow the middleware to achieve very good performance in such networks. Moreover, they highlight that our overlay layer is able to optimize this performance even further, significantly reducing the network traffic generated by the routing layer. The protocols presented in this thesis are implemented in the REDS middleware framework developed at Politecnico di Milano. Their use enables REDS to operate efficiently in dynamic network scenarios ranging from large-scale peer-to-peer to mobile ad hoc networks.
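The overlay layer's goals — staying connected while bounding node degree and keeping reconfigurations local — can be sketched with a toy overlay manager. This illustrates the shape of the problem only; it is not one of the thesis's (or REDS's) actual protocols, and all names are invented.

```python
class Overlay:
    """Maintains a connected tree overlay with bounded node degree.
    Joining nodes attach to the lowest-degree node; on a leave, the
    orphaned neighbors are re-linked to each other locally."""

    def __init__(self, max_degree=3):
        self.max_degree = max_degree
        self.neighbors = {}  # node -> set of neighbor nodes

    def join(self, node):
        self.neighbors[node] = set()
        candidates = [n for n in self.neighbors
                      if n != node and len(self.neighbors[n]) < self.max_degree]
        if candidates:
            # Prefer the least-loaded node to keep degrees balanced.
            target = min(candidates, key=lambda n: (len(self.neighbors[n]), n))
            self.neighbors[node].add(target)
            self.neighbors[target].add(node)

    def leave(self, node):
        orphans = sorted(self.neighbors.pop(node))
        for n in orphans:
            self.neighbors[n].discard(node)
        # Chain the orphaned fragments together so the tree stays connected;
        # only the departed node's former neighbors are touched.
        for a, b in zip(orphans, orphans[1:]):
            self.neighbors[a].add(b)
            self.neighbors[b].add(a)

net = Overlay(max_degree=3)
for n in ["a", "b", "c", "d", "e"]:
    net.join(n)
net.leave("a")  # the overlay heals locally around the departed node
```

The localized repair is what keeps reconfiguration cheap: routing state only needs to change on the handful of nodes adjacent to the departure.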
|
65 |
PS2DICOM: Explorando o paradigma Publish/Subscribe e a elasticidade em níveis aplicados ao procedimento de Telemedicina / PS2DICOM: exploring the Publish/Subscribe paradigm and multi-level elasticity applied to TelemedicinePaim, Euclides Palma 31 October 2017 (has links)
Submitted by JOSIANE SANTOS DE OLIVEIRA (josianeso) on 2018-02-22T12:31:13Z. Made available in DSpace on 2018-02-22T12:31:13Z (GMT). No. of bitstreams: 1
Euclides Palma Paim_.pdf: 2529933 bytes, checksum: 9c867ad7f5950b65e99f49343f096e8e (MD5)
Previous issue date: 2017-10-31 / Medical images are used daily to support diagnosis in different areas of Radiology throughout the world. These images follow an international standard defined by ISO 12052, known as the DICOM (Digital Imaging and Communications in Medicine) standard. Each institution that claims compliance with this standard has its own storage services and its own visualization and processing systems specific to these data. However, there is a great need for these images to be analyzed by different specialists, so that each case can be discussed broadly in the search for the best treatment for each pathology. The unavailability of real-time data for specialized medical evaluation has a direct and profound impact on therapeutic success.
The cloud computing model has the characteristics necessary to ensure that these images are within reach of the professionals best suited to each case, able to offer the best care. The large amount of resources available in the cloud to handle these data in a scalable way facilitates the creation of an infrastructure to support remote diagnosis through Telemedicine. Based on the Publish/Subscribe computational paradigm, we can establish real-time communication to address situations in the health field, such as communication between hospitals or clinics and among the doctors, nurses and specialists involved in a diagnosis. In clinical environments that deal with massive transmission of high-resolution DICOM images, as well as in environments with network performance problems, transmitting these images in a timely manner and storing and making them available securely is a problem with no ready-made solution. This work therefore proposes a cloud-based architecture to collect, compress, store and retrieve data using the Publish/Subscribe paradigm and two levels of scalability. The PS2DICOM model is a middleware that provides infrastructure resources at the IaaS (Infrastructure as a Service) layer, supporting the transmission and storage of files within the DICOM standard. The system offers data compression at different intensities, depending on the available bandwidth. The PS2DICOM model also features two levels of load balancing and the reactive elasticity offered by the infrastructure. The research contributes an effective architecture for optimizing network tasks, capable of being adopted as a solution when developing health-oriented cloud applications in future real-world settings. The architecture was tested using a prototype with distinct modules, developed for each specific service offered, and proved to be an efficient solution to the problems in question.
Its details are described in the following chapters, as well as its implementation, which corroborates the viability of the model.
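The bandwidth-dependent compression intensity described above can be sketched with zlib. The thresholds, function names, and payload are assumptions for illustration only, not PS2DICOM's actual configuration or codec.

```python
import zlib

def compression_level(bandwidth_mbps):
    """Pick a zlib level from the measured bandwidth: constrained links
    get heavier compression, fast links pay minimal CPU cost.
    Thresholds are illustrative."""
    if bandwidth_mbps < 2:
        return 9   # maximum compression for very slow links
    if bandwidth_mbps < 20:
        return 6   # balanced default
    return 1       # fast link: compress cheaply

def send_image(pixel_data: bytes, bandwidth_mbps: float) -> bytes:
    """Compress a payload for transmission at the level the link warrants."""
    return zlib.compress(pixel_data, compression_level(bandwidth_mbps))

payload = b"\x00\x01" * 50_000  # stand-in for DICOM pixel data
wire = send_image(payload, bandwidth_mbps=1.5)
assert zlib.decompress(wire) == payload  # lossless round trip
assert len(wire) < len(payload)
```

Trading CPU for bandwidth this way is what lets the transmission tasks stay timely on degraded networks while remaining cheap on good ones.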
|
66 |
Hur datakommunikationssäkerheten påverkas vid införandet av en meddelandeförmedlare / How data communication security is affected by the introduction of a message brokerGaupp, Erik, Jonsson, Jan January 2008 (has links)
Validated; 20101217 (root)
|
67 |
The Impress Context Store: A Coordination Framework for Context-Aware SystemsLi, Herman Hon Yu January 2006 (has links)
The dream of weaving technology into our everyday fabric of life is recently being made possible by advances in ubiquitous computing and sensor technologies. Countless sensors of various sizes have made their way into everyday commercial applications. Many projects aim to explore new ways to utilize these new technologies to aid and interact with the general population. Context-aware systems use available context information to assist users automatically, without explicit user input. By inferring user intent and configuring the system proactively for each user, context-aware systems are an integral part of achieving user-friendly ubiquitous computing environments.

A common issue with building a distributed context-aware system is the need to develop a supporting infrastructure providing features such as storage, distributed messaging, and security, before the real work on processing context information can be done. This thesis proposes a coordination framework that provides an effective common foundation for context-aware systems. The separation between the context-processing logic component and the underlying supporting foundation allows researchers to focus their energy on the context-processing part of the system, instead of spending their time re-inventing the supporting infrastructure.

As part of an ongoing project, Impress, the framework uses the open standard, Jabber, as its communication protocol. The Publish-Subscribe (pubsub) extension to Jabber provides features that match those needed by a context-aware system. The main contribution of this thesis is the design and implementation of a coordination framework, called the Impress Context Store, that provides an effective common foundation for context-aware systems.
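The coordination idea — a context store whose updates are pushed to subscribers — can be sketched without the Jabber transport. This in-process stand-in only illustrates the store-plus-notify pattern the thesis describes; the actual Impress Context Store runs over Jabber's pubsub extension, and all names here are invented.

```python
class ContextStore:
    """Minimal coordination sketch: a key-value context store whose
    writes are pushed to subscribers, decoupling context producers
    from the context-processing logic that consumes them."""

    def __init__(self):
        self.values = {}    # context key -> latest value
        self.watchers = {}  # context key -> list of callbacks

    def subscribe(self, key, callback):
        self.watchers.setdefault(key, []).append(callback)
        if key in self.values:          # late joiners get the retained value
            callback(key, self.values[key])

    def publish(self, key, value):
        self.values[key] = value        # store, then notify
        for cb in self.watchers.get(key, []):
            cb(key, value)

seen = []
store = ContextStore()
store.publish("alice/location", "office")
store.subscribe("alice/location", lambda k, v: seen.append(v))
store.publish("alice/location", "lab")
# seen == ["office", "lab"]: the retained value first, then the live update
```

Because storage and messaging live in the foundation, the context-processing logic is reduced to the callbacks it registers.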
|
69 |
Dynamic Differential Data Protection for High-Performance and Pervasive ApplicationsWidener, Patrick M. (Patrick McCall) 20 July 2005 (has links)
Modern distributed applications are long-lived, are expected to
provide flexible and adaptive data services, and must meet the
functionality and scalability challenges posed by dynamically changing
user communities in heterogeneous execution environments. The
practical implications of these requirements are that reconfiguration
and upgrades are increasingly necessary, but opportunities to perform
such tasks offline are greatly reduced. Developers are responding to
this situation by dynamically extending or adjusting application
functionality and by tuning application performance, a typical method
being the incorporation of client- or context-specific code into
applications' execution loops.
Our work addresses a basic roadblock in deploying such solutions: the protection of key
application components and sensitive data in distributed applications.
Our approach, termed Dynamic Differential Data Protection (D3P),
provides fine-grain methods for component-based protection
in distributed applications. Context-sensitive, application-specific
security methods are deployed at runtime to enforce restrictions in
data access and manipulation. D3P is suitable for low- or
zero-downtime environments, since deployments are performed while
applications run. D3P is appropriate for high performance environments
and for highly scalable applications like publish/subscribe, because
it creates native code via dynamic binary code generation. Finally,
due to its integration into middleware, D3P can run across a wide
variety of operating system and machine platforms.
This dissertation introduces D3P, using sample
applications from the high performance and pervasive computing domains
to illustrate the problems addressed by our D3P solution. It also
describes how D3P can be integrated into modern middleware. We
present experimental evaluations which demonstrate the fine-grain
nature of D3P, that is, its ability to capture individual end users'
or components' needs for data protection, and also describe the
performance implications of using D3P in data-intensive applications.
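D3P's central move — installing client-specific protection code into the data path while the application runs — can be loosely illustrated with a runtime-built filter. D3P generates native code via dynamic binary code generation; the Python closure below is only an analogy for that idea, and the policy format and field names are invented.

```python
def make_protection_filter(policy):
    """Build, at runtime, a per-client filter enforcing a data-access
    policy. Each client gets its own specialized filter function,
    mirroring (loosely) D3P's per-client generated code."""
    hidden = set(policy.get("redact", []))
    def filt(event: dict) -> dict:
        # Redact sensitive attributes; pass everything else through.
        return {k: ("<redacted>" if k in hidden else v)
                for k, v in event.items()}
    return filt

# Hypothetical policy: a nurse's view hides billing and identity fields.
nurse_view = make_protection_filter({"redact": ["ssn", "billing"]})
event = {"patient": "42", "pulse": 71, "ssn": "123-45-6789", "billing": 900}
print(nurse_view(event))
# {'patient': '42', 'pulse': 71, 'ssn': '<redacted>', 'billing': '<redacted>'}
```

Because the filter is created and installed at runtime, different consumers of the same event stream can each see a differently protected view without restarting the application.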
|
70 |
Scalable view-based techniques for web data : algorithms and systemsKatsifodimos, Asterios 03 July 2013 (has links) (PDF)
XML was recommended by the W3C in 1998 as a markup language for device- and system-independent representation of information. XML is nowadays used as a data model for storing and querying large volumes of data in database systems. In spite of significant research and systems development, many performance problems are raised by processing very large amounts of XML data. Materialized views have long been used in databases to speed up queries. Materialized views can be seen as precomputed query results that can be re-used to evaluate (part of) another query, and have been a topic of intensive research, in particular in the context of relational data warehousing. This thesis investigates the applicability of materialized view techniques to optimizing the performance of Web data management tools, in particular in distributed settings, considering XML data and queries. We make three contributions. We first consider the problem of choosing the best views to materialize within a given space budget in order to improve the performance of a query workload. Our work is the first to address the view selection problem for a rich subset of XQuery. The challenges we face stem from the expressive power and features of both the query and view languages and from the size of the search space of candidate views to materialize. While the general problem has prohibitive complexity, we propose and study a heuristic algorithm and demonstrate its superior performance compared to the state of the art. Second, we consider the management of large XML corpora in peer-to-peer networks based on distributed hash tables (or DHTs, in short). We consider a platform leveraging distributed materialized XML views, defined by arbitrary XML queries, filled with data published anywhere in the network, and exploited to efficiently answer queries issued by any network peer.
This thesis has contributed important scalability-oriented optimizations, as well as a comprehensive set of experiments deployed in a country-wide WAN. These experiments exceed similar competitor systems by orders of magnitude in terms of data volumes and data dissemination throughput; they are thus the most advanced in understanding the performance behavior of DHT-based XML content management in real settings. Finally, we present a novel approach for scalable content-based publish/subscribe (pub/sub, in short) in the presence of constraints on the available computational resources of data publishers. We achieve scalability by off-loading subscriptions from the publisher and leveraging view-based query rewriting to feed these subscriptions from the data accumulated in others. Our main contribution is a novel algorithm for organizing subscriptions in a multi-level dissemination network in order to serve large numbers of subscriptions, respect capacity constraints, and minimize latency. The efficiency and effectiveness of our algorithm are confirmed through extensive experiments and a large deployment in a WAN.
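The first contribution — view selection under a space budget — has the shape of a budgeted maximization problem. A greedy benefit-per-size heuristic (a generic sketch of the problem shape, not the thesis's algorithm; the candidate views and their numbers are invented) looks like:

```python
def select_views(candidates, budget):
    """Greedy heuristic: repeatedly take the candidate view with the
    best benefit-per-size ratio that still fits the space budget."""
    chosen, used = [], 0
    ranked = sorted(candidates,
                    key=lambda v: v["benefit"] / v["size"], reverse=True)
    for view in ranked:
        if used + view["size"] <= budget:
            chosen.append(view["name"])
            used += view["size"]
    return chosen

# Hypothetical candidate views: size in MB, benefit = estimated
# workload-cost reduction if the view is materialized.
candidates = [
    {"name": "v1", "size": 40, "benefit": 100},  # ratio 2.5
    {"name": "v2", "size": 30, "benefit": 90},   # ratio 3.0
    {"name": "v3", "size": 50, "benefit": 60},   # ratio 1.2
]
print(select_views(candidates, budget=80))  # ['v2', 'v1']
```

The hard part the thesis addresses, and which this sketch ignores, is that for a rich XQuery subset the candidate set itself is enormous and view benefits interact, which is why a purpose-built heuristic is needed rather than simple ranking.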
|