11 |
Developing Jxta Applications For Mobile Devices And Invoking Web Services Deployed In Jxta Platform From Mobile Devices. Bahadir, Mesut. 01 December 2004 (has links) (PDF)
Today, peer-to-peer (P2P) computing and Web services play an important role in networking and computing. P2P computing, which aims to address all the resources in a network and share them, is an old paradigm that has regained importance with the advent of popular file-sharing and instant-messaging applications. A Web service, on the other hand, is a software system whose interface allows applications to interact with other applications over the Internet or an intranet. Providing methods for publishing and discovering Web services that mobile devices can use in a P2P environment lets mobile devices exploit P2P and Web service technologies efficiently. It also extends the range of devices that can use these technologies from servers and desktop computers to personal digital assistants (PDAs) and mobile phones.
In this thesis, an architecture is introduced that enables publishing and discovering Web services for mobile clients interconnected in a P2P environment. Key issues in this architecture are allowing mobile devices to join a P2P network group, publishing Web services, and discovering these services in the P2P network; invoking the published and discovered Web services is another. The architecture exploits P2P and Web services standards using various tools for mobile devices: JXTA protocols and services organize the P2P environment, WSDL describes the Web services, JXTA advertisements support their publication and discovery, and BPEL enables their composition, deployment and execution. The architecture introduced within the scope of this thesis combines all these standards with tools that make them usable on mobile devices.
The work done in this thesis was realized as part of Artemis, a project funded by the European Commission to provide interoperability between medical information systems.
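The JXTA-advertisement pattern the abstract describes, publish a service description and later discover it by name, can be sketched as follows. This is a toy Python stand-in, not the JXTA API; the class, the service name and the WSDL URL are all illustrative.

```python
import time

class AdvertisementCache:
    """Toy stand-in for a JXTA-style advertisement cache: peers publish
    service advertisements (here, WSDL locations) with a lifetime, and
    other peers discover them by service name."""

    def __init__(self):
        self._ads = {}  # service name -> (wsdl_url, expiry timestamp)

    def publish(self, name, wsdl_url, lifetime_s=3600.0):
        self._ads[name] = (wsdl_url, time.time() + lifetime_s)

    def discover(self, name):
        entry = self._ads.get(name)
        if entry is None:
            return None
        wsdl_url, expiry = entry
        if time.time() > expiry:   # expired advertisements are dropped
            del self._ads[name]
            return None
        return wsdl_url

cache = AdvertisementCache()
cache.publish("PatientLookup", "http://example.org/patient?wsdl")
```

In real JXTA the advertisement also carries the peer group and transport details, and expiry is what forces republication of stale services.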
|
12 |
Trust Management for P2P application in Delay Tolerant Mobile Ad-hoc Networks: An Investigation into the development of a Trust Management Framework for Peer to Peer File Sharing Applications in Delay Tolerant Disconnected Mobile Ad-hoc Networks. Qureshi, Basit I. January 2011 (has links)
Security is essential to communication between entities on the Internet. Delay tolerant and disconnected Mobile Ad Hoc Networks (MANETs) are a class of networks characterized by high end-to-end path latency and frequent end-to-end disconnections, and are often termed challenged networks. In these networks nodes are sparsely populated and, without a central server, acquiring global information is difficult and impractical if not impossible, so traditional security schemes proposed for MANETs cannot be applied. This thesis reports trust management schemes for peer-to-peer (P2P) applications in delay tolerant disconnected MANETs. Properties of a profile-based file sharing application are analyzed and a framework for a structured P2P overlay over delay tolerant disconnected MANETs is proposed. The framework is implemented and tested on J2ME-based smart phones using the Bluetooth communication protocol. A lightweight Content Driven Data Propagation Protocol (CDDPP) for content-based data delivery in MANETs is presented; the CDDPP implements a user-profile-based, content-driven P2P file sharing application in disconnected MANETs. The CDDPP is further enhanced by an adaptive opportunistic multi-hop content-based routing protocol (ORP). ORP follows the store-carry-forward paradigm for multi-hop packet delivery in delay tolerant MANETs and allows multicasting to a selected number of nodes. The performance of ORP is compared with a similar autonomous gossiping (A/G) protocol using simulations. This work also presents a framework based on the dynamicity aware graph re-labelling system (DA-GRS) for trust management in mobile P2P applications. The DA-GRS uses a distributed algorithm to identify trustworthy nodes and generate trustable groups while isolating misleading or untrustworthy nodes.
Several simulations in various environment settings show the effectiveness of the proposed framework in creating trust-based communities. This work also extends the FIRE distributed trust model to MANET applications by incorporating witness-based interactions for acquiring trust ratings. A witness-graph building mechanism in FIRE+ is provided, with several trust-building policies to identify malicious nodes and detect collusive behaviour. This technique not only allows trust computation based on witness trust ratings but also protects against collusion attacks. Finally, M-trust, a lightweight trust management scheme based on the FIRE+ trust model, is presented.
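The witness-based trust computation described above can be illustrated with a minimal sketch: blend a node's direct-interaction ratings with witness-reported ratings, then isolate nodes that fall below a threshold. The weights, the threshold and the sample ratings are invented for illustration; this is the general shape of such a scheme, not the FIRE+ algorithm itself.

```python
def trust_score(direct, witness, w_direct=0.7, w_witness=0.3):
    """Blend direct-interaction ratings (in [0, 1]) with witness-reported
    ratings; direct experience is weighted more heavily because witnesses
    may lie or collude."""
    if not direct and not witness:
        return 0.0                       # no evidence: treat as untrusted
    d = sum(direct) / len(direct) if direct else None
    w = sum(witness) / len(witness) if witness else None
    if d is None:
        return w
    if w is None:
        return d
    return w_direct * d + w_witness * w

def partition(nodes, threshold=0.5):
    """Split nodes into a trusted group and an isolated group."""
    trusted = {n for n, (d, w) in nodes.items()
               if trust_score(d, w) >= threshold}
    return trusted, set(nodes) - trusted

nodes = {
    "a": ([0.9, 0.8], [0.7]),   # good direct history
    "b": ([0.1], [0.9, 0.9]),   # witnesses praise it, direct experience is bad
    "c": ([], []),              # stranger with no evidence
}
trusted, isolated = partition(nodes)
```

Note how node "b" is isolated even though its witnesses vouch for it: weighting direct experience above hearsay is one simple defence against collusive praise.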
|
13 |
Hybrid multicasting using Automatic Multicast Tunnels (AMT). Alwadani, Dhaifallah. January 2017
Native Multicast plays an important role in distributing and managing the delivery of some of the most popular Internet applications, such as IPTV and media delivery. However, due to patchy support and the existence of multiple approaches, support for Native Multicast is fragmented into isolated areas termed multicast islands. This renders Native Multicast unfit for Internet-wide applications. Instead, Application Layer Multicast, which has no such network requirements but is more expensive in terms of bandwidth and overhead, can be used to connect the native multicast islands. This thesis proposes Opportunistic Native Multicast (ONM), which employs Application Layer Multicast (ALM), on top of a DHT-based P2P overlay network, and Automatic Multicast Tunnelling (AMT) to connect these islands. ALM is used for discovering islands and initiating AMT tunnels, which encapsulate the traffic travelling between the islands' Primary Nodes (PNs). AMT was chosen for its added benefits, such as security and better support for traffic shaping and Quality of Service (QoS). While different approaches for connecting multicast islands exist, the system proposed in this thesis was designed with the following characteristics in mind: scalability, availability, interoperability, self-adaptation and efficiency. Importantly, by utilising AMT tunnels, the approach has unique properties that improve network security and management.
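As a rough illustration of the island-discovery step, each island's Primary Node could register itself under its multicast group in the DHT and look up the other PNs it needs AMT tunnels towards. All names and the key scheme here are hypothetical; the thesis's actual overlay and message formats will differ.

```python
import hashlib

class ToyDHT:
    """Minimal put/get multimap standing in for the DHT-based overlay."""

    def __init__(self):
        self._table = {}

    def put(self, key, value):
        self._table.setdefault(self._h(key), set()).add(value)

    def get(self, key):
        return set(self._table.get(self._h(key), set()))

    @staticmethod
    def _h(key):
        return hashlib.sha1(key.encode()).hexdigest()

def register_island(dht, group, primary_node):
    """A multicast island's Primary Node advertises itself under the group."""
    dht.put("mcast:" + group, primary_node)

def tunnel_targets(dht, group, own_pn):
    """The PNs a newly joined island would open AMT tunnels towards."""
    return dht.get("mcast:" + group) - {own_pn}

dht = ToyDHT()
register_island(dht, "233.0.0.1", "pn-london")
register_island(dht, "233.0.0.1", "pn-tokyo")
```

Keying the registry by group address means a joining island learns, with one lookup, exactly which remote PNs carry the traffic it wants to bridge.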
|
14 |
Distributed Contention-Free Access for Multi-hop IEEE 802.15.4 Wireless Sensor Networks. Khayyat, Ahmad. 26 October 2007
The IEEE 802.15.4 standard is a low-power, low-rate MAC/PHY standard that meets most of the stringent requirements of single-hop wireless sensor networks. Sensor networks with nodal populations of thousands of devices have been envisioned for environmental, vehicular, and military applications, to mention a few. However, such large deployments necessitate multi-hop support as well as low power consumption. In light of the standard's extremely limited joint support of these two attributes, this thesis presents two essential contributions. First, a framework is proposed to implement a new IEEE 802.15.4 operating mode, the synchronized peer-to-peer mode, designed to enable the standard's low-power features in peer-to-peer, multi-hop-ready topologies. The second contribution is a distributed Guaranteed Time Slot (dGTS) management scheme designed to function in the newly devised network mode. This protocol provides reliable contention-free access in peer-to-peer topologies in a completely distributed manner. Assuming optimal routing, our simulation experiments reveal perfect delivery ratios as long as the traffic load does not reach or surpass its saturation threshold, and dGTS sustains at least twice the delivery ratio of contention access under sub-optimal dynamic routing. Moreover, the dGTS scheme exhibits minimum power consumption by eliminating the retransmissions attributed to contention, which in turn reduces the number of transmissions to a minimum. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2007-10-25 14:55:36.811
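The contention-free slot selection that a dGTS-style scheme must perform can be caricatured as: claim slots that no overheard neighbour has already claimed, within the superframe's GTS budget, or fall back to contention access. This sketch shows the conflict-avoidance idea only, not the dGTS protocol; the seven-slot budget mirrors the 802.15.4 limit of at most seven GTSs per superframe.

```python
def choose_gts(requested, neighbour_slots, slots_per_superframe=7):
    """Pick `requested` contention-free slots that no neighbour (whose
    claims this node has overheard) is already using.  Returns the chosen
    slot indices, or None when the node must fall back to contention
    access because too few slots remain free."""
    taken = set().union(*neighbour_slots) if neighbour_slots else set()
    free = [s for s in range(slots_per_superframe) if s not in taken]
    if len(free) < requested:
        return None
    return free[:requested]
```

In a distributed setting the interesting part, omitted here, is learning `neighbour_slots` reliably and resolving simultaneous claims; the selection step itself stays this simple.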
|
15 |
Structured peer-to-peer overlays for NATed churn intensive networks. Chowdhury, Farida. January 2015
The widespread coverage and ubiquitous presence of mobile networks has propelled the usage and adoption of mobile phones to an unprecedented level around the globe. The computing capabilities of these phones have improved considerably, supporting a vast range of third-party applications. Simultaneously, Peer-to-Peer (P2P) overlay networks have experienced tremendous growth in usage and popularity in recent years, particularly in fixed wired networks. In particular, Distributed Hash Table (DHT) based structured P2P overlays offer major advantages to users of mobile devices and networks, such as a scalable, fault-tolerant and self-managing infrastructure with no single point of failure. Integrating P2P overlays with mobile networks seems a logical progression, considering the popularity of both technologies. However, it imposes several challenges, such as the limited hardware capabilities of mobile phones and the churn-intensive nature of mobile networks (churn being the frequent joining and leaving of nodes), combined with limited yet expensive bandwidth. This thesis investigates the feasibility of extending P2P to mobile networks so that users can take advantage of both technologies. It utilises OverSim, a P2P simulator, to experiment with the performance of various P2P overlays under high churn and bandwidth consumption, the two most crucial constraints of mobile networks. The experimental results show that Kademlia and EpiChord are the two most appropriate P2P overlays for mobile networks. Furthermore, Network Address Translation (NAT) is a major barrier to the adoption of P2P overlays in mobile networks, and integrating NAT traversal approaches with P2P overlays is a crucial step for them to operate successfully there.
This thesis presents a general approach to NAT traversal for ring-based overlays without the use of a single dedicated server, implemented in OverSim. Several experiments were performed under NATs to determine the suitability of the chosen P2P overlays in NATed environments. The results show that the performance of these overlays is comparable in terms of successful lookups in both NATed and non-NATed environments, with Kademlia and EpiChord exhibiting the best performance. Both the presence of NATs and the level of churn in a network influence the routing techniques used in P2P overlays: recursive routing is more resilient to the IP connectivity restrictions posed by NATs but not very robust under high churn, whereas iterative routing suits high-churn networks but is difficult to use in NATed environments. Kademlia supports both routing schemes, whereas EpiChord supports only iterative routing, which undermines its usefulness in NATed environments. To harness the advantages of both schemes, this thesis presents an adaptive routing scheme, the Churn Aware Routing Protocol (ChARP), combining recursive and iterative lookups, in which nodes switch between the two depending on their lifetimes. The proposed approach has been implemented in OverSim and several experiments have been carried out. The results indicate improved performance, which in turn validates the applicability and suitability of ChARP in NATed environments.
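A ChARP-like switching rule might look as follows. The exact policy is a guess for illustration only, built from the abstract's observations (recursive routing tolerates NATs, iterative routing tolerates churn) plus the common heuristic that a node that has already outlived the median session length is statistically likely to stay.

```python
def routing_mode(uptime_s, median_session_s, behind_nat):
    """Choose a lookup style per node.  Recursive routing tolerates NAT
    connectivity restrictions; iterative routing tolerates churn.  A node
    that has outlived the median session is treated as stable enough to
    afford recursive lookups even without NAT pressure."""
    if behind_nat:
        return "recursive"       # iterative lookups fail across NATs
    likely_stable = uptime_s >= median_session_s
    return "recursive" if likely_stable else "iterative"
```

The point of an adaptive rule is that the choice is re-evaluated as a node's uptime grows, so a network of long-lived NATed phones drifts toward recursive routing on its own.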
|
16 |
台灣P2P借貸平台策略分析及探討其商業模式之適用性 / The Platform Strategy of P2P Lending, and the Applicability of the Business Model in Taiwan. 巫瑞芬 (Wu, Ruei Fen). Unknown Date (has links)
A peer-to-peer (P2P) lending platform is an online platform that matches lending between individuals. People with idle funds can use the platform to choose borrowers they are willing to fund and lend their money in return for interest, while people who need funds can use the platform to find willing lenders. Borrowing rates are assessed and set by the platform operator according to its own method, so for investors a P2P lending platform is an alternative wealth-management channel, and for borrowers it is an emerging financing option. 2016 can be regarded as the first year of P2P online lending in Taiwan: platforms such as LnB信用市集 and 鄉民貸 were founded in succession, each designing a differentiated operating model and offering the Taiwanese public a new form of lending and investment service.
Using case-based practical analysis and validation, grounded in platform-strategy theory and structured around the elements of the business model, this study examines the motivations and goals of the founders of the case companies LnB信用市集, 鄉民貸 and FundPark in building their platforms, and how they foster interaction among platform participants, solve users' problems, and create value jointly with them. The research questions are summarised as follows:
(1) The feasibility of developing P2P lending platforms in Taiwan.
(2) The problems the case companies' operating models have encountered in Taiwan, and a comparison between them. / The peer-to-peer lending platform (hereafter "P2P lending platform") matches lenders with borrowers through online services. Investors holding idle capital can choose targets on the website to lend money to and gain higher returns, while borrowers' funding needs are satisfied by the platform's matching mechanism. The P2P lending platform has thus transformed the traditional idea that finance has to be handled through financial institutions.
The P2P lending industry in Taiwan has developed since 2016. The first two P2P lending companies, Lend & Borrow (LnB信用市集) and Lend (鄉民貸), were founded during 2015 and 2016 and are devoted to offering reasonable interest rates to lenders and borrowers.
Based on the concept of platform strategy and the elements of the business model, this study investigates the objectives and motivations of the founders when they founded their P2P lending companies, and discusses how they increased the interaction of the players in the platform ecosystem, through case studies of the P2P lending companies Lend & Borrow (LnB信用市集), Lend (鄉民貸) and FundPark.
Therefore, the research questions of this study are as follows:
1. The practicability of the P2P lending platform in Taiwan.
2. A comparison of the business models, and the obstacles that the P2P lending companies face in Taiwan.
|
17 |
Distributed virtual environment scalability and security. Miller, John. January 2011
Distributed virtual environments (DVEs) have been an active area of research and engineering for more than 20 years. The most widely deployed DVEs are network games such as Quake, Halo, and World of Warcraft (WoW), with millions of users and billions of dollars in annual revenue. Deployed DVEs remain expensive centralized implementations despite significant research outlining ways to distribute DVE workloads. This dissertation shows that previous DVE research evaluations are inconsistent with deployed DVE needs. Assumptions about avatar movement and proximity, the fundamental scale factors, do not match WoW's workload, and likely not the workloads of other deployed DVEs. Alternative workload models are explored and preliminary conclusions presented. Using realistic workloads, it is shown that a fully decentralized DVE cannot be deployed to today's consumers, regardless of its overhead. Residential broadband speeds are improving, and this limitation will eventually disappear; when it does, appropriate security mechanisms will be a fundamental requirement for adoption. A trusted auditing system ('Carbon') is presented which has good security, scalability, and resource characteristics for decentralized DVEs. When performing exhaustive auditing, Carbon adds 27% network overhead to a decentralized DVE with a WoW-like workload; this consumption can be reduced significantly, depending on the DVE's risk tolerance. Finally, the Pairwise Random Protocol (PRP) is described. PRP enables adversaries to fairly resolve probabilistic activities, an ability missing from most decentralized DVE security proposals. This dissertation's contribution is thus to address two obstacles to deploying research on decentralized DVE architectures: the lack of evidence that research results apply to existing DVEs, and the lack of security systems combining appropriate security guarantees with acceptable overhead.
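The commit-reveal primitive that fair-resolution protocols such as PRP build on can be sketched as follows; this shows the primitive only, not PRP itself. One party commits to a hidden bit, the other answers in the clear, and the XOR of the two decides the probabilistic outcome, so neither adversary can see the other's choice before fixing its own.

```python
import hashlib
import secrets

def make_commitment(bit):
    """Commit to a bit without revealing it: publish H(nonce || bit).
    The random nonce prevents the other side from brute-forcing the bit."""
    nonce = secrets.token_bytes(16)
    digest = hashlib.sha256(nonce + bytes([bit])).hexdigest()
    return digest, nonce

def verify_commitment(digest, nonce, bit):
    """Check that a revealed (nonce, bit) matches the earlier commitment."""
    return hashlib.sha256(nonce + bytes([bit])).hexdigest() == digest

def fair_bit(a_bit, b_bit):
    """A commits first, B answers in the clear, A reveals; the XOR is
    unbiased as long as either party chose its bit honestly at random."""
    digest, nonce = make_commitment(a_bit)           # A -> B: digest
    # B -> A: b_bit (B has seen only the digest, not a_bit)
    assert verify_commitment(digest, nonce, a_bit)   # A -> B: reveal
    return a_bit ^ b_bit
```

An aborting cheater (refusing to reveal after seeing the other value) still has to be handled by the surrounding protocol, which is part of what makes a full scheme like PRP more involved than this sketch.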
|
18 |
Robust, fault-tolerant majority based key-value data store supporting multiple data consistency. Khan, Tareq Jamal. January 2011 (has links)
Web 2.0 has significantly transformed the way modern society works. In today's Web, information not only flows top-down from web sites to readers, but also bottom-up, contributed by masses of users. Hugely popular Web 2.0 applications like wikis, social applications (e.g. Facebook, MySpace), media sharing applications (e.g. YouTube, Flickr), blogging and numerous others generate large amounts of user-generated content and make heavy use of the underlying storage. The data storage system is the heart of these applications, as all user activities are translated into read and write requests and directed to the database for further action. Hence the focus is on the storage that serves data to support the applications; its reliable and efficient design is instrumental for applications to perform in line with expectations. Large-scale storage systems are used by popular social networking services like Facebook and MySpace, where millions of users' data are stored and fully accessible to these companies. From the users' point of view, however, there has been justified concern about data ownership and the lack of control over personal data. For example, on more than one occasion Facebook has exercised control over users' data without respecting their ownership of their own content, and has manipulated data for its own business interests without users' knowledge or consent. This thesis proposes, designs and implements a large-scale, robust and fault-tolerant key-value data storage prototype that is peer-to-peer based and backs away from the client-server paradigm, with a view to relieving companies of data storage and management responsibilities and letting users control their own personal data.
Several read and write APIs (similar to Yahoo!'s PNUTS, but different in their underlying design and target environment) with various data consistency guarantees are provided, from which a wide range of web applications can choose according to their data consistency, performance and availability requirements. An analytical comparison is also made against the PNUTS system, which targets a more stable environment. For evaluation, simulations were carried out to test the system's availability, scalability and fault tolerance in a dynamic environment. The results are analyzed, and the conclusion drawn is that the system is scalable, available and shows acceptable performance.
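The majority-quorum mechanics underlying such a store can be sketched as follows. This is the textbook idea only (any write majority intersects any read majority, so a majority read always sees the latest majority write), not the prototype's actual API or failure handling.

```python
class Replica:
    """One replica of a single key: a version counter plus a value."""
    def __init__(self):
        self.version = 0
        self.value = None

def quorum_write(replicas, value):
    """A write succeeds once a majority of replicas acknowledge it.
    A real system would issue these requests in parallel and tolerate
    some replicas failing or being unreachable."""
    needed = len(replicas) // 2 + 1
    new_version = max(r.version for r in replicas) + 1
    acks = 0
    for r in replicas:
        r.version, r.value = new_version, value
        acks += 1
    return acks >= needed

def quorum_read(replicas):
    """Read a majority and return the freshest value seen: because any
    two majorities overlap, at least one contacted replica holds the
    version written by the most recent successful write."""
    needed = len(replicas) // 2 + 1
    sample = replicas[:needed]
    freshest = max(sample, key=lambda r: r.version)
    return freshest.value

replicas = [Replica() for _ in range(5)]
quorum_write(replicas, "v1")
quorum_write(replicas, "v2")
```

Weaker, cheaper consistency levels (of the kind the thesis's multiple APIs expose) amount to shrinking the read or write quorum below a majority and accepting that the overlap guarantee no longer holds.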
|
19 |
Robust video streaming over time-varying wireless networks. Demircin, Mehmet Umut. 03 July 2008
Multimedia services and applications have become the driving force in the development and widespread deployment of wireless broadband access technologies and high-speed local area networks. Mobile phone service providers offer a wide range of multimedia applications over high-speed wireless data networks: people can watch live TV, stream on-demand video clips and place video-telephony calls using multimedia-capable mobile devices, which will soon support capturing and displaying high-definition video. A similar evolution is occurring in the local-area domain. Video receivers and storage devices were conventionally connected to displays using cables; wireless local area networking (WLAN) technologies now offer convenient, cable-free connectivity. Media over wireless home networks avoids the cable mess and makes portable TVs mobile.
However, challenges remain in improving the quality of service (QoS) of multimedia applications. Conventional service architectures, network structures and protocols fail to provide a robust distribution medium, since most were not designed for the high data rates and real-time transmission requirements of digital video.
In this thesis the challenges of wireless video streaming are addressed in two main categories. Streaming-protocol issues constitute the first category. We refer to the collection of network protocols that transmit compressed digital video from a source to a receiver as the streaming protocol; the objective of streaming-protocol solutions is high-quality video transfer between two networked devices.
Novel application-layer video bit-rate adaptation methods are designed to handle short- and long-term bandwidth variations of wireless local area network (WLAN) links. Both transrating and scalable video coding techniques are used to make the video bit-rate flexible. Another contribution of this thesis is an error control method that dynamically adjusts the forward error correction (FEC) rate based on channel bit-error rate (BER) estimation and the video coding structure.
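The BER-driven part of such an FEC adjustment can be caricatured as a lookup from estimated bit-error rate to code rate: the noisier the channel, the more parity (a lower rate k/n) is spent. The thresholds and rates below are invented for illustration and are not the thesis's tuned values, which also account for the video coding structure.

```python
def pick_fec_rate(estimated_ber,
                  table=((1e-6, 0.95), (1e-5, 0.9), (1e-4, 0.8), (1e-3, 0.67))):
    """Map an estimated channel bit-error rate to an FEC code rate k/n.
    Entries are (BER threshold, code rate) in increasing noise order;
    channels worse than every threshold get the heaviest protection."""
    for ber_threshold, code_rate in table:
        if estimated_ber <= ber_threshold:
            return code_rate
    return 0.5   # worst channels: half of the transmitted symbols are parity
```

Because added parity competes with the video payload for the same link budget, the FEC rate chosen here would in practice feed back into the bit-rate adaptation described above.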
The second category comprises streaming-service issues, which generally surface in large-scale systems. Service-system solutions target system scalability and low-cost, high-quality service to consumers. Peer-to-peer assisted video streaming technologies are developed to reduce the load on video servers, and novel video file segment caching strategies are proposed for more efficient peer-to-peer collaboration.
|
20 |
Gerenciamento de uma estrutura híbrida de TI dirigido por métricas de negócio / Management of a hybrid IT structure driven by business metrics. MACIEL JÚNIOR, Paulo Ditarso. 31 July 2018 (has links)
Previous issue date: 2013-06-14 / CNPq / Capes
With the emergence of the cloud computing paradigm and the continuous search to reduce the cost of running Information Technology (IT) infrastructures, we are currently experiencing an important change in the way these infrastructures are assembled, configured and managed. In this research we consider the problem of managing a hybrid high-performance computing infrastructure whose processing elements comprise in-house dedicated machines, virtual machines acquired from cloud computing providers, and remote virtual machines made available by a best-effort peer-to-peer (P2P) grid. The applications that run in this hybrid infrastructure are characterised by a utility function: the utility yielded by the completion of an application depends on the time taken to execute it. We take a business-driven approach to managing this infrastructure, aiming to maximise the total profit achieved. Applications are run using computing power from both in-house resources and the best-effort grid whenever possible. Any extra capacity required to improve the profitability of the infrastructure is purchased from the cloud computing market. We also assume that this extra capacity can be reserved for future use through either short- or long-term contracts, negotiated without human intervention. For short-term contracts, the cost per unit of computing resource may vary significantly between contracts, with more urgent contracts normally being more expensive. Furthermore, due to the uncertainty inherent in the best-effort grid, it may not be possible to know in advance exactly how much computing resource will be needed from the cloud computing market. Overestimating the amount of resources required leads to reserving more capacity than is necessary, while underestimating leads to negotiating additional contracts later to acquire the remaining capacity.
In this context, we propose heuristics to be used by a contract-planning agent to balance the cost of running the applications against the utility achieved by their execution, with the aim of producing a high overall profit. We demonstrate that the ability to estimate the grid's behaviour is an important condition for making contracts that use the hybrid IT infrastructure efficiently.
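The over/underestimation trade-off the contract-planning heuristics must balance can be shown with a toy model: reserve early only the capacity the best-effort grid is unlikely to deliver, and buy any remaining shortfall later at the (usually higher) urgent-contract price. The discounting rule and the prices are invented for illustration; the thesis's heuristics are considerably richer.

```python
def early_reservation(demand_cpu_h, grid_estimate_cpu_h, confidence):
    """Reserve up front only the capacity the grid is unlikely to cover;
    `confidence` in [0, 1] discounts the best-effort grid estimate."""
    return max(0.0, demand_cpu_h - grid_estimate_cpu_h * confidence)

def total_cost(demand_cpu_h, grid_delivered_cpu_h, reserved_cpu_h,
               early_price, urgent_price):
    """Capacity still missing at run time must be bought through urgent
    (more expensive) contracts; over-reserved capacity is paid for anyway."""
    shortfall = max(0.0, demand_cpu_h - grid_delivered_cpu_h - reserved_cpu_h)
    return reserved_cpu_h * early_price + shortfall * urgent_price

# Demand of 100 CPU-hours; the grid promises 80 but we only half-trust it.
reserved = early_reservation(100, 80, 0.5)   # reserve 60 CPU-hours early
```

Comparing `total_cost` for generous versus stingy reservations under different grid outcomes is exactly the experiment a planning agent would run, which is why estimating the grid's behaviour well matters so much for profit.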
|