  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Parallel computing to reduce the response time of agricultural data mining

Abreu, Cristian Cosmoski Rangel de 30 April 2013 (has links)
Made available in DSpace on 2017-07-21T14:19:37Z (GMT). No. of bitstreams: 1 Cristian Abreu.pdf: 2219271 bytes, checksum: 3d770700a8027fff9a36f6287c8c4e54 (MD5) Previous issue date: 2013-04-30 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The objective of this study was to investigate the use of parallel computing to reduce the response time of data mining in agriculture. To this end, a tool called Fast Weka was defined and implemented. The tool runs data mining algorithms and exploits parallelism both on multi-core computers, using threads, and on distributed systems built as peer-to-peer networks. Parallelism is exploited through the data parallelism inherent in the cross-validation process (folds). The tool was evaluated through experiments with artificial neural network data mining algorithms applied to a forest cover type data set. Both multi-threaded computing and computing on peer-to-peer networks reduced the response time of data mining tasks. The best results were achieved when the number of cross-validation folds was a multiple of the number of threads or peers: an efficiency of 87% was observed when 4 threads were used for 24 folds, and an efficiency of 86% was also observed on peer-to-peer networks using 24 folds with 11 peers.
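Since the abstract turns on fold-level data parallelism, a minimal sketch of the idea may help. Fast Weka itself is a Java/Weka tool that is not reproduced here; the stub learner, helper names and timings below are illustrative assumptions only. Efficiency is speedup divided by the number of threads, which is how the 87% figure for 4 threads and 24 folds is read.

```python
# Minimal sketch of fold-level data parallelism in cross-validation.
# The stub learner and helper names are assumptions; they do not come from Fast Weka.
import time
from concurrent.futures import ThreadPoolExecutor

def evaluate_fold(dataset, fold, n_folds):
    """Train on all folds except `fold` and return a score.
    The sleep stands in for real training cost (e.g. a neural network)."""
    test = dataset[fold::n_folds]
    train = [x for i, x in enumerate(dataset) if i % n_folds != fold]
    time.sleep(0.01)
    return len(test) / max(len(train), 1)   # dummy score

def cross_validate(dataset, n_folds=24, n_workers=4):
    # Folds are independent, so they can be evaluated concurrently.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        scores = list(pool.map(lambda f: evaluate_fold(dataset, f, n_folds),
                               range(n_folds)))
    return sum(scores) / n_folds

if __name__ == "__main__":
    data = list(range(10_000))
    t0 = time.time(); cross_validate(data, n_workers=1); t_seq = time.time() - t0
    t0 = time.time(); cross_validate(data, n_workers=4); t_par = time.time() - t0
    speedup = t_seq / t_par
    # Efficiency = speedup / workers; the thesis reports ~87% for 4 threads and 24 folds.
    print(f"speedup={speedup:.2f}, efficiency={speedup / 4:.0%}")
```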
252

Climate simulation of wind data on P2P networks using GPUs

Baron Neto, Ciro 28 February 2014 (has links)
Made available in DSpace on 2017-07-21T14:19:39Z (GMT). No. of bitstreams: 1 Ciro Baron Neto.pdf: 1513768 bytes, checksum: a9f4624d5d9521cfa109fa40a688cbb2 (MD5) Previous issue date: 2014-02-28 / This work evaluates GPGPU (General-Purpose Computing on Graphics Processing Units) and P2P (peer-to-peer) network technologies as a way to improve the response time of climate data simulations. To this end, an application using the CUDA (Compute Unified Device Architecture) architecture and the wind data simulation model of the Venthor simulator was first developed and then integrated into the P2PComp framework. The results indicate an acceleration factor of 70 on a single computer. Furthermore, by using a P2P network to share the processing, even higher acceleration factors can be obtained. Computer simulation models usually demand high processing power, and this work showed that parallelism on GPUs and P2P networks is an alternative that delivers better performance than sequential computing.
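As a rough illustration of how the single-node GPU factor and P2P distribution might compose, the sketch below applies an Amdahl-style model. Only the 70x single-node figure comes from the abstract; the parallel fraction and the assumption that independent simulation runs scale with the number of peers are illustrative, not results from the thesis.

```python
# Rough Amdahl-style estimate of combining a per-node GPU speedup with
# distribution over P2P peers. Only the 70x single-node factor comes from
# the abstract; everything else is an illustrative assumption.
def combined_speedup(gpu_speedup=70.0, n_peers=1, parallel_fraction=0.95):
    across_peers = 1.0 / ((1 - parallel_fraction) + parallel_fraction / n_peers)
    return gpu_speedup * across_peers

for peers in (1, 4, 8):
    print(f"{peers} peer(s): ~{combined_speedup(n_peers=peers):.0f}x")
```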
253

The Wisdom of Crowds as a Model for Trust and Security in Peer Groups

Whitney, Justin D 29 September 2005 (has links)
"Traditional security models are out of place in peer networks, where no hierarchy ex- ists, and where no outside channel can be relied upon. In this nontraditional environment we must provide traditional security properties and assure fairness in order to enable the secure, collaborative success of the network. One solution is to form a Trusted Domain, and exclude perceived dishonest and unfair members. Previous solutions have been intolerant of masquerading, and have suffered from a lack of precise control over the allocation and exercise of privileges within the Trusted Domain. Our contribution is the introduction of a model that allows for controlled access to the group, granular control over privileges, and guards against masquerading. Contin- ued good behavior is rewarded by an escalation of privileges, while requiring an increased commitment of resources. Bad behavior results in expulsion from the Trusted Domain. In colluding with malicious nodes, well behaved nodes risk losing privileges gained over time; collusion is thereby discouraged. We implement our solution on top of the Bouncer Toolkit, produced by Narasimha et al. [7], as a prototype peer to peer network. We make use of social models for trust from [], and rely on new cryptographic primitives from the field of Threshold Cryptography. We present the results of an experimental analysis of its performance for a number of thresholds, and present observations on a number of important performance and security improvements that can be made to the underlying toolkit."
254

Hybrid multicasting using Automatic Multicast Tunnels (AMT)

Alwadani, Dhaifallah January 2017 (has links)
Native Multicast plays an important role in distributing and managing delivery of some of the most popular Internet applications, such as IPTV and media delivery. However, due to patchy support and the existence of multiple approaches, support for Native Multicast is fragmented into isolated areas termed Multicast Islands. This renders Native Multicast unfit for Internet-wide applications. Instead, Application Layer Multicast, which has no such network requirements but is more expensive in terms of bandwidth and overhead, can be used to connect the native multicast islands. This thesis proposes Opportunistic Native Multicast (ONM), which employs Application Layer Multicast (ALM), on top of a DHT-based P2P overlay network, together with Automatic Multicast Tunnelling (AMT) to connect these islands. ALM is used for discovery and for initiating the AMT tunnels, which encapsulate the traffic flowing between the islands' Primary Nodes (PNs). AMT was chosen for its added benefits, such as security and better support for traffic shaping and Quality of Service (QoS). While different approaches for connecting multicast islands exist, the system proposed in this thesis was designed with the following characteristics in mind: scalability, availability, interoperability, self-adaptation and efficiency. Importantly, by utilising AMT tunnels, this approach has unique properties that improve network security and management.
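The abstract describes a two-step mechanism: discover the remote island's Primary Node over the ALM/DHT overlay, then bridge the islands with an AMT tunnel. A minimal sketch of that control flow follows; the DHT and tunnel objects are placeholders, not the thesis's API.

```python
# Minimal sketch of the Opportunistic Native Multicast idea described above:
# use the ALM/DHT overlay to discover the Primary Node (PN) of a remote
# multicast island, then bridge the islands with an AMT tunnel. The DHT and
# tunnel classes are placeholders, not APIs from the thesis.
class OpportunisticNativeMulticast:
    def __init__(self, dht, local_pn):
        self.dht = dht              # DHT-based P2P overlay used for discovery
        self.local_pn = local_pn    # this island's Primary Node
        self.tunnels = {}           # group address -> AMT tunnel

    def join_group(self, group_addr):
        # 1. Discovery over the ALM overlay: find PNs that already carry the group.
        remote_pns = self.dht.lookup(group_addr)
        for pn in remote_pns:
            if pn != self.local_pn and group_addr not in self.tunnels:
                # 2. Encapsulate inter-island traffic in an AMT tunnel between PNs.
                self.tunnels[group_addr] = self.local_pn.open_amt_tunnel(pn, group_addr)
        # 3. Register this PN so other islands can discover it later.
        self.dht.announce(group_addr, self.local_pn)

    def forward(self, group_addr, packet):
        # Native multicast inside the island; AMT tunnel between islands.
        self.local_pn.native_multicast(group_addr, packet)
        tunnel = self.tunnels.get(group_addr)
        if tunnel is not None:
            tunnel.send(packet)
```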
255

How is trust created between users and hosts when using services in the sharing economy? : A qualitative study of how Airbnb's hosts and users build trust with one another.

Ahmed, Aland, Ayanle Omar, Ifrah January 2018 (has links)
The purpose of the study was to examine trust between Airbnb hosts and users. The complexity of the sharing economy lies in the fact that it lacks formal contracts and laws that support the private individuals who enter into an agreement. This type of economy is not entirely without laws, but laws that usually apply outside the sharing economy do not cover this organizational form. The problem then arises of how trust is created between the parties, since there are no laws to lean on in disputes, fraud and so on. The sharing economy is already characterized by uncertainty, and because booking takes place digitally, the uncertainty is even greater. The actors are forced to find ways to create security in the form of trust. The authors of the study have therefore investigated how Airbnb's hosts and users create trust between each other. The study was based on a qualitative research method in the form of twelve semi-structured telephone interviews. The study's theoretical frame of reference consisted of previous research on the sharing economy, trust in the sharing economy and peer-to-peer asset sharing. The result was that respondents use the feedback system, consisting of reviews, together with the person's profile to form an idea of the other person, and then form their own impression when they have a conversation with that person. Communication, the feedback system and the profile are thus the three channels that give an impression, which in turn shapes and creates trust between users and hosts on Airbnb.
256

Mapping different stakeholders' experiences and opinions of the sharing economy : A study of Airbnb's impact on social actors and the population in Stockholm

Matatko, Amanda, Piotrowska, Liza January 2019 (has links)
The phenomenon of renting out one's own home is a trend that has spread worldwide. From a spatial-behavior perspective, human territoriality is opened up in society as power over one's own geographical area is made accessible to people with whom the host need not have any prior relation. A business idea based on the sharing economy that has gained ground today is Airbnb. This study aims to map experiences and opinions of the sharing economy and Airbnb from two perspectives: central social actors and the population. A mix of qualitative and quantitative methods is used. The study shows that there are differences in experiences and opinions, with consequences for the population such as uncertain regulations and income declaration. Our research also shows that the sharing economy changes social behavior, which at best can become a creative development for the population who want to be part of it.
257

Distributed OpenCL : a platform for distributed, heterogeneous computing for domain scientists

Dillon, William H. (William Hall) 29 May 2012 (has links)
It is possible to purchase, for as little as $10,000, a cluster of computers with the capability to rival the supercomputers of only a few years ago. Now, users that have little to no experience developing distributed applications or managing a cluster are in a position to do so. To allow domain scientists to effectively utilize these resources, Distributed OpenCL (DOCL) was developed. DOCL is an easy-to-use foundation for peer-to-peer distributed computation on small to medium clusters. It is assumed that the end-user is a domain scientist, familiar with model development in environments such as Matlab, though inexperienced with distributed computation or parallel programming. The scope of this work includes the definition of a peer-to-peer protocol for discovering and establishing relationships with every node within a multicast domain, using the concepts of Zero-Configuration Networking, multicast DNS, and DNS Service Discovery. A problematic edge case of multicast DNS is detailed along with a mitigation technique. An XML schema is also described for basic peer communication and cluster management and inventory. A system for scheduling algorithm tasks on the cluster of heterogeneous compute devices was developed, including an automatic computation and communication cost measurement system. Finally, a graphical programming language was designed and implemented that allows non-expert programmers and modelers to develop new applications in a straightforward, accessible way. / Graduation date: 2012
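The scheduling part of the abstract, placing tasks on heterogeneous compute devices using measured computation and communication costs, can be illustrated with a small greedy sketch. The device figures, cost model and greedy policy below are assumptions for illustration and are not taken from DOCL.

```python
# Illustrative sketch of cost-aware scheduling over heterogeneous devices, in
# the spirit of the DOCL scheduler described above (which measures costs
# automatically). Device names, the cost model and the greedy policy are
# assumptions, not taken from the thesis.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    gflops: float          # measured compute throughput
    mb_per_s: float        # measured link bandwidth to this peer
    queued_s: float = 0.0  # work already assigned, in seconds

@dataclass
class Task:
    name: str
    gflop: float           # estimated compute cost
    input_mb: float        # data that must be transferred to the device

def schedule(tasks, devices):
    """Greedily place each task where its estimated finish time is smallest."""
    plan = {}
    for task in sorted(tasks, key=lambda t: t.gflop, reverse=True):
        def finish_time(d):
            return d.queued_s + task.input_mb / d.mb_per_s + task.gflop / d.gflops
        best = min(devices, key=finish_time)
        best.queued_s = finish_time(best)
        plan[task.name] = best.name
    return plan

devices = [Device("laptop-cpu", 20, 1000), Device("node1-gpu", 400, 100)]
tasks = [Task("fft", 50, 64), Task("filter", 5, 8), Task("matmul", 200, 128)]
print(schedule(tasks, devices))
```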
258

State and file sharing in peer-to-peer systems

Zou, Li 07 June 2004 (has links)
No description available.
259

Efficient Information Dissemination in Wide Area Heterogeneous Overlay Networks

Zhang, Jianjun 11 July 2006 (has links)
In this dissertation research we study and address the unique challenges involved in information sharing and dissemination for large-scale group communication applications. We focus on system architectures and techniques for efficient and scalable information dissemination in distributed P2P environments. Our solutions target three representative P2P overlay networks: a structured P2P network based on consistent hashing, an unstructured Gnutella-like P2P network, and a P2P GeoGrid based on the geographical location and proximity of end nodes. We make three unique contributions to the general field of large-scale information sharing and dissemination. First, we propose a landmark-based peer clustering technique that groups end-system nodes by their network proximity, together with a communication management technique that addresses load balancing and reliability of group communication applications in structured P2P networks. Second, we develop a utility-based P2P group communication service middleware, consisting of utility-based topology management and utility-aware P2P routing, for providing scalable and efficient group communication services in an unstructured P2P overlay network of heterogeneous peers. Third, we propose an overlay network management protocol that is aware of the geographical location of end-system nodes, along with a set of routing and adaptation techniques, aiming at building decentralized information dissemination service networks to support location-based applications and services. Although different overlay networks require different system designs for building scalable and efficient information dissemination services, we employ three common design philosophies: (1) exploiting end-system heterogeneity, (2) utilizing proximity information of end-system nodes to localize most of the communication traffic, and (3) using randomized shortcuts to accelerate long-distance communication. We demonstrate these design philosophies and the resulting performance improvements in the three types of P2P overlay networks above. Concretely, by assigning more workload to more powerful peers, we can greatly increase system scalability and reduce the variation in workload distribution. By clustering end-system nodes based on their IP-network proximity or their geographical proximity, and by utilizing randomized shortcuts, we can reduce end-to-end communication latency, balance peer workloads against service request hotspots across the overlay network, and significantly enhance the scalability and efficiency of large-scale decentralized information dissemination and group communication.
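The landmark-based clustering mentioned as the first contribution can be illustrated with the generic landmark-binning idea: nodes that order a common set of landmarks the same way by round-trip time are assumed to be close in the underlying network. The sketch below follows that generic scheme; the thesis's exact algorithm may differ.

```python
# Minimal sketch of landmark-based peer clustering: each node measures its
# round-trip time to a fixed set of landmark hosts, and nodes with the same
# landmark ordering are grouped together, on the assumption that they are
# close in the underlying network. This is the generic landmark-binning idea,
# not necessarily the thesis's exact algorithm.
from collections import defaultdict

def landmark_signature(rtts_ms):
    """Order landmark indices from nearest to farthest for one node."""
    return tuple(sorted(range(len(rtts_ms)), key=lambda i: rtts_ms[i]))

def cluster_by_landmarks(node_rtts):
    """node_rtts: dict of node id -> list of RTTs to the common landmarks."""
    clusters = defaultdict(list)
    for node, rtts in node_rtts.items():
        clusters[landmark_signature(rtts)].append(node)
    return clusters

measurements = {
    "peer-a": [12, 85, 140],   # closest to landmark 0
    "peer-b": [15, 90, 150],   # same ordering as peer-a -> same cluster
    "peer-c": [95, 20, 60],    # closest to landmark 1 -> different cluster
}
print(cluster_by_landmarks(measurements))
```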
260

Nomadic migration : a service environment for autonomic computing on the Grid

Lanfermann, Gerd January 2002 (has links)
In recent years, there has been a dramatic increase in available compute capacity. However, these "Grid resources" are rarely accessible as a continuous stream; they are scattered across various machine types, platforms and operating systems, coupled by networks of fluctuating bandwidth. It is becoming increasingly difficult for scientists to exploit the available resources for their applications. We believe that intelligent, self-governing applications should be able to select resources in a dynamic and heterogeneous environment on their own: migrating applications determine a new resource when old capacities are used up; spawning simulations launch algorithms on external machines to speed up the main execution; applications are restarted as soon as a failure is detected. All these actions can be taken without human interaction. A distributed compute environment possesses an intrinsic unreliability. Any application that interacts with such an environment must be able to cope with its failing components: deteriorating networks, crashing machines, failing software. We construct a reliable service infrastructure by endowing a service environment with a peer-to-peer topology. This "Grid Peer Services" infrastructure accommodates high-level services like migration and spawning, as well as fundamental services for application launching, file transfer and resource selection. It utilizes existing Grid technology wherever possible to accomplish its tasks. An Application Information Server acts as a generic information registry for all participants in a service environment. The service environment that we developed allows applications, for example, to send a relocation request to a migration server. The server selects a new computer based on the transmitted resource requirements, transfers the application's checkpoint and binary to the new host, and resumes the simulation. Although the Grid's underlying resource substrate is not continuous, we achieve persistent computations on Grids by relocating the application. We show with real-world examples that a traditional genome analysis program can easily be modified to perform self-determined migrations in this service environment.
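The relocation flow described in the last paragraph — request, resource selection, checkpoint and binary transfer, restart — can be sketched as follows. The class and method names are placeholders, not the Grid Peer Services API; the file-transfer and launch calls are stubs.

```python
# Sketch of the relocation flow described above: an application asks a
# migration server for a new host that satisfies its resource requirements;
# the server moves the checkpoint and binary and resumes the run. Names and
# methods are placeholders, not the Grid Peer Services API.
from dataclasses import dataclass

@dataclass
class Requirements:
    min_cores: int
    min_memory_gb: int

@dataclass
class Host:
    name: str
    cores: int
    memory_gb: int
    free: bool = True

def transfer(path, host):                 # placeholder for the file-transfer service
    print(f"copy {path} -> {host}")

def launch(app_id, host, restart_from):   # placeholder for the launch service
    print(f"restart {app_id} on {host} from {restart_from}")

class MigrationServer:
    def __init__(self, hosts):
        self.hosts = hosts

    def relocate(self, app_id, checkpoint_path, binary_path, req):
        # 1. Select a new resource matching the transmitted requirements.
        target = next((h for h in self.hosts
                       if h.free and h.cores >= req.min_cores
                       and h.memory_gb >= req.min_memory_gb), None)
        if target is None:
            return None                    # no suitable host; keep running where we are
        # 2. Transfer state and executable, then resume on the new host.
        transfer(checkpoint_path, target.name)
        transfer(binary_path, target.name)
        launch(app_id, target.name, restart_from=checkpoint_path)
        target.free = False
        return target.name
```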
