311

DecaFS: A Modular Distributed File System to Facilitate Distributed Systems Education

Meth, Halli Elaine 01 June 2014 (has links)
Data quantity, speed requirements, reliability constraints, and other factors encourage industry developers to build distributed systems and use distributed services. Software engineers are therefore exposed to distributed systems and services daily in the workplace. However, distributed computing is hard to teach in Computer Science courses due to the complexity distribution brings to all problem spaces. This presents a gap in education where students may not fully understand the challenges introduced with distributed systems. Teaching students distributed concepts would help better prepare them for industry development work. DecaFS, Distributed Educational Component Adaptable File System, is a modular distributed file system designed for educational use. The goal of the system is to teach distributed computing concepts to undergraduate and graduate level students by allowing them to develop small, digestible portions of the system. The system is broken up into layers, and each layer is broken up into modules so that students can build or modify different components in small, assignment-sized portions. Students can replace modules or entire layers by following the DecaFS APIs and recompiling the system. This allows the behavior of the DFS (Distributed File System) to change based on student implementation, while providing base functionality for students to work from. Our implementation includes a code base of core DecaFS Modules that students can work from and basic implementations of non-core DecaFS Modules. Our basic non-core modules can be modified to implement more complex distribution techniques without modifying core modules. We have shown the feasibility of developing a modular DFS, while adhering to requirements such as configurable sizes (file, stripe, chunk) and support of multiple data replication strategies.
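As a rough illustration of the kind of component a student might implement, the sketch below shows how a distribution module could map fixed-size chunks of a file onto I/O nodes with a configurable stripe size and replication factor. The function and parameter names are hypothetical and do not reflect the actual DecaFS API.

    # Hypothetical striping/replication sketch in Python, not the DecaFS API.
    # Sizes are in bytes; nodes is a list of node identifiers.
    def place_chunks(file_size, chunk_size, stripe_size, nodes, replication=2):
        """Map each chunk of a file to `replication` distinct nodes."""
        chunks_per_stripe = stripe_size // chunk_size
        num_chunks = -(-file_size // chunk_size)  # ceiling division
        placement = {}
        for chunk_id in range(num_chunks):
            stripe_id = chunk_id // chunks_per_stripe
            primary = (stripe_id + chunk_id) % len(nodes)
            # Replicas go to the next nodes in the ring; a different placement
            # strategy could be swapped in here without touching other layers.
            placement[chunk_id] = [nodes[(primary + r) % len(nodes)]
                                   for r in range(replication)]
        return placement

    print(place_chunks(file_size=10_000, chunk_size=1_024, stripe_size=4_096,
                       nodes=["n0", "n1", "n2", "n3"], replication=2))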
312

Zpracování síťové komunikace v distribuovaném prostředí / Distributed Network Traffic Processing

Letavay, Viliam January 2018 (has links)
The expansion of computer networks and the availability of internet connections enable our society to grow faster than ever before. At the same time, however, they open up new opportunities for cybercrime. Security administrators and law enforcement agencies therefore have an increasing need for tools to analyze captured data flows. This master's thesis deals with ways of analyzing captured network traffic in a distributed environment, allowing the available analysis capacity to scale and thus keep up with the ever-increasing volume of data transmitted over computer networks.
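One common way to parallelize this kind of analysis, sketched below purely as an illustration (the thesis may use a different scheme), is to partition captured packets by flow so that every packet of a TCP/UDP conversation is processed by the same worker:

    # Illustrative sketch: hash-partitioning captured packets by their flow
    # 5-tuple so each worker sees complete conversations. Packet parsing is
    # stubbed out; a real system would read pcap files or a live capture.
    import hashlib

    def flow_key(pkt):
        """Direction-independent 5-tuple for a parsed packet dictionary."""
        a = (pkt["src_ip"], pkt["src_port"])
        b = (pkt["dst_ip"], pkt["dst_port"])
        lo, hi = sorted([a, b])
        return (pkt["proto"], lo, hi)

    def worker_for(pkt, num_workers):
        digest = hashlib.sha1(repr(flow_key(pkt)).encode()).hexdigest()
        return int(digest, 16) % num_workers

    packets = [
        {"proto": "TCP", "src_ip": "10.0.0.1", "src_port": 443,
         "dst_ip": "10.0.0.2", "dst_port": 51000},
        {"proto": "TCP", "src_ip": "10.0.0.2", "src_port": 51000,
         "dst_ip": "10.0.0.1", "dst_port": 443},
    ]
    # Both directions of the same conversation map to the same worker.
    print([worker_for(p, num_workers=8) for p in packets])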
313

Workload Characterization and Performance Evaluation of a Blockchain Implementation for Managing Federated Cloud Resources - Assuming a Peer-to-peer Energy Management Use Case

Jidrot, Rune, Perumal, Gnanapalaniselvi January 2021 (has links)
Blockchain technology has become an appealing concept in Distributed Systems because it enables a distributed storage of information, replacing a central database [1]. In addition, Blockchains promise to address inherent and difficult issues in distributed systems such as a) proving the provenance of information, i.e., documenting where pieces of data come from (including how they were processed), and b) ensuring that the information has not been changed, i.e., that its integrity has not been corrupted. The data in a Blockchain is said to be immutable. In this thesis, we apply Blockchain technology as a concept in Distributed Systems for securely collecting and storing data from distributed cloud resources that must remain intact over a longer period of time, such as the amount of consumed cloud resources characterized by CPU load or energy usage. In particular, this work considers a peer-to-peer energy use case where virtual energy resources are monitored. The focus of this thesis is on a) how a Blockchain for distributed Cloud monitoring can be implemented, b) how the workload can be characterized, and c) how the Blockchain system's performance can be observed and what performance can be achieved. The work therefore defines an initial system model, provides an implementation, and carries out experiments in order to understand the impact of the design factors and the system input on the capabilities and performance of the system. The results of the experiments, the workload characterization and the performance analysis, are analysed by statistical means and presented as graphs. The choices of system model, Blockchain technology (Hyperledger Fabric), and other parameters are based on the literature review. The experimental implementation is, in turn, based on the selected system model, where we want to experimentally identify limitations and bottlenecks of the performance.
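As an illustration of the kind of performance observation described above, per-transaction submit and commit timestamps can be turned into throughput and latency statistics. The log format and numbers below are hypothetical and not taken from the thesis.

    # Illustrative sketch: throughput and latency statistics derived from
    # (submit_time_s, commit_time_s) pairs, as a Fabric client might record them.
    from statistics import mean, quantiles

    transactions = [(0.00, 0.90), (0.10, 1.10), (0.20, 1.30),
                    (0.40, 1.25), (0.50, 1.60)]

    latencies = [commit - submit for submit, commit in transactions]
    duration = max(c for _, c in transactions) - min(s for s, _ in transactions)
    throughput = len(transactions) / duration  # committed transactions per second

    print(f"mean latency: {mean(latencies):.2f} s")
    print(f"p95 latency (approx): {quantiles(latencies, n=20)[18]:.2f} s")
    print(f"throughput: {throughput:.2f} tx/s")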
314

Diseño e implementación de sistema distribuido y colaborativo de peticiones HTTP/S / Design and Implementation of a Distributed and Collaborative HTTP/S Request System

Pulgar Romero, Francisco Leonardo January 2018 (has links)
Thesis submitted for the degree of Ingeniero Civil en Computación / Today there are many computers and technological devices with idle computing capacity and the potential to be put to use. A large number of projects exist in which people voluntarily donate their computing power to help with problems such as rendering 3D animations, running simulations of experiments, studying mathematical conjectures, optimizing variables and parameters in Machine Learning, studying the structure of proteins and molecules, classifying galaxies, and predicting the weather, among countless possible applications in both research and industry. This need for processing power and computing resources has led to technologies such as grid computing, a distributed computing system that coordinates computers with different hardware and software and uses them to solve common tasks in parallel. The goal of this thesis is to build a distributed grid system in which technological devices communicate with a central server to collect data from the internet, thereby using the idle capacity of those devices and providing voluntary help to anyone who needs to collect data from the internet. During this work, three components are implemented: a user and device management system built with Django, an HTTP/S query distribution system built with Tornado, and a program, written in Python, that runs on the devices to resolve tasks and send back results. These three systems communicate with one another to distribute the HTTP/S queries but remain independent of each other, which improves the scalability and fault tolerance of the overall system. Finally, tests and experiments are carried out on the different components to obtain data for studying the behavior of the system and identifying the advantages and disadvantages of its use. The results show that as the number of devices collaborating on a task increases, the time needed to complete the task decreases; they also show a direct correlation between the response time of an HTTP/S query and the physical distance between the device making the query and the web server.
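A minimal sketch of the device-side worker described above is shown below; the endpoint paths and JSON fields are illustrative assumptions, not the system's actual API. The worker polls the central server for a URL to fetch, performs the HTTP/S request, and posts the result back.

    # Illustrative volunteer-device worker; the /task and /result endpoints and
    # the JSON fields are assumptions, not the real API of the system.
    import time
    import requests

    SERVER = "http://central-server.example:8888"  # hypothetical Tornado distributor

    def run_worker():
        while True:
            task = requests.get(f"{SERVER}/task", timeout=10).json()
            if not task:                 # no pending work, back off
                time.sleep(5)
                continue
            try:
                resp = requests.get(task["url"], timeout=30)
                payload = {"task_id": task["id"], "status": resp.status_code,
                           "body": resp.text}
            except requests.RequestException as exc:
                payload = {"task_id": task["id"], "status": None, "error": str(exc)}
            requests.post(f"{SERVER}/result", json=payload, timeout=10)

    if __name__ == "__main__":
        run_worker()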
315

Enabling Peer to Peer Energy Trading Marketplace Using Consortium Blockchain Networks

January 2019 (has links)
abstract: Blockchain technology enables peer-to-peer transactions through the elimination of the need for a centralized entity governing consensus. Rather than having a centralized database, the data is distributed across multiple computers, which enables crash fault tolerance and also makes the system difficult to tamper with thanks to a distributed consensus algorithm. In this research, the potential of blockchain technology to manage energy transactions is examined. The energy production landscape is being reshaped by distributed energy resources (DERs): photovoltaic panels, electric vehicles, smart appliances, and battery storage. Distributed energy sources such as microgrids, household solar installations, community solar installations, and plug-in hybrid vehicles enable energy consumers to act as providers of energy themselves, hence acting as 'prosumers' of energy. Blockchain technology facilitates managing the transactions between the prosumers involved using 'Smart Contracts' by tokenizing energy into assets. Better utilization of grid assets lowers costs and also presents the opportunity to buy energy at a reasonable price while staying connected with the utility company. This technology acts as a backbone for two models applicable to a transactional energy marketplace: a 'Real-Time Energy Marketplace' and 'Energy Futures'. In the first model, prosumers are given the choice to bid a price for energy within a stipulated period of time, while the utility company acts as an operating entity. In the second model, the marketplace is more liberal and the utility company is not involved as an operator. The utility company facilitates infrastructure and manages accounts for all users, but does not endorse or govern transactions related to energy bidding. These smart contracts are not time bounded and can be suspended by the utility during periods of network instability. / Dissertation/Thesis / Masters Thesis Computer Science 2019
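For the first model, the matching step inside such a marketplace can be pictured roughly as collecting consumer bids and prosumer offers during a bidding window and clearing them by price. The sketch below is a hypothetical illustration in Python, not the actual smart contract code, and its data model and settlement rule are assumptions.

    # Illustrative period-based bid matching for a real-time energy marketplace.
    def match_period(bids, offers):
        """bids/offers: lists of (participant, price_per_kwh, kwh).
        The highest bid meets the lowest offer first; trades clear while the
        bid price is at least the offer price, settling at the midpoint."""
        bids = sorted(bids, key=lambda b: -b[1])
        offers = sorted(offers, key=lambda o: o[1])
        trades = []
        while bids and offers and bids[0][1] >= offers[0][1]:
            buyer, bid_price, want = bids.pop(0)
            seller, ask_price, have = offers.pop(0)
            qty = min(want, have)
            trades.append((buyer, seller, qty, (bid_price + ask_price) / 2))
            if want > qty:
                bids.insert(0, (buyer, bid_price, want - qty))
            if have > qty:
                offers.insert(0, (seller, ask_price, have - qty))
        return trades

    print(match_period(bids=[("c1", 0.12, 5), ("c2", 0.10, 3)],
                       offers=[("p1", 0.09, 4), ("p2", 0.11, 6)]))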
316

Models for Quantitative Distributed Systems and Multi-Valued Logics

Huschenbett, Martin 26 February 2018 (has links)
We investigate weighted asynchronous cellular automata with weights in valuation monoids. These automata form a distributed extension of weighted finite automata and allow us to model concurrency. Valuation monoids are abstract weight structures that include semirings and (non-distributive) bounded lattices but also offer the possibility to model average behaviors. We prove that weighted asynchronous cellular automata and weighted finite automata which satisfy an I-diamond property are equally expressive. Depending on the properties of the valuation monoid, we characterize this expressiveness by certain syntactically restricted fragments of weighted MSO logics. Finally, we define the quantitative model-checking problem for distributed systems and show how it can be reduced to the corresponding problem for sequential systems.
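As background for readers unfamiliar with the weight structure, the standard definition from the weighted-automata literature, stated here only as a reading aid and not taken from the thesis itself, is roughly the following (in LaTeX):

    % Background definition, included only as a reading aid for the abstract above.
    A \emph{valuation monoid} $(D, +, \mathrm{Val}, \mathbb{0})$ consists of a
    commutative monoid $(D, +, \mathbb{0})$ together with a valuation function
    $\mathrm{Val}\colon D^{+} \to D$ such that $\mathrm{Val}(d) = d$ for all
    $d \in D$ and $\mathrm{Val}(d_1,\dots,d_n) = \mathbb{0}$ whenever
    $d_i = \mathbb{0}$ for some $i$. A typical example capturing average
    behaviors is $D = \mathbb{R} \cup \{-\infty\}$ with $+ = \max$,
    $\mathbb{0} = -\infty$, and
    $\mathrm{Val}(d_1,\dots,d_n) = \frac{1}{n}\sum_{i=1}^{n} d_i$.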
317

A Proposal and Implementation of a Novel Architecture Model for Future IoT Applications : With focus on fog computing

Andersson, Viktor January 2022 (has links)
The number of IoT devices, and the data they produce, is increasing every day, negatively impacting the traditional architecture model of using the cloud alone for processing and storage. This model may therefore need a supporting model to alleviate the different challenges of future IoT applications. Several researchers have described and presented algorithms and models focusing on distributed architecture models. The main issues with these, however, concern implementation and the distribution of tasks: they are not implemented on actual hardware but simulated in a constrained environment, and they do not consider sharing a single task but only distributing whole tasks. The objective of this thesis is therefore to present the different challenges of the traditional architecture model, to investigate the research gap for the IoT and the different computing paradigms, and, together with this, to implement and evaluate a future architecture model in which multiple off-the-shelf devices collaborate to complete a generated task. This model is evaluated with respect to task completion time, data size, and scalability. The results show that the different testbeds are capable of communicating and of splitting a single task to be completed on multiple devices. They further show that the testbeds containing multiple devices perform better with respect to completion time and do not suffer from noticeable scalability issues. Lastly, they show that the completion time drops remarkably for tasks that are split and distributed.
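As a rough sketch of the kind of collaboration evaluated, a generated task can be split into subtasks sized by each device's capacity, with the completion time bounded by the slowest share. The device names and the capacity-weighting rule below are illustrative assumptions, not the thesis' implementation.

    # Illustrative sketch: splitting one task across devices in proportion to
    # their capacity and estimating completion time as the slowest share.
    def split_task(total_units, capacities):
        """capacities: {device: work units per second}; returns per-device shares."""
        total_capacity = sum(capacities.values())
        return {d: total_units * c / total_capacity for d, c in capacities.items()}

    def completion_time(shares, capacities):
        return max(shares[d] / capacities[d] for d in shares)

    devices = {"pi-1": 50.0, "pi-2": 50.0, "laptop": 200.0}  # hypothetical devices
    shares = split_task(total_units=3000, capacities=devices)
    print(shares)
    print(f"estimated completion time: {completion_time(shares, devices):.1f} s")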
318

Snapple : A distributed, fault-tolerant, in-memory key-value store using Conflict-Free Replicated Data Types / Snapple : En distribuerad feltolerant nyckelvärdesdatabas i RAM-minnet baserad på konfliktfria replikerade datatyper

Stenberg, Johan January 2016 (has links)
As services grow and receive more traffic, data resilience through replication becomes increasingly important. Modern large-scale Internet services such as Facebook, Google and Twitter serve millions of users concurrently. Replication is a vital component of distributed systems. Eventual consistency and Conflict-Free Replicated Data Types (CRDTs) are suggested as an alternative to strongly consistent systems. This thesis implements and evaluates Snapple, a distributed, fault-tolerant, in-memory key-value database based on CRDTs running on the Java Virtual Machine. Snapple supports two kinds of CRDTs: an optimized implementation of the OR-Set, and version vectors. Performance measurements show that the Snapple system is significantly faster than Riak, a persistent database based on CRDTs, but has a factor 2.5x-5x lower throughput than Redis, a popular in-memory key-value database written in C. Snapple is a prototype implementation but might be a viable alternative to Redis if the user wants the consistency guarantees CRDTs provide.
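To make the CRDT concept concrete, below is a minimal, unoptimized OR-Set sketch in Python. Snapple itself runs on the JVM and uses an optimized representation, so this is only the textbook add-wins construction, not Snapple's code.

    # Textbook OR-Set (observed-remove set): every add is tagged with a unique
    # identifier, a remove deletes only the tags it has observed, and merge is
    # a union of both replicas' histories, so a concurrent add wins.
    import uuid

    class ORSet:
        def __init__(self):
            self.added = set()    # (element, unique_tag) pairs
            self.removed = set()  # removed tags (tombstones)

        def add(self, element):
            self.added.add((element, uuid.uuid4().hex))

        def remove(self, element):
            self.removed |= {tag for (e, tag) in self.added if e == element}

        def contains(self, element):
            return any(e == element and tag not in self.removed
                       for (e, tag) in self.added)

        def merge(self, other):
            self.added |= other.added
            self.removed |= other.removed

    # Two replicas diverge and then merge; the concurrent add wins.
    a, b = ORSet(), ORSet()
    a.add("x"); b.merge(a)
    b.remove("x"); a.add("x")                # concurrent remove on b, add on a
    a.merge(b); b.merge(a)
    print(a.contains("x"), b.contains("x"))  # True True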
319

Fault Tolerant Distributed Complex Event Processing on Stream Computing Platforms

Carbone, Paris January 2013 (has links)
Recent advances in reliable distributed computing have made it possible to provide high availability and scalability to traditional systems and thus offer them as reliable services. For some systems, such as distributed storage, online data analysis, batch processing, and distributed stream processing, their parallel nature combined with weak consistency requirements made this transition fairly straightforward. Complex Event Processing (CEP) systems, on the other hand, still maintain a monolithic architecture, offering high expressiveness at the expense of distribution. In this work, we address the main challenges of providing a highly available distributed CEP service, with a focus on reliability, since it is the most crucial and least explored aspect of that transition. The experimental solution presented targets low average detection latency and leverages the event delegation mechanisms present in existing stream execution platforms, together with in-memory logging, to provide availability for any complex event processing abstraction built on top, via redundancy and partial recovery.
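A toy sketch of the recovery idea follows; it is illustrative only, since the actual system builds on a stream execution platform's delegation mechanisms. A standby replica keeps an in-memory log of the events delegated to it and rebuilds the operator state by replaying that log when the primary fails.

    # Illustrative sketch: partial recovery of a stateful event-processing
    # operator by replaying an in-memory event log on a standby replica.
    class CountingOperator:
        """Counts events per key; stands in for an arbitrary CEP operator."""
        def __init__(self):
            self.state = {}
        def process(self, event):
            key = event["key"]
            self.state[key] = self.state.get(key, 0) + 1

    class Standby:
        def __init__(self):
            self.log = []              # in-memory log of delegated events
        def record(self, event):
            self.log.append(event)
        def recover(self):
            op = CountingOperator()
            for event in self.log:     # replay to rebuild operator state
                op.process(event)
            return op

    primary, standby = CountingOperator(), Standby()
    for ev in [{"key": "sensor-1"}, {"key": "sensor-2"}, {"key": "sensor-1"}]:
        primary.process(ev)
        standby.record(ev)             # events are also delegated to the standby
    recovered = standby.recover()      # primary fails; standby rebuilds the state
    print(recovered.state == primary.state)  # True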
320

Service Management for P2P Energy Sharing Using Blockchain – Functional Architecture

Abdsharifi, Mohammad Hossein, Dhar, Ripan Kumar January 2022 (has links)
Blockchain has become one of the most revolutionary technologies of the 21st century. In recent years, the concern of the world's energy systems is no longer sustainability alone but also security and reliability. Since information and energy security are main concerns for present and future services, this thesis focuses on the challenge of how to trade energy securely over distributed marketplaces. The core technology used in this thesis is the distributed ledger, specifically blockchain. Since this technology has recently gained much attention because of properties such as transparency, immutability, irreversibility, and security, we propose a solution for implementing a secure peer-to-peer (P2P) energy trading network on a suitable blockchain platform. Furthermore, blockchain enables traceability of the origin of data, known as data provenance. In this work, we apply blockchain technology to a peer-to-peer energy sharing and trading system in which prosumers and consumers can trade their energy through a secure channel or network, and service management functionalities such as security, reliability, flexibility, and scalability are achieved through the implementation. This thesis examines the current proposals for P2P energy trading using blockchain and how to select a suitable blockchain technique to implement such a network. In addition, we provide an implementation of such a secure network on blockchain, with proper management functions. The choices of system model, blockchain technology, and consensus algorithm are based on a literature review and lead to an experimental implementation in which the feasibility of the system model is validated through the measured results.
