581. Analysis of Scalable Blockchain Technology in the Capital Market. Jonéus, Carl, January 2017
Financial interactions on the capital market involve a wide variety of actors and processes. The requirements of security and privacy result, to a large extent, in non-shared and unintegrated databases among the different parties, leading to complex, time-consuming and costly procedures. The last decade's introduction of innovative blockchain technologies such as Bitcoin has brought attention to the possibilities of decentralized peer-to-peer networking in general, and to its potential influence in the financial sector in particular. This master thesis investigates the possibilities for the capital market to adopt such a system from a technical point of view, with the main focus on scalability. The analysis covers crucial aspects such as a peer-to-peer application's ability to handle large transaction volumes while maintaining security. The degree project also includes continued work on Visigon's blockchain application prototype, with the main focus on network communication, as well as simulations of its performance capability. Results from the simulations showed that transaction throughput is limited by the time needed to broadcast each transaction to the network, and thus decreases linearly with increasing network size. The time required for the other parts of the process appears constant and takes up a small fraction of the total, so future work lies in further optimization of the communication protocol.
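To make the scaling result concrete, here is a minimal sketch (not based on Visigon's prototype code) of the reported relationship: a constant per-transaction handling cost plus a broadcast cost that grows linearly with the number of peers, so throughput falls off roughly linearly as the network grows. All timing constants are illustrative assumptions.

```python
# Minimal throughput model: constant handling time plus a broadcast time
# that grows linearly with network size. All constants are invented.
def transactions_per_second(n_peers,
                            t_handle=0.0002,          # assumed constant handling time (s)
                            t_send_per_peer=0.0001):  # assumed per-peer send time (s)
    t_broadcast = n_peers * t_send_per_peer  # broadcasting dominates as peers are added
    return 1.0 / (t_handle + t_broadcast)

for n in (10, 50, 100, 500):
    print(f"{n:4d} peers -> ~{transactions_per_second(n):8.1f} tx/s")
```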
582. M3DS: um modelo de dinâmica de desenvolvimento distribuído de software / M3DS: a dynamic model of distributed development of software. L'Erario, Alexandre, 01 December 2009
This work presents a dynamic model of distributed software development, whose objective is to represent the reality and the characteristics of DDS (distributed software development) environments in order to make them qualitatively and quantitatively observable and describable. A preliminary model was elaborated from a literature review and from an experimental case developed by L'Erario et al. (2004). To construct and validate the model, a multiple-case study methodology was applied in several organizations that develop software in a distributed way. States and transitions significant to the dynamics of distributed software development were then added to the preliminary model, giving rise to M3DS (Dynamic Model of Distributed Software Development). Two versions of M3DS are presented: one built on a state machine, whose objective is to represent only the transitions between states, and an equivalent but more formal version expressed as a Petri net, in which the dependencies between transitions and state changes can be visualized. With this model it is possible to understand how a distributed project operates, to make management of the production network more effective, and to help the entities and people involved position themselves more precisely in the network. M3DS can also support the proactive detection of problems arising from development at a distance. The results presented in this work answer the question of how software development organizations produce software in a distributed way. The originality of the research lies in the construction of a dynamic model of distributed development elaborated from the data of six case studies.
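As an illustration of the state-machine version of such a model, the sketch below encodes a few hypothetical distributed-development states and event-driven transitions; the actual states and transitions of M3DS were derived from the six case studies and are not reproduced here.

```python
# Illustrative state machine in the spirit of M3DS's first version.
# State and event names are invented placeholders, not M3DS's real states.
TRANSITIONS = {
    ("specification", "tasks_allocated"): "distributed_coding",
    ("distributed_coding", "artifact_committed"): "integration",
    ("integration", "defect_found"): "distributed_coding",
    ("integration", "build_accepted"): "release",
}

def step(state, event):
    # Undefined (state, event) pairs leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

state = "specification"
for event in ("tasks_allocated", "artifact_committed", "defect_found"):
    state = step(state, event)
    print(f"{event} -> {state}")
```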
583. Performance Analysis of Distributed Object Middleware Technologies / Prestanda Analys av Distribuerade Objektorienterade Mellanlager. Arneng, Per; Bladh, Richard, January 2003
Each day, new computers around the world connect to the Internet or to some network. The increasing number of people and computers on the Internet has led to a demand for more services, in different domains, that can be accessed from many locations in the network. When computers communicate they use different kinds of protocols to deliver a service. One such family of protocols is remote procedure calls between computers. Remote procedure calls have been around for quite some time, but it is with the Internet that their usage has increased greatly, especially in object-oriented form, since object-oriented programming has become a popular choice amongst programmers. When a programmer has to choose a distributed object middleware there is a lot to take into consideration, and one of those things is performance. This master thesis aims to give a performance comparison between different distributed object middleware technologies, providing an overview of the performance differences between them and making it easier for a programmer to choose one of the technologies when performance is an important factor. In this thesis we have evaluated the performance of CORBA, DCOM, RMI, RMI-IIOP, Remoting-TCP and Remoting-HTTP, from both the server and the client perspective. The results of this evaluation show that DCOM and RMI are the distributed object middleware technologies with the best overall performance in terms of throughput and round-trip time. Remoting-TCP generally generates the least network traffic, while Remoting-HTTP generates the most due to its SOAP-formatted protocol.
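Below is a minimal sketch of the round-trip-time micro-benchmark pattern such comparisons rest on; a raw socket pair stands in for the middleware under test, and the timing loop is the part that carries over to CORBA, RMI, or Remoting stubs. Payload size and iteration count are illustrative.

```python
# RTT micro-benchmark skeleton; a socketpair stands in for middleware stubs.
import socket
import time

def recv_exact(sock, n):
    """Read exactly n bytes (a single recv may return fewer)."""
    buf = b""
    while len(buf) < n:
        buf += sock.recv(n - len(buf))
    return buf

def measure_rtt(payload: bytes, iterations: int = 1000) -> float:
    a, b = socket.socketpair()
    start = time.perf_counter()
    for _ in range(iterations):
        a.sendall(payload)                       # "client" invocation
        b.sendall(recv_exact(b, len(payload)))   # "server" echoes the request
        recv_exact(a, len(payload))              # "client" receives the reply
    elapsed = time.perf_counter() - start
    a.close(); b.close()
    return elapsed / iterations                  # mean round trip in seconds

print(f"mean RTT, 1 KiB echo: {measure_rtt(b'x' * 1024) * 1e6:.1f} microseconds")
```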
584. Distributed database support for networked real-time multiplayer games. Grimm, Henrik, January 2002
The focus of this dissertation is on large-scale and long-running networked real-time multiplayer games. In this type of game, each player controls one or more entities, which interact in a shared virtual environment. Three attributes - scalability, security, and fault tolerance - are considered essential for this type of game. The usual approaches for building such games, using a client/server or peer-to-peer architecture, fail to achieve all three attributes. We propose a server-network architecture that supports them. In this architecture, a cluster of servers collectively manages the game state, and each server manages a separate region of the virtual environment. We discuss how the architecture can be extended using proxies, and we compare it to other similar architectures. Further, we investigate how a distributed database management system can support the proposed architecture. Since efficiency is very important in this type of game, some properties of traditional database systems must be relaxed. We also show how methods for increasing scalability, such as interest management and dead reckoning, can be implemented in a database system. Finally, we suggest how the proposed architecture can be validated using a simulation of a large-scale game.
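As an example of the scalability methods mentioned, the sketch below shows dead reckoning in its simplest form: each node extrapolates a remote entity's position from its last known state, and the owner sends a new update only when the true position drifts past an error threshold. The state layout and threshold are illustrative, not the dissertation's design.

```python
# Minimal dead-reckoning sketch with a constant-velocity predictor.
from dataclasses import dataclass

@dataclass
class EntityState:
    x: float
    y: float
    vx: float
    vy: float
    t: float  # timestamp of the last broadcast update

def extrapolate(s: EntityState, now: float):
    """Predict the position assuming velocity stayed constant."""
    dt = now - s.t
    return (s.x + s.vx * dt, s.y + s.vy * dt)

def needs_update(true_pos, predicted, threshold=1.0):
    """The owning node broadcasts only when prediction error exceeds the threshold."""
    dx, dy = true_pos[0] - predicted[0], true_pos[1] - predicted[1]
    return (dx * dx + dy * dy) ** 0.5 > threshold

last = EntityState(x=0.0, y=0.0, vx=2.0, vy=0.0, t=0.0)
predicted = extrapolate(last, now=1.5)
print(predicted)                              # (3.0, 0.0)
print(needs_update((3.5, 1.0), predicted))    # True -> broadcast a fresh update
```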
585. Engineering swarm systems: A design pattern for the best-of-n decision problem. Reina, Andreagiovanni, 04 July 2016
The study of large-scale decentralised systems composed of numerous interacting agents that self-organise to perform a common task is receiving growing attention in several application domains. However, real-world implementations are limited by the lack of well-established design methodologies that provide performance guarantees. Engineering such systems is challenging because of the difficulty of obtaining the micro-macro link: a correspondence between the microscopic description of the individual agent behaviour and the macroscopic models that describe the system's dynamics at the global level. In this thesis, we propose an engineering methodology for designing decentralised systems based on the concept of design patterns. A design pattern provides a general solution to a specific class of problems relevant in several application domains. The main component of the solution is a multi-level description of the collective process, from macro to micro models, accompanied by rules for converting the model parameters between description levels. In other words, the design pattern provides a formal description of the micro-macro link for a process that tackles a specific class of problems. Additionally, a design pattern provides a set of case studies to illustrate possible implementation alternatives, for both simple and particularly challenging scenarios. We present a design pattern for the best-of-n decentralised decision problem that is derived from a model of nest-site selection in honeybees. We present two case studies to showcase the design pattern's usage in (i) a multiagent system interacting through a fully connected network, and (ii) a swarm of particles moving on a two-dimensional plane.
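The sketch below conveys the flavour of a best-of-n process in a fully connected multiagent system: agents recruit one another to options with probability proportional to option quality, so the population tends to converge on the best option. The recruitment rule and parameters are illustrative assumptions, not the micro-model prescribed by the design pattern.

```python
# Quality-proportional recruitment toward a best-of-n consensus.
import random

def simulate(qualities, n_agents=100, steps=2000, seed=1):
    rng = random.Random(seed)
    opinions = [rng.randrange(len(qualities)) for _ in range(n_agents)]
    for _ in range(steps):
        speaker, listener = rng.sample(range(n_agents), 2)
        # Recruitment succeeds with probability equal to the option's quality.
        if rng.random() < qualities[opinions[speaker]]:
            opinions[listener] = opinions[speaker]
    return [opinions.count(i) / n_agents for i in range(len(qualities))]

# Fractions of agents per option; option 0 (quality 0.9) dominates.
print(simulate([0.9, 0.5, 0.3]))
```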
586. Dynamic Load Balancing Schemes for Large-scale HLA-based Simulations. De Grande, Robson E., January 2012
Dynamic balancing of computation and communication load is vital for the execution stability and performance of distributed, parallel simulations deployed on the shared, unreliable resources of large-scale environments. High Level Architecture (HLA) based simulations can experience a decrease in performance due to imbalances that are produced initially and/or at run-time. These imbalances are generated by the dynamic load changes of distributed simulations or by unknown, non-managed background processes resulting from the non-dedication of shared resources. Because of the dynamic execution characteristics of the elements that compose distributed simulation applications, the computational load and interaction dependencies of each simulation entity change during run-time. These dynamic changes lead to an irregular load and communication distribution, which increases resource overhead and execution delays. A static partitioning of load is limited to deterministic applications and is incapable of predicting the dynamic changes caused by distributed applications or by external background processes. Given the importance of dynamically balancing load in distributed simulations, many balancing approaches have been proposed to offer sub-optimal balancing solutions, but they are limited to certain simulation aspects, specific to particular applications, or unaware of the characteristics of HLA-based simulations. Therefore, schemes for balancing the communication and computational load during the execution of distributed simulations are devised here, adopting a hierarchical architecture. First, to enable the development of such balancing schemes, a migration technique is employed to perform reliable and low-latency transfers of simulation load. Then, a centralized balancing scheme is designed; it employs local and cluster monitoring mechanisms to observe distributed load changes and identify imbalances, and it uses load reallocation policies to determine a distribution of load that minimizes imbalances. To overcome the drawbacks of this scheme, such as bottlenecks, overheads, global synchronization, and a single point of failure, a distributed redistribution algorithm is designed. Extensions of the distributed balancing scheme are also developed to improve the detection of, and reaction to, load imbalances. These extensions introduce communication delay detection, migration latency awareness, self-adaptation, and load oscillation prediction into the load redistribution algorithm. The balancing systems developed successfully improved the use of shared resources and increased the performance of distributed simulations.
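For a sense of what a load reallocation policy can look like, the sketch below implements a simple greedy rule a centralized balancer might apply: repeatedly migrate the federate whose move best narrows the gap between the most and least loaded hosts. The data structures and policy are illustrative, not the schemes devised in the thesis.

```python
# Greedy reallocation: shrink the gap between the heaviest and lightest host.
def rebalance(hosts, tolerance=0.1, max_moves=100):
    """hosts: dict host -> list of per-federate load values; returns migrations."""
    moves = []
    for _ in range(max_moves):
        totals = {h: sum(loads) for h, loads in hosts.items()}
        hi = max(totals, key=totals.get)
        lo = min(totals, key=totals.get)
        gap = totals[hi] - totals[lo]
        if gap <= tolerance or not hosts[hi]:
            break
        # Pick the federate whose migration narrows the gap the most.
        best = min(hosts[hi],
                   key=lambda f: abs((totals[hi] - f) - (totals[lo] + f)))
        if abs((totals[hi] - best) - (totals[lo] + best)) >= gap:
            break  # no single migration improves the balance
        hosts[hi].remove(best)
        hosts[lo].append(best)
        moves.append((best, hi, lo))
    return moves

print(rebalance({"A": [5, 3, 2], "B": [1], "C": [2, 2]}))
# [(5, 'A', 'B'), (1, 'B', 'C')] -> all three hosts end up with load 5
```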
587. Distributed spatial analysis in wireless sensor networks. Jabeen, Farhana, January 2011
Wireless sensor networks (WSNs) allow us to instrument the physical world in novel ways, providing detailed insight that has not been possible hitherto. Since WSNs provide an interface to the physical world, each sensor node has a location in physical space, thereby enabling us to associate spatial properties with data. Since WSNs can perform periodic sensing tasks, we can also associate temporal markers with data. In the environmental sciences in particular, WSNs are on the way to becoming an important tool for the modelling of spatially and temporally extended physical phenomena. However, support for high-level and expressive spatial-analytic tasks that can be executed inside WSNs is still incipient. By spatial analysis we mean the ability to explore relationships between spatially-referenced entities (e.g., a vineyard, or a weather front) and to derive representations grounded on such relationships (e.g., the geometrical extent of that part of a vineyard that is covered by mist, as the intersection of the geometries that characterize the vineyard and the weather front, respectively). The motivation for this endeavour stems primarily from applications where important decisions hinge on the detection of an event of interest (e.g., the presence, and spatio-temporal progression, of mist over a cultivated field may trigger a particular action) that can be characterized by an event-defining predicate (e.g., relative humidity greater than 98% and temperature less than 10°C). At present, in-network spatial analysis in WSNs is not catered for by a comprehensive, expressive, well-founded framework. While there has been work on WSN event-boundary detection and, in particular, on detecting topological change of WSN-represented spatial entities, this work has tended to be comparatively narrow in scope and aims. The contributions made in this research are constrained to WSNs where every node is tethered to one location in physical space. The research contributions reported here include (a) the definition of a framework for representing geometries; (b) the detailed characterization of an algebra of spatial operators, closely inspired in its scope and structure by the Schneider-Güting ROSE algebra (i.e., one based on a discrete underlying geometry), over the geometries representable by the framework above; (c) distributed in-network algorithms for the operations in the spatial algebra over the representable geometries, thereby enabling (i) new geometries to be derived from induced and asserted ones, and (ii) topological relationships between geometries to be identified; (d) an algorithmic strategy for the evaluation of complex algebraic expressions that is divided into logically-cohesive components; (e) the development of a task processing system with which each node is equipped, thereby allowing users to evaluate tasks on nodes; and (f) an empirical performance study of the resulting system.
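The sketch below illustrates, on a single machine, how an event-defining predicate induces a geometry over tethered nodes and how a derived geometry can be obtained by intersecting it with an asserted one; in the thesis these operations run in-network as distributed algorithms. Node positions and readings are invented for the example.

```python
# Predicate-induced geometries over tethered nodes, plus one algebra operation.
nodes = {
    1: {"pos": (0, 0), "humidity": 99.0, "temp": 8.0},
    2: {"pos": (0, 1), "humidity": 97.5, "temp": 9.0},
    3: {"pos": (1, 0), "humidity": 98.6, "temp": 7.5},
    4: {"pos": (1, 1), "humidity": 99.2, "temp": 12.0},
}

def induced_region(readings, predicate):
    """Here a geometry is simply the set of node ids satisfying the predicate."""
    return {nid for nid, r in readings.items() if predicate(r)}

mist = induced_region(nodes, lambda r: r["humidity"] > 98 and r["temp"] < 10)
vineyard = {1, 2, 4}        # asserted geometry, e.g. the extent of a field
print(mist)                 # {1, 3}: nodes currently covered by mist
print(mist & vineyard)      # {1}: the misty part of the vineyard
```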
588. Performance comparison of data distribution management strategies in large-scale distributed simulation. Dzermajko, Caron, 05 1900
Data distribution management (DDM) is a High Level Architecture/Run-time Infrastructure (HLA/RTI) service that manages the distribution of state updates and interaction information in large-scale distributed simulations. The key to efficient DDM is to limit and control the volume of data exchanged during the simulation, relaying data only to those hosts that require it. This thesis focuses on different DDM implementations and strategies, analysing three DDM methods: fixed grid-based, dynamic grid-based, and region-based. It also covers the use of multi-resolution modeling with various DDM strategies and analyses the performance effects of aggregation/disaggregation with those strategies. Running numerous federation executions, I simulate four different scenarios on a cluster of workstations with a mini-RTI Kit framework and propose a set of benchmarks for comparing the DDM schemes. The goals of this work are to determine the most efficient model for applying each DDM scheme, discover the limits of scalability of the various DDM methods, evaluate the effects of aggregation/disaggregation on performance and resource usage, and present accepted benchmarks for use in future research.
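For a flavour of the fixed grid-based method, the sketch below maps update and subscription regions to grid cells and relays an update only to hosts whose subscription cells overlap it. Cell size, regions, and host names are illustrative.

```python
# Fixed grid-based DDM matching: regions -> cells, relay on cell overlap.
CELL = 10.0  # fixed cell width of the grid

def cells(region):
    """region = (xmin, ymin, xmax, ymax) -> set of covering cell coordinates."""
    x0, y0, x1, y1 = region
    return {(cx, cy)
            for cx in range(int(x0 // CELL), int(x1 // CELL) + 1)
            for cy in range(int(y0 // CELL), int(y1 // CELL) + 1)}

subscriptions = {
    "host_a": cells((0, 0, 15, 15)),
    "host_b": cells((40, 40, 55, 55)),
}

def receivers(update_region):
    u = cells(update_region)
    return [h for h, s in subscriptions.items() if u & s]

print(receivers((12, 12, 14, 14)))  # ['host_a']: host_b shares no cell
```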
589. A CONCEPTUAL FRAMEWORK FOR DISTRIBUTED SOFTWARE QUALITY NETWORK. Anushka Harshad Patil (7036883), 12 October 2021
The advancement in technology has revolutionized the role of software in recent years. Software is used in practically all areas of industry and has become a prime factor in the overall working of companies. Simultaneously with the increase in the utilization of software, software quality assurance parameters have become more crucial and complex. Currently, the quality measurement approaches, standards, and models applied in the software industry are extremely divergent. Often the correct approach turns out to be a combination of different concepts and techniques from different software assurance approaches [1]. Thus, a platform that provides a single workspace for incorporating multiple software quality assurance approaches will ease the overall software quality process. In this thesis we propose a theoretical framework for distributed software quality assurance, which will be able to continuously monitor a source code repository and create a snapshot of the system for a given commit (both past and present); the snapshot can be used to create a multi-granular blockchain of the system and its metrics (i.e., metadata), which we believe will let tool developers and vendors participate continuously in assuring the quality and security of systems, be accessible when required, and be rewarded for their services.
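The sketch below illustrates the core idea of chaining per-commit metric snapshots into a blockchain-like structure, where tampering with any earlier snapshot invalidates every later hash. The block fields and metrics are illustrative placeholders, not the framework's actual schema.

```python
# Hash-chained snapshots of per-commit quality metrics.
import hashlib
import json

def make_block(commit_id, metrics, prev_hash):
    body = {"commit": commit_id, "metrics": metrics, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

chain, prev = [], "0" * 64  # genesis predecessor
for commit, metrics in [("a1f3", {"loc": 1200, "coverage": 0.81}),
                        ("b7c9", {"loc": 1260, "coverage": 0.84})]:
    block = make_block(commit, metrics, prev)
    chain.append(block)
    prev = block["hash"]

# Verify the chain: each block must reference its predecessor's hash.
print(all(chain[i + 1]["prev"] == chain[i]["hash"]
          for i in range(len(chain) - 1)))  # True until any snapshot is altered
```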
590. Multiple Learning for Generalized Linear Models in Big Data. Xiang Liu (11819735), 19 December 2021
Big data is an enabling technology in digital transformation. It perfectly complements ordinary linear models and generalized linear models, as training well-performing ordinary and generalized linear models requires huge amounts of data. With the help of big data, ordinary and generalized linear models can be well trained and thus offer better services to human beings. However, there are still many challenges to address when training ordinary linear models and generalized linear models on big data. One of the most prominent is the computational challenge: the memory inflation and training inefficiency issues that occur when processing data and training models. Hundreds of algorithms have been proposed to alleviate or overcome the memory inflation issues; however, the solutions obtained are locally optimal. Additionally, most of the proposed algorithms require loading the dataset into RAM many times when updating the model parameters. If multiple model hyper-parameters need to be computed and compared, e.g. for ridge regression, parallel computing techniques are applied in practice. Thus, multiple learning with sufficient statistics arrays is proposed to tackle the memory inflation and training inefficiency issues.
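The sketch below illustrates the sufficient-statistics idea for the ordinary-linear-model case: accumulating X'X and X'y chunk by chunk means the full dataset never has to sit in RAM, and solutions for many ridge penalties can then be computed from the same two small arrays without re-reading the data. The chunked data here are simulated for the example.

```python
# Sufficient-statistics accumulation for linear/ridge regression over chunks.
import numpy as np

def accumulate(chunks, p):
    xtx = np.zeros((p, p))
    xty = np.zeros(p)
    for X, y in chunks:        # each chunk is loaded once, used, and discarded
        xtx += X.T @ X
        xty += X.T @ y
    return xtx, xty

def ridge_solutions(xtx, xty, lambdas):
    """Solve (X'X + lambda I) beta = X'y for each lambda, reusing the arrays."""
    p = xtx.shape[0]
    return {lam: np.linalg.solve(xtx + lam * np.eye(p), xty) for lam in lambdas}

rng = np.random.default_rng(0)
beta_true = np.array([1.0, -2.0, 0.5])
chunks = []
for _ in range(5):             # stands in for streaming five file chunks
    X = rng.normal(size=(1000, 3))
    chunks.append((X, X @ beta_true + rng.normal(scale=0.1, size=1000)))

xtx, xty = accumulate(chunks, p=3)
print(ridge_solutions(xtx, xty, [0.0, 1.0, 10.0])[0.0])  # close to beta_true
```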