  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

A Statistical Performance Model of Homogeneous RAIDb Clusters

Rogers, Brandon Lamar 10 March 2005
The continual growth of the Internet and e-commerce is driving demand for speed, reliability and processing power. With the rapid development and maturation of e-commerce, the need for quick access to large amounts of information is steadily rising. Traditionally, database systems have been used for information storage and retrieval. However, with online auctions, rapid Internet searches, and data archival, the need for more powerful database systems is also growing. One type of distributed database is called Redundant Arrays of Inexpensive Databases (RAIDb). RAIDb clusters are middleware-driven to promote interoperability and portability. RAIDb clusters allow for multiple levels of data replication and publish the clustered system as a single, coherent database system. In this thesis, performance models are created for RAIDb level 1 and level 2 clusters. A statistical three-factor, two-level factorial design is used to evaluate the significance of several factors in a RAIDb cluster. These significant factors are then used to create a regression analysis, and eventually a regression equation, that can be used to predict the performance of RAIDb clusters. This performance model should be a useful predictive tool, since its results are reported at a 99% confidence level.
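As an illustration of the methodology this abstract describes, a three-factor, two-level factorial design with a least-squares regression fit can be sketched as follows (the factor names, response values, and model form are illustrative assumptions, not the thesis's actual data or model):

```python
import numpy as np

# Hypothetical coded factors (-1/+1) for a 2^3 full factorial design:
# x1 = cluster size, x2 = replication level, x3 = client load.
# The response y is an invented throughput measurement per run.
X_coded = np.array([
    [-1, -1, -1], [ 1, -1, -1], [-1,  1, -1], [ 1,  1, -1],
    [-1, -1,  1], [ 1, -1,  1], [-1,  1,  1], [ 1,  1,  1],
], dtype=float)
y = np.array([110.0, 145.0, 118.0, 160.0, 95.0, 130.0, 102.0, 142.0])

# Design matrix: intercept, main effects, and two-way interactions.
x1, x2, x3 = X_coded.T
design = np.column_stack([
    np.ones(len(y)), x1, x2, x3, x1 * x2, x1 * x3, x2 * x3,
])

# Least-squares fit gives the regression coefficients (effect estimates).
coef, *_ = np.linalg.lstsq(design, y, rcond=None)

def predict(f1, f2, f3):
    """Predict throughput for coded factor levels in [-1, 1]."""
    row = np.array([1.0, f1, f2, f3, f1 * f2, f1 * f3, f2 * f3])
    return float(row @ coef)
```

With coded factors, the intercept equals the mean response, and each coefficient is half the corresponding factor effect; a real study would follow the fit with significance tests before trusting the equation as a predictor.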
12

Blockchain Technology in the Swedish Fund Market : A Study on the Trust Relationships Between Actors in a Blockchain-Based Fund Market / Blockkedje-teknologi på den svenska fondmarknaden

Huang, Shun, Carlsson, Jacob January 2016
Blockchain is a new type of shared ledger for distributing and keeping consensus on what constitutes the true state of a system. The implications of the technology, i.e. enabling almost trustless transactions between market participants, are revolutionary, especially for financial markets. The Swedish fund market, being a fragmented and in some cases inefficient system of intermediating actors, is a potential use case for the new technology of blockchain. This report reviews and presents the technology underlying the blockchain phenomenon, and its potential application to the Swedish fund market, with a specific focus on the possible new trust dynamics in such a market. Blockchain could, by removing some of the inter-participant risks, disintermediate the communication between market actors in the Swedish fund market, possibly enabling a cost reduction related to fund unit administration and order handling. / Blockkedje-teknologi är en ny typ av distribuerad databas som med hjälp av kryptologi tillåter ett system av självständiga och icke-tillitande aktörer att gemensamt dela en databas. Implikationerna av teknologin, tillåtandet av nära tillitslösa transaktioner mellan marknadsdeltagare, är revolutionära, speciellt för finansmarknaderna. Den svenska fondmarknaden, som karaktäriseras av fragmenterade och i vissa fall ineffektiva system, är ett potentiellt appliceringsområde för den nya teknologin. Den här rapporten går igenom och presenterar den underliggande teknologin för blockkedjor, och dess potentiella applikation på den svenska fondmarknaden, med ett specifikt fokus på hur appliceringen skulle förändra tillitsförhållandena på marknaden. Det konstateras att blockkedjor bl.a. skulle kunna avveckla vissa mellanliggande aktörer på marknaden, och därmed möjliggöra kostnadsbesparingar kopplade till fondadministration och orderhantering.
13

NEAREST NEIGHBOR SEARCH IN DISTRIBUTED DATABASES

Kumar, Susmit 11 June 2002
No description available.
14

CHECKPOINTING AND RECOVERY IN DISTRIBUTED AND DATABASE SYSTEMS

Wu, Jiang 01 January 2011
A transaction-consistent global checkpoint of a database records a state of the database which reflects the effects of only completed transactions and not the results of any partially executed transactions. This thesis establishes the necessary and sufficient conditions for a checkpoint of a data item (or the checkpoints of a set of data items) to be part of a transaction-consistent global checkpoint of the database. This result is useful for constructing transaction-consistent global checkpoints incrementally from the checkpoints of each individual data item of a database: by applying this condition, we can start from any useful checkpoint of any data item and then incrementally add checkpoints of other data items until we get a transaction-consistent global checkpoint of the database. This result can also help in designing non-intrusive checkpointing protocols for database systems. Based on the intuition gained from the development of the necessary and sufficient conditions, we also developed a non-intrusive, low-overhead checkpointing protocol for distributed database systems. Checkpointing and rollback recovery are also established techniques for achieving fault tolerance in distributed systems. Communication-induced checkpointing algorithms allow processes involved in a distributed computation to take checkpoints independently, while at the same time forcing processes to take additional checkpoints so that each checkpoint is part of a consistent global checkpoint. This thesis develops a low-overhead communication-induced checkpointing protocol and presents a performance evaluation of the protocol.
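The core consistency requirement this abstract describes, that a global checkpoint must reflect only complete transactions, can be illustrated with a toy check (a simplified sketch, not the thesis's formal necessary-and-sufficient condition):

```python
def transaction_consistent(checkpoints, transactions):
    """
    checkpoints: dict mapping data item -> set of committed transaction ids
                 whose effects are reflected in that item's chosen checkpoint.
    transactions: dict mapping transaction id -> set of data items it wrote.
    A combination of per-item checkpoints is transaction-consistent iff every
    transaction is reflected in either all or none of the items it wrote;
    a partially reflected transaction makes the global state inconsistent.
    """
    for txn, items in transactions.items():
        included = [txn in checkpoints[item]
                    for item in items if item in checkpoints]
        if any(included) and not all(included):
            return False  # txn is only partially reflected
    return True
```

For example, if transaction T1 wrote items x and y, a pair of checkpoints where x includes T1's write but y does not cannot be part of any transaction-consistent global checkpoint.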
15

Using Web Services for Transparent Access to Distributed Databases

Schneider, Jan, Cárdenas, Héctor, Talamantes, José Alfonso January 2007
This thesis presents a strategy to integrate distributed systems with the aid of web services. The focus of this research involves three subjects: web services, distributed database systems, and their application to a real-life project. To define the context of this thesis, we present the research methodology that provides the path along which the investigation is performed, and the general concepts of the running environment and architecture of web services. The major contribution of this thesis is a solution for the Chamber Trade in Sweden and VNemart in Vietnam, obtaining the requirement specification according to the SPIDER project needs and our software design specification using distributed databases and web services. As results, we present the software implementation and the way our software meets the requirements previously defined. For future web services developments, this document provides guidance on best practices in this subject.
16

[en] WORKLOAD BALANCING STRATEGIES FOR PARALLEL BLAST EVALUATION ON REPLICATED DATABASES AND PRIMARY FRAGMENTS / [pt] ESTRATÉGIAS DE BALANCEAMENTO DE CARGA PARA AVALIAÇÃO PARALELA DO BLAST COM BASES DE DADOS REPLICADAS E FRAGMENTOS PRIMÁRIOS

Daniel Xavier de Sousa 07 April 2008
[pt] Na área de biologia computacional a busca por informações relevantes em meio a volumes de dados cada vez maiores é uma atividade fundamental. Dentre outras, uma tarefa importante é a execução da ferramenta BLAST (Basic Local Alignment Search Tool), que possibilita comparar biosseqüências a fim de se descobrir homologias entre elas e inferir as demais informações pertinentes. Um dos problemas a serem resolvidos no que diz respeito ao custo de execução do BLAST se refere ao tamanho da base de dados, que vem aumentando consideravelmente nos últimos anos. Avaliar o BLAST com estratégias paralelas e distribuídas com apoio de agrupamento de computadores tem sido uma das estratégias mais utilizadas para obter ganhos de desempenho. Nesta dissertação, é realizada uma alocação física replicada da base de dados (de seqüências), onde cada réplica é fragmentada em partes distintas, algumas delas escolhidas como primárias. Dessa forma, é possível mostrar que se aproveitam as principais vantagens das estratégias de execução sobre bases replicadas e fragmentadas convencionais, unindo flexibilidade e paralelismo de E/S. Associada a essa alocação particular da base, são sugeridas duas formas de balanceamento dinâmico da carga de trabalho. As abordagens propostas são realizadas de maneira não intrusiva no código BLAST. São efetuados testes de desempenho variados que demonstram não somente a eficácia no equilíbrio de carga como também eficiência no processamento como um todo. / [en] A fundamental task in the area of computational biology is the search for relevant information within the large amount of available data. Among others, it is important to run tools such as BLAST - Basic Local Alignment Search Tool - efficiently, which enables the comparison of biological sequences and the discovery of homologies and other related information. However, the execution cost of BLAST is highly dependent on the database size, which has increased considerably.
The evaluation of BLAST in distributed and parallel environments like PC clusters has been largely investigated in order to obtain better performance. This work reports a replicated allocation of the (sequences) database where each copy is also physically fragmented, with some fragments assigned as primary. This way we show that it is possible to execute BLAST with some of the nice characteristics of both the replicated and fragmented conventional strategies, such as flexibility and I/O parallelism. We propose two dynamic workload-balancing strategies associated with this data allocation. We have adopted a non-intrusive approach, i.e., the BLAST code remains unchanged. These methods are implemented, and practical results show that we achieve not only a balanced workload but also very good performance.
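The replicated-with-primary-fragments idea can be sketched as a simple load-balancing pass: start from the primary assignment, then move fragments to less loaded nodes, which is always possible because every node holds a full replica (the fragment costs and the greedy policy below are illustrative assumptions, not the thesis's actual algorithms):

```python
def assign_fragments(costs, primary_of, nodes):
    """
    costs: dict fragment -> estimated search cost for that fragment.
    primary_of: dict fragment -> node preferred as its primary.
    nodes: list of node names; each node holds a full replica, so any node
           can serve any fragment and primaries are only preferred owners.
    Returns a dict node -> list of fragments after a greedy rebalancing pass.
    """
    load = {n: 0.0 for n in nodes}
    plan = {n: [] for n in nodes}
    # Start from the primary assignment, largest fragments first.
    for frag, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
        n = primary_of[frag]
        plan[n].append(frag)
        load[n] += cost
    # Rebalance: move a fragment from the most to the least loaded node
    # whenever doing so strictly reduces the imbalance.
    moved = True
    while moved:
        moved = False
        hi = max(nodes, key=lambda n: load[n])
        lo = min(nodes, key=lambda n: load[n])
        for frag in sorted(plan[hi], key=lambda f: costs[f]):
            if load[lo] + costs[frag] < load[hi]:
                plan[hi].remove(frag)
                plan[lo].append(frag)
                load[hi] -= costs[frag]
                load[lo] += costs[frag]
                moved = True
                break
    return plan
```

Each accepted move strictly decreases the sum of squared loads, so the pass terminates; a dynamic variant would rerun it as cost estimates are refined during execution.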
17

Virtual Full Replication for Scalable Distributed Real-Time Databases

Mathiason, Gunnar January 2009
A fully replicated distributed real-time database provides high availability and predictable access times, independent of user location, since all the data is available at each node. However, full replication requires that all updates are replicated to every node, resulting in exponential growth of bandwidth and processing demands as nodes and objects are added. To eliminate this scalability problem while retaining the advantages of full replication, this thesis explores Virtual Full Replication (ViFuR), a technique that gives database users a perception of using a fully replicated database while only replicating a subset of the data. We use ViFuR in a distributed main-memory real-time database where timely transaction execution is required. ViFuR enables scalability by replicating only data used at the local nodes. Also, ViFuR enables flexibility by adaptively replicating the currently used data, effectively providing logical availability of all data objects. Hence, ViFuR substantially reduces the problem of non-scalable resource usage of full replication, while allowing timely execution and access to arbitrary data objects. In the thesis we pursue ViFuR by exploring the use of database segmentation. We give a scheme (ViFuR-S) for static segmentation of the database prior to execution, where access patterns are known a priori. We also give an adaptive scheme (ViFuR-A) that changes segmentation during execution to meet the evolving needs of database users. Further, we apply an extended approach of adaptive segmentation (ViFuR-ASN) in a wireless sensor network - a typical dynamic, large-scale and resource-constrained environment. We use up to several hundred nodes and thousands of objects per node, and apply a typical periodic transaction workload with operation modes where the used data set changes dynamically.
We show that when replacing full replication with ViFuR, resource usage scales linearly with the required number of concurrent replicas, rather than exponentially with the system size.
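A toy version of adaptive replication in the spirit of this approach might replicate an object to a node on its first local access and drop replicas after a period of disuse (the class below is an illustrative sketch; the time-to-live policy and all names are assumptions, not the actual ViFuR protocol):

```python
import time

class AdaptiveReplicaManager:
    """
    Sketch of adaptive virtual full replication: an object gains a replica
    at a node on first local access and loses it after the replica goes
    unused longer than a TTL, so each node only pays for data it uses
    while all objects remain logically available.
    """
    def __init__(self, ttl_seconds=60.0, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self.last_access = {}  # (node, obj) -> timestamp of last local access

    def access(self, node, obj):
        """Record a local access; return True if a new replica was created."""
        key = (node, obj)
        created = key not in self.last_access
        self.last_access[key] = self.clock()
        return created

    def evict_stale(self):
        """Drop replicas whose last access is older than the TTL."""
        now = self.clock()
        stale = [k for k, t in self.last_access.items() if now - t > self.ttl]
        for k in stale:
            del self.last_access[k]
        return stale

    def replicas_of(self, obj):
        """Set of nodes currently holding a replica of obj."""
        return {n for (n, o) in self.last_access if o == obj}
```

The linear-scaling result quoted above corresponds to the replica count here tracking the number of concurrently accessing nodes rather than the total system size.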
18

Permissioned Blockchains and Distributed Databases : A Performance Study / Permissioned Blockkedjor och Distribuerade Databaser : En Prestanda Undersökning

Bergman, Sara January 2018
Blockchain technology is a booming new field in both computer science and economics, and use cases other than cryptocurrencies are on the rise. Permissioned blockchains are one instance of the blockchain technique. In a permissioned blockchain, the nodes which validate new transactions are trusted. Permissioned blockchains and distributed databases are essentially two different ways of storing data, but how do they compare in terms of performance? This thesis compares Hyperledger Fabric to Apache Cassandra in four experiments to investigate their insert and read latency. The experiments are executed using Docker on an Azure virtual machine, and the studied systems consist of up to 20 logical nodes. Latency measurements are performed using varying network size and load. For small networks, the insert latency of Cassandra is twice as high as that of Fabric, whereas for larger networks Fabric has almost twice as high insert latency as Cassandra. Fabric has around 40 ms latency for reading data and Cassandra between 150 ms and 250 ms; thus Fabric scales better for reading. The insert latency under different workloads is heavily affected by the configuration of Fabric and by the Docker overhead for Cassandra. The read latency is not affected by different workloads for either system.
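A minimal harness for the kind of latency measurement described here could look like the following (the `operation` callable stands in for a client call such as a transaction submission or a row insert; no real Fabric or Cassandra client API is assumed):

```python
import statistics
import time

def measure_latency(operation, n_requests=100, clock=time.perf_counter):
    """
    Time `operation()` n_requests times and report latency statistics in
    milliseconds. `operation` is any zero-argument callable that performs
    one request against the system under test.
    """
    samples = []
    for _ in range(n_requests):
        start = clock()
        operation()
        samples.append((clock() - start) * 1000.0)
    return {
        "mean_ms": statistics.fmean(samples),
        "median_ms": statistics.median(samples),
        "max_ms": max(samples),
    }
```

In a real experiment the callable would wrap the system's own client library, and the harness would be run at each network size and load level to produce the comparison figures.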
20

Srovnání distribuovaných "NoSQL" databází s důrazem na výkon a škálovatelnost / Comparison of distributed "NoSQL" databases with focus on performance and scalability

Vrbík, Tomáš January 2011
This paper focuses on NoSQL database systems, which currently serve as a supplement to, rather than a replacement for, relational database systems. The aim of this paper is to compare four selected NoSQL database systems (MongoDB, Apache Cassandra, Apache HBase and Redis), with a main focus on performance and scalability. The performance comparison is done using a simulated workload in a four-node cluster environment. One relational SQL database is also benchmarked to provide a comparison between the classic and modern ways of maintaining structured data. The comparison shows that none of these database systems can be labeled as "the best", as each of the compared systems is suitable for a different production deployment.
