About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Convergence model for the migration of a relational database to a NoSQL database

Mendoza Jayo, Rubén G., Raymundo, Carlos, Mateos, Francisco Domínguez, Alvarez Rodríguez, José María 01 January 2017 (has links)
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher. / No abstract available.
2

Impact of Cassandra Compaction on Dockerized Cassandra’s performance : Using Size Tiered Compaction Strategy

Mohanty, Biswajeet January 2016 (has links)
Context. Cassandra is a NoSQL database which handles large amounts of data simultaneously and provides high availability for the data it stores. Compaction in Cassandra is a process of removing stale data and making data more available to the user. This thesis focuses on analyzing the impact of Cassandra compaction on Cassandra's performance when running inside a Docker container. Objectives. In this thesis, we investigate the impact of Cassandra compaction on database performance when Cassandra is used within a Docker-based container platform. We further fine-tune Cassandra's compaction settings to arrive at a near-optimal scenario which maximizes its performance while operating within a Docker container. Methods. A literature review is performed to list the compaction-related metrics and parameters which have an effect on Cassandra's performance. Further, experiments are conducted using different sets of mixed workloads to estimate the impact of compaction on database performance within a Docker container. Once these experiments are conducted, we modify the compaction settings under a write-heavy workload and assess database performance in each scenario to identify a near-optimal value of each parameter for maximum database performance. Finally, we use these near-optimal parameters to perform an experiment and assess the resulting database performance. Results. The Cassandra and operating-system parameters and metrics which affect Cassandra compaction are listed, and their effect on Cassandra's performance has been tested in a series of experiments. Based on these experiments, near-optimal values are proposed for the listed parameters. Conclusions. It can be concluded that, for better performance of Dockerized Cassandra, the proposed values for each of the parameters in the results (i.e. 5120 for memtable_heap_size_in_mb, 24 for concurrent_compactors, 16 for compaction_throughput_mb_per_sec, 6 for memtable_flush_writers and 0.14 for memtable_cleanup_threshold) can be applied separately, but not in combination (as confirmed by the final experiment). The metrics and parameters affecting Cassandra performance are also listed in this thesis.
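The sketch below is a minimal illustration, not taken from the thesis, of recording such proposed values in a cassandra.yaml file with PyYAML, applying one parameter at a time as the conclusion advises. The parameter names are copied from the abstract and should be checked against the Cassandra version in use (for example, the memtable heap setting is usually spelled memtable_heap_space_in_mb).

```python
# Hedged sketch: write one of the abstract's proposed values into cassandra.yaml.
# Parameter names follow the abstract; verify them for your Cassandra version.
import yaml  # PyYAML

# Values proposed in the abstract, keyed by the parameter names it lists.
PROPOSED = {
    "memtable_heap_size_in_mb": 5120,
    "concurrent_compactors": 24,
    "compaction_throughput_mb_per_sec": 16,
    "memtable_flush_writers": 6,
    "memtable_cleanup_threshold": 0.14,
}


def apply_parameter(parameter, path="cassandra.yaml"):
    """Apply a single proposed value; the thesis advises against combining them."""
    with open(path) as f:
        config = yaml.safe_load(f) or {}
    config[parameter] = PROPOSED[parameter]
    with open(path, "w") as f:
        yaml.safe_dump(config, f, default_flow_style=False)


if __name__ == "__main__":
    apply_parameter("compaction_throughput_mb_per_sec")
```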
3

Zpracování a vizualizace senzorových dat ve vojenském prostředí / Processing and Visualization of Military Sensor Data

Boychuk, Maksym January 2016 (has links)
This thesis deals with the creation, visualization, and processing of data in a military environment. The task is to design and implement a system that enables the creation, visualization, and processing of ESM data. The result of this work is the ESMBD application, which allows data to be stored and manipulated using either a classical approach, namely a relational database, or Big Data technologies. A comparison of data processing speed between the classical approach (a PostgreSQL database) and Big Data technologies (the Cassandra database and Hadoop) has also been carried out.
4

Compactions in Apache Cassandra : Performance Analysis of Compaction Strategies in Apache Cassandra

Kona, Srinand January 2016 (has links)
Context: The global communication system is growing tremendously, leading to the generation of a wide range of data. Telecom operators, which generate large amounts of data, need to manage these data efficiently. As the technology behind database management systems advances, NoSQL databases have seen remarkable growth in the 21st century. Apache Cassandra is an advanced NoSQL database system which is popular for handling semi-structured and unstructured Big Data. Cassandra organizes data on disk efficiently by using different compaction strategies. This research is focused on analyzing the performance of different compaction strategies in different use cases for the default cassandra-stress model. The analysis can suggest better usage of compaction strategies in Cassandra for a write-heavy workload. Objectives: In this study, we investigate the appropriate performance metrics for evaluating compaction strategies. We provide a detailed analysis of Size Tiered Compaction Strategy, Date Tiered Compaction Strategy, and Leveled Compaction Strategy for a write-heavy (90/10) workload, using the default cassandra-stress tool. Methods: A detailed literature study has been conducted on NoSQL databases and on the working of the different compaction strategies in Apache Cassandra. The performance metrics were chosen based on this literature study and on the opinions of the supervisors and Ericsson's Apache Cassandra team. Two tools were developed to collect the chosen metrics: the first, written in Jython, collects the Cassandra metrics, and the second, written in Python, collects the operating-system metrics. Graphs were generated in Microsoft Excel from the values obtained by the scripts. Results: Date Tiered Compaction Strategy and Size Tiered Compaction Strategy showed more or less similar behaviour during the stress tests. Leveled Compaction Strategy showed some remarkable results that affected system performance compared to the other two strategies. Date Tiered Compaction Strategy does not perform well for the default cassandra-stress model. Size Tiered Compaction Strategy can be preferred for the default cassandra-stress model, but is not suitable for big data. Conclusions: Based on a detailed analysis and logical comparison of the metrics, we conclude that Leveled Compaction Strategy performs better for a write-heavy (90/10) workload with the default cassandra-stress model than Size Tiered and Date Tiered Compaction Strategies.
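As an aside, the compaction strategy of a Cassandra table can be switched through plain CQL. The sketch below is not taken from the thesis; it uses the DataStax Python driver, and the keyspace and table names assume the default cassandra-stress schema (keyspace1.standard1), so treat them as placeholders.

```python
# Hedged sketch: switch a table's compaction strategy via CQL with the DataStax
# Python driver (cassandra-driver). Adjust host, keyspace, and table as needed.
from cassandra.cluster import Cluster

# Strategy classes compared in the thesis.
STRATEGIES = (
    "SizeTieredCompactionStrategy",
    "DateTieredCompactionStrategy",
    "LeveledCompactionStrategy",
)


def set_compaction(strategy_class, host="127.0.0.1"):
    cluster = Cluster([host])
    session = cluster.connect()
    session.execute(
        "ALTER TABLE keyspace1.standard1 "
        "WITH compaction = {'class': '%s'}" % strategy_class
    )
    cluster.shutdown()


if __name__ == "__main__":
    set_compaction(STRATEGIES[2])  # Leveled Compaction Strategy
```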
5

An Improved Design and Implementation of the Session-based SAMBO with Parallelization Techniques and MongoDB

Zhao, Yidan January 2017 (has links)
The session-based SAMBO is an ontology alignment system that uses MySQL to store matching results. Currently, SAMBO is able to align most ontologies within an acceptable time; however, for large-scale ontologies it fails to do so. The main purpose of this thesis work is therefore to improve the performance of SAMBO, especially when matching large-scale ontologies. To this end, a comprehensive literature study and an investigation of two prominent large-scale ontology matching systems are carried out to set the directions for improvement. A detailed investigation of the existing SAMBO is conducted to determine in which aspects the system can be improved. Optimization of the parallel matching process and of data management are identified as the primary optimization goals of the thesis work. A number of relevant techniques are then studied and compared, and an optimized design is proposed and implemented. System testing of the improved SAMBO shows that both the parallel matching optimization and the data management optimization contribute greatly to improving SAMBO's performance. However, the execution time needed by SAMBO to align large-scale ontologies with database interaction is still unacceptable.
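The sketch below is a minimal illustration of the two optimization directions named in the abstract, not SAMBO's actual code: pairwise matching is parallelized with a process pool and the resulting alignments are written to MongoDB instead of MySQL. The similarity function, database, and collection names are invented for the example.

```python
# Hedged sketch: parallel pairwise matching plus MongoDB persistence.
from itertools import product
from multiprocessing import Pool

from pymongo import MongoClient


def similarity(pair):
    """Toy string-based matcher standing in for SAMBO's real matchers."""
    a, b = pair
    common = len(set(a.lower()) & set(b.lower()))
    return {"source": a, "target": b, "score": common / max(len(a), len(b))}


def match(ontology_a, ontology_b, processes=4):
    # Parallelize the pairwise comparisons across a process pool.
    with Pool(processes) as pool:
        results = pool.map(similarity, product(ontology_a, ontology_b))
    # Persist matching results in MongoDB (illustrative database/collection names).
    client = MongoClient("mongodb://localhost:27017")
    client["sambo"]["alignments"].insert_many(results)
    return results


if __name__ == "__main__":
    match(["Blood_Vessel", "Heart"], ["Vessel", "Cardiac_Muscle"])
```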
6

Aplikace grafové databáze na analytické úlohy / Application of graph database for analytical tasks

Günzl, Richard January 2014 (has links)
This diploma thesis is about graph databases, which belong to the category of database systems known as NoSQL databases, although their usefulness reaches beyond the NoSQL category. Graph databases are useful in many cases thanks to their native storage of the interconnections between data, which brings advantageous properties compared with traditional relational database systems, especially in querying. The main goals of the thesis are to describe the principles, properties, and advantages of graph databases; to design a suitable graph database use case; and to implement a template verifying the designed use case. The theoretical part focuses on the description of the properties and principles of graph databases, which are then compared with the relational database approach. The next part is dedicated to the analysis and explanation of the most typical graph database use cases, including the unsuitable ones. The last part of the thesis contains the analysis of the author's own graph database use case, in which several principles are defined that can be applied separately. The use case is built around crucial analytical operations that search for the causes, together with their rate of influence, of the amount of or the change in the value of an indicator. This part also includes the implementation of the template verifying the use case in the graph database. The template consists of the database structure design, the concrete database data, and the analytical operations. Finally, the results returned from the graph database are verified by alternative calculations that do not use the graph database.
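To make the described analytical operation concrete, the sketch below assumes a hypothetical (Cause)-[:INFLUENCES {weight}]->(Indicator) graph model and queries it with the Neo4j Python driver; it illustrates the idea only and does not reproduce the thesis's actual schema or implementation.

```python
# Hedged sketch: rank the causes influencing an indicator, by influence weight.
from neo4j import GraphDatabase

QUERY = """
MATCH (c:Cause)-[r:INFLUENCES]->(i:Indicator {name: $indicator})
RETURN c.name AS cause, r.weight AS influence
ORDER BY r.weight DESC
"""


def causes_of(indicator, uri="bolt://localhost:7687", auth=("neo4j", "password")):
    # Credentials and URI are placeholders for a local Neo4j instance.
    driver = GraphDatabase.driver(uri, auth=auth)
    with driver.session() as session:
        rows = [dict(record) for record in session.run(QUERY, indicator=indicator)]
    driver.close()
    return rows


if __name__ == "__main__":
    for row in causes_of("revenue"):
        print(row["cause"], row["influence"])
```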
7

Um método de integração de dados armazenados em bancos de dados relacionais e NOSQL / A method for integrating data stored in relational and NOSQL databases

Vilela, Flávio de Assis 08 October 2015 (has links)
The increase in the quantity and variety of data available on the Web contributed to the emergence of the NOSQL approach, which aims to meet new demands such as availability, schema flexibility, and scalability. At the same time, relational databases are widely used for storing and manipulating structured data, providing stability and integrity of data, which is accessed through a standard language such as SQL. This work presents a method for integrating data stored in heterogeneous sources, in which an input query in standard SQL produces a unified answer based on the partial answers from relational and NOSQL databases.
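Below is a minimal sketch of the integration idea, not the method proposed in the dissertation: one request is answered by combining partial answers from a relational source (sqlite3 here) and a document store (MongoDB via pymongo). Table, collection, and attribute names are illustrative.

```python
# Hedged sketch: merge partial answers from a relational and a NOSQL source.
import sqlite3

from pymongo import MongoClient


def query_relational(conn, min_age):
    cur = conn.execute("SELECT id, name FROM customers WHERE age >= ?", (min_age,))
    return [{"id": row[0], "name": row[1]} for row in cur.fetchall()]


def query_nosql(collection, min_age):
    return [
        {"id": doc["id"], "name": doc["name"]}
        for doc in collection.find({"age": {"$gte": min_age}}, {"_id": 0})
    ]


def unified_answer(min_age=18):
    conn = sqlite3.connect("customers.db")           # illustrative relational source
    mongo = MongoClient("mongodb://localhost:27017") # illustrative document store
    partial_sql = query_relational(conn, min_age)
    partial_nosql = query_nosql(mongo["shop"]["customers"], min_age)
    # Merge the partial answers, de-duplicating on the shared id attribute.
    merged = {row["id"]: row for row in partial_sql + partial_nosql}
    return list(merged.values())


if __name__ == "__main__":
    print(unified_answer())
```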
8

MongoDB jako datové úložiště pro Google App Engine SDK / MongoDB as a Datastore for Google App Engine SDK

Heller, Stanislav January 2013 (has links)
This thesis discusses use cases of the NoSQL database MongoDB as a datastore for the user data that is stored by Datastore stubs in the Google App Engine SDK. The existing stubs are not well optimized for higher load; they significantly slow down application development and testing when larger data sets need to be stored. The analysis focuses on the features of MongoDB, the Google App Engine NoSQL Datastore, and the SDK's interface for data manipulation, the Datastore Service Stub API. As a result, a new datastore stub was designed and implemented to solve the problems of the existing stubs. The new stub uses MongoDB as a database layer for storing test data and is fully integrated into the Google App Engine SDK.
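The sketch below illustrates, in a hedged way, the kind of storage layer such a stub needs: entities that an in-memory Datastore stub would normally keep are persisted in MongoDB via pymongo. The database and collection layout is invented for the example and is not the one used by the implemented stub.

```python
# Hedged sketch: persist datastore-style entities in MongoDB.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
entities = client["dev_appserver"]["entities"]  # illustrative database/collection


def put_entity(kind, key_name, properties):
    """Upsert an entity keyed by (kind, key_name), mirroring a datastore put()."""
    entities.replace_one(
        {"kind": kind, "key_name": key_name},
        {"kind": kind, "key_name": key_name, "properties": properties},
        upsert=True,
    )


def get_entity(kind, key_name):
    return entities.find_one({"kind": kind, "key_name": key_name}, {"_id": 0})


if __name__ == "__main__":
    put_entity("Greeting", "g1", {"content": "hello", "author": "tester"})
    print(get_entity("Greeting", "g1"))
```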
9

Prohlížečová hra s umělou inteligencí / Browser Game with Artificial Intelligence

Moravec, Michal January 2019 (has links)
The thesis describes the design and implementation of a web browser game that can be played by multiple players over the internet. The main goal is to manage an economy, although players can cooperate (trading) or play against each other (battles). A NoSQL database, which is also described in the thesis, is used for persistent storage of progress. Apart from human players there are also agents (bots), which play the game autonomously via state machines generated by genetic algorithms. The thesis describes the design and functionality of both the genetic algorithms and the state machines.
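A minimal sketch of a table-driven state machine for such a bot follows; the states, observations, and transition table are invented for illustration, and the genetic algorithm that would evolve such tables is not reproduced here.

```python
# Hedged sketch: a table-driven state machine of the kind a genetic algorithm
# could evolve for an autonomous game bot.
import random

# Transition table: (state, observation) -> next state. A genetic algorithm
# would mutate and recombine tables like this one.
TRANSITIONS = {
    ("gather", "low_gold"): "gather",
    ("gather", "rich"): "trade",
    ("trade", "attacked"): "fight",
    ("trade", "rich"): "trade",
    ("fight", "low_gold"): "gather",
    ("fight", "attacked"): "fight",
}

ACTIONS = {"gather": "collect resources", "trade": "sell goods", "fight": "defend"}


def run_bot(steps=10, seed=0):
    rng = random.Random(seed)
    state = "gather"
    for _ in range(steps):
        observation = rng.choice(["low_gold", "rich", "attacked"])
        # Unknown (state, observation) pairs keep the current state.
        state = TRANSITIONS.get((state, observation), state)
        print(f"saw {observation:<9} -> {state}: {ACTIONS[state]}")


if __name__ == "__main__":
    run_bot()
```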
