31 |
Effektivisera generering av parameterfiler för betalterminaler / Improve the efficiency of generating parameter files for terminals Villabona, Antonio, Dietrichson, Fredrik January 2014 (has links)
This thesis describes the process of analyzing and evaluating the structure of data storage and improving the performance of generating parameter files destined for card terminals. The work was done in-house at Esplanad AB, a company dealing with security solutions and distribution of settings for payment stations. The first task was to evaluate the possibilities for improving the database that stores the settings for each card reader. The second task was to improve the structure of the code and the performance of the file-generating system. The thesis describes performance testing of both Esplanad's old system for generating parameter files and the newly constructed one. The solution presented includes improved performance of the file-generating process and a new database structure that increases scalability. Tests show that the new system is capable of generating parameter files in TLV format about 16 times faster. The proposed solution implements parallel processes and database replication.
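The abstract mentions parameter files in TLV (tag-length-value) format. As a rough illustration of what such encoding involves, the sketch below builds a simplified single-byte-length TLV record in Python; the tags, length rules (real terminal formats such as BER-TLV allow multi-byte lengths) and file layout are assumptions for illustration, not taken from the thesis.

```python
# Minimal sketch of tag-length-value (TLV) encoding, assuming single-byte
# lengths and illustrative tags; real terminal parameter files (e.g. BER-TLV)
# use more elaborate tag and length rules.

def encode_tlv(tag: bytes, value: bytes) -> bytes:
    """Concatenate the tag, a one-byte length, and the value."""
    if len(value) > 255:
        raise ValueError("single-byte length sketch supports values up to 255 bytes")
    return tag + bytes([len(value)]) + value

def encode_parameters(params: dict[bytes, bytes]) -> bytes:
    """Encode a set of terminal parameters as one concatenated TLV stream."""
    return b"".join(encode_tlv(tag, value) for tag, value in params.items())

if __name__ == "__main__":
    # Hypothetical parameters: tag 0x9F1A (country code), 0x5F2A (currency code).
    blob = encode_parameters({b"\x9f\x1a": b"\x07\x52", b"\x5f\x2a": b"\x07\x52"})
    print(blob.hex())  # 9f1a0207525f2a020752
```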
|
32 |
A Framework for Managing Big Environmental Science Data Enderskog, Marcus January 2023 (has links)
This master thesis project investigates present non-traditional and established database technologies for dealing with sensor-based high-frequency monitoring in environmental science research. Starting on a small scale with limited system resources, this work aims to serve as a starting point and inspiration for tackling problems of "Big Data" magnitude.
|
33 |
Monitoring and Analysis of CPU Utilization, Disk Throughput and Latency in servers running Cassandra database : An Experimental Investigation Chekkilla, Avinash Goud January 2017 (has links)
Context: Lightweight process virtualization has been used in the past, e.g. Solaris Zones, jails in FreeBSD and Linux containers (LXC). But only since 2013 has there been kernel support for user namespaces and process grouping control that makes lightweight virtualization interesting for creating virtual environments comparable to virtual machines. Telecom providers have to handle massive growth of information due to the growing number of customers and devices. Traditional databases are not designed to handle such massive data growth; NoSQL databases were developed for this purpose. Cassandra, with its high read and write throughput, is a popular NoSQL database for handling this kind of data. Running the database using operating-system virtualization (containerization) would offer a significant performance gain compared to virtual machines, and also gives the benefits of migration, fast boot-up and shutdown times, lower latency and less use of the servers' physical resources. Objectives: This thesis aims to investigate the performance trade-off while loading a Cassandra cluster in bare-metal and containerized environments. The effect of loading the cluster is studied in detail for each individual node in terms of latency, CPU utilization and disk throughput. Method: We implement the physical model of the Cassandra cluster based on realistic and commonly used scenarios for database analysis. We generate different load cases on the cluster for bare metal and Docker and measure CPU utilization, disk throughput and latency using standard tools such as sar and iostat. Statistical analysis (mean value analysis, higher-moment analysis and confidence intervals) is performed on measurements from specific interfaces in order to show the reliability of the results. Results: Experimental results give a quantitative analysis of measurements consisting of latency, CPU utilization and disk throughput while running a Cassandra cluster in bare-metal and container environments, together with a statistical analysis summarizing the performance of the cluster when running a single Cassandra instance. Conclusions: The detailed analysis shows that the resource utilization of the database was similar in both the bare-metal and container scenarios. The CPU utilization for the bare-metal servers is equivalent in the case of mixed, read and write loads. The latency values inside the container are slightly higher in all cases. The mean value and higher-moment analyses allow a finer analysis of the results, and the calculated confidence intervals show a lot of variation in disk performance, which might be due to compactions happening randomly. Further work can be done by configuring the compaction strategies, memory, and read and write rates.
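The mean value, higher-moment and confidence-interval analysis described above can be reproduced on any series of sar/iostat samples. The sketch below is a minimal, hypothetical example in Python: the sample values are invented, the interval uses a normal approximation, and the tooling (standard-library statistics only) is an assumption rather than what the thesis actually used.

```python
# Hedged sketch: summarising a series of utilisation samples with a mean,
# a skewness estimate (higher moment) and a 95% confidence interval.
# The numbers are invented; a normal approximation (z = 1.96) is assumed.
import math
import statistics

samples = [41.2, 39.8, 44.5, 40.1, 43.3, 42.7, 38.9, 45.0]  # e.g. CPU % from sar

mean = statistics.mean(samples)
stdev = statistics.stdev(samples)
n = len(samples)

# Sample skewness: third central moment normalised by the cube of the stdev.
skewness = sum((x - mean) ** 3 for x in samples) / (n * stdev ** 3)

# 95% confidence interval for the mean under a normal approximation.
half_width = 1.96 * stdev / math.sqrt(n)
print(f"mean={mean:.2f}  skewness={skewness:.3f}  "
      f"95% CI=({mean - half_width:.2f}, {mean + half_width:.2f})")
```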
|
34 |
SemIndex: Semantic-Aware Inverted Index Chbeir, Richard, Luo, Yi, Tekli, Joe, Yetongnon, Kokou, Raymundo Ibañez, Carlos Arturo, Traina, Agma J. M., Traina Jr, Caetano, Al Assad, Marc, Universidad Peruana de Ciencias Aplicadas (UPC) 10 February 2015 (has links)
carlos.raymundo@upc.edu.pe / This paper focuses on the important problem of semantic-aware search in textual (structured, semi-structured, NoSQL) databases. This problem has emerged as a required extension of the standard containment keyword-based query to meet user needs in textual databases and IR applications. We provide here a new approach, called SemIndex, that extends the standard inverted index by constructing a tightly coupled inverted index graph that combines two main resources: a general-purpose semantic network, and a standard inverted index on a collection of textual data. We also provide an extended query model and related processing algorithms with the help of SemIndex. To investigate its effectiveness, we set up experiments to test the performance of SemIndex. Preliminary results have demonstrated the effectiveness, scalability and optimality of our approach.
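As a rough illustration of the idea of coupling an inverted index with a semantic resource, the sketch below builds a plain inverted index and expands query terms through a small synonym map before lookup. Both the documents and the synonym map are invented for illustration; SemIndex itself builds a combined index graph over a general-purpose semantic network, which this sketch does not reproduce.

```python
# Hedged sketch: a plain inverted index plus synonym-based query expansion.
# This only hints at the idea behind a semantic-aware index; SemIndex couples
# the index and the semantic network into a single graph structure.
from collections import defaultdict

documents = {
    1: "fast car sale",
    2: "used automobile listings",
    3: "bicycle repair guide",
}

# Toy stand-in for a general-purpose semantic network (synonym sets).
synonyms = {"car": {"car", "automobile"}, "bike": {"bike", "bicycle"}}

# Build the inverted index: term -> set of document ids.
index: dict[str, set[int]] = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.split():
        index[term].add(doc_id)

def semantic_search(term: str) -> set[int]:
    """Expand the query term with its synonyms, then union the postings."""
    expanded = synonyms.get(term, {term})
    return set().union(*(index.get(t, set()) for t in expanded))

print(semantic_search("car"))   # {1, 2}: matches both "car" and "automobile"
print(semantic_search("bike"))  # {3}
```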
|
35 |
Analysis and Experimental Comparison of Graph Databases Kolomičenko, Vojtěch January 2013 (has links)
In recent years a new type of NoSQL database, called graph databases (GDBs), has gained significant popularity due to the increasing need to process and store data in the form of a graph. The objective of this thesis is to research the possibilities and limitations of GDBs and to conduct an experimental comparison of selected GDB implementations. For this purpose, the requirements of a universal GDB benchmark have been formulated and an extensible benchmarking tool, named BlueBench, has been developed.
|
36 |
NOSQL- OCH MYSQLPRESTANDA FÖR SKOGSBRANDSDATA : Prestandautvärdering av grundläggande databasoperationer vid användning av tabellanpassad KML-data / NOSQL AND MYSQL PERFORMANCE FOR FOREST FIRE DATA : Performance evaluation of basic database operations using table-mapped KML data Wihlstrand, Marc January 2015 (has links)
Global warming puts many functions critical to society to the test, not least the ability to detect and fight fires. An important step towards doing this effectively is being able to store the collected data and process it so that it can be used efficiently by any application, which requires a database system. To investigate which database system is best suited for storing fire data from the United States Department of Agriculture, insert, read and update operations were run against the databases Cassandra, MongoDB and MySQL. The test results obtained from the study indicate that MongoDB is by a wide margin the best suited for processing data from the Active Fire Maps documents provided by the United States Department of Agriculture.
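The insert, read and update operations compared in the study map directly onto each database's client API. As a hedged illustration, the sketch below shows what the three operation types could look like against MongoDB using pymongo; the collection name, document shape and connection details are assumptions for illustration, not the actual test setup.

```python
# Hedged sketch: the three benchmarked operation types (insert, read, update)
# expressed against MongoDB via pymongo. Assumes a local MongoDB instance;
# the "fires" collection and document fields are illustrative only.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
fires = client["firedata"]["fires"]

# Insert: one detection record, e.g. parsed from an Active Fire Maps KML placemark.
fires.insert_one({"lat": 34.05, "lon": -118.24, "confidence": 85, "acq_date": "2015-05-01"})

# Read: all detections above a confidence threshold.
hot_spots = list(fires.find({"confidence": {"$gt": 80}}))

# Update: revise the confidence of a single record.
fires.update_one({"lat": 34.05, "lon": -118.24}, {"$set": {"confidence": 90}})

print(len(hot_spots))
```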
|
37 |
En prestandajämförelse mellan databaskopplingar i R / A performance comparison between database connections in R Linnarsson, Gustaf January 2015 (has links)
Traditional databases have long been built on the relational data model and written in SQL. But as data volumes grew, more capacity was needed to store them, and NoSQL was created for that purpose. With such large volumes of data it naturally became interesting to analyse all of it, yet going through it row by row is impossible. In the world of statistics and analysis there are a number of tools for this, one of which is R. This study tries to find out whether any database alternative is better than the others at working together with R. The aim is to give companies and private individuals, through an experiment, a clear picture of what to choose when it comes to database alternatives and of the simplest way to bring data in for analysis. The results of the experiment show that MySQL was the faster alternative for the data volume used; this would probably shift if larger data volumes were tested.
|
38 |
Data modeling with NoSQL : how, when and why Silva, Carlos André Reis Fernandes Oliveira da January 2010 (has links)
Integrated master's thesis. Engenharia Informática e Computação. Faculdade de Engenharia, Universidade do Porto. 2010
|
39 |
Evaluating NOSQL Technologies for Historical Financial Data Rafique, Ansar January 2013 (has links)
Today, businesses and organizations are generating huge volumes of data; applications such as Web 2.0 or social networking require the processing of petabytes of data. Stock exchange systems are among those that process large amounts of quotes and trades on a daily basis. Limited database storage ability is a major bottleneck in meeting the challenge of providing efficient access to information. Furthermore, varying data are the major source of information for the financial industry, and this data needs to be read and written efficiently in the database, which is quite costly with a traditional Relational Database Management System. An RDBMS is good for different scenarios and can handle certain types of data very well, but it isn't always the perfect choice. The existence of innovative architectures allows large data to be stored in an efficient manner. "Not only SQL" brings an effective solution through the provision of efficient information storage capability. NOSQL is an umbrella term for various new data stores, which have gained popularity due to factors that include their open-source nature, non-relational data stores, high performance, fault tolerance and scalability, to name a few. Nowadays, NOSQL databases are rapidly gaining popularity because of the advantages they offer compared to an RDBMS. The major aim of this research is to find an efficient solution for storing and processing huge volumes of data for certain variants. The study is based on choosing a reliable, distributed and efficient NOSQL database at Cinnober Financial Technology AB. The research mainly explores NOSQL databases and discusses issues with RDBMS, eventually selecting a database best suited for financial data management. It is an attempt to contribute to current research in the field of NOSQL databases by comparing one such NOSQL database, Apache Cassandra, with Apache Lucene and the traditional relational database MySQL for financial data management. The main focus is to find out which database is the preferred choice for different variants. In this regard, a performance test framework for a selected set of candidates has also been taken into consideration.
|
40 |
Creating a NoSQL database for the Internet of Things : Creating a key-value store on the SensibleThings platform Zhu, Sainan January 2015 (has links)
Due to the requirements of Web 2.0 applications and the limited horizontal scalability of relational databases, NoSQL databases have become more and more popular in recent years. However, it is not easy to select a database that is suitable for a specific use. This thesis describes the detailed design, implementation and final performance evaluation of a key-value NoSQL database for the SensibleThings platform, which is an Internet of Things platform. The thesis starts by comparing the different types of NoSQL databases to select the most appropriate one. During the implementation of the database, the algorithms for data partitioning, data access, replication, addition and removal of nodes, and failure detection and handling are dealt with. The final results for the load distribution and the performance evaluation are also presented in this paper. At the end of the thesis, some problems and improvements that need to be taken into consideration in the future are discussed.
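Data partitioning and replication in key-value stores are often handled with a consistent-hashing ring, where each key is stored on the node that follows its hash position and copied to the next few nodes. The sketch below shows that idea in Python; it is only an assumption-laden illustration of one common design, not the partitioning algorithm actually implemented on the SensibleThings platform.

```python
# Hedged sketch: key partitioning and replication on a consistent-hashing ring.
# One common design for key-value stores; the actual SensibleThings
# implementation may partition and replicate data differently.
import bisect
import hashlib

def ring_position(value: str) -> int:
    """Map a node id or key onto the hash ring."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes: list[str], replicas: int = 2):
        self.replicas = replicas
        self.ring = sorted((ring_position(n), n) for n in nodes)

    def nodes_for(self, key: str) -> list[str]:
        """Primary node plus the next `replicas` nodes clockwise on the ring."""
        positions = [pos for pos, _ in self.ring]
        start = bisect.bisect_right(positions, ring_position(key)) % len(self.ring)
        return [self.ring[(start + i) % len(self.ring)][1]
                for i in range(min(self.replicas + 1, len(self.ring)))]

ring = HashRing(["node-a", "node-b", "node-c", "node-d"])
print(ring.nodes_for("sensor/42/temperature"))  # e.g. ['node-c', 'node-d', 'node-a']
```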
|