1 |
SQL Query Disassembler: An Approach to Managing the Execution of Large SQL QueriesMeng, Yabin 25 September 2007 (has links)
In this thesis, we present an approach to managing the execution of large queries that decomposes a large query into an equivalent set of smaller queries and then schedules those smaller queries so that the work is accomplished with less impact on other queries. We describe a prototype implementation of our approach for IBM DB2™ and present a set of experiments to evaluate its effectiveness. / Thesis (Master, Computing), Queen's University, 2007.
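The abstract does not show the disassembler itself (which targets IBM DB2); purely as a sketch of the general idea, the following Python fragment splits one large range scan into many small key-range queries and runs them one at a time with a pause between pieces, so competing queries get room to run. The table and column names are hypothetical, and sqlite3 merely stands in for any DB-API connection.

```python
import time
import sqlite3  # stand-in for any DB-API connection; the thesis itself targets IBM DB2

def run_in_pieces(conn, lo, hi, chunk=100_000, pause_s=0.5):
    """Compute SUM(amount) over [lo, hi) as many small range queries
    instead of one large scan, yielding between pieces."""
    total = 0
    cur = conn.cursor()
    for start in range(lo, hi, chunk):
        end = min(start + chunk, hi)
        cur.execute(
            "SELECT COALESCE(SUM(amount), 0) FROM sales "
            "WHERE id >= ? AND id < ?",            # hypothetical table and columns
            (start, end),
        )
        total += cur.fetchone()[0]
        time.sleep(pause_s)  # crude scheduling: give competing queries a chance to run
    return total

# usage sketch:
# conn = sqlite3.connect("warehouse.db")
# grand_total = run_in_pieces(conn, lo=0, hi=10_000_000)
```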
2. Fast Computation on Processing Data Warehousing Queries on GPU Devices
Cyrus, Sam (29 June 2016)
Current database management systems use Graphics Processing Units (GPUs) as dedicated accelerators that process one query at a time, which leaves the GPU underutilized. When a single-query data warehousing workload was run on an open-source GPU query engine, utilization of the main GPU resources was found to be less than 25%, and this low utilization leads to low system throughput. To resolve this problem, this paper suggests transferring all of the required data into the GPU's global memory once and keeping it there while all queries are executed as one batch. This minimizes PCIe transfer time from CPU to GPU and reduces overall query-processing time. Execution time improved by up to 40% when running multiple queries, compared to dedicated per-query processing.
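The GPU query engine used in the paper is not named in the abstract; the sketch below only illustrates the stated idea, assuming the CuPy library and a CUDA-capable GPU: copy a column over PCIe once, keep it resident in GPU global memory, and answer several filter/aggregate "queries" from that copy instead of re-transferring the data per query.

```python
import numpy as np
import cupy as cp  # assumes a CUDA-capable GPU and the CuPy library

# Host-side column (stand-in for a data-warehouse fact column).
prices = np.random.rand(50_000_000).astype(np.float32)

# One PCIe transfer: the column stays resident in GPU global memory.
d_prices = cp.asarray(prices)

# Several queries run as one batch against the resident data,
# with no further host-to-device copies.
q1 = float(cp.sum(d_prices))                  # SELECT SUM(price)
q2 = int(cp.count_nonzero(d_prices > 0.9))    # SELECT COUNT(*) WHERE price > 0.9
q3 = float(cp.mean(d_prices[d_prices < 0.1])) # SELECT AVG(price) WHERE price < 0.1
```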
3. Analys av databasstruktur och stored procedure i syfte att öka prestanda vid hämtning av data (Analysis of database structure and stored procedures to improve data retrieval performance)
Polprasert, Natthakon; Ahmadi, Mobin (January 2019)
Company X has a database that grows continuously, so the response time when retrieving data increases with the amount of data retrieved. The company therefore wanted an analysis of the database structure and its stored procedures to see whether there is a more efficient way to store and retrieve large datasets. Performance can have different meanings; within computer systems it covers factors such as transaction throughput, response time, and storage space, but within the scope of this work performance is limited to response time. One of the tables in the database was normalized, and a number of stored-procedure techniques that the company had not yet used were implemented to see whether data retrieval improved. The response time of the different techniques was measured in order to compare their performance. The purpose of this thesis is to analyse the database tables and how the stored procedures can be improved, in order to find a sustainable solution for the database in the future. The research questions are: How can the database structure be improved with the aim of increasing data-retrieval performance? Which techniques can improve stored-procedure performance when retrieving large amounts of data? The result of this work was that normalization reduced the response time for large data retrievals, and that sp_executesql was the stored-procedure technique that improved execution time the most when retrieving large amounts of data.
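sp_executesql is the SQL Server system procedure that runs a parameterized statement, which lets the server cache and reuse the execution plan across calls. The company's actual stored procedures and tables are not shown, so the following is only a generic sketch, assuming the pyodbc driver and hypothetical connection details and object names.

```python
import pyodbc  # assumes an ODBC driver for SQL Server is installed

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=mydb;Trusted_Connection=yes"        # hypothetical connection details
)
cur = conn.cursor()

# Parameterized dynamic SQL via sp_executesql: the plan for this statement
# can be cached and reused across calls with different @fromDate values.
cur.execute(
    """
    EXEC sp_executesql
         N'SELECT TOP (100000) * FROM dbo.Measurements WHERE created >= @fromDate',
         N'@fromDate datetime2',
         @fromDate = ?;
    """,                                          # dbo.Measurements is hypothetical
    ("2019-01-01",),
)
rows = cur.fetchall()
```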
4. Duomenų bazių našumo tyrimo įrankis / Database performance audit tool
Greibus, Justinas (13 August 2010)
Analysing database performance is a common challenge in present-day software testing. Several methodologies for analysing database performance exist, but the tools built on them are mostly available only to a narrow circle of users. This master's thesis therefore investigates a methodology for database performance auditing that builds on several existing methodologies and is based on the principles of software load and stress testing. To identify database performance issues, values of performance parameters are recorded during the execution of automated test scenarios. The user can re-execute historical scenarios and compare the results of separate runs, and the generated reports provide detailed data that facilitates analysis of database performance.
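The tool itself is not described in detail in the abstract; as a minimal sketch of the underlying idea (timed scenario runs whose results can be stored and compared against a historical baseline), assuming a DB-API connection and hypothetical query names:

```python
import json
import statistics
import time

def run_scenario(conn, queries, repetitions=20):
    """Execute each named query repeatedly and record response times (seconds)."""
    cur = conn.cursor()
    timings = {}
    for name, sql in queries.items():
        samples = []
        for _ in range(repetitions):
            t0 = time.perf_counter()
            cur.execute(sql)
            cur.fetchall()
            samples.append(time.perf_counter() - t0)
        timings[name] = {"median": statistics.median(samples),
                         "p95": sorted(samples)[int(0.95 * len(samples)) - 1]}
    return timings

def compare(current, historical):
    """Report per-query change against a previously saved run."""
    for name, stats in current.items():
        old = historical.get(name)
        if old:
            delta = stats["median"] - old["median"]
            print(f"{name}: median {stats['median']:.4f}s ({delta:+.4f}s vs history)")

# usage sketch:
# results = run_scenario(conn, {"orders_by_day": "SELECT ..."})
# compare(results, json.load(open("baseline.json")))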
5. An evaluation of non-relational database management systems as suitable storage for user generated text-based content in a distributed environment
Du Toit, Petrus (07 October 2016)
Non-relational database management systems address some of the limitations relational database management systems have when storing large volumes of unstructured, user-generated, text-based data in distributed environments. They differ in the data model they use, their ability to scale data storage over distributed servers, and the programming interface they provide.
An experimental approach was followed to measure how these alternative database management systems address the limitations of relational databases in terms of storing unstructured text-based data, data-warehousing capabilities, the ability to scale data storage across distributed servers, and the level of programming abstraction they provide.
The results highlighted the limitations of relational database management systems. The alternative systems each address certain limitations, but not all. Document-oriented databases provide the best results and successfully address the need to store large volumes of user-generated text-based data in a distributed environment. / School of Computing / M.Sc. (Computer Science)
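The specific systems compared are not listed in the abstract; purely to show the flavour of the document model that the thesis found best suited to this workload, the sketch below uses MongoDB via pymongo, with a hypothetical deployment address and collection of user posts.

```python
from pymongo import MongoClient  # assumes a reachable MongoDB deployment

client = MongoClient("mongodb://localhost:27017")   # hypothetical address
posts = client["ugc"]["posts"]

# Schemaless insert: each user-generated document can carry different fields.
posts.insert_one({
    "user": "alice",
    "text": "Free-form, unstructured user content of arbitrary length ...",
    "tags": ["review", "databases"],
})

# Full-text retrieval without a fixed relational schema; a text index is created first.
posts.create_index([("text", "text")])
for doc in posts.find({"$text": {"$search": "databases"}}).limit(10):
    print(doc["user"], doc["text"][:60])
```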
6. Hodnocení výkonnosti podniku / Company Performance Measurement
Válková, Barbora (January 2017)
This master's thesis deals with the financial and business performance of the company Exvalos, spol. s r.o. in the years 2010-2015. The main methods used are benchmarking, comparison of the analysed company with its closest competitors, and analysis of financial indicators. Recommendations for enhancing the company's financial performance are suggested on the basis of the results of the conducted analysis.
7. A scalable database for a remote patient monitoring system
Mukhammadov, Ruslan (January 2013)
Today one of the fastest growing social services is the ability for doctors to monitor patients in their residences. The proposed highly scalable database system is designed to support a Remote Patient Monitoring system (RPMS). In an RPMS, a wide range of applications are enabled by collecting health-related measurement results from a number of medical devices in the patient's home, parsing and formatting these results, and transmitting them from the patient's home to specific data stores. Subsequently, another set of applications communicates with these data stores to provide clinicians with the ability to observe, examine, and analyze these health-related measurements in (near) real time. Because of the rapid expansion in the number of patients utilizing RPMSs, it is becoming a challenge to store, manage, and process the very large number of health-related measurements being collected. The primary reason for this problem is that most RPMSs are built on top of traditional relational databases, which are inefficient when dealing with this very large amount of data (often called "big data"). This thesis project analyzes scalable data management to support RPMSs, introduces a set of open-source technologies, built around HBase, that efficiently store and manage large amounts of data for such a scalable RPMS, implements these technologies, and, as a proof of concept, compares the prototype data management system with a traditional relational database (specifically MySQL). The comparison considers both a single node and a multi-node cluster and evaluates several critical parameters, including performance, scalability, and load balancing (in the case of multiple nodes). The amount of data used for testing input/output (read/write) and data-statistics performance is 1, 10, 50, 100, and 250 GB. The thesis presents several ways of dealing with large amounts of data and develops and evaluates a highly scalable database that could be used with an RPMS. Several software suites were used to compare the relational and non-relational systems, and these results are used to evaluate the performance of the prototype of the proposed RPMS. The benchmarking results show that MySQL is better than HBase in terms of read performance, while HBase is better in terms of write performance. Which of these types of databases should be used to implement an RPMS is therefore a function of the expected ratio of reads to writes; determining this ratio should be the subject of a future thesis project.
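The benchmark suites used in the thesis are not named in the abstract; the sketch below, assuming the happybase (HBase Thrift) and PyMySQL client libraries and hypothetical table layouts that already exist on both systems, shows the shape of such a single-node read/write comparison. Whether reads or writes dominate the expected RPMS workload then decides between the two stores, matching the thesis's conclusion.

```python
import time
import happybase   # HBase client over Thrift (assumed available)
import pymysql     # MySQL client (assumed available)

N = 10_000

def bench_hbase(host="localhost"):
    conn = happybase.Connection(host)              # hypothetical HBase setup
    table = conn.table("measurements")             # column family 'd' assumed to exist
    t0 = time.perf_counter()
    for i in range(N):
        table.put(f"patient42-{i}".encode(), {b"d:pulse": b"72"})
    t_write = time.perf_counter() - t0
    t0 = time.perf_counter()
    for i in range(N):
        table.row(f"patient42-{i}".encode())
    return t_write, time.perf_counter() - t0

def bench_mysql(**dsn):
    conn = pymysql.connect(**dsn)                  # hypothetical MySQL setup
    cur = conn.cursor()
    t0 = time.perf_counter()
    for i in range(N):
        cur.execute("INSERT INTO measurements (id, pulse) VALUES (%s, %s)", (i, 72))
    conn.commit()
    t_write = time.perf_counter() - t0
    t0 = time.perf_counter()
    for i in range(N):
        cur.execute("SELECT pulse FROM measurements WHERE id = %s", (i,))
        cur.fetchone()
    return t_write, time.perf_counter() - t0
```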
8. SOFORT: A Hybrid SCM-DRAM Storage Engine for Fast Data Recovery
Oukid, Ismail; Booss, Daniel; Lehner, Wolfgang; Bumbulis, Peter; Willhalm, Thomas (19 September 2022)
Storage Class Memory (SCM) has the potential to significantly improve database performance. This potential has been well documented for throughput [4] and response time [25, 22]. In this paper we show that SCM also has the potential to significantly improve restart performance, a shortcoming of traditional main-memory database systems. We present SOFORT, a hybrid SCM-DRAM storage engine that leverages the full capabilities of SCM by doing away with a traditional log and updating the persisted data in place in small increments. We show that we can achieve restart times of a few seconds, independent of instance size and transaction volume, without significantly impacting transaction throughput.
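SOFORT's actual SCM programming model (persistent pointers, cache-line flushes, and so on) is not shown in the abstract. Purely as an analogy for "no traditional log, small in-place updates to already-persisted data", the Python sketch below updates a counter directly in a memory-mapped file and flushes it, so a "restart" only needs to reopen the file rather than replay a log. This is an illustration under those assumptions, not the engine's implementation.

```python
import mmap
import os
import struct

PATH = "counters.dat"          # stand-in for an SCM-resident data structure

# Create and zero the persisted structure once (eight 64-bit counters).
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(b"\x00" * 8 * 8)

f = open(PATH, "r+b")
buf = mmap.mmap(f.fileno(), 0)

def increment(slot):
    """Log-free, in-place update of a single persisted 8-byte field."""
    off = slot * 8
    value, = struct.unpack_from("<Q", buf, off)
    struct.pack_into("<Q", buf, off, value + 1)
    buf.flush()   # persist the update; on real SCM this would be a cache-line flush + fence

increment(3)

# "Restart": reopening the file sees the latest persisted state immediately,
# with no log to replay.
```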
9. Multilingual Information Processing On Relational Database Architectures
Kumaran, A (12 1900)
No description available.