1 |
Extracting ECA rules from UML
Palmadottir, Julia January 2001 (has links)
Active technology in database management systems (DBMS) makes it possible to move behaviour that depends on the system's state out of the application software and into a rule base in the DBMS. With active technology in database systems, the question of how to design active behaviour has become an important issue. Modelling processes do not provide support for the design of active rules, which can lead to conflicts between the event-condition-action (ECA) rules representing the active behaviour and the application systems using the active DBMS. The unified modelling language (UML) is a widely used notation language and is the main subject of this project. Its features are investigated to establish to what extent UML modelling diagrams provide information that can be used to formulate ECA rules. To achieve this, two methods were developed. One of the methods was applied to use-case UML modelling diagrams. The use-case models were developed to reflect a real-life organisation. The results of applying the method to the use-case models show that there are features in UML that can be expressed with ECA rules.
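As a concrete illustration of the ECA pattern the thesis targets, the sketch below expresses one rule as a SQLite trigger; the tables and the rule are hypothetical examples, not taken from the thesis. The event is the INSERT, the condition is the WHEN clause, and the action is the trigger body.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT, qty INTEGER);
CREATE TABLE stock  (item TEXT PRIMARY KEY, on_hand INTEGER);

-- Event: a row is inserted into orders.
-- Condition: enough stock is on hand for the ordered item.
-- Action: decrement the stock level.
CREATE TRIGGER deduct_stock
AFTER INSERT ON orders
WHEN (SELECT on_hand FROM stock WHERE item = NEW.item) >= NEW.qty
BEGIN
    UPDATE stock SET on_hand = on_hand - NEW.qty WHERE item = NEW.item;
END;
""")

conn.execute("INSERT INTO stock VALUES ('widget', 10)")
conn.execute("INSERT INTO orders (item, qty) VALUES ('widget', 3)")
print(conn.execute("SELECT on_hand FROM stock").fetchone())  # (7,)
```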
|
2 |
Dynamic Scale-out Mechanisms for Partitioned Shared-Nothing Databases
Karyakin, Alexey January 2011 (has links)
For a database system used in pay-per-use cloud environments, elastic scaling becomes an essential feature, allowing costs to be minimized while accommodating fluctuations in load. One approach to scalability involves horizontal database partitioning and dynamic migration of partitions between servers. We define a scale-out operation as the provisioning of a new server followed by the migration of one or more partitions to the newly allocated server.
In this thesis we study the efficiency of different implementations of the scale-out operation in the context of online transaction processing (OLTP) workloads. We designed and implemented three migration mechanisms featuring different strategies for data transfer. The first is based on SnowFlock, a modification of the Xen hypervisor, and uses on-demand block transfers for both server provisioning and partition migration. The second is implemented in a database management system (DBMS) and uses bulk transfers for partition migration, optimized for higher bandwidth utilization. The third is a conventional application that uses SQL commands to copy partitions between servers.
We perform an experimental comparison of these scale-out mechanisms for disk-bound and CPU-bound configurations. When comparing the mechanisms, we analyze their impact on whole-system performance and on the experience of individual clients.
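As a rough sketch of the third, application-level mechanism (SQL commands copying partitions between servers), the fragment below migrates one partition between two in-memory SQLite databases standing in for the old and the newly provisioned server. The schema and names are illustrative only, and a real system would also have to coordinate in-flight transactions.

```python
import sqlite3

SCHEMA = "CREATE TABLE accounts (id INTEGER, partition_id INTEGER, balance REAL)"

old_server = sqlite3.connect(":memory:")   # stand-in for the loaded server
new_server = sqlite3.connect(":memory:")   # stand-in for the provisioned one
old_server.execute(SCHEMA)
new_server.execute(SCHEMA)
old_server.executemany("INSERT INTO accounts VALUES (?, ?, ?)",
                       [(1, 0, 10.0), (2, 1, 20.0), (3, 1, 30.0)])

def scale_out(src, dst, partition_id):
    # Copy every row of the partition to the new server, then drop it locally.
    rows = src.execute("SELECT * FROM accounts WHERE partition_id = ?",
                       (partition_id,)).fetchall()
    dst.executemany("INSERT INTO accounts VALUES (?, ?, ?)", rows)
    src.execute("DELETE FROM accounts WHERE partition_id = ?", (partition_id,))
    src.commit()
    dst.commit()

scale_out(old_server, new_server, partition_id=1)
print(new_server.execute("SELECT COUNT(*) FROM accounts").fetchone())  # (2,)
```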
|
3 |
Towards a Database System for Large-scale Analytics on Strings
Sahli, Majed 23 July 2015 (has links)
Recent technological advances are causing an explosion in the production of sequential data. Biological sequences, web logs and time series are represented as strings. Currently, strings are stored, managed and queried in an ad-hoc fashion because they lack a standardized data model and query language. String queries are computationally demanding, especially when strings are long and numerous. Existing approaches cannot handle the growing number of strings produced by environmental, healthcare, bioinformatic, and space applications. There is a trade-off between performing analytics efficiently and scaling to thousands of cores to finish in reasonable times.
In this thesis, we introduce a data model that unifies the input and output representations of core string operations. We define a declarative query language for strings where operators can be pipelined to form complex queries. A rich set of core string operators is described to support string analytics. We then demonstrate a database system for string analytics based on our model and query language. In particular, we propose the use of a novel data structure augmented by efficient parallel computation to strike a balance between preprocessing overheads and query execution times. Next, we delve into repeated motif extraction as a core string operation for large-scale string analytics. Motifs are frequent patterns used, for example, to identify biological functionality, periodic trends, or malicious activities. Statistical approaches are fast but inexact, while combinatorial methods are sound but slow. We introduce ACME, a combinatorial repeated-motif extractor. We study the spatial and temporal locality of motif extraction and devise a cache-aware search space traversal technique. ACME is the only method that scales to gigabyte-long strings, handles large alphabets, and supports interesting motif types with minimal overhead.
While ACME is cache-efficient, it is limited by being serial. We devise a lightweight parallel space traversal technique, called FAST, that enables ACME to scale to thousands of cores. A high degree of concurrency is achieved by partitioning the search space horizontally and balancing the workload among cores with minimal communication overhead. Consequently, complex queries are solved in minutes instead of days. ACME is a versatile system that runs on workstations, clusters, and supercomputers. It is the first to utilize a supercomputer and scale to 16 thousand CPUs.
Merely using more cores does not guarantee efficiency, because of the related overheads. To this end, we introduce an automatic tuning mechanism that suggests the appropriate number of cores to meet user constraints in terms of runtime while minimizing the financial cost of cloud resources. In particular, we study workload frequency distributions and then build a model that finds the best problem decomposition and estimates serial and parallel runtimes. Finally, we generalize our automatic tuning mechanism into a stand-alone method, called APlug, which can be used in other applications; we integrate it with systems for molecular docking and multiple sequence alignment.
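To make "repeated motif" concrete, here is a naive fixed-length counting baseline. This is not ACME's algorithm, which searches a combinatorial space of motif types with cache-aware traversal; it only illustrates the kind of pattern the system extracts.

```python
from collections import Counter

def repeated_motifs(s: str, length: int, min_freq: int) -> dict:
    # Count every substring of the given length; keep the frequent ones.
    counts = Counter(s[i:i + length] for i in range(len(s) - length + 1))
    return {m: c for m, c in counts.items() if c >= min_freq}

# Every length-4 motif occurring at least twice in a toy DNA string.
print(repeated_motifs("ACGTACGTACGA", length=4, min_freq=2))
# {'ACGT': 2, 'CGTA': 2, 'GTAC': 2, 'TACG': 2}
```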
|
4 |
Methods for Comparing Database Management Systems
Törnqvist, Jakob January 2023 (has links)
This thesis was made in collaboration with Zenon AB, an IT company whose clients generate large amounts of data; it is therefore important for Zenon AB to make well-founded choices of database management systems (DBMS) when designing systems for its clients. This thesis therefore presents research into the comparison of DBMSs. Nowadays, a large variety of DBMSs exists. Despite this, there seems to be a lack of comparisons between types of DBMS, and therefore a lack of clarity about when each type should be used. Thus, this thesis aims to highlight the differences between DBMS types by creating a tailored test for each DBMS type and comparing how each type performs in the others' areas of specialization. This process will show how big the differences can be and highlight the importance of the choice of DBMS. The time it takes to implement a DBMS, and how simple that implementation is, seems to be a factor most developers take into consideration when choosing a DBMS, but there is little research on how to compare this aspect. Therefore, this thesis will also investigate the viability of a method for comparing how easy DBMSs are to implement into systems by querying programming help forums such as Stack Overflow, as sketched below. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
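A minimal sketch of the forum-querying idea, assuming the documented Stack Exchange v2.3 API and its built-in `total` filter; field names and request quotas should be verified against the current API documentation before relying on this.

```python
import requests  # third-party HTTP client: pip install requests

def question_count(tag: str) -> int:
    # Ask Stack Overflow how many questions carry a given DBMS tag.
    url = "https://api.stackexchange.com/2.3/questions"
    params = {"tagged": tag, "site": "stackoverflow", "filter": "total"}
    return requests.get(url, params=params, timeout=10).json()["total"]

for tag in ("mysql", "mongodb", "cassandra"):
    print(tag, question_count(tag))
```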
|
5 |
NoSQL and MySQL performance for forest fire data: Performance evaluation of basic database operations using table-mapped KML data
Wihlstrand, Marc January 2015 (has links)
Global warming is putting many functions vital to society to the test, not least the ability to detect and fight fires. An important step towards doing this effectively is being able to store the collected data and process it so that it can be used efficiently by any given application. This requires a database system. To investigate which database system is best suited to storing fire data from the United States Department of Agriculture, insert, read, and update operations were performed on the databases Cassandra, MongoDB, and MySQL. The test results obtained in the study indicate that MongoDB is, by a wide margin, best suited for processing data from Active Fire Maps documents obtained from the United States Department of Agriculture.
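The shape of such a benchmark can be sketched as below, with SQLite standing in for the real Cassandra, MongoDB, and MySQL clients; the table and operations are illustrative, not the study's actual workload, and each real system would plug in through the same callable interface.

```python
import sqlite3
import time

def timed(label, op, n=1000):
    # Run the same operation n times and report the elapsed wall-clock time.
    start = time.perf_counter()
    for i in range(n):
        op(i)
    print(f"{label}: {time.perf_counter() - start:.3f}s for {n} ops")

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE fires (id INTEGER PRIMARY KEY, lat REAL, lon REAL)")

timed("insert", lambda i: db.execute("INSERT INTO fires VALUES (?, ?, ?)", (i, 0.0, 0.0)))
timed("read",   lambda i: db.execute("SELECT * FROM fires WHERE id = ?", (i,)).fetchone())
timed("update", lambda i: db.execute("UPDATE fires SET lat = 1.0 WHERE id = ?", (i,)))
```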
|
6 |
Second-tier Cache Management to Support DBMS Workloads
Li, Xuhui 16 September 2011 (has links)
Enterprise Database Management Systems (DBMS) often run on computers with dedicated storage systems. Their data access requests need to go through two tiers of cache, i.e., a database bufferpool and a storage server cache, before reaching the storage media, e.g., disk platters. A tremendous amount of work has been done to improve the performance of the first-tier cache, i.e., the database bufferpool. However, the amount of work focusing on second-tier cache management to support DBMS workloads is comparably small. In this thesis we propose several novel techniques for managing second-tier caches to boost DBMS performance in terms of query throughput and query response time. The main purpose of second-tier cache management is to reduce the I/O latency endured by database query executions. This goal can be achieved by minimizing the number of reads and writes issued from second-tier caches to storage devices.

The first part of our research focuses on reducing the number of read I/Os issued by second-tier caches. We observe that DBMSs issue I/O requests for various reasons. The rationales behind these I/O requests provide useful information to second-tier caches because they can be used to estimate the temporal locality of the data blocks being requested. A second-tier cache can exploit this information when making replacement decisions. In this thesis we propose a technique to pass this information from DBMSs to second-tier caches and to use it in guiding cache replacements.

The second part of this thesis focuses on reducing the number of writes issued by second-tier caches. Our work is twofold. First, we observe that although there are second-tier caches within computer systems, today's DBMSs cannot take full advantage of them. For example, most commercial DBMSs use forced writes to propagate bufferpool updates to permanent storage for data durability reasons. We notice that enforcing such a practice is more conservative than necessary. Some of the writes can be issued as unforced requests and can be cached in the second-tier cache without immediate synchronization. This gives the second-tier cache opportunities to cache and consolidate multiple writes into one request. Unfortunately, the current POSIX-compliant file system interfaces provided by mainstream operating systems (e.g., Unix and Windows) are not flexible enough to support such dynamic synchronization. We propose to extend such interfaces to let DBMSs take advantage of unforced writes whenever possible.

Additionally, we observe that existing cache replacement algorithms are designed solely to maximize read cache hits (i.e., to minimize read I/Os). The purpose is to minimize the read latency, which is on the critical path of query executions. We argue that minimizing read requests is not the only objective of cache replacement. When I/O bandwidth becomes a bottleneck, the objective should be to minimize the total number of I/Os, including both reads and writes, to achieve the best performance. We propose to associate a new type of replacement cost, i.e., the total number of I/Os caused by the replacement, with each cache page, and we also present a partial characterization of an optimal algorithm which minimizes the total number of I/Os generated by caches. Based on this knowledge, we extend several existing replacement algorithms, which are write-oblivious (they focus only on reducing reads), to be write-aware, and we observe promising performance gains in the evaluations.
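A toy sketch of the write-aware idea, under two stated assumptions: recency approximates the cost of a future re-read, and a dirty page costs one extra write-back I/O on eviction. This is an illustration of the general principle, not the thesis's actual algorithm.

```python
from collections import OrderedDict

class WriteAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()          # page -> dirty flag, LRU order

    def access(self, page, write=False):
        if page in self.pages:
            dirty = self.pages.pop(page) or write
        else:
            dirty = write
            if len(self.pages) >= self.capacity:
                self._evict()
        self.pages[page] = dirty            # most recently used at the end

    def _evict(self):
        # Eviction cost = chance of a future re-read (approximated by the
        # recency rank) + one write-back I/O if the page is dirty.
        costs = {p: rank / len(self.pages) + (1.0 if dirty else 0.0)
                 for rank, (p, dirty) in enumerate(self.pages.items())}
        victim = min(costs, key=costs.get)
        del self.pages[victim]

cache = WriteAwareCache(capacity=2)
cache.access("a", write=True)   # "a" becomes a dirty page
cache.access("b")               # "b" stays clean
cache.access("c")               # evicts clean "b"; evicting dirty "a" would cost a write
print(list(cache.pages))        # ['a', 'c']
```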
|
7 |
Development of database support for production of doubled haploids
Engerberg, Malin January 2002 (has links)
In this project, relational and Lotus Notes database technology are evaluated with regard to their suitability for providing computer-based support in plant breeding in general, and specifically in the production of doubled haploids. The two developed databases are compared against a set of requirements produced together with the DH-group, the main users of the databases. The results indicate that both Lotus Notes and the relational database are able to fulfil all needs documented in this project, although both systems have their limitations. An often expressed opinion is that it is difficult to combine biology and databases. The experience gained in this project, however, suggests that this need not be the case when the data is less complicated than often assumed. Observations made during this project indicate that data warehousing with integrated data mining and OLAP tools is surprisingly similar to how the DH-group at Svalöf Weibull works, and could be a suitable solution for the production of doubled haploids.
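For illustration only, a fragment of what the relational variant might look like: two hypothetical tables tracking plant material through doubled-haploid production. The thesis's actual schema was driven by the DH-group's documented requirements and is not reproduced here.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE cross_record (
    cross_id    INTEGER PRIMARY KEY,
    mother_line TEXT,
    father_line TEXT
);
CREATE TABLE dh_line (
    line_id  INTEGER PRIMARY KEY,
    cross_id INTEGER REFERENCES cross_record(cross_id),
    stage    TEXT  -- e.g. 'anther culture', 'chromosome doubling', 'field trial'
);
""")
db.execute("INSERT INTO cross_record VALUES (1, 'SW-101', 'SW-202')")
db.execute("INSERT INTO dh_line VALUES (1, 1, 'anther culture')")
```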
|
8 |
Comparison of Oracle and MySQL with a focus on use in lab exercises for university education
Åsberg, Mikael January 2008 (links)
The purpose of the work described in this report was to investigate whether the Oracle-based lab environment used at ADIT could be migrated to MySQL. Oracle is a complex system that is demanding to administer, something ADIT had handled with its own staff and its own hardware, which was not ideal. Combined with strong interest among students in using MySQL in ADIT's lab exercises, it was decided to investigate whether MySQL was now mature enough to take on the role that Oracle had previously filled. On this basis, the report covers what needed to be done with the existing lab material. It also includes an introduction to the relational model and SQL, as well as explanations of the feature differences between Oracle and MySQL that mattered for the exercises. The migration turned out to be straightforward, and the report ends with a summary of our experiences.
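One representative example of the kind of feature difference such a migration has to handle, chosen here purely for illustration (the report's actual list of differences is not reproduced): surrogate key generation, where classic Oracle uses sequence objects while MySQL uses AUTO_INCREMENT columns, so lab material involving key generation must be rewritten when porting.

```python
# Two SQL dialect variants of the same lab exercise step, kept as strings
# for side-by-side comparison; table and sequence names are hypothetical.
ORACLE_STYLE = """
CREATE SEQUENCE student_seq;
CREATE TABLE student (id NUMBER PRIMARY KEY, name VARCHAR2(50));
INSERT INTO student VALUES (student_seq.NEXTVAL, 'Ada');
"""

MYSQL_STYLE = """
CREATE TABLE student (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(50));
INSERT INTO student (name) VALUES ('Ada');
"""
```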
|