1 |
Flexible Integration of Molecular-Biological Annotation Data: The GenMapper Approach. Do, Hong-Hai; Rahm, Erhard. 12 December 2018.
Molecular-biological annotation data is continuously being collected, curated and made accessible in numerous public data sources. Integration of this data is a major challenge in bioinformatics. We present the GenMapper system that physically integrates heterogeneous annotation data in a flexible way and supports large-scale analysis on the integrated data. It uses a generic data model to uniformly represent different kinds of annotations originating from different data sources. Existing associations between objects, which represent valuable biological knowledge, are explicitly utilized to drive data integration and combine annotation knowledge from different sources. To serve specific analysis needs, powerful operators are provided to derive tailored annotation views from the generic data representation. GenMapper is operational and has been successfully used for large-scale functional profiling of genes.
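The abstract does not give GenMapper's actual schema, so the following is only a minimal illustrative sketch (all source names, identifiers, and the helper function are invented for the example) of the core idea: annotations from different sources are kept as uniform objects plus explicit associations, and new annotation views are derived by composing existing associations.

```python
from dataclasses import dataclass

# Hypothetical generic model: every source contributes objects, and cross-source
# knowledge is kept as explicit object-to-object associations.
@dataclass(frozen=True)
class Obj:
    source: str      # e.g. "LocusLink", "GeneOntology"
    accession: str   # source-specific identifier

def compose(assoc_ab, assoc_bc):
    """Derive new associations A->C by joining existing A->B and B->C associations."""
    return {(a, c) for (a, b1) in assoc_ab for (b2, c) in assoc_bc if b1 == b2}

# Toy data: a gene linked to a GO term through an intermediate protein entry.
gene_to_protein = {(Obj("LocusLink", "348"), Obj("UniProt", "P02649"))}
protein_to_go   = {(Obj("UniProt", "P02649"), Obj("GeneOntology", "GO:0006869"))}

# A tailored "annotation view": direct gene-to-GO annotations derived from stored associations.
gene_to_go = compose(gene_to_protein, protein_to_go)
print(gene_to_go)
```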
|
2 |
Analysing, Designing, and Evaluating Database Schema Designs in Azure Data Explorer / Analys, design och utvärdering av databasscheman i Azure Data Explorer. Petersson, Linn; Ferlin, Angelica. January 2024.
Today, data warehouses are used to store large amounts of data. This thesis investigates the impact of various database schema designs on query execution time within the cloud platform Azure Data Explorer. As Azure Data Explorer is a relatively new platform, limited research exists on designing database schemas within it. Further, the design of the database schema has a direct impact on query execution times, and the design should also align with the use case of the data warehouse. This thesis conducts a requirements analysis, determines the use case, and designs three database schemas. The three database schemas are implemented and evaluated through a performance test. Schema 1 is designed to utilize results tables from stored functions, while schema 2 utilizes sub-functions divided by different departments or products to minimize the data accessed per query. Finally, schema 3 uses the results tables from the sub-functions found in schema 2. The results from the performance tests show that schema 3 has the best overall improvement in query execution time compared to the other designs and the original design. The findings emphasize the critical role of database schema design in influencing query performance. Additionally, the results indicate that combining more than one approach to enhance query performance yields further gains.
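As a hedged illustration only (the cluster URL, database, table, and function names below are invented and not taken from the thesis), the schema variants can be compared by timing the same aggregation against a precomputed results table (schemas 1 and 3) versus a stored sub-function that filters at query time (schema 2), for example with the azure-kusto-data Python client:

```python
import time
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Hypothetical cluster and database; authentication reuses an existing `az login` session.
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://mycluster.westeurope.kusto.windows.net")
client = KustoClient(kcsb)

def timed(query: str) -> float:
    """Run a KQL query and return its wall-clock execution time in seconds."""
    start = time.perf_counter()
    client.execute("SalesDB", query)
    return time.perf_counter() - start

# Schema 1/3 style: read from a materialized results table of a stored function.
t_table = timed("DeptA_Results | summarize avg(Amount) by Product")
# Schema 2 style: call a stored sub-function scoped to one department.
t_func = timed("GetDeptSales('DeptA') | summarize avg(Amount) by Product")
print(f"results table: {t_table:.3f} s, sub-function: {t_func:.3f} s")
```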
|
3 |
Integracija šema modula baze podataka informacionog sistema / Integration of Information System Database Module Schemas. Luković, Ivan. 18 January 1996.
Paralelan i nezavisan rad više projektanata na različitim modulima (podsistemima) nekog informacionog sistema, identifikovanim saglasno početnoj funkcionalnoj dekompoziciji realnog sistema, nužno dovodi do međusobno nekonzistentnih rešenja šema modula baze podataka. Rad se bavi pitanjima identifikacije i razrešavanja problema, vezanih za automatsko otkrivanje kolizija, koje nastaju pri paralelnom projektovanju različitih šema modula i problema vezanih za integraciju šema modula u jedinstvenu šemu baze podataka informacionog sistema. Identifikovani su mogući tipovi kolizija šema modula, formulisan je i dokazan potreban i dovoljan uslov stroge i intenzionalne kompatibilnosti šema modula, što je omogućilo da se, u formi algoritama, prikažu postupci za ispitivanje stroge i intenzionalne kompatibilnosti šema modula. Formalizovan je i postupak integracije kompatibilnih šema u jedinstvenu (strogo pokrivajuću) šemu baze podataka. Dat je, takođe, prikaz metodologije primene algoritama za testiranje kompatibilnosti i integraciju šema modula u jedinstvenu šemu baze podataka informacionog sistema. / Parallel and independent work of a number of designers on different information system modules (i.e. subsystems), identified by the initial functional decomposition of the real system, necessarily leads to mutually inconsistent database (db) module schemas. The thesis considers the problems of automatically detecting the collisions that can appear during the simultaneous design of different db module schemas, and of integrating db module schemas into a single information system db schema. All possible types of db module schema collisions have been identified, and a necessary and sufficient condition for strong and intensional db module schema compatibility has been formulated and proved. This made it possible to formalize the process of checking strong and intensional db module schema compatibility and to construct the appropriate algorithms. The process of integrating compatible db module schemas into a single (strongly covering) db schema is formalized as well. A methodology for applying the algorithms for compatibility checking and unified db schema integration is also presented.
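The thesis states its compatibility conditions formally; purely as an illustrative sketch (not the thesis's actual criteria or integration algorithm, and with all relation and attribute names invented), one basic check before integration is that relations and attributes shared by two module schemas agree on their definitions, after which the compatible schemas can be merged:

```python
# Illustrative only: a module schema modeled as {relation: {attribute: domain}}.
orders_module = {"CUSTOMER": {"CustID": "int", "Name": "varchar(50)"},
                 "ORDER":    {"OrdID": "int", "CustID": "int"}}
billing_module = {"CUSTOMER": {"CustID": "int", "Name": "varchar(50)"},
                  "INVOICE":  {"InvID": "int", "CustID": "int"}}

def collisions(s1, s2):
    """Report attributes defined differently in relations common to both module schemas."""
    problems = []
    for rel in s1.keys() & s2.keys():
        for attr in s1[rel].keys() & s2[rel].keys():
            if s1[rel][attr] != s2[rel][attr]:
                problems.append((rel, attr, s1[rel][attr], s2[rel][attr]))
    return problems

def integrate(s1, s2):
    """Merge two collision-free module schemas into one database schema."""
    merged = {rel: dict(attrs) for rel, attrs in s1.items()}
    for rel, attrs in s2.items():
        merged.setdefault(rel, {}).update(attrs)
    return merged

assert not collisions(orders_module, billing_module)
print(integrate(orders_module, billing_module))
```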
|
4 |
Um novo processo para refatoração de bancos de dados / A new process to database refactoring. Domingues, Márcia Beatriz Pereira. 15 May 2014.
O projeto e manutenção de bancos de dados é um importante desafio, tendo em vista as frequentes mudanças de requisitos solicitados pelos usuários. Para acompanhar essas mudanças o esquema do banco de dados deve passar por alterações estruturais que muitas vezes prejudicam o desempenho e o projeto das consultas, tais como: relacionamentos desnecessários, chaves primárias ou estrangeiras criadas fortemente acopladas ao domínio, atributos obsoletos e tipos de atributos inadequados. A literatura sobre Métodos Ágeis para desenvolvimento de software propõe o uso de refatorações para evolução do esquema do banco de dados quando há mudanças de requisitos. Uma refatoração é uma alteração simples que melhora o design, mas não altera a semântica do modelo de dados, nem adiciona novas funcionalidades. Esta Tese apresenta um novo processo para aplicar refatorações ao esquema do banco de dados. Este processo é definido por um conjunto de tarefas com o objetivo de executar as refatorações de uma forma controlada e segura, permitindo saber o impacto no desempenho do banco de dados para cada refatoração executada. A notação BPMN foi utilizada para representar e executar as tarefas do processo. Como estudo de caso foi utilizado um banco de dados relacional, o qual é usado por um sistema de informação para agricultura de precisão. Esse sistema, baseado na Web, necessita fazer grandes consultas para plotagem de gráficos com informações georreferenciadas. / The design and maintenance of databases is an important challenge, given the frequent requirement changes requested by users. To keep up with these changes, the database schema must undergo structural modifications that often harm performance and query design, such as unnecessary relationships, primary or foreign keys created tightly coupled to the domain, obsolete attributes, and inadequate attribute types. The literature on Agile Methods for software development proposes the use of refactorings to evolve the database schema when requirements change. A refactoring is a simple change that improves the design but neither alters the semantics of the data model nor adds new functionality. This thesis presents a new process for applying refactorings to the database schema. The process is defined by a set of refactoring tasks executed in a controlled, safe, and automated way, with the aim of improving the schema design and letting the DBA know the impact on database performance of each refactoring performed. The BPMN notation was used to represent and execute the tasks of the process. As a case study, a relational database used by an information system for precision agriculture was employed. This web-based system needs to run large queries to plot charts with georeferenced information.
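The thesis models the process in BPMN against a production relational database; the sketch below (SQLite and invented table, column, and index names, purely illustrative and not the thesis's implementation) shows only the core idea of one controlled refactoring task: measure a query, apply the schema change, and measure again to record its performance impact.

```python
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reading (id INTEGER PRIMARY KEY, field_id INTEGER, value REAL)")
conn.executemany("INSERT INTO reading (field_id, value) VALUES (?, ?)",
                 [(i % 100, i * 0.5) for i in range(50_000)])

QUERY = "SELECT field_id, AVG(value) FROM reading GROUP BY field_id"

def measure(sql: str, runs: int = 5) -> float:
    """Average execution time of a query, taken before and after a refactoring."""
    start = time.perf_counter()
    for _ in range(runs):
        conn.execute(sql).fetchall()
    return (time.perf_counter() - start) / runs

before = measure(QUERY)
# Refactoring step: introduce an index (a structural change that keeps data semantics intact).
conn.execute("CREATE INDEX idx_reading_field ON reading (field_id)")
after = measure(QUERY)
print(f"before: {before:.4f} s, after: {after:.4f} s")
```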
|
5 |
A semantic data model for intellectual database access. Watanabe, Toyohide; Uehara, Yuusuke; Yoshida, Yuuji; Fukumura, Teruo. 03 1900.
No description available.
|
6 |
Editor relačních tabulek / Editor of Relational Tables. Macák, Martin. January 2008.
This thesis deals with using a common, universal relational table editor as a simple information system that is fully independent of the underlying database system and partially independent of the underlying database schema. One part of the thesis explores the potential of using such a universal information system as a framework for fast and easy development of small and medium-sized information systems. The practical part of the thesis is an application that implements the basics of a simple relational table editor, is fully independent of the underlying database provider and schema, and serves as a demonstrative table editor.
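Schema independence for such an editor typically comes from reading the table structure out of the database's own metadata instead of hard-coding it. The sketch below (SQLite only, with invented table and column names, illustrative rather than the thesis's implementation) builds a generic column list and a generic insert for any table at runtime.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT, year INTEGER)")

def describe_table(conn: sqlite3.Connection, table: str):
    """Discover a table's columns at runtime, so the editor needs no fixed schema."""
    return [(row[1], row[2]) for row in conn.execute(f"PRAGMA table_info({table})")]

def insert_row(conn: sqlite3.Connection, table: str, values: dict):
    """Generic insert built from the edited values, independent of the concrete schema."""
    cols = ", ".join(values)
    marks = ", ".join("?" for _ in values)
    conn.execute(f"INSERT INTO {table} ({cols}) VALUES ({marks})", list(values.values()))

print(describe_table(conn, "book"))          # [('id', 'INTEGER'), ('title', 'TEXT'), ('year', 'INTEGER')]
insert_row(conn, "book", {"title": "SQL", "year": 2008})
```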
|
7 |
Managed Query Processing within the SAP HANA Database Platform. May, Norman; Böhm, Alexander; Block, Meinolf; Lehner, Wolfgang. 03 February 2023.
The SAP HANA database extends the scope of traditional database engines as it supports data models beyond regular tables, e.g. text, graphs or hierarchies. Moreover, SAP HANA also gives developers more fine-grained control over their database application logic, e.g. by exposing specific operators which are difficult to express in SQL. Finally, the SAP HANA database implements efficient communication to dedicated client applications using more effective communication mechanisms than available with standard interfaces like JDBC or ODBC. These features of the HANA database are complemented by the extended scripting engine, an application server for server-side JavaScript applications that is tightly integrated into query processing and application lifecycle management. As a result, the HANA platform offers more concise models and code for working with the HANA platform and provides superior runtime performance. This paper describes how these specific capabilities of the HANA platform can be consumed and gives a holistic overview of the HANA platform, from query modeling to deployment and efficient execution. As a distinctive feature, the HANA platform integrates most steps of the application lifecycle, and thus makes sure that all relevant artifacts stay consistent whenever they are modified. The HANA platform also covers transport facilities to deploy and undeploy applications in a complex system landscape.
|
8 |
Data Perspectives of Workflow Schema Evolution: Cases of Task Deletion and Insertion. Arunagiri, Aravindhan. January 2013.
Dynamic changes in the business environment require business processes to be kept up to date, and the workflow management systems supporting these processes need to adapt to such changes rapidly. Workflow management systems, however, lack the ability to dynamically propagate process changes to their process model schemas (workflow templates). The literature on workflow schema evolution emphasizes the impact of changes in control flow, with very little attention to other aspects of a workflow schema. This thesis studies the data aspect (data flow and data model) of a workflow schema during its evolution.
Workflow schema changes can lead to inconsistencies between the underlying database model and the workflow. A rather straightforward approach to the problem would be to abandon the existing database model and start afresh. However, this introduces data persistence issues, and there could be significant system downtime while migrating data from the old database model to the current one. In this research we develop an approach to address this problem. Business changes demand various types of control flow changes to the business process model (workflow schema), including task insertion, deletion, swapping, movement, replacement, extraction, in-lining, parallelization, etc. Many control flow changes can be made using a combination of simple task insertions and deletions, while some, such as embedding a task in a loop or conditional branch and parallelizing tasks, also require adding or removing control dependencies between tasks. Since many control flow change patterns involve task insertion and deletion at their core, in this thesis we study their impact on the underlying data model. We propose algorithms to dynamically handle the resulting changes in the underlying relational database schema.
First, we identify the basic change patterns that can be implemented using atomic task insertions and deletions. We then characterize these basic patterns in terms of the data flow anomalies (missing, redundant, or conflicting data) that they can generate. Data schema compliance (DSC) criteria are developed to identify the data changes that (i) make the underlying database schema inconsistent with the modified workflow and (ii) generate the aforementioned data anomalies. The DSC criteria characterize a change pattern in terms of its ability to work with the current relational data model and state the properties the modified workflow must satisfy to remain consistent with the underlying database model. The data of any workflow instance conforming to the DSC criteria can be accommodated directly in the database model.
The data anomalies of task insertion and deletion identified using the DSC criteria are handled dynamically by the corresponding data adaptation algorithms, which use the functional dependency constraints in the relational database model to adapt the data and resolve the anomalies. Data changes handled in this way conform to the DSC criteria and can be accommodated directly in the underlying database schema. With this approach, the workflow can be modified (using task insertion and deletion) and the resulting data changes implemented on the fly using the data adaptation algorithms. The existing data model is evolved rather than abandoned after the workflow schema is modified, which preserves the existing data in the database schema. Detailed implementation procedures for deploying the data adaptation algorithms are presented with illustrative examples.
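As a loose illustration of the idea only (not the thesis's formal DSC criteria or adaptation algorithms; the table, column, and task names below are invented), a compliance check for an inserted task can compare the data items the task writes against the columns available in the underlying relational schema, and an adaptation step can then extend the schema for the missing items:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claim (claim_id INTEGER PRIMARY KEY, amount REAL)")

def existing_columns(table: str) -> set:
    """Columns currently available in the relational schema."""
    return {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}

def check_task_insertion(table: str, task_outputs: set) -> set:
    """Return the task's output data items the current schema cannot store
    (a 'missing data' situation the adaptation step must resolve)."""
    return task_outputs - existing_columns(table)

def adapt_schema(table: str, missing: set):
    """Minimal adaptation: add a nullable column per missing item, keeping old rows valid."""
    for col in missing:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {col} TEXT")

# Inserting a hypothetical 'fraud check' task that writes a risk score into the claim data.
missing = check_task_insertion("claim", {"claim_id", "risk_score"})
adapt_schema("claim", missing)
print(existing_columns("claim"))   # now includes 'risk_score'
```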
|
9 |
Porovnání technologií pro objektově relační mapování / Comparison of Technologies for Object-Relational Mapping. Fatrdla, Pavel. January 2010.
This diploma thesis deals with contemporary object-relational mapping (ORM) technologies for Java. It also briefly describes competing technologies for persisting objects in files and in object and object-relational databases. The main part of the thesis, however, concerns persisting objects in relational databases using ORM frameworks. The work begins by studying the general methods and issues that these frameworks have to solve. It then selects several ORM frameworks, describes them in depth, and demonstrates them in a demo application. The following part describes the problems encountered while implementing persistence with these frameworks. Finally, the frameworks are evaluated and compared.
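The thesis evaluates Java ORM frameworks; purely to illustrate what object-relational mapping means in practice, here is an analogous minimal example using Python's SQLAlchemy (a swapped-in library, not one of the frameworks examined in the thesis): a plain class is declared, the ORM derives the table from the mapping, and persistence happens through objects rather than hand-written SQL.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Person(Base):
    """A plain class mapped to a relational table by the ORM."""
    __tablename__ = "person"
    id = Column(Integer, primary_key=True)
    name = Column(String(100))

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)               # the ORM generates the DDL from the mapping

Session = sessionmaker(bind=engine)
with Session() as session:
    session.add(Person(name="Ada"))            # work with objects, not SQL rows
    session.commit()
    print(session.query(Person).filter_by(name="Ada").one().id)
```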
|