61
Erschließung domänenübergreifender Informationsräume mit Multimodellen / Access of cross-domain information spaces using multi-models. Fuchs, Sebastian. 23 October 2015.
With the transition from building-oriented to process-oriented work, the cross-domain provision of information gains growing importance - for example in the creation of controlling parameters, the preparation of simulations, or the consideration of new aspects such as energy efficiency. However, current data formats and access methods cannot cope with this challenge satisfactorily. Therefore, a method is required that fully enables interdisciplinary construction information processes, while existing communication processes and domain applications are retained and can continue to be used.
With the multi-model method, an approach to the structural problems of such interdisciplinary construction information processes is presented. Multi-models bundle heterogeneous domain models from different disciplines and allow their elements to be connected in external, ID-based link models. As the domain models remain untouched, a loose and temporary coupling becomes possible. By not using a leading or integrating data schema, no transformation processes are required, commonly established data formats can be retained, and the linked domain models can be exchanged neutrally.
The data linked in multi-models offers added informational value over standalone domain models. Related information can be evaluated automatically via the persistent links, instead of having to be reassigned again and again, transiently, by a human. A multi-model thus appears to a user as a single, self-contained information space.
In order to comfortably create and filter such cross-model, cross-format, and cross-domain information spaces, the declarative multi-model query language MMQL is introduced. It allows for generic access to the original data and captures the core concepts of multi-model access - n-ary link generation and structural link semantics. An associated interpreter determines the evaluation strategy for concrete statements and executes it on the real data.
The implementation and deployment of the concepts as IT components at various levels - from the data structure through libraries and services up to the standalone, universal multi-model software M2A2 - allows immediate and direct application of the multi-model method in practice.
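The external, ID-based link model at the heart of this approach can be pictured as a plain data structure over element IDs. The following Python sketch is a minimal illustration under assumed names (LinkModel, the model and element identifiers); the actual multi-model container and MMQL semantics are defined in the thesis itself.

```python
# Illustrative sketch only: a minimal ID-based link model in the spirit of the
# multi-model method described above. All names are hypothetical; they are not
# the thesis's actual data structures.
from dataclasses import dataclass, field

@dataclass
class LinkModel:
    """External link model: n-ary links over element IDs of untouched domain models."""
    # Each link is a tuple of (model_name, element_id) pairs; the domain models
    # themselves (e.g. a building model, a bill of quantities, a schedule)
    # are never modified.
    links: list = field(default_factory=list)

    def link(self, *refs: tuple) -> None:
        self.links.append(refs)

    def related(self, model: str, element_id: str):
        """Follow persistent links from one element into all other models."""
        for l in self.links:
            if (model, element_id) in l:
                yield from (r for r in l if r != (model, element_id))

# Usage: couple a building element with a cost item and a schedule task.
mm = LinkModel()
mm.link(("ifc", "wall-017"), ("boq", "pos-4.2"), ("schedule", "task-12"))
print(list(mm.related("ifc", "wall-017")))
# [('boq', 'pos-4.2'), ('schedule', 'task-12')]
```

Because the links live outside the domain models, the coupling stays loose and temporary: deleting the link model leaves every domain model intact and exchangeable.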
62
RSQL - a query language for dynamic data types. Jäkel, Tobias; Kühn, Thomas; Voigt, Hannes; Lehner, Wolfgang. 09 June 2021.
Database Management Systems (DBMS) are used by software applications to store, manipulate, and retrieve large sets of data. However, the requirements of current software systems pose various challenges to established DBMS. First, most software systems organize their data by means of objects rather than relations, leading to increased maintenance, redundancy, and transformation overhead when persisting objects to relational databases. Second, complex objects are separated into several objects, resulting in Object Schizophrenia and hard-to-persist Distributed State. Last but not least, current software systems have to cope with increased complexity and frequent changes. These challenges have led to a general paradigm shift in the development of software systems. Unfortunately, classical DBMS will become intractable if they are not adapted to the new requirements imposed by these software systems. As a result, we propose an extension of DBMS with roles to represent complex objects within a relational database and to support the flexibility required by current software systems. To achieve this goal, we introduce RSQL, an extension to SQL with the concept of objects playing roles when interacting with other objects. Additionally, we present a formal model for the logical representation of roles in the extended DBMS.
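To picture the Object Schizophrenia problem named above, the following Python sketch (hypothetical table and function names, not code from the paper) shows how one conceptual entity ends up persisted as several disconnected records whose consistency must be re-stitched by the application:

```python
# Illustrative sketch of Object Schizophrenia: one conceptual entity is
# persisted as several unrelated records, so its identity and state are
# distributed and must be reassembled in application code.
person_rows   = [{"id": 1, "name": "Alice"}]
employee_rows = [{"person_id": 1, "salary": 50_000}]   # role 'Employee'
customer_rows = [{"person_id": 1, "discount": 0.05}]   # role 'Customer'

# The database sees three unrelated tuples; the fact that they form ONE
# object playing two roles is application knowledge, so every consistency
# rule spanning core and roles must be enforced outside the DBMS.
def load_person(pid: int) -> dict:
    core = next(r for r in person_rows if r["id"] == pid)
    roles = {
        "Employee": [r for r in employee_rows if r["person_id"] == pid],
        "Customer": [r for r in customer_rows if r["person_id"] == pid],
    }
    return {**core, "roles": roles}

print(load_person(1))
```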
63
Informační systém pro podporu řízení skladu, obchodu a marketingu / Information System for Management of Store and Support of Business and Marketing Operations. Ferencz, Erik. January 2007.
This term project covers the analysis and design of an information system for administering and managing a business firm. The system is designed as a modular system with an unlimited number of modules that cooperate or are connected with other modules. Each module has its own data tables in the database, its own classes forming the middle layer of the application, and its own graphical interface, but the modules are not independent (a single module cannot work as the whole system). Internal communication among the modules is based on the database server. Part of the application is its own database. To accomplish this project I had to familiarize myself with programming in C# and with the PostgreSQL database system.
64
Derby/S: A DBMS for Sample-Based Query Answering. Klein, Anja; Gemulla, Rainer; Rösch, Philipp; Lehner, Wolfgang. 10 November 2022.
Although approximate query processing is a prominent way to cope with the requirements of data analysis applications, current database systems do not provide integrated and comprehensive support for these techniques. To improve this situation, we propose an SQL extension---called SQL/S---for approximate query answering using random samples, and present a prototypical implementation within the engine of the open-source database system Derby---called Derby/S. Our approach significantly reduces the required expert knowledge by enabling the definition of samples in a declarative way; the choice of the specific sampling scheme and its parametrization is left to the system. SQL/S introduces new DDL commands to easily define and administrate random samples subject to a given set of optimization criteria. Derby/S automatically takes care of sample maintenance if the underlying dataset changes. Finally, samples are transparently used during query processing, and error bounds are provided. Our extensions do not affect traditional queries and provide the means to integrate sampling as a first-class citizen into a DBMS.
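The statistical core of sample-based answering can be sketched independently of Derby/S. The following Python snippet is an illustrative assumption (uniform sampling, a CLT-based error bound), not SQL/S syntax or Derby/S code:

```python
# Minimal sketch of approximate aggregation on a uniform random sample.
# Hypothetical example; Derby/S's actual sampling schemes, maintenance,
# and error bounds are described in the paper.
import math
import random

table = [random.gauss(100.0, 20.0) for _ in range(1_000_000)]  # stand-in data

n = 10_000
sample = random.sample(table, n)   # uniform sample over the base table
N = len(table)

mean = sum(sample) / n
var  = sum((x - mean) ** 2 for x in sample) / (n - 1)

estimate = N * mean                        # scaled SUM estimate
stderr   = N * math.sqrt(var / n)          # standard error of the estimate
lo, hi   = estimate - 1.96 * stderr, estimate + 1.96 * stderr  # ~95% bound

print(f"SUM ~ {estimate:.0f}  (95% CI: {lo:.0f} .. {hi:.0f})")
```

The query touches only n of N rows, which is where the speedup of approximate query processing comes from; the system's job, as the abstract describes, is to pick and maintain such samples automatically.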
65
Managed Query Processing within the SAP HANA Database Platform. May, Norman; Böhm, Alexander; Block, Meinolf; Lehner, Wolfgang. 03 February 2023.
The SAP HANA database extends the scope of traditional database engines as it supports data models beyond regular tables, e.g. text, graphs, or hierarchies. Moreover, SAP HANA also provides developers with more fine-grained control to define their database application logic, e.g. by exposing specific operators which are difficult to express in SQL. Finally, the SAP HANA database implements efficient communication to dedicated client applications using more effective communication mechanisms than are available with standard interfaces like JDBC or ODBC. These features of the HANA database are complemented by the extended scripting engine - an application server for server-side JavaScript applications - that is tightly integrated into query processing and application lifecycle management. As a result, the HANA platform offers more concise models and code for working with the platform and provides superior runtime performance. This paper describes how these specific capabilities of the HANA platform can be consumed and gives a holistic overview of the platform, from query modeling through deployment to efficient execution. As a distinctive feature, the HANA platform integrates most steps of the application lifecycle and thus makes sure that all relevant artifacts stay consistent whenever they are modified. The HANA platform also covers transport facilities to deploy and undeploy applications in a complex system landscape.
66
DataCalc: Ad-hoc Analyses on Heterogeneous Data Sources. Luong, Johannes; Habich, Dirk; Lehner, Wolfgang. 19 July 2023.
Storing and processing data at different locations using a heterogeneous set of formats and data management systems is the state of the art in many organizations. However, data analyses can often provide better insight when data from several sources is integrated into a combined perspective. In this paper we present an overview of our data integration system DataCalc. DataCalc is an extensible integration platform that executes ad-hoc analytical queries on a set of heterogeneous data processors. Our novel platform uses an expressive function shipping interface that promotes local computation and reduces data movement between processors. We provide a discussion of the overall architecture and the main components of DataCalc. Moreover, we discuss the cost of integrating additional processors and evaluate the overall performance of the platform.
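The function-shipping idea can be illustrated with a small sketch. The Processor class and ship method below are invented names, not DataCalc's actual interface; they only show how shipping the computation keeps raw rows local:

```python
# Illustrative function-shipping interface (all names are assumptions).
# Instead of pulling raw data to a central node, the computation is shipped
# to the processor that manages the data locally.
from typing import Any, Callable, Iterable

class Processor:
    """A data processor owning one local data source."""
    def __init__(self, rows: Iterable[dict]):
        self._rows = list(rows)

    def ship(self, fn: Callable[[Iterable[dict]], Any]) -> Any:
        # The function executes *at* the processor; only its (small)
        # result is moved, not the underlying rows.
        return fn(self._rows)

orders = Processor([{"item": "bolt", "qty": 500}, {"item": "nut", "qty": 800}])
stock  = Processor([{"item": "bolt", "on_hand": 120}])

total_qty = orders.ship(lambda rows: sum(r["qty"] for r in rows))
low_stock = stock.ship(lambda rows: [r["item"] for r in rows if r["on_hand"] < 200])
print(total_qty, low_stock)  # 1300 ['bolt']
```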
67
A Technical Perspective of DataCalc: Ad-hoc Analyses on Heterogeneous Data Sources. Luong, Johannes; Habich, Dirk; Lehner, Wolfgang. 19 July 2023.
Many organizations store and process data at different locations using a heterogeneous set of formats and data management systems. However, data analyses can often provide better insight when data from several sources is integrated into a combined perspective. DataCalc is an extensible data integration platform that executes ad-hoc analytical queries on a set of heterogeneous data processors. The platform uses an expressive function shipping interface that promotes local computation and reduces data movement between processors. In this paper, we provide a detailed discussion of the architecture and implementation of DataCalc. We introduce data processors for plain files, JDBC, the MongoDB document store, and a custom in-memory system. Finally, we discuss the cost of integrating additional processors and evaluate the overall performance of the platform. Our main contribution is the specification and evaluation of the DataCalc code delegation interface.
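Complementing the sketch for the previous entry, here is a hedged illustration of how heterogeneous backends might hide behind one delegation interface; the class names below are assumptions, not DataCalc's real processor API:

```python
# Illustrative processor abstraction over heterogeneous backends.
# Hypothetical sketch; DataCalc's real processors (plain files, JDBC,
# MongoDB, in-memory) implement a richer code delegation interface.
import csv
from abc import ABC, abstractmethod

class DataProcessor(ABC):
    @abstractmethod
    def execute(self, predicate) -> list:
        """Run a delegated filter locally and return matching records."""

class CsvProcessor(DataProcessor):
    """Backend: a plain CSV file, filtered where it is stored."""
    def __init__(self, path: str):
        self.path = path
    def execute(self, predicate):
        with open(self.path, newline="") as f:
            return [row for row in csv.DictReader(f) if predicate(row)]

class MemoryProcessor(DataProcessor):
    """Backend: an in-memory collection of records."""
    def __init__(self, rows: list):
        self.rows = rows
    def execute(self, predicate):
        return [r for r in self.rows if predicate(r)]

# A combined perspective over several sources, each filtered locally.
def union(processors: list, predicate) -> list:
    out = []
    for p in processors:
        out.extend(p.execute(predicate))
    return out
```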
68
Implementation of data flow query language on a handheld device. Evangelista, Mark A. 03 1900.
Approved for public release; distribution is unlimited / Handheld devices have evolved significantly from simple organizers to more powerful handheld computers that are capable of network connectivity, giving them the ability to send e-mail, browse the World Wide Web, and query remote databases. However, handheld devices, because of their design philosophy, are limited in terms of size, memory, and processing power compared to desktop computers. This thesis investigates the use of Data Flow Query Language (DFQL) in querying local and remote databases from a handheld device. Creating Structured Query Language (SQL) queries can be a complex undertaking, and trying to create one on a handheld device with a small screen only adds to the complexity. However, by using DFQL, the user can submit queries through an easy-to-use graphical user interface. Although handheld devices are currently more powerful than earlier PCs, they still require applications with a small footprint, which is a limiting factor for the software developed. This thesis also investigates the best division of labor between the handheld device and remote servers. / Sergeant, United States Army
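To illustrate the data-flow style of query construction that DFQL enables, here is a toy Python sketch with invented operator names (the thesis defines the actual DFQL operator set); a GUI would assemble such a graph by connecting boxes instead of requiring typed SQL:

```python
# Illustrative data-flow query graph (hypothetical mini-DFQL, not the
# thesis's actual operator set). Each node is an operator; edges carry
# intermediate tables, and evaluation walks the graph.
class Relation:
    def __init__(self, rows): self.rows = rows
    def run(self): return self.rows

class Select:
    def __init__(self, src, pred): self.src, self.pred = src, pred
    def run(self): return [r for r in self.src.run() if self.pred(r)]

class Project:
    def __init__(self, src, cols): self.src, self.cols = src, cols
    def run(self): return [{c: r[c] for c in self.cols} for r in self.src.run()]

# Graph a GUI might build through drag-and-drop instead of typed SQL:
people = Relation([{"name": "Ada", "rank": "SGT"}, {"name": "Bob", "rank": "PVT"}])
query = Project(Select(people, lambda r: r["rank"] == "SGT"), ["name"])
print(query.run())  # [{'name': 'Ada'}]
```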
69
Role-based Data Management. Jäkel, Tobias. 29 May 2017.
Database systems form an integral component of today's software systems; as such, they are the central point for storing and sharing a software system's data while ensuring global data consistency at the same time. Introducing the primitives of roles and their accompanying metatype distinction into modeling and programming languages results in a novel paradigm for designing, extending, and programming modern software systems. In detail, roles as a modeling concept enable a separation of concerns within an entity. Along with its rigid core, an entity may acquire various roles in different contexts during its lifetime and thus adapts its behavior and structure dynamically at runtime.
Unfortunately, database systems, as an important component and the global consistency provider of such systems, do not keep pace with this trend. The absence of a metatype distinction, in terms of an entity's separation of concerns, in the database system results in various problems for the software system in general, for the application developers, and finally for the database system itself. In the case of relational database systems, these problems are summarized under the term role-relational impedance mismatch. In particular, the whole software system is designed using different semantics on its various layers. For role-based software systems in combination with relational database systems, this gap in semantics between the applications and the database system increases dramatically. Consequently, the database system cannot directly represent the richer semantics of roles or the accompanying consistency constraints. These constraints have to be ensured by the applications, and the database system loses its single-point-of-truth characteristic in the software system. As the applications are in charge of guaranteeing global consistency, their development requires more effort in data management. Moreover, the software system's data management is distributed over several layers, which results in an unstructured software system architecture.
To overcome the role-relational impedance mismatch and bring the database system back into its rightful position as the single point of truth in a software system, this thesis introduces the novel, tripartite RSQL approach. It combines a novel database model that represents the metatype distinction as a first-class citizen in the database system, a query language adapted to this database model, and a proper result representation. Precisely, RSQL's logical database model introduces Dynamic Data Types to directly represent the separation of concerns within an entity type at the schema level. At the instance level, the database model defines the notion of a Dynamic Tuple, which combines an entity with the notion of roles and thus allows for dynamic structure adaptations at runtime without changing an entity's overall type.
These definitions form the main data structures on which the database system operates. Moreover, formal operators that connect the query language statements with the database model's data structures complete the database model. The query language, as the external database system interface, features an individual data definition, data manipulation, and data query language. Their statements directly represent the metatype distinction to address Dynamic Data Types and Dynamic Tuples, respectively. As a consequence of the novel data structures, the query processing of Dynamic Tuples is completely redesigned. As the last piece of a complete database integration of the role-based notion and its accompanying metatype distinction, we specify the RSQL Result Net as the result representation. It provides a novel result structure and features functionalities to navigate through query results. Finally, we evaluate all three RSQL components in comparison to a relational database system. This assessment clearly demonstrates the benefits of fully integrating the role concept into the database system.
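The notion of a Dynamic Tuple can be pictured with a short sketch. The following Python code is an illustrative assumption about the concept, not RSQL's internal implementation:

```python
# Illustrative Dynamic Tuple: a rigid core plus roles acquired at runtime.
# Hypothetical modeling of the thesis's concepts, not RSQL's data structures.
class DynamicTuple:
    def __init__(self, natural_type: str, core: dict):
        self.natural_type = natural_type   # the entity's rigid core type
        self.core = core
        self.roles = {}                    # role name -> role state

    def acquire(self, role: str, state: dict) -> None:
        """Adapt structure at runtime without changing the overall type."""
        self.roles[role] = state

    def abandon(self, role: str) -> None:
        self.roles.pop(role, None)

p = DynamicTuple("Person", {"name": "Alice"})
p.acquire("Employee", {"salary": 50_000})   # context: employment
p.acquire("Customer", {"discount": 0.05})   # context: a shop
p.abandon("Customer")                        # context ends; identity stays
print(p.natural_type, p.core, sorted(p.roles))
# Person {'name': 'Alice'} ['Employee']
```

Contrast this with the Object Schizophrenia sketch under entry 62: here core and roles stay attached to one entity, which is exactly the consistency guarantee the thesis moves into the database system.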
70
Software Architecture Recovery based on Pattern Matching. Sartipi, Kamran. January 2003.
Pattern matching approaches in reverse engineering aim to incorporate domain knowledge and system documentation into the software architecture extraction process, and hence to provide a user/tool collaborative environment for architectural design recovery. This thesis presents a model and an environment for recovering the high-level design of legacy software systems based on user-defined architectural patterns and graph matching techniques.
In the proposed model, a high-level view of a software system in terms of the system components and their interactions is represented as a query, using a description language. A query is mapped onto a pattern-graph, where a module and its interactions with other modules are represented as a group of graph nodes and a group of graph edges, respectively. Interaction constraints can be modeled by the description language as part of the query. Such a pattern-graph is applied against an entity-relation graph that represents the information extracted from the source code of the software system. An approximate graph matching process performs a series of graph edit operations (i.e., node/edge insertion/deletion) on the pattern-graph and uses a ranking mechanism based on data mining associations to obtain a sub-optimal solution. The obtained solution corresponds to an extracted architecture that complies with the given query.
An interactive prototype toolkit implemented as part of this thesis provides an environment for architecture recovery at two levels. First, the system is decomposed into a number of subsystems of files. Second, each subsystem can be decomposed into a number of modules of functions, datatypes, and variables.
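A toy version of the approximate matching step may help picture the approach; the greedy exhaustive scoring below stands in for the thesis's graph edit operations and data-mining-based ranking, and all names are invented:

```python
# Toy approximate pattern-graph matching (illustrative only; the thesis uses
# graph edit operations with a data-mining-based ranking, not this search).
from itertools import permutations

def score(assignment, pattern_edges, source_edges):
    """Count pattern edges preserved under a node assignment."""
    return sum((assignment[a], assignment[b]) in source_edges
               for a, b in pattern_edges)

def best_match(pattern_nodes, pattern_edges, source_nodes, source_edges):
    best, best_s = None, -1
    for perm in permutations(source_nodes, len(pattern_nodes)):
        assignment = dict(zip(pattern_nodes, perm))
        s = score(assignment, pattern_edges, source_edges)
        if s > best_s:
            best, best_s = assignment, s
    return best, best_s

# Pattern: two modules with M1 -> M2; source: entities extracted from code.
pattern = (["M1", "M2"], {("M1", "M2")})
source  = (["parser", "ast", "codegen"],
           {("parser", "ast"), ("codegen", "ast")})
print(best_match(*pattern, *source))  # ({'M1': 'parser', 'M2': 'ast'}, 1)
```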