About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. It is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.

Datenbanksysteme 2: Vorlesungsskript Sommersemester 1999

Rahm, Erhard. 15 November 2018.
1. Classes and application areas of DBS: requirements of novel DB applications (CAD etc.); limitations of the relational model; data models: semantic data models, object-oriented DBS, object-relational DBS, deductive DBS; application areas: information retrieval, multimedia DBS, GIS, decision support / data warehouses. 2. Basic concepts of object-oriented DBS: foundations and concepts; structural properties; object-oriented processing. 3. The ODMG standard: cooperation in heterogeneous environments (CORBA standard); object model; object definition language (ODL); query language (OQL). 4. Example implementations of OODBS: NF², GemStone, O2.

Temporale Datenmodelle und Metainformationssysteme für Geoinformationssysteme

Ramsch, Jan; Sosna, Dieter. 12 July 2019.
The Official Topographic-Cartographic Information System (ATKIS) is the project of the German state survey administrations to build a uniform digital data set covering the territory of the Federal Republic of Germany. To provide the incremental update information that is to be delivered to users as part of the data dissemination procedure, the ATKIS database has to be extended with information about newly created, modified, or deleted records. Beyond this, the IfAG has further requirements that are not recorded in [ATKIS] in this form, such as reconstructing the state of the ATKIS database at an earlier point in time and distinguishing, with respect to a given point in time, between correct and recorded data. These requirements have to be taken into account in the design of the database. After a discussion of principal solution approaches, the following sections present various approaches, models, and proposed solutions for temporal data models and version management and discuss them particularly with regard to an implementation in a relational DBMS and the specific requirements of ATKIS.
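The requirement to reconstruct an earlier database state is typically met with transaction-time versioning. The following is a minimal sketch of that general technique in a relational DBMS (the table and column names are invented for illustration and are not the actual ATKIS design):

```python
import sqlite3

# Transaction-time versioning: rows are never updated in place. Each change
# closes the old version (sets valid_to) and inserts a new current version.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE feature (
        feature_id INTEGER,
        name       TEXT,
        valid_from INTEGER,   -- transaction time when this version was created
        valid_to   INTEGER    -- transaction time when superseded (NULL = current)
    )
""")

def insert(fid, name, t):
    conn.execute("INSERT INTO feature VALUES (?, ?, ?, NULL)", (fid, name, t))

def update(fid, name, t):
    # Close the current version, then store the new one.
    conn.execute(
        "UPDATE feature SET valid_to = ? WHERE feature_id = ? AND valid_to IS NULL",
        (t, fid))
    insert(fid, name, t)

def state_at(t):
    # Reconstruct the database state as of transaction time t.
    rows = conn.execute(
        "SELECT feature_id, name FROM feature "
        "WHERE valid_from <= ? AND (valid_to IS NULL OR valid_to > ?)", (t, t))
    return dict(rows.fetchall())

insert(1, "road", 10)
update(1, "highway", 20)
print(state_at(15))  # {1: 'road'}
print(state_at(25))  # {1: 'highway'}
```

Because old versions are retained rather than overwritten, the same table also yields the incremental update information (created, modified, deleted records between two time points) by comparing `valid_from` and `valid_to` against the interval.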

Concept-Oriented Model and Nested Partially Ordered Sets

Savinov, Alexandr. 24 April 2014.
The concept-oriented model of data (COM) has recently been defined syntactically by means of the concept-oriented query language (COQL). In this paper we propose a formal embodiment of this model, called nested partially ordered sets (nested posets), and demonstrate how it is connected with its syntactic counterpart. A nested poset is a novel formal construct that can be viewed either as a nested set with a partial order relation established on its elements or as a conventional poset whose elements can themselves be posets. An element of a nested poset is defined as a couple consisting of one identity tuple and one entity tuple. We formally define the main operations on nested posets and demonstrate their usefulness in solving typical data management and analysis tasks such as logic navigation, constraint propagation, inference, and multidimensional analysis.
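The central construct can be illustrated with a small sketch: each element is a couple of one identity tuple and one entity tuple, and nesting induces a partial order. All names here are invented for this example and are not taken from the paper's formalism:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Element:
    # An element is a couple of one identity tuple and one entity tuple.
    identity: Tuple
    entity: Tuple
    parent: Optional["Element"] = None  # nesting induces the partial order

def leq(a: Element, b: Element) -> bool:
    """Partial order: a <= b iff a equals b or a is (transitively) nested in b."""
    while a is not None:
        if a == b:
            return True
        a = a.parent
    return False

# A two-level nested poset: 'orders' contains 'order1', which contains 'item1'.
orders = Element(identity=("orders",), entity=())
order1 = Element(identity=(1,), entity=("2014-04-24",), parent=orders)
item1  = Element(identity=(1, "a"), entity=("book", 2), parent=order1)

print(leq(item1, orders))  # True: item1 is nested in orders
print(leq(orders, item1))  # False
```

Walking the `parent` chain upward is the essence of logic navigation: given an element, its position in the hierarchy determines which greater elements it belongs to.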

Data Model Canvas für die IT-Systemübergreifende Integration von Datenmodellen zur Unterstützung von Datenanalyse-Anwendungen im Produktlebenszyklus

Eickhoff, Thomas; Eiden, Andreas; Gries, Jonas; Göbel, Jens C. 6 September 2021.
The Data Model Canvas (DMC) provides methodological and IT support for building the domain data foundation required for continuous, interdisciplinary engineering and for mapping it onto the IT systems concerned. Based on concrete analysis scenarios, the required data interconnections are modeled, including the explicitly needed data sources. At the core of this approach is the development of a domain understanding of the product data necessary for the analysis. The approach is supported by a software tool for creating the required data models.

AI-Based Transport Mode Recognition for Transportation Planning Utilizing Smartphone Sensor Data From Crowdsensing Campaigns

Grubitzsch, Philipp; Werner, Elias; Matusek, Daniel; Stojanov, Viktor; Hähnel, Markus. 11 May 2023.
Utilizing smartphone sensor data from crowdsensing (CS) campaigns for transportation planning (TP) requires highly reliable transport mode recognition. To address this, we present our RNN-based AI model MovDeep, which works on GPS, accelerometer, magnetometer and gyroscope data. It was trained on 92 hours of labeled data. MovDeep predicts six transportation modes (TM) on one-second time windows. A novel postprocessing step further improves the prediction results. We present a validation methodology (VM) that simulates unknown context to obtain a more realistic estimation of real-world performance (RWP). We explain why existing work shows overestimated prediction quality when applied to CS data and why its results are not comparable with each other. With the introduced VM, MovDeep still achieves a 99.3% F1-score on six TM. We confirm our model's very good RWP on unknown context with the Sussex-Huawei Locomotion data set. For future model comparison, both publicly available data sets can be used with our VM. Finally, we compare MovDeep to a deterministic approach as a baseline for an average-performing model (82-88% RWP recall) on a CS data set of 540k tracks, to show the significant negative impact of even small prediction errors on TP.
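The abstract does not specify the postprocessing, but a common way to improve per-second mode predictions is temporal smoothing. The sketch below shows a generic majority-vote filter over a sliding window (a stand-in for, not a description of, MovDeep's actual postprocessing):

```python
from collections import Counter

def smooth_predictions(preds, window=5):
    """Majority-vote smoothing over per-second transport-mode predictions.

    Each second's label is replaced by the most frequent label in a sliding
    window centered on it, suppressing isolated one-second misclassifications.
    """
    smoothed = []
    half = window // 2
    for i in range(len(preds)):
        lo, hi = max(0, i - half), min(len(preds), i + half + 1)
        smoothed.append(Counter(preds[lo:hi]).most_common(1)[0][0])
    return smoothed

# A spurious one-second 'car' blip inside a walking segment gets smoothed away.
raw = ["walk"] * 4 + ["car"] + ["walk"] * 4
print(smooth_predictions(raw))  # ['walk', 'walk', ..., 'walk']
```

Such smoothing exploits the fact that real transport modes persist for many seconds, so isolated outlier predictions are far more likely to be model errors than genuine mode changes.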

Towards a web-scale data management ecosystem demonstrated by SAP HANA

Lehner, Wolfgang; Faerber, Franz; Dees, Jonathan; Weidner, Martin; Baeuerle, Stefan. 12 January 2023.
Over the years, data management has diversified and moved into multiple directions, mainly caused by a significant growth in the application space with different usage patterns, a massive change in the underlying hardware characteristics, and, last but not least, growing data volumes to be processed. A solution matching these constraints has to cope with a multidimensional problem space including techniques dealing with a large number of domain-specific data types, data and consistency models, deployment scenarios, and processing, storage, and communication infrastructures on a hardware level. Specialized database engines are available and are positioned in the market, optimizing a particular dimension while relaxing other aspects (e.g. web-scale deployment with relaxed consistency). Today it is common sense that there is no single engine which can handle all the different dimensions equally well, and therefore we have very good reasons to tackle this problem and optimize the dimensions with specialized approaches in a first step. However, we argue for a second step (reflecting, in our opinion, the even harder problem) of a deep integration of individual engines into a single coherent and consistent data management ecosystem providing not only shared components but also a common understanding of the overall business semantics. More specifically, a data management ecosystem provides common "infrastructure" for software and data life cycle management, backup/recovery, replication and high availability, accounting and monitoring, and many other operational topics, where administrators and users expect a harmonized experience. More importantly from an application perspective, however, customer experience teaches us to provide a consistent business view across all different components and the ability to seamlessly combine different capabilities.
For example, within recent customer-based Internet of Things scenarios, a huge potential exists in combining graph-processing functionality with temporal and geospatial information and keywords extracted from high-throughput Twitter streams. Using SAP HANA as the running example, we want to demonstrate what moving a set of individual engines and infrastructural components towards a holistic but also flexible data management ecosystem could look like. Although some solutions for some problems are already visible on the horizon, we encourage the database research community in general to focus more on the big picture, providing a holistic, integrated approach to efficiently deal with different types of data, different access methods, and different consistency requirements; research in this field would push the envelope far beyond the traditional notion of data management.

Enjoy FRDM - play with a schema-flexible RDBMS

Lehner, Wolfgang; Voigt, Hannes; Damme, Patrick. 12 January 2023.
Relational database management systems build on the closed-world assumption, requiring upfront modeling of a usually stable schema. However, a growing number of today's database applications are characterized by self-descriptive data. The schema of self-descriptive data is very dynamic and prone to frequent changes, a situation that is always troublesome to handle in relational systems. This demo presents the relational database management system FRDM. With flexible relational tables, FRDM greatly simplifies the management of self-descriptive data in a relational database system. Self-descriptive data can reside directly next to traditionally modeled data, and both can be queried together using SQL. The demo showcases the various features of FRDM and provides first-hand experience of the newly gained freedom in relational database systems.
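FRDM's internal storage model is not described in the abstract, but the general pattern of keeping self-descriptive data queryable next to traditionally modeled data can be sketched with a classic entity-attribute-value side table (a generic illustration, not FRDM's actual implementation):

```python
import sqlite3

# Fixed-schema data and self-descriptive data side by side in one RDBMS.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT);      -- modeled upfront
    CREATE TABLE product_attr (id INTEGER, attr TEXT, value TEXT); -- schema-flexible part
""")
conn.execute("INSERT INTO product VALUES (1, 'camera')")
# New attributes can appear at any time without a schema change:
conn.execute("INSERT INTO product_attr VALUES (1, 'megapixels', '24')")
conn.execute("INSERT INTO product_attr VALUES (1, 'lens_mount', 'E')")

# Both kinds of data are queried together with plain SQL.
rows = conn.execute("""
    SELECT p.name, a.attr, a.value
    FROM product p JOIN product_attr a ON p.id = a.id
    ORDER BY a.attr
""").fetchall()
print(rows)  # [('camera', 'lens_mount', 'E'), ('camera', 'megapixels', '24')]
```

The trade-off this pattern makes explicit is the one FRDM targets: the flexible part accepts any attribute without DDL, at the cost of pushing schema knowledge from the catalog into the data itself.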

DebEAQ - debugging empty-answer queries on large data graphs

Lehner, Wolfgang; Vasilyeva, Elena; Heinze, Thomas; Thiele, Maik. 12 January 2023.
The large volume of freely available graph data sets makes analyzing them difficult for users. Typically, users pose many pattern matching queries and study their answers. Without deep knowledge about the data graph, users can create 'failing' queries that deliver empty answers. Analyzing the causes of these empty answers is a time-consuming and complicated task, especially for graph queries. To help users debug such 'failing' queries, there are two common approaches: one focuses on discovering missing subgraphs of a data graph, the other tries to rewrite the queries so that they deliver some results. In this demonstration, we combine both approaches and give users the opportunity to discover why their queries returned empty results. To this end, we propose DebEAQ, a debugging tool for pattern matching queries, which allows users to compare both approaches and also provides functionality to debug queries manually.
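The query-rewriting approach can be illustrated with a deliberately simplified sketch: given a conjunctive query that returns nothing, greedily drop predicates until some results appear, and report what was dropped as the explanation. DebEAQ operates on graph patterns rather than flat predicates, so this is only an analogy:

```python
def relax_until_nonempty(items, predicates):
    """Greedily drop predicates from a failing conjunctive query until it
    matches something; returns (matches, dropped_predicates).
    """
    active = list(predicates)
    dropped = []
    while active:
        matches = [x for x in items if all(p(x) for p in active)]
        if matches:
            return matches, dropped
        dropped.append(active.pop())  # drop the last-added predicate first
    return items, dropped

people = [{"name": "Ada", "city": "London"},
          {"name": "Alan", "city": "Wilmslow"}]
# A 'failing' query: nobody is named Ada *and* lives in Wilmslow.
preds = [lambda x: x["name"] == "Ada", lambda x: x["city"] == "Wilmslow"]
matches, dropped = relax_until_nonempty(people, preds)
print([m["name"] for m in matches], len(dropped))  # ['Ada'] 1
```

The dropped predicates tell the user *why* the original answer was empty (here: the city constraint conflicts with the name constraint), which is exactly the kind of explanation an empty-answer debugger surfaces.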

Online horizontal partitioning of heterogeneous data

Herrmann, Kai; Voigt, Hannes; Lehner, Wolfgang. 30 November 2020.
In an increasing number of use cases, databases face the challenge of managing heterogeneous data. Heterogeneous data is characterized by a quickly evolving variety of entities without a common set of attributes. These entities do not show enough regularity to be captured in a traditional database schema. A common solution is to centralize the diverse entities in a universal table. Usually, this leads to a very sparse table. Although today's techniques allow efficient storage of sparse universal tables, query efficiency is still a problem. Queries that address only a subset of attributes have to read the whole universal table, including many irrelevant entities. A solution is to use a partitioning of the table, which allows pruning partitions of irrelevant entities before they are touched. Creating and maintaining such a partitioning manually is very laborious or even infeasible due to its enormous complexity. Thus, an autonomous solution is desirable. In this article, we define the Online Partitioning Problem for heterogeneous data. We sketch how an optimal solution for this problem can be determined based on hypergraph partitioning. Although it leads to the optimal partitioning, the hypergraph approach is inappropriate for an implementation in a database system. We present Cinderella, an autonomous online algorithm for horizontal partitioning of heterogeneous entities in universal tables. Cinderella is designed to keep its overhead low by operating online; it incrementally assigns entities to partitions while they are touched anyway during modifications. This enables a reasonable physical database design at runtime instead of static modeling.
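The core idea of online, incremental partition assignment can be sketched as follows. This is a simplified greedy heuristic by attribute overlap, invented for illustration; it is not Cinderella's actual cost model:

```python
def assign(entity_attrs, partitions):
    """Online assignment of one heterogeneous entity to a horizontal partition.

    Pick the partition whose attribute set overlaps most with the entity's
    attributes; open a new partition if none overlaps at all. Keeping entities
    with similar attributes together lets attribute-subset queries prune the
    other partitions entirely.
    """
    best = max(partitions, key=lambda p: len(p["attrs"] & entity_attrs),
               default=None)
    if best is None or not (best["attrs"] & entity_attrs):
        best = {"attrs": set(), "entities": []}
        partitions.append(best)
    best["attrs"] |= entity_attrs
    best["entities"].append(sorted(entity_attrs))
    return best

parts = []
assign({"title", "author"}, parts)
assign({"title", "year"}, parts)    # overlaps on 'title' -> same partition
assign({"salary", "dept"}, parts)   # no overlap -> new partition
print(len(parts))  # 2
```

A query touching only `salary` and `dept` now reads one of two partitions instead of the whole universal table; because assignment happens while the entity is being written anyway, the bookkeeping overhead per modification stays small, which is the point of operating online.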
