11

Management of multidimensional aggregates for efficient online analytical processing

Lehner, Wolfgang, Albrecht, J., Bauer, A., Deyerling, O., Günzel, H., Hummer, W., Schlesinger, J. 02 June 2022 (has links)
Proper management of multidimensional aggregates is a fundamental prerequisite for efficient OLAP. The experimental OLAP server CUBESTAR, whose concepts are described here, was designed for exactly that purpose. All logical query processing is based solely on a specific algebra for multidimensional data; however, a relational database system is used for the physical storage of the data. CUBESTAR can therefore be classified as a ROLAP system. In comparison to commercially available systems, CUBESTAR is superior in two respects. First, the implemented multidimensional data model allows more adequate modeling of hierarchical dimensions, because properties that apply only to certain dimensional elements can be modeled context-sensitively. This is reflected by an extended star schema on the relational side. Second, CUBESTAR supports multidimensional query optimization by caching multidimensional aggregates. Since summary tables are not created in advance but on demand, hot spots can be adequately represented. The dynamic, partition-oriented caching method achieves cost reductions of up to 60% with space requirements of less than 10% of the size of the fact table.
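Below is a minimal Python sketch of the demand-driven aggregate caching idea described in the abstract: an aggregate is computed from the fact table the first time its grouping is requested and then served from the cache, so frequently queried groupings (hot spots) end up materialized. All names (AggregateCache, the sample schema) are illustrative assumptions, not CUBESTAR's actual API, which operates inside a relational engine.

```python
# A sketch of demand-driven aggregate caching, assuming a toy fact table
# held as a list of dicts; a real ROLAP engine would use summary tables.
from collections import defaultdict

class AggregateCache:
    """Caches aggregates keyed by the grouping attributes of a query."""

    def __init__(self, fact_rows):
        self.fact_rows = fact_rows           # list of dicts: dimensions + 'measure'
        self.cache = {}                      # grouping key -> aggregated rows

    def query(self, group_by):
        key = tuple(sorted(group_by))
        if key not in self.cache:            # summary built on demand, not
            agg = defaultdict(float)         # in advance, so hot spots
            for row in self.fact_rows:       # naturally end up cached
                agg[tuple(row[d] for d in key)] += row["measure"]
            self.cache[key] = dict(agg)
        return self.cache[key]

facts = [
    {"product": "p1", "region": "east", "month": "jan", "measure": 10.0},
    {"product": "p1", "region": "west", "month": "jan", "measure": 7.0},
    {"product": "p2", "region": "east", "month": "feb", "measure": 3.0},
]
cube = AggregateCache(facts)
print(cube.query({"region"}))                # computed, then cached
print(cube.query({"region"}))                # served from cache
```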
12

Data models for cross-disciplinary knowledge bases in interdisciplinary applications

Molch, Silke 17 December 2019 (has links)
The aim of this contribution from teaching practice is to demonstrate, using a teaching example from an applied engineering discipline, the approaches required for building cross-disciplinary knowledge bases and for using them within student semester projects.
13

Semantic revision control for the evolution of information and data models

Hensel, Stephan 13 April 2021 (has links)
The increased distribution of systems in planning and production improves the agility and maintainability of individual components, while at the same time increasing their cross-linking. This places new requirements on the semantic description of components and their connections, for which information and data models are indispensable. The life cycle of these models is characterized by changes that must be dealt with. However, today's revision control systems, which could provide the industrially required traceability, are not tailored to the specific requirements of information and data models, reducing the possibilities for consistent evolution. Within this thesis, a revision management system was developed that integrates revision control and evolution mechanisms to provide end-to-end support for the evolution of information and data models. Its key feature is a technology-independent mathematical and semantic description that allows the concept to be transferred to different technologies. As an example, the concept was implemented for the Semantic Web as an extension of the open-source project R43ples.
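The delta-based versioning idea behind such a system can be illustrated with a small sketch: each revision stores only the triples it adds and removes, and any revision's full model is rebuilt by replaying deltas from the root. Class and method names here are hypothetical; R43ples itself operates on RDF named graphs via SPARQL.

```python
# A minimal sketch of delta-based revision control over a set of
# RDF-style triples, loosely in the spirit of R43ples.
class Revision:
    def __init__(self, number, added, removed, parent=None):
        self.number = number
        self.added = frozenset(added)        # triples added by this revision
        self.removed = frozenset(removed)    # triples removed by this revision
        self.parent = parent

    def materialize(self):
        """Rebuild the full triple set by replaying deltas from the root."""
        base = self.parent.materialize() if self.parent else frozenset()
        return (base - self.removed) | self.added

r0 = Revision(0, {("ex:Pump1", "rdf:type", "ex:Pump")}, set())
r1 = Revision(1, {("ex:Pump1", "ex:maxPressure", "16bar")}, set(), parent=r0)
print(sorted(r1.materialize()))
```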
14

Clustering Uncertain Data with Possible Worlds

Lehner, Wolfgang, Volk, Peter Benjamin, Rosenthal, Frank, Hahmann, Martin, Habich, Dirk 16 August 2022 (has links)
The topic of managing uncertain data has been explored in many ways, and different methodologies for data storage and query processing have been proposed. As the availability of management systems grows, research on analytics of uncertain data is gaining in importance. Similar to the challenges faced in data management, algorithms for mining uncertain data suffer a high performance degradation compared to their counterparts for certain data. To overcome this degradation, the MCDB approach was developed for uncertain data management based on the possible-worlds scenario. As this methodology shows significant performance and scalability enhancements, we adopt it for mining uncertain data. In this paper, we introduce a clustering methodology for uncertain data and illustrate the open issues that arise when clustering uncertain data with this approach.
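The possible-worlds idea can be sketched as follows: each uncertain value is a distribution, worlds are drawn by Monte Carlo sampling (as in MCDB), each sampled world is clustered conventionally, and the results are aggregated into co-clustering probabilities. The Gaussian sampler, the simple 1-D k-means, and all names are illustrative assumptions, not the paper's algorithm.

```python
# A minimal sketch of possible-world clustering for uncertain 1-D data.
import random

def kmeans_1d(points, k, iters=20):
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return [min(range(k), key=lambda i: abs(p - centers[i])) for p in points]

# Uncertain objects: value ~ gauss(mu, sigma); each draw yields one world.
uncertain = [(1.0, 0.2), (1.2, 0.2), (8.0, 0.5), (8.3, 0.5)]
together = [[0] * len(uncertain) for _ in uncertain]
n_worlds = 200
for _ in range(n_worlds):
    world = [random.gauss(mu, sd) for mu, sd in uncertain]
    labels = kmeans_1d(world, k=2)
    for i in range(len(uncertain)):
        for j in range(len(uncertain)):
            if labels[i] == labels[j]:
                together[i][j] += 1

# Estimated probability that objects i and j share a cluster
# across the sampled possible worlds.
print([[round(c / n_worlds, 2) for c in row] for row in together])
```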
15

Forecasting the data cube

Lehner, Wolfgang, Fischer, Ulrike, Schildt, Christopher, Hartmann, Claudio 12 January 2023 (has links)
Forecasting time series data is crucial in a number of domains such as supply chain management and display advertising. In these areas, the time series data to forecast is typically organized along multiple dimensions, leading to a high number of time series that need to be forecasted. Most current approaches focus only on selecting and optimizing a forecast model for a single time series. In this paper, we explore how we can utilize time series at different dimensions to increase forecast accuracy and, optionally, reduce model maintenance overhead. Solving this problem is challenging due to the large space of possibilities and potentially high model creation costs. We propose a model configuration advisor that automatically determines the best set of models, a model configuration, for a given multi-dimensional data set. Our approach is based on a general process that iteratively examines more and more models and simultaneously controls the search space depending on the data set, model type, and available hardware. The final model configuration is integrated into F2DB, an extension of PostgreSQL that processes forecast queries and maintains the configuration as new data arrives. We comprehensively evaluated our approach on real and synthetic data sets. The evaluation shows that our approach significantly increases forecast query accuracy while ensuring low model costs.
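The core trade-off such an advisor explores can be sketched in a few lines: forecast every leaf series separately and sum the forecasts, or fit one model on the aggregate level, and keep whichever configuration has the lower holdout error. The toy data and the deliberately nonlinear median model are assumptions for illustration only; the real advisor searches a far larger configuration space under hardware and cost constraints.

```python
# A sketch of comparing two model configurations over one cube dimension.
def median_forecast(history):
    recent = sorted(history[-3:])
    return recent[len(recent) // 2]          # a deliberately nonlinear model

def holdout_error(series):
    return abs(median_forecast(series[:-1]) - series[-1])

leaf_series = {
    "east": [10, 14, 11, 13],
    "west": [20, 16, 21, 20],
}
total = [sum(vals) for vals in zip(*leaf_series.values())]   # [30, 30, 32, 33]

# Configuration A: one model per leaf series, forecasts summed up.
err_leaves = abs(sum(median_forecast(s[:-1]) for s in leaf_series.values())
                 - total[-1])
# Configuration B: a single model on the aggregate level.
err_total = holdout_error(total)

best = "per-leaf models" if err_leaves < err_total else "one aggregate model"
print(f"leaf error={err_leaves}, aggregate error={err_total} -> {best}")
```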
16

Exploiting big data in time series forecasting: A cross-sectional approach

Lehner, Wolfgang, Hartmann, Claudio, Hahmann, Martin, Rosenthal, Frank 12 January 2023 (has links)
Forecasting time series data is an integral component of management, planning, and decision making. Following the Big Data trend, large amounts of time series data are available from many heterogeneous data sources in more and more application domains. The highly dynamic and often fluctuating character of these domains, combined with the logistical problems of collecting such data from a variety of sources, poses new challenges for forecasting. Traditional approaches rely heavily on extensive and complete historical data to build time series models and are thus no longer applicable if time series are short or, even more importantly, intermittent. In addition, large numbers of time series have to be forecasted on different aggregation levels with preferably low latency, while forecast accuracy should remain high. This is almost impossible when keeping the traditional focus on creating one forecast model for each individual time series. In this paper, we tackle these challenges by presenting a novel forecasting approach called cross-sectional forecasting. This method is especially designed for Big Data sets with a multitude of time series. Our approach breaks with existing concepts by creating only one model for a whole set of time series and requiring only a fraction of the available data to provide accurate forecasts. By utilizing available data from all time series of a data set, missing values can be compensated for and accurate forecasting results can be calculated quickly on arbitrary aggregation levels.
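A minimal sketch of the cross-sectional idea, under simplifying assumptions: all series are normalized to a common level, one shared profile is estimated from the cross-sections of every series at each time step (so a gap in one series is compensated by the others), and forecasts are produced by extrapolating the shared profile and rescaling it per series. The normalization and the linear extrapolation are illustrative choices, not the paper's exact method.

```python
# One shared model for a whole set of series; None marks a missing value.
series = {
    "s1": [10, 12, 14, None, 18],
    "s2": [100, 118, 142, 161, None],
    "s3": [5, 6, None, 8, 9],
}

def mean(xs):
    xs = [x for x in xs if x is not None]
    return sum(xs) / len(xs)

levels = {k: mean(v) for k, v in series.items()}

# Cross-sectional profile: at each time step, average the normalized
# values of every series that actually has an observation there.
T = len(next(iter(series.values())))
profile = [mean([v[t] / levels[k] for k, v in series.items()
                 if v[t] is not None]) for t in range(T)]

# Extrapolate the shared profile one step (simple linear continuation)
# and rescale it to each series' own level.
next_factor = profile[-1] + (profile[-1] - profile[-2])
forecasts = {k: round(levels[k] * next_factor, 1) for k in series}
print(profile, forecasts)
```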
17

F2DB: The Flash-Forward Database System

Lehner, Wolfgang, Fischer, Ulrike, Rosenthal, Frank 29 November 2022 (has links)
Forecasts are important for decision-making and risk assessment in many domains. Since current database systems do not provide integrated support for forecasting, it is usually done outside the database system by specially trained experts using forecast models. However, integrating model-based forecasting as a first-class citizen inside a DBMS speeds up the forecasting process by avoiding data export and by applying database-related optimizations such as reusing created forecast models. In particular, it allows subsequent processing of forecast results inside the database. In this demo, we present our prototype F2DB, based on PostgreSQL, which allows transparent processing of forecast queries. Our system automatically takes care of model maintenance when the underlying dataset changes. In addition, we offer optimizations that save maintenance costs and increase accuracy by using derivation schemes for multidimensional data. Our approach reduces the required expert knowledge by enabling arbitrary users to apply forecasting in a declarative way.
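How such transparent forecast-query processing might look can be sketched as follows: the first forecast query over a relation trains a model lazily, later queries reuse it, and inserts trigger incremental model maintenance instead of retraining. The class, the fixed smoothing parameters, and the query syntax hinted at in the comments are illustrative assumptions; F2DB itself extends PostgreSQL's query processor.

```python
# A sketch of lazy model training plus incremental maintenance, using
# Holt's linear exponential smoothing with fixed alpha = beta = 0.5.
class ForecastingLayer:
    def __init__(self):
        self.data = []
        self.model = None                    # (level, trend) once trained

    def insert(self, value):
        self.data.append(value)
        if self.model is not None:           # maintain model on new data
            level, trend = self.model
            new_level = 0.5 * value + 0.5 * (level + trend)
            self.model = (new_level, 0.5 * (new_level - level) + 0.5 * trend)

    def forecast(self, horizon):
        if self.model is None:               # train lazily on first query
            level, trend = self.data[0], 0.0
            for v in self.data[1:]:
                new_level = 0.5 * v + 0.5 * (level + trend)
                trend = 0.5 * (new_level - level) + 0.5 * trend
                level = new_level
            self.model = (level, trend)
        level, trend = self.model
        return [level + (h + 1) * trend for h in range(horizon)]

db = ForecastingLayer()
for v in [100, 102, 105, 107, 110]:
    db.insert(v)
print(db.forecast(3))                        # akin to a declarative forecast query
db.insert(113)                               # maintenance instead of retraining
print(db.forecast(3))
```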
