1

Bridging Decision Applications and Multidimensional Databases

Nargesian, Fatemeh 04 May 2011
Data warehouses were envisioned to facilitate analytical reporting and data visualization by providing a model for the flow of data from operational databases to decision support environments. Decision support environments provide a multidimensional conceptual view of the underlying data warehouse, which is usually stored in a relational DBMS. Typically, there is an impedance mismatch between this conceptual view, which is also shared by all decision support applications accessing the data warehouse, and the physical model of the data stored in relational DBMSs. This thesis presents a mapping compilation algorithm in the context of the Conceptual Integration Model (CIM) [67] framework. In the CIM framework, the relationships between the conceptual model and the physical model are specified by a set of attribute-to-attribute correspondences. The algorithm compiles these correspondences into a set of mappings that associate each construct in the conceptual model with a query on the physical model. Moreover, the homogeneity and summarizability of data in conceptual models are key to accurate query answering, a necessity in decision-making environments. To address this issue, we propose a data-driven approach that refactors relational models into summarizable schemas and instances. We outline the algorithms and challenges in bridging multidimensional conceptual models and the physical model of data warehouses, and discuss experimental results.
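To make the compilation step concrete, here is a minimal sketch of how attribute-to-attribute correspondences could be compiled into SQL queries over the physical model. All names (the Time level, the dim_date and fact_sales tables, the compile_construct helper) are hypothetical illustrations, not the CIM framework's actual API.

```python
# A minimal sketch of correspondence-to-query compilation. Each correspondence
# pairs an attribute of a conceptual construct with a column of a physical table.
from collections import defaultdict

# Hypothetical correspondences for a conceptual "Time" level and a "Sales" fact.
correspondences = [
    ("Time.month",   ("dim_date", "month")),
    ("Time.year",    ("dim_date", "year")),
    ("Sales.amount", ("fact_sales", "amount")),
]

def compile_construct(construct, corrs):
    """Compile the correspondences of one conceptual construct into a SQL query."""
    cols_by_table = defaultdict(list)
    for attr, (table, column) in corrs:
        level, attribute = attr.split(".")
        if level == construct:
            cols_by_table[table].append(f"{column} AS {attribute}")
    # Simplification: one physical table per construct. The real compilation
    # must also generate joins and check summarizability of roll-ups.
    (table, cols), = cols_by_table.items()
    return f"SELECT DISTINCT {', '.join(cols)} FROM {table}"

print(compile_construct("Time", correspondences))
# -> SELECT DISTINCT month AS month, year AS year FROM dim_date
```

A real compiler must additionally generate joins across dimension and fact tables and verify that roll-ups are summarizable; the sketch deliberately handles only the single-table case.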
2

Limits of Schema Mappings

Kolaitis, Phokion; Pichler, Reinhard; Sallinger, Emanuel; Savenkov, Vadim 02 October 2018
Schema mappings have been extensively studied in the context of data exchange and data integration, where they have turned out to be the right level of abstraction for formalizing data interoperability tasks. Up to now, and for the most part, schema mappings have been studied as static objects, in the sense that the focus has been on a single schema mapping of interest or, in the case of composition, on a pair of schema mappings of interest. In this paper, we adopt a dynamic viewpoint and embark on a study of sequences of schema mappings and of the limiting behavior of such sequences. To this end, we first introduce a natural notion of distance on sets of finite target instances that expresses how close two sets of target instances are with respect to the certain answers of conjunctive queries on these sets. Using this notion of distance, we investigate pointwise limits and uniform limits of sequences of schema mappings, as well as the companion notions of pointwise Cauchy and uniformly Cauchy sequences of schema mappings. We obtain a number of results about the limits of sequences of GAV schema mappings and the limits of sequences of LAV schema mappings that reveal striking differences between these two classes. We also consider the completion of the metric space of sets of target instances and obtain concrete representations of limits of sequences of schema mappings in terms of generalized schema mappings, that is, schema mappings with infinite target instances as solutions to (finite) source instances.
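The abstract leaves the metric abstract. The sketch below gives one plausible, Cantor-style shape for a certain-answer-based distance and then states the pointwise and uniform limit notions it induces; the formula for d is an illustrative guess rather than the paper's definition, while the limit notions follow the standard metric-space pattern.

```latex
% Illustrative only: one plausible distance on sets of target instances,
% based on certain answers of conjunctive queries (CQs). The paper's exact
% definition may differ.
\[
  d(\mathcal{K}_1,\mathcal{K}_2) \;=\; 2^{-n},
  \qquad
  n \;=\; \max\bigl\{\, k \;\bigm|\;
    \mathrm{cert}_q(\mathcal{K}_1) = \mathrm{cert}_q(\mathcal{K}_2)
    \text{ for every CQ } q \text{ with } \le k \text{ atoms} \,\bigr\},
\]
% with d = 0 when the certain answers agree on all CQs. Given any such
% metric, the limit notions take the standard analytic form, where
% Sol(I, M) is the set of solutions of source instance I under mapping M:
\[
  M_n \to M \text{ pointwise} \;\iff\;
    \forall I:\; d\bigl(\mathrm{Sol}(I,M_n),\,\mathrm{Sol}(I,M)\bigr) \to 0,
\]
\[
  M_n \to M \text{ uniformly} \;\iff\;
    \sup_I\; d\bigl(\mathrm{Sol}(I,M_n),\,\mathrm{Sol}(I,M)\bigr) \to 0.
\]
```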
3

Validation of mappings between data schemas

Rull Fort, Guillem 19 January 2011
In this thesis, we present a new approach to the validation of mappings between data schemas that allows the designer to check whether a mapping satisfies certain desirable properties. The feedback our approach provides is not just a Boolean answer: depending on the result of the check, the designer receives either a (counter)example that illustrates the (un)satisfiability of the tested property, or the set of mapping assertions and schema integrity constraints responsible for that result. One of the main characteristics of our approach is that it handles a very expressive class of relational mapping scenarios. In particular, it deals with mapping assertions in the form of query inclusions and query equalities, and it allows negation and arithmetic comparisons in both the mapping assertions and the views of the schemas. It also allows integrity constraints, which can be defined not only over the base relations but also in terms of the views. Since reasoning on this class of mapping scenarios is, unfortunately, undecidable, we propose to perform a termination test as a pre-validation step. If the test answers positively, then the subsequent check of the corresponding desirable property is guaranteed to terminate. Finally, we go beyond the relational setting and apply our approach to the context of mappings between XML schemas.
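As a concrete illustration of the assertion class described above, here is a hypothetical mapping assertion: a query inclusion between a source schema A and a target schema B that uses both negation and an arithmetic comparison. The relation names are invented for the example.

```latex
% Hypothetical query-inclusion assertion between a source schema A and a
% target schema B, using negation and an arithmetic comparison.
\[
  \{\, x \mid \exists p\ \mathrm{Order}_A(x, p) \land p > 100
              \land \neg\, \mathrm{Cancelled}_A(x) \,\}
  \;\subseteq\;
  \{\, x \mid \mathrm{HighValue}_B(x) \,\}
\]
```

A typical validation question would then be whether this inclusion can be satisfied by non-empty instances that also respect the schemas' integrity constraints; the approach returns either a witnessing example or the set of constraints and assertions responsible for the failure.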
4

Embracing Incompleteness in Schema Mappings

Rodriguez-Gianolli, Patricia 09 August 2013
Various forms of information integration have become ubiquitous in current Business Intelligence (BI) technologies. In many cases, the semantic relationship between heterogeneous data sources is specified using high-level declarative rules called schema mappings. For decades, Skolem functions have been regarded as an important tool in schema mappings, as they permit a precise representation of incomplete information. The powerful mapping language of second-order tuple-generating dependencies (SO tgds) permits arbitrary Skolem functions and has been proven to be the right class for modeling many integration problems, such as composition and correlation of mappings. This language is strictly more powerful than the languages used in many integration systems, including source-to-target and nested tgds, which are both first-order (FO) languages (commonly known as GLAV and nested GLAV mappings). An important class of GLAV mappings is that of Local-As-View (LAV) tgds, which have found important applications in data integration. These FO mapping languages are known to have more desirable programmatic and computational properties. In this thesis, we present a number of techniques for translating some SO tgds into equivalent, more manageable FO schema mappings. Our results rely on understanding and controlling the presence of incompleteness in mappings. We show that the composition of LAV mappings is not only FO but can always be expressed as a LAV mapping. As a byproduct, we show that the problem of recovery checking for LAV mappings becomes tractable, in contrast to the case of GLAV mappings, for which it is known to be undecidable. We introduce two approaches for transforming SO tgds into equivalent nested GLAV mappings. Our approach takes source constraints into account and provides sufficient conditions under which the rich Skolem functions in SO tgds are well behaved and have an FO semantics. We show experimentally that these conditions cover a very large number of real schema mappings. Last, we propose a first step toward embracing incompleteness in the context of BI applications. Specifically, we present elements of a formal framework for vivifying data with respect to a business model, and we view the task of discovering data-to-business interpretations as one of removing incompleteness from these mappings.
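To see the gap between these languages, compare the textbook-style examples below: an SO tgd whose Skolem function invents a value, and a LAV tgd whose premise is a single source atom. The relation names are illustrative and not drawn from the thesis.

```latex
% An SO tgd: the Skolem function f names the otherwise-unknown manager of
% each employee, so the same invented value can be reused consistently.
\[
  \exists f\ \forall e\ \bigl( \mathrm{Emp}(e) \rightarrow \mathrm{Mgr}(e, f(e)) \bigr)
\]
% A LAV tgd: exactly one source atom in the premise; incompleteness appears
% only as existentially quantified variables in the conclusion.
\[
  \forall e, d\ \bigl( \mathrm{Emp}(e, d) \rightarrow
      \exists m\ \mathrm{WorksIn}(e, d) \land \mathrm{Reports}(e, m) \bigr)
\]
```

The thesis's translation results can be read as identifying conditions under which a Skolem term such as f(e) behaves no differently from an existentially quantified variable like m, so that an SO tgd collapses to an FO mapping.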