1

The Conceptual Integration Modelling Framework: Semantics and Query Answering

Guseva, Ekaterina January 2016 (has links)
In the context of business intelligence (BI), the accuracy and accessibility of information consolidation play an important role. Integrating data from different sources involves transforming the data according to constraints expressed in an appropriate language. The Conceptual Integration Modelling framework (CIM) acts as such a language. The CIM aims to let business users specify what information is needed in a simplified and comprehensive language. Achieving this requires raising the level of abstraction to the conceptual level, so that users are able to pose queries expressed in a conceptual query language (CQL). The CIM comprises three facets: a conceptual schema, expressed in an Extended Entity Relationship (EER) model (a high-level conceptual model used to design databases), against which users pose their queries; a relational multidimensional model that represents the data sources; and mappings between the conceptual schema and the sources. Such mappings can be specified in two ways. In the first scenario, the so-called global-as-view (GAV) approach, the global schema is mapped to views over the relational sources by specifying how to obtain the tuples of a global relation from tuples in the sources. In the second scenario, called local-as-view (LAV), sources may contain less detailed (more aggregated) information, so the local relations are defined as views over the global relations. In this thesis, we address the problem of expressibility and decidability of queries written in CQL. We first define the semantics of the CIM by translating the conceptual model into a set of first-order sentences containing a class of conceptual dependencies (CDs), namely tuple-generating dependencies (TGDs) and equality-generating dependencies (EGDs), together with certain (first-order) restrictions that express multidimensionality. Here, multidimensionality means that facts in a data warehouse can be described from different perspectives. The EGDs assert equalities between tuples, and the TGDs state that two instances are in a subtype association (more precise definitions are given later in the thesis). We use a non-conflicting class of conceptual dependencies that guarantees the decidability of query answering; non-conflicting dependencies avoid interaction between TGDs and EGDs. Our semantics extend the existing semantics defined for extended entity relationship models with the notions of fact, dimension category, dimensional hierarchy and dimension attributes. In addition, a class of conceptual queries is defined and proven to be decidable. The DL-Lite logic has been used extensively for query rewriting, as it allows the data complexity of query answering to be reduced to AC0. Moreover, we present a query rewriting algorithm for the defined class of conceptual dependencies. Finally, we consider the problem in light of the GAV and LAV approaches and prove the corresponding query answering complexities. The query answering problem becomes decidable if we add, to a well-known set of EGD and TGD dependencies, certain constraints that guarantee summarizability. Under the global-as-view approach to mapping, query answering has AC0 data complexity and EXPTIME combined complexity; it becomes coNP-hard under the LAV approach.
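To make the GAV/LAV distinction in this abstract concrete, the following Python sketch contrasts the two mapping styles using invented relation names (per-store sales, a region lookup, region totals); it is only an illustration of the idea, not code or a schema from the thesis.

# Hypothetical illustration of GAV vs. LAV mappings (names invented for this sketch).

# Two relational sources: per-store sales facts and a store-to-region lookup.
source_sales = [("s1", "2015", 100), ("s2", "2015", 250)]   # (store, year, amount)
source_stores = [("s1", "east"), ("s2", "west")]            # (store, region)

# GAV: the global relation GlobalSales(store, region, year, amount) is defined
# as a view (here, a join) over the sources, i.e. we say how to *obtain* its tuples.
def gav_global_sales():
    region_of = dict(source_stores)
    return [(store, region_of[store], year, amount)
            for (store, year, amount) in source_sales]

# LAV: a coarser source RegionTotals(region, year, total) is described as a view
# over the global relation; it holds only aggregated information, so the global
# relation is not directly computable from it (certain answers are needed).
def lav_region_totals(global_sales):
    totals = {}
    for (_store, region, year, amount) in global_sales:
        totals[(region, year)] = totals.get((region, year), 0) + amount
    return [(region, year, total) for (region, year), total in totals.items()]

if __name__ == "__main__":
    g = gav_global_sales()
    print("GAV global relation:", g)
    print("LAV source defined over it:", lav_region_totals(g))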
2

Application of Definability to Query Answering over Knowledge Bases

Kinash, Taras January 2013 (has links)
Answering object queries (i.e., instance retrieval) is a central task in ontology-based data access (OBDA). Performing this task involves reasoning with respect to a knowledge base K (i.e., an ontology) over some description logic (DL) dialect L. As the expressive power of L grows, so does the complexity of reasoning with respect to K. Therefore, eliminating the need to reason with respect to a knowledge base K is desirable. In this work, we propose an optimization that improves the performance of answering object queries by eliminating the need to reason with respect to the knowledge base and, instead, utilizing cached query results when possible. In particular, given a DL dialect L, an object query C over some knowledge base K, and a set of cached query results S = {S1, ..., Sn} obtained from evaluating past queries, we rewrite C into an equivalent query D that can be evaluated with respect to an empty knowledge base, using cached query results S' = {Si1, ..., Sim}, where S' is a subset of S. The new query D is an interpolant for the original query C with respect to K and S. To find D, we leverage a tool for enumerating interpolants of a given sentence with respect to some theory. We describe a procedure that maps a knowledge base K, expressed in a description logic dialect of first-order logic, and an object query C into an equivalent theory and query that serve as input to the interpolant-enumerating tool, and that maps the resulting interpolants into an object query D that can be evaluated over an empty knowledge base. We show the efficacy of our approach through experimental evaluation on the Lehigh University Benchmark (LUBM) data set, as well as on a synthetic data set, LUBMMOD, that we created by augmenting the LUBM ontology with additional axioms.
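The caching idea can be pictured with a small Python sketch in which a hand-coded rewriting table stands in for the interpolant-enumerating tool described above; concept names, cached extents and rewritings are all invented for illustration.

# Hypothetical sketch: answer an object query from cached query results by
# evaluating a rewriting over the caches instead of reasoning over the KB.

# Cached extents of previously answered object queries (names invented).
cache = {
    "GraduateStudent": {"ann", "bob"},
    "TakesCourse":     {"ann", "bob", "carol"},
    "Professor":       {"dan"},
}

# Rewritings that, in the real approach, would be interpolants found with
# respect to the knowledge base; here they are simply hard-coded as
# set expressions over cached queries: (operator, operand names).
rewritings = {
    "Student": ("union", ["GraduateStudent"]),
    "BusyGraduate": ("intersection", ["GraduateStudent", "TakesCourse"]),
}

def answer_from_cache(query):
    """Evaluate the query using only cached extents, i.e. over an empty KB."""
    if query not in rewritings:
        return None  # no equivalent rewriting over the cache: must reason over K
    op, parts = rewritings[query]
    sets = [cache[p] for p in parts]
    return set.union(*sets) if op == "union" else set.intersection(*sets)

if __name__ == "__main__":
    print(answer_from_cache("BusyGraduate"))  # {'ann', 'bob'}
    print(answer_from_cache("Student"))       # {'ann', 'bob'}
    print(answer_from_cache("Dean"))          # None: fall back to KB reasoning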
3

Formalização do processo de tradução de consultas em ambientes de integração de dados XML / Formalization of a query translation process in XML data integration

Alves, Willian Bruno Gomes January 2008 (has links)
In order to search for the same information in heterogeneous XML data sources, it would be desirable to state a single query against a global conceptual schema and then translate it automatically into an XML query for each specific data source. CXPath (Conceptual XPath) has been proposed as a language for querying XML sources at the conceptual level. The language was designed to simplify the translation of queries at the conceptual level into queries at the XML level, while keeping its syntax easy to learn; for this reason, its syntax is very similar to that of the XPath language used for querying XML documents. In this dissertation, a translation mechanism from queries at the conceptual level, written in CXPath, to queries at the XML level, written in XPath, is formally defined. The treatment of inheritance relationships in the translation mechanism is presented, and the relation between the expressivity of the conceptual model and the translation mechanism is discussed. In some cases, the simple translation of a CXPath query does not return some of the expected answers, because the data sources may be incomplete. In this work, the conceptual model that constitutes the global schema of the data integration system is extended with inclusion dependencies, and the query answering mechanism is modified to deal with this kind of dependency. More specifically, mechanisms for query rewriting and for eliminating redundancies in queries are presented to handle these dependencies. With this increase in the expressivity of the global schema, it is possible to infer results, from the data available in the integration system, that would not be obtained by a simple query translation. The data integration approach used in this dissertation is also presented within the formal framework for data integration proposed by (LENZERINI, 2002). According to the author, that framework is general enough to capture all data integration approaches in the literature, including, in particular, the approach considered in this dissertation.
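A minimal Python sketch of the kind of conceptual-to-XML translation the dissertation formalizes, using an invented mapping table rather than CXPath's actual grammar; inheritance is handled by expanding a concept into the union of its subtypes' XPath fragments.

# Hypothetical sketch of translating a conceptual path into XPath for one source.
# The mapping table and concept names are invented for illustration only.

# How each conceptual concept/attribute is realised in a given XML source.
xpath_of = {
    "Publication": "//publication",
    "Article":     "//publication[@type='article']",
    "Thesis":      "//publication[@type='thesis']",
    "title":       "title",
}

# Inheritance in the conceptual model: querying a supertype must also reach its
# subtypes, so a concept expands into the union of its own mapping and theirs.
subtypes = {"Publication": ["Article", "Thesis"]}

def expand(concept):
    paths = [xpath_of[concept]] + [xpath_of[s] for s in subtypes.get(concept, [])]
    return paths[0] if len(paths) == 1 else "(" + " | ".join(paths) + ")"

def translate(conceptual_path):
    """Translate e.g. ['Publication', 'title'] into an XPath expression."""
    root, *rest = conceptual_path
    return "/".join([expand(root)] + [xpath_of[step] for step in rest])

if __name__ == "__main__":
    print(translate(["Article", "title"]))
    print(translate(["Publication", "title"]))  # union of article and thesis titles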
4

Tractable query answering for description logics via query rewriting

Perez-Urbina, Hector M. January 2010 (has links)
We consider the problem of answering conjunctive queries over description logic knowledge bases via query rewriting. Given a conjunctive query Q and a TBox T, we compute a new query Q′ that incorporates the semantic consequences of T such that, for any ABox A, evaluating Q over T and A can be done by evaluating the new query Q′ over A alone. We present RQR, a novel resolution-based rewriting algorithm for the description logic ELHIO¬ that generalizes and extends existing approaches. RQR not only handles a spectrum of logics ranging from DL-Lite_core up to ELHIO¬, but it is worst-case optimal with respect to data complexity for all of these logics; moreover, given the form of the rewritten queries, their evaluation can be delegated to off-the-shelf (deductive) database systems. We use RQR to derive the novel complexity results that conjunctive query answering for ELHIO¬ and DL-Lite+ is, respectively, PTime-complete and NLogSpace-complete with respect to data complexity. In order to show the practicality of our approach, we present the results of an empirical evaluation. Our evaluation suggests that RQR, enhanced with various straightforward optimizations, can be successfully used in conjunction with a (deductive) database system in order to answer queries over knowledge bases in practice. Moreover, in spite of being a more general procedure, RQR will often produce significantly smaller rewritings than the standard query rewriting algorithm for the DL-Lite family of logics.
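The flavour of rewriting-based query answering can be sketched in a few lines of Python: a toy expansion over invented atomic concept inclusions, far simpler than the resolution-based RQR procedure the abstract describes.

# Toy sketch of rewriting-based query answering over atomic concept inclusions.
# Axioms and data are invented; real rewriting (e.g. for ELHIO¬) also handles
# roles, existentials and much more.

tbox = [("GraduateStudent", "Student"), ("Student", "Person")]   # A ⊑ B pairs
abox = {"GraduateStudent": {"ann"}, "Student": {"bob"}, "Person": {"carol"}}

def subsumees(concept):
    """All concepts subsumed by `concept` under the TBox (reflexive-transitive)."""
    result, frontier = {concept}, [concept]
    while frontier:
        c = frontier.pop()
        for sub, sup in tbox:
            if sup == c and sub not in result:
                result.add(sub)
                frontier.append(sub)
    return result

def answer(concept):
    """Rewrite the query into a union of atoms, then evaluate over the ABox only."""
    rewriting = subsumees(concept)   # e.g. Person -> {Person, Student, GraduateStudent}
    return set().union(*(abox.get(c, set()) for c in rewriting))

if __name__ == "__main__":
    print(answer("Person"))   # {'ann', 'bob', 'carol'}: no TBox reasoning at query time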
5

Efficient Querying and Analytics of Semantic Web Data / Interrogation et Analyse Efficiente des Données du Web Sémantique

Roatis, Alexandra 22 September 2014 (has links)
The utility and relevance of data lie in the information that can be extracted from it. The high rate of data publication and its increased complexity, for instance in the case of heterogeneous, self-describing Semantic Web data, motivate the interest in efficient techniques for data manipulation. In this thesis we leverage mature relational data management technology for querying Semantic Web data. The first part focuses on query answering over data subject to RDFS constraints, stored in relational data management systems. The implicit information resulting from RDF reasoning is required to correctly answer such queries. We introduce the database fragment of RDF, going beyond the expressive power of previously studied fragments. We devise novel techniques for answering Basic Graph Pattern queries within this fragment, exploring the two established approaches for handling RDF semantics, namely graph saturation and query reformulation. In particular, we consider graph updates within each approach and propose a method for incrementally maintaining the saturation. We experimentally study the performance trade-offs of our techniques, which can be deployed on top of any relational data management engine. The second part of this thesis considers the new requirements for data analytics tools and methods emerging from the development of the Semantic Web. We fully redesign, from the bottom up, core data analytics concepts and tools in the context of RDF data. We propose the first complete formal framework for warehouse-style RDF analytics. Notably, we define analytical schemas tailored to heterogeneous, semantic-rich RDF graphs, analytical queries which (beyond relational cubes) allow flexible querying of the data and the schema, as well as powerful aggregation and OLAP-style operations. Experiments on a fully implemented platform demonstrate the practical interest of our approach.
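As a rough Python illustration of the graph-saturation approach mentioned above, the sketch below applies two standard RDFS rules to an invented triple set until a fixpoint is reached; the thesis' database fragment and incremental maintenance go well beyond this.

# Hypothetical sketch: saturate an RDF graph with two RDFS rules so that
# queries can be answered over the saturated triples without further reasoning.

RDF_TYPE, SUBCLASS = "rdf:type", "rdfs:subClassOf"

triples = {
    (":alice", RDF_TYPE, ":GradStudent"),
    (":GradStudent", SUBCLASS, ":Student"),
    (":Student", SUBCLASS, ":Person"),
}

def saturate(graph):
    """Apply rdfs11 (subClassOf transitivity) and rdfs9 (type propagation) to a fixpoint."""
    graph = set(graph)
    while True:
        new = set()
        for (s, p, o) in graph:
            if p != SUBCLASS:
                continue
            # rdfs11: s subClassOf o, o subClassOf o2  =>  s subClassOf o2
            new |= {(s, SUBCLASS, o2) for (s2, p2, o2) in graph
                    if s2 == o and p2 == SUBCLASS}
            # rdfs9: x type s, s subClassOf o  =>  x type o
            new |= {(x, RDF_TYPE, o) for (x, p2, o2) in graph
                    if p2 == RDF_TYPE and o2 == s}
        if new.issubset(graph):
            return graph
        graph |= new

if __name__ == "__main__":
    sat = saturate(triples)
    print((":alice", RDF_TYPE, ":Person") in sat)   # True once the graph is saturated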
6

Best effort query answering for mediators with union views

Papri, Rowshon Jahan 07 1900 (has links)
Consider an SQL query that involves joins of several relations, optionally followed by selections and/or projections. It can be represented by a conjunctive datalog query Q without negation or arithmetic subgoals. We consider the problem of answering such a query Q using a mediator M. For each relation R that corresponds to a subgoal in Q, M contains several sources; each source for R provides some of the tuples in R. The capabilities of each source are described in terms of templates. It might not be possible to get all the tuples in the result, Result(Q), using M, due to restrictions imposed by the templates. We therefore consider best-effort query answering: find as many tuples in Result(Q) as possible. We present an algorithm to determine whether Q can be answered in this way using M. / Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical Engineering and Computer Science.
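To give a feel for the template restrictions mentioned here, the following Python sketch (with an invented relation and binding pattern, not the thesis' algorithm) shows why a source can only be called when the positions its template marks as bound are already known, which is what makes some result tuples unreachable.

# Hypothetical sketch of capability templates on mediator sources.
# A template such as "bf" means: the 1st argument must be Bound when calling
# the source, the 2nd is Free (returned).  Relation and data are invented.

# Source for relation Author(paper, author): it can only answer
# "who are the authors of a *given* paper".
author_source = {
    "template": "bf",
    "data": [("p1", "ann"), ("p2", "bob")],
}

def call_source(source, bindings):
    """Return tuples consistent with `bindings`, or None if the template is violated."""
    for pos, mode in enumerate(source["template"]):
        if mode == "b" and bindings[pos] is None:
            return None  # required binding missing: this call is not allowed
    return [t for t in source["data"]
            if all(b is None or b == v for b, v in zip(bindings, t))]

if __name__ == "__main__":
    # Allowed: the paper is bound, the author is free.
    print(call_source(author_source, ("p1", None)))   # [('p1', 'ann')]
    # Not allowed: asking for *all* tuples violates the template, so a query
    # needing a full scan of Author can only be answered best-effort.
    print(call_source(author_source, (None, None)))   # None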
7

Extension d'ASP pour couvrir des fragments DL traitables : étude théorique et implémentation / Extension of ASP to cover tractable DL fragments: theoretical study and implementation

Garreau, Fabien 24 November 2016 (has links)
Ontologies are used to represent and query knowledge from a precise domain and can be represented, in part, by lightweight description logics. These ontologies can come from several sources whose data are more or less complete, so some data can be incomplete or incoherent, preventing the deduction of other data. Answer Set Programming (ASP) is a non-monotonic, rule-based logic programming language that can represent incomplete data, but it cannot represent lightweight description logics, because of the existential variables in their rules. Existential rules generalize lightweight description logics and also form a logic programming language, but they do not allow the definition of exceptions. Starting from a theoretical study of ASP and existential rules, we propose to gather both languages in a single formalism: we define the formalism of non-monotonic existential programs, which makes it possible to handle a program derived from an ontology with exceptions. This extension aims to generalize both ASP and existential rules and to use the power of ASP solvers to reason over ontologies with exceptions. This thesis deepens the work on the decidability of a program under the extension to non-monotonic existential programs. We also improve results related to querying an ASP program and provide an implementation of an extension of the ASPeRiX solver to handle non-monotonic existential programs.
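A tiny Python sketch of the two ingredients combined in non-monotonic existential programs: an invented existential rule applied chase-style by introducing a fresh null, and a default rule with an exception checked by negation as failure. It illustrates the idea only, not the formalism defined in the thesis.

# Hypothetical sketch: one existential rule (introducing a fresh null) and one
# default rule with an exception.  Predicates and facts are invented.

import itertools

facts = {("bird", "tweety"), ("bird", "pingu"), ("penguin", "pingu")}
fresh = (f"_n{i}" for i in itertools.count())   # supply of fresh nulls

def holds(pred, *args):
    """True if a fact with this predicate and these leading arguments exists."""
    return any(f[0] == pred and f[1:1 + len(args)] == args for f in facts)

# Existential rule: bird(X) -> exists Y nestOf(X, Y).
# Applied chase-style: introduce a fresh null only if X has no nest yet.
for fact in sorted(facts):
    if fact[0] == "bird" and not holds("nestOf", fact[1]):
        facts.add(("nestOf", fact[1], next(fresh)))

# Default rule with an exception: bird(X), not penguin(X) -> flies(X).
for fact in sorted(facts):
    if fact[0] == "bird" and not holds("penguin", fact[1]):
        facts.add(("flies", fact[1]))

print(sorted(facts))   # tweety flies, pingu does not; each bird gets a fresh nest null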
8

Evaluating conjunctive and graph queries over the EL profile of OWL 2

Stefanoni, Giorgio January 2015 (has links)
OWL 2 EL is a popular ontology language that is based on the EL family of description logics and supports regular role inclusions, axioms that can capture compositional properties of roles such as role transitivity and reflexivity. In this thesis, we present several novel complexity results and algorithms for answering expressive queries over OWL 2 EL knowledge bases (KBs) with regular role inclusions. We first focus on the complexity of conjunctive query (CQ) answering in OWL 2 EL and show that the problem is PSpace-complete in combined complexity, the complexity measured in the total size of the input. All the previously known approaches encode the regular role inclusions using finite automata that can be worst-case exponential in size, and thus are not optimal. In our PSpace procedure, we address this problem by using a novel, succinct encoding of regular role inclusions based on pushdown automata with a bounded stack. Moreover, we strengthen the known PSpace lower complexity bound and show that the problem is PSpace-hard even if we consider only the regular role inclusions as part of the input and the query is acyclic; thus, our algorithm is optimal in knowledge base complexity, the complexity measured in the size of the KB, as well as for acyclic queries. We then study graph queries for OWL 2 EL and show that answering positive, converse-free conjunctive graph queries is PSpace-complete. Thus, from a theoretical perspective, we can add navigational features to CQs over OWL 2 EL without an increase in complexity. Finally, we present a practicable algorithm for answering CQs over OWL 2 EL KBs with only transitive and reflexive composite roles. None of the previously known approaches target transitive and reflexive roles specifically, and so they all run in PSpace and do not provide a tight upper complexity bound. In contrast, our algorithm is optimal: it runs in NP in combined complexity and in PTime in KB complexity. We also show that answering CQs is NP-hard in combined complexity if the query is acyclic and the KB contains one transitive role, one reflexive role, or nominals, that is, concepts containing precisely one individual.
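The effect of a regular role inclusion such as transitivity, the simplest case mentioned in this abstract, can be sketched in Python by saturating role assertions before answering a query atom; role names and data are invented, and the actual algorithms avoid this naive closure.

# Hypothetical sketch: handling a transitive role by saturating its assertions,
# so a conjunctive query atom over the role can be answered by simple lookup.

transitive_roles = {"partOf"}   # e.g. the axiom  partOf o partOf ⊑ partOf
assertions = {("partOf", "wheel", "bike"), ("partOf", "bike", "fleet")}

def saturate(assertions):
    """Compute the closure of all declared transitive roles."""
    assertions = set(assertions)
    while True:
        new = {(r, x, z)
               for (r, x, y) in assertions if r in transitive_roles
               for (r2, y2, z) in assertions if r2 == r and y2 == y}
        if new.issubset(assertions):
            return assertions
        assertions |= new

if __name__ == "__main__":
    sat = saturate(assertions)
    # Query atom partOf(wheel, ?x): answered over the saturated assertions.
    print(sorted(z for (r, x, z) in sat if r == "partOf" and x == "wheel"))
    # ['bike', 'fleet']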
