81 |
Spatial and continuous spatial queries on smart mobile clients /Hu, Haibo. January 2005 (has links)
Thesis (Ph.D.)--Hong Kong University of Science and Technology, 2005. / Vita. Includes bibliographical references (leaves 136-145). Also available in electronic version.
|
82 |
Multi-dimensional queries in distributed systems /Liu, Bin. January 2005 (has links)
Thesis (M.Phil.)--Hong Kong University of Science and Technology, 2005. / Includes bibliographical references (leaves 47-50). Also available in electronic version.
|
83 |
SPQL : the design of a relational preference query language /Ning, Wei. January 2005 (has links)
Thesis (M.Sc.)--York University, 2005. Graduate Programme in Computer Science. / Typescript. Includes bibliographical references (leaves 123-126). Also available on the Internet. MODE OF ACCESS via web browser by entering the following URL: http://gateway.proquest.com/openurl?url%5Fver=Z39.88-2004&res%5Fdat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR11872
|
84 |
Query-by-Pointing: Algorithms and Pointing Error Compensation Faisal, Farhan January 2003 (has links) (PDF)
No description available.
|
85 |
Keyword Join: Realizing Keyword Search for Information Integration Yu, Bei, Liu, Ling, Ooi, Beng Chin, Tan, Kian Lee 01 1900 (has links)
Information integration has been widely addressed over the last several decades. However, it is far from solved due to the complexity of resolving schema and data heterogeneities. In this paper, we propose our attempt to alleviate this difficulty by realizing keyword search functionality for integrating information from heterogeneous databases. Our solution requires neither a predefined global schema nor any mappings between databases. Rather, it relies on an operator called keyword join, which takes a set of lists of partial answers from different data sources as input and outputs a list of integrated results joined from the tuples of the input lists according to predefined similarity measures. Our system allows source databases to remain autonomous and the system to be dynamic and extensible. We have tested our system with a real dataset and a benchmark, which shows that our proposed method is practical and effective. / Singapore-MIT Alliance (SMA)
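The keyword join operator described in this abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the record layout, the string-similarity measure, and all names are assumptions.

```python
# Hypothetical sketch of a "keyword join": take lists of partial answers
# (here, dicts) from different sources and join those pairs whose attribute
# values are similar enough under a predefined similarity measure.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1] (one possible measure)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def keyword_join(left, right, threshold=0.8):
    """Join two lists of partial answers when any pair of attribute
    values from the two tuples is at least `threshold` similar."""
    results = []
    for l in left:
        for r in right:
            score = max(
                similarity(str(lv), str(rv))
                for lv in l.values() for rv in r.values()
            )
            if score >= threshold:
                results.append((score, {**l, **r}))
    # Rank the integrated results by similarity score, best first.
    return [t for _, t in sorted(results, key=lambda p: -p[0])]

# "J. Smith" and "J Smith" are similar enough to join across sources.
authors = [{"name": "J. Smith", "topic": "databases"}]
papers = [{"author": "J Smith", "title": "Query Processing"},
          {"author": "A. Jones", "title": "Networks"}]
joined = keyword_join(authors, papers)
```

Note that no global schema or inter-database mapping is needed: the join is driven entirely by value similarity, matching the autonomy claim in the abstract.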
|
86 |
Méthodes d'optimisation pour le traitement de requêtes réparties à grande échelle sur des données liées / Optimization methods for large-scale distributed query processing on linked data Oğuz, Damla 28 June 2017 (has links)
Linked Data is a term for a set of best practices for publishing and interlinking structured data on the Web. As the number of Linked Data providers increases, the Web becomes a huge global data space. Query federation is one of the approaches for efficiently querying this distributed data space. It is employed via a federated query engine which aims to minimize the response time and the completion time. Response time is the time to generate the first result tuple, whereas completion time refers to the time to provide all result tuples. There are three basic steps in a federated query engine: data source selection, query optimization, and query execution. This thesis contributes to the subject of query optimization for query federation. Most studies focus on static query optimization, which generates query plans before execution and needs statistics. However, the Linked Data environment has several difficulties, such as unpredictable data arrival rates and unreliable statistics. As a consequence, static query optimization can produce inefficient execution plans. These constraints show that adaptive query optimization should be used for federated query processing on Linked Data. In this thesis, we first propose an adaptive join operator which aims to minimize the response time and the completion time for federated queries over SPARQL endpoints. Second, we extend the first proposal to further reduce the completion time. Both proposals can change the join method and the join order during execution by using adaptive query optimization. The proposed operators can handle different data arrival rates of relations and the lack of statistics about them. The performance evaluation in this thesis shows the efficiency of the proposed adaptive operators. They provide faster completion times and almost the same response times compared to symmetric hash join. Compared to bind join, the proposed operators perform substantially better with respect to response time and can also provide faster completion times. In addition, the second proposed operator provides considerably faster response times than bind-bloom join and can improve the completion time as well. The second proposal also provides faster completion times than the first in all conditions. In conclusion, the proposed adaptive join operators provide the best trade-off between response time and completion time. Even though our main objective is to manage different data arrival rates of relations, the performance evaluation reveals that they are successful with both fixed and varying data arrival rates.
|
87 |
Using semantics to enhance query reformulation in dynamic distributed environments Fernandes, Damires Yluska de Souza 31 January 2009 (has links)
Previous issue date: 2009 / Query processing has been addressed as a central problem in dynamic distributed environments. The critical point of this processing, however, is the reformulation of a query submitted at a source peer in terms of a target peer, considering the correspondences that exist between them. Traditional approaches generally perform the reformulation using equivalence correspondences. However, concepts of a source peer do not always have equivalent correspondents at the target peer, which can produce an empty reformulation and, possibly, no answer for the user. In this case, if the user finds it worthwhile to receive related answers, even imprecise ones, it is better to generate an adapted or enriched reformulation and, consequently, approximate answers, than none at all.
Within this scope, the present work proposes a semantics-based approach, named SemRef, which integrates query enrichment and reformulation techniques in order to provide users with a set of expanded answers. Both exact and enriched reformulations are produced to reach this set. To this end, we use semantics obtained mainly from a set of semantic correspondences that extend those normally found in the literature; examples of such non-usual correspondences are closeness and disjointness. In addition, we use the context of the user, of the query, and of the environment to support the reformulation process and to deal with information that can only be obtained dynamically.
We formalize the proposed definitions in the description logic ALC and present the algorithm that composes the proposed approach, guaranteeing its correctness and completeness through verified properties. We developed the SemRef algorithm as a query submission and execution module in a peer data management system (PDMS). We show examples that illustrate the operation and the advantages of the developed work. Finally, we present the experiments carried out and the results obtained.
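The fallback idea behind enriched reformulation can be sketched as follows. This is our illustrative reconstruction of the idea only: the correspondence table, concept names, and result shape are assumptions, not SemRef's actual data model.

```python
# Illustrative sketch: when a source concept has no equivalent at the
# target peer, fall back to weaker semantic correspondences (e.g.
# closeness or broader) instead of returning an empty reformulation.

correspondences = {
    # source concept -> list of (target concept, relationship)
    "Professor": [("FacultyMember", "closeness")],
    "Student": [("Student", "equivalence"), ("Person", "broader")],
}

def reformulate(concept: str) -> dict:
    """Prefer an exact reformulation; otherwise enrich with related
    concepts so the user gets approximate answers instead of none."""
    matches = correspondences.get(concept, [])
    exact = [t for t, rel in matches if rel == "equivalence"]
    if exact:
        return {"kind": "exact", "targets": exact}
    related = [t for t, rel in matches if rel in ("closeness", "broader")]
    if related:
        return {"kind": "enriched", "targets": related}
    return {"kind": "empty", "targets": []}

exact = reformulate("Student")       # equivalence exists: exact answers
enriched = reformulate("Professor")  # only closeness: approximate answers
```

The point of the fallback is exactly the abstract's argument: an enriched reformulation yielding approximate answers is more useful than an empty one.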
|
88 |
The Conceptual Integration Modelling Framework: Semantics and Query Answering Ekaterina, Guseva January 2016 (has links)
In the context of business intelligence (BI), the accuracy and accessibility of information consolidation play an important role. Integrating data from different sources involves transforming it according to constraints expressed in an appropriate language, and the Conceptual Integration Modelling framework (CIM) acts as such a language. The CIM aims to allow business users to specify what information is needed in a simplified and comprehensive language. Achieving this requires raising the level of abstraction to the conceptual level, so that users are able to pose queries expressed in a conceptual query language (CQL).
The CIM comprises three facets: an Extended Entity Relationship (EER) model (a high-level conceptual model used to design databases) that provides the conceptual schema against which users pose their queries, a relational multidimensional model that represents the data sources, and mappings between the conceptual schema and the sources. Such mappings can be specified in two ways. In the first scenario, the so-called global-as-view (GAV), the global schema is mapped to views over the relational sources by specifying how to obtain tuples of the global relation from tuples in the sources. In the second scenario, sources may contain less detailed (more aggregated) information, so the local relations are defined as views over the global relations; this is called local-as-view (LAV).
In this thesis, we address the problem of the expressibility and decidability of queries written in CQL. We first define the semantics of the CIM by translating the conceptual model into a set of first-order sentences containing a class of conceptual dependencies (CDs): tuple-generating dependencies (TGDs) and equality-generating dependencies (EGDs), in addition to certain (first-order) restrictions to express multidimensionality. Here, multidimensionality means that facts in a data warehouse can be described from different perspectives. The EGDs state equalities between tuples, and the TGDs state the rule that two instances are in a subtype association (more precise definitions are given later in the thesis).
We use a non-conflicting class of conceptual dependencies that guarantees the decidability of queries; non-conflicting dependencies avoid interaction between the TGDs and the EGDs. Our semantics extends the existing semantics defined for extended entity relationship models with the notions of fact, dimension category, dimensional hierarchy, and dimension attributes.
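As a hedged illustration (the concrete relations and dependencies below are ours, not the thesis's), a TGD and an EGD over a toy schema might look like:

```latex
% TGD: every employee works in some department
% (a rule that generates a tuple when its premise holds).
\forall x\,\big(\mathrm{Emp}(x) \rightarrow
  \exists d\,(\mathrm{Dept}(d) \wedge \mathrm{WorksIn}(x,d))\big)

% EGD: an employee works in at most one department
% (a key-style constraint that equates values of two tuples).
\forall x\,\forall d_1\,\forall d_2\,
  \big(\mathrm{WorksIn}(x,d_1) \wedge \mathrm{WorksIn}(x,d_2)
  \rightarrow d_1 = d_2\big)
```

A "non-conflicting" class, informally, restricts such pairs so that firing the TGD can never create tuples that the EGD then forces to be merged in problematic ways.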
In addition, a class of conceptual queries is defined and proven to be decidable. DL-Lite logic has been used extensively for query rewriting, as it allows the complexity of query answering to be reduced to AC0. Moreover, we present a query rewriting algorithm for the defined class of conceptual dependencies.
Finally, we consider the problem in light of the GAV and LAV approaches and establish the query answering complexities. The query answering problem becomes decidable if we add certain constraints to a well-known set of EGD + TGD dependencies to guarantee summarizability. Under the global-as-view approach to mapping, the query answering problem has AC0 data complexity and EXPTIME combined complexity; it becomes coNP-hard under the LAV approach.
|
89 |
Evaluating Query Estimation Errors Using Bootstrap Sampling Cal, Semih 29 July 2021 (has links)
No description available.
|
90 |
Modelování na základě genealogických dat / Modelling for Genealogy Prostredníková, Hana January 2018 (has links)
This thesis contains a detailed study of problems related to genealogy and genealogical records. It analyzes the roles and relationships that occur in genealogical records and describes the problems of representing them. The goal is to design and implement a system that validates relationships in genealogical records and enables processing of this data.
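A relationship-validation check of the kind such a system might run can be sketched minimally; the record layout, thresholds, and error messages here are hypothetical, not the thesis's model.

```python
# A minimal, hypothetical consistency check over genealogical records:
# a parent must be born before the child, and not implausibly close to
# the child's birth.
from datetime import date

def validate_parent_child(parent_birth: date, child_birth: date) -> list:
    """Return a list of consistency errors for a parent-child pair."""
    errors = []
    if parent_birth >= child_birth:
        errors.append("parent born on or after child")
    elif child_birth.year - parent_birth.year < 12:
        # Threshold is an illustrative assumption.
        errors.append("implausibly young parent")
    return errors

ok = validate_parent_child(date(1850, 3, 1), date(1882, 7, 15))   # no errors
bad = validate_parent_child(date(1890, 1, 1), date(1870, 1, 1))   # inconsistent
```

Real genealogical data complicates this considerably (partial dates, conflicting sources, role ambiguity), which is precisely the representation problem the thesis analyzes.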
|