71 |
Multi-image query content-based image retrieval / Ren, Feng Hui. January 2006 (has links)
Thesis (M.Comp.Sc.)--University of Wollongong, 2006. / Typescript. Includes bibliographical references: leaves 140-157.
72 |
OntoSELF+TQ: a topology query system for OntoSELF / Pei, Zhisong. January 2009 (has links)
Thesis (M.C.S.)--Miami University, Dept. of Computer Science and Systems Analysis, 2009. / Title from first page of PDF document. Includes bibliographical references (p. 122-123).
73 |
Semantic data sharing with a peer data management system / Tatarinov, Igor. January 2004 (has links)
Thesis (Ph. D.)--University of Washington, 2004. / Vita. Includes bibliographical references (p. 113-124).
74 |
A Common Programming Interface for Managed Heterogeneous Data Analysis / Luong, Johannes. 28 July 2021 (has links)
The widespread success of data analysis in a growing number of application domains has led to the development of a variety of purpose-built data processing systems. Today, many organizations operate whole fleets of different data-related systems. Although this differentiation has good reasons, there is also a growing need to create holistic perspectives that cut across the borders of individual systems. Application experts who want to create such perspectives are confronted with a variety of programming interfaces and data formats, and with the task of combining the available systems in an efficient manner. These issues are generally unrelated to the application domain and require a specialized set of skills. As a consequence, development is slowed down and made more expensive, which stifles exploration and innovation. In addition, the direct use of specialized system interfaces couples application code to specific processing systems.
In this dissertation, we propose the data processing platform DataCalc, which presents users with a unified, application-oriented programming interface and automatically executes programs written against this interface in an efficient manner on a variety of processing systems. DataCalc offers a managed environment for data analyses that lets domain experts concentrate on their application logic and decouples code from specific processing technology. The basis of this managed processing environment is the high-level, domain-oriented program representation DCIL together with a flexible and extensible cost-based optimization component. In addition to traditional up-front optimization, the optimizer also supports dynamic re-optimization of partially executed DCIL programs. This enables the system to benefit from information that only becomes available during query execution. DataCalc assigns workloads to the available processing systems using a fine-grained task scheduling model to exploit the available resources efficiently.
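To make the idea of a unified interface over heterogeneous engines more concrete, the following sketch mimics the planning step described above: a backend-independent plan is built once and a toy cost rule decides where each operator runs. This is a minimal illustration under my own assumptions; the actual DCIL representation, the DataCalc API, and its optimizer are not shown in the abstract, so every name below (LogicalOp, Platform, the cost rule) is hypothetical.

```python
# Hypothetical sketch of a unified, engine-agnostic front end in the spirit of
# the interface described above. All names and the cost rule are illustrative.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class LogicalOp:
    """One node of a backend-independent plan (a stand-in for a DCIL operator)."""
    kind: str                      # e.g. "scan", "filter", "join"
    args: dict
    inputs: List["LogicalOp"] = field(default_factory=list)
    backend: Optional[str] = None  # assigned by the (simplified) optimizer


class Platform:
    """Tiny stand-in for a DataCalc-like planning front end."""

    def __init__(self, backends):
        self.backends = backends   # e.g. {"orders": "postgres", "reviews": "mongodb"}

    def scan(self, source):
        return LogicalOp("scan", {"source": source})

    def filter(self, op, predicate):
        return LogicalOp("filter", {"predicate": predicate}, [op])

    def join(self, left, right, on):
        return LogicalOp("join", {"on": on}, [left, right])

    def optimize(self, op):
        """Toy cost rule: push scans and filters to the system owning the data;
        run cross-system operators on a neutral engine to limit data movement."""
        for child in op.inputs:
            self.optimize(child)
        if op.kind == "scan":
            op.backend = self.backends.get(op.args["source"], "pyproc")
        elif op.kind == "filter":
            op.backend = op.inputs[0].backend
        else:
            child_backends = {c.backend for c in op.inputs}
            op.backend = child_backends.pop() if len(child_backends) == 1 else "pyproc"
        return op


if __name__ == "__main__":
    dc = Platform({"orders": "postgres", "reviews": "mongodb"})
    plan = dc.join(
        dc.filter(dc.scan("orders"), "total > 100"),
        dc.scan("reviews"),
        on="order_id",
    )
    dc.optimize(plan)
    print(plan.backend)   # "pyproc": the join spans two systems
```

In this toy version the placement decision is purely static; the re-optimization of partially executed plans mentioned above would revisit such decisions once runtime information is available.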
In the second part of the dissertation we present a prototypical implementation of the DataCalc platform, which includes connectors for the relational DBMS PostgreSQL, the document store MongoDB, the graph database Neo4j, and the custom-built PyProc processing system. To evaluate this prototype we have implemented an extended application scenario. Our experiments demonstrate that DataCalc is able to find and execute efficient execution strategies that minimize cross-system data movement. The system achieves much better results than a naive implementation and comes close to the performance of a hand-optimized solution. Based on these findings, we conclude that the DataCalc platform architecture provides an excellent environment for cross-domain data analysis on a heterogeneous, federated processing architecture.
75 |
Concept-Oriented Model and Nested Partially Ordered Sets / Savinov, Alexandr. 24 April 2014 (has links)
The concept-oriented model of data (COM) has recently been defined syntactically by means of the concept-oriented query language (COQL). In this paper we propose a formal embodiment of this model, called nested partially ordered sets (nested posets), and demonstrate how it is connected with its syntactic counterpart. A nested poset is a novel formal construct that can be viewed either as a nested set with a partial order relation established on its elements or as a conventional poset whose elements can themselves be posets. An element of a nested poset is defined as a couple consisting of one identity tuple and one entity tuple. We formally define the main operations on nested posets and demonstrate their usefulness in solving typical data management and analysis tasks such as logical navigation, constraint propagation, inference, and multidimensional analysis.
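As a rough, hypothetical sketch of the central definition, the snippet below models an element as a couple of one identity tuple and one entity tuple, with the identity tuple pointing at greater elements and thereby inducing the partial order. This is a simplified reading for illustration only, not code or notation from the paper.

```python
# Illustrative sketch (not from the paper): an element of a nested poset as a
# pair (identity tuple, entity tuple). The identity tuple references the
# element's parents and thereby induces the partial order.

from dataclasses import dataclass
from typing import Tuple


@dataclass(frozen=True)
class Element:
    identity: Tuple   # references to greater (parent) elements
    entity: Tuple     # the element's own characteristic values


def less_equal(a: "Element", b: "Element") -> bool:
    """a <= b if b is reachable from a by following identity references."""
    if a == b:
        return True
    return any(less_equal(parent, b) for parent in a.identity)


# A tiny three-level hierarchy: top <- germany <- bmw
top = Element(identity=(), entity=("all",))
germany = Element(identity=(top,), entity=("Germany",))
bmw = Element(identity=(germany,), entity=("BMW", 1916))

print(less_equal(bmw, top))   # True: bmw lies (transitively) below top
print(less_equal(top, bmw))   # False
```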
76 |
Developing a Car2X Communication Application using a Queriable Wireless Sensor Network / Jitta, Srinivasu. 11 September 2018 (has links)
The development of wireless sensor networks has reached a point where each individual node of a network may store and deliver a massive amount of (sensor-based) information, at once or over time. In the future, massively connected, highly dynamic wireless sensor networks, such as vehicle-to-vehicle communication scenarios, may hold an even greater information potential, mostly due to the increase in node complexity. Consequently, data volumes will become a problem for traditional data aggregation strategies, both in terms of traffic and with regard to energy efficiency. Therefore, this thesis proposes a database aggregation strategy that can be used to mitigate most of these big-data problems in embedded and wireless sensor networks, enabling efficient use of energy and the handling of large data volumes. Moreover, latency and traffic volume in the network are evaluated in experiments using sensor platforms.
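As a hedged illustration of why query-driven, in-network aggregation reduces traffic, the sketch below lets each node answer an aggregate query with a small summary instead of forwarding raw samples to the sink; the node API and query format are invented for this example and are not taken from the thesis.

```python
# Hedged sketch (not the thesis implementation): nodes of a queriable sensor
# network evaluate an aggregate query locally and forward only a tiny summary,
# instead of streaming every raw sample to the sink.

import random


class SensorNode:
    def __init__(self, node_id):
        self.node_id = node_id
        # simulate a local history of raw temperature samples
        self.samples = [20.0 + random.random() * 5 for _ in range(1000)]

    def answer(self, query):
        """Evaluate a simple aggregate locally and return a small tuple."""
        if query == "AVG(temperature)":
            avg = sum(self.samples) / len(self.samples)
            return (self.node_id, avg, len(self.samples))
        raise ValueError(f"unsupported query: {query}")


def sink_query(nodes, query):
    """Combine per-node partial aggregates into a network-wide result."""
    partials = [node.answer(query) for node in nodes]
    total = sum(avg * n for _, avg, n in partials)
    count = sum(n for _, _, n in partials)
    return total / count


nodes = [SensorNode(i) for i in range(10)]
print(f"network average: {sink_query(nodes, 'AVG(temperature)'):.2f} °C")
# Traffic in this toy setup: 10 small tuples instead of 10,000 raw samples.
```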
77 |
Integration framework for artifact-centric processes in the internet of things / Abi Assaf, Maroun. 09 July 2018 (has links)
The emergence of fixed or mobile communicating objects poses many challenges regarding their integration into business processes in order to develop smart services. In the context of the Internet of Things, connected devices are heterogeneous and dynamic entities that encompass cyber-physical features and properties and interact through different communication protocols. To overcome the challenges related to interoperability and integration, it is essential to build a unified and logical view of the different connected devices in order to define a set of languages, tools, and architectures allowing their integration and manipulation at large scale. The business artifact has recently emerged as an autonomous (business) object model that encapsulates attribute-value pairs, a set of services manipulating its attribute data, and a state-based lifecycle. The lifecycle represents the behavior of the object and its evolution through its different states in order to achieve its business objective. Modeling connected devices and smart objects as extended business artifacts allows us to build an intuitive paradigm for easily expressing data-driven integration processes over connected objects. With respect to contextual changes and the reuse of connected devices in different applications, data-driven processes (or artifact processes in the broad sense) remain relatively invariant, since their data structures do not change, whereas service-centric or activity-based processes often require changes in their execution flows. This thesis proposes a framework for integrating artifact-centric processes and its application to connected devices. To this end, we introduce a logical and unified view of a "global" artifact allowing the specification, definition, and querying of a very large number of distributed artifacts with similar functionalities (smart homes, connected cars, ...). The framework includes a conceptual modeling method for artifact-centric processes, inter-artifact mapping algorithms, and an artifact definition and manipulation algebra. A declarative language, called AQL (Artifact Query Language), is designed in particular to query continuous streams of artifacts. AQL relies on an SQL-like syntax to reduce the learning effort. We have also developed a prototype to validate our contributions and conducted experiments in the context of the Internet of Things.
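The abstract does not reproduce AQL syntax, so the snippet below is a purely hypothetical illustration of what an SQL-like query over a continuous stream of state-carrying artifacts might look like, together with a toy in-memory evaluation of its filter; neither the keywords nor the client code are taken from the thesis.

```python
# Purely hypothetical illustration: an SQL-like query over a continuous stream
# of artifacts, with a toy evaluation of its filter. Neither the query syntax
# nor the API below is taken from the thesis.

from dataclasses import dataclass


@dataclass
class Artifact:
    """A business artifact: data attributes plus the current lifecycle state."""
    kind: str
    state: str
    attributes: dict


# What an AQL-like statement over smart-home artifacts might look like:
HYPOTHETICAL_QUERY = """
    SELECT a.attributes['room'], a.attributes['temperature']
    FROM STREAM OF Thermostat AS a
    WHERE a.state = 'Active' AND a.attributes['temperature'] > 26
"""


def matches(artifact: Artifact) -> bool:
    """Toy evaluation of the WHERE clause of the query above."""
    return (
        artifact.kind == "Thermostat"
        and artifact.state == "Active"
        and artifact.attributes.get("temperature", 0) > 26
    )


stream = [
    Artifact("Thermostat", "Active", {"room": "kitchen", "temperature": 27.5}),
    Artifact("Thermostat", "Standby", {"room": "office", "temperature": 28.0}),
    Artifact("DoorLock", "Active", {"door": "front"}),
]

for artifact in stream:
    if matches(artifact):
        print(artifact.attributes["room"], artifact.attributes["temperature"])
```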
78 |
Discovering and Tracking Interesting Web Services / Rocco, Daniel J. (Daniel John). 01 December 2004 (has links)
The World Wide Web has become the standard mechanism for information distribution and scientific collaboration on the Internet. This dissertation research explores a suite of techniques for discovering relevant dynamic sources in a specific domain of interest and for managing Web data effectively. We first explore techniques for discovery and automatic classification of dynamic Web sources. Our approach utilizes a service class model of the dynamic Web that allows the characteristics of interesting services to be specified using a service class description.
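As a hedged sketch of the service class idea, the snippet below declares the characteristics of a hypothetical class of "interesting" services and classifies a crawled candidate against it; the field names and matching rule are assumptions for illustration, not the dissertation's service class description language.

```python
# Hedged sketch (field names and matching rule are assumptions): a service
# class description specifies what an "interesting" dynamic Web source looks
# like, and crawled candidates are classified against it.

SERVICE_CLASS = {
    "name": "flight-search",                       # hypothetical example domain
    "required_inputs": {"origin", "destination", "date"},
    "result_signals": {"price", "airline", "departure"},
}


def classify(candidate: dict, service_class: dict) -> bool:
    """A candidate matches the class if its form accepts the required inputs
    and its sample results mention enough of the expected fields."""
    inputs_ok = service_class["required_inputs"] <= set(candidate["form_fields"])
    signals = set(candidate["sample_result_fields"]) & service_class["result_signals"]
    return inputs_ok and len(signals) >= 2


candidate = {
    "url": "http://example.org/search",            # hypothetical crawled form
    "form_fields": ["origin", "destination", "date", "passengers"],
    "sample_result_fields": ["price", "airline", "duration"],
}

print(classify(candidate, SERVICE_CLASS))          # True
```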
To promote effective Web data management, the Page Digest Web document encoding eliminates tag redundancy and places structure, content, tags, and attributes into separate containers, each of which can be referenced in isolation or in conjunction with the other elements of the document. The Page Digest Sentinel system leverages our unique encoding to provide efficient and scalable change monitoring for arbitrary Web documents through document compartmentalization and semantic change request grouping.
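The following is a small sketch of the containerized view this encoding is built on: tag names, attributes, text content, and the structural skeleton live in separate containers that can be consulted independently. It approximates the idea only and is not the actual Page Digest format.

```python
# Approximation of the containerized idea behind the described encoding
# (not the actual Page Digest format): tags, attributes, text content, and the
# structural skeleton are stored in separate containers.

import xml.etree.ElementTree as ET

DOC = "<html><body><p class='x'>Hello</p><p>World</p></body></html>"


def containerize(xml_text):
    tags, attributes, content, structure = [], [], [], []

    def walk(node, depth):
        structure.append((depth, len(list(node))))   # skeleton: depth and fan-out
        tags.append(node.tag)                        # tag container (de-duplicable)
        attributes.append(dict(node.attrib))         # attribute container
        content.append((node.text or "").strip())    # content container
        for child in node:
            walk(child, depth + 1)

    walk(ET.fromstring(xml_text), 0)
    return {"tags": tags, "attributes": attributes,
            "content": content, "structure": structure}


digest = containerize(DOC)
print(digest["tags"])      # ['html', 'body', 'p', 'p']  -- tag container alone
print(digest["content"])   # ['', '', 'Hello', 'World']  -- content container alone
```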
Finally, we present XPack, an XML document compression system that uses a containerized view of an XML document to provide both good compression and efficient querying over compressed documents. XPack's queryable XML compression format is general-purpose, does not rely on domain knowledge or particular document structural characteristics for compression, and achieves better query performance than standard query processors using text-based XML.
Our research expands the capabilities of existing dynamic Web techniques, providing superior service discovery and classification, efficient change monitoring of Web information, and compartmentalized document handling. DynaBot is the first system to combine a service class view of the Web with a modular crawling architecture to provide automated service discovery and classification. The Page Digest Web document encoding represents Web documents efficiently by separating the individual characteristics of the document. The Page Digest Sentinel change monitoring system utilizes the Page Digest document encoding for scalable change monitoring through efficient change algorithms and intelligent request grouping. Finally, XPack is the first XML compression system that delivers compression rates similar to existing techniques while supporting better query performance than standard query processors using text-based XML.
79 |
Model-checking based data retrieval: an application to semistructured and temporal data / Quintarelli, Elisa. January 1900 (has links)
Revised version of: PhD thesis, Politecnico di Milano, 2002. / Includes bibliographical references (p. [129]-134).
80 |
Assigning related categories to user queries / He, Miao. January 2006 (has links)
Thesis (M.S.)--State University of New York at Binghamton, Department of Computer Science, Thomas J. Watson School of Engineering and Applied Science, 2006. / Includes bibliographical references.