121

An Ontology-based Multimedia Information Management System

Tarakci, Hilal 01 August 2008 (has links) (PDF)
In order to manage the content of multimedia data, the content must be annotated. Although any user-defined annotation is acceptable, it is preferable if systems agree on the same annotation format. MPEG-7 is a widely accepted standard for multimedia content annotation. However, in MPEG-7, semantically identical metadata can be represented in multiple ways because its XML-based syntax lacks precise semantics. Unfortunately, this prevents metadata interoperability. To overcome this problem, the MPEG-7 standard has been translated into an ontology. In this thesis, the MPEG-7 ontology is used as the top-level ontology and user-defined ontologies are attached to it via a user-friendly interface, so that MPEG-7-based ontologies are built automatically. Our proposed system is an ontology-based multimedia information management framework owing to its modular architecture, its natural integration with domain-specific ontologies, and its automatic harmonization of the MPEG-7 ontology with domain-specific ontologies. Integration with domain-specific ontologies is carried out by enabling the import of domain ontologies via a user-friendly interface, which makes the system independent of application domains.
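The attachment of a domain ontology to an MPEG-7 based ontology can be pictured with a small rdflib sketch. This is not the thesis' implementation: the namespaces, class names, and the subclassing step are illustrative assumptions about how such harmonization might be expressed in RDF.

```python
# A minimal sketch (not the thesis implementation) of attaching a
# domain-specific concept to an MPEG-7 based ontology with rdflib.
# All URIs below are hypothetical placeholders.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

MPEG7 = Namespace("http://example.org/mpeg7#")    # hypothetical MPEG-7 ontology namespace
SOCCER = Namespace("http://example.org/soccer#")  # hypothetical domain ontology namespace

g = Graph()
g.bind("mpeg7", MPEG7)
g.bind("soccer", SOCCER)

# Declare a domain concept and harmonize it with an MPEG-7 class by
# subclassing, so that domain annotations remain MPEG-7 compatible.
g.add((SOCCER.GoalEvent, RDF.type, RDFS.Class))
g.add((SOCCER.GoalEvent, RDFS.subClassOf, MPEG7.Event))

# Annotate a (hypothetical) video segment with the domain concept.
g.add((MPEG7.segment42, RDF.type, SOCCER.GoalEvent))
g.add((MPEG7.segment42, RDFS.label, Literal("Goal scored in minute 17")))

print(g.serialize(format="turtle"))
```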
122

Improvements to the complex question answering models

Imam, Md. Kaisar January 2011 (has links)
In recent years the amount of information on the web has increased dramatically. As a result, it has become a challenge for researchers to find effective ways to query and extract meaning from these large repositories. Standard document search engines try to address the problem by presenting users with a ranked list of relevant documents. In most cases this is not enough, as the end user still has to go through an entire document to find the answer he is looking for. Question answering, the retrieval of answers to natural language questions from a document collection, removes this burden from the end user by providing direct access to relevant information. This thesis is concerned with open-domain complex question answering. Unlike simple questions, complex questions cannot be answered easily, as they often require inferencing and synthesizing information from multiple documents. Hence, we treat complex question answering as query-focused multi-document summarization. To improve complex question answering we experimented with both empirical and machine learning approaches. We extracted several features of different types (lexical, lexical-semantic, syntactic and semantic) for each sentence in the document collection in order to measure its relevance to the user query. We formulated the task of complex question answering in a reinforcement framework, which, to the best of our knowledge, has not been applied to this task before and which has the potential to improve itself by fine-tuning the feature weights from user feedback. We also used unsupervised machine learning techniques (random walk, manifold ranking) and augmented them with semantic and syntactic information. Finally, we experimented with question decomposition: instead of answering the complex question directly, we decomposed it into a set of simple questions and synthesized their answers to obtain the final result. / x, 128 leaves : ill. ; 29 cm
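The sentence-scoring idea described above (weighted features measuring relevance to the query, with weights adjustable from user feedback) can be illustrated with a toy Python sketch. The features, weights, and update rule below are stand-ins chosen for illustration, not the thesis' actual formulation.

```python
# Illustrative sketch only: weighted feature scoring of candidate sentences
# with a simple feedback-driven weight update. Features and update rule are
# assumptions, not the thesis' actual model.
def overlap(query, sentence):
    q, s = set(query.lower().split()), set(sentence.lower().split())
    return len(q & s) / max(len(q), 1)

def length_penalty(sentence, ideal=25):
    return 1.0 / (1.0 + abs(len(sentence.split()) - ideal) / ideal)

FEATURES = [overlap, length_penalty]
weights = [0.7, 0.3]  # assumed initial weights

def score(query, sentence):
    return sum(w * f(query, sentence) for w, f in zip(weights, FEATURES))

def update_weights(query, sentence, reward, lr=0.1):
    # Nudge each weight in proportion to its feature value and the
    # user-supplied reward (positive for relevant, negative otherwise).
    for i, f in enumerate(FEATURES):
        weights[i] += lr * reward * f(query, sentence)

query = "effects of drought on wheat yield"
sentences = ["Drought stress sharply reduces wheat yield in arid regions.",
             "The committee met twice last year."]
ranked = sorted(sentences, key=lambda s: score(query, s), reverse=True)
update_weights(query, ranked[0], reward=1.0)  # pretend the user liked the top answer
print(ranked[0])
```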
123

Scalable Preservation, Reconstruction, and Querying of Databases in terms of Semantic Web Representations

Stefanova, Silvia January 2013 (has links)
This thesis addresses how Semantic Web representations, in particular RDF, can enable flexible and scalable preservation, recreation, and querying of databases. An approach has been developed for selective, scalable, long-term archival of relational databases (RDBs) as RDF, implemented in the SAQ (Semantic Archive and Query) system. The archival of user-specified parts of an RDB is specified using an extension of SPARQL, A-SPARQL. SAQ automatically generates an RDF view of the RDB, the RD-view. The result of an archival query is RDF triples stored in: i) a data archive file containing the preserved RDB content, and ii) a schema archive file containing sufficient metadata to reconstruct the archived database. To achieve scalable data preservation and recreation, SAQ uses special query rewriting optimizations for the archival queries. It was experimentally shown that these improve query execution and archival time compared with naïve processing. The performance of SAQ was compared with that of other systems supporting SPARQL queries over views of existing RDBs. When an archived RDB is to be recreated, the reloader module of SAQ first reads the schema archive file and executes a schema reconstruction algorithm to automatically construct the RDB schema. The RDB thus created is populated by reading the data archive and converting the read data into relational attribute values. For scalable recreation of RDF-archived data we developed the Triple Bulk Load (TBL) approach, in which the relational data is reconstructed using the bulk-load facility of the RDBMS. Our experiments show that the TBL approach is substantially faster than the naïve Insert Attribute Value (IAV) approach, despite the added sorting and post-processing. To view and query semi-structured Topic Maps data as RDF, the prototype system TM-Viewer was implemented. A declarative RDF view of Topic Maps, the TM-view, is automatically generated by TM-Viewer using a conceptual schema developed for the Topic Maps data model. To achieve efficient processing of SPARQL queries to the TM-view, query rewrite transformations were developed and evaluated. It was shown that they significantly improve query execution time. / eSSENCE
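The general flavour of archiving relational content as RDF triples can be sketched as follows. This is a generic illustration of an RDB-to-RDF mapping, not the SAQ system itself, and the table, column, and namespace names are invented.

```python
# Generic sketch of turning relational rows into RDF triples, in the spirit
# of archiving an RDB as RDF. Not the SAQ system; all names are made up.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

EX = Namespace("http://example.org/archive#")

rows = [  # pretend result of "SELECT id, title, year FROM album"
    {"id": 1, "title": "Blue Train", "year": 1957},
    {"id": 2, "title": "Kind of Blue", "year": 1959},
]

g = Graph()
for row in rows:
    subject = EX[f"album/{row['id']}"]       # mint a subject URI per row
    g.add((subject, RDF.type, EX.Album))
    for column, value in row.items():
        if column != "id" and value is not None:
            g.add((subject, EX[column], Literal(value)))  # one triple per attribute value

print(g.serialize(format="turtle"))
```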
124

Flexible querying of RDF databases: a contribution based on fuzzy logic

Slama, Olfa 22 November 2017 (has links)
This thesis concerns the definition of a flexible approach for querying both crisp and fuzzy RDF graphs. The approach, based on the theory of fuzzy sets, makes it possible to extend SPARQL, the W3C-standardised query language for RDF, so as to express i) fuzzy user preferences on data (e.g., the release year of an album is recent) and on the structure of the data graph (e.g., the path between two friends is required to be short), and ii) more complex user preferences, namely fuzzy quantified statements (e.g., most of the albums that are recommended by an artist are highly rated and have been created by a young friend of this artist). We performed experiments in order to study the performance of this approach. The main objective of these experiments was to show that the extra cost due to the introduction of fuzziness remains limited/acceptable. We also investigated, in the more general framework of graph databases, the issue of integrating the same type of fuzzy quantified statements into a fuzzy extension of Cypher, a declarative language for querying (crisp) graph databases. The reported experimental results show that the extra cost induced by the fuzzy quantified nature of the queries also remains very limited in this case.
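A toy Python sketch can illustrate the fuzzy-set machinery underlying preferences such as "the release year is recent" or "the path is short". The membership functions, thresholds, and data below are arbitrary assumptions for illustration, not values taken from the thesis.

```python
# Toy illustration of fuzzy preferences over data and graph structure.
# Membership thresholds and the sample data are arbitrary assumptions.
def mu_recent(year, full=2015, zero=2000):
    # Degree 1.0 for years >= full, 0.0 for years <= zero, linear in between.
    if year >= full:
        return 1.0
    if year <= zero:
        return 0.0
    return (year - zero) / (full - zero)

def mu_short_path(length, full=1, zero=5):
    if length <= full:
        return 1.0
    if length >= zero:
        return 0.0
    return (zero - length) / (zero - full)

# A conjunctive fuzzy condition is commonly scored with min (Zadeh's t-norm).
albums = [("Album A", 2012, 2), ("Album B", 1998, 1), ("Album C", 2017, 4)]
scored = [(name, min(mu_recent(year), mu_short_path(path)))
          for name, year, path in albums]
for name, degree in sorted(scored, key=lambda x: -x[1]):
    print(f"{name}: satisfaction degree {degree:.2f}")
```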
125

Comparison of Approaches for Querying of Chemical Compounds

Šípek, Vojtěch January 2019 (has links)
The purpose of this thesis is to perform an analysis of approaches to querying chemical databases and to validate or invalidate their results. Currently, no work exists that compares the performance and memory usage of the best-performing approaches on the same data set. In this thesis, we address this lack of information and create an unbiased benchmark of the most popular index-building methods for subgraph querying of chemical databases. We also compare the results of this benchmark with the performance results of an SQL database and a graph database.
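For readers unfamiliar with subgraph-query indexes, the filter-then-verify idea that many such index-building methods share can be sketched in a few lines of Python. The fragment sets below are toy data and the filter is a generic illustration, not any specific method from the benchmark.

```python
# Hedged sketch of the screening idea behind many subgraph-query indexes:
# discard molecules whose feature (fragment) set cannot contain the query's
# features, and run exact matching only on the survivors.
# The fragment sets below are toy data, not real chemistry.
database = {
    "mol1": {"C-C", "C=O", "C-N"},
    "mol2": {"C-C", "C-O"},
    "mol3": {"C-C", "C=O", "C-N", "ring6"},
}
query_fragments = {"C=O", "C-N"}

candidates = [mol_id for mol_id, fragments in database.items()
              if query_fragments <= fragments]   # cheap containment screen
print("candidates for exact subgraph matching:", candidates)

# A real system would now run an exact (and expensive) subgraph-isomorphism
# test on each candidate; the index's job is to keep this list short.
```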
126

Trusted Querying over Wireless Sensor Networks and Network Security Visualization

Abuaitah, Giovani Rimon 22 May 2009 (has links)
No description available.
127

Feature extraction and similarity-based analysis for proteome and genome databases

Ozturk, Ozgur 20 September 2007 (has links)
No description available.
128

Interactive query formulation for source code analysis and comprehension / Formulation interactive des requêtes pour l’analyse et la compréhension du code source

Jridi, Jamel Eddine 11 1900 (has links)
We propose an interactive querying approach for program analysis and comprehension tasks. In our approach, an analyst uses a set of basic filters (linguistic, structural, quantitative, and user selection) to define complex queries. These queries are built through an interactive and iterative process in which basic filters are selected and executed, and their results are displayed, changed, and combined using predefined operators. We evaluated our querying approach by implementing recent state-of-the-art contributions on feature location and design defect detection. Our results show that, in addition to being generic, our approach helps improve existing solutions implemented by fully automated tools.
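The filter-composition idea can be pictured with a minimal Python sketch. The entity model, filter names, and combination operator below are invented for illustration and do not reproduce the authors' tool.

```python
# Minimal sketch (not the authors' tool) of composing basic filters over
# code entities with a predefined operator, in the spirit of interactive
# query formulation. The entity model is invented for illustration.
entities = [
    {"name": "parseConfig", "kind": "method", "loc": 120, "calls": 14},
    {"name": "ConfigParser", "kind": "class", "loc": 800, "calls": 3},
    {"name": "log", "kind": "method", "loc": 5, "calls": 90},
]

def linguistic(term):                 # name-based filter
    return lambda e: term.lower() in e["name"].lower()

def structural(kind):                 # structure-based filter
    return lambda e: e["kind"] == kind

def quantitative(metric, threshold):  # metric-based filter
    return lambda e: e[metric] >= threshold

def AND(*filters):                    # predefined combination operator
    return lambda e: all(f(e) for f in filters)

# "Large methods whose name mentions config" -- built step by step, with
# intermediate results inspected at each step in an interactive session.
query = AND(linguistic("config"), structural("method"), quantitative("loc", 100))
print([e["name"] for e in entities if query(e)])
```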
129

CircularTrip and ArcTrip: effective grid access methods for continuous spatial queries.

Cheema, Muhammad Aamir, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
A k nearest neighbor (kNN) query q retrieves the k objects that lie closest to the query point q among a given set of objects P. With the availability of inexpensive location-aware mobile devices, the continuous monitoring of such queries has gained a lot of attention, and many methods have been proposed for continuously monitoring kNNs in highly dynamic environments. Multiple continuous queries require real-time results, and both the objects and the queries issue frequent location updates. The most popular spatial index, the R-tree, is not suitable for continuous monitoring of these queries because it handles frequent updates inefficiently. Recently, the interest of the database community has been shifting towards grid-based indexes for continuous queries because of their simplicity and efficient update handling. For kNN queries, the order in which the cells of the grid are accessed is very important. In this research, we present two efficient and effective grid access methods, CircularTrip and ArcTrip, which ensure that the number of cells visited for any continuous kNN query is minimal. Our extensive experimental study demonstrates that the CircularTrip-based continuous kNN algorithm outperforms existing approaches in terms of both efficiency and space requirements. Moreover, we show that CircularTrip and ArcTrip can be used for many other variants of nearest neighbor queries, such as constrained nearest neighbor queries, farthest neighbor queries and (k + m)-NN queries. All the algorithms presented for these queries preserve the properties that they visit the minimum number of cells for each query and that their space requirement is low. Our proposed techniques are flexible and efficient and can be used to answer any query that is a hybrid of the above-mentioned queries. For example, our algorithms can easily be used to efficiently monitor a (k + m) farthest neighbor query in a constrained region, with the flexibility that the spatial conditions constraining the region can be changed by the user at any time.
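A generic grid-based kNN search that orders cell accesses by their minimum distance to the query point gives a feel for why the cell access order matters. The sketch below is not the CircularTrip or ArcTrip algorithm; the grid layout, cell size, and data points are made up for illustration.

```python
# Generic grid-based kNN sketch: visit cells in increasing order of their
# minimum distance to the query and stop once the k-th neighbour found so
# far is closer than the next unvisited cell. Illustrates cell-access
# ordering only; not the CircularTrip/ArcTrip algorithm.
import heapq
import math

def cell_min_dist(q, cell, size):
    cx, cy = cell[0] * size, cell[1] * size
    dx = max(cx - q[0], 0, q[0] - (cx + size))
    dy = max(cy - q[1], 0, q[1] - (cy + size))
    return math.hypot(dx, dy)

def grid_knn(q, grid, size, k, grid_dim):
    cells = ((i, j) for i in range(grid_dim) for j in range(grid_dim))
    heap = [(cell_min_dist(q, c, size), c) for c in cells]
    heapq.heapify(heap)
    best = []  # max-heap via negation: (-dist, object) for the k closest so far
    while heap:
        d_cell, cell = heapq.heappop(heap)
        if len(best) == k and d_cell > -best[0][0]:
            break  # no unvisited cell can contain a closer object
        for obj in grid.get(cell, []):
            d = math.hypot(obj[0] - q[0], obj[1] - q[1])
            heapq.heappush(best, (-d, obj))
            if len(best) > k:
                heapq.heappop(best)
    return sorted((-d, o) for d, o in best)

grid = {(0, 0): [(1.0, 1.5)], (1, 1): [(12.0, 11.0)], (2, 0): [(21.0, 3.0)]}
print(grid_knn((2.0, 2.0), grid, size=10.0, k=2, grid_dim=3))
```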
