451

Ubiquitous healthcare system based on a wireless sensor network

Chung, W.-Y. (Wan-Young) 17 November 2009 (has links)
Abstract This dissertation aimed to develop a multi-modal sensing u-healthcare system (MSUS) that reflects the unique properties of a healthcare application in a wireless sensor network. Together with health parameters such as ECG, SpO2 and blood pressure, the system also transfers context-aware data, including activity, position and tracking data, in a wireless sensor network environment at home or in a hospital. Since packet loss may have fatal consequences for patients, health-related data are more critical than most other types of monitoring data. Thus, compared to environmental, agricultural or industrial monitoring, healthcare monitoring in a wireless environment imposes different requirements and priorities: heavy data traffic caused by waveform-type parameters in the wireless sensor network, and potentially fatal data loss due to that traffic. To ensure reliable data transfer in the wireless sensor network, this research placed special emphasis on the optimization of sampling rate, packet length and transmission rate, and on methods for reducing traffic. To improve the reliability and accuracy of diagnosis, the u-healthcare system also collects context-aware information on the user’s activity and location and provides real-time tracking. Waveform health parameters such as ECG are normally sampled in the 100 to 400 Hz range, depending on the monitoring purpose, and this type of waveform data may impose a heavy burden on wireless communication. To reduce wireless traffic between the sensor nodes and the gateway node, the system performs on-site ECG analysis on the sensor nodes and uses a query architecture. A 3D VRML viewer was also developed for realistic monitoring of the user’s moving path and location. Sensors placed on the user’s body gather medical data and transmit it to a server PC at home or in the hospital over one of two communication methods, an 802.15.4-based wireless sensor network or a CDMA cellular network, depending on whether the sensor is within or outside the range of the wireless sensor network.
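The abstract stresses the trade-off between ECG sampling rate, packet length and wireless traffic. As a rough illustration of that trade-off only (not taken from the dissertation), the sketch below estimates per-node packet rate and link utilisation for an 802.15.4 radio; the 250 kbit/s data rate is the standard 802.15.4 figure, while the header overhead, payload size and 2-byte samples are assumptions.

```python
# Back-of-the-envelope traffic estimate for streaming ECG samples over an
# IEEE 802.15.4 sensor link (illustrative assumptions only).

IEEE_802_15_4_BITRATE = 250_000   # bit/s, standard 2.4 GHz PHY data rate
HEADER_BYTES = 25                 # assumed MAC/network header overhead per packet

def ecg_traffic(sampling_hz: int, sample_bytes: int = 2, payload_bytes: int = 90):
    """Return (packets per second, fraction of raw link bandwidth) for one ECG stream."""
    samples_per_packet = payload_bytes // sample_bytes
    packets_per_s = sampling_hz / samples_per_packet
    bits_per_s = packets_per_s * (payload_bytes + HEADER_BYTES) * 8
    return packets_per_s, bits_per_s / IEEE_802_15_4_BITRATE

for rate in (100, 200, 400):      # sampling range cited in the abstract
    pps, util = ecg_traffic(rate)
    print(f"{rate:3d} Hz -> {pps:5.1f} packets/s, {util:6.1%} of raw 802.15.4 bandwidth")
```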
452

Using Ontology-Based Data Access to Enable Context Recognition in the Presence of Incomplete Information

Thost, Veronika 24 August 2017 (has links) (PDF)
Ontology-based data access (OBDA) augments classical query answering in databases by including domain knowledge provided by an ontology. An ontology captures the terminology of an application domain and describes domain knowledge in a machine-processable way. Formal ontology languages additionally provide semantics for these specifications. Systems for OBDA may thus apply logical reasoning to answer queries; they use the ontological knowledge to infer new information, which is only implicitly given in the data. Moreover, they usually employ the open-world assumption, which means that knowledge not stated explicitly in the data, and not inferred, is neither assumed to be true nor assumed to be false. Classical OBDA, however, regards the knowledge only w.r.t. a single moment, which means that information about time is not used for reasoning and is hence lost; in particular, the queries generally cannot express temporal aspects. We investigate temporal query languages that allow temporal data to be accessed through classical ontologies. In particular, we study the computational complexity of temporal query answering w.r.t. ontologies written in lightweight description logics, which are known to allow for efficient reasoning in the atemporal setting and are successfully applied in practice. Furthermore, we present a so-called rewritability result for ontology-based temporal query answering, which suggests ways for implementation. Our results may thus guide the choice of a query language for temporal OBDA in data-intensive applications that require fast processing, such as context recognition.
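The rewritability result mentioned above means that, for suitable ontology languages, the ontological knowledge can be compiled into the query itself so that a plain database engine can answer it. A minimal, hypothetical sketch of that idea, using only simple subclass axioms rather than the description logics studied in the thesis:

```python
# Toy illustration of query rewriting in OBDA: expand a query atom with all
# subclasses entailed by a small ontology, then evaluate the rewritten query
# directly over the (incomplete) data.

# Hypothetical ontology: SubClassOf axioms, e.g. "every Professor is a FacultyMember".
subclass_of = {
    "Professor": "FacultyMember",
    "Lecturer": "FacultyMember",
    "FacultyMember": "Employee",
}

def subclasses(cls):
    """All classes whose instances are entailed to be instances of `cls`."""
    result, changed = {cls}, True
    while changed:
        changed = False
        for sub, sup in subclass_of.items():
            if sup in result and sub not in result:
                result.add(sub)
                changed = True
    return result

# Hypothetical data: explicit class assertions only.
assertions = [("anna", "Professor"), ("bob", "Lecturer"), ("carol", "Employee")]

def answer(query_class):
    """Certain answers to 'find all x that are a query_class' under the ontology."""
    expanded = subclasses(query_class)          # the rewriting step
    return sorted({x for x, c in assertions if c in expanded})

print(answer("Employee"))   # ['anna', 'bob', 'carol'] -- anna and bob only implicitly
```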
453

Performance Analysis of kNN Query Processing on large datasets using CUDA & Pthreads : comparing between CPU & GPU

Kalakuntla, Preetham January 2017 (has links)
Telecom companies do a lot of analytics to provide consumers with a better service and to stay competitive. These companies accumulate big data that has the potential to provide inputs for business decisions. Query processing is one of the major tools for running analytics on their data, but traditional query processing techniques based on in-memory algorithms cannot cope with the large amounts of data held by telecom operators. The k-nearest-neighbour (kNN) technique is well suited to classification and regression on large datasets. Our research focuses on implementing kNN as a query processing algorithm and evaluating its performance on large datasets on a single CPU core, on multiple CPU cores, and on a GPU. This thesis presents an experimental implementation of kNN query processing on a single-core CPU, a multi-core CPU and a GPU using Python, Pthreads and CUDA respectively. We considered different dataset sizes, dimensionalities and values of k as inputs to evaluate the performance. The experiments show that the GPU performs better than the single-core CPU by a factor of about 1.4 to 3 and better than the multi-core CPU by a factor of about 5.8 to 16 for the different levels of inputs.
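As a point of reference for the single-core Python baseline described above, a brute-force kNN query might look like the sketch below (a simplified illustration with assumed dataset sizes, not the thesis implementation; the Pthreads and CUDA variants parallelise the same distance computation across cores or GPU threads).

```python
# Brute-force kNN query: for each query point, return the indices of the
# k nearest dataset points by Euclidean distance (single-core NumPy baseline).

import numpy as np

def knn_query(data: np.ndarray, queries: np.ndarray, k: int) -> np.ndarray:
    # Squared Euclidean distance between every query point and every data point.
    dists = ((queries[:, None, :] - data[None, :, :]) ** 2).sum(axis=-1)
    # argpartition avoids a full sort; only the k smallest per row are needed.
    return np.argpartition(dists, k, axis=1)[:, :k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.random((100_000, 8))      # assumed dataset size and dimensionality
    queries = rng.random((10, 8))
    print(knn_query(data, queries, k=5))
```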
454

Découverte et exploitation de proportions analogiques dans les bases de données relationnelles / Discovering and exploiting analogical proportions in a relational database context

Correa Beltran, William 18 July 2016 (has links)
In this thesis, we are interested in the notion of analogical proportions in a relational database context. An analogical proportion is a statement of the form “A is to B as C is to D”, expressing that the relation between A and B is the same as the relation between C and D. For instance, one may say that “Paris is to France as Rome is to Italy”. We studied the problem of imputing missing values in a relational database by means of analogical proportions. A classification algorithm based on analogical proportions was modified in order to impute missing values. We then studied how analogical classifiers work in order to see whether their processing could be simplified, and showed that some types of analogical proportions are more useful than others when performing classification. We proposed an algorithm exploiting this information, which allowed us to considerably reduce the size of the training set used by the analogical classification algorithm and hence to reduce its execution time. In the second part of this thesis, we paid particular attention to the mining of combinations of four tuples bound by an analogical relationship. To do so, we used several clustering algorithms and proposed modifications to them, so that each obtained cluster represents a set of analogical proportions. Using the results of the clustering algorithms, we studied how to efficiently retrieve the analogical proportions in a database by means of queries. To this end, we proposed to extend the SQL query language in order to retrieve from a database the quadruples of tuples satisfying an analogical proportion. We proposed several query evaluation strategies and experimentally compared their performances.
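To make the notion of a quadruple of tuples satisfying an analogical proportion concrete, the sketch below (an illustration over made-up data, not the SQL extension proposed in the thesis) enumerates quadruples (A, B, C, D) such that A - B = C - D attribute-wise, one common arithmetic reading of “A is to B as C is to D”.

```python
# Naive search for analogical proportions A:B::C:D over numeric tuples,
# using the arithmetic reading "A - B == C - D" on every selected attribute.

from itertools import permutations

# Hypothetical relation: (name, year, price)
rows = [
    ("p1", 2010, 100),
    ("p2", 2012, 120),
    ("p3", 2015, 300),
    ("p4", 2017, 320),
]

def is_proportion(a, b, c, d, attrs=(1, 2)):
    """True if a - b == c - d on all selected numeric attributes."""
    return all(a[i] - b[i] == c[i] - d[i] for i in attrs)

quadruples = [
    (a[0], b[0], c[0], d[0])
    for a, b, c, d in permutations(rows, 4)
    if is_proportion(a, b, c, d)
]
# e.g. ('p1', 'p2', 'p3', 'p4'): 2010-2012 == 2015-2017 and 100-120 == 300-320
print(quadruples)
```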
455

An Integrated Framework for Patent Analysis and Mining

Zhang, Longhui 01 April 2016 (has links)
Patent documents are important intellectual resources protecting the interests of individuals, organizations and companies. These documents have great research value and are beneficial to the industry, business, law, and policy-making communities. Patent mining aims at assisting patent analysts in investigating, processing, and analyzing patent documents, and has attracted increasing interest in academia and industry. However, despite recent advances in patent mining, several critical issues in current patent mining systems have not been well explored in previous studies. These issues include: 1) the query retrieval problem, which assists patent analysts in finding all patent documents relevant to a given patent application; 2) the patent document comparative summarization problem, which helps patent analysts quickly review any given pair of patent documents; and 3) the key patent document discovery problem, which helps patent analysts quickly grasp the linkage between different technologies in order to better understand the technical trend in a collection of patent documents. This dissertation follows the stream of research that covers the aforementioned issues of existing patent analysis and mining systems. In this work, we delve into three interleaved aspects of patent mining techniques: (1) PatSearch, a framework for automatically generating a search query from a given patent application and retrieving relevant patents for the user; (2) PatCom, a framework for investigating the relationship, in terms of commonality and difference, between pairs of patent documents; and (3) PatDom, a framework for integrating multiple types of patent information to identify important patents in a large volume of patent documents. In summary, the increasing amount and textual complexity of patent repositories lead to a series of challenges that are not well addressed in current-generation systems. My work proposed reasonable solutions to these challenges and provided insights into how to address them using a simple yet effective integrated patent mining framework.
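The PatSearch component described above turns a patent application into a search query. A rough, hypothetical sketch of that step using plain TF-IDF term selection (scikit-learn is assumed to be available, and the collection and abstract are made up; the framework's actual query-generation method is more involved):

```python
# Toy query generation: pick the highest-TF-IDF terms of a new patent
# abstract relative to a small background collection.

from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical background collection of patent abstracts.
collection = [
    "battery cell electrode coating for lithium ion storage",
    "wireless charging coil alignment for mobile devices",
    "image sensor pixel readout circuit with noise reduction",
]
application = "electrode coating process improving lithium ion battery cell capacity"

vectorizer = TfidfVectorizer(stop_words="english")
vectorizer.fit(collection + [application])
weights = vectorizer.transform([application]).toarray()[0]
terms = vectorizer.get_feature_names_out()

# Take the top-5 weighted terms as the generated search query.
query_terms = [t for _, t in sorted(zip(weights, terms), reverse=True)[:5]]
print(" ".join(query_terms))
```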
456

A Comparison of Encryption Algorithms for Protecting Data Passed Through a URL

Osman, Mohamed, Johansson, Adam January 2017 (has links)
This project starts off by giving an overview of sensitive data, encryption algorithms and other background knowledge required for this thesis project. This is motivated by the aim of the project: to find the best way to encrypt data passed through a URL, with a focus on protecting sensitive data in web applications. Data sent through the URL of a web application can be sensitive, and exposure of sensitive data can be devastating for governments, companies, and individuals. The tools and methods used in this thesis project are described. An overview is given of the requirements of the web application that was to be created, its development, and the implementation and comparison of encryption algorithms. The results for the encryption algorithms are then compared and presented together with a prototype of the web application and its encryption. The results are analyzed in two sections: security of the encryption and performance tests. Based on these results we conclude which of the encryption algorithms is the most suitable for our web application, and more generally for encrypting data passed through the URL of a web application. The results show that AES has a clear advantage over 3DES, in both security and performance, when encrypting sensitive data passed through a URL. These results are then used to build a secure web application to help and assist a broker during an open showing. The web application is used together with information from interested buyers so that the broker can easily contact them after the showing.
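As an illustration of the kind of URL-parameter protection being compared (a hedged sketch assuming the third-party Python `cryptography` package; it is not the thesis's web application, and key management is omitted), the snippet below encrypts a query-string value with AES-GCM and encodes it so it can be embedded in a URL:

```python
# Encrypt a sensitive value with AES-256-GCM and make it URL-safe.
# Assumes the 'cryptography' package; key storage/rotation is out of scope here.

import os
from base64 import urlsafe_b64encode, urlsafe_b64decode
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in practice, a stored application key
aead = AESGCM(key)

def encrypt_param(value: str) -> str:
    nonce = os.urandom(12)                              # fresh nonce per message
    ct = aead.encrypt(nonce, value.encode(), None)      # ciphertext + auth tag
    return urlsafe_b64encode(nonce + ct).decode()       # safe to embed in a URL

def decrypt_param(token: str) -> str:
    raw = urlsafe_b64decode(token.encode())
    return aead.decrypt(raw[:12], raw[12:], None).decode()

token = encrypt_param("apartment_id=42;visitor=Jane Doe")   # hypothetical payload
print(token)
print(decrypt_param(token))
```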
457

Prédire les performances des requêtes et expliquer les résultats pour assister la consommation de données liées / Predicting query performance and explaining results to assist Linked Data consumption

Hasan, Rakebul 04 November 2014 (has links)
Our goal is to assist users in understanding SPARQL query performance, query results, and derivations on Linked Data. To help users understand query performance, we provide query performance predictions based on the query execution history. We present a machine learning approach to predict query performance. We do not use statistics about the underlying data for our predictions, which makes our approach suitable for the Linked Data scenario, where statistics about the underlying data are often missing, for instance when the data is controlled by external parties. To help users understand query results, we provide provenance-based query result explanations. We present a non-annotation-based approach to generate why-provenance for SPARQL query results. Our approach does not require any re-engineering of the query processor, the data model, or the query language. We use the existing SPARQL 1.1 constructs to generate provenance by querying the data, which makes our approach suitable for Linked Data. We also present a user study examining the impact of query result explanations. Finally, to help users understand derivations on Linked Data, we introduce the concept of Linked Explanations. We publish explanation metadata as Linked Data, which allows derived data to be explained by following the links of the data used in the derivation and the links of its explanation metadata. We present an extension of the W3C PROV ontology to describe explanation metadata, and an approach to summarize these explanations to help users filter information in the explanation and understand what important information was used in the derivation.
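The performance-prediction part of this work learns from the query execution history rather than from statistics about the data. A minimal, hypothetical sketch of that idea with made-up query features and timings (scikit-learn assumed; neither the feature set nor the learner is taken from the thesis):

```python
# Predict SPARQL query execution time from structural query features,
# training only on past executions (no statistics about the underlying data).

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical execution log: [triple patterns, joins, filters, uses OPTIONAL]
X_history = np.array([
    [1, 0, 0, 0],
    [3, 2, 1, 0],
    [5, 4, 2, 1],
    [2, 1, 0, 0],
    [6, 5, 3, 1],
])
y_runtime_ms = np.array([12.0, 85.0, 430.0, 40.0, 610.0])   # observed runtimes

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_history, y_runtime_ms)

new_query_features = np.array([[4, 3, 1, 0]])
print(f"predicted runtime: {model.predict(new_query_features)[0]:.0f} ms")
```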
458

Chinese-English cross-lingual information retrieval in biomedicine using ontology-based query expansion

Wang, Xinkai January 2011 (has links)
In this thesis, we propose a new approach to Chinese-English biomedical cross-lingual information retrieval (CLIR) using query expansion based on the eCMeSH Tree, a Chinese-English ontology extended from the Chinese Medical Subject Headings (CMeSH) Tree. The CMeSH Tree is not designed for information retrieval (IR), since it only includes heading terms and has no term weighting scheme for these terms. Therefore, we design an algorithm, which employs a rule-based parsing technique combined with the C-value term extraction algorithm and a filtering technique based on mutual information, to extract Chinese synonyms for the corresponding heading terms. We also develop a term-weighting mechanism. Following the hierarchical structure of CMeSH, we extend the CMeSH Tree to the eCMeSH Tree with synonymous terms and their weights. We propose an algorithm to implement CLIR using the eCMeSH Tree terms to expand queries. In order to evaluate the retrieval improvements obtained with our approach, the results of query expansion based on the eCMeSH Tree are compared individually with the results of query expansion using the CMeSH Tree terms, query expansion using pseudo-relevance feedback, and document translation. We also evaluate combinations of these three approaches. This study also investigates the factors which affect CLIR performance, including the stemming algorithm, retrieval models, and word segmentation.
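To illustrate the weighted query expansion step in isolation (a simplified, hypothetical sketch; the eCMeSH Tree, its synonym extraction and its weighting scheme are far richer than this), a query term can be replaced by its heading term plus weighted synonyms before retrieval:

```python
# Toy ontology-based query expansion: map a query term to its heading term
# and weighted synonyms, producing a weighted bag of query terms.

# Hypothetical fragment of a thesaurus entry (term -> weight); weights assumed.
thesaurus = {
    "myocardial infarction": {
        "myocardial infarction": 1.0,   # heading term
        "heart attack": 0.8,            # extracted synonym
        "cardiac infarction": 0.6,
    },
}

def expand_query(terms):
    """Return {term: weight} with each known term expanded via the thesaurus."""
    expanded = {}
    for t in terms:
        for syn, w in thesaurus.get(t, {t: 1.0}).items():
            expanded[syn] = max(expanded.get(syn, 0.0), w)
    return expanded

print(expand_query(["myocardial infarction", "aspirin"]))
```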
459

Påverkan av query-komplexitet på söktiden hos NoSQL-databaser / The effect of query complexity of NoSQL-databases in respect to searchtime

Sortelius, Erik, Önnestam, Gabrielle January 2018 (has links)
This work compares four different NoSQL databases with a focus on time efficiency. The four databases are MongoDB, RavenDB, ArangoDB and Couchbase. The study consists of a benchmark measuring the time efficiency of the four databases and a literature study of how time efficiency is affected by optimization solutions. Together these methods support a conclusion from both perspectives, since they complement each other and provide a basis for interpreting the results. The work builds on a previous degree project that compared an SQL database against a NoSQL database using a benchmark. The results of the study show that, for most of the databases, the search time of a query increases in correlation with the increase in query complexity, and that time efficiency varies between the databases for searches with high complexity. Future work based on this study could apply a similar benchmark to a larger dataset or to another type of database.
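A hedged sketch of the kind of timing measurement such a benchmark relies on (here against MongoDB via `pymongo`, with an assumed local instance, collection and queries of increasing complexity; the thesis benchmarks four different NoSQL systems):

```python
# Time queries of increasing complexity against a MongoDB collection.
# Assumes a local MongoDB instance and a populated 'people' collection.

import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["benchmark"]["people"]

queries = {
    "simple":  {"age": 30},
    "medium":  {"age": {"$gt": 30}, "city": "Skövde"},
    "complex": {"$and": [{"age": {"$gt": 30}},
                         {"$or": [{"city": "Skövde"}, {"city": "Göteborg"}]}]},
}

for name, q in queries.items():
    start = time.perf_counter()
    count = len(list(coll.find(q)))              # force full result materialisation
    elapsed = (time.perf_counter() - start) * 1000
    print(f"{name:8s} {count:6d} documents in {elapsed:8.2f} ms")
```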
460

"Uma linguagem visual de consulta a banco de dados utilizando o paradigma de fluxo de dados" / One visual query language using data flow paradigm

Ana Paula Appel 02 April 2003 (has links)
In spite of the considerable work done on query languages for Relational Database Management Systems (RDBMS), existing languages follow only two basic paradigms, represented by the Structured Query Language (SQL) and Query-By-Example (QBE), both developed early in the history of RDBMS. Although these languages are computationally complete, they have the disadvantage of not supporting graphical interaction with the information contained in the database. One of the main developments in the database area concerns tools that provide users with a simple understanding of database content and friendly extraction of information. The language described in this work enables users to create queries graphically by means of data flow diagrams. Besides the graphical query language, this work also presents the Data Flow Query Language (DFQL) tool, a query editor/executor built to support this language through a set of graphically represented operators. Diagram execution is carried out by analyzing the network and generating the corresponding SQL commands to perform the query; these commands are submitted to the database management system and the result is displayed or recorded according to the query.
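To make the idea of translating a data-flow diagram into SQL concrete, here is a minimal, hypothetical sketch (made-up operators and schema; the DFQL tool supports a richer operator set and a graphical editor):

```python
# Toy translation of a linear data-flow query (table -> select -> project)
# into a single SQL statement, mimicking the DFQL editor/executor idea.

def dataflow_to_sql(nodes):
    """nodes: ordered list of (operator, argument) pairs forming the flow."""
    table, where, columns = None, [], ["*"]
    for op, arg in nodes:
        if op == "table":
            table = arg
        elif op == "select":          # relational selection = SQL WHERE
            where.append(arg)
        elif op == "project":         # relational projection = SQL column list
            columns = arg
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    if where:
        sql += " WHERE " + " AND ".join(where)
    return sql

# Hypothetical flow: employees --select(salary > 5000)--> project(name, dept)
flow = [("table", "employees"),
        ("select", "salary > 5000"),
        ("project", ["name", "dept"])]
print(dataflow_to_sql(flow))   # SELECT name, dept FROM employees WHERE salary > 5000
```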
