111

Avaliação experimental de uma técnica de padronização de escores de similaridade / Experimental evaluation of a similarity score standardization technique

Nunes, Marcos Freitas January 2009
With the growth of the Web and the ease of access to the Internet, the volume of data has grown considerably over the past years and, consequently, access to remote databases has become much easier, allowing physically distant data to be integrated. Usually, instances of the same real-world object, originating from distinct databases, differ in the representation of their values; that is, the same real-world data can be represented in different ways. In this context, studies on approximate matching using similarity functions arose, and with them the difficulty of interpreting the results of these functions and selecting ideal thresholds. When matching aggregates (records), there is the additional problem of combining similarity scores, since distinct functions have different distributions. To overcome this problem, a previous work developed a score standardization technique, which replaces the score computed by the similarity function with an adjusted score (computed through training) that is intuitive for the user and can be combined in the record matching process. The technique was developed by a PhD student in the UFRGS database research group and is referred to here as MeaningScore (DORNELES et al., 2007). The present work studies this technique and carries out a detailed experimental evaluation of it. The evaluation shows that the MeaningScore approach is valid and returns better results: in record matching, where distinct similarity scores must be combined, using the standardized score instead of the original score returned by the similarity function produces results of higher quality.
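The abstract only summarizes the technique, but the core idea, replacing a raw similarity score with a trained, adjusted score so that scores from different functions become comparable, can be sketched briefly. The following Python snippet calibrates raw scores against labeled training pairs using simple binning; the binning scheme and all names are illustrative assumptions, and this is not the MeaningScore algorithm itself.

```python
# A minimal sketch of score standardization via training, assuming labeled
# (score, is_match) pairs are available. It illustrates the general idea of
# replacing a raw similarity score with an adjusted, interpretable one; it
# is NOT the MeaningScore algorithm from the thesis.

def train_adjustment(scored_pairs, num_bins=10):
    """Estimate, per raw-score bin, the fraction of training pairs that
    were true matches; that fraction becomes the bin's adjusted score."""
    matches = [0] * num_bins
    totals = [0] * num_bins
    for score, is_match in scored_pairs:
        b = min(int(score * num_bins), num_bins - 1)
        totals[b] += 1
        matches[b] += int(is_match)
    return [m / t if t else 0.0 for m, t in zip(matches, totals)]

def adjusted_score(raw_score, bin_table):
    b = min(int(raw_score * len(bin_table)), len(bin_table) - 1)
    return bin_table[b]

# Adjusted scores from different similarity functions now share a common,
# intuitive scale (an estimated match probability), so they can be
# combined, e.g. averaged across fields, when matching whole records.
name_table = train_adjustment([(0.91, True), (0.85, True), (0.84, False), (0.30, False)])
print(adjusted_score(0.88, name_table))  # adjusted score for a raw 0.88
```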
113

Query authentication in data outsourcing and integration services

Chen, Qian 27 August 2015
Owing to the explosive growth of data driven by e-commerce, social media, and mobile apps, data outsourcing and integration have become two popular Internet services. These services involve one or more data owners (DOs), many requesting clients, and a service provider (SP). The DOs outsource/synchronize their data to the SP, and the SP provides query services to the requesting clients on behalf of the DOs. However, as a third-party server, the SP might alter (leave out or forge) the outsourced/integrated data and query results, intentionally or not. To address this trust issue, the SP is expected to deliver its services in an authenticatable manner, so that clients can verify the correctness of the service results. Unfortunately, existing work on query authentication cannot preserve the privacy of the data being queried. Furthermore, almost all previous studies assume only a single data source/owner, while data integration services usually combine data from multiple sources. In this dissertation, we take the first step to study the authentication of location-based queries with confidentiality and to investigate authenticated online data integration services. Cost models, security analysis, and experimental results consistently show the effectiveness and robustness of our proposed schemes under various system settings and query workloads.
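The classical building block behind such cryptographic proofs is a Merkle hash tree: the data owner signs the root hash, and the service provider returns, alongside each result, the sibling hashes needed to recompute that root. The sketch below shows only this generic verification step under simplified assumptions; it is not the dissertation's scheme, which additionally addresses confidentiality and multiple data sources.

```python
# A minimal sketch of Merkle-tree-based result verification, the classical
# building block for query authentication. Tree shape, proof encoding, and
# completeness checking are simplified assumptions, not the dissertation's
# scheme.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify(record: bytes, proof, signed_root: bytes) -> bool:
    """proof: (sibling_hash, sibling_is_left) pairs from leaf to root.
    Recomputing the root and comparing it with the root signed by the
    data owner shows the record was not forged by the service provider."""
    node = h(record)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == signed_root

# Two-record example: the owner signs root = h(h(r1) + h(r2)); the SP
# returns r1 together with h(r2) as the proof.
r1, r2 = b"record-1", b"record-2"
root = h(h(r1) + h(r2))
print(verify(r1, [(h(r2), False)], root))  # True
```

Completeness for range queries is typically argued by also returning the boundary records just outside the query range, so the client can check that nothing between consecutive returned records was omitted.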
114

Consultas kNN em redes dependentes do tempo / KNN queries in time-dependent networks

Lívia Almada Cruz 21 February 2013
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / In this dissertation, we study the problem of processing k-nearest-neighbour (kNN) queries in road networks considering the history of traffic conditions, in particular the case where the speed of moving objects is time-dependent. Given that a user is at a given location at a certain instant, the query returns the k points of interest (e.g., gas stations) that can be reached in the minimum amount of time under historical traffic conditions. Previous solutions for kNN and other common queries in static road networks do not work when the edge cost (travel time) is time-dependent. Building correct and efficient strategies and algorithms, together with storage and access methods, for processing these queries is a challenge, since some of the graph properties commonly assumed for static networks do not hold in time-dependent networks. The proposed method applies an A* search while incrementally expanding the network and pruning unpromising vertices; its goal is to reduce the percentage of the network assessed during the search. To support the algorithm's execution, a storage and access method for time-dependent networks is also proposed. The design and correctness of the algorithm are discussed, and experimental results on real and synthetic data show the efficiency and effectiveness of the solution.
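What breaks the classical static-network algorithms is that edge costs become functions of departure time rather than constants. The sketch below shows an A*-style earliest-arrival search over such a graph; the graph encoding, the heuristic, and the assumption that travel-time functions are FIFO (departing later never yields an earlier arrival) are simplifications for illustration, not the algorithm or storage scheme proposed in the dissertation.

```python
# A minimal sketch of earliest-arrival A* search where each edge's travel
# time depends on the departure time. Correctness of this simple expansion
# relies on the FIFO (non-overtaking) property of the travel-time functions
# and on an admissible heuristic; both are assumptions of the sketch.
import heapq

def earliest_arrival(graph, source, target, t0, heuristic):
    """graph: {u: [(v, travel_time_fn), ...]}, where travel_time_fn(t) is
    the travel time of the edge when leaving u at time t. heuristic(v)
    must lower-bound the remaining travel time from v to the target,
    e.g. straight-line distance divided by the maximum speed."""
    best = {source: t0}
    queue = [(t0 + heuristic(source), source)]
    while queue:
        _, u = heapq.heappop(queue)
        if u == target:
            return best[u]
        for v, travel_time in graph.get(u, []):
            arrival = best[u] + travel_time(best[u])  # cost depends on departure time
            if arrival < best.get(v, float("inf")):
                best[v] = arrival
                heapq.heappush(queue, (arrival + heuristic(v), v))
    return None  # target unreachable

# A kNN query would run this expansion from the user's location and stop
# once k points of interest have been settled, rather than at one target.
```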
115

Authenticated query processing in the cloud

Xu, Cheng 19 February 2019
With recent advances in data-as-a-service (DaaS) and cloud computing, outsourcing data to the cloud has become a common practice. In a typical scenario, the data owner (DO) outsources the data and delegates the query processing service to a service provider (SP). However, as the SP is often an untrusted third party, the integrity of the query results cannot be guaranteed and must therefore be authenticated. A typical approach to this issue is to have the SP provide a cryptographic proof, which the clients can use to verify the soundness and completeness of the query results. Despite extensive research on authenticated query processing for outsourced databases, existing techniques have considered only limited query types. They fail to address a variety of needs demanded by enterprise customers, such as supporting aggregate queries over set-valued data, enforcing fine-grained access control, and using distributed computing paradigms. In this dissertation, we take the first step towards comprehensively investigating authenticated query processing in the cloud that fulfills the aforementioned requirements. Security analysis and performance evaluation show that the proposed solutions and techniques are robust and efficient under a wide range of system settings.
116

Dotazování nad časoprostorovými daty pohybujících se objektů / Querying Spatio-Temporal Data of Moving Objects

Dvořáček, Ondřej January 2009
This master's thesis studies the possibilities for representing moving-object data and for querying such spatio-temporal data. It also reviews the results of the master's thesis by Ing. Jaroslav Vališ, which serve as a starting point for this work. Based on the theoretical grounds laid out at the beginning of the work, a new database extension for storing and querying spatio-temporal data was designed and implemented. Its use is demonstrated in an example application, which implements its own domain-specific database functions on top of the extension. The conclusion presents directions for further development of the database extension and places the results of this thesis in the context of the follow-up doctoral project, "Moving objects database".
117

IMPERATIVE MODELS TO DECLARATIVE CONSTRAINTS : Generating Control-Flow Constraints from Business Process Models

Bergman Thörn, Arvid January 2023
In complex information systems, it is often crucial to evaluate whether a sequence of activities obtained from a system log complies with behavioural rules. This process of evaluation is called conformance checking, and the most classical approach to specifying the behavioural rules is in the form of flowchart-like process diagrams, e.g., in the Business Process Model and Notation (BPMN) language. Traditionally, control-flow constraints are extracted using Petri-net replay-based approaches. However, industrial process query languages such as Signavio Analytics Language (SIGNAL), which allow temporal row matching, open up the possibility of performing conformance checking using temporal constraints. To this end, this thesis presents a parser for extracting control-flow objects from BPMN-based business process models and a compiler for generating both linear temporal logic-like rules and SIGNAL queries. The parser succeeds at parsing all industry models and most academic models; the exceptions in the latter case can presumably be traced back to edge cases and unidiomatic modelling. The constraints generated by the compiler are in some, but not all, cases identical to constraints extracted via Petri-net replay as an intermediate step, indicating some differences in the formal interpretation of BPMN control flow. In conclusion, the implementation and evaluation of the parser and compiler indicate that it is feasible to move directly from business user-oriented process models to declarative, query language-based constraints, cutting out the Petri-net-replay middleman and hence facilitating elegant and more efficient process data querying.
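To make the imperative-to-declarative direction concrete, the toy sketch below maps plain sequence flows to LTL-style response constraints. It ignores gateways, events, and loops, and it reproduces neither the thesis's parser and compiler nor actual SIGNAL syntax; the encoding and output format are illustrative assumptions only.

```python
# A toy sketch of compiling sequence flows from an imperative process model
# into declarative control-flow constraints. Real BPMN (gateways, events,
# loops) requires far more care; this only conveys the flavour of the
# translation and is not the thesis's parser/compiler or SIGNAL syntax.

def to_response_constraints(flows):
    """flows: (source_activity, target_activity) sequence flows. A direct
    flow a -> b becomes a response constraint: globally, whenever a
    occurs, b must eventually occur afterwards."""
    return [f"G({a} -> F({b}))" for a, b in flows]

model = [("receive_order", "check_stock"), ("check_stock", "ship_order")]
for rule in to_response_constraints(model):
    print(rule)
# G(receive_order -> F(check_stock))
# G(check_stock -> F(ship_order))
```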
118

DATAWAREHOUSE APPROACH TO DECISION SUPPORT SYSTEM FROM DISTRIBUTED, HETEROGENEOUS SOURCES

Sannellappanavar, Vijaya Laxmankumar 05 October 2006
No description available.
119

Linked Open Data Alignment & Querying

Jain, Prateek 27 August 2012
No description available.
120

Novel spatial query processing techniques for scaling location based services

Pesti, Peter 12 November 2012
Location-based services (LBS) are gaining widespread user acceptance and increased daily usage. GPS-based mobile navigation systems (Garmin), location-related social network updates and "check-ins" (Facebook), location-based games (Nokia), friend queries (Foursquare) and ads (Google) are some of the popular LBSs available to mobile users today. Despite these successes, current services fall short of a vision where mobile users could ask for continuous location-based services with always-up-to-date information around them, such as the list of friends or favorite restaurants within 15 minutes of driving. Providing such a location-based service in real time faces a number of technical challenges. In this dissertation research, we propose a suite of novel techniques and system architectures to address some known technical challenges of continuous location queries and updates. Our solution approaches enable the creation of new, practical and scalable location-based services with better energy efficiency on mobile clients and higher throughput at the location servers. Our first contribution is the development of RoadTrack, a road-network-aware and query-aware location update framework and a suite of algorithms. A unique characteristic of RoadTrack is the innovative design of encounter points and system-defined precincts to manage the desired spatial resolution of location updates for different mobile clients while reducing the complexity and energy consumption of location update strategies. The second novelty of this dissertation research is the technical development of the Dandelion data structures and algorithms, which deliver superior performance for the periodic re-evaluation of continuous location queries based on road-network distance, compared with the alternative of repeatedly performing a network expansion along a mobile user's trajectory. The third contribution of this dissertation research is the FastExpand algorithm, which speeds up the computation of single-issue shortest-distance road network queries. Finally, we have developed the open-source GT MobiSim mobility simulator, a discrete-event simulation platform that generates realistic driving trajectories for real road maps. It has been downloaded and utilized by many to evaluate the efficiency and effectiveness of location query and location update algorithms, including the research efforts in this dissertation.
