About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
551

Title-based video summarization using attention networks

Li, Changwei 23 August 2022 (has links)
No description available.
552

Abortable and Query-abortable Types and Their Efficient Implementation

Horn, Stephanie Lorraine 24 September 2009 (has links)
We introduce abortable and query-abortable object types intended for implementation in asynchronous shared-memory systems with low contention. Implementations of such types behave like ordinary objects when accessed sequentially, but may abort operations when accessed concurrently. An aborted operation may or may not take effect, i.e., cause a state transition, and it returns no indication of which possibility occurred. Since this uncertainty can be problematic, a query-abortable type supports a QUERY operation that each process can use to determine its last non-QUERY operation on the object that caused a state transition, and the response associated with this state transition. Our research is closely related to obstruction-free implementations (introduced by Herlihy, Luchangco and Moir) and responsive obstruction-free implementations (introduced by Attiya, Guerraoui and Kouznetsov). Like abortable and query-abortable types, these implementations may exhibit degraded behaviour in the face of contention. We show that abortable registers, which are strictly weaker than safe registers, can be used to obtain obstruction-free and responsive obstruction-free implementations for any type. We present universal constructions for abortable and query-abortable types that are novel and efficient in the number of registers used. Specifically, they are based on a simple timestamping mechanism for detecting concurrent executions, and, in systems with n processes, use only n abortable registers or only O(n^2) single-reader, single-writer abortable registers. The timestamping mechanism we introduce is based on the inc&read counter type and appears to be interesting in its own right. As a generalization, we study the k-inc&read counter types, for k > 0. We also identify a potential problem with correctness properties based on step contention: with such properties, the composition of correct object implementations may result in an implementation that is not correct. In other words, implementations defined in terms of step contention are not always composable. To avoid this problem, we introduce a property based on interval contention, namely non-triviality, to define the correct behaviour of abortable and query-abortable object implementations.
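To make the QUERY semantics concrete, the sketch below models a query-abortable inc&read counter at the specification level only: abort decisions are injected by the caller to stand in for detected contention, an aborted operation may or may not take effect, and QUERY reports the calling process's last effective non-QUERY operation. Class and method names are hypothetical; this is not the register-based construction presented in the thesis.

```python
import random

ABORT = "ABORT"  # response returned by an operation that aborts

class QueryAbortableCounter:
    """Specification-level sketch of a query-abortable inc&read counter.

    Only the sequential semantics described in the abstract are modeled:
    an aborted operation may or may not cause a state transition, and
    QUERY lets a process learn the last of its non-QUERY operations that
    did take effect, together with that operation's response.
    """

    def __init__(self):
        self.value = 0
        self.op_count = 0          # per-object operation sequence numbers
        self.last_effective = {}   # pid -> (op_id, response) of last effective op

    def inc_and_read(self, pid, contention=False):
        self.op_count += 1
        op_id = self.op_count
        if contention:
            # Under contention the operation aborts; whether it takes effect
            # is left unspecified -- modeled here with a coin flip.
            if random.random() < 0.5:
                self.value += 1
                self.last_effective[pid] = (op_id, self.value)
            return ABORT
        self.value += 1
        self.last_effective[pid] = (op_id, self.value)
        return self.value

    def query(self, pid):
        """Resolve the ambiguity left by an abort for the calling process."""
        return self.last_effective.get(pid)


if __name__ == "__main__":
    c = QueryAbortableCounter()
    print(c.inc_and_read(pid=0))                   # 1
    print(c.inc_and_read(pid=0, contention=True))  # ABORT -- did it take effect?
    print(c.query(pid=0))                          # reveals the last effective op
```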
553

Data Aggregation through Web Service Composition in Smart Camera Networks

Rajapaksage, Jayampathi S 14 December 2010 (has links)
Distributed Smart Camera (DSC) networks are power-constrained, real-time distributed embedded systems that perform computer vision using multiple cameras. Providing data aggregation techniques, which are critical for running complex image processing algorithms on DSCs, is a challenging task due to the complexity of video and image data. Providing highly desirable SQL APIs for sophisticated query processing in DSC networks is also challenging for similar reasons. Research on DSCs to date has not addressed these two problems. In this thesis, we develop a novel SOA-based middleware framework on a DSC network that uses Distributed OSGi to expose DSC network services as web services. We also develop a novel web service composition scheme that aids in data aggregation, and a SQL query interface for DSC networks that allows sophisticated query processing. We validate our service orchestration concept for data aggregation by providing a query primitive for face detection in a smart camera network.
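As a hedged illustration of the composition idea (not the thesis's Distributed OSGi middleware), the sketch below fans a face-detection request out to hypothetical per-camera services and aggregates the results, playing the role of the SQL-style query primitive described above. All endpoints and names are invented.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-camera service endpoints; the real system exposes these
# as Distributed OSGi web services rather than plain HTTP URLs.
CAMERA_NODES = {
    "cam-01": "http://cam-01.local/ws/face_detect",
    "cam-02": "http://cam-02.local/ws/face_detect",
}

def call_face_detect_service(endpoint, since):
    """Stand-in for a web service invocation; returns a list of detections."""
    # A real implementation would invoke the camera's face-detection service;
    # canned data is returned here purely for illustration.
    return [{"camera": endpoint, "timestamp": since, "faces": 2}]

def aggregate_face_counts(since):
    """Composed service: fan out to all camera services and merge results,
    acting like: SELECT camera, SUM(faces) ... GROUP BY camera."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda ep: call_face_detect_service(ep, since),
                           CAMERA_NODES.values())
    totals = {}
    for detections in results:
        for d in detections:
            totals[d["camera"]] = totals.get(d["camera"], 0) + d["faces"]
    return totals

if __name__ == "__main__":
    print(aggregate_face_counts(since="2010-12-01T00:00:00"))
```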
554

A Domain-Specific Conceptual Query System

Shen, Xiuyun 02 August 2007 (has links)
This thesis presents the architecture and implementation of a query system resulting from a domain-specific conceptual data modeling and querying methodology. The query system is built for a high-level conceptual query language that supports dynamically user-defined domain-specific functions and application-specific functions. It is DBMS-independent and can be translated to SQL and OQL through a normal form. It has currently been implemented for the neuroscience domain and can be applied to any other domain.
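A hedged sketch of what translating such a conceptual query to SQL through a normal form might look like; the conceptual syntax, table names, and the domain-specific function are invented for illustration and are not the thesis's actual language.

```python
# Hypothetical conceptual query: neurons in region "CA1" whose mean firing
# rate (a domain-specific function) exceeds 5 Hz.
conceptual_query = {
    "entity": "Neuron",
    "conditions": [("region", "=", "CA1")],
    "domain_function": ("mean_firing_rate", ">", 5.0),
}

def to_sql(q):
    """Render a normal-form conceptual query as SQL (illustrative only)."""
    table = q["entity"].lower()                     # Neuron -> neuron
    where = " AND ".join(
        f"{a} {op} '{v}'" if isinstance(v, str) else f"{a} {op} {v}"
        for a, op, v in q["conditions"])
    func, op, threshold = q["domain_function"]
    # A domain-specific function is assumed to map to a SQL function of the
    # same name (or a pre-computed column) in the target schema.
    func_pred = f"{func}({table}.id) {op} {threshold}"
    return f"SELECT {table}.id FROM {table} WHERE {where} AND {func_pred}"

print(to_sql(conceptual_query))
```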
556

Processing Exact Results for Queries over Data Streams

Chakraborty, Abhirup 23 February 2010 (has links)
In a growing number of information-processing applications, such as network-traffic monitoring, sensor networks, financial analysis, data mining for e-commerce, etc., data takes the form of continuous data streams rather than traditional stored databases/relational tuples. These applications share common features such as the need for real-time analysis, huge volumes of data, and unpredictable and bursty arrivals of stream elements. In all of these applications, it is infeasible to process queries over data streams by loading the data into a traditional database management system (DBMS) or into main memory. Such an approach does not scale with high stream rates. As a consequence, systems that can manage streaming data have gained tremendous importance. The need to process a large number of continuous queries over bursty, high-volume online data streams, potentially in real time, makes it imperative to design algorithms that use only limited resources. This dissertation focuses on processing exact results for join queries over high-speed data streams using limited resources, and proposes several novel techniques for processing join queries that incorporate secondary storage and non-dedicated computers. Existing approaches for stream joins either (a) deal with memory limitations by shedding load, and therefore cannot produce exact or highly accurate results for stream joins over data streams with time-varying arrivals of stream tuples, or (b) suffer from large I/O overheads due to random disk accesses. The proposed techniques exploit the high bandwidth of a disk subsystem by rendering the data access pattern largely sequential, eliminating small, random disk accesses. This dissertation proposes an I/O-efficient algorithm to process hybrid join queries that join a fast, time-varying or bursty data stream with a persistent disk relation. Such a hybrid join is the crux of a number of common transformations in an active data warehouse. Experimental results demonstrate that the proposed scheme reduces the response time of output results by exploiting spatio-temporal locality within the input stream, and minimizes disk overhead through disk-I/O amortization. The dissertation also proposes an algorithm to parallelize a stream join operator over a shared-nothing system. The proposed algorithm distributes the processing load across a number of independent, non-dedicated nodes, based on a fixed or predefined communication pattern; dynamically maintains the degree of declustering in order to minimize communication and processing overheads; and presents mechanisms for reducing storage and communication overheads while scaling over a large number of nodes. We present experimental results showing the efficacy of the proposed algorithms.
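The hybrid join's access pattern can be sketched as follows: buffer a batch of stream tuples in a hash table, then probe them while scanning the disk relation sequentially in large chunks, so one sequential pass is amortized over many stream tuples. The sketch below is an in-memory illustration of that pattern under simplified assumptions, not the dissertation's algorithm; all names are invented.

```python
from collections import defaultdict

def hybrid_join(stream, disk_relation, batch_size=1000, chunk_size=10000):
    """Join a (bounded, for illustration) stream with a disk-resident relation.

    Stream tuples and disk tuples are (key, payload) pairs. The disk relation
    is read in large sequential chunks, and each chunk is probed against a
    hash table of the currently buffered stream tuples, so the cost of one
    sequential scan is shared by a whole batch of stream tuples.
    """
    buffer = []
    for tup in stream:
        buffer.append(tup)
        if len(buffer) >= batch_size:
            yield from _probe_batch(buffer, disk_relation, chunk_size)
            buffer.clear()
    if buffer:                                   # flush the final partial batch
        yield from _probe_batch(buffer, disk_relation, chunk_size)

def _probe_batch(batch, disk_relation, chunk_size):
    index = defaultdict(list)                    # hash the buffered stream tuples
    for key, payload in batch:
        index[key].append(payload)
    for start in range(0, len(disk_relation), chunk_size):
        chunk = disk_relation[start:start + chunk_size]   # one sequential chunk
        for key, disk_payload in chunk:
            for stream_payload in index.get(key, ()):
                yield (key, stream_payload, disk_payload)

# Example: join a small synthetic stream with a synthetic "disk" relation.
stream = [(i % 5, f"s{i}") for i in range(10)]
relation = [(k, f"d{k}") for k in range(5)]
print(list(hybrid_join(stream, relation, batch_size=4, chunk_size=2)))
```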
557

Interactive visualization of financial data: Development of a visual data mining tool

Saltin, Joakim January 2012 (has links)
In this project, a prototype visual data mining tool was developed, allowing users to interactively investigate large multi-dimensional datasets visually (using 2D visualization techniques) through so-called drill-down, roll-up and slicing operations. The project included all steps of the development, from writing specifications and designing the program to implementing and evaluating it. Using ideas from data warehousing, custom methods for storing pre-computed aggregations of data (commonly referred to as materialized views) and retrieving data from these were developed and implemented in order to achieve higher performance on large datasets. View materialization enables the program to easily fetch or calculate a view using other views, something which can yield significant performance gains if view sizes are much smaller than the underlying raw dataset. The choice of which views to materialize was made in an automated manner using a well-known algorithm, the greedy algorithm for view materialization, which selects the fraction of all possible views that is likely (but not guaranteed) to yield the best performance gain. The use of materialized views was shown to have good potential to increase performance for large datasets, with an average speedup (compared to on-the-fly queries) between 20 and 70 for a test dataset containing 500,000 rows. The end result was a program combining flexibility with good performance, which was also reflected in good scores in a user-acceptance test with participants from the company where this project was carried out.
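The greedy view-selection step follows a standard description: repeatedly materialize the view whose addition yields the largest reduction in total query cost, given the views chosen so far. Below is a hedged sketch of that textbook algorithm under simplifying assumptions (a query on a view is answered from its smallest materialized ancestor), not the tool's actual implementation.

```python
def greedy_view_selection(view_sizes, answers_from, base_view, k):
    """Pick k views to materialize, greedily maximizing the benefit.

    view_sizes:   dict view -> number of rows
    answers_from: dict view -> set of views it can be answered from (its
                  ancestors, including itself); the base view answers everything
    base_view:    the raw dataset, always available
    k:            number of additional views to materialize
    """
    materialized = {base_view}

    def cost(view):
        # A query on `view` is answered from its smallest materialized ancestor.
        usable = answers_from[view] & materialized
        return min(view_sizes[a] for a in usable)

    for _ in range(k):
        best, best_benefit = None, 0
        for candidate in view_sizes:
            if candidate in materialized:
                continue
            benefit = sum(max(0, cost(v) - view_sizes[candidate])
                          for v in view_sizes
                          if candidate in answers_from[v])
            if benefit > best_benefit:
                best, best_benefit = candidate, benefit
        if best is None:          # no remaining candidate improves anything
            break
        materialized.add(best)
    return materialized

# Example lattice: raw data (1,000,000 rows) and two aggregated views.
sizes = {"raw": 1_000_000, "by_month": 12_000, "by_month_region": 60_000}
ancestors = {"raw": {"raw"},
             "by_month_region": {"by_month_region", "raw"},
             "by_month": {"by_month", "by_month_region", "raw"}}
print(greedy_view_selection(sizes, ancestors, base_view="raw", k=1))
```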
559

On the effect of INQUERY term-weighting scheme on query-sensitive similarity measures

Kini, Ananth Ullal 12 April 2006 (has links)
Cluster-based information retrieval systems often use a similarity measure to compute the association among text documents. In this thesis, we focus on a class of similarity measures named Query-Sensitive Similarity (QSS) measures. Recent studies have shown QSS measures to positively influence the outcome of a clustering procedure. These studies have used QSS measures in conjunction with the ltc term-weighting scheme. Several term-weighting schemes have since superseded the ltc scheme and demonstrated better retrieval performance. We test whether introducing one of these schemes, INQUERY, offers any benefit over the ltc scheme when used in the context of QSS measures. The testing procedure uses the Nearest Neighbor (NN) test to quantify the clustering effectiveness of QSS measures and the corresponding term-weighting scheme. The NN tests are applied to certain standard test document collections and the results are tested for statistical significance. On analyzing the results of the NN test relative to those obtained for the ltc scheme, we find several instances where the INQUERY scheme improves the clustering effectiveness of QSS measures. To be able to apply the NN test, we designed a software test framework, Ferret, by complementing the features provided by dtSearch, a search engine. The test framework automates the generation of NN coefficients by processing standard test document collection data. We provide insight into the construction and working of the Ferret test framework.
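For readers unfamiliar with query-sensitive similarity, one simple illustrative variant scores a document pair higher when the documents are similar both to each other and to the query. The sketch below uses cosine similarity over term-weight vectors and combines the three pairwise similarities with a geometric mean; it illustrates the general idea and is not necessarily one of the specific QSS measures or the ltc/INQUERY weights evaluated in the thesis.

```python
import math

def cosine(u, v):
    """Cosine similarity of two sparse term-weight vectors (dicts)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = (math.sqrt(sum(w * w for w in u.values()))
            * math.sqrt(sum(w * w for w in v.values())))
    return dot / norm if norm else 0.0

def query_sensitive_similarity(d1, d2, query):
    """Illustrative QSS score: the pair counts as similar only to the extent
    that the documents resemble each other *and* both resemble the query."""
    return (cosine(d1, d2) * cosine(d1, query) * cosine(d2, query)) ** (1.0 / 3.0)

# Toy term-weight vectors (in the thesis these would come from the ltc or
# INQUERY weighting scheme applied to real documents).
doc_a = {"cluster": 0.8, "retrieval": 0.5, "weighting": 0.2}
doc_b = {"cluster": 0.7, "retrieval": 0.6}
q     = {"cluster": 1.0, "retrieval": 1.0}
print(query_sensitive_similarity(doc_a, doc_b, q))
```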
560

Systematisierung und Evaluierung von Clustering-Verfahren im Information Retrieval (Systematization and evaluation of clustering methods in information retrieval)

Kürsten, Jens 04 December 2006 (has links) (PDF)
In this diploma thesis, widely used cluster analysis approaches are studied with respect to their application in optimizing the results of information retrieval systems. A systematic analysis of established cluster analysis methods forms the basis of a comparative evaluation of promising approaches to using cluster analysis for optimizing retrieval results. The evaluation is carried out through participation in the Domain Specific Monolingual Tasks of the Cross-Language Evaluation Forum 2006. The implementation of selected clustering approaches is realized within an existing Lucene-based retrieval system. Within the scope of this work, the system is additionally equipped with components for query expansion and data fusion. Both approaches have prevailed in research on the automatic optimization of retrieval results and therefore serve as the baseline for assessing the implemented cluster-analysis-based methods for improving retrieval results. The results show that selecting documents for query expansion by means of local document clustering, based on the k-means clustering algorithm combined with the pseudo-relevance feedback approach, is particularly promising. Furthermore, the data fusion approach based on the Z-score operator proves useful for combining the retrieval results of different indexing methods, achieving very good and, in particular, very robust retrieval results.
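The Z-score fusion step has a standard form: standardize each run's retrieval scores to zero mean and unit variance, then sum the standardized scores per document. The sketch below follows that standard scheme under the stated assumptions and is not necessarily the exact operator implemented in the thesis.

```python
from statistics import mean, stdev

def zscore_fusion(runs):
    """Fuse several retrieval runs (dicts: doc_id -> score) by Z-score.

    Each run's scores are standardized to zero mean and unit variance, so
    runs produced by different indexing methods become comparable; the fused
    score of a document is the sum of its standardized scores.
    """
    fused = {}
    for run in runs:
        scores = list(run.values())
        mu = mean(scores)
        sigma = stdev(scores) if len(scores) > 1 else 1.0
        for doc, score in run.items():
            z = (score - mu) / sigma if sigma else 0.0
            fused[doc] = fused.get(doc, 0.0) + z
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Toy example: two runs produced by different indexing methods.
run_stemmed = {"d1": 12.0, "d2": 9.5, "d3": 3.1}
run_ngram   = {"d2": 0.82, "d3": 0.75, "d1": 0.10}
print(zscore_fusion([run_stemmed, run_ngram]))
```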
