About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
201

50,000 Tiny Videos: A Large Dataset for Non-parametric Content-based Retrieval and Recognition

Karpenko, Alexandre 22 September 2009 (has links)
This work extends the tiny image data-mining techniques developed by Torralba et al. to videos. A large dataset of over 50,000 videos was collected from YouTube. This is the largest user-labeled research database of videos available to date. We demonstrate that a large dataset of tiny videos achieves high classification precision in a variety of content-based retrieval and recognition tasks using very simple similarity metrics. Content-based copy detection (CBCD) is evaluated on a standardized dataset, and the results are applied to related video retrieval within tiny videos. We use our similarity metrics to improve text-only video retrieval results. Finally, we apply our large labeled video dataset to various classification tasks. We show that tiny videos are better suited for classifying activities than tiny images. Furthermore, we demonstrate that classification can be improved by combining the tiny images and tiny videos datasets.
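To make the retrieval setup concrete, here is a minimal sketch (not the thesis code) of non-parametric nearest-neighbour retrieval over tiny videos, assuming each video has already been downsampled to a fixed-size array and that a plain sum-of-squared-differences metric stands in for the thesis's "very simple similarity metrics"; the function names and array shapes are illustrative assumptions.

```python
# Illustrative sketch: nearest-neighbour retrieval over "tiny" videos,
# assuming each video is pre-downsampled to shape (frames, height, width, 3).
import numpy as np

def tiny_video_distance(a, b):
    """Sum of squared differences between two equally shaped tiny videos."""
    return float(np.sum((a.astype(np.float32) - b.astype(np.float32)) ** 2))

def nearest_neighbours(query, dataset, k=5):
    """Indices of the k most similar videos under the SSD metric."""
    dists = [tiny_video_distance(query, v) for v in dataset]
    return np.argsort(dists)[:k]

if __name__ == "__main__":
    # Hypothetical usage: labels would be predicted by voting over the neighbours' tags.
    rng = np.random.default_rng(0)
    dataset = [rng.integers(0, 256, (16, 32, 32, 3), dtype=np.uint8) for _ in range(100)]
    print(nearest_neighbours(dataset[42], dataset, k=3))  # index 42 ranks first
```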
202

Data Recovery For Web Applications

Akkus, Istemi Ekin 14 December 2009 (has links)
Web applications store their data at the server. Despite several benefits, this design raises a serious problem: a bug or misconfiguration that causes data loss or corruption can affect a large number of users. We describe the design of a generic recovery system for web applications. Our system tracks application requests and reuses the undo logs already kept by databases to selectively recover from corrupting requests and their effects. The main challenge is to correlate requests across the multiple tiers of the application to determine the correct recovery actions. We explore using dependencies both within and across requests at three layers (database, application, and client) to help identify data corruption accurately. We evaluate our system using known bugs and misconfigurations in popular web applications, including WordPress, Drupal, and Gallery2. Our results show that our system enables recovery from data corruption without loss of critical data, while incurring little overhead for request tracking.
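As a rough illustration of the selective-recovery idea, the following sketch (hypothetical data structures, not the thesis implementation) records which database rows each request writes, derives the set of requests tainted by a corrupting request, and selects the undo-log entries to replay; real dependency tracking would also span the application and client tiers.

```python
# Illustrative sketch: correlate requests with the rows they wrote, then choose
# which undo-log entries to replay when one request is flagged as corrupting.
from collections import defaultdict

class RecoveryTracker:
    def __init__(self):
        self.writes = defaultdict(set)   # request_id -> rows written
        self.undo_log = []               # (request_id, row, old_value), in log order

    def record_write(self, request_id, row, old_value):
        self.writes[request_id].add(row)
        self.undo_log.append((request_id, row, old_value))

    def tainted_requests(self, bad_request):
        """Requests whose writes depend, via shared rows, on the bad request."""
        tainted, dirty_rows = {bad_request}, set(self.writes[bad_request])
        for rid, rows in self.writes.items():   # assumes dict order matches request order
            if rid not in tainted and rows & dirty_rows:
                tainted.add(rid)
                dirty_rows |= rows
        return tainted

    def recovery_actions(self, bad_request):
        """Undo-log entries to replay, newest first, restricted to tainted requests."""
        tainted = self.tainted_requests(bad_request)
        return [entry for entry in reversed(self.undo_log) if entry[0] in tainted]
```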
203

A Study of Conflict Detection in Software Transactional Memory

Lupei, Daniel 15 February 2010 (has links)
Transactional Memory (TM) has been proposed as a simpler parallel programming model than the traditional locking model. However, uptake from the programming community has been slow, primarily because the performance issues of software-based TM strategies are not well understood. In this thesis we conduct a systematic analysis of the conflict scenarios that may emerge when enforcing correctness between conflicting transactions. We find that some combinations of conflict detection and resolution strategies perform better than others, depending on the conflict patterns in the application. We validate our findings by implementing several concurrency control strategies and measuring their relative performance. Based on these observations, we introduce partial rollbacks as a mechanism for effectively compensating for the variability in TM algorithm performance. We show that using this mechanism we can obtain close to the overall best performance for a range of conflict patterns in a synthetically generated workload and a realistic game application.
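The sketch below is a toy version-validation STM, included only to make the eager-versus-lazy design choice concrete; the class names and the validation scheme are assumptions, not the implementations evaluated in the thesis.

```python
# Toy STM sketch: conflict detection can run eagerly (on each write) or
# lazily (once, at commit), by validating observed read versions.
class TMState:
    def __init__(self):
        self.versions = {}                    # memory location -> version counter
    def version(self, loc):
        return self.versions.get(loc, 0)

class Transaction:
    def __init__(self, tm, eager=True):
        self.tm, self.eager = tm, eager
        self.read_versions = {}               # loc -> version observed at first read
        self.write_set = set()

    def read(self, loc):
        self.read_versions.setdefault(loc, self.tm.version(loc))

    def write(self, loc):
        self.write_set.add(loc)
        if self.eager and self._conflict():   # eager: validate on every write
            raise RuntimeError("abort (eager conflict)")

    def commit(self):
        if self._conflict():                  # lazy: validate once, at commit
            raise RuntimeError("abort (conflict at commit)")
        for loc in self.write_set:            # publish writes by bumping versions
            self.tm.versions[loc] = self.tm.version(loc) + 1

    def _conflict(self):
        return any(self.tm.version(loc) != v for loc, v in self.read_versions.items())
```

With eager detection a doomed transaction aborts at its first conflicting write; with lazy detection it runs to commit and aborts there, trading wasted work against validation overhead, which is one axis of the variability studied above.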
204

Toward More Efficient Motion Planning with Differential Constraints

Kalisiak, Maciej 31 July 2008 (has links)
Agents with differential constraints, although common in the real world, pose a particular difficulty for motion planning algorithms. Methods for solving such problems are still relatively slow and inefficient. In particular, current motion planners generally can neither "see" the world around them nor generalize from experience. That is, their reliance on collision tests as the only means of sensing the environment yields a tactile, myopic perception of the world. Such short-sightedness greatly limits any potential for detection, learning, or reasoning about frequently encountered situations. As a result, these methods solve each problem in exactly the same way, whether it is the first or the hundredth time they attempt it, each time none the wiser. The key component of this thesis is a general approach to motion planning in which local sensory information, in conjunction with prior accumulated experience, is exploited to improve planner performance. The approach relies on learning viability models for the agent's "perceptual space" and using them to direct planning effort. In addition, a method is presented for improving the runtimes of the RRT motion planning algorithm in heavily constrained search spaces, a common feature for agents with differential constraints. Finally, the thesis explores the use of viability models for maintaining safe operation of user-controlled agents, a related application which could be harnessed to yield additional, more "natural" experience data for further improving motion planning.
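As an illustration of how a learned viability model might direct planning effort, here is a minimal RRT sketch under assumed interfaces (sample_free, steer, collision_free, and viability_model are caller-supplied and hypothetical); the only change from a plain RRT is that extensions whose local features the model predicts to be non-viable are skipped.

```python
# Sketch: RRT whose tree extension consults a learned viability model, so effort
# is not spent growing into regions predicted to be unrecoverable.
import math

def rrt_plan(start, goal, sample_free, steer, collision_free, viability_model,
             iters=5000, goal_tol=0.5):
    tree = {tuple(start): None}                         # node -> parent
    for _ in range(iters):
        q_rand = sample_free()
        q_near = min(tree, key=lambda q: math.dist(q, q_rand))
        q_new = tuple(steer(q_near, q_rand))            # steer respects the agent's dynamics
        if viability_model(q_new) < 0.5:                # prior experience says: likely doomed
            continue
        if collision_free(q_near, q_new):
            tree[q_new] = q_near
            if math.dist(q_new, goal) < goal_tol:
                return tree, q_new                      # goal reached; walk parents for a path
    return tree, None                                   # no path found within the budget
```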
205

DescribeX: A Framework for Exploring and Querying XML Web Collections

Rizzolo, Flavio Carlos 26 February 2009 (has links)
The nature of semistructured data in web collections is evolving. Even when XML web documents are valid with regard to a schema, the actual structure of such documents exhibits significant variations across collections for several reasons: an XML schema may be very lax (e.g., to accommodate the flexibility needed to represent collections of documents in RSS feeds), a schema may be large, with different subsets used for different documents (e.g., this is common in industry standards like UBL), or open content models may allow arbitrary schemas to be mixed (e.g., RSS extensions like those used for podcasting). A schema alone may not provide sufficient information for many data management tasks that require knowledge of the actual structure of the collection. Web applications (such as processing RSS feeds or web service messages) rely on XPath-based data manipulation tools. Web developers need to use XPath queries effectively on increasingly larger web collections containing hundreds of thousands of XML documents. Even when tasks only need to deal with a single document at a time, developers benefit from understanding the behaviour of XPath expressions across multiple documents (e.g., what will a query return when run over the thousands of hourly feeds collected during the last few months?). Dealing with the (highly variable) structure of such web collections poses additional challenges. This thesis introduces DescribeX, a powerful framework capable of describing arbitrarily complex XML summaries of web collections and providing support for more efficient evaluation of XPath workloads. DescribeX permits the declarative description of document structure using all axes and language constructs in XPath, and generalizes many of the XML indexing and summarization approaches in the literature. DescribeX supports the construction of heterogeneous summaries in which different document elements sharing a common structure can be declaratively defined and refined by means of path regular expressions on axes, or axis path regular expressions (AxPREs). DescribeX can significantly help in understanding both the structure of complex, heterogeneous XML collections and the behaviour of XPath queries evaluated on them. Experimental results demonstrate the scalability of DescribeX summary refinements and stabilizations (the key enablers for tailoring summaries) on multi-gigabyte web collections. A comparative study suggests that using a DescribeX summary created from a given workload can produce query evaluation times orders of magnitude better than using existing summaries. DescribeX's lightweight approach of combining summaries with a file-at-a-time XPath processor can be a very competitive alternative, in terms of performance, to conventional fully-fledged XML query engines that provide DB-like functionality such as security, transaction processing, and native storage.
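The following sketch conveys only the general idea of a structural summary, partitioning elements of a collection by their incoming label path; DescribeX's AxPREs are far more general (they can use any XPath axis), so treat the code as an assumed simplification rather than the system itself.

```python
# Sketch: a simple structural summary for an XML collection, mapping each
# root-to-element label path to the set of documents that contain it.
import xml.etree.ElementTree as ET
from collections import defaultdict

def incoming_path_summary(xml_strings):
    summary = defaultdict(set)
    for doc_id, xml in enumerate(xml_strings):
        root = ET.fromstring(xml)
        stack = [(root, "/" + root.tag)]
        while stack:
            elem, path = stack.pop()
            summary[path].add(doc_id)                  # this path occurs in this document
            for child in elem:
                stack.append((child, path + "/" + child.tag))
    return summary

if __name__ == "__main__":
    docs = ["<rss><channel><item/><item/></channel></rss>",
            "<rss><channel><title/></channel></rss>"]
    for path, ids in sorted(incoming_path_summary(docs).items()):
        print(path, sorted(ids))
```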
206

Clause Learning, Resolution Space, and Pebbling

Hertel, Philipp 19 January 2009 (has links)
Currently, the most effective complete SAT solvers are based on the DPLL algorithm augmented by Clause Learning. These solvers can handle many real-world problems from application areas like verification, diagnosis, planning, and design. Clause Learning works by storing previously computed, intermediate results and then reusing them to prune the future search tree. Without Clause Learning, however, DPLL loses most of its effectiveness on real-world problems. Recently there has been some work on obtaining a deeper understanding of the technique of Clause Learning. In this thesis, we contribute to the understanding of Clause Learning, and the Resolution proof system that underlies it, in a number of ways. We characterize Clause Learning as a new, intuitive Resolution refinement which we call CL. We then show that CL proofs can effectively p-simulate general Resolution. Furthermore, this result holds even for the more restrictive class of greedy, unit-propagating CL proofs, which more accurately characterize Clause Learning as it is used in practice. This result is surprising and indicates that Clause Learning is significantly more powerful than was previously known. Since Clause Learning makes use of previously derived clauses, it motivates the study of Resolution space. We contribute to this area of study by proving that determining the variable space of a Resolution derivation is PSPACE-complete. The reduction also yields a surprising exponential size/space trade-off for Resolution, in which an increase of just 3 units of variable space results in an exponential decrease in proof size. This result runs counter to the intuitions of many in the SAT-solving community, who have generally believed that proof size should decrease smoothly as available space increases. In order to prove these Resolution results, we needed to make use of some intuition regarding the relationship between Black-White Pebbling and Resolution. In fact, determining the complexity of Resolution variable space required us to first prove that Black-White Pebbling is PSPACE-complete. The complexity of the Black-White Pebbling Game has remained an open problem for 30 years, resisting numerous attempts to solve it. Its solution is the primary contribution of this thesis.
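To make the connection between Clause Learning and Resolution concrete, here is a simplified sketch of the learning step as a sequence of resolution steps: starting from the conflicting clause, implied literals are resolved away using their reason clauses until only negations of decision literals remain. The literal encoding and the stopping rule are simplifications chosen for illustration, not the CL refinement as formalized in the thesis.

```python
# Sketch of clause learning via Resolution.  Literals are signed ints (3 means
# x3, -3 means not-x3); clauses are sets of literals.
def resolve(c1, c2, var):
    """Resolve two clauses on variable var."""
    return (c1 | c2) - {var, -var}

def learn_clause(conflict_clause, reasons, decisions):
    """reasons: implied true literal -> clause that forced it; decisions: set of decision literals."""
    learned = set(conflict_clause)
    while True:
        implied = [lit for lit in learned if -lit in reasons and -lit not in decisions]
        if not implied:
            return learned                    # only negations of decisions remain
        lit = implied[0]
        learned = resolve(learned, reasons[-lit], abs(lit))

if __name__ == "__main__":
    # Decisions x1, x2; x3 forced by (-x1 or -x2 or x3); conflict on (-x2 or -x3).
    print(learn_clause({-2, -3}, reasons={3: {-1, -2, 3}}, decisions={1, 2}))
    # The learned clause is {-1, -2}: "not both x1 and x2".
```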
207

Nogood Processing in CSPs

Katsirelos, George 19 January 2009 (has links)
The constraint satisfaction problem (CSP) is an NP-complete problem that provides a convenient framework for expressing many computationally hard problems. In addition, domain knowledge can be efficiently integrated into CSPs, providing a potentially exponential speedup in some cases. The CSP is closely related to the satisfiability problem, and many of the techniques developed for one have been transferred to the other. However, the recent dramatic improvements in SAT solvers that result from learning clauses during search have not been transferred successfully to CSP solvers. In this thesis we propose that this failure is due to a fundamental restriction of nogood learning, which is intended to be the CSP analogue of clause learning. This restriction means that nogood learning can exhibit a superpolynomial slowdown compared to clause learning in some cases. We show that the restriction can be lifted, delivering promising results. Integration of nogood learning in a CSP solver, however, presents an additional challenge, as a large body of domain knowledge is typically encoded in the form of domain-specific propagation algorithms called global constraints. Global constraints often completely eliminate the advantages of nogood learning. We demonstrate generic methods that partially alleviate the problem irrespective of the type of global constraint. We also show that more efficient methods can be integrated into specific global constraints, and demonstrate the feasibility of this approach on several widely used global constraints.
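For contrast with the generalized learning the thesis advocates, the sketch below shows the standard, restricted form of nogood recording in a chronological backtracker: when a dead end is reached, the current partial assignment is stored as a nogood and used to prune later branches. The encoding is hypothetical and deliberately naive.

```python
# Sketch: backtracking CSP search with naive (assignment-based) nogood recording.
def backtrack(assignment, variables, domains, consistent, nogoods):
    if len(assignment) == len(variables):
        return dict(assignment)                          # all variables assigned
    var = next(v for v in variables if v not in assignment)
    for val in domains[var]:
        assignment[var] = val
        current = frozenset(assignment.items())
        if any(ng <= current for ng in nogoods):         # pruned by a stored nogood
            continue
        if consistent(assignment):
            result = backtrack(assignment, variables, domains, consistent, nogoods)
            if result is not None:
                return result
    assignment.pop(var, None)
    nogoods.add(frozenset(assignment.items()))           # dead end: record the prefix as a nogood
    return None

if __name__ == "__main__":
    vars_, doms = ["x", "y"], {"x": [1, 2], "y": [1, 2]}
    all_different = lambda a: len(set(a.values())) == len(a)
    print(backtrack({}, vars_, doms, all_different, set()))   # e.g. {'x': 1, 'y': 2}
```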
208

Merging and Consistency Checking of Distributed Models

Sabetzadeh, Mehrdad 26 February 2009 (has links)
Large software projects are characterized by distributed environments consisting of teams at different organizations and geographical locations. These teams typically build multiple overlapping models, representing different perspectives, different versions across time, different variants in a product family, different development concerns, etc. Keeping track of the relationships between these models, constructing a global view, and managing consistency are major challenges. Model Management is concerned with describing the relationships between distributed models, i.e., models built in a distributed development environment, and providing systematic operators to manipulate these models and their relationships. Such operators include, among others, Match, for finding relationships between disparate models, Merge, for combining models with respect to known or hypothesized relationships between them, Slice, for producing projections of models and relationships based on given criteria, and Check-Consistency, for verifying models and relationships against the consistency properties of interest. In this thesis, we provide automated solutions for two key model management operators, Merge and Check-Consistency. The most novel aspects of our work on model merging are (1) the ability to combine arbitrarily large collections of interrelated models and (2) support for tolerating incompleteness and inconsistency. Our consistency checking technique employs model merging to reduce the problem of checking inter-model consistency to checking intra-model consistency of a merged model. This enables a flexible way of verifying global consistency properties that is not possible with other existing approaches. We develop a prototype tool, TReMer+, implementing our merge and consistency checking approaches. We use TReMer+ to demonstrate that our contributions facilitate understanding and refinement of the relationships between distributed models.
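A much-simplified sketch of a Merge operator in this spirit is shown below: two graph-shaped models are combined, given a correspondence relation, by unifying corresponding elements (via union-find) and taking the union of the remaining structure. The data structures are assumptions for illustration; TReMer+ works over richer models and tolerates incompleteness and inconsistency.

```python
# Sketch: merge two models (nodes, edges) under a correspondence relation.
def merge(model_a, model_b, correspondences):
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        if parent[x] != x:
            parent[x] = find(parent[x])       # path compression
        return parent[x]
    def union(x, y):
        parent[find(x)] = find(y)

    for a, b in correspondences:              # unify elements asserted to correspond
        union(("A", a), ("B", b))

    nodes = {find(("A", n)) for n in model_a[0]} | {find(("B", n)) for n in model_b[0]}
    edges = {(find(("A", s)), find(("A", t))) for s, t in model_a[1]} | \
            {(find(("B", s)), find(("B", t))) for s, t in model_b[1]}
    return nodes, edges

if __name__ == "__main__":
    a = ({"Order", "Customer"}, {("Order", "Customer")})
    b = ({"Purchase", "Client"}, {("Purchase", "Client")})
    print(merge(a, b, {("Order", "Purchase"), ("Customer", "Client")}))
```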
209

A Design-Rule-Based Constructive Approach to Building Traceable Software

Ghazarian, Arbi 18 February 2010 (has links)
The maintenance of large-scale software systems without trace information between development artifacts is a challenging task. This thesis focuses on the problem of supporting software maintenance through a mechanism for establishing traceability relations between a system's requirements and its code elements. The core of the proposed solution is a set of design rules that regulates the positional (e.g., package), structural (e.g., class), and behavioral (e.g., method) aspects of the system elements, thus establishing traceability between requirements and code. We identify several types of requirements, each of which can be supported by design rules. We introduce a rule-based approach to software construction and demonstrate that such a process can support maintainability through two mechanisms: (a) traceability and (b) a reduced defect rate. We distinguish our work from traditional traceability approaches in that we regard traceability as an intrinsic structural property of software systems. This view is in contrast to traditional approaches, where traceability is achieved extrinsically by creating maps such as traceability matrices or allocation tables. The approach presented in this thesis has been evaluated through several empirical studies as well as by building a proof-of-concept system. The results we obtained demonstrate the effectiveness and usefulness of our approach.
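Purely as an illustration of rules that make traceability an intrinsic property of the code, the sketch below checks an invented naming rule: every functional requirement must map, by convention, to exactly one handler class somewhere in the code base. The rule, names, and data structures are hypothetical, not those of the thesis.

```python
# Sketch: check an invented design rule that ties requirement IDs to code elements
# by naming convention, reporting requirements without a unique trace.
def check_traceability(requirements, code_index):
    """requirements: {req_id: feature name}; code_index: {package: [class names]}."""
    violations = []
    for req_id, feature in requirements.items():
        expected = feature.replace(" ", "") + "Handler"   # hypothetical structural rule
        hits = [pkg for pkg, classes in code_index.items() if expected in classes]
        if len(hits) != 1:                                # missing or ambiguous trace
            violations.append((req_id, expected, hits))
    return violations

if __name__ == "__main__":
    reqs = {"R12": "Create Order", "R13": "Cancel Order"}
    code = {"app.orders": ["CreateOrderHandler"], "app.misc": []}
    print(check_traceability(reqs, code))   # [('R13', 'CancelOrderHandler', [])]
```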
210

Abstraction for Verification and Refutation in Model Checking

Wei, Ou 13 April 2010 (has links)
Model checking is an automated technique for deciding whether a computer program satisfies a temporal property. Abstraction is the key to scaling model checking to industrial-sized problems: a large (or infinite) program is approximated by a smaller abstract model, and the model checking result over the abstract model is lifted back to the original program. In this thesis, we study abstraction in model checking based on exact-approximation, which allows for verification and refutation of temporal properties within the same abstraction framework. Our work in this thesis is driven by problems from both practical and theoretical aspects of exact-approximation. We first address the challenges of effectively applying symmetry reduction to virtually symmetric programs. Symmetry reduction can be seen as a strong exact-approximation technique, where a property holds on the original program if and only if it holds on the abstract model. In this thesis, we develop an efficient procedure for identifying virtual symmetry in programs. We also explore techniques for combining virtual symmetry with symbolic model checking. Our second study investigates model checking of recursive programs. Previously, we developed a software model checker for non-recursive programs based on exact-approximating predicate abstraction. In this thesis, we extend it to reachability and non-termination analysis of recursive programs. We propose a new program semantics that effectively removes call stacks while preserving reachability and non-termination. By doing this, we reduce recursive analysis to a non-recursive one, which allows us to reuse the existing abstract analysis in our software model checker to handle recursive programs. A variety of partial transition systems have been proposed for the construction of abstract models in exact-approximation. Our third study conducts a systematic analysis of them from both semantic and logical points of view. We analyze the connection between semantic and logical consistency of partial transition systems, compare the expressive power of different families of these formalisms, and discuss the precision of model checking over them. Abstraction based on exact-approximation uses a uniform framework to prove correctness and detect errors of computer programs. Our results in this thesis provide a better understanding of this approach and extend its applicability in practice.
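A small sketch of the exact-approximation flavour of abstraction appears below, using a hypothetical encoding of a partial model with may- and must-transitions: a check of "some successor satisfies p" returns True (verified), False (refuted), or None (inconclusive, so the abstraction should be refined). This is a textbook-style 3-valued check, not the thesis's model checker.

```python
# Sketch: 3-valued check of an EX-style property over an abstract state with
# must-transitions (under-approximation) and may-transitions (over-approximation).
def check_EX(state, prop, must, may):
    """prop maps a state to True, False, or None (unknown)."""
    if any(prop(s) is True for s in must.get(state, ())):
        return True          # verified: a guaranteed successor satisfies the property
    if all(prop(s) is False for s in may.get(state, ())):
        return False         # refuted: no possible successor can satisfy it
    return None              # inconclusive on this abstraction; refine

if __name__ == "__main__":
    must = {"a": ["b"]}
    may = {"a": ["b", "c"]}
    prop = lambda s: {"b": None, "c": False}[s]
    print(check_EX("a", prop, must, may))   # None -> abstraction too coarse
```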
