171

DescribeX: A Framework for Exploring and Querying XML Web Collections

Rizzolo, Flavio Carlos, 26 February 2009
The nature of semistructured data in web collections is evolving. Even when XML web documents are valid with respect to a schema, the actual structure of such documents exhibits significant variations across collections for several reasons: an XML schema may be very lax (e.g., to accommodate the flexibility needed to represent collections of documents in RSS feeds), a schema may be large, with different subsets used for different documents (common in industry standards like UBL), or open content models may allow arbitrary schemas to be mixed (e.g., RSS extensions like those used for podcasting). A schema alone may therefore not provide sufficient information for data management tasks that require knowledge of the actual structure of the collection. Web applications (such as processing RSS feeds or web service messages) rely on XPath-based data manipulation tools, and web developers need to use XPath queries effectively on increasingly large web collections containing hundreds of thousands of XML documents. Even when tasks need to deal with only a single document at a time, developers benefit from understanding the behaviour of XPath expressions across multiple documents (e.g., what will a query return when run over the thousands of hourly feeds collected during the last few months?). Dealing with the highly variable structure of such web collections poses additional challenges. This thesis introduces DescribeX, a powerful framework capable of describing arbitrarily complex XML summaries of web collections and providing support for more efficient evaluation of XPath workloads. DescribeX permits the declarative description of document structure using all axes and language constructs in XPath, and generalizes many of the XML indexing and summarization approaches in the literature. DescribeX supports the construction of heterogeneous summaries in which different document elements sharing a common structure can be declaratively defined and refined by means of path regular expressions on axes, or axis path regular expressions (AxPREs). DescribeX can significantly help in understanding both the structure of complex, heterogeneous XML collections and the behaviour of XPath queries evaluated on them. Experimental results demonstrate the scalability of DescribeX summary refinements and stabilizations (the key enablers for tailoring summaries) on multi-gigabyte web collections. A comparative study suggests that using a DescribeX summary created from a given workload can yield query evaluation times orders of magnitude better than existing summaries. DescribeX's lightweight approach of combining summaries with a file-at-a-time XPath processor can be a very competitive alternative, in terms of performance, to conventional fully-fledged XML query engines that provide DB-like functionality such as security, transaction processing, and native storage.
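
To make the summary idea concrete, here is a minimal sketch (not from the thesis): the partition below corresponds to just one simple summary criterion, the incoming label path, and all function names and sample data are illustrative. DescribeX's AxPREs generalize this by letting blocks be refined with expressions over any XPath axes, not just the parent path.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def label_path_summary(xml_strings):
    """Partition elements by incoming label path; return each extent's size."""
    extents = defaultdict(int)
    for doc in xml_strings:
        root = ET.fromstring(doc)
        stack = [(root, "/" + root.tag)]
        while stack:
            node, path = stack.pop()
            extents[path] += 1          # this element joins the block for `path`
            for child in node:
                stack.append((child, path + "/" + child.tag))
    return dict(extents)

feeds = [
    "<rss><channel><title>A</title><item><title>x</title></item></channel></rss>",
    "<rss><channel><item><title>y</title><enclosure/></item></channel></rss>",
]
print(label_path_summary(feeds))
# e.g. {'/rss': 2, '/rss/channel': 2, '/rss/channel/title': 1, '/rss/channel/item': 2, ...}
```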
172

Merging and Consistency Checking of Distributed Models

Sabetzadeh, Mehrdad, 26 February 2009
Large software projects are characterized by distributed environments consisting of teams at different organizations and geographical locations. These teams typically build multiple overlapping models, representing different perspectives, different versions across time, different variants in a product family, different development concerns, etc. Keeping track of the relationships between these models, constructing a global view, and managing consistency are major challenges. Model Management is concerned with describing the relationships between distributed models, i.e., models built in a distributed development environment, and with providing systematic operators to manipulate these models and their relationships. Such operators include, among others, Match, for finding relationships between disparate models; Merge, for combining models with respect to known or hypothesized relationships between them; Slice, for producing projections of models and relationships based on given criteria; and Check-Consistency, for verifying models and relationships against consistency properties of interest. In this thesis, we provide automated solutions for two key model management operators, Merge and Check-Consistency. The most novel aspects of our work on model merging are (1) the ability to combine arbitrarily large collections of interrelated models and (2) support for tolerating incompleteness and inconsistency. Our consistency checking technique employs model merging to reduce the problem of checking inter-model consistency to that of checking intra-model consistency of a merged model. This enables a flexible way of verifying global consistency properties that is not possible with other existing approaches. We develop a prototype tool, TReMer+, implementing our merge and consistency checking approaches. We use TReMer+ to demonstrate that our contributions facilitate understanding and refinement of the relationships between distributed models.
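
As a rough illustration of the Merge operator's shape, here is a minimal sketch (invented representation; TReMer+ additionally tolerates incomplete and inconsistent inputs): models are graphs, a correspondence relation says which nodes are "the same", and merging unifies corresponding nodes. Checking a global property of the merged model is then the reduction the thesis exploits for inter-model consistency.

```python
def merge(nodes_a, edges_a, nodes_b, edges_b, correspondence):
    """Merge two graph models, unifying the node pairs named in `correspondence`."""
    parent = {("A", n): ("A", n) for n in nodes_a}
    parent.update({("B", n): ("B", n) for n in nodes_b})

    def find(x):                        # union-find representative lookup
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in correspondence:         # identify corresponding nodes
        parent[find(("A", a))] = find(("B", b))

    merged_nodes = {find(x) for x in parent}
    merged_edges = {(find(("A", u)), find(("A", v))) for u, v in edges_a}
    merged_edges |= {(find(("B", u)), find(("B", v))) for u, v in edges_b}
    return merged_nodes, merged_edges

nodes, edges = merge({"Customer", "Order"}, {("Customer", "Order")},
                     {"Client"}, set(), [("Customer", "Client")])
print(len(nodes))  # 2: Customer and Client collapse into one merged node
```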
173

A Design-Rule-Based Constructive Approach to Building Traceable Software

Ghazarian, Arbi, 18 February 2010
The maintenance of large-scale software systems without trace information between development artifacts is a challenging task. This thesis focuses on the problem of supporting software maintenance through a mechanism for establishing traceability relations between the system requirements and its code elements. The core of the proposed solution is a set of design rules that regulates the positional (e.g., package), structural (e.g., class), and behavioral (e.g., method) aspects of the system elements, thus establishing traceability between requirements and code. We identify several types of requirements, each of which can be supported by design rules. We introduce a rule-based approach to software construction and demonstrate that such a process can support maintainability through two mechanisms: (a) traceability and (b) reduction of defect rate. We distinguish our work from traditional traceability approaches in that we regard traceability as an intrinsic structural property of software systems. This view contrasts with traditional approaches, where traceability is achieved extrinsically by creating maps such as traceability matrices or allocation tables. The approach presented in this thesis has been evaluated through several empirical studies as well as a proof-of-concept system. The results demonstrate the effectiveness and usefulness of our approach.
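
A minimal sketch of what checking one positional design rule might look like (the rule, naming scheme, and data are invented for illustration; the thesis defines rules over positional, structural, and behavioral aspects): if the rule "a class realizing a requirement in functional area F lives in package f" holds by construction, a requirement traces to its code without any external traceability matrix.

```python
def check_positional_rule(requirements, classes):
    """requirements: {req_id: functional_area}; classes: {name: (package, req_id)}."""
    violations = []
    for cls, (package, req_id) in classes.items():
        expected = requirements[req_id].lower().replace(" ", "")
        if package.split(".")[-1] != expected:   # class must live in its area's package
            violations.append((cls, package, expected))
    return violations

reqs = {"R1": "Billing", "R2": "User Accounts"}
code = {"InvoicePrinter": ("app.billing", "R1"),
        "LoginHandler": ("app.security", "R2")}   # breaks the rule
print(check_positional_rule(reqs, code))
# [('LoginHandler', 'app.security', 'useraccounts')]
```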
174

Face Routing with Guaranteed Message Delivery in Wireless Ad-hoc Networks

Guan, Xiaoyang, 01 March 2010
Face routing is a simple method for routing in wireless ad-hoc networks. It uses only location information about nodes and provably guarantees message delivery in static connected plane graphs. However, a static connected plane graph is often difficult to obtain in a real wireless network. This thesis extends face routing to more realistic models of wireless ad-hoc networks. We present a new version of face routing that generalizes and simplifies previous face routing protocols, and we develop techniques to apply face routing directly on general, non-planar network graphs. We also develop techniques for face routing to deal with changes to the graph that occur during routing. Using these techniques, we create a collection of face routing protocols for a series of increasingly general graph models and prove the correctness of these protocols.
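
The core step of face traversal can be sketched as follows (one common convention of the right-hand rule; the full protocols in the thesis add face switching toward the destination and handle non-planar graphs, and all names here are illustrative): on arriving at a node, the message leaves along the first edge counter-clockwise from the one it arrived on, which walks the boundary of one face of a plane graph.

```python
import math

def next_hop(pos, neighbors, current, came_from):
    """pos: node -> (x, y); neighbors: node -> list of adjacent nodes."""
    cx, cy = pos[current]
    ref = math.atan2(pos[came_from][1] - cy, pos[came_from][0] - cx)

    def ccw_offset(n):
        # angle of edge (current, n), measured counter-clockwise from the
        # arrival edge; exact 0 is remapped so we only backtrack as a last resort
        a = math.atan2(pos[n][1] - cy, pos[n][0] - cx)
        return (a - ref) % (2 * math.pi) or 2 * math.pi

    return min(neighbors[current], key=ccw_offset)

pos = {"u": (0.0, 0.0), "v": (1.0, 0.0), "w": (0.0, 1.0)}
nbrs = {"u": ["v", "w"]}
print(next_hop(pos, nbrs, "u", "v"))  # 'w': turn counter-clockwise off the arrival edge
```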
175

Astrometry.net: Automatic Recognition and Calibration of Astronomical Images

Lang, Dustin, 03 March 2010
We present Astrometry.net, a system for automatically recognizing and astrometrically calibrating astronomical images using the information in the image pixels alone. The system is based on the geometric hashing approach in computer vision: we use the geometric relationships between low-level features (stars and galaxies), which are relatively indistinctive, to create geometric features that are distinctive enough to recognize images covering less than one-millionth of the area of the sky. The geometric features are used to rapidly generate hypotheses about the location (the pointing, scale, and rotation) of an image on the sky. Each hypothesis is then evaluated in a Bayesian decision theory framework to ensure that most correct hypotheses are accepted while false hypotheses are almost never accepted. The feature-matching process is accelerated by a new fast and space-efficient kd-tree implementation. The Astrometry.net system is available via a web interface, and the software is released under an open-source license. It is being used by hundreds of individual astronomers and several large-scale projects, so we have at least partially achieved our goal of helping "to organize, annotate and make searchable all the world's astronomical information."
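
The flavour of the geometric hash can be sketched as follows (a simplification of the system's actual quad codes; function names and data are illustrative): the most widely separated pair of four stars defines a coordinate frame, and the positions of the other two stars in that frame form a code that is invariant to translation, rotation, and scale, so it can index a catalogue of the sky.

```python
import itertools, math

def quad_code(stars):
    """stars: four (x, y) positions; returns a similarity-invariant 4-vector."""
    # the most widely separated pair anchors the coordinate frame
    a, b = max(itertools.combinations(stars, 2), key=lambda p: math.dist(*p))
    c, d = [s for s in stars if s is not a and s is not b]
    dx, dy = b[0] - a[0], b[1] - a[1]
    norm = dx * dx + dy * dy

    def to_frame(p):
        # coordinates of p in the frame where a = (0, 0) and b = (1, 0)
        px, py = p[0] - a[0], p[1] - a[1]
        return ((px * dx + py * dy) / norm, (py * dx - px * dy) / norm)

    return to_frame(c) + to_frame(d)

stars = [(10.0, 10.0), (20.0, 10.0), (12.0, 14.0), (17.0, 11.0)]
print(quad_code(stars))  # (0.2, 0.4, 0.7, 0.1): unchanged under shift, rotation, zoom
```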
176

Abstraction for Verification and Refutation in Model Checking

Wei, Ou, 13 April 2010
Model checking is an automated technique for deciding whether a computer program satisfies a temporal property. Abstraction, which approximates a large (or infinite) program by a smaller abstract model and lifts the model checking result over the abstract model back to the original program, is the key to scaling model checking to industrial-sized problems. In this thesis, we study abstraction in model checking based on exact-approximation, which allows for verification and refutation of temporal properties within the same abstraction framework. Our work is driven by problems from both the practical and theoretical aspects of exact-approximation. We first address the challenges of effectively applying symmetry reduction to virtually symmetric programs. Symmetry reduction can be seen as a strong exact-approximation technique, where a property holds on the original program if and only if it holds on the abstract model. We develop an efficient procedure for identifying virtual symmetry in programs, and we explore techniques for combining virtual symmetry with symbolic model checking. Our second study investigates model checking of recursive programs. Previously, we developed a software model checker for non-recursive programs based on exact-approximating predicate abstraction. In this thesis, we extend it to reachability and non-termination analysis of recursive programs. We propose a new program semantics that effectively removes call stacks while preserving reachability and non-termination. By doing this, we reduce recursive analysis to a non-recursive one, which allows us to reuse the existing abstract analyses in our software model checker to handle recursive programs. A variety of partial transition systems have been proposed for the construction of abstract models in exact-approximation. Our third study conducts a systematic analysis of them from both semantic and logical points of view. We analyze the connection between semantic and logical consistency of partial transition systems, compare the expressive power of different families of these formalisms, and discuss the precision of model checking over them. Abstraction based on exact-approximation uses a uniform framework to prove correctness and detect errors of computer programs. Our results provide a better understanding of this approach and extend its applicability in practice.
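
The essence of exact-approximation can be sketched with a three-valued check over a partial transition system (a sketch in the usual must/may formulation, not the thesis's tooling; names are illustrative): "must" transitions exist in every refinement of the abstract model, "may" transitions possibly exist, and checking a property can then verify, refute, or report that the abstraction is too coarse.

```python
def check_EX(state, p, must, may):
    """Three-valued check of EX p. p: state -> bool; must/may: state -> successors."""
    if any(p(s) for s in must[state]):
        return True    # verified: a p-successor exists in every refinement
    if not any(p(s) for s in may[state]):
        return False   # refuted: no refinement can have a p-successor
    return None        # unknown: the abstraction is too coarse, so refine it

must = {"s0": {"s1"}}
may = {"s0": {"s1", "s2"}}
print(check_EX("s0", lambda s: s == "s2", must, may))  # None -> refine
```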
177

Monitoring and Diagnosis for Autonomic Systems: A Requirement Engineering Approach

Wang, Yiqiao, 21 April 2010
Autonomic computing holds great promise for software systems of the future, but at the same time poses great challenges for Software Engineering. Autonomic computing research aims to design software systems that self-configure, self-repair, self-optimize, and self-protect, so as to reduce software maintenance cost while improving performance. The aim of our research is to develop tool-supported methodologies for designing and operating autonomic systems. Like other researchers in this area, we assume that autonomic system architectures consist of monitoring, analysis/diagnosis, planning, and execution components that define a feedback loop and serve as the basis for system self-management. This thesis proposes an autonomic framework founded on models of requirements and design. The framework defines the normal operation of a software system in terms of models of its requirements (goal models) and/or operation (statechart models). These models determine what to monitor and how to interpret log data in order to diagnose failures. The monitoring component collects and manages log data. The diagnostic component analyzes log data, identifies failures, and pinpoints problematic components. We transform the diagnostic problem into a propositional satisfiability (SAT) problem solvable by off-the-shelf SAT solvers; log data are preprocessed into a compact propositional encoding that scales well with growing problem size. For repair, our compensation component executes compensation actions to restore the system to an earlier consistent state. When monitoring and diagnosis use requirements models, the framework repairs failures through reconfiguration: the reconfiguration component selects the configuration that contributes most positively to the system's non-functional requirements while changing the system minimally. The framework does not currently offer a repair mechanism when monitoring and diagnosis use statecharts. We illustrate our framework with two medium-sized, publicly available case studies, and we evaluate its performance through a series of experiments on randomly generated, progressively larger specifications. The results demonstrate that our approach scales well with problem size and can be applied to industrial-sized software applications.
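
The diagnosis-as-SAT step can be sketched roughly as follows (a toy encoding with a brute-force enumerator standing in for the off-the-shelf SAT solver; the thesis's actual encoding of goal models and logs is far more compact): "if a component worked, its postcondition holds in the log" becomes a clause, the observed log becomes unit clauses, and any satisfying assignment names the components that must have failed.

```python
from itertools import product

def solve(clauses, variables):
    """clauses: list of clauses; each clause is a list of (variable, polarity) literals."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[v] == pol for v, pol in clause) for clause in clauses):
            yield assignment

# "if component a worked, its postcondition holds": (not ok_a) or post_a,
# plus the observation from the log that post_a did NOT hold
clauses = [[("ok_a", False), ("post_a", True)],
           [("post_a", False)]]
for model in solve(clauses, ["ok_a", "post_a"]):
    print([v for v in model if v.startswith("ok_") and not model[v]])  # ['ok_a']: a failed
```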
178

Machine Learning Approaches to Biological Sequence and Phenotype Data Analysis

Min, Renqiang, 17 February 2011
To understand biology at a system level, this thesis presents novel machine learning algorithms that reveal the underlying mechanisms of how genes and their products function at different biological levels. Specifically, at the sequence level, building on kernel Support Vector Machines (SVMs), I propose a learned random-walk kernel and a learned empirical-map kernel to identify protein remote homology based solely on sequence data, along with a discriminative motif discovery algorithm to identify sequence motifs that characterize protein sequences' remote homology membership. The proposed approaches significantly outperform previous methods, especially on some challenging protein families. At the expression and protein level, using hierarchical Bayesian graphical models, I develop the first high-throughput computational predictive model that filters sequence-based predictions of microRNA targets by incorporating proteomic data for putative microRNA target genes, and I propose another probabilistic model that explores the underlying mechanisms of microRNA regulation by combining the expression profiles of messenger RNAs and microRNAs. At the cellular level, I further investigate how yeast genes manifest their functions in cell morphology by predicting gene function from the morphology data of yeast temperature-sensitive alleles. The developed prediction models enable biologists to choose interesting yeast essential genes and study their predicted novel functions.
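
The random-walk kernel idea can be sketched as follows (a plain fixed-step version with invented data; the learned variants in the thesis additionally tune the walk from labeled data): starting from a base similarity matrix over sequences, a few steps of a random walk propagate similarity through intermediate sequences, so remote homologs that share little direct similarity become measurably similar.

```python
import numpy as np

def random_walk_kernel(S, t=3):
    """S: symmetric nonnegative similarity matrix; returns a t-step walk kernel."""
    P = S / S.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    Pt = np.linalg.matrix_power(P, t)      # t steps of similarity propagation
    return (Pt + Pt.T) / 2                 # symmetrize before feeding an SVM

S = np.array([[1.0, 0.8, 0.1],
              [0.8, 1.0, 0.7],
              [0.1, 0.7, 1.0]])
print(random_walk_kernel(S)[0, 2])  # 0 and 2 now look similar via sequence 1
```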
179

Exploiting Coherence and Data-driven Models for Real-time Global Illumination

Nowrouzezahrai, Derek, 17 February 2011
Realistic computer-generated images are computed by combining geometric effects, reflectance models for captured and phenomenological materials, and real-world lighting according to mathematical models of physical light transport. Several important lighting phenomena must be considered when targeting realistic image simulation. A combination of soft and hard shadows, which arise from the interaction of surface and light geometries, provides necessary shape perception cues for a viewer. A wide variety of realistic materials, from physically captured reflectance datasets to empirically designed mathematical models, modulate virtual surface appearance in a manner that further dissuades a viewer from suspecting computational image synthesis rather than reality. Lastly, in many important cases, light reflects off many different surfaces before entering the eye. These secondary effects can be critical in grounding the viewer in a virtual world, since the human visual system is adapted to the physical world, where such effects are constantly in play. Simulating each of these effects is challenging due to its underlying complexity, and the net complexity compounds when several effects are combined. This thesis investigates real-time approaches for simulating these effects under stringent performance and memory constraints and with varying degrees of interactivity. To make these computations tractable under such constraints, I use data and signal analysis techniques to identify predictable patterns in the different spatial and angular signals used during image synthesis. The results of this analysis are exploited with several analytic and data-driven mathematical models that are both efficient and yield accurate approximations with predictable and controllable error.
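
The data-driven side can be sketched with a generic low-rank model (an illustration of the general pattern of exploiting coherence, not the thesis's specific models; the data here are synthetic): captured reflectance or radiance samples are highly coherent, so a small basis fitted to the data reconstructs them accurately, trading a large dataset for a few coefficients that are cheap to evaluate per frame.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-in for captured samples: 1000 signals that secretly live
# in an 8-dimensional subspace of a 256-dimensional measurement space
data = rng.standard_normal((1000, 8)) @ rng.standard_normal((8, 256))

mean = data.mean(axis=0)
U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)
basis = Vt[:8]                      # keep 8 principal directions
coeffs = (data - mean) @ basis.T    # a few coefficients per sample

approx = mean + coeffs @ basis      # cheap runtime reconstruction
print(np.abs(approx - data).max())  # ~1e-12: 8 coefficients suffice here
```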
180

Automatic Camera Control for Capturing Collaborative Meetings

Ranjan, Abhishek, 25 September 2009
The growing size of organizations is making it increasingly expensive to attend meetings and difficult to retain what happened in them. Meeting video capture systems exist to support video conferencing for remote participation or archiving for later review, but they have been regarded as ineffective. The reason is twofold. First, the conventional way of capturing video with a single static camera fails to capture both focus and context. Second, a single static view is often monotonous, making the video onerous to review. Human camera operators are often employed to capture effective videos with changing views, but this approach is expensive. In this thesis, we argue that camera views can be changed automatically to produce meeting videos effectively and inexpensively. We automate camera view control by automatically determining the visual focus of attention as a function of time and moving the camera to capture it. To determine the visual focus of attention for different meetings, we conducted experiments and interviewed television production professionals who capture meeting videos. Furthermore, television production principles were used to appropriately frame shots and switch between them. An evaluation of the automatic camera control system indicated significant benefits over a conventional static camera view. Applying television production principles resolved various issues related to shot stability and screen motion, and the performance of the resulting system approached that of a trained human camera crew. To further reduce the cost of automation, we also explored the application of computer vision and audio tracking. The results of these explorations provide empirical evidence for the utility of automatic camera control, encourage future research in this area, and suggest ways to handle the issues involved in the automation process.
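
One television-production principle of this kind, holding each shot for a minimum duration so the output stays stable rather than jittery while still following the visual focus of attention, can be sketched as follows (the timing constant, data format, and names are invented for illustration):

```python
MIN_SHOT_SECONDS = 4.0

def plan_shots(focus_track):
    """focus_track: list of (time_seconds, focus_target); returns the shot cut list."""
    shots, last_cut, current = [], None, None
    for t, target in focus_track:
        if target != current and (last_cut is None or t - last_cut >= MIN_SHOT_SECONDS):
            shots.append((t, target))      # cut to the new focus of attention
            last_cut, current = t, target  # and hold the shot for a while
    return shots

track = [(0, "presenter"), (1, "whiteboard"), (5, "whiteboard"), (9, "audience")]
print(plan_shots(track))  # [(0, 'presenter'), (5, 'whiteboard'), (9, 'audience')]
```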
