211

Face Routing with Guaranteed Message Delivery in Wireless Ad-hoc Networks

Guan, Xiaoyang 01 March 2010 (has links)
Face routing is a simple method for routing in wireless ad-hoc networks. It uses only the location information of nodes and provably guarantees message delivery in static connected plane graphs. However, a static connected plane graph is often difficult to obtain in a real wireless network. This thesis extends face routing to more realistic models of wireless ad-hoc networks. We present a new version of face routing that generalizes and simplifies previous face routing protocols, and we develop techniques to apply face routing directly on general, non-planar network graphs. We also develop techniques that allow face routing to cope with changes to the graph that occur during routing. Using these techniques, we create a collection of face routing protocols for a series of increasingly general graph models and prove their correctness.
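Face routing protocols are built on a single primitive: walking the boundary of one face of a plane graph with the right-hand rule. The sketch below shows only that primitive, not the thesis's protocols; it assumes the plane graph is given as node coordinates plus adjacency lists, and it ignores degeneracies (such as collinear neighbors) that real protocols must handle.

```python
import math

def next_cw(pos, nbrs, u, v):
    """Right-hand rule: arriving at v along edge (u, v), continue along the
    first edge clockwise from the reverse edge (v, u).  Repeating this step
    walks the boundary of a single face of a plane graph."""
    base = math.atan2(pos[u][1] - pos[v][1], pos[u][0] - pos[v][0])
    def cw(w):
        a = math.atan2(pos[w][1] - pos[v][1], pos[w][0] - pos[v][0])
        d = (base - a) % (2 * math.pi)
        return d if d > 0 else 2 * math.pi   # turn back to u only at a dead end
    return min(nbrs[v], key=cw)

def traverse_face(pos, nbrs, u, v):
    """Return the directed edges on the boundary of the face that the
    right-hand rule traces starting from directed edge (u, v)."""
    edges, e = [], (u, v)
    while True:
        edges.append(e)
        e = (e[1], next_cw(pos, nbrs, *e))
        if e == (u, v):
            return edges

# Unit square with one diagonal: the walk below traces triangle 0-1-2.
pos = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
nbrs = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
print(traverse_face(pos, nbrs, 0, 1))   # [(0, 1), (1, 2), (2, 0)]
```

A full face routing protocol repeats this walk face by face, switching to the next face whenever the boundary crosses the source-destination line at a point closer to the destination.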
212

Astrometry.net: Automatic Recognition and Calibration of Astronomical Images

Lang, Dustin 03 March 2010 (has links)
We present Astrometry.net, a system for automatically recognizing and astrometrically calibrating astronomical images using the information in the image pixels alone. The system is based on the geometric hashing approach from computer vision: we use the geometric relationships between low-level features (stars and galaxies), which are individually indistinctive, to create geometric features distinctive enough to recognize images that cover less than one-millionth of the area of the sky. The geometric features are used to rapidly generate hypotheses about the location of an image on the sky: its pointing, scale, and rotation. Each hypothesis is then evaluated in a Bayesian decision theory framework to ensure that most correct hypotheses are accepted while false hypotheses are almost never accepted. The feature-matching process is accelerated by a new fast and space-efficient kd-tree implementation. The Astrometry.net system is available via a web interface, and the software is released under an open-source license. It is being used by hundreds of individual astronomers and several large-scale projects, so we have at least partially achieved our goal of helping "to organize, annotate and make searchable all the world's astronomical information."
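The quad-based geometric features at the core of the system can be sketched compactly. In the sketch below (a simplification; the real indexing pipeline also breaks the remaining symmetries, such as swapping the two inner stars or reflecting the image), the two most widely separated stars of a four-star asterism define a similarity frame, and the positions of the other two stars in that frame form a hash code invariant to translation, rotation, and scale:

```python
import numpy as np

def quad_code(stars):
    """Geometric hash code for a 4-star asterism: map the most distant pair
    A, B to (0,0) and (1,1); the coordinates of the remaining stars C, D in
    that frame are invariant to translation, rotation, and scaling."""
    stars = np.asarray(stars, dtype=float)
    i, j = max(((i, j) for i in range(4) for j in range(i + 1, 4)),
               key=lambda p: np.linalg.norm(stars[p[0]] - stars[p[1]]))
    others = [k for k in range(4) if k not in (i, j)]
    za, zb = (complex(*stars[k]) for k in (i, j))
    # Similarity transform as complex arithmetic: A -> 0, B -> 1+1j.
    code = [(complex(*stars[k]) - za) / (zb - za) * (1 + 1j) for k in others]
    return (code[0].real, code[0].imag, code[1].real, code[1].imag)

# The code survives a rotation plus rescaling of the whole image:
quad = [(0.0, 0.0), (10.0, 10.0), (3.0, 5.0), (7.0, 2.0)]
rot = [((x - y) * 0.3, (x + y) * 0.3) for x, y in quad]  # rotate 45 deg, scale
assert np.allclose(quad_code(quad), quad_code(rot))
```

Codes like these are what get matched (via the kd-tree mentioned above) against an index of quads built from sky catalogues.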
213

Monitoring and Diagnosis for Autonomic Systems: A Requirement Engineering Approach

Wang, Yiqiao 21 April 2010 (has links)
Autonomic computing holds great promise for software systems of the future, but at the same time poses great challenges for software engineering. Autonomic computing research aims to design software systems that self-configure, self-repair, self-optimize and self-protect, so as to reduce software maintenance cost while improving performance. The aim of our research is to develop tool-supported methodologies for designing and operating autonomic systems. Like other researchers in this area, we assume that autonomic system architectures consist of monitoring, analysis/diagnosis, planning, and execution components that define a feedback loop and serve as the basis for system self-management. This thesis proposes an autonomic framework founded on models of requirements and design. The framework defines the normal operation of a software system in terms of models of its requirements (goal models) and/or operation (statechart models). These models determine what to monitor and how to interpret log data in order to diagnose failures. The monitoring component collects and manages log data. The diagnostic component analyzes log data, identifies failures, and pinpoints problematic components. We transform the diagnostic problem into a propositional satisfiability (SAT) problem solvable by off-the-shelf SAT solvers, preprocessing log data into a compact propositional encoding that scales well with growing problem size. For repair, our compensation component executes compensation actions to restore the system to an earlier consistent state. When monitoring and diagnosis use requirements models, the framework repairs failures through reconfiguration: the reconfiguration component selects the reconfiguration that contributes most positively to the system's non-functional requirements while changing the system minimally. The framework does not currently offer a repair mechanism when monitoring and diagnosis use statecharts. We illustrate our framework with two medium-sized, publicly available case studies, and we evaluate its performance through a series of experiments on randomly generated, progressively larger specifications. The results demonstrate that our approach scales well with problem size and can be applied to industrial-sized software applications.
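To make the SAT reduction concrete, here is a toy version of the idea (my illustration, not the thesis's actual encoding of goal models, which is far richer), using the pycosat bindings to the PicoSAT solver: an AND-decomposed goal, observations extracted from the log, and a solver call whose satisfying assignments pinpoint the failed component.

```python
import pycosat  # Python bindings to the PicoSAT SAT solver

# Propositional variables: 1 = task_a_ok, 2 = task_b_ok, 3 = goal_ok.
A_OK, B_OK, GOAL_OK = 1, 2, 3

clauses = [
    # goal_ok <-> task_a_ok AND task_b_ok (an AND-decomposed goal)
    [-A_OK, -B_OK, GOAL_OK],    # a_ok AND b_ok -> goal_ok
    [-GOAL_OK, A_OK],           # goal_ok -> a_ok
    [-GOAL_OK, B_OK],           # goal_ok -> b_ok
    # Observations from the log: task a succeeded, the goal failed.
    [A_OK],
    [-GOAL_OK],
]

model = pycosat.solve(clauses)
# The clauses force task_b_ok to be false in every satisfying assignment,
# so task b is the diagnosed culprit.
print("diagnosis: task b failed" if -B_OK in model else model)
```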
214

Machine Learning Approaches to Biological Sequence and Phenotype Data Analysis

Min, Renqiang 17 February 2011 (has links)
To understand biology at a system level, this thesis presents novel machine learning algorithms that reveal how genes and their products function at different biological levels. Specifically, at the sequence level, building on kernel Support Vector Machines (SVMs), I propose a learned random-walk kernel and a learned empirical-map kernel to identify protein remote homology from sequence data alone, and a discriminative motif discovery algorithm to identify sequence motifs that characterize the remote homology membership of protein sequences. The proposed approaches significantly outperform previous methods, especially on some challenging protein families. At the expression and protein levels, using hierarchical Bayesian graphical models, I develop the first high-throughput computational predictive model that filters sequence-based predictions of microRNA targets by incorporating the proteomic data of putative microRNA target genes, and I propose another probabilistic model that explores the mechanisms of microRNA regulation by combining the expression profiles of messenger RNAs and microRNAs. At the cellular level, I further investigate how yeast genes manifest their functions in cell morphology by predicting gene function from the morphology data of yeast temperature-sensitive alleles. The resulting prediction models enable biologists to choose interesting yeast essential genes and study their predicted novel functions.
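As a concrete illustration of the random-walk kernel idea (a generic construction with hand-fixed parameters; the thesis learns the walk parameters from labelled data, which is the actual contribution), one can turn a matrix of pairwise sequence similarities into a positive-definite Gram matrix by summing walks of all lengths:

```python
import numpy as np

def random_walk_kernel(W, gamma=0.5):
    """Diffusion-style random-walk kernel over a similarity graph:
    K = sum_t (gamma * S)^t = (I - gamma * S)^-1, with the symmetric
    normalization S = D^{-1/2} W D^{-1/2}.  W is a symmetric nonnegative
    similarity matrix (e.g., pairwise sequence similarities); gamma < 1
    keeps the series convergent and K positive definite."""
    W = np.asarray(W, dtype=float)
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))
    return np.linalg.inv(np.eye(len(W)) - gamma * S)

# The Gram matrix plugs straight into a kernel SVM, e.g. with scikit-learn:
#   from sklearn.svm import SVC
#   clf = SVC(kernel="precomputed").fit(random_walk_kernel(W_train), y_train)
```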
215

Exploiting Coherence and Data-driven Models for Real-time Global Illumination

Nowrouzezahrai, Derek 17 February 2011 (has links)
Realistic computer-generated images are computed by combining geometric effects, reflectance models for captured and phenomenological materials, and real-world lighting according to mathematical models of physical light transport. Several important lighting phenomena must be considered when targeting realistic image simulation. A combination of soft and hard shadows, which arise from the interaction of surface and light geometries, provides necessary shape-perception cues for a viewer. A wide variety of realistic materials, from physically captured reflectance datasets to empirically designed mathematical models, modulate virtual surface appearance in a manner that can further dissuade a viewer from suspecting computational image synthesis. Lastly, in many important cases, light reflects off many different surfaces before entering the eye. These secondary effects can be critical in grounding the viewer in a virtual world, since the human visual system is adapted to the physical world, where such effects are constantly in play. Simulating each of these effects is challenging due to its underlying complexity, and the net complexity is compounded when several effects are combined. This thesis investigates real-time approaches for simulating these effects under stringent performance and memory constraints and with varying degrees of interactivity. To make these computations tractable under these added constraints, I use data and signal analysis techniques to identify predictable patterns in the spatial and angular signals used during image synthesis. The results of this analysis are exploited with several analytic and data-driven mathematical models that are efficient and yield accurate approximations with predictable and controllable error.
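A recurring tool in this line of real-time rendering work is projecting low-frequency lighting signals onto a spherical-harmonic basis, where they become small coefficient vectors that are cheap to store and combine. The sketch below is a generic Monte Carlo SH projection (my illustration under SciPy's sph_harm conventions, not code from the thesis):

```python
import numpy as np
from scipy.special import sph_harm   # Y_n^m(theta=azimuth, phi=polar angle)

def project_sh(f, order=2, n_samples=20_000, seed=0):
    """Monte Carlo projection of a spherical function f(theta, phi) onto
    complex spherical harmonics up to the given order; returns {(n, m): c}."""
    rng = np.random.default_rng(seed)
    theta = 2 * np.pi * rng.random(n_samples)          # uniform azimuth
    phi = np.arccos(1 - 2 * rng.random(n_samples))     # uniform on the sphere
    vals, w = f(theta, phi), 4 * np.pi / n_samples     # quadrature weight
    return {(n, m): w * np.sum(vals * np.conj(sph_harm(m, n, theta, phi)))
            for n in range(order + 1) for m in range(-n, n + 1)}

def eval_sh(coeffs, theta, phi):
    """Reconstruct the band-limited approximation at a direction."""
    return sum(c * sph_harm(m, n, theta, phi) for (n, m), c in coeffs.items())

# Example: a clamped-cosine "sky" light, brightest toward the pole; most of
# its energy lives in the first three SH bands, so order 2 fits it well.
light = lambda theta, phi: np.maximum(np.cos(phi), 0.0)
coeffs = project_sh(light)
print(abs(eval_sh(coeffs, 0.0, 0.3) - light(0.0, 0.3)))  # small residual
```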
216

Processing Desktop Work on a Large High-resolution Display: Studies and Designs

Bi, Xiaojun 05 January 2012 (has links)
With the ever-increasing amount of digital information, information workers desire more screen real estate for their daily desktop work. Thanks to rapid advances in display technology, big screens are increasingly affordable and have gradually been adopted in desktop computing environments. A large wall-size high-resolution display, a recently emerging class of display with a huge visualization surface, could potentially benefit information processing work. In this dissertation we investigate such a large display as the primary working space for information processing work. We first conducted a longitudinal diary study and three controlled experiments investigating the effects of a large display on information processing work. The longitudinal diary study examines large display use in a personal desktop computing context by comparing it with single- and dual-monitor setups. The three controlled experiments further investigate the effects of the two factors that determine a display's resolution, physical size and pixel density, on users' performance and behaviors. The diary study reveals the distinct behavior patterns of large display users in partitioning screen space and managing windows, while the controlled experiments reveal the effects of the physical size and pixel density of a display on different information processing tasks. Aside from studying a continuous large display, we also articulate, via a series of controlled experiments, how the interior bezels of a tiled-monitor large display affect users' performance and behaviors in basic visual search and action tasks. Based on this understanding of large display effects and users' behavior patterns, we then design new interaction techniques to address a big challenge of working on a large display: managing overflowing windows. We design and implement WallTop, a window management system prototype oriented to large displays. It includes a set of interaction techniques that provide greater flexibility for managing windows. Usability tests show that users can quickly learn the new techniques and apply them to realistic window management tasks with increased efficiency on a large display.
217

Otherworld - Giving Applications a Chance to Survive OS Kernel Crashes

Depoutovitch, Alexandre 06 January 2012 (has links)
The default behavior of all commodity operating systems today is to restart the system when a critical error is encountered in the kernel. This terminates all running applications with an attendant loss of "work in progress" that is non-persistent. Our thesis is that an operating system kernel is simply a component of a larger software system, which is logically well isolated from other components, such as applications, and therefore it should be possible to reboot the kernel without terminating everything else running on the same system. In order to prove this thesis, we designed and implemented a new mechanism, called Otherworld, that microreboots the operating system kernel when a critical error is encountered in the kernel, and it does so without clobbering the state of the running applications. After the kernel microreboot, Otherworld attempts to resurrect the applications that were running at the time of failure. It does so by restoring the application memory spaces, open files and other resources. In the default case it then continues executing the processes from the point at which they were interrupted by the failure. Optionally, applications can have user-level recovery procedures registered with the kernel, in which case Otherworld passes control to these procedures after having restored their process state. Recovery procedures might check the integrity of application data and restore resources Otherworld was not able to restore. We implemented Otherworld in Linux, but we believe that the technique can be applied to all commodity operating systems. In an extensive set of experiments on real-world applications (MySQL, Apache/PHP, Joe, vi), we show that Otherworld is capable of successfully microrebooting the kernel and restoring the applications in over 97% of the cases. In the default case, Otherworld adds negligible overhead to normal execution. In an enhanced mode, Otherworld can provide extra application memory protection with overhead of between 4% and 12%.
218

Multi-Camera Active-vision System Reconfiguration for Deformable Object Motion Capture

Schacter, David 19 March 2014 (has links)
To improve the accuracy of capturing the motion of deformable objects, a reconfigurable multi-camera active-vision system that can dynamically reposition its cameras online is proposed. A design for such a system is presented, along with a methodology for selecting near-optimal positions and orientations for the set of cameras. The active-vision system accounts for the deformation of the object of interest by tracking triangulated vertices in order to predict the shape of the object at subsequent demand instants. It then selects a system configuration that minimizes the expected error in the recovered position of each of these vertices. Extensive simulations and experiments have verified that using the proposed reconfigurable system to both translate and rotate cameras to near-optimal poses is tangibly superior, in minimizing the error in recovered vertex positions, to using cameras that are either static or can only rotate.
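The configuration-selection step can be illustrated with a toy error proxy (my simplification; the thesis uses a proper model of the expected error in recovered vertex positions): triangulation uncertainty blows up as viewing directions become parallel, so score each candidate configuration by the best pairwise viewing angle at each predicted vertex and keep the cheapest configuration.

```python
import numpy as np
from itertools import combinations

def config_score(cams, vertices):
    """Toy stand-in for expected triangulation error: for each predicted
    vertex, take 1/sin(angle) of the best-separated camera pair (depth
    uncertainty grows as viewing rays become parallel) and sum over
    vertices.  Lower is better."""
    score = 0.0
    for v in vertices:
        dirs = v - cams
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        best = max(np.linalg.norm(np.cross(a, b))        # |sin| of pair angle
                   for a, b in combinations(dirs, 2))
        score += 1.0 / max(best, 1e-9)
    return score

def select_configuration(candidates, predicted_vertices):
    """Pick the candidate camera configuration (an array of 3-D camera
    positions) that minimizes the proxy for the shape predicted at the
    next demand instant."""
    v = np.asarray(predicted_vertices, dtype=float)
    return min(candidates, key=lambda c: config_score(np.asarray(c, float), v))
```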
219

Pattern Discovery in DNA Sequences

Yan, Rui 20 March 2014 (has links)
A pattern is a relatively short sequence that represents a phenomenon in a set of sequences. Not all short sequences are patterns; only those that are statistically significant are referred to as patterns or motifs. Pattern discovery methods analyze sequences and attempt to identify and characterize meaningful patterns. This thesis extends the application of pattern discovery algorithms to a new problem domain: Single Nucleotide Polymorphism (SNP) classification. SNPs are single base-pair (bp) variations in the genome and are probably the most common form of genetic variation; on average, one in every thousand bps may be an SNP. The function of most SNPs, especially those not associated with protein sequence changes, remains unclear. However, genome-wide linkage analyses have associated many SNPs with disorders ranging from Crohn's disease to cancer, and with quantitative traits such as height or hair color. As a result, many groups are working to predict the functional effects of individual SNPs. In contrast, very little research has examined the causes of SNPs: why do SNPs occur where they do? This thesis addresses this question by using pattern discovery algorithms to study non-coding DNA sequences. The hypothesis is that short DNA patterns can be used to predict SNPs; for example, patterns found in the SNP sequence might block the DNA repair mechanism at the SNP site, thus causing the SNP to occur. To test the hypothesis, a model is developed to predict SNPs using pattern discovery methods. The results show that SNP prediction with pattern discovery methods is weak (50 ± 2%), whereas machine learning classification algorithms achieve prediction accuracy as high as 68%. To determine whether the poor performance of pattern discovery is due to data characteristics (such as sequence length or pattern length) or to the specific biological problem (SNP prediction), a survey was conducted profiling eight representative pattern discovery methods at multiple parameter settings on 6,754 real biological datasets. This is the first systematic review of pattern discovery methods that assesses prediction accuracy, CPU usage and memory consumption. It was found that current pattern discovery methods do not consider positional information and do not handle short sequences (<150 bps) well, including SNP sequences. Therefore, this thesis proposes a new supervised pattern discovery classification algorithm, referred to as Weighted-Position Pattern Discovery and Classification (WPPDC). WPPDC exploits positional information to identify positionally enriched motifs and selects motifs with high information content for further classification. A tree structure is applied to WPPDC (yielding T-WPPDC) to reduce algorithmic complexity. Compared to existing pattern discovery methods, T-WPPDC not only showed consistently superior prediction accuracy but also generated patterns with positional information. Machine-learning classification methods (such as Random Forests) showed comparable prediction accuracy but, unlike T-WPPDC, are unable to generate SNP-associated patterns.
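As a flavour of the position-aware representation (a simplified illustration of the positional idea only; T-WPPDC itself discovers and weights motifs quite differently), one can keep each k-mer tied to where it occurs in a fixed-length flanking sequence and feed the result to an off-the-shelf classifier such as a Random Forest:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def positional_kmer_features(seqs, k=3):
    """One-hot encode which k-mer occurs at each position of fixed-length
    DNA sequences (4**k possible k-mers per position).  Keeping the
    position of each k-mer is precisely what the surveyed pattern
    discovery tools were found to ignore."""
    idx = {c: i for i, c in enumerate("ACGT")}
    n_pos = len(seqs[0]) - k + 1
    X = np.zeros((len(seqs), n_pos * 4 ** k), dtype=np.uint8)
    for row, s in enumerate(seqs):
        for p in range(n_pos):
            code = 0
            for c in s[p:p + k]:
                code = code * 4 + idx[c]
            X[row, p * 4 ** k + code] = 1
    return X

# Hypothetical inputs: flanking sequences around SNP and non-SNP sites.
#   X = positional_kmer_features(snp_seqs + background_seqs)
#   y = [1] * len(snp_seqs) + [0] * len(background_seqs)
#   clf = RandomForestClassifier(n_estimators=300).fit(X, y)
```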