  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

The evolution of an online environment to support the studio based pedagogical approach for computing education

Agrawal, Anukrati, January 2009 (has links) (PDF)
Thesis (M.S. in computer science)--Washington State University, August 2009. / Title from PDF title page (viewed on Sept. 22, 2009). "School of Electrical Engineering and Computer Science." Includes bibliographical references (p. 63-68).
2

User Interfaces for Wearable Computers: Development and Evaluation

Witt, Hendrik. January 2008 (has links)
Diss. Univ. Bremen, 2007. / Computer Science (Springer-11645).
3

Extensible Resource Management for Networked Virtual Computing

Grit, Laura Ellen 14 December 2007 (has links)
Advances in server virtualization offer new mechanisms to provide resource management for shared server infrastructures. Resource sharing requires coordination across self-interested system participants (e.g., providers from different administrative domains or third-party brokering intermediaries). Assignments of the shared infrastructure must be fluid and adaptive to meet the dynamic demands of clients. This thesis addresses the hypothesis that a new, foundational layer for virtual computing is sufficiently powerful to support a diversity of resource management needs in a general and uniform manner. Incorporating resource management at a lower virtual computing layer provides the ability to dynamically share server infrastructure between multiple hosted software environments (e.g., grid computing middleware and job execution systems). Resource assignments within the virtual layer occur through a lease abstraction, and extensible policy modules define management functions. This research makes the following contributions:

* Defines the foundation for resource management in a virtual computing layer. Defines protocols and extensible interfaces for formulating resource contracts between system participants. Separates resource management functionalities across infrastructure providers, application controllers, and brokering intermediaries, and explores the implications and limitations of this structure.

* Demonstrates policy extensibility by implementing a virtual computing layer prototype, Shirako, and evaluating a range of resource arbitration policies for various objectives. Provides results with proportional share, priority, worst-fit, and multi-dimensional resource slivering.

* Defines a proportional share policy, WINKS, that integrates a fair queuing algorithm with a calendar scheduler. Provides a comprehensive set of features and extensions for virtual computing systems (e.g., requests for multiple resources, advance reservations, multi-dimensional allocation, and dynamic resource pools). Shows the policy preserves fairness properties across queue transformations and calendar operations needed to implement these extensions.

* Explores at what layer, and at what granularity, decisions about resource control should occur. Shows that resource management at a lower layer can expose dynamic resource control to hosted middleware, at a modest cost in fidelity to the goals of the policy. / Dissertation
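The abstract names WINKS, a proportional share policy that integrates fair queuing with a calendar scheduler, without spelling out the algorithm. As a rough illustration of the fair-queuing half only, here is a minimal start-time fair queuing sketch in Python; the client names, weights, and request costs are hypothetical, and this is not the WINKS implementation.

```python
import heapq
from collections import defaultdict

class StartTimeFairQueue:
    """Start-time fair queuing over abstract resource requests. A toy
    proportional-share arbiter, not the WINKS policy from the dissertation."""

    def __init__(self):
        self.virtual_time = 0.0
        self.last_finish = defaultdict(float)   # per-client finish tag
        self.pending = []                       # heap of (start_tag, seq, client, cost)
        self.seq = 0

    def enqueue(self, client, cost, weight):
        # Start tag: no earlier than the system virtual time and no earlier
        # than the client's previous finish tag; finish tags grow by cost/weight.
        start = max(self.virtual_time, self.last_finish[client])
        self.last_finish[client] = start + cost / weight
        heapq.heappush(self.pending, (start, self.seq, client, cost))
        self.seq += 1

    def dispatch(self):
        # Grant the request with the smallest start tag and advance virtual time.
        start, _, client, cost = heapq.heappop(self.pending)
        self.virtual_time = start
        return client, cost

if __name__ == "__main__":
    q = StartTimeFairQueue()
    for _ in range(4):                    # client "a" holds twice the share of "b"
        q.enqueue("a", cost=10, weight=2.0)
    for _ in range(2):
        q.enqueue("b", cost=10, weight=1.0)
    print([q.dispatch()[0] for _ in range(6)])   # roughly two "a" grants per "b" grant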
4

DNA Based Self-Assembly and Nanorobotics: Theory and Experiments

Sahu, Sudheer 10 December 2007 (has links)
We study the following fundamental questions in DNA based self-assembly and nanorobotics: How to control errors in self-assembly? How to construct complex nanoscale objects in simpler ways? How to transport nanoscale objects in a programmable manner?

Fault tolerance in self-assembly: Fault tolerant self-assembly is important for nanofabrication and nanocomputing applications. It is desirable to design compact error-resilient schemes that do not result in an increase in the original size of the assemblies. We present a comprehensive theory of compact error-resilient schemes for algorithmic self-assembly in two and three dimensions, and discuss the limitations and capabilities of redundancy based compact error correction schemes.

New and powerful self-assembly model: We develop a reversible self-assembly model in which the glue strength between two juxtaposed tiles is a function of the time they have been in neighboring positions. Under our time-dependent glue model, we can rigorously study and demonstrate catalysis and self-replication in the tile assembly. We can assemble thin rectangles of size k×N using O(log N / log log N) types of tiles in our model.

Modeling DNA based nanorobotical devices: We design a framework for a discrete event simulator for DNA based nanorobotical systems. It has two major components: a physical model and a kinetic model. The physical model captures the conformational changes in molecules, molecular motions and molecular collisions. The kinetic model governs the modeling of various reactions in DNA nanorobotical systems, including hybridization, dehybridization and strand displacement.

DNA-based molecular devices using DNAzyme: We design a class of nanodevices that are autonomous, programmable, and require no protein enzymes. Our DNAzyme based designs include (1) DNAzyme FSA, a finite state automata device, (2) DNAzyme router for programmable routing of nanostructures on a two-dimensional DNA addressable lattice, and (3) DNAzyme doctor, a medical-related application that responds to the under-expression or over-expression of various RNAs by releasing an RNA.

Nanomotor powered by polymerase: We, for the first time, attempt to harness the mechanical energy of the polymerase φ29 to construct a polymerase based nanomotor that pushes a cargo on a DNA track. The polymerase based nanomotor has the advantage of the polymerase's high speed. / Dissertation
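The DNAzyme FSA above is a molecular finite state automaton whose states and transitions are realized by DNA strands. Purely as a software analogue of the computation such a device performs, here is a toy two-state machine in Python; the alphabet, state names, and acceptance condition are invented for illustration.

```python
# Toy two-state automaton accepting strings with an even number of 'b's.
# Only a software analogue of what a DNAzyme FSA computes; the actual device
# encodes states and transitions in DNA strands rather than a dictionary.
TRANSITIONS = {
    ("even", "a"): "even",
    ("even", "b"): "odd",
    ("odd", "a"): "odd",
    ("odd", "b"): "even",
}

def run_fsa(symbols, start="even", accepting=("even",)):
    state = start
    for s in symbols:
        state = TRANSITIONS[(state, s)]   # one transition per input symbol
    return state in accepting

print(run_fsa("abba"))   # True: two 'b's
print(run_fsa("ab"))     # False: one 'b'
```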
5

Handling Resource Constraints and Scalability in Continuous Query Processing

Xie, Junyi 12 December 2007 (has links)
Recent years have witnessed a rapid rise of a new class of data-intensive applications in which data arrive as transient, high-volume streams. Financial data processing, network monitoring, and sensor networks are all examples of such applications. Traditional relational database systems model data as persistent relations, but for this new class of applications, it is more appropriate to model data as unbounded streams with continuously arriving tuples. The stream data model necessitates a new style of queries called continuous queries. Unlike a one-time query executed over a single finite and static database state, a continuous query continuously generates new result tuples as new stream tuples arrive. This dissertation tackles a range of challenges that arise in processing continuous queries. Specifically, for resource-constrained settings, this dissertation proposes techniques for coping with response-time and memory constraints. To scale to a large number of continuous queries running concurrently, this dissertation proposes techniques for indexing continuous queries as data, and processing and optimizing incoming stream tuples as queries over such data. A common theme underlying most of these techniques is exploiting the characteristics of the data and the continuous queries, e.g., asymmetry in the costs of processing different streams, temporal trends in the values of stream attributes, and clusteredness that arises in a large number of continuous queries. / Dissertation
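One idea above is to invert the usual roles: store the continuous queries as data and treat each arriving tuple as a query over them. The sketch below illustrates that inversion for simple range predicates in Python; the query identifiers and values are made up, and a real engine would use an interval or segment index rather than the linear scan kept here for clarity.

```python
class RangeQueryIndex:
    """Standing range predicates (lo <= x <= hi) stored as data; each arriving
    stream tuple is evaluated against all of them. A simplified sketch of the
    'index the queries' idea, not the dissertation's actual data structures."""

    def __init__(self):
        self.queries = []                   # (lo, hi, query_id)

    def register(self, qid, lo, hi):
        self.queries.append((lo, hi, qid))

    def match(self, value):
        # Linear scan kept for readability; an interval tree or segment index
        # would make this sublinear in the number of registered queries.
        return [qid for lo, hi, qid in self.queries if lo <= value <= hi]

idx = RangeQueryIndex()
idx.register("q1", lo=10, hi=20)   # hypothetical continuous queries on one stream attribute
idx.register("q2", lo=15, hi=30)
for value in (12, 18, 25, 40):
    print(value, "matches", idx.match(value))
```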
6

Low-cost Methods for Error Detection in Multi-core Systems

Meixner, Albert 10 April 2008 (has links)
There is broad consensus among academic and industrial researchers in computer architecture that hardware faults, both transient and permanent, will become significantly more frequent as CMOS feature sizes continue to shrink. Circuit-level techniques alone are insufficient to overcome this problem, and therefore system designers have begun to add fault tolerance features to processor micro-architectures and memory systems. Many of the techniques used today were developed in a time when fault coverage was the primary optimization target; hardware, power, and performance costs were only secondary concerns. These priorities do not accurately reflect the needs of today's commodity systems, which are very sensitive to manufacturing and performance costs and can trade off some amount of fault coverage to reduce these costs. In my dissertation work I have developed novel error detection techniques with significantly lower area and performance costs than those traditionally used in high availability designs. These savings were made possible by a guiding principle of verifying high-level system tasks rather than checking correct operation of specific low-level components. This high-level, end-to-end approach to error detection has distinct advantages over checking low-level components in terms of applicability to a wide range of systems, coverage of complex component interactions, and implementation cost. The major challenge in developing end-to-end checkers is to find high-level tasks that are both relevant and verifiable at runtime. I approached this problem by decomposing system-level tasks into sub-tasks that are more easily verifiable and, when combined, are sufficient to ensure correctness of a high-level task. Such a decomposition is a step back from a full end-to-end design and requires additional assumptions about the underlying system, but I found the resulting cost and complexity benefits to outweigh the loss in flexibility that comes with them. I have applied the ideas of task decomposition and high-level checking to processor cores, memory systems, and the I/O system, in order to develop low-cost checkers for each of these subsystems. The checking mechanisms resulting from this work are highly effective in detecting errors and incur lower hardware and performance cost than mechanisms with comparable error coverage proposed in the past. / Dissertation
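To make the principle of verifying a high-level task rather than a low-level component concrete, here is a small software analogue in Python: an addition checked end to end by a residue code instead of by duplicating the adder. The fault model, modulus, and fault rate are invented for illustration; the dissertation's checkers are hardware mechanisms for cores, memories, and I/O, not Python functions.

```python
import random

K = 251  # prime check modulus, chosen for illustration

def unreliable_add(a, b, fault_rate=0.001):
    """Stand-in for a hardware adder that occasionally flips one result bit."""
    result = a + b
    if random.random() < fault_rate:
        result ^= 1 << random.randrange(32)
    return result

def checked_add(a, b):
    """Verify the high-level invariant (a + b) mod K instead of duplicating the
    adder. A single bit flip changes the result by +/- 2**i, which is never
    0 mod a prime, so every such fault is caught."""
    result = unreliable_add(a, b)
    if result % K != (a % K + b % K) % K:
        raise RuntimeError("arithmetic error detected")
    return result

random.seed(0)
detected = 0
for _ in range(100_000):
    a, b = random.randrange(1 << 20), random.randrange(1 << 20)
    try:
        checked_add(a, b)
    except RuntimeError:
        detected += 1
print("detected", detected, "faulty additions")
```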
7

Towards a Complete Transcriptional Regulatory Code: Improved Motif Discovery Using Informative Priors

Narlikar, Leelavati 24 April 2008 (has links)
Transcriptional regulation is the primary mechanism employed by the cell to ensure coordinated expression of its numerous genes. A key component of this process is the binding of proteins called transcription factors (TFs) to corresponding regulatory sites on the DNA. Understanding where exactly these TFs bind, under what conditions they are active, and which genes they regulate is all part of deciphering the transcriptional regulatory code. An important step towards solving this problem is the identification of DNA binding specificities, represented as motifs, for all TFs. In spite of an explosion of TF binding data from high-throughput technologies, the problem of motif discovery remains unsolved, due to the short length and degeneracy of binding sites.

We introduce PRIORITY, a Gibbs sampling-based approach, which incorporates informative positional priors into a probabilistic framework, to find significant motifs from high-throughput TF binding data. We use different data sources to build our positional priors and apply them to yeast ChIP-chip data:

* TFs can be classified into several structural classes based on their DNA-binding domains. Using a Bayesian learning algorithm, we show that it is possible to predict the class of a TF with remarkable accuracy, using information solely from its DNA binding sites. We further incorporate these results in the form of informative priors into PRIORITY, which learns the structural class of the TF in addition to its motif.

* In the nucleus, DNA is present in the form of chromatin--wrapped around nucleosomes--with certain regions being more accessible to TFs than others. It has been shown that functional binding sites are generally located in nucleosome-free regions. We use nucleosome occupancy predictions to compute a novel positional prior that biases the search towards the more accessible regions, thereby enriching the motif signal.

* Functional elements are often conserved across related species. Most conventional methods that exploit this fact use alignments. However, multiple alignments cannot always capture relocation and reversed orientation of binding sites across species. We propose a new alignment-free technique that not only accounts for these transformations, but is much faster than conventional methods.

All our priors significantly outperform conventional methods, finding motifs matching literature for 52 TFs. We produce a genome-wide map of TF binding sites in yeast based on these and other novel motif predictions. / Dissertation
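To show where an informative positional prior enters a Gibbs sampler, here is a bare-bones motif sampler in Python. The sequences, motif width, and the flat prior are placeholders (an informative prior would, for example, down-weight positions predicted to be occupied by nucleosomes); this sketches the idea behind PRIORITY, not the tool itself.

```python
import random
from collections import Counter

BASES = "ACGT"

def pwm_from_sites(sites, pseudo=0.5):
    """Position weight matrix built from the currently assigned motif sites."""
    width = len(sites[0])
    total = len(sites) + 4 * pseudo
    return [
        {b: (Counter(s[j] for s in sites)[b] + pseudo) / total for b in BASES}
        for j in range(width)
    ]

def site_probability(pwm, window):
    prob = 1.0
    for j, base in enumerate(window):
        prob *= pwm[j][base]
    return prob

def sample_site(seq, pwm, prior, width):
    """One Gibbs step: pick a start position with probability proportional to
    the PWM likelihood times the positional prior at that position."""
    weights = [site_probability(pwm, seq[i:i + width]) * prior[i]
               for i in range(len(seq) - width + 1)]
    return random.choices(range(len(weights)), weights=weights, k=1)[0]

random.seed(1)
seqs = ["ACGTACGTTTGACGT", "TTGACGTACGTACGA", "ACGTTTGACGTACGT"]   # toy input
width = 5
sites = [s[:width] for s in seqs]                                  # arbitrary initialization
priors = [[1.0] * (len(s) - width + 1) for s in seqs]              # flat; swap in an informative prior here
for _ in range(20):
    for i, s in enumerate(seqs):
        pwm = pwm_from_sites([x for j, x in enumerate(sites) if j != i])
        pos = sample_site(s, pwm, priors[i], width)
        sites[i] = s[pos:pos + width]
print(sites)
```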
8

Foresight: Countering Malware through Cooperative Forensics Sharing

Zaffar, Fareed M 08 August 2008 (has links)
With the Internet's rapid growth has come a proportional increase in exposure to attacks, misuse and abuse. Modern viruses and worms are causing damage much more quickly than those created in the past. The fast replication and epidemic nature of their spread limits the time security experts have to respond and to protect and fortify their systems. A pathogen might infect thousands of machines and cascade across the network, producing consequences that could overwhelm the Internet very quickly. Such attacks have the potential of making a human response to them all but ineffective. While pathogens are becoming much more aggressive, there is also a significant delay between the identification of a new threat and the generation of a cure for it. Worms and viruses have been able to cause significant damage in this 'submission to cure generation' window of vulnerability. Having timely and credible security information is thus becoming critical to network and security management.

The main hypothesis behind our research is that sharing threat information and forensic evidence among cooperating domains yields important benefits for dealing with modern-day pathogens in a timely fashion. The idea is that each host might have incomplete, approximate or inexact information about a particular threat or attack. We can get a more comprehensive view of the extent and nature of developing threats by observing suspect behavior and combining information gathered from different vantage points. A better understanding of the pathogen allows for effective and timely immunization in order to thwart epidemic cascading of threats. We also propose cooperative policing mechanisms as an effective approach to trace large-scale distributed threats like DDoS attacks. Increased cooperation amongst domains helps to mitigate such attacks nearer to the sources so that their effects on the overall network are minimized.

This thesis leverages experiences and ideas from the fields of cryptography, machine learning, security and multi-agent systems to build Foresight: an Internet-scale threat analysis, indication, early warning and response architecture. Foresight allows cooperating domains to share a global threat view in order to detect zero-day pathogens and isolate them using cooperative policing mechanisms.

- We describe a novel behavioral signature scheme to extract a generalized footprint for multi-modal threats. Blended or multi-modal threats combine the characteristics of viruses, worms, Trojan horses and malicious code to initiate, transmit and spread attacks. By using multiple methods and techniques, blended threats can quickly spread and surpass defenses that address only a single type of malicious activity, and hence are much more difficult to defend against. System performance analysis, through trace-based simulations, shows significant benefits for sharing forensics data between cooperating domains.

- We present Mail-trap, an anomaly-based system that catches zero-day email-borne pathogens and retards their growth through effective behavior monitoring of mail traffic and active forensics sharing between cooperating domains. Mail-trap relies on Foresight's cooperative policing model to identify and pre-empt email-borne threats. Our results show that behavior monitoring alone can be an effective tool for malware detection. Cooperation amongst domains greatly increases the effectiveness of our approach. Domains are able to pre-empt attacks and respond to malware behavior that they have not seen before. We also analyze various immunization/prevention and containment techniques.

- We present AMP, a service architecture for countering distributed denial of service attacks using alert sharing and cooperative policing mechanisms. Our simulation architecture enables us to test the system with actual, benign and worm traffic traces, and realistic network topologies. AMP does not require universal deployment and is complementary to other schemes for countering DDoS attacks; with the use of collaborative policing techniques, however, its performance can be improved greatly.

- We also present a prototype implementation of Paranoid, a novel global secure file sharing mechanism which can be used to allow secure resource access across administrative domains. We describe the design of a trust-based cooperation scheme to create a global community which is more accountable and hence less vulnerable to attacks and abuse. / Dissertation
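A minimal sketch of the evidence-pooling idea behind cooperative forensics sharing: each domain contributes a weak local suspicion score for the same behavioral signature, and the pooled score crosses a detection threshold that no single domain would reach on its own. The domain names, signature string, scores, and threshold are invented, and this is not the Foresight protocol.

```python
from collections import defaultdict

class ThreatAggregator:
    """Pools per-domain suspicion scores for a shared behavioral signature so
    that cooperating domains can flag a pathogen none could confirm alone.
    A toy illustration of evidence sharing, not the Foresight architecture."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold
        self.evidence = defaultdict(float)   # signature -> pooled score

    def report(self, domain, signature, score):
        self.evidence[signature] += score
        if self.evidence[signature] >= self.threshold:
            return f"ALERT on '{signature}' after report from {domain}"
        return None

agg = ThreatAggregator(threshold=3.0)
# Each domain alone sees only weak evidence of the same blended threat.
for domain in ("edu.example", "corp.example", "isp.example"):
    alert = agg.report(domain, "mass-mailer:attachment+port-scan", score=1.2)
    if alert:
        print(alert)
```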
9

Simplifying System Management Through Automated Forecasting, Diagnosis, and Configuration Tuning

Duan, Songyun January 2010 (has links)
Large-scale networked computing systems are widely deployed to run business-critical applications in environments where changes are frequent. Manual management of these complex systems can be tedious and error-prone. Meanwhile, the high costs of application downtime make it critical to ensure system availability and reliability. Recent progress in monitoring tools enables system administrators to collect fine-grained data about system activity with low overhead. This data provides valuable information for system management. However, the monitoring data collected from production systems is massive in size and noisy, which makes it hard for system administrators to fully utilize this data for effective system management.

This dissertation describes a data-management platform, called Fa, where system administrators can pose declarative queries over system monitoring data. Fa automatically finds fairly accurate and efficient execution plans for given queries, and returns query results in easy-to-interpret formats. Fa supports three key query types, namely, forecasting queries (for predicting or detecting performance problems), diagnosis queries (for finding the cause of performance problems), and tuning queries (for recommending changes to system configuration to resolve diagnosed problems):

(a) For processing diagnosis queries, Fa constructs problem signatures from system monitoring data to identify recurrent problems and to reuse past diagnostic information. For a rare or new problem, Fa employs an anomaly-based clustering technique to generate performance baselines and to characterize the deviation from baselines to pinpoint root causes. Fa also incorporates an active-learning component that identifies diagnosis queries whose results, if provided or confirmed by system administrators, can be used to update problem signatures and to improve the accuracy and efficiency for processing future queries.

(b) For processing tuning queries to resolve problems caused by system misconfiguration, Fa employs an adaptive sampling algorithm that plans experiments to efficiently identify high-impact configuration parameters and high-performance settings. These experiments bring in information that is required for generating accurate query results but is missing in the monitoring data collected so far.

(c) For both one-time and continuous forecasting queries, Fa automatically searches for efficient execution plans in a large space of plans composed of data-transformation operators as well as synopsis-learning and prediction operators. Forecasting queries can be composed with diagnosis and tuning queries to enable proactive system management that avoids potential problems.

We have evaluated the Fa platform with monitoring data collected from database-backed multitier services, and with synthetic data that models the noisy nature of monitoring data from production systems. Our evaluation shows that Fa's query plan selection and execution strategies provide actionable information for system management automatically, accurately, and efficiently. Critical features like reliable confidence estimates, robustness to noise, and providing supporting evidence for query results make Fa a practical and useful platform. / Dissertation
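As a rough illustration of reusing problem signatures for diagnosis queries, the sketch below z-scores a few monitoring metrics against a performance baseline and picks the nearest stored signature by cosine similarity, falling back to "unknown" for new problems. The metric names, baselines, and signatures are invented; Fa's actual signature construction and plan selection are considerably richer.

```python
import math

def deviation(sample, baseline_mean, baseline_std):
    """Z-score each metric against its performance baseline."""
    return [(x - m) / s for x, m, s in zip(sample, baseline_mean, baseline_std)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def diagnose(sample, baseline_mean, baseline_std, signatures, min_similarity=0.8):
    """Match the deviation vector against stored problem signatures; report
    'unknown' when nothing is close enough, which is where anomaly-based
    clustering and active learning would take over."""
    dev = deviation(sample, baseline_mean, baseline_std)
    name, sig = max(signatures.items(), key=lambda kv: cosine(dev, kv[1]))
    return name if cosine(dev, sig) >= min_similarity else "unknown problem"

# Metrics: [cpu_util, lock_waits, buffer_misses] -- names and numbers are invented.
baseline_mean, baseline_std = [40.0, 5.0, 100.0], [10.0, 2.0, 30.0]
signatures = {
    "lock contention": [0.2, 3.0, 0.1],
    "cache thrashing": [0.5, 0.2, 3.0],
}
print(diagnose([45.0, 14.0, 110.0], baseline_mean, baseline_std, signatures))
```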
10

Modes of Gaussian Mixtures and an Inequality for the Distance Between Curves in Space

Fasy, Brittany Terese January 2012 (has links)
This dissertation studies high-dimensional problems from a low-dimensional perspective. First, we explore rectifiable curves in high-dimensional space by using the Fréchet distance between and total curvatures of the two curves to bound the difference of their lengths. We create this bound by mapping the curves into R^2 while preserving the length between the curves and increasing neither the total curvature of the curves nor the Fréchet distance between them. The bound is independent of the dimension of the ambient Euclidean space; it improves upon a bound by Cohen-Steiner and Edelsbrunner for dimensions greater than three, and it generalizes a result by Fáry and Chakerian.

In the second half of the dissertation, we analyze Gaussian mixtures. In particular, we consider the sum of n Gaussians, where each Gaussian is centered at a vertex of a regular n-simplex. Fixing the width of the Gaussians and varying the diameter of the simplex from zero to infinity by increasing a parameter that we call the scale factor, we find the window of scale factors for which the Gaussian mixture has more modes, or local maxima, than components of the mixture. We see that the extra mode created is subtle, but can be higher than the modes closer to the vertices of the simplex. In addition, we prove that all critical points are located on a set of one-dimensional lines (axes) connecting barycenters of complementary faces of the simplex. / Dissertation
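A one-dimensional numerical illustration of how the mode structure of a Gaussian mixture changes with component separation (the dissertation places n components at the vertices of a regular simplex and varies a scale factor; the two-component 1D case below only shows the simpler merging of modes around the classical separation threshold of twice the component width):

```python
import numpy as np

def mixture(x, centers, sigma=1.0):
    """Equal-weight 1D Gaussian mixture (unnormalized)."""
    return sum(np.exp(-(x - c) ** 2 / (2 * sigma ** 2)) for c in centers)

def count_modes(centers, sigma=1.0):
    xs = np.linspace(min(centers) - 4 * sigma, max(centers) + 4 * sigma, 20001)
    ys = mixture(xs, centers, sigma)
    # Count interior local maxima of the sampled density.
    return int(np.sum((ys[1:-1] > ys[:-2]) & (ys[1:-1] > ys[2:])))

# Two unit-width components: a single central mode persists until the
# separation exceeds 2*sigma, after which two modes appear.
for d in (1.0, 1.9, 2.1, 4.0):
    print(f"separation {d}: {count_modes([-d / 2, d / 2])} mode(s)")
```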
