About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
461

Nexsched: next generation logical spreadsheet for interactively solving constraint satisfaction problems

Chitnis, Siddharth, January 2006
Thesis (M.S.)--University of Texas at Dallas, 2006. Includes vita. Includes bibliographical references (leaves 48-49).
462

Context-aware telephony and its users: methods to improve the accuracy of mobile device interruptions

Khalil, Ashraf. January 2006
Thesis (Ph.D.)--Indiana University, Dept. of Computer Science, 2006. Source: Dissertation Abstracts International, Volume 67-03, Section B, page 1518. Adviser: Kay Connelly. Title from dissertation home page (viewed March 21, 2007).
463

Natural Language Tutoring and the Novice Programmer

Lane, H. Chad 31 January 2005
For beginning programmers, inadequate problem-solving and planning skills are among the most salient of their weaknesses. Novices, by definition, lack much of the tacit knowledge that underlies effective programming. This dissertation examines the efficacy of natural language tutoring (NLT) to foster acquisition of this tacit knowledge. Coached Program Planning (CPP) is proposed as a solution to the problem of teaching the tacit knowledge of programming. The general aim is to cultivate the development of such knowledge by eliciting and scaffolding the problem-solving and planning activities that novices are known to underestimate or bypass altogether. ProPL (pro-PELL), a dialogue-based intelligent tutoring system based on CPP, is also described. In an evaluation, the primary findings were that students who received tutoring from ProPL seemed to exhibit an improved ability to compose plans and displayed behaviors suggestive of thinking at greater levels of abstraction than students in a read-only control group. The major finding is that NLT appears to be effective in teaching program composition skills.
464

Energy and Reliability Management in Parallel Real-Time Systems

Zhu, Dakai 01 February 2005
Historically, slack time in real-time systems has been used as temporal redundancy by rollback recovery schemes to increase system reliability in the presence of faults. However, with advanced technologies, slack time can also be used by energy management schemes to save energy. For reliable real-time systems where higher levels of reliability are as important as lower levels of energy consumption, centralized management of slack time is desired. For frame-based parallel real-time applications, energy management schemes are first explored. Although the simple static power management that evenly allocates static slack over a schedule is optimal for uni-processor systems, it is not optimal for parallel systems due to different levels of parallelism in a schedule. Taking parallelism variations into consideration, a parallel static power management scheme is proposed. When dynamic slack is considered, assuming global scheduling strategies, slack shifting and sharing schemes as well as speculation schemes are proposed for more energy savings. For simultaneous management of power and reliability, checkpointing techniques are first deployed to efficiently use slack time and the optimal numbers of checkpoints needed to minimize energy consumption or to maximize system reliability are explored. Then, an energy efficient optimistic modular redundancy scheme is addressed. Finally, a framework that encompasses energy and reliability management is proposed for obtaining optimal redundant configurations. While exploring the trade-off between energy and reliability, the effects of voltage scaling on fault rates are considered.
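
The abstract's observation that evenly allocated static slack is suboptimal once parallelism varies can be made concrete with a toy model. The sketch below is an illustration under stated assumptions, not the dissertation's scheme: it assumes dynamic power grows as the cube of speed per active processor, in which case minimizing energy under a deadline gives per-section speeds proportional to p^(-1/3), so sections with more active processors absorb more of the static slack. Function names, the two-section workload, and the energy model are all illustrative assumptions.

```python
# Toy model: running c_i cycles on p_i processors at normalized speed f_i
# costs p_i * c_i * f_i^2 energy (power ~ f^3 per active processor).
# Minimizing total energy subject to sum(c_i / f_i) <= deadline gives
# f_i proportional to p_i^(-1/3).

def parallel_static_speeds(sections, deadline):
    """sections: list of (cycles, processors); returns one speed per section,
    normalized so that 1.0 is the maximum processor frequency."""
    shape = [(c, p, p ** (-1.0 / 3.0)) for c, p in sections]
    # Scale the speeds so the schedule exactly fills the deadline.
    scale = sum(c / s for c, _, s in shape) / deadline
    # Cap at full speed; a real scheme would redistribute any deficit.
    return [min(1.0, s * scale) for _, _, s in shape]

def uniform_static_speed(sections, deadline):
    """Uniprocessor-style static power management: one speed for everything."""
    return min(1.0, sum(c for c, _ in sections) / deadline)

if __name__ == "__main__":
    # One serial section and one section running on 8 processors.
    sections = [(2.0, 1), (4.0, 8)]
    print(parallel_static_speeds(sections, deadline=10.0))  # [1.0, 0.5]
    print(uniform_static_speed(sections, deadline=10.0))    # 0.6
```

Under this toy energy model the parallelism-aware speeds cost 10 energy units for the example workload versus about 12.2 for a single uniform speed, which illustrates why uniform slack allocation loses its optimality on parallel schedules.
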
465

Science as an Anomaly-Driven Enterprise: A Computational Approach to Generating Acceptable Theory Revisions in the Face of Anomalous Data

Bridewell, Will 03 February 2005
Anomalous data lead to scientific discoveries. Although machine learning systems can be forced to resolve anomalous data, these systems use general learning algorithms to do so. To determine whether anomaly-driven approaches to discovery produce more accurate models than the standard approaches, we built a program called Kalpana. We also used Kalpana to explore means for identifying those anomaly resolutions that are acceptable to domain experts. Our experiments indicated that anomaly-driven approaches can lead to a richer set of model revisions than standard methods. Additionally we identified semantic and syntactic measures that are significantly correlated with the acceptability of model revisions. These results suggest that by interpreting data within the context of a model and by interpreting model revisions within the context of domain knowledge, discovery systems can more readily suggest accurate and acceptable anomaly resolutions.
466

The Design of A High Capacity and Energy Efficient Phase Change Main Memory

Ferreira, Alexandre Peixoto 22 June 2011
Higher energy efficiency has become essential in servers for reasons that range from heavy power and thermal constraints to environmental concerns and financial savings. With main memory responsible for at least 30% of the energy consumed by a server, a low-power main memory is fundamental to achieving this energy efficiency. DRAM has been the technology of choice for main memory for the last three decades, primarily because it traditionally combined relatively low power, high performance, low cost and high density. However, with DRAM nearing its density limit, alternative low-power memory technologies, such as phase-change memory (PCM), have become a feasible replacement. PCM limitations, such as limited endurance and low write performance, preclude a simple drop-in replacement and require new architectures and algorithms to be developed. A PCM main memory architecture (PMMA) is introduced in this dissertation, utilizing both DRAM and PCM to create an energy-efficient main memory that is able to replace a DRAM-only memory. PMMA uses a number of techniques and architectural changes to achieve a level of performance that is on par with DRAM, with energy-delay gains of up to 65%, less than 5% performance loss, and extremely high energy gains. To address the other major shortcoming of PCM, namely limited endurance, a novel low-overhead wear-leveling algorithm that builds on PMMA is proposed; it increases the lifetime of PMMA to match the expected server lifetime, so that the server and the memory subsystem become obsolete at about the same time. We also study how to better use the excess capacity traditionally available on PCM devices to obtain the highest possible lifetime, and show that under specific endurance distributions the naive choice does not achieve it. We devise rules that empower the designer to select algorithms and parameters to achieve higher lifetime, or to simplify the design while knowing the impact on lifetime. The techniques presented also apply to other storage-class memories (SCM) that suffer from limited endurance.
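
Since the abstract does not spell out the proposed wear-leveling algorithm, the sketch below shows only the generic idea behind wear leveling: remapping logical blocks so writes spread across physical PCM blocks. It uses a deliberately naive table-based scheme; the class name, threshold, and per-block counters are illustrative assumptions, and a low-overhead design such as the one the dissertation proposes would avoid exactly this kind of per-block bookkeeping.

```python
class TableWearLeveler:
    """A deliberately naive, table-based wear leveler (illustrative only; not
    the PMMA algorithm). Logical blocks are remapped so writes spread out:
    when a physical block gets `threshold` more writes than the coldest one,
    the two blocks swap their logical contents."""

    def __init__(self, num_blocks, threshold=1000):
        self.map = list(range(num_blocks))   # logical block -> physical block
        self.writes = [0] * num_blocks       # per-physical-block write counts
        self.threshold = threshold

    def write(self, logical):
        phys = self.map[logical]
        self.writes[phys] += 1
        coldest = min(range(len(self.writes)), key=self.writes.__getitem__)
        if self.writes[phys] - self.writes[coldest] > self.threshold:
            # A real device would also copy data between the two physical blocks.
            other = self.map.index(coldest)  # logical block on the cold block
            self.map[logical], self.map[other] = coldest, phys
        return self.map[logical]
```
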
467

Lightweight Hierarchical Error Control Codes for Multi-Bit Differential Channels

Bakos, Jason Daniel 03 October 2005
This dissertation describes a new class of non-linear block codes called Lightweight Hierarchical Error Control Codes (LHECC). LHECC is designed to operate over system-level interconnects such as network-on-chip, inter-chip, and backplane interconnects. LHECC provides these interconnects with powerful error correction capability and thus effectively increases signal integrity and noise immunity. As a result, these interconnects may carry data with lower signal power and/or higher transmission rates. LHECC is designed such that support for it may be tightly integrated into high-speed, low-latency system-level I/O interfaces. Encoding and decoding may be performed at system core speeds with low chip area requirements. LHECC is optimized for a new type of high-performance system-level interconnect technology called Multi-Bit Differential Signaling (MBDS). MBDS channels require the use of a physical-layer channel code called N choose M (nCm) encoding, where each channel is restricted to a symbol set such that half of the bits in each symbol are 1-bits. These symbol sets have properties such as inherent error detection capability and unused symbol space. These properties are used to give MBDS-based system-level interconnects an arbitrary error correction capability with low or zero information overhead. This is achieved by hierarchical encoding, where a portion of source data is encoded into a high-level block code while the remainder of the data is encoded into a low-level code by choosing particular nCm symbols from symbol subsets specified by the high-level encoding. This dissertation presents the following. First, it provides a theoretical study of LHECC and illustrates its effectiveness at achieving low-overhead error control for system-level interconnects. Second, it provides example implementations of efficient LHECC encoder and decoder architectures that are capable of operating at speeds necessary for high-performance system-level channels. Third, it describes an experimental technique to verify the effectiveness of these codes, where interconnect error behavior is captured using channel and noise models over a range of transmission rates and noise characteristics. Results obtained through simulation of these models characterize the effectiveness of LHECC. Using this method, system-level interconnects that utilize this new encoding technique are shown to have significantly higher noise immunity than those without.
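
As a concrete illustration of the MBDS symbol-set properties the abstract relies on (balanced symbols, inherent error detection, unused symbol space), the sketch below enumerates an "n choose m" symbol set and checks symbol weights. It illustrates the physical-layer nCm code only, not the LHECC construction itself, and the function names and the 8-bit example are assumptions.

```python
from itertools import combinations
from math import comb

def ncm_symbols(n):
    """All n-bit symbols whose weight is exactly n//2 -- the balanced symbol
    set an MBDS channel is restricted to."""
    symbols = []
    for ones in combinations(range(n), n // 2):
        word = 0
        for bit in ones:
            word |= 1 << bit
        symbols.append(word)
    return symbols

def is_valid(word, n):
    # Inherent error detection: a received symbol whose weight is not n//2
    # cannot be a legal MBDS symbol.
    return bin(word).count("1") == n // 2

if __name__ == "__main__":
    n = 8
    syms = ncm_symbols(n)
    # comb(8, 4) = 70 legal symbols, but 6 information bits use only 64 of
    # them -- unused symbol space that hierarchical encoding can exploit.
    print(len(syms), comb(n, n // 2), 2 ** 6)   # 70 70 64
    corrupted = syms[0] ^ 0b1                   # one bit flip changes the weight,
    print(is_valid(corrupted, n))               # so it is always detected: False
```
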
468

Techniques and Algorithms for Immersive and Interactive Visualization of Large Datasets

Sharma, Ashish 23 April 2002
Advances in computing power have made it possible for scientists to perform atomistic simulations of material systems that range in size from a few hundred thousand atoms to one billion atoms. An immersive and interactive walkthrough of such datasets is an ideal method for exploring and understanding the complex material processes in these simulations. However, rendering such large datasets at interactive frame rates is a major challenge. A visualization platform is developed that is scalable and allows interactive exploration in an immersive, virtual environment. At its core, the system uses an octree-based data management scheme that reduces the amount of data sent to the rendering pipeline without a per-atom analysis. Secondary algorithms and techniques such as modified occlusion culling, multiresolution rendering, and distributed computing are employed to further speed up the rendering process. The resulting system is highly scalable and is capable of visualizing large molecular systems at interactive frame rates on a dual-processor SGI Onyx2 with an InfiniteReality2 graphics pipeline.
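
The octree-based data management that the abstract credits for avoiding per-atom analysis can be sketched as a region query that rejects whole cells at once. The sketch below is illustrative only; the node layout and function names are assumptions, and it omits the occlusion-culling, multiresolution, and distributed-rendering layers mentioned in the abstract.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Axis-aligned box: (xmin, ymin, zmin, xmax, ymax, zmax)
Box = Tuple[float, float, float, float, float, float]

@dataclass
class OctreeNode:
    bounds: Box
    atoms: List[Tuple[float, float, float]] = field(default_factory=list)  # leaves only
    children: Optional[List["OctreeNode"]] = None

def boxes_overlap(a: Box, b: Box) -> bool:
    return all(a[i] <= b[i + 3] and b[i] <= a[i + 3] for i in range(3))

def collect_visible(node: OctreeNode, region: Box, out: list) -> None:
    """Walk the octree, culling whole cells against the region of interest.
    Cells that miss the region are rejected with one box test, so no per-atom
    work is spent on them; only intersecting leaves contribute their atoms."""
    if not boxes_overlap(node.bounds, region):
        return                      # entire subtree culled
    if node.children is None:
        out.extend(node.atoms)      # a multiresolution variant could emit a
        return                      # coarser proxy for distant cells instead
    for child in node.children:
        collect_visible(child, region, out)
```
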
469

On-the-Fly Tracing for Data-Centric Computing: Parallelization, Workflow and Applications

Jiang, Lei 01 May 2013
As data-centric computing becomes the trend in science and engineering, more and more hardware systems, as well as middleware frameworks, are emerging to handle the intensive computations associated with big data. At the programming level, it is crucial to have corresponding programming paradigms for dealing with big data. Although MapReduce is now a well-known programming model for data-centric computing, in which parallelization is replaced entirely by partitioning the computing task through the data, not all programs, particularly those using statistical computing and data mining algorithms with interdependence, can be refactored in such a fashion. On the other hand, many traditional automatic parallelization methods put an emphasis on formalism and may not achieve optimal performance with the given limited computing resources.

In this work we propose a cross-platform programming paradigm, called "on-the-fly data tracing", that provides source-to-source transformation within a framework that also supports workflow optimization for larger applications. Using a "big-data approximation", computations related to large-scale data input are identified in the code and workflow, and a simplified core dependence graph is built based on the computational load, taking big data into account. The code can then be partitioned into sections for efficient parallelization, and at the workflow level, optimization can be performed by adjusting the scheduling for big-data considerations, including the I/O performance of the machine. Regarding each unit in both source code and workflow as a model, this framework enables model-based parallel programming that matches the available computing resources.

The dissertation presents the techniques used in model-based parallel programming, the design of the software framework for both parallelization and workflow optimization, and its implementations in multiple programming languages. Two sets of experiments validate the framework: i) benchmarking of parallelization speed-up using typical examples in data analysis and machine learning (e.g., naive Bayes, k-means), and ii) three real-world applications in data-centric computing that illustrate its efficiency: pattern detection from hurricane and storm surge simulations, road traffic flow prediction, and text mining from social media data. The applications show how to build scalable workflows with the framework, along with the resulting performance enhancements.
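
To make the role of the "simplified core dependence graph" more concrete, the sketch below shows a generic way to turn such a graph into stages of sections that can run in parallel. It is plain topological leveling under assumed names; the weighting of sections by data size and the I/O-aware workflow scheduling described in the abstract are not modeled here.

```python
def parallel_stages(deps):
    """Group code sections into stages whose members can run in parallel.
    `deps` maps each section to the sections it depends on. Illustrative of
    how a dependence graph yields candidate partitions; not the dissertation's
    framework."""
    remaining = {node: set(d) for node, d in deps.items()}
    done, stages = set(), []
    while remaining:
        ready = [n for n, d in remaining.items() if d <= done]
        if not ready:
            raise ValueError("cyclic dependence graph")
        stages.append(ready)        # sections within a stage are independent
        done.update(ready)
        for n in ready:
            del remaining[n]
    return stages

# A tiny hypothetical workflow: load feeds clean and features, which feed train.
print(parallel_stages({"load": [], "clean": ["load"],
                       "features": ["load"], "train": ["clean", "features"]}))
# -> [['load'], ['clean', 'features'], ['train']]
```
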
470

Approximate Sequence Alignment

Cai, Xuanting 19 June 2013
Given a collection of strings and a query string, the goal of approximate string matching is to efficiently find the strings in the collection that are similar to the query string. In this thesis, we focus on edit distance as the measure used to quantify the similarity between two strings. Existing q-gram based methods use inverted lists to index the q-grams of the given string collection. These methods begin by generating the q-grams of the query string, disjoint or overlapping, and then merge the inverted lists of these q-grams. Several filtering techniques have been proposed to segment inverted lists in order to obtain relatively shorter lists, thus reducing the merging cost. The filtering technique we propose in this thesis, called position restricted alignment, combines the well-known length filtering and position filtering to provide more aggressive pruning. We then provide an indexing scheme that integrates the inverted-list storage with the proposed filter, enabling the inverted lists to be filtered automatically. We evaluate the effectiveness of the proposed approach through experiments.
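
The q-gram inverted-list pipeline this abstract builds on can be sketched end to end: generate overlapping q-grams, merge their inverted lists, prune candidates with length filtering, and verify survivors with edit distance. The sketch below is a generic baseline under assumed names and padding conventions; it does not implement the proposed position restricted alignment filter or the integrated index.

```python
from collections import defaultdict

def qgrams(s, q=3):
    # Overlapping q-grams with end-padding so short strings still produce grams.
    s = "$" * (q - 1) + s + "$" * (q - 1)
    return [s[i:i + q] for i in range(len(s) - q + 1)]

def build_index(strings, q=3):
    index = defaultdict(set)                 # q-gram -> ids of strings containing it
    for i, s in enumerate(strings):
        for g in qgrams(s, q):
            index[g].add(i)
    return index

def edit_distance(a, b):
    # Standard dynamic-programming edit distance, used for final verification.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def search(strings, index, query, k, q=3):
    # Merge the query's inverted lists, prune with the length filter, then verify.
    candidates = set()
    for g in qgrams(query, q):
        candidates |= index.get(g, set())
    hits = []
    for i in candidates:
        if abs(len(strings[i]) - len(query)) > k:   # length filtering
            continue
        if edit_distance(strings[i], query) <= k:   # exact verification
            hits.append(strings[i])
    return hits

strings = ["approximate", "appropriate", "alignment", "sequence"]
print(search(strings, build_index(strings), "aproximate", k=1))  # ['approximate']
```
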
