241

Cache design and timing analysis for preemptive multi-tasking real-time uniprocessor systems

Tan, Yudong 18 April 2005 (has links)
In this thesis, we propose an approach to estimate the Worst Case Response Time (WCRT) of each task in a preemptive multi-tasking single-processor real-time system utilizing an L1 cache. The approach combines inter-task cache eviction analysis and intra-task cache access analysis to estimate the Cache Related Preemption Delay (CRPD). The CRPD caused by preempting tasks is then incorporated into WCRT analysis. We also propose a prioritized cache that reduces CRPD by exploiting a cache partitioning technique. Our WCRT analysis approach is then applied to analyze the behavior of the prioritized cache. Four sets of applications with up to six concurrent tasks are used to test our WCRT analysis approach and the prioritized cache. The experimental results show that our WCRT analysis approach can tighten the WCRT estimate by up to 32% (1.4X) over the prior state of the art. By using a prioritized cache, we can reduce the WCRT estimate of tasks by up to 26% compared to a conventional set-associative cache.
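The standard response-time recurrence that CRPD-aware WCRT analysis builds on can be sketched as a fixed-point iteration. This is a minimal illustration of the generic technique, not the thesis's own algorithm; the task dictionaries and the per-task `crpd` penalty map are hypothetical names chosen for the example.

```python
import math

def wcrt(task, hp_tasks, crpd, deadline=10**6):
    """Fixed-point iteration for R = C + sum_j ceil(R / T_j) * (C_j + crpd_j),
    where each preemption by a higher-priority task j costs its execution
    time C_j plus a cache-related preemption delay crpd_j."""
    r = task["C"]
    while True:
        r_next = task["C"] + sum(
            math.ceil(r / t["T"]) * (t["C"] + crpd.get(t["name"], 0))
            for t in hp_tasks)
        if r_next == r:
            return r        # converged: worst-case response time
        if r_next > deadline:
            return None     # diverged past the deadline: unschedulable
        r = r_next
```

Tightening the `crpd` terms (e.g., via inter-task eviction analysis or a partitioned cache) directly shrinks the fixed point this loop converges to.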
242

Measurement and resource allocation problems in data streaming systems

Zhao, Haiquan 26 April 2010 (has links)
In a data streaming system, each component consumes one or several streams of data on the fly and produces one or several streams of data for other components. The entire Internet can be viewed as a giant data streaming system. Other examples include real-time exploratory data mining and high-performance transaction processing. In this thesis we study several measurement and resource allocation optimization problems in data streaming systems. Measuring quantities associated with one or several data streams is often challenging because the sheer volume of data makes it impractical to store the streams in memory or ship them across the network. A data streaming algorithm processes a long stream of data in one pass using a small working memory (called a sketch). Estimation queries can then be answered from one or more such sketches. An important task is to analyze the performance guarantees of such algorithms. In this thesis we describe a tail bound problem that often occurs and present a technique for solving it using majorization and convex ordering theories. We present two algorithms that utilize our technique. The first stores a large array of counters in DRAM while achieving the update speed of SRAM. The second detects global icebergs across distributed data streams. Resource allocation decisions are important for the performance of a data streaming system. The processing graph of a data streaming system forms a fork-and-join network. The underlying data processing tasks involve a rich set of semantics, including synchronous and asynchronous data fork and data join. The different types of semantics and processing requirements introduce complex interdependence between the various data streams within the network. We study the distributed resource allocation problem in such systems with the goal of achieving the maximum total utility of output streams.
For networks with only synchronous fork and join semantics, we present several decentralized iterative algorithms using primal and dual based optimization techniques. For general networks with both synchronous and asynchronous fork and join semantics, we present a novel modeling framework to formulate the resource allocation problem, and present a shadow-queue based decentralized iterative algorithm to solve the resource allocation problem. We show that all the algorithms guarantee optimality and demonstrate through simulation that they can adapt quickly to dynamically changing environments.
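The small-memory, one-pass sketches discussed above can be illustrated with a minimal Count-Min sketch, one standard example of the family; this is not the thesis's own counter-array or iceberg algorithm, and the width/depth parameters are arbitrary choices for the sketch.

```python
import hashlib

class CountMinSketch:
    """One-pass frequency estimator using a small 2-D counter array.
    Estimates never undercount; collisions can only inflate them."""
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _bucket(self, item, row):
        h = hashlib.md5(f"{row}:{item}".encode()).hexdigest()
        return int(h, 16) % self.width

    def add(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._bucket(item, row)] += count

    def estimate(self, item):
        # Minimum across rows is the least-inflated counter.
        return min(self.table[row][self._bucket(item, row)]
                   for row in range(self.depth))
```

A few kilobytes of counters suffice to answer frequency queries over streams far too large to store, which is the regime the tail-bound analysis above targets.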
243

Zero-sided communication : challenges in implementing time-based channels using the MPI/RT specification

Neelamegam, Jothi P. January 2002 (has links)
Thesis (M.S.)--Mississippi State University. Department of Computer Science. / Title from title screen. Includes bibliographical references.
244

A conceptualized data architecture framework for a South African banking service.

Mcwabeni-Pingo, Lulekwa Gretta. January 2014 (has links)
M. Tech. Business Information Systems / Currently there is a high demand in the banking environment for real-time delivery of consistent, quality data for operational information. South African banks have a fast-growing use of and demand for quality data; however, the banks still experience data-management-related challenges and issues. It is argued that the existing challenges may be alleviated by a sound data architecture framework. To this end, this study sought to address the data problem by theoretically conceptualizing a data architecture framework that may subsequently be used as a guide to improve data management. The purpose of the study was to explore and describe how data management challenges could be addressed through data architecture.
245

A fuzzy logic approach for call admission control in cellular networks.

Tokpo Ovengalt, Christophe Boris. January 2014 (has links)
M. Tech. Electrical Engineering. / Call Admission Control (CAC) is a standard operating procedure responsible for accepting or rejecting calls based on the availability of network resources. It is also used to guarantee good Quality of Service (QoS) to ongoing users. However, there are a number of imprecisions to consider during the admission and handoff processes. These uncertainties arise from the mobility of subscribers and the time-varying nature of key admission factors such as latency and packet loss. These parameters are often imprecisely measured, which negatively affects the estimation of a channel's spectral efficiency. In mobile networking, greater emphasis is placed on delivering good QoS to real-time (RT) applications. It has therefore become increasingly necessary to develop a model capable of handling the uncertainties associated with the network in order to improve the quality of CAC decisions. Type-1 and Type-2 Fuzzy Logic Controllers (FLCs) were deployed to allow the CAC to make better decisions in the presence of numerous uncertainties. The model successfully assigned meanings and degrees of certainty to the measured values of loss and latency by means of fuzzy sets and Membership Functions (MFs). The results obtained show that the fuzzy-based CAC performs better by reducing the call-blocking and call-dropping probabilities, which are among the key QoS measures in wireless networking.
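A type-1 fuzzy admission decision of the kind described can be sketched with triangular membership functions over latency and loss. The membership-function breakpoints and the 0.5 admission threshold below are hypothetical values for illustration, not the thesis's tuned controller.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def admit_call(latency_ms, loss_pct):
    """Type-1 fuzzy CAC sketch: fuzzify two QoS inputs, combine with a
    Mamdani-style AND (min), and admit above a certainty threshold."""
    low_latency = tri(latency_ms, -1.0, 0.0, 150.0)   # "latency is low"
    low_loss = tri(loss_pct, -0.1, 0.0, 3.0)          # "loss is low"
    certainty = min(low_latency, low_loss)
    return certainty >= 0.5, certainty
```

Replacing these crisp membership functions with interval type-2 sets is what lets the controller also model uncertainty about the membership grades themselves.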
246

Real-time methods in neural electrophysiology to improve efficacy of dynamic clamp

Lin, Risa J. 17 May 2012 (has links)
In the central nervous system, most of the processes ranging from ion channels to neuronal networks occur in a closed loop, where the input to the system depends on its output. In contrast, most experimental preparations and protocols operate autonomously in an open loop and do not depend on the output of the system. Real-time software technology can be an essential tool for understanding the dynamics of many biological processes by providing the ability to precisely control the spatiotemporal aspects of a stimulus and to build activity-dependent stimulus-response closed loops. So far, application of this technology in biological experiments has been limited primarily to the dynamic clamp, an increasingly popular electrophysiology technique for introducing artificial conductances into living cells. Since the dynamic clamp combines mathematical modeling with electrophysiology experiments, it inherits the limitations of both, as well as issues concerning accuracy and stability that are determined by the chosen software and hardware. In addition, most dynamic clamp systems to date are designed for specific experimental paradigms and are not easily extensible to general real-time protocols and analyses. The long-term goal of this research is to develop a suite of real-time tools to evaluate the performance, improve the efficacy, and extend the capabilities of the dynamic clamp technique and real-time neural electrophysiology. We demonstrate a combined dynamic clamp and modeling approach for studying synaptic integration, a software platform for implementing flexible real-time closed-loop protocols, and the potential and limitations of Kalman filter-based techniques for online state and parameter estimation of neuron models.
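The core of the dynamic clamp loop described above — read the membrane voltage, compute a current from an artificial conductance, inject it — reduces to Ohm's law applied each timestep. This is a minimal offline stand-in for the hard real-time loop; the function names and the constant-conductance trace are illustrative, not the thesis's software.

```python
def dynamic_clamp_step(v_membrane, g_syn, e_rev=0.0):
    """One iteration of the conductance-injection loop:
    I = -g * (Vm - Erev), the current a real synapse with
    conductance g and reversal potential Erev would pass."""
    return -g_syn * (v_membrane - e_rev)

def run_clamp(v_trace, g_trace, e_rev=0.0):
    """Offline stand-in for the real-time loop: in hardware, each
    step must complete within one sampling interval so the injected
    current closes the loop on the live cell."""
    return [dynamic_clamp_step(v, g, e_rev)
            for v, g in zip(v_trace, g_trace)]
```

The accuracy and stability issues the abstract mentions enter precisely here: jitter or latency in completing each step distorts the effective conductance the cell experiences.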
247

Real-time interactive multiprogramming.

Heher, Anthony Douglas. January 1978 (has links)
This thesis describes a new method of constructing a real-time interactive software system for a minicomputer to enable the interactive facilities to be extended and improved in a multitasking environment which supports structured programming concepts. A memory management technique called Software Virtual Memory Management, which is implemented entirely in software, is used to extend the concept of hardware virtual memory management. This extension unifies the concepts of memory space allocation and control and of file system management, resulting in a system which is simple and safe for the application-oriented user. The memory management structures are also used to provide exceptional protection facilities. A number of users can work interactively, using a high-level structured language in a multi-tasking environment, with very secure access to shared data bases. A system is described which illustrates these concepts. This system is implemented using an interpreter, and significant improvements in the performance of interpretive systems are shown to be possible using the structures presented. The system has been implemented on a Varian minicomputer as well as on a microprogrammable microprocessor. The virtual memory technique has been shown to work with a variety of bulk storage devices and should be particularly suitable for use with recent bulk storage developments such as bubble memory and charge-coupled devices. A detailed comparison of the performance of the system vis-a-vis that of a FORTRAN-based system executing in-line code with swapping has been performed by means of a process control case study. These measurements show that an interpretive system using this new memory management technique can have a performance which is comparable to or better than that of a compiler-oriented system. / Thesis (Ph.D.)-University of Natal, 1978.
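The essence of software-managed virtual memory — a resident frame table filled on demand from bulk storage, entirely without hardware support — can be sketched in a few lines. This toy model (FIFO eviction, a dict standing in for the backing device) illustrates the general mechanism only, not the thesis's unified memory/file-system design.

```python
class SoftVM:
    """Toy software virtual memory: every access goes through a
    software lookup; absent pages are faulted in from a backing
    store, evicting the oldest resident page when frames are full."""
    def __init__(self, backing, n_frames=2):
        self.backing = backing      # page_id -> data (bulk storage)
        self.frames = {}            # resident pages
        self.order = []             # FIFO eviction order
        self.n_frames = n_frames
        self.faults = 0

    def read(self, page_id):
        if page_id not in self.frames:          # page fault
            self.faults += 1
            if len(self.frames) >= self.n_frames:
                victim = self.order.pop(0)      # write victim back
                self.backing[victim] = self.frames.pop(victim)
            self.frames[page_id] = self.backing[page_id]
            self.order.append(page_id)
        return self.frames[page_id]
```

Because the lookup is pure software, the same tables can mediate file access and enforce per-user protection, which is the unification the thesis exploits.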
248

Real time image processing on parallel arrays for gigascale integration

Chai, Sek Meng 12 1900 (has links)
No description available.
249

Global investigations of radiated seismic energy and real-time implementation

Convers, Jaime Andres 13 January 2014 (has links)
This dissertation investigates radiated seismic energy measurements from large earthquakes and rupture duration determinations as significant properties of the dynamic earthquake rupture, and their application to identifying very large and slow-rupturing earthquakes. This includes a description of earthquake radiated seismic energy from 1997 to 2010 and the identification of slow-source tsunami earthquakes in that period. The implementation of these measurements in real time since the beginning of 2009 is also discussed, with a case study of the 2010 Mentawai tsunami earthquake. Further studies of rupture duration assessment and its technical improvements toward more rapid and robust solutions are presented as well, with application to the 2011 Tohoku-Oki earthquake and a case of directivity in the 2007 Mw 8.1 Solomon Islands earthquake. Finally, the set of routines and programs developed for implementation at Georgia Tech and IRIS to produce the real-time results since 2009 presented in this study are described.
250

Storage and aggregation for fast analytics systems

Amur, Hrishikesh 13 January 2014 (has links)
Computing in the last decade has been characterized by the rise of data-intensive scalable computing (DISC) systems. In particular, recent years have witnessed a rapid growth in the popularity of fast analytics systems. These systems exemplify a trend where queries that previously involved batch-processing (e.g., running a MapReduce job) on a massive amount of data are increasingly expected to be answered in near real-time with low latency. This dissertation addresses the problem that existing designs for various components used in the software stack for DISC systems do not meet the requirements demanded by fast analytics applications. In this work, we focus specifically on two components: 1. Key-value storage: Recent work has focused primarily on supporting reads with high throughput and low latency. However, fast analytics applications require that new data entering the system (e.g., newly crawled web pages, currently trending topics) be quickly made available to queries and analysis codes. This means that along with supporting reads efficiently, these systems must also support writes with high throughput, which current systems fail to do. In the first part of this work, we solve this problem by proposing a new key-value storage system – called the WriteBuffer (WB) Tree – that provides up to 30× higher write performance and similar read performance compared to current high-performance systems. 2. GroupBy-Aggregate: Fast analytics systems require support for fast, incremental aggregation of data with low-latency access to results. Existing techniques are memory-inefficient and do not support incremental aggregation efficiently when aggregate data overflows to disk. In the second part of this dissertation, we propose a new data structure called the Compressed Buffer Tree (CBT) to implement memory-efficient in-memory aggregation. We also show how the WB Tree can be modified to support efficient disk-based aggregation.
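The buffer-and-flush idea underlying both write-optimized storage and incremental aggregation can be sketched with a two-level aggregator: writes land in a small in-memory buffer and are periodically merged into a larger store. This is a generic illustration of the pattern, not the WB Tree or CBT themselves; the dict standing in for the disk-resident level and the buffer limit are assumptions of the example.

```python
from collections import defaultdict

class BufferedAggregator:
    """Two-level incremental aggregation: fast buffered writes,
    periodic merge into a store standing in for the on-disk level."""
    def __init__(self, buffer_limit=4):
        self.buffer = defaultdict(int)   # write-optimized level
        self.store = defaultdict(int)    # stand-in for disk-resident level
        self.buffer_limit = buffer_limit

    def write(self, key, value):
        self.buffer[key] += value        # cheap: no disk touch per write
        if len(self.buffer) >= self.buffer_limit:
            self.flush()

    def flush(self):
        # Merge partial aggregates down a level, then reuse the buffer.
        for k, v in self.buffer.items():
            self.store[k] += v
        self.buffer.clear()

    def read(self, key):
        # Reads combine flushed and still-buffered partial sums.
        return self.store[key] + self.buffer[key]
```

Keeping writes buffered is what decouples ingest throughput from the slower lower level, the property both the WB Tree and the CBT are designed to provide at scale.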