271

A conceptualized data architecture framework for a South African banking service.

Mcwabeni-Pingo, Lulekwa Gretta. January 2014 (has links)
M. Tech. Business Information Systems / Currently there is a high demand in the banking environment for real-time delivery of consistent, quality data for operational information. South African banks have the fastest-growing use of and demand for quality data; however, these banks still experience data-management-related challenges and issues. It is argued that the existing challenges may be addressed by a sound data architecture framework. To this end, this study sought to address the data problem by theoretically conceptualizing a data architecture framework that may subsequently be used as a guide to improve data management. The purpose of the study was to explore and describe how data management challenges could be improved through data architecture.
272

A fuzzy logic approach for call admission control in cellular networks.

Tokpo Ovengalt, Christophe Boris. January 2014 (has links)
M. Tech. Electrical Engineering. / Call Admission Control (CAC) is a standard operating procedure responsible for accepting or rejecting calls based on the availability of network resources; it is also used to guarantee good Quality of Service (QoS) to ongoing users. However, there are a number of imprecisions to consider during the admission and handoff processes. These uncertainties arise from the mobility of subscribers and the time-varying nature of key admission factors such as latency and packet loss. These parameters are often imprecisely measured, which has a negative impact on the estimation of a channel's spectral efficiency. In mobile networking, greater emphasis is placed on delivering good QoS to real-time (RT) applications, so it has become increasingly necessary to develop a model capable of handling the uncertainties associated with the network in order to improve the quality of decisions relating to CAC. Type-1 and Type-2 Fuzzy Logic Controllers (FLCs) were deployed to allow the CAC to make better decisions in the presence of numerous uncertainties. The proposed model successfully associated meanings and degrees of certainty with the measured values of loss and latency by means of fuzzy sets and Membership Functions (MFs). The results obtained show that the fuzzy-based CAC performs better by reducing the call blocking and call dropping probabilities, which are among the key measurement parameters of QoS in wireless networking.
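The fuzzification of imprecise admission inputs described above can be sketched as follows. This is a minimal Type-1 illustration with invented membership-function shapes, rule, and admission threshold, not the controller parameters used in the thesis.

```python
# Minimal Type-1 fuzzy CAC sketch: latency and loss are fuzzified with
# triangular membership functions, combined with a Mamdani AND (min), and
# the call is admitted if the resulting certainty clears a threshold.
# All shapes and thresholds here are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def admit_call(latency_ms, loss_pct, threshold=0.5):
    """Return (decision, score) for a single admission request."""
    mu_lat_low = tri(latency_ms, -1.0, 0.0, 100.0)   # "latency is low"
    mu_loss_low = tri(loss_pct, -1.0, 0.0, 5.0)      # "loss is low"
    # Rule: IF latency is low AND loss is low THEN admit (Mamdani min).
    score = min(mu_lat_low, mu_loss_low)
    return score >= threshold, score
```

For example, a call seeing 20 ms latency and 1% loss scores 0.8 and is admitted, while 90 ms latency with 4% loss scores 0.1 and is rejected; a Type-2 controller would additionally model uncertainty in the membership functions themselves.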
273

Real-time methods in neural electrophysiology to improve efficacy of dynamic clamp

Lin, Risa J. 17 May 2012 (has links)
In the central nervous system, most of the processes ranging from ion channels to neuronal networks occur in a closed loop, where the input to the system depends on its output. In contrast, most experimental preparations and protocols operate autonomously in an open loop and do not depend on the output of the system. Real-time software technology can be an essential tool for understanding the dynamics of many biological processes by providing the ability to precisely control the spatiotemporal aspects of a stimulus and to build activity-dependent stimulus-response closed loops. So far, application of this technology in biological experiments has been limited primarily to the dynamic clamp, an increasingly popular electrophysiology technique for introducing artificial conductances into living cells. Since the dynamic clamp combines mathematical modeling with electrophysiology experiments, it inherits the limitations of both, as well as issues concerning accuracy and stability that are determined by the chosen software and hardware. In addition, most dynamic clamp systems to date are designed for specific experimental paradigms and are not easily extensible to general real-time protocols and analyses. The long-term goal of this research is to develop a suite of real-time tools to evaluate the performance, improve the efficacy, and extend the capabilities of the dynamic clamp technique and real-time neural electrophysiology. We demonstrate a combined dynamic clamp and modeling approach for studying synaptic integration, a software platform for implementing flexible real-time closed-loop protocols, and the potential and limitations of Kalman filter-based techniques for online state and parameter estimation of neuron models.
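The closed loop at the heart of dynamic clamp — on every timestep, read the membrane voltage and inject a current computed from an artificial conductance — can be sketched in simulation. Here a passive leaky membrane with illustrative parameters stands in for the recorded cell; a real rig closes this loop in hard real time through an amplifier.

```python
# Toy dynamic-clamp loop: an exponentially decaying artificial synaptic
# conductance g_syn(t) is "injected" into a passive membrane model.
# Parameters (capacitance, leak, reversal potentials) are illustrative.
import math

def dynamic_clamp(T=0.05, dt=1e-4, g0=10e-9, tau=5e-3, E_syn=0.0):
    C, g_leak, E_leak = 100e-12, 5e-9, -70e-3   # 100 pF, 5 nS, -70 mV
    V = E_leak                                   # start at rest
    trace = []
    for i in range(int(T / dt)):
        t = i * dt
        g_syn = g0 * math.exp(-t / tau)          # artificial conductance
        I_syn = -g_syn * (V - E_syn)             # closed-loop injected current
        dV = (-g_leak * (V - E_leak) + I_syn) / C
        V += dV * dt                             # forward-Euler membrane update
        trace.append(V)
    return trace
```

The key property is that the injected current depends on the measured voltage at each step, which is why loop latency and jitter (the accuracy and stability issues the abstract mentions) directly affect the fidelity of the simulated conductance.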
274

Real-time interactive multiprogramming.

Heher, Anthony Douglas. January 1978 (has links)
This thesis describes a new method of constructing a real-time interactive software system for a minicomputer to enable the interactive facilities to be extended and improved in a multitasking environment which supports structured programming concepts. A memory management technique called Software Virtual Memory Management, which is implemented entirely in software, is used to extend the concept of hardware virtual memory management. This extension unifies the concepts of memory space allocation and control and of file system management, resulting in a system which is simple and safe for the application-oriented user. The memory management structures are also used to provide exceptional protection facilities. A number of users can work interactively, using a high-level structured language in a multitasking environment, with very secure access to shared data bases. A system is described which illustrates these concepts. This system is implemented using an interpreter, and significant improvements in the performance of interpretive systems are shown to be possible using the structures presented. The system has been implemented on a Varian minicomputer as well as on a microprogrammable microprocessor. The virtual memory technique has been shown to work with a variety of bulk storage devices and should be particularly suitable for use with recent bulk storage developments such as bubble memory and charge-coupled devices. A detailed comparison of the performance of the system vis-à-vis that of a FORTRAN-based system executing in-line code with swapping has been performed by means of a process control case study. These measurements show that an interpretive system using this new memory management technique can have a performance which is comparable to or better than a compiler-oriented system. / Thesis (Ph.D.)-University of Natal, 1978.
275

Real time image processing on parallel arrays for gigascale integration

Chai, Sek Meng 12 1900 (has links)
No description available.
276

Global investigations of radiated seismic energy and real-time implementation

Convers, Jaime Andres 13 January 2014 (has links)
This dissertation investigates radiated seismic energy measurements from large earthquakes and rupture duration determinations as significant properties of the dynamic earthquake rupture, and their application to the identification of very large and slow-rupturing earthquakes. It includes a description of earthquake-released seismic energy from 1997 to 2010 and the identification of slow-source tsunami earthquakes in that time period. The implementation of these measurements in real time since the beginning of 2009 is also discussed, with a case study of the 2010 Mentawai tsunami earthquake. Further studies of rupture duration assessment and its technical improvements for more rapid and robust solutions are investigated as well, with application to the 2011 Tohoku-Oki earthquake and a case of directivity in the 2007 Mw 8.1 Solomon Islands earthquake. Finally, the set of routines and programs developed for implementation at Georgia Tech and IRIS to produce the real-time results since 2009 presented in this study is described.
277

Storage and aggregation for fast analytics systems

Amur, Hrishikesh 13 January 2014 (has links)
Computing in the last decade has been characterized by the rise of data-intensive scalable computing (DISC) systems. In particular, recent years have witnessed a rapid growth in the popularity of fast analytics systems. These systems exemplify a trend where queries that previously involved batch processing (e.g., running a MapReduce job) on a massive amount of data are increasingly expected to be answered in near real time with low latency. This dissertation addresses the problem that existing designs for various components used in the software stack for DISC systems do not meet the requirements demanded by fast analytics applications. In this work, we focus specifically on two components: 1. Key-value storage: Recent work has focused primarily on supporting reads with high throughput and low latency. However, fast analytics applications require that new data entering the system (e.g., newly crawled web pages, currently trending topics) be quickly made available to queries and analysis codes. This means that along with supporting reads efficiently, these systems must also support writes with high throughput, which current systems fail to do. In the first part of this work, we solve this problem by proposing a new key-value storage system, called the WriteBuffer (WB) Tree, that provides up to 30× higher write performance and similar read performance compared to current high-performance systems. 2. GroupBy-Aggregate: Fast analytics systems require support for fast, incremental aggregation of data with low-latency access to results. Existing techniques are memory-inefficient and do not support incremental aggregation efficiently when aggregate data overflows to disk. In the second part of this dissertation, we propose a new data structure called the Compressed Buffer Tree (CBT) to implement memory-efficient in-memory aggregation. We also show how the WB Tree can be modified to support efficient disk-based aggregation.
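The write-buffering idea common to both components — absorb updates in memory and merge them into slower storage in batches — can be sketched as follows. This toy aggregator only illustrates the principle; it is not the WB Tree or CBT design, and a plain dict stands in for the disk-resident structure.

```python
# Sketch of buffered incremental GroupBy-Aggregate: partial sums accumulate
# in a small in-memory buffer and are merged into a (dict-backed, standing
# in for on-disk) store whenever the buffer fills, so individual writes
# stay cheap while reads see a consistent merged view.

class BufferedAggregator:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.buffer = {}      # recent partial aggregates (memory)
        self.store = {}       # merged aggregates (stand-in for disk)

    def add(self, key, value):
        """Absorb one update; flush in bulk when the buffer is full."""
        self.buffer[key] = self.buffer.get(key, 0) + value
        if len(self.buffer) >= self.capacity:
            self.flush()

    def flush(self):
        """Merge buffered partial sums into the backing store."""
        for k, v in self.buffer.items():
            self.store[k] = self.store.get(k, 0) + v
        self.buffer.clear()

    def result(self, key):
        """Reads combine the store with any still-buffered partials."""
        return self.store.get(key, 0) + self.buffer.get(key, 0)
```

Batching the merges is what amortizes the cost of the slow medium; the WB Tree and CBT apply far more sophisticated versions of this trade-off (tree-structured buffers, compression) to get high write throughput without sacrificing reads.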
278

The virtual time function and rate-based schedulers for real-time communications over packet networks

Devadason, Tarith Navendran January 2007 (has links)
[Truncated abstract] The accelerating pace of convergence of communications from disparate application types onto common packet networks has made quality of service an increasingly important and problematic issue. Applications of different classes have diverse service requirements at distinct levels of importance. Also, these applications offer traffic to the network with widely variant characteristics. Yet a common network is expected at all times to meet the individual communication requirements of each flow from all of these application types. One group of applications that has particularly critical service requirements is the class of real-time applications, such as packet telephony. They require both the reproduction of a specified timing sequence at the destination, and nearly instantaneous interaction between the users at the endpoints. The associated delay limits (in terms of upper bound and variation) must be consistently met; at every point where these are violated, the network transfer becomes worthless, as the data cannot be used at all. In contrast, other types of applications may suffer appreciable deterioration in quality of service as a result of slower transfer, but the goal of the transfer can still largely be met. The goal of this thesis is to evaluate the potential effectiveness of a class of packet scheduling algorithms in meeting the specific service requirements of real-time applications in a converged network environment. Since the proposal of Weighted Fair Queueing, there have been several schedulers suggested to be capable of meeting the divergent service requirements of both real-time and other data applications. ... This simulation study also sheds light on false assumptions that can be made about the isolation produced by start-time and finish-time schedulers based on the deterministic bounds obtained. The key contributions of this work are as follows. 
We clearly show how the definition of the virtual time function affects both delay bounds and delay distributions for a real-time flow in a converged network, and how optimality is achieved. Despite apparent indications to the contrary from delay bounds, the simulation analysis demonstrates that start-time rate-based schedulers possess useful characteristics for real-time flows that the traditional finish-time schedulers do not. Finally, it is shown that all the virtual time rate-based schedulers considered can produce isolation problems over multiple hops in networks with high loading. It becomes apparent that the benchmark First-Come-First-Served scheduler, with spacing and call admission control at the network ingresses, is a preferred arrangement for real-time flows (although lower priority levels would also need to be implemented for dealing with other data flows).
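The virtual-time tagging at the heart of these rate-based schedulers can be sketched as follows. This simplified version takes the system virtual time to be the packet's real arrival time; actual WFQ variants derive V(t) from a fluid fair-queueing reference system, which is precisely the definitional choice the thesis examines.

```python
# Finish-time rate-based scheduling sketch: each packet gets a virtual
# start tag S = max(V(arrival), previous finish tag of its flow) and a
# virtual finish tag F = S + length / weight; packets are served in
# increasing F order. V(t) is approximated by arrival time here.

def wfq_order(packets, weights):
    """packets: list of (arrival, flow, length); returns service order
    as a list of packet indices."""
    last_finish = {flow: 0.0 for flow in weights}
    tagged = []
    for idx, (arrival, flow, length) in enumerate(packets):
        start = max(arrival, last_finish[flow])       # virtual start tag
        finish = start + length / weights[flow]       # virtual finish tag
        last_finish[flow] = finish
        tagged.append((finish, idx))
    return [idx for _, idx in sorted(tagged)]
```

With equal-length packets arriving together, a flow with twice the weight earns the earlier finish tag and is served first; start-time schedulers instead sort on S, which is the distinction behind the delay-distribution results above.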
279

Système de suivi des tempêtes de verglas en temps réel [Real-time ice storm monitoring system] = Analysis of real time icing events /

Eter, Walid, January 2003 (has links)
Thesis (M.Eng.) -- Université du Québec à Chicoutimi, 2003. / Bibliography: leaves 182-187. Electronic document also available in PDF format. CaQCU
280

Designing a real-time data streaming technique for enhancing the effectiveness of destination selection

Githinji, Stanley Muturi 08 1900 (has links)
The effectiveness of tour destination selection is dependent on pre-visit information sources. As competition increases in the tourism industry, destination organisations need to improve current destination selection processes. Research on current processes indicates that information sources accessed by potential tourists when making travel decisions may not be a true reflection of what the destination is offering. Any negative difference between perceived images during pre-visit and real images during the actual visit may result in poor destination reputation and dissatisfied customers. This research addresses this gap by improving the process of destination selection using a real-time data streaming mediation technique as an additional pre-visit information source. The researcher adopted a social-technologist research paradigm and a design-science approach. The research process was executed in three phases: the first phase focused on gathering knowledge on destination selection and pre-visit information sources. The findings in Phase 1 were used in Phase 2 to develop and test the performance of a prototype. Phase 3 involved the evaluation of the prototype tool in a real-world setting. One of the main outcomes of this research is the development of a destination selection framework using real-time data streaming mediation and a tool (http://www.tourcamportal.com) as proof of concept. This research has shown that real-time images are valuable pre-visit information sources when making travel decisions. Real-time images authenticate destination attractions, provide real-time availability of destinations, reduce speculation on destination attractions, and provide actual representations of destinations. The findings of this study contribute to the body of knowledge and practice in the tourism sector and provide new areas for further research. / Computing / D. Phil. (Information Systems)
