21

The development of a mass memory unit for a micro-satellite using NAND flash memory

Horsburgh, Ian J. 2005
Thesis (MScEng)--Stellenbosch University, 2005.

This thesis investigates the possible use of NAND flash memory for a mass memory unit on a micro-satellite. The investigation begins with an analysis of NAND flash memory devices, including the complexity of the internal circuitry and the occurrence of bad memory sections (bad blocks). Design specifications are produced, and various design architectures are discussed and evaluated. Subsequently, a four-bus serial access architecture using 16-bit NAND flash devices was chosen for further development. A VHDL design was created to realise the intended system functionality. The main functions of the design include a sustained write data rate of 24 MB/s, bad block management, multiple image storing, error checking and correction, defective device handling, and reading while writing. The design was simulated extensively using NAND flash simulation models. Finally, a demonstration test board was designed and produced. This board includes an FPGA and an array of 16 8-bit NAND flash devices. The board was tested successfully, and a write data rate of 12 MB/s was achieved along with all the other main functions.
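To make the bad-block handling concrete: NAND devices ship with defective blocks, conventionally flagged by a non-0xFF byte in the spare area of a block's first page, and a controller must scan these markers once and then remap logical blocks around the defects. The Python sketch below models only that remapping idea; it is a hypothetical illustration (the `FakeDevice` layout and `read_spare_byte` accessor are assumptions), not the VHDL design developed in the thesis.

```python
# Illustrative model of NAND bad-block scanning and logical-to-physical
# remapping, as commonly implemented in flash controllers. A sketch only,
# not the thesis's FPGA/VHDL implementation.

BLOCKS_PER_DEVICE = 2048
GOOD_MARKER = 0xFF  # common factory convention: 0xFF in the spare area = good


class FakeDevice:
    """Stand-in for a real NAND device: spare[block][0] holds the marker."""
    def __init__(self, bad):
        self.spare = [[0x00 if b in bad else GOOD_MARKER]
                      for b in range(BLOCKS_PER_DEVICE)]


def read_spare_byte(device, block):
    """Hypothetical accessor: the bad-block marker byte stored in the
    spare area of the block's first page."""
    return device.spare[block][0]


def build_bad_block_table(device):
    """Scan every block once at start-up and record the defective ones."""
    return {b for b in range(BLOCKS_PER_DEVICE)
            if read_spare_byte(device, b) != GOOD_MARKER}


def build_remap_table(bad_blocks):
    """Map each logical block number to the next good physical block,
    so that writes simply skip over defects."""
    return [b for b in range(BLOCKS_PER_DEVICE) if b not in bad_blocks]


dev = FakeDevice(bad={5, 1030})
remap = build_remap_table(build_bad_block_table(dev))
assert 5 not in remap and 1030 not in remap  # defects are never written
```

In a hardware implementation such a table would typically live in on-chip RAM rather than a Python list, but the table-driven remapping idea is the same.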
22

Scalable Integration View Computation and Maintenance with Parallel, Adaptive and Grouping Techniques

Liu, Bin 19 August 2005
" Materialized integration views constructed by integrating data from multiple distributed data sources help to achieve better access, reliable performance, and high availability for a wide range of applications. In this dissertation, we propose parallel, adaptive, and grouping techniques to address scalability challenges in high-performance integration view computation and maintenance due to increasingly large data sources and high rates of source updates. State-of-the-art parallel integration view computation makes the common assumption that the maximal pipelined parallelism leads to superior performance. We instead propose segmented bushy parallel processing that combines pipelined parallelism with alternate forms of parallelism to achieve an overall more effective strategy. Experimental studies conducted over a cluster of high-performance PCs confirm that the proposed strategy has an on average of 50\% improvement in terms of total processing time in comparison to existing solutions. Run-time adaptation becomes critical for parallel integration view computation due to its long running and memory intensive nature. We investigate two types of state level adaptations, namely, state spill and state relocation, to address the run-time memory shortage. We propose lazy-disk and active-disk approaches that integrate both adaptations to maximize run-time query throughput in a memory constrained environment. We also propose global throughput-oriented state adaptation strategies for computation plans with multiple state intensive operators. Extensive experiments confirm the effectiveness of our proposed adaptation solutions. Once results have been computed and materialized, it's typically more efficient to maintain them incrementally instead of full recomputation. However, state-of-the-art incremental view maintenance require O($n^2$) maintenance queries with n being the number of data sources that the view is defined upon. Moreover, they do not exploit view definitions and data source processing capabilities to further improve view maintenance performance. We propose novel grouping maintenance algorithms that dramatically reduce the number of maintenance queries to (O(n)). A cost-based view maintenance framework has been proposed to generate optimized maintenance plans tuned to particular environmental settings. Extensive experimental studies verify the effectiveness of our maintenance algorithms as well as the maintenance framework. "
23

Automating the conversion of natural language fiction to multi-modal 3D animated virtual environments

Glass, Kevin Robert January 2009
Popular fiction books describe rich visual environments that contain characters, objects, and behaviour. This research develops automated processes for converting text sourced from fiction books into animated virtual environments and multi-modal films. This involves the analysis of unrestricted natural language fiction to identify appropriate visual descriptions, and the interpretation of the identified descriptions for constructing animated 3D virtual environments. The goal of the text analysis stage is the creation of annotated fiction text, which identifies visual descriptions in a structured manner. A hierarchical rule-based learning system is created that induces patterns from example annotations provided by a human, and uses these for the creation of additional annotations. Patterns are expressed as tree structures that abstract the input text on different levels according to structural (token, sentence) and syntactic (parts-of-speech, syntactic function) categories. Patterns are generalized using pair-wise merging, where dissimilar sub-trees are replaced with wild-cards. The result is a small set of generalized patterns that are able to create correct annotations. A set of generalized patterns represents a model of an annotator's mental process regarding a particular annotation category. Annotated text is interpreted automatically for constructing detailed scene descriptions. This includes identifying which scenes to visualize, and identifying the contents and behaviour in each scene. Entity behaviour in a 3D virtual environment is formulated using time-based constraints that are automatically derived from annotations. Constraints are expressed as non-linear symbolic functions that restrict the trajectories of a pair of entities over a continuous interval of time. Solutions to these constraints specify precise behaviour. We create an innovative quantified constraint optimizer for locating sound solutions, which uses interval arithmetic for treating time and space as contiguous quantities. This optimization method uses a technique of constraint relaxation and tightening that allows solution approximations to be located where constraint systems are inconsistent (an ability not previously explored in interval-based quantified constraint solving). 3D virtual environments are populated by automatically selecting geometric models or procedural geometry-creation methods from a library. 3D models are animated according to trajectories derived from constraint solutions. The final animated film is sequenced using a range of modalities including animated 3D graphics, textual subtitles, audio narrations, and foleys. Hierarchical rule-based learning is evaluated over a range of annotation categories. Models are induced for different categories of annotation without modifying the core learning algorithms, and these models are shown to be applicable to different types of books. Models are induced automatically with accuracies ranging between 51.4% and 90.4%, depending on the category. We show that models are refined if further examples are provided, and this supports a boot-strapping process for training the learning mechanism. The task of interpreting annotated fiction text and populating 3D virtual environments is successfully automated using our described techniques. Detailed scene descriptions are created accurately, where between 83% and 96% of the automatically generated descriptions require no manual modification (depending on the type of description). 
The interval-based quantified constraint optimizer fully automates the behaviour specification process. Sample animated multi-modal 3D films are created using extracts from fiction books that are unrestricted in terms of complexity or subject matter (unlike existing text-to-graphics systems). These examples demonstrate that: behaviour is visualized that corresponds to the descriptions in the original text; appropriate geometry is selected (or created) for visualizing entities in each scene; sequences of scenes are created for a film-like presentation of the story; and that multiple modalities are combined to create a coherent multi-modal representation of the fiction text. This research demonstrates that visual descriptions in fiction text can be automatically identified, and that these descriptions can be converted into corresponding animated virtual environments. Unlike existing text-to-graphics systems, we describe techniques that function over unrestricted natural language text and perform the conversion process without the need for manually constructed repositories of world knowledge. This enables the rapid production of animated 3D virtual environments, allowing the human designer to focus on creative aspects.
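The pattern-generalization step described above (pair-wise merging in which dissimilar sub-trees become wild-cards) is essentially anti-unification over trees. Here is a minimal Python sketch of that merge, assuming a simple (label, children...) tuple representation; the representation and the example patterns are illustrative assumptions, not the thesis's data structures.

```python
# Minimal sketch of pair-wise pattern merging: two tree patterns are
# compared node by node, and any sub-trees that differ are replaced by a
# wildcard. Trees are (label, children...) tuples for illustration.

WILDCARD = "*"


def merge(a, b):
    """Generalize two tree patterns into the most specific pattern that
    matches both (a form of anti-unification)."""
    # Leaves: keep them only if identical, otherwise generalize.
    if not isinstance(a, tuple) or not isinstance(b, tuple):
        return a if a == b else WILDCARD
    # Mismatched labels or arity: the whole sub-tree becomes a wildcard.
    if a[0] != b[0] or len(a) != len(b):
        return WILDCARD
    return (a[0],) + tuple(merge(x, y) for x, y in zip(a[1:], b[1:]))


# Two sentence-level patterns that differ only in subject and object:
p1 = ("S", ("NP", "John"), ("VP", ("V", "entered"), ("NP", "the room")))
p2 = ("S", ("NP", "Mary"), ("VP", ("V", "entered"), ("NP", "the hall")))
print(merge(p1, p2))
# ('S', ('NP', '*'), ('VP', ('V', 'entered'), ('NP', '*')))
```

Repeating such merges over annotated examples shrinks many specific patterns into the small set of generalized patterns the thesis reports.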
24

MIST : MIgrate The Storage Too

Kamala, R 07 1900
We address the problem of migrating the local storage of desktop users to remote sites. Assuming that a network connection is maintained between the source and destination after the migration, it becomes possible to transfer only a fraction of the storage state while operating as close to disconnected mode as possible. We have designed an approach that determines the subset of storage state to be transferred based on past accesses. We show that it is feasible to use information about accessed files to identify clusters and hot-spots in the file system. Using the tree structure of the file system and applying an appropriate similarity measure to user accesses, we can approximate the working sets of the data accessed by the applications running at the time. Our results indicate that our technique reduces the amount of data to be copied by two orders of magnitude, bringing it into the realm of the possible.
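As a rough illustration of how past accesses and the file-system tree can expose hot-spots: score every directory by the number of logged accesses that fall beneath it, then pick high-scoring subtrees as the candidate working set. The scoring rule, threshold, and paths below are assumptions made for illustration; the thesis's actual similarity measure may differ.

```python
# Illustrative hot-spot detection over a log of accessed file paths:
# every ancestor directory is credited with the accesses beneath it, and
# the deepest directories covering a large share of accesses are chosen
# as the subset of storage to transfer eagerly. A sketch, not the
# thesis's exact clustering technique.

from collections import Counter
from pathlib import PurePosixPath


def directory_scores(accessed_paths):
    """Count, for every ancestor directory, how many accesses it covers."""
    scores = Counter()
    for path in map(PurePosixPath, accessed_paths):
        for ancestor in path.parents:
            scores[str(ancestor)] += 1
    return scores


def hot_spots(accessed_paths, min_share=0.5):
    """Directories covering at least min_share of all accesses, deepest
    first; these subtrees approximate the current working set."""
    scores = directory_scores(accessed_paths)
    total = len(accessed_paths)
    spots = [d for d, c in scores.items() if c / total >= min_share]
    return sorted(spots, key=lambda d: d.count("/"), reverse=True)


log = ["/home/u/proj/src/a.c", "/home/u/proj/src/b.c",
       "/home/u/proj/doc/r.md", "/home/u/music/x.mp3"]
print(hot_spots(log))  # deepest qualifying subtree first: .../proj/src, ...
```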
25

The Sea of Stuff : a model to manage shared mutable data in a distributed environment

Conte, Simone Ivan January 2019
Managing data is one of the main challenges in distributed systems and computer science in general. Data is created, shared, and managed across heterogeneous distributed systems of users, services, applications, and devices without a clear and comprehensive data model. This technological fragmentation and lack of a common data model result in a poor understanding of what data is, how it evolves over time, how it should be managed in a distributed system, and how it should be protected and shared. From a user perspective, for example, backing up data over multiple devices is a hard and error-prone process, or synchronising data with a cloud storage service can result in conflicts and unpredictable behaviours. This thesis identifies three challenges in data management: (1) how to extend the current data abstractions so that content, for example, is accessible irrespective of its location, versionable, and easy to distribute; (2) how to enable transparent data storage relative to locations, users, applications, and services; and (3) how to allow data owners to protect data against malicious users and automatically control content over a distributed system. These challenges are studied in detail in relation to the current state of the art and addressed throughout the rest of the thesis. The artefact of this work is the Sea of Stuff (SOS), a generic data model of immutable self-describing location-independent entities that allow the construction of a distributed system where data is accessible and organised irrespective of its location, easy to protect, and can be automatically managed according to a set of user-defined rules. The evaluation of this thesis demonstrates the viability of the SOS model for managing data in a distributed system and using user-defined rules to automatically manage data across multiple nodes.
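One way to picture the entities the SOS model is built on: identity is derived from content, so data is verifiable and addressable irrespective of where it is stored, and mutation is expressed as a new immutable version linked to its predecessor. The sketch below is a minimal Python interpretation under those assumptions; the field names and hashing choices are illustrative, not the SOS specification.

```python
# Minimal sketch of immutable, self-describing, location-independent
# entities: content-derived identifiers plus an immutable version chain.
# An interpretation for illustration, not the SOS model's actual types.

import hashlib
import json
from dataclasses import dataclass


def digest(payload: bytes) -> str:
    """Location-independent identifier: the hash of the content itself."""
    return hashlib.sha256(payload).hexdigest()


@dataclass(frozen=True)  # frozen: an entity never changes after creation
class Version:
    content_guid: str          # what this version of the asset points at
    previous_guid: str | None  # link to the predecessor version, if any

    @property
    def guid(self) -> str:
        """Versions are themselves content-addressed via their metadata,
        so the whole history is verifiable from the tip."""
        meta = json.dumps([self.content_guid, self.previous_guid])
        return digest(meta.encode())


v1 = Version(digest(b"first draft"), None)
v2 = Version(digest(b"second draft"), v1.guid)  # "mutation" = new version
assert v2.previous_guid == v1.guid              # an immutable history chain
```

Because identifiers are derived from content rather than location, any node holding a copy can serve it, which is what makes location-transparent storage and synchronisation tractable.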
26

Performance Specific I/O Scheduling Framework for Cloud Storage

Jain, Nitisha January 2015
Virtualization is one of the key enabling technologies for Cloud Computing, facilitating the sharing of resources among virtual machines. However, it incurs performance overheads due to contention for physical devices such as disk and network bandwidth. Various I/O applications with different latency requirements may execute concurrently on different virtual machines provisioned on a single server in Cloud data-centers. It is pertinent that the performance SLAs of such applications are satisfied through intelligent scheduling and allocation of disk resources. The underlying disk scheduler at the server, being oblivious to the characteristics of these applications, is unable to distinguish between their requests. Therefore, all applications receive best-effort service by default, which may degrade the performance of latency-sensitive applications. In this work, we propose a novel disk scheduling framework, PriDyn (Dynamic Priority), which provides differentiated services to the various I/O applications co-located on a single host based on their latency attributes and desired performance. The framework employs a scheduling algorithm that dynamically computes latency estimates for all concurrent I/O applications for a given system state. Based on these, an appropriate priority assignment for the applications is determined, which the underlying disk scheduler at the host takes into consideration while scheduling the I/O applications on the physical disk. The proposed scheduling framework successfully satisfies the QoS requirements of the concurrent I/O applications within system constraints, as verified through extensive experimental analysis. To realize the benefits of the differentiated services provided by the PriDyn scheduler, an appropriate combination of I/O applications must be ensured on the servers through intelligent meta-scheduling at the Cloud data-center level. To achieve this, the second part of this work extends the PriDyn framework into a proactive admission control and scheduling framework, PCOS (Prescient Cloud I/O Scheduler). PCOS aims to maximize the utilization of disk resources without adversely affecting the performance of the applications scheduled on the systems. By anticipating the performance of systems running multiple I/O applications, PCOS prevents the scheduling of undesirable workloads on them, maintaining the necessary balance between resource consolidation and application performance guarantees. The PCOS framework includes the PriDyn scheduler as a key component and utilizes its dynamic disk resource allocation capabilities to meet its goals. Experimental validation on real-world I/O traces demonstrates that the proposed framework achieves appreciable enhancements in I/O performance through the selection of optimal I/O workload combinations, indicating that this approach is a promising step towards enabling QoS guarantees for Cloud data-centers.
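The core scheduling idea can be sketched as follows: estimate each concurrent application's completion latency under an equal share of disk bandwidth, then give the highest priority to the application with the least slack before its latency deadline. The Python below is a simplified illustration under those assumptions (uniform bandwidth sharing, a single latency estimate per application), not PriDyn's actual estimator.

```python
# Simplified illustration of dynamic priority assignment from latency
# estimates, in the spirit of PriDyn: least slack before the SLA
# deadline wins the highest priority. The latency model and fields are
# assumptions for illustration, not the framework's actual algorithm.

from dataclasses import dataclass


@dataclass
class App:
    name: str
    remaining_mb: float   # outstanding I/O
    deadline_s: float     # latency requirement from the SLA


def assign_priorities(apps, disk_bw_mbps=100.0):
    fair_share = disk_bw_mbps / len(apps)
    # Estimated completion time if disk bandwidth is shared equally.
    est = {a.name: a.remaining_mb / fair_share for a in apps}
    # Least slack (deadline minus estimate) gets the highest priority.
    ranked = sorted(apps, key=lambda a: a.deadline_s - est[a.name])
    return {a.name: prio for prio, a in enumerate(ranked)}  # 0 = highest


apps = [App("backup", 400, 60.0),      # bulk transfer, latency-tolerant
        App("database", 50, 2.0),      # latency-sensitive
        App("analytics", 120, 30.0)]
print(assign_priorities(apps))
# database has the least slack, so it is ranked first (priority 0)
```

Recomputing these estimates as the system state changes is what makes the priorities dynamic, and the same latency estimates are the natural input for admission decisions of the PCOS kind.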
