21

Real-time interactive multiprogramming.

Heher, Anthony Douglas. January 1978 (has links)
This thesis describes a new method of constructing a real-time interactive software system for a minicomputer to enable the interactive facilities to be extended and improved in a multi-tasking environment which supports structured programming concepts. A memory management technique called Software Virtual Memory Management, which is implemented entirely in software, is used to extend the concept of hardware virtual memory management. This extension unifies the concepts of memory space allocation and control and of file system management, resulting in a system which is simple and safe for the application-oriented user. The memory management structures are also used to provide exceptional protection facilities. A number of users can work interactively, using a high-level structured language in a multi-tasking environment, with very secure access to shared databases. A system is described which illustrates these concepts. This system is implemented using an interpreter, and significant improvements in the performance of interpretive systems are shown to be possible using the structures presented. The system has been implemented on a Varian minicomputer as well as on a microprogrammable microprocessor. The virtual memory technique has been shown to work with a variety of bulk storage devices and should be particularly suitable for use with recent bulk storage developments such as bubble memory and charge-coupled devices. A detailed comparison of the performance of the system vis-à-vis that of a FORTRAN-based system executing in-line code with swapping has been performed by means of a process control case study. These measurements show that an interpretive system using this new memory management technique can have performance comparable to or better than that of a compiler-oriented system. / Thesis (Ph.D.)-University of Natal, 1978.
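The core mechanism here, paging implemented entirely in software over a bulk storage device, can be sketched briefly. The following Python fragment is a hedged illustration only, not the thesis's design: the LRU eviction policy, the dict-backed bulk store, and all names (SoftVM, PAGE_SIZE, RESIDENT_LIMIT) are assumptions made for the example.

```python
# Minimal sketch of software-managed virtual memory, assuming LRU eviction
# and a dictionary standing in for the bulk storage device.
from collections import OrderedDict

PAGE_SIZE = 512          # words per page (illustrative)
RESIDENT_LIMIT = 4       # pages kept in main memory at once (illustrative)

class SoftVM:
    def __init__(self, backing_store):
        self.backing = backing_store          # page_no -> bytes (disk, bubble memory, ...)
        self.resident = OrderedDict()         # page_no -> bytearray, in LRU order

    def _fault(self, page_no):
        """Load a page from bulk storage, evicting the LRU page if needed."""
        if len(self.resident) >= RESIDENT_LIMIT:
            victim, data = self.resident.popitem(last=False)
            self.backing[victim] = bytes(data)        # write back on eviction
        self.resident[page_no] = bytearray(
            self.backing.get(page_no, bytes(PAGE_SIZE)))

    def read(self, addr):
        page_no, offset = divmod(addr, PAGE_SIZE)
        if page_no not in self.resident:
            self._fault(page_no)                      # software "page fault"
        self.resident.move_to_end(page_no)            # mark recently used
        return self.resident[page_no][offset]

    def write(self, addr, value):
        page_no, offset = divmod(addr, PAGE_SIZE)
        if page_no not in self.resident:
            self._fault(page_no)
        self.resident.move_to_end(page_no)
        self.resident[page_no][offset] = value & 0xFF

vm = SoftVM(backing_store={})
vm.write(5000, 42)           # touches page 9, faulting it in
assert vm.read(5000) == 42
```

Because the fault handler is ordinary code rather than hardware, the same mechanism can also mediate file access and enforce per-user protection, which is the unification the abstract describes.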
22

The development of a mass memory unit for a micro-satellite using NAND flash memory

Horsburgh, Ian J. 2005 (has links)
Thesis (MScEng)--Stellenbosch University, 2005. / ENGLISH ABSTRACT: This thesis investigates the possible use of NAND flash memory for a mass memory unit on a micro-satellite. The investigation begins with an analysis of NAND flash memory devices, including the complexity of the internal circuitry and the occurrence of bad memory sections (bad blocks). Design specifications are produced, and various design architectures are discussed and evaluated. Subsequently, a four-bus serial access architecture using 16-bit NAND flash devices was chosen for further development. A VHDL design was created in order to realise the intended system functionality. The main functions of the design include a sustained write data rate of 24 MB/s, bad block management, multiple image storing, error checking and correction, defective device handling, and reading while writing. The design was simulated extensively using NAND flash simulation models. Finally, a demonstration test board was designed and produced. This board includes an FPGA and an array of sixteen 8-bit NAND flash devices. The board was tested successfully, and a write data rate of 12 MB/s was achieved along with all the other main functions. / AFRIKAANSE OPSOMMING (translated): This thesis investigates the possible use of NAND flash technology as the memory unit of a micro-satellite. As a starting point, NAND flash technology is examined in terms of the complexity of its internal circuitry and the occurrence of defective memory segments. Design specifications are then produced, and different design options are compared with one another. From these considerations it was decided to implement the solution as a four-bus serial structure consisting of 16-bit NAND flash devices. To realise the design specifications, a VHDL system was created. The most important functions of this system are a sustained write rate of 24 MB/s, the management of defective memory segments, the storage of more than one image, error detection and correction, optimal operation in the presence of defective memory devices and, finally, the simultaneous reading and writing of data. The system was tested extensively with NAND flash simulation models. Finally, a physical demonstration board, consisting of an FPGA and sixteen 8-bit NAND flash devices, was designed and built. The physical measurements were a success: a write rate of 12 MB/s was achieved, together with correct operation of the other main functions.
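A central point in the abstract is that NAND flash ships with bad blocks that the controller must manage. The Python sketch below illustrates one common approach, a logical-to-physical remapping table built by skipping marked-bad blocks; the class name, block count, and table-based scheme are illustrative assumptions, not the VHDL design described in the thesis.

```python
# Hedged sketch of NAND bad-block handling: logical block numbers are
# remapped past blocks whose bad-block marker is set at scan time.
NUM_BLOCKS = 64                                    # illustrative array size

class NandArray:
    def __init__(self, bad_blocks):
        self.bad = set(bad_blocks)                 # found by scanning factory markers
        self.data = {}                             # physical block -> payload
        # Build the logical->physical table once, skipping bad blocks.
        self.l2p = [p for p in range(NUM_BLOCKS) if p not in self.bad]

    def write_block(self, logical, payload):
        if logical >= len(self.l2p):
            raise IOError("out of good blocks")
        self.data[self.l2p[logical]] = payload

    def read_block(self, logical):
        return self.data[self.l2p[logical]]

nand = NandArray(bad_blocks={3, 17})
nand.write_block(3, b"image-segment")    # lands on physical block 4, not 3
assert nand.read_block(3) == b"image-segment"
```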
23

Scalable Integration View Computation and Maintenance with Parallel, Adaptive and Grouping Techniques

Liu, Bin 19 August 2005 (has links)
" Materialized integration views constructed by integrating data from multiple distributed data sources help to achieve better access, reliable performance, and high availability for a wide range of applications. In this dissertation, we propose parallel, adaptive, and grouping techniques to address scalability challenges in high-performance integration view computation and maintenance due to increasingly large data sources and high rates of source updates. State-of-the-art parallel integration view computation makes the common assumption that the maximal pipelined parallelism leads to superior performance. We instead propose segmented bushy parallel processing that combines pipelined parallelism with alternate forms of parallelism to achieve an overall more effective strategy. Experimental studies conducted over a cluster of high-performance PCs confirm that the proposed strategy has an on average of 50\% improvement in terms of total processing time in comparison to existing solutions. Run-time adaptation becomes critical for parallel integration view computation due to its long running and memory intensive nature. We investigate two types of state level adaptations, namely, state spill and state relocation, to address the run-time memory shortage. We propose lazy-disk and active-disk approaches that integrate both adaptations to maximize run-time query throughput in a memory constrained environment. We also propose global throughput-oriented state adaptation strategies for computation plans with multiple state intensive operators. Extensive experiments confirm the effectiveness of our proposed adaptation solutions. Once results have been computed and materialized, it's typically more efficient to maintain them incrementally instead of full recomputation. However, state-of-the-art incremental view maintenance require O($n^2$) maintenance queries with n being the number of data sources that the view is defined upon. Moreover, they do not exploit view definitions and data source processing capabilities to further improve view maintenance performance. We propose novel grouping maintenance algorithms that dramatically reduce the number of maintenance queries to (O(n)). A cost-based view maintenance framework has been proposed to generate optimized maintenance plans tuned to particular environmental settings. Extensive experimental studies verify the effectiveness of our maintenance algorithms as well as the maintenance framework. "
24

Automating the conversion of natural language fiction to multi-modal 3D animated virtual environments

Glass, Kevin Robert January 2009 (has links)
Popular fiction books describe rich visual environments that contain characters, objects, and behaviour. This research develops automated processes for converting text sourced from fiction books into animated virtual environments and multi-modal films. This involves the analysis of unrestricted natural language fiction to identify appropriate visual descriptions, and the interpretation of the identified descriptions for constructing animated 3D virtual environments. The goal of the text analysis stage is the creation of annotated fiction text, which identifies visual descriptions in a structured manner. A hierarchical rule-based learning system is created that induces patterns from example annotations provided by a human, and uses these for the creation of additional annotations. Patterns are expressed as tree structures that abstract the input text on different levels according to structural (token, sentence) and syntactic (parts-of-speech, syntactic function) categories. Patterns are generalized using pair-wise merging, where dissimilar sub-trees are replaced with wild-cards. The result is a small set of generalized patterns that are able to create correct annotations. A set of generalized patterns represents a model of an annotator's mental process regarding a particular annotation category. Annotated text is interpreted automatically for constructing detailed scene descriptions. This includes identifying which scenes to visualize, and identifying the contents and behaviour in each scene. Entity behaviour in a 3D virtual environment is formulated using time-based constraints that are automatically derived from annotations. Constraints are expressed as non-linear symbolic functions that restrict the trajectories of a pair of entities over a continuous interval of time. Solutions to these constraints specify precise behaviour. We create an innovative quantified constraint optimizer for locating sound solutions, which uses interval arithmetic for treating time and space as continuous quantities. This optimization method uses a technique of constraint relaxation and tightening that allows solution approximations to be located where constraint systems are inconsistent (an ability not previously explored in interval-based quantified constraint solving). 3D virtual environments are populated by automatically selecting geometric models or procedural geometry-creation methods from a library. 3D models are animated according to trajectories derived from constraint solutions. The final animated film is sequenced using a range of modalities including animated 3D graphics, textual subtitles, audio narrations, and foleys. Hierarchical rule-based learning is evaluated over a range of annotation categories. Models are induced for different categories of annotation without modifying the core learning algorithms, and these models are shown to be applicable to different types of books. Models are induced automatically with accuracies ranging between 51.4% and 90.4%, depending on the category. We show that models are refined if further examples are provided, and this supports a bootstrapping process for training the learning mechanism. The task of interpreting annotated fiction text and populating 3D virtual environments is successfully automated using our described techniques. Detailed scene descriptions are created accurately, where between 83% and 96% of the automatically generated descriptions require no manual modification (depending on the type of description).
The interval-based quantified constraint optimizer fully automates the behaviour specification process. Sample animated multi-modal 3D films are created using extracts from fiction books that are unrestricted in terms of complexity or subject matter (unlike existing text-to-graphics systems). These examples demonstrate that: behaviour is visualized that corresponds to the descriptions in the original text; appropriate geometry is selected (or created) for visualizing entities in each scene; sequences of scenes are created for a film-like presentation of the story; and that multiple modalities are combined to create a coherent multi-modal representation of the fiction text. This research demonstrates that visual descriptions in fiction text can be automatically identified, and that these descriptions can be converted into corresponding animated virtual environments. Unlike existing text-to-graphics systems, we describe techniques that function over unrestricted natural language text and perform the conversion process without the need for manually constructed repositories of world knowledge. This enables the rapid production of animated 3D virtual environments, allowing the human designer to focus on creative aspects.
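The pair-wise generalization step can be illustrated compactly. The Python sketch below merges two patterns by keeping agreeing positions and inserting wild-cards elsewhere; flat token tuples and the '*' symbol are simplifying assumptions, since the thesis merges tree structures spanning token, sentence, and part-of-speech levels.

```python
# Pair-wise merging with wild-cards: positions where two patterns agree are
# kept, disagreements generalize to a wildcard that matches any token.
WILDCARD = "*"

def merge(p, q):
    """Generalize two equal-length patterns position by position."""
    if len(p) != len(q):
        return None            # the tree version would align structure instead
    return tuple(a if a == b else WILDCARD for a, b in zip(p, q))

def matches(pattern, tokens):
    return len(pattern) == len(tokens) and all(
        a == WILDCARD or a == b for a, b in zip(pattern, tokens))

p1 = ("the", "door", "opened", "slowly")
p2 = ("the", "window", "opened", "suddenly")
g = merge(p1, p2)              # ('the', '*', 'opened', '*')
assert matches(g, ("the", "gate", "opened", "wide"))
```

A small set of such generalized patterns then stands in for the annotator's decision procedure on a given annotation category, as the abstract describes.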
25

MIST : MIgrate The Storage Too

Kamala, R (has links) (PDF)
We address the problem of migrating the local storage of desktop users to remote sites. Assuming that a network connection is maintained between the source and destination after the migration, it becomes possible to transfer only a fraction of the storage state while operating as close to disconnected mode as possible. We have designed an approach to determine the subset of storage state to be transferred based on past accesses. We show that it is feasible to use information about accessed files to determine clusters and hot-spots in the file system. Using the tree structure of the file system and by applying an appropriate similarity measure to user accesses, we can approximate the working sets of the data accessed by the applications running at the time. Our results indicate that our technique reduces the amount of data to be copied by two orders of magnitude, bringing it into the realm of the possible.
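The idea of rolling file accesses up the directory tree to find hot-spots admits a short sketch. The following Python fragment is a hedged illustration, assuming a simple coverage threshold and choosing the deepest qualifying subtree; it does not reproduce the similarity measure used in the thesis.

```python
# Roll per-file access counts up to ancestor directories, then pick the
# deepest subtree that covers a large share of all accesses: a candidate
# "hot-spot" to transfer first during migration.
from collections import Counter
from pathlib import PurePosixPath

def directory_heat(accessed_files):
    """Count accesses per directory, including every ancestor."""
    heat = Counter()
    for f in accessed_files:
        for parent in PurePosixPath(f).parents:
            heat[str(parent)] += 1
    return heat

def hot_spot(accessed_files, threshold=0.5):
    """Deepest directory whose subtree covers >= threshold of all accesses."""
    heat = directory_heat(accessed_files)
    total = len(accessed_files)
    eligible = [d for d, n in heat.items() if n / total >= threshold]
    return max(eligible, key=lambda d: d.count("/"))   # most specific wins

accesses = ["/home/u/proj/src/a.c", "/home/u/proj/src/b.c",
            "/home/u/proj/doc/x.txt", "/home/u/mail/inbox"]
print(hot_spot(accesses))   # /home/u/proj/src -- covers half of all accesses
```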
26

Extensible Networked-storage Virtualization with Metadata Management at the Block Level

Flouris, Michail D. 24 September 2009 (has links)
Increased scaling costs and a lack of desired features are leading to the evolution of high-performance storage systems from centralized architectures and specialized hardware to decentralized, commodity storage clusters. Existing systems try to address storage cost and management issues at the filesystem level. Besides dictating the use of a specific filesystem, however, this approach leads to increased complexity and load imbalance towards the file-server side, which in turn increases the cost of scaling. In this thesis, we examine these problems at the block level. This approach has several advantages, such as transparency, cost-efficiency, better resource utilization, simplicity, and easier management. First, we explore the mechanisms, the merits, and the overheads associated with advanced metadata-intensive functionality at the block level, by providing versioning at the block level. We find that block-level versioning has low overhead and offers transparency and simplicity advantages over filesystem-based approaches. Secondly, we study the problem of providing the extensibility required by the diverse and changing needs of applications that may share a single storage system. We provide support for (i) adding desired functions as block-level extensions, and (ii) flexibly combining them to create modular I/O hierarchies. In this direction, we design, implement, and evaluate an extensible block-level storage virtualization framework, Violin, with support for metadata-intensive functions. Extending Violin, we build Orchestra, an extensible framework for cluster storage virtualization and scalable storage sharing at the block level. We show that Orchestra's enhanced block interface can substantially simplify the design of higher-level storage services, such as cluster filesystems, while remaining scalable. Finally, we consider the problem of consistency and availability in decentralized commodity clusters. We propose RIBD, a novel storage system that provides support for handling both data and metadata consistency issues at the block layer. RIBD uses the notion of consistency intervals (CIs) to provide fine-grained consistency semantics on sequences of block-level operations by means of a lightweight transactional mechanism. RIBD relies on Orchestra's virtualization mechanisms and uses a roll-back recovery mechanism based on low-overhead block-level versioning. We evaluate RIBD on a cluster of 24 nodes and find that it performs comparably to two popular cluster filesystems, PVFS and GFS, while offering stronger consistency guarantees.
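The block-level extension idea, each function exposing the same block interface so that functions stack into modular I/O hierarchies, can be sketched as follows. This Python fragment is illustrative only: the class names, the snapshot API, and the remap-on-write scheme are assumptions in the spirit of the versioning layer described above, not Violin's actual interface.

```python
# Stackable block layers: every layer exposes read/write on block numbers,
# so a versioning layer can sit transparently above any lower device.
class RamDisk:
    def __init__(self):
        self.blocks = {}
    def read(self, bno):
        return self.blocks.get(bno, b"\x00")
    def write(self, bno, data):
        self.blocks[bno] = data

class VersioningLayer:
    """Block-level versioning: a snapshot freezes the logical->physical map."""
    def __init__(self, lower):
        self.lower, self.map, self.snaps, self.next_phys = lower, {}, [], 0

    def write(self, bno, data):
        self.map[bno] = self.next_phys        # remap instead of overwriting
        self.lower.write(self.next_phys, data)
        self.next_phys += 1

    def read(self, bno, version=None):
        m = self.map if version is None else self.snaps[version]
        return self.lower.read(m[bno])

    def snapshot(self):
        self.snaps.append(dict(self.map))     # cheap: copy the map, not the data
        return len(self.snaps) - 1

vol = VersioningLayer(RamDisk())
vol.write(7, b"v1")
s = vol.snapshot()
vol.write(7, b"v2")
assert vol.read(7) == b"v2" and vol.read(7, version=s) == b"v1"
```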
27

The Sea of Stuff : a model to manage shared mutable data in a distributed environment

Conte, Simone Ivan January 2019 (has links)
Managing data is one of the main challenges in distributed systems and in computer science in general. Data is created, shared, and managed across heterogeneous distributed systems of users, services, applications, and devices without a clear and comprehensive data model. This technological fragmentation and lack of a common data model result in a poor understanding of what data is, how it evolves over time, how it should be managed in a distributed system, and how it should be protected and shared. From a user perspective, for example, backing up data over multiple devices is a hard and error-prone process, and synchronising data with a cloud storage service can result in conflicts and unpredictable behaviour. This thesis identifies three challenges in data management: (1) how to extend current data abstractions so that content, for example, is accessible irrespective of its location, versionable, and easy to distribute; (2) how to enable data storage that is transparent with respect to locations, users, applications, and services; and (3) how to allow data owners to protect data against malicious users and automatically control content over a distributed system. These challenges are studied in detail in relation to the current state of the art and addressed throughout the rest of the thesis. The artefact of this work is the Sea of Stuff (SOS), a generic data model of immutable, self-describing, location-independent entities that allows the construction of a distributed system where data is accessible and organised irrespective of its location, is easy to protect, and can be automatically managed according to a set of user-defined rules. The evaluation of this thesis demonstrates the viability of the SOS model for managing data in a distributed system and for using user-defined rules to automatically manage data across multiple nodes.
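The notion of immutable, self-describing, location-independent entities can be sketched with content addressing: an entity's identifier is the hash of its bytes, so any node holding the bytes can serve them, and change is represented by version records rather than overwrites. The Python fragment below is a hedged illustration; the field names and the SHA-256 choice are assumptions for the example, not the SOS specification.

```python
# Content-addressed immutable entities: identical bytes always yield the
# same GUID, and a "version" record points at content plus prior versions.
import hashlib, json

def guid(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

store = {}                                    # guid -> bytes, on any node

def add_atom(data: bytes) -> str:
    g = guid(data)
    store[g] = data                           # immutable: same data, same guid
    return g

def add_version(content_guid, previous=()):
    record = {"type": "version", "content": content_guid,
              "previous": list(previous)}
    return add_atom(json.dumps(record, sort_keys=True).encode())

v1 = add_version(add_atom(b"draft"))
v2 = add_version(add_atom(b"final"), previous=[v1])
assert json.loads(store[v2])["previous"] == [v1]
```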
29

Performance Specific I/O Scheduling Framework for Cloud Storage

Jain, Nitisha January 2015 (has links) (PDF)
Virtualization is one of the important enabling technologies for Cloud Computing, facilitating the sharing of resources among virtual machines. However, it incurs performance overheads due to contention for physical devices such as disk and network bandwidth. Various I/O applications with different latency requirements may execute concurrently on different virtual machines provisioned on a single server in Cloud data-centers, so it is pertinent that the performance SLAs of such applications are satisfied through intelligent scheduling and allocation of disk resources. The underlying disk scheduler at the server, being oblivious to the characteristics of these applications, is unable to distinguish between their requests; by default, all applications receive best-effort service, which may lead to performance degradation for latency-sensitive applications. In this work, we propose a novel disk scheduling framework, PriDyn (Dynamic Priority), which provides differentiated services to the I/O applications co-located on a single host based on their latency attributes and desired performance. The framework employs a scheduling algorithm that dynamically computes latency estimates for all concurrent I/O applications for a given system state. Based on these, an appropriate priority assignment for the applications is determined, which is taken into consideration by the underlying disk scheduler at the host while scheduling the I/O applications on the physical disk. The proposed scheduling framework is able to satisfy QoS requirements for the concurrent I/O applications within system constraints, as verified through extensive experimental analysis. In order to realize the benefits of the differentiated services provided by the PriDyn scheduler, a proper combination of I/O applications must be ensured on the servers through intelligent meta-scheduling techniques at the Cloud data-center level. To achieve this, in the second part of this work we extended the PriDyn framework with a proactive admission control and scheduling framework, PCOS (Prescient Cloud I/O Scheduler). PCOS aims to maximize the utilization of disk resources without adversely affecting the performance of the applications scheduled on the systems. By anticipating the performance of systems running multiple I/O applications, PCOS prevents the scheduling of undesirable workloads on them in order to maintain the necessary balance between resource consolidation and application performance guarantees. The PCOS framework includes the PriDyn scheduler as an important component and utilizes its dynamic disk resource allocation capabilities to meet its goals. Experimental validation on real-world I/O traces demonstrates that the proposed framework achieves appreciable enhancements in I/O performance through the selection of optimal I/O workload combinations, indicating that this approach is a promising step towards enabling QoS guarantees for Cloud data-centers.
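The scheduling idea, estimating each application's completion latency and prioritizing by slack against its SLA, can be sketched briefly. The Python fragment below is an illustration under assumed numbers and a simple pending-bytes/throughput latency estimate; it is not PriDyn's actual algorithm.

```python
# Deadline-slack priority assignment: estimate each app's completion latency
# from its pending I/O and the disk's service rate, then serve the app with
# the least slack against its latency SLA first.
def assign_priorities(apps, disk_mbps):
    """apps: dicts with pending_mb and sla_ms; returns most-urgent-first."""
    for a in apps:
        a["est_ms"] = a["pending_mb"] / disk_mbps * 1000.0
        a["slack_ms"] = a["sla_ms"] - a["est_ms"]
    return sorted(apps, key=lambda a: a["slack_ms"])   # least slack = top priority

apps = [{"name": "backup", "pending_mb": 200, "sla_ms": 60000},
        {"name": "web-db", "pending_mb": 10,  "sla_ms": 50},
        {"name": "stream", "pending_mb": 40,  "sla_ms": 400}]
for a in assign_priorities(apps, disk_mbps=120):
    print(a["name"], round(a["est_ms"], 1), round(a["slack_ms"], 1))
# web-db and stream outrank the latency-insensitive backup job
```

A meta-scheduler in the spirit of PCOS could then reuse the same estimates in reverse: reject a candidate workload for a host if admitting it would drive any co-located application's slack negative.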
30

Techno-economic Potential of Customer Flexibility : A Case Study

Bouraleh, Maryan January 2020 (has links)
District heating plays a major role in the Swedish energy system. It is deemed a renewable energy source and is the main heat provider for multi-family dwellings, supplying 90% of them. Although the district heating fuel mix consists mostly of renewables, a share of 5% is still provided by fossil fuels. To reduce fossil fuel usage and eradicate CO2 emissions from the district heating system, new solutions are sought. In this project, the potential for short-term thermal energy storage in buildings is investigated. This concept is referred to as customer flexibility. Demand flexibility is created in the district heating system (DHS) by varying the indoor temperature in 50 multi-family dwellings by at most 1 °C, without jeopardizing the thermal comfort of the tenants. The flexible load makes it possible to store energy short-term in the building envelope. Consequently, heat load curves are evened out in production, which reduces the peak load in the DHS. Peaks are associated with high costs and environmental impact; the potential benefits of customer flexibility are therefore reduced peak production, fuel costs, and CO2 emissions, depending on the fuel mix in the DHS. The project objective is to examine the techno-economic potential of customer flexibility in a specific DHS. The case study concerns a DHS owned by the company Vattenfall, located in the Stockholm area. To evaluate the potential benefits of implementing the concept, seven key performance indicators are chosen: peak power, peak fuel usage, produced volume, total fuel cost, fuel cost per produced MWh, climate footprint, and primary energy. Moreover, an in-house optimization model is used to simulate multiple scenarios of the DHS, under different sets of assumptions about the available flexibility in the DHS and the thermal characteristics of the buildings. Customer flexibility is modeled as a virtual heat storage that can be charged or discharged depending on the speed and size of the available storage at a specific outdoor temperature. Simulation results give a maximum peak power reduction of 10.9% and an annual fuel cost reduction between 0.9% and 3.6%, depending on the scenario. These results are comparable to values found in similar studies. However, the environmental key performance indicators show an increase in CO2 emissions and primary energy compared to the baseline scenarios; the result would have been different if fossil fuels, rather than biofuels, were used for peak production. The thesis also aimed to validate the assumptions and parameters in the input data to the optimization model, using results from a pilot in the specific DHS. The results generated from the simulations are therefore deemed accurate and confirm that customer flexibility leads to reduced peak production and DHS optimization. / See the file
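The virtual heat storage model lends itself to a short sketch. The Python fragment below illustrates peak shaving with a charge/discharge-limited store standing in for the building mass; the capacity, rate, and target values are invented for the example and do not come from the thesis or its optimization model.

```python
# Virtual heat storage as peak shaving: the building mass "charges" (slight
# over-heating) in off-peak hours and "discharges" at the peak, clipping the
# production curve while respecting capacity and rate limits that stand in
# for the at-most-1-degree indoor temperature band.
def shave_peaks(demand_mw, capacity_mwh, rate_mw, target_mw):
    level, production = 0.0, []
    for d in demand_mw:                       # one value per hour
        if d > target_mw and level > 0:       # discharge stored heat at the peak
            use = min(d - target_mw, rate_mw, level)
            level -= use
            production.append(d - use)
        elif d < target_mw and level < capacity_mwh:
            put = min(target_mw - d, rate_mw, capacity_mwh - level)
            level += put                      # pre-heat: charge the building mass
            production.append(d + put)
        else:
            production.append(d)
    return production

demand = [30, 32, 45, 60, 52, 38, 30]
prod = shave_peaks(demand, capacity_mwh=12, rate_mw=6, target_mw=50)
print(max(demand), max(prod))                 # peak drops from 60 to 54
```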
