11

Storage System Management Using Reinforcement Learning Techniques and Nonlinear Models

Mahootchi, Masoud January 2009
This thesis investigates modeling and optimization for storage management under stochastic conditions using two methodologies: Simulation Optimization Techniques (SOT), which are usually categorized under Reinforcement Learning (RL), and Nonlinear Modeling Techniques (NMT). In the first set of methods, simulation plays a fundamental role in evaluating the control policy: learning techniques deliver sub-optimal policies at the end of a learning process. These iterative methods rely on agents interacting with the stochastic environment by taking actions and observing different states. To converge to the steady-state condition, where policies and value functions no longer change significantly as learning continues, all states, or at least the most important ones, must be visited sufficiently often. This can be prohibitively time-consuming for large-scale problems. To make these techniques more efficient in computation time and more robust in the policies they produce, the idea of Opposition-Based Learning (OBL, Type I and Type II) is employed to modify and extend popular RL techniques including Q-Learning, Q(λ), SARSA, and SARSA(λ). Several new algorithms are developed using this idea. It is also shown that function approximation techniques such as neural networks can contribute to the learning process. State-of-the-art implementations usually maximize the expected value of accumulated reward; extending these techniques to account for risk and solving some well-known control problems are important contributions of this thesis. Furthermore, the nonlinear model for reservoir management based on indicator functions and randomized policies, introduced by Fletcher and Ponnambalam, is extended to stochastic releases in multi-reservoir systems. In this extension, two different approaches for defining the release policies are proposed. In addition, the restriction of assuming a normal distribution for inflows is relaxed by using a beta-equivalent general distribution. A five-reservoir case study from India demonstrates the benefits of these developments. Using a warehouse management problem as an example, the application of the proposed methods to other storage management problems is outlined.
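
For readers unfamiliar with opposition-based learning, the sketch below applies the idea to plain tabular Q-learning on a toy single-reservoir problem: each real update is paired with an update for the "opposite" release action. The environment dynamics, reward, and opposite-reward estimate are illustrative assumptions only; the thesis develops full OBL variants of Q(λ) and SARSA(λ), risk-aware objectives, and function approximation that are not shown here.

```python
import numpy as np

# Illustrative opposition-based Q-learning (OBL Type I) on a toy single-reservoir
# problem. Dynamics, reward, and the opposite-reward estimate are assumptions.

N_LEVELS, N_ACTIONS = 20, 5            # discretized storage levels / release actions
alpha, gamma, eps = 0.1, 0.95, 0.1
Q = np.zeros((N_LEVELS, N_ACTIONS))
rng = np.random.default_rng(0)

def step(level, release):
    """Toy dynamics: random inflow in {0..3}; reward penalizes missing a demand of 2."""
    inflow = int(rng.integers(0, 4))
    next_level = int(np.clip(level - release + inflow, 0, N_LEVELS - 1))
    return next_level, -abs(release - 2)

level = N_LEVELS // 2
for _ in range(50_000):
    a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(Q[level].argmax())
    nxt, r = step(level, a)
    Q[level, a] += alpha * (r + gamma * Q[nxt].max() - Q[level, a])

    # Opposition-based extra update: also learn about the "opposite" release action,
    # using an analytically estimated reward and a typical inflow of 1.
    a_opp = (N_ACTIONS - 1) - a
    nxt_opp = int(np.clip(level - a_opp + 1, 0, N_LEVELS - 1))
    r_opp = -abs(a_opp - 2)
    Q[level, a_opp] += alpha * (r_opp + gamma * Q[nxt_opp].max() - Q[level, a_opp])

    level = nxt

policy = Q.argmax(axis=1)              # greedy release per storage level
```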
12

Storage Management for Embedded SIMD Processors

Ryu, Soojung 17 December 2003
SIMD parallelism offers a high-performance, efficient execution approach for today's broad range of portable multimedia consumer products. However, new methods are needed to meet the complex demands of high-performance embedded systems. This research explores new storage management techniques for this focused but critical application domain: memory design exploration based on automatic application retargeting, storage-based systolic instruction broadcast, and systolic virtual memory, all aimed at improving both the performance and efficiency of embedded SIMD systems. To support memory design-space exploration, an analysis method was developed that assesses the storage needs and costs of an application automatically retargeted across a spectrum of storage configurations. Using this technique, a SIMD processing element achieves optimal area and energy efficiency with a register file of 8 to 12 words for the given workload; this configuration is 15% to 25% more area- and energy-efficient than the other memory configurations considered. Systolic instruction broadcast is a high-performance, area-efficient instruction broadcasting scheme that uses short-wire interconnects to eliminate the wire latency bottleneck of global instruction broadcast. Three implementation methods are defined and evaluated: a software method, a 2-write-port register file method, and a bypass method. In our evaluations, thanks to the short clock cycle time and the scheduler, a system performance speedup of up to 7.5 can be achieved by the year 2010, along with an area-efficiency improvement of up to 7.2 for a given workload. Finally, the ability of the systolic virtual memory mechanism to minimize off-chip memory access latency while maximizing access frequency, using scheduling together with data prefetching, was evaluated with our SIMD-systolic architecture simulator. Results show that systolic virtual off-chip memory with a shared address space achieves over 50% higher area efficiency than an on-chip-only system for a matrix multiplication application.
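
The memory design-space exploration described above amounts to sweeping register-file sizes and scoring each configuration for a workload. The sketch below shows the shape of such a sweep; the spill model and the area and energy coefficients are placeholders, not the calibrated models used in this research.

```python
# Shape of a register-file design-space sweep. The cost model below uses
# hypothetical coefficients purely for illustration.

def spill_traffic(rf_words, live_values=14):
    """Hypothetical extra memory accesses caused by spills when the RF is too small."""
    return max(0, live_values - rf_words) * 100

def area(rf_words):
    return 50 + 12 * rf_words                  # placeholder area units per PE

def energy(rf_words, base_accesses=1_000):
    per_access = 1.0 + 0.05 * rf_words         # larger RF -> costlier access
    return base_accesses * per_access + spill_traffic(rf_words) * 10

# Pick the configuration minimizing the area-energy product for this workload.
best = min(range(4, 33), key=lambda n: area(n) * energy(n))
print("most area/energy-efficient RF size:", best, "words")
```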
13

Teile und herrsche - Logical Volume Management unter Linux

Schreiber, Alexander 17 October 2001
Logical Volume Management, in use on commercial UNIX systems (e.g. AIX, HP-UX) for many years, is now also available for Linux. The talk gives an introduction to LVM under Linux and an overview of current LVM technology on Linux and its use.
14

Storage Management of Data-intensive Computing Systems

Xu, Yiqi 18 March 2016
Computing systems are becoming increasingly data-intensive because of the explosion of data and the need to process it, and storage management is critical to application performance in such data-intensive computing systems. However, existing resource management frameworks in these systems lack support for storage management, which causes unpredictable performance degradation when applications are under I/O contention. Storage management for data-intensive systems is a challenging problem because I/O resources cannot be easily partitioned and distributed storage systems require scalable management. This dissertation presents solutions to these challenges for typical data-intensive systems, including high-performance computing (HPC) systems and big-data systems. For HPC systems, the dissertation presents vPFS, a performance virtualization layer for parallel file system (PFS) based storage. It employs user-level PFS proxies to interpose and schedule parallel I/Os on a per-application basis. On this framework it builds SFQ(D)+, a new proportional-share scheduling algorithm that supports diverse applications with good performance isolation and resource utilization. To manage an HPC system's total I/O service, it also provides two complementary synchronization schemes that coordinate the scheduling of large numbers of storage nodes in a scalable manner. For big-data systems, the dissertation presents IBIS, an interposition-based big-data I/O scheduler. By interposing on the different I/O phases of big-data applications, it schedules I/Os transparently to the applications. It introduces a new proportional-share scheduling algorithm, SFQ(D2), which addresses the dynamics of the underlying storage by adaptively adjusting I/O concurrency. Moreover, it employs a scalable broker to coordinate the distributed I/O schedulers and provide proportional sharing of a big-data system's total I/O service. Experimental evaluations show that these solutions incur low overhead and provide strong I/O performance isolation. For example, vPFS's overhead is less than 3% in throughput and it delivers proportional sharing within 96% of the target for diverse workloads, and IBIS provides up to 99% better performance isolation for WordCount and 30% better proportional slowdown for TeraSort and TeraGen than native YARN.
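
As a rough illustration of the proportional-share idea behind SFQ(D)+ and SFQ(D2), the sketch below implements plain SFQ(D): requests are tagged with start and finish times scaled by per-application weights and dispatched in start-tag order with at most D outstanding. The class and parameter names are assumptions for illustration; the dissertation's algorithms add the cross-node coordination and concurrency adaptation described above.

```python
import heapq
from collections import defaultdict

# Minimal SFQ(D) sketch: start-time fair queueing with dispatch depth D.

class SFQD:
    def __init__(self, weights, depth=4):
        self.w = weights                      # per-application share weights
        self.depth = depth
        self.vtime = 0.0                      # virtual time
        self.last_finish = defaultdict(float)
        self.queue = []                       # (start_tag, seq, app, cost)
        self.in_service = 0
        self.seq = 0

    def submit(self, app, cost):
        start = max(self.vtime, self.last_finish[app])
        self.last_finish[app] = start + cost / self.w[app]   # finish tag
        heapq.heappush(self.queue, (start, self.seq, app, cost))
        self.seq += 1

    def dispatch(self):
        """Issue requests in start-tag order while fewer than D are outstanding."""
        issued = []
        while self.queue and self.in_service < self.depth:
            start, _, app, cost = heapq.heappop(self.queue)
            self.vtime = start                # advance virtual time
            self.in_service += 1
            issued.append((app, cost))
        return issued

    def complete(self):
        self.in_service -= 1

sched = SFQD(weights={"appA": 3, "appB": 1}, depth=2)
for _ in range(4):
    sched.submit("appA", cost=10)
    sched.submit("appB", cost=10)
print(sched.dispatch())   # over time, appA receives ~3x the service of appB
```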
15

Evaluation of unidirectional background push content download services for the delivery of television programs

Fraile Gil, Francisco 02 September 2013
This dissertation presents background push Content Download Services as an efficient mechanism to deliver pre-produced television content through existing broadcast networks. Nowadays, network operators dedicate a considerable amount of network resources to live streaming of television content, through both broadcast and unicast connections. This service offering responds solely to commercial requirements: content must be available anytime and anywhere. However, from a strictly academic point of view, live streaming is only a requirement for live content, not for pre-produced content. Moreover, broadcasting is only efficient when the content is sufficiently popular. The services under study in this thesis use residual capacity in broadcast networks to push popular, pre-produced content to storage capacity in customer premises equipment. The proposal responds only to efficiency requirements: on the one hand, it creates value from network resources that would otherwise go unused; on the other, it delivers popular pre-produced content in the most efficient way, through broadcast download services. The results include models for the popularity and the duration of television content, valuable for any research dealing with file-based delivery of television content. The thesis then evaluates the residual capacity available in broadcast networks through empirical studies. These results are used in simulations that evaluate the performance of background push content download services in different scenarios and for different applications. The evaluation shows that this kind of service can become a great asset for the delivery of television content. / Fraile Gil, F. (2013). Evaluation of unidirectional background push content download services for the delivery of television programs [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31656
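
The core trade-off studied here can be illustrated with a small planning sketch: given residual broadcast capacity, push the pre-produced items whose expected audience saves the most unicast traffic. The catalog, sizes, and viewer counts below are made-up placeholders, not results from the thesis.

```python
# Toy push plan over residual broadcast capacity (illustrative numbers only).

def plan_push(items, residual_capacity_gb):
    """items: (name, size_gb, expected_viewers) tuples. Pushing an item costs
    size_gb of residual capacity and saves roughly size_gb * expected_viewers
    of unicast traffic, so rank greedily by viewers (savings per GB used)."""
    plan, used = [], 0.0
    for name, size, viewers in sorted(items, key=lambda it: it[2], reverse=True):
        if used + size <= residual_capacity_gb:
            plan.append(name)
            used += size
    return plan, used

catalog = [("series_ep1", 1.5, 40_000), ("movie", 4.0, 25_000),
           ("news_recap", 0.8, 12_000), ("documentary", 2.0, 5_000)]
print(plan_push(catalog, residual_capacity_gb=5.0))
# -> (['series_ep1', 'news_recap', 'documentary'], 4.3): the 4.0 GB movie no
#    longer fits after series_ep1, so smaller popular items fill the slot.
```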
17

Exploiting Heterogeneity in Distributed Software Frameworks

Kumaraswamy Ravindranathan, Krishnaraj 08 January 2016
The objective of this thesis is to address the challenges of sustaining efficient, high-performance, and scalable Distributed Software Frameworks (DSFs), such as MapReduce, Hadoop, Dryad, and Pregel, that support data-intensive scientific and enterprise applications on emerging heterogeneous compute, storage, and network infrastructure. Large DSF deployments in the cloud continue to grow both in size and in number, since DSFs are cost-effective and easy to deploy. DSFs are becoming heterogeneous through the use of advanced hardware technologies and regular system upgrades. For instance, low-cost, power-efficient clusters that combine traditional servers with specialized resources such as FPGAs, GPUs, PowerPC-, MIPS-, and ARM-based embedded devices, and high-end server-on-chip solutions will drive future DSF infrastructure. Similarly, high-throughput DSF storage is trending towards hybrid and tiered approaches that use large in-memory buffers, SSDs, and other devices in addition to disks. However, the schedulers and resource managers of these DSFs assume the underlying hardware to be homogeneous. Another problem faced by evolving applications is that they are typically complex workflows comprising different kernels. The kernels can be diverse, e.g., compute-intensive processing followed by data-intensive visualization, and each kernel has a different affinity for different hardware. Because DSFs cannot account for the heterogeneity of the underlying hardware and of the applications, existing resource managers cannot ensure an appropriate resource-application match for better performance and resource usage. In this dissertation, we design, implement, and evaluate DerbyhatS, an application-characteristics-aware resource manager for DSFs, which predicts the performance of an application under different hardware configurations and dynamically manages compute and storage resources according to the application's needs. We adopt a quantitative approach: we first study the detailed behavior of various Hadoop applications running on different hardware configurations and then propose application-attuned dynamic system management to improve the resource-application match. We redesign the Hadoop Distributed File System (HDFS) into a multi-tiered storage system that seamlessly integrates heterogeneous storage technologies into HDFS. We also propose data placement and retrieval policies that improve the utilization of the storage devices based on their characteristics, such as I/O throughput and capacity. The DerbyhatS workflow scheduler is application-attuned and consists of two components: phi-Sched coupled with epsilon-Sched manages compute heterogeneity, while DUX coupled with AptStore manages the storage substrate to exploit heterogeneity. DerbyhatS helps realize the full potential of emerging DSF infrastructure, e.g., cloud data centers, offering many advantages over the state of the art by ensuring application-attuned, dynamic, heterogeneous resource management.
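
As an illustration of the kind of tier-aware placement policy proposed for the multi-tiered HDFS redesign, the sketch below routes blocks to the fastest tier whose "heat" threshold and remaining capacity allow it. Tier names, capacities, and thresholds are assumptions; the dissertation's policies (e.g., in DUX and AptStore) are driven by measured device characteristics rather than these fixed numbers.

```python
# Illustrative tier-aware block placement; tiers and thresholds are assumptions.

TIERS = [  # ordered fastest -> slowest
    {"name": "memory", "free_gb": 64,   "min_heat": 100.0},
    {"name": "ssd",    "free_gb": 512,  "min_heat": 10.0},
    {"name": "disk",   "free_gb": 8192, "min_heat": 0.0},
]

def place_block(block_size_gb, heat):
    """heat: observed accesses/hour for the block's file. Pick the fastest tier
    whose heat threshold is met and that still has room; fall back to slower tiers."""
    for tier in TIERS:
        if heat >= tier["min_heat"] and tier["free_gb"] >= block_size_gb:
            tier["free_gb"] -= block_size_gb
            return tier["name"]
    return "disk"   # last resort

print(place_block(0.128, heat=250))   # -> "memory" while capacity lasts
print(place_block(0.128, heat=2))     # -> "disk"
```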
18

Studies In Automatic Management Of Storage Systems

Pipada, Pankaj 06 1900
Autonomic management is important in storage systems, and the space of autonomics in storage systems is vast. Such autonomic management systems can employ a variety of techniques depending on the specific problem. In this thesis, we first take an algorithmic approach towards reliability enhancement and then use learning within a reactive framework to facilitate storage optimization for applications. We study how the reliability of non-repairable systems can be improved through automatic reconfiguration of their XOR-coded structure. To this end, we propose to increase the fault tolerance of non-repairable systems by reorganizing the system, after a failure is detected, into a new XOR code with better fault tolerance. Because errors can manifest during reorganization due to whole reads of multiple submodules, our framework takes them into account and models such errors as a function of access intensity (i.e., BER, bit error rate). We present and evaluate the reliability of an example storage system with and without reorganization. Motivated by the critical need to automate various aspects of data management in virtualized data centers, we study the specific problem of automatically performing Virtual Machine (VM) migration in a dynamic environment according to pre-set policies. This requires automated, non-intrusive identification of the various workloads and their execution environments running inside virtual machines. To this end, we propose AuM (Autonomous Manager), which learns workloads by aggregating a variety of information obtained from network traces of storage protocols. We use state-of-the-art machine learning tools, namely Multiple Kernel Learning, to aggregate this information and show that AuM is very accurate in identifying workloads and their execution environments, and that it follows user-set policies closely for VM migration tasks. Storage infrastructure in large-scale cloud data center environments must support applications with diverse, time-varying data access patterns while observing quality of service. To meet service-level requirements across such heterogeneous application phases, storage management needs to be phase-aware and adaptive, i.e., identify specific storage access patterns of applications as they occur and customize their handling accordingly. We build LoadIQ, an online application phase detector for networked (file and block) storage systems. In a live deployment, LoadIQ analyzes traces and emits phase labels learnt online. Such labels can be used to generate alerts or to trigger phase-specific system tuning.
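
To make the phase-detection idea behind LoadIQ concrete, the sketch below summarizes each window of I/O requests into a small feature vector and assigns it to the nearest known phase centroid, creating a new phase label when nothing is close enough. The features, normalization, and distance threshold are illustrative assumptions; the actual system learns from much richer storage-protocol traces.

```python
import numpy as np

# Toy online phase labeling: features and threshold are illustrative assumptions.

THRESHOLD = 0.6
centroids = {}            # phase label -> feature centroid

def features(window):
    """window: non-empty list of (op, offset, size) tuples for one time slice."""
    sizes = np.array([s for _, _, s in window], dtype=float)
    offsets = np.array([o for _, o, _ in window], dtype=float)
    read_ratio = sum(1 for op, _, _ in window if op == "read") / len(window)
    # Fraction of adjacent (sorted) offsets closer than the mean request size ~ sequentiality.
    seq = float(np.mean(np.diff(np.sort(offsets)) <= sizes.mean())) if len(window) > 1 else 0.0
    return np.array([read_ratio, np.log1p(sizes.mean()) / 20.0, seq])  # crude 0..1 scaling

def label(window):
    f = features(window)
    if centroids:
        best = min(centroids, key=lambda k: np.linalg.norm(centroids[k] - f))
        if np.linalg.norm(centroids[best] - f) < THRESHOLD:
            centroids[best] = 0.9 * centroids[best] + 0.1 * f   # drift with the phase
            return best
    new = f"phase-{len(centroids)}"
    centroids[new] = f
    return new
```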
19

Profile storage management with a vertical storage lift: a study to create a user-friendly storage at MaxiDoor AB

Björtoft, David, Eneqvist, Adam January 2020
MaxiDoor AB designs and manufactures steel doors and steel segments to specific customer orders and specializes in solutions where high security is requested. The company seeks to expand during the coming years and has launched an investment plan to adapt production for a higher turnover. This work intends to review and improve today's material supply of steel profiles, given that the expected turnover for the production unit ES will grow from SEK 100 million to SEK 145 million and that the company wishes to retain one-piece production. This is to be solved with the help of, among other things, a vertical storage lift. In parallel with the expected increase in production rate, the supply of materials must also be developed to meet future demand. Based on theory and an analysis of the current situation, it is found that the storage design quickly becomes complex under the existing conditions. The final recommendation of the study is to divide the storage into three sections: a vertical storage lift, a coarse buffer, and a fine buffer. This results in a user-friendly material supply that is expected to be sustainable in the long term. The vertical storage lift is intended to store orders with planned and dedicated consumption as well as other profiles and residual profiles; in this way, the lift acts as both storage and buffer. The coarse buffer and fine buffer are used according to a funnel principle, where orders are broken down into individual profiles to be cut. Finally, various aspects of the material supply and future work are discussed.
20

Leveraging Non-Volatile Memory in Modern Storage Management Architectures

Lersch, Lucas 14 May 2021
Non-volatile memory (NVM) technologies introduce a novel class of devices that combine characteristics of both storage and main memory. Like storage, NVM is not only persistent but also denser and cheaper than DRAM. Like DRAM, NVM is byte-addressable and has low access latency. In recent years, NVM has gained a lot of attention both in academia and in the data management industry, with views ranging from skepticism to over-excitement. Some critics claim that NVM is neither cheap enough to replace flash-based SSDs nor fast enough to replace DRAM, while others see it simply as a storage device. Supporters of NVM have observed that its low latency and byte-addressability require radical changes and a complete rewrite of storage management architectures. This thesis takes a moderate stance between these two views. We consider that, while NVM might not replace flash-based SSDs or DRAM in the near future, it has the potential to reduce the gap between them. Furthermore, treating NVM as a regular storage medium does not fully leverage its byte-addressability and low latency. On the other hand, completely redesigning systems to be NVM-centric is impractical, and proposals that attempt to leverage NVM to simplify storage management result in completely new architectures that face the same challenges already well understood and addressed by traditional architectures. Therefore, we take three common storage management architectures as a starting point and propose incremental changes that enable them to better leverage NVM. First, in the context of log-structured merge-trees, we investigate the impact of storing data in NVM and devise methods to enable small-granularity accesses and NVM-aware caching policies. Second, in the context of B+Trees, we propose to extend the buffer pool and describe a technique based on the concept of optimistic consistency to handle corrupted pages in NVM. Third, we employ NVM to enable larger capacity and reduced costs in an index+log key-value store, and combine it with other techniques to build a system that achieves low tail latency.
This thesis aims to describe and evaluate these techniques in order to enable storage management architectures to leverage NVM and achieve increased performance and lower costs, without major architectural changes.

Contents:
1 Introduction (1.1 Non-Volatile Memory; 1.2 Challenges; 1.3 Non-Volatile Memory & Database Systems; 1.4 Contributions and Outline)
2 Background (2.1 Non-Volatile Memory: 2.1.1 Types of NVM, 2.1.2 Access Modes, 2.1.3 Byte-addressability and Persistency, 2.1.4 Performance; 2.2 Related Work; 2.3 Case Study: Persistent Tree Structures: 2.3.1 Persistent Trees, 2.3.2 Evaluation)
3 Log-Structured Merge-Trees (3.1 LSM and NVM; 3.2 LSM Architecture: 3.2.1 LevelDB; 3.3 Persistent Memory Environment; 3.4 2Q Cache Policy for NVM; 3.5 Evaluation: 3.5.1 Write Performance, 3.5.2 Read Performance, 3.5.3 Mixed Workloads; 3.6 Additional Case Study: RocksDB: 3.6.1 Evaluation)
4 B+Trees (4.1 B+Tree and NVM: 4.1.1 Category #1: Buffer Extension, 4.1.2 Category #2: DRAM Buffered Access, 4.1.3 Category #3: Persistent Trees; 4.2 Persistent Buffer Pool with Optimistic Consistency: 4.2.1 Architecture and Assumptions, 4.2.2 Embracing Corruption; 4.3 Detecting Corruption: 4.3.1 Embracing Corruption; 4.4 Repairing Corruptions; 4.5 Performance Evaluation and Expectations: 4.5.1 Checksums Overhead, 4.5.2 Runtime and Recovery; 4.6 Discussion)
5 Index+Log Key-Value Stores (5.1 The Case for Tail Latency; 5.2 Goals and Overview; 5.3 Execution Model: 5.3.1 Reactive Systems and Actor Model, 5.3.2 Message-Passing Communication, 5.3.3 Cooperative Multitasking; 5.4 Log-Structured Storage; 5.5 Networking; 5.6 Implementation Details: 5.6.1 NVM Allocation on RStore, 5.6.2 Log-Structured Storage and Indexing, 5.6.3 Garbage Collection, 5.6.4 Logging and Recovery; 5.7 Systems Operations; 5.8 Evaluation: 5.8.1 Methodology, 5.8.2 Environment, 5.8.3 Other Systems, 5.8.4 Throughput Scalability, 5.8.5 Tail Latency, 5.8.6 Scans, 5.8.7 Memory Consumption; 5.9 Related Work)
6 Conclusion
Bibliography
A PiBench
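
The NVM-aware caching policy mentioned for the LSM part of this thesis builds on the classic 2Q scheme. The sketch below shows plain 2Q: first-touch pages enter a FIFO probation queue, and only pages re-referenced after falling into a ghost queue are promoted to the protected LRU. Queue sizes and the `load` callback are illustrative; the thesis adapts the policy to the DRAM/NVM hierarchy in ways not shown here.

```python
from collections import OrderedDict, deque

# Minimal 2Q cache sketch (sizes and the load callback are illustrative).

class TwoQCache:
    def __init__(self, main_cap=8, in_cap=4, ghost_cap=16):
        self.am = OrderedDict()                # protected LRU (hot pages)
        self.a1in = OrderedDict()              # FIFO probation queue (first-touch pages)
        self.a1out = deque(maxlen=ghost_cap)   # ghost queue: keys only, no data
        self.main_cap, self.in_cap = main_cap, in_cap

    def get(self, key, load):
        if key in self.am:                     # hot hit: refresh LRU position
            self.am.move_to_end(key)
            return self.am[key]
        if key in self.a1in:                   # probation hit: keep FIFO order
            return self.a1in[key]
        value = load(key)                      # miss: fetch from slower media
        if key in self.a1out:                  # re-reference -> promote to protected LRU
            self.am[key] = value
            if len(self.am) > self.main_cap:
                self.am.popitem(last=False)
        else:                                  # first touch -> probation queue
            self.a1in[key] = value
            if len(self.a1in) > self.in_cap:
                old, _ = self.a1in.popitem(last=False)
                self.a1out.append(old)         # remember the key in the ghost queue
        return value

cache = TwoQCache()
block = cache.get("sst-block-42", load=lambda k: f"<contents of {k}>")
```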
