  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Exploring atomicity on memory mapped files based on non-volatile memory file systems

Puglia, Gianlucca Oliveira 21 March 2017 (has links)
Upcoming non-volatile memory (NVM) technologies are a big promise in computer architecture and are expected to be powerful tools to address today's issues regarding efficient data manipulation. They provide high performance and byte granularity while also having the distinct advantage of being persistent. However, in order to explore these technologies to their full potential, existing systems and architectures must adapt to this new way of working with data and work around the challenges that come with it. Existing work in the area already proposes methods to adapt current architectures to NVM as well as innovative ways to employ these memories in future applications. However, operating system support for such NVM-enabled solutions, although it exists, is still very limited. In this work, we present two variations of the existing mmap system call, designed to both explore NVM characteristics and provide user data consistency. Both are simple solutions that allow users to control persistence and define checkpoints for their files while using the common memory-mapped file syntax. We have implemented and tested these methods on Linux using an NVM file system as our base. Our results show that these mechanisms can ensure file integrity in the presence of system failures while also providing reasonable performance.
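For context, the sketch below shows the conventional POSIX mmap/msync pattern that such checkpointing variations would extend. The file path, mapping size, and update are purely illustrative, and the code makes no assumptions about the thesis's actual system call interface.

```c
/*
 * Minimal sketch of the conventional POSIX mapped-file pattern that the
 * proposed checkpointing variations build upon. The path and size are
 * placeholders; error handling is reduced to early exits.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define FILE_SIZE 4096

int main(void)
{
    /* Open (or create) a file on the NVM-backed file system. */
    int fd = open("/mnt/nvm/data.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }
    if (ftruncate(fd, FILE_SIZE) != 0) { perror("ftruncate"); return EXIT_FAILURE; }

    /* Map the file so it can be modified with plain loads and stores. */
    char *map = mmap(NULL, FILE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

    /* Update the mapped region in place. */
    strcpy(map, "record v2");

    /*
     * Conventional msync only guarantees that dirty pages reach the backing
     * file; it does not make the update atomic with respect to a crash that
     * interrupts the write-back. That gap is what the checkpointing
     * variations described in the abstract aim to close.
     */
    if (msync(map, FILE_SIZE, MS_SYNC) != 0) { perror("msync"); return EXIT_FAILURE; }

    munmap(map, FILE_SIZE);
    close(fd);
    return EXIT_SUCCESS;
}
```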
12

Architectural Principles for Database Systems on Storage-Class Memory

Oukid, Ismail 23 January 2018 (has links) (PDF)
Database systems have long been optimized to hide the higher latency of storage media, yielding complex persistence mechanisms. With the advent of large DRAM capacities, it became possible to keep a full copy of the data in DRAM. Systems that leverage this possibility, such as main-memory databases, keep two copies of the data in two different formats: one in main memory and the other one in storage. The two copies are kept synchronized using snapshotting and logging. This main-memory-centric architecture yields nearly two orders of magnitude faster analytical processing than traditional, disk-centric ones. The rise of Big Data emphasized the importance of such systems with an ever-increasing need for more main memory. However, DRAM is hitting its scalability limits: It is intrinsically hard to further increase its density. Storage-Class Memory (SCM) is a group of novel memory technologies that promise to alleviate DRAM’s scalability limits. They combine the non-volatility, density, and economic characteristics of storage media with the byte-addressability and a latency close to that of DRAM. Therefore, SCM can serve as persistent main memory, thereby bridging the gap between main memory and storage. In this dissertation, we explore the impact of SCM as persistent main memory on database systems. Assuming a hybrid SCM-DRAM hardware architecture, we propose a novel software architecture for database systems that places primary data in SCM and directly operates on it, eliminating the need for explicit IO. This architecture yields many benefits: First, it obviates the need to reload data from storage to main memory during recovery, as data is discovered and accessed directly in SCM. Second, it allows replacing the traditional logging infrastructure by fine-grained, cheap micro-logging at data-structure level. Third, secondary data can be stored in DRAM and reconstructed during recovery. Fourth, system runtime information can be stored in SCM to improve recovery time. Finally, the system may retain and continue in-flight transactions in case of system failures. However, SCM is no panacea as it raises unprecedented programming challenges. Given its byte-addressability and low latency, processors can access, read, modify, and persist data in SCM using load/store instructions at a CPU cache line granularity. The path from CPU registers to SCM is long and mostly volatile, including store buffers and CPU caches, leaving the programmer with little control over when data is persisted. Therefore, there is a need to enforce the order and durability of SCM writes using persistence primitives, such as cache line flushing instructions. This in turn creates new failure scenarios, such as missing or misplaced persistence primitives. We devise several building blocks to overcome these challenges. First, we identify the programming challenges of SCM and present a sound programming model that solves them. Then, we tackle memory management, as the first required building block to build a database system, by designing a highly scalable SCM allocator, named PAllocator, that fulfills the versatile needs of database systems. Thereafter, we propose the FPTree, a highly scalable hybrid SCM-DRAM persistent B+-Tree that bridges the gap between the performance of transient and persistent B+-Trees. Using these building blocks, we realize our envisioned database architecture in SOFORT, a hybrid SCM-DRAM columnar transactional engine. 
We propose an SCM-optimized MVCC scheme that eliminates write-ahead logging from the critical path of transactions. Since SCM-resident data is near-instantly available upon recovery, the new recovery bottleneck is rebuilding DRAM-based data. To alleviate this bottleneck, we propose a novel recovery technique that achieves nearly instant responsiveness of the database by accepting queries right after recovering SCM-based data, while rebuilding DRAM-based data in the background. Additionally, SCM brings new failure scenarios that existing testing tools cannot detect. Hence, we propose an online testing framework that is able to automatically simulate power failures and detect missing or misplaced persistence primitives. Finally, our proposed building blocks can serve to build more complex systems, paving the way for future database systems on SCM.
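To make the persistence-primitive problem concrete, here is a minimal, hedged C sketch of the store/flush/fence ordering the abstract refers to, written with x86 intrinsics. The record layout and helper names are illustrative only and are not SOFORT's or PAllocator's actual code.

```c
/*
 * Illustrative persistence-primitive pattern for SCM, using x86 intrinsics
 * (compile with e.g. -mclflushopt). Not taken from the dissertation's code.
 */
#include <immintrin.h>
#include <stdint.h>
#include <stddef.h>

#define CACHE_LINE 64

/* Flush every cache line covering [addr, addr+len) and order the flushes
 * before any subsequent stores. */
static void persist(const void *addr, size_t len)
{
    uintptr_t p = (uintptr_t)addr & ~(uintptr_t)(CACHE_LINE - 1);
    for (; p < (uintptr_t)addr + len; p += CACHE_LINE)
        _mm_clflushopt((void *)p);   /* or _mm_clwb on newer CPUs */
    _mm_sfence();                    /* ordering point for later stores */
}

struct record {
    uint64_t payload;
    uint64_t valid;   /* commit flag, written last */
};

/* Crash-consistent update: persist the payload before the commit flag, so a
 * power failure can never expose a set flag pointing at stale data. */
void write_record(struct record *r /* assumed to reside in SCM */, uint64_t value)
{
    r->payload = value;
    persist(&r->payload, sizeof r->payload);

    r->valid = 1;
    persist(&r->valid, sizeof r->valid);
}
```

Omitting either `persist()` call is an instance of the "missing or misplaced persistence primitives" failure scenario that the proposed online testing framework is designed to detect.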
14

Nanoscale resistive switching memory devices: a review

Slesazeck, Stefan, Mikolajick, Thomas 10 November 2022 (has links)
In this review the different concepts of nanoscale resistive switching memory devices are described and classified according to their I–V behaviour and the underlying physical switching mechanisms. Using the most important representative devices, the current state of electrical performance characteristics is examined in depth. Moreover, the ability of resistive switching devices to be integrated into state-of-the-art CMOS circuits, in combination with a suitable selector device for memory array operation, is assessed. From this analysis, and by factoring in the maturity of the different concepts, a ranking methodology for the application of nanoscale resistive switching memory devices in the memory landscape is derived. Finally, the suitability of the different device concepts for applications beyond pure memory, such as brain-inspired and neuromorphic computing or logic-in-memory applications that strive to overcome the von Neumann bottleneck, is discussed.
15

Dynamické modely výrobních modulů / Dynamic Models of Power-Generating Modules

Kopička, Marek January 2021 (has links)
The dissertation deals with the design and implementation of models of electrical power system elements, with regard to the potential of computer simulations. The thesis focuses on compliance simulations of power-generating modules according to the RfG (Requirements for Generators), the document setting out the requirements for the connection of power-generating facilities, and also on the issues of smart grids and MAS (Multi-Agent Systems). The scope of the thesis is thus defined by the requirements placed on power-generating modules by legislation (not only the RfG, but also related standards including the DSC (Distribution System Code), CSN EN 50438, and CSN EN 50549), by the required agent functionalities, and by the ability of power-generating modules to operate in both synchronous and island operation, including the transitions between them, the process of synchronization (phasing), and communication between the individual elements of the power system.
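As a rough illustration of the synchronization (phasing) step mentioned above, the following C sketch performs a simple synchro-check before an islanded module is reconnected to the grid. The tolerance values are placeholders for illustration and are not taken from the RfG or the cited standards.

```c
/*
 * Illustrative synchro-check before closing the coupling breaker between an
 * islanded power-generating module and the grid. Limit values are
 * placeholders; real limits come from the applicable grid code and relay
 * settings.
 */
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

struct bus_state {
    double voltage_pu;   /* voltage magnitude, per unit */
    double freq_hz;      /* frequency in Hz */
    double phase_deg;    /* voltage phase angle in degrees */
};

/* Return true when both buses are close enough to permit breaker closing. */
static bool synchro_check(struct bus_state grid, struct bus_state island)
{
    const double max_dv_pu    = 0.10;  /* placeholder voltage tolerance */
    const double max_df_hz    = 0.20;  /* placeholder frequency tolerance */
    const double max_dphi_deg = 10.0;  /* placeholder phase-angle tolerance */

    double dphi = fabs(grid.phase_deg - island.phase_deg);
    if (dphi > 180.0)                  /* wrap the angle difference */
        dphi = 360.0 - dphi;

    return fabs(grid.voltage_pu - island.voltage_pu) <= max_dv_pu &&
           fabs(grid.freq_hz - island.freq_hz)       <= max_df_hz &&
           dphi                                      <= max_dphi_deg;
}

int main(void)
{
    struct bus_state grid   = { 1.00, 50.00, 12.0 };
    struct bus_state island = { 0.98, 50.05, 17.5 };

    printf("breaker close permitted: %s\n",
           synchro_check(grid, island) ? "yes" : "no");
    return 0;
}
```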
