About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Flash Memory Garbage Collection in Hard Real-Time Systems

Lai, Chien-An 2011 August 1900 (has links)
Due to advances in capacity, speed, and economics, NAND-based flash memory technology is increasingly integrated into all types of computing systems, ranging from enterprise servers to embedded devices. However, due to its unpredictable update behavior and time-consuming garbage collection mechanism, NAND-based flash memory is difficult to integrate into hard real-time embedded systems. In this thesis, I propose a performance model for flash memory garbage collection that can be used in conjunction with a number of different garbage collection strategies. I describe how to model the cost of reactive (lazy) garbage collection and compare it to that of more proactive schemes. I develop formulas to assess the schedulability of hard real-time periodic task sets under simplified memory consumption models. Results show that the proactive schemes achieve a larger maximum schedulable utilization than the traditional garbage collection mechanism for hard real-time systems using flash memory.
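The schedulability analysis the abstract describes can be illustrated with a toy utilization test in which garbage collection is modeled as one extra periodic task. This is only a minimal sketch under the classical Liu and Layland rate-monotonic bound; the task parameters and GC cost below are illustrative assumptions, not the thesis's actual formulas.

```python
def liu_layland_bound(n):
    # Classical rate-monotonic utilization bound for n periodic tasks.
    return n * (2 ** (1.0 / n) - 1)

def schedulable(tasks, gc_cost, gc_period):
    # Model garbage collection as one extra periodic task with
    # worst-case cost gc_cost and period gc_period (a proactive GC slot).
    all_tasks = tasks + [(gc_cost, gc_period)]
    utilization = sum(c / t for c, t in all_tasks)
    return utilization <= liu_layland_bound(len(all_tasks))

# Two tasks (WCET, period) plus a 5 ms GC slot every 100 ms: schedulable.
print(schedulable([(10, 50), (15, 100)], 5, 100))   # True
# A heavier task set exceeds the bound once GC overhead is added.
print(schedulable([(40, 50), (30, 100)], 5, 100))   # False
```

A proactive scheme that bounds GC cost per period fits this analysis directly; a lazy scheme's worst-case blocking time is harder to bound, which is the intuition behind the thesis's comparison.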
2

Floating gate engineering for novel nonvolatile flash memories

Liu, Hai, 1977- 07 October 2010 (has links)
The increasing demands for higher density, lower cost, higher speed, better endurance, and longer retention have pushed flash memory technology, the predominant and driving force of the semiconductor nonvolatile memory market in recent years, into a position facing great challenges. The conventional flash memory technology using a continuous, highly doped polysilicon floating gate, which is the most common in today's commercial market, cannot satisfy these demands as transistor sizes continue to scale down beyond 32 nm. Nanocrystal floating gate flash memory and SONOS-type flash memory are considered among the most promising approaches for extending the scalability and improving the performance of next-generation flash memory. This dissertation addresses the issues that strongly affect the performance of nanocrystal floating gate flash memory and SONOS-type flash memory. New device structures and new materials compatible with the CMOS process flow are proposed and demonstrated as potential solutions for further device performance improvement. First, the effect of nanocrystal-high-k dielectric interface quality on nanocrystal flash memory performance is studied. By using germanium-silicon core-shell nanocrystals or ruthenium nanocrystals buried in HfO₂ as charge storage nodes, high interface quality has been achieved, leading to promising memory device performance. Next, another crucial challenge for nanocrystal flash memory, how to deposit a uniformly distributed nanocrystal matrix with good shape and size control at high density, is discussed. Using the protein GroEL to obtain a well-ordered, high-density nanocrystal pattern, a flash memory device with Ni nanocrystals buried in HfO₂ is demonstrated. In this technique, the nanocrystal size is restricted by the size of GroEL's central cavity and the density is limited by the protein template. To overcome this limitation, a novel method using self-assembled Co-SiO₂ nanocrystals as charge storage nodes is demonstrated. Separated by thin SiO₂, these nanocrystals can form a close-packed arrangement to achieve ultrahigh density. Finally, charge trapping layer band engineering is proposed for SONOS-type memory for better memory performance. By manipulating the pulse ratio of the Hf and Al precursors during ALD deposition, the band diagram of the Hf[subscript x]Al[subscript y]O charge trapping layer is optimized to have a Hf:Al ratio of 3:1 at the bottom and 1:3 at the top, leading to a better trade-off between programming and retention for the memory device.
3

Automotive embedded systems software reprogramming

Schmidgall, Ralf January 2012 (has links)
The exponential growth of computing power is no longer limited to stand-alone computing systems but applies to all areas of commercial embedded computing systems. The ongoing rapid growth in intelligent embedded systems is visible in the commercial automotive area, where a modern car today implements up to 80 different electronic control units (ECUs) and the total memory size has increased to several hundred megabytes. This growth in the commercial mass-production world has led to new challenges, not only within the automotive industry but also in other business areas where cost pressure is high. The need to drive costs down means that every cent spent on recurring engineering costs must be justified. There is a conflict between functional requirements (functionality, system reliability, production and manufacturing aspects, etc.) and testing and maintainability aspects. Software reprogramming, a key capability within the automotive industry, has partly resolved that conflict in the past. Software reprogramming for in-field service and maintenance in the after-sales market provides a strong method to fix previously unidentified software errors. But increasing software sizes, and therefore increasing reprogramming times, will reduce these benefits, especially if ECU software sizes grow faster than the vehicle's onboard infrastructure can be upgraded. The thesis results enable cost prediction of embedded systems' software reprogramming by providing an effective and reliable model of reprogramming time for different existing and new technologies. This model and additional research results contribute to a timeline of short-term, mid-term, and long-term solutions which address the current problems as well as future challenges, especially for the automotive industry, but also for all other business areas where cost pressure is high and software reprogramming is a key issue during the product life cycle.
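Why growing image sizes dominate service time can be seen from a very rough first-order model: reprogramming time is erase time plus image size over effective bus throughput. This is only a sketch; the CAN bit rate, 30% protocol-overhead fraction, and erase time below are assumed values for illustration, not the thesis's calibrated model.

```python
def reprogramming_time_s(image_bytes, bus_rate_bps,
                         protocol_overhead=0.3, erase_time_s=2.0):
    # Effective throughput after framing/flow-control overhead
    # (the 30% overhead fraction is an assumption for illustration).
    effective_bps = bus_rate_bps * (1.0 - protocol_overhead)
    return erase_time_s + image_bytes * 8 / effective_bps

# A 4 MiB ECU image over a 500 kbit/s CAN link takes on the order of
# 98 seconds; multiply by dozens of ECUs and the problem is apparent.
print(round(reprogramming_time_s(4 * 1024 * 1024, 500_000), 1))  # 97.9
```

Doubling the image size roughly doubles the time, while the bus rate is fixed by the onboard infrastructure, which is the mismatch the abstract points to.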
4

Writing on Dirty Memory

Kim, Yongjune 01 July 2016 (has links)
Non-volatile memories (NVM) including flash memories and resistive memories have attracted significant interest as data storage media. Flash memories are widely employed in mobile devices and solid-state drives (SSD). Resistive memories are promising for storage class memory and embedded memory applications. Data reliability is the fundamental requirement of NVM as data storage media. However, modern nano-scale NVM suffers from challenges of inter-cell interference (ICI), charge leakage, and write endurance, which threaten the reliability of stored data. In order to cope with these adverse effects, advanced coding techniques including soft decision decoding have been actively investigated. However, current coding techniques do not capture the physical properties of NVM well, so the improvement of data reliability is limited. Although soft decision decoding improves data reliability by using soft decision values, it degrades read speed performance due to the multiple read operations needed to obtain soft decision values. In this dissertation, we explore coding schemes that use side information corresponding to the physical phenomena to improve data reliability significantly. The side information is obtained before writing data into memory and incorporated during the encoding stage. Hence, the proposed coding schemes maintain the read speed, whereas the write speed performance would be degraded. This is a significant advantage from the perspective of speed performance, since the read speed is more critical than the write speed in many memory applications. First, this dissertation investigates coding techniques for memory with stuck-at defects. The idea of coding techniques for memory with stuck-at defects is employed to handle critical problems of flash memories and resistive memories. For 2D planar flash memories, we propose a coding scheme that combats ICI, the primary challenge of 2D planar flash memories.
Also, we propose a coding scheme that reduces the effect of fast detrapping, a degradation factor in 3D vertical flash memories. Finally, we investigate the coding techniques that improve write endurance and power consumption of resistive memories.
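The idea of encoding with side information about stuck-at cells can be illustrated with the simplest possible masking trick: with one spare flag bit, a word can always be stored so that a single known stuck cell agrees with the stored value. This is only a toy sketch of the defect-masking idea, not the dissertation's actual coding schemes.

```python
def write_with_stuck_cell(data_bits, stuck_pos, stuck_val):
    # If the desired bit disagrees with the known stuck value, store the
    # complement of the whole word and record that in a flag bit.
    if data_bits[stuck_pos] == stuck_val:
        return data_bits[:], 0
    return [1 - b for b in data_bits], 1

def read_back(stored_bits, flag):
    # Undo the inversion on read.
    return [1 - b for b in stored_bits] if flag else stored_bits[:]

word = [1, 0, 1, 1]
stored, flag = write_with_stuck_cell(word, 1, 1)  # cell 1 is stuck at 1
stored[1] = 1   # the physical cell always reads its stuck value
print(read_back(stored, flag))  # [1, 1, 0, 0] inverted back to [1, 0, 1, 1]
```

Note the key property: the side information (which cell is stuck, and at what value) is used only at write time, so reads stay as fast as for defect-free memory, exactly the asymmetry the abstract emphasizes.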
5

LDPC Codes over Large Alphabets and Their Applications to Compressed Sensing and Flash Memory

Zhang, Fan 2010 August 1900 (has links)
This dissertation focuses on the analysis, design, and optimization of low-density parity-check (LDPC) codes over channels with large alphabet sets and their applications to compressed sensing (CS) and flash memories. Compared to belief-propagation (BP) decoding, verification-based (VB) decoding has significantly lower complexity and near-optimal performance when the channel alphabet set is large. We analyze the verification-based decoding of LDPC codes over the q-ary symmetric channel (q-SC) and propose list-message-passing (LMP) decoding, which offers a good tradeoff between complexity and decoding threshold. We prove that LDPC codes with LMP decoding achieve the capacity of the q-SC as q and the block length go to infinity. CS is a newly emerging area closely related to coding theory and information theory. CS deals with the sparse signal recovery problem using a small number of linear measurements. One big challenge in the CS literature is to reduce the number of measurements required to reconstruct the sparse signal. In this dissertation, we show that LDPC codes with verification-based decoding can be applied to CS systems with surprisingly good performance and low complexity. We also discuss the design of modulation codes and error-correcting codes (ECCs) for flash memories. We design asymptotically optimal modulation codes and discuss their improvement using ideas from load-balancing theory. We also design LDPC codes over integer rings and fields with large alphabet sets for flash memories.
6

SD Storage Array: Development and Characterization of a Many-device Storage Architecture

Katsuno, Ian 29 November 2013 (has links)
Transactional workloads have storage request streams consisting of many small, independent, random requests. Flash memory is well suited to these types of access patterns, but is not always cost-effective. This thesis presents a novel storage architecture called the SD Storage Array (SDSA), which adopts a many-device approach. It utilizes many flash storage devices in the form of an array of Secure Digital (SD) cards. This approach leverages the commodity status of SD cards to pursue a cost-effective means of providing the high throughput that transactional workloads require. Characterization of a prototype revealed that when the request stream was 512 B randomly addressed reads, the SDSA provided 1.5 times the I/O operations per second (IOPS) of a top-of-the-line solid state drive, provided there were at least eight requests in flight. A scale-out simulation showed the IOPS should scale with the size of the array, provided there are no upstream bottlenecks.
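The scale-out claim can be sketched as a back-of-the-envelope saturation model: aggregate IOPS grows with the number of cards only while enough requests are in flight to keep every card busy. The per-card IOPS figure and the simple min() saturation model below are illustrative assumptions, not measurements from the thesis.

```python
def array_iops(per_card_iops, n_cards, inflight):
    # Each in-flight request keeps at most one card busy, so throughput
    # saturates once every card has work; below that point, queue depth
    # (not array size) is the limit, matching the eight-in-flight caveat.
    busy_cards = min(n_cards, inflight)
    return per_card_iops * busy_cards

print(array_iops(2000, 16, 8))    # 16000: queue depth limits us to 8 cards
print(array_iops(2000, 16, 32))   # 32000: all 16 cards are kept busy
```

The "no upstream bottlenecks" proviso in the abstract corresponds to assuming the host interconnect can actually deliver enough in-flight requests to saturate the array.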
8

Finding Alternatives to the Hard Disk Drive for Virtual Memory

Embry, Bruce Albert 01 July 2009 (has links) (PDF)
Current computer systems fill the demand of operating systems and applications for ever greater amounts of random access memory by paging the least recently used data to the hard disk drive. This paging process is called "virtual memory," to indicate that the hard disk drive is used to create the illusion that the computer has more random access memory than it actually has. Unfortunately, the fastest hard disk drives are over five orders of magnitude slower than the DRAM they are emulating. When the demand for memory increases to the point that processes are being continually saved to disk and then retrieved again, a process called "thrashing" occurs, and the performance of the entire computer system plummets. This thesis sought alternatives to the hard disk drive for virtual memory, aimed at home and small business computer users, that would not suffer from the same long delays. Virtual memory is especially important for older computers, which are often limited by their motherboards, their processors, and their power supplies to a relatively small amount of random access memory. Thus, this thesis focused on improving the performance of older computers by replacing the hard disk drive with faster technologies for the virtual memory. Of the different technologies considered, flash memory was selected because of its low power requirements, its ready availability, its relatively low cost, and its significantly faster random access times. Two devices were evaluated on a system with 512 MB of RAM, a Pentium 4 processor, and a SATA hard disk drive. Theoretical models and a simulator were developed, and physical performance measurements were taken. Flash memory was not shown to be significantly faster than the hard disk drive in virtual memory applications.
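The five-orders-of-magnitude gap the abstract describes can be made concrete with the standard effective-access-time model for demand paging. The latency figures below are rough, assumed values for illustration, not the thesis's measurements.

```python
def effective_access_ns(miss_rate, dram_ns, backing_ns):
    # Average access time when a fraction miss_rate of accesses must be
    # paged in from the backing device (HDD or flash).
    return (1 - miss_rate) * dram_ns + miss_rate * backing_ns

DRAM_NS = 100            # ~100 ns DRAM access (assumed)
HDD_NS = 10_000_000      # ~10 ms HDD seek plus rotation (assumed)
FLASH_NS = 100_000       # ~100 us flash random read (assumed)

# Even a 0.1% fault rate makes the backing device dominate for the HDD.
print(round(effective_access_ns(0.001, DRAM_NS, HDD_NS), 1))    # 10099.9
print(round(effective_access_ns(0.001, DRAM_NS, FLASH_NS), 1))  # 199.9
```

Under these assumed numbers flash narrows the gap by two orders of magnitude on random reads, which is why it was the candidate worth testing; the thesis's negative result shows that real-world factors (e.g. flash write behavior) can erase this theoretical advantage.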
9

A NEW GENERATION OF RECORDING TECHNOLOGY: THE SOLID STATE RECORDER

Jensen, Peter, Thacker, Christopher 10 1900 (has links)
International Telemetering Conference Proceedings / October 26-29, 1998 / Town & Country Resort Hotel and Convention Center, San Diego, California / The Test & Evaluation community is starting to migrate toward solid state recording. This paper outlines some of the important areas that are new to solid state recording as well as examining some of the issues involved in moving to a direct recording methodology. Some of the parameters used to choose a solid state memory architecture are included. A matrix to compare various methods of data recording, such as solid state and magnetic tape recording, will be discussed. These various methods will be evaluated using the following parameters: Ruggedness (Shock, Vibration, Temperature), Capacity, and Reliability (Error Correction). A short discussion of data formats with an emphasis on efficiency and usability is included.
10

A Power Conservation Methodology for Hard Drives by Combining Prefetching Algorithms and Flash Memory

Halper, Raymond 01 January 2013 (has links)
Computing system power consumption is a concern because it has financial and environmental implications. These concerns will grow in the future due to current trends in data growth, information availability requirements, and increases in the cost of energy. Data growth compounds daily because of the accessibility of portable devices, increased connectivity to the Internet, and a trend toward storing information electronically. These three factors also result in an increased demand for data to be available at all times, which means more electronic devices requiring power. As more electricity is required, the overall cost of energy increases due to demand and limited resource availability. The environment also suffers, as most electricity is generated from fossil fuels, which increases emissions of carbon dioxide into the atmosphere. To reduce the amount of energy required while maintaining data availability, researchers have focused on changing how data is accessed from hard drives. Hard drives have been found to consume 10 to 86 percent of a system's energy. By changing the way data is accessed (implementing multi-speed hard drives, algorithms that prefetch, cache, and batch data requests, or flash drive caches), researchers have been able to reduce the energy required for hard drive operation. However, these approaches often result in reduced I/O performance or reduced data availability. This dissertation provides a new method of reducing hard drive energy consumption by implementing a prefetching technique that predicts a chain of future requests based upon previous request observations. The files to be prefetched are given to a caching system that uses a flash memory device for caching. This caching system implements energy-sensitive algorithms to optimize the value of files stored in the flash memory device. By prefetching files, the hard drive can be placed in a low-power sleep state. This results in reduced power consumption while providing high I/O performance and data availability. Analysis of simulator results confirmed that this new method increased I/O performance and data availability over previous studies while also providing greater energy savings. Out of 30 scenarios, the new method displayed better energy savings in 26 scenarios and better performance in all 30 scenarios compared to previous studies. The new method also achieved 50.9 percent less time and 34.6 percent less energy for a workload than previous methodologies.
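The energy argument behind sleeping the hard drive can be sketched with a simple two-state power model. The wattages and the active/sleep time split below are assumed values for illustration, not the dissertation's simulator parameters.

```python
def disk_energy_j(active_s, sleep_s, p_active_w=8.0, p_sleep_w=1.0):
    # Two-state model: the disk draws p_active_w while spinning and
    # serving requests, and p_sleep_w in its low-power sleep state
    # (both wattages are assumed, illustrative figures).
    return active_s * p_active_w + sleep_s * p_sleep_w

baseline = disk_energy_j(3600, 0)      # disk stays active for a full hour
prefetched = disk_energy_j(600, 3000)  # prefetching lets it sleep 50 min
print(round(1 - prefetched / baseline, 3))  # 0.729 -> ~73% energy saved
```

The savings depend entirely on how long prefetching keeps the disk asleep, which is why the quality of the request-chain prediction, not the cache itself, is the heart of the method.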
