11

The limiting error correction capabilities of the CDROM

Roberts, Jonathan D. January 1995 (has links)
The purpose of this work was to explore the error correction performance of the CD-ROM data storage medium in both standard and hostile environments. A detailed simulation of the channel was written in Pascal, with which the performance of the CD-ROM error correction strategies against errors can be analysed. Modulated data were corrupted with both burst and random errors, and the errors remaining at each stage of the decoding process are illustrated and discussed. Results are given for a number of burst lengths, each at different points within the data structure. It is shown that the maximum correctable burst error is approximately 7000 modulated data bytes. The effect of both transient and permanent errors on the performance of a CD-ROM drive was also investigated. Here, software was written to obtain block access times and retry counts from a PC connected to a Hitachi drive unit via a SCSI bus. A number of sequential logical data blocks are read from test discs, and access times and retry counts are recorded for each. Results are presented for two classes of disc, one clean and one with a surface blemish, each exposed to both standard and hostile vibration environments. Three classes of vibration are considered: isolated shock, fixed-state sinusoidal and swept sinusoidal. The critical band of frequencies is identified for each level of vibration, and the effect of surface errors on resistance to vibration is investigated.
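The burst tolerance described above comes largely from interleaving: a long burst in the channel is scattered so that each error-correcting codeword sees only a few symbol errors. A minimal sketch of that principle, with illustrative depth and codeword length rather than the CD-ROM's actual CIRC parameters:

```python
# Illustrative block interleaver: DEPTH and CODEWORD are toy values,
# not the CD-ROM's real CIRC parameters.
DEPTH = 4          # interleave depth (number of codewords in flight)
CODEWORD = 8       # symbols per codeword

def interleave(data):
    # write row by row (one codeword per row), read out column by column
    rows = [data[r * CODEWORD:(r + 1) * CODEWORD] for r in range(DEPTH)]
    return [rows[r][c] for c in range(CODEWORD) for r in range(DEPTH)]

def deinterleave(data):
    cols = [data[c * DEPTH:(c + 1) * DEPTH] for c in range(CODEWORD)]
    return [cols[c][r] for r in range(DEPTH) for c in range(CODEWORD)]

data = list(range(DEPTH * CODEWORD))
channel = interleave(data)
for i in range(8, 12):            # a burst wipes out 4 consecutive symbols
    channel[i] = None
received = deinterleave(channel)
# after de-interleaving, each codeword holds at most one erasure, which
# even a single-error-correcting Reed-Solomon code could repair
for r in range(DEPTH):
    row = received[r * CODEWORD:(r + 1) * CODEWORD]
    assert sum(v is None for v in row) <= 1
```

A real CD-ROM additionally delays alternate symbols (cross-interleaving) and applies two concatenated Reed-Solomon codes, but the burst-spreading principle is the same.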
12

Improving the Reliability of NAND Flash, Phase-change RAM and Spin-torque Transfer RAM

January 2014 (has links)
abstract: Non-volatile memories (NVM) are widely used in modern electronic devices due to their non-volatility, low static power consumption and high storage density. While Flash memories are the dominant NVM technology, resistive memories such as phase change random access memory (PRAM) and spin torque transfer magnetic random access memory (STT-MRAM) are gaining ground. All these technologies suffer from reliability degradation due to process variations, structural limits and material property shift. To address the reliability concerns of these NVM technologies, multi-level, low-cost solutions are proposed for each of them. My approach consists of first building a comprehensive error model; the error characteristics are then exploited to develop low-cost multi-level strategies to compensate for the errors. For NAND Flash memory, I first characterize errors due to threshold voltage variations as a function of the number of program/erase cycles. A flexible product code is then designed that migrates to a stronger ECC scheme as program/erase cycles increase. An adaptive data refresh scheme is also proposed to improve memory reliability at low energy cost for applications with different data update frequencies. For PRAM, soft and hard error models are built based on shifts in the resistance distributions. I then developed a multi-level error control approach involving bit interleaving and subblock flipping at the architecture level, threshold resistance tuning at the circuit level, and programming current profile tuning at the device level. This approach reduced the error rate enough that a low-cost ECC scheme suffices to satisfy the memory reliability constraint. I also studied the reliability of a PRAM+DRAM hybrid memory system and analyzed the trade-offs between memory performance, programming energy and lifetime. For STT-MRAM, I first developed an error model based on process variations, then a multi-level approach to reduce the error rates consisting of increasing the W/L ratio of the access transistor, increasing the voltage difference across the memory cell and adjusting the current profile during the write operation. This enabled use of a low-cost BCH-based ECC scheme to achieve very low block failure rates. / Dissertation/Thesis / Ph.D. Electrical Engineering 2014
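The Flash part of the approach — migrating to a stronger ECC as program/erase cycles accumulate — can be sketched as picking the weakest correction strength that still meets a page failure-rate target under a wear-dependent raw bit-error rate. The wear model, codeword size and target below are illustrative assumptions, not figures from the dissertation:

```python
import math

def raw_ber(pe_cycles, a=1e-6, b=4e-4):
    # toy wear model (assumed): raw bit-error rate grows with P/E cycles
    return a * math.exp(b * pe_cycles)

def page_fail_prob(ber, n_bits, t):
    # probability of more than t bit errors in an n-bit codeword
    p_ok = sum(math.comb(n_bits, k) * ber**k * (1 - ber)**(n_bits - k)
               for k in range(t + 1))
    return 1.0 - p_ok

def choose_ecc_strength(pe_cycles, n_bits=4096, target=1e-9, t_max=40):
    # migrate to the weakest ECC that still meets the reliability target
    ber = raw_ber(pe_cycles)
    for t in range(1, t_max + 1):
        if page_fail_prob(ber, n_bits, t) <= target:
            return t
    return t_max

# a worn block needs a stronger code than a fresh one
assert choose_ecc_strength(1000) <= choose_ecc_strength(10000)
```

A product-code implementation would realise each strength level with different row/column component codes; the selection logic above is the adaptive core.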
13

Error Control in Wireless ATM Network

Pu, Jianfeng 07 July 2000 (has links)
The Asynchronous Transfer Mode (ATM) protocol was designed to support real-time traffic streams over high-quality links such as fiber optics, where the transmission error rate is extremely low; ATM performs poorly in error-prone environments such as wireless communications. The purpose of this research is to investigate error control schemes for wireless ATM (W-ATM) that support real-time service, such that physical-layer error conditions are handled in layers below the ATM transport layer. Automatic Repeat reQuest (ARQ) schemes and Forward Error Correction (FEC) have been widely used for reliable data transmission. However, existing ARQ schemes can introduce unbounded delay in high-error-rate environments such as W-ATM networks because they lack a delay control mechanism; as a result, they are not appropriate for real-time data communications with strict packet delay requirements. In this dissertation, we explore issues related to error control in W-ATM. Adaptation of FEC, specifically Reed-Solomon codes, to channel error conditions in W-ATM is investigated. A quality-of-service (QoS)-aware error control algorithm is developed and its performance evaluated; the algorithm is further simplified to make it more suitable for practical applications. The requirements for ARQ applicability in a real-time communication environment such as W-ATM are analyzed extensively. An ARQ scheme, called the D-bit protocol, is developed to satisfy the real-time requirements. The scheme supports reliable packet discarding while allowing retransmissions without compromising user-level QoS for real-time stream applications. Simulations show the effectiveness and liveness of the protocol. / Ph. D.
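The delay-bound problem that motivates this work can be illustrated with a toy retransmission loop: a cell is retried only while its deadline can still be met, and is deliberately discarded afterwards so stale data never stalls the stream. This is not the D-bit protocol itself; the loss rate, round-trip time and deadline are assumed values:

```python
import random

def send_with_deadline(loss_rate, rtt_ms, deadline_ms, rng):
    """Return ('delivered', delay) or ('discarded', deadline_ms)."""
    elapsed = 0.0
    while elapsed + rtt_ms <= deadline_ms:       # time left for one more try?
        elapsed += rtt_ms
        if rng.random() > loss_rate:             # this transmission got through
            return 'delivered', elapsed
        # lost: a retransmission request goes out, if the deadline permits
    return 'discarded', deadline_ms              # give up rather than stall

rng = random.Random(1)
outcomes = [send_with_deadline(0.3, 10.0, 45.0, rng) for _ in range(1000)]
# bounded delay: every delivered cell met its deadline, unlike plain ARQ
assert all(delay <= 45.0 for _, delay in outcomes)
```

Plain ARQ would keep retrying indefinitely, which is exactly the unbounded-delay behaviour the dissertation sets out to avoid.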
14

Performance of Soft-Decision Block-Decoded Hybrid-ARQ Error Control

Rice, Michael 10 1900 (has links)
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Soft-decision correlation decoding with retransmission requests for block codes is proposed and the resulting performance is analyzed. The correlation decoding rule is modified to allow retransmission requests when the received word is rendered unreliable by channel noise. The modification is realized by a reduction in the Euclidean-space volume of the decoding region corresponding to each codeword. The performance analysis reveals the typical throughput-reliability trade-off characteristic of error control systems that employ retransmissions. Performance comparisons reveal improvements beyond those attainable with hard-decision decoding algorithms. The proposed soft-decision decoding rule permits the use of a simplified codeword searching algorithm, which reduces the complexity of the correlation decoder to the point where practical implementation is feasible.
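The shrunken decoding regions can be sketched with a toy antipodal code: decode to the nearest codeword in Euclidean distance, but declare the word unreliable and request retransmission when the nearest codeword wins by less than a margin. The code and margin below are illustrative, not those analyzed in the paper:

```python
import math

# a toy (3,1) repetition code in antipodal (+1/-1) signalling
CODEWORDS = {0: (1.0, 1.0, 1.0), 1: (-1.0, -1.0, -1.0)}

def decode(received, margin=1.0):
    # rank codewords by Euclidean distance to the received soft values
    dists = {m: math.dist(received, cw) for m, cw in CODEWORDS.items()}
    ranked = sorted(dists.items(), key=lambda kv: kv[1])
    best, runner_up = ranked[0], ranked[1]
    if runner_up[1] - best[1] < margin:
        return None                      # unreliable: request retransmission
    return best[0]

assert decode((0.9, 1.1, 0.8)) == 0      # clearly inside codeword 0's region
assert decode((0.1, -0.1, 0.2)) is None  # near the boundary: retransmit
```

Raising the margin shrinks every decoding region, trading throughput (more retransmissions) for reliability — the trade-off the paper quantifies.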
15

Investigation of Forward Error Correction Coding Schemes for a Broadcast Communication System

Wang, Xiaohan Sasha January 2013 (has links)
This thesis investigates four FEC (forward error correction) coding schemes for their suitability for a broadcast system with one energy-rich transmitter and many energy-constrained receivers experiencing a variety of channel conditions. The four coding schemes are: repetition codes (the baseline scheme); Reed-Solomon (RS) codes; Luby Transform (LT) codes; and a concatenation of RS and LT codes. The schemes were tested on their ability to achieve both high average data reception success probability and short data reception time at the receivers (due to limited energy). The code rate (Rc) is fixed at either 1/2 or 1/3. Two statistical channel models were employed: the memoryless channel and the Gilbert-Elliott channel. The investigation considered only the data-link-layer behaviour of the schemes. During the course of the investigation, an improvement to the original LT encoding process was made; the improved coding method is named LTAM (LT codes with Added Memory). LTAM codes reduce the overhead needed for decoding short-length messages. The improvement holds for messages of up to 10000 user packets, with a maximum overhead reduction of as much as 10% over the original LT codes. The LT-type codes were found to achieve both high data reception success probability and flexible switch-off times for the receivers, and to be adaptable to different channel characteristics; they are therefore a prototype of the ideal coding scheme this project seeks. This scheme was then further developed by applying an RS code as an inner code to further improve the success probability of packet reception. The results show that the LT&RS code has a significant improvement in channel error tolerance over LT codes without an RS code applied; the trade-off is slightly more reception time and more decoding complexity.
The LT&RS code is thus determined to be the scheme that best fulfils the aim of this project: a coding scheme with both a high overall data reception probability and a short overall data reception time. Compared with the baseline repetition code, the improvement is in three aspects. Firstly, the LT&RS code maintains a full success rate over channels with approximately two orders of magnitude more errors than the repetition code can tolerate, for both channel models and both code rates tested. Secondly, the LT&RS code performs exceptionally well on burst error channels, maintaining a success rate above 70% on long-burst-error channels where both the repetition code and the RS code have almost zero success probability. Thirdly, while the success rates are improved, the data reception time, measured as the number of packets that must be received at the receiver, of the LT&RS codes can reach a maximum of 58% reduction for Rc = 1/2 and 158% reduction for Rc = 1/3 compared with both the repetition code and the RS code, at the worst channel error rate for which the LT&RS code maintains almost 100% success probability.
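LT encoding itself is simple: each output packet is the XOR of a few randomly chosen input packets, and a peeling decoder repeatedly resolves degree-one packets. The sketch below uses a toy degree distribution rather than the robust soliton distribution, and omits the added-memory refinement that distinguishes LTAM:

```python
import random

def lt_encode(packets, n_out, rng):
    # each output packet XORs d randomly chosen input packets
    k = len(packets)
    out = []
    for _ in range(n_out):
        d = min(rng.choice([1, 2, 2, 3, 4]), k)   # toy degree distribution
        chosen = rng.sample(range(k), d)
        block = 0
        for i in chosen:
            block ^= packets[i]
        out.append((chosen, block))
    return out

def peel_decode(encoded, k):
    # peeling decoder: strip known symbols, learn from degree-1 packets
    known = {}
    pending = [[set(ix), blk] for ix, blk in encoded]
    progress = True
    while progress and len(known) < k:
        progress = False
        for item in pending:
            ix, blk = item
            for i in [i for i in ix if i in known]:
                blk ^= known[i]                   # remove solved symbols
                ix.discard(i)
            item[1] = blk
            if len(ix) == 1:
                (i,) = ix
                if i not in known:
                    known[i] = blk
                    progress = True
                ix.clear()
    return [known.get(i) for i in range(k)]

packets = [5, 9, 12, 7]
decoded = peel_decode(lt_encode(packets, 12, random.Random(0)), len(packets))
# peeling is sound: anything it resolves matches the original symbol
assert all(v is None or v == packets[i] for i, v in enumerate(decoded))
```

With enough received packets the decoder recovers all k symbols with high probability; LTAM's contribution is lowering the overhead needed for that on short messages.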
16

Energy-based Error Control Strategies Suitable for Long MD Simulations

Easley, Kante 31 December 2010 (has links)
When evaluating integration schemes used in molecular dynamics (MD) simulations, energy conservation is often cited as the primary criterion by which integrators should be compared. As a result, variable-stepsize Runge-Kutta methods are often ruled out of consideration due to their characteristic energy drift. We have shown that by appropriately modifying the stepsize selection strategy in a variable-stepsize RK method, the MD practitioner can obtain substantial control over the energy drift during the course of a simulation. This ability has not previously been reported in the literature, and we present numerical examples illustrating that it can be achieved without sacrificing computational efficiency on currently attainable timescales.
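One way to read the claim above is that the step-acceptance test can be driven by accumulated energy drift rather than by local truncation error alone. The sketch below is an illustrative construction of that idea, not the authors' strategy: an RK4 step on a harmonic oscillator is rejected and the stepsize halved whenever the step would push the energy drift past a tolerance:

```python
def energy(q, p):
    return 0.5 * p * p + 0.5 * q * q            # harmonic oscillator H = T + V

def rk4_step(q, p, h):
    f = lambda q, p: (p, -q)                    # dq/dt = p, dp/dt = -q
    k1q, k1p = f(q, p)
    k2q, k2p = f(q + 0.5 * h * k1q, p + 0.5 * h * k1p)
    k3q, k3p = f(q + 0.5 * h * k2q, p + 0.5 * h * k2p)
    k4q, k4p = f(q + h * k3q, p + h * k3p)
    return (q + h * (k1q + 2 * k2q + 2 * k3q + k4q) / 6,
            p + h * (k1p + 2 * k2p + 2 * k3p + k4p) / 6)

def integrate(q, p, t_end, h=0.1, e_tol=1e-5):
    # accept a step only if the accumulated energy drift stays within budget
    e0, t = energy(q, p), 0.0
    while t < t_end:
        step = min(h, t_end - t)
        qn, pn = rk4_step(q, p, step)
        if abs(energy(qn, pn) - e0) > e_tol:
            h *= 0.5                            # drift too large: retry smaller
            continue
        q, p, t = qn, pn, t + step
        h *= 1.1                                # cautiously re-grow the step
    return q, p, e0

q, p, e0 = integrate(1.0, 0.0, 10.0)
assert abs(energy(q, p) - e0) <= 1e-5           # drift held within tolerance
```

By construction every accepted step keeps the drift inside the tolerance, which is the kind of control a local-error-only controller does not provide.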
18

Error control for scalable image and video coding

Kuang, Tianbo 24 November 2003
Scalable image and video coding has been proposed for transmitting image and video signals over lossy networks, such as the Internet and wireless networks. However, scalability alone is not a complete solution, since there is a conflict between the unequal importance of the parts of a scalable bit stream and the indiscriminate nature of packet losses in the network. This thesis investigates three methods of combating the detrimental effects of random packet losses on scalable images and video within the joint source-channel coding framework: the error resilient method, the error concealment method, and the unequal error protection method. For the error resilient method, an optimal bit allocation algorithm is first proposed without considering the distortion caused by packet losses, and is then extended to accommodate them. For the error concealment method, a simple temporal error concealment mechanism is designed for video signals. For the unequal error protection method, the optimal protection allocation problem is formulated and solved. These methods are tested on the wavelet-based Set Partitioning in Hierarchical Trees (SPIHT) scalable image coder. Performance gains and losses in lossy and lossless environments are studied for both the original coder and the error-controlled coders. The results show performance advantages of all three methods over the original SPIHT coder. In particular, the unequal error protection and error concealment methods are promising for future Internet/wireless image and video transmission: the former performs well even under heavy packet loss (a PSNR of 22.00 dB was observed at nearly 60% packet loss), and the latter introduces no extra overhead.
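The unequal error protection formulation can be sketched as a greedy allocation: spend a fixed parity budget wherever it buys the largest drop in expected distortion, so the most important (base) layer of the embedded stream ends up with the most protection. The loss model and importance weights below are illustrative, not the thesis's formulation:

```python
from math import comb

def layer_loss_prob(parity, p_pkt=0.2, n_pkts=10):
    # a layer is lost if more of its packets are erased than parity can repair
    return 1.0 - sum(comb(n_pkts, k) * p_pkt**k * (1 - p_pkt)**(n_pkts - k)
                     for k in range(parity + 1))

def allocate_uep(importance, parity_budget):
    parity = [0] * len(importance)
    for _ in range(parity_budget):
        # give the next parity packet to the layer whose protection yields
        # the largest marginal reduction in expected distortion
        gains = [imp * (layer_loss_prob(t) - layer_loss_prob(t + 1))
                 for imp, t in zip(importance, parity)]
        parity[gains.index(max(gains))] += 1
    return parity

# the base layer (importance 10) ends up with the most protection
alloc = allocate_uep([10.0, 3.0, 1.0], parity_budget=6)
assert alloc[0] >= alloc[1] >= alloc[2]
```

An optimal solver would search the allocation space jointly; the greedy rule is a common and much cheaper approximation.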
20

Adaptive Error Control Schemes for Scalable Video Transmission over Wireless Internet

Lee, Chen-Wei 22 July 2008 (has links)
Given the fast evolution of wireless networks and multimedia compression technologies in recent years, real-time multimedia transmission over wireless networks is the next step for contemporary communication systems. Lower bandwidth and higher loss rates make it harder to transmit multimedia content over wireless networks than over their wired counterparts, and the delay constraints of real-time multimedia transmission raise further challenges for the design of wireless communication systems. This dissertation proposes an adaptive unequal error protection (UEP) and packet size assignment scheme for scalable video transmission over a burst-error channel. An analytic model is developed to evaluate the impact of channel bit error rate on the quality of streaming scalable video. A video transmission scheme is proposed that combines adaptive packet size assignment with unequal error protection to increase end-to-end video quality. Several distinct scalable video transmission schemes over burst-error channels are compared, and the simulation results reveal that the proposed schemes react to varying channel conditions with less and smoother quality degradation. Furthermore, to meet the real-time needs of many video transmission applications, this dissertation also proposes packet size assignment schemes of low time complexity. The test results show that although these schemes sacrifice a little video quality compared with the optimized method, they adapt to a wide range of network conditions, deliver smoother quality, and greatly reduce the computational time complexity relative to the optimized method.
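The packet-size half of such a scheme rests on a simple trade-off: longer packets amortise header overhead, but under a given bit error rate they are more likely to be corrupted. A sketch under an assumed independent-bit-error channel and header size (the dissertation itself targets burst-error channels and couples the size choice with UEP):

```python
HEADER_BITS = 160                      # assumed fixed per-packet header size

def goodput(payload_bits, ber):
    # useful fraction of the packet times its survival probability
    total = payload_bits + HEADER_BITS
    return payload_bits / total * (1.0 - ber) ** total

def best_payload(ber, sizes=range(64, 8193, 64)):
    # pick the payload size that maximises expected useful throughput
    return max(sizes, key=lambda n: goodput(n, ber))

# a noisier channel pushes the optimal packet size down
assert best_payload(1e-3) < best_payload(1e-5)
```

An adaptive scheme re-evaluates this choice as the channel estimate changes; the low-complexity variants in the dissertation aim to avoid re-solving the full optimisation each time.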
