371.
Error resilient image transmission using T-codes and edge-embedding. Reddy, Premchander. January 1900.
Thesis (M.S.)--West Virginia University, 2007. / Title from document title page. Document formatted into pages; contains x, 80 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 75-80).
372.
Capacity-based parameter optimization of bandwidth constrained CPM. Iyer Seshadri, Rohit. January 1900.
Thesis (Ph. D.)--West Virginia University, 2007. / Title from document title page. Document formatted into pages; contains xiv, 161 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 152-161).
373.
Permutation polynomial based interleavers for turbo codes over integer rings: theory and applications. Ryu, Jong Hoon. January 2007.
Thesis (Ph. D.)--Ohio State University, 2007. / Title from first page of PDF file. Includes bibliographical references (p. 109-114).
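The interleavers studied in this dissertation are permutation polynomials over integer rings. A widely known instance of the idea is the quadratic permutation polynomial (QPP) interleaver, π(i) = (f1·i + f2·i²) mod N. The sketch below uses the parameter pair f1 = 3, f2 = 10 for block length N = 40 (taken from the LTE turbo-code tables, not from the dissertation itself):

```python
def qpp_interleave(data, f1, f2):
    """Permute a block with a quadratic permutation polynomial (QPP)
    interleaver: pi(i) = (f1*i + f2*i**2) mod N."""
    n = len(data)
    return [data[(f1 * i + f2 * i * i) % n] for i in range(n)]

# Illustrative LTE parameters for N = 40: f1 = 3, f2 = 10.
block = list(range(40))
permuted = qpp_interleave(block, 3, 10)
assert sorted(permuted) == block  # a valid QPP is a bijection on Z_N
```

Because a valid QPP is a bijection on the ring of integers mod N, sorting the permuted indices recovers the original block, which the final assertion checks.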
374.
Error-correcting codes on low Néron-Severi rank surfaces. Zarzar, Marcos Augusto. January 1900. (PDF)
Thesis (Ph. D.)--University of Texas at Austin, 2006. / Vita. Includes bibliographical references.
375.
A no free lunch result for optimization and its implications. Smith, Marisa B. January 2009.
Thesis (M.S.)--Duquesne University, 2009. / Title from document title page. Abstract included in electronic submission form. Includes bibliographical references (p. 42) and index.
376.
Concealment algorithms for networked video transmission systems. Tudor-Jones, Gareth. January 1999.
This thesis addresses the problem of cell loss when transmitting video data over an ATM network. Cell loss causes sections of an image to be lost or discarded in the interconnecting nodes between the transmitting and receiving locations. The method used to combat this problem is a technique called error concealment, where the lost sections of an image are replaced with approximations derived from the information in the areas surrounding the error. This technique does not require any additional encoding, as used by error correction. Conventional techniques conceal in the pixel domain but require a large amount of processing (2N² up to 20N² operations, where N is the dimension of an N×N square block). Previous work at Loughborough instead used linear interpolation in the transform domain, which requires much less processing, to conceal the error.
377.
Numerical error analysis in foundation phase (Grade 3) mathematics. Ndamase-Nzuzo, Pumla Patricia. January 2014.
The focus of the research was on numerical errors committed in foundation phase mathematics. It therefore explored: (1) the numerical errors learners encounter in foundation phase mathematics, (2) the relationships underlying numerical errors, and (3) implementable strategies suitable for understanding numerical error analysis in foundation phase (Grade 3) mathematics. From the population of 375 learners in the 16 primary schools, the researcher selected a sample of 80 learners by simple random sampling, reported in the study as a 10% response rate. On the basis of the research questions, and informed by a positivist paradigm, a quantitative approach was used, with tables, graphs and percentages employed to address the research questions. A Likert scale was used with four categories of response: (A) Agree, (SA) Strongly Agree, (D) Disagree and (SD) Strongly Disagree. The results revealed that: (1) the numerical errors learners encounter include the inability to count backwards and forwards, and difficulties with number sequencing, mathematical signs, problem solving and word sums; (2) there was a relationship between committing errors and (a) copying numbers, (b) confusing mathematical or operational signs, and (c) reading numbers containing more than one digit; and (3) teachers need frequent professional development, topics need to change, and government needs to involve teachers at grassroots level prior to policy changes on how to implement strategies with regard to numerical errors in the foundation phase. It is recommended that attention be paid to the use of language and word sums in order to improve cognitive processes in foundation phase mathematics, and that learners be assisted regularly when reading or copying their work so that they make fewer errors. It is further recommended that teachers be trained in how to implement strategies of numerical error analysis in foundation phase mathematics, and that teachers use tests to identify learners at risk of developing mathematical difficulties in the foundation phase.
378.
The limiting error correction capabilities of the CD-ROM. Roberts, Jonathan D. January 1995.
The purpose of this work was to explore the error correction performance of the CD-ROM data storage medium in both standard and hostile environments. A detailed simulation of the channel was written in Pascal; using this, the performance of the CD-ROM correction strategies against errors can be analysed. Modulated data were corrupted with both burst and random errors, and at each stage of the decoding process the remaining errors are both illustrated and discussed. Results are given for a number of burst lengths, each at different points within the data structure. It is shown that the maximum correctable burst error is approximately 7000 modulated data bytes. The effect of both transient and permanent errors on the performance of a CD-ROM was also investigated. Here, software was written which allows both block access times and retries to be obtained from a PC connected to a Hitachi drive unit via a SCSI bus. A number of sequential logical data blocks are read from test discs, and access times and retry counts are recorded for each. Results are presented for two classes of disc, one clean and one with a surface blemish, each exposed to both standard and hostile vibration environments. Three classes of vibration are considered: isolated shock, fixed-state sinusoidal and swept sinusoidal. The critical band of frequencies is demonstrated for each level of vibration, and the effect of surface errors on resistance to vibration is investigated.
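The burst-error experiments described above can be mimicked with a small sketch that corrupts a contiguous run of modulated bytes. The 7000-byte burst length echoes the correction limit reported in the abstract; the stream contents, start position, and function name are illustrative assumptions, not details from the thesis.

```python
import random

def inject_burst(data, start, length):
    """Corrupt a contiguous run of bytes to simulate a burst error
    (illustrative sketch; positions and stream contents are hypothetical)."""
    out = bytearray(data)
    for i in range(start, min(start + length, len(out))):
        out[i] ^= random.randrange(1, 256)   # non-zero XOR guarantees each byte changes
    return bytes(out)

clean = bytes(range(256)) * 32               # 8 KiB test stream
dirty = inject_burst(clean, 1000, 7000)      # ~7000-byte burst, near the reported CD-ROM limit
errors = sum(a != b for a, b in zip(clean, dirty))
assert errors == 7000
```

A simulation like the one in the thesis would then pass `dirty` through the CD-ROM decoding stages and count the errors surviving each stage.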
379.
QUANTUM ERROR CORRECTION AND LEAKAGE ELIMINATION FOR QUANTUM DOTS. Pegahan, Saeed. 01 August 2015.
The development of a quantum computer presents one of the greatest challenges in science and engineering to date. The promise of more efficient computing based on entangled quantum states and the superposition principle has led to a worldwide explosion of interest in the fields of quantum information and computation. Decoherence is one of the main problems that gives rise to different errors in a quantum system. However, the discovery of quantum error correction and the establishment of the accuracy threshold theorem provide us with comprehensive tools to build a quantum computer. This thesis contributes to this effort by investigating a particular class of quantum error correcting codes, called decoherence-free subsystems. The passive approach to error correction taken by these encodings provides an efficient means of protection for symmetrically coupled system-bath interactions. Here I present methods for determining the subsystem-preserving evolutions for noiseless subsystem encodings and, more importantly, for implementing universal quantum computing over three quantum dots.
380.
RADIATION INDUCED TRANSIENT PULSE PROPAGATION USING THE WEIBULL DISTRIBUTION FUNCTION. Watkins, Adam Christopher. 01 May 2012.
In recent years, studying soft errors has become an issue of greater importance. Many methods, either deterministic or statistical, have been developed to estimate the soft error rate. The proposed deterministic model aims to improve soft error rate estimation by accurately approximating the generated pulse and all subsequent pulses. The generated pulse is approximated by a piecewise function consisting of two Weibull cumulative distribution functions. This method improves on existing approaches by offering high accuracy while requiring less pre-characterization. The proposed algorithm reduces pre-characterization by allowing the beta Weibull parameter to be calculated at runtime from gate parameters such as the gate delay.
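A piecewise pulse model built from two Weibull cumulative distribution functions can be sketched as follows: one CDF shapes the rising edge up to the pulse peak, and a mirrored CDF shapes the fall. The rise/fall scale and shape parameters below are illustrative placeholders, not the gate-characterized values the dissertation derives.

```python
import math

def weibull_cdf(t, lam, beta):
    """Weibull cumulative distribution function F(t) = 1 - exp(-(t/lam)**beta)."""
    return 1.0 - math.exp(-((t / lam) ** beta)) if t > 0 else 0.0

def transient_pulse(t, peak, t_peak, lam_r, beta_r, lam_f, beta_f):
    """Model a radiation-induced current pulse as a piecewise function of
    two Weibull CDFs: a rising edge up to t_peak, then a mirrored fall.
    All parameter values used below are illustrative assumptions."""
    if t <= t_peak:
        return peak * weibull_cdf(t, lam_r, beta_r)
    return peak * (1.0 - weibull_cdf(t - t_peak, lam_f, beta_f))

# Sample the pulse over 200 ps with illustrative parameters.
pulse = [transient_pulse(t * 1e-12, 1.0, 50e-12, 20e-12, 2.0, 40e-12, 1.5)
         for t in range(200)]
```

In a soft-error-rate flow, the sampled pulse would be propagated through each gate, with the shape parameters (such as beta) recomputed from gate delay as the abstract describes.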