211

Estimation and equalization of time-selective fading channels

Kim, Yongsub 12 1900 (has links)
No description available.
212

Interleaved concatenated coding for input-constrained channels

Anim-Appiah, Kofi Dankwa 12 1900 (has links)
No description available.
213

Delay-constrained 3-D graphics streaming over lossy networks

Al-Regib, Ghassan 08 1900 (has links)
No description available.
214

Optimized error coverage in built-in self-test by output data modification

Zorian, Yervant January 1987 (has links)
The concept of Built-In Self-Test (BIST) has recently become an increasingly attractive solution to the complex problem of testing VLSI chips. However, the realization of BIST faces some challenging problems of its own. One of these problems is to increase the quality of fault coverage of a BIST implementation without incurring a large overhead. In particular, the loss of information in the output data compressor, which is typically a multi-input linear feedback shift register (MISR), is a major cause of concern.

In the recent past, several researchers have proposed different schemes to reduce this loss of information while keeping the area overhead small.

In this dissertation, a new BIST scheme based on modifying the output data before compression is developed. This scheme, called output data modification (ODM), exploits knowledge of the functionality of the circuit under test to provide a circuit-specific BIST structure. The structure is developed so that it can conveniently be implemented for any general circuit under consideration. More importantly, a proof of effectiveness is provided to show that ODM will, on average, be orders of magnitude better than existing schemes in its ability to reduce the information loss for a given amount of area overhead.

Moreover, the constructive nature of the proof allows a simple trade-off between the reduction tolerated in information loss and the area overhead needed to effect this reduction.
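The MISR named above compacts successive multi-bit test responses into a short signature, and aliasing occurs when a faulty response stream happens to produce the fault-free signature. As a rough illustration of that compaction step, here is a minimal software sketch of a MISR; the register width, feedback taps and response words are illustrative assumptions, not values from the dissertation.

```python
# Minimal sketch of multi-input signature register (MISR) compaction.
# The feedback taps and the test-response stream below are illustrative
# assumptions; a real BIST design compacts the circuit's own responses.

def misr_compact(responses, width=16, taps=(0, 2, 3, 5)):
    """Fold a stream of `width`-bit response words into one signature.

    Each cycle the register shifts by one bit with linear feedback
    (XOR of the tapped bits) and the incoming response word is XORed in,
    as in a standard multi-input LFSR.
    """
    state = 0
    mask = (1 << width) - 1
    for word in responses:
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & mask
        state ^= word & mask
    return state


if __name__ == "__main__":
    fault_free = [0x1F2A, 0x0B3C, 0xFFFF, 0x1234]
    faulty = [0x1F2A, 0x0B3D, 0xFFFF, 0x1234]  # single-bit error in one word
    print(hex(misr_compact(fault_free)))  # reference signature
    print(hex(misr_compact(faulty)))      # usually differs, but aliasing is possible
```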
215

Software testing tools and productivity

Moschoglou, Georgios Moschos January 1996 (has links)
Testing statistics indicate that testing consumes more than half of a programmer's professional life, yet few programmers like testing, fewer like test design, and only about 5% of their education is devoted to testing. The main goal of this research is to test the efficiency of two software testing tools. Two experiments were conducted in the Computer Science Department at Ball State University. The first experiment compares two conditions - testing software with no tool and testing software with a command-line based testing tool - with respect to the length of time and the number of test cases needed to achieve 80% statement coverage, for 22 graduate students in the Computer Science Department. The second experiment compares three conditions - no tool, a command-line based testing tool, and a GUI interactive tool with added functionality - with respect to the length of time and the number of test cases needed to achieve 95% statement coverage, for 39 graduate and undergraduate students in the same department.

Department of Computer Science
216

Neural networks and their application to metrics research

Lin, Burch January 1996 (has links)
In the development of software, time and resources are limited. As a result, developers collect metrics in order to more effectively allocate resources to meet time constraints. For example, if one could collect metrics to determine, with accuracy, which modules were error-prone and which were error-free, one could allocate personnel to work only on those error-prone modules.

There are three items of concern when using metrics. First, with the many different metrics that have been defined, one may not know which metrics to collect. Secondly, the amount of metrics data collected can be staggering. Thirdly, interpretation of multiple metrics may provide a better indication of error-proneness than any single metric.

This thesis researched the accuracy of a neural network, an unconventional model, in building a model that can determine whether a module is error-prone from an input of a suite of metrics. The accuracy of the neural network model was compared with the accuracy of a linear regression model, a standard statistical model, with the same input and output. In other words, we attempted to find whether metrics correlated with error-proneness. The metrics were gathered from three different software projects. The suite of metrics used to build the models was a subset of a larger collection of metrics, reduced using factor analysis.

The conclusion of this thesis is that, from the projects analyzed, neither the neural network model nor the logistic regression model provide acceptable accuracies for real use. We cannot conclude whether one model provides better accuracy than the other.

Department of Computer Science
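As a rough illustration of the kind of comparison the abstract describes, the sketch below fits a small neural network and a logistic-regression classifier to predict error-prone modules from a metrics matrix. The synthetic data and the scikit-learn models are stand-ins for the thesis's own projects and models, not a reproduction of them.

```python
# Minimal sketch: neural network vs. logistic regression for predicting
# error-prone modules from a suite of code metrics. The synthetic metrics
# and labels below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_modules, n_metrics = 300, 6
X = rng.normal(size=(n_modules, n_metrics))   # metric suite, one row per module
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_modules) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

logit = LogisticRegression().fit(X_train, y_train)
mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", logit.score(X_test, y_test))
print("neural network accuracy:     ", mlp.score(X_test, y_test))
```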
217

Identification schemes based on error-correcting codes

Orlov, Dmitrij 02 July 2014 (has links)
We reviewed in detail the main ideas of the [Ste90] and [Ste94] identification schemes, both the theoretical ideas proposed in those papers and practical aspects of implementing the schemes. We examined their security: possible attack types and the time needed to break each scheme under identical initial conditions. We compared the schemes with other known identification schemes based on other mathematical problems, estimated user identification times for practically meaningful parameter values, listed possible improvements to the schemes, described the principles of implementing the improved schemes, and examined the security of the improved schemes.
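The [Ste90] and [Ste94] citations appear to refer to Stern's code-based identification schemes; assuming that reading, the sketch below runs one round of a Stern-style 3-pass protocol over GF(2), in which the prover demonstrates knowledge of a low-weight vector s with H·s = y. The parameters, hash-based commitments and toy randomness are simplifying assumptions, not the implementation studied in the thesis.

```python
# One round of a Stern-style syndrome-decoding identification protocol
# over GF(2). A real protocol repeats many rounds; here prover and
# verifier share variables only for demonstration purposes.
import hashlib
import numpy as np

rng = np.random.default_rng(1)
n, r, w = 32, 16, 4                         # code length, syndrome length, secret weight

H = rng.integers(0, 2, size=(r, n), dtype=np.uint8)   # public parity-check matrix
s = np.zeros(n, dtype=np.uint8)
s[rng.choice(n, size=w, replace=False)] = 1           # secret low-weight vector
y = H @ s % 2                                         # public syndrome

def commit(*parts):
    """Toy hash commitment over the byte encodings of the given arrays."""
    h = hashlib.sha256()
    for p in parts:
        h.update(np.ascontiguousarray(p, dtype=np.uint8).tobytes())
    return h.digest()

# Prover's commitment phase
u = rng.integers(0, 2, size=n, dtype=np.uint8)        # random masking vector
sigma = rng.permutation(n)                            # random permutation
c1 = commit(sigma, H @ u % 2)
c2 = commit(u[sigma])
c3 = commit((u ^ s)[sigma])

# Verifier's challenge b in {0, 1, 2} and the corresponding check
b = int(rng.integers(0, 3))
if b == 0:                                  # reveal u and sigma: check c1, c2
    ok = c1 == commit(sigma, H @ u % 2) and c2 == commit(u[sigma])
elif b == 1:                                # reveal u^s and sigma: check c1 (via y), c3
    v = u ^ s
    ok = c1 == commit(sigma, (H @ v + y) % 2) and c3 == commit(v[sigma])
else:                                       # reveal sigma(u), sigma(s): check c2, c3, weight
    pu, ps = u[sigma], s[sigma]
    ok = c2 == commit(pu) and c3 == commit(pu ^ ps) and int(ps.sum()) == w
print("round", "accepted" if ok else "rejected")
```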
218

The hybrid list decoding and Chase-like algorithm of Reed-Solomon codes.

Jin, Wei. January 2005 (has links)
Reed-Solomon (RS) codes are powerful error-correcting codes found in a wide variety of digital communication and digital data-storage systems. A classical hard-decision decoder for an RS code can correct t = ⌊(d_min − 1)/2⌋ errors, where d_min = n − k + 1 is the minimum distance, n is the codeword length and k is the code dimension. Maximum likelihood decoding (MLD) performs better than classical decoding, so how to approach the performance of MLD with less complexity has been researched extensively. Applying bit reliabilities obtained from the channel to a conventional decoding algorithm is an efficient technique for approaching MLD performance, although an exponential increase in complexity is always concomitant. Even larger gains are possible if bit reliabilities are applied to an enhanced algebraic decoding algorithm that is more powerful than the conventional one.

In 1997 Madhu Sudan, building on previous work of Welch-Berlekamp and others, discovered a polynomial-time algorithm for decoding low-rate Reed-Solomon codes beyond the classical error-correcting bound t = ⌊(d_min − 1)/2⌋. Two years later Guruswami and Sudan published a significantly improved version of Sudan's algorithm (GS), but these papers did not focus on devising a practical implementation. Koetter, Roth and Ruckenstein found realizations for the key steps of the GS algorithm, making it a practical instrument in transmission systems. The Gross list algorithm, a simplified variant with lower decoding complexity obtained through a re-encoding scheme, is also taken into account in this dissertation.

The fundamental idea of the GS algorithm is an interpolation step that produces an interpolation polynomial from the support symbols, received symbols and their corresponding multiplicities, followed by a factorization step that finds the roots of the interpolation polynomial. After comparing the reliabilities of the candidate codewords produced by factorization, the GS algorithm outputs the most likely one. The support set, received set and multiplicity set are created by the Koetter-Vardy (KV) front-end algorithm. With GS list decoding, the number of correctable errors increases to t_GS = n − 1 − ⌊√((k − 1)n)⌋, so the GS list decoder can correct more errors than a conventional decoder.

In this dissertation we present two hybrid list decoding and Chase-like algorithms. We apply the Chase algorithms to the KV soft-decision front end and thereby provide a more reliable input to the KV list algorithm. In applying the Chase-like algorithm we take two conditions into consideration, so that an error floor cannot occur and more coding gain is possible. As the number of bits chosen by the Chase algorithm increases, the complexity of the hybrid algorithm grows exponentially. To address this, an adaptive algorithm is applied to the hybrid algorithm, based on the observation that as the signal-to-noise ratio (SNR) increases the received bits become more reliable, and not every received sequence needs the fixed number of test error patterns created by the Chase algorithm. We set a threshold according to the given SNR and use it to decide which unreliable bits are picked by the Chase algorithm.

However, the performance of the adaptive hybrid algorithm at high SNRs decreases as the complexity decreases, which means the adaptive algorithm is not a sufficient mechanism for eliminating redundant test error patterns. This motivated us to find another way to reduce complexity without loss of performance. Two questions arise. First, can we find a termination condition that identifies the most likely codeword for the received sequence before all candidates of the received set are tested? Second, can we eliminate the test error patterns that cannot create more likely codewords than those already generated? In our final algorithm, an optimality lemma from the Kaneko algorithm solves the first problem, and the second is solved by a ruling-out scheme for the reduced list decoding algorithm. The Gross list algorithm is also applied in the final hybrid algorithm. With both problems solved, the final hybrid algorithm achieves performance comparable to the hybrid algorithm combining the KV list decoding algorithm and the Chase algorithm, but with much less complexity at high SNRs.

Thesis (M.Sc.Eng.), University of KwaZulu-Natal, 2005.
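The two decoding radii quoted in the abstract can be compared directly. The sketch below evaluates the classical bound ⌊(d_min − 1)/2⌋ and the Guruswami-Sudan bound n − 1 − ⌊√((k − 1)n)⌋ for a few (n, k) pairs; the specific codes are illustrative choices, not the ones studied in the dissertation.

```python
# Compare the classical hard-decision decoding radius of an (n, k)
# Reed-Solomon code with the Guruswami-Sudan (GS) list-decoding radius.
# The (n, k) pairs below are illustrative, not taken from the thesis.
import math

def classical_radius(n: int, k: int) -> int:
    d_min = n - k + 1                      # RS codes are MDS: d_min = n - k + 1
    return (d_min - 1) // 2

def gs_radius(n: int, k: int) -> int:
    return n - 1 - math.isqrt((k - 1) * n)

for n, k in [(15, 5), (31, 15), (255, 223)]:
    print(f"RS({n},{k}): classical t = {classical_radius(n, k)}, "
          f"GS t = {gs_radius(n, k)}")
```

For low-rate codes the GS radius is noticeably larger (e.g. 7 versus 5 errors for RS(15,5)), while for high-rate codes such as RS(255,223) the gain shrinks to a single extra error, which is consistent with the abstract's focus on soft-decision front ends for further improvement.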
219

Multiple-path stack algorithms for decoding convolutional codes

Haccoun, David January 1974 (has links)
No description available.
220

Quantum codes over Finite Frobenius Rings

Sarma, Anurupa 2012 August 1900 (has links)
It is believed that quantum computers would be able to solve complex problems more quickly than any classical deterministic or probabilistic computer. Quantum computers exploit the rules of quantum mechanics to speed up computations. However, building a quantum computer remains a daunting task. A quantum computer, as in any quantum mechanical system, is susceptible to decoherence of quantum bits resulting from interaction of the stored information with the environment. Error correction is then required to restore a quantum bit, which has changed through interaction with the external state, to a previous non-erroneous state in the coding subspace. Until now the methods for quantum error correction have mostly been based on stabilizer codes over finite fields. The aim of this thesis is to construct quantum error-correcting codes over finite Frobenius rings. We introduce stabilizer codes over a quadratic algebra, which allows one to use the Hamming distance rather than a less well-known notion of distance. We also develop propagation rules to build new codes from existing codes. Nonbinary codes have been realized as the Gray image of linear Z4 codes; hence the most natural class of rings suitable for coding theory is the class of finite Frobenius rings, as they allow the dual code to be formulated much as over finite fields. At the end we show some examples of code construction along with various results on quantum codes over finite Frobenius rings, especially codes over Zm.
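The Gray image mentioned in the abstract is the standard Gray map from Z4 to bit pairs (0→00, 1→01, 2→11, 3→10), which carries the Lee weight of a Z4 word to the Hamming weight of its binary image. A minimal sketch is shown below; the example word is an assumption for illustration, not a codeword from the thesis.

```python
# Minimal sketch of the Gray map from Z4 to binary pairs, which turns the
# Lee weight of a Z4 word into the Hamming weight of its binary image.
# The example word below is illustrative, not a codeword from the thesis.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}
LEE_WEIGHT = {0: 0, 1: 1, 2: 2, 3: 1}

def gray_image(word):
    """Map a Z4 word to its binary Gray image (length doubles)."""
    return [bit for symbol in word for bit in GRAY[symbol % 4]]

if __name__ == "__main__":
    z4_word = [0, 1, 2, 3, 2]                       # illustrative Z4 word
    image = gray_image(z4_word)
    lee = sum(LEE_WEIGHT[s] for s in z4_word)
    hamming = sum(image)
    print(image)            # [0, 0, 0, 1, 1, 1, 1, 0, 1, 1]
    print(lee, hamming)     # both 6: Lee weight is preserved as Hamming weight
```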
