About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
271

An examination of Academic Integrity Policies, Standards, and Programs at Public and Private Institutions

Johnson, Brian M. 19 August 2003 (has links)
Academic dishonesty is a major dilemma for institutions of higher learning. Cheating among students has been documented as early as 1941, when Drake conducted a study indicating that 23% of students cheated. Since then, the percentage of students involved in cheating and academic dishonesty has increased. Students now cheat at an alarming rate, as evidenced by a study by McCabe and Trevino (1993) in which 52% of 6,000 undergraduate students surveyed admitted to cheating on an exam by copying from another student. The purpose of this study was to analyze the extent to which academic integrity policies, standards, and programs differ by institutional type. Specifically, the study focused on the academic integrity policy of each institution, the promotion of standards, and the academic integrity program. Data were collected using the Academic Integrity Survey, originally developed by Kibler (1993) and modified for use in this study. The survey consisted of 48 questions designed to measure the differences between academic integrity policies, standards, and programs by institution type. The findings revealed significant differences in three of the five areas. These findings suggest that private institutions are developing honor code systems, training faculty more, and seeing better results from their academic dishonesty initiatives than public institutions. / Master of Arts
272

On Skew-Constacyclic Codes

Fogarty, Neville Lyons 01 January 2016 (has links)
Cyclic codes are a well-known class of linear block codes with efficient decoding algorithms. In recent years they have been generalized to skew-constacyclic codes; such a generalization has previously been shown to be useful. We begin with a study of skew-polynomial rings so that we may examine these codes algebraically as quotient modules of non-commutative skew-polynomial rings. We introduce a skew-generalized circulant matrix to aid in examining skew-constacyclic codes, and we use it to recover a well-known result on the duals of skew-constacyclic codes from Boucher/Ulmer in 2011. We also motivate and develop a notion of idempotent elements in these quotient modules. We are particularly concerned with the existence and uniqueness of idempotents that generate a given submodule; we generalize relevant results from previous work on skew-constacyclic codes by Gao/Shen/Fu in 2013 and well-known results from the classical case.
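To make the skew-cyclic setting concrete, here is a minimal sketch (my own illustration, not code from the dissertation) of the skew-cyclic shift over GF(4) with the Frobenius automorphism theta(a) = a^2, the twist that distinguishes these codes from classical cyclic codes.

```python
# GF(4) elements encoded as 0, 1, 2 (= w), 3 (= w^2); addition is XOR.
GF4_MUL = [
    [0, 0, 0, 0],
    [0, 1, 2, 3],
    [0, 2, 3, 1],
    [0, 3, 1, 2],
]

def theta(a: int) -> int:
    """Frobenius automorphism on GF(4): a -> a^2."""
    return GF4_MUL[a][a]

def skew_cyclic_shift(c: list[int]) -> list[int]:
    """One skew-cyclic shift: (c0,...,cn-1) -> (theta(c_{n-1}), theta(c0), ...)."""
    return [theta(c[-1])] + [theta(a) for a in c[:-1]]

codeword = [1, 2, 0, 3]
print(skew_cyclic_shift(codeword))  # [2, 1, 3, 0]
```

With theta the identity this reduces to the ordinary cyclic shift, which is why skew-constacyclic codes generalize the classical family.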
273

A brief survey of self-dual codes

Oktavia, Rini 2009 August 1900 (has links)
This report is a survey of self-dual binary codes. We present the fundamental MacWilliams identity and Gleason’s theorem on self-dual binary codes. We also examine the upper bound on the minimum weights of self-dual binary codes using the extremal weight enumerator formula. We describe the shadow code of a self-dual code and the restrictions on the weight enumerator of the shadow code. Using these restrictions, we calculate the weight enumerators of self-dual codes of lengths 38 and 40 and obtain the known weight enumerators for these lengths. Finally, we investigate the Gaborit-Otmani experimental construction of self-dual binary codes. This construction involves a fixed orthogonal matrix, and we compare the result to the results obtained using other orthogonal matrices. / text
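As a concrete illustration (mine, not the report's), the following sketch checks the MacWilliams identity on the extended Hamming [8,4,4] code, the smallest interesting self-dual binary code: its weight enumerator must be a fixed point of the MacWilliams transform.

```python
from itertools import product

G = [
    [1,0,0,0, 0,1,1,1],
    [0,1,0,0, 1,0,1,1],
    [0,0,1,0, 1,1,0,1],
    [0,0,0,1, 1,1,1,0],
]

# Enumerate all 2^4 codewords and tally the weight distribution A_w.
n, k = 8, 4
A = [0] * (n + 1)
for msg in product([0, 1], repeat=k):
    cw = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
    A[sum(cw)] += 1
print(A)  # [1, 0, 0, 0, 14, 0, 0, 0, 1] -> W(x,y) = x^8 + 14 x^4 y^4 + y^8

def W(x, y):
    return sum(a * x**(n - w) * y**w for w, a in enumerate(A))

# MacWilliams: W_dual(x,y) = W(x+y, x-y) / |C|. Self-duality means W_dual = W.
for x, y in [(1, 0.5), (2, 3), (1.7, -0.2)]:
    assert abs(W(x + y, x - y) / 2**k - W(x, y)) < 1e-9
print("MacWilliams identity confirms the code is self-dual")
```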
274

MULTIVARIATE LIST DECODING OF EVALUATION CODES WITH A GRÖBNER BASIS PERSPECTIVE

Busse, Philip 01 January 2008 (has links)
Please download dissertation to view abstract.
275

An FPGA design of generalized low-density parity-check codes for rate-adaptive optical transport networks

Zou, Ding, Djordjevic, Ivan B. 13 February 2016 (has links)
Forward error correction (FEC) is one of the key technologies enabling next-generation high-speed fiber-optic communications. In this paper, we propose a rate-adaptive scheme using a class of generalized low-density parity-check (GLDPC) codes with a Hamming code as the local code. We show that with the proposed unified GLDPC decoder architecture, variable net coding gains (NCGs) can be achieved with no error floor at BERs down to 10^(-15), making it a viable solution for next-generation high-speed fiber-optic communications.
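For intuition on the local code, here is a minimal sketch (hypothetical, not the paper's decoder) of syndrome decoding for the (7,4) Hamming code, the kind of component decoder a GLDPC architecture runs at each generalized check node.

```python
import numpy as np

# Parity-check matrix whose column j is the binary expansion of j+1,
# so the syndrome directly indexes the errored position.
H = np.array([[(j + 1) >> b & 1 for j in range(7)] for b in range(3)])

def local_decode(r: np.ndarray) -> np.ndarray:
    """Correct a single bit error in a received 7-bit word."""
    syndrome = H @ r % 2
    pos = int(sum(s << b for b, s in enumerate(syndrome)))  # 0 means no error
    if pos:
        r = r.copy()
        r[pos - 1] ^= 1
    return r

codeword = np.array([1, 1, 1, 0, 0, 0, 0])   # satisfies H @ c % 2 == 0
corrupted = codeword.copy()
corrupted[4] ^= 1
print(local_decode(corrupted))                # [1 1 1 0 0 0 0], error fixed
```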
276

An Empirical Investigation of the Structural Form and Measurement Validity of the Hill Inventory

Blake, Faye W. 08 1900 (has links)
This research began with the Hill Inventory. Cognitive style preference variables were classified as one of the following four types: Theoretical Codes, Qualitative Codes, Social-Cultural Codes, or Reasoning Modalities. A consumer behavior perspective was then used to form an alternative structure for the Hill Inventory variables. The following three constructs were proposed: Evaluation Codes, Perceptual Codes, and Reasoning Modalities. The purpose of this research was to assess the structural form and measurement validity of the Hill Inventory. Specific steps taken to accomplish this objective included: developing confirmatory factor and structural equation models; using the LISREL software package to analyze the model specifications; and assessing the validity of the questions used to measure the variables. A descriptive research design was used to compare the model specifications. The research instrument consisted of eight statements for each of twenty-eight variables, for a total of 224 questions. Five-point response choices were described by the words: often, sometimes, unsure, rarely, or never. The sample consisted of 285 student subjects in marketing classes at a large university. Data analysis began by comparing the distributions of the data to a normal case. Parameter estimates, root mean square residuals, and squared multiple correlations were then obtained using the LISREL VI software package. The chi-square statistic was used to test the hypotheses. This statistic was supplemented by the Tucker-Lewis index, which used a null model for comparisons. The final step in data analysis was to assess the reliability of the measurements. This study bears on the potential use of the Hill Inventory for consumer behavior research. The major conclusion was that the measurement of the variables must be improved before model parameters can be tested. Specific question sets on the inventory were identified that were most in need of revision.
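As a small illustration of the reliability assessment step (my own sketch, not the study's LISREL analysis), Cronbach's alpha for a block of items can be computed directly; the simulated data below assume 285 subjects answering eight items driven by one latent trait, mirroring the study's design of eight statements per variable.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_subjects, n_items) matrix of scored responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(285, 1))                         # one underlying trait
responses = latent + rng.normal(scale=0.8, size=(285, 8))  # 8 noisy items
print(round(cronbach_alpha(responses), 3))                 # ~0.93, high reliability
```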
277

Optimization of LDPC decoding strategies in impulsive environments: application to sensor and ad hoc networks

Ben Maad, Hassen 29 June 2011 (has links)
The goal of this thesis is to study the performance of LDPC codes in an environment where the interference generated by the network is not Gaussian but impulsive. A first analysis quickly shows that, without precautions, the performance of these codes degrades very significantly. We first study the possible approaches for modeling impulsive noise. For the multiple-access interference that arises in ad hoc and sensor networks, alpha-stable distributions are an appropriate choice: they generalize the Gaussian distribution, are stable under convolution, and can be justified theoretically in several contexts. We then determine the capacity of the alpha-stable environment and show, using an asymptotic approach, that LDPC codes remain good in this environment, but that a simple linear operation on the received samples at the decoder input does not yield the expected good performance. We therefore propose several methods to compute the likelihood ratios needed at the decoder input. The optimal solution is highly complex to implement, so we study several alternative approaches, in particular clipping, for which we determine the optimal parameters.
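A minimal sketch of the setting (my own illustration, not the thesis code): BPSK corrupted by symmetric alpha-stable noise, with a clipped linear LLR as the low-complexity stand-in for the exact likelihood that the thesis studies. The scale and threshold values below are arbitrary placeholders, not the optimized parameters derived in the thesis.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)
alpha = 1.5                        # impulsiveness: alpha = 2 would be Gaussian
bits = rng.integers(0, 2, 10_000)
x = 1.0 - 2.0 * bits               # BPSK mapping: bit 0 -> +1, bit 1 -> -1
noise = levy_stable.rvs(alpha, beta=0.0, size=bits.size, random_state=42)
y = x + 0.3 * noise

def clipped_llr(y, scale=4.0, threshold=8.0):
    """Linear LLR, clipped to tame the impulsive outliers."""
    return np.clip(scale * y, -threshold, threshold)

hard = (clipped_llr(y) < 0).astype(int)   # sign of the LLR as a hard decision
print("hard-decision BER:", (hard != bits).mean())
```

Without the clip, a single large impulse saturates the belief-propagation messages it touches; clipping bounds each sample's influence at the cost of some information loss, which is why the threshold must be tuned.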
278

Correcting bursts of adjacent deletions by adapting product codes

25 March 2015 (has links)
M.Ing. (Electrical and Electronic Engineering) / In this study, the problem of correcting bursts of adjacent deletions by adapting product codes was investigated. The first step in any digital transmission is to establish synchronization between the sending and receiving nodes. This initial synchronization ensures that the receiver samples the information bits at the correct intervals. Unfortunately, synchronization is not guaranteed to last for the entire duration of data transmission. Though synchronization errors rarely occur, they have disastrous effects at the receiving end. These errors are modelled as either insertions or deletions in the transmitted data. In the best case, they are restricted to single-bit errors; in the worst case, they corrupt bursts of bits. If synchronization errors are not detected and corrected, they can shift the transmitted sequence, which in turn leads to loss of synchronization. When a signal is subjected to synchronization errors, it is difficult to accurately recover the original data signal. In addition to the loss of synchronization, the information transmitted over the channel is also subjected to noise, which causes inversion errors within the signal. The objective of this dissertation is to investigate whether an error correction scheme can be designed that detects and corrects adjacent bursts of deletions as well as random inversion errors. This scheme needed to make use of a product code matrix structure incorporating both an error correction and a synchronization technique. The chosen error correction techniques were Hamming and Reed-Solomon codes; the chosen synchronization techniques were the marker technique and an adaptation of the Hamming code technique. In order to find an effective model, combinations of these techniques were simulated and compared. From the research obtained and analyzed in this document, it was found that, depending on the desired performance, complexity, and code rate, such a scheme can efficiently correct bursts of adjacent deletions by adapting product codes.
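As a much-simplified sketch of the product-code idea (mine, using single parity checks rather than the dissertation's Hamming/Reed-Solomon components), the matrix structure below repairs an entire erased row, which is what a burst of deletions becomes once a synchronization technique has localized it.

```python
import numpy as np

def encode(data: np.ndarray) -> np.ndarray:
    """Append a parity column and a parity row to a k x k data block."""
    with_cols = np.hstack([data, data.sum(axis=1, keepdims=True) % 2])
    return np.vstack([with_cols, with_cols.sum(axis=0, keepdims=True) % 2])

def repair_row(block: np.ndarray, row: int) -> None:
    """Rebuild one erased row from the column parities."""
    others = np.delete(block, row, axis=0)
    block[row] = others.sum(axis=0) % 2

rng = np.random.default_rng(2)
data = rng.integers(0, 2, (4, 4))
block = encode(data)
erased = block.copy()
erased[2] = 0                 # a burst wiping out an entire row
repair_row(erased, 2)
print(np.array_equal(erased, block))  # True
```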
279

Optimization of erasure codes by applying polynomial transforms

Detchart, Jonathan 05 December 2018 (has links)
Erasure codes are widely used to cope with failures in nearly all of today's network communications and storage systems. Most of these codes are based on finite-field arithmetic, which defines addition and multiplication over a finite set of elements; these operations can be complex to perform. Given the current explosion in data growth, improving the performance of these codes remains an active research topic. We propose a method to transform the elements of certain finite fields into elements of a ring and to perform all operations in that ring, simplifying both the encoding and decoding of erasure codes without any compromise on correction capability. We also present a scheduling technique that further reduces the number of operations required for encoding, thanks to particular properties of the rings used. Finally, we analyse the performance of this method on several hardware architectures and detail a simple implementation, based solely on xor instructions, which adapts to a massively parallel execution environment much more efficiently than other implementations.
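To illustrate the xor-only flavor of erasure coding (a minimal sketch of my own, not the thesis construction), a single parity block already suffices to rebuild any one lost data block:

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(blocks: list[bytes]) -> bytes:
    """XOR all blocks together; doubles as the recovery operation."""
    parity = bytes(len(blocks[0]))
    for blk in blocks:
        parity = xor_blocks(parity, blk)
    return parity

blocks = [b"code", b"xorx", b"ring"]
parity = make_parity(blocks)

lost = 1                                    # pretend blocks[1] is erased
survivors = [b for i, b in enumerate(blocks) if i != lost] + [parity]
print(make_parity(survivors))               # b'xorx', the lost block
```

Codes like the ones studied in the thesis keep this xor-only inner loop while supporting several simultaneous erasures, which is where the finite-field-to-ring transform pays off.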
280

Analysis of bounded distance decoding for Reed Solomon codes

Babalola, Oluwaseyi Paul January 2017 (has links)
Masters Report: a report submitted in fulfillment of the requirements for the degree of Master of Science (50/50) in the Centre for Telecommunication Access and Services (CeTAS), School of Electrical and Information Engineering, Faculty of Engineering and the Built Environment, February 2017 / Bounded distance decoding of Reed Solomon (RS) codes involves finding a unique codeword if there is at least one codeword within the given distance. A corrupted message with a number of errors less than or equal to half the minimum distance corresponds to a unique codeword, and will therefore be decoded correctly by the minimum distance decoder. However, increasing the decoding radius to slightly more than half the minimum distance may result in multiple codewords within the Hamming sphere. The list decoding and syndrome extension methods provide a maximum error correcting capability whereby the radius of the Hamming ball can be extended for low-rate RS codes. In this research, we study the probability of having a unique codeword for (7, k) RS codes when the decoding radius is increased from the error correcting capability t to t + 1. Simulation results show a significant effect of the code rate on the probability of having a unique codeword; they also show that for low-rate codes this probability is close to one. / MT2017
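A small Monte Carlo in the same spirit (my own sketch, not the report's simulation): over GF(8), enumerate the (7, k) RS code, perturb a codeword in t + 1 positions, and count how often only one codeword lies within the enlarged radius.

```python
from itertools import product
import random

# GF(8) via exp/log tables over the primitive polynomial x^3 + x + 1.
EXP = [1]
for _ in range(6):
    v = EXP[-1] << 1
    if v & 0b1000:
        v ^= 0b1011
    EXP.append(v)
LOG = {v: i for i, v in enumerate(EXP)}

def gmul(a, b):
    """Multiply two GF(8) elements."""
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def rs_codewords(k):
    """All evaluations at the 7 nonzero field elements of degree < k polynomials."""
    words = []
    for coeffs in product(range(8), repeat=k):
        cw = []
        for x in EXP:
            acc = 0
            for c in coeffs:              # Horner evaluation
                acc = gmul(acc, x) ^ c
            cw.append(acc)
        words.append(tuple(cw))
    return words

def dist(a, b):
    return sum(u != w for u, w in zip(a, b))

random.seed(0)
for k in (2, 3, 4):
    code = rs_codewords(k)
    t = (7 - k) // 2                      # bounded-distance correcting radius
    unique, trials = 0, 200
    for _ in range(trials):
        r = list(random.choice(code))
        for i in random.sample(range(7), t + 1):
            r[i] ^= random.randint(1, 7)  # corrupt t + 1 symbol positions
        unique += sum(dist(cw, r) <= t + 1 for cw in code) == 1
    print(f"k={k}, t={t}: P(unique within t+1) ~ {unique / trials:.2f}")
```

Lower k means larger minimum distance 7 - k + 1, so competing codewords sit farther from the perturbed word and the uniqueness probability approaches one, consistent with the report's finding for low-rate codes.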
