  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
561

Joint random linear network coding and convolutional code with interleaving for multihop wireless network

Susanto, Misfa, Hu, Yim Fun, Pillai, Prashant January 2013 (has links)
Abstract: Error control techniques are designed to ensure reliable data transfer over unreliable communication channels that are frequently subjected to channel errors. In this paper, the effect of applying a convolutional code to the Scattered Random Network Coding (SRNC) scheme over a multi-hop wireless channel was studied. An interleaver was implemented for bit scattering in the SRNC with the purpose of dividing the encoded data into protected blocks and vulnerable blocks, achieving error diversity within one modulation symbol while randomising errored bits in both blocks. By combining the interleaver with the convolutional encoder, the network decoder at the receiver receives a sufficient number of correctly received network-coded blocks to perform the decoding process efficiently. Extensive simulations were carried out to study the performance of three systems: 1) SRNC with convolutional encoding and interleaving; 2) SRNC alone; and 3) a system with neither convolutional encoding nor interleaving. Simulation results in terms of block error rate for a 2-hop wireless transmission scenario over an Additive White Gaussian Noise (AWGN) channel were presented. The results showed that the system with interleaving and convolutional coding achieved the best performance, with a coding gain of at least 1.29 dB over system 2 and of 2.08 dB on average over system 3 at a block error rate of 0.01.
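The interleaver's role described above — scattering adjacent coded bits so that a burst of channel errors is spread across many network-coded blocks — can be sketched with a simple row-column block interleaver. This is an illustrative sketch only, not the SRNC scheme's actual bit-scattering pattern:

```python
def interleave(bits, rows, cols):
    """Row-column block interleaver: write bits row-wise, read column-wise,
    so adjacent input bits land far apart in the output stream."""
    assert len(bits) == rows * cols
    matrix = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse operation: write column-wise, read row-wise."""
    assert len(bits) == rows * cols
    matrix = [[None] * cols for _ in range(rows)]
    i = 0
    for c in range(cols):
        for r in range(rows):
            matrix[r][c] = bits[i]
            i += 1
    return [b for row in matrix for b in row]

# A burst of adjacent channel errors hits the interleaved stream, but after
# deinterleaving the errored positions are spread across the whole block.
data = list(range(12))
scattered = interleave(data, 3, 4)
assert deinterleave(scattered, 3, 4) == data
```

A real system would interleave at the granularity chosen by the SRNC design (protected vs. vulnerable blocks); the row/column dimensions here are arbitrary.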
562

Towards Measuring & Improving Source Code Quality

Iftikhar, Umar January 2024 (has links)
Context: Software quality has a multi-faceted description encompassing several quality attributes. Central to our efforts to enhance software quality is improving the quality of the source code. Poor source code quality impacts the quality of the delivered product. Empirical studies have investigated how to improve source code quality and how to quantify the improvement. However, the reported evidence linking internal code structure information to the quality attributes observed by users is varied and, at times, conflicting. Furthermore, more research is needed on improving source code quality by understanding trends in the feedback given in code review comments. Objective: This thesis contributes towards improving source code quality and synthesizes metrics to measure improvement in source code quality. Hence, our objectives are: 1) to synthesize evidence of the links between source code metrics and external quality attributes, and to identify such source code metrics; and 2) to identify areas for improving source code quality by finding recurring code quality issues through the analysis of code review comments. Method: We conducted a tertiary study to achieve the first objective, and an archival analysis and a case study to investigate the second. Results: To quantify source code quality improvement, we report a comprehensive catalog of source code metrics and a small set of source code metrics consistently linked with maintainability, reliability, and security. To improve source code quality using the analysis of code review comments, the methodology we explored improves on the state of the art. Conclusions: The thesis provides a promising way to analyze themes in code review comments. Researchers can use the source code metrics provided to reliably estimate these quality attributes. In future work, we aim to derive a software improvement checklist based on the analysis of trends in code review comments.
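A source code metric of the kind catalogued in such studies can be computed in a few lines. As a hedged illustration (this simplified count is not one of the thesis's synthesized metrics), here is an approximate McCabe cyclomatic complexity for Python, counting branching constructs in the abstract syntax tree:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe cyclomatic complexity for a Python snippet:
    1 plus the number of branching constructs in the AST.
    (BoolOp covers `and`/`or` short-circuit branches.)"""
    tree = ast.parse(source)
    branches = (ast.If, ast.For, ast.While, ast.Try,
                ast.BoolOp, ast.ExceptHandler)
    return 1 + sum(isinstance(node, branches) for node in ast.walk(tree))

snippet = """
def classify(x):
    if x > 0:
        return "pos"
    elif x < 0:
        return "neg"
    return "zero"
"""
# `elif` parses as a nested If node, so this snippet has two If nodes.
assert cyclomatic_complexity(snippet) == 3
```

Metrics like this one are the raw material that studies of the kind summarized above attempt to link (or fail to link) with externally observed attributes such as maintainability.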
563

Implementation of Parallel and Serial Concatenated Convolutional Codes

Wu, Yufei 27 April 2000 (has links)
Parallel concatenated convolutional codes (PCCCs), called "turbo codes" by their discoverers, have been shown to perform close to the Shannon bound at bit error rates (BERs) between 1e-4 and 1e-6. Serial concatenated convolutional codes (SCCCs), which perform better than PCCCs at BERs lower than 1e-6, were developed borrowing the same principles as PCCCs, including code concatenation, pseudorandom interleaving and iterative decoding. The first part of this dissertation introduces the fundamentals of concatenated convolutional codes. The theoretical and simulated BER performance of PCCCs and SCCCs is discussed. Encoding and decoding structures are explained, with emphasis on the Log-MAP decoding algorithm and the general soft-input soft-output (SISO) decoding module. Sliding window techniques, which can be employed to reduce memory requirements, are also briefly discussed. The second part of this dissertation presents four major contributions to the field of concatenated convolutional coding developed through this research. First, the effects of quantization and fixed-point arithmetic on the decoding performance are studied. Analytic bounds and modular renormalization techniques are developed to improve the efficiency of the SISO module implementation without compromising performance. Second, a new stopping criterion, SDR, is discovered. Compared with existing criteria, it performs well at the lowest cost in complexity. Third, a new type-II code combining automatic repeat request (ARQ) technique is introduced which makes use of the related PCCC and SCCC. Fourth, a new code-assisted synchronization technique is presented, which uses a list approach to leverage the simplicity of the correlation technique and the soft information of the decoder. In particular, the variant that uses the SDR criterion achieves superb performance with low complexity.
Finally, the third part of this dissertation discusses the FPGA-based implementation of the turbo decoder, which is the fruit of cooperation with fellow researchers. / Ph. D.
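The Log-MAP algorithm mentioned above works in the log domain, where sums of exponentials are evaluated with the max* (Jacobian logarithm) operator; the cheaper Max-Log-MAP variant approximates it by a plain max. A minimal sketch of the operator (not the dissertation's fixed-point implementation, which the quantization analysis addresses):

```python
import math

def max_star(a: float, b: float) -> float:
    """Exact Jacobian logarithm ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|).
    This is the core recursion step of Log-MAP decoding; Max-Log-MAP
    simply drops the log1p correction term."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

# max* is exact, while the bare max() approximation is not:
exact = math.log(math.exp(0.5) + math.exp(1.0))
assert abs(max_star(0.5, 1.0) - exact) < 1e-12
assert max(0.5, 1.0) < exact  # Max-Log-MAP underestimates the true value
```

In hardware, the correction term `log1p(exp(-|a-b|))` is typically replaced by a small lookup table, which is where quantization effects of the kind studied in this dissertation enter.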
564

Codes from norm-trace curves: local recovery and fractional decoding

Murphy, Aidan W. 04 April 2022 (has links)
Codes from curves over finite fields were first developed in the late 1970s by V. D. Goppa and are known as algebraic geometry codes. Since that time, the construction has been tailored to fit particular applications, such as erasure recovery and error correction using less received information than in the classical case. The Hermitian-lifted code construction of López, Malmskog, Matthews, Piñero-González, and Wootters (2021) provides codes from the Hermitian curve over $F_{q^2}$ which have the same locality as the well-known one-point Hermitian codes but with a rate bounded below by a positive constant independent of the field size. However, obtaining explicit expressions for the code is challenging. In this dissertation, we consider codes from norm-trace curves, which are a generalization of the Hermitian curve. We develop norm-trace-lifted codes and demonstrate an explicit basis of the codes. We then consider fractional decoding of codes from norm-trace curves, extending the results obtained for codes from the Hermitian curve by Matthews, Murphy, and Santos (2021). / Doctor of Philosophy / Coding theory focuses on recovering information, whether that data is corrupted and changed (called an error) or is simply lost (called an erasure). Classical codes achieve this goal by accessing all received symbols. Because long codes, meaning those with many symbols, are common in applications, it is useful for codes to be able to correct errors and recover erasures by accessing less information than classical codes allow. That is the focus of this dissertation. Codes with locality are designed for erasure recovery using fewer symbols than in the classical case. Such codes are said to have locality $r$ and availability $s$ if each symbol can be recovered from $s$ disjoint sets of $r$ other symbols. Algebraic curves, such as the Hermitian curve or the more general norm-trace curves, offer a natural structure for designing codes with locality.
This is done by considering lines intersected with the curve to form repair groups, which are sets of $r+1$ points where the information from one point can be recovered using the rest of the points in the repair group. An error correction method which uses less data than the classical case is that of fractional decoding. Fractional decoding takes advantage of algebraic properties of the field trace to correct errors by downloading only a $\lambda$-proportion of the received information, where $\lambda < 1$. In this work, we consider a new family of codes resulting from norm-trace curves, and study their locality and availability, as well as apply the ideas of fractional decoding to these codes.
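The repair-group idea — any $r$ symbols of a group of $r+1$ determine the remaining one, because the codeword restricted to the group agrees with a polynomial of degree less than $r$ — can be illustrated with Lagrange interpolation over a small prime field. The field size and evaluation points below are hypothetical, chosen only for illustration:

```python
def recover_erasure(known, x_erased, p):
    """Recover f(x_erased) for a polynomial f of degree < len(known),
    given known = {x: f(x)} surviving evaluations, arithmetic mod prime p.
    Uses Lagrange interpolation; pow(den, -1, p) (Python 3.8+) computes
    the modular inverse."""
    total = 0
    for xi, yi in known.items():
        num, den = 1, 1
        for xj in known:
            if xj != xi:
                num = num * (x_erased - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

# Repair group with locality r = 2 over F_13: f(x) = 3x + 2 has degree < 2,
# so the two surviving symbols determine the erased one at x = 3.
assert recover_erasure({1: 5, 2: 8}, 3, 13) == 11  # 3*3 + 2 = 11
```

Codes with availability $s$ would provide $s$ disjoint groups of this kind for each symbol, so repair can proceed even if one group is itself unavailable.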
565

Deep Learning for Code Generation using Snippet Level Parallel Data

Jain, Aneesh 05 January 2023 (has links)
In the last few years, interest in the application of deep learning methods to software engineering tasks has surged. A variety of approaches, such as transformer-based methods, statistical machine translation models, and models inspired by natural language settings, have been proposed and shown to be effective at tasks like code summarization, code synthesis and code translation. Multiple benchmark data sets have also been released, but all suffer from one limitation or another. Some data sets support only a select few programming languages while others support only certain tasks. These limitations restrict researchers' ability to perform thorough analyses of their proposed methods. In this work we aim to alleviate some of the limitations faced by researchers who work in the paradigm of deep learning applications for software engineering tasks. We introduce a large, parallel, multilingual programming language data set that supports tasks like code summarization, code translation, code synthesis and code search in 7 different languages. We provide benchmark results for the current state-of-the-art models on all these tasks, and we also explore some limitations of current evaluation metrics for code-related tasks. We provide a detailed analysis of the compilability of code generated by deep learning models, because compilability is a better measure of the usability of code than scores like BLEU and CodeBLEU. Motivated by our findings about compilability, we also propose a reinforcement learning based method that incorporates code compilability and syntax-level feedback as rewards, and we demonstrate its effectiveness in generating code with fewer syntax errors than baselines. In addition, we develop a web portal that hosts the models we have trained for code translation. The portal allows translation between 42 possible language pairs and also allows users to check the compilability of the generated code. 
The intent of this website is to give researchers and other audiences a chance to interact with and probe our work in a user-friendly way, without requiring them to write their own code to load and run inference with the models. / Master of Science / Deep neural networks have now become ubiquitous and find their applications in almost every technology and service we use today. In recent years, researchers have also started applying neural network based methods to problems in the software engineering domain. Software engineering by its nature requires a lot of documentation, and creating this natural language documentation automatically, using programs as input to the neural networks, has been one of their first applications in this domain. Other applications include translating code between programming languages and searching for code using natural language, as one does on websites like Stack Overflow. All of these tasks now have the potential to be powered by deep neural networks. It is common knowledge that neural networks are data hungry, and in this work we present a large data set containing code in multiple programming languages: Java, C++, Python, C#, JavaScript, PHP and C. Our data set is intended to foster more research into automating software engineering tasks using neural networks. We provide an analysis of the performance of multiple state-of-the-art models on our data set in terms of compilability, which measures the number of syntax errors in the code, as well as other metrics. In addition, we propose our own deep neural network based model for code translation, which uses feedback from programming language compilers to reduce the number of syntax errors in the generated code. We also develop and present a website where some of our code translation models have been hosted. 
The website allows users to interact with our work in an easy manner without any knowledge of deep learning and get a sense of how these technologies are being applied for software engineering tasks.
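The compilability-as-reward idea described above can be sketched as follows. The thesis targets several languages with real compilers; this hypothetical sketch uses Python's own parser as a stand-in syntax checker, returning a binary reward suitable for reinforcement learning fine-tuning:

```python
import ast

def compilability_reward(source: str) -> float:
    """Reward signal sketch for RL fine-tuning of a code generator:
    1.0 if the generated Python snippet parses without syntax errors,
    0.0 otherwise. A real system would invoke each target language's
    compiler and could grade partial credit from error counts."""
    try:
        ast.parse(source)
        return 1.0
    except SyntaxError:
        return 0.0

assert compilability_reward("def f(x):\n    return x + 1\n") == 1.0
assert compilability_reward("def f(x) return x") == 0.0  # missing colon
```

During training, this reward would be combined with the usual sequence likelihood objective so that the model is penalized for syntactically invalid outputs even when they score well on token-overlap metrics like BLEU.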
566

Optimum linear single user detection in direct-sequence spread-spectrum multiple access systems

Aue, Volker 10 July 2009 (has links)
After Qualcomm's proposal of the IS-95 standard, <i>code-division multiple access</i> (CDMA) gained popularity as an alternative multiple-access scheme in cellular and <i>personal communication systems</i> (PCS). Besides the advantage of allowing asynchronous operation of the users, CDMA <i>direct-sequence spread spectrum</i> (DS-SS) offers resistance to frequency-selective fading and graceful degradation of performance as the number of users increases. Orthogonality of the signals in time-division multiple access and frequency-division multiple access is inherent in the nature of the multiple-access scheme. In a CDMA system, orthogonality of the signals is not guaranteed in general. Consequently, the performance of conventional correlation receivers suffers. Sub-optimum receivers which use knowledge of the interfering signals have been investigated by other researchers. These receivers attempt to cancel the multi-user interference by despreading the interfering users. Hence, these receivers require knowledge of all the spreading codes, amplitude levels, and signal timing, and are, in general, computationally intensive. In this thesis, a technique is presented for which a high degree of interference rejection can be obtained without the necessity of despreading each user. It is shown that exploiting spectral correlation can help mitigate the effects of the multiple-access interference. If <i>code-on-pulse</i> DS-SS modulation is used, a cyclic form of the Wiener filter provides substantial improvements in performance in terms of bit error rate and user capacity. Furthermore, it is shown that a special error criterion should be used to adapt the weights of the filter. The computational complexity of the receiver is equivalent to that of conventional equalizers. / Master of Science
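The despreading step of a conventional correlation receiver can be sketched for the idealized synchronous case with orthogonal codes — exactly the orthogonality that, as the abstract notes, a real asynchronous CDMA system cannot guarantee. Length-8 Walsh codes stand in here for the users' spreading sequences; this is an illustration, not the thesis's cyclic Wiener filter:

```python
def spread(bit, code):
    """BPSK chip sequence for one data bit (+1 or -1) under a spreading code."""
    return [bit * c for c in code]

def despread(chips, code):
    """Correlate the received chips with the user's code;
    the sign of the correlation decides the bit."""
    corr = sum(r * c for r, c in zip(chips, code))
    return 1 if corr >= 0 else -1

# Length-8 Walsh codes are exactly orthogonal:
code_a = [1, 1, 1, 1, 1, 1, 1, 1]
code_b = [1, -1, 1, -1, 1, -1, 1, -1]

# Two users transmit synchronously; their chip streams superimpose on the channel.
rx = [a + b for a, b in zip(spread(1, code_a), spread(-1, code_b))]
assert despread(rx, code_a) == 1   # user A's bit recovered
assert despread(rx, code_b) == -1  # user B's bit recovered
```

With asynchronous users or non-orthogonal codes, the cross-correlation term no longer vanishes, and the correlation receiver's performance degrades — the motivation for the interference-rejecting filter studied in this thesis.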
567

Automated Identification and Application of Code Refactoring in Scratch to Promote the Culture of Quality from the Ground up

Techapalokul, Peeratham 04 June 2020 (has links)
Much of software engineering research and practice is concerned with improving software quality. While enormous prior efforts have focused on improving the quality of programs, this dissertation instead provides the means to educate the next generation of programmers who care deeply about software quality. If they embrace the culture of quality, these programmers would be positioned to drastically improve the quality of the software ecosystem. This dissertation describes novel methodologies, techniques, and tools for introducing novice programmers to software quality and its systematic improvement. This research builds on the success of Scratch, a popular novice-oriented block-based programming language, to support the learning of code quality and its improvement. This dissertation improves the understanding of quality problems of novice programmers, creates analysis and quality improvement technologies, and develops instructional approaches for teaching quality improvement. The contributions of this dissertation are as follows. (1) We identify twelve code smells endemic to Scratch, show their prevalence in a large representative codebase, and demonstrate how they hinder project reuse and communal learning. (2) We introduce four new refactorings for Scratch, develop an infrastructure to support them in the Scratch programming environment, and evaluate their effectiveness for the target audience. (3) We study the impact of introducing code quality concepts alongside the fundamentals of programming with and without automated refactoring support. Our findings confirm that it is not only feasible but also advantageous to promote the culture of quality from the ground up. The contributions of this dissertation can benefit both novice programmers and introductory computing educators. / Doctor of Philosophy / Software remains one of the most defect-prone artifacts across all engineering disciplines. 
Much of software engineering research and practice is concerned with improving software quality. While enormous prior efforts have focused on improving the quality of programs, this dissertation instead provides the means to educate the next generation of programmers who care deeply about software quality. If they embrace the culture of quality, these programmers would be positioned to drastically improve the quality of the software ecosystem, akin to professionals in traditional engineering disciplines. This dissertation describes novel methodologies, techniques, and tools for introducing novice programmers to software quality and its systematic improvement. This research builds on the success of Scratch, a popular visual programming language for teaching introductory students, to support the learning of code quality and its improvement. This dissertation improves the understanding of quality problems of novice programmers, creates analysis and quality improvement technologies, and develops instructional approaches for teaching quality improvement. This dissertation contributes (1) a large-scale study of recurring quality problems in Scratch projects and how these problems hinder communal learning, (2) four new refactorings, quality improving behavior-preserving program transformations, as well as their implementation and evaluation, (3) a study of the impact of introducing code quality concepts alongside the fundamentals of programming with and without automated refactoring support. Our findings confirm that it is not only feasible but also advantageous to promote the culture of quality from the ground up. The contributions of this dissertation can benefit both novice programmers and introductory computing educators.
568

Repairing Cartesian Codes with Linear Exact Repair Schemes

Valvo, Daniel William 10 June 2020 (has links)
In this paper, we develop a scheme to recover a single erasure when using a Cartesian code, in the context of a distributed storage system. In particular, we develop a scheme with considerations to minimize the associated bandwidth and maximize the associated dimension. The problem of recovering a missing node's data exactly in a distributed storage system is known as the exact repair problem. Previous research has studied the exact repair problem for Reed-Solomon codes. We focus on Cartesian codes, and show we can enact the recovery using a linear exact repair scheme framework, similar to the one outlined by Guruswami and Wootters in 2017. / Master of Science / Distributed storage systems are systems which store a single data file over multiple storage nodes. Each storage node has a certain storage efficiency, the "space" required to store the information on that node. The value of these systems is their ability to safely store data for extended periods of time. We want to design distributed storage systems such that if one storage node fails, we can recover it from the data in the remaining nodes. Recovering a node from the data stored in the other nodes requires the nodes to communicate data with each other. Ideally, these systems are designed to minimize the bandwidth, the inter-nodal communication required to recover a lost node, as well as to maximize the storage efficiency of each node. A great mathematical framework to build these distributed storage systems on is erasure codes. In this paper, we will specifically develop distributed storage systems that use Cartesian codes. We will show that in the right setting, these systems can have a very similar bandwidth to systems built from Reed-Solomon codes, without much loss in storage efficiency.
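The exact repair problem in its simplest form can be illustrated with a single-parity code, where repairing a lost node requires downloading the full contents of every survivor — the naive bandwidth baseline that schemes such as the one in this thesis improve on. A minimal sketch (XOR parity over a hypothetical 4-node system):

```python
def encode_parity(nodes):
    """Add one parity node holding the XOR of all data nodes.
    The resulting system tolerates any single node erasure."""
    parity = 0
    for n in nodes:
        parity ^= n
    return nodes + [parity]

def repair(stored, lost_index):
    """Exactly repair the lost node by XOR-ing all surviving nodes:
    a full-bandwidth repair, since every survivor ships its whole symbol."""
    value = 0
    for i, n in enumerate(stored):
        if i != lost_index:
            value ^= n
    return value

data = [0b1010, 0b0111, 0b1100]
stored = encode_parity(data)           # 4 nodes: 3 data + 1 parity
assert repair(stored, 1) == 0b0111     # node 1 rebuilt from the other 3
```

Linear exact repair schemes in the Guruswami-Wootters style reduce the per-node download below the full symbol by having each survivor transmit only a few subfield symbols, which is the bandwidth saving pursued here for Cartesian codes.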
569

All English and No Code-switching : A thematic analysis of writing behaviours among EMI master's students

James, Calum January 2022 (has links)
As a kind of education strategy, English as a medium of instruction (EMI) has become increasingly widespread across the world in recent years. The increased adoption means that many students are performing study activities such as reading, writing, and giving presentations in English all while maintaining and using a native language in other situations. One area of interest within EMI research is how it may relate to academic writing, and here there are relatively few studies aiming to examine the interactions between EMI and writing among master's students. This paper collected qualitative interview data from five EMI master's students who were asked to describe how they go about writing academic texts, what experiences and opinions they have of multilingualism in their lives, as well as how they may utilise the languages available to them to assist in their writing through code-switching or translanguaging. A thematic analysis was conducted which generated ten themes within two overarching categories, language use and multilingualism and writing behaviours. Participants in the present study reported no code-switching behaviours at any point throughout their writing, contrasting with previous research in multilingual university settings. This may be due to constraints of the EMI environment, where all produced materials from students need to be in English, discouraging the use of multiple languages and leading to opinions of sticking to one language being easier. Future research could usefully examine language use within EMI educational contexts with a focus on how it facilitates or otherwise affects code-switching tendencies. / English as a medium of instruction (EMI) has become an increasingly common strategy in education in recent years. 
Its expansion means that many students carry out study activities such as writing, reading, and oral presentations in English while maintaining and using a mother tongue in other contexts. One area of interest within EMI research is how it connects to academic writing, where there are relatively few studies focusing on the interplay between EMI and writing among master's students. This study collected qualitative data from interviews with five master's students in EMI programmes. They described how they go about academic writing, what experiences of and opinions about multilingualism they have, and how they use their available languages to aid the writing process through so-called code-switching or translanguaging. A thematic analysis was carried out, which generated ten themes within two broad categories, namely language use and multilingualism, and writing behaviours. The participants in this study did not report using code-switching at all during the writing process, unlike earlier studies from multilingual university settings. This may be due to limitations of EMI environments, where texts and presentations from students must be in English, which can hinder the use of multiple languages. Future research could usefully explore language use among EMI students with a focus on how the education facilitates or otherwise affects code-switching tendencies.
570

Ressources linguistiques et visée référentielle chez des individus bilingues français-anglais : l’alternance codique comme stratégie d’expression sur le plan lexical

Vogh, Kendall 22 May 2018 (has links)
Code-switching is an inescapable language behaviour wherever multilingualism exists. It is also a complex phenomenon, an accurate description of which will only be possible by bringing together a diversity of empirical and epistemological traditions. However, it is only exceptionally approached from a perspective that takes into account what the speaker expresses semantically by means of these switches. This study aims to fill that gap by establishing a method for analysing occurrences of code-switching that takes into account the speaker's possible referential intentions, within an analytical framework that regards the bilingual speaker as a user fully engaged in the use of their linguistic resources. This method is developed through a quantitative and qualitative corpus analysis, namely corpora of semi-structured conversations recorded with English-French bilingual speakers in Alberta and Maine. Overall, the results show that this method is fruitful. Even with several practical restrictions, it was possible to identify trends in the use of code-switched units, trends that can be attributed to what the unit is used to express. Notably, the speakers whose productions are studied appear to code-switch in order to avail themselves of lexical units that structure and manage the interaction in a precise way, guide the interpretation of utterances, and play a role in maintaining interpersonal relationships and protecting face. Moreover, it emerges that code-switching itself is a linguistic resource that can serve to anchor referential aims. 
These results indicate that one not only can but must take the dimension of semantic expression into account in the study of code-switching, as a constitutive part of the practice habits, the communicative competences, and ultimately the lived experiences of bilingual speakers. / Code-switching is an undeniable fact of language activity for multilingual individuals and communities everywhere. It is also a complex phenomenon, a complete description of which is impossible without combining a multiplicity of empirical and epistemological traditions. However, code-switching is only rarely studied from a semantic perspective, in which the meanings the speaker seeks to express through switched lexical units are taken into account as a possible reason for the switch. This study endeavours to fill a gap in the literature by establishing a method of analysis that takes such meanings into account, within a framework that considers bilingual speakers as fully-involved agents in the use of their linguistic resources. This method is elaborated through a qualitative and quantitative analysis of corpus data, specifically audio or audio-visual recordings of semi-structured interactions between French-English bilinguals in Alberta and in Maine. The results of this study indicate that the method put forward is productive. In spite of several practical restrictions on the data and the analysis, it was possible to identify trends in the usage of code-switched units that can be attributed to the meanings those units are used to express. In particular, the speakers whose productions were studied appear to code-switch in order to avail themselves of lexical units that help to structure and manage the interaction in specific ways, direct the interpretation of utterances, perform face-protecting acts, and manage interpersonal relationships. What is more, code-switching itself appears to be a linguistic resource that has semantic value. 
These results demonstrate that it is not only possible but necessary to include the dimension of semantic expression in the field of code-switching research, since it forms an integral part of the language practices, the communicative competences, and ultimately, the lived experiences of bilingual speakers.
