1.
Functionality Based Refactoring: Improving Source Code Comprehension. Beiko, Jeffrey Lee, 27 September 2007.
Thesis (Master, Computing)--Queen's University, 2007. / Software maintenance is the lifecycle activity that consumes the greatest amount of resources. Maintenance is difficult because of the size of software systems, and much of the time spent on it goes to understanding source code. Refactoring offers a way to improve source code design and quality. We present an approach to refactoring that is based on the functionality of source code. Sets of heuristics are captured as patterns of source code. Refactoring opportunities are located using these patterns, and dependencies are verified to check whether the located refactorings preserve the dependencies in the source code. Our automated tool detects these functionality-based refactoring opportunities, verifies the dependencies, and performs the refactorings that preserve them. These refactorings transform the source code into a series of functional regions of code, making it easier for developers to locate the code they are searching for. This also creates a chunked structure in the source code, which helps with bottom-up program comprehension. Thus, the process reduces the time required for maintenance by reducing the time spent on program comprehension. We perform case studies to demonstrate the effectiveness of our automated approach on two open source applications.
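The abstract describes the pattern-based detection step only at a high level. As a minimal illustrative sketch, and not the thesis's actual heuristics, the following fragment uses two hypothetical regex patterns over Java source lines to flag members that a refactoring could group into a single functional region:

```python
import re

# Hypothetical heuristic patterns: each maps a functionality label to a
# regex over Java source lines. The thesis captures heuristics as patterns
# of source code; these two patterns are illustrative only.
PATTERNS = {
    "accessors": re.compile(r"\b(?:get|set)[A-Z]\w*\s*\("),
    "persistence": re.compile(r"\b(?:save|load|read|write)\w*\s*\("),
}

def locate_opportunities(java_source: str):
    """Return (line_number, label) pairs marking candidate members that
    could be grouped into one functional region of code."""
    hits = []
    for number, line in enumerate(java_source.splitlines(), start=1):
        for label, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((number, label))
    return hits
```

A real implementation would also need the dependency-verification step the abstract describes, so that only dependency-preserving refactorings are applied.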
2.
Survey of Code Review Tools. Žember, Martin, January 2011.
In the present work we study the behaviour of tools intended for code review and how they aim at eliminating security vulnerabilities. There are many such tools, but a smaller set of them suffices to effectively improve the security of software. We provide results of empirical testing of these tools, both on artificial data, in order to map the vulnerability classes they are able to identify, and on real data, in order to test their scalability.
3.
Using risk mitigation approaches to define the requirements for software escrow. Rode, Karl, January 2015.
Two or more parties entering into a contract for services or goods may make use of an escrow of the funds for payment to enable trust in the contract. In such an arrangement the documents or financial instruments, the object(s) in escrow, are held in trust by a trusted third party (escrow provider) until the specified conditions are fulfilled. In software escrow, the object of escrow is typically the source code, and the specified release conditions usually address scenarios in which the software provider becomes unable to continue providing services (due to bankruptcy, a change in the services provided, and so on). The subject of software escrow is not well documented in the academic body of work; the largest information sources, active commentary, and supporting papers are provided by commercial software escrow providers, both in South Africa and abroad. This work maps the software escrow topic onto the King III compliance framework in South Africa. This is of value since users of bespoke developed applications may require extended professional assistance to align with the King III guidelines. The supporting risk assessment model developed in this work serves as a tool to evaluate and motivate for software escrow agreements. It also provides an overview of the various escrow agreement types and shifts the focus to the value proposition that each holds. Initial research indicated that current awareness of software escrow in industry is still very low, as evidenced by the significant number of approached specialists who declined to participate in the survey, citing their own inexperience in applying the discipline of software escrow within their companies. Moreover, the participants who contributed to the research indicated that they only required software escrow for medium to highly critical applications. This proved the value of assessing the various risk factors that bespoke software development introduces, as well as the risk mitigation options available, through tools such as escrow, to reduce the actual and residual risk to a manageable level.
4.
Cross-Entropy Approaches To Software Forensics: Source Code Authorship Identification. Stinson, James Thomas, 9 December 2011.
Identification of source code authorship can be a useful tool in security and forensic investigation, helping to create corroborating evidence that may send a suspected cyber terrorist, hacker, or malicious code writer to jail. Applied to academia, it can also help professors who suspect students of academic dishonesty, plagiarism, or modification of source code related to programming assignments. The purpose of this dissertation is to determine whether cross-entropy approaches to source code authorship analysis can predict the correct author of a given piece of source code. If so, this work will identify factors that affect the accuracy of the algorithm, how programmer experience influences accuracy, and whether a cross-entropy approach performs better than known source code authorship approaches. The research effort constructs a corpus of source code writings from various authors, based on the same and on varying system descriptions, from which benchmarks of the different approaches can be measured.
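The abstract does not spell out the estimator, so the following is a minimal sketch of the general idea only, assuming a smoothed character-bigram language model per author; the unknown sample is attributed to the author whose model yields the lowest cross-entropy:

```python
import math
from collections import Counter

def bigram_model(text: str, alpha: float = 0.5):
    """Character-bigram model with additive smoothing."""
    counts = Counter(zip(text, text[1:]))
    total = sum(counts.values())
    vocab = 256 * 256  # crude upper bound on the bigram space
    def prob(bigram):
        return (counts[bigram] + alpha) / (total + alpha * vocab)
    return prob

def cross_entropy(text: str, model) -> float:
    """Average negative log-probability of text under model, in bits per bigram."""
    bigrams = list(zip(text, text[1:]))
    if not bigrams:
        return float("inf")
    return -sum(math.log2(model(b)) for b in bigrams) / len(bigrams)

def attribute(unknown: str, corpora: dict) -> str:
    """Predict the author whose model gives the unknown sample the lowest cross-entropy."""
    models = {author: bigram_model(code) for author, code in corpora.items()}
    return min(models, key=lambda author: cross_entropy(unknown, models[author]))
```

Lower cross-entropy means the author's model "compresses" the unknown code better, which is the intuition behind entropy-based attribution; the dissertation's actual features and model order may differ.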
5.
Structural analysis of source code plagiarism using graphs. Obaido, George Rabeshi, January 2017.
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science, May 2017. / Plagiarism is a serious problem in academia. It is prevalent in the computing discipline, where students are expected to submit source code assignments as part of their assessment; hence, there is every likelihood of copying. Ideally, students may collaborate with each other on a programming task, but each student is expected to submit his or her own solution, and one might expect such interaction to help them learn programming. Unfortunately, that is not always the case. In undergraduate courses, especially in the computer sciences, if a given class is large, it is infeasible for an instructor to manually check each and every assignment for probable plagiarism. Even if the class were smaller, it is still impractical to inspect every assignment, because some potentially plagiarised content could be missed by humans. Therefore, automatically checking source code programs for likely plagiarism is essential.

Many methods have been proposed to detect source code plagiarism in undergraduate assignments, but an ideal system should be able to differentiate actual cases of plagiarism from the coincidental similarities that commonly arise in source code. Some existing source code plagiarism detection systems are either not scalable or perform poorly when programs are modified with a number of insertions and deletions to obfuscate plagiarism. To address this issue, a graph-based model that considers the structural similarities of programs is introduced to address cases of plagiarism in programming assignments.

This research study proposes an approach that measures similarities in programming assignments, using an existing plagiarism detection system to find similarities in programs and a graph-based model to annotate the programs. We describe experiments with data sets of undergraduate Java programs, inspecting the programs for plagiarism and evaluating the graph-based model, which achieves good precision. An evaluation of the graph-based model reveals a high rate of plagiarism in the programs and resilience to many obfuscation techniques, while false detections (coincidental similarity) rarely occurred. If this detection method is adopted, it will help instructors carry out the detection process conscientiously.
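The dissertation's model targets Java programs and builds on an existing detector, neither of which is reproduced here. Purely as an illustrative stand-in, the sketch below fingerprints program structure with Python's standard ast module and scores two sources by the overlap of their (parent, child) node-type edges, so that a common obfuscation, renaming identifiers, leaves the score unchanged:

```python
import ast
from collections import Counter

def structure_edges(source: str) -> Counter:
    """Multiset of (parent, child) AST node-type edges; identifier names
    are ignored, so renaming does not change the fingerprint."""
    tree = ast.parse(source)
    edges = Counter()
    for parent in ast.walk(tree):
        for child in ast.iter_child_nodes(parent):
            edges[(type(parent).__name__, type(child).__name__)] += 1
    return edges

def similarity(a: str, b: str) -> float:
    """Jaccard-style overlap of the two edge multisets (1.0 = same structure)."""
    ea, eb = structure_edges(a), structure_edges(b)
    union = sum((ea | eb).values())
    return sum((ea & eb).values()) / union if union else 1.0

original = "def mean(xs):\n    t = 0\n    for x in xs:\n        t += x\n    return t / len(xs)"
renamed = "def avg(vals):\n    s = 0\n    for v in vals:\n        s += v\n    return s / len(vals)"
print(similarity(original, renamed))  # 1.0: renaming alone does not disguise structure
```

A production detector would compare richer graphs (control and data flow) and calibrate a similarity threshold against known-independent solutions.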
6.
Baseband Processing Using the Julia Language. Mellberg, Linus, January 2015.
Baseband processing is an important and computationally heavy part of modern mobile cellular systems. These systems use specialized hardware with many digital signal processing cores and hardware accelerators. The algorithms that run on these systems are complex and need to take advantage of this hardware. Developing software for these systems requires domain knowledge about baseband processing and low-level programming on parallel real-time systems. This thesis investigates whether the programming language Julia can be used to implement algorithms for baseband processing in mobile telephony base stations. If a scientific language like Julia can be used to directly implement programs for the special hardware in the base stations, it can reduce lead times and costs. In this thesis an uplink receiver is implemented in Julia, written using a domain-specific language. This makes it possible to specify a number of transformations that use the metaprogramming capabilities in Julia to transform the uplink receiver so that it is better suited to execute on the hardware described above. This is achieved by transforming the program into functions that can each be executed on a single digital signal processing core or on a hardware accelerator. It is concluded that Julia seems well suited for prototyping baseband processing algorithms, and that using metaprogramming to transform a baseband processing algorithm to better fit baseband processing hardware is a feasible approach.
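The thesis performs these transformations with Julia macros over a DSL, none of which appears in the abstract. As a rough analogue only (in Python, with illustrative stage names that are not from the thesis), the sketch below treats the receiver pipeline as data and regroups its stages into chunks that each map onto a single DSP core or a hardware accelerator:

```python
# Illustrative uplink-receiver stages; names are assumptions, not the thesis's.
RECEIVER = ["remove_cp", "fft", "channel_estimate", "equalize", "demap", "decode"]
ACCELERATED = {"fft", "decode"}  # stages assumed to map onto hardware accelerators

def partition(stages, accelerated):
    """Split the pipeline so each chunk is either one accelerator stage
    or a run of consecutive stages for a single DSP core."""
    chunks, current = [], []
    for stage in stages:
        if stage in accelerated:
            if current:
                chunks.append(current)
                current = []
            chunks.append([stage])
        else:
            current.append(stage)
    if current:
        chunks.append(current)
    return chunks

print(partition(RECEIVER, ACCELERATED))
# [['remove_cp'], ['fft'], ['channel_estimate', 'equalize', 'demap'], ['decode']]
```

In the thesis this kind of regrouping is done at the expression level via metaprogramming, so the transformed program itself consists of per-core and per-accelerator functions.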
7.
Mobile code integrity through static program analysis, steganography, and dynamic transformation control. Jochen, Michael J., January 2008.
Thesis (Ph.D.)--University of Delaware, 2008. / Principal faculty advisors: Lori L. Pollock and Lisa Marvel, Dept. of Computer & Information Sciences. Includes bibliographical references.
8.
SCALE: Source code analyzer for locating errors. Florian, Mihai; Holzmann, Gerard J.; Chandy, K. Mani, 2010.
Thesis (Masters) -- California Institute of Technology, 2010. / Title from home page (viewed 04/19/10). Advisor names found in the thesis' metadata record in the digital repository. Includes bibliographical references.
9.
Structural Analysis of Source-Code Changes in Large Software through SrcDiff and DiffPath. Decker, Michael J., 13 August 2012.
No description available.
10.
Automatic Source Code Transformation To Pass Compiler Optimization. Kahla, Moustafa Mohamed, 3 January 2024.
Loop vectorization is a powerful optimization technique that can significantly speed up loops. The optimization depends on functional equivalence between the original and optimized code versions, a requirement typically established through the compiler's static analysis; when this condition cannot be established, the compiler misses the optimization. Manually rewriting the source code to trigger an already missed compiler optimization is time-consuming, given the multitude of potential code variations, and demands a high level of expertise, making it impractical in many scenarios. In this work, we propose a novel framework that takes the code blocks the compiler failed to optimize and transforms them into code blocks that pass the compiler optimization. We develop an algorithm to efficiently search for a code structure that passes the compiler optimization (weakly verified through a correctness test). We focus on the loop-vectorize optimization inside OpenMP directives, where the introduction of parallelism adds complexity to the compiler's vectorization task and is shown to hinder optimizations. Furthermore, we introduce a modified version of TSVC, a loop vectorization benchmark, in which all original loops are executed within OpenMP directives.
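The abstract does not give the framework's implementation; the following is a minimal sketch of the search loop it describes, assuming clang as the compiler and using its -Rpass=loop-vectorize optimization remarks to detect whether the pass fired. The candidate files and the correctness test are placeholders:

```python
import os
import subprocess

# Hypothetical candidate rewrites of a loop the compiler failed to vectorize;
# in the framework these would be generated by structural transformation rules.
CANDIDATES = ["variant_a.c", "variant_b.c"]

def vectorizes(c_file: str) -> bool:
    """Compile with clang and check its remarks to see whether the
    loop-vectorize pass fired on this translation unit."""
    result = subprocess.run(
        ["clang", "-O3", "-fopenmp", "-Rpass=loop-vectorize",
         "-c", c_file, "-o", os.devnull],
        capture_output=True, text=True)
    return "vectorized loop" in result.stderr

def is_correct(c_file: str) -> bool:
    """Placeholder for the weak correctness test: build the variant into a
    harness and compare its output with the original loop's on random inputs."""
    return True  # stub; a real driver would run the comparison

for candidate in CANDIDATES:
    if vectorizes(candidate) and is_correct(candidate):
        print(f"{candidate} vectorizes and passes the correctness test")
        break
```

The interesting part of the real framework is the generation and ordering of candidates, which this sketch deliberately leaves abstract.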
Our evaluation shows that our framework enables "loop-vectorize" optimizations that the compiler had failed to apply, resulting in speedups of up to 340× in the optimized blocks. Furthermore, applying our tool to HPC benchmark applications, which are already built with optimization and performance in mind, demonstrates that our technique successfully enables extended compiler optimization, speeding up the optimized blocks in 15 loops and the entire execution of the three applications by up to 1.58 times. / Master of Science / Loop vectorization is a powerful technique for improving the performance of specific sections of computer programs known as loops. Specifically, it executes instructions from different iterations of a loop simultaneously, providing a considerable runtime speedup through this parallelism. To apply this optimization, the code needs to meet certain conditions, which are usually checked by the compiler. However, sometimes the compiler cannot verify these conditions, and the optimization fails. Our research introduces a new approach to fix these issues automatically.
Normally, fixing the code manually to meet these conditions is time-consuming and requires considerable expertise. To overcome this, we've developed a tool that can efficiently find ways to make the code satisfy the conditions needed for optimization.
Our focus is on code that uses OpenMP directives to split a loop across multiple processor cores and run the pieces simultaneously; adding this parallelism makes the code more complex for the compiler to optimize.
Our tests show that our approach successfully improves the speed of computer programs by enabling optimizations initially missed by the compiler. This results in significant speed improvements for specific parts of the code, sometimes up to 340 times faster. We've also applied our method to well-optimized computer programs, and it still managed to make them run up to 1.58 times faster.