11

Automatic Source Code Transformation To Pass Compiler Optimization

Kahla, Moustafa Mohamed 03 January 2024 (has links)
Loop vectorization is a powerful optimization technique that can significantly improve the runtime of loops. The optimization depends on functional equivalence between the original and optimized code versions, a requirement typically established through the compiler's static analysis. When this condition cannot be verified, the compiler misses the optimization. Manually rewriting source code to enable a missed compiler optimization is time-consuming, given the multitude of potential code variations, and demands a high level of expertise, making it impractical in many scenarios. In this work, we propose a novel framework that takes the code blocks the compiler failed to optimize and transforms them into equivalent blocks that pass the compiler optimization. We develop an algorithm to efficiently search for a code structure that passes the compiler optimization (weakly verified through a correctness test). We focus on loop-vectorize optimization inside OpenMP directives, where the introduction of parallelism adds complexity to the compiler's vectorization task and is shown to hinder optimizations. Furthermore, we introduce a modified version of TSVC, a loop vectorization benchmark, in which all original loops are executed within OpenMP directives. Our evaluation shows that our framework enables "loop-vectorize" optimizations that the compiler initially failed to apply, yielding speedups of up to 340× in the optimized blocks. Furthermore, applying our tool to HPC benchmark applications, which are already built with optimization and performance in mind, demonstrates that our technique successfully enables extended compiler optimization, accelerating the execution of the optimized blocks in 15 loops and the entire execution time of the three applications by up to 1.58 times.
/ Master of Science / Loop vectorization is a powerful technique for improving the performance of specific sections of computer programs known as loops. It executes instructions from different iterations of a loop simultaneously, and this parallelism yields a considerable speedup in the loop's runtime. To apply this optimization, the code must meet certain conditions, which are usually checked by the compiler. Sometimes, however, the compiler cannot verify these conditions, and the optimization fails. Our research introduces a new approach to fix these issues automatically. Fixing the code manually to meet the conditions is normally time-consuming and requires high expertise. To overcome this, we have developed a tool that can efficiently find ways to make the code satisfy the conditions needed for optimization. Our focus is on code that uses OpenMP directives, which split a loop across multiple processor cores and run the pieces simultaneously; adding this parallelism makes the code harder for the compiler to optimize. Our tests show that our approach successfully improves the speed of computer programs by enabling optimizations initially missed by the compiler, making specific parts of the code up to 340 times faster. We have also applied our method to well-optimized computer programs, and it still managed to make them run up to 1.58 times faster.
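The abstract notes that candidate rewrites are only "weakly verified through a correctness test" rather than formally proven equivalent. A minimal sketch of that verification step, with the original and transformed loop bodies modeled as Python functions (the real framework operates on C/OpenMP code and consults the compiler's vectorization report), might look like this:

```python
import random

def original_loop(a, b):
    # Stand-in for the original code block, which the compiler's static
    # analysis might fail to prove safe to vectorize.
    out = list(a)
    for i in range(len(a)):
        out[i] = a[i] + b[i]
    return out

def candidate_rewrite(a, b):
    # One hypothetical transformed variant produced by the search.
    return [x + y for x, y in zip(a, b)]

def weakly_equivalent(f, g, trials=100, n=64):
    # "Weak" verification: agreement on random inputs, not a formal proof.
    for _ in range(trials):
        a = [random.uniform(-1e3, 1e3) for _ in range(n)]
        b = [random.uniform(-1e3, 1e3) for _ in range(n)]
        if f(a, b) != g(a, b):
            return False
    return True

# The framework would keep a candidate only if it passes this test (and,
# in the real system, only if the compiler reports the loop as vectorized).
assert weakly_equivalent(original_loop, candidate_rewrite)
```

Such a test can reject incorrect rewrites but cannot certify correctness, which is why the abstract calls the verification weak.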
12

SUPPORTING SOFTWARE EXPLORATION WITH A SYNTACTIC AWARESOURCE CODE QUERY LANGUAGE

Bartman, Brian M. 26 July 2017 (has links)
No description available.
13

An ontology-based reengineering methodology for service orientation

Zhang, Zhuopeng January 2009 (has links)
The “Software as a Service” model in service-oriented computing allows loosely coupled software components to be designed and integrated with other software systems. Web services, together with service-oriented architectures, are a promising integration technology for Webifying legacy systems. However, since most components in a legacy system were not designed and developed as services, current software systems need to be converted into sets of loosely coupled services. A service-oriented software reengineering process is therefore essential for legacy systems to survive in the service-oriented computing environment. In this process, understanding, decomposing and reusing legacy code turn out to be important activities. In this thesis, a methodology for Service-Oriented Software Reengineering (SOSR) is proposed to support the identification, extraction and integration of reusable legacy code. Based on both the result of a legacy system assessment and a service-oriented analysis and design process, a reengineering decision is made according to proposed rules. Following this decision, ontologies for SOSR, consisting of a Domain Concept Ontology (DCO), a Functionality Ontology (FO) and a Software Component Ontology (SCO), are developed using established ontology development methodologies. These ontologies capture knowledge about both the application domain and the code entities, supporting further legacy code analysis. Service candidates in legacy systems are identified by mapping the FO to the SCO via a novel method combining Formal Concept Analysis (FCA) and Relational Concept Analysis (RCA). After the service candidates are identified, the reusable legacy code is extracted by dependency analysis and program slicing. Rules defined in a code query language detect dead code.
Program slicing is applied as the main reverse engineering technique to recover executable legacy code. An Executable Union Slicing (EUS) algorithm is defined to generate executable legacy components with high cohesion and low coupling. In the integration phase, extracted legacy components containing core legacy code can either be wrapped into Web services for service orchestration in the business layer, or composed in a software service provider. Case studies show that the proposed SOSR methodology is flexible and practical for migrating legacy applications to service-oriented architectures, and that it can be customised to different legacy systems. The methodology can help software developers and maintainers reengineer tightly coupled legacy information systems into loosely coupled, agile information systems.
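The abstract identifies service candidates by applying Formal Concept Analysis to the mapping between functionalities and software components. As an illustration only (the component and functionality names below are invented, not taken from the thesis), a tiny FCA context of legacy components versus the functionalities they implement can be closed into formal concepts, each a candidate service grouping:

```python
# Hypothetical context: legacy components -> functionalities they implement.
context = {
    "AcctModule":   {"open_account", "close_account"},
    "LedgerModule": {"post_entry", "close_account"},
    "ReportModule": {"post_entry", "monthly_report"},
}

def formal_concepts(ctx):
    """Enumerate the formal concepts (extent, intent) of a small context.

    Concept intents are exactly the intersections of object intents
    (the concept with empty extent is omitted for brevity), so we close
    the set of object intents under pairwise intersection.
    """
    intents = {frozenset(attrs) for attrs in ctx.values()}
    changed = True
    while changed:
        changed = False
        for x in list(intents):
            for y in list(intents):
                z = x & y
                if z not in intents:
                    intents.add(z)
                    changed = True
    # Extent of an intent = all objects whose attributes include it.
    return sorted(
        (tuple(sorted(o for o, a in ctx.items() if i <= a)),
         tuple(sorted(i)))
        for i in intents
    )

for extent, intent in formal_concepts(context):
    print(extent, "->", intent)
```

Here the concept with intent {close_account} groups AcctModule and LedgerModule, suggesting a shared account-closing service candidate; the thesis additionally uses Relational Concept Analysis, which this toy sketch does not cover.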
14

Turbo codes for data compression and joint source-channel coding

Zhao, Ying. January 2007 (has links)
Thesis (Ph.D.)--University of Delaware, 2006. / Principal faculty advisor: Javier Garcia-Frias, Dept. of Electrical and Computer Engineering. Includes bibliographical references.
15

Low density generator matrix codes for source and channel coding

Zhong, Wei. January 2006 (has links)
Thesis (Ph.D.)--University of Delaware, 2006. / Principal faculty advisor: Javier Garcia-Frias, Dept. of Electrical and Computer Engineering. Includes bibliographical references.
16

Code Classification Based on Structure Similarity

Yang, Chia-hui 14 September 2012 (has links)
Automatically classifying the source code of malware variants is an important research problem in digital forensics. Classification yields a complete picture of a malware family's behavior, which simplifies the forensic task. In previous research, analysts perform dynamic analysis on malware binaries, or static analysis after reverse engineering; on the other hand, malware developers use anti-VM and obfuscation techniques to try to defeat such classifiers. As honeypots become more widely deployed, researchers obtain more and more malware source code, and analyzing this source code directly may be the best route to classification. In this paper, a novel classification approach is proposed based on the logic and directory-structure similarity of malware samples. All collected source code is classified by a hierarchical clustering algorithm. The proposed system not only classifies known malware correctly but also finds new types of malware, sparing forensic staff from spending too much time reanalyzing known malware. The system also helps investigators understand an attacker's behavior and purpose. The experimental results demonstrate that the system classifies malware correctly and can be applied to other source code classification tasks.
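The abstract clusters samples by directory-structure similarity using hierarchical clustering. A stdlib-only sketch of that idea, with invented sample names and file sets, Jaccard similarity over file-path sets, and a naive single-linkage agglomerative pass (the thesis also uses logic similarity, which this sketch omits):

```python
# Hypothetical samples: each malware source tree reduced to its set of
# relative file paths (names invented for illustration).
samples = {
    "botA_v1": {"src/main.c", "src/net.c", "irc/cmd.c"},
    "botA_v2": {"src/main.c", "src/net.c", "irc/cmd.c", "irc/flood.c"},
    "wormB":   {"worm/scan.c", "worm/payload.c"},
}

def jaccard(a, b):
    # Directory-structure similarity as overlap of file-path sets.
    return len(a & b) / len(a | b)

def single_linkage(items, sim, threshold):
    # Agglomerative clustering: repeatedly merge the two clusters whose
    # closest members are most similar, until no pair reaches threshold.
    clusters = [{k} for k in items]
    while True:
        best, pair = threshold, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                s = max(sim(items[x], items[y])
                        for x in clusters[i] for y in clusters[j])
                if s >= best:
                    best, pair = s, (i, j)
        if pair is None:
            return clusters
        i, j = pair
        clusters[i] |= clusters.pop(j)

clusters = single_linkage(samples, jaccard, threshold=0.5)
# The two botA variants share most files, so they end up in one cluster.
```

A sample that joins no existing cluster would surface as a singleton, which is how such a scheme can flag a potentially new malware type.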
17

Programming Language Evolution and Source Code Rejuvenation

Pirkelbauer, Peter Mathias 2010 December 1900 (has links)
Programmers rely on programming idioms, design patterns, and workaround techniques to express fundamental design not directly supported by the language. Evolving languages often address frequently encountered problems by adding language and library support to subsequent releases. By using new features, programmers can express their intent more directly. As new concerns, such as parallelism or security, arise, early idioms and language facilities can become serious liabilities. Modern code sometimes benefits from optimization techniques not feasible for code that uses less expressive constructs. Manual source code migration is expensive, time-consuming, and prone to errors. This dissertation discusses the introduction of new language features and libraries, exemplified by open-methods and a non-blocking growable array library. We describe the relationship of open-methods to various alternative implementation techniques. The benefits of open-methods materialize in simpler code, better performance, and a similar memory footprint when compared to alternative implementation techniques. Based on these findings, we develop the notion of source code rejuvenation, the automated migration of legacy code. Source code rejuvenation leverages enhanced programming language and library facilities by finding and replacing coding patterns that can be expressed through higher-level software abstractions. Raising the level of abstraction improves code quality by lowering software entropy. In conjunction with extensions to programming languages, source code rejuvenation offers an evolutionary trajectory towards more reliable, more secure, and better performing code. We describe the tools that allow us to implement code rejuvenations efficiently. The Pivot source-to-source translation infrastructure and its traversal mechanism form the core of our machinery.
To free programmers from representation details, we use a lightweight pattern matching generator that turns a C++-like input language into pattern matching code. The generated code integrates seamlessly with the rest of the analysis framework. We use the framework to build analysis systems that find common workaround techniques for designated language extensions of C++0x (e.g., initializer lists). Moreover, we describe a novel system (TACE: Template Analysis and Concept Extraction) for the analysis of uninstantiated template code. Our tool automatically extracts requirements from the bodies of template functions. TACE helps programmers understand the requirements that their code de facto imposes on arguments and compare those de facto requirements to formal and informal specifications.
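Source code rejuvenation, as described above, finds a workaround idiom and replaces it with a newer language feature such as a C++0x initializer list. A toy sketch of that find-and-replace step, using a regex as a stand-in for the Pivot's AST-based pattern matcher (the input snippet is invented for illustration):

```python
import re

# A pre-C++0x workaround idiom: build a vector element by element.
legacy = """\
std::vector<int> v;
v.push_back(1);
v.push_back(2);
v.push_back(3);
"""

def rejuvenate(src):
    # Match a vector declaration followed by consecutive push_back calls
    # on that same variable, and fold them into an initializer list.
    # A real rejuvenation tool matches ASTs, not text.
    pattern = re.compile(
        r"std::vector<(\w+)>\s+(\w+);\n"
        r"((?:\2\.push_back\([^)]+\);\n)+)")
    def repl(m):
        elems = re.findall(r"push_back\(([^)]+)\)", m.group(3))
        return "std::vector<%s> %s = {%s};\n" % (
            m.group(1), m.group(2), ", ".join(elems))
    return pattern.sub(repl, src)

print(rejuvenate(legacy))
# std::vector<int> v = {1, 2, 3};
```

Replacing the idiom with the higher-level abstraction is exactly the "raising the level of abstraction" the dissertation describes, here shrunk to a single textual rewrite rule.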
18

Downgrading Java 5.0 Projects : An approach based on source-code transformations

Steijger, Tamara January 2008 (has links)
The introduction of Java 5.0 came with an extension of the language syntax. Several new language features, such as generic types and enumeration types, were added to the language specification. These features cause downward incompatibilities: code written in Java 5.0 will not run on older versions of the Java runtime environment. For some active projects, however, it is not possible to upgrade to newer Java versions, since some of their code might not be supported on Java 5.0. If one still wants to use components written in Java 5.0, these must be downgraded. Up to now this has mostly been accomplished by transforming the byte code of these programs. In this thesis, we present a set of transformations that turn Java 5.0 source code into Java 1.4 compatible code. We successfully apply these transformations to two larger projects and compare our approach to the hitherto common byte-code based tools.
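One of the downgrading transformations the abstract implies is erasing generic type parameters, since Java generics exist only at the source level and compile away. A toy sketch of that single rewrite as a text transformation (the thesis operates on real Java source with a proper front end; this regex pass only drops the type-argument lists and ignores the casts a full downgrade must insert):

```python
import re

# Illustrative Java 5.0 fragment with generic types.
java5 = "List<String> names = new ArrayList<String>();"

def erase_generics(src):
    # Remove <...> type-argument lists that directly follow an identifier,
    # e.g. List<String> -> List. Nested arguments like Map<String, List<T>>
    # are covered because '<' and '>' are allowed inside the brackets.
    return re.sub(r"(?<=\w)<[\w\s,?<>\[\]]*>", "", src)

assert erase_generics(java5) == "List names = new ArrayList();"
```

The resulting code parses under Java 1.4; recovering type safety at use sites (inserting the casts that javac would otherwise synthesize) is the harder part of the real transformation set.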
19

Specification and enforcement of usage constraints for commercial software components /

DePrince, Wayne, January 2003 (has links)
Thesis (Ph. D.)--Lehigh University, 2004. / Includes vita. Includes bibliographical references (leaves 235-240).
20

Functionality based refactoring : improving source code comprehension

Beiko, Jeffrey Lee 02 January 2008 (has links)
Software maintenance is the lifecycle activity that consumes the greatest amount of resources. Maintenance is a difficult task because of the size of software systems, and much of the time spent on it goes into understanding source code. Refactoring offers a way to improve source code design and quality. We present an approach to refactoring that is based on the functionality of source code. Sets of heuristics are captured as patterns of source code. Refactoring opportunities are located using these patterns, and dependencies are verified to check whether the located refactorings preserve the dependencies in the source code. Our automated tool detects functionality-based refactoring opportunities, verifies dependencies, and performs the refactorings that preserve them. These refactorings transform the source code into a series of functional regions, which makes it easier for developers to locate the code they are searching for. This also creates a chunked structure in the source code, which helps with bottom-up program comprehension. Thus, this process reduces the amount of time required for maintenance by reducing the amount of time spent on program comprehension. We perform case studies to demonstrate the effectiveness of our automated approach on two open source applications. / Thesis (Master, Computing) -- Queen's University, 2007-10-05 12:48:56.977
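The abstract captures heuristics as source-code patterns and uses them to locate refactoring opportunities. A very reduced sketch of pattern-based opportunity detection, with invented statements and a "pattern" taken to mean a repeated statement sequence whose identifiers are abstracted away (the thesis's patterns and dependency verification are far richer):

```python
import re

# Invented statement list standing in for a parsed function body.
stmts = [
    "open(f)", "read(f)", "close(f)",
    "log(x)",
    "open(g)", "read(g)", "close(g)",
]

def normalize(stmt):
    # Abstract away identifiers so structurally equal statements match.
    return re.sub(r"\w+(?=\))", "_", stmt)

def repeated_sequences(stmts, length):
    # Report (first_index, second_index, pattern) for every window that
    # repeats an earlier one: a candidate functional region to extract.
    seen, found = {}, []
    for i in range(len(stmts) - length + 1):
        key = tuple(normalize(s) for s in stmts[i:i + length])
        if key in seen:
            found.append((seen[key], i, key))
        else:
            seen[key] = i
    return found

opportunities = repeated_sequences(stmts, 3)
# The open/read/close triple appears twice -> one refactoring candidate.
```

Before performing such an extraction, a tool in the spirit of the abstract would still have to verify that moving the statements preserves the code's dependencies.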
