51

Functionality Based Refactoring: Improving Source Code Comprehension

Beiko, Jeffrey Lee 27 September 2007 (has links)
Thesis (Master, Computing) -- Queen's University, 2007-09-25 12:38:48.455 / Software maintenance is the lifecycle activity that consumes the greatest amount of resources. Maintenance is difficult because of the size of software systems, and much of the time spent on it goes into understanding source code. Refactoring offers a way to improve source code design and quality. We present an approach to refactoring based on the functionality of source code. Sets of heuristics are captured as patterns of source code; refactoring opportunities are located using these patterns, and dependencies are checked to verify that the located refactorings preserve them. Our automated tool detects functionality-based refactoring opportunities, verifies dependencies, and performs the refactorings that preserve dependencies. These refactorings transform the source code into a series of functional regions, making it easier for developers to locate the code they are searching for. This also gives the source code a chunked structure, which helps with bottom-up program comprehension. The process thus reduces the time required for maintenance by reducing the time spent on program comprehension. We perform case studies on two open source applications to demonstrate the effectiveness of our automated approach. / Master
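
The detect-then-verify pipeline described in this abstract can be illustrated with a small sketch. This is not Beiko's tool: the two heuristic patterns and the line-level dependency check below are invented stand-ins for the thesis's source-code patterns and dependency analysis.

```python
# Hypothetical sketch of pattern-based refactoring-opportunity detection
# with a dependency-preservation check. The patterns and the dependency
# model are illustrative assumptions, not the thesis's actual heuristics.
import re
from dataclasses import dataclass

@dataclass
class Opportunity:
    start: int   # first line of the candidate functional region
    end: int     # last line of the candidate functional region
    kind: str    # which heuristic pattern matched

# Toy heuristic patterns for "functional regions" of code.
PATTERNS = {
    "setter-run": re.compile(r"^\s*\w+\.set\w+\("),
    "guard-clause": re.compile(r"^\s*if .+:\s*(return|raise)\b"),
}

def find_opportunities(lines, min_run=3):
    """Locate runs of consecutive lines that match one heuristic pattern."""
    found = []
    for kind, pat in PATTERNS.items():
        run_start = None
        for i, line in enumerate(lines + [""]):  # sentinel flushes the last run
            if pat.search(line):
                if run_start is None:
                    run_start = i
            else:
                if run_start is not None and i - run_start >= min_run:
                    found.append(Opportunity(run_start, i - 1, kind))
                run_start = None
    return found

def preserves_dependencies(lines, opp):
    """Only accept extracting a region if it defines no name used later on."""
    defined = {m.group(1)
               for line in lines[opp.start:opp.end + 1]
               for m in [re.match(r"\s*(\w+)\s*=", line)] if m}
    used_after = set(re.findall(r"\w+", " ".join(lines[opp.end + 1:])))
    return not defined & used_after
```
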
52

Evolving object-oriented designs with refactorings /

Tokuda, Lance Aiji. January 1999 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 1999. / Vita. Includes bibliographical references (leaves 131-136). Available also in a digital version from Dissertation Abstracts.
53

Automating program transformations based on examples of systematic edits

Meng, Na 16 January 2015 (has links)
Programmers make systematic edits—similar, but not identical, changes to multiple places—during software development and maintenance in order to add features and fix bugs. Finding all the correct locations and making the edits correctly is a tedious and error-prone process. Existing tools for automating systematic edits are limited because they do not create general-purpose edit scripts or suggest edit locations, except for specialized or trivial edits. Since many similar changes occur in similar contexts (in code with similar surrounding dependency relations and syntactic structures), there is an opportunity to automate program transformations based on examples of systematic edits. By inferring systematic edits and relevant context from one or more exemplar changes, automated approaches can (1) apply similar changes to other locations, (2) locate code that requires similar changes, and (3) refactor code which undergoes systematic edits. This thesis seeks to improve programmer productivity and software correctness by automating parts of systematic editing and refactoring. Applying similar, but not identical, code changes to multiple locations with similar contexts requires (1) understanding and relating common program context—a program's syntactic structure, control, and data flow—relevant to the edits in order to propagate code changes from one location to others, and (2) recognizing differences between locations in order to customize code changes for each location. Prior approaches for propagating nontrivial, general-purpose code changes from one location to another either do not observe the program context when placing edits, or do not handle the differences between locations when customizing edits, producing syntactically invalid or incorrectly modified programs. We design a novel technique and implement it in a tool called Sydit. Our approach first creates an abstract, context-aware edit script which contains a syntax subtree enclosing the exemplar edit, with all concrete identifiers abstracted, and a sequence of edit operations. It then applies the edit script to user-selected locations by establishing both context matching and identifier matching to correctly place and customize the edit. Although Sydit is effective in helping developers correctly apply edits to multiple locations, programmers are still on their own to identify all the appropriate locations. When developers omit some of the locations, the edit script inferred from a single code location is not always well suited to help them find the missing ones. One approach to inferring the edit script is to encode the concrete context; however, the resulting edit script is too specific to the source location, and can therefore only identify locations containing syntax trees identical to the source location (false negatives). Another approach is to encode the context with all identifiers abstracted, but the resulting edit script may match too many locations (false positives). To suggest edit locations, we use multiple examples to create a partially abstract, context-aware edit script, and use this edit script both to find edit locations and to transform the code. Our experiments show that edit scripts from multiple examples have high precision and recall in finding edit locations and high accuracy when applying systematic edits, because the common context extracted from multiple examples, together with the common concrete identifiers they identify, improves the location search without sacrificing edit application accuracy.
For systematic edits which insert or update duplicated code, our systematic editing approaches may encourage developers in the bad practice of creating or evolving duplicated code. We investigate and evaluate an approach that automatically refactors cloned code based on the extent of systematic edits, by factoring out common code and parameterizing any differences. Our investigation finds that refactoring systematically edited code is not always feasible or desirable. When refactoring is desirable, systematic edits offer a better way to scope the refactoring than whole-method refactoring. Automatic clone removal refactoring cannot obviate the need for systematic editing; developers need tool support for both automatic refactoring and systematic editing. Based on the systematic changes already made by developers for a subset of change locations, our automated approaches facilitate propagating general-purpose systematic changes across large programs, identifying locations requiring systematic changes that developers missed, and refactoring code undergoing systematic edits to reduce code duplication and future repetitive code changes. The combination of these techniques opens a new way of helping developers automate tedious and error-prone tasks when they add features, fix bugs, and maintain software. These techniques also have the potential to guide automated software development and maintenance activities based on existing code changes mined from version histories for bug fixes, feature additions, refactoring, and software migration. / text
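
The core of the edit-script idea (abstract the identifiers of an example edit, then re-bind them consistently at each target location) can be sketched at the token level. Sydit itself operates on AST subtrees with control and data dependencies; this string-based miniature is only an assumption-laden illustration.

```python
# Token-level miniature of an identifier-abstracted edit script. Real Sydit
# matches AST context and edit operations; everything below is simplified.
import re

TOKEN = re.compile(r"[A-Za-z_]\w*|\S")

def abstract_tokens(tokens):
    """Replace each distinct identifier with a placeholder ($0, $1, ...)."""
    mapping, out = {}, []
    for t in tokens:
        if re.fullmatch(r"[A-Za-z_]\w*", t):
            mapping.setdefault(t, f"${len(mapping)}")
            out.append(mapping[t])
        else:
            out.append(t)
    return out, mapping

def match_and_bind(template, target):
    """Match an abstracted 'before' template against target tokens, binding
    each placeholder to a concrete identifier (consistently on repeats)."""
    if len(template) != len(target):
        return None
    binding = {}
    for pat, tok in zip(template, target):
        if pat.startswith("$"):
            if binding.setdefault(pat, tok) != tok:
                return None
        elif pat != tok:
            return None
    return binding

# Learn the edit "x = x + 1" -> "increment(x)" from one example ...
tmpl_before, names = abstract_tokens(TOKEN.findall("x = x + 1"))
tmpl_after = [names.get(t, t) for t in TOKEN.findall("increment(x)")]

# ... then apply it at a different location with different identifiers.
binding = match_and_bind(tmpl_before, TOKEN.findall("count = count + 1"))
if binding is not None:
    print(" ".join(binding.get(t, t) for t in tmpl_after))
    # -> increment ( count )
```
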
54

Functionality based refactoring : improving source code comprehension

Beiko, Jeffrey Lee 02 January 2008 (has links)
Software maintenance is the lifecycle activity that consumes the greatest amount of resources. Maintenance is difficult because of the size of software systems, and much of the time spent on it goes into understanding source code. Refactoring offers a way to improve source code design and quality. We present an approach to refactoring based on the functionality of source code. Sets of heuristics are captured as patterns of source code; refactoring opportunities are located using these patterns, and dependencies are checked to verify that the located refactorings preserve them. Our automated tool detects functionality-based refactoring opportunities, verifies dependencies, and performs the refactorings that preserve dependencies. These refactorings transform the source code into a series of functional regions, making it easier for developers to locate the code they are searching for. This also gives the source code a chunked structure, which helps with bottom-up program comprehension. The process thus reduces the time required for maintenance by reducing the time spent on program comprehension. We perform case studies on two open source applications to demonstrate the effectiveness of our automated approach. / Thesis (Master, Computing) -- Queen's University, 2007-10-05 12:48:56.977
55

A model for the application of arbitrary object-oriented refactorings /

Gayed, Grant January 1900 (has links)
Thesis (M.C.S.) - Carleton University, 2002. / Includes bibliographical references (p. 116-119). Also available in electronic format on the Internet.
56

Refactoring for paradigm change in the interactive data language

Marquez, Gabriel L., January 2007 (has links)
Thesis (M.S.)--University of Texas at El Paso, 2007. / Title from title screen. Vita. CD-ROM. Includes bibliographical references. Also available online.
57

Linguistic Refactoring of Business Process Models

Pittke, Fabian 11 1900 (has links) (PDF)
In the past decades, organizations have had to face numerous challenges due to intensifying globalization and internationalization, shorter innovation cycles, and growing IT support for business. Business process management is seen as a comprehensive approach to align business strategy, organization, controlling, and business activities so as to react flexibly to market changes. For this purpose, business process models are increasingly utilized to document and redesign relevant parts of an organization's business operations. Since companies tend to have a growing number of business process models stored in a process model repository, analysis techniques are required that assess the quality of these process models in an automatic fashion. While available techniques can easily check the formal content of a process model, only a few techniques analyze its natural language content. Techniques are therefore required that address linguistic issues caused by the actual use of natural language. To close this gap, this doctoral thesis explicitly targets inconsistencies caused by natural language and investigates the potential of automatically detecting and resolving them from a linguistic perspective. In particular, this doctoral thesis provides the following contributions. First, it defines a classification framework that structures existing work on process model analysis and refactoring. Second, it introduces the notion of atomicity, which imposes a strict consistency condition between the formal content and the textual content of a process model; based on an explorative investigation, we reveal several recurring violation patterns that are not compliant with this notion. Third, the thesis proposes an automatic refactoring technique that formalizes the identified patterns to transform a non-atomic process model into an atomic one. Fourth, it defines an automatic technique for detecting and refactoring synonyms and homonyms in process models, which is useful for unifying the terminology used in an organization. Fifth and finally, it proposes a recommendation-based refactoring approach that addresses process models suffering from incompleteness and thus open to several possible interpretations. The efficiency and usefulness of the proposed techniques are evaluated on real-world process model repositories from various industries. (author's abstract)
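
To make the atomicity notion concrete, here is a toy sketch: it flags activity labels that bundle several actions through a conjunction and splits them into one label per action. The conjunction rule and the shared-object heuristic are assumptions standing in for the thesis's formalized violation patterns.

```python
# Toy atomicity check and refactoring for activity labels. The conjunction
# rule and shared-object heuristic are illustrative assumptions only.
import re

CONJUNCTION = re.compile(r"\b(and|or)\b", re.IGNORECASE)

def is_atomic(label: str) -> bool:
    """A label that bundles several actions via a conjunction is non-atomic."""
    return CONJUNCTION.search(label) is None

def refactor_to_atomic(label: str) -> list:
    """Split a non-atomic label into one label per action."""
    if is_atomic(label):
        return [label]
    parts = [p.strip() for p in CONJUNCTION.split(label.lower())
             if p.strip() and not CONJUNCTION.fullmatch(p.strip())]
    # Heuristic: a bare verb ("check") borrows the object of the last
    # part ("confirm order" -> "order"), yielding "check order".
    shared_object = " ".join(parts[-1].split()[1:])
    return [p if len(p.split()) > 1 else f"{p} {shared_object}" for p in parts]

print(refactor_to_atomic("Check and confirm order"))
# -> ['check order', 'confirm order']
```
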
58

Munch : an efficient modularisation strategy on sequential source code check-ins

Arzoky, Mahir January 2015 (has links)
As developers create increasingly sophisticated applications, software systems grow in both complexity and size. When source code is easy to understand, the system is more maintainable, which leads to reduced costs; better structured code also allows new requirements to be introduced more efficiently and with fewer issues. However, the maintenance and evolution of systems can be frustrating: it is difficult for developers to keep a fixed understanding of the system's structure, as the structure can change during maintenance. Software module clustering is the process of automatically partitioning the structure of the system, using low-level dependencies in the source code, to improve the system's structure. A large number of studies have applied the Search Based Software Engineering approach to the software module clustering problem. A software clustering tool, Munch, was developed and employed in this study to modularise a unique dataset of sequential source code software versions. The tool is based on Search Based Software Engineering techniques and consists of a number of components, including the clustering algorithm and a number of different fitness functions and metrics used for measuring and assessing the quality of the clustering decompositions. The tool provides a framework for evaluating a number of clustering techniques and strategies. The dataset used in this study, provided by Quantel Limited, comes from the processed source code of a product-line architecture library that has delivered numerous products. The dataset analysed is the persistence engine used by all products, comprising over 0.5 million lines of C++ across 503 software versions. This study investigates whether search-based software clustering approaches can help stakeholders understand how the inter-class dependencies of a software system change over time. It performs efficient modularisation on a time series of source code relationships, taking advantage of the fact that the nearer two versions are in time, the more similar their modularisations are expected to be. The study introduces a seeding concept and highlights how it can be used to significantly reduce the runtime of the modularisation: the versions are not treated as separate modularisation problems; instead, the result of modularising the previous graph is used to give the next graph a head start. Code structure and sequence are used to obtain more effective modularisation and to reduce the runtime of the process. To evaluate the efficiency of the modularisation, numerous experiments were conducted on the dataset, and their results present strong evidence in support of the seeding strategy. To reduce the runtime further, statistical techniques for controlling the number of iterations of the modularisation, based on the similarities between time-adjacent graphs, are introduced. The convergence of the heuristic search technique is examined, and a number of stopping criteria are estimated and evaluated. Extensive experiments were conducted on the time-series dataset, and evidence is presented to support the proposed techniques. In addition, this thesis investigates and evaluates the starting clustering arrangement of Munch's clustering algorithm, introducing and experimenting with a number of starting clustering arrangements, including a uniformly random clustering arrangement strategy.
Moreover, this study investigates whether the dataset used for the modularisation resembles a random graph, by computing the probabilities of observing certain connectivity. The thesis demonstrates that modularisation is not possible with data that resembles random graphs, and that the dataset being used does not resemble a random graph except in small sections where there were large maintenance activities. Furthermore, it shows how the random-graph metric can be used as a tool to indicate areas of interest in the dataset without the need to run the modularisation. Finally, a huge amount of software code has been and will be developed, yet very little has been learnt from how that code evolves over time. This study therefore also aims to help developers and stakeholders model the internal structure of software, to model development trends and biases, and to predict the occurrence of large changes and potential refactorings; industrial feedback on the research was obtained. The thesis presents work on the detection of refactoring activities and discusses possible applications of the findings in industrial settings.
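
The seeding strategy lends itself to a compact sketch: modularise version i by starting the search from version i-1's decomposition instead of from scratch. The simple hill climb and the cohesion-minus-coupling fitness below are stand-ins for Munch's actual algorithm and fitness functions.

```python
# Sketch of seeded time-series modularisation. The hill climb and the
# cohesion-minus-coupling fitness are stand-ins for Munch's internals.
import random

def fitness(edges, clusters):
    """Reward intra-cluster edges (cohesion), penalize inter-cluster ones."""
    intra = sum(1 for a, b in edges if clusters[a] == clusters[b])
    return intra - (len(edges) - intra)

def hill_climb(edges, nodes, seed=None, iters=20000):
    clusters = dict(seed) if seed else \
        {n: random.randrange(len(nodes)) for n in nodes}
    best = fitness(edges, clusters)
    for _ in range(iters):
        n = random.choice(nodes)
        old = clusters[n]
        clusters[n] = random.randrange(len(nodes))
        new = fitness(edges, clusters)
        if new >= best:
            best = new
        else:
            clusters[n] = old          # revert a worsening move
    return clusters, best

def modularise_series(versions):
    """versions: list of (edges, nodes) dependency graphs in check-in order."""
    seed, results = None, []
    for edges, nodes in versions:
        if seed is not None:
            # carry the previous decomposition over; new nodes start random
            seed = {n: seed.get(n, random.randrange(len(nodes))) for n in nodes}
        seed, score = hill_climb(edges, nodes, seed)
        results.append((dict(seed), score))
    return results
```
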
59

On the refactoring of activity labels in business process models

Leopold, Henrik, Smirnov, Sergey, Mendling, Jan 14 January 2012 (has links) (PDF)
Large corporations increasingly utilize business process models for documenting and redesigning their operations. The extent of such modeling initiatives, with several hundred models and dozens of often barely trained modelers, calls for automated quality assurance. While formal properties of control flow can easily be checked by existing tools, there is a notable gap when it comes to checking the quality of the textual content of models, in particular their activity labels. In this paper, we address the problem of activity label quality in business process models. We designed a technique for the recognition of labeling styles and the automatic refactoring of labels with quality issues. More specifically, we developed a parsing algorithm, integrating natural language tools such as WordNet and the Stanford Parser, that is able to deal with the shortness of activity labels. Using three business process model collections from practice with differing labeling style distributions, we demonstrate the applicability of our technique. In comparison to a straightforward application of standard natural language tools, our technique provides much more stable results. As an outcome, the technique shifts the boundary of process model quality issues that can be checked automatically from syntactic to semantic aspects.
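
A toy version of the style-recognition-and-refactoring step might look as follows. The paper's parser leans on WordNet and the Stanford Parser; the tiny nominalization table here is an assumed stand-in for that lexical knowledge.

```python
# Toy recognition of labeling styles and refactoring of action-noun labels
# into verb-object style. The nominalization table is an assumed stand-in
# for the lexical knowledge the paper draws from WordNet.
NOMINALIZATIONS = {
    "creation": "create", "notification": "notify",
    "approval": "approve", "verification": "verify",
}

def labeling_style(label: str) -> str:
    words = label.lower().split()
    if words and words[0] in NOMINALIZATIONS:
        return "action-noun"        # e.g. "Creation of invoice"
    return "verb-object"            # e.g. "Create invoice"

def refactor_label(label: str) -> str:
    """Rewrite an action-noun label into the verb-object style."""
    words = label.lower().split()
    if labeling_style(label) != "action-noun":
        return label
    obj = [w for w in words[1:] if w != "of"]
    return " ".join([NOMINALIZATIONS[words[0]], *obj])

print(refactor_label("Creation of invoice"))   # -> "create invoice"
print(refactor_label("Approve order"))         # already verb-object; unchanged
```
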
60

Visualizing Java Code Smells with Dotplots

Jefferson, Alvin Hayes 01 January 2008 (has links)
An approach using dot plots as an aid to visualizing smells within Java source files is presented. Dot plots are a visual tool for viewing duplication in a document or text string. Our approach uses a plug-in for the Eclipse Java IDE to convert Java source files into dot plots. The goal is to find problem areas in the code, known as "Code Smells", that could indicate that the source file needs to be modified or refactored. In the dot plot these problem areas appear as sections containing interesting dot formations, and color is used to highlight places in the plot that could be important. Duplication is a common problem in source code and also an important code smell. We show that, by finding the Duplicate Code smell, we can also find other code smells, and we build a plug-in that a programmer can use during the coding process to help improve code design.
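
The dot-plot idea itself is easy to sketch: plot a mark at (i, j) whenever lines i and j of a file match, so duplicated blocks appear as diagonal runs off the main diagonal. This text-mode rendering is an illustrative assumption; the thesis's plug-in draws color-enhanced plots inside Eclipse.

```python
# Text-mode dot plot of line-level duplication. Duplicated blocks show up
# as diagonal runs of '*' off the main diagonal. A crude stand-in for the
# Eclipse plug-in's color-enhanced rendering.
def dot_plot(lines):
    norm = [line.strip() for line in lines]
    for i in range(len(norm)):
        print("".join("*" if norm[i] and norm[i] == norm[j] else "."
                      for j in range(len(norm))))

source = """x = load()
y = transform(x)
save(y)
x = load()
y = transform(x)
save(y)""".splitlines()

dot_plot(source)
# Each line matches itself (main diagonal) and its copy three lines away,
# so the duplicated block appears as two parallel off-diagonal runs.
```
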
