About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Virtual software in reality

Knight, Claire January 2000 (has links)
Software visualisation is an important weapon in the program comprehension armoury. It is a technique that can, when designed and used effectively, aid in understanding existing program code. It can achieve this by displaying information in new and different forms, which may make obvious something missed in reading the code. It can also be used to present many aspects of the data at once. Software, despite many software engineering advances in requirements, design and implementation techniques, continues to be complex and large, and if anything seems to be growing in these respects. This means that techniques that have failed to aid comprehension and maintenance are certainly not going to be able to deal with current software. Therefore this area requires research to suggest solutions to the information overload that is sure to occur.

There are several issues that this thesis addresses, all of them related to the creation of software visualisation systems that are capable of being used and useful well into the next generation of software systems. The scale and complexity of software are pressing issues, as is the associated information overload problem that this brings. In an attempt to address this problem, the following are considered to be important: abstractions, representations, mappings, metaphors, and visualisations. These areas are interrelated, and the first four enable the final one, visualisations. These problems are not the only ones that face software visualisation systems. There are many that are based on the general theory of the applicability of the technique to such tasks as program comprehension, rather than the detail of how a particular code fragment is shown. These problems are also related to the enabling technology of three-dimensional visualisations: virtual reality. In summary, the areas of interest are: automation, evolution, scalability, navigation and interaction, correlation, and visual complexity.

This thesis provides an exploration of these identified areas in the context of software visualisation. Relationships that describe, and distinguish between, existing and future software visualisations are presented, with examples based on recent software visualisation research. Two real-world metaphors (and their associated mappings and representations) are defined for the purpose of visualising software as an aid to program comprehension. These metaphors also provide a vehicle for the exploration of the areas identified above. Finally, an evaluation of the visualisations is presented using a framework developed for the comparative evaluation of three-dimensional, comprehension-oriented software visualisations.

This thesis has shown the viability of using three-dimensional software visualisations. The important issues of automation, evolution, scalability, and navigation have been presented and discussed, and their relationship to real-world metaphors examined. This has been done in conjunction with an investigation into the use of such real-world metaphors for software visualisation. The thesis as a whole has provided an important examination of many of the issues related to these types of visualisation in the context of software and is therefore a valuable basis for future work in this area.
2

A method for re-modularising legacy code

Burd, Elizabeth L. January 1999 (has links)
This thesis proposes a method for the re-modularisation of legacy COBOL. Legacy code often performs a number of functions that, if split, would improve software maintainability. For instance, program comprehension would benefit from a reduction in the size of the code modules. The method aims to identify potential reuse candidates from the functions re-modularised, and to ensure clear interfaces are present between the new modules. Furthermore, functionality is often replicated across applications, so the re-modularisation process can also seek to reduce this commonality and hence the overall amount of a company's code requiring maintenance. A 10-step method is devised which assembles a number of new and existing techniques into an approach suitable for use by staff without significant reengineering experience. Three main approaches are used throughout the method: the analysis of the PERFORM structure, the analysis of the data, and the use of graphical representations. Both top-down and bottom-up strategies to program comprehension are incorporated within the method, as are automatable and user-controlled processes for reuse candidate selection. Three industrial case studies are used to demonstrate and evaluate the method. The case studies range in size to give an indication of the scalability of the method, and are used to evaluate it on a step-by-step basis; both strong points and deficiencies are identified, as well as potential solutions to the deficiencies. A review is also presented to assess the three main approaches of the method: the analysis of the PERFORM and data structures, and the use of graphical representations. The review uses the process of software evolution for its evaluation, drawing on successive versions of COBOL software: the method is retrospectively applied to the earliest version, and the known changes identified from the following versions are used to evaluate the re-modularisations. Within the evaluation chapters a new link within the dominance tree is proposed, as is an approach for dealing with multiple dominance trees. The results show that each approach provides an important contribution to the method, as well as giving a useful insight (in the form of graphical representations) into the process of software evolution.
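
The abstract refers to analysing the PERFORM structure and to dominance trees without giving the mechanics. As a rough, hypothetical sketch of what such an analysis can involve (not the thesis's actual technique), the following Python builds a PERFORM call graph from COBOL paragraphs and computes dominator sets over it; the parsing shortcuts and the iterative dominator algorithm are assumptions made purely for illustration.

    import re
    from collections import defaultdict

    # Very simplified COBOL parsing: a line consisting only of a name and a period
    # starts a paragraph; "PERFORM <name>" inside it is treated as a call.
    # (Real COBOL needs far more care: THRU ranges, inline PERFORM UNTIL, copybooks, ...)
    PARA_RE = re.compile(r"^([A-Z0-9-]+)\s*\.\s*$", re.IGNORECASE)
    PERFORM_RE = re.compile(r"\bPERFORM\s+([A-Z0-9-]+)", re.IGNORECASE)

    def build_perform_graph(cobol_lines):
        """Map each paragraph name to the set of paragraphs it PERFORMs."""
        graph = defaultdict(set)
        current = None
        for line in cobol_lines:
            stripped = line.strip()
            header = PARA_RE.match(stripped)
            if header:
                current = header.group(1).upper()
                graph.setdefault(current, set())
            elif current:
                for callee in PERFORM_RE.findall(stripped):
                    graph[current].add(callee.upper())
        return graph

    def dominator_sets(graph, root):
        """Iterative dominator computation over the PERFORM call graph:
        paragraph d dominates n if every PERFORM chain from root to n goes through d."""
        nodes = set(graph)
        preds = defaultdict(set)
        for caller, callees in graph.items():
            for callee in callees:
                if callee in nodes:
                    preds[callee].add(caller)
        dom = {n: set(nodes) for n in nodes}
        dom[root] = {root}
        changed = True
        while changed:
            changed = False
            for n in nodes - {root}:
                # Nodes with no recorded predecessors (unreached paragraphs) just dominate themselves.
                new = {n} | (set.intersection(*(dom[p] for p in preds[n]))
                             if preds[n] else set())
                if new != dom[n]:
                    dom[n] = new
                    changed = True
        return dom

On such a graph, a paragraph that dominates a group of other paragraphs is a natural candidate for extraction as a separate module, which is the general role dominance trees play in re-modularisation.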
3

Supporting Source Code Feature Analysis Using Execution Trace Mining

2013 October 1900 (has links)
Software maintenance is a significant phase of the software life-cycle. Once a system is developed, the main focus shifts to maintenance to keep the system up to date. A system may be changed for various reasons such as fulfilling customer requirements, fixing bugs or optimizing existing code. Code needs to be studied and understood before any modification is done to it. Understanding code is a time-intensive and often complicated part of software maintenance that is supported by documentation and various tools such as profilers, debuggers and source code analysis techniques. However, most of these tools fail to assist in locating the portions of the code that implement the functionality the software developer is focusing on. Mining execution traces can help developers identify parts of the source code specific to the functionality of interest and at the same time help them understand the behaviour of the code.

We propose a use-driven hybrid framework of static and dynamic analyses to mine and manage execution traces to support software developers in understanding how the system's functionality is implemented through feature analysis. We express a system's use as a set of tests. In our approach, we develop a set of uses that represents how a system is used or how a user uses some specific functionality. Each use set describes a user's interaction with the system. To manage large and complex traces we organize them by system use and segment them by user interface events. The segmented traces are also clustered based on internal and external method types. The clusters are further categorized into groups based on application programming interfaces and active clones. To further support comprehension we propose a taxonomy of metrics which are used to quantify the trace.

To validate the framework we built a tool called TrAM that implements trace mining and provides visualization features. It can quantify the trace method information, mine similar code fragments called active clones, cluster methods based on types, categorize them based on groups and quantify their behavioural aspects using a set of metrics. The tool also lets users visualize the design and implementation of a system using images, filtering, grouping, event and system use, and presents them with values calculated using trace, group, clone and method metrics. We also conducted a case study on five different subject systems using the tool to determine the dynamic properties of the source code clones at runtime and answer three research questions using our findings. We compared our tool with trace mining tools and profilers in terms of features and scenarios. Finally, we evaluated TrAM by conducting a user study on its effectiveness, usability and information management.
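
The abstract describes segmenting traces at user-interface events and clustering methods into internal and external types, but not how. The short Python sketch below shows one plausible way to do this on a flat list of trace records; the record fields, the UI-event flag and the package-prefix test are assumptions for illustration, not TrAM's actual implementation.

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class TraceRecord:
        method: str        # fully qualified method name, e.g. "com.acme.ui.Button.onClick"
        is_ui_event: bool  # True when the record corresponds to a user-interface event

    def segment_by_ui_event(trace: List[TraceRecord]) -> List[List[TraceRecord]]:
        """Split a flat execution trace into segments, starting a new segment at each UI event."""
        segments, current = [], []
        for rec in trace:
            if rec.is_ui_event and current:
                segments.append(current)
                current = []
            current.append(rec)
        if current:
            segments.append(current)
        return segments

    def cluster_by_type(segment: List[TraceRecord], app_prefix: str) -> Dict[str, set]:
        """Bucket the methods of one segment into internal (application) and external (library) clusters."""
        clusters = {"internal": set(), "external": set()}
        for rec in segment:
            key = "internal" if rec.method.startswith(app_prefix) else "external"
            clusters[key].add(rec.method)
        return clusters

    # Example use on a tiny hand-made trace (purely illustrative data).
    trace = [
        TraceRecord("com.acme.ui.Button.onClick", True),
        TraceRecord("com.acme.core.Cart.addItem", False),
        TraceRecord("java.util.ArrayList.add", False),
        TraceRecord("com.acme.ui.Button.onClick", True),
        TraceRecord("com.acme.core.Cart.checkout", False),
    ]
    for seg in segment_by_ui_event(trace):
        print(cluster_by_type(seg, "com.acme."))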
4

SWordNet: Inferring Semantically Related Words from Software Context

Yang, Jinqiu January 2013 (has links)
Code search is an integral part of software development and program comprehension. The difficulty of code search lies in the inability to guess the exact words used in the code. Therefore, it is crucial for keyword-based code search to expand queries with semantically related words, e.g., synonyms and abbreviations, to increase the search effectiveness. However, relying on resources such as English dictionaries and WordNet to obtain semantically related words in software is of limited use, because many words that are semantically related in software are not semantically related in English. On the other hand, many words that are semantically related in English are not semantically related in software. This thesis proposes a simple and general technique to automatically infer semantically related words (referred to as rPairs) in software by leveraging the context of words in comments and code. In addition, we propose a ranking algorithm on the rPair results and study cross-project rPairs on two sets of software with similar functionality, i.e., media browsers and operating systems. We achieve a reasonable accuracy in nine large and popular code bases written in C and Java. Our further evaluation against the state of the art shows that our technique can achieve a higher precision and recall. In addition, the proposed ranking algorithm improves the rPair extraction accuracy by bringing correct rPairs to the top of the list. Our cross-project study successfully discovers overlapping rPairs among projects of similar functionality and finds that cross-project rPairs are more likely to be correct than project-specific rPairs. Since the cross-project rPairs are highly likely to be general for software of the same type, the discovered overlapping rPairs can benefit other projects of the same type that have not been analyzed.
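
The extraction technique itself is not detailed in the abstract. As a loose, hypothetical sketch of the general idea of inferring related word pairs from words that co-occur in a comment and in nearby identifiers, one could count co-occurrences as below; the tokenisation and the comment-to-identifier pairing are assumptions, not the thesis's algorithm.

    import re
    from collections import Counter
    from itertools import product

    CAMEL_OR_SEP = re.compile(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+")

    def split_identifier(identifier: str):
        """Split a camelCase / snake_case identifier into lower-case words."""
        return [w.lower() for w in CAMEL_OR_SEP.findall(identifier)]

    def candidate_pairs(comment: str, identifiers):
        """Pair each comment word with each word used in the nearby identifiers."""
        comment_words = {w.lower() for w in re.findall(r"[A-Za-z]+", comment)}
        code_words = {w for ident in identifiers for w in split_identifier(ident)}
        # A pair of distinct words, one from the comment and one from the code,
        # is a weak hint that the two words are related in this code base.
        return {(a, b) for a, b in product(comment_words, code_words) if a != b}

    def rank_pairs(contexts):
        """Count how often each candidate pair recurs across many comment/code contexts."""
        counts = Counter()
        for comment, identifiers in contexts:
            counts.update(candidate_pairs(comment, identifiers))
        return counts.most_common()

    # Tiny illustrative example: "remove" and "delete" co-occur in two contexts.
    contexts = [
        ("remove the entry from the cache", ["deleteEntry", "cacheMap"]),
        ("remove stale sessions", ["deleteSession"]),
    ]
    print(rank_pairs(contexts)[:5])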
5

Program Comprehension Support for Assembly Language: Assessing the Needs of Specialized Groups

Baldwin, Jennifer Ellen 29 April 2014 (has links)
Advances in software engineering and programming languages have had an impact on productivity, time to market, comprehension, maintenance, and evolution of software. Low-level systems have been largely overlooked in this arena, not only because of their complexities, but also because of the "bare bones" culture of this domain. This dissertation investigates the program comprehension needs of two stakeholder groups using different assembly languages: a mainframe development group and a malware analysis group. Exploratory interviews and surveys suggest that the groups' needs may be similar at a high level. However, a detailed study involving requirements elicitation and case studies reveals that the truth is much more complicated. As a proof of concept, we have created the AVA (Assembly Visualization and Analysis) framework, which is independent of the underlying assembly language. Despite this independence, tools within AVA could not be applied with equal efficacy, even just within these two groups. This dissertation shows that there exist fundamental differences not only in the highly-specialized nature of each group's work, but also in the assembly languages themselves. This reality necessitates a disjoint set of tools that cannot be consolidated into a universally applicable framework.
6

Improving the Effectiveness of Software Visualization by Considering Developers’ Cognitive Behaviors and Psychological Principles

Rizkallah, Lane 10 April 2023 (has links)
No description available.
7

Crystallizing Application Configurations

Zhang, Zanqing January 2006 (has links)
Software applications have both static and dynamic dependencies. Static dependencies are those derived from the source code. Dynamic runtime dependencies are established at runtime and may be based on information external to the source code, such as configuration files. Flexible applications commonly rely on configuration to adapt to diverse environments. An application's configuration encodes runtime dependencies between the various parts of the application. Reverse engineering tools have traditionally been based solely on static dependencies extracted from the source code. Neglecting dynamic dependencies encoded in an application's configuration can result in incorrect or incomplete program comprehension. Unfortunately, many applications store their configuration in an ad hoc, unstructured format from which it is not feasible to extract runtime dependencies by traditional reverse engineering. Our work takes advantage of well structured, published configuration formats, such as that of J2EE applications. Using these formats we are able to extend reverse engineering to analyse this previously neglected information. We introduce a technique called crystallization, which extracts configuration facts that encode dynamic dependencies. We use these recovered facts to predict and validate dynamic dependencies. Crystallizing configurations has the potential to increase developer productivity by providing better program comprehension.
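
As a concrete, simplified illustration of what extracting configuration facts from a structured J2EE descriptor might look like, the following Python pulls servlet-to-URL mappings out of a web.xml deployment descriptor; the fact format and the helper names are assumptions made here for illustration, not the thesis's actual representation.

    import xml.etree.ElementTree as ET

    def extract_servlet_facts(web_xml_path: str):
        """Recover (url-pattern, servlet-class) facts from a J2EE web.xml descriptor."""
        root = ET.parse(web_xml_path).getroot()

        # web.xml files are usually namespaced; strip the namespace for simpler matching.
        def local(tag):
            return tag.split("}", 1)[-1]

        classes, patterns = {}, {}
        for elem in root:
            if local(elem.tag) == "servlet":
                name = cls = None
                for child in elem:
                    if local(child.tag) == "servlet-name":
                        name = (child.text or "").strip()
                    elif local(child.tag) == "servlet-class":
                        cls = (child.text or "").strip()
                if name and cls:
                    classes[name] = cls
            elif local(elem.tag) == "servlet-mapping":
                name = pattern = None
                for child in elem:
                    if local(child.tag) == "servlet-name":
                        name = (child.text or "").strip()
                    elif local(child.tag) == "url-pattern":
                        pattern = (child.text or "").strip()
                if name and pattern:
                    patterns.setdefault(name, []).append(pattern)

        # Join the two fact tables: each URL pattern is a dynamic entry point
        # into the class that web.xml binds to it.
        return [(url, classes[name])
                for name, urls in patterns.items() if name in classes
                for url in urls]

    # Example: print the recovered (url-pattern, servlet-class) dependencies.
    # for url, cls in extract_servlet_facts("WEB-INF/web.xml"):
    #     print(f"{url} -> {cls}")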
8

Programming paradigms, information types and graphical representations : empirical investigations of novice program comprehension

Good, Judith January 1999 (has links)
This thesis describes research into the role of various factors in novice program comprehension, including the underlying programming paradigm, the representational features of the programming language, and the various types of information which can be derived from the program. The main postulate of the thesis is that there is no unique method for understanding programs, and that program comprehension will be influenced by, among other things, the way in which programs are represented, both semantically and syntactically. This idea has implications for the learning of programming, particularly in terms of how these concepts should be embodied.

The thesis is focused around three empirical studies. The first study, based on the so-called "information types" studies, challenged the idea that program comprehension is an invariant process over languages, and suggested that programming language will have a differential effect on comprehension, as evidenced by the types of information which novices are able to extract from a program. Despite the use of a markedly different language from earlier studies, the results were broadly similar. However, it was suggested that there are other factors additional to programming notation which intervene in the comprehension process, and which cannot be discounted. Furthermore, the study highlighted the need to tie the hypotheses about information extraction more closely to the programming paradigm.

The second study introduced a graphical component into the investigation, and looked at the way in which visual representations of programs combine with programming paradigm to influence comprehension. The mis-match conjecture, which suggests that tasks requiring information which is highlighted by a notation will be facilitated relative to tasks where the information must be inferred, was applied to programming paradigm. The study showed that the mis-match effect can be overridden by other factors, most notably subjects' prior experience and the programming culture in which they are taught.

The third study combined the methodologies of the first two studies to look at the mis-match conjecture within the wider context of information types. Using graphical representations of the control flow and data flow paradigms, it showed that, despite a bias toward one paradigm based on prior experience and culture, programming paradigm does influence the way in which the program is understood, resulting in improved performance on tasks requiring information which the paradigm is hypothesised to highlight. Furthermore, this effect extends to groups of information which could be said to be theoretically related to the information being highlighted.

The thesis also proposes a new and more precise methodology for the analysis of students' accounts of their comprehension of a program, a form of data which is typically derived from the information types studies. It then shows how an analysis of this qualitative data can be used to provide further support for the quantitative results. Finally, the thesis suggests how the core results could be used to develop computer-based support environments for novice visual programming, and provides other suggestions for further work.
