1. Software Journeys. Filip, Adrian, 06 November 2014.
Getting familiar with a code base is a challenging and therefore resource-intensive activity, and the larger the code base, the larger the expenditure. We consider software development in the case of established software built by mid-sized to large, mature teams. This thesis explores a new way of documenting code that could increase development productivity. The method consists of creating small, dynamically ordered sets of code locations called Landmarks; each such set, called a Journey, is significant for a single feature. The landmarks carry documentation about system behaviour and qualitative information about system state at the moment execution reaches their locations. This new type of documentation is lightweight and does not require additional software systems for its management: it is stored and shared seamlessly through the existing source control system.
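As a rough illustration of how light such documentation can be, a landmark and a journey could be captured as small records kept under version control next to the code. The sketch below is hypothetical; the field names and the rendering format are not taken from the thesis.

    from dataclasses import dataclass, field

    @dataclass
    class Landmark:
        """One code location plus the notes attached to it (hypothetical schema)."""
        file: str            # source file containing the location
        line: int            # line number of the location
        behaviour_note: str  # what the system is doing when execution reaches this point
        state_note: str = "" # qualitative description of the relevant state at this point

    @dataclass
    class Journey:
        """A small, ordered set of landmarks documenting one feature."""
        feature: str
        landmarks: list = field(default_factory=list)

        def to_text(self) -> str:
            """Render the journey as a plain-text file that can live in the repository."""
            lines = [f"Journey: {self.feature}"]
            for i, lm in enumerate(self.landmarks, start=1):
                lines.append(f"{i}. {lm.file}:{lm.line} - {lm.behaviour_note}")
                if lm.state_note:
                    lines.append(f"   state: {lm.state_note}")
            return "\n".join(lines)

    # Example: documenting a (made-up) login feature as a two-landmark journey.
    journey = Journey("user login", [
        Landmark("auth/session.py", 42, "credentials validated against the user store"),
        Landmark("auth/session.py", 97, "session token issued", "token cache is warm"),
    ])
    print(journey.to_text())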
An experiment was performed to gauge the efficiency of this method against current development practice, capturing the difference in productivity between developers who used the approach and developers who did not. The results can be qualitatively interpreted as pointing towards an overall increase in productivity for the participating developers who used the new approach.
2. iTrace: An Infrastructure to Support Eye-Tracking Studies in Integrated Development Environments. Bryant, Corey A., 23 July 2021.
No description available.
3. HDPV: Highly Interactive, Faithful, In-Vivo Runtime State Visualization for Software Programs. Sundararaman, Jaishankar, 04 August 2008.
Program visualization systems use graphics and animation to represent the behavior of software programs. These systems represent different aspects of a program, such as its source code, control flow, data structures, and runtime state. Representing the actual runtime state is useful in a variety of applications, including program understanding, visual debugging, and pedagogy. However, existing state-of-the-art program visualization systems are limited in that they (1) do not provide sufficient interactive capabilities to the user; (2) do not faithfully represent the runtime state of the program; (3) do not allow users to apply different layout strategies to the visualization; and (4) are tied to a specific programming language.
To address these limitations, this thesis presents HDPV, a program state visualization system that can visualize any C, C++, or Java program. HDPV is based on a canonical state model that represents the memory layout of the program as a graph of memory blocks. This decouples the visualization from the programming language in which the program is written, making the system language independent. HDPV supports a host of interactive features that let the user selectively explore different parts of the program's runtime state, and its novel layout strategies can be customized through user interaction. A list of use cases shows that HDPV applies to a wide variety of tasks, including, but not limited to, understanding programs that use basic computer science concepts, demonstrating algorithm implementations, and debugging software programs.
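In essence, such a canonical state model reduces a program's state to a graph whose nodes are memory blocks and whose edges are references between them. The following is a simplified, hypothetical sketch of that idea in Python (a real system would obtain the data from a debugger or VM back end); it is not HDPV's actual model or API.

    from dataclasses import dataclass, field

    @dataclass
    class MemoryBlock:
        """A language-neutral node: one allocated record and its scalar fields."""
        block_id: int
        type_name: str
        fields: dict = field(default_factory=dict)      # scalar field name -> value
        references: dict = field(default_factory=dict)  # reference field name -> target block_id

    def snapshot(roots):
        """Build the block graph by walking object attributes from a set of root objects."""
        blocks, worklist = {}, list(roots)
        while worklist:
            obj = worklist.pop()
            if id(obj) in blocks:
                continue
            block = MemoryBlock(id(obj), type(obj).__name__)
            blocks[id(obj)] = block
            for name, value in vars(obj).items():
                if isinstance(value, (int, float, str, bool, type(None))):
                    block.fields[name] = value           # scalar: stored inside the block
                else:
                    block.references[name] = id(value)   # reference: edge to another block
                    worklist.append(value)
        return blocks

    class Node:
        def __init__(self, value, nxt=None):
            self.value, self.next = value, nxt

    # A two-element linked list becomes two blocks connected by a "next" reference.
    graph = snapshot([Node(1, Node(2))])
    for b in graph.values():
        print(b.type_name, b.fields, "->", list(b.references))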
4. Reverse Engineering Behavioural Models by Filtering out Utilities from Execution Traces. Braun, Edna, 10 September 2013.
An important issue in software evolution is the time and effort needed to understand existing applications. Reverse engineering software to recover behavioural models is difficult, and it is further complicated by the lack of a standardized way of extracting and visualizing knowledge. In this thesis, we study a technique for automatically extracting static and dynamic data from software, filtering and analysing the data, and visualizing the behavioural model of a selected feature of a software application. We also investigate the usefulness of the generated diagrams as documentation for the software.
We present a literature review of studies that have used static and dynamic data analysis for software comprehension. A set of criteria is created, and each approach, including this thesis’ technique, is compared using those criteria.
We propose an approach that simplifies lengthy traces by filtering out software components that are too low level to contribute to a high-level picture of the selected feature. Static information is used to identify and remove small, simple (or uncomplicated) software components from the trace. We define a utility method as any element of a program designed for the convenience of the designer and implementer and intended to be accessed from multiple places within a certain scope of the program, and utilityhood as the extent to which a particular method can be considered a utility. Utilityhood is calculated using different combinations of selected dynamic and static variables, and methods with high utilityhood values are detected and removed iteratively. Eliminating utilities leaves a much smaller trace, which is then visualized using the Use Case Map (UCM) notation, a scenario language used to specify and explain the behaviour of complex systems.
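To make the idea concrete, the sketch below computes a toy utilityhood score from fan-in (how many distinct callers a method has in the trace) and static size, then drops the highest-scoring methods iteratively. The actual variables and combinations studied in the thesis differ; treat this purely as an illustration.

    from collections import defaultdict

    def utilityhood(method, callers, loc, complexity):
        """Toy score: many distinct callers plus a small, simple body suggests a utility."""
        fan_in = len(callers[method])
        return fan_in / (1.0 + loc[method] * complexity[method])

    def filter_trace(trace, loc, complexity, threshold=0.5, max_rounds=10):
        """Iteratively remove calls to methods whose utilityhood exceeds the threshold.
        `trace` is a list of (caller, callee) call events."""
        for _ in range(max_rounds):
            callers = defaultdict(set)
            for caller, callee in trace:
                callers[callee].add(caller)
            utilities = {m for m in callers
                         if utilityhood(m, callers, loc, complexity) > threshold}
            if not utilities:
                break  # nothing left to remove; the reduced trace has stabilized
            trace = [(c, m) for c, m in trace if m not in utilities]
        return trace

    # Tiny made-up example: "log" is short, simple, and called from everywhere.
    trace = [("main", "parse"), ("main", "log"), ("parse", "log"), ("render", "log")]
    loc = {"parse": 40, "log": 3, "render": 25}
    complexity = {"parse": 6, "log": 1, "render": 4}
    print(filter_trace(trace, loc, complexity))  # calls to the utility "log" are filtered out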
By doing so, we can identify the algorithm that generates the UCM closest to the designers' mental model. Although no single algorithm was best in all cases, there is a trend: three of the four best algorithms (out of the eight investigated) used method complexity and method lines of code among their parameters. We also validated the results by comparing them against a list of methods provided by the creators of the software and computing precision and recall. Seven of the eight participants agreed or strongly agreed that using UCM diagrams to visualize reduced traces is a valid approach, and none disagreed.
5. Facilitating comprehension of Swift programs. Chernenko, Andrii, January 2018.
Program comprehension is the process of gaining knowledge about a software system by extracting it from the source code or by observing the system's behaviour at runtime. When documentation is unavailable, this is often the only reliable source of knowledge about the system, and the fact that up to 50% of total maintenance effort is spent understanding the system makes it even more important. The source code of large software systems contains thousands, sometimes millions, of lines of code, motivating the need for automation, which can be achieved with program comprehension tools. This makes comprehension tools an essential factor in the adoption of new programming languages. This work proposes a way to fill this gap in the ecosystem of Swift, a new, innovative programming language that aims to cover a wide range of applications while being safe, expressive, and performant. The proposed solution is to bridge the gap between Swift and VizzAnalyzer, a program analysis framework featuring a range of analyses and visualizations as well as a modular architecture that makes adding new analyses and visualizations easier. The idea is to define a formal model for representing Swift programs and to map it to the common program model used by VizzAnalyzer as the basis for its analyses and visualizations. In addition, this work discusses the differences between Swift and the programming languages already supported by VizzAnalyzer, as well as the practical aspects of extracting models of Swift programs from their source code.
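Conceptually, the bridge comes down to two steps: extracting a Swift-specific model from source code and mapping its elements onto the analyzer's language-neutral program model. The sketch below illustrates the shape of such a mapping with entirely hypothetical type names on both sides; it does not reflect VizzAnalyzer's real API or the thesis's exact model.

    from dataclasses import dataclass, field

    # Swift-side model (hypothetical): what a front end might extract from source files.
    @dataclass
    class SwiftMethod:
        name: str
        calls: list = field(default_factory=list)   # names of methods it invokes

    @dataclass
    class SwiftType:
        name: str
        kind: str                                   # "class", "struct", "protocol", "enum"
        methods: list = field(default_factory=list)

    # Common-model side (hypothetical): the language-neutral graph the analyses consume.
    @dataclass
    class CommonNode:
        node_id: str
        label: str
        kind: str

    @dataclass
    class CommonEdge:
        source: str
        target: str
        relation: str                               # e.g. "contains" or "calls"

    def map_to_common_model(swift_types):
        """Flatten Swift types and methods into language-neutral nodes and edges."""
        nodes, edges = [], []
        for t in swift_types:
            nodes.append(CommonNode(t.name, t.name, t.kind))
            for m in t.methods:
                mid = f"{t.name}.{m.name}"
                nodes.append(CommonNode(mid, m.name, "method"))
                edges.append(CommonEdge(t.name, mid, "contains"))
                for callee in m.calls:
                    edges.append(CommonEdge(mid, callee, "calls"))
        return nodes, edges

    nodes, edges = map_to_common_model(
        [SwiftType("Session", "class", [SwiftMethod("login", ["Keychain.store"])])])
    print(len(nodes), "nodes,", len(edges), "edges")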
6. Reverse Engineering Behavioural Models by Filtering out Utilities from Execution Traces. Braun, Edna, January 2013.
7. A classifier model for the Apache Project mailing list that combines a neurolinguistic dictionary with an ontology. Farias, Mário André de Freitas, 23 December 2011.
Electronic mailing lists and discussion groups are commonly used by programmers to discuss and refine the tasks to be performed during software development. Open Source Software (OSS) projects use these lists as their primary tool for collaboration and cooperation. In such projects the developers are usually spread around the world, so means of interaction and communication are needed to ensure collaboration among them, as well as efficiency in building and maintaining projects of this size. Mailing lists can therefore be an important data source for discovering useful information about developers' behaviour patterns, of direct interest to project managers. Neurominer is a text mining tool that determines the Preferred Representational System (PRS) of software developers in a specific context; its novelty is the combination of Neuro-Linguistic Programming (NLP) theory with text mining and statistical techniques. In this context, we propose extending the tool by applying ontology techniques to its dictionary, allowing sensory predicates to be combined with software engineering terms and giving the dictionary greater contextual power. Text mining combined with NLP theory and an ontology is thus a natural candidate for a solution that improves the mining of textual information from mailing lists in order to support software project managers in decision making. This combination produced significant results, yielding an efficient and effective solution.
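One way to picture the combination: a base dictionary maps sensory predicates to representational systems (visual, auditory, kinesthetic), and an ontology of software engineering terms links domain vocabulary to those predicates so that domain phrasing still lands in the right category. Everything below, including the sample terms, is an invented illustration and not Neurominer's actual dictionary or ontology.

    # Base dictionary: sensory predicate -> preferred representational system.
    SENSORY = {
        "see": "visual", "show": "visual", "picture": "visual",
        "hear": "auditory", "sounds": "auditory", "discuss": "auditory",
        "feel": "kinesthetic", "grasp": "kinesthetic", "handle": "kinesthetic",
    }

    # Tiny ontology fragment: software engineering term -> related sensory predicate.
    ONTOLOGY = {
        "diagram": "picture", "uml": "picture", "trace": "see",
        "standup": "discuss", "review": "discuss",
        "refactor": "handle", "debug": "grasp",
    }

    def classify(message):
        """Count hits per representational system for one mailing-list message."""
        counts = {"visual": 0, "auditory": 0, "kinesthetic": 0}
        for word in message.lower().split():
            word = word.strip(".,!?")
            predicate = word if word in SENSORY else ONTOLOGY.get(word)
            if predicate in SENSORY:
                counts[SENSORY[predicate]] += 1
        return counts

    print(classify("Can you show me the UML diagram before we discuss the review?"))
    # -> {'visual': 3, 'auditory': 2, 'kinesthetic': 0}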
8. GiveMe Effort: a framework to support effort estimation in software maintenance and comprehension activities. Miguel, Marcos Alexandre, 01 September 2016.
Many organizations encounter problems when estimating the effort of software maintenance activities. When effort estimates are poorly defined or inaccurate, the consequences show up directly in the delivered software, causing customer dissatisfaction or reduced product quality. The success or failure of projects depends on the accuracy of the effort estimates and of the schedule of the activities involved. The rise of agile methods in software development has presented many opportunities and challenges for researchers and practitioners; a key challenge among them is estimating effort for maintenance activities in agile software development. In this context, this work presents a framework, called GiveMe Effort, that supports effort estimation for software maintenance activities using historical data and software comprehension information.
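As a hedged illustration of how historical data and comprehension information might be combined, the snippet below estimates the effort of a new maintenance task as the weighted average effort of the most similar past tasks, where similarity mixes task size with a comprehension-related attribute (how familiar the team is with the affected code). The features, weights, and numbers are invented and are not GiveMe Effort's actual model.

    def similarity(a, b):
        """Inverse-distance similarity over (size, familiarity); hypothetical weighting."""
        d = abs(a["size"] - b["size"]) + 2 * abs(a["familiarity"] - b["familiarity"])
        return 1.0 / (1.0 + d)

    def estimate_effort(new_task, history, k=2):
        """Analogy-based estimate: weighted mean effort of the k most similar past tasks."""
        ranked = sorted(history, key=lambda t: similarity(new_task, t), reverse=True)[:k]
        total = sum(similarity(new_task, t) for t in ranked)
        return sum(similarity(new_task, t) * t["effort_hours"] for t in ranked) / total

    history = [
        {"size": 3, "familiarity": 0.9, "effort_hours": 5},
        {"size": 8, "familiarity": 0.2, "effort_hours": 30},
        {"size": 5, "familiarity": 0.5, "effort_hours": 14},
    ]
    print(round(estimate_effort({"size": 6, "familiarity": 0.4}, history), 1))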
9. Supporting Source Code Comprehension During Software Evolution and Maintenance. Alhindawi, Nouh Talal, 30 July 2013.
No description available.
10. A unified framework for the comprehension of software's time dimension. Benomar, Omar.
Software systems are getting more and more complex, and they are developed by teams that change constantly and do not necessarily work in the same location. Moreover, most software nowadays is recycled rather than developed from scratch. Comprehension, inherent to any maintenance task, requires analyzing several dimensions of the software in parallel. Time is one of these dimensions: software changes during its evolution and during its execution, and these changes only make sense when analyzed together with other dimensions of the software, such as its structure or bug information. Multidimensional analysis is a difficult problem, but certain methods can work around the difficulty. Semi-automatic approaches such as software visualization let the user take part in the analysis, exploring and guiding the search for information. In the first stage of the thesis, we apply visualization techniques to better understand the dynamics of software during evolution and execution. Changes over time are represented as heat maps, so the same graphical representation is used to visualize changes during evolution and changes during execution.
Another category of approaches for understanding a program's dynamic behaviour over time relies on heuristics. In the second stage of the thesis, we identify execution phases and evolution patterns using the same approach, namely search-based optimisation. Here the premise is that there is an inherent cohesion among events that allows subsets of them to be isolated as phases; this cohesion hypothesis is then defined specifically for code change events (evolution) and for state change events (execution). The overall objective of the thesis is to study the unification of these two dimensions of time, evolution and execution, in an effort to bring together two research domains that address the same category of problems from two different perspectives.
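To illustrate the cohesion premise, the toy segmentation below scans a sequence of change events, each represented as the set of entities it touches (files during evolution, objects during execution), and starts a new phase whenever the next event overlaps too little with the current phase. The thesis uses search-based optimisation rather than this greedy rule, so the code is only an analogy.

    def jaccard(a, b):
        """Set overlap, used here as a stand-in cohesion measure between footprints."""
        return len(a & b) / len(a | b) if a | b else 0.0

    def segment_phases(events, min_cohesion=0.2):
        """Greedy segmentation: a new phase starts when cohesion with the current
        phase's accumulated footprint drops below the threshold."""
        phases, current, footprint = [], [], set()
        for event in events:
            if current and jaccard(footprint, event) < min_cohesion:
                phases.append(current)
                current, footprint = [], set()
            current.append(event)
            footprint |= event
        if current:
            phases.append(current)
        return phases

    # Four change events group into two phases: parser work, then UI work.
    events = [{"parser.c", "lexer.c"}, {"parser.c"}, {"ui.c", "menu.c"}, {"menu.c", "icons.c"}]
    print(segment_phases(events))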