About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Reverse Engineering Object-Oriented Systems into Umple: An Incremental and Rule-Based Approach

Garzón, Miguel Alejandro January 2015 (has links)
This thesis investigates a novel approach to reverse engineering, in which modeling information such as UML associations, state machines and attributes is incrementally added to code written in Java or C++, while maintaining the system in a textual format. Umple is a textual representation that blends UML modeling with programming language code. The approach, called umplification, produces a program with behavior identical to the original one, but written in Umple and enhanced with model-level abstractions. As the resulting program is Umple code, our approach eliminates the distinction between code and model. We implemented automated umplification in a tool called the Umplificator. The tool is rule-driven: code, including Umple code, is parsed and processed into an internal representation, which is then operated on by rules; transformed textual code and model, in Umple, are then generated. The rules used to transform code to model have been iteratively refined by using the tool on a variety of open-source software systems. The thesis consists of three main parts. The first part (Chapters 1 and 2) presents the research questions and research methodology, and introduces Umple and the background necessary to understand the rest of the thesis. The umplification method is presented at increasing levels of detail through Chapters 3 and 4. Chapters 5 and 6 present the tool and the evaluation of our approach, respectively. An analysis of related work, and a comparison to our own, appears in Chapter 7. Finally, conclusions and future work directions are presented in Chapter 8.
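To give a flavor of the kind of rule-driven transformation the abstract describes, here is a minimal, hypothetical sketch in Python of one such rule: it spots a Java collection-typed field and rewrites it as an Umple-style association declaration. The real Umplificator operates on a parsed internal representation, not regular expressions, and all names below are illustrative only.

```python
import re

# Toy "umplification" rule in the spirit described above: find Java
# collection-typed fields and rewrite them as Umple-style association
# declarations. The real Umplificator applies rules to a parsed internal
# representation; this regex sketch is illustrative only.
ASSOCIATION_FIELD = re.compile(r"private\s+List<(\w+)>\s+(\w+)\s*;")

def umplify_associations(java_source):
    """Return Umple-style association lines for each collection field found."""
    lines = []
    for target_class, role in ASSOCIATION_FIELD.findall(java_source):
        # In Umple, an association declared inside a class replaces the
        # hand-written field plus its accessor code.
        lines.append(f"  1 -- * {target_class} {role};")
    return lines

java = """
class Customer {
    private List<Order> orders;
}
"""
print("class Customer {")
print("\n".join(umplify_associations(java)))
print("}")
```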
22

PythonVis : Software Visualisation in Virtual Reality for Program Comprehension

Larsson, Mattias January 2022 (has links)
This paper presents PythonVis, a novel Virtual Reality (VR) software-visualization prototype for program comprehension. The motivation for PythonVis is to leverage the affordances of VR, together with the debugger, to support software developers' comprehension of unfamiliar software. An experimental study with follow-up interviews was conducted with 10 participants, comparing PythonVis to a desktop setup. The results indicate that PythonVis could be useful for getting a better overview of a whole code base. Limitations are addressed and further studies are suggested.
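The abstract does not spell out how debugger data feeds the visualization. As a hedged illustration of the general technique (an assumption, not PythonVis's actual implementation), this sketch uses Python's built-in tracing hook, sys.settrace, the mechanism underlying standard Python debuggers, to record the call/return events such a tool could render:

```python
import sys

# Sketch of debugger-style data capture: sys.settrace (the hook under
# standard Python debuggers) reports every call and return, which is enough
# to reconstruct the call tree a visualization could render.
events = []

def tracer(frame, event, arg):
    if event in ("call", "return"):
        events.append((event, frame.f_code.co_name, frame.f_lineno))
    return tracer  # keep tracing inside nested frames

def helper(n):
    return n * 2

def main():
    return sum(helper(i) for i in range(3))

sys.settrace(tracer)
main()
sys.settrace(None)

for event, name, lineno in events:
    print(f"{event:6} {name} (line {lineno})")
```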
23

Methodologies, Techniques, and Tools for Understanding and Managing Sensitive Program Information

Liu, Yin 20 May 2021 (has links)
Exfiltrating or tampering with certain business logic, algorithms, and data can harm the security and privacy of both organizations and end users. Collectively referred to as sensitive program information (SPI), these building blocks are part and parcel of modern software systems in domains ranging from enterprise applications to cyberphysical setups. Hence, protecting SPI has become one of the most salient challenges of modern software development. However, several fundamental obstacles stand in the way of effective SPI protection: (1) understanding and locating the SPI for any realistically sized codebase by hand is hard; (2) manually isolating SPI to protect it is burdensome and error-prone; (3) if SPI is passed across distributed components within and across devices, it becomes vulnerable to security and privacy attacks. To address these problems, this dissertation research innovates in the realm of automated program analysis, code transformation, and novel programming abstractions to improve the state of the art in SPI protection. Specifically, this dissertation comprises three interrelated research thrusts that: (1) design and develop program analysis and programming support for inferring the usage semantics of program constructs, with the goal of helping developers understand and identify SPI; (2) provide powerful programming abstractions and tools that transform code automatically, with the goal of helping developers effectively isolate SPI from the rest of the codebase; (3) provide programming mechanisms for distributed managed execution environments that hide SPI, with the goal of enabling components to exchange SPI safely and securely. The novel methodologies, techniques, and software tools of this dissertation research, supported by programming abstractions, automated program analysis, and code transformation, lay the groundwork for establishing a secure, understandable, and efficient foundation for protecting SPI. This dissertation is based on 4 conference papers, presented at TrustCom'20, GPCE'20, GPCE'18, and ManLang'17, as well as 1 journal paper, published in the Journal of Computer Languages (COLA). / Doctor of Philosophy / Some portions of a computer program can be sensitive; these are referred to as sensitive program information (SPI). By compromising SPI, attackers can harm users' security and privacy. It is hard for developers to identify and protect SPI, particularly in large programs. This dissertation introduces novel methodologies, techniques, and software tools that facilitate software development tasks concerned with locating and protecting SPI.
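The first thrust infers the usage semantics of program constructs to help identify SPI. As a toy, hedged stand-in for that analysis (the heuristic and name list are invented, not the dissertation's technique), the following sketch walks a Python AST and flags assignments whose names hint at secrets:

```python
import ast

# Toy illustration of locating candidate sensitive program information:
# flag assignments whose target names suggest secrets. This name-based
# heuristic merely sketches the idea; the dissertation infers usage
# semantics with real program analysis.
SENSITIVE_HINTS = ("password", "secret", "key", "token")

def find_spi_candidates(source):
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name) and any(
                    hint in target.id.lower() for hint in SENSITIVE_HINTS
                ):
                    hits.append((target.id, node.lineno))
    return hits

program = "api_token = load()\ncount = 0\ndb_password = env('DB')\n"
for name, line in find_spi_candidates(program):
    print(f"possible SPI: {name!r} at line {line}")
```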
24

Reverse Software Engineering Large Object Oriented Software Systems using the UML Notation

Ramasubbu, Surendranath 30 April 2001 (has links)
A common problem traditionally experienced by the software engineering community has been that of understanding legacy code. A decade ago, legacy code referred to programs written in COBOL, typically for large mainframe systems. However, current software developers predominantly use object-oriented languages like C++ and Java. The belief prevalent among software developers and object philosophers that comprehending object-oriented software would be relatively easy has turned out to be a myth. Tomorrow's legacy code is being written today, since object-oriented programs are even more complex and difficult to comprehend unless rigorously documented. Reverse engineering is a methodology that greatly reduces the time, effort and complexity involved in solving the program comprehension problem. This thesis deals with reverse engineering complex object-oriented software and the experiences with a sample case study. An extensive survey of the literature and contemporary research on reverse engineering and program comprehension was undertaken as part of this thesis work. An Energy Information System (EIS) application created by a leading energy service provider, one that is being used extensively in the real world, was chosen as a case study. Reverse engineering this industry-strength Java application necessitated the definition of a formal process. An intuitive Reverse Engineering Process (REP) was defined and used for the reverse engineering effort. The learning experiences gained from this case study are discussed in this thesis. / Master of Science
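As a hedged illustration of the mechanical core of recovering UML class diagrams from source, this sketch scrapes class declarations and inheritance from Java text and emits PlantUML notation. A real reverse-engineering process, including the thesis's REP, parses the code properly and also recovers associations, multiplicities, and behavior; the example input is invented.

```python
import re

# Sketch of the mechanical core of class-diagram recovery: scrape class
# declarations and inheritance from Java text and emit PlantUML notation.
CLASS_DECL = re.compile(r"class\s+(\w+)(?:\s+extends\s+(\w+))?")

def to_plantuml(java_source):
    lines = ["@startuml"]
    for name, parent in CLASS_DECL.findall(java_source):
        lines.append(f"class {name}")
        if parent:
            lines.append(f"{parent} <|-- {name}")  # inheritance arrow
    lines.append("@enduml")
    return "\n".join(lines)

java = "class Meter {} class SmartMeter extends Meter {}"
print(to_plantuml(java))
```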
25

Understanding and Maintaining C++ Generic Libraries

Sutton, Andrew 09 July 2010 (has links)
No description available.
26

Automatic Generation and Assessment of Source-code Method Summaries

Abid, Nahla Jamal 24 April 2017 (has links)
No description available.
27

A Comprehensive Examination of Factors for Assessing the Quality of Method Names in Source Code

Alsuhaibani, Reem Saleh 03 May 2022 (has links)
No description available.
28

Emotion and Cognition Analysis of Intro and Senior CS Students in Software Engineering

Evans, Justin 01 June 2021 (has links) (PDF)
The software engineering community has advanced the field in the past few decades toward making the software development life cycle more efficient, robust, and streamlined. Advances such as better integrated development environments and agile workflows have made the process more efficient as well as more flexible. Despite these many achievements, software engineers still spend a great deal of time writing, reading and reviewing code. These tasks require a lot of attention from the engineer, with many different variables affecting performance. In recent years many researchers have investigated how emotion and the way we think about code affect our ability to write and understand another's code. In this work we look at how developers' emotions affect their ability to solve software engineering tasks such as code writing and review. We also investigate how, and to what extent, emotions differ with the software engineering experience of the subject. Our methodology uses the Emotiv Epoc+ to take readings of subjects' brain patterns while they perform code reviews and write basic code. We then examine how the electrical signals and patterns differ with the participants' experience in the field, as well as with their efficiency and correctness in solving the software engineering tasks. We found that senior students had a much smaller distribution of emotions than novices, with a few distinct emotion groups emerging. The novices, while still groupable, had a much wider dispersion of the recorded emotion aspects.
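A hedged sketch of the dispersion comparison reported above: given per-subject vectors of emotion metrics (for example, the performance metrics the Emotiv headset reports), measure each group's spread around its centroid. The numbers below are synthetic stand-ins that mimic the reported pattern, not the study's data.

```python
import numpy as np

# Sketch of the group-dispersion comparison: given per-subject vectors of
# emotion metrics, measure each group's spread as the mean distance from
# its centroid. Synthetic data only (novices made more dispersed).
rng = np.random.default_rng(0)
novices = rng.normal(0.5, 0.20, size=(12, 5))  # 12 subjects, 5 metrics
seniors = rng.normal(0.5, 0.08, size=(12, 5))

def dispersion(group):
    """Mean Euclidean distance of each subject from the group centroid."""
    centroid = group.mean(axis=0)
    return float(np.linalg.norm(group - centroid, axis=1).mean())

print(f"novice dispersion: {dispersion(novices):.3f}")
print(f"senior dispersion: {dispersion(seniors):.3f}")
```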
29

Unveiling Source Code Latent Knowledge: Discovering Program Topoi

Ieva, Carlo 23 November 2018 (has links)
The development of large-scale open-source projects involves many distinct developers who contribute to the creation of large code repositories. As an example, the July 2017 version of the Linux kernel (version 4.12), which represents nearly 20 MLOC (million lines of code), required the effort of 329 developers, a growth of 1 MLOC over the previous version. These figures show that when a new developer wishes to become a contributor, they face the problem of understanding an enormous quantity of code, organized as an unclassified collection of files and functions. Organizing code in a more abstract, more human-oriented way is an endeavor that has attracted the interest of the software engineering community. Unfortunately, there is no miracle recipe, nor any known tool, that can offer concrete help in managing large code bases. We propose an effective approach to this problem by automatically extracting program topoi, that is, ordered lists of function names associated with an index of relevant words. How does the ranking work? Our approach, named FEAT, does not treat all functions as equal: some of them are regarded as a gateway to understanding the high-level observable capabilities of a program. We call these special functions entry points, and the ranking criterion is based on the distance between the program's functions and the entry points. Our approach can be summarized in three main steps: 1) Preprocessing. The source code, together with its comments, is analyzed to generate, for each code unit (a procedure in a procedural language or a method in an object-oriented one), a corresponding textual document. A graph representation of the caller-callee relation (the call graph) is also created at this step. 2) Clustering. Code units are grouped by means of hierarchical agglomerative clustering (HAC). 3) Entry-point selection. Within the context of each cluster, code units are ranked, and those placed at the higher positions form a program topos. The contribution of this thesis is threefold: 1) FEAT is a new, fully automated approach for extracting program topoi, based on clustering units directly from source code. To exploit HAC, we propose an original hybrid distance combining structural and semantic elements of the source code. HAC requires selecting one partition among all those produced throughout the clustering process; our approach uses a hybrid criterion based on graph modularity and textual coherence to select the appropriate parameter automatically. 2) Clusters of code units must then be analyzed to extract program topoi. We define a set of structural elements obtained from the source code and use them to build an alternative representation of the clusters. Principal component analysis, which can handle multidimensional data, lets us measure the distance between code units and the ideal entry point; this distance is the basis of the ranking of code units presented to end users. 3) We implemented FEAT as a general-purpose software analysis platform and carried out an experimental study on an open corpus of 600 software projects. During the evaluation we analyzed FEAT from several angles: the clustering step, the effectiveness of topoi discovery, and the scalability of the approach. / During the development of long-lifespan software systems, specification documents can become outdated or can even disappear due to the turnover of software developers. Implementing new software releases or checking whether some user requirements are still valid thus becomes challenging. The only reliable development artifact in this context is source code, but understanding the source code of large projects is a time- and effort-consuming activity. This challenging problem can be addressed by extracting the high-level (observable) capabilities of software systems. By automatically mining the source code and the available source-level documentation, it becomes possible to provide significant help to software developers in their program-understanding task. This thesis proposes a new method and a tool, called FEAT (FEature As Topoi), to address this problem. Our approach automatically extracts program topoi from source code analysis using a three-step process: first, FEAT creates a model of a software system capturing both structural and semantic elements of the source code, augmented with code-level comments; second, it creates groups of closely related functions through hierarchical agglomerative clustering; third, within the context of every cluster, functions are ranked and selected, according to some structural properties, in order to form program topoi. The contributions of the thesis are threefold: 1) the notion of program topoi is introduced and discussed from a theoretical standpoint with respect to other notions used in program understanding; 2) at the core of the clustering method used in FEAT, we propose a new hybrid distance combining both semantic and structural elements automatically extracted from source code and comments; this distance is parametrized, and the impact of the parameter is thoroughly assessed through a deep experimental evaluation; 3) our tool FEAT has been assessed in collaboration with Software Heritage (SH), a large-scale, ambitious initiative whose aim is to collect, preserve and share all publicly available source code on Earth. We performed a large experimental evaluation of FEAT on 600 open source projects of SH, coming from various domains and amounting to more than 25 MLOC (million lines of code). Our results show that FEAT can handle projects of size up to 4,000 functions and several hundreds of files, which opens the door for its large-scale adoption for program understanding.
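As a hedged, compressed sketch of the pipeline shape FEAT describes (a semantic-plus-structural hybrid distance feeding HAC), the following blends TF-IDF cosine distance over code text with a call-graph-based distance and clusters the result. The toy corpus, the 0/1 structural distance, and the weight alpha are illustrative stand-ins for FEAT's parametrized hybrid distance, not its actual definition.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.feature_extraction.text import TfidfVectorizer

# Hedged sketch of the pipeline shape: a hybrid distance blending a semantic
# part (cosine over TF-IDF of code text) with a structural part (here just
# 0/1 on sharing a call edge), fed to hierarchical agglomerative clustering.
docs = {
    "parse_args": "parse command line arguments options",
    "read_config": "read configuration file options",
    "render_page": "render html page template",
    "write_html": "write html output page",
}
calls = {("parse_args", "read_config"), ("render_page", "write_html")}

names = list(docs)
tfidf = TfidfVectorizer().fit_transform(docs.values()).toarray()
alpha = 0.5  # blend between semantic and structural distance
n = len(names)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        semantic = 1.0 - float(tfidf[i] @ tfidf[j])  # rows are L2-normalized
        structural = 0.0 if (names[i], names[j]) in calls else 1.0
        dist[i, j] = dist[j, i] = alpha * semantic + (1 - alpha) * structural

labels = fcluster(linkage(squareform(dist), method="average"),
                  t=2, criterion="maxclust")
for name, label in zip(names, labels):
    print(f"cluster {label}: {name}")
```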
30

ARNI: an EEG-Based Model to Measure Program Comprehension

Segalotto, Matheus 18 January 2018 (has links)
Program comprehension is a cognitive process performed in the developers' brain to understand source code. This cognitive process may be influenced by several factors, including the modularization level of the source code and the experience level of software developers. Program comprehension is widely recognized as an error-prone and effort-consuming task. However, little has been done to measure developers' cognitive effort to comprehend programs. In addition, such influential factors are not explored at the cognitive-effort level from the perspective of software developers. Additionally, some cognition models have been created to detect brain-activity indicators, as well as wearable electroencephalography (EEG) devices to support these detections. Unfortunately, they are not able to measure cognitive effort. This work, therefore, proposes ARNI, an EEG-based computational model to measure program comprehension. The ARNI model was produced based on gaps found in the literature after a systematic mapping study (SMS), which reviewed 1706 studies, 12 of which were chosen as primary studies. A controlled experiment with 35 software developers was performed to evaluate the ARNI model through 350 scenarios of program comprehension. Moreover, this experiment also evaluated the effects of modularization and developers' experience on the developers' cognitive effort. The obtained results suggest that the ARNI model was useful for measuring cognitive effort. The controlled experiment revealed that the comprehension of non-modular source code required less temporal effort (34.11%) and produced a higher correct-comprehension rate (33.65%) than modular source code. The main contributions are: (1) the execution of the SMS in the context studied; (2) a computational model to measure the program comprehension of source code; (3) empirical knowledge about the effects of modularization on developers' cognitive effort. Finally, this work can be seen as a first step in an ambitious agenda in the area of program comprehension.
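The abstract does not give ARNI's formulation. As a hedged illustration of how an EEG-based effort measure can be computed at all (an assumption about the general technique, not the ARNI model itself), this sketch derives a theta/alpha band-power ratio, a commonly used mental-workload proxy, from a synthetic one-channel signal using Welch's method:

```python
import numpy as np
from scipy.signal import welch

# Hedged sketch of an EEG-derived effort proxy: band power via Welch's
# method and a theta/alpha ratio, a commonly used workload indicator.
# The synthetic signal only makes the sketch runnable.
fs = 128  # Hz, a typical consumer-EEG sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
signal = (np.sin(2 * np.pi * 6 * t)           # theta component (6 Hz)
          + 0.5 * np.sin(2 * np.pi * 10 * t)  # alpha component (10 Hz)
          + 0.1 * rng.standard_normal(t.size))

freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)

def band_power(lo, hi):
    """Integrate the power spectral density over [lo, hi) Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return float(np.trapz(psd[mask], freqs[mask]))

ratio = band_power(4, 8) / band_power(8, 13)  # theta / alpha
print(f"theta/alpha workload proxy: {ratio:.2f}")
```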
