71

BUSCA POR OPORTUNIDADES DE REFATORAÇÃO PARA APLICAÇÃO DE PADRÕES DE PROJETO / SEARCHING FOR REFACTORING OPPORTUNITIES TO APPLY DESIGN PATTERNS

Pauli, Guinther de Bitencourt 27 August 2014 (has links)
It is difficult to maintain and adapt poorly written code that has structural shortcomings. Refactoring techniques are used to improve the source code and the structure of applications, making them better and easier to modify. Design patterns are reusable solutions to recurring problems in object-oriented systems, so solutions do not have to be recreated from scratch. Applying design patterns in the context of corrective refactoring is therefore a desirable activity in the life cycle of a software system. However, for medium and large-scale projects, manually examining the artifacts to find problems and opportunities to apply a design pattern is a hard task. In this context, we present a set of metric-based heuristic functions that detect where a design pattern can be applied in a given project, specifically the Strategy, Factory Method, Template Method, Creation Method and Chain Constructors patterns. To evaluate the heuristic functions and their results we also built a tool that examines source code by traversing ASTs (Abstract Syntax Trees), searches for opportunities to apply the patterns, indicates the exact location in the source code where the pattern is suggested, and shows some of the evidence used in the detection.
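As a rough illustration of what a metric-based heuristic for one of these patterns can look like, the sketch below (Python, standard ast module only) flags functions whose conditional chains exceed a branch threshold as candidate Strategy opportunities and reports their location. The threshold and all names are illustrative assumptions; the thesis's own heuristics and tooling are not reproduced here.

```python
import ast

STRATEGY_BRANCH_THRESHOLD = 3  # illustrative cut-off, not taken from the thesis

def count_branches(func: ast.FunctionDef) -> int:
    """Count if/elif arms inside a function body (a crude branching metric)."""
    return sum(isinstance(node, ast.If) for node in ast.walk(func))

def strategy_candidates(source: str, filename: str = "<string>"):
    """Yield (function name, line number, branch count) for likely Strategy opportunities."""
    tree = ast.parse(source, filename)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            arms = count_branches(node)
            if arms >= STRATEGY_BRANCH_THRESHOLD:
                yield node.name, node.lineno, arms

example = """
def price(order, customer_kind):
    if customer_kind == "regular":
        return order.total
    elif customer_kind == "silver":
        return order.total * 0.9
    elif customer_kind == "gold":
        return order.total * 0.8
    else:
        return order.total * 0.95
"""
for name, line, arms in strategy_candidates(example):
    print(f"{name} (line {line}): {arms} conditional arms, consider a Strategy refactoring")
```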
72

Kodrefaktorisering / Code Refactoring

Nylander, Amy January 2013 (has links)
This report has its origins in the code refactoring work carried out in spring 2013 as a degree project in the Computer Engineering Programme at Örebro University. The work took place at Nethouse in Örebro and had a major focus on code design and code quality. The report discusses the factors that affect how maintainable and readable code is, and also how code quality can reasonably be evaluated and measured. Theory is mixed with practice: the reader is introduced to a variety of methods and to how these were then applied in the actual project that Nethouse provided.
73

Abstrakčios sintaksės medžių pertvarkymo algoritmų tyrimas / Investigation of Refactoring Algorithms for Abstract Syntax Tree

Jokubauskas, Justas 25 August 2010 (has links)
Over the past years, agile development methodologies have attracted a lot of attention, and refactoring has become one of the most heavily used practices, especially in Extreme Programming. The need for powerful refactoring tools has therefore grown to the point where built-in refactoring support is an expected feature of modern IDEs. The research area is refactoring algorithms for abstract syntax trees; the research object is the refactoring process for abstract syntax trees of different programming languages (C++ and Java). The aim of this work is to build a refactoring library for a generic abstract syntax tree (GAST), a tree structure that can store elements of several programming languages, and to test the algorithms for speed. Achieving this goal required the tasks documented here: an analysis of technologies and existing software, several refactoring algorithms, user needs and a specification of requirements, a library model expressed in UML diagrams, and a test plan and test procedure. Experiments have shown that the refactoring algorithms still need improvement, but the library and the GAST structure are worth further development.
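A minimal sketch of the generic-AST idea, assuming a much simpler node shape than the library described above (the field names and the rename operation below are illustrative, not the thesis's actual design), could look like this:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GASTNode:
    """A simplified language-agnostic AST node (illustrative fields only)."""
    kind: str                       # e.g. "class", "method", "identifier"
    name: Optional[str] = None
    children: List["GASTNode"] = field(default_factory=list)

def rename(node: GASTNode, old: str, new: str) -> int:
    """Apply a 'rename' refactoring: rewrite every named node called `old`; return hit count."""
    hits = 0
    if node.name == old:
        node.name = new
        hits += 1
    for child in node.children:
        hits += rename(child, old, new)
    return hits

# A tiny tree standing in for code parsed from either C++ or Java.
tree = GASTNode("class", "Account", [
    GASTNode("method", "getBal", [GASTNode("identifier", "balance")]),
    GASTNode("method", "setBal", [GASTNode("identifier", "balance")]),
])
print(rename(tree, "getBal", "getBalance"))  # -> 1
```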
74

A Refactoring-Based Approach to Support Binary Backward-Compatible Framework Upgrades

Savga, Ilie 12 July 2010 (has links) (PDF)
Evolutionary changes applied to a framework API may invalidate existing framework-based applications. While manually adapting applications is expensive and error-prone, automatic adaptation demands cumbersome specifications, which developers are reluctant to write and maintain. Considering structural changes (so-called refactorings) of framework APIs, our adaptation technology supports backward-compatible framework upgrades. The technology is rigorous, precisely defining the structure and automatic derivation of compensating adapters. It is also practical, compensating for most application-breaking API changes automatically, while requiring neither manual adaptation nor recompilation of existing application code.
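The core idea of a compensating adapter can be sketched as follows: after a framework refactoring renames an API method, the adapter re-exposes the old signature and delegates to the new one, so existing client code keeps working unchanged. The class and method names below are invented for illustration; the actual technology derives binary-compatible adapters for framework code automatically rather than relying on anything hand-written like this.

```python
class PaymentService:
    """Framework class after a 'rename method' refactoring (charge -> process_payment)."""
    def process_payment(self, amount: float) -> str:
        return f"processed {amount:.2f}"

class PaymentServiceAdapter(PaymentService):
    """Compensating adapter: restores the pre-refactoring API for legacy clients.

    In the approach above, such adapters would be derived automatically from the
    recorded refactoring; this hand-written class only mirrors the idea.
    """
    def charge(self, amount: float) -> str:    # old method name expected by old client code
        return self.process_payment(amount)    # delegate to the refactored API

legacy_client = PaymentServiceAdapter()
print(legacy_client.charge(10))  # the old call site still works: "processed 10.00"
```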
75

[en] HOW DOES REFACTORING AFFECT INTERNAL QUALITY ATTRIBUTES?: A MULTI-PROJECT STUDY / [pt] COMO A REFATORAÇÃO AFETA OS ATRIBUTOS DE QUALIDADE INTERNA?: UM ESTUDO MULTI-PROJETO

ALEXANDER CHÁVEZ LÓPEZ 12 December 2017 (has links)
Developers often apply code refactoring to improve the internal quality attributes of a program, such as coupling and size. Given the structural decay of certain program elements, developers may need to apply multiple refactorings to these elements to achieve quality attribute improvements. We speak of re-refactoring when developers refactor a previously refactored element of a program, such as a method or a class, again. There is limited empirical knowledge on the extent to which developers successfully improve internal quality attributes through (re-)refactoring in their actual software projects. This dissertation addresses this limitation by investigating the impact of (re-)refactoring on five well-known internal quality attributes: cohesion, complexity, coupling, inheritance, and size. We rely on the version history of 23 open source projects, which contain 29,303 refactoring operations, 49.55 percent of which are re-refactoring operations. Our analysis revealed relevant findings. First, developers apply more than 93.45 percent of refactoring and re-refactoring operations to code elements with at least one critical internal quality attribute, in contrast to what previous work found. Second, 65 percent of the operations actually improve the relevant attributes, i.e. those attributes that are related to the refactoring type being applied; the remaining 35 percent of operations keep the relevant quality attributes unaffected. Third, whenever refactoring operations are applied without additional changes, which we call root-canal refactoring, the internal quality attributes are frequently improved or at least not worsened. Conversely, 55 percent of the refactoring operations applied with additional changes, such as bug fixes, surprisingly improve internal quality attributes, with only 10 percent leading to a quality decline. This finding also holds for re-refactoring. Finally, we summarize our findings as concrete recommendations for both practitioners and researchers.
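To give a concrete feel for how internal quality attributes are measured from code, here is a small, purely illustrative sketch (Python, standard ast module; the metric definitions are simplified stand-ins, not the suites used in the study) that computes a size and a coupling value per class, so the effect of a refactoring can be compared before and after:

```python
import ast

def class_metrics(source: str) -> dict:
    """Return {class name: {'size': number of methods, 'coupling': distinct class-like names referenced}}."""
    tree = ast.parse(source)
    metrics = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            methods = [n for n in node.body if isinstance(n, ast.FunctionDef)]
            referenced = {n.id for n in ast.walk(node)
                          if isinstance(n, ast.Name) and n.id[:1].isupper()}  # crude proxy for class references
            metrics[node.name] = {"size": len(methods),
                                  "coupling": len(referenced - {node.name})}
    return metrics

before = "class Report:\n    def render(self):\n        return Printer().fmt(Header(), Footer())\n"
after = "class Report:\n    def render(self):\n        return Printer().fmt(Layout())\n"
print(class_metrics(before))  # {'Report': {'size': 1, 'coupling': 3}}
print(class_metrics(after))   # coupling drops to 2 after an (invented) refactoring
```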
76

Context-aware automated refactoring for unified memory allocation in NVIDIA CUDA programs

Nejadfard, Kian 25 June 2021 (has links)
No description available.
77

A Refactoring-Based Approach to Support Binary Backward-Compatible Framework Upgrades

Savga, Ilie 21 April 2010 (has links)
Evolutionary changes applied to a framework API may invalidate existing framework-based applications. While manually adapting applications is expensive and error-prone, automatic adaptation demands cumbersome specifications, which developers are reluctant to write and maintain. Considering structural changes (so-called refactorings) of framework APIs, our adaptation technology supports backward-compatible framework upgrades. The technology is rigorous, precisely defining the structure and automatic derivation of compensating adapters. It is also practical, compensating for most application-breaking API changes automatically, while requiring neither manual adaptation nor recompilation of existing application code.
78

Konzeption, Implementation und quantitative Evaluation einer statischen Clean-Code-Bewertungsapplikation / Design, Implementation and Quantitative Evaluation of a Static Clean Code Assessment Application

Eichenseer, Maurice 14 March 2024 (has links)
Refactoring is applied when a software inspection finds defects in program code; code smells are one example of such defects. Clean Code is a more recent approach that, like the code-smell concept, defines when program code is defective. For code smells there are already numerous code analysis tools that allow such defects to be detected automatically. This thesis implements a Clean Code analysis tool that inspects program code by means of static code analysis and emits refactoring hints. For this purpose, a lexer and a parser for the syntactic analysis of a subset of the C++ programming language are implemented. The evaluation by quantitative data analysis shows how useful the automatic detection of Clean Code with a static code analysis tool is for helping developers produce more readable program code.
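The flavour of such a checker can be illustrated with two toy Clean Code rules, sketched here in Python over a C++-like snippet using regular expressions instead of the full lexer and parser built in the thesis; the rule thresholds and the snippet are assumptions for illustration only.

```python
import re

TOKEN_RE = re.compile(r"\w+|[{}();]|\S")   # very crude tokenizer for a C++-like subset
MAX_FUNCTION_TOKENS = 40                   # assumed readability threshold, not from the thesis
MIN_IDENTIFIER_LEN = 3

def clean_code_hints(cpp_source: str):
    """Yield simple refactoring hints for a toy C++ snippet."""
    # Rule 1: cryptic one- or two-letter names in simple declarations such as "int x".
    for m in re.finditer(r"\b(?:int|float|double|bool)\s+([A-Za-z_]\w*)", cpp_source):
        if len(m.group(1)) < MIN_IDENTIFIER_LEN:
            yield f"rename identifier '{m.group(1)}' to something descriptive"
    # Rule 2: overly long function bodies, measured in tokens between the braces.
    for m in re.finditer(r"\)\s*\{(.*?)\}", cpp_source, re.S):
        if len(TOKEN_RE.findall(m.group(1))) > MAX_FUNCTION_TOKENS:
            yield "extract smaller functions from an overly long function body"

snippet = "int f(int a) { int x = a; return x; }"
for hint in clean_code_hints(snippet):
    print(hint)   # flags the cryptic names 'f', 'a' and 'x'
```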
79

A wide spectrum type system for transformation theory

Ladkau, Matthias January 2009 (has links)
One of the most difficult tasks a programmer can be confronted with is the migration of a legacy system. Usually, these systems are unstructured, poorly documented and contain complex program logic. The reason for this, in most cases, is an emphasis on raw performance rather than on clean and structured code, as well as a long period of applying quick fixes and enhancements rather than carrying out a proper software reengineering process, including a full redesign during major enhancements. Nowadays, the old programming paradigms are becoming an increasingly serious problem. It has been identified that 90% of the costs of a typical software system arise in the maintenance phase. Many companies are simply too afraid of changing their software infrastructure and prefer to continue with principles like "never touch a running system". These companies experience growing pressure to migrate their legacy systems onto newer platforms, because the maintenance of such systems is expensive and dangerous: the risk of losing vital parts of the source code or its documentation increases drastically over time. The FermaT transformation system has shown the ability to automatically or semi-automatically restructure and abstract legacy code within a special intermediate language called WSL (Wide Spectrum Language). Unfortunately, the current transformation process only supports the migration of assembler, as WSL lacks the ability to handle data types properly. The data structures in assembler are currently translated directly into C data types, which involves many assumption-laden, "hard-coded" conversions. The absence of an adequate type system for WSL has caused several flaws in the whole transformation process and limits its abilities significantly. The main aim of the presented research is to tackle these problems by investigating and formulating how a type system can contribute to a safe and reliable migration of legacy systems. The described research includes the definition of the key type-related problems in the FermaT migration process and of how to solve them with a suitable type system approach. Since software migration often includes a change in programming language, the type system for WSL has to support various type system approaches, including the representation of all relevant details, to avoid assumptions. This is especially difficult as most programming languages are designed for a specific purpose, which means that their programming constructs and data types differ significantly. This ranges from languages with simple type systems, whose programs are prone to unintended side effects, to languages with strict type systems, which are constrained in their flexibility. It is important to include as many type-related details as necessary to avoid making assumptions during language-to-language translation. The result of the investigation is a novel multi-layered type system specifically designed to satisfy the needs of WSL for a sophisticated solution without imposing too many limitations on its abilities. The type system has an adjustable expressiveness, able to represent a wide spectrum of typing approaches, ranging from weak typing, which allows direct memory access and downcasting, via very strict typing with a high diversity of data types, to object-oriented typing, which supports encapsulation and data hiding. Looking at the majority of commercially relevant statically typed programming languages, two fundamental properties can be identified: type strictness and safety. A type system can be either weakly or strongly typed and may or may not allow unsafe features such as direct memory access. Each layer of the Wide Spectrum Type System has a different combination of these properties. The approach also includes special Type System Transformations which can be used to move a given WSL program among these layers. Other key features are explicit typing and scalability. The whole approach is based on a sound mathematical foundation which assures correctness and integrates seamlessly into the existing mathematical definition of WSL. The type system is formally introduced to WSL by constructing an attribute grammar for the language. Type checking and type inference are used to annotate the abstract syntax tree of a given WSL program with type derivations, which can be used to reveal and indicate possible typing errors, or to infer types if the program did not feature explicit type declarations in the first place. Notable in this approach is also the fact that object orientation is introduced to a procedural programming language without introducing new semantics: it is shown that object orientation can be introduced just by adjusting type checking rules and adding some syntactic notation. The approach was implemented and tested on two case studies. The thesis describes and discusses both cases in detail and shows how a migration which ignores type systems could accidentally introduce errors due to assumptions during translation. Both case studies use all important aspects of the approach, including type transformations and object identification. The thesis ends by summarising the whole work, identifying limitations, presenting future perspectives and drawing conclusions.
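The layering idea, the same program being judged under weaker or stricter typing rules, can be sketched as a tiny checker in which each layer decides which implicit conversions it tolerates. The layers, node shape and conversion tables below are invented for illustration and are far simpler than the type system described above.

```python
from dataclasses import dataclass

# Invented, simplified layers: each maps to the implicit conversions it tolerates.
LAYERS = {
    "weak":   {("int", "float"), ("float", "int"), ("int", "bool"), ("bool", "int")},
    "strict": {("int", "float")},          # only a widening conversion is allowed
    "object": set(),                       # every conversion must be explicit
}

@dataclass
class Assign:
    """A toy AST node: assign a value of type `src` to a variable of type `dst`."""
    dst: str
    src: str

def check(node: Assign, layer: str) -> bool:
    """Type-check a single assignment under the chosen layer's rules."""
    return node.src == node.dst or (node.src, node.dst) in LAYERS[layer]

program = [Assign("float", "int"), Assign("int", "float"), Assign("bool", "int")]
for layer in LAYERS:
    verdict = "accepts" if all(check(n, layer) for n in program) else "rejects"
    print(f"{layer} layer {verdict} the program")
```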
80

Contribution à la simulation de la stimulation magnétique transcrânienne: vers une approche dirigée par les modèles / Contribution to the Simulation of Transcranial Magnetic Stimulation: Towards a Model-Driven Approach

Luquet, Sébastien 14 December 2009 (has links) (PDF)
Transcranial Magnetic Stimulation (TMS) is a neural stimulation technique with numerous medical applications, but its use remains largely empirical. The objective of this thesis was to build a software tool that helps to better understand the effects of the stimulation and assists in carrying out TMS sessions. Following the development of this software, it became apparent that it had turned into a legacy system. The secondary objective of this work was therefore to analyse the software's obsolescence and to propose solutions for limiting this phenomenon through Model-Driven Engineering (MDE). After presenting the phenomena governing brain activity, we present the different neural stimulation techniques and the advantages offered by TMS. We also present the background needed to understand the electromagnetic phenomena involved, followed by an introduction to the fundamental concepts of MDE. A third part focuses on the different models and computation methods for the effects of Transcranial Magnetic Stimulation, before presenting the solution that was selected and implemented in the simulator. The last two parts of the manuscript focus on the simulator as a whole (3D visualisation) and on how it could be refactored to ease its maintenance and to accommodate the possible evolutions that we present in the conclusion.
