131

Monitoração de requisitos de qualidade baseada na arquitetura de software / Quality requirements monitoring based on software architecture

Silva, André Almeida 19 February 2015 (has links)
Computer systems occupy an ever larger space in people's lives, and the demand for increasingly sophisticated and accurate computerized solutions keeps growing. This creates a requirement for effective quality assurance of the software produced, verified through the monitoring of quality attributes. However, the main current monitoring techniques target mostly service-based systems, leaving aside a large share of software. In this context, this work discusses the monitoring of the quality attributes referenced by the ISO/IEC 9126 standard. Decision trees are defined that relate architectural elements to monitoring questions, along with a tool that uses concepts of Aspect-Oriented Programming to automate the monitoring of the reliability and efficiency requirements by generating monitor aspects responsible for logging and exception recording in a given target system. A case study structured by the Goal/Question/Metric (GQM) paradigm is also presented, conducted to analyze the feasibility of the developed solution, which offers architects and software developers a simplified way to define monitors that measure quality attributes in their systems. / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
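The abstract above describes generating monitor aspects that log executions and record exceptions of a target system. A minimal sketch of that idea in plain Java, using a dynamic proxy instead of AspectJ; all class and method names here are hypothetical and not taken from the thesis:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.logging.Logger;

// Wraps any interface-typed component and monitors efficiency (execution time)
// and reliability (thrown exceptions), in the spirit of the generated monitor aspects.
public final class QualityMonitor implements InvocationHandler {
    private static final Logger LOG = Logger.getLogger("quality-monitor");
    private final Object target;

    private QualityMonitor(Object target) {
        this.target = target;
    }

    @SuppressWarnings("unchecked")
    public static <T> T monitor(T target, Class<T> iface) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] {iface}, new QualityMonitor(target));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        long start = System.nanoTime();
        try {
            Object result = method.invoke(target, args);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;   // efficiency: execution time
            LOG.info(method.getName() + " completed in " + elapsedMs + " ms");
            return result;
        } catch (InvocationTargetException e) {
            LOG.severe(method.getName() + " threw " + e.getCause());    // reliability: record the failure
            throw e.getCause();
        }
    }
}
```

A component would then be wrapped as, for example, `ReportService monitored = QualityMonitor.monitor(service, ReportService.class);` (hypothetical interface).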
132

Understanding and automating application-level caching / Entendendo e automatizando cache a nível de aplicação

Mertz, Jhonny Marcos Acordi January 2017 (has links)
Latency and cost of Internet-based services are encouraging the use of application-level caching to continue satisfying users' demands and to improve the scalability and availability of origin servers. Application-level caching, in which developers manually control cached content, has been adopted when traditional forms of caching are insufficient to meet such requirements. Despite its popularity, this level of caching is typically addressed in an ad hoc way, given that it depends on specific details of the application. Furthermore, it forces application developers to reason about a crosscutting concern that is unrelated to the application business logic. As a result, application-level caching is a time-consuming and error-prone task, and a common source of bugs. This dissertation advances work on application-level caching by providing an understanding of its state of practice and by automating the decision about cacheable content, thus giving developers substantial support to design, implement, and maintain application-level caching solutions.
More specifically, we provide three key contributions: structured knowledge derived from a qualitative study, a survey of the state of the art on static and adaptive caching approaches, and a technique and framework that automate the challenging task of identifying caching opportunities. The qualitative study, which involved the investigation of ten web applications (open source and commercial) with different characteristics, allowed us to determine the state of practice of application-level caching, along with practical guidance to developers in the form of patterns and guidelines. Based on these patterns and guidelines, we also propose an approach to automate the identification of cacheable methods, which is usually done manually and is not supported by existing approaches to application-level caching. We implemented a caching framework that can be seamlessly integrated into web applications to automatically identify caching opportunities and cache content at runtime, by monitoring system execution and adaptively managing caching decisions. We evaluated our approach empirically with three open-source web applications, and the results indicate that it identifies adequate caching opportunities, improving application throughput by up to 12.16%. Furthermore, our approach can prevent code tangling and raise the abstraction level of caching.
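A minimal sketch of how a framework might monitor executions at runtime and memoize results that look cacheable once the same request keeps recurring; the heuristic, threshold, and names are illustrative assumptions, not the framework described in the dissertation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Adaptive memoization: observe how often the same input recurs and start
// caching a computation once repetition suggests a worthwhile hit ratio.
public final class AdaptiveCache<K, V> {
    private final Function<K, V> computation;
    private final int promoteAfter;   // repeated requests required before caching starts
    private final Map<K, Integer> observedCalls = new ConcurrentHashMap<>();
    private final Map<K, V> cache = new ConcurrentHashMap<>();

    public AdaptiveCache(Function<K, V> computation, int promoteAfter) {
        this.computation = computation;
        this.promoteAfter = promoteAfter;
    }

    public V get(K key) {
        V cached = cache.get(key);
        if (cached != null) {
            return cached;                                    // cache hit
        }
        int seen = observedCalls.merge(key, 1, Integer::sum); // runtime monitoring of calls
        V value = computation.apply(key);
        if (seen >= promoteAfter) {
            cache.put(key, value);                            // promote frequently requested content
        }
        return value;
    }

    public static void main(String[] args) {
        AdaptiveCache<Integer, String> reports =
                new AdaptiveCache<>(id -> "report-" + id /* expensive query in a real application */, 2);
        System.out.println(reports.get(7));   // computed
        System.out.println(reports.get(7));   // computed, then promoted to the cache
        System.out.println(reports.get(7));   // served from the cache
    }
}
```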
133

Understanding the delivery delay of addressed issues in large software projects

Costa, Daniel Alencar da 08 February 2017 (has links)
The timely delivery of addressed software issues (i.e., bug fixes, enhancements, and new features) is what drives software development. Previous research has investigated what impacts the time to triage and address (or fix) issues. Nevertheless, even when an issue is addressed, i.e., a solution is coded and tested, it may still suffer delay before being delivered to end users. Such delays are frustrating, since end users care most about when an addressed issue becomes available in the software system (i.e., released). There is a lack of empirical studies that investigate why addressed issues take longer to be delivered than other issues. In this thesis, we perform empirical studies to understand which factors are associated with the delayed delivery of addressed issues. We find that 34% to 98% of the addressed issues of the ArgoUML, Eclipse, and Firefox projects have their integration delayed by at least one release. Our explanatory models achieve ROC areas above 0.74 when explaining delivery delay. We also find that the workload of integrators and the moment at which an issue is addressed are the factors most strongly associated with delivery delay. We also investigate the impact of rapid release cycles on the delivery delay of addressed issues. Interestingly, we find that the rapid release cycles of Firefox are not related to faster delivery of addressed issues. Indeed, although rapid release cycles address issues faster than traditional ones, such addressed issues take longer to be delivered. Moreover, we find that rapid releases deliver addressed issues more consistently than traditional ones. Finally, we survey 37 developers of the ArgoUML, Eclipse, and Firefox projects to understand why delivery delays occur. We find that the allure of delivering addressed issues more quickly to users is the most recurrent motivation for switching to a rapid release cycle. The possibility of improving the flexibility and quality of addressed issues is another advantage perceived by our participants. The perceived reasons for the delivery delay of addressed issues relate to decision making, team collaboration, and risk management activities, and delivery delay likely leads to user and developer frustration according to our participants. Our thesis is the first work to study this important topic in modern software development. Our studies highlight the complexity of delivering issues in a timely fashion (for instance, simply switching to a rapid release cycle is not a silver bullet that guarantees quicker delivery of addressed issues).
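As a rough illustration of the delay metric discussed above (an addressed issue's integration being delayed by at least one release), the sketch below counts how many releases shipped between the first release that could have carried the fix and the one that actually delivered it; the data model is a simplified assumption, not the thesis's mining infrastructure:

```java
import java.time.LocalDate;
import java.util.List;

// Delivery delay in releases: how many release opportunities an addressed
// issue missed before it was actually shipped to end users.
public final class DeliveryDelay {
    record Release(String name, LocalDate date) {}

    // Releases are assumed to be in chronological order. Count the releases dated
    // after the issue was addressed that shipped without it.
    static long delayInReleases(LocalDate addressedOn, String deliveredIn, List<Release> releases) {
        return releases.stream()
                .filter(r -> r.date().isAfter(addressedOn))       // candidate releases
                .takeWhile(r -> !r.name().equals(deliveredIn))    // releases that went out without the fix
                .count();
    }

    public static void main(String[] args) {
        List<Release> releases = List.of(
                new Release("4.0", LocalDate.of(2016, 3, 1)),
                new Release("4.1", LocalDate.of(2016, 6, 1)),
                new Release("4.2", LocalDate.of(2016, 9, 1)));
        // Issue addressed in April but only delivered in 4.2: delayed by one release (4.1).
        System.out.println(delayInReleases(LocalDate.of(2016, 4, 10), "4.2", releases));
    }
}
```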
135

Um modelo dinâmico de reputação para apoiar a manutenção colaborativa de software / A dynamic reputation model to support collaborative software maintenance

Lélis, Cláudio Augusto Silveira 30 August 2017 (has links)
The importance of software in organizations is growing. However, to maintain its value, software must be changed and updated. Software maintenance depends on the allocation of human resources to carry out the defined change activities. In a distributed scenario in which collaboration is critical to the smooth running of activities, assigning developers to maintenance activities becomes a non-trivial task. In this context, reputation becomes a key element, affecting elements of collaboration such as coordination, cooperation, and communication. Therefore, tracking the evolution of reputation is important to promote collaboration in maintenance activities. The theory of System Dynamics can be applied to monitor this evolution: from the data obtained, it is possible to understand the past, establish what is happening in the present, and project future reputation behavior.
This work therefore presents a model for calculating the reputation of software developers, supported by System Dynamics techniques, which allows simulating how reputation behaves over time. This model served as the basis for building an infrastructure for dynamic reputation information, which aims to enable the management and tracking of reputation information of geographically distributed developers in order to support the allocation of these developers to maintenance tasks. In addition, it provides visualization and collaboration elements in an environment integrated with software maintenance activities. A proof of concept and an experiment conducted with real data from a company are presented in order to assess the feasibility and adequacy of the proposed model, as well as of the other resources offered by the infrastructure. / CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
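A minimal sketch of the kind of discrete-time simulation a System Dynamics reputation model implies: reputation as a stock fed by positive evaluations of completed maintenance tasks and drained by decay. The coefficients and variable names are illustrative assumptions, not the thesis's calibrated model:

```java
// Reputation as a stock updated step by step: inflow from well-evaluated
// maintenance tasks, outflow from decay (System Dynamics style).
public final class ReputationSimulation {
    public static void main(String[] args) {
        double reputation = 0.50;   // initial stock, normalized to [0, 1]
        double gainPerTask = 0.04;  // inflow per positively evaluated task
        double decayRate = 0.02;    // fraction of reputation lost per period
        int[] tasksPerPeriod = {2, 3, 0, 1, 4, 0};   // hypothetical workload history

        for (int t = 0; t < tasksPerPeriod.length; t++) {
            double inflow = gainPerTask * tasksPerPeriod[t];
            double outflow = decayRate * reputation;
            reputation = Math.min(1.0, Math.max(0.0, reputation + inflow - outflow));
            System.out.printf("period %d: reputation = %.3f%n", t + 1, reputation);
        }
    }
}
```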
136

Creating Markup : Exploring the concept of users defining syntax

Van den Weghe, Matthias January 2016 (has links)
A variety of markup languages exist for formatting text and exporting to HTML. These languages are tailored to the needs of a specific context by specialising tags, selecting tags, and limiting the number of possible distinctions to a subset of what is available in HTML. However, limiting the number of possible distinctions creates problems when the context changes. The real world is ever-changing, so whatever models it must be able to reflect changes in the operational environment to remain relevant and satisfactory. Incorporating new requirements and adjusting to changed requirements means adapting and evolving. This thesis explores giving document authors the possibility to extend and modify the repertoire of available markup tags when new user requirements demand it. What is presented is a prototype that allows the user to tailor the markup and adapt it to changes in the environment. The system allows users to create their own set of markup tags, annotate their documents with them, and export a generated XML document. Users create a tag and assign a meaning to it; when requirements change, the changes can be implemented by modifying tags, extending the repertoire by adding tags, or changing the meaning of a defined tag.
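A minimal sketch of the idea of user-defined tags exported to XML: the author registers a tag name and its meaning, annotates text with it, and the tool emits the corresponding XML elements. The tag syntax and API below are illustrative assumptions, not the prototype described in the thesis:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// User-defined markup: tags like @term{...} are registered by the author,
// then rewritten into XML elements on export.
public final class CustomMarkup {
    private final Map<String, String> tagMeanings = new LinkedHashMap<>();

    public void defineTag(String name, String meaning) {
        tagMeanings.put(name, meaning);    // extend or redefine the repertoire at any time
    }

    public String exportToXml(String text) {
        Matcher m = Pattern.compile("@(\\w+)\\{([^}]*)\\}").matcher(text);
        StringBuilder out = new StringBuilder("<document>");
        int last = 0;
        while (m.find()) {
            out.append(text, last, m.start());
            String name = m.group(1);
            if (tagMeanings.containsKey(name)) {
                out.append('<').append(name).append(" meaning=\"").append(tagMeanings.get(name))
                   .append("\">").append(m.group(2)).append("</").append(name).append('>');
            } else {
                out.append(m.group());     // unknown tags are left untouched
            }
            last = m.end();
        }
        return out.append(text, last, text.length()).append("</document>").toString();
    }

    public static void main(String[] args) {
        CustomMarkup markup = new CustomMarkup();
        markup.defineTag("risk", "a project risk identified by the author");
        System.out.println(markup.exportToXml("Deadlines slip when @risk{scope grows unchecked}."));
    }
}
```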
137

Exploring Kanban in software engineering

Ahmad, M. O. (Muhammad Ovais) 15 November 2016 (has links)
To gain competitive advantage and thrive in the market, companies have introduced Kanban in software development. Kanban has been used in the manufacturing industry for over six decades. In the software engineering domain, Kanban was introduced in 2004 to increase flexibility in coping with dynamic requirements, bring visibility to workflow and related tasks, improve communication, and promote the pull system. However, the existing scientific literature lacks empirical evidence of the use of Kanban in software companies. This doctoral thesis aims to improve the understanding of the use of Kanban in software engineering. The research was performed in two phases: 1) analysis of the scientific literature on Kanban in software engineering and industrial engineering, and 2) investigation of Kanban implementation trends in software companies. The data were collected through systematic literature reviews, a survey, and semi-structured interviews. The results were synthesized to draw conclusions and outline implications for research and practice. The results indicate growing interest in the use of Kanban in software companies. The findings suggest that Kanban is applicable to software development, software maintenance, and portfolio management in software companies. Kanban brings visibility to task and offering status, while limiting work in progress at any given time gives people greater control over their work and reduces task switching. Although Kanban offers several benefits, as reported in this dissertation, the findings show that software companies find it challenging to implement Kanban incrementally.
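A minimal sketch of the mechanism the abstract refers to: a board column that enforces a work-in-progress limit so new tasks are pulled only when capacity frees up. The column name, tasks, and limit are illustrative assumptions:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A Kanban column with a WIP limit: work is pulled in only while the limit
// allows it, which keeps task switching and queue build-up visible and bounded.
public final class KanbanColumn {
    private final String name;
    private final int wipLimit;
    private final Deque<String> inProgress = new ArrayDeque<>();

    public KanbanColumn(String name, int wipLimit) {
        this.name = name;
        this.wipLimit = wipLimit;
    }

    public boolean pull(String task) {
        if (inProgress.size() >= wipLimit) {
            System.out.println(name + " at WIP limit (" + wipLimit + "); cannot pull " + task);
            return false;
        }
        inProgress.add(task);
        return true;
    }

    public void finish(String task) {
        inProgress.remove(task);   // completing work frees capacity for the next pull
    }

    public static void main(String[] args) {
        KanbanColumn development = new KanbanColumn("Development", 2);
        development.pull("fix login bug");
        development.pull("update docs");
        development.pull("refactor parser");      // rejected: limit reached
        development.finish("update docs");
        development.pull("refactor parser");      // now accepted
    }
}
```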
138

Measuring feature team characteristics of software development teams

Gidlund, Maja January 2016 (has links)
This report evaluates the team structure of three software maintenance teams in order to determine their level of featureness (a term that defines to what extent a team has the set of characteristics of being a feature team). Changes that are considered beneficial in an agile environment and that could increase the teams' level of featureness within the team structure are simulated. The results show that each team's level of featureness is affected differently by each change. This partly underlines the importance of understanding the current team structure before implementing changes that aim to increase the level of featureness. Secondly, within the scope of the study, declaring a user expert a team member is the change that increases the teams' level of featureness the most. Based on the results, the report also concludes that it is essential to implement changes that affect different characteristics, which in combination can increase the level of featureness.
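One simple way to read "level of featureness" is as the share of feature-team characteristics a given team exhibits; the characteristics and the unweighted scoring below are illustrative assumptions, not the instrument used in the report:

```java
import java.util.List;
import java.util.Set;

// Level of featureness as the fraction of feature-team characteristics a team exhibits.
public final class Featureness {
    static final List<String> CHARACTERISTICS = List.of(
            "cross-functional", "delivers end-to-end features", "long-lived",
            "includes a user expert", "co-located or well-connected");

    static double level(Set<String> teamCharacteristics) {
        long satisfied = CHARACTERISTICS.stream().filter(teamCharacteristics::contains).count();
        return (double) satisfied / CHARACTERISTICS.size();
    }

    public static void main(String[] args) {
        Set<String> teamA = Set.of("cross-functional", "long-lived");
        System.out.printf("Team A featureness: %.2f%n", level(teamA));   // 0.40
    }
}
```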
139

Static MySQL Error Checking

Zarinkhail, Mohammad Shuaib January 2010 (has links)
Master of Science / Coders of databases repeatedly face the problem of checking their Structured Query Language (SQL) code. Instructors face the difficulty of checking student projects and lab assignments in database courses. We collect and categorize common MySQL programming errors into three groups: data definition errors, data manipulation errors, and transaction control errors. We build these into a comprehensive list of MySQL errors that novices are inclined to make during database programming. We collected our list of common MySQL errors both from the technical literature and directly by noting errors made in assignments handed in by students. In the results section of this research, we check and summarize occurrences of these errors based on three characteristics: semantics, syntax, and logic. These data form the basis of a future static MySQL checker that will eventually assist database coders in correcting their code automatically. These errors also form a useful checklist to guide students away from the mistakes that they are prone to make.
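A minimal sketch of the kind of static check implied above: bucket a MySQL statement into the data definition, data manipulation, or transaction control group and flag one simple, well-known mistake per group. The rules shown are toy examples, not the checker's actual rule set:

```java
import java.util.Locale;

// Toy static checker: categorize a statement, then apply one simple rule per category.
public final class MySqlChecker {
    enum Category { DATA_DEFINITION, DATA_MANIPULATION, TRANSACTION_CONTROL, UNKNOWN }

    static Category categorize(String sql) {
        String s = sql.trim().toUpperCase(Locale.ROOT);
        if (s.startsWith("CREATE") || s.startsWith("ALTER") || s.startsWith("DROP")) return Category.DATA_DEFINITION;
        if (s.startsWith("SELECT") || s.startsWith("INSERT") || s.startsWith("UPDATE") || s.startsWith("DELETE")) return Category.DATA_MANIPULATION;
        if (s.startsWith("START TRANSACTION") || s.startsWith("COMMIT") || s.startsWith("ROLLBACK")) return Category.TRANSACTION_CONTROL;
        return Category.UNKNOWN;
    }

    static String check(String sql) {
        String s = sql.trim().toUpperCase(Locale.ROOT);
        return switch (categorize(sql)) {
            case DATA_MANIPULATION -> (s.startsWith("UPDATE") || s.startsWith("DELETE")) && !s.contains("WHERE")
                    ? "warning: UPDATE/DELETE without a WHERE clause affects every row" : "ok";
            case DATA_DEFINITION -> s.startsWith("CREATE TABLE") && !s.contains("PRIMARY KEY")
                    ? "warning: table created without a primary key" : "ok";
            case TRANSACTION_CONTROL -> "ok";
            case UNKNOWN -> "warning: unrecognized statement";
        };
    }

    public static void main(String[] args) {
        System.out.println(check("DELETE FROM students"));
        System.out.println(check("CREATE TABLE t (id INT)"));
    }
}
```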
140

A Mono- and Multi-objective Approach for Recommending Software Refactoring

Ouni, Ali 11 1900 (has links)
Software systems have become prevalent and important in our society, and there is a constant need for high-quality software. One of the most widely used techniques to improve software quality is refactoring, which improves the design structure of a program while preserving its external behavior. Refactoring promises, if applied well, to improve software readability, maintainability, and extendibility while increasing the speed at which programmers can write and maintain their code. In general, refactoring can be performed at various levels, such as the requirements, design, or code level. In this thesis, we mainly focus on the source-code level, where automated refactoring recommendation can be performed through two main steps: 1) detection of code fragments that need to be improved or fixed (e.g., code smells), and 2) identification of refactoring solutions to achieve this goal. For the code-smell identification step, we translate regularities that can be found in code-smell examples into detection rules. To this end, we use genetic programming to automatically generate detection rules from examples of code smells.
For the refactoring identification step, a search-based approach is used. The process aims at finding the optimal sequence of refactoring operations that improves software quality by minimizing the number of detected code smells while prioritizing the most critical ones. In addition, we explore other objectives to optimize using a multi-objective approach: the code changes needed to apply refactorings, semantics preservation, and consistency with the development change history. Reducing code changes allows us to keep the initial design as much as possible, while semantics preservation ensures that the refactored program is semantically coherent and correctly models the domain semantics. We also use knowledge from historical code changes to suggest new refactorings in similar contexts. Furthermore, we introduce a novel multi-objective approach to improve software quality attributes (e.g., flexibility, maintainability), fix "bad" design practices (i.e., code smells), and promote "good" design practices (i.e., design patterns).
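A minimal sketch of the kind of fitness evaluation a search-based refactoring recommender relies on: score a candidate refactoring sequence on several competing objectives (smells fixed, edit size, an estimated semantics-preservation score). It is shown here as a simple weighted scalarization rather than a full Pareto-based search, and the objectives, weights, and names are illustrative assumptions, not the thesis's exact formulation:

```java
import java.util.List;

// Scores a candidate refactoring sequence on competing objectives; a search
// algorithm (e.g., a genetic algorithm) would evolve sequences toward better scores.
public final class RefactoringFitness {
    record Candidate(List<String> operations, int smellsFixed, int codeChanges, double semanticsScore) {}

    // Weighted scalarization: more smells fixed is better; fewer edits and
    // higher semantics preservation are better.
    static double fitness(Candidate c) {
        return 1.0 * c.smellsFixed() - 0.1 * c.codeChanges() + 2.0 * c.semanticsScore();
    }

    public static void main(String[] args) {
        Candidate a = new Candidate(List.of("Extract Class", "Move Method"), 4, 12, 0.9);
        Candidate b = new Candidate(List.of("Inline Class"), 5, 40, 0.5);
        System.out.println(fitness(a) > fitness(b) ? "prefer candidate a" : "prefer candidate b");
    }
}
```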
