91

Eyes Of Darwin : une fenêtre ouverte sur l'évolution du logiciel

Tanteri, Julien
Software must continuously evolve and integrate ever more functionality to avoid becoming obsolete; as a result, maintenance accounts for more than 60% of a software system's cost. To cut programming costs, functionality is implemented faster, which inevitably lowers code quality. Understanding software evolution has therefore become necessary to preserve quality and delay code decay. By cross-analyzing the historical data stored in a version control system with quantitative data derived from the source code, we can better understand how a system evolves. However, such an analysis produces far too much data to examine manually, and automatic analysis methods are not accurate enough. In this thesis, we propose to analyze these data with a semi-automatic method: visualization. Eyes Of Darwin, our 3D visualization system, uses a city metaphor (districts and buildings) to show a system's entire evolution in a single view. It also integrates an occlusion-reduction mechanism that turns the user's screen into an open window onto the 3D scene. Finally, the thesis presents an exploratory study that validates our approach.
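The city metaphor is concrete enough to sketch. The following minimal example is not taken from Eyes Of Darwin; the names and metric choices are invented for illustration. It shows the kind of mapping such a tool performs: each class becomes a building whose dimensions are derived from simple code metrics, and packages group buildings into districts.

    import java.util.List;

    public class CityMetaphorSketch {
        // Invented per-class metrics; names are illustrative only.
        record ClassMetrics(String name, String pkg, int methods, int attributes) {}
        record Building(String label, String district, double height, double width) {}

        // One common code-city convention: height grows with the number of
        // methods, footprint with the number of attributes.
        static Building toBuilding(ClassMetrics c) {
            return new Building(c.name(), c.pkg(), 1.0 + c.methods(), 1.0 + c.attributes());
        }

        public static void main(String[] args) {
            List<ClassMetrics> classes = List.of(
                new ClassMetrics("Parser", "core", 24, 6),
                new ClassMetrics("Token", "core", 5, 2),
                new ClassMetrics("Renderer3D", "ui", 31, 12));
            classes.stream().map(CityMetaphorSketch::toBuilding).forEach(b ->
                System.out.printf("%s/%s height=%.0f width=%.0f%n",
                                  b.district(), b.label(), b.height(), b.width()));
        }
    }

Layout, rendering, and the evolution dimension (one building state per version) are where the actual system does its work; the sketch only captures the metric-to-geometry mapping.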
92

AURA : a hybrid approach to identify framework evolution

Wu, Wei
Software frameworks and libraries are indispensable to today's software systems. As they evolve, it is often time-consuming and costly for developers to keep their code up to date, so approaches have been proposed to help developers migrate their code. Usually, these approaches cannot automatically identify change rules for one-replaced-by-many and many-replaced-by-one methods, and they trade recall for higher precision by using one or more experimentally tuned thresholds. We introduce AURA (AUtomatic change Rule Assistant), a novel hybrid approach that combines call-dependency analysis and text-similarity analysis to overcome these limitations. We implemented AURA in Java and compared it on five frameworks with three previous approaches, by Dagenais and Robillard, M. Kim et al., and Schäfer et al. The comparison shows that, on average, AURA's recall is 53.07% higher while its precision is similar (0.10% lower).
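The hybrid idea, mixing call-dependency analysis with text similarity, can be illustrated with a small sketch. The example below is not AURA's algorithm; it is an assumed, simplified scoring that ranks candidate replacement methods by combining caller-set overlap with name similarity.

    import java.util.*;
    import java.util.stream.Collectors;

    public class HybridRuleSketch {
        // Split an identifier on camelCase boundaries into lower-case tokens.
        static Set<String> tokens(String id) {
            return Arrays.stream(id.split("(?<=[a-z0-9])(?=[A-Z])"))
                         .map(String::toLowerCase)
                         .collect(Collectors.toSet());
        }

        // Jaccard similarity between two sets.
        static double jaccard(Set<String> a, Set<String> b) {
            Set<String> inter = new HashSet<>(a); inter.retainAll(b);
            Set<String> union = new HashSet<>(a); union.addAll(b);
            return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
        }

        // Equal-weight mix of call-dependency overlap and name similarity.
        static double score(String oldName, Set<String> oldCallers,
                            String newName, Set<String> newCallers) {
            return 0.5 * jaccard(oldCallers, newCallers)
                 + 0.5 * jaccard(tokens(oldName), tokens(newName));
        }

        public static void main(String[] args) {
            // Which method replaced saveDocument after the framework evolved?
            Set<String> oldCallers = Set.of("Editor.save", "Exporter.run");
            Map<String, Set<String>> candidates = Map.of(
                "writeDocument", Set.of("Editor.save", "Exporter.run"),
                "openDocument", Set.of("Editor.load"));
            candidates.forEach((name, callers) -> System.out.printf(
                "saveDocument -> %s : %.2f%n", name,
                score("saveDocument", oldCallers, name, callers)));
        }
    }

Here writeDocument outranks openDocument because it inherits the old method's callers and shares a name token, which is the intuition behind combining the two analyses.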
93

Intégration de la visualisation à multiples vues pour le développement du logiciel

Langelier, Guillaume
Modern software development increasingly has to deal with huge, complex programs built and maintained by large teams working in different locations. In their daily tasks, developers may have to answer varied questions using information from diverse sources. To improve the overall efficiency of development, we propose to integrate into a popular IDE (Eclipse) our visualization tool (VERSO), which computes, organizes, displays, and allows navigation through information in a coherent, effective, and intuitive way, so as to exploit the human visual system when exploring complex data. We structure the information along three axes: (1) the context (quality, version control, bugs, etc.) determines the type of information; (2) the granularity level (code line, method, class, package) determines the appropriate level of detail; and (3) the evolution axis extracts information from the desired software version. Each view of the software corresponds to a discrete coordinate along these three axes, and we pay particular attention to coherence by navigating only between adjacent views, which reduces the cognitive load as users search for answers to their questions. Two experiments involving representative tasks validate the value of our integrated approach; their results lead us to believe that access to varied information presented graphically and coherently should greatly benefit the development of modern software.
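The three-axis view model is easy to make precise. The sketch below is an assumed simplification, not VERSO's code: a view is a discrete coordinate (context, granularity, version), and navigation is allowed only between adjacent coordinates, i.e., views that differ by one step on exactly one axis.

    import java.util.List;

    public class ViewNavigationSketch {
        enum Context { QUALITY, VERSION_CONTROL, BUGS }
        enum Granularity { LINE, METHOD, CLASS, PACKAGE }

        record View(Context context, Granularity granularity, int version) {}

        // Two views are adjacent iff exactly one axis differs, by one step.
        static boolean adjacent(View a, View b) {
            int d = Math.abs(a.context().ordinal() - b.context().ordinal())
                  + Math.abs(a.granularity().ordinal() - b.granularity().ordinal())
                  + Math.abs(a.version() - b.version());
            return d == 1;
        }

        public static void main(String[] args) {
            View current = new View(Context.QUALITY, Granularity.CLASS, 7);
            List<View> targets = List.of(
                new View(Context.QUALITY, Granularity.PACKAGE, 7),       // one axis, one step
                new View(Context.VERSION_CONTROL, Granularity.CLASS, 8)); // two axes at once
            targets.forEach(t ->
                System.out.println(t + " adjacent=" + adjacent(current, t)));
        }
    }

Restricting navigation to adjacent coordinates is what keeps consecutive views visually similar, which is the stated basis for the reduced cognitive load.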
94

[en] UNDERSTANDING AND IMPROVING BATCH REFACTORING IN SOFTWARE SYSTEMS / [pt] ENTENDENDO E MELHORANDO A PRÁTICA DE REFATORAÇÕES EM LOTE EM SISTEMAS DE SOFTWARE

DIEGO CEDRIM GOMES REGO, 15 January 2019
Code smells in a program indicate structural quality problems that can be addressed by software refactoring. However, developers may neglect smells, or may even create new ones, when applying single refactorings. Little has been reported about the recurring beneficial and harmful effects of refactoring on a program's structural quality, so developers still lack guidance in non-trivial smell-removing tasks. In fact, evidence suggests that developers often need to apply a sequence of refactorings, a so-called batch refactoring, to entirely remove a smelly code structure. In this thesis, we therefore conducted a series of studies to understand the impact of single and batch refactorings on code smells. In our first studies, we analyzed how often commonly used types of single refactoring affect the density of code smells along the version histories of dozens of projects. Even though 79.4% of the refactorings touched smelly elements, 57% had no impact on smell removal. Surprisingly, only 9.7% of the refactorings removed smells, while 33% induced the introduction of new ones. On one hand, we observed harmful refactoring-smell patterns that can be used to steer developers away from smell-inducing refactorings. On the other hand, we observed that many smells can be removed only through batch refactorings, so our last studies investigated the impact of batch refactorings on smells. Even when applied in batches, refactorings tend to maintain or even increase the density of code smells. We also identified common batch-smell patterns, which enabled us to create heuristics that can guide developers through smell-removing tasks. A final study evaluated those heuristics, and we conclude that the outcomes are promising.
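The per-refactoring classification behind these percentages can be sketched as follows. The data model is invented for illustration: compare the smells attached to the refactored elements before and after the change, and label the effect.

    import java.util.*;

    public class RefactoringEffectSketch {
        enum Effect { REMOVED_SMELLS, INTRODUCED_SMELLS, NO_IMPACT, MIXED }

        static Effect classify(Set<String> before, Set<String> after) {
            boolean removed = !after.containsAll(before);  // some smell disappeared
            boolean added   = !before.containsAll(after);  // some smell appeared
            if (removed && added) return Effect.MIXED;
            if (removed) return Effect.REMOVED_SMELLS;
            if (added)   return Effect.INTRODUCED_SMELLS;
            return Effect.NO_IMPACT;
        }

        public static void main(String[] args) {
            // Smells on the touched elements before/after an Extract Method.
            Set<String> before = Set.of("LongMethod", "FeatureEnvy");
            Set<String> after  = Set.of("FeatureEnvy");
            System.out.println(classify(before, after)); // REMOVED_SMELLS
        }
    }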
95

Análise de correlação entre métricas de evolução e qualidade de design de software. / Correlation analysis between evolution metrics and software design quality.

ASSIS, Pablo Oliveira Antonino de, 13 March 2009
We studied the evolution of eight open-source projects and five proprietary ones, looking for statistical correlations between complexity and quality, the latter measured in terms of bad smells and detected bugs. In all projects, we found strong statistical correlations between the complexity measure (WMC) and the quality measures. In all proprietary systems and in five of the open-source ones, the correlation is very strong (r > 0.9). Surprisingly, in three of the open-source projects the correlation is strong but negative. This is attributed to the fact that, in those projects, bad smells were intentionally removed through refactoring. These results suggest that, despite the correlation, there is no necessary cause-effect relation between complexity and quality measures. We conclude that merely eliminating bad smells is not a good strategy if the goal is to reduce design complexity and improve the quality aspects associated with that reduction.
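The statistical core of such a study is a plain Pearson correlation between a complexity series and a quality series. The sketch below uses made-up numbers; only the formula reflects the analysis described.

    public class CorrelationSketch {
        // Pearson correlation coefficient between two equal-length series.
        static double pearson(double[] x, double[] y) {
            int n = x.length;
            double mx = 0, my = 0;
            for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
            mx /= n; my /= n;
            double sxy = 0, sxx = 0, syy = 0;
            for (int i = 0; i < n; i++) {
                sxy += (x[i] - mx) * (y[i] - my);
                sxx += (x[i] - mx) * (x[i] - mx);
                syy += (y[i] - my) * (y[i] - my);
            }
            return sxy / Math.sqrt(sxx * syy);
        }

        public static void main(String[] args) {
            // Illustrative per-release values: WMC vs. bad-smell count.
            double[] wmc    = {110, 135, 160, 190, 240};
            double[] smells = { 12,  15,  21,  25,  33};
            System.out.printf("r = %.3f%n", pearson(wmc, smells)); // close to +1
        }
    }

A negative r on the same data shape is exactly what the three outlier projects showed: complexity grew while smells were deliberately driven down.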
96

Un meta-modèle de composants pour la réalisation d'applications temps-réel flexibles et modulaires / A component metamodel for the development of modular and flexible real-time applications

Rodrigues Americo, Joao Claudio, 4 November 2013
The growing complexity of software has led software engineering researchers to look for new ways of designing and building systems. For instance, the service-oriented architecture (SOA) approach is currently considered the most advanced way to rapidly develop and integrate modular, flexible applications. A core principle of software engineering solutions is reusability, and consequently generality, which complicates their use in systems where optimizations are pervasive, such as real-time systems. As a result, building a real-time system is expensive, because it must largely be conceived from scratch, and most real-time systems do not benefit from the advantages that software engineering brings, such as modularity and flexibility. This thesis aims to bring real-time concerns into popular, standard SOA solutions in order to ease the design and development of flexible, modular real-time applications. It does so through a component-based model for real-time applications that allows dynamic modification of the application architecture. The component model is an extension of the SCA standard that attaches quality-of-service attributes to both the service consumer and the service provider in order to establish a real-time-specific service-level agreement. The model is executed on top of an OSGi service platform, the de facto standard for developing modular applications in Java.
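The real-time service-level agreement between consumer and provider can be sketched with a toy QoS check. The attribute names below are assumptions, not the thesis' metamodel: a contract is established only when the provider's declared quality-of-service attributes satisfy the consumer's requirements.

    public class RtSlaSketch {
        // Declared QoS of a provider and required QoS of a consumer.
        record Qos(long worstCaseExecMicros, int priority) {}

        // A real-time SLA holds if the provider's worst-case execution time
        // and priority meet the consumer's demands.
        static boolean slaSatisfied(Qos provided, Qos required) {
            return provided.worstCaseExecMicros() <= required.worstCaseExecMicros()
                && provided.priority() >= required.priority();
        }

        public static void main(String[] args) {
            Qos provider = new Qos(500, 8);   // 500 us WCET, priority 8
            Qos consumer = new Qos(1000, 5);  // tolerates 1 ms, needs priority >= 5
            System.out.println("SLA satisfied: " + slaSatisfied(provider, consumer));
        }
    }

In the actual model, such attributes would be carried on the SCA service description and checked at binding time rather than hard-coded.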
97

Robusta : une approche pour la construction d'applications dynamiques / Robusta : An approach to building dynamic applications

Rudametkin Ivey, Walter Andrew, 21 February 2013
Current areas of research, such as ubiquitous and cloud computing, consider execution environments to be in a constant state of change. Dynamic applications, where components can be added, removed, and substituted during execution, allow software to adapt to changing environments and to accommodate evolving features. Unfortunately, dynamic applications raise design and development issues that have yet to be fully addressed. In this dissertation we show that dynamism is a crosscutting concern that breaks many of the assumptions developers are otherwise allowed to make in classic applications. Dynamism deeply impacts software design and development: if not handled correctly, it can silently corrupt the application. Furthermore, writing dynamic applications is complex and error-prone, and given the level of complexity and the impact dynamism has on the development process, software cannot become dynamic without (extensive) modification, and dynamism cannot be entirely transparent (although much of it can often be externalized or automated). This work focuses on giving the software architect control over the level, the nature, and the granularity of dynamism required in dynamic applications. Architects and developers can then choose where the effort of programming dynamic components is best spent, avoiding the cost and complexity of making all components dynamic; the idea is to let architects strike a balance between the effort spent and the level of dynamism the application needs.
At design time we perform an impact analysis using the architect's requirements for dynamism. This identifies components that can be corrupted by dynamism and, at the architect's discretion, renders selected components resilient to it. The application becomes a well-defined mix of dynamic areas, where components are expected to change at runtime, and static areas that are protected from dynamism and where programming is simpler and less restrictive. At runtime, our framework keeps the application consistent, even after unexpected dynamic events, by computing and removing potentially corrupt components. The framework attempts to recover quickly from dynamism and to minimize its impact on the application. Our work builds on recent software engineering and middleware technologies, namely OSGi, iPOJO, and APAM, which provide basic mechanisms for handling dynamism, such as dependency injection, late binding, service-availability notifications, deployment, and lifecycle and dependency management. Our approach, implemented in the Robusta prototype, extends and complements these technologies by providing design- and development-time support and by enforcing consistent application execution in the face of dynamism.
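The runtime consistency mechanism can be illustrated with a small propagation sketch. The data structures are assumed, not Robusta's API: when a component departs, everything that transitively depends on it is potentially corrupt, unless a component was made resilient to dynamism at design time, in which case it absorbs the change and stops the propagation.

    import java.util.*;

    public class CorruptSetSketch {
        // Breadth-first propagation from the departed component; resilient
        // components handle the change themselves and stop the propagation.
        static Set<String> corrupt(String departed,
                                   Map<String, Set<String>> dependents,
                                   Set<String> resilient) {
            Set<String> out = new LinkedHashSet<>();
            Deque<String> work = new ArrayDeque<>(List.of(departed));
            while (!work.isEmpty()) {
                for (String d : dependents.getOrDefault(work.poll(), Set.of())) {
                    if (!resilient.contains(d) && out.add(d)) work.add(d);
                }
            }
            return out;
        }

        public static void main(String[] args) {
            // dependents.get(c) = components that depend on c.
            Map<String, Set<String>> dependents = Map.of(
                "log",   Set.of("store", "ui"),
                "store", Set.of("ui"),
                "ui",    Set.of());
            Set<String> resilient = Set.of("ui"); // ui was hardened at design time
            System.out.println(corrupt("log", dependents, resilient)); // [store]
        }
    }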
99

Compreensão de mudanças estruturais no código fonte usando análise dinâmica e estática / Understanding structural changes in source code using dynamic and static analysis

Silva, Janio Rosa, 17 December 2015
Software comprehension is a key maintenance activity. During maintenance and evolution, continuous changes can degrade a system's modular design over time, increasing its complexity; consequently, companies spend considerable time and resources understanding and changing their software. Understanding how a system has evolved is thus an important step toward planning, developing, and managing future changes. Developers usually need to quickly understand recent changes before implementing new ones, yet existing comprehension approaches are limited when it comes to detecting different components with similar roles, locating features in the source code, and measuring the impact of a specific change on other features. In this work we present an approach based on dynamic and static analysis to reduce program-comprehension effort. More specifically, we propose a mechanism to locate the structural changes that occurred in a program from one version to another: given a specific functionality, we locate structural changes and changes in the relationships between components across two versions. Each structural change detected in the first, dynamic step is then verified by a static-analysis step against the source code of both versions, and the candidate changes are classified into five patterns: i) move class; ii) move method; iii) add method; iv) remove method; and v) access-modifier change (the last three representing class-interface changes). We evaluated the approach on three open-source systems: jFreeChart, Tomcat, and JHotDraw. The results reveal structural changes such as moved methods, moved classes, and package-relationship changes. We further investigated the impact of structural changes on multiple functionalities (what we call change-impact assessment), and we evaluated the package-relationship changes found in jFreeChart in terms of precision and recall. The results show that the move-method pattern dominates, appearing on average in 28.4% of the changes, and that some class changes affect many functionalities, giving developers advance notice of which functionalities a change will touch. In jFreeChart, package changes were detected with a precision of 100% and a recall of 83%. After detecting a considerable change in the package structure between jFreeChart versions 0.7.0 and 0.9.5, further analysis showed that the new packages have lower coupling according to the efferent-coupling metric, which may indicate better modularization; detecting such package changes can therefore help developers assess cohesion, coupling, and modularity at the package level. Finally, the change-impact assessment produced by the approach lets developers evaluate the system's past and foresee the impact and reach of future changes to classes and functionalities. (Master's dissertation)
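The classification step can be sketched by diffing two class-to-methods maps. The example below is an illustrative simplification (a real tool matches methods by full signatures and uses dynamic traces to scope the analysis): a method that disappears from its old class is reported as a move when it reappears elsewhere, otherwise as a removal, and the symmetric check yields additions.

    import java.util.*;

    public class ChangePatternSketch {
        public static void main(String[] args) {
            Map<String, Set<String>> v1 = Map.of(
                "Chart", Set.of("draw", "legend"),
                "Axis",  Set.of("scale"));
            Map<String, Set<String>> v2 = Map.of(
                "Chart",  Set.of("draw"),
                "Legend", Set.of("legend"),  // moved out of Chart
                "Axis",   Set.of("scale", "ticks"));

            // Methods gone from their old class: MOVE if found elsewhere, else REMOVE.
            v1.forEach((cls, methods) -> methods.forEach(m -> {
                if (!v2.getOrDefault(cls, Set.of()).contains(m)) {
                    Optional<String> home = v2.entrySet().stream()
                        .filter(e -> e.getValue().contains(m))
                        .map(Map.Entry::getKey).findFirst();
                    System.out.println(home
                        .map(h -> "MOVE " + cls + "." + m + " -> " + h)
                        .orElse("REMOVE " + cls + "." + m));
                }
            }));
            // Methods with no counterpart anywhere in v1: ADD.
            v2.forEach((cls, methods) -> methods.forEach(m -> {
                boolean existed = v1.values().stream().anyMatch(s -> s.contains(m));
                if (!existed) System.out.println("ADD " + cls + "." + m);
            }));
        }
    }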
100

[pt] IDENTIFICANDO CANDIDATOS A MICROSSERVIÇOS EM CÓDIGO LEGADO / [en] IDENTIFYING MICROSERVICES CANDIDATES IN LEGACY CODE

10 December 2020
Microservices are an industrial technique for promoting better scalability and maintainability through small, autonomous services. Previous studies suggest that microservice architectures have been widely used to overcome limitations of legacy monolithic systems, such as barriers to innovation and to the use of different technology stacks. The process of migrating to a microservice architecture is far from trivial; in particular, identifying candidate microservices, and the source code associated with each candidate, is recognizably time-consuming and error-prone. Automated approaches have therefore been proposed to reduce the effort of this task. These approaches commonly adopt one or two criteria to identify microservices in a legacy monolithic system, but there is little understanding of how useful those criteria are in practice, and limited knowledge of which criteria practitioners actually consider relevant. Given these limitations, we conducted a survey and interviews to better understand, from the practitioners' point of view, the usefulness of the criteria reported in empirical studies (e.g., case studies and experience reports).
The results of the survey and interviews revealed that existing automated approaches and tools are far from aligned with practical needs. To fill this gap, this work defines an automated approach named toMicroservices. The approach relies on a combination of static and dynamic analysis of the legacy code and indicates the microservice candidates together with the corresponding source code extracted from the legacy system. toMicroservices uses search-based software engineering (SBSE) to optimize and balance the five criteria most commonly adopted by practitioners: feature modularization, network-overhead reduction, reuse, coupling, and cohesion. An industrial case study and a focus group were subsequently conducted to evaluate and improve toMicroservices.
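The search-based formulation can be illustrated with a toy fitness function over the five criteria. The weights and criterion values below are invented; the actual approach derives them from static and dynamic analysis of the legacy system and lets the search maximize fitness over candidate decompositions.

    public class FitnessSketch {
        // A candidate decomposition scored on the five criteria, each
        // assumed to be normalized to 0..1 (higher is better).
        record Candidate(String name, double featureModularization,
                         double networkOverheadReduction, double reuse,
                         double lowCoupling, double cohesion) {}

        // Equal weights for the sketch; a search algorithm (e.g., a genetic
        // algorithm) would maximize this over the space of decompositions.
        static double fitness(Candidate c) {
            return (c.featureModularization() + c.networkOverheadReduction()
                  + c.reuse() + c.lowCoupling() + c.cohesion()) / 5.0;
        }

        public static void main(String[] args) {
            Candidate a = new Candidate("split-by-feature", 0.8, 0.6, 0.7, 0.9, 0.8);
            Candidate b = new Candidate("split-by-layer",   0.4, 0.7, 0.5, 0.3, 0.4);
            System.out.printf("%s: %.2f%n", a.name(), fitness(a));
            System.out.printf("%s: %.2f%n", b.name(), fitness(b));
        }
    }

Balancing the criteria in one objective (or as a multi-objective search) is what distinguishes this formulation from single-criterion identification approaches.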
