81 |
A wide spectrum type system for transformation theory. Ladkau, Matthias, January 2009.
One of the most difficult tasks a programmer can be confronted with is the migration of a legacy system. Usually, these systems are unstructured, poorly documented and contain complex program logic. In most cases, this is the result of an emphasis on raw performance rather than on clean, structured code, and of a long history of quick fixes and ad hoc enhancements instead of a proper software reengineering process with a full redesign during major enhancements. Nowadays, the old programming paradigms are becoming an increasingly serious problem: it has been estimated that 90% of the costs of a typical software system arise in the maintenance phase. Many companies are simply too afraid of changing their software infrastructure and prefer to follow principles like "never touch a running system". Yet these companies experience growing pressure to migrate their legacy systems onto newer platforms, because the maintenance of such systems is expensive and dangerous: the risk of losing vital parts of the source code or its documentation increases drastically over time.

The FermaT transformation system has shown the ability to automatically or semi-automatically restructure and abstract legacy code within a special intermediate language called WSL (Wide Spectrum Language). Unfortunately, the current transformation process only supports the migration of assembler, as WSL lacks the ability to handle data types properly. Data structures in assembler are currently translated directly into C data types, which involves many assumption-based "hard-coded" conversions. The absence of an adequate type system for WSL has caused several flaws in the whole transformation process and limits its abilities significantly.

The main aim of the presented research is to tackle these problems by investigating and formulating how a type system can contribute to a safe and reliable migration of legacy systems. The described research includes the definition of the key type-related problems in the FermaT migration process and of how to solve them with a suitable type system approach. Since software migration often includes a change of programming language, the type system for WSL has to be able to support various type system approaches, including the representation of all relevant details, to avoid assumptions. This is especially difficult, as most programming languages are designed for a special purpose, which means that their programming constructs and data types differ significantly. This ranges from languages with simple type systems, whose programs are prone to unintended side-effects, to languages with strict type systems, which are constrained in their flexibility. It is important to include as many type-related details as necessary to avoid making assumptions during language-to-language translation.

The result of the investigation is a novel multi-layered type system specifically designed to satisfy the needs of WSL for a sophisticated solution without imposing too many limitations on its abilities. The type system has an adjustable expressiveness, able to represent a wide spectrum of typing approaches: from weak typing, which allows direct memory access and down-casting, via very strict typing with a high diversity of data types, to object-oriented typing, which supports encapsulation and data hiding. Looking at the majority of commercially relevant statically typed programming languages, two fundamental properties can be identified: type strictness and safety.

A type system can be either weakly or strongly typed and may or may not allow unsafe features such as direct memory access. Each layer of the Wide Spectrum Type System has a different combination of these properties. The approach also includes special Type System Transformations which can be used to move a given WSL program among these layers. Other key features are explicit typing and scalability. The whole approach is based on a sound mathematical foundation which assures correctness and integrates seamlessly into the present mathematical definition of WSL. The type system is formally introduced to WSL by constructing an attribute grammar for the language. Type checking and type inference are used to annotate the Abstract Syntax Tree of a given WSL program with type derivations, which can be used to reveal and indicate possible typing errors, or to infer types if the program did not feature explicit type declarations in the first place. Also notable in this approach is the fact that object orientation is introduced to a procedural programming language without the introduction of new semantics: it is shown that object orientation can be introduced just by adjusting the type checking rules and adding some syntactic notation. The approach was implemented and tested on two case studies. The thesis describes and discusses both cases in detail and shows how a migration which ignores type systems could accidentally introduce errors due to assumptions made during translation. Both case studies exercise all important aspects of the approach, including type transformations and object identification. The thesis concludes by summarising the whole work, identifying limitations, presenting future perspectives and drawing conclusions.
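As an illustration of the layering idea, here is a minimal Java sketch. It is hypothetical throughout: the layer names, expression shapes and rules below are invented for this listing and are not the thesis's actual WSL attribute grammar. It shows the core intuition that one and the same expression can type-check at a weak layer that admits casts and be rejected after a Type System Transformation moves the program to a stricter layer.

```java
// Requires Java 21 (sealed interfaces, records, pattern switch).
import java.util.Map;

/** Toy layered type checker; all names are hypothetical illustrations. */
public class LayeredTypeChecker {

    enum Layer { WEAK, STRICT, OBJECT_ORIENTED }

    sealed interface Expr permits IntLit, Var, Add, Cast {}
    record IntLit(int value) implements Expr {}
    record Var(String name) implements Expr {}
    record Add(Expr left, Expr right) implements Expr {}
    record Cast(String targetType, Expr inner) implements Expr {}

    private final Layer layer;
    private final Map<String, String> env; // variable name -> declared type

    LayeredTypeChecker(Layer layer, Map<String, String> env) {
        this.layer = layer;
        this.env = env;
    }

    /** Returns the derived type, or throws if the expression is ill-typed at this layer. */
    String check(Expr e) {
        return switch (e) {
            case IntLit i -> "INT";
            case Var v -> env.getOrDefault(v.name(), "UNKNOWN");
            case Add a -> {
                String l = check(a.left()), r = check(a.right());
                if (!l.equals("INT") || !r.equals("INT"))
                    throw new IllegalStateException("ADD expects INT operands, got " + l + ", " + r);
                yield "INT";
            }
            case Cast c -> {
                // Down-casts are only admissible at the weak layer; stricter layers reject them.
                if (layer != Layer.WEAK)
                    throw new IllegalStateException("cast not allowed at layer " + layer);
                yield c.targetType();
            }
        };
    }

    public static void main(String[] args) {
        Expr prog = new Add(new Var("x"), new Cast("INT", new Var("p")));
        Map<String, String> env = Map.of("x", "INT", "p", "PTR");
        System.out.println(new LayeredTypeChecker(Layer.WEAK, env).check(prog)); // INT
        // The same program fails under Layer.STRICT, mimicking what a Type System
        // Transformation must resolve before the program can move to that layer.
    }
}
```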
|
82 |
Contribution à la simulation de la stimulation magnétique transcrânienne : vers une approche dirigée par les modèles [Contribution to the simulation of transcranial magnetic stimulation: towards a model-driven approach]. Luquet, Sébastien, 14 December 2009.
Transcranial Magnetic Stimulation (TMS) is a neural stimulation technique with numerous medical applications. However, its use remains empirical. The objective of this thesis was to build a software tool that helps to better understand the effects of the stimulation and assists in carrying out TMS sessions. As the software was developed, it became apparent that it had turned into a legacy system. The secondary objective of this work was therefore to analyse the software's obsolescence and to propose solutions to limit this phenomenon through the use of Model-Driven Engineering (MDE). After presenting the phenomena governing brain activity, we present the different neural stimulation techniques and the advantages offered by TMS. We also present the background needed to understand the electromagnetic phenomena involved, followed by an introduction to the fundamental concepts of MDE. In a third part, the emphasis is placed on the different models and methods for computing the effects of Transcranial Magnetic Stimulation, before presenting the solution that was chosen and implemented in the simulator. The last two parts of the manuscript focus on the simulator as a whole (3D visualisation) and then on how it could be refactored to ease its maintenance and accommodate the possible evolutions that we present in the conclusion.
|
83 |
Structured representation of composite software changes. Chabra, Aarti, 13 December 2011.
In a software development cycle, programs go through many iterations. Identifying and understanding program changes is a tedious but necessary task for programmers, especially when software is developed in a collaborative environment. Existing tools used by programmers either fail to find structural differences, or report the differences as atomic changes, such as updates of individual syntax tree nodes.

Programmers frequently use program restructuring techniques, such as refactorings, that are composed of several individual atomic changes. Current version differencing tools omit these high-level changes and report just the set of individual atomic changes. When a large number of refactorings are performed, the number of reported atomic changes is very large, making the program differences very difficult to understand.

This problem can be addressed by reporting the program differences as composite changes, thereby saving programmers the effort of navigating through the individual atomic changes. This thesis proposes a methodology to explore the atomic changes reported by existing version differencing tools in order to infer composite changes. First, we illustrate the different approaches that can be used for representing object-language program differences using a variation representation. Next, we present the process of inferring composite changes from the structured representation of atomic changes. This process describes patterns that specify the expected structure of an expression corresponding to each composite change to be inferred; the information in these patterns is then used to design the change inference algorithm. The composite changes inferred from a given expression are annotated in the expression, allowing the changes to be reported as desired. / Graduation date: 2012
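To make the inference step concrete, here is a small hypothetical Java sketch of the pattern idea. The thesis operates on a structured variation representation; the change shapes, field names and the single "rename" pattern below are invented for illustration only.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch: if every atomic change in a group updates the same
 *  identifier in the same way, report one composite "Rename" instead of N updates. */
public class CompositeChangeInference {

    record AtomicChange(String nodeKind, String oldText, String newText) {}
    record CompositeChange(String description) {}

    static List<CompositeChange> infer(List<AtomicChange> atoms) {
        List<CompositeChange> result = new ArrayList<>();
        // Pattern: >1 identifier updates, all with identical oldText -> newText,
        // matches the expected structure of a rename refactoring.
        boolean rename = atoms.size() > 1 && atoms.stream()
                .allMatch(a -> a.nodeKind().equals("Identifier")
                        && a.oldText().equals(atoms.get(0).oldText())
                        && a.newText().equals(atoms.get(0).newText()));
        if (rename) {
            result.add(new CompositeChange("Rename " + atoms.get(0).oldText()
                    + " to " + atoms.get(0).newText() + " (" + atoms.size() + " sites)"));
        } else {
            // No pattern matched: fall back to reporting the raw atomic changes.
            atoms.forEach(a -> result.add(new CompositeChange(
                    "Update " + a.nodeKind() + ": " + a.oldText() + " -> " + a.newText())));
        }
        return result;
    }

    public static void main(String[] args) {
        List<AtomicChange> atoms = List.of(
                new AtomicChange("Identifier", "count", "total"),
                new AtomicChange("Identifier", "count", "total"),
                new AtomicChange("Identifier", "count", "total"));
        infer(atoms).forEach(c -> System.out.println(c.description()));
    }
}
```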
|
84 |
Continuous architecture in a large distributed agile organization: A case study at Ericsson. Standar, Magnus, January 2017.
Agile practices have become the norm, also in large-scale organizations. Applying agile methods entails introducing continuous practices, including continuous architecture. For web-scale applications, microservices are a rising star. This thesis investigates whether microservices could also be an answer for embedded systems, tackling the problem of synchronising many parallel teams.
|
85 |
[en] PRIORITIZATION OF CODE ANOMALIES BASED ON ARCHITECTURE SENSITIVENESS / [pt] PRIORIZAÇÃO DE ANOMALIAS DE CÓDIGO SENSÍVEL A ARQUITETURA. Roberta Lopes Arcoverde, 30 January 2015.
[en] The progressive manifestation of code anomalies in a software system is a key symptom of its architecture quality decline. When those anomalies are not detected and removed early, the maintainability of software projects can be compromised irreversibly, and, eventually, a complete redesign is inevitable. Despite the existence of many techniques and tools for code anomaly detection, identifying anomalies that are more likely to cause architecture problems remains a challenging task. In fact, studies performed in the context of this dissertation show that even when there is tool support for detecting code anomalies, developers seem to invest more time refactoring those that are not related to architectural problems. Moreover, we also found that developers frequently prioritize refactoring of code elements that do not contribute to a better adherence to the intended software architecture. In this context, this dissertation proposes a prioritization approach for identifying which anomalies in a system implementation are more harmful to the architecture. The proposed approach is composed of heuristic strategies that exploit several software project factors to identify and rank code anomalies by their architecture relevance. These factors range from change characteristics to the potential architecture roles of software modules. Furthermore, we implemented tool support for applying our prioritization approach to Java projects, and evaluated the approach on 4 software projects from different application domains. Our evaluation revealed that software maintainers could benefit from the recommended rankings for identifying which code anomalies are harming the architecture the most, helping them invest their refactoring efforts into solving the architecturally relevant problems.
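As a rough illustration of how such heuristics might combine, the following Java sketch ranks anomalies by a score built from two of the factors the abstract names: change characteristics and architectural role. The record fields, the doubling weight and the example anomalies are invented for this sketch and are not the dissertation's actual heuristics.

```java
import java.util.Comparator;
import java.util.List;

/** Hypothetical anomaly ranking; factor names paraphrase the abstract, weights are invented. */
public class AnomalyRanker {

    record CodeAnomaly(String element, String kind, int changeCount, boolean inArchitecturalRole) {}

    /** Higher score = more architecturally relevant, so refactor first. */
    static double score(CodeAnomaly a) {
        double s = a.changeCount();               // churn during the system's evolution
        if (a.inArchitecturalRole()) s *= 2.0;    // boost elements playing an architectural role
        return s;
    }

    public static void main(String[] args) {
        List<CodeAnomaly> anomalies = List.of(
                new CodeAnomaly("OrderFacade.process", "Long Method", 14, true),
                new CodeAnomaly("StringUtils.pad", "Long Method", 30, false),
                new CodeAnomaly("PaymentGateway", "God Class", 9, true));
        anomalies.stream()
                .sorted(Comparator.comparingDouble(AnomalyRanker::score).reversed())
                .forEach(a -> System.out.printf("%.1f  %s (%s)%n", score(a), a.element(), a.kind()));
    }
}
```

Note how a heavily churned but architecturally irrelevant element (the private utility method) can still outrank others on churn alone; tuning such weights is exactly the kind of judgment the dissertation's heuristics encode.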
|
86 |
Replicação assíncrona em bancos de dados evolutivos / Asynchronous Replication in Evolutionary Databases. Domingues, Helves Humberto, 2 June 2011.
Evolutionary database modeling is necessary due to the frequent changes in application requirements. The challenge is greater when the database must support multiple applications simultaneously. The solution for evolution proposed by Scott Ambler is refactoring with a transition period, during which both the old and the new database schemas coexist and data is replicated via a synchronous process, which brings several difficulties, such as interference with the normal operation of the applications. To minimize these difficulties, this thesis proposes an asynchronous process to keep these schemas updated and presents a prototype tool to support database evolution. The proposal was validated by a laboratory experiment in which the solution presented here was compared with the one proposed by Ambler.
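A minimal sketch of the asynchronous idea, assuming an in-memory change log in place of real DBMS triggers and tables; all names are hypothetical simplifications, not the thesis's actual mechanism. The point is that the old-schema write path only records a change event, and a background worker replays it against the new schema outside the application's transaction.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** Hypothetical asynchronous replicator between an old and a new schema. */
public class AsyncSchemaReplicator {

    record ChangeEvent(String table, String rowKey, String payload) {}

    private final BlockingQueue<ChangeEvent> log = new LinkedBlockingQueue<>();

    /** Called by the old-schema write path; cheap, never blocks the application. */
    void recordChange(ChangeEvent e) {
        log.offer(e);
    }

    /** Background worker: drains the log and replays each change on the new schema. */
    void startWorker() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    ChangeEvent e = log.take();
                    applyToNewSchema(e); // in a real system: an INSERT/UPDATE on the new schema
                }
            } catch (InterruptedException ignored) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    private void applyToNewSchema(ChangeEvent e) {
        System.out.println("replayed on new schema: " + e);
    }

    public static void main(String[] args) throws InterruptedException {
        AsyncSchemaReplicator r = new AsyncSchemaReplicator();
        r.startWorker();
        r.recordChange(new ChangeEvent("customer", "42", "name=Alice"));
        Thread.sleep(200); // give the worker time to drain before the demo exits
    }
}
```

The contrast with Ambler's synchronous approach is that the application transaction here finishes without waiting for the second schema, trading immediate consistency between schemas for lower interference.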
|
87 |
Aplicação de práticas ágeis na construção de data warehouse evolutivo / Application of agile practices in the traditional method of data warehouse engineering. Carvalho, Guilherme Tozo de, 28 April 2009.
A data warehouse (DW) is a centralized, subject-oriented, integrated, non-volatile, and time-variant collection of data built to support management's decision-making process, structuring the data in an analytic architecture quite different from the relational architecture used in transactional databases. Building a DW is a complex engineering project because it involves many technologies and many people, from different teams, in a huge corporate effort to build a central base of corporate information. The traditional engineering process for building a DW does not use agile concepts and, as its scope is quite large, it might take a long time until customers can use its features. Agile methods of software engineering are commonly used as an alternative to traditional methods, and they have differentials that bring a lot of value to big projects, such as the continuous attempt to deliver working releases in short periods of time, and the belief that every system needs to be continuously adapted to changes in its environment. This work applies agile practices to the traditional DW engineering method, so that development can be done in short iterative cycles, making a fast and evolutionary DW project possible, with frequent deliveries of new functionality. The continuous evolution of this complex analytical environment is supported by evolutionary database concepts and by the foundations of agile methods.
|
88 |
Harnessing synthetic biology for the bioprospecting and engineering of aromatic polyketide synthases. Cummings, Matthew, January 2018.
Antimicrobial-resistant microorganisms are predicted to pose an existential threat to humanity within the next three decades. Characterisation of antimicrobial small molecules with novel modes of action from microorganisms has historically counteracted this evolutionary arms race; however, the bountiful source of pharmaceutically relevant bioactive specialised metabolites discovered in the golden era of drug discovery has long since dried up. The clinicians' arsenal of useful antimicrobials is diminishing, and a fresh perspective on specialised metabolite discovery is necessary. This call to action is being answered, in part, through advances in genome sequencing and bioinformatics predictions, and through the development of next-generation synthetic biology tools aiming to translate the biological sciences into an engineering discipline. To expedite our route to new pharmaceutically relevant specialised metabolites using the synthetic biology toolbox, several bottlenecks need to be addressed, and they are tackled herein.

Biosynthetic gene clusters (BGCs) represent blueprints for pharmaceuticals; however, to date the vast wealth of knowledge about biosynthetic gene clusters is inconsistently reported and sporadically disseminated throughout the literature and databases. To bring the reporting of BGCs in line with engineering principles, we designed and built a community-supported standard, the Minimum Information about a Biosynthetic Gene cluster (MIBiG), for reporting BGCs in a consistent manner, and centralised this information in an easy-to-operate, open-access repository for rapid retrieval of information, an essential resource for the bioengineer.

Prioritisation represents the next bottleneck in specialised metabolite discovery. Bioinformatics tools have predicted a cache of thousands of BGCs within publicly available genome sequences; however, high experimental attrition rates drastically slow characterisation of the corresponding specialised metabolites. We designed and built an Output Ordering and Prioritisation System (OOPS) to rank thousands of BGCs in parallel against molecular-biology-relevant parameters, pairing BGCs with appropriate heterologous expression hosts and facilitating a judicious choice of BGCs for characterisation, so as to reduce experimental attrition.

To fully realise the potential of synthetic biology in specialised metabolite discovery, a genetically amenable heterologous host, capable of completing rapid design-build-test-learn cycles, is necessary. This cannot be achieved for the pharmaceutically important type II polyketides, as their biosynthetic machinery is largely restricted to Actinobacteria. Using MIBiG datasets, antiSMASH and BLASTP, we identify 5 sets of soluble type II polyketide synthases (PKSs) in Escherichia coli for the first time. We construct and test the robustness of a plug-and-play scaffold for bioproduction of aromatic polyketides using one PKS in E. coli, yielding anthraquinone, dianthrone and benzoisochromanequinone intermediates. Through bioprospecting for biological 'parts' to expand the chemical diversity of our plug-and-play scaffold, we describe a new lineage of type II PKSs predominantly from non-Actinobacteria.

The standards, software, plug-and-play scaffold and biosynthetic 'parts' described herein will act as an engine for rapid and automated bioproduction of existing and novel pharmaceutically relevant aromatic polyketides in E. coli using the synthetic biology toolbox.
|
89 |
Meta-Model Guided Error Correction for UML Models. Bäckström, Fredrik; Ivarsson, Anders, January 2007.
Modeling is a complex process which is quite hard to do in a structured and controlled way. Many companies provide a set of guidelines for model structure, naming conventions and other modeling rules. Using meta-models to describe these guidelines makes it possible to check whether a UML model follows the guidelines or not. Providing this error checking of UML models is only one step on the way to making modeling software an even more valuable and powerful tool. Moreover, by providing correction suggestions and automatic correction of these errors, we try to give the modeler as much help as possible in creating correct UML models. Since the area of model correction based on meta-models has not been researched earlier, we have taken an explorative approach.

The aim of the project is to create an extension of the program MetaModelAgent, by Objektfabriken, which is a meta-modeling plug-in for IBM Rational Software Architect. The thesis shows that error correction of UML models based on meta-models is a possible way to provide automatic checking of modeling guidelines. The developed prototype is able to give correction suggestions and automatic corrections for many types of errors that can occur in a model. The results imply that meta-model guided error correction techniques should be further researched and developed to enhance the functionality of existing modeling software.
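A tiny Java sketch of the underlying idea, explicitly not MetaModelAgent's real API: a guideline derived from a meta-model is checked against each model element, and a violation carries a machine-applicable correction suggestion. Element kinds, the naming rule and all identifiers are invented for illustration.

```java
import java.util.List;

/** Hypothetical meta-model guided checking with correction suggestions. */
public class GuidelineChecker {

    record ModelElement(String kind, String name) {}

    interface Guideline {
        /** Returns a corrected element, or null if the element already conforms. */
        ModelElement suggestCorrection(ModelElement e);
    }

    /** Example guideline: class names must start with an upper-case letter. */
    static final Guideline CLASS_NAME_CASE = e -> {
        if (!e.kind().equals("Class") || Character.isUpperCase(e.name().charAt(0))) return null;
        return new ModelElement(e.kind(),
                Character.toUpperCase(e.name().charAt(0)) + e.name().substring(1));
    };

    public static void main(String[] args) {
        List<ModelElement> model = List.of(
                new ModelElement("Class", "orderService"),
                new ModelElement("Class", "Invoice"));
        for (ModelElement e : model) {
            ModelElement fix = CLASS_NAME_CASE.suggestCorrection(e);
            System.out.println(fix == null
                    ? e.name() + " conforms"
                    : e.name() + " violates the naming guideline; suggest rename to " + fix.name());
        }
    }
}
```

Because the suggestion is itself a model element, "automatic correction" is just applying the returned element in place of the offending one, which mirrors the prototype's split between reporting and fixing.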
|
90 |
Cobertura entre pruebas a distintos niveles para refactorizaciones más seguras [Coverage across tests at different levels for safer refactorings]. Fontela, Moisés Carlos, 27 August 2013.
This thesis seeks a methodological practice for defining different levels of tests that act as a guarantee of safe refactorings, regardless of their scope. It is framed within the general topic of refactoring, with elements of Test-Driven Development (TDD), using the practices recommended by Behavior-Driven Development (BDD) and Acceptance Test-Driven Development (ATDD). The practice of refactoring rests heavily on the existence of automated unit tests, which act as a safety net guaranteeing that the behaviour of the application does not change after a refactoring. However, this simple statement does not anticipate that there are occasions when the tests stop working once the refactorings are performed, losing the synchronisation between code and tests and, with it, the safety-net quality of the latter. This is especially true for structural refactorings and macro-level redesigns. Therefore, and given that the use of tests as a safety net is one of the strongest assumptions of the practice of refactoring, the objective of this thesis is to develop a methodological practice for defining different levels of tests that secure different kinds of refactorings, validating it with a case study and supporting it with an automated tool developed as part of this work.
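To illustrate the difference between test levels, here is a hypothetical JUnit 5 sketch; the classes and the tax rule are invented, not the thesis's case study. The acceptance-level test exercises only externally visible behaviour through the public entry point, so it survives structural refactorings (extracting, inlining or renaming the internal TaxPolicy class) that would break a unit test written directly against that internal structure.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

/** Acceptance-level test: phrased against observable behaviour only. */
class CheckoutAcceptanceTest {

    @Test
    void totalIncludesTaxForADomesticOrder() {
        // Only the public entry point and its result are used, so restructuring
        // everything underneath Checkout does not invalidate this test.
        Checkout checkout = new Checkout();
        assertEquals(110.0, checkout.total(100.0, "DOMESTIC"), 0.001);
    }
}

/** Production code under refactoring; its internal shape may change freely. */
class Checkout {
    double total(double net, String region) {
        return net + new TaxPolicy().taxFor(net, region);
    }
}

/** A unit test targeting TaxPolicy directly would break if this class were
 *  inlined or renamed during a structural refactoring, while the acceptance
 *  test above would keep passing and keep its safety-net role. */
class TaxPolicy {
    double taxFor(double net, String region) {
        return region.equals("DOMESTIC") ? net * 0.10 : 0.0;
    }
}
```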
|