81

Structured representation of composite software changes

Chabra, Aarti 13 December 2011 (has links)
In a software development cycle, programs go through many iterations. Identifying and understanding program changes is a tedious but necessary task for programmers, especially when software is developed in a collaborative environment. Existing tools either fail to find structural differences or report differences only as atomic changes, such as updates of individual syntax tree nodes. Programmers frequently use program restructuring techniques, such as refactorings, that are composed of several individual atomic changes. Current version differencing tools omit these high-level changes and report just the set of individual atomic changes. When a large number of refactorings are performed, the number of reported atomic changes is very large, making the program differences very difficult to understand. This problem can be addressed by reporting the program differences as composite changes, thereby saving programmers the effort of navigating through the individual atomic changes. This thesis proposes a methodology for analyzing the atomic changes reported by existing version differencing tools in order to infer composite changes. First, we illustrate the different approaches that can be used for representing object-language program differences using a variation representation. Next, we present the process of composite change inference from the structured representation of atomic changes. This process describes patterns that specify the expected structure of an expression corresponding to each composite change to be inferred. The information in the patterns is then used to design the change inference algorithm. The composite changes inferred from a given expression are annotated in the expression, allowing the changes to be reported as desired. / Graduation date: 2012
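As a purely illustrative sketch of the idea (not the thesis's actual representation or algorithm; all type and field names below are hypothetical), a pattern of this kind might collapse a delete and an add of the same method in different containers into a single "move method" composite change:

```java
import java.util.*;

// Hypothetical sketch: grouping atomic tree edits into a composite
// "move method" change. The AtomicChange shape is an assumption made
// for this example, not the thesis's structured representation.
public class CompositeChangeInference {
    record AtomicChange(String kind, String element, String container) {}
    record CompositeChange(String kind, String element, String from, String to) {}

    // Pattern: DELETE_METHOD and ADD_METHOD of the same method in
    // different containers are reported as one MOVE_METHOD change.
    static List<CompositeChange> infer(List<AtomicChange> atomics) {
        Map<String, AtomicChange> deletes = new HashMap<>();
        List<CompositeChange> composites = new ArrayList<>();
        for (AtomicChange c : atomics)
            if (c.kind().equals("DELETE_METHOD")) deletes.put(c.element(), c);
        for (AtomicChange c : atomics) {
            AtomicChange d = deletes.get(c.element());
            if (c.kind().equals("ADD_METHOD") && d != null
                    && !d.container().equals(c.container()))
                composites.add(new CompositeChange(
                        "MOVE_METHOD", c.element(), d.container(), c.container()));
        }
        return composites;
    }

    public static void main(String[] args) {
        List<AtomicChange> atomics = List.of(
                new AtomicChange("DELETE_METHOD", "price()", "Order"),
                new AtomicChange("ADD_METHOD", "price()", "LineItem"));
        infer(atomics).forEach(System.out::println);
        // prints one MOVE_METHOD composite instead of two atomic changes
    }
}
```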
82

Continuous architecture in a large distributed agile organization : A case study at Ericsson

Standar, Magnus January 2017 (has links)
Agile practices have become the norm, also in large-scale organizations. Applying agile methods includes introducing continuous practices, among them continuous architecture. For web-scale applications, microservices are a rising star. This thesis investigates whether microservices could also be an answer for embedded systems, tackling the synchronization problem of many parallel teams.
83

[en] PRIORITIZATION OF CODE ANOMALIES BASED ON ARCHITECTURE SENSITIVENESS / [pt] PRIORIZAÇÃO DE ANOMALIAS DE CÓDIGO SENSÍVEL A ARQUITETURA

ROBERTA LOPES ARCOVERDE 30 January 2015 (has links)
[en] The progressive manifestation of code anomalies in a software system is a key symptom of its architecture quality decline. When those anomalies are not detected and removed early, the maintainability of software projects can be compromised irreversibly and, eventually, a complete redesign is inevitable. Despite the existence of many techniques and tools for code anomaly detection, identifying anomalies that are more likely to cause architecture problems remains a challenging task. In fact, studies performed in the context of this dissertation show that even when there is tool support for detecting code anomalies, developers seem to invest more time refactoring those that are not related to architectural problems. Moreover, we also found that developers frequently prioritize refactoring of code elements that do not contribute to a better adherence to the intended software architecture. In this context, this dissertation proposes a prioritization approach for identifying which anomalies in a system implementation are more harmful to the architecture. The proposed approach is composed of heuristic strategies that exploit several software project factors to identify and rank code anomalies by their architecture relevance. These factors range from change characteristics to the potential architecture roles of software modules. Furthermore, we implemented tool support for applying our prioritization approach in Java projects, and we evaluated the approach on 4 software projects from different application domains. Our evaluation revealed that software maintainers could benefit from the recommended rankings for identifying which code anomalies are harming the architecture the most, helping them invest their refactoring efforts into solving the architecturally relevant problems.
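As a toy illustration of heuristic, architecture-sensitive ranking (the factors and weights below are invented for this example and are not the dissertation's heuristics), one might score anomalies by churn and architectural visibility:

```java
import java.util.*;

// Illustrative sketch only: ranking detected code anomalies by a weighted
// architecture-relevance score. All weights, factors and names are
// assumptions made for the example.
public class AnomalyRanker {
    record Anomaly(String element, int changes, boolean inArchRole, boolean isPublicApi) {}

    static double score(Anomaly a) {
        // More churn and architecturally visible elements rank higher.
        return a.changes() * 1.0
             + (a.inArchRole() ? 5.0 : 0.0)
             + (a.isPublicApi() ? 3.0 : 0.0);
    }

    public static void main(String[] args) {
        List<Anomaly> found = new ArrayList<>(List.of(
                new Anomaly("OrderFacade.process", 12, true, true),
                new Anomaly("util.StringPad.pad", 15, false, false)));
        found.sort(Comparator.comparingDouble(AnomalyRanker::score).reversed());
        found.forEach(a -> System.out.printf("%.1f  %s%n", score(a), a.element()));
        // the architecturally exposed facade outranks the busier utility method
    }
}
```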
84

Replicação assíncrona em bancos de dados evolutivos / Asynchronous Replication in Evolutionary Databases

Domingues, Helves Humberto 02 June 2011 (has links)
Evolutionary database modeling is necessary due to the frequent changes in application requirements. The challenge is even greater when the database must support multiple applications simultaneously. The solution for evolution proposed by Scott Ambler is refactoring with a transition period, during which both the old and the new database schemas coexist and data is replicated via a synchronous process, which brings several difficulties, such as interference with the normal operation of applications. To minimize these difficulties, this thesis proposes an asynchronous process to keep these schemas updated and presents a prototype tool to assist database evolution. The proposal was validated by a laboratory experiment in which the solution presented here was compared with the one proposed by Ambler.
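To make the asynchronous idea concrete, here is a minimal sketch (an assumed design, not the thesis's prototype): writes against the old schema enqueue change events, and a background worker applies them to the new schema, so applications never block on replication. All names are hypothetical.

```java
import java.util.concurrent.*;

// Minimal sketch of asynchronous schema replication during a transition
// period. The event shape and worker design are assumptions for the example.
public class AsyncReplicator {
    record ChangeEvent(String table, String rowId, String payload) {}

    private final BlockingQueue<ChangeEvent> queue = new LinkedBlockingQueue<>();

    // Called from the old-schema write path (e.g., by a trigger or interceptor).
    void enqueue(ChangeEvent e) { queue.add(e); }

    // Background worker: drains the queue and updates the new schema,
    // decoupled from the application's write latency.
    void startWorker() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    ChangeEvent e = queue.take();
                    applyToNewSchema(e); // would be an idempotent upsert in practice
                }
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    private void applyToNewSchema(ChangeEvent e) {
        System.out.println("replicating " + e + " into new schema");
    }

    public static void main(String[] args) throws InterruptedException {
        AsyncReplicator r = new AsyncReplicator();
        r.startWorker();
        r.enqueue(new ChangeEvent("customer", "42", "{\"name\":\"Ana\"}"));
        Thread.sleep(200); // let the daemon worker drain the queue before exit
    }
}
```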
85

Aplicação de práticas ágeis na construção de data warehouse evolutivo / Application of agile practices in the traditional method of data warehouse engineering

Carvalho, Guilherme Tozo de 28 April 2009 (has links)
A data warehouse (DW) is a central, subject-oriented, integrated, non-volatile, and time-variant collection of data that supports management's decision-making process and structures data in an analytic architecture quite different from the relational one used in transactional databases. Building a DW is a complex engineering project because it involves many technologies and many people, from different teams, in a huge joint effort to build a central base of corporate information. The traditional engineering process for building a DW does not use agile concepts and, as its scope is quite big, it might take a long time until the customer can use its features. Agile methods of software engineering are commonly used as an alternative to traditional methods, and they have differentials that bring a lot of value to big projects, such as the continuous attempt to deliver functional releases in short periods of time, and the belief that every system needs to continuously adapt to changes in its environment. This work applies agile practices to the traditional DW engineering method, so that development can be done in short iterative cycles, making possible a fast and evolutionary DW project with frequent delivery of new functionalities. The continuous evolution of this complex analytical environment is supported by evolutionary database concepts and also by the foundations of agile methods.
86

Harnessing synthetic biology for the bioprospecting and engineering of aromatic polyketide synthases

Cummings, Matthew January 2018 (has links)
Antimicrobial-resistant microorganisms are predicted to pose an existential threat to humanity within the next three decades. Characterisation of novel-acting antimicrobial small molecules from microorganisms has historically counteracted this evolutionary arms race; however, the bountiful source of pharmaceutically relevant bioactive specialised metabolites discovered in the golden era of drug discovery has long since dried up. The clinicians' arsenal of useful antimicrobials is diminishing, and a fresh perspective on specialised metabolite discovery is necessary. This call to action is being answered, in part, through advances in genome sequencing, bioinformatics predictions and the development of next-generation synthetic biology tools aiming to translate the biological sciences into an engineering discipline. To expedite our route to new pharmaceutically relevant specialised metabolites using the synthetic biology toolbox, several bottlenecks need to be addressed, and they are tackled herein. Biosynthetic gene clusters (BGCs) represent blueprints to pharmaceuticals; however, to date, the vast wealth of knowledge about BGCs is inconsistently reported and sporadically disseminated throughout the literature and databases. To bring the reporting of BGCs in line with engineering principles, we designed and built a community-supported standard, the Minimum Information about a Biosynthetic Gene cluster (MIBiG), for reporting BGCs in a consistent manner, and centralised this information in an easy-to-operate, open-access repository for rapid retrieval of information, an essential resource for the bioengineer. Prioritisation represents the next bottleneck in specialised metabolite discovery. Bioinformatics tools have predicted a cache of thousands of BGCs within publicly available genome sequences; however, high experimental attrition rates drastically slow characterisation of the corresponding specialised metabolites. We designed and built an Output Ordering and Prioritisation System (OOPS) to rank thousands of BGCs in parallel against molecular-biology-relevant parameters, pairing BGCs with appropriate heterologous expression hosts and facilitating a judicious choice of BGCs for characterisation to reduce experimental attrition. To fully realise the potential of synthetic biology in specialised metabolite discovery, a genetically amenable heterologous host capable of completing rapid design-build-test-learn cycles is necessary. This cannot be achieved for the pharmaceutically important type II polyketides, as their biosynthetic machinery is largely restricted to Actinobacteria. Using MIBiG datasets, antiSMASH and BLASTP, we identify 5 sets of soluble type II polyketide synthases (PKSs) in Escherichia coli for the first time. We construct and test the robustness of a plug-and-play scaffold for bioproduction of aromatic polyketides using one PKS in E. coli, yielding anthraquinone, dianthrone and benzoisochromanequinone intermediates. Through bioprospecting for biological 'parts' to expand the chemical diversity of our plug-and-play scaffold, we describe a new lineage of type II PKSs predominantly from non-Actinobacteria. The standards, software, plug-and-play scaffold and biosynthetic 'parts' described herein will act as an engine for rapid and automated bioproduction of existing and novel pharmaceutically relevant aromatic polyketides in E. coli using the synthetic biology toolbox.
87

Meta-Model Guided Error Correction for UML Models

Bäckström, Fredrik, Ivarsson, Anders January 2007 (has links)
Modeling is a complex process which is quite hard to do in a structured and controlled way. Many companies provide a set of guidelines for model structure, naming conventions and other modeling rules. Using meta-models to describe these guidelines makes it possible to check whether a UML model follows the guidelines or not. Providing this error checking of UML models is only one step on the way to making modeling software an even more valuable and powerful tool.

Moreover, by providing correction suggestions and automatic correction of these errors, we try to give the modeler as much help as possible in creating correct UML models. Since the area of model correction based on meta-models has not been researched earlier, we have taken an explorative approach.

The aim of the project is to create an extension of the program MetaModelAgent, by Objektfabriken, which is a meta-modeling plug-in for IBM Rational Software Architect. The thesis shows that error correction of UML models based on meta-models is a possible way to provide automatic checking of modeling guidelines. The developed prototype is able to give correction suggestions and automatic correction for many types of errors that can occur in a model.

The results imply that meta-model guided error correction techniques should be further researched and developed to enhance the functionality of existing modeling software.
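As a minimal illustration of guideline checking in this spirit (the rule and every name below are assumptions made for the example, not MetaModelAgent's API), a checker might validate a naming convention and propose a correction:

```java
import java.util.*;
import java.util.regex.*;

// Toy sketch: checking a modeling guideline (class names must be
// UpperCamelCase) and suggesting a correction. Purely illustrative.
public class NamingRuleChecker {
    record UmlClass(String name) {}

    private static final Pattern UPPER_CAMEL = Pattern.compile("[A-Z][a-zA-Z0-9]*");

    // Returns a correction suggestion for a violating class, if any.
    static Optional<String> check(UmlClass c) {
        if (UPPER_CAMEL.matcher(c.name()).matches()) return Optional.empty();
        String fixed = Character.toUpperCase(c.name().charAt(0)) + c.name().substring(1);
        return Optional.of("rename '" + c.name() + "' to '" + fixed + "'");
    }

    public static void main(String[] args) {
        for (UmlClass c : List.of(new UmlClass("orderItem"), new UmlClass("Invoice")))
            check(c).ifPresent(System.out::println);
        // prints: rename 'orderItem' to 'OrderItem'
    }
}
```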
88

Cobertura entre pruebas a distintos niveles para refactorizaciones más seguras

Fontela, Moisés Carlos 27 August 2013 (has links)
This thesis seeks to establish a methodological practice for defining different levels of tests that act as a guarantee of safe refactorings, regardless of their scope. It is framed within the general topic of refactoring, with elements of Test-Driven Development (TDD), using the practices recommended by Behavior-Driven Development (BDD) and Acceptance Test-Driven Development (ATDD). The practice of refactoring rests heavily on the existence of automated unit tests, which act as a safety net guaranteeing that the behavior of the application does not change after a refactoring. However, this simple statement does not anticipate that there are occasions when the tests stop working once refactorings are performed, whereby the synchronization between code and tests is lost, along with the tests' role as a safety net. This is especially true for structural refactorings and macro-level redesigns. Therefore, and given that the use of tests as a safety net is one of the strongest assumptions of the practice of refactoring, the objective of this thesis is to develop a methodological practice for defining different levels of tests that secure different types of refactorings, validating it with a case study and supporting it with an automated tool developed as part of this work.
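To illustrate the underlying idea with an invented example (not the thesis's case study): a test written at the behavior level against a stable facade keeps passing when internal structure is refactored, whereas a unit test bound to internal classes would break with them:

```java
// Minimal sketch of a behaviour-level safety net. The facade and
// implementation here are assumptions made for the example.
public class AcceptanceLevelTest {
    // Stable facade: the acceptance-level test depends only on this contract.
    interface Checkout { int totalCents(int unitCents, int quantity); }

    // Internal implementation: free to be split, renamed or moved by a
    // structural refactoring without breaking the test below.
    static class SimpleCheckout implements Checkout {
        public int totalCents(int unitCents, int quantity) {
            return unitCents * quantity;
        }
    }

    public static void main(String[] args) {
        Checkout checkout = new SimpleCheckout();
        int total = checkout.totalCents(250, 4);
        if (total != 1000)
            throw new AssertionError("expected 1000, got " + total);
        System.out.println("behaviour preserved: total = " + total);
    }
}
```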
89

IVCon: A GUI-based Tool for Visualizing and Modularizing Crosscutting Concerns

Saigal, Nalin 10 April 2009 (has links)
Code modularization provides benefits throughout the software life cycle; however, the presence of crosscutting concerns (CCCs) in software hinders its complete modularization. This thesis describes IVCon, a GUI-based tool that provides a novel approach to modularization of CCCs. IVCon enables users to create, examine, and modify their code in two different views, the woven view and the unwoven view. The woven view displays program code in colors that indicate which CCCs various code segments implement. The unwoven view displays code in two panels, one showing the core of the program and the other showing all the code implementing each concern in an isolated module. IVCon aims to provide an easy-to-use interface for conveniently creating, examining, and modifying code in, and translating between, the woven and unwoven views.
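A toy sketch of the unwoven idea (types and names are illustrative assumptions, not IVCon's internals): tag each code segment with the concern it implements, then group segments by concern for the unwoven view, while the woven view keeps source order:

```java
import java.util.*;

// Illustrative sketch: segments tagged with concerns, grouped into an
// "unwoven" per-concern view. All names are assumptions for the example.
public class ConcernViews {
    record Segment(int line, String concern, String code) {}

    static void unwovenView(List<Segment> segments) {
        Map<String, List<Segment>> byConcern = new TreeMap<>();
        for (Segment s : segments)
            byConcern.computeIfAbsent(s.concern(), k -> new ArrayList<>()).add(s);
        byConcern.forEach((concern, segs) -> {
            System.out.println("== " + concern + " ==");
            segs.forEach(s -> System.out.println("  L" + s.line() + ": " + s.code()));
        });
    }

    public static void main(String[] args) {
        unwovenView(List.of(
                new Segment(10, "logging", "log.info(\"start\");"),
                new Segment(11, "core", "process(order);"),
                new Segment(12, "logging", "log.info(\"done\");")));
        // the crosscutting logging code appears as one isolated module
    }
}
```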
90

Specification, implementation and verification of refactorings

Schaefer, Max January 2010 (has links)
Refactoring is the process of reorganising or restructuring code by means of behaviour-preserving program transformations, themselves called refactorings. Most modern development environments come with built-in support for refactoring in the form of automated refactorings that the user can perform at the push of a button. Implementing refactorings is notoriously complex, however, and even state-of-the-art implementations have very low standards of correctness and can introduce subtle changes of behaviour into refactored programs. In this thesis, we develop concepts and techniques that make it possible to give concise, modular specifications of refactorings. These specifications are precise enough to cover all details of the object language, and thus give rise to full-featured, high-quality refactoring implementations. Their modularity, on the other hand, makes them amenable to formal proof, and hence opens the door to the rigorous verification of refactorings. We discuss a disciplined approach to maintaining name bindings and avoiding name capture by treating the binding from a name to the declaration it refers to as a dependency that the refactoring has to preserve. This approach readily generalises to other types of dependencies for capturing control flow, data flow and synchronisation behaviour. To implement complex refactorings, it is often helpful for the refactoring to internally work on a richer language with language extensions that make the transformation easier to express. We show how this allows the decomposition of refactorings into small microrefactorings that can be specified, implemented and verified in isolation. We evaluate our approach by giving specifications and implementations of many commonly used refactorings that are concise, yet match the implementations in the popular Java development environment Eclipse in terms of features, and outperform them in terms of correctness. We give detailed informal correctness proofs for some of our specifications, which are greatly aided by their modular structure. Finally, we discuss a rigorous formalisation of the central name binding framework used by most of our specifications in the theorem prover Coq, and show how its correctness can be established mechanically.
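The binding-preservation idea can be illustrated with a deliberately tiny model (an assumption for this example, not the thesis's framework): record which declaration each name binds to before a rename, then check that the same bindings, modulo the renaming, hold afterwards:

```java
import java.util.*;

// Toy sketch of name-binding preservation: a rename is safe only if every
// use still binds to the same (suitably renamed) declaration afterwards.
public class BindingCheck {
    record Binding(String use, String declaration) {}

    static boolean preservesBindings(List<Binding> before, List<Binding> after,
                                     Map<String, String> renaming) {
        if (before.size() != after.size()) return false;
        for (int i = 0; i < before.size(); i++) {
            String expectedDecl = renaming.getOrDefault(
                    before.get(i).declaration(), before.get(i).declaration());
            if (!after.get(i).declaration().equals(expectedDecl)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Renaming field 'x' to 'y' must not make uses of 'x' bind to a
        // different declaration (name capture by a local variable).
        List<Binding> before = List.of(new Binding("x@line7", "Field:A.x"));
        List<Binding> captured = List.of(new Binding("y@line7", "Local:y"));
        System.out.println(preservesBindings(
                before, captured, Map.of("Field:A.x", "Field:A.y")));
        // prints false: the use was captured, so the refactoring must be rejected
    }
}
```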
