121

The identification of semantics for the file/database problem domain and their use in a template-based software environment

Shubra, Charles John January 1984
No description available.
122

Applying software maintenance metrics in the object-oriented software development life cycle

Li, Wei 20 October 2005
Software complexity metrics have been studied in the procedural paradigm as a quantitative means of assessing the software development process as well as the quality of software products. Several studies have validated that various metrics are useful indicators of maintenance effort in the procedural paradigm. However, software complexity metrics have rarely been studied in the object-oriented paradigm: very few complexity metrics have been proposed to measure object-oriented systems, and the proposed ones have not been validated. This research concentrates on several object-oriented software complexity metrics and validates these metrics against maintenance effort in two commercial systems. The results of an empirical study of the maintenance activities in the two commercial systems are also described. A metric instrumentation in an object-oriented software development framework is presented. / Ph.D.
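The kind of metric instrumentation described here can be illustrated with a small sketch. The following hypothetical Python fragment computes two simple object-oriented metrics per class, NOM (number of local methods) and a rough coupling proxy; the metric definitions are illustrative assumptions, not necessarily the ones validated in the thesis.

import ast

def class_metrics(source: str) -> dict:
    # For each class: NOM (number of local methods) and a rough coupling
    # proxy (count of distinct names referenced inside method bodies).
    metrics = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef):
            methods = [n for n in node.body if isinstance(n, ast.FunctionDef)]
            referenced = {leaf.id for m in methods
                          for leaf in ast.walk(m) if isinstance(leaf, ast.Name)}
            metrics[node.name] = {"NOM": len(methods),
                                  "coupling_proxy": len(referenced)}
    return metrics

sample = "class Stack:\n    def push(self, x): self.items.append(x)\n    def pop(self): return self.items.pop()\n"
print(class_metrics(sample))  # {'Stack': {'NOM': 2, 'coupling_proxy': 2}}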
123

Automated Adaptive Software Maintenance: A Methodology and Its Applications

Tansey, Wesley 11 August 2008
In modern software development, maintenance accounts for the majority of the total cost and effort in a software project. Especially burdensome are those tasks which require applying a new technology in order to adapt an application to changed requirements or a different environment. This research explores methodologies, techniques, and approaches for automating such adaptive maintenance tasks. By combining high-level specifications and generative techniques, a new methodology shapes the design of approaches to automating adaptive maintenance tasks in the application domains of high performance computing (HPC) and enterprise software. Despite the vast differences between these domains and their respective requirements, each approach is shown to be effective at alleviating their adaptive maintenance burden. This thesis proves that it is possible to effectively automate tedious and error-prone adaptive maintenance tasks in a diverse set of domains by exploiting high-level specifications to synthesize specialized low-level code. The specific contributions of this thesis are as follows: (1) a common methodology for designing automated approaches to adaptive maintenance, (2) a novel approach to automating the generation of efficient marshaling logic for HPC applications from a high-level visual model, and (3) a novel approach to automatically upgrading legacy enterprise applications to use annotation-based frameworks. The technical contributions of this thesis have been realized in two software tools for automated adaptive maintenance: MPI Serializer, a marshaling logic generator for MPI applications, and Rosemari, an inference and transformation engine for upgrading enterprise applications. This thesis is based on research papers accepted to IPDPS '08 and OOPSLA '08. / Master of Science
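The marshaling-generation idea can be sketched in a few lines. The following hypothetical Python fragment synthesizes pack/unpack functions for a flat record from a field specification; the spec format and field names are assumptions for illustration, whereas MPI Serializer itself works from a visual model and targets MPI applications.

import struct

# (field name, struct format code); illustrative, not MPI Serializer's model.
MESSAGE_SPEC = [("rank", "i"), ("timestamp", "d"), ("payload_len", "i")]

def generate_marshaller(spec):
    # Derive low-level packing logic from the high-level description.
    fmt = "!" + "".join(code for _, code in spec)  # network byte order
    names = [name for name, _ in spec]
    def pack(record: dict) -> bytes:
        return struct.pack(fmt, *(record[name] for name in names))
    def unpack(data: bytes) -> dict:
        return dict(zip(names, struct.unpack(fmt, data)))
    return pack, unpack

pack, unpack = generate_marshaller(MESSAGE_SPEC)
wire = pack({"rank": 3, "timestamp": 1234.5, "payload_len": 42})
assert unpack(wire) == {"rank": 3, "timestamp": 1234.5, "payload_len": 42}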
124

Efficient Automatic Change Detection in Software Maintenance and Evolutionary Processes

Hönel, Sebastian January 2020
Software maintenance is such an integral part of a software system's evolutionary process that it consumes much of the total resources available; some estimates put the cost of maintenance at up to 100 times the cost of initial development. Unmaintained software builds up technical debt, and debt that is not paid off in time will eventually outweigh the value of the software if no countermeasures are undertaken. A software system must adapt to changes in its environment and to new and changed requirements, and it must receive corrections for emerging faults and vulnerabilities. Constant maintenance can prepare a software system to accommodate future changes. While there may be plenty of rationale for future changes, the reasons behind historical changes may no longer be accessible. Understanding change in software evolution provides valuable insights into, e.g., the quality of a project or aspects of the underlying development process. These insights are worth exploiting for, e.g., fault prediction, managing the composition of the development team, or effort-estimation models. The size of software is a metric often used in such models, yet it is not well defined. In this thesis, we seek to establish a robust, versatile, and computationally cheap metric that quantifies the size of changes made during maintenance. We operationalize this new metric and exploit it for automated and efficient commit classification. Our results show that the density of a commit, that is, the ratio between its net and gross size, is a metric that can replace other, more expensive metrics in existing classification models. Models using this metric represent the current state of the art in automatic commit classification, and the density provides a more fine-grained and detailed insight into the types of maintenance activities in a software project. Additional properties of commits, such as their relations or intermediate sojourn times, have not previously been exploited for improved classification of changes. We reason about their potential, and suggest and implement dependent mixture and Bayesian models that exploit joint conditional densities; each of these models has its own trade-offs with regard to computational cost, complexity, and prediction accuracy. Such models can outperform well-established classifiers, such as Gradient Boosting Machines. All of our empirical evaluations comprise large datasets, software, and experiments, all of which we have published open access alongside the results. We have reused, extended, and created datasets, and released software packages for change detection and for the Bayesian models used in all of the studies conducted.
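A minimal sketch of the density idea, assuming a simple noise heuristic: density is the ratio of net size (changed lines that survive after discarding noise such as blank or comment-only lines) to gross size (all changed lines). The noise heuristic below is an assumption for illustration, not the exact cleaning the thesis applies.

def commit_density(changed_lines: list[str]) -> float:
    # Gross size: every changed line; net size: lines that are not noise.
    gross = len(changed_lines)
    if gross == 0:
        return 0.0
    def is_noise(line: str) -> bool:
        stripped = line.strip()
        return stripped == "" or stripped.startswith(("#", "//"))
    net = sum(1 for line in changed_lines if not is_noise(line))
    return net / gross

# A commit touching mostly blank or comment lines gets a low density,
# hinting at, e.g., a documentation or cosmetic change.
print(commit_density(["x = 1", "", "# TODO", "return x"]))  # 0.5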
125

A technique for identifying high maintenance legacy software based on complexity and usage

Harrison, Matthew S. 01 April 2000
No description available.
126

Development of a tool to test computer protocols

Myburgh, W. D 04 1900
Thesis (MSc)--Stellenbosch University, 2003. / Software testing tools simplify and automate the menial work associated with testing. Moreover, for complex concurrent software such as computer protocols, testing tools allow testing on an abstract level that is independent of specific implementations. Standard conformance-testing methodologies and a number of testing tools are commercially available, but detailed descriptions of the implementation of such testing tools are not widely available. This thesis investigates the development of a tool for automated protocol testing in the ETH Oberon development environment. The need for a protocol testing tool that automates the execution of specified test cases was identified in collaboration with a local company that develops protocols in the programming language Oberon, a strongly typed, secure language that supports modularisation and promotes a readable programming style. The tool translates specified test cases into executable test code supported by a runtime environment. A test case consists of a sequence of input actions to which the software under test is expected to respond by executing observable output actions.
A number of issues are considered. The first concerns the representation of test-case specifications; for this, a notation was used that is essentially a subset of the test specification language TTCN-3, as standardised by the European Telecommunications Standards Institute. The second issue is the format of executable test cases and a suitable runtime environment: a translator was developed that generates executable Oberon code from specified test cases, and the compiled test code is supported by a runtime library that is part of the tool. Because a protocol environment is inherently concurrent, the concurrent processes in the runtime environment are identified; since ETH Oberon supports multitasking only in a limited sense, test cases are executed as cooperating background tasks. The third issue concerns the interaction between an executing test case and a system under test. It is addressed by an implementation-dependent interface that maps specified test interactions onto the real interactions required by the test context in which an implementation under test operates; a supporting protocol for remotely accessing the service boundary of an implementation under test, together with the underlying protocol service providers, forms part of the test context. The ETH Oberon system, owing to its small size and simple task mechanism, provides a platform that simplifies the implementation of protocol test systems. Operating-system functionality considered essential is pointed out in general terms, since other systems could be used to support such testing tools. In conclusion, directions for future work are proposed.
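A toy sketch of the test-case execution model described above: a test case is a sequence of input actions paired with the observable output actions expected in response. The action names and the echo-style protocol stub are invented for illustration; the thesis instead generates Oberon code from a TTCN-3-like notation.

from typing import Callable

TestCase = list[tuple[str, str]]  # (input action, expected output action)

def run_test_case(case: TestCase, system: Callable[[str], str]) -> str:
    # Drive the system under test with each stimulus and compare the
    # observed output action against the expected one.
    for step, (stimulus, expected) in enumerate(case, start=1):
        observed = system(stimulus)
        if observed != expected:
            return f"fail at step {step}: sent {stimulus!r}, got {observed!r}"
    return "pass"

# A trivial stand-in "protocol" that acknowledges every message.
echo_protocol = lambda msg: "ack:" + msg

print(run_test_case([("connect", "ack:connect"), ("data", "ack:data")],
                    echo_protocol))  # pass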
127

Understanding and automating application-level caching

Mertz, Jhonny Marcos Acordi January 2017
Latency and cost of Internet-based services are encouraging the use of application-level caching to continue satisfying users' demands and to improve the scalability and availability of origin servers. Application-level caching, in which developers manually control cached content, has been adopted when traditional forms of caching are insufficient to meet such requirements. Despite its popularity, this level of caching is typically addressed in an ad hoc way, given that it depends on specific details of the application. Furthermore, it forces application developers to reason about a crosscutting concern that is unrelated to the application's business logic. As a result, application-level caching is a time-consuming and error-prone task, and a common source of bugs. This dissertation advances work on application-level caching by providing an understanding of its state of practice and by automating the decision regarding cacheable content, thus giving developers substantial support to design, implement, and maintain application-level caching solutions.
More specifically, we provide three key contributions: structured knowledge derived from a qualitative study, a survey of the state of the art in static and adaptive caching approaches, and a technique and framework that automate the challenging task of identifying caching opportunities. The qualitative study, which involved the investigation of ten web applications (open source and commercial) with different characteristics, allowed us to determine the state of practice of application-level caching, along with practical guidance to developers in the form of patterns and guidelines. Based on these derived patterns and guidelines, we also propose an approach to automate the identification of cacheable methods, which is usually done manually and is not supported by existing approaches to implementing application-level caching. We implemented a caching framework that can be seamlessly integrated into web applications to automatically identify caching opportunities at runtime, by monitoring system execution and adaptively managing caching decisions. We evaluated our approach empirically with three open-source web applications, and the results indicate that it identifies adequate caching opportunities, improving application throughput by up to 12.16%. Furthermore, our approach can prevent code tangling and raise the abstraction level of caching.
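A minimal sketch of adaptively managed application-level caching, assuming a simple hit-ratio heuristic: the decorator below keeps caching a method's results only while the observed hit ratio justifies the memory cost. The threshold, warm-up window, and wholesale eviction are illustrative assumptions, not the heuristics of the dissertation's framework.

import functools
import time

def adaptive_cache(min_hit_ratio: float = 0.2, warmup: int = 50):
    def decorator(fn):
        cache, hits, calls = {}, 0, 0
        @functools.wraps(fn)
        def wrapper(*args):
            nonlocal hits, calls
            calls += 1
            if args in cache:
                hits += 1
                return cache[args]
            result = fn(*args)
            # After a warm-up period, stop caching if hits are too rare.
            if calls < warmup or hits / calls >= min_hit_ratio:
                cache[args] = result
            else:
                cache.clear()  # reclaim memory from an unprofitable cache
            return result
        return wrapper
    return decorator

@adaptive_cache()
def expensive_lookup(key: str) -> str:
    time.sleep(0.01)  # stand-in for a slow database or service call
    return key.upper()

print(expensive_lookup("user:42"))  # miss: computed
print(expensive_lookup("user:42"))  # hit: served from the cache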
128

Semantics-based change-merging of abstract data types

Chadha, Vineet. January 2002
Thesis (M.S.)--Mississippi State University. Department of Computer Science. / Title from title screen. Includes bibliographical references.
129

The traceable lifecycle model

Nadon, Robert Gerard 01 August 2011
Software systems today face many challenges that were not even imagined decades ago: the need to evolve at a very high rate; lifecycle-phase drift or erosion; the inability to prevent the butterfly effect, where the slightest change causes unimaginable side effects throughout the system; a lack of discipline in defining metrics and using measurement to drive operations; and the absence of a "silver bullet", a single solution to the problems of every domain, to name just a few. These are not the only problems; in fact, it would be impossible to list them all, since software itself is infinitely flexible, bounded only by the human imagination. They are, however, a portion of the primary challenges today's software engineer faces. There have been attempts throughout the history of software to resolve each of these challenges, whether individually, simultaneously, or in various combinations. One such attempt was to define and encapsulate the various phases of software in what has come to be called a software lifecycle or lifecycle model. Another line of recent research has led to the hypothesis that many of these challenges can be resolved, or at least mitigated, through proper traceability methods. Virtually none of today's software components are derived completely from scratch; rather, code reuse and software evolution make up a large portion of the software engineer's duties. As Vance Hilderman at HighRely puts it, "Research has shown that proper traceability is vital. For high quality and safety-critical engineering development efforts however, traceability is a cornerstone not just for achieving success, but to proving it as well." So if software is not derived from scratch, the traceability to know about its origins is invaluable. Given today's struggles, what is in store for the future software engineer? This paper attempts to answer that question, or at least project a possibility, by proposing a new mindset and a new lifecycle model, a structural change that may assist in tackling some of the issues referenced above.
130

Visualization techniques for the analysis of software behavior and related structures

Trümper, Jonas January 2014
Software maintenance encompasses any changes made to a software system after its initial deployment and is thereby one of the key phases in the typical software-engineering lifecycle. In software maintenance, we primarily need to understand structural and behavioral aspects, which are difficult to obtain, e.g., by code reading. Software analysis is therefore a vital tool for maintaining these systems: it provides the, preferably automated, means to extract and evaluate information from their artifacts, such as software structure, runtime behavior, and related processes. However, such analysis typically results in massive raw data, so that even experienced engineers face difficulties directly examining, assessing, and understanding these data. Among other things, they require tools with which to explore the data if no clear question can be formulated beforehand. For this, software analysis and visualization provide their users with powerful interactive means, enabling the automation of tasks and, particularly, the acquisition of valuable and actionable insights into the raw data. One means of exploring runtime behavior, for instance, is trace visualization. This thesis aims at extending and improving the tool set for visual software analysis by concentrating on several open challenges in the fields of dynamic and static analysis of software systems. It develops a series of concepts and tools for the exploratory visualization of the respective data to support users in finding and retrieving information on the system artifacts concerned. This is a difficult task, due to the lack of appropriate visualization metaphors; in particular, the visualization of complex runtime behavior poses various questions and challenges of both a technical and a conceptual nature. This work focuses on a set of techniques for visually representing control-flow-related aspects of software traces from shared-memory software systems: a trace-visualization concept based on icicle plots aids in understanding both single-threaded and multi-threaded runtime behavior on the function level. The concept's extensibility further allows the visualization and analysis of specific aspects of multi-threading, such as synchronization, the correlation of such traces with data from static software analysis, and comparisons between traces. Moreover, complementary techniques for simultaneously analyzing system structures and the evolution of related attributes are proposed. These aim at facilitating long-term planning of software architecture and supporting management decisions in software projects through extensions to the circular-bundle-view technique: an extension to 3-dimensional space allows the simultaneous use of additional visual variables, and interaction techniques allow structures to be modified visually. The concepts and techniques presented here are generic and, as such, can be applied beyond software analysis to the visualization of similarly structured data. Their practicability is demonstrated by several qualitative studies using subject data from industry-scale software systems. The studies provide initial evidence that applying the techniques yields useful insights into the subject data and its interrelationships in several scenarios.
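The icicle-plot concept for a single-threaded trace can be sketched with a handful of rectangles: each call spans its duration horizontally and sits in a row given by its call-stack depth. The trace tuples below are made-up sample data, and matplotlib is only one plausible way to render such a view.

import matplotlib.pyplot as plt

# (function, start time, end time, call-stack depth); invented sample trace.
trace = [
    ("main",    0.0, 10.0, 0),
    ("parse",   0.5,  4.0, 1),
    ("lex",     0.7,  2.0, 2),
    ("analyze", 4.2,  9.5, 1),
    ("solve",   5.0,  8.0, 2),
]

fig, ax = plt.subplots(figsize=(8, 3))
for name, start, end, depth in trace:
    # One bar per call: width is duration, row is stack depth (deeper calls
    # are drawn below their callers, giving the icicle shape).
    ax.barh(y=-depth, width=end - start, left=start, height=0.9,
            edgecolor="black")
    ax.text((start + end) / 2, -depth, name, ha="center", va="center",
            fontsize=8)
ax.set_xlabel("time")
ax.set_yticks([])
ax.set_title("Icicle-style view of a call trace (sample data)")
plt.show()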
