81

Effectiveness of the retrieval software of CD-ROM databases

18 March 2015 (has links)
M.A. / CD-ROM products provide access to information by means of different modes of interaction, often on the same database. Although command language is still widely recognized as the interaction mode that retrieves the most relevant references, it is regarded as difficult to use because of its complex structure. More user-friendly modes, for example menu and direct manipulation, are viewed as more accessible to the up-and-coming end-user. The purpose of this study was to determine, by means of an empirical study, whether the retrieval effectiveness of two modes of interaction on the same database differed significantly. A literature survey pointed out the unique characteristics of existing modes. It was also established that the traditional measures of retrieval effectiveness, recall and precision, could not be applied in this research. A method was devised in which the results of the two modes were compared. The empirical study was done on the command and form fill-in modes of Wilson Business Abstracts. The total results retrieved through each mode were compared, as well as the ease of entering the search by means of the search facilities appropriate to each mode. The results of the research revealed that the presence of unique search facilities in a mode results in better retrieval effectiveness. Searches in both modes also require specific forms of input for optimum-quality retrieval, which has implications for intensive training in search methods.
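As a rough illustration of the comparison method summarized above (the dissertation's actual procedure is only sketched here, and the record identifiers are invented), the total results of two modes can be contrasted as sets of retrieved records:

    #include <algorithm>
    #include <iostream>
    #include <iterator>
    #include <set>
    #include <string>

    // Hypothetical sketch: compare the records retrieved by two interaction
    // modes on the same database, since recall/precision could not be used.
    int main() {
        std::set<std::string> commandMode = {"r1", "r2", "r3", "r5"};
        std::set<std::string> formFillIn  = {"r1", "r2", "r4"};

        std::set<std::string> overlap;
        std::set_intersection(commandMode.begin(), commandMode.end(),
                              formFillIn.begin(), formFillIn.end(),
                              std::inserter(overlap, overlap.begin()));

        std::cout << "command mode: " << commandMode.size() << " records\n"
                  << "form fill-in: " << formFillIn.size()  << " records\n"
                  << "retrieved by both: " << overlap.size() << '\n';
    }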
82

Renewal of a linear electrical network simulator into Ada

Buckle, Warren Dean January 1993 (has links)
A dissertation submitted to the Faculty of Engineering, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Engineering. Johannesburg, 1993 / Renewal is the extraction of the intellectual content (algorithms, data structures) from an existing program, and the building of a new, more maintainable program using more modern programming methods and languages. A survey of the influence of software structure on maintenance highlighted the different hierarchies produced by functional and object-oriented design methods. Elecsim, a linear circuit simulator written in Pascal, was chosen as the existing program to be renewed. The new version follows the approach of decoupling the user interface and introducing an explicit scheduler. The object-oriented design technique is used extensively. Other issues addressed include online help and documentation for the program. Generally applicable conclusions are drawn from the specific lessons learnt in the Elecsim/Elector case study. / MT2017
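The renewal itself was carried out in Ada; the C++ sketch below, with invented names, only illustrates the architectural idea of decoupling the user interface from the simulation core through an explicit scheduler:

    #include <functional>
    #include <iostream>
    #include <queue>

    // Hypothetical sketch of an explicit scheduler: the user interface only
    // posts work items; the simulation core never calls back into the UI.
    class Scheduler {
    public:
        void post(std::function<void()> task) { tasks_.push(std::move(task)); }
        void run() {
            while (!tasks_.empty()) {
                tasks_.front()();
                tasks_.pop();
            }
        }
    private:
        std::queue<std::function<void()>> tasks_;
    };

    int main() {
        Scheduler scheduler;
        scheduler.post([] { std::cout << "assemble circuit equations\n"; });
        scheduler.post([] { std::cout << "solve linear system\n"; });
        scheduler.post([] { std::cout << "report node voltages\n"; });
        scheduler.run();
    }

A queue-based scheduler of this shape keeps the solver testable with no user interface attached at all.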
83

Software Engineering Using design RATionale

Burge, Janet E 02 May 2005 (has links)
For a number of years, members of the Artificial Intelligence (AI) in Design community have studied Design Rationale (DR), the reasons behind decisions made while designing. DR is invaluable as an aid for revising, maintaining, documenting, evaluating, and learning the design. The presence of DR would be especially valuable for software maintenance. The rationale would provide insight into why the system is the way it is by giving the reasons behind the design decisions, could help indicate where changes might be needed during maintenance if design goals change, and could help the maintainer avoid repeating earlier mistakes by explicitly documenting alternatives that were tried earlier but did not work. Unfortunately, while everyone agrees that design rationale is useful, it is still not used enough in practice. Possible reasons for this are that the uses proposed for rationale are not compelling enough to justify the effort involved in its capture, and that there are few systems available to support rationale use and capture. We have addressed this problem by developing and evaluating a system called SEURAT (Software Engineering Using RATionale), which integrates with a software development environment and goes beyond mere presentation of rationale by inferencing over it to check for completeness and consistency in the reasoning used while a software system is being developed and maintained. We feel that the SEURAT system will be invaluable during the development and maintenance of software systems. During development, SEURAT will help developers ensure that the systems they build are complete and consistent. During maintenance, SEURAT will provide insight into the reasons behind the choices made by the developers during design and implementation. The benefits of DR are clear, but only with appropriate tool support, such as that provided by SEURAT, can DR live up to its full potential as an aid for revising, maintaining, and documenting the software design and implementation.
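SEURAT's actual rationale representation and inference are richer than anything shown here; as a minimal sketch under invented names, a completeness check over one decision and its alternatives might look like this:

    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical sketch: a design decision with alternatives, and a minimal
    // completeness check (some alternative must be selected and justified).
    struct Alternative {
        std::string description;
        bool selected;
        std::vector<std::string> argumentsFor;
    };

    struct Decision {
        std::string question;
        std::vector<Alternative> alternatives;
    };

    bool isComplete(const Decision& d) {
        for (const Alternative& a : d.alternatives)
            if (a.selected && !a.argumentsFor.empty())
                return true;  // a justified selection exists
        return false;
    }

    int main() {
        Decision d{"How should sessions be stored?",
                   {{"in-memory map", true, {"simple", "fast"}},
                    {"database table", false, {}}}};
        std::cout << (isComplete(d) ? "complete" : "incomplete") << '\n';
    }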
84

Swarm debugging: the collective debugging intelligence of the crowd

Petrillo, Fábio dos Santos January 2016 (has links)
Ants are fascinating creatures that, beyond inspiring advances in biology, have also inspired research on information theory. In particular, their study resulted in the creation of Information Foraging Theory, which describes how agents forage for information in their environment. This theory also explains recent and fruitful phenomena such as crowdsourcing. Many activities in software engineering have applied crowdsourcing, including development, translation, and testing, but one activity seems to resist: debugging. Yet developers know that debugging can require dedication, effort, and long hours of work, sometimes to change a single line of code. We introduce the concept of Swarm Debugging to bring crowdsourcing to the activity of debugging. Through crowdsourcing, we aim to help developers by capitalizing on their dedication, effort, and long hours of work to ease the debugging activities of their peers, or their own on other bugs. We show that swarm debugging requires a particular approach to collect relevant information, and we describe the Swarm Debugging Infrastructure. We also show that swarm debugging reduces developers' effort. We conclude with the advantages and current limitations of swarm debugging, and suggest directions to overcome these limitations and to further the adoption of crowdsourcing for debugging activities.
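The Swarm Debugging Infrastructure is not reproduced here; a minimal sketch of the kind of session event such an infrastructure might collect (all names and fields invented) is:

    #include <chrono>
    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical sketch: a breakpoint event harvested from a developer's
    // debugging session, the raw material for collective debugging hints.
    struct BreakpointEvent {
        std::string developer;
        std::string file;
        int line;
        std::chrono::system_clock::time_point when;
    };

    int main() {
        std::vector<BreakpointEvent> log;
        log.push_back({"alice", "Parser.java", 120, std::chrono::system_clock::now()});
        log.push_back({"bob",   "Parser.java", 120, std::chrono::system_clock::now()});

        // Two developers stopping at the same line independently is a hint
        // worth sharing with peers who debug related failures.
        std::cout << log.size() << " events recorded at Parser.java:120\n";
    }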
85

Supporting Software Evolution in Agent Systems

Dam, Khanh Hoa January 2009 (has links)
Software maintenance and evolution is arguably a lengthy and expensive phase in the life cycle of a software system. A critical issue in this phase is change propagation: given a set of primary changes that have been made to software, what additional secondary changes are needed to maintain consistency between software artefacts? Although many approaches have been proposed, automated change propagation is still a significant technical challenge in software maintenance and evolution. Our objective is to provide tool support for assisting designers in propagating changes during the process of maintaining and evolving models. We propose a novel, agent-oriented approach that works by repairing violations of desired consistency rules in a design model. Such consistency constraints are specified using the Object Constraint Language (OCL) and the Unified Modelling Language (UML) metamodel, which form the key inputs to our change propagation framework. The underlying change propagation mechanism of our framework is based on the well-known Belief-Desire-Intention (BDI) agent architecture. Our approach represents change options for repairing inconsistencies using event-triggered plans, as is done in BDI agent platforms. This naturally reflects the cascading nature of change propagation, where each change (primary or secondary) can require further changes to be made. We also propose a new method for generating repair plans from OCL consistency constraints. Furthermore, a given inconsistency will typically have a number of repair plans that could be used to restore consistency, and we propose a mechanism for semi-automatically selecting between alternative repair plans. This mechanism, which is based on a notion of cost, takes into account cascades (where fixing the violation of a constraint breaks another constraint) and synergies between constraints (where fixing the violation of a constraint also fixes another violated constraint). Finally, we report on an evaluation of the approach, covering both effectiveness and efficiency.
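As a loose illustration of the cost-based selection described above (the thesis defines cost over BDI repair plans; the structure, weights, and numbers below are invented):

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical sketch: score repair plans so that cascades (newly broken
    // constraints) raise the cost and synergies (co-fixed violations) lower it.
    struct RepairPlan {
        std::string description;
        int changes;    // secondary changes the plan requires
        int cascades;   // constraints the plan would newly violate
        int synergies;  // other violations the plan also repairs
    };

    int cost(const RepairPlan& p) {
        return p.changes + 2 * p.cascades - p.synergies;
    }

    int main() {
        std::vector<RepairPlan> plans = {
            {"rename the operation at every call site", 5, 0, 1},
            {"delete the operation",                    1, 3, 0},
        };
        auto best = std::min_element(plans.begin(), plans.end(),
            [](const RepairPlan& a, const RepairPlan& b) {
                return cost(a) < cost(b);
            });
        std::cout << "cheapest repair: " << best->description << '\n';
    }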
86

Rejuvenating C++ Programs through Demacrofication

Aditya Kumar 14 March 2013 (has links)
As we migrate software to new versions of programming languages, we would like to improve the style of its design and implementation by replacing brittle idioms and abstractions with the more robust features of the language and its libraries. This process is called source code rejuvenation. In this context, we are interested in replacing C preprocessor macros in C++ programs with C++11 declarations. The kinds of problems engendered by the C preprocessor are many and well known. Because the C preprocessor operates on the token stream independently of the host language's syntax, its extensive use can lead to hard-to-debug semantic errors. In C++11, the use of generalized constant expressions, type deduction, perfect forwarding, lambda expressions, and alias templates eliminates the need for many previous preprocessor-based idioms and solutions. Additionally, these features can be used to replace macros in legacy code, providing better type safety and reducing software-maintenance efforts. In order to remove the macros, we have established a correspondence between different kinds of macros and the C++11 declarations to which they could be transformed. We have also developed a set of tools to automate the task of demacrofying C++ programs. One of the tools suggests a one-to-one mapping between a macro and its corresponding C++11 declaration. Other tools assist in iteratively applying the refactorings to a software build and in generating rejuvenated programs. We have applied the tools to seven C++ libraries to assess the extent to which these libraries might be improved by demacrofication. Results indicate that between 52% and 98% of potentially refactorable macros could be transformed into C++11 declarations.
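The flavour of these macro-to-declaration mappings can be shown with small generic examples (not drawn from the seven libraries studied in the thesis):

    #include <iostream>
    #include <map>
    #include <string>

    // Before: preprocessor idioms, with the usual hazards (no type, no scope,
    // arguments possibly evaluated more than once):
    //   #define BUFFER_SIZE 512
    //   #define SQUARE(x) ((x) * (x))
    //   #define STR_MAP(V) std::map<std::string, V>

    // After: C++11 declarations with the same intent but proper typing.
    constexpr int buffer_size = 512;           // generalized constant expression

    template <typename T>
    constexpr T square(T x) { return x * x; }  // type-safe, single evaluation

    template <typename V>
    using StrMap = std::map<std::string, V>;   // alias template

    int main() {
        StrMap<int> counts;
        counts["demacrofied"] = square(buffer_size);
        std::cout << counts["demacrofied"] << '\n';  // prints 262144
    }

Unlike their macro counterparts, these declarations obey scope, participate in overload resolution, and are visible to the type checker and the debugger.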
87

Policy-Driven Framework for Static Identification and Verification of Component Dependencies

Livogiannis, Anastasios 02 June 2011 (has links)
Software maintenance is considered to be among the most difficult, lengthy, and costly parts of a software application's life cycle. Regardless of the nature of the software application, and despite software engineering efforts to reduce component coupling to a minimum, dependencies between software components will always exist and will trigger software maintenance operations, as they tend to threaten the "health" of the software system during the evolution of particular components. The situation is more serious with modern technologies and development paradigms, such as Service-Oriented Architecture (SOA) systems and cloud computing, which introduce larger software systems consisting of a substantial number of components with numerous types of dependencies on each other. This work proposes a reference architecture and a corresponding software framework that can be used to model the dependencies between components in software systems and to support the verification of a set of policies that are derived from system dependencies and relate to the software maintenance operations being applied. Dependency modelling is performed using configuration information from the system, as well as information harvested from component interface descriptions. The proposed approach has been applied to a medium-scale SOA system, namely the SCA Travel Sample from the Apache Software Foundation, and has been evaluated for performance on configuration specifications for a simulated SOA system consisting of up to a thousand web services offered by a few hundred components.
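A drastically simplified sketch of the idea (the framework's real dependency model and policies are far richer; the components and the single policy below are invented):

    #include <iostream>
    #include <map>
    #include <set>
    #include <string>

    // Hypothetical sketch: a dependency model harvested from configuration and
    // interface descriptions, checked against one maintenance policy
    // ("a component may be retired only if nothing still depends on it").
    int main() {
        std::map<std::string, std::set<std::string>> dependsOn = {
            {"BookingService", {"PaymentService", "CustomerDB"}},
            {"ReportingJob",   {"CustomerDB"}},
        };

        const std::string candidate = "CustomerDB";  // proposed for retirement
        for (const auto& entry : dependsOn)
            if (entry.second.count(candidate))
                std::cout << "policy violation: " << entry.first
                          << " still depends on " << candidate << '\n';
    }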
88

Enabling and supporting the debugging of software failures

Clause, James Alexander 21 March 2011 (has links)
This dissertation evaluates the following thesis statement: Program analysis techniques can enable and support the debugging of failures in widely used applications by (1) capturing, replaying, and, as much as possible, anonymizing failing executions and (2) highlighting subsets of failure-inducing inputs that are likely to be helpful for debugging such failures. To investigate this thesis, I developed techniques for recording, minimizing, and replaying executions captured from users' machines, anonymizing execution recordings, and automatically identifying failure-relevant inputs. I then performed experiments to evaluate the techniques in realistic scenarios using real applications and real failures. The results of these experiments demonstrate that the techniques can reduce the cost and difficulty of debugging.
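The dissertation's capture, replay, and anonymization techniques are not reproduced here; the sketch below only conveys the general flavour of shrinking a failure-inducing input, using an invented failure predicate:

    #include <iostream>
    #include <string>

    // Hypothetical failure predicate: the application fails whenever the
    // input contains the substring "<<".
    bool fails(const std::string& input) {
        return input.find("<<") != std::string::npos;
    }

    // Naive minimization: repeatedly drop one character, keeping each
    // deletion that still reproduces the failure.
    std::string minimize(std::string input) {
        bool shrunk = true;
        while (shrunk) {
            shrunk = false;
            for (std::size_t i = 0; i < input.size(); ++i) {
                std::string candidate = input.substr(0, i) + input.substr(i + 1);
                if (fails(candidate)) {
                    input = candidate;
                    shrunk = true;
                    break;
                }
            }
        }
        return input;
    }

    int main() {
        std::cout << minimize("abc<<def") << '\n';  // prints <<
    }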
89

A Verification Framework for Access Control in Dynamic Web Applications

Alalfi, Manar 30 April 2010 (has links)
Current technologies such as anti-virus software and network firewalls provide reasonably secure protection at the host and network levels, but not at the application level. When network and host-level entry points are comparatively secure, the public interfaces of web applications become the focus of malicious attacks. In this thesis, we focus on one of the most serious web application vulnerabilities: broken access control. Attackers often try to access unauthorized objects and resources other than URL pages in an indirect way, for instance through indirect access to back-end resources such as databases. The consequences of these attacks can be very destructive, especially when the web application allows administrators to remotely manage users and content over the web. In such cases, attackers are not only able to view unauthorized content, but also to take over site administration. To protect against these types of attacks, we have designed and implemented a security analysis framework for dynamic web applications. A reverse engineering process is performed on an existing dynamic web application to extract a role-based access-control security model. A formal analysis is applied to the recovered model to check access-control security properties. This framework can be used to verify that a dynamic web application conforms to access-control policies specified by a security engineer. Our framework provides a set of novel techniques for the analysis and modeling of web applications for the purpose of security verification and validation. It is largely language independent, and based on adaptable model recovery which can support a wide range of security analysis tasks. / Thesis (Ph.D., Computing) -- Queen's University, 2010-04-30
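As a hedged sketch of the kind of role-based access-control model such a reverse-engineering pass might recover (roles, resources, and the check below are invented):

    #include <iostream>
    #include <map>
    #include <set>
    #include <string>

    // Hypothetical sketch: roles mapped to the resources they may reach,
    // consulted for every (role, resource) request, including indirect ones.
    int main() {
        std::map<std::string, std::set<std::string>> permitted = {
            {"visitor", {"page:index"}},
            {"admin",   {"page:index", "page:admin", "db:users"}},
        };

        // An indirect request: a visitor tries to reach a back-end table.
        std::string role = "visitor", resource = "db:users";
        bool allowed = permitted[role].count(resource) > 0;
        std::cout << role << " -> " << resource << ": "
                  << (allowed ? "granted" : "denied (broken access attempt)")
                  << '\n';
    }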
90

Management Aspects of Software Clone Detection and Analysis

2014 June 1900 (has links)
Copying a code fragment and reusing it by pasting, with or without minor modifications, is a common practice in software development for improved productivity. As a result, software systems often contain similar segments of code, called software clones or code clones. For many reasons, unintentional clones may also appear in the source code without the developer's awareness. Studies report that significant fractions (5% to 50%) of the code in typical software systems are cloned. Although code cloning may increase initial productivity, it may cause fault propagation, inflate the code base, and increase maintenance overhead. Thus, it is believed that code clones should be identified and carefully managed. This Ph.D. thesis contributes to clone management with techniques realized in tools, and with large-scale, in-depth analyses of clones to inform the design of effective clone management techniques and strategies. To support proactive clone management, we have developed a clone detector as a plug-in to the Eclipse IDE. For clone detection, we used a hybrid approach that combines the strengths of both parser-based and text-based techniques. To capture clones that are similar but not exact duplicates, we adopted a novel approach that applies a suffix-tree-based k-difference hybrid algorithm, borrowed from the area of computational biology. Instead of targeting all clones in the entire code base, our tool aids clone-aware development by allowing a focused search for clones of any code fragment of the developer's interest. A good understanding of the code cloning phenomenon is a prerequisite to devising efficient clone management strategies. The second phase of the thesis comprises large-scale empirical studies on the characteristics (e.g., proportion, types of similarity, change patterns) of code clones in evolving software systems. Applying statistical techniques, we also made fairly accurate forecasts of the proportion of code clones in future versions of software projects. The outcomes of these studies expose useful insights into the characteristics of evolving clones and their management implications. Once code clones are identified, their management often necessitates careful refactoring, which is dealt with in the third phase of the thesis. Given a large number of clones, it is difficult to decide optimally what to refactor and what not to, especially when there are dependencies among clones and the objective is to minimize refactoring effort and risk while maximizing benefit. In this regard, we developed a novel clone refactoring scheduler that applies a constraint programming approach. We also introduced a novel effort model for estimating the effort needed to refactor clones in source code. We evaluated our clone detector, scheduler, and effort model through comparative empirical studies and user studies. Finally, based on our experience and an in-depth analysis of the present state of the art, we expose avenues for further research and development towards the versatile clone management system that we envision.
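The thesis's scheduler is built on constraint programming and a dedicated effort model, neither of which is reproduced here; this greedy sketch with invented numbers only illustrates the underlying trade-off of refactoring effort against benefit:

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical sketch: pick clone groups to refactor under an effort
    // budget, preferring the best benefit-per-effort ratio.
    struct CloneGroup {
        std::string name;
        double effort;   // estimated refactoring effort
        double benefit;  // estimated maintenance savings
    };

    int main() {
        std::vector<CloneGroup> groups = {
            {"parser helpers", 3.0, 9.0},
            {"logging blocks", 5.0, 6.0},
            {"UI callbacks",   2.0, 5.0},
        };
        std::sort(groups.begin(), groups.end(),
                  [](const CloneGroup& a, const CloneGroup& b) {
                      return a.benefit / a.effort > b.benefit / b.effort;
                  });

        double budget = 6.0;
        for (const CloneGroup& g : groups)
            if (g.effort <= budget) {
                budget -= g.effort;
                std::cout << "refactor: " << g.name << '\n';
            }
    }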
