481

Runtime Verification with Controllable Time Predictability and Memory Utilization

Kumar, Deepak 20 September 2013 (has links)
The goal of runtime verification is to inspect the well-being of a system by employing a monitor during its execution. Such monitoring imposes costs in terms of resource utilization. Memory usage and predictability of monitor invocations are the key indicators of the quality of a monitoring solution, especially in the context of embedded systems. In this work, we propose a novel control-theoretic approach for coordinating time predictability and memory utilization in runtime monitoring of real-time embedded systems. In particular, we design a PID controller and four fuzzy controllers with different optimization control objectives. Our approach controls the frequency of monitor invocations by incorporating a bounded memory buffer that stores events which need to be monitored. The controllers attempt to improve time predictability and maximize memory utilization, while ensuring the soundness of the monitor. Unlike existing approaches based on static analysis, our approach is scalable and well-suited for reactive systems that are required to react to stimuli from the environment in a timely fashion. Our experiments using two case studies (a laser beam stabilizer for aircraft tracking, and a Bluetooth mobile payment system) demonstrate the advantages of using controllers to achieve low variation in the frequency of monitor invocations, while maintaining maximum memory utilization in highly non-linear environments. In addition to this problem, the thesis presents a brief overview of our preceding work on runtime verification.
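The abstract describes the control loop in prose only; the Python sketch below is a rough illustration (not taken from the thesis) of the core idea: a PID controller observes how full a bounded event buffer is and adjusts the period between monitor invocations so that memory utilization stays near a target. The class name, gains, setpoint, and buffer figures are all illustrative assumptions.

```python
# Minimal sketch (not the thesis implementation): a PID controller that picks
# the delay before the next monitor invocation from the occupancy of a bounded
# event buffer. Gains, setpoint, and base period are illustrative assumptions.

class PIDInvocationController:
    def __init__(self, kp=0.8, ki=0.1, kd=0.05, target_utilization=0.9):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target_utilization   # desired fraction of the buffer in use
        self.integral = 0.0
        self.prev_error = 0.0

    def next_period(self, buffer_len, buffer_capacity, base_period_ms=100.0):
        """Return the delay (ms) before the monitor is invoked again."""
        utilization = buffer_len / buffer_capacity
        error = self.target - utilization   # negative => buffer fuller than target
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        adjustment = self.kp * error + self.ki * self.integral + self.kd * derivative
        # A fuller buffer shortens the period (the monitor runs more often);
        # an emptier buffer lengthens it, trading invocation frequency for
        # higher memory utilization.
        return max(1.0, base_period_ms * (1.0 + adjustment))


# Example: the buffer is nearly full, so the next invocation comes sooner.
controller = PIDInvocationController()
print(controller.next_period(buffer_len=95, buffer_capacity=100))
```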
482

Empirical Studies of Code Clone Genealogies

Barbour, Liliane Jeanne 31 January 2012 (has links)
Two identical or similar code fragments form a clone pair. Previous studies have identified cloning as a risky practice. Therefore, a developer needs to be aware of any clone pairs so as to properly propagate any changes between clones. A clone pair experiences many changes during the creation and maintenance of software systems. A change can either maintain or remove the similarity between clones in a clone pair. If a change maintains the similarity between clones, the clone pair is left in a consistent state. However, if a change makes the clones no longer similar, the clone pair is left in an inconsistent state. The set of states and changes experienced by clone pairs over time form an evolution history known as a clone genealogy. In this thesis, we provide a formal definition of clone genealogies, and perform two case studies to examine clone genealogies. In the first study, we examine clone genealogies to identify fault-prone “patterns” of states and changes. We also build prediction models using clone metrics from one snapshot and compare them to models that include historical evolutionary information about code clones. We examine three long-lived software systems and identify clones using Simian and CCFinder clone detection tools. The results show that there is a relationship between the size of the clone and the time interval between changes and fault-proneness of a clone pair. Additionally, we show that adding evolutionary information increases the precision, recall, and F-Measure of fault prediction models by up to 26%. In our second study, we define 8 types of late propagation and compare them to other forms of clone evolution. Our results not only verify that late propagation is more harmful to software systems, but also establish that some specific cases of late propagations are more harmful than others. Specifically, two cases are most risky: (1) when a clone experiences inconsistent changes and then a re-synchronizing change without any modification to the other clone in a clone pair; and (2) when two clones undergo an inconsistent modification followed by a re-synchronizing change that modifies both the clones in a clone pair. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2012-01-31 11:39:10.503
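The genealogy patterns discussed above can be pictured with a small example. The sketch below is not the thesis's tooling (which used Simian and CCFinder on real histories); it is a hypothetical walk over one clone pair's change history that flags late propagation, i.e. an inconsistent change that is only re-synchronized in a later revision. The change labels and revisions are illustrative assumptions.

```python
# Minimal sketch (not the thesis tooling): scan a clone pair's change history
# and report "late propagation" episodes, i.e. an inconsistent change followed
# only later by a re-synchronizing change. Labels are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Change:
    revision: int
    kind: str  # "consistent", "inconsistent", or "resync"

def late_propagations(history):
    """Return (diverged_at, resynced_at) revision pairs for one clone pair."""
    episodes = []
    diverged_at = None
    for change in history:
        if change.kind == "inconsistent" and diverged_at is None:
            diverged_at = change.revision
        elif change.kind == "resync" and diverged_at is not None:
            episodes.append((diverged_at, change.revision))
            diverged_at = None
    return episodes

history = [Change(10, "consistent"), Change(14, "inconsistent"),
           Change(21, "resync"), Change(30, "consistent")]
print(late_propagations(history))   # [(14, 21)]
```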
483

An Exploration of Challenges Limiting Pragmatic Software Defect Prediction

Shihab, Emad 09 August 2012 (has links)
Software systems continue to play an increasingly important role in our daily lives, making the quality of software systems an extremely important issue. Therefore, a significant amount of recent research has focused on the prioritization of software quality assurance efforts. One line of work that has been receiving an increasing amount of attention is Software Defect Prediction (SDP), where predictions are made to determine where future defects might appear. Our survey showed that in the past decade, more than 100 papers were published on SDP. Nevertheless, the adoption of SDP in practice to date is limited. In this thesis, we survey the state-of-the-art in SDP in order to identify the challenges that hinder the adoption of SDP in practice. These challenges include the fact that the majority of SDP research rarely considers the impact of defects when performing predictions, seldom provides guidance on how to use the SDP results, and is too reactive and defect-centric in nature. We propose approaches that tackle these challenges. First, we present approaches that predict high-impact defects. These approaches illustrate how SDP research can be tailored to consider the impact of defects when making predictions. Second, we present approaches that simplify SDP models so they can be easily understood, and illustrate how these simple models can be used to assist practitioners in prioritizing the creation of unit tests in large software systems. These approaches illustrate how SDP research can provide guidance to practitioners using SDP. Then, we argue that organizations are interested in proactive risk management, which covers more than just defects. For example, risky changes may not introduce defects but they could delay the release of projects. Therefore, we present an approach that predicts risky changes, illustrating how SDP can be more encompassing (i.e., by predicting risk, not only defects) and proactive (i.e., by predicting changes before they are incorporated into the code base). The presented approaches are empirically validated using data from several large open source and commercial software systems. The presented research highlights how challenges of pragmatic SDP can be tackled, making SDP research more beneficial and applicable in practice. / Thesis (Ph.D, Computing) -- Queen's University, 2012-08-02 13:12:39.707
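One of the points above is that simple, understandable models can guide practitioners, for example when deciding which files should receive unit tests first. The sketch below is not from the thesis; it is a hypothetical, deliberately transparent ranking rule based on a single historical signal, with field names and data made up for illustration.

```python
# Minimal sketch (assumed data, not the thesis's models): rank files without
# unit tests by how many past defect-fixing changes touched them, a simple and
# explainable stand-in for a full defect prediction model.

files = [
    {"path": "core/parser.c", "prior_fix_changes": 42, "has_unit_tests": False},
    {"path": "ui/dialog.c",   "prior_fix_changes": 3,  "has_unit_tests": False},
    {"path": "core/lexer.c",  "prior_fix_changes": 17, "has_unit_tests": True},
]

# Untested files with the most defect-fixing history come first.
candidates = [f for f in files if not f["has_unit_tests"]]
for f in sorted(candidates, key=lambda f: f["prior_fix_changes"], reverse=True):
    print(f["path"], f["prior_fix_changes"])
```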
484

Improving the Quality of Use Case Models and their Utilization in Software Development

El-Attar, Mohamed Unknown Date
No description available.
485

An analysis of system development tools

Barratt, Dean M. January 1990 (has links)
The development of a software package is a complex and time-consuming process. Computer Assisted Software Engineering (CASE) tools, such as Excelerator, DesignAid and SA Tools, have offered an alternative to the traditional methods of system design. While the use of these design tools can lessen the burden of project management, there currently exists no systematic method for describing or evaluating existing products. This study identifies criteria for software development tools by examining three products used in a PC-based computing environment. The three software development tools studied are DesignAid version 4.0 by Nastec Corporation, SA Tools by Tekcase Corporation, and Excelerator version 1.7 by Index Technology Corporation. In order to give the "look and feel" of the products, the same design project is implemented on each of the tools. Then each product is evaluated with respect to a given set of criteria. / Department of Computer Science
486

A Feature Interaction Resolution Scheme Based on Controlled Phenomena

Bocovich, Cecylia 13 May 2014 (has links)
Systems that are assembled from independently developed features suffer from feature interactions, in which features affect one another's behaviour in surprising ways. To ensure that a system behaves as intended, developers need to analyze all potential interactions -- and many of the identified interactions need to be fixed and their fixes verified. The feature-interaction problem states that the number of potential interactions to be considered is exponential in the number of features in a system. Resolution strategies combat the feature-interaction problem by offering general strategies that resolve entire classes of interactions, thereby reducing the work of the developer who is charged with the task of resolving interactions. In this thesis, we focus on resolving interactions due to conflict. We present an approach, language, and implementation based on resolver modules modelled in the situation calculus in which the developer can specify an appropriate resolution for each variable under conflict. We performed a case study involving 24 automotive features, and found that the number of resolutions to be specified was much smaller than the number of possible feature interactions (6 resolutions for 24 features), that what constitutes an appropriate resolution strategy is different for different variables, and that the subset of situation calculus we used was sufficient to construct nontrivial resolution strategies for six distinct output variables.
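The resolver-module idea can be made concrete with a small example. The thesis expresses resolutions in the situation calculus; the Python sketch below is only an analogy under assumed names: each output variable that several features try to control is mapped to a resolution function that merges the conflicting requests.

```python
# Minimal sketch (not the thesis's situation-calculus language): a resolver
# table mapping each variable under conflict to a strategy that merges the
# values requested by different features. Names and strategies are assumptions.

def resolve_min(requests):
    # e.g. several features each request a target speed; take the lowest
    return min(requests.values())

def resolve_priority(requests):
    # e.g. safety features outrank comfort features
    priority = {"collision_avoidance": 0, "cruise_control": 1, "eco_mode": 2}
    winner = min(requests, key=lambda feature: priority.get(feature, 99))
    return requests[winner]

RESOLVERS = {
    "target_speed":   resolve_min,
    "brake_pressure": resolve_priority,
}

def actuate(variable, requests):
    """Pick one value for a variable that several features try to control."""
    return RESOLVERS[variable](requests)

print(actuate("target_speed", {"cruise_control": 110, "eco_mode": 90}))       # 90
print(actuate("brake_pressure",
              {"collision_avoidance": 0.8, "cruise_control": 0.1}))           # 0.8
```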
487

Building a foundation for the future of software practices within the multi-core domain

Berg, Celina 31 August 2011 (has links)
Multi-core programming presents developers with a dramatic paradigm shift. Where the conceptual models of sequential programming largely supported the decoupling of source from underlying architecture, it is now unwise to develop new patterns, abstractions and parallel software in complete isolation from issues of modern hardware utilization. Challenging issues historically associated with complex systems code are now compounded within the parallel domain. These issues are manifested at all stages of software development, including design, development, testing and maintenance. Programmers currently lack the essential tools to even partially automate reasoning techniques, resource utilization and system configuration management. Current trial-and-error strategies lack a systematic approach that will scale to growing multi-core and multi-processor environments. In fact, current algorithm and data layout conceptual models applied to design, implementation and pedagogy often conflict with effective parallelization strategies. This dissertation calls for a rethinking, rebuilding and retooling of conceptual models, taking into account opportunities to introduce parallelism for multi-core architectures from the ground up. In order to establish new conceptual models, we must first 1) identify inherent complexities in multi-core development, 2) establish support strategies to make handling them more explicit and 3) evaluate the impact of these strategies in terms of proposed software development practices and tool support. / Graduate
488

Visualization and analysis of assembly code in an integrated comprehension environment

Pucsek, Dean W. 26 June 2013 (has links)
Computing has reached a point where it is visible in almost every aspect of one’s daily activities. Consider, for example, a typical household. There will be a desktop computer, game console, tablet computer, and smartphones built using different types of processors and instruction sets. To support the pervasive and heterogeneous nature of computing there have been many advances in programming languages, hardware features, and increasingly complex software systems. One task that is shared by all people who work with software is the need to develop a concrete understanding of foreign code so that tasks such as bug fixing, feature implementation, and security audits can be conducted. To do this, tools are needed to help present the code in a manner that is conducive to comprehension and allows for knowledge to be transferred. Current tools for program comprehension are aimed at high-level languages and do not provide a platform for assembly code comprehension that is extensible both in terms of the supported environment as well as the supported analysis. This thesis presents ICE, an Integrated Comprehension Environment, that is developed to support comprehension of assembly code while remaining extensible. ICE is designed to receive data from external tools, such as disassemblers and debuggers, which is then presented in a series of visualizations: Cartographer, Tracks, and a Control Flow Graph. Cartographer displays an interactive function call graph while Tracks displays a navigable sequence diagram. Support for new visualizations is provided through the extensible implementation, enabling analysts to develop visualizations tailored to their needs. Evaluation of ICE is completed through a series of case studies that demonstrate different aspects of ICE relative to currently available tools. / Graduate / 0984 / dpucsek@uvic.ca
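To give a feel for the kind of data a visualization such as Cartographer consumes, the sketch below (not part of ICE) builds a function call graph from a hypothetical stream of call/return events like those a debugger or instrumented disassembler might emit. The event format is an assumption for illustration.

```python
# Minimal sketch (not ICE): derive a caller -> callees graph from an ordered
# stream of ("call", name) and ("return",) events. Event format is assumed.

from collections import defaultdict

def call_graph(events):
    graph = defaultdict(set)   # caller -> set of callees
    stack = ["<entry>"]
    for event in events:
        if event[0] == "call":
            graph[stack[-1]].add(event[1])
            stack.append(event[1])
        elif event[0] == "return" and len(stack) > 1:
            stack.pop()
    return graph

trace = [("call", "main"), ("call", "parse"), ("return",),
         ("call", "render"), ("return",), ("return",)]
for caller, callees in call_graph(trace).items():
    print(caller, "->", sorted(callees))
```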
489

A view mechanism for an integrated project support environment

Brown, Alan William January 1988 (has links)
In the last few years the rapidly expanding application of computer technology has brought the problems of software production increasingly to the fore, and it is widely accepted that current software development techniques are unable to produce high quality software at the rate required to keep pace with this. To try to improve this imbalance, the field of Software Engineering has been advanced as a possible solution, attempting to apply the formal methods of an engineering discipline to software design and implementation. One of the most promising areas to be developed in this field is that of Integrated Project Support Environments (IPSEs), which attempt to spread the focus of attention during software development from the coding stage to embrace the whole development cycle, from initial requirements specification through to operational maintenance. These environments hope to improve production efficiency and quality by providing a complete set of support tools to help with each stage of software development, and to supply the necessary tool integration to ensure a smooth transition of use between these tools. This thesis examines the services provided as part of an IPSE which allow users and tools to interact in a meaningful way throughout the life of a project. In particular, a view mechanism is seen as a vital component of these services, allowing individual external views of the facilities to be defined which suit different users' needs. A model of a view mechanism for an IPSE is developed, and within a particular implementation of an IPSE, a view mechanism is formally defined and implemented. Finally, the view mechanism is analysed and discussed before concluding with some directions for future research.
490

Comparison of Exact and Approximate Multi-Objective Optimization for Software Product Lines

Olaechea Velazco, Rafael Ernesto January 2013 (has links)
Software product lines (SPLs) manage product variants in a systematic way and allow stakeholders to derive variants by selecting features. Finding a desirable variant is hard, due to the huge configuration space and usually conflicting objectives (e.g., lower cost and higher performance). This scenario can be reduced to a multi-objective optimization problem in SPLs. We address the problem using an exact and an approximate algorithm and compare their accuracy, time consumption, scalability and parameter setting requirements on five case studies with increasing complexity. Our empirical results show that (1) it is feasible to use exact techniques for small SPL multi-objective optimization problems, and (2) approximate methods can be used for large problems but require substantial effort to find the best parameter settings for acceptable approximation. Finally, we discuss the tradeoff between accuracy and time consumption when using exact and approximate techniques for SPL multi-objective optimization and guide stakeholders to choose one or the other in practice.
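For small feature models, the exact approach amounts to enumerating all configurations and keeping the non-dominated ones. The sketch below is not the thesis's implementation; it uses a made-up three-feature model with two objectives (cost to minimize, performance to maximize) purely to illustrate Pareto filtering over a configuration space.

```python
# Minimal sketch (assumed feature model, not the thesis's algorithms): enumerate
# every configuration of a tiny SPL and keep the Pareto-optimal ones under two
# conflicting objectives.

from itertools import product

FEATURES = {              # feature: (cost, performance), illustrative values
    "encryption":  (4, 1),
    "caching":     (2, 5),
    "compression": (3, 3),
}

def objectives(selection):
    cost = sum(FEATURES[f][0] for f in selection)
    perf = sum(FEATURES[f][1] for f in selection)
    return cost, perf

def dominates(a, b):
    # a dominates b if it is no worse on both objectives and differs on at least one
    return a[0] <= b[0] and a[1] >= b[1] and a != b

configs = [tuple(f for f, on in zip(FEATURES, bits) if on)
           for bits in product([0, 1], repeat=len(FEATURES))]
scored = [(c, objectives(c)) for c in configs]
pareto = [(c, s) for c, s in scored
          if not any(dominates(t, s) for _, t in scored)]
print(pareto)
```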
