About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

Selected Papers of the International Workshop on Smalltalk Technologies (IWST’10) : Barcelona, Spain, September 14, 2010

January 2010 (has links)
The goal of the IWST workshop series is to create and foster a forum around advancements of or experience in Smalltalk. The workshop welcomes contributions to all aspects, theoretical as well as practical, of Smalltalk-related topics. / The purpose of the IWST workshop series is to form and maintain a forum for discussing advances in, and results of, work with the Smalltalk programming environment. The workshop includes contributions on all aspects of Smalltalk-related work, of both a theoretical and a practical nature.
182

Dynamic state alteration techniques for automatically locating software errors

Jeffrey, Dennis Bernard. January 2009 (has links)
Thesis (Ph. D.)--University of California, Riverside, 2009. / Includes abstract. Title from first page of PDF file (viewed March 11, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 223-234). Also issued in print.
183

Program reliability through algorithmic design and analysis

Samanta, Roopsha 10 February 2014 (has links)
Software systems are ubiquitous in today's world and yet remain vulnerable to the fallibility of human programmers as well as the unpredictability of their operating environments. The overarching goal of this dissertation is to develop algorithms to enable automated and efficient design and analysis of reliable programs.

In the first and second parts of this dissertation, we focus on the development of programs that are free from programming errors. The intent is not to eliminate the human programmer, but instead to complement his or her expertise, with sound and efficient computational techniques, when possible. To this end, we make contributions in two specific domains.

Program debugging --- the process of fault localization and error elimination from a program found to be incorrect --- typically relies on expert human intuition and experience, and is often a lengthy, expensive part of the program development cycle. In the first part of the dissertation, we target automated debugging of sequential programs. A broad and informal statement of the (automated) program debugging problem is to suitably modify an erroneous program, say P, to obtain a correct program, say P'. This problem is undecidable in general; it is hard to formalize; moreover, it is particularly challenging to assimilate and mechanize the customized, expert programmer intuition involved in the choices made in manual program debugging. Our first contribution in this domain is a methodical formalization of the program debugging problem that enables automation, while incorporating expert programmer intuition and intent. Our second contribution is a solution framework that can debug infinite-state, imperative, sequential programs written in higher-level programming languages such as C. Boolean programs, which are smaller, finite-state abstractions of infinite-state or large, finite-state programs, have been found to be tractable for program verification. In this dissertation, we utilize Boolean programs for program debugging. Our solution framework involves two main steps: (a) automated debugging of a Boolean program, corresponding to an erroneous program P, and (b) translation of the corrected Boolean program into a correct program P'.

Shared-memory concurrent programs are notoriously difficult to write, verify and debug; this makes them excellent targets for automated program completion, in particular, for synthesis of synchronization code. Extant work in this domain has focused on either propositional temporal logic specifications with simplistic models of concurrent programs, or more refined program models with the specifications limited to just safety properties. Moreover, there has been limited effort in developing adaptable and fully-automatic synthesis frameworks that are capable of generating synchronization at different levels of abstraction and granularity. In the second part of this dissertation, we present a framework for synthesis of synchronization for shared-memory concurrent programs with respect to temporal logic specifications. In particular, given a concurrent program composed of synchronization-free processes, and a temporal logic specification describing their expected concurrent behaviour, we generate synchronized processes such that the resulting concurrent program satisfies the specification. We provide the ability to synthesize readily-implementable synchronization code based on lower-level primitives such as locks and condition variables. We enable synchronization synthesis of finite-state concurrent programs composed of processes that may have local and shared variables, may be straight-line or branching programs, may be ongoing or terminating, and may have program-initialized or user-initialized variables. We also facilitate expression of safety and liveness properties over both control and data variables by proposing an extension of propositional computation tree logic.

Most program analyses, verification, debugging and synthesis methodologies target traditional correctness properties such as safety and liveness. These techniques typically do not provide a quantitative measure of the sensitivity of a computational system's behaviour to unpredictability in the operating environment. We propose that the core property of interest in reasoning in the presence of such uncertainty is robustness --- small perturbations to the operating environment do not change the system's observable behavior substantially. In well-established areas such as control theory, robustness has always been a fundamental concern; however, the techniques and results therein are not directly applicable to computational systems with large amounts of discretized, discontinuous behavior. Hence, robustness analysis of software programs used in heterogeneous settings necessitates development of new theoretical frameworks and algorithms.

In the third part of this dissertation, we target robustness analysis of two important classes of discrete systems --- string transducers and networked systems of Mealy machines. For each system, we formally define robustness of the system with respect to a specific source of uncertainty. In particular, we analyze the behaviour of transducers in the presence of input perturbations, and the behaviour of networked systems in the presence of channel perturbations. Our overall approach is automata-theoretic, and necessitates the use of specialized distance-tracking automata for tracking various distance metrics between two strings. We present constructions for such automata and use them to develop decision procedures based on reducing the problem of robustness verification of our systems to the problem of checking the emptiness of certain automata. Thus, the system under consideration is robust if and only if the languages of particular automata are empty. / text
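To make the synthesis goal concrete, here is a minimal sketch (not taken from the dissertation) of the kind of lock- and condition-variable-based synchronization code such a framework might emit for two synchronization-free processes sharing a one-slot buffer; all names are hypothetical:

```python
# Illustrative sketch only: the flavor of lock/condition-variable
# synchronization code a synthesis framework of this sort might emit.
# Names and structure are invented for this example.
import threading

class OneSlotBuffer:
    def __init__(self):
        self._cond = threading.Condition()  # lock + condition variable
        self._slot = None
        self._full = False

    def put(self, item):
        with self._cond:
            # Synthesized guard: block until the slot is empty.
            while self._full:
                self._cond.wait()
            self._slot, self._full = item, True
            self._cond.notify_all()

    def get(self):
        with self._cond:
            # Synthesized guard: block until the slot is full.
            while not self._full:
                self._cond.wait()
            item, self._full = self._slot, False
            self._cond.notify_all()
            return item

if __name__ == "__main__":
    buf = OneSlotBuffer()
    producer = threading.Thread(target=lambda: [buf.put(i) for i in range(3)])
    producer.start()
    print([buf.get() for _ in range(3)])  # [0, 1, 2]
    producer.join()
```

The synthesized guards are the `while` loops around `wait()` -- exactly the pieces a programmer would otherwise have to get right by hand, and the kind of readily-implementable code the framework targets.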
184

Retrospect on contemporary Internet organization and its challenges in the future

Gutierrez De Lara, Felipe 25 July 2011 (has links)
The intent of this report is to expose the audience to the contemporary organization of the Internet and to highlight the challenges it must deal with in the future, as well as the current efforts being made to overcome such threats. This report aims to build a frame of reference for how the Internet is currently structured and how the different layers interact to make it possible for the Internet to exist as we know it. Additionally, the report explores the challenges the current Internet architecture design is facing, the reasons why these challenges are arising, and the multiple efforts taking place to keep the Internet working. In order to reach these objectives, I visited multiple sites of organizations whose only reason for existence is to support the Internet and keep it functioning. The approach used to write this report was to research the topic by accessing multiple technical papers extracted from the IEEE database and network conference reviews, and to analyze and expose their findings. This report utilizes this information to elaborate on how network engineers are handling the challenges of keeping the Internet functional while supporting dynamic requirements. This report exposes the challenges the Internet is facing with scalability, the existence of debugging tools, security, mobility, reliability, and quality of service. It is explained in brief how each of these challenges is affecting the Internet and the strategies in place to vanquish them. The final objectives are to inform the reader of how the Internet is working with a set of ever-changing and growing requirements, give an overview of the multiple institutions dedicated to reinforcing the Internet, and provide a list of current challenges and the actions being taken to overcome them. / text
185

K-MORPH: Knowledge Morphing via Reconciliation of Contextualized Sub-ontologies

Hussain, Syed Sajjad 29 March 2011 (has links)
Knowledge-driven problem solving demands 'complete' knowledge about the domain and its interpretation under different contexts. Knowledge Morphing aims at a context-driven integration of heterogeneous knowledge sources--in order to provide a comprehensive and networked view of all knowledge about a domain-specific problem, pertaining to the context at hand. In this PhD thesis, we have proposed a Semantic Web based framework, K-MORPH, for Knowledge Morphing via Reconciliation of Contextualized Sub-ontologies. In order to realize our K-MORPH framework, we have developed: (i) a sub-ontology extraction method for generating contextualized sub-ontologies from the source ontologies pertinent to the problem-context at hand; (ii) two ontology matching approaches: triple-based ontology matching (TOM) and proof-based ontology matching (POM) for finding both atomic and complex correspondences between two extracted contextualized sub-ontologies; and (iii) our approach for resolving inconsistencies in ontologies by generating minimal inconsistent resolve candidates (MIRCs), where removing any of the MIRCs from the inconsistent ontology results in a maximal consistent sub-ontology. Thus, K-MORPH performs knowledge morphing among ontology-modelled knowledge sources and generates a context-sensitive and comprehensive knowledge-base pertinent to the problem at hand by (a) extracting problem-specific knowledge components from ontology-modelled knowledge sources using our sub-ontology extraction method; (b) aligning and merging the extracted knowledge components using our matching approaches; and (c) repairing inconsistencies in the morphed knowledge by applying our approach for detecting and resolving inconsistencies. We demonstrated the application of our K-MORPH framework in the healthcare domain, where K-MORPH generated a merged ontology for providing a comprehensive therapeutic knowledge-base for Urinary Tract Infections (UTI) by first (i) extracting 20 contextualized sub-ontologies from various UTI ontologies of different healthcare institutions, (ii) aligning and merging the extracted UTI sub-ontologies, and (iii) detecting and resolving inconsistencies in the merged UTI ontology.
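As an illustration of the general flavour of triple-based matching (the thesis's TOM approach is more sophisticated), the following minimal sketch proposes concept correspondences between two toy UTI ontologies when their subjects agree on normalized predicate/object pairs; all identifiers here are invented for the example:

```python
# Minimal sketch of triple-based matching between two ontologies, here
# reduced to (subject, predicate, object) string triples. An illustration
# of the general idea, not the TOM algorithm itself.
def normalize(term: str) -> str:
    return term.lower().replace("_", "").replace("-", "")

def triple_matches(onto_a, onto_b):
    """Propose correspondences between subjects that participate in
    triples with the same normalized predicate and object."""
    index_b = {}
    for s, p, o in onto_b:
        index_b.setdefault((normalize(p), normalize(o)), set()).add(s)
    matches = set()
    for s, p, o in onto_a:
        for s_b in index_b.get((normalize(p), normalize(o)), ()):
            matches.add((s, s_b))
    return matches

# Invented toy fragments of two institutions' UTI ontologies.
uti_a = [("Cystitis", "isA", "UTI"),
         ("Cystitis", "treatedBy", "Nitrofurantoin")]
uti_b = [("Bladder_Infection", "isA", "UTI"),
         ("Bladder_Infection", "treatedBy", "nitrofurantoin")]

print(triple_matches(uti_a, uti_b))
# {('Cystitis', 'Bladder_Infection')} -- supported by two triple-level agreements
```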
186

The analysis of Di, a detailed design metric, on large-scale software

McDaniel, Patrick Drew January 1991 (has links)
There is no abstract available for this thesis. / Department of Computer Science
187

Application for Debugging and Calibration of an Underwater Robot

Lannebjer, Patrik, Forssman, Alexander January 2014 (has links)
In this thesis we present a suitable way of calibrating and debugging an autonomous underwater vehicle (AUV). The issues that occur when working with an AUV are the inconvenience of having to constantly recompile the software to change the behavior of the AUV and the lack of feedback received. If the vehicle does not behave as it should, the information needed to trace and fix the problems that occur is in general difficult to retrieve. To tackle this problem, a literature study was made on logging libraries, communication protocols, as well as AUVs in general. This resulted in identifying a set of existing logging libraries and possible communication protocols. From testing and analyzing these results, Zlog was chosen as the logging library and UDP as the communication protocol. Zlog has then been used in the AUV application to log relevant information on the AUV, and UDP is used to send this logging information from the AUV to a desktop program created for Windows. The desktop program also allows filtering of any incoming logs with the use of a parser. This has been an essential part of the solution, making it possible to identify specific logging data and present it in a convenient way. To be able to change the format of the log file, the parser has been given a grammar which can be adjusted to adapt to a different log file. Additionally, the desktop application has the ability to send commands to the AUV application via the UDP connection to change the behavior of the AUV live.
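The receiving side of such a setup can be sketched as follows; this is an illustrative Python stand-in for the Windows desktop program described above (Zlog itself is a C library on the AUV side), and the port number and log-line format are assumptions:

```python
# Illustrative sketch (not the authors' code): logs arrive from the AUV
# over UDP and are filtered before display. Port and line format assumed.
import re
import socket

LOG_LINE = re.compile(r"^(?P<level>\w+)\s+(?P<module>\w+):\s+(?P<msg>.*)$")

def serve(host="0.0.0.0", port=9999, show_levels=("WARN", "ERROR")):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        datagram, _addr = sock.recvfrom(4096)
        m = LOG_LINE.match(datagram.decode("utf-8", errors="replace"))
        # Simple filter: only display log lines at the requested levels.
        if m and m.group("level") in show_levels:
            print(f'[{m.group("level")}] {m.group("module")}: {m.group("msg")}')

if __name__ == "__main__":
    serve()  # e.g. the AUV side might send "WARN depth: sensor timeout"
```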
188

Effective fault localization techniques for concurrent software

Park, Sang Min 12 January 2015 (has links)
Multicore and Internet cloud systems have been widely adopted in recent years and have resulted in the increased development of concurrent programs. However, concurrency bugs are still difficult to test and debug for at least two reasons: concurrent programs have a large interleaving space, and concurrency bugs involve complex interactions among multiple threads. Existing testing solutions for concurrency bugs have focused on exposing concurrency bugs in the large interleaving space, but they often do not provide debugging information for developers to understand the bugs. To address the problem, this thesis proposes techniques that help developers in debugging concurrency bugs, particularly for locating the root causes and for understanding them, and presents a set of empirical user studies that evaluates the techniques.

First, this thesis introduces a dynamic fault-localization technique, called Falcon, that locates single-variable concurrency bugs as memory-access patterns. Falcon uses dynamic pattern detection and statistical fault localization to report a ranked list of memory-access patterns for root causes of concurrency bugs. The overall Falcon approach is effective: in an empirical evaluation, we show that Falcon almost always ranks the program fragments corresponding to the root cause of the concurrency bug as "most suspicious". In principle, such a ranking can save a developer's time by allowing him or her to quickly hone in on the problematic code, rather than having to sort through many reports.

Others have shown that single- and multi-variable bugs cover a high fraction of all concurrency bugs that have been documented in a variety of major open-source packages; thus, being able to detect both is important. Because Falcon is limited to detecting single-variable bugs, we extend the Falcon technique to handle both single-variable and multi-variable bugs, using a unified technique, called Unicorn. Unicorn uses online memory monitoring and offline memory pattern combination to handle multi-variable concurrency bugs. The overall Unicorn approach is effective in ranking memory-access patterns for single- and multi-variable concurrency bugs.

To further assist developers in understanding concurrency bugs, this thesis presents a fault-explanation technique, called Griffin, that provides more context of the root cause than Unicorn. Griffin reconstructs the root cause of the concurrency bugs by grouping suspicious memory accesses, finding suspicious method locations, and presenting calling stacks along with the buggy interleavings. By providing additional context, the overall Griffin approach can provide more information at a higher level to the developer, allowing him or her to more readily diagnose complex bugs that may cross file or module boundaries.

Finally, this thesis presents a set of empirical user studies that investigates the effectiveness of the presented techniques. In particular, the studies compare the effectiveness between a state-of-the-art debugging technique and our debugging techniques, Unicorn and Griffin. Among our findings, the user study shows that while the techniques are indistinguishable when the fault is relatively simple, Griffin is most effective for more complex faults. This observation further suggests that there may be a need for a spectrum of tools or interfaces that depend on the complexity of the underlying fault or even the background of the user.
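The statistical ranking step at the heart of techniques like Falcon can be illustrated with a short, hedged sketch; the suspiciousness formula below is Ochiai, a common choice in statistical fault localization (the thesis's exact scoring may differ), and the patterns and counts are invented:

```python
# Hedged sketch of the statistical ranking step common to techniques
# like Falcon: score each observed memory-access pattern by how strongly
# it correlates with failing runs. Data below is invented.
from math import sqrt

def ochiai(failed_with, passed_with, total_failed):
    """Ochiai suspiciousness: failed runs containing the pattern, scaled
    by total failures and total occurrences of the pattern."""
    denom = sqrt(total_failed * (failed_with + passed_with))
    return failed_with / denom if denom else 0.0

# pattern -> (failing runs where it appeared, passing runs where it appeared)
observations = {
    "W(x)->R(x) unsynchronized": (9, 1),
    "R(y)->W(y) same thread":    (3, 7),
    "W(z)->W(z) under lock":     (1, 9),
}
TOTAL_FAILED = 10

ranking = sorted(observations.items(),
                 key=lambda kv: ochiai(*kv[1], TOTAL_FAILED),
                 reverse=True)
for pattern, (f, p) in ranking:
    print(f"{ochiai(f, p, TOTAL_FAILED):.2f}  {pattern}")
# 0.90  W(x)->R(x) unsynchronized   <- ranked most suspicious
# 0.30  R(y)->W(y) same thread
# 0.10  W(z)->W(z) under lock
```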
189

Static and hybrid analysis in model-based debugging

Mayer, Wolfgang January 2007 (has links)
Defects in computer programs have great social and economic impacts and should be eliminated as much as possible. Since testing and debugging are among the most costly and time consuming tasks in the software development life cycle, a variety of intelligent debugging aids have been proposed within the last three decades. Model-based software debugging (MBSD) is a particular technique that exploits discrepancies between a program execution and the intended behaviour to isolate program fragments that could potentially explain an observed misbehaviour. In contrast to other techniques, model-based debugging does not require a formal specification of a program's behaviour, making the approach suitable for developers without training in formal software engineering practices. A key aspect of model-based debugging is the transformation of the given program into a model suitable for debugging. In this thesis, several models for analysing programs written in an object-oriented language are investigated, with Java as concrete example. The aim of this work is to assess the suitability of value-based models and generalisations thereof for debugging of programs making use of dynamically allocated data structures, recursive methods and polymorphic method invocations.
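The core idea of model-based debugging can be illustrated with a toy sketch: treat each statement as a component that may be abnormal, and report the statements whose assumed correctness contradicts the intended behaviour. This is only a caricature of the value-based models studied in the thesis, and all code below is hypothetical:

```python
# Toy sketch of model-based debugging: a 3-"statement" program where
# statement `skip` is treated as abnormal (its effect is dropped),
# modeling the assumption "this component may be faulty". Illustrative
# only; the thesis's value-based models for Java are far richer.
def run(skip):
    env = {"x": 2}
    if skip != 0: env["y"] = env["x"] * 3           # stmt 0
    if skip != 1: env["y"] = env.get("y", 0) + 1    # stmt 1 (the bug)
    if skip != 2: env["z"] = env["y"] * 2           # stmt 2
    return env.get("z")

expected = 12  # intended behaviour: z = (x * 3) * 2

# A statement is a diagnosis candidate if assuming it abnormal removes
# the discrepancy between execution and intended behaviour.
diagnoses = [s for s in range(3) if run(skip=s) == expected]
print("candidate faulty statements:", diagnoses)  # [1]
```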
190

A declarative debugger for Haskell

Pope, Bernard James. January 2006 (has links)
Thesis (Ph.D.)--University of Melbourne, Dept. of Computer Science and Software Engineering, 2007. / Typescript. Includes bibliographical references (leaves 253-264).
