About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Werkzeuggestützte Entwicklung kooperativer Agenten im Dienstkontext / Tool-Supported Development of Cooperative Agents in the Service Context

Fricke, Stefan. Unknown Date.
Techn. Universität Berlin, Diss., 2000.
122

Complex patterns in gender HCI: a data mining study of factors leading to end-user debugging success for females and males

Grigoreanu, Valentina I. January 1900.
Thesis (M.S.), Oregon State University, 2008. Printout. Includes bibliographical references (leaves 88-90). Also available on the World Wide Web.
123

Online Anomaly Detection

Ståhl, Björn. January 2006.
The role of software-intensive systems has shifted: where systems traditionally fulfilled isolated computational tasks, larger collaborative societies, with interaction as their primary resource, are gradually taking their place. This can be observed in everything from logistics to rescue operations and resource management: numerous services with key roles in the modern infrastructure. In the light of this new collaborative order, it is imperative that the tools (compilers, debuggers, profilers) and methods (requirements, design, implementation, testing) that supported traditional software engineering values also adjust and extend towards those nurtured by the online instrumentation of software-intensive systems; that is, that they help us avoid situations where limitations in technology and methodology would prevent us from ascertaining the well-being and security of the systems that assist our very lives. Coupled with most perspectives on software development and maintenance is one well-established member of, and complement to, the development process: debugging, the art of discovering, localising, and correcting undesirable behaviours in software-intensive systems, the need for which tends to far outlive development itself. Debugging is currently performed on the premise that the developer operates from a god-like perspective, one that implies access to and knowledge of the source code, along with minute control over execution properties. However, both the quality and the accessibility of such information steadily decline with time, as requirements, implementation, and hardware components, along with their associated developers, all fall behind their continuously evolving surroundings. This thesis argues that the current practice of software debugging is insufficient and, as a precursory action, introduces a technical platform suitable for experimenting with future methods for online debugging, maintenance and analysis. An initial implementation of this platform is then used to experiment with a simple method targeting online observation of software behaviour.
124

Methods and measures for statistical fault localisation

Landsberg, David. January 2016.
Fault localisation is the process of finding the causes of a given error, and is one of the most costly elements of software development. One of the most efficient approaches to fault localisation appeals to statistical methods. These methods are characterised by their ability to estimate how faulty a program artefact is as a function of statistical information about a given program and test suite. However, the major problem facing statistical approaches is their effectiveness, particularly with respect to finding single (or multiple) faults in the large programs typical of the real world. A solution to this problem hinges on discovering new formal properties of faulty programs and developing scalable statistical techniques which exploit them. In this thesis I address this by identifying new properties of faulty programs, developing formal frameworks and methods which are formally proven to exploit them, and demonstrating that many of our new techniques substantially and statistically significantly outperform competing algorithms at given fault localisation tasks (using p = 0.01), on what is (to our knowledge) one of the largest-scale sets of experiments in fault localisation to date. This research is thus designed to corroborate the following thesis statement: the new algorithms presented in this thesis are effective and efficient at software fault localisation and outperform state-of-the-art statistical techniques at a range of fault localisation tasks. In more detail, the major contributions are as follows.

1. We perform a thorough investigation into the existing framework of spectrum-based fault localisation (sbfl), which currently stands at the cutting edge of statistical fault localisation. To improve on the effectiveness of sbfl, our first contribution is to introduce and motivate many new statistical measures which can be used within this framework. First, we show that many are well motivated for the task of sbfl. Second, we formally prove equivalence properties of large classes of measures. Third, we show that many of the measures perform competitively with the existing measures in experimentation; in particular, our new measure m9185 outperforms all existing measures on average in terms of effectiveness and, along with Kulczynski2, is in a class of measures which statistically significantly outperforms all other measures at finding a single fault in a program (p = 0.01).

2. Having investigated sbfl, our second contribution is to motivate, introduce, and formally develop a new formal framework which we call probabilistic fault localisation (pfl). pfl is similar to sbfl insofar as it can leverage any suspiciousness measure, and is designed to directly estimate the probability that a given program artefact is faulty. First, we formally prove that pfl is theoretically superior to sbfl insofar as it satisfies and exploits a number of desirable formal properties which sbfl does not. Second, we experimentally show that pfl methods (namely, our measure pfl-ppv) substantially and statistically significantly outperform the best-performing sbfl measures at finding a fault in large multiple-fault programs (p = 0.01). Furthermore, we show that for many of our benchmarks it is theoretically impossible to design strictly rational sbfl measures which outperform given pfl techniques.

3. Having addressed the problem of localising a single fault in a program, we address the problem of localising multiple faults. Accordingly, our third major contribution is the introduction and motivation of a new algorithm MOpt(g), which optimises any ranking-based method g (such as pfl/sbfl/Barinel) for the task of multiple fault localisation. First, we prove that MOpt(g) formally satisfies and exploits a newly identified formal property of multiple fault optimality. Second, we experimentally show that there are values for g such that MOpt(g) substantially and statistically significantly outperforms given ranking-based fault localisation methods at the task of finding multiple faults (p = 0.01).

4. Having developed methods for localising faults as a function of a given test suite, we finally address the problem of optimising test suites for the purposes of fault localisation. Accordingly, we first present an algorithm which leverages model checkers to improve a given test suite by making it satisfy a property of single-bug optimality. Second, we experimentally show that on small benchmarks single-bug-optimal test suites can be generated (from scratch) efficiently when the algorithm is used in conjunction with the cbmc model checker, and that the generated test suites can be used effectively for fault localisation.
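The measures the thesis introduces (m9185, pfl-ppv) are defined in the thesis itself; as a rough illustration of how any suspiciousness measure plugs into the sbfl framework, the sketch below ranks the statements of a toy program using the classic Ochiai and Kulczynski2 measures. The coverage matrix, test outcomes, and all variable names are invented for illustration.

```python
# Minimal sketch of how a suspiciousness measure is applied in
# spectrum-based fault localisation (sbfl). For each statement we count,
# over the test suite:
#   ef: failing tests that execute it,  ep: passing tests that execute it,
#   nf: failing tests that miss it,     np: passing tests that miss it.
# A measure maps (ef, ep, nf, np) to a score and statements are ranked
# by that score. Ochiai and Kulczynski2 are classic measures from the
# literature; the thesis's own m9185 and pfl-ppv are defined there.

from math import sqrt

def ochiai(ef, ep, nf, np):
    denom = sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

def kulczynski2(ef, ep, nf, np):
    if ef == 0:
        return 0.0
    return 0.5 * (ef / (ef + nf) + ef / (ef + ep))

def rank(coverage, outcomes, measure):
    """coverage[t][s]: test t executed statement s; outcomes[t]: 'pass'/'fail'."""
    scores = []
    for s in range(len(coverage[0])):
        ef = sum(row[s] and outcomes[t] == 'fail' for t, row in enumerate(coverage))
        ep = sum(row[s] and outcomes[t] == 'pass' for t, row in enumerate(coverage))
        nf = sum(not row[s] and outcomes[t] == 'fail' for t, row in enumerate(coverage))
        np = sum(not row[s] and outcomes[t] == 'pass' for t, row in enumerate(coverage))
        scores.append((measure(ef, ep, nf, np), s))
    return sorted(scores, reverse=True)  # most suspicious statement first

# Toy spectrum: 3 statements, 4 tests; statement 1 is covered by every
# failing test and by no passing test, so it should rank first.
coverage = [[True, True, False],   # test 0: fail
            [False, True, True],   # test 1: fail
            [True, False, True],   # test 2: pass
            [True, False, True]]   # test 3: pass
outcomes = ['fail', 'fail', 'pass', 'pass']
print(rank(coverage, outcomes, kulczynski2))  # statement 1 tops the ranking
```

In the pfl framework, the same (ef, ep, nf, np) statistics would instead feed a direct estimate of the probability that each statement is faulty.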
125

Vyhodnocování užitečnosti ladících nástrojů / Evaluation of Usefulness of Debugging Tools

Martinec, Tomáš. January 2015.
Debugging is a very time-consuming activity for programmers. Although the number of proposed debugging tools is large, the number of tools that are actually adopted by practitioners and used during development of software is less than one may expect. Many believe that one reason for the situation is that it is hard to estimate whether the implementation efforts of proposed debugging tools or approaches are worth the gain. The first goal of this thesis is to propose a methodology for the evaluation of usefulness of debugging tools. To provide an exemplary usage of the methodology, a study of usefulness of typical debugging tools for development of operating systems is conducted. Secondly, the thesis also explores and documents further aspects of how programmers debug software.
126

The justificatory structure of OWL ontologies

Bail, Samantha Patricia. January 2013.
The Web Ontology Language OWL is based on the highly expressive description logic SROIQ, which allows OWL ontology users to employ out-of-the-box reasoners to compute information that is not only explicitly asserted, but entailed by the ontology. Explanation facilities for entailments of OWL ontologies form an essential part of ontology development tools, as they support users in detecting and repairing errors in potentially large and highly complex ontologies, thus helping to ensure ontology quality. Justifications, minimal subsets of an ontology that are sufficient for an entailment to hold, are currently the prevalent form of explanation in OWL ontology development tools. They have been found to significantly reduce the time and effort required to debug erroneous entailments. A large number of entailments, however, have not only one but many justifications, which can make it considerably more challenging for a user to find a suitable repair for the entailment.

In this thesis, we investigate the relationships between multiple justifications for both single and multiple entailments, with the goal of exploiting this justificatory structure in order to devise new coping strategies for multiple justifications. We describe various aspects of the justificatory structure of OWL ontologies, such as shared axiom cores and structural similarities. We introduce a model for measuring user effort in the debugging process and propose debugging strategies that exploit the justificatory structure in order to reduce user effort. Finally, an analysis of a large corpus of ontologies from the biomedical domain reveals that OWL ontologies used in practice frequently exhibit a rich justificatory structure.
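As a hedged illustration of what computing a justification involves (not the thesis's own algorithms), the sketch below implements the classic black-box expand-shrink scheme: grow a subset of axioms until the entailment holds, then drop every axiom that is not needed. The entails oracle here is a toy propositional stand-in for a real DL reasoner, and all names are ours.

```python
# Hedged sketch of the black-box "expand-shrink" way to compute one
# justification: a subset-minimal set of axioms sufficient for an
# entailment. `entails` stands in for a real DL reasoner call; here it
# is a toy propositional oracle so the sketch stays self-contained.

def single_justification(axioms, entailment, entails):
    support = []
    for ax in axioms:               # expand: grow until the goal holds
        support.append(ax)
        if entails(support, entailment):
            break
    for ax in list(support):        # shrink: drop every unneeded axiom
        candidate = [a for a in support if a is not ax]
        if entails(candidate, entailment):
            support = candidate
    return support                  # minimal w.r.t. set inclusion

# Toy oracle: an "axiom" is an implication (premise, conclusion), and
# entailment of (start, target) is reachability by chaining implications.
def entails(axioms, goal):
    start, target = goal
    reached, changed = {start}, True
    while changed:
        changed = False
        for premise, conclusion in axioms:
            if premise in reached and conclusion not in reached:
                reached.add(conclusion)
                changed = True
    return target in reached

axioms = [("A", "B"), ("B", "C"), ("X", "Y"), ("A", "C")]
print(single_justification(axioms, ("A", "C"), entails))
# -> [('A', 'B'), ('B', 'C')] with this axiom ordering
```

Note that in the toy example the single axiom ('A', 'C') is a second, independent justification for the same entailment; it is exactly this multiplicity of justifications that the thesis investigates.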
127

System management of a redundant clocking network

Manush, Charles Edward. January 1976.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1976. Bibliography: p. 110. By Charles E. Manush, III.
128

OpenAFS: Debugging-Methoden und -Tools / OpenAFS: Debugging Methods and Tools

Müller, Thomas. 22 October 2002.
Materials for a talk given at the 2002 AFS Workshop at ETH Zürich. The talk covers tools for debugging and for analysing the behaviour of AFS servers and clients. Most of these tools are included in the OpenAFS source tree, but are barely documented.
129

Automated Debugging Methodology for FPGA-based Systems

Khan, Habib ul Hasan. 30 December 2019.
Electronic devices are a vital part of our lives, from mobiles, laptops and computers to home automation systems. Modern designs comprise billions of transistors. With this evolution, however, ensuring that a device fulfils the designer's expectations under variable conditions has become a great challenge, requiring a great deal of design time and effort; whenever an error is encountered, the process is restarted. It is therefore desirable to minimise the number of spins required to achieve an error-free product, as each spin costs time and effort. Software-based simulation is the main technique for verifying a design before fabrication, but a few design errors (bugs) are likely to escape the simulation process and subsequently appear during the post-silicon phase. Finding such bugs is time-consuming due to the inherent invisibility of the hardware. Instead of simulating the design in software during the pre-silicon phase, post-silicon techniques let designers verify functionality on physical implementations of the design. The main benefit of this methodology is that the implemented design runs many orders of magnitude faster in the post-silicon phase than its pre-silicon counterpart, allowing designers to validate the design more exhaustively.

This thesis presents five main contributions towards a fast and automated debugging solution for reconfigurable hardware. Throughout, an obstacle avoidance system for robotic vehicles serves as a use case illustrating how to apply the proposed debugging solution in practical environments. The first contribution is a debugging system capable of providing a lossless trace of debugging data, permitting cycle-accurate replay and thereby capturing permanent as well as intermittent errors in the implemented design. This contribution also describes a solution for enhancing hardware observability: utilising processor-configurable concentration networks, employing debug data compression to transmit the data more efficiently, and partially reconfiguring the debugging system at run time to save design re-compilation time and preserve timing closure. The second contribution presents a solution for communication-centric designs, along with solutions for designs with multiple clock domains. The third contribution presents a priority-based signal selection methodology that identifies the signals most helpful during the debugging process, together with a connectivity generation tool that maps the identified signals onto the debugging system. The fourth contribution presents an automated error detection solution that captures permanent as well as intermittent errors without continuous monitoring of debugging data, and works even in the absence of a golden reference. The fifth contribution proposes the use of artificial intelligence for post-silicon debugging: a recurrent neural network is trained against a golden reference and used for debugging, and the idea is extended to designs where no golden reference is present.
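The thesis defines its own priority criteria and connectivity generation tool; purely as a hedged sketch of the general idea behind priority-based signal selection, the following ranks candidate signals for a trace buffer of limited width. The structural attributes, weights, and signal names are all made up for illustration.

```python
# Hedged sketch of priority-based signal selection for a trace buffer of
# limited width: rank candidate signals by a weighted score and keep the
# top ones that fit. Attributes and weights are invented for
# illustration; the thesis defines its own priority criteria and a
# connectivity-generation tool to hook selected signals into the
# debugging system.

from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    width: int          # bits it would occupy in the trace buffer
    fanout: int         # how many sinks observe it
    is_state: bool      # FSM/state registers usually explain more behaviour
    user_priority: int  # designer-assigned hint, 0 = none

def score(s: Signal) -> float:
    # Weighted sum per traced bit: state bits and high-fanout nets are
    # assumed to be more useful (an assumption, not a thesis result).
    return (3.0 * s.is_state + 1.0 * s.fanout + 5.0 * s.user_priority) / s.width

def select(signals, buffer_width):
    chosen, used = [], 0
    for s in sorted(signals, key=score, reverse=True):
        if used + s.width <= buffer_width:
            chosen.append(s)
            used += s.width
    return chosen

signals = [
    Signal("fsm_state", 4, 12, True, 1),
    Signal("axi_rdata", 32, 3, False, 0),
    Signal("irq_pending", 1, 7, False, 2),
    Signal("dbg_counter", 16, 1, False, 0),
]
for s in select(signals, buffer_width=24):
    print(s.name)   # irq_pending, fsm_state, dbg_counter fit; axi_rdata does not
```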
130

Integration of Ontology Alignment and Ontology Debugging for Taxonomy Networks

Ivanova, Valentina. January 2014.
Semantically-enabled applications, such as ontology-based search and data integration, take the semantics of their input data into account in their algorithms. Such applications often use ontologies, which model the application domains in question, as well as alignments, which provide information about the relationships between the terms in the different ontologies. The quality and reliability of the results of such applications depend directly on the correctness and completeness of the ontologies and alignments they utilise. Traditionally, ontology debugging discovers defects in ontologies and alignments and provides means for improving their correctness and completeness, while ontology alignment establishes the relationships between the terms in the different ontologies, thus addressing the completeness of alignments. This thesis focuses on the integration of ontology alignment and debugging for taxonomy networks, which are formed by taxonomies, the most widely used kind of ontologies, connected through alignments. The contributions of this thesis include the following. To the best of our knowledge, we have developed the first approach and framework that integrate ontology alignment and debugging, and allow debugging of modelling defects both in the structure of the taxonomies and in their alignments. As debugging modelling defects requires domain knowledge, we have developed algorithms that employ the domain knowledge intrinsic to the network to detect and repair modelling defects. Further, a system has been implemented and several experiments with real-world ontologies have been performed to demonstrate the advantages of our integrated ontology alignment and debugging approach. For instance, in one of the experiments with the well-known ontologies and alignment from the Anatomy track of the Ontology Alignment Evaluation Initiative 2010, 203 modelling defects (concerning incomplete and incorrect information) were discovered and repaired.
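As a hedged sketch of the kind of cross-network reasoning such an integration enables (the thesis's actual detection and repair algorithms are more involved), the following derives candidate missing is-a relations in one taxonomy from subsumptions that hold, via the alignment, in another. The representation and the anatomy-flavoured toy data are invented for illustration.

```python
# Hedged sketch of detecting candidate missing is-a relations in a
# taxonomy network: if a1 and a2 in taxonomy A map to b1 and b2 in
# taxonomy B, and b1 is-a b2 holds in B but a1 is-a a2 is missing in A,
# then a1 is-a a2 is a candidate defect for a domain expert to validate.
# The representation (dicts of parent sets, equivalence mapping) is ours.

def ancestors(taxonomy, concept):
    """Transitive closure of the is-a (parent) relation."""
    seen, stack = set(), [concept]
    while stack:
        for parent in taxonomy.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def candidate_missing_is_a(tax_a, tax_b, alignment):
    """alignment: dict mapping concepts of A to equivalent concepts of B."""
    candidates = []
    mapped = list(alignment)
    for a1 in mapped:
        for a2 in mapped:
            if a1 == a2 or a2 in ancestors(tax_a, a1):
                continue  # already related in A
            if alignment[a2] in ancestors(tax_b, alignment[a1]):
                candidates.append((a1, a2))  # B's structure says a1 is-a a2
    return candidates

# Toy anatomy-flavoured example (made up, not from the thesis corpus):
# taxonomy A lacks the is-a edge that taxonomy B has via facial_bone.
tax_a = {"nasal_bone": set(), "bone": set()}
tax_b = {"nasal_bone": {"facial_bone"}, "facial_bone": {"bone"}, "bone": set()}
alignment = {"nasal_bone": "nasal_bone", "bone": "bone"}
print(candidate_missing_is_a(tax_a, tax_b, alignment))
# -> [('nasal_bone', 'bone')]: the is-a relation missing in A
```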
