71 |
Model-based automatic performance diagnosis of parallel computations / Li, Li, January 2007 (has links)
Thesis (Ph. D.)--University of Oregon, 2007. / Typescript. Includes vita and abstract. Includes bibliographical references (leaves 119-123). Also available for download via the World Wide Web; free to University of Oregon users.
|
72 |
Managing bug reports in free/open source software (FOSS) communities / Mohan, Nitin, 09 March 2012 (has links)
Free/Open Source Software (FOSS) communities often use open bug reporting to allow users to participate by reporting bugs. This practice can lead to more duplicate reports, as inexperienced users can be less rigorous about researching existing bug reports. The purpose of this research is to determine the extent of this problem and how FOSS projects deal with duplicate bug reports. We examined 12 FOSS projects: 4 small, 4 medium, and 4 large, where size was determined by the number of code contributors. First, we found that, contrary to what has been reported from studies of individual large projects like Mozilla and Eclipse, duplicate bug reports are a problem for FOSS projects, especially medium-sized ones. These medium-sized projects struggle with a large number of submissions and duplicates without the resources that large projects use to deal with them. Second, we found that the focus of a project does not affect the number of duplicate bug reports. Our findings point to a need for additional scaffolding and training for bug reporters of all types. Finally, we examine the impact that automatic crash reporting has on these bug repositories. These systems are quickly gaining in popularity and aim to help end users submit vital bug information to developers. These tools generate stack traces and memory dumps from software crashes and package them up so end users can submit them to the project with a single mouse click. We examined Mozilla's automatic crash reporting systems, Breakpad and Socorro, to determine how they integrate with the open bug reporting process, and whether they add to the confusion of duplicate bug reports. We found that although initial adoption exhibited teething troubles, these systems add significant value and knowledge, though the noise level is high and the number of bugs identified per thousand reports is low. / Graduation date: 2012
|
73 |
Detecting bad smells in spreadsheets / Asavametha, Atipol, 15 June 2012 (has links)
Spreadsheets are a widely used end-user programming tool. Field audits have found that 80-90% of spreadsheets created by end users contain textual and formula errors. Such errors may have severe negative consequences for users in terms of productivity, credibility, or profits. To address spreadsheet errors, researchers have presented both manual and automatic error detection. Manual error detection is tedious and time-consuming, while automatic error detection is limited to finding only certain formula error categories, such as formula reference errors. Neither approach provides optimal error detection.
We have tested a new error-detection approach that detects bad smells in spreadsheets, where a bad smell is an indication that an error might be present. The concept of bad smells was originally developed for object-oriented programming, where examples include the large class and the lazy class; we have adapted it to spreadsheets. Each bad smell detector might indicate an issue in the spreadsheet, but the indication is not definitive: the user must examine the spreadsheet and make a final judgment about whether an error is actually present. We evaluated 11 bad smell detectors by analyzing the true positives against the false positives. The results show that six detectors can highlight some error categories, such as categorical errors and typographical errors. / Graduation date: 2013
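The abstract describes detectors that flag likely errors without being definitive. A minimal sketch of one such detector in this spirit: it flags cells whose type disagrees with the dominant type of their column, a pattern related to the typographical errors mentioned above. The detector, its threshold, and the grid representation are illustrative assumptions, not the thesis's own eleven detectors.

```python
def smell_inconsistent_column(grid, threshold=0.8):
    """Flag cells whose type disagrees with the dominant type of their column.

    grid: list of rows; each cell is a Python value (number or str).
    Returns (row, col) positions that "smell". Like the detectors in the
    abstract, a hit is only a hint; the user must judge whether it is an error.
    """
    smells = []
    ncols = max(len(row) for row in grid)
    for c in range(ncols):
        col = [(r, row[c]) for r, row in enumerate(grid) if c < len(row)]
        numeric = [(r, v) for r, v in col if isinstance(v, (int, float))]
        # If the column is mostly numeric, non-numeric cells are suspicious.
        if col and len(numeric) / len(col) >= threshold:
            smells.extend((r, c) for r, v in col
                          if not isinstance(v, (int, float)))
    return smells
```

Run on a small grid, the detector highlights the text cell `"3"` sitting in an otherwise numeric column, which a user would then inspect by hand.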
|
74 |
Garbage in, garbage out? An empirical look at oracle mistakes by end-user programmers / Phalgune, Amit, 12 October 2005 (has links)
Graduation date: 2006 / End-user programmers, because they are human, make mistakes. However, past research has not considered how visual end-user debugging devices could be designed to ameliorate the effects of mistakes. This paper empirically examines oracle mistakes (mistakes users make about which values are right and which are wrong) to reveal differences in how different types of oracle mistakes impact the quality of visual feedback about bugs. We then consider the implications of these empirical results for designers of end-user software engineering environments.
|
75 |
Assembly Instruction Level Reverse Execution for Debugging / Akgul, Tankut, 12 April 2004 (has links)
Reverse execution can be defined as a method that recovers the states a program attains during its execution. Reverse execution therefore eliminates the need for repetitive program restarts every time a bug location is missed, which can shorten debugging time considerably.
This thesis presents a new approach which, for the first time ever (to the best of the author's knowledge), achieves reverse execution at the assembly instruction level on general purpose processors via execution of a reverse program. A reverse program almost always regenerates destroyed states rather than restoring them from a record. Furthermore, a reverse program provides assembly instruction by assembly instruction execution in the backward direction. This significantly reduces state saving and thus decreases the associated memory and time costs of reverse execution support.
Furthermore, this thesis presents a new dynamic slicing algorithm that is built on top of assembly instruction level reverse execution. Dynamic slicing is a technique which isolates the code parts that influence an erroneous variable at a program point. The algorithm presented in this thesis achieves dynamic slicing via execution of a reduced reverse program. A reduced reverse program is obtained from a full reverse program by omitting the instructions that recover states irrelevant to the dynamic slice under consideration. This provides a reverse execution capability along a designated dynamic slice only. The use of a reduced reverse program for dynamic slicing removes the need for runtime execution trajectories.
The methodology of this thesis has been implemented on a PowerPC processor with a custom made debugger. As compared to previous work, all of which heavily use state saving techniques, the experimental results show up to 2206X reduction in runtime memory usage, up to 403X reduction in forward execution time overhead and up to 2.32X reduction in forward execution time for the tested benchmarks. Measurements on the selected benchmarks also indicate that the dynamic slicing method presented in this thesis can achieve up to six orders of magnitude (1,928,500X) speedups in reverse execution along the dynamic slice as compared to full-scale reverse execution.
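The core idea above, regenerating destroyed state with a reverse program rather than restoring everything from a record, can be sketched on a toy register machine: invertible instructions (add/sub by a constant) are undone by running their inverses, and only destructive writes (mov) need a saved value. The instruction set, tuple format, and interpreters are illustrative assumptions, not the thesis's PowerPC implementation.

```python
def run_forward(program, regs):
    """Execute forward, saving state only for destructive writes (mov)."""
    log = []  # stack of values clobbered by mov; the only state saving done
    for op, reg, val in program:
        if op == "add":
            regs[reg] += val
        elif op == "sub":
            regs[reg] -= val
        elif op == "mov":
            log.append(regs.get(reg, 0))  # value is destroyed, so record it
            regs[reg] = val
    return log

def run_reverse(program, regs, log):
    """Undo the program instruction by instruction, in backward order.

    add/sub are regenerated by their inverses; mov pops the saved value.
    """
    for op, reg, val in reversed(program):
        if op == "add":
            regs[reg] -= val
        elif op == "sub":
            regs[reg] += val
        elif op == "mov":
            regs[reg] = log.pop()
```

Because only the `mov` writes are logged, the record stays small, which is the intuition behind the large reductions in runtime memory usage reported above.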
|
76 |
Enabling and supporting the debugging of software failures / Clause, James Alexander, 21 March 2011 (has links)
This dissertation evaluates the following thesis statement: Program analysis techniques can enable and support the debugging of failures in widely-used applications by (1) capturing, replaying, and, as much as possible, anonymizing failing executions and (2) highlighting subsets of failure-inducing inputs that are likely to be helpful for debugging such failures. To investigate this thesis, I developed techniques for recording, minimizing, and replaying executions captured from users' machines, anonymizing execution recordings, and automatically identifying failure-relevant inputs. I then performed experiments to evaluate the techniques in realistic scenarios using real applications and real failures. The results of these experiments demonstrate that the techniques can reduce the cost and difficulty of debugging.
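Minimizing a captured execution means shrinking a failing input while preserving the failure. A simplified delta-debugging-style reduction pass (in the spirit of Zeller's ddmin, not Clause's own technique) illustrates the idea; the chunking strategy and `fails` predicate are illustrative assumptions.

```python
def ddmin(failing_input, fails):
    """Shrink a failing input to a smaller one that still fails.

    Repeatedly tries removing chunks of the input; keeps any reduction
    for which fails(candidate) is still True. A simplified sketch of
    delta debugging, complement-removal only.
    """
    data = list(failing_input)
    n = 2  # number of chunks to split into
    while len(data) >= 2:
        chunk = len(data) // n
        reduced = False
        for i in range(0, len(data), chunk):
            candidate = data[:i] + data[i + chunk:]  # drop one chunk
            if candidate and fails(candidate):
                data = candidate          # keep the smaller failing input
                n = max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if n >= len(data):
                break                      # finest granularity reached
            n = min(n * 2, len(data))      # split finer and retry
    return data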
|
77 |
Dynamic state alteration techniques for automatically locating software errors / Jeffrey, Dennis Bernard, January 2009 (has links)
Thesis (Ph. D.)--University of California, Riverside, 2009. / Includes abstract. Title from first page of PDF file (viewed March 11, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 223-234). Also issued in print.
|
78 |
The analysis of Di, a detailed design metric, on large-scale software / McDaniel, Patrick Drew, January 1991 (has links)
There is no abstract available for this thesis. / Department of Computer Science
|
79 |
Semi-automatic fault localization / Jones, James Arthur, 17 January 2008 (has links)
One of the most expensive and time-consuming components of the debugging process is locating the errors or faults. To locate faults, developers must identify statements involved in failures and select suspicious statements that might contain faults. In practice, this localization is done by developers in a tedious and manual way, using only a single execution, targeting only one fault, and having a limited perspective into a large search space.
The thesis of this research is that fault localization can be partially automated with the use of commonly available dynamic information gathered from test-case executions in a way that is effective, efficient, tolerant of test cases that pass but also execute the fault, and scalable to large programs that potentially contain multiple faults. The overall goal of this research is to develop effective and efficient fault-localization techniques that scale to programs of large size and with multiple faults. There are three principal steps performed to reach this goal: (1) develop practical techniques for locating suspicious regions in a program; (2) develop techniques to partition test suites into smaller, specialized test suites to target specific faults; and (3) evaluate the usefulness and cost of these techniques.
In this dissertation, the difficulties and limitations of previous work in the area of fault localization are explored. A technique, called Tarantula, is presented that addresses these difficulties. Empirical evaluation of the Tarantula technique shows that it is efficient and effective for many faults. The evaluation also demonstrates that the Tarantula technique can lose effectiveness as the number of faults increases. To address the loss of effectiveness for programs with multiple faults, supporting techniques have been developed and are presented. The empirical evaluation of these supporting techniques demonstrates that they can enable effective fault localization in the presence of multiple faults. A new mode of debugging, called parallel debugging, is developed, and empirical evidence demonstrates that it can provide savings in terms of both total expense and time to delivery. A prototype visualization is provided to display the fault-localization results as well as to provide a method to interact with and explore those results. Finally, a study on the effects of the composition of test suites on fault localization is presented.
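Tarantula ranks statements by how strongly their coverage correlates with failing tests: a statement's suspiciousness is the fraction of failing tests that execute it, normalized against the fraction of passing tests that do. A minimal sketch of that published scoring formula follows; the coverage and outcome data structures are illustrative assumptions.

```python
def tarantula_scores(coverage, outcomes):
    """Compute Tarantula suspiciousness for each covered statement.

    coverage: list of sets; coverage[i] is the set of statement ids
              executed by test i.
    outcomes: list of bools; outcomes[i] is True if test i passed.
    Returns a dict mapping statement id -> suspiciousness in [0, 1].
    """
    total_passed = sum(outcomes)
    total_failed = len(outcomes) - total_passed
    scores = {}
    for stmt in set().union(*coverage):
        passed = sum(1 for cov, ok in zip(coverage, outcomes)
                     if ok and stmt in cov)
        failed = sum(1 for cov, ok in zip(coverage, outcomes)
                     if not ok and stmt in cov)
        fail_ratio = failed / total_failed if total_failed else 0.0
        pass_ratio = passed / total_passed if total_passed else 0.0
        denom = fail_ratio + pass_ratio
        # Statements executed mostly by failing tests score near 1.0.
        scores[stmt] = fail_ratio / denom if denom else 0.0
    return scores
```

A statement executed only by failing tests scores 1.0 and is inspected first; one executed only by passing tests scores 0.0, matching the ranking behavior the abstract describes.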
|
80 |
A declarative debugger for Haskell / Pope, Bernard James, January 2006 (has links)
Thesis (Ph.D.)--University of Melbourne, Dept. of Computer Science and Software Engineering, 2007. / Typescript. Includes bibliographical references (leaves 253-264).
|