11

Depuração de programas baseada em cobertura de integração / Program debugging based on integration coverage

Souza, Higor Amario de 20 December 2012 (has links)
Debugging is the activity responsible for localizing and fixing faults introduced during software development. Debugging follows a successful testing activity, in which failures in the behavior of the program are revealed, indicating the existence of faults. Several techniques have been proposed to automate debugging tasks, especially fault localization. Some of them use heuristics based on coverage data obtained from test executions; the goal is to indicate the program code excerpts most likely to contain faults. The coverage data most used in automated debugging is based on white-box unit testing. Integration coverage data, obtained from the communication between the units of a program, can bring new information about the executed code and enables new strategies for fault localization. This work presents a new fault localization technique called Debugging based on Integration Coverage (DIC). Two integration coverages based on the method invocations of a program are presented. These coverages are used to propose search strategies that provide a roadmap for locating faults, starting from the most suspicious methods; unit coverage information is then used to search for the faulty statements inside those methods. DIC also proposes a heuristic that assigns suspiciousness values to static integration entities of the program, namely packages, classes, and methods, which likewise provides a roadmap for the search. Experiments on real programs show that DIC locates faults more effectively than unit coverage information alone.
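The roadmap described in this abstract is specific to DIC, but the underlying step of scoring coarse integration entities (methods, classes, packages) from passing and failing test runs can be illustrated with a generic spectrum-based formula. The sketch below is a minimal, hypothetical example that ranks methods with the standard Ochiai coefficient over method-level coverage; the method names and test data are invented, and this is not the thesis's own DIC heuristic.

```python
from math import sqrt

def ochiai(ef, nf, ep):
    # ef: failing tests that execute the element, nf: failing tests that do not,
    # ep: passing tests that execute it (standard SFL notation).
    denom = sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

def rank_methods(coverage, failing):
    # coverage: {test name: set of methods executed}; failing: set of failing test names.
    methods = set().union(*coverage.values())
    total_failing = len(failing)
    scores = {}
    for m in methods:
        ef = sum(1 for t, cov in coverage.items() if t in failing and m in cov)
        ep = sum(1 for t, cov in coverage.items() if t not in failing and m in cov)
        scores[m] = ochiai(ef, total_failing - ef, ep)
    # Most suspicious methods first; statement-level (unit) coverage would then be
    # used to narrow the search inside the top-ranked methods.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Invented example data: which methods each test executed, and which tests failed.
coverage = {
    "t1": {"A.parse", "A.eval"},
    "t2": {"A.parse", "B.render"},
    "t3": {"A.eval", "B.render"},
}
print(rank_methods(coverage, failing={"t1", "t3"}))  # "A.eval" comes out on top
```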
13

On the use of control- and data-flow in fault localization / Sobre o uso de fluxo de controle e de dados para a localização de defeitos

Henrique Lemos Ribeiro 19 August 2016 (has links)
Testing and debugging are key tasks during the development cycle, but they are among the most expensive activities of the development process. To improve the productivity of developers during debugging, various fault localization techniques have been proposed; Spectrum-based Fault Localization (SFL), also known as Coverage-based Fault Localization (CBFL), is one of the most promising. SFL techniques pinpoint program elements (e.g., statements, branches, and definition-use associations), sorting them by their suspiciousness. Heuristics are used to rank the most suspicious program elements, which are then mapped into lines to be inspected by developers. Although data-flow spectra (definition-use associations) have been shown to perform better than control-flow spectra (statements and branches) at locating the bug site, the high overhead of collecting data-flow spectra has prevented their use on industry-scale code. A data-flow coverage tool was recently implemented, presenting on average 38% run-time overhead for large programs. Such a fairly modest overhead motivates the study of SFL techniques using data-flow information in programs similar to those developed in industry. To achieve this goal, we implemented Jaguar (JAva coveraGe faUlt locAlization Ranking), a tool that employs control-flow and data-flow coverage in SFL techniques. The effectiveness and efficiency of both coverages are compared using 173 faulty versions of programs whose sizes vary from 10 to 96 KLOC. Ten known SFL heuristics are used to rank the most suspicious lines. The results show that the behavior of the heuristics is similar for control- and data-flow coverage: Kulczynski2 and McCon perform better for small numbers of inspected lines (from 5 to 30), while Ochiai performs better when more lines are inspected (30 to 100). The comparison between control- and data-flow coverage shows that data-flow locates more defects in the range of 10 to 50 inspected lines, being up to 22% more effective. Moreover, in the range of 20 to 100 lines, data-flow ranks the bug better than control-flow with statistical significance. However, data-flow is still more expensive than control-flow: it takes from 23% to 245% longer to obtain the most suspicious lines, and on average data-flow is 129% more costly. Our results therefore suggest that data-flow is more effective at locating faults because it tracks more relationships during program execution. On the other hand, SFL techniques supported by data-flow coverage need to become more efficient for practical use in industrial settings.
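Two of the ranking heuristics named above have standard closed forms in the SFL literature. Writing $c_{ef}$ for the number of failing tests that execute a program element, $c_{nf}$ for the failing tests that do not, and $c_{ep}$ for the passing tests that execute it, the usual definitions (quoted here from the general literature, not from the thesis) are:

\[
\mathrm{Ochiai} = \frac{c_{ef}}{\sqrt{(c_{ef}+c_{nf})\,(c_{ef}+c_{ep})}},
\qquad
\mathrm{Kulczynski2} = \frac{1}{2}\left(\frac{c_{ef}}{c_{ef}+c_{nf}} + \frac{c_{ef}}{c_{ef}+c_{ep}}\right).
\]

Both reward elements executed by many failing and few passing tests: Ochiai is the geometric mean of the ratios $c_{ef}/(c_{ef}+c_{nf})$ and $c_{ef}/(c_{ef}+c_{ep})$, while Kulczynski2 is their arithmetic mean.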
14

Localização de faltas em linhas de transmissão baseada em métodos heurísticos utilizando dados de um terminal. / Transmission line fault location with one-terminal data using heuristic methods.

Ronald Adrian Poma Fuentes 16 October 2015 (has links)
This work presents the development and computational implementation of an algorithm for fault location in transmission lines. The proposed algorithm is based on heuristic methods, namely Genetic Algorithms (GA) and Pattern Search (PS), and is able to identify the point where the fault occurred using pre- and post-fault voltage and current phasors estimated from measurements available only at the local terminal of the transmission line. In this approach, both optimization tools are heuristic in nature and thus less prone to getting trapped in local minima, which yields greater efficiency and accuracy in determining the fault location. Moreover, the method uses single-phase and three-phase short-circuit powers at both line terminals (local and remote) to obtain their Thévenin equivalents, together with the electrical parameters of the transmission line. To evaluate the performance of the proposed algorithm, four different transmission systems representing real Brazilian transmission systems were considered in the simulations. The first consists of a typical 138 kV double-circuit transmission line; the other three consist of typical 230, 500, and 765 kV single-circuit transmission lines. From the four transmission systems simulated in the Alternative Transients Program (ATP/EMTP), a total of 928 fault cases were generated. The fault location algorithm was implemented in the scientific software MATrix LABoratory (MATLAB) and produced results with high levels of accuracy.
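Both optimizers in this work search for the fault distance that minimizes a phasor-based mismatch. As a rough illustration of the Pattern Search half only, the sketch below implements a minimal 1-D compass search over the fault distance; the objective function is a toy stand-in (the real one would be built from the pre-/post-fault phasors, the line parameters, and the Thévenin equivalents described above), and all numbers are invented.

```python
def pattern_search(objective, x0, lo, hi, step=10.0, tol=1e-3):
    # Minimal 1-D compass/pattern search: try both neighbours of the current point;
    # if neither improves the objective, halve the step, until the step is tiny.
    x, fx = x0, objective(x0)
    while step > tol:
        improved = False
        for cand in (x - step, x + step):
            if lo <= cand <= hi:
                fc = objective(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if not improved:
            step /= 2.0
    return x

# Toy stand-in for the real mismatch function: in a fault locator it would compare
# quantities computed from the local-terminal phasors with those implied by the
# line model, as a function of the candidate fault distance d.
line_length_km = 100.0
mismatch = lambda d: (d - 37.5) ** 2
print(pattern_search(mismatch, x0=50.0, lo=0.0, hi=line_length_km))  # converges near 37.5
```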
16

Context-aware debugging for concurrent programs

Chu, Justin 01 January 2017 (has links)
Concurrency faults are difficult to reproduce and localize because they usually occur under specific inputs and thread interleavings. Most existing fault localization techniques focus on sequential programs but fail to identify faulty memory access patterns across threads, which are usually the root causes of concurrency faults. Moreover, existing techniques for sequential programs cannot be adapted to identify faulty paths in concurrent programs. While concurrency fault localization techniques have been proposed to analyze passing and failing executions obtained from running a set of test cases in order to identify faulty access patterns, they primarily rely on statistical analysis. We present a novel approach to fault localization using feature selection techniques from machine learning. Our insight is that the concurrency access patterns obtained from a large volume of coverage data generally constitute high-dimensional data sets, yet existing statistical analysis techniques for fault localization are usually applied to low-dimensional data sets. Each additional failing or passing run can provide more diverse information, which can help localize faulty concurrency access patterns in code; the patterns with the greatest feature-diversity information point to the most suspicious patterns. We then apply a data mining technique to identify the interleaving patterns that occur most frequently and to provide the possible faulty paths. We also evaluate the effectiveness of fault localization using test suites generated from different test adequacy criteria. We have evaluated Cadeco on 10 real-world multi-threaded Java applications. Results indicate that Cadeco outperforms state-of-the-art approaches for localizing concurrency faults.
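As a rough, hypothetical illustration of the general idea of scoring access patterns by how strongly their occurrence is associated with failing runs (a plain 2x2 chi-square score here stands in for the feature-selection and statistical analysis mentioned above; this is not the Cadeco algorithm itself), the sketch below ranks invented interleaving patterns from invented run data.

```python
def chi_square(a, b, c, d):
    # 2x2 chi-square statistic for the table [[a, b], [c, d]]
    # (pattern present/absent vs. run failing/passing).
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / denom if denom else 0.0

def rank_patterns(runs):
    # runs: list of (set of observed access patterns, failed?) pairs.
    patterns = set().union(*(observed for observed, _ in runs))
    scores = {}
    for pat in patterns:
        a = sum(1 for obs, failed in runs if failed and pat in obs)       # failing, present
        b = sum(1 for obs, failed in runs if failed and pat not in obs)   # failing, absent
        c = sum(1 for obs, failed in runs if not failed and pat in obs)   # passing, present
        d = sum(1 for obs, failed in runs if not failed and pat not in obs)
        scores[pat] = chi_square(a, b, c, d)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Invented interleaving patterns, e.g. "W1-R2" = write in thread 1 followed by a
# read of the same variable in thread 2, and invented pass/fail outcomes.
runs = [({"W1-R2", "R1-R2"}, True), ({"W1-R2"}, True),
        ({"R1-R2"}, False), ({"R1-W1"}, False)]
print(rank_patterns(runs))  # "W1-R2" is most strongly associated with failure
```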
17

Garbage in, garbage out? An empirical look at oracle mistakes by end-user programmers

Phalgune, Amit 12 October 2005 (has links)
Graduation date: 2006 / End-user programmers, because they are human, make mistakes. However, past research has not considered how visual end-user debugging devices could be designed to ameliorate the effects of mistakes. This paper empirically examines oracle mistakes (mistakes users make about which values are right and which are wrong) to reveal differences in how different types of oracle mistakes impact the quality of visual feedback about bugs. We then consider the implications of these empirical results for designers of end-user software engineering environments.
18

Automatic fault detection and localization in IP networks: Active probing from a single node perspective

Pettersson, Christopher January 2015 (has links)
Fault management is a continuously demanded function in any kind of network management. Commonly it is carried out by a centralized entity on the network, which correlates collected information into likely diagnoses of the current system state. We survey the use of active on-demand measurements, often called active probes, together with passive readings, from the perspective of a single node. The solution is confined to the node and is isolated from the surrounding environment. The utility of this approach to fault diagnosis was found to depend on the environment in which the node is located: the less knowledge of the environment is available, the more useful this solution becomes. Consequently, this approach offers limited opportunities in the test environment, whereas greater prospects were found for it in a heterogeneous customer environment.
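In its simplest form, a single-node combination of an on-demand probe with a locally available passive reading could look like the hypothetical sketch below. This is not the thesis's diagnosis engine: the probe targets, the error-rate threshold, and the decision rule are all invented, and a plain TCP connection attempt stands in for the active probe.

```python
import socket
import time

def tcp_probe(host, port, timeout=1.0):
    # Single active probe: try to open a TCP connection and time it.
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        return False, timeout

def diagnose(targets, local_error_rate, error_threshold=0.05):
    # Combine on-demand probes with a passive reading available on the node
    # (here, an invented local interface error rate) into a coarse diagnosis.
    failed = [t for t in targets if not tcp_probe(*t)[0]]
    if failed and local_error_rate > error_threshold:
        return f"local link problem suspected (probes failed: {failed})"
    if failed:
        return f"remote or path problem suspected towards {failed}"
    return "no fault detected from this node"

# Hypothetical probe targets (TEST-NET addresses) and passive counter value.
print(diagnose([("192.0.2.10", 80), ("192.0.2.20", 443)], local_error_rate=0.01))
```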
19

Framtagning av laboration kring kabelfelsmätning / Development of laboratory exercise regarding cable fault detection

Stolt, Jan-Olof January 2015 (has links)
Being able to find faults and to localize and identify cables in the ground is a topical issue, as overhead lines are increasingly being replaced by buried cable. Quickly locating and repairing a fault saves money for the energy companies and reduces the time customers are without power. Today's protection equipment has fault localization built in, but it gives only an approximate picture of where the fault is. More exact localization starts with a pre-localization step, nowadays the pulse-echo method: a pulse is sent out on the damaged conductor and on an undamaged conductor, the pulse is reflected at the fault and at the end of the cable, and the times from when the pulses leave the pulse generator until they return are compared, giving the distance to the fault in metres. One of two pinpointing methods is then used. The first is a surge-voltage generator with a ground microphone: the generator produces a discharge at the fault location, which can be located with the microphone. The second is a tone generator and a search probe: the tone generator sends a high-frequency pulse onto the broken conductor, producing a magnetic field that can be followed with the probe; where the signal disappears, the fault has been located. The course Electrical Ordinance and Electrical Installation (ELI 200) at University West lacks a practical exercise on cable localization, cable fault detection, and cable identification. The task was to develop a laboratory exercise in which one or more measurements are performed on different cables in the ground; the exercise is to be included as a compulsory part of the course. The goal was for the exercise to reflect a real cable localization and cable identification job, including pre-localization, pinpointing and, in the case of multiple cables, identification of the specific cable. Geographically, the exercise will be carried out at the Magnus Åbergsgymnasiet practice area in Trollhättan. The thesis work consisted of gathering facts from literature and reports; the exercise site was already predetermined. Contact with staff from Magnus Åbergsgymnasiet was established, and the exercise was designed and tested by the group as well as by an independent party, in order to uncover any shortcomings and to check that the instructions were clear. An answer key with reference values was produced, and a proposed teacher's guide was created. Because of limited access to measuring instruments, and for practical reasons, the exercise was limited to one part on cable localization using a search probe and one part on cable identification.
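The pre-localization step described above (pulse-echo, also known as time-domain reflectometry) reduces to a time-of-flight calculation: the injected pulse travels to the fault and back, so the distance is half the propagation velocity times the round-trip time. The sketch below shows that arithmetic; the velocity value is an illustrative assumption only, and in practice it is taken from the cable data sheet or calibrated against a cable of known length.

```python
def fault_distance_m(round_trip_time_us, velocity_m_per_us=170.0):
    # Pulse-echo (TDR): the pulse travels to the fault and back, so the one-way
    # distance is half the propagation velocity times the round-trip time.
    return velocity_m_per_us * round_trip_time_us / 2.0

# Illustrative numbers only: a reflection arriving 4.2 microseconds after the pulse
# was injected, with an assumed propagation velocity of 170 m/us.
print(f"estimated fault distance: {fault_distance_m(4.2):.0f} m")  # about 357 m
```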
20

Effective fault localization techniques for concurrent software

Park, Sang Min 12 January 2015 (has links)
Multicore and Internet cloud systems have been widely adopted in recent years and have resulted in the increased development of concurrent programs. However, concurrency bugs are still difficult to test and debug for at least two reasons: concurrent programs have a large interleaving space, and concurrency bugs involve complex interactions among multiple threads. Existing testing solutions for concurrency bugs have focused on exposing concurrency bugs in the large interleaving space, but they often do not provide debugging information for developers to understand the bugs. To address the problem, this thesis proposes techniques that help developers in debugging concurrency bugs, particularly for locating the root causes and for understanding them, and presents a set of empirical user studies that evaluates the techniques. First, this thesis introduces a dynamic fault-localization technique, called Falcon, that locates single-variable concurrency bugs as memory-access patterns. Falcon uses dynamic pattern detection and statistical fault localization to report a ranked list of memory-access patterns for root causes of concurrency bugs. The overall Falcon approach is effective: in an empirical evaluation, we show that Falcon almost always ranks the program fragments corresponding to the root cause of the concurrency bug as "most suspicious". In principle, such a ranking can save a developer's time by allowing him or her to quickly home in on the problematic code, rather than having to sort through many reports. Others have shown that single- and multi-variable bugs cover a high fraction of all concurrency bugs that have been documented in a variety of major open-source packages; thus, being able to detect both is important. Because Falcon is limited to detecting single-variable bugs, we extend the Falcon technique to handle both single-variable and multi-variable bugs in a unified technique, called Unicorn. Unicorn uses online memory monitoring and offline memory-pattern combination to handle multi-variable concurrency bugs, and the overall Unicorn approach is effective in ranking memory-access patterns for single- and multi-variable concurrency bugs. To further assist developers in understanding concurrency bugs, this thesis presents a fault-explanation technique, called Griffin, that provides more context about the root cause than Unicorn. Griffin reconstructs the root cause of concurrency bugs by grouping suspicious memory accesses, finding suspicious method locations, and presenting calling stacks along with the buggy interleavings. By providing additional context, the overall Griffin approach can give the developer more information at a higher level, allowing him or her to more readily diagnose complex bugs that may cross file or module boundaries. Finally, this thesis presents a set of empirical user studies that investigates the effectiveness of the presented techniques. In particular, the studies compare the effectiveness of a state-of-the-art debugging technique with that of our debugging techniques, Unicorn and Griffin. Among our findings, the user study shows that while the techniques are indistinguishable when the fault is relatively simple, Griffin is most effective for more complex faults. This observation further suggests that there may be a need for a spectrum of tools or interfaces that depend on the complexity of the underlying fault or even the background of the user.
