91

Monitoring and analysis system for performance troubleshooting in data centers

Wang, Chengwei 13 January 2014
It was not long ago. On Christmas Eve 2012, a war of troubleshooting began in Amazon data centers. It started at 12:24 PM, with a mistaken deletion of the state data of the Amazon Elastic Load Balancing service (ELB for short), which was not realized at the time. The mistake first led to a local issue in which a small number of ELB service APIs were affected. In about six minutes, it evolved into a critical one in which EC2 customers were significantly affected. One example was Netflix, which was using hundreds of Amazon ELB services and experienced an extensive streaming service outage: many customers could not watch TV shows or movies on Christmas Eve. It took Amazon engineers 5 hours 42 minutes to find the root cause, the mistaken deletion, and another 15 hours and 32 minutes to fully recover the ELB service. The war ended at 8:15 AM the next day and brought performance troubleshooting in data centers to the world's attention.

As the Amazon ELB case shows, troubleshooting runtime performance issues is crucial in time-sensitive multi-tier cloud services because of their stringent end-to-end timing requirements, but it is also notoriously difficult and time consuming. To address this challenge, this dissertation proposes VScope, a flexible monitoring and analysis system for online troubleshooting in data centers. VScope provides primitive operations that data center operators can use to troubleshoot various performance issues. Each operation is essentially a series of monitoring and analysis functions executed on an overlay network. We design a novel software architecture for VScope so that the overlay networks can be generated, executed, and terminated automatically, on demand. On the troubleshooting side, we design novel anomaly detection algorithms and implement them in VScope; by running them, data center operators are notified when performance anomalies occur. We also design a graph-based guidance approach, called VFocus, which tracks the interactions among hardware and software components in data centers. VFocus provides primitive operations by which operators can analyze these interactions to find out which components are relevant to a performance issue.

VScope's capabilities and performance are evaluated on a testbed with over 1000 virtual machines (VMs). Experimental results show that the VScope runtime perturbs system and application performance negligibly, and requires mere seconds to deploy monitoring and analytics functions on over 1000 nodes. This demonstrates VScope's ability to support fast operation and online queries against a comprehensive set of application- to system/platform-level metrics, and a variety of representative analytics functions. When supporting algorithms with high computational complexity, VScope serves as a 'thin layer' that accounts for no more than 5% of their total latency. Further, by using VFocus, VScope can locate problematic VMs that cannot be found via application-level monitoring alone; in one of the use cases explored in the dissertation, it operates with over 400% less perturbation than brute-force and most sampling-based approaches. We also validate VFocus with real-world data center traces; the experimental results show that VFocus has a troubleshooting accuracy of 83% on average.
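The abstract does not spell out VScope's detection algorithms, but the flavor of an online, per-metric detector deployed by such an overlay can be sketched. The following is a minimal sketch under stated assumptions — the window size, threshold, and all names are illustrative, not VScope's actual API — of a sliding-window z-score check of the kind such a monitoring function might run.

```python
# A minimal sketch (window size, threshold, and names are assumptions;
# this is not VScope's actual API) of a lightweight online anomaly
# detector that a monitoring overlay might run per metric.
from collections import deque
from statistics import mean, stdev

class ZScoreDetector:
    """Flags a metric sample that deviates strongly from a sliding window."""

    def __init__(self, window=60, threshold=3.0):
        self.samples = deque(maxlen=window)  # recent metric values
        self.threshold = threshold           # z-score cutoff (assumed)

    def observe(self, value):
        """Return True if `value` is anomalous w.r.t. the recent window."""
        anomalous = False
        if len(self.samples) >= 2:
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.samples.append(value)
        return anomalous

# Usage: feed per-VM latency samples; a sudden spike triggers an alert.
detector = ZScoreDetector(window=60, threshold=3.0)
for latency_ms in [12, 11, 13, 12, 11, 95]:
    if detector.observe(latency_ms):
        print("anomaly:", latency_ms)
```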
92

A Sparse Learning Approach for Linux Kernel Data Race Prediction

Ryan, Gabriel January 2023
Operating system kernels rely on fine-grained concurrency to achieve optimal performance on modern multi-core processors. However, heavy usage of fine-grained concurrency mechanisms makes modern operating system kernels prone to data races, which can cause severe and often elusive bugs. In this thesis, I propose a new approach to identifying data races in OS kernels based on learning a model to predict which memory accesses can feasibly execute concurrently with one another. To develop an efficient learning method for memory access feasibility, I develop a novel approach based on encoding feasibility as a Boolean indicator function of system calls and ordered memory accesses. A memory access feasibility function encoded this way has a naturally sparse latent representation, owing to the sparsity of interthread communications and synchronization interactions, and can therefore be accurately approximated from a small number of observed concurrent execution traces. This thesis introduces two key contributions. First, Probabilistic Lockset Analysis (PLA) is a new analysis that exploits sparsity in input dependencies, in conjunction with a conservative lockset analysis, to efficiently predict data races in the Linux kernel. Second, approximate happens-before analysis in the Fourier domain (HBFourier) generalizes the approach used by PLA to reason about interthread memory communications and synchronization events through sparse Fourier learning. In addition to being theoretically grounded, these techniques are highly practical: they find hundreds of races in a recent Linux development kernel, an order of magnitude improvement over prior work, and they find races with severe security impacts that existing kernel testing systems had overlooked for years.
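PLA builds on a conservative lockset analysis. As a hedged illustration of that underlying idea (classic Eraser-style locksets, not PLA's sparse-learning machinery), the sketch below flags a shared location as a race candidate once two threads access it, at least one access is a write, and no single lock is held across all accesses. All names are illustrative.

```python
# A minimal Eraser-style lockset sketch (illustrative names; PLA itself
# combines this idea with sparse learning, which is not reproduced here).
# A location becomes a race candidate once two threads touch it, at
# least one access is a write, and no lock protects every access.

def find_race_candidates(accesses):
    """accesses: iterable of (location, thread_id, is_write, locks_held)."""
    candidate_locks = {}   # location -> locks held at *every* access so far
    writers = {}           # location -> threads that wrote it
    threads = {}           # location -> threads that accessed it
    races = set()
    for loc, tid, is_write, held in accesses:
        candidate_locks[loc] = (candidate_locks[loc] & held
                                if loc in candidate_locks else set(held))
        threads.setdefault(loc, set()).add(tid)
        if is_write:
            writers.setdefault(loc, set()).add(tid)
        if (len(threads[loc]) > 1 and writers.get(loc)
                and not candidate_locks[loc]):
            races.add(loc)   # lockset is empty: no common protection
    return races

# Two threads write `counter` under different locks -> race candidate.
trace = [
    ("counter", 1, True, {"L1"}),
    ("counter", 2, True, {"L2"}),
]
print(find_race_candidates(trace))  # {'counter'}
```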
93

A program slicer for LF

Louw, Francoise 12 1900
Thesis (MSc)--University of Stellenbosch, 2006. / Program slicing was originally described by Mark Weiser in 1984. He proposed it as a technique to aid in debugging, because he conjectured that this is what programmers do naturally when debugging. Here, program slicing is applied to an experimental concurrent language called LF, and existing techniques are adapted to accommodate the unique features of the language.
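As a hedged illustration of Weiser-style slicing (a toy for straight-line code, not the LF slicer from the thesis), the sketch below walks a program backwards from a slicing criterion, keeping exactly the statements whose definitions can influence the variables of interest.

```python
# A minimal Weiser-style backward-slicing sketch for straight-line code
# (illustrative only; slicing the concurrent language LF needs control-flow
# and synchronization handling that this toy omits).
# Each statement is a pair (defined_var, used_vars).

def backward_slice(stmts, criterion_vars):
    """Return indices of the statements in the slice for `criterion_vars`."""
    relevant = set(criterion_vars)   # variables still of interest
    in_slice = []
    for i in range(len(stmts) - 1, -1, -1):   # walk backwards
        defined, used = stmts[i]
        if defined in relevant:
            in_slice.append(i)
            relevant.discard(defined)   # this definition satisfies the use
            relevant.update(used)       # its inputs now matter instead
    return sorted(in_slice)

# 0: a = input()   1: b = input()   2: c = a + 1   3: d = b * 2
program = [("a", set()), ("b", set()), ("c", {"a"}), ("d", {"b"})]
# Slicing on `c` after statement 3 keeps statements 0 and 2; the
# computation of b and d is sliced away.
print(backward_slice(program, {"c"}))   # [0, 2]
```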
94

SOFTVIZ... A Step Forward

Singh, Mahim 30 April 2004
Complex software systems are difficult to understand and very hard to debug. Programmers trying to understand or debug these systems must read through source code that may span thousands of files. Software visualization tries to ease this burden by using graphics and animation to convey important information about a program to the user, which may be used either for understanding the program's behavior or for detecting defects in the code. SoftViz is one such software visualization system, developed by Ben Kurtz under the guidance of Prof. George T. Heineman at WPI. We carry forward the work initiated with SoftViz. Our preliminary study showed various avenues for making the system more effective and user-friendly. Specifically, I completed the unfinished work, made optimizations, implemented new functionality, and added new visualization plug-ins, all aimed at making the system a more versatile and user-friendly debugging framework. We built solid core functionality able to support varied features, and created new plug-ins that make understanding and bug detection easier. Further, we integrated SoftViz with the Eclipse development environment, making the system easily accessible and potentially widely used. Finally, we created an error classification framework relating the common error classes to the visualizations that could be used to detect them. We believe this will be helpful both in selecting the right visualization options and in constructing new plug-ins.
95

Combining over- and under-approximating program analyses for automatic software testing

Csallner, Christoph 07 July 2008
This dissertation attacks the well-known problem of path-imprecision in static program analysis. Our starting point is an existing static program analysis that over-approximates the execution paths of the analyzed program. We then make this over-approximating program analysis more precise for automatic testing in an object-oriented programming language. We achieve this by combining the over-approximating program analysis with usage-observing and under-approximating analyses. More specifically, we make the following contributions.

We present a technique to eliminate language-level unsound bug warnings produced by an execution-path-over-approximating analysis for object-oriented programs that is based on the weakest precondition calculus. Our technique post-processes the results of the over-approximating analysis by solving the produced constraint systems and generating and executing concrete test-cases that satisfy the given constraint systems. Only test-cases that confirm the results of the over-approximating static analysis are presented to the user. This technique has the important side-benefit of making the results of a weakest-precondition based static analysis easier to understand for human consumers. We show examples from our experiments that visually demonstrate the difference between hundreds of complicated constraints and a simple corresponding JUnit test-case.

Besides eliminating language-level unsound bug warnings, we present an additional technique that also addresses user-level unsound bug warnings. This technique pre-processes the testee with a dynamic analysis that takes advantage of actual user data. It annotates the testee with the knowledge obtained from this pre-processing step and thereby provides guidance for the over-approximating analysis.

We also present an improvement to dynamic invariant detection for object-oriented programming languages. Previous approaches do not take behavioral subtyping into account and therefore may produce inconsistent results, which can throw off automated analyses such as the ones we are performing for bug-finding.

Finally, we address the problem of unwanted dependencies between test-cases caused by global state. We present two techniques for efficiently re-initializing global state between test-case executions and discuss their trade-offs. We have implemented the above techniques in the JCrasher, Check 'n' Crash, and DSD-Crasher tools and present initial experience in using them for automated bug finding in real-world Java programs.
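The core mechanism — only surfacing warnings that a generated test actually confirms — can be sketched generically. This is a minimal sketch under stated assumptions: Check 'n' Crash solves weakest-precondition constraint systems and emits JUnit test-cases, whereas the illustrative code below simply executes solver-style candidate inputs directly and keeps a warning only when the predicted error really occurs.

```python
# A minimal sketch of "confirm static warnings by execution" (illustrative;
# Check 'n' Crash derives inputs from constraint systems and generates
# JUnit test-cases rather than calling functions directly).

def confirm_warnings(warnings):
    """warnings: iterable of (target_function, candidate_inputs, expected_error).

    Keep a warning only if running the candidate inputs actually raises
    the predicted error, i.e. the static result is dynamically confirmed."""
    confirmed = []
    for func, inputs, expected_error in warnings:
        try:
            func(*inputs)
        except expected_error:
            confirmed.append((func.__name__, inputs))  # true positive
        except Exception:
            pass  # failed differently; treat as unconfirmed here
    return confirmed

def divide(a, b):
    return a // b

# A solver-produced input b=0 confirms a division-by-zero warning;
# warnings whose inputs do not trigger the error are silently dropped.
print(confirm_warnings([(divide, (1, 0), ZeroDivisionError)]))
# [('divide', (1, 0))]
```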
96

Metodologias de suporte a verificação e análise de modelos de plataformas em alto nível de abstração / Support methodologies for the verification and analysis of platform models at a high level of abstraction

Albertini, Bruno de Carvalho, 1980- 10 June 2011
Advisors: Sandro Rigo, Guido Costa Souza de Araújo / Thesis (PhD) - Universidade Estadual de Campinas, Instituto de Computação / Abstract: The increasing complexity of high-level hardware descriptions has motivated the creation of development methodologies for several years, with the most recent level of abstraction represented by platform-based design and the so-called Electronic System Level (ESL) design. In this scenario, simultaneously exploring different architectural models, such as Systems-on-Chip (SoC), is the key to achieving a good hardware-software partitioning balance and improving the performance of both hardware and software. This demands a platform simulation infrastructure able to simulate both software and hardware at high speed and at a high level of abstraction. SystemC has emerged as one of the most widely adopted description languages and, together with Transaction Level Modeling (TLM), has been widely recognized as the technique most suitable for ESL development. One of the most striking features of TLM is the possibility of reusing the entire platform infrastructure for the simulation of hardware and software [12]. Integrating verification into the design flow is a key point in a TLM-based methodology. One well-known verification technique is stimuli injection, used to guide the simulation to borderline states; this kind of functionality is useful to increase verification coverage. The tools currently available for SystemC descriptions do not allow stimuli injection without modifying the model or without using a simulation core specially crafted for this task. We could not find any open-source tool for debugging, although there are good commercial tools built specifically for debugging SystemC models.

This thesis proposes three methodologies to improve the support for introspection, debugging, and analysis of hardware models described at a high level of abstraction. The first is a computational-reflection methodology applicable to SystemC modules through the insertion of inspection modules that we call ReFlexBoxes. The second technique, called SignalReplay, is an evolution of the first, focused on capturing, analyzing, and injecting the data collected by reflection. The last proposed methodology, called Platform Dataflow Analysis (PDFA), aims at extracting metadata through the reflection of overloaded types, allowing the designer to apply compiler techniques to hardware analysis. The results are presented as experiments, implemented as case studies. These experiments allowed us to evaluate the effectiveness of the proposed techniques, which, unlike related work, adhere to six principles we consider fundamental: (1) they are not intrusive with respect to the model modifications that may be needed to implement introspection; (2) they require no changes to the simulation environment, compilers, or libraries, including our target language, SystemC; (3) they add little overhead to simulation time; (4) they provide observability and controllability; (5) they are extensible, allowing adaptation for use in similar work with little or no change to the methodology; and (6) they protect the intellectual property of the module under verification. / Doctorate in Computer Science
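As a hedged, language-agnostic illustration of the ReFlexBox idea (the real work wraps SystemC modules; the class and attribute names here are assumptions), the sketch below uses reflection to observe every write to a wrapped model without modifying the model itself — the non-intrusive observability the six principles demand.

```python
# A minimal sketch of non-intrusive inspection via reflection (illustrative;
# the actual ReFlexBoxes wrap SystemC modules in C++, not Python objects).

class InspectionBox:
    """Wraps a model object, logging every attribute write it observes."""

    def __init__(self, model):
        object.__setattr__(self, "_model", model)
        object.__setattr__(self, "trace", [])   # (attribute, value) log

    def __getattr__(self, name):
        return getattr(self._model, name)       # reads pass through

    def __setattr__(self, name, value):
        self.trace.append((name, value))        # observe the write...
        setattr(self._model, name, value)       # ...then forward it

class Counter:
    def __init__(self):
        self.value = 0

box = InspectionBox(Counter())   # the wrapped model is left untouched
box.value = 5                    # recorded, then applied to the model
print(box.trace)                 # [('value', 5)]
print(box.value)                 # 5
```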
97

Algorithms And Models For Debugging Distributed Programs

Sampath, D 07 1900
No description available.
98

An Empirical Study of Software Debugging Games with Introductory Students

Reynolds, Lisa Marie 08 1900
Bug Fixer is a web-based application that complements lectures with hands-on exercises encouraging students to think about the logic in programs. Bug Fixer presents students with code that contains several bugs they must fix. The process of fixing the bugs forces students to think conceptually about the code and reinforces their understanding of the logic behind algorithms. In this work, we conducted a study using Bug Fixer with undergraduate students in the CSCE1040 course at the University of North Texas to evaluate whether the system increases their conceptual understanding of the algorithms and improves their software testing skills. Students participated in weekly activities to fix bugs in code. Most students enjoyed Bug Fixer and recommended the system for future use. Students typically reported a better understanding of the algorithms used in class. We observed a slight increase in passing grades for students who participated in our study compared to students in other sections of the course, taught by the same instructor, who did not participate. The students who did not report a positive experience provided comments for future improvements, which we plan to address in future work.
