231

Developing an Affordable Authoring Tool For Intelligent Tutoring Systems

Choksey, Sanket Dinesh 25 August 2004 (has links)
"Intelligent tutoring systems (ITSs) are computer based tutoring systems that provide individualized tutoring to the students. Building an ITS is recognized to be expensive task in terms of cost and resources. Authoring tools provide a framework and an environment for building the ITSs that help to reduce the resources like skills, time and cost required to build an intelligent tutoring system. In this thesis we have implemented the Cognitive Tutor Authoring Tools (CTAT) and performed experiments to empirically determine the common programming errors that authors tend to make while building an ITS and study what is hard in authoring an ITS. The CTAT were used in a graduate class at Worcester Polytechnic Institute and also at the 4th Summer school organized at the Carnegie Mellon University. Based on the analysis of the experiments we suggest future work to reduce the debugging time and thereby reduce the time required to author an ITS. We also implemented the model tracing algorithm in JESS, evaluated its performance and compared to that of the model tracing algorithm in TDK. This research is funded by the Office of Naval Research (Grant # N00014-0301-0221)."
232

Displaying data structures for interactive debugging

Myers, Brad Allen January 1980 (has links)
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1980. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING. / Vita. / Bibliography: leaves 98-102. / by Brad Allen Myers. / M.S.
233

Qualidade da água e poder de depuração do rio Marrecas em seu médio e baixo curso / Water quality and power of purification of Marrecas river in middle and lower course

Biguelini, Cristina Poll 08 February 2013 (has links)
The decline in water quality, and the concern that its management be as decentralized as possible, is one of today's major problems, and the municipality of Francisco Beltrão is no exception: owing to rapid and intense urban development, it has become the most populous municipality in southwestern Paraná. This situation has led to an apparent process of degradation of the Marrecas river, with a broad reduction in vegetation cover, irregular housing, and numerous effluent discharges (many of them illegal). In this context, this study monitored the quality of the Marrecas river along its middle and lower course to assess pollution levels, the eutrophication process, and the river's self-purification capacity, using three monitoring points along its path through the urban area of Francisco Beltrão (before and after the urban area, and near its mouth). Drawing on black-box systems theory, physico-chemical and microbiological control analyses were carried out seasonally, in the periods of higher and lower rainfall (August and October 2011), and the current situation was diagnosed using the Water Quality Index classification methodology (ANA, 2005), in support of future corrective and preventive actions. These variables were related to the flow and rainfall results for the period. The results showed that some variables, analyzed in situ, presented normal values, while others exceeded the limits stipulated by current legislation. The final water quality index ranged from "bad" to "fair" in August 2011, and from "bad" to "good" in October. The trophic state index oscillated among eutrophic, mesotrophic, and supereutrophic. To assess the river's self-purification capacity, the BOD, COD, and DO variables were used, comparing upstream and downstream values with those found near the confluence with the Santana river. The conclusion is that in August the river could not purify its pollutant load, whereas in October purification was observed.
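For reference, the Water Quality Index (IQA) methodology cited above follows the NSF-WQI weighted-product form; a sketch of that form is below. The nine standard parameters and the requirement that the weights sum to one are the usual Brazilian adaptation, stated here as an assumption rather than taken from the abstract.

```latex
% IQA as a weighted product of sub-indices: q_i is the 0-100 rating-curve
% value for parameter i (typically DO, thermotolerant coliforms, pH, BOD,
% temperature, total nitrogen, total phosphorus, turbidity, total solids)
% and w_i its weight, with \sum_i w_i = 1.
\[
  \mathrm{IQA} \;=\; \prod_{i=1}^{9} q_i^{\,w_i},
  \qquad 0 \le \mathrm{IQA} \le 100 .
\]
```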
235

Depuração automática de programas baseada em modelos: uma abordagem hierárquica para auxílio ao aprendizado de programação / Automated model based software debugging: a hierarchical approach to help programming learning

Pinheiro, Wellington Ricardo 07 May 2010 (has links)
Model-based diagnosis (MBD) is an Artificial Intelligence technique used to find faulty components in physical devices. MBD has also been used to help experienced programmers locate faults in their programs, a technique known as model-based software debugging (MBSD). Although MBSD can help experienced programmers understand and correct their faults, the approach must be improved before it can be used by novice programmers. This work proposes hierarchical program debugging, an extension of MBSD that enables novice programmers to debug their programs by reasoning about abstract components such as elementary patterns, functions, and procedures. The proposed hierarchical program debugger was integrated into Dr. Java and evaluated with a group of students in an introductory programming course. The results show that most students were able to understand the failure hypotheses generated by the automated debugger and to use this information to correct their programs.
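A minimal sketch of the hierarchical idea follows: diagnose at the level of abstract components first, and descend into a component only when its observed behavior contradicts the model. The component tree and per-component checkers are assumed interfaces, not the thesis's actual Dr. Java integration.

```python
# Sketch of hierarchical model-based debugging (assumed interfaces). Each
# component carries a checker that compares its observed behavior on a failing
# test against the expected model; diagnosis descends only into failing parts.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Component:
    name: str
    check: Callable[[], bool]                      # True if behavior matches the model
    children: list["Component"] = field(default_factory=list)

def diagnose(c: Component) -> list[str]:
    if c.check():
        return []                                  # component behaves as modeled
    faulty_children = [h for child in c.children for h in diagnose(child)]
    # If no child explains the failure, hypothesize the component itself.
    return faulty_children or [c.name]

# Toy example: a program whose 'average' function divides by the wrong count.
data = [2, 4, 6]
avg = Component("average", lambda: sum(data) / (len(data) + 1) == 4.0)  # buggy
total = Component("sum", lambda: sum(data) == 12)
program = Component("program", lambda: False, [total, avg])
print(diagnose(program))                           # -> ['average']
```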
236

Statistical causal analysis for fault localization

Baah, George Kofi 08 August 2012 (has links)
The ubiquitous nature of software demands that software be released without faults. However, software developers inadvertently introduce faults into software during development, and one of the tasks they perform to remove them is debugging. Debugging is a difficult, tedious, and time-consuming process. Several semi-automated techniques, experimental, statistical, and program-structure based, have been developed to reduce the burden on the developer. Most of these techniques address the part of the debugging process that relates to finding the location of the fault, referred to as fault localization. Current fault-localization techniques have several limitations, including (1) problems with program semantics, (2) the requirement for automated oracles, which in practice are difficult if not impossible to develop, and (3) the lack of a theoretical basis for addressing the fault-localization problem. The thesis of this dissertation is that statistical causal analysis combined with program analysis is a feasible and effective approach to finding the causes of software failures. The overall goal of this research is to significantly extend the state of the art in fault localization. To that end, a novel probabilistic model that combines program-analysis information with statistical information in a principled manner is developed. The model, known as the probabilistic program dependence graph (PPDG), is applied to the fault-localization problem. The insights gained from applying the PPDG to fault localization fuel the development of a novel theoretical framework for fault localization based on established causal-inference methodology. This framework enables current statistical fault-localization metrics to be analyzed from a causal perspective. The analysis shows that the metrics are related to each other, allowing their unification. It also reveals that current statistical techniques do not find the causes of program failures; instead, they find the program elements most associated with failures. The fault-localization problem, however, is a causal problem, and statistical association does not imply causation. Several empirical studies conducted on several software subjects (1) confirm our analytical results and (2) demonstrate the efficacy of our causal technique for fault localization. The results demonstrate that the research in this dissertation significantly improves on the state of the art in fault localization.
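For context, the statistical (associational) metrics the dissertation analyzes typically score program elements by how strongly their execution correlates with failing runs. A minimal sketch using one standard metric, Ochiai (chosen here for illustration; the abstract does not name specific metrics), shows what "most associated with failures" means in practice:

```python
# Sketch of spectrum-based (associational) fault localization with the Ochiai
# metric: rank statements by the correlation between their execution and
# failing runs. As the abstract argues, this measures association, not causation.
from math import sqrt

def ochiai(failed_cov: int, passed_cov: int, total_failed: int) -> float:
    # failed_cov/passed_cov: failing/passing runs that executed the statement
    if total_failed == 0 or failed_cov + passed_cov == 0:
        return 0.0
    return failed_cov / sqrt(total_failed * (failed_cov + passed_cov))

# coverage[stmt] = (failing runs executing stmt, passing runs executing stmt)
coverage = {"s1": (4, 4), "s2": (4, 0), "s3": (1, 7)}
total_failed = 4
ranking = sorted(coverage, key=lambda s: ochiai(*coverage[s], total_failed),
                 reverse=True)
print(ranking)  # -> ['s2', 's1', 's3']: s2 runs only in failing executions
```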
237

Comparing Mobile Applications' Energy Consumption

Wilke, Claas, Richly, Sebastian, Piechnick, Christian, Götz, Sebastian, Püschel, Georg, Aßmann, Uwe 17 January 2013 (has links) (PDF)
As mobile devices are nowadays used regularly and everywhere, their energy consumption has become a central concern for their users. However, mobile applications often do not state their energy requirements, and users have to install and try them to learn anything about their energy behavior. In this paper, we compare mobile applications from two domains and show that applications differ in energy consumption while providing similar services. We define microbenchmarks for emailing and web browsing and evaluate applications from these domains. We show that non-functional features such as web page caching can, but do not necessarily, have a positive influence on applications' energy consumption.
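A minimal sketch of the kind of microbenchmark harness such a comparison implies is given below: repeat a fixed task and attribute the measured energy to it. The energy-reading hook is an assumption (real studies use instrumented power supplies or platform energy counters), and all names are hypothetical.

```python
# Hypothetical microbenchmark harness for comparing apps on a fixed task
# (e.g., 'load and render a web page' or 'fetch and display an inbox').
# read_energy_joules() is an assumed platform hook, not a real API.
import time

def read_energy_joules() -> float:
    raise NotImplementedError("hook up a power meter or OS energy counter")

def benchmark(task, repetitions: int = 30) -> tuple[float, float]:
    e0, t0 = read_energy_joules(), time.monotonic()
    for _ in range(repetitions):
        task()                       # the same workload for every application
    e1, t1 = read_energy_joules(), time.monotonic()
    return (e1 - e0) / repetitions, (t1 - t0) / repetitions  # J/run, s/run

# Usage: drive the identical task through each app's automation interface and
# compare joules per run; repetitions average out background noise.
```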
238

A Comparative Study of Automated Test Explorers

Gustavsson, Johan January 2015 (has links)
With modern computer systems becoming more and more complicated, the importance of rigorous testing to ensure the quality of the product increases. This, however, means that the cost of performing tests also increases. To address this problem, much research has been conducted in recent years to find more automated ways of testing software systems. In this thesis, different algorithms to automatically explore and test a system have been implemented and evaluated. In addition, a second set of algorithms has been implemented with the objective of isolating which interactions with the system were responsible for a failure. These algorithms were also evaluated and compared against each other. In the first evaluation, two explorers, which I called DeBruijn and LStarExplorer, were considered superior to the others. The first used a De Bruijn sequence to brute-force a solution, while the second used the L* algorithm to build an FSM over the system under test. This FSM could then be used to provide a more accurate description of when the failure occurred. The result of the second evaluation was two reducers, both of which tried to recreate a failure by first applying the interactions performed just before the failure occurred. If this was not successful, they tried interactions further and further away until the failure was triggered. In addition, the thesis describes the framework used to run the different strategies.
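For the DeBruijn explorer, the underlying object is a de Bruijn sequence B(k, n), which contains every length-n input combination over a k-symbol alphabet exactly once when read cyclically, so replaying it exercises all short interaction patterns. A sketch using the standard Lyndon-word construction follows; mapping symbols to system interactions is an assumption, since the thesis framework is not reproduced here.

```python
# Standard construction of a de Bruijn sequence B(k, n) via Lyndon words.
# An explorer would map each symbol to a UI/API interaction and replay it.

def de_bruijn(k: int, n: int) -> list[int]:
    """Return a de Bruijn sequence over alphabet {0..k-1} for window length n."""
    a = [0] * k * n
    sequence: list[int] = []

    def db(t: int, p: int) -> None:
        if t > n:
            if n % p == 0:
                sequence.extend(a[1 : p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return sequence

seq = de_bruijn(2, 3)
print(seq)  # [0, 0, 0, 1, 0, 1, 1, 1]: every 3-bit pattern appears cyclically
```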
239

Test-driven fault navigation for debugging reproducible failures

Perscheid, Michael January 2013 (has links)
The correction of software failures tends to be very cost-intensive because their debugging is an often time-consuming development activity. During this activity, developers largely attempt to understand what causes failures: starting with a test case that reproduces the observable failure, they have to follow failure causes on the infection chain back to the root cause (defect). This idealized procedure requires deep knowledge of the system and its behavior because failures and defects can be far apart from each other. Unfortunately, common debugging tools are inadequate for systematically investigating such infection chains in detail. Thus, developers have to rely primarily on their intuition, and the localization of failure causes is not time-efficient. To prevent debugging by disorganized trial and error, experienced developers apply the scientific method and its systematic hypothesis-testing. However, even when using the scientific method, the search for failure causes can still be a laborious task. First, lacking expertise about the system makes it hard to understand incorrect behavior and to create reasonable hypotheses. Second, contemporary debugging approaches provide no or only partial support for the scientific method. In this dissertation, we present test-driven fault navigation as a debugging guide for localizing reproducible failures with the scientific method. Based on the analysis of passing and failing test cases, we reveal anomalies and integrate them into a breadth-first search that leads developers to defects. This systematic search consists of four specific navigation techniques that together support the creation, evaluation, and refinement of failure cause hypotheses for the scientific method. First, structure navigation localizes suspicious system parts and restricts the initial search space. Second, team navigation recommends experienced developers for helping with failures. Third, behavior navigation allows developers to follow emphasized infection chains back to root causes. Fourth, state navigation identifies corrupted state and reveals parts of the infection chain automatically. We implement test-driven fault navigation in our Path Tools framework for the Squeak/Smalltalk development environment and limit its computation cost with the help of our incremental dynamic analysis. This lightweight dynamic analysis ensures an immediate debugging experience with our tools by splitting the run-time overhead over multiple test runs depending on developers' needs. Hence, our test-driven fault navigation in combination with our incremental dynamic analysis answers important questions in a short time: where to start debugging, who understands failure causes best, what happened before failures, and which state properties are infected.
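A minimal sketch of the breadth-first navigation idea is given below, assuming a simple call graph and per-method anomaly scores derived from passing and failing tests; the data model is hypothetical and is not the Path Tools implementation.

```python
# Sketch of breadth-first fault navigation (assumed data model): starting
# from the failing test's entry point, explore callees most-anomalous-first,
# so the developer follows the most suspicious path through the system.
from collections import deque

def navigate(call_graph: dict[str, list[str]], anomaly: dict[str, float],
             entry: str) -> list[str]:
    """Return methods in breadth-first order, most anomalous siblings first."""
    order, seen, queue = [], {entry}, deque([entry])
    while queue:
        m = queue.popleft()
        order.append(m)
        for callee in sorted(call_graph.get(m, []),
                             key=lambda c: anomaly.get(c, 0.0), reverse=True):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return order

graph = {"test": ["parse", "render"], "parse": ["lex"], "render": ["layout"]}
scores = {"parse": 0.9, "render": 0.1, "lex": 0.7, "layout": 0.0}
print(navigate(graph, scores, "test"))  # ['test', 'parse', 'render', 'lex', 'layout']
```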
240

Monitoring and analysis system for performance troubleshooting in data centers

Wang, Chengwei 13 January 2014 (has links)
Not long ago, on Christmas Eve 2012, a war of troubleshooting began in Amazon's data centers. It started at 12:24 PM with a mistaken deletion of the state data of the Amazon Elastic Load Balancing service (ELB for short), which was not noticed at the time. The mistake first led to a local issue in which a small number of ELB service APIs were affected. In about six minutes, it evolved into a critical one in which EC2 customers were significantly affected. Netflix, for example, which was using hundreds of Amazon ELB services, experienced an extensive streaming-service outage in which many customers could not watch TV shows or movies on Christmas Eve. It took Amazon engineers 5 hours and 42 minutes to find the root cause, the mistaken deletion, and another 15 hours and 32 minutes to fully recover the ELB service. The war ended at 8:15 AM the next day and brought performance troubleshooting in data centers to the world's attention. As the Amazon ELB case shows, troubleshooting runtime performance issues is crucial in time-sensitive multi-tier cloud services because of their stringent end-to-end timing requirements, but it is also notoriously difficult and time-consuming. To address the troubleshooting challenge, this dissertation proposes VScope, a flexible monitoring and analysis system for online troubleshooting in data centers. VScope provides primitive operations that data center operators can use to troubleshoot various performance issues. Each operation is essentially a series of monitoring and analysis functions executed on an overlay network. We design a novel software architecture for VScope so that the overlay networks can be generated, executed, and terminated automatically, on demand. On the troubleshooting side, we design novel anomaly-detection algorithms and implement them in VScope; by running them, data center operators are notified when performance anomalies happen. We also design a graph-based guidance approach, called VFocus, which tracks the interactions among hardware and software components in data centers. VFocus provides primitive operations by which operators can analyze these interactions to find out which components are relevant to a performance issue. VScope's capabilities and performance are evaluated on a testbed with over 1000 virtual machines (VMs). Experimental results show that the VScope runtime negligibly perturbs system and application performance, and requires mere seconds to deploy monitoring and analytics functions on over 1000 nodes. This demonstrates VScope's ability to support fast operation and online queries against a comprehensive set of application- to system/platform-level metrics, and a variety of representative analytics functions. When supporting algorithms with high computational complexity, VScope serves as a "thin layer" that accounts for no more than 5% of their total latency. Further, by using VFocus, VScope can locate problematic VMs that cannot be found via application-level monitoring alone, and in one of the use cases explored in the dissertation, it operates with levels of perturbation over 400% lower than those seen for brute-force and most sampling-based approaches. We also validate VFocus with real-world data center traces. The experimental results show that VFocus has a troubleshooting accuracy of 83% on average.
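As an illustration of the kind of per-metric anomaly detection a VScope-style monitoring operation could execute (the dissertation's actual algorithms are not specified in the abstract), a minimal sliding-window z-score detector might look like this:

```python
# Sketch of a simple online anomaly detector for one metric stream
# (illustrative only): flag a sample whose z-score against a sliding
# window of recent history exceeds a threshold.
from collections import deque
from statistics import mean, pstdev

class ZScoreDetector:
    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.history: deque[float] = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if `value` is anomalous relative to recent history."""
        anomalous = False
        if len(self.history) >= 10:                 # warm-up before judging
            mu, sigma = mean(self.history), pstdev(self.history)
            anomalous = sigma > 0 and abs(value - mu) / sigma > self.threshold
        self.history.append(value)
        return anomalous

det = ZScoreDetector()
latencies = [10.0] * 30 + [10.5, 9.8, 250.0]        # spike in request latency
print([v for v in latencies if det.observe(v)])     # -> [250.0]
```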
