201

Nprof: a monitoring tool for distributed applications

Brugnara, Telmo January 2006 (has links)
The growing complexity of software and the increasing workload to which systems are submitted are known trends in computing, especially for distributed and web systems. The increasing workload generates demand for systems that make better use of computing resources, while the increase in system complexity demands specific actions to prevent design faults. Therefore, software engineers have two main objectives: optimization and dependability. To accomplish these objectives, monitoring systems have been proposed to gather data from running systems so that their behavior can be analyzed. The present dissertation contributes in the following domains: identifying relevant metrics for monitoring distributed Java applications; and developing a tool to monitor and profile distributed applications, using the new resources available in JDK 1.5 as well as already known techniques such as dynamic class loading and bytecode instrumentation. To evaluate the proposed tool, three test cases were developed: one with a well-known application running without modification; another evaluating the tool's overhead under different parameters; and a third evaluating a distributed application being monitored. 
We understand that the proposed tool succeeds in monitoring distributed applications through the use of distinct APIs and techniques because: Nprof can monitor a distributed application by monitoring different nodes of the application simultaneously; and Nprof allows online visualization of the collected data. Also, simultaneous collection of data from different nodes of a distributed application can be useful for discovering relations among events that occur during the execution of the application.
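Nprof itself is a Java tool built on JDK 1.5 instrumentation, and its code is not reproduced in the abstract; purely as a language-neutral illustration of the mechanism described (intercepting calls on each node to collect timing data that an online viewer can poll), here is a minimal Python sketch. Every name in it is hypothetical.

```python
import functools
import threading
import time
from collections import defaultdict

# Hypothetical per-node statistics store; in a real distributed monitor
# each node would expose this to a central collector (e.g., over a socket).
_stats = defaultdict(lambda: {"calls": 0, "total_s": 0.0})
_lock = threading.Lock()

def monitored(func):
    """Wrap a function so every call updates shared timing statistics."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            with _lock:
                entry = _stats[func.__qualname__]
                entry["calls"] += 1
                entry["total_s"] += elapsed
    return wrapper

def snapshot():
    """Copy of the current statistics, safe to ship to an online viewer."""
    with _lock:
        return {name: dict(v) for name, v in _stats.items()}

@monitored
def handle_request(n):
    time.sleep(0.01 * n)  # stand-in for real work

if __name__ == "__main__":
    for i in range(3):
        handle_request(i)
    print(snapshot())
```

In Nprof's setting the equivalent interception is performed by transforming bytecodes at class-load time rather than by decorators.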
202

Statistical Debugging of Programs Written in a Dynamic Programming Language: Ruby

Akhter, Adeel, Azhar, Hassan January 2010 (has links)
Debugging is an important and critical phase of the software development process, and a demanding practice in function-based test-driven development. Software vendors encourage their programmers to practice test-driven development during the initial development phases in order to capture bug traces and the code coverage affected by diagnosed bugs. Application source code with few threats of bugs or faulty executions is considered highly efficient and stable, especially for real-time software products. Because the development of software projects relies on a great number of users and testers, an effective fault localization technique is required: one that can highlight the most critical areas of a software system at the code as well as the modular level, so that a debugging algorithm can be applied to the application source code. Nowadays many software systems, simple and complex, cooperate with open bug repositories to localize bugs. Any inconsistency or imperfection in an early development phase of a software product results in a less efficient and less reliable system. Statistical debugging of program source code for visualization of faults is an efficient way to select and rank suspicious lines of code. This research provides guidelines for practicing statistical debugging of programs coded in the Ruby programming language. The thesis first surveys statistical debugging techniques available for dynamic programming languages, examining the predicate-based approaches followed in previous work in the area. It then introduces a new statistical debugging process for programs coded in Ruby, based on generating dynamic predicates. Results were analyzed by instrumenting multiple Ruby programs of different complexity levels. The experiments on the candidate programs show that SOBER is more efficient and accurate in bug identification than the Cause Isolation Scheme. It is concluded that, despite extensive research in statistical debugging and fault localization, it is not possible to identify the majority of bugs. Moreover, SOBER and the Cause Isolation Scheme are found to be the two most mature and effective statistical debugging algorithms for identifying bugs in software source code.
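Neither SOBER nor the Cause Isolation Scheme is spelled out in the abstract; the sketch below illustrates only the core idea SOBER builds on, namely comparing a predicate's evaluation bias between failing and passing runs. The data and the t-statistic-style score are invented for the example and simpler than SOBER's actual ranking model.

```python
from statistics import mean, pstdev

# Each run records, per predicate, the fraction of its evaluations that
# were true during that run (SOBER's "evaluation bias").
passing_runs = [
    {"x > 0": 0.50, "list.empty?": 0.10},
    {"x > 0": 0.55, "list.empty?": 0.12},
]
failing_runs = [
    {"x > 0": 0.95, "list.empty?": 0.11},
    {"x > 0": 0.90, "list.empty?": 0.09},
]

def score(pred):
    """Larger score = evaluation bias differs more between outcomes."""
    fails = [run[pred] for run in failing_runs]
    passes = [run[pred] for run in passing_runs]
    spread = pstdev(fails + passes) or 1e-9  # avoid division by zero
    return abs(mean(fails) - mean(passes)) / spread

predicates = passing_runs[0].keys()
for pred, s in sorted(((p, score(p)) for p in predicates),
                      key=lambda t: -t[1]):
    print(f"{s:6.2f}  {pred}")
```

Here the predicate correlated with failure ("x > 0") rises to the top of the ranking, which is the behavior a statistical debugger reports to the programmer.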
203

Heapy: A Memory Profiler and Debugger for Python

Nilsson, Sverker January 2006 (has links)
Excessive memory use may cause severe performance problems and system crashes. Without appropriate tools, it may be difficult or impossible to determine why a program is using too much memory. This applies even though Python provides automatic memory management --- garbage collection can help avoid many memory allocation bugs, but only to a certain extent due to the lack of information during program execution. There is still a need for tools helping the programmer to understand the memory behaviour of programs, especially in complicated situations. The primary motivation for Heapy is that there has been a lack of such tools for Python. The main questions addressed by Heapy are how much memory is used by objects, what are the objects of most interest for optimization purposes, and why are objects kept in memory. Memory leaks are often of special interest and may be found by comparing snapshots of the heap population taken at different times. Memory profiles, using different kinds of classifiers that may include retainer information, can provide quick overviews revealing optimization possibilities not thought of beforehand. Reference patterns and shortest reference paths provide different perspectives of object access patterns to help explain why objects are kept in memory.
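A brief session showing the snapshot-comparison workflow the abstract describes, using guppy's documented `hpy` entry point (method and attribute names per guppy's public API; verify against the installed version, e.g. guppy3 on current Python):

```python
from guppy import hpy   # guppy / guppy3 package providing Heapy

h = hpy()
h.setrelheap()          # reference point: "the heap as of now"

leaky = [list(range(100)) for _ in range(1000)]  # simulated growth

heap = h.heap()         # objects allocated since the reference point
print(heap)             # memory profile classified by type
print(heap.byrcs)       # same data classified by referrers ("retainers"),
                        # which helps answer *why* objects stay in memory
```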
204

Debugging Equation-Based Languages in OpenModelica Environment

Sjöholm, Klas January 2009 (has links)
The need for debugging tools for declarative programming languages has increased due to the rapid development of modeling and simulation tools. Declarative equation-based programming languages have the problem of equation systems being over- or under-constrained: the system of equations has more equations than variables, or more variables than equations, making it unsolvable. In this study a static debugger is implemented in the OpenModelica compiler for the equation-based programming language Modelica, to make it easier for the programmer or modeler to locate the equations causing the unbalanced system. The debugging techniques used by the debugger were developed by Peter Bunus. Those techniques detect unbalanced systems of equations and propose solutions by identifying the minimal set of equations that should be removed, or the variables that should be added to equations, to make the system solvable. In this study, the debugging techniques for detecting and resolving an over-constrained system of equations are shown to be suitable for the Modelica language in the OpenModelica compiler.
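Bunus's detection techniques are structural: try to pair every equation with a distinct unknown, and an equation left unpaired indicates an over-constrained part, while an unpaired variable indicates an under-constrained one. A minimal sketch of that idea over a made-up three-equation system, assuming networkx is available:

```python
import networkx as nx

# Hypothetical flattened model: each equation lists the variables it uses.
# x is pinned by two conflicting equations and there are only two unknowns,
# so the system is structurally over-constrained.
equations = {
    "eq1": {"x", "y"},   # x + y = 1
    "eq2": {"x"},        # x = 2
    "eq3": {"x"},        # 2*x = 4
}
variables = {"x", "y"}

G = nx.Graph()
G.add_nodes_from(equations, bipartite=0)
G.add_nodes_from(variables, bipartite=1)
G.add_edges_from((eq, v) for eq, vs in equations.items() for v in vs)

matching = nx.bipartite.hopcroft_karp_matching(G, top_nodes=set(equations))

unmatched_eqs = [eq for eq in equations if eq not in matching]
unmatched_vars = [v for v in variables if v not in matching]
print("over-constrained, candidate equations to remove:", unmatched_eqs or "-")
print("under-constrained, variables lacking equations:", unmatched_vars or "-")
```

A real debugger goes further and computes which removals or additions make the whole system well-posed; the sketch only flags the imbalance.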
205

Static/Dynamic Analyses for Validation and Improvements of Multi-Model HPC Applications

Saillard, Emmanuelle 24 September 2015 (has links)
Supercomputing plays an important role in several innovative fields, speeding up prototyping and validating scientific theories. However, supercomputers are evolving rapidly, now with millions of processing units, which raises the question of their programmability. Despite the emergence of more widespread and functional parallel programming models, developing correct and effective parallel applications remains a complex task. Although debugging solutions have emerged to address this issue, they often come with restrictions, and programming-model evolutions stress the need for a convenient validation tool able to handle hybrid applications. Indeed, as current scientific applications mainly rely on the Message Passing Interface (MPI) parallel programming model, new hardware designed for Exascale with higher node-level parallelism clearly advocates for MPI+X solutions, with X a thread-based model such as OpenMP. But integrating two different programming models inside the same application can be error-prone, leading to complex bugs that are unfortunately mostly detected at runtime. In an MPI+X program, not only must the correctness of MPI be ensured but also its interactions with the multi-threaded model; for example, identical MPI collective operations cannot be performed by multiple non-synchronized threads. This thesis aims at developing a combination of static and dynamic analyses to enable early verification of hybrid HPC applications. The first pass statically verifies the thread level required by an MPI+OpenMP application and outlines execution paths leading to potential deadlocks. 
Thanks to this analysis, the code is selectively instrumented, displaying an error and synchronously interrupting all processes if the actual scheduling leads to a deadlock situation.
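The thesis's instrumentation lives in the compiler, so no user-facing API is implied here; as a hedged illustration of the kind of runtime check that selective instrumentation inserts before a collective, the mpi4py sketch below has all ranks agree on the next collective before executing it. The helper name and the call-site label are invented.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD

def checked_collective(name):
    """Abort early, with a precise message, if ranks disagree on the
    next collective operation (the classic MPI mismatch deadlock)."""
    names = comm.allgather(name)
    if len(set(names)) != 1:
        if comm.rank == 0:
            print("collective mismatch:", dict(enumerate(names)))
        comm.Abort(1)

# Example: every rank validates the call site before the real collective.
checked_collective("Bcast@solver.py:42")
data = comm.bcast(comm.rank, root=0)
print(comm.rank, "got", data)
```

Run under e.g. `mpirun -n 4 python check.py`; the static pass is what lets a real tool insert such checks only on the execution paths flagged as risky, keeping overhead low.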
206

Extension of the E(Θ) metric for evaluation of reliability

Mondal, Subhajit January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / David A. Gustafson / The calculation of reliability based on running test cases refers to the probability of the software not generating faulty output after the testing process. The metric used to measure this reliability is referred to as the E(Θ) value. The concept of E(Θ) gives precise formulae for calculating the probability of failure of software after testing, whether in debugging or in operation. This report extends E(Θ) into the realm of multiple faults spread across multiple subdomains. The generalization introduces a new set of formulae for calculating E(Θ) that can account for faults spread over single as well as multiple subdomains in a code. The validity of the formulae is verified by matching the theoretical results against empirical data generated by running a test-case simulator. The report further examines the possibility of an upper bound on the derived formulae and its possible ramifications.
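The report's formulae are not reproduced in the abstract. As a baseline illustration of the kind of quantity E(Θ) captures, consider the simplest single-subdomain formalization, assuming a uniform prior on the per-input failure rate θ; after n passing random tests the expected remaining failure probability is:

```latex
% Expected failure rate after n passing random tests, uniform prior on theta.
% P(pass n tests | theta) = (1 - theta)^n
\mathbb{E}[\theta \mid n \text{ passes}]
  = \frac{\int_0^1 \theta (1-\theta)^n \, d\theta}
         {\int_0^1 (1-\theta)^n \, d\theta}
  = \frac{1/\bigl((n+1)(n+2)\bigr)}{1/(n+1)}
  = \frac{1}{n+2}.
```

The report's contribution is precisely to go beyond this single-subdomain picture to faults spread across multiple subdomains; the derivation above is only context, not the report's formula.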
207

Efficient external observation of CPU activities on SoCs

Weiss, Alexander 28 October 2015 (has links) (PDF)
Comprehensive observability of systems-on-chip (SoCs) is an important prerequisite for efficient testing and debugging of embedded systems. An analysis of various use cases yields a catalog of requirements for SoC observability. An important criterion is the completeness of observation, covering the activities of the CPU (executed instructions, data read and written, cache behavior, execution times), of the bus system, and of environmental conditions. Further criteria are real-time capability and continuity of observation, as well as the ability to carry out different observation tasks simultaneously, all while perturbing the SoC as little as possible. Other important aspects are the cost of the solution, its universality and scalability, and the latency with which observation results become available. For many applications, especially in safety-critical domains, it must additionally be demonstrated that the observation method neither causes nor masks faulty behavior of the SoC. Multiprocessor SoCs (MPSoCs) pose a particular challenge, since communication between the individual CPUs takes place inside the SoC and is correspondingly difficult to make visible to an external observer. The state of the art in SoC observation consists essentially of two approaches. In software instrumentation, additional code serving to observe the program is added to the functional program code. This method is simple and universally applicable, but satisfies the above criteria only to a very limited extent. A drawback is the resource consumption if the instrumentation remains in the finished product; if the instrumentation is added only temporarily, it must be ensured that the observation results also apply to the final code, which is difficult to guarantee, particularly for resource-dependent integration tests. An alternative solution is dedicated hardware support in SoCs ("embedded trace"). Here, state information (e.g., task switches, executed instructions, data transfers) is collected in the SoC and transmitted to the observer in trace messages. The bandwidth available for emitting trace messages from the SoC is the decisive bottleneck: far more information of interest to the observer is available inside the SoC than can be transferred off-chip. Both state-of-the-art observation approaches therefore have a number of limitations, showing up especially in the completeness of observation, flexibility, continuity, and MPSoC support. This thesis presents a new approach that offers significant improvements over the state of the art in several areas. The trace data are obtained not from the observed SoC directly but from an emulation running in parallel. The bandwidth of the data required to synchronize the emulation is in many cases considerably lower than that needed to emit comprehensive trace messages with embedded-trace solutions, while at the same time a complete, highly detailed observation of the processes inside the SoC becomes possible. 
The new observation method was evaluated with several FPGA-based implementations, which also demonstrated its applicability to MPSoCs.
208

Profiling and debugging by efficient tracing of hybrid multi-threaded HPC applications

Besnard, Jean-Baptiste 16 July 2014 (has links)
Supercomputers' evolution is the source of both hardware and software challenges. In the quest for the highest computing power, the interdependence between simulation components is becoming more and more significant, requiring new approaches. This thesis focuses on the software development aspect, and particularly on the observation of parallel software running on several thousand cores. This observation aims at providing developers with the necessary feedback when running a program on an execution substrate that has not yet been modeled because of its complexity. For this purpose, we first introduce the development process from a global point of view before describing developer tools and related work. We then present our contribution, which consists in a trace-based profiling and debugging tool and its evolution towards an online coupling method which, as we show, is more scalable because it overcomes I/O limitations. Our contribution also covers a time-stamp synchronization algorithm for tracing purposes, which relies on a probabilistic approach with quantified error. We also present a tool allowing machine characterization from the MPI perspective, and demonstrate the presence of machine noise for both point-to-point and collective communications, justifying the use of an empirical approach. In summary, this work proposes and motivates an alternative approach to trace-based event collection while preserving event granularity and a reduced overhead.
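The dissertation's probabilistic synchronization algorithm is not given in the abstract; the sketch below shows the classical round-trip building block such algorithms refine: a Cristian-style offset estimate whose error bound is half the measured round-trip time, keeping the sample with the smallest round trip. Written with mpi4py for two ranks; the constants are illustrative.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
SAMPLES = 16

if comm.rank == 0:
    # Client: estimate (rank 1 clock - rank 0 clock).
    best = None
    for _ in range(SAMPLES):
        t0 = MPI.Wtime()
        comm.send(None, dest=1)          # ping
        remote = comm.recv(source=1)     # peer's clock at reply time
        t1 = MPI.Wtime()
        offset = remote - (t0 + t1) / 2.0
        rtt = t1 - t0
        # Smaller round trip -> tighter +/- rtt/2 bound; probabilistic
        # schemes exploit exactly this by resampling until the bound
        # falls below a target with high probability.
        if best is None or rtt < best[1]:
            best = (offset, rtt)
    offset, rtt = best
    print(f"offset ~ {offset:.9f}s, error bound +/- {rtt / 2:.9f}s")
elif comm.rank == 1:
    # Server: answer each ping with the local clock immediately.
    for _ in range(SAMPLES):
        comm.recv(source=0)
        comm.send(MPI.Wtime(), dest=0)
```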
209

Improving Stability and Parameter Selection of Data Processing Programs

Wen-Chuan Lee (8206287) 07 January 2020 (has links)
Data-processing programs are becoming increasingly important in the Big-data era. However, two notable problems of these programs may cause sub-optimal data-processing results. On one hand, these programs contain large numbers of floating-point computations. Due to the limited precision of floating-point representations, errors are introduced, propagated and accumulated in series of computations, making the computation results unreliable. We call this problem floating-point instability. On the other hand, these programs are heavily parameterized. As no universal optimal parameter configuration exists for all possible inputs, the setting of program parameters should be carefully chosen and tuned for each input; otherwise, the result would be sub-optimal. Manual tuning is infeasible because the number of parameters and the range of each parameter value may be large.

We try to address these two challenges in this dissertation. For the floating-point instability problem, we develop a novel runtime technique to capture different output variations in the presence of instability. It features the idea of transforming every floating-point value to a vector of multiple values: the values added to create the vector are obtained by introducing artificial errors that are upper bounds of actual errors. The propagation of artificial errors models the propagation of actual errors. When values in vectors result in discrete execution differences (e.g., following different paths), the execution is forked to capture the resulting output variations.

For parameterized data-processing programs, we develop a white-box program tuning framework to tune the program parameter configuration for the optimal data-processing result of each program input. To further reduce the parameter configuration overhead, we propose the first general framework to inject artificial intelligence (AI) into the program, so the intelligent program is able to predict the parameter configuration for each incoming input directly. However, as with many other ML/AI applications, the crucial challenge lies in feature selection, i.e., selection of the feature variables for predicting the target parameter specified by the users. Thus, we propose a novel approach combining program analysis and statistical analysis for better selection of program feature variables, which further helps target parameter prediction and improves the result.
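The dissertation's system instruments real data-processing programs; as a toy illustration of the value-to-vector idea (each float carries artificially perturbed shadow copies, and a branch where the shadows disagree marks an unstable, fork-worthy point), here is a small Python sketch. The perturbation magnitude and the class are invented for the example.

```python
import random

class VFloat:
    """A float shadowed by artificially perturbed copies; when the copies
    disagree at a branch, the computation is numerically unstable there
    and the real system would fork execution to capture both outcomes."""
    EPS = 1e-12  # assumed upper bound on the real rounding error

    def __init__(self, x, shadows=None):
        self.v = shadows if shadows is not None else \
            [x] + [x * (1 + random.uniform(-self.EPS, self.EPS))
                   for _ in range(2)]

    def __sub__(self, other):
        return VFloat(0.0, [a - b for a, b in zip(self.v, other.v)])

    def branch_gt(self, threshold=0.0):
        """One boolean per shadow; any disagreement flags instability."""
        return [a > threshold for a in self.v]

# Catastrophic cancellation: two nearly equal values are subtracted and
# the sign of the tiny difference decides a branch.
a, b = VFloat(1.0 + 1e-12), VFloat(1.0)
print((a - b).branch_gt())  # e.g. [True, False, True] -> unstable branch
```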
210

Requirements Elicitation for the User Experience in Troubleshooting and Debugging of Microservice Networks: A Study of How Visualization of Call Paths Can Be Utilized in a Software Tool

Sestorp, Isak January 2020 (has links)
No description available.
