1

Algorithms and Low Cost Architectures for Trace Buffer-Based Silicon Debug

Prabhakar, Sandesh (17 December 2009)
An effective silicon debug technique uses a trace buffer to monitor and capture a portion of the circuit response during functional, post-silicon operation. Because the trace buffer offers only limited space, selecting the critical trace signals is central both to minimizing the number of signals traced and to maximizing the observability/restorability of the untraced signals during post-silicon validation. In this thesis, a new method is proposed for trace buffer signal selection for post-silicon debug. The selection favors signals with the largest number of implications that are not implied by other signals. Then, based on the values of the traced signals during silicon debug, an algorithm that uses a SAT-based multi-node implication engine is introduced to restore the values of untraced signals across multiple time-frames. A new multiplexer-based trace signal interconnection scheme and a new heuristic for trace signal selection based on implication-based correlation are also described; with this approach, we can effectively trace twice as many signals with the same trace buffer width. A SAT-based greedy heuristic further prunes the selected trace signal list to account for multi-node implications, and a state restoration algorithm is developed for the multiplexer-based interconnection scheme. Experimental results show that the proposed approaches select trace signals effectively, giving a high restoration percentage compared with other techniques. Finally, we propose a lossless compression technique to increase the effective capacity of the trace buffer: real-time compression of the trace data using the Frequency-Directed Run-Length (FDR) code. In addition, we propose source transformation functions, namely difference vector computation, efficient ordering of trace flip-flops, and alternate vector reversal, that reduce the entropy of the trace data and make it more amenable to compression. The order of the trace flip-flops is computed off-chip using a probabilistic algorithm; the difference vector computation and alternate vector reversal are implemented on-chip and incur negligible hardware overhead. Experimental results for sequential benchmark circuits show that this method gives a better compression percentage than dictionary-based techniques and yields up to a 3X improvement in diagnostic capability. We also observe that the area overhead of the proposed approach is lower than that of dictionary-based compression techniques. / Master of Science
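The difference-vector transform and FDR coding lend themselves to a compact illustration. The following Python sketch feeds XOR-differenced trace vectors into an FDR encoder, using the FDR code as commonly defined in the test-compression literature (group k covers runs of 2^k - 2 to 2^(k+1) - 3 zeros, with a k-bit prefix and k-bit tail); the function names and the 8-bit vector framing are illustrative assumptions, not code from the thesis.

```python
def difference_vectors(trace):
    """XOR each trace vector with its predecessor so that slowly
    changing signals yield long runs of 0s."""
    out = [trace[0]]
    for prev, cur in zip(trace, trace[1:]):
        out.append(prev ^ cur)
    return out

def fdr_encode_run(run_length):
    """FDR codeword for one run of 0s terminated by a 1: group k covers
    runs of 2**k - 2 .. 2**(k+1) - 3 zeros and uses a k-bit prefix
    ((k-1) ones then a zero) plus a k-bit binary tail."""
    k = 1
    while run_length > 2 ** (k + 1) - 3:
        k += 1
    prefix = "1" * (k - 1) + "0"
    tail = format(run_length - (2 ** k - 2), "0%db" % k)
    return prefix + tail

def fdr_encode(bits):
    """Concatenate FDR codewords, one per 0-run terminated by a 1.
    (A real encoder must also flush a final run with no terminating 1.)"""
    out, run = [], 0
    for b in bits:
        if b == "0":
            run += 1
        else:
            out.append(fdr_encode_run(run))
            run = 0
    return "".join(out)

# 8-bit trace vectors; the difference transform exposes long 0-runs.
trace = [0b10110001, 0b10110001, 0b10110011, 0b10110011]
bitstream = "".join(format(v, "08b") for v in difference_vectors(trace))
print(bitstream, "->", fdr_encode(bitstream))
```

Slowly changing trace signals produce long 0-runs after the difference transform, which is exactly the run-length distribution the FDR code rewards with short codewords.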
2

Reverse Engineering Behavioural Models by Filtering out Utilities from Execution Traces

Braun, Edna (10 September 2013)
An important issue in software evolution is the time and effort needed to understand existing applications. Reverse engineering software to recover behavioural models is difficult, and it is complicated by the lack of a standardized way of extracting and visualizing knowledge. In this thesis, we study a technique for automatically extracting static and dynamic data from software, filtering and analysing the data, and visualizing the behavioural model of a selected feature of a software application. We also investigate the usefulness of the generated diagrams as documentation for the software. We present a literature review of studies that have used static and dynamic data analysis for software comprehension; a set of criteria is created, and each approach, including this thesis’ technique, is compared using those criteria. We propose an approach to simplify lengthy traces by filtering out software components that are too low level to contribute to a high-level picture of the selected feature. We use static information to identify and remove small and simple (uncomplicated) software components from the trace. We define a utility method as any element of a program designed for the convenience of the designer and implementer and intended to be accessed from multiple places within a certain scope of the program. Utilityhood is defined as the extent to which a particular method can be considered a utility, and it is calculated using different combinations of selected dynamic and static variables. Methods with high utilityhood values are detected and removed iteratively. By eliminating utilities, we are left with a much smaller trace, which is then visualized using the Use Case Map (UCM) notation, a scenario language used to specify and explain the behaviour of complex systems. Comparing the UCMs generated by different algorithm variants lets us identify the algorithm that produces a UCM closest to the designers' mental model. Although no single algorithm was best in all cases, a trend emerged: three of the four best algorithms (out of eight investigated) used method complexity and method lines of code among their parameters. We also validated the results by comparing them against a list of methods provided by the creators of the software and computing precision and recall. Seven of the eight participants agreed or strongly agreed that using UCM diagrams to visualize reduced traces is a valid approach, and none disagreed.
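The abstract does not give the utilityhood formula, so the Python sketch below is only one plausible reading of it: a weighted score over the variables the thesis names (callers observed in the trace, method lines of code, method complexity), with iterative removal of the highest-scoring methods. All names, weights, and the threshold are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Method:
    name: str
    callers: int      # dynamic: distinct call sites seen in the trace
    loc: int          # static: method lines of code
    complexity: int   # static: cyclomatic complexity

def utilityhood(m, w_callers=0.5, w_loc=0.25, w_cx=0.25):
    """Higher score = more utility-like: widely called, short, simple.
    Weights are illustrative assumptions, not the thesis's values."""
    fan_in = min(m.callers, 20) / 20.0       # normalize to [0, 1]
    brevity = 1.0 / max(m.loc, 1)
    simplicity = 1.0 / max(m.complexity, 1)
    return w_callers * fan_in + w_loc * brevity + w_cx * simplicity

def filter_trace(trace, methods, threshold=0.5):
    """Iteratively drop calls to the highest-utilityhood method until
    no remaining method scores above the threshold."""
    scores = {m.name: utilityhood(m) for m in methods}
    while scores:
        top = max(scores, key=scores.get)
        if scores[top] < threshold:
            break
        trace = [call for call in trace if call != top]
        del scores[top]
    return trace

methods = [
    Method("Logger.log", callers=40, loc=3, complexity=1),      # utility-like
    Method("Order.checkout", callers=2, loc=120, complexity=14),
]
trace = ["Order.checkout", "Logger.log", "Logger.log", "Order.checkout"]
print(filter_trace(trace, methods))  # ['Order.checkout', 'Order.checkout']
```

The design point this mirrors is the one the abstract reports empirically: method complexity and lines of code carry most of the discriminating power, with dynamic fan-in breaking ties.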
3

Modeling of On-Chip Trace Architectures for Embedded Systems (original title: Modellierung von On-Chip-Trace-Architekturen für eingebettete Systeme)

Irrgang, Kai-Uwe (5 June 2015)
Trace, the non-invasive recording of system states while an embedded system runs in real time under realistic operating conditions and interacts with its environment, is an important part of software testing. The need for on-chip trace arises from the declining applicability of established off-chip trace tools. An essential component of on-chip trace architectures is the reduction of the trace data volume, performed directly on the chip at the rate at which the data is produced. The focus here is on tracing the instruction flow of processors. The current state of research shows two kinds of solutions: simple ones whose compression factor is too small, and more elaborate ones that deliver an incomplete instruction trace when sequential instructions are executed conditionally. So far, no solution provides a complete instruction trace with high compression; this thesis closes that gap. The systematic design of the new on-chip trace architecture begins with a comprehensive analysis of typical benchmark programs, from whose results the fundamental design decisions are derived. The bit sequences of execution bits produced by conditional instruction execution and the target addresses of executed indirect branches are processed in independent compressors; a downstream compressor for the messages of these two compressors is optional and can increase the compression further. This partitioning is an architectural novelty. The compression of bit sequences has so far been a largely untreated field; for this purpose, a sliding dictionary with single-bit granularity has been implemented. Comparisons with the existing architectures examined show the superiority of the new architecture in terms of compression. A complete instruction trace has been realized for processors both with and without conditionally executable sequential instructions.
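A sliding dictionary at single-bit granularity can be sketched as an LZ77-style matcher over the execution-bit sequence. The Python sketch below is a minimal software model under stated assumptions: a fixed window and simple ('match', offset, length) / ('lit', bit) tokens; the thesis's actual hardware token format is not reproduced here.

```python
def compress_bits(bits, window=64, min_match=4):
    """LZ77-style compression at single-bit granularity: find the longest
    match of the upcoming bits inside a sliding window, emitting
    ('match', offset, length) tokens, or ('lit', bit) for short matches."""
    tokens, i = [], 0
    while i < len(bits):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            l = 0
            # Matches may run past position i (overlapping, run-like copies).
            while i + l < len(bits) and bits[j + l] == bits[i + l]:
                l += 1
            if l > best_len:
                best_len, best_off = l, i - j
        if best_len >= min_match:
            tokens.append(("match", best_off, best_len))
            i += best_len
        else:
            tokens.append(("lit", bits[i]))
            i += 1
    return tokens

def decompress_bits(tokens):
    """Invert compress_bits; out[-off] copies handle overlapping matches."""
    out = []
    for t in tokens:
        if t[0] == "lit":
            out.append(t[1])
        else:
            _, off, length = t
            for _ in range(length):
                out.append(out[-off])
    return "".join(out)

# Execution-bit sequence from conditional execution: a repeating pattern
# compresses to a few literals plus one long overlapping match.
bits = "1110" * 8
tokens = compress_bits(bits)
assert decompress_bits(tokens) == bits
print(tokens)
```

Bit-level granularity is the point of difference from byte-oriented dictionary schemes: execution bits from conditional instructions form highly repetitive streams that a byte-aligned dictionary would fragment.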
