1

Usability and productivity for silicon debug software: a case study

Singh, Punit 24 February 2012 (has links)
Semiconductor manufacturing is complex. Companies strive to lead their markets by delivering chips on time that are free of bugs (i.e., defects) and have very low power consumption, and new research drives new features in chips. The case study reported here concerns the usability and productivity of silicon debug software tools: the set of software used to find bugs before chips are delivered to the customer. The objective of the study is to improve the usability and productivity of these tools by introducing metrics, with the measurement results driving a concrete plan of action. The GQM (Goal, Question, Metric) methodology was used to define the measurements and gather data for them. The project was developed in two phases, and we took measurements using this method over both phases of the tool development; the findings from phase one improved the tool usability in the second phase. The lesson learnt is that tool usability is a complex measurement: improving usability means that users rely less on the tool's help button, experience less downtime, and input incorrect data less often. Although this study focused on three important tools, the same usability metrics can be applied to the remaining five tools. For defining productivity metrics we also used the GQM methodology, and a productivity measurement using historic data was done to establish a baseline; the baseline measurements identified some existing bottlenecks in the overall silicon debug process. We link productivity to the time it takes a debug tool user to complete the assigned task(s). The total time taken to use all the tools does not give us any actionable items for improving productivity; we will need to measure the time spent in each individual tool in the debug process, which is identified as future work. To improve usability we recommend making tools more robust in error handling and providing good help features. To improve productivity we recommend gathering data on where users spend most of their debug time, so that we can focus on improving that time-consuming part of debug and make users more productive. / text
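The abstract names the GQM (Goal, Question, Metric) approach but does not list the concrete metrics used. Below is a minimal, hypothetical sketch in Python of how a usability goal for a debug tool could be broken down under GQM; the goal, questions, and metric names (help-button clicks, downtime, invalid inputs) are illustrative assumptions drawn from the usability factors the abstract mentions, not the metrics actually defined in the study.

```python
# Hypothetical GQM breakdown for a silicon debug tool's usability goal.
# Goal/question/metric names are illustrative, not those from the study.
from dataclasses import dataclass, field


@dataclass
class Metric:
    name: str   # what is measured
    unit: str   # how it is expressed


@dataclass
class Question:
    text: str
    metrics: list[Metric] = field(default_factory=list)


@dataclass
class Goal:
    purpose: str
    questions: list[Question] = field(default_factory=list)


usability_goal = Goal(
    purpose="Improve usability of a silicon debug tool for its users",
    questions=[
        Question("How often do users need help to finish a task?",
                 [Metric("help_button_clicks_per_session", "count")]),
        Question("How much time is lost to tool problems?",
                 [Metric("downtime_per_session", "minutes")]),
        Question("How often do users supply invalid inputs?",
                 [Metric("invalid_inputs_per_session", "count")]),
    ],
)
```

Collecting the same metrics in each phase, as the study did across its two phases, is what makes the phase-one and phase-two results directly comparable.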
2

A Novel Simulation Based Approach for Trace Signal Selection in Silicon Debug

Komari, Prabanjan 20 October 2016 (has links)
No description available.
3

Algorithms and Low Cost Architectures for Trace Buffer-Based Silicon Debug

Prabhakar, Sandesh 17 December 2009 (has links)
An effective silicon debug technique uses a trace buffer to monitor and capture a portion of the circuit response during its functional, post-silicon operation. Because the available trace buffer space is limited, selection of the critical trace signals plays an important role in both minimizing the number of signals traced and maximizing the observability/restorability of the untraced signals during post-silicon validation. In this thesis, a new method is proposed for trace buffer signal selection for post-silicon debug. The selection favors signals with the greatest number of implications that are not implied by other signals. Then, based on the values of the traced signals during silicon debug, an algorithm that uses a SAT-based multi-node implication engine restores the values of untraced signals across multiple time-frames. A new multiplexer-based trace signal interconnection scheme and a new heuristic for trace signal selection based on implication-based correlation are also described; with this approach we can effectively trace twice as many signals with the same trace buffer width. A SAT-based greedy heuristic is also proposed to further prune the selected trace signal list so that multi-node implications are taken into account, and a state restoration algorithm is developed for the multiplexer-based trace signal interconnection scheme. Experimental results show that the proposed approaches select the trace signals effectively, giving a high restoration percentage compared with other techniques. Finally, we propose a lossless compression technique to increase the capacity of the trace buffer: real-time compression of the trace data using the Frequency-Directed Run-Length (FDR) code. In addition, we propose source transformation functions, namely difference vector computation, efficient ordering of trace flip-flops, and alternate vector reversal, that reduce the entropy of the trace data and make it more amenable to compression. The order of the trace flip-flops is computed off-chip using a probabilistic algorithm, while the difference vector computation and alternate vector reversal are implemented on-chip and incur negligible hardware overhead. Experimental results for sequential benchmark circuits show that this method gives a better compression percentage than dictionary-based techniques and yields up to 3X improvement in diagnostic capability, while its area overhead is lower than that of dictionary-based compression techniques. / Master of Science
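The FDR code mentioned above is a published variable-length code for runs of 0s. The sketch below is a minimal Python software model of two of the named steps: turning captured trace words into difference vectors and FDR-encoding a 0-run. It is only an illustration, not the on-chip implementation from the thesis, and the difference-vector definition used here (XOR of consecutive trace words) is the common one rather than a detail confirmed by the abstract.

```python
# Minimal software model of difference-vector computation and FDR run encoding.
# Trace words are modeled as Python ints of width `width`; not the on-chip design.

def difference_vectors(trace_words, width):
    """XOR each trace word with the previous one. Consecutive samples are often
    highly correlated, so the XOR stream contains long runs of 0s, which the
    FDR code compresses well."""
    prev, out = 0, []
    for w in trace_words:
        out.append((prev ^ w) & ((1 << width) - 1))
        prev = w
    return out


def fdr_encode_run(run_length):
    """FDR codeword for a run of `run_length` 0s terminated by a 1.
    Group A_k covers runs 2**k - 2 .. 2**(k+1) - 3; its codeword is a k-bit
    prefix ((k-1) ones then a 0) followed by a k-bit offset within the group."""
    k = 1
    while run_length > 2 ** (k + 1) - 3:
        k += 1
    prefix = "1" * (k - 1) + "0"
    tail = format(run_length - (2 ** k - 2), "0{}b".format(k))
    return prefix + tail


def fdr_encode_bits(bits):
    """Encode a 0/1 string as concatenated FDR codewords, one per 0-run ended
    by a 1 (a trailing run with no closing 1 is ignored in this sketch)."""
    codewords, run = [], 0
    for b in bits:
        if b == "0":
            run += 1
        else:
            codewords.append(fdr_encode_run(run))
            run = 0
    return "".join(codewords)


# Example: run lengths 0, 1, 2, 6 map to 00, 01, 1000, 110000 respectively.
assert [fdr_encode_run(n) for n in (0, 1, 2, 6)] == ["00", "01", "1000", "110000"]
```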
4

Enhancing silicon debug techniques via DFD hardware insertion

Yang, Joon Sung 22 October 2009 (has links)
As technology advances, larger and denser devices are being manufactured under shorter time-to-market requirements. Identifying and resolving problems in integrated circuits (ICs) is the main focus of the pre-silicon and post-silicon debug process. As indicated in the International Technology Roadmap for Semiconductors (ITRS), post-silicon debug is a major time-consuming challenge that has a significant impact on the development cycle of a new chip. Since it is difficult to acquire internal signal values, conventional debug techniques typically involve performing a binary search for failing vectors and taking mechanical measurements with a probing needle; silicon debug is thus a labor-intensive task that requires much experience in validating first silicon. Finding out when (temporal) and where (spatial) failures occur is the key issue in post-silicon debug. When first silicon arrives, test vectors and test applications are run on it to verify its functionality, and scan chains and on-chip memories have long been used to provide valuable internal signal observations for the silicon debug process. In this dissertation, a scan-based technique is presented to detect circuit misbehavior without halting the system. A debugging technique that uses a trace buffer is introduced to efficiently store a series of data obtained by a two-dimensional compaction technique. Debugging capability can be maximized by observing the right set of signals, so a method for automated selection of signals to observe is proposed. The investigation of signal observability is further extended to signal controllability through test point insertion, and novel test point insertion techniques are presented to reduce the area overhead of test point insertion. / text
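The abstract mentions automated selection of which signals to observe but gives no algorithm, so the sketch below is only a generic greedy illustration of the idea in Python: repeatedly pick the candidate signal with the largest marginal gain in an observability score until the trace buffer width is exhausted. The scoring function and the data it consumes are assumptions for illustration, not the method proposed in this dissertation.

```python
# Generic greedy illustration of trace-signal selection (not this work's method).
# `benefit(selected, candidate)` is a placeholder scoring function that returns
# how much additional observability the candidate adds to the current selection.

def greedy_select(candidates, buffer_width, benefit):
    selected = []
    remaining = set(candidates)
    while remaining and len(selected) < buffer_width:
        # Pick the signal with the largest marginal gain given current picks.
        best = max(remaining, key=lambda sig: benefit(selected, sig))
        if benefit(selected, best) <= 0:
            break  # no remaining candidate adds observability
        selected.append(best)
        remaining.remove(best)
    return selected
```

A real flow would derive `benefit` from the netlist, for example from how many unobserved flip-flop values become restorable once the candidate signal is traced.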
5

On-chip Tracing for Bit-Flip Detection during Post-silicon Validation

Vali, Amin January 2018 (has links)
Post-silicon validation is an important step in the implementation flow of digital integrated circuits and systems. Most validation strategies are based on ad-hoc solutions, such as guidelines from best practices, decided on a case-by-case basis for a specific design and/or application domain. Developing systematic approaches for post-silicon validation can mitigate the productivity bottlenecks that have emerged due to both design diversification and shrinking implementation cycles. Ever since integrating on-chip memory blocks became affordable, embedded logic analysis has been used extensively for post-silicon validation. Deciding at design time which signals should be traceable during the post-silicon phase was posed as an algorithmic problem a decade ago. Most of the proposed solutions focus on how to restore as much data as possible within a software simulator in order to facilitate the analysis of functional bugs, assuming that there are no electrically-induced design errors, e.g., bit-flips. In this thesis, it is first shown that analyzing logic inconsistencies in the post-silicon traces can aid the detection of bit-flips and their root-cause analysis; furthermore, when a bit-flip is detected, a list of suspect nets can be generated automatically. Since the rate of bit-flip detection, as well as the size of the list of suspects, depends on the debug data that was acquired, the trace signals must be selected consciously. Subsequently, new methods are presented to improve bit-flip detectability through an algorithmic approach to selecting the on-chip trace signals. Hardware assertion checkers can also be integrated on-chip to detect events of interest, as defined by the user; for example, they can detect a violation of a design property that captures a relationship between internal signals that is supposed to hold indefinitely, so long as no bit-flips occur in the physical prototype. Consequently, information collected from hardware assertion checkers can also provide useful debug information during post-silicon validation. Based on this observation, the last contribution of this thesis is a novel method to concurrently select a set of trace signals and a set of assertions to be integrated on-chip. / Thesis / Doctor of Philosophy (PhD)
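As an illustration of how logic inconsistencies in a trace can expose bit-flips, the Python sketch below checks each captured cycle against a set of invariant implications between traced nets (relationships that should hold in a fault-free design) and reports the nets involved in any violated implication as suspects. The implication format and the suspect-reporting policy are simplifying assumptions for illustration; the thesis's actual detection and root-cause analysis methods are more involved.

```python
# Illustrative bit-flip detection from trace data: flag cycles where an
# invariant implication between traced nets is violated and report suspects.
# The implication list and suspect policy are simplified assumptions.

def find_bit_flip_suspects(trace, implications):
    """trace: list of dicts {net_name: 0 or 1}, one dict per captured cycle.
    implications: list of ((net_a, val_a), (net_b, val_b)) meaning
    'net_a == val_a implies net_b == val_b' in a fault-free design."""
    suspects = []
    for cycle, sample in enumerate(trace):
        for (a, va), (b, vb) in implications:
            if sample.get(a) == va and sample.get(b) is not None and sample[b] != vb:
                # The implication is violated: one of the two nets (or the
                # logic feeding them) may have suffered a bit-flip this cycle.
                suspects.append((cycle, {a, b}))
    return suspects


# Tiny usage example with hypothetical nets: 'req' high should force 'busy' high.
trace = [{"req": 1, "busy": 1}, {"req": 1, "busy": 0}, {"req": 0, "busy": 0}]
print(find_bit_flip_suspects(trace, [(("req", 1), ("busy", 1))]))
# -> [(1, {'req', 'busy'})]  (set ordering may vary)
```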
