About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Evaluation of partial reconfiguration for FPGA debugging

Siverskog, Jacob January 2010 (has links)
Reconfigurable computing is an old concept that has become increasingly popular during the past couple of decades. The concept combines the flexibility of software with the performance of hardware. One important contributing factor to this rise in popularity is the availability of FPGAs (field-programmable gate arrays), which realize the concept by allowing the hardware to be reconfigured dynamically. The current state of reconfigurable computing is discussed further in the thesis. Debugging is a vital part of the development of a hardware design. It can be done in several ways depending on the situation. The most common way is to perform simulations, but in some cases the fault-finding has to be done once the design is implemented in hardware. In this thesis a framework concept is designed that utilizes and evaluates some of these reconfigurable computing ideas. The framework provides debugging possibilities for FPGA designs in a novel way, with a modular system where each module provides means to aid in finding a specific kind of fault. The framework is added to an existing design and offers the user a glimpse into the design's behavior and the hardware it runs on. One of the debug modules will be released separately under a free license. It allows the developer to see the contents of the memories in a design without requiring special debugging equipment.
92

Empirical study - pairwise prediction of fault based on coverage

Shamasunder, Shalini 14 June 2012 (has links)
Researchers and engineers in the field of software testing have valued coverage as a testing metric for decades, and various empirical results have shown that as coverage increases, the ability of a test suite to detect faults also increases. As a result, numerous coverage criteria have been introduced. Which coverage criteria correlate better with fault detection, and which have lower correlation? In other words, does it make more sense to achieve a higher percentage of coverage under criterion c1 than under criterion c2 in order to attain a good fault detection rate? Do the popular block and branch coverage criteria perform better, or does path coverage outperform them? Answering these questions will help future engineers and researchers generate more efficient test suites and obtain a better metric of measurement; it also helps with test suite minimization. This thesis studies the relationship between coverage and mutant kill rates over large, randomly generated test suites for statement, branch, predicate, and path coverage of two realistic programs in order to answer these open questions. The experiments both confirm conventional wisdom about these coverage criteria and contain a few surprises. / Graduation date: 2013
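The central measurement in a study like this is the statistical correlation between a suite's coverage level and its mutant kill rate. As a minimal sketch (Pearson's r in C; the data and variable names below are hypothetical illustrations, not the thesis's experimental setup):

```c
#include <math.h>
#include <stdio.h>

/* Pearson correlation between per-suite coverage and mutant kill rate.
   Assumes n >= 2 and non-constant inputs (sketch: no zero-variance guard). */
static double pearson(const double *x, const double *y, int n)
{
    double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx += x[i]; sy += y[i];
        sxx += x[i] * x[i]; syy += y[i] * y[i];
        sxy += x[i] * y[i];
    }
    double cov = sxy - sx * sy / n;   /* n * covariance */
    double vx  = sxx - sx * sx / n;   /* n * variance of x */
    double vy  = syy - sy * sy / n;   /* n * variance of y */
    return cov / sqrt(vx * vy);
}

int main(void)
{
    /* Hypothetical data: branch coverage (%) and mutant kill rate (%)
       for five randomly generated test suites. */
    double branch_cov[] = { 42.0, 55.5, 61.0, 73.2, 88.9 };
    double kill_rate[]  = { 31.0, 44.7, 49.3, 66.1, 79.5 };
    printf("r = %.3f\n", pearson(branch_cov, kill_rate, 5));
    return 0;
}
```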
93

Expert Systems: Where Are We? And Where Do We Go from Here?

Davis, Randall 01 June 1982 (has links)
Work on Expert Systems has received extensive attention recently, prompting growing interest in a range of environments. Much has been made of the basic concept and the rule-based system approach typically used to construct the programs. Perhaps this is a good time then to review what we know, assess the current prospects, and suggest directions appropriate for the next steps of basic research. I'd like to do that today and propose to do it by taking you on a journey of sorts, a metaphorical trip through the State of the Art of Expert Systems. We'll wander about the landscape, ranging from the familiar territory of the Land of Accepted Wisdom, to the vast unknowns at the Frontiers of Knowledge. I guarantee we'll all return safely, so come along...
94

Intelligent Assistance for Program Recognition, Design, Optimization, and Debugging

Rich, Charles, Waters, Richard C. 01 January 1989 (has links)
A recognition assistant will help reconstruct the design of a program, given only its source code. A design assistant will assist a programmer by detecting errors and inconsistencies in his design choices and by automatically making many straightforward implementation decisions. An optimization assistant will help improve the performance of programs by identifying intermediate results that can be reused. A debugging assistant will aid in the detection, localization, and repair of errors in designs as well as completed programs.
95

Combining Associational and Causal Reasoning to Solve Interpretation and Planning Problems

Simmons, Reid G. 01 August 1988 (has links)
This report describes a paradigm for combining associational and causal reasoning to achieve efficient and robust problem-solving behavior. The Generate, Test and Debug (GTD) paradigm generates initial hypotheses using associational (heuristic) rules. The tester verifies hypotheses, supplying the debugger with causal explanations for bugs found if the test fails. The debugger uses domain-independent causal reasoning techniques to repair hypotheses, analyzing domain models and the causal explanations produced by the tester to determine how to replace faulty assumptions made by the generator. We analyze the strengths and weaknesses of associational and causal reasoning techniques, and present a theory of debugging plans and interpretations. The GTD paradigm has been implemented and tested in the domains of geologic interpretation, the blocks world, and Tower of Hanoi problems.
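The control flow of the GTD paradigm can be sketched compactly. The following C skeleton is a hedged illustration of the loop described above, not Simmons's implementation; the types and callbacks are hypothetical placeholders that a particular domain (geologic interpretation, blocks world, Tower of Hanoi) would fill in:

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct Hypothesis  Hypothesis;  /* domain-specific hypothesis */
typedef struct Explanation Explanation; /* causal explanation of a bug */

/* Callbacks supplied by a particular problem domain. */
typedef struct {
    Hypothesis *(*generate)(void);                     /* associational (heuristic) rules */
    bool        (*test)(Hypothesis *, Explanation **); /* verify; explain any failure */
    Hypothesis *(*debug)(Hypothesis *, Explanation *); /* domain-independent causal repair */
} Domain;

/* Generate, Test and Debug: repair hypotheses until one passes the test. */
Hypothesis *gtd_solve(const Domain *d, int max_repairs)
{
    Hypothesis *h = d->generate();
    for (int i = 0; h != NULL && i <= max_repairs; i++) {
        Explanation *why = NULL;
        if (d->test(h, &why))
            return h;          /* hypothesis verified by the tester */
        h = d->debug(h, why);  /* replace faulty assumptions using the explanation */
    }
    return NULL;               /* no verified hypothesis within the repair budget */
}
```

The key design point visible in the skeleton is that the tester does not merely report failure: it hands the debugger a causal explanation, which is what lets domain-independent repair replace the generator's faulty associational assumptions.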
96

Embedded In-Circuit Emulation and Tracing for Bus-based System-on-Chip Integration

Kao, Chung-fu 10 September 2007 (has links)
In the System-on-Chip (SoC) era, common industry estimates are that functional verification takes approximately 70% of the total effort on a project. Under time-to-market constraints, reducing SoC verification and debugging time is a challenge. Since a microprocessor is an essential part of an SoC, we first focus on the debugging of microprocessors and present an in-circuit emulation (ICE) module that can be embedded with a microprocessor core. The ICE module, based on the IEEE 1149.1 JTAG architecture, supports typical debugging and testing mechanisms, including boundary scan paths, partial scan paths, single stepping, internal resource monitoring and modification, breakpoint detection, and mode switching between debugging and normal modes. The architecture of the ICE module is parameterized and retargetable to different microprocessors. It has been successfully integrated with two microprocessors with significantly different architectures: an 8-bit industrial embedded microcontroller (HT48x00) and a 32-bit ARM7-like embedded microprocessor. FPGA prototypes and a chip implementation have been completed. Experiments show that real-time (on-line) debugging at full speed is possible with the embedded ICE at a minor gate-count overhead. Collecting program execution traces at full speed is essential to the analysis and debugging of the real-time software behavior of a complex system. However, the generation rate and the size of real-time program traces are so large that real-time program tracing is often infeasible without proper hardware support. This thesis therefore presents a hardware approach that compresses program execution traces in real time in order to reduce the trace size. The approach consists of three modularized phases: (1) branch/target filtering, (2) branch/target address encoding, and (3) Lempel-Ziv-based data compression. Synthesizable RTL code for the proposed hardware was written to analyze its cost and speed, and typical multimedia benchmarks were used to measure the compression results. The results show that the hardware is capable of real-time compression, achieving a compression ratio of 454:1, far better than the 5:1 achieved by typical existing hardware approaches. Furthermore, the modularized approach makes it possible to trade off hardware cost (typically from 1K to 50K gates) against the achievable compression ratio (typically from 5:1 to 454:1). For SoC debugging, bus signal tracing allows the information generated by the system to be collected for later observation, debugging, and analysis. However, the generation rate and the size of real-time system traces are again so large that a tracing mechanism that reduces the trace size efficiently is needed. We therefore propose a multi-resolution bus trace approach. The hardware bus tracer consists of two major stages: (1) a signal monitoring and tracing stage, and (2) a trace compression stage. In the first stage, the designer can trace signals in fine or coarse detail depending on the debugging purpose; in other words, the multi-resolution approach provides a trade-off between trace accuracy and trace depth. In the second stage, the bus tracer compresses the trace efficiently, effectively increasing the capacity of the on-chip trace storage. On the host, an analyzer tool decompresses the trace data for later observation and debugging.
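The first of the three compression phases rests on a standard observation: a program-counter stream is fully reconstructible offline from its control-flow discontinuities plus the program image, so sequential runs need not be traced at all. A software sketch of that filtering step (the two-word record format and fixed 4-byte instruction width are assumptions for illustration; the thesis implements this in hardware RTL):

```c
#include <stdint.h>
#include <stdio.h>

#define INSN_SIZE 4u  /* assumes a fixed-width 32-bit ISA */

/* Phase (1), branch/target filtering: emit a record only where the PC
   stream is discontinuous (taken branch, call, exception). */
static void filter_trace(const uint32_t *pc, size_t n, FILE *out)
{
    for (size_t i = 1; i < n; i++) {
        if (pc[i] != pc[i - 1] + INSN_SIZE) {
            uint32_t rec[2] = { pc[i - 1], pc[i] }; /* branch addr, target addr */
            fwrite(rec, sizeof rec, 1, out);
        }
    }
}

int main(void)
{
    /* Hypothetical PC stream: sequential execution, then one taken branch. */
    uint32_t pcs[] = { 0x8000, 0x8004, 0x8008, 0x9000, 0x9004 };
    filter_trace(pcs, 5, stdout); /* emits the single pair (0x8008, 0x9000) */
    return 0;
}
```

Phases (2) and (3) then shrink these records further, by delta-encoding the addresses and applying dictionary compression to the encoded stream.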
97

Scaling SAT-based Automated Design Debugging with Formal Methods

Keng, Brian 12 February 2010 (has links)
The size and complexity of modern VLSI computer chips are growing at a rapid pace. Functional debugging is increasingly becoming a bottleneck in the design flow, where it can take up to 60% of the total verification time. Scaling existing automated debugging tools is necessary in order to continue along this path of rapid growth and innovation in the semiconductor industry. This thesis aims to scale automated debugging techniques with two contributions. The first contribution introduces a succinct memory model for automated design debugging that dramatically lowers the memory requirements of the debugging problem. The second contribution presents a scalable SAT-based design debugging algorithm that uses a mathematical technique called interpolation to divide the debugging problem into multiple parts across time, greatly reducing its peak memory requirements. Extensive experiments on real designs demonstrate the benefit of this work.
100

On the Porting and Debugging of Linux Kernel

Li, Chih-Yuen 08 February 2006 (has links)
In recent years, more and more vendors have adopted Linux as the embedded operating system for their electronic products because of its combination of reliability, performance, good tool chains, portability, and configurability. However, the Linux kernel is complex, and different electronic products may use different platforms; for this reason, Linux often has to be ported to new platforms. In this thesis, we describe in detail how we ported Linux to a new platform which is similar to, but not exactly the same as, another platform and thus is not currently supported by the kernel. Moreover, we propose two robust debugging techniques to solve problems we encountered during this work. One makes it easier to trace a module with an ICE; the other allows us to access the internal registers of the processor through the /proc filesystem rather than writing a program every time we need to access those registers for, say, debugging purposes. Using these techniques, we show that the time required to port and debug a Linux kernel can be significantly reduced.
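As a sketch of the second technique, a kernel module can expose a processor-internal register through /proc so it can be inspected with a simple `cat`. This is a hedged illustration using the current proc interface (struct proc_ops, kernel 5.6+), not the thesis's 2006-era code, and the ARM32 CP15 SCTLR read is just one example of an internal register:

```c
#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

/* Show handler: read an internal register and print it. Illustrative:
   reads the ARM32 CP15 SCTLR (system control register); builds only on ARM. */
static int sctlr_show(struct seq_file *m, void *v)
{
    unsigned long val;
    asm volatile("mrc p15, 0, %0, c1, c0, 0" : "=r"(val));
    seq_printf(m, "SCTLR = 0x%08lx\n", val);
    return 0;
}

static int sctlr_open(struct inode *inode, struct file *file)
{
    return single_open(file, sctlr_show, NULL);
}

static const struct proc_ops sctlr_ops = {
    .proc_open    = sctlr_open,
    .proc_read    = seq_read,
    .proc_lseek   = seq_lseek,
    .proc_release = single_release,
};

static int __init sctlr_init(void)
{
    /* Creates /proc/cpu_sctlr (name chosen for this example), readable
       with `cat /proc/cpu_sctlr` instead of an ad hoc user program. */
    proc_create("cpu_sctlr", 0444, NULL, &sctlr_ops);
    return 0;
}

static void __exit sctlr_exit(void)
{
    remove_proc_entry("cpu_sctlr", NULL);
}

module_init(sctlr_init);
module_exit(sctlr_exit);
MODULE_LICENSE("GPL");
```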
