  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Design of Robust Micro-Control Unit

Shih, Wei-Chih 19 August 2008 (has links)
With the progress of VLSI technology, microcontroller operation has become increasingly susceptible to interference from the external environment. Recent microcontroller designs therefore pursue not only speed and performance but also various fault-tolerant techniques to enhance reliability and safety. This thesis, targeting the market for fault-tolerant microcontrollers, presents a Robust Micro-Control Unit (RMCU) built on a dual-core architecture implementing the ARM9 ISA. The RMCU provides two operation modes for its fault-tolerant mechanism: synchronize mode and processor test mode. In synchronize mode, both processors execute the same program concurrently; their results are compared, and every mismatch indicates a transient fault in one of the two processors. When a transient fault occurs, the two processors use an instruction-retry mechanism to recover system operation. If the number of errors at the same address exceeds a configured threshold, the fault is considered permanent: the processors are halted and enter processor test mode for a functional test. Based on the test results, the faulty processor is shut down and the system returns to normal operation. This approach overcomes the limitation of traditional dual-core fault-tolerant architectures, which cannot recover from permanent faults. In addition to the fault-tolerance mechanism, this thesis also designs an RMCU debug platform to accelerate hardware and software development and validation. The debug platform includes a JTAG-based OCD (On-Chip Debugging) unit and a debug interface program. Besides reading and writing registers and memory, setting breakpoints and watchpoints, and single-stepping, it can actively inject external interrupts to support more effective ISR (Interrupt Service Routine) debugging. Finally, the RMCU fault-tolerant mechanisms and debug platform are implemented on an FPGA; simulation and testing results demonstrate the feasibility of the RMCU.
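As a rough illustration of the synchronize-mode compare-and-retry flow described in this abstract, the following Python sketch models the decision logic only; the names (LockstepMonitor, retry_limit, mode labels) are hypothetical and do not come from the thesis or its hardware design.

```python
# Minimal sketch of a lockstep compare/retry decision, assuming hypothetical names;
# not the RMCU's actual RTL.
SYNC, TEST = "synchronize", "processor-test"

class LockstepMonitor:
    def __init__(self, retry_limit=3):
        self.retry_limit = retry_limit   # mismatches at one address before declaring a permanent fault
        self.error_counts = {}           # instruction address -> consecutive mismatch count
        self.mode = SYNC

    def check(self, address, result_a, result_b):
        """Compare the two cores' results for one retired instruction."""
        if result_a == result_b:
            self.error_counts.pop(address, None)   # agreement clears the history for this address
            return "ok"
        count = self.error_counts.get(address, 0) + 1
        self.error_counts[address] = count
        if count > self.retry_limit:     # repeated errors at the same address look permanent
            self.mode = TEST             # halt both cores and run the functional self-test
            return "permanent-fault"
        return "retry"                   # transient fault: re-execute from the faulting instruction
```

In this reading of the abstract, a single upset costs only an instruction retry, while repeated mismatches at the same address escalate to the self-test that disables the defective core.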
102

Performance understanding and tuning of iterative computation using profiling techniques

Ozarde, Sarang Anil 18 May 2010 (has links)
Most applications spend a significant amount of time in the iterative parts of a computation. They typically iterate over the same set of operations with different values, which either depend on inputs or on values calculated in previous iterations. While loops capture some iterative behavior, in many cases such behavior is spread over the whole program, sometimes through recursion. Understanding the iterative behavior of a computation can be very useful for fine-tuning it. In this thesis, we present a profiling-based framework to understand and improve the performance of iterative computation. We capture the state of iterations in two aspects: (1) algorithmic state and (2) program state. We demonstrate the applicability of the framework for capturing algorithmic state by applying it to SAT solvers, and for capturing program state by applying it to a variety of benchmarks exhibiting completely parallelizable loops. Further, we show that such a performance characterization can be successfully used to improve the performance of the underlying application. Many high-performance combinatorial optimization applications involve SAT solving. A variety of SAT solvers have been developed that employ different data structures and different propagation methods for converging on a fixed point and generating a satisfiable solution. The performance debugging and tuning of SAT solvers for a given domain is an important problem encountered in practice, yet not much work has been done to quantify the iterative efficiency of SAT solvers. In this work, we develop quantifiable measures of the convergence efficiency of SAT solvers. Here, we capture the algorithmic state of the application by tracking the assignment of variables at each iteration. A compact representation of the profile data is developed to track the rate of progress and convergence. The novelty of this approach is that it is independent of the specific strategies used in individual solvers, yet it gives key insights into the "progress" and "convergence behavior" of the solver in terms of the specific implementation at hand. An analysis tool interprets the profile data and extracts metrics such as average convergence rate, iteration efficiency, and variable stabilization. Finally, using this system we present a study of four well-known SAT solvers, comparing their iterative efficiency on random as well as industrial benchmarks. Using the framework, iterative inefficiencies that lead to slow convergence are identified, and we show how to fine-tune the solvers by adapting key steps. We also show that the same profile data representation can easily be applied to loops in general to capture their program state. One of the key attributes of the program state inside loops is their branch behavior. We demonstrate the applicability of the framework by profiling completely parallelizable loops (no cross-iteration dependence) and storing the branching behavior of each iteration. The branch behavior across a group of iterations is important in devising thread warps from parallel loops for efficient execution on GPUs. We show how some loops can be effectively parallelized on GPUs using this information.
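One way to read the "convergence rate" idea in this abstract is to measure, per iteration, how many variables already hold the value they will have in the final satisfying assignment. The sketch below is a hedged illustration under that assumption; the function names and data layout are invented for clarity and are not the thesis's actual tool.

```python
# Illustrative convergence metrics over a solver's per-iteration variable assignments.
# assignment_history: list of dicts (variable -> bool), one snapshot per iteration.
# final_assignment: the satisfying assignment the solver eventually produced.

def convergence_profile(assignment_history, final_assignment):
    """Fraction of variables per iteration that already match the final assignment."""
    total = max(1, len(final_assignment))          # guard against an empty assignment
    profile = []
    for snapshot in assignment_history:
        settled = sum(1 for var, val in snapshot.items()
                      if final_assignment.get(var) == val)
        profile.append(settled / total)
    return profile

def average_convergence_rate(profile):
    """Average per-iteration gain in settled variables; higher means faster convergence."""
    gains = [b - a for a, b in zip(profile, profile[1:])]
    return sum(gains) / len(gains) if gains else 0.0
```

Because the metric only looks at assignments over time, it is agnostic to the solver's internal propagation strategy, which matches the abstract's claim of solver-independence.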
103

Understanding and supporting end-user debugging strategies

Grigoreanu, Valentina. January 1900 (has links)
Thesis (Ph. D.)--Oregon State University, 2010. / Printout. Includes bibliographical references (leaves 223-236). Also available on the World Wide Web.
104

Classifying atomicity violation warnings using machine learning

Li, Hongjiang. January 2008 (has links)
Thesis (M.S.)--University of Wyoming, 2008. / Title from PDF title page (viewed on August 5, 2009). Includes bibliographical references (p. 38-39).
105

Source level debugging of circuits synthesized from high level language descriptions

Hemmert, Karl S., January 2004 (has links) (PDF)
Thesis (Ph. D.)--Brigham Young University. Dept. of Electrical and Computer Engineering, 2004. / Includes bibliographical references (p. 143-149).
106

Distributed case based reasoning for fault management

Tran, Ha Manh January 2009 (has links)
Also published as: Doctoral dissertation, University of Bremen, 2009
107

Designing, debugging, and deploying configurable computing machine-based applications using reconfigurable computing application frameworks

Slade, Anthony Lynn, January 2003 (has links) (PDF)
Thesis (M.S.)--Brigham Young University. Dept. of Electrical and Computer Engineering, 2003. / Includes bibliographical references (p. 229-232).
108

Incremental synthesis of inductive assertions for program verification

Britton, Dianne Ellen, 1950- January 1977 (has links)
No description available.
109

Simplifying code generation through peephole optimization

Davidson, Jack W. (Jack Winfred) January 1981 (has links)
Producing compilers that generate good object code is difficult. The early phases of the compiler, lexical and syntactic analysis, have been automated. The later phases, code generation and optimization, are more difficult because of the wide range of machine architectures. This dissertation describes a technique for the rapid implementation of production-quality compilers through the use of a machine-independent retargetable peephole optimizer, PO. PO is retargeted by providing a description of the new machine. PO simplifies many of the tasks associated with developing compilers. It simplifies code generation by eliminating most of the case analysis typically necessary to produce good code. It simplifies the optimization phase by collecting several disparate optimizations and generalizing them as peephole optimizations. PO also demonstrates that traditional optimizations, such as register allocation, common subexpression elimination, and removal of unreachable code, may be done more thoroughly and completely when information about the target machine is available.
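To make the peephole idea concrete, here is a small Python sketch of a generic adjacent-instruction rewrite pass; the instruction tuples and the single store-then-reload pattern are hypothetical illustrations, not PO's machine descriptions or rules.

```python
# Toy peephole pass: repeatedly scan adjacent instruction pairs and collapse known patterns.
# Instructions are tuples like ("store", "r1", "x") or ("load", "r1", "x"); purely illustrative.

def peephole(instrs):
    changed = True
    while changed:
        changed = False
        out, i = [], 0
        while i < len(instrs):
            pair = instrs[i:i + 2]
            # Pattern: a store of a register to a location, immediately followed by a
            # reload of that location into the same register, collapses to the store alone.
            if (len(pair) == 2
                    and pair[0][0] == "store" and pair[1][0] == "load"
                    and pair[0][1] == pair[1][1]      # same register
                    and pair[0][2] == pair[1][2]):    # same memory location
                out.append(pair[0])                   # keep the store, drop the redundant load
                i += 2
                changed = True
            else:
                out.append(instrs[i])
                i += 1
        instrs = out
    return instrs

# Example: [("store","r1","x"), ("load","r1","x"), ("add","r1","r2")]
# rewrites to [("store","r1","x"), ("add","r1","r2")].
```

A retargetable optimizer in this spirit would draw its patterns from a machine description rather than hard-coding them, which is what lets one optimizer serve many code generators.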
110

MINIFOR: minicomputer real-time FORTRAN programming through simulation

Upchurch, James Kimble, 1943- January 1972 (has links)
No description available.
