1

Acceleration of Hardware Testing and Validation Algorithms using Graphics Processing Units

Li, Min 16 November 2012 (has links)
With the advances of very large scale integration (VLSI) technology, the feature size has been shrinking steadily together with the increase in the design complexity of logic circuits. As a result, the effort required for designing, testing, and debugging digital systems has increased tremendously. Although electronic design automation (EDA) algorithms have been studied extensively to accelerate such processes, some computationally intensive applications still take long execution times. This is especially the case for testing and validation. In order to meet time-to-market constraints and to arrive at a bug-free design or product, the work presented in this dissertation studies the acceleration of EDA algorithms on Graphics Processing Units (GPUs). This dissertation concentrates on a subset of EDA algorithms related to testing and validation. In particular, within the area of testing, fault simulation, diagnostic simulation, and reliability analysis are explored. We also investigate approaches to parallelize state justification on GPUs, one of the most difficult problems in the validation area.

Firstly, we present an efficient parallel fault simulator, FSimGP2, which exploits the high degree of parallelism supported by a state-of-the-art graphics processing unit (GPU) with the NVIDIA Compute Unified Device Architecture (CUDA). A novel three-dimensional parallel fault simulation technique is proposed to achieve extremely high computational efficiency on the GPU. The experimental results demonstrate a speedup of up to 4× compared to another GPU-based fault simulator. Then, another GPU-based simulator is used to tackle an even more computation-intensive task, diagnostic fault simulation. The simulator is based on a two-stage framework which achieves high computational efficiency on the GPU. We introduce a fault-pair-based approach to alleviate the limited memory capacity on GPUs. Also, multi-fault-signature and dynamic load balancing techniques are introduced to make the best use of the on-board computing resources.

With continuing feature-size scaling and the advent of innovative nano-scale devices, the reliability analysis of digital systems is becoming increasingly important. However, the computational cost of accurately analyzing a large digital system is very high. We propose a high-performance reliability analysis tool on GPUs. To achieve high memory bandwidth on GPUs, two algorithms for simulation scheduling and memory arrangement are proposed. Experimental results demonstrate that the parallel analysis tool is efficient, reliable, and scalable.

In the area of design validation, we investigate state justification. By employing swarm intelligence and the power of parallelism on GPUs, we are able to efficiently find a trace that helps us reach the corner cases during the validation of a digital system.

In summary, the work presented in this dissertation demonstrates that several applications in the area of digital design testing and validation can be successfully rearchitected to achieve maximal performance on GPUs and obtain significant speedups. The proposed GPU-parallel algorithms collectively aim to improve the performance of EDA tools in the computer-aided design (CAD) community on GPUs and other many-core platforms. / Ph. D.
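The abstract's core idea of packing many computations into each parallel lane can be illustrated with a minimal, hypothetical sketch of bit-parallel stuck-at fault simulation. FSimGP2's actual CUDA kernels and three-dimensional parallelism are far more elaborate; this only shows the underlying trick of packing 64 test patterns into one machine word, so a single pass over the netlist simulates all of them at once. The netlist encoding and helper names below are illustrative, not from the dissertation.

```python
MASK = (1 << 64) - 1  # up to 64 test patterns packed into one word


def simulate(netlist, inputs, fault=None):
    """Evaluate a 2-input AND/OR netlist in topological order.

    `fault` is an optional (net_name, stuck_value) pair modelling a
    stuck-at fault: that net is forced to all-0 or all-1 across every
    packed pattern. `fault=None` simulates the fault-free circuit.
    """
    def forced(name, v):
        if fault is not None and fault[0] == name:
            return MASK if fault[1] else 0
        return v

    values = {name: forced(name, v) for name, v in inputs.items()}
    for net, (op, a, b) in netlist:
        v = values[a] & values[b] if op == "AND" else values[a] | values[b]
        values[net] = forced(net, v) & MASK
    return values


# Toy circuit: g1 = a AND b; out = g1 OR c, with 4 packed patterns
# (bit i of each input word is pattern i's value for that input).
netlist = [("g1", ("AND", "a", "b")), ("out", ("OR", "g1", "c"))]
inputs = {"a": 0b1100, "b": 0b1010, "c": 0b0001}

good = simulate(netlist, inputs)
bad = simulate(netlist, inputs, fault=("g1", 0))  # g1 stuck-at-0
detected = good["out"] ^ bad["out"]  # bit i set => pattern i detects the fault
```

A GPU simulator extends the same idea across thousands of threads, each thread handling its own word of packed patterns (and, in FSimGP2's case, further dimensions of fault and pattern parallelism).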
2

Improved Abstractions and Turnaround Time for FPGA Design Validation and Debug

Iskander, Yousef Shafik 11 September 2012 (has links)
Design validation is the most time-consuming task in the FPGA design cycle. Although manufacturers and third-party vendors offer a range of tools that provide different perspectives on a design, many require that the design be fully re-implemented for even simple parameter modifications, or do not allow the design to be run at full speed. Designs are typically first modeled using a high-level language, then rewritten in a hardware description language, first for simulation and later modified for synthesis. IP and third-party cores may differ during these final two stages, complicating development and validation. The developed approach provides two means of directly validating synthesized hardware designs. The first allows the original high-level model, written in C or C++, to be directly coupled to the synthesized hardware, abstracting away the traditional gate-level view of designs. A high-level programmatic interface allows the synthesized design to be validated with the same arbitrary test data on the same framework as the hardware. The second approach provides an alternative view of FPGAs within the scope of a traditional software debugger. This debug framework leverages partially reconfigurable regions to accelerate the modification of dynamic, software-like breakpoints for low-level analysis, and provides an automatable, scriptable, command-line interface directly to a running design on an FPGA. / Ph. D.
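The first approach described above, coupling a high-level golden model to the synthesized design so both see identical test data, can be sketched with a hypothetical harness. The `SimulatedDesign` class is an illustrative stand-in: in a real setup its `step` method would drive the FPGA through the dissertation's programmatic interface rather than recompute in software.

```python
def golden_model(x):
    """High-level reference behaviour, in plain software."""
    return (x * 3 + 1) & 0xFF


class SimulatedDesign:
    """Hypothetical stand-in for a handle to the synthesized hardware;
    a real harness would transact with the running FPGA design here."""

    def step(self, x):
        return (x * 3 + 1) & 0xFF  # pretend-hardware response


def validate(vectors):
    """Run the same arbitrary test data through model and design,
    returning every (input, expected, observed) mismatch."""
    dut = SimulatedDesign()
    return [(x, golden_model(x), dut.step(x))
            for x in vectors
            if golden_model(x) != dut.step(x)]
```

An empty mismatch list means the hardware agreed with the model on every vector; any entry pinpoints an input worth inspecting under the debugger view.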
3

Analyses de sûreté de fonctionnement multi-systèmes

Bernard, Romain 23 November 2009 (has links)
Cette thèse se situe au croisement de deux domaines : la sûreté de fonctionnement des systèmes critiques et les méthodes formelles. Nous cherchons à établir la cohérence des analyses de sûreté de fonctionnement réalisées à l’aide de modèles représentant un même système à des niveaux de détail différents. Pour cela, nous proposons une notion de raffinement dans le cadre de la conception de modèles AltaRica : un modèle détaillé raffine un modèle abstrait si le modèle abstrait simule le modèle détaillé. La vérification du raffinement de modèles AltaRica est supportée par l’outil de model-checking MecV. Ceci permet de réaliser des analyses multi-systèmes à l’aide de modèles à des niveaux de détail hétérogènes : le système au centre de l’étude est détaillé tandis que les systèmes en interface sont abstraits. Cette approche a été appliquée à l’étude d’un système de contrôle de gouverne de direction d’un avion connecté à un système de génération et distribution électrique. / This thesis links two fields: system safety analyses and formal methods. We aim to check the consistency of safety analyses based on formal models that represent a system at different levels of detail. To reach this objective, we introduce a notion of refinement into the AltaRica modelling process: a detailed model refines an abstract model if the abstract model simulates the detailed model. The verification of AltaRica model refinement is supported by the MecV model-checker. This makes it possible to perform multi-system safety analyses using models with heterogeneous levels of detail: the main system is detailed whereas the interfaced systems remain abstract. This approach has been applied to the analysis of a rudder control system linked to an electrical power generation and distribution system.
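The refinement criterion in this abstract (the abstract model must simulate the detailed one) can be illustrated on toy labelled transition systems. MecV works on AltaRica models; the fixpoint computation below is only a minimal sketch of the simulation preorder it decides, with made-up state names and an assumed labelling function.

```python
def simulates(detailed, abstract, label):
    """Return the largest simulation relation R over detailed x abstract
    states: (d, a) is kept iff d and a agree on `label` and every
    transition of d can be matched by a transition of a with the same
    action, landing in a pair still in R."""
    R = {(d, a) for d in detailed for a in abstract if label(d) == label(a)}
    changed = True
    while changed:
        changed = False
        for d, a in list(R):
            for act, d2 in detailed[d]:
                if not any(act == act2 and (d2, a2) in R
                           for act2, a2 in abstract[a]):
                    R.discard((d, a))  # d has a move a cannot mimic
                    changed = True
                    break
    return R


# Toy example: the detailed model's only behaviour, D0 -t-> D1,
# is mirrored by the abstract model's A0 -t-> A1, so A0 simulates D0.
abstract = {"A0": [("t", "A1")], "A1": []}
detailed = {"D0": [("t", "D1")], "D1": []}
R = simulates(detailed, abstract, label=lambda s: s[-1])
```

If the detailed model gained a transition the abstract one lacks, the pair of initial states would be discarded by the fixpoint, flagging the refinement as broken.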
4

Generic design and investigation of solar cooling systems

Saulich, Sven January 2013 (has links)
This thesis presents work on a holistic approach for improving the overall design of solar cooling systems driven by solar thermal collectors. Newly developed methods for thermodynamic optimization of hydraulics and control were used to redesign an existing pilot plant. Measurements taken from the newly developed system show an 81% increase in the Solar Cooling Efficiency (SCEth) factor compared to the original pilot system. In addition to the improvements in system design, new efficiency factors for benchmarking solar cooling systems are presented. The Solar Supply Efficiency (SSEth) factor provides a means of quantifying the quality of solar thermal charging systems relative to the usable heat available to drive the sorption process. The product of the SSEth with the already established COPth of the chiller leads to the SCEth factor, which, for the first time, provides a clear and concise benchmarking method for the overall design of solar cooling systems. Furthermore, the definition of a coefficient of performance including irreversibilities from energy conversion (COPcon) enables a direct comparison of compression and sorption chiller technology. This new performance metric is applicable to all low-temperature heat-supply machines for direct comparison of different types or technologies. The findings of this work led to an optimized generic design for solar cooling systems, which was successfully transferred to the market.
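The composition of the benchmarking factors described above (SCEth as the product of SSEth and COPth) admits a one-line worked example. The numeric values here are illustrative placeholders, not measurements from the thesis.

```python
def solar_cooling_efficiency(sse_th, cop_th):
    """SCEth = SSEth * COPth: overall efficiency of a solar cooling
    system, combining the quality of the solar thermal charging chain
    with the thermal COP of the sorption chiller."""
    return sse_th * cop_th


# Hypothetical plant: 45% solar supply efficiency, chiller COPth of 0.7.
sce = solar_cooling_efficiency(0.45, 0.7)  # ≈ 0.315
```

Because both factors are dimensionless ratios, an 81% improvement in SCEth can come from gains on either side of the product, which is what makes the split into SSEth and COPth useful for diagnosing where a pilot plant underperforms.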
