281

Implementace algoritmu SVM v FPGA / Implementation of SVM Algorithm in FPGAs

Krontorád, Jan January 2009 (has links)
This master's thesis deals with algorithms for learning SVM classifiers on hardware systems and their implementation in FPGAs. It covers the basics of classifiers and learning, and introduces two learning algorithms: the SMO algorithm and a hardware-friendly algorithm.
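As a hedged illustration of the SMO approach mentioned above (a sketch, not the thesis's FPGA implementation), the core SMO step analytically optimizes one pair of dual variables and clips the result to the box constraint:

```python
def smo_pair_update(a1, a2, y1, y2, E1, E2, k11, k22, k12, C):
    """Analytically optimize one pair of SVM dual variables (SMO core step).

    a1, a2: current Lagrange multipliers; y1, y2: labels (+1/-1);
    E1, E2: prediction errors; kij: kernel values; C: box constraint.
    """
    eta = k11 + k22 - 2.0 * k12           # second derivative of the objective
    if eta <= 0:
        return a1, a2                     # degenerate case: skip this pair
    a2_new = a2 + y2 * (E1 - E2) / eta    # unconstrained optimum for a2
    # Clip a2 to the feasible segment of the box [0, C] x [0, C]
    if y1 == y2:
        lo, hi = max(0.0, a1 + a2 - C), min(C, a1 + a2)
    else:
        lo, hi = max(0.0, a2 - a1), min(C, C + a2 - a1)
    a2_new = min(max(a2_new, lo), hi)
    a1_new = a1 + y1 * y2 * (a2 - a2_new)  # keep the equality constraint satisfied
    return a1_new, a2_new
```

The hardware-friendly variants in the literature typically simplify exactly this update (e.g. by avoiding division), which is why it is the natural candidate for an FPGA kernel.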
282

Design and implementation of a Radio-Frequency detection algorithm for use within A Radio-Frequency System on Chip

Jereb, Alexander Robert January 2020 (has links)
No description available.
283

Development of a Flexible Open Architecture Controller for a Six-Cylinder Heavy-Duty Diesel Engine

McElmurry, Robert Dennis 15 August 2014 (has links)
The goal of the present work is to develop an open architecture engine controller to operate a production-model, heavy-duty diesel engine. Where OEM engine control units (ECUs) are inflexible, this controller is designed to provide the hardware and software flexibility required to facilitate dual-fuel combustion research. This thesis includes thorough descriptions of the hardware and software development required to interface with all engine sensors and actuators. To establish baseline control settings for the open controller, OEM ECU responses are mapped over a range of speeds and loads. This information is used to calibrate the open controller. Comparison tests considering speed, load, and emissions are performed to ensure the open controller provides a close approximation of OEM engine operation. The results of the tests confirm that the open controller provides full control of the engine with baseline settings close to those of the OEM ECU.
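As a rough sketch of how mapped OEM ECU responses over speed and load can seed a baseline calibration (the axes, values, and function names here are hypothetical, not taken from the thesis), a bilinear table lookup might look like:

```python
import bisect

def interp2(speeds, loads, table, speed, load):
    """Bilinear interpolation in a speed/load calibration map, e.g. an
    injection-timing baseline derived from mapped OEM ECU responses.
    speeds, loads: sorted axis breakpoints; table[i][j] is the value at
    (speeds[i], loads[j]). Queries are clamped to the table edges."""
    def bracket(axis, x):
        i = bisect.bisect_right(axis, x) - 1
        i = max(0, min(i, len(axis) - 2))          # clamp to a valid cell
        t = (x - axis[i]) / (axis[i + 1] - axis[i])
        return i, max(0.0, min(1.0, t))            # clamp the blend factor
    i, u = bracket(speeds, speed)
    j, v = bracket(loads, load)
    return ((1 - u) * (1 - v) * table[i][j] + u * (1 - v) * table[i + 1][j]
            + (1 - u) * v * table[i][j + 1] + u * v * table[i + 1][j + 1])
```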
284

FPGA Acceleration of Decision-Based Problems using Heterogeneous Computing

Thong, Jason January 2014 (has links)
The Boolean satisfiability (SAT) problem is central to many applications involving the verification and optimization of digital systems. These combinatorial problems are typically solved by using a decision-based approach; however, the lengthy compute time of SAT can make it prohibitively impractical for some applications. We discuss how the underlying physical characteristics of various technologies affect the practicality of SAT solvers. Power dissipation and other physical limitations are increasingly restricting the improvement in performance of conventional software on CPUs. We use heterogeneous computing to maximize the strengths of different underlying technologies as well as different computing architectures. In this thesis, we present a custom hardware architecture for accelerating the common computation within a SAT solver. Algorithms and data structures must be fundamentally redesigned in order to maximize the strengths of customized computing. Generalizable optimizations are proposed to maximize the throughput, minimize communication latencies, and aggressively compact the memory. We tightly integrate as well as jointly optimize the hardware accelerator and the software host. Our fully implemented system is significantly faster than pure software on real-life SAT problems. Due to our insights and optimizations, we are able to benchmark SAT in uncharted territory. / Thesis / Doctor of Philosophy (PhD)
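The "common computation" inside a decision-based SAT solver is Boolean constraint propagation (unit propagation); a minimal software sketch of that kernel (an illustration, not the thesis's hardware architecture) looks like:

```python
def unit_propagate(clauses, assignment):
    """Boolean constraint propagation: repeatedly assign literals forced
    by unit clauses. Literals are nonzero ints (-v negates variable v);
    assignment maps var -> bool. Returns (assignment, conflict_flag)."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                var, want = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] == want:
                        satisfied = True      # clause already satisfied
                        break
                else:
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:
                return assignment, True       # conflict: clause falsified
            if len(unassigned) == 1:
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0  # forced (unit) assignment
                changed = True
    return assignment, False
```

Because this loop is memory-bound and highly parallel across clauses, it is the part of a SAT solver most commonly targeted by custom hardware.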
285

FPGA IMPLEMENTATION OF A PARALLEL EBCOT TIER-1 ENCODER THAT PRESERVES ENCODING EFFICIENCY

Damecharla, Hima Bindu 05 October 2006 (has links)
No description available.
286

XDL-Based Hard Macro Generator

Ghosh, Subhrashankha 08 March 2011 (has links) (PDF)
In a conventional hardware design flow, the compilation process to create the physical circuit on the FPGA takes a long time. HMFlow is a design flow that reduces the compilation time by using pre-compiled modules called hard macros. HMFlow uses System Generator to create the designs, which are then converted to hard macros. Because the hard macro creation process itself takes a long time, this thesis describes a possible solution: a hard macro generator called XdlCoreGen. XdlCoreGen can quickly create fully mapped and placed hard macros using XDL, a human-readable design format that describes an FPGA and can be manipulated to configure it. XdlCoreGen also provides a framework to configure a Xilinx Virtex4 FPGA using XDL. In addition to XdlCoreGen, this thesis describes the FPGA configuration methodology using XDL, as well as a cache-based router in which, instead of finding a route, a pre-generated route is used to route the hard macros generated by XdlCoreGen. Test results using XdlCoreGen are also presented. However, the main focus of this thesis is the speed of hard macro generation by XdlCoreGen.
287

3rd Party IP Encryption from Netlist to Bitstream for Xilinx 7-Series FPGAs

Hutchings, Daniel 14 August 2023 (has links) (PDF)
IP vendors need to keep the internal designs of their IP secret from the IP user for security or commercial reasons. The CAD tools provided by FPGA vendors have some built-in functionality to encrypt the IP. However, the IP is subsequently decrypted by the CAD tools in order to run it through the design flow. An IP user can use APIs provided by the CAD tools to recreate the IP in an unencrypted state. An IP user could also easily learn the internals of a protected IP with the advent of new open-source bitstream-to-netlist tools: the user can simply generate a bitstream that includes the protected IP and then use the tools to create a netlist of the third-party IP, exposing its internals. Any solution to keep IP protected must keep the IP encrypted through the CAD tools and bitstream generation all the way to FPGA configuration. This thesis presents a design methodology, along with a proof-of-concept tool, that demonstrates how IP can remain partially encrypted through the CAD flow and into the bitstream. It shows how this approach can support multiple encryption keys from different vendors and can be deployed using existing CAD tools and FPGA families. Results are presented that document the benefits and costs of using such an approach to provide much greater protection for 3rd party IP.
288

FPGA-based range-limited molecular dynamics acceleration

Wu, Chunshu 07 September 2023 (has links)
Molecular Dynamics (MD) is a computer simulation technique that executes iteratively over discrete, infinitesimal time intervals. It has been a widely utilized application in the fields of material sciences and computer-aided drug design for many years, serving as a crucial benchmark in high-performance computing (HPC). Numerous MD packages have been developed and effectively accelerated using GPUs. However, as the limits of Moore's Law are reached, the performance of an individual computing node has hit a bottleneck, while the performance of multiple nodes is primarily hindered by scalability issues, particularly when dealing with small datasets. In this thesis, acceleration for small datasets is the main focus. With the recent COVID-19 pandemic, drug discovery has gained significant attention, and MD has emerged as a crucial tool in this process. In the critical domain of drug discovery, small simulations involving approximately 50K particles are frequently employed. It is important to note, however, that small simulations do not necessarily translate to faster results, as long-term simulations comprising billions of MD iterations and more are essential in this context. In addition to dataset size, the problem of interest is further constrained. Referred to as the most computationally demanding aspect of MD, the evaluation of range-limited (RL) forces not only accounts for 90% of the MD computation workload but also involves irregular mapping patterns of 3-D data onto 2-D processor networks. To emphasize, this thesis centers on the acceleration of RL MD specifically for small datasets. In order to address the single-node bottleneck and multi-node scaling challenges, the thesis is organized into two progressive stages of investigation.
The first stage delves extensively into enhancing single-node efficiency by examining factors such as workload mapping from 3-D to 2-D, data routing, and data locality. The second stage focuses on multi-node scalability, with a particular emphasis on strong scaling, bandwidth demands, and the synchronization mechanisms between nodes. Our results show that our design on a Xilinx U280 FPGA achieves 51.72x and 4.17x speedups over an Intel Xeon Gold 6226R CPU and a Quadro RTX 8000 GPU, respectively. Our research on strong scaling also demonstrates that 8 Xilinx U280 FPGAs connected to a switch achieve a 4.67x speedup compared to an Nvidia V100 GPU.
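The range-limited (RL) force evaluation described above restricts pair interactions to a cutoff radius; a minimal Python sketch of the cell-list pattern behind it (non-periodic box and unit Lennard-Jones parameters are assumptions of this illustration, not the FPGA design):

```python
import itertools, math

def rl_pair_energy(positions, cutoff):
    """Range-limited pair interaction via cell lists: particles are
    binned into cells of side `cutoff`, so every pair within the cutoff
    lies in the same or an adjacent cell. This is the O(N) neighbor-
    search pattern underlying RL force evaluation in MD."""
    cells = {}
    for idx, p in enumerate(positions):
        key = tuple(int(c // cutoff) for c in p)   # 3-D cell index
        cells.setdefault(key, []).append(idx)
    energy = 0.0
    for key, members in cells.items():
        # gather candidates from this cell and its 26 neighbors
        neigh = []
        for d in itertools.product((-1, 0, 1), repeat=3):
            nk = (key[0] + d[0], key[1] + d[1], key[2] + d[2])
            neigh.extend(cells.get(nk, []))
        for i in members:
            for j in neigh:
                if j <= i:
                    continue                        # count each pair once
                r = math.dist(positions[i], positions[j])
                if r < cutoff:
                    # unit-parameter Lennard-Jones potential
                    energy += 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)
    return energy
```

The irregularity the abstract mentions comes from exactly this structure: cell occupancy varies, so mapping these 3-D bins onto a 2-D processor network is non-trivial.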
289

Using HLS for Acceleration of FPGA Development: A 3780-Point FFT Case Study

Hejdström, Christoffer January 2022 (has links)
Manually designing hardware for FPGA implementations is time consuming. One possible way to accelerate the development of hardware is to use high-level synthesis (HLS) tools. Such tools synthesize a high-level model written in a language such as C++ into hardware. This thesis investigates HLS and the efficacy of using HLS in the hardware design flow. A 3780-point fast Fourier transform optimized for area is used to compare Vitis HLS with a manual hardware implementation. Different ways of writing the high-level model used in HLS, their impact on the synthesized hardware, and other optimizations are investigated. This thesis concludes that the results from the HLS implementation are not comparable with the manual implementation; they are significantly worse. Further, high-level code written from a non-hardware point of view needs to be rewritten from a hardware point of view to provide good results. High-level synthesis is thus not best suited to designers from an algorithm or software background, but is rather another tool for hardware designers. High-level synthesis can be used as an initial design tool, allowing for quick exploration of different designs and architectures.
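A 3780-point FFT is typically decomposed into small-radix stages because 3780 = 2^2 * 3^3 * 5 * 7; a quick sketch of that factorization (which stage ordering the thesis actually uses is an assumption here):

```python
def radix_factorization(n, radices=(7, 5, 3, 2)):
    """Decompose an FFT length into small-radix stages. For n = 3780
    (= 2^2 * 3^3 * 5 * 7) the transform can be built entirely from
    radix-7, -5, -3, and -2 DFT stages; the stage order chosen here
    (largest radix first) is illustrative."""
    factors = []
    for r in radices:
        while n % r == 0:       # peel off every power of this radix
            factors.append(r)
            n //= r
    assert n == 1, "length not covered by the given radices"
    return factors
```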
290

Design and implementation of the Hybrid Detector for Microdosimetry (HDM): Challenges in readout architecture and experimental results

Pierobon, Enrico 05 December 2023 (has links)
This thesis introduces an innovative approach for enhancing the characterization of radiation field quality through microdosimetry. Over the past 30 years, clinical results have shown that ion therapy may be a superior treatment option for several types of cancer, including recurrent cancers, compared to conventional radiation. Despite these promising results, there are still several treatment uncertainties related to biological and physical processes that prevent the full exploitation of particle therapy. Among the physical characterizations, it is paramount to measure the quality of the irradiating field in order to link the biological effect to its physical description. In this way, uncertainties in treatment can be reduced and outcomes optimized. One tool for studying the radiation field that has become increasingly important in the last decade is microdosimetry. In recent years, microdosimetry has proved to be a superior tool for describing radiation quality, especially when compared to the standard reference quantities used in the clinic today. In microdosimetry, the fundamental quantity is the lineal energy y, defined as the energy deposited in the detector divided by the Mean Chord Length (MCL): an approximation used to estimate the track length traveled by radiation in the detector, valid in an isotropic, uniform radiation field. As a consequence, microdosimeters have evolved toward obtaining the best possible estimate of the energy release, without improving the accuracy of the MCL approximation. Measuring the Real Track Length (RTL) traveled by the particle inside the detector could provide a better description of the radiation quality. In fact, from a biological perspective, it is critical whether a given amount of energy is released over a long particle track or deposited very densely over a short one. If the energy release is denser, the biological damage induced is likely to be more complex and therefore more significant.
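The lineal energy defined above is y = ε / l̄, where l̄ is the mean chord length; for a convex site, Cauchy's formula gives l̄ = 4V/S, which for a sphere of diameter d reduces to 2d/3. A small sketch (the spherical-site geometry and keV/µm units are illustrative):

```python
import math

def mean_chord_length_sphere(diameter_um):
    """Cauchy's formula for a convex body: MCL = 4V/S.
    For a sphere of diameter d this reduces to 2d/3."""
    v = math.pi * diameter_um ** 3 / 6.0   # sphere volume
    s = math.pi * diameter_um ** 2         # sphere surface area
    return 4.0 * v / s

def lineal_energy(energy_keV, diameter_um=2.0):
    """Lineal energy y = deposited energy / mean chord length (keV/um),
    here for a 2 um sphere-equivalent site as used by a TEPC."""
    return energy_keV / mean_chord_length_sphere(diameter_um)
```

Replacing the denominator with a measured real track length is exactly the change the HDM concept introduces with its quantity yr.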
For these reasons, a novel approach to microdosimetry is presented that considers the RTL in the radiation quality description. The first chapter of the thesis presents standard microdosimetry and its main quantities. A special emphasis is given to the microdosimeter used in this work, i.e. the TEPC or Tissue Equivalent Proportional Counter, a gas microdosimeter that is equivalent in terms of energy deposition to 2 µm of tissue. A comprehensive characterization of the TEPC response to different ions and energies can be found in the literature. A topic missing from the literature is the investigation of the TEPC response to clinical protons at different particle rates. A section is therefore dedicated to the TEPC detector response to pileup. Pileup occurs when two or more energy deposition events are processed together, disrupting the normal signal processing. By exposing the TEPC to particle rates ranging from a few particles per second to 10^6 particles per second, it was possible to estimate the distortion of the acquired spectra due to pileup. In parallel, Monte Carlo simulations made it possible to reproduce the effect of pileup on microdosimetric spectra. Using a quantitative approach, the experimental spectra measured at different particle rates and the spectra simulated at different pileup probabilities are matched based on a similarity criterion. In this way, it was possible to build a particle rate-pileup curve for the TEPC, used to quantify the pileup probability contribution. More generally, this approach could be extended to other microdosimeters. The acquisition of data in pileup conditions is sometimes inevitable, and some microdosimeters are more likely than others to suffer from high particle rates. With this part of the thesis, I aim to provide a tool to acquire microdosimetric spectra even in pileup conditions. A description of the TEPC acquisition chain is provided in the next section.
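The matching of measured and simulated pileup spectra can be sketched as a nearest-spectrum search; the sum-of-squared-differences criterion below is an assumption for illustration, not necessarily the similarity criterion used in the thesis:

```python
def best_pileup_match(measured, simulated_by_prob):
    """Return the pileup probability whose simulated spectrum is closest
    to the measured one. Spectra are histograms (lists of counts),
    normalized before comparison so total counts do not matter.
    simulated_by_prob: {pileup_probability: spectrum}."""
    def norm(h):
        total = float(sum(h))
        return [x / total for x in h]
    m = norm(measured)
    def dist(sim):
        s = norm(sim)
        return sum((a - b) ** 2 for a, b in zip(m, s))
    return min(simulated_by_prob, key=lambda p: dist(simulated_by_prob[p]))
```

Repeating this match for spectra measured at each beam intensity yields the particle rate-pileup curve described above.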
This is an important topic, as any further integration or improvement will require the modification of at least one element of the acquisition chain. Then, the typical data analysis carried out on microdosimetric spectra is presented, together with the calibration procedure of the TEPC detector based on Monte Carlo simulation using Geant4. Finally, I provide an overview of the software Mandarina, a Graphical User Interface (GUI) written in C# and developed specifically to analyze the experimental microdosimetric data. Using this software, users can build a microdosimetric spectrum starting from the raw acquired data. In addition, the software provides the ability to modify key acquisition parameters and gives real-time feedback on how the microdosimetric spectra change under these modifications. Then, I introduce the concept of the Hybrid Detector for Microdosimetry (HDM). HDM is composed of a commercial TEPC and 4 layers of Low Gain Avalanche Detectors (LGADs). LGADs are silicon detectors featuring an internal gain obtained by exploiting the avalanche effect. This makes them suitable for detecting particles with a broad range of energy release in the silicon. A detailed description of how the LGADs detect ionizing radiation is provided in this work. LGADs are used in the HDM as a tracking component, capable of reconstructing the particle trajectories inside the TEPC. In this way, instead of relying on the MCL approximation to calculate the value of y, it is possible to define a new quantity: yr. yr differs from the standard y because it uses the real track length instead of the mean chord length approximation. Next, a preliminary Geant4-based study for optimizing the detector geometry is discussed. The tracking capability and the simulated microdosimetric spectra with the estimated track length were assessed and are presented in this thesis.
To experimentally realize HDM, the acquisition chain of the TEPC must be upgraded, since the original acquisition system cannot directly integrate the tracking information from the LGAD strips. A chapter of this work is dedicated to the implementation of the new acquisition system, which allows for the digitization of the time-series signal produced by the detector. The system is based on an Eclypse-Z7 FPGA development board, which can host up to 4 Analog-to-Digital Converters (ADCs). Following a bottom-up approach, this chapter first describes the main characteristics of the signal to be digitized. An overview of the Eclypse-Z7 development board with its main capabilities is then provided. Finally, the controller in charge of driving the ADC is described. Being a Zynq FPGA, both the Programmable Logic (PL) and the Processing System (PS) need to be programmed. The PL is responsible for driving the ADC at a low level, controlling the triggering and the data flow to the PS. The PS hosts a custom Linux distribution with the task of supervising the acquisition by setting the main parameters, such as the number of samples to acquire and the trigger condition and its position with respect to the acquisition window. The PS is also responsible for safely storing the data on an SD card connected to the Eclypse-Z7. With a fully customizable system, it is then possible to integrate other systems by properly synchronizing the acquisition with other devices. In the specific case of HDM, a correspondence between the energy release and the LGAD-based tracking component needs to be implemented. Once the time series is properly acquired, the data analysis needs to be developed. A specific section of the thesis is dedicated to this important task, as the correct processing of the signals is a requirement for obtaining robust microdosimetric spectra.
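The trigger position within the acquisition window described above can be modeled in software with a pre-trigger ring buffer; this sketch only mimics the PL's trigger logic, and the parameter names are hypothetical:

```python
from collections import deque

def capture_window(samples, threshold, n_pre, n_post):
    """Triggered-acquisition sketch: keep the last n_pre samples in a
    ring buffer; when a sample crosses `threshold`, emit a window of
    n_pre pre-trigger samples plus n_post samples starting at the
    trigger. Returns None if the trigger never fires."""
    pre = deque(maxlen=n_pre)      # ring buffer of pre-trigger samples
    it = iter(samples)
    for s in it:
        if s >= threshold:
            window = list(pre) + [s]
            for _ in range(n_post - 1):
                try:
                    window.append(next(it))   # post-trigger samples
                except StopIteration:
                    break
            return window
        pre.append(s)
    return None
```

Keeping the pre-trigger history is what lets the analysis see the signal baseline immediately before each event.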
The time-series processing features a classification algorithm that identifies artifacts in the acquired signals, such as saturation, double hits, and noisy signals. Once the time series are correctly processed and the relevant information is extracted, it is possible to calculate the microdosimetric spectra. In this acquisition chain, the detector signal is processed at 3 different gain levels, obtaining the same signal at different amplifications. In this way it is possible to span a large dynamic range while maintaining the resolution typically required in microdosimetry. However, the three signals must then be joined together to span the required dynamic range. This process goes under the name of intercalibration and has a dedicated section in the chapter. Once the signals are intercalibrated, it is necessary to apply a calibration. The new calibration process developed within this work differs from the previously adopted calibration method based on Monte Carlo simulation, and is described in detail. Finally, the spectra obtained with the new acquisition are compared to those obtained with the original acquisition chain. The next chapter is dedicated to the LGAD readout. Again following a bottom-up approach, an introduction to the LGAD signal is provided. This readout acquisition chain is already partially available, since it has been developed by INFN-TO (Istituto Nazionale di Fisica Nucleare, Turin). For the first stage of signal processing, two main components developed by INFN-TO are available: the ABACUS chip and the ESA_ABACUS printed circuit board (PCB). The ABACUS chip is an ASIC (application-specific integrated circuit) designed to directly process the small signals coming from the LGAD strips. At each activation of an LGAD strip, a digital signal is generated. Each ABACUS is capable of handling up to 24 LGAD strips and can adjust the threshold of each channel within a limited range.
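The intercalibration step can be sketched as choosing, per sample, the highest-gain channel that is not saturated and rescaling it by its gain; the gain ratios and saturation level below are example values, not the thesis's calibration constants:

```python
def intercalibrate(low, mid, high, gains=(1.0, 10.0, 100.0), sat=3891.0):
    """Join three amplified copies of the same signal into one record
    spanning the full dynamic range: per sample, take the highest-gain
    channel below the saturation level `sat` (here ~95% of a 12-bit ADC,
    an example value) and divide by its gain."""
    out = []
    for l, m, h in zip(low, mid, high):
        if h < sat:
            out.append(h / gains[2])    # best resolution: highest gain
        elif m < sat:
            out.append(m / gains[1])
        else:
            out.append(l / gains[0])    # fallback: lowest gain, full range
    return out
```

After this join, a single calibration maps the unified record to lineal energy.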
Threshold adjustment is required to separate the signal from the noise, since the channels are not expected to share a common threshold due to their specific noise. The ABACUS PCB has been developed to physically host up to 6 ABACUS chips plus the LGAD sensor. It is equipped with an internal DAC (Digital-to-Analog Converter) used to set a common threshold for all 24 channels managed by one ABACUS chip. In this way, a common threshold can be selected using the ABACUS DAC, and then, to satisfy the specific needs of each channel, the ABACUS chip is used. Programming the thresholds requires the specific serial communication protocols defined by the manufacturer, and these protocols must be integrated into the acquisition system. To meet these requirements, I developed an FPGA-based readout system capable of processing the signal from the ABACUS chip and setting the threshold for each channel. I describe the implementation of such a system in detail in a dedicated chapter, again following a bottom-up approach starting from the PL and moving to the PS. In a specific section, I show how the communication protocol has been implemented and tested, and how the fast digital pulses coming from the ABACUS chip are processed in the PL. I also describe how the PS system was built. As in the case of the new TEPC acquisition, a Linux system runs on the PS. This makes it easier for the end user to work with the acquired data and the threshold controls. The movement of data from the PL to the PS is accomplished using Direct Memory Access (DMA). This is a critical component because it allows fast (within one clock cycle) data transfer from the PL to the user in the PS. The implementation of such an architecture is quite complex and demands knowledge of both advanced electronics and Linux systems. In fact, the DMA requires the implementation of a Linux kernel driver to correctly move the data. This process is described in a dedicated section of this thesis.
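The two-stage threshold scheme (a common board-level DAC plus a per-channel trim on the ABACUS chip) can be sketched as follows; the trim range and the idea of setting the common level from the quietest channel are assumptions of this illustration, not the actual register model:

```python
def set_thresholds(noise_floors, trim_range=31):
    """Two-stage threshold sketch: one coarse threshold shared by all
    channels (the board DAC) plus a small per-channel positive trim
    (the ABACUS per-channel adjustment). The common level is taken from
    the lowest noise floor so every trim stays non-negative; trims are
    capped at the assumed `trim_range`. Returns (common, trims)."""
    common = min(noise_floors)                          # coarse, shared
    trims = [min(n - common, trim_range) for n in noise_floors]
    return common, trims
```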
With this FPGA implementation it was possible to acquire the signal from 24 LGAD strips and control the thresholds. An experimental campaign was conducted at the proton therapy center in Trento, where the whole acquisition system was tested extensively. The results are reported in a dedicated section of this thesis. All the signals coming from protons with energies ranging from 70 to 228 MeV were correctly discriminated, proving that the readout system can work with protons of clinical energies. Finally, thermal tests were conducted on the acquisition setups, since some thermal drifts of the baseline were observed during the experimental campaign. The test results are shown in a dedicated section of this thesis. I conclude with a chapter discussing the results achieved and future perspectives.
