  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

ORGFX: a Wishbone-compatible Graphics Accelerator for the OpenRISC processor

Lenander, Per, Fosselius, Anton January 2012 (has links)
Modern embedded systems such as cellphones or medical instrumentation use increasingly complex graphical interfaces. Currently there are no widely used open hardware solutions to accelerate embedded graphical applications. This thesis presents the ORSoC graphics accelerator (ORGFX), an open hardware graphics accelerator that can be used with programmable hardware. A standalone software implementation is provided to aid quick development of accelerated applications. The accelerator is able to render 2D, 3D and vector graphics. The example implementation of the ORGFX is integrated with the OpenRISC Reference Platform System on Chip version 2 (ORPSoCv2). The final implementation runs on a Xilinx FPGA at 50 MHz, and provides accelerated graphics output from an HDMI port. An extensive software driver and a set of utilities to ease development for the graphics accelerator are provided along with the hardware. The software implementation of the accelerator uses the same API as the hardware drivers, making it possible to quickly develop applications for the accelerator without access to a physical platform. The final implementation trades performance against platform independence and generality. The component can be integrated with any CPU or memory chip and works alongside a custom display core that renders the output to an external screen. The software drivers can be run bare metal or modified to run on an operating system. All of the hardware and software developed in this project is provided as open source under the GNU Lesser General Public License (LGPL), and can be downloaded from www.opencores.com. The authors hope that future releases will be integrated as a standard component into the OpenRISC Reference Platform System on Chip.
42

A superconducting RF deflecting cavity for the ARIEL e-linac separator

Storey, Douglas W. 13 March 2018 (has links)
The ARIEL electron linac is a 0.3 MW accelerator that will drive the production of rare isotopes in TRIUMF's new ARIEL facility. A planned upgrade will allow a second beam to be accelerated in the linac simultaneously, driving a Free Electron Laser while operating as an energy recovery linac. To not disrupt beam delivery to the ARIEL facility, an RF beam separator is required to separate the interleaved beams after they exit the accelerating cavities. A 650 MHz superconducting RF deflecting mode cavity has been designed, built, and tested for providing the required 0.3 MV transverse deflecting voltage to separate the interleaved beams. The cavity operates in a TE-like mode, and has been optimized through the use of simulation tools for high shunt impedance with minimal longitudinal footprint. The design process and details about the resulting electromagnetic and mechanical design are presented, covering the cavity's RF performance, coupling to the operating and higher order modes, multipacting susceptibility, and the physical design. The low power dissipation on the cavity walls at the required deflecting field allows for the cavity to be fabricated using non-conventional techniques. These include fabricating from bulk, low purity niobium and the use of TIG welding for joining the cavity parts. A method for TIG welding niobium is developed that achieves minimal degradation in purity of the weld joint while using widely available fabrication equipment. Applying these methods to the fabrication of the separator cavity makes this the first SRF cavity to be built at TRIUMF. The results of cryogenic RF tests of the separator cavity at temperatures down to 2 K are presented. At the operating temperature of 4.2 K, the cavity achieves a quality factor of 4×10⁸ at the design deflecting voltage of 0.3 MV. A maximum deflecting voltage of 0.82 MV is reached at 4.2 K, with peak surface fields of 26 MV/m and 33 mT.
The cavity's performance exceeds the goal deflecting voltage and quality factor required for operation. / Graduate
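For orientation, the quoted quality factor and deflecting voltage are connected through the standard SRF cavity figures of merit; the relations below are textbook definitions, not results from the thesis:

```latex
% Standard cavity figures of merit (textbook definitions):
% Q_0: intrinsic quality factor, U: stored energy, P_d: wall dissipation,
% V_perp: transverse deflecting voltage, (R/Q)_perp: transverse shunt impedance.
Q_0 = \frac{\omega U}{P_d}, \qquad
\left(\frac{R}{Q}\right)_{\!\perp} = \frac{V_\perp^{2}}{\omega U}
\quad\Longrightarrow\quad
P_d = \frac{V_\perp^{2}}{(R/Q)_{\perp}\, Q_0}
```

A high Q₀ at the design voltage keeps P_d small, which is what permits 4.2 K operation and the non-conventional fabrication described above.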
43

Beam dynamics and interaction regions for the LHeC and HL-LHC

Thompson, Luke January 2013 (has links)
The Large Hadron Electron Collider (LHeC) is a proposal for a TeV scale, 10^33 cm^−2 s^−1 luminosity electron-proton collider at CERN. In the proposal, an electron accelerator collides a beam of electrons with one of the Large Hadron Collider (LHC) proton beams at one LHC interaction point (IP). At the time of writing, the project has been approved as part of the CERN mid-term plan. The LHeC project is planned for the 2020s, around the time of the High Luminosity LHC (HL-LHC) upgrade. The LHeC thus depends upon the success of the HL-LHC project, which plans to deliver p-p luminosity of L=5×10^34 cm^−2 s^−1. Unique challenges are presented by the LHeC, particularly by the interaction region (IR) and long straight section (LSS), and constraints must be considered from beam, particle and detector physics and engineering. This thesis presents the study and design of a complete collision insertion solution for a ring-ring LHeC. This provides a solution at a conceptual level to the problem of delivering TeV scale e−p collisions at L∼10^33 cm^−2 s^−1 for the first time, with detector coverage within 1 degree of the beam. This high acceptance, high luminosity solution substantially increases the value of the project, allowing high statistics across an unprecedented kinematic range. Further studies are presented into optimising the optical flexibility of the LHC LSSs, and into the effects of fringe fields in the HL-LHC large aperture quadrupoles. Modifications are proposed which maximise LSS flexibility, and fringe effects are found to be significant but manageable.
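For context, the luminosity figures quoted for the LHeC and HL-LHC follow the standard expression for head-on collisions of Gaussian bunches (a textbook relation, not a derivation from the thesis):

```latex
% Luminosity for head-on collisions of Gaussian bunches (textbook form):
% f_rev: revolution frequency, n_b: colliding bunch pairs, N_1, N_2: particles
% per bunch, sigma_{x,y}: transverse RMS beam sizes at the interaction point.
L = \frac{f_{\mathrm{rev}}\, n_b\, N_1 N_2}{4\pi\, \sigma_x \sigma_y}
```

Reaching L ∼ 10³³ cm⁻² s⁻¹ in e−p collisions is thus a matter of squeezing σ_x σ_y at the IP while respecting the aperture and detector-acceptance constraints discussed in the thesis.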
44

Accélération laser-plasma : mise en forme de faisceaux d’électrons pour les applications / Laser plasma acceleration : electron beams shaping for applications

Maitrallain, Antoine 11 October 2017 (has links)
L'accélération laser plasma (ALP) est le produit de l'interaction non linéaire entre un faisceau laser intense (≈10¹⁸ W/cm²) et une cible gazeuse. Sous certaines conditions, l'onde plasma générée peut piéger et accélérer des électrons jusqu'à des énergies très importantes grâce à des champs accélérateurs élevés (≈ 50 GV/m). Ce processus très prometteur fait l'objet de nombreux travaux au sein de la communauté, qui, après avoir identifié les mécanismes de base, cherche aujourd'hui à améliorer les propriétés de la source (énergie, divergence, reproductibilité...). Les applications de ces faisceaux d'électrons issus de sources ultra-compactes sont variées. Parmi celles-ci, la physique des hautes énergies pour laquelle a été conçu le schéma d'accélération multi-étages. Il s'agit d'un concept basé sur la succession d'étages accélérateurs pour répondre à la problématique de l'augmentation de la longueur d'accélération en vue d'augmenter l'énergie des électrons. Dans sa version de base, un premier étage (injecteur) fournit un faisceau d'électrons d'énergie modérée doté d'une charge très importante. Ce faisceau est alors accéléré vers de plus hautes énergies dans un second étage appelé accélérateur. Cette thèse s'inscrit dans une série de travaux préliminaires aux expériences d'accélération laser-plasma double étages prévues sur la plateforme expérimentale CILEX autour du laser APOLLON 10 PW. Dans ce cadre, une nouvelle cible a été conçue et caractérisée avec le laser UHI100. Les propriétés du faisceau d'électrons ont ensuite été modifiées par mise en forme optique du faisceau laser produisant l'onde de plasma, ainsi que par mise en forme magnétique. Ce dernier dispositif nous a permis de pouvoir utiliser la source pour une application visant à mettre au point un système de dosimétrie adapté au fort débit de dose associé aux électrons issus de l'ALP.
/ Laser plasma acceleration (LPA) comes from the nonlinear interaction between an intense laser beam (≈10¹⁸ W/cm²) and a gas target. The plasma wave which is generated can trap and accelerate electrons to very high energies due to large accelerating fields (≈ 50 GV/m). Numerous studies have been done on this promising process within the scientific community, aiming at understanding the basic mechanisms involved. As a second step, the community now tries to improve the properties of the source (energy, divergence, reproducibility…). Such ultra-compact electron sources can be used for various applications. Among them is high energy physics, for which a specific scheme was designed, based on multi-stage acceleration. The scheme relies on the addition of successive accelerating modules to increase the effective accelerating length and therefore the final electron energy. In its basic version, a first stage (injector) delivers an electron beam of moderate energy and high charge. This beam is then further accelerated to high energy through a second stage (accelerator). This thesis is part of the preliminary studies performed to prepare the future two-stage laser plasma accelerator that will be developed on the CILEX platform with the APOLLON 10 PW laser. In this context, a new target has been designed and characterized with the UHI100 laser. The electron beam properties have then been adjusted by optical shaping of the laser pulse generating the plasma wave, and also by magnetic shaping. The magnetically shaped electron beam has been used for a specific application devoted to the setup of a new dosimetric diagnostic dedicated to measuring the high dose rates delivered by electrons from LPA.
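The ≈50 GV/m accelerating field quoted above can be checked against the standard cold, nonrelativistic wave-breaking estimate E₀ ≈ 96·√n_e V/m (with n_e in cm⁻³); the function name and the example density below are illustrative, not taken from the thesis:

```python
import math

def cold_wavebreaking_field_gv_per_m(n_e_cm3: float) -> float:
    """Cold, nonrelativistic wave-breaking field, a standard plasma-accelerator
    rule of thumb: E0 ~ 96 * sqrt(n_e) V/m with n_e in cm^-3."""
    return 96.0 * math.sqrt(n_e_cm3) / 1e9  # convert V/m -> GV/m

# A plasma density of a few 1e17 cm^-3 gives fields of order 50 GV/m,
# consistent with the ~50 GV/m figure quoted in the abstract.
print(round(cold_wavebreaking_field_gv_per_m(3e17), 1))  # → 52.6
```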
45

COMPILER FOR A TRACE-BASED DEEP NEURAL NETWORK ACCELERATOR

Andre Xian Ming Chang (6789503) 12 October 2021 (has links)
Deep Neural Networks (DNNs) are the algorithm of choice for various applications that require modeling large datasets, such as image classification, object detection and natural language processing. DNNs present highly parallel workloads that lead to the need for custom hardware accelerators. Deep Learning (DL) models specialized on different tasks require programmable custom hardware, and a compiler to efficiently translate various DNNs into an efficient dataflow to be executed on the accelerator. Given a DNN-oriented custom instruction set, various compilation phases are needed to generate efficient code while maintaining enough generality to support many models. Different compilation phases need different levels of hardware awareness so that the compiler exploits the hardware's full potential while abiding by the hardware constraints. The goal of this work is to present a compiler workflow and its hardware-aware optimization passes for a custom DNN hardware accelerator. The compiler uses model definition files created from popular frameworks to generate custom instructions. Different levels of hardware-aware code optimizations are applied to improve performance and data reuse. The software also exposes an interface to run the accelerator implemented on various FPGA platforms, providing an end-to-end solution.
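The staged, hardware-aware pass structure described above can be sketched as a minimal pass pipeline; the op names, the fusion rule, and the instruction format below are hypothetical illustrations, not the thesis's actual IR or ISA:

```python
# Hypothetical sketch of staged, hardware-aware DNN compiler passes.
def fuse_activation(ops):
    """Hardware-aware pass: fuse an elementwise activation into the
    preceding layer so the accelerator executes both in one instruction."""
    out = []
    for op in ops:
        if op == "relu" and out and out[-1] in ("conv", "fc"):
            out[-1] = out[-1] + "+relu"
        else:
            out.append(op)
    return out

def lower(ops):
    """Lowering pass: map each (possibly fused) op to a custom instruction."""
    return [f"EXEC {op.upper()}" for op in ops]

model = ["conv", "relu", "fc", "relu"]   # toy model definition
for compiler_pass in (fuse_activation, lower):
    model = compiler_pass(model)
print(model)  # → ['EXEC CONV+RELU', 'EXEC FC+RELU']
```

Real pipelines add further passes (tiling for on-chip memory, scheduling for data reuse), each needing a different degree of hardware awareness, which is the point the abstract makes.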
46

Development of 6 MV tandem accelerator mass spectrometry facility and its applications

Sekonya, Kamela Godwin January 2017 (has links)
A thesis submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Doctor of Philosophy, School of Physics. Johannesburg, 2017. / Accelerator Mass Spectrometry (AMS) is an ultra-sensitive isotopic analysis technique that allows for the determination of isotopic ratios of rare long-lived radionuclides such as radiocarbon. AMS has become an important tool in many scientific disciplines, due to its sensitivity in detecting isotopic ratios at the level of 10⁻¹⁵ by making use of nuclear physics techniques and methods. The objective of the present work was to design and implement a new AMS system at iThemba LABS, the first of its kind on the African continent. The system is described in detail along with the relevant ion optics simulations using TRACE-3D. Beam optics calculations were performed for carbon isotopes, using the TRACE-3D code, in order to optimize the design of the new spectrometer and assess its overall performance. The AMS technique was applied in two unique South African research projects in relation to archaeology and environmental air pollution studies. The AMS technique, combined with the Proton-Induced X-Ray Emission (PIXE) technique, was also applied in an environmental study with respect to the contribution of contemporary and fossil carbon in air pollution in the Lephalale District, close to both the newly built Medupi coal-fired power station (~5 GW, the largest ever built in South Africa), and the existing Matimba coal-fired power station. The discrimination of contemporary carbon and fossil carbon is accomplished by using the AMS technique in measurements of the 14C/C ratios of aerosol particulate matter. The absence of 14C in fossil carbon material and the known 14C/C ratio levels in contemporary carbon material allow us to distinguish between contemporary carbon and fossil carbon and to decipher in this manner the different anthropogenic contributions.
The contemporary carbon throughout our sampling campaign in the Lephalale District has been measured to be approximately 53% of the carbon aerosol. Although many studies of contemporary and fossil carbon have been performed, no other contemporary and fossil carbon source assessment method provides the definitive results that can be obtained from radiocarbon measurements. PIXE analysis for the determination of the elemental composition of particulate matter in samples near the Medupi coal-fired power station in the Lephalale District was also performed for 6 elements, namely K, Ca, Ti, Mn, Fe, and Zn. In the samples that were analyzed, the particulate matter concentrations did not exceed the air quality standards regulation at Lephalale. The recommended daily limit air quality standard in South African legislation is 75 µg/m³. Enrichment Factor (EF) analysis of soil with respect to Fe shows anomalously high values for Zn. AMS was also applied to archaeological studies of early herding camps of the Khoekhoe people at Kasteelberg, situated on the southwest coast of South Africa, which are among the best preserved sites of their kind in the world. Sea-shell samples from the Kasteelberg B (KBB) site have been dated with AMS at Lawrence Livermore National Laboratory (LLNL) in an effort to elucidate the relationship between the herder-foragers of the inland and shoreline sites in terms of migration patterns. The radiocarbon dates obtained are in general agreement with the other studies that have been performed on the site, and show that the ages of the artefacts are less than 2000 years. The samples for this study originate from various well defined stratigraphic levels at square A3 at KBB. It was evident from excavation that the artefacts are of the same period and there is no evidence of mixing between different stratigraphic layers. Radiocarbon dates were calibrated using Calib 6.1 and each was corrected for the marine reservoir effect.
The span between the earliest and most recent dates obtained is approximately 400 years, from AD 825 to AD 1209. The majority of the radiocarbon dates from the KBB site fall in AD 1002-1100, a few others in AD 825-958, with a single final date of AD 1209. The new AMS dates from this work suggest a high probability that there was indeed a hiatus between the two occupations designated as lower and upper KBB. The significant changes seen in material culture styles, as well as in the nature of occupation and the change in the accumulation rate of deposits, therefore do not necessarily indicate a cultural replacement caused by the arrival of a new population. This implies that the occupants of lower KBB may also have been Khoe-speakers, and not local San. / GR2018
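The radiocarbon arithmetic underlying both studies can be sketched as follows; the function names are illustrative, and the relations used (the Libby mean life of 8033 years for conventional ages, and the absence of 14C in fossil carbon) are standard conventions rather than results specific to this thesis:

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years; conventional radiocarbon ages use this value

def conventional_age(fraction_modern: float) -> float:
    """Conventional 14C age: t = -8033 * ln(F14C), before any calibration
    or marine-reservoir correction."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

def contemporary_fraction(f_sample: float, f_contemporary: float = 1.0) -> float:
    """Share of aerosol carbon from contemporary (non-fossil) sources,
    assuming fossil carbon contains no 14C at all."""
    return f_sample / f_contemporary

print(round(conventional_age(0.5)))  # → 5568 (one Libby half-life)
print(contemporary_fraction(0.53))   # → 0.53, as in the ~53% reported above
```

Calibration (e.g., with Calib, as in the thesis) then maps the conventional age onto calendar years.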
47

Assessment of the McMaster KN Accelerator for Nuclear Resonance Absorption and Fluorescence Experiments with the ^28Si Nucleus Induced by the ^27Al(p,γ)^28Si Capture Reaction

Atanackovic, Jovica 08 1900 (has links)
This thesis represents a detailed assessment of the McMaster KN Accelerator site for the performance of nuclear resonance absorption and fluorescence experiments in the ^28Si nucleus. The main focus of this work is the ^27Al(p,γ)^28Si reaction, although other nuclear reactions are explored, such as ^27Al(p,p'γ)^27Al and ^27Al(p,αγ)^24Mg. The gamma yield experiments from all these reactions give repeatable and steady results, as well as very good agreement with the existing literature. This is seen in chapter 2. Chapter 3 presents concrete nuclear resonance experiments with the direct ground state transition of the 12.33 MeV gamma energy from the ^27Al(p,γ)^28Si reaction. These experiments are reproducible and repeatable with either HPGe or NaI(Tl) (NaI elsewhere in text) detectors. They are also in close agreement with the literature.

However, the main part of this work is described in chapter 4, where the first excited level of ^28Si at 1.78 MeV is studied thoroughly. This is a pilot work that has never been attempted before. A thorough empirical approach is undertaken and described in section 4.1, which lays out the rationale for attempting nuclear resonance experiments with the first excited state of ^28Si. The calculations suggest very close agreement between the 12.33 MeV and 1.78 MeV experiments. Based on that, 7 different experimental sets, with several subsets (within some of the sets), were performed. Very interesting results were obtained. However, so far it cannot be concluded whether NRA/NRF experiments can be performed using the first excited state of ^28Si. Most likely, high-current proton accelerators should be used and the experiments with the 1.78 MeV line should be repeated. These accelerators are described in chapter 5 and have a proton current output close to 1000 times higher than the McMaster KN accelerator. Finally, the dosimetry measurements suggest a negligible radiation dose from the KN accelerator, as well as from these more powerful accelerators. / Thesis / Doctor of Philosophy (PhD)
48

Compton polarimeter for Qweak Experiment at Jefferson Laboratory

Zou, David January 2011 (has links)
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Physics, 2011. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 41-42). / The Qweak experiment at Jefferson Lab aims to make the first precision measurement of the proton's weak charge, Q_W^p = 1 - 4 sin²θ_W, at Q² = 0.026 GeV². Given the precision goals of the Qweak experiment, the electron beam polarization must be known to an absolute uncertainty of 1%. A new Compton polarimeter has been built and installed in Hall C in order to make this important measurement. Compton polarimetry has been chosen for its ability to deliver continuous on-line measurements at the high currents necessary for Qweak (up to 180 µA). In this thesis, we collected and analyzed electron beam polarization data using the Qweak Compton polarimeter. Currently, data from the Compton polarimeter can already be used to calculate preliminary values of experimental physics asymmetries and of the electron beam polarization. These preliminary results are promising indications that Qweak will be able to meet its stated precision goals. / by David Zou. / S.B.
49

Design and prototyping of Hardware-Accelerated Locality-aware Memory Compression

Srinivas, Raghavendra 09 September 2020 (has links)
Hardware acceleration is the most sought-after technique in chip design to achieve better performance and power efficiency for critical functions that may be inefficiently handled by traditional OS/software. As technology advances, with 7 nm products already on the market that provide better power and performance in a small area, latency-critical functions that were traditionally handled in software have started moving into the chip as acceleration units. This thesis describes the accelerator architecture, implementation, and prototype for one such function, namely "Locality-Aware Memory Compression", which is part of the "OS-controlled memory compression" scheme that has been actively deployed in today's OSes. In brief, OS-controlled memory compression is a memory management feature that transparently, dramatically, and adaptively increases effective main memory capacity on demand as software-level memory usage grows beyond physical memory capacity. OS-controlled memory compression has been adopted across almost all OSes (e.g., Linux, Windows, macOS, AIX) and almost all classes of computing systems (e.g., smartphones, PCs, data centers, and cloud). The OS-controlled memory compression scheme is locality aware. Still, under OS-controlled memory compression today, applications experience long-latency page faults when accessing compressed memory. To remove this performance bottleneck, an acceleration technique has been proposed to manage locality-aware memory compression within hardware, thereby enabling applications to access their OS-compressed memory directly. This accelerator is referred to as HALK throughout this work, which stands for "Hardware-accelerated Locality-aware Memory Compression". The literal meaning of the word HALK in English is 'a hidden place'. As such, this accelerator is neither exposed to the OS nor to the running applications. It is hidden entirely in the memory controller hardware and incurs minimal hardware cost. This thesis work develops an FPGA design prototype and gives a proof of concept for the functionality of HALK by running non-trivial micro-benchmarks. This work also provides and analyses the power, performance, and area of HALK for ASIC designs (at the 7 nm technology node) and for a selected FPGA prototype design. / Master of Science / Memory capacity has become a scarce resource across many digital computing systems, spanning from smartphones to large-scale cloud systems. The slowing improvement of memory capacity per dollar further worsens this problem. To address this, almost all industry-standard OSes, such as Linux, Windows, and macOS, implement memory compression to store more data in the same space. This is handled in software in today's systems, which is very inefficient and suffers long latency, degrading user responsiveness. Dedicated hardware is typically faster at such computations than software, so a solution implemented in hardware with low area and low cost is preferred, as it can provide better performance and power efficiency. In the hardware world, such modules that perform specifically targeted software functions are called accelerators. This thesis presents the work on developing such a hardware accelerator to handle "Locality Aware Memory Compression", allowing applications to directly access compressed data without OS intervention and thereby improving the overall performance of the system. The proposed accelerator is locality aware, which means the least recently allocated uncompressed page is picked for compression to free up space on demand, while the most recently allocated page is kept in uncompressed format.
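The locality-aware policy described above can be modeled in a few lines: under memory pressure, the least recently used uncompressed page is compressed, and any page that is touched returns to the uncompressed set. The class and method names below are hypothetical illustrations, not HALK's actual interface:

```python
# Illustrative model of an LRU-based, locality-aware compression policy.
from collections import OrderedDict

class LocalityAwareMemory:
    def __init__(self, uncompressed_capacity: int):
        self.capacity = uncompressed_capacity
        self.hot = OrderedDict()   # uncompressed pages in LRU order
        self.compressed = {}       # page id -> compressed payload

    def touch(self, page, data=None):
        """Access a page: decompress it if needed, mark it most recently
        used, and compress the LRU victim if over capacity."""
        if page in self.compressed:
            data = self.compressed.pop(page)          # decompress on access
        elif page in self.hot:
            data = self.hot.pop(page)
        self.hot[page] = data                         # most recently used
        while len(self.hot) > self.capacity:
            victim, vdata = self.hot.popitem(last=False)  # LRU victim
            self.compressed[victim] = vdata

mem = LocalityAwareMemory(uncompressed_capacity=2)
for p in ("a", "b", "c"):     # "a" becomes least recently used
    mem.touch(p, data=p)
print(sorted(mem.hot), sorted(mem.compressed))  # → ['b', 'c'] ['a']
```

HALK's contribution is doing the equivalent of `touch` in the memory controller, so the decompression path no longer takes an OS page fault.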
50

Open-Source Parameterized Low-Latency Aggressive Hardware Compressor and Decompressor for Memory Compression

Jearls, James Chandler 16 June 2021 (has links)
In recent years, memory has shown to be a constraining factor in many workloads. Memory is an expensive necessity in many situations, from embedded devices with a few kilobytes of SRAM to warehouse-scale computers with thousands of terabytes of DRAM. Memory compression has existed in all major operating systems for many years. However, while faster than swapping to a disk, memory decompression adds latency to data read operations. Companies and research groups have investigated hardware compression to mitigate these problems. Still, open-source low-latency hardware compressors and decompressors do not exist; as such, every group that studies hardware compression must re-implement them. Importantly, because the devices that can benefit from memory compression vary so widely, there is no single solution that addresses all devices' area, latency, power, and bandwidth requirements. This work intends to address these issues with hardware compressors and decompressors. It implements hardware accelerators for three popular compression algorithms: LZ77, LZW, and Huffman encoding. Each implementation includes a compressor and decompressor, and all designs are entirely parameterized, with a total of 22 parameters across the designs. All of the designs are open-source under a permissive license. Finally, configurations of the work can achieve decompression latencies under 500 nanoseconds, much closer than existing works to the 255 nanoseconds required to read an uncompressed 4 KB page. The configurations of this work accomplish this while still achieving compression ratios comparable to software compression algorithms. / Master of Science / Computer memory, the fast, temporary storage where programs and data are held, is expensive and limited. Compression allows data and programs to be held in memory in a smaller format so they take up less space.
This work implements hardware designs for compression and decompression accelerators so that programs using compressed data can access it faster. It includes three hardware compressor and decompressor designs that can be easily modified and are free for anyone to use however they would like. The included designs are orders of magnitude smaller and less expensive than the existing state of the art, and they reduce decompression time by up to 6x. These smaller areas and latencies come at a relatively small cost in compression ratio: only 13% on average across the tested benchmarks.
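As a concrete illustration of one of the three algorithms, here is a minimal LZ77 round trip in Python. It uses the classic (offset, length, next-byte) token scheme for clarity; it is a software sketch of the algorithm, not the thesis's parameterized hardware design:

```python
def lz77_compress(data: bytes, window: int = 255):
    """Greedy LZ77: emit (offset, length, next_byte) tokens, searching a
    bounded sliding window for the longest earlier match."""
    i, out = 0, []
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            k = 0
            # keep the last byte as the literal, so stop at len(data) - 1
            while i + k < len(data) - 1 and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decompress(tokens):
    """Replay tokens; byte-at-a-time copy handles overlapping matches."""
    buf = bytearray()
    for off, length, nxt in tokens:
        for _ in range(length):
            buf.append(buf[-off])
        buf.append(nxt)
    return bytes(buf)

msg = b"abcabcabcd"
assert lz77_decompress(lz77_compress(msg)) == msg
```

A hardware version replaces the inner match search with parallel comparators over the window, which is where the latency and area parameters of such designs come from.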
