181 |
Wireless transmission of medical images for an ultra-portable ultrasound scanner (Transmisión inalámbrica de imágenes médicas para un ecógrafo ultra-portátil)
Hasbún Avendaño, Nicolás Ignacio January 2018 (has links)
Electrical Civil Engineer (Ingeniero Civil Eléctrico) / 23/07/2023
|
182 |
Mapping recursive functions to reconfigurable hardware
Ferizis, George, Computer Science & Engineering, Faculty of Engineering, UNSW January 2005 (has links)
Reconfigurable computing is a development method that gives a developer the ability to reprogram a hardware device. In the specific case of FPGAs, this allows rapid and cost-effective implementation of hardware devices compared to a standard ASIC design, coupled with an increase in performance compared to software-based solutions. With the advent of development tools such as Celoxica's DK package and Xilinx's Forge package, which support languages traditionally associated with software development, a change in the skill set required to develop FPGA solutions from hardware designers to software programmers is possible, and perhaps desirable to increase the adoption of FPGA technologies.

To support developers with these skill sets, tools should closely mirror current software development tools in terms of language, syntax and methodology, while at the same time transparently and automatically taking advantage of as much as possible of the increased performance that reconfigurable architectures provide over traditional software architectures, by exploiting parallelism and the ability to create pipelines of arbitrary depth, neither of which is present in traditional microprocessor designs.

A common feature of many programming languages that is not supported by many higher-level design tools is recursion. Recursion is a powerful method for describing many algorithms elegantly. It is typically implemented using a stack to store arguments, context and a return address for each function call. This, however, limits the controlling hardware to running only a single function at any moment, which prevents an algorithm from exploiting the parallelism available between successive iterations of a recursive function. This squanders the high degree of parallelism provided by the resources on the FPGA, reducing the performance of the recursive algorithm.
This thesis presents a method to address the lack of support for recursion in design tools, one that exploits the parallelism available between recursive calls. It does this by unrolling the recursion into a pipeline, in a manner similar to the pipeline obtained from loop unrolling, and then streaming data through the resulting pipeline. However, essential differences between loops and recursive functions, such as multiple recursive calls in a function (and hence multiple unrollings) and post-recursive statements, add further complexity to the unrolling problem, as the pipeline may take a non-linear shape and contain heterogeneous stages.

Unrolling the recursive function on the FPGA increases the parallelism available; however, the depth of the pipeline, and therefore the amount of parallelism available, is limited by the finite resources on the FPGA. To make efficient use of those resources, the system must be able to unroll the function in the way that best suits the input, while also ensuring that the function is not unrolled past its maximum recursive depth. A trivial solution such as unrolling on demand introduces a latency into the system each time a further instance of the function is unrolled, which reduces overall performance. To reduce this penalty, it is desirable for the system to predict the behaviour of the recursive function from the input data and unroll the function to a suitable length before it is required. Accurate prediction is possible when the condition for recursion is a simple function of the arguments; when the condition is based on complex functions, such as the entire recursive function itself, accurate prediction is not possible. In such situations a heuristic is used that provides a close approximation to the correct depth of recursion at any given time.
This prediction allows the system to reduce the performance penalty of real-time unrolling without over-utilising the FPGA resources. The results obtained demonstrate the increase in performance, relative to a stack-based implementation on the same device, that various recursive functions gain from the increased parallelism. In certain instances, owing to constraints on hardware availability, results were obtained from device simulation using a simulator developed for this purpose; details of this simulator are presented in this thesis.
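The unrolling scheme described above can be illustrated with a small software model (an illustration of the general idea only, not the thesis implementation; the factorial example, `MAX_DEPTH`, and the stage shape are assumptions): each pipeline stage performs one level of the recursion, and every input streams through the same fixed chain of stages, so in hardware successive inputs would occupy successive stages concurrently.

```python
# Software model of recursion unrolled into a pipeline (illustrative only).
# Each stage is one recursion level of factorial; in hardware each stage is a
# separate block, so several inputs can be "in flight" at once.

MAX_DEPTH = 8  # stands in for the FPGA resource limit on unrolling

def stage(state):
    """One unrolled recursion level: (accumulator, n) -> next state."""
    acc, n = state
    if n <= 1:                # base case reached: pass through unchanged
        return (acc, n)
    return (acc * n, n - 1)   # one recursive call, folded into the pipeline

def pipeline(n):
    """Stream one value through MAX_DEPTH identical stages."""
    state = (1, n)
    for _ in range(MAX_DEPTH):
        state = stage(state)
    acc, rem = state
    if rem > 1:               # input deeper than the unrolled depth
        raise ValueError("recursion deeper than unrolled pipeline")
    return acc

print([pipeline(n) for n in range(1, 8)])  # -> [1, 2, 6, 24, 120, 720, 5040]
```

An input needing more than `MAX_DEPTH` levels would, in the thesis's setting, re-enter the pipeline or trigger further unrolling; in this sketch it simply raises.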
|
183 |
A multi-channel real-time GPS position location system
Parkinson, Kevin James, Surveying & Spatial Information Systems, Faculty of Engineering, UNSW January 2008 (has links)
Since its introduction in the early 1980s, the Global Positioning System (GPS) has become an important worldwide resource. Although the primary use of GPS is position location, the timing accuracy inherent in the system has allowed it to become an important synchronisation resource for other systems. In most cases the GPS end user only requires a position estimate, without awareness of the timing and synchronisation aspects of the system. A low-accuracy position (at the several-metre level) with a low update rate of about 1 Hz is often acceptable. However, obtaining more accurate position estimates (at the sub-metre level) at higher update rates requires differential correction signals (DGPS) and greater processing power in the receiver. Furthermore, extra challenges arise when simultaneously gathering information from a group of independently moving remote GPS receivers (rovers) at increased sampling rates (10 Hz). This creates the need for a high-bandwidth telemetry system and for techniques to synchronise the position measurements used to track each rover.

This thesis investigates and develops an overall solution to these problems, using GPS for both position location and synchronisation. A system is designed to generate relative position information from 30 or more rovers in real time. The important contributions of this research are as follows:

a) A GPS-synchronised telemetry system is developed to transport GPS data from each rover. Proof-of-concept experiments show why a conventional RF Local Area Network (LAN) is not suitable for this application. The new telemetry system is built around Field Programmable Gate Array (FPGA) devices, which embed both the synchronising logic and the central processor.

b) A new system architecture is developed to reduce the processing load of the GPS receiver. Furthermore, the need to transfer DGPS correction data to the rover is eliminated.
Instead, the receiver raw data is processed in a centralised Kalman filter to produce multiple position estimates in real time.

c) Steps are taken to optimise the telemetry data stream by sending only the essential data from each rover. A custom protocol is developed to deliver the GPS receiver raw data to the central point with minimal latency. The central software is designed to extract and manage common elements, such as satellite ephemeris data, from the central reference receiver only.

d) Methods are developed to make the overall system more robust by identifying and understanding its points of failure, and by providing fallback options that allow recovery with minimal impact.

Based on the above, a system is designed and integrated using a mixture of custom hardware, custom software and off-the-shelf hardware. Overall tests show that the efforts to minimise latency, minimise power requirements and improve reliability have delivered good results.
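A minimal sketch of what such a stripped-down rover protocol could look like (the field layout, widths, and names here are hypothetical, not the protocol developed in the thesis): each packet carries only a rover id, a GPS time of week, and raw per-satellite observations, with ephemeris deliberately absent because the central reference receiver supplies it for all rovers.

```python
import struct

# Hypothetical minimal rover packet (layout and field names are assumptions):
# header = rover id, GPS time of week in ms, observation count;
# each observation = satellite PRN and raw pseudorange in metres.
HEADER = struct.Struct("<BIB")  # rover_id:u8, tow_ms:u32, n_obs:u8
OBS = struct.Struct("<Bd")      # prn:u8, pseudorange:f64

def pack_packet(rover_id, tow_ms, obs):
    """Serialise one rover epoch; ephemeris is never sent by the rover."""
    buf = HEADER.pack(rover_id, tow_ms, len(obs))
    for prn, pr in obs:
        buf += OBS.pack(prn, pr)
    return buf

def unpack_packet(buf):
    """Recover (rover_id, tow_ms, observations) from a serialised packet."""
    rover_id, tow_ms, n = HEADER.unpack_from(buf, 0)
    off = HEADER.size
    obs = []
    for _ in range(n):
        obs.append(OBS.unpack_from(buf, off))
        off += OBS.size
    return rover_id, tow_ms, obs
```

With this layout an epoch with two satellites fits in 24 bytes, which is the kind of economy the thesis's latency and bandwidth goals call for.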
|
184 |
Adaptation algorithms for the physical layer of multi-carrier systems (Algorithmes d'adaptation pour la couche physique de systèmes multi-porteuses)
Mahmood, Asad 16 July 2008 (has links) (PDF)
Current multi-carrier (MCM) systems do not reach their full potential because their operating parameters (e.g. constellation size, coding rate, transmit power) are not adapted to the channel state information (CSI) on each subcarrier. This thesis addresses the complexity of adaptation algorithms by proposing new optimisation algorithms for MCM systems. Algorithm complexity is targeted both at the theoretical/algorithmic level and on the architectural side. A new bit-loading algorithm (adaptation of the constellation size) is designed, based on an allocation rhythm observed in the optimal/greedy allocation; the new algorithm has a much lower complexity than other algorithms. Next, theoretical developments lead to the design of a new algorithm for the optimal distribution of the total power, taking a peak-power constraint into account. Since the coding rate is an important physical-layer parameter, a new method for optimising the irregularity profile of irregular LDPC codes, based on quantifying the 'wave effect' phenomenon, is proposed, together with theoretical developments leading to an efficient computation method. Finally, exploiting architectural adaptivity, a new methodology is proposed for optimising the architectural resources of a given adaptation algorithm, taking into account the timing constraints of real-time transmission over the channel.
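The greedy allocation that low-complexity bit-loading algorithms are measured against can be sketched as follows (a textbook Hughes-Hartogs-style greedy loop; the SNR-gap model and the example gains are illustrative assumptions, not the thesis's algorithm): each iteration grants one more bit to the subcarrier whose next bit costs the least additional power, until the total power budget runs out.

```python
import heapq

# Greedy bit-loading sketch (Hughes-Hartogs style, illustrative values).
# With an SNR gap gamma, carrying b bits on a subcarrier with gain g costs
# gamma * (2**b - 1) / g, so the b-th extra bit costs gamma * 2**(b-1) / g.

def greedy_bitload(gains, p_budget, gamma=1.0, max_bits=8):
    bits = [0] * len(gains)
    p_used = 0.0
    # heap of (incremental power for the next bit, subcarrier index)
    heap = [(gamma * 2 ** 0 / g, i) for i, g in enumerate(gains)]
    heapq.heapify(heap)
    while heap:
        dp, i = heapq.heappop(heap)
        if p_used + dp > p_budget:   # cheapest next bit is unaffordable
            break
        p_used += dp
        bits[i] += 1
        if bits[i] < max_bits:
            heapq.heappush(heap, (gamma * 2 ** bits[i] / gains[i], i))
    return bits, p_used

bits, used = greedy_bitload([1.0, 0.5, 0.25], p_budget=10.0)
print(bits, used)  # -> [3, 1, 0] 9.0
```

The optimal greedy loop costs one heap operation per allocated bit, which is exactly the per-bit work that a lower-complexity allocation scheme tries to avoid.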
|
185 |
Implementation of a centralized scheduler for the Mitrion Virtual Processor
Persson, Magnus January 2008 (has links)
Mitrionics is a company based in Lund, Sweden. It develops a platform for FPGA-based acceleration; the platform includes a virtual processor, the Mitrion Virtual Processor, which can be custom built to fit the application to be accelerated. The purpose of this thesis is to investigate the possible benefits of using a centralized scheduler for the Mitrion Virtual Processor instead of the current solution, a distributed scheduler. A centralized scheduler has been implemented and evaluated using a set of benchmark applications. It has been found that the centralized scheduler can decrease the number of registers needed to implement the Mitrion Virtual Processor on an FPGA; the size of the decrease depends on the application, and certain applications are more suitable than others. It has also been found that introducing a centralized scheduler makes it more difficult for the place-and-route tool to fit a design on the FPGA, resulting in failed timing constraints for the largest benchmark application.
|
186 |
Port and extension of processor for ASIC and FPGA
Olsson, Martin January 2009 (has links)
In this master thesis, the possibilities of customizing a low-cost microprocessor with the purpose of replacing an existing microprocessor solution are investigated. A brief survey of suitable processors is carried out, from which a replacement is chosen. The replacement processor is then analyzed and extended with accelerators in order to meet the set requirements.

The result is a port of the Lattice Mico32 processor to the Xilinx Virtex-5 FPGA, replacing an earlier solution based on the Xilinx MicroBlaze. To reach the set requirements, accelerators for floating-point arithmetic and FIR filtering have been developed. The toolchain for the processor has been modified to support the added accelerated floating-point arithmetic.

A final evaluation shows that the presented solution fulfils the set requirements and constitutes a functional replacement for the previous solution.
|
187 |
Channel coding application for cdma2000 implemented in a FPGA with a Soft processor core
Kling, Mikael January 2005 (has links)
With today's FPGAs it is possible to implement complete systems in a single device. With the help of soft processor cores such as the MicroBlaze, several microcontrollers can be implemented in the same FPGA.

The third-generation telecommunications system cdma2000 has several channels, each with a specific assignment. The purpose of the Sync channel is to attain initial time synchronization.

The purpose of this thesis has been to implement the Sync channel in an FPGA using a MicroBlaze processor, and then to evaluate the concept of using a soft processor core instead of ordinary DSPs and microcontrollers.

The thesis has resulted in a system with a MicroBlaze processor that has the Sync channel as a peripheral. It is possible to write information via HyperTerminal to the MicroBlaze processor, which then uses this data as input to the Sync channel. The Sync channel modulates the data according to the cdma2000 specifications and outputs it onto an external pin of the FPGA.

The evaluation of this concept has not resulted in a general recommendation on whether to use ASICs or FPGAs in a system. The concept of using soft processor cores certainly has its benefits and is something to consider when designing future systems.
|
188 |
Implementation of an IEEE 802.11a transmitter in VHDL for Altera Stratix II FPGA
Brännström, Johannes January 2006 (has links)
The fast growth of wireless local area networks has opened up a whole new market for wireless solutions. Released in 1999, IEEE 802.11a is a standard for high-speed wireless data transfer on which much modern Wireless Local Area Network technology is based.

This project has been about implementing the transmitter part of the 802.11a physical layer in VHDL to run on the Altera Stratix II FPGA. Special consideration was given to dividing the system into parts based on sample rate. This report contains a brief introduction to Orthogonal Frequency Division Multiplexing and to the IEEE 802.11a physical layer, as well as a description of the implemented system.
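The core of an OFDM transmitter of this kind can be sketched in a few lines (a toy model under assumed parameters: 8 BPSK subcarriers and a 2-sample cyclic prefix, not the standard's 64-point IFFT with 48 data and 4 pilot subcarriers): map bits to subcarrier amplitudes, run an inverse DFT, and prepend a cyclic prefix.

```python
import cmath

# Toy OFDM symbol assembly (assumed parameters: 8 BPSK subcarriers,
# 2-sample cyclic prefix; the real 802.11a PHY uses a 64-point IFFT).
N = 8

def idft(freq):
    """Inverse DFT: subcarrier values -> time-domain samples."""
    return [sum(freq[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N
            for n in range(N)]

def ofdm_symbol(bits, cp_len=2):
    """Map bits to BPSK subcarriers, transform, prepend a cyclic prefix."""
    assert len(bits) == N
    freq = [1.0 if b else -1.0 for b in bits]  # BPSK: 1 -> +1, 0 -> -1
    time = idft(freq)
    return time[-cp_len:] + time               # cyclic prefix + symbol body
```

In a hardware pipeline like the one described above, the bit mapping, the IFFT, and the prefix insertion run at different sample rates, which is the motivation for partitioning the design by rate.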
|
189 |
Evaluation of a Field-Programmable Gate Array (FPGA) as a coprocessor for performance improvement (Utvärdering av Field-Programmable Gate Array (FPGA) som hjälpprocessor för prestandaökning)
Krantz, Emil January 2008 (has links)
This work evaluates whether there are problems that can gain a performance benefit from using a Field-Programmable Gate Array (FPGA) as a coprocessor alongside a microprocessor, compared with using a microprocessor alone. To determine this, a Gaussian filtering algorithm was implemented both on a microprocessor in the C language and on an FPGA in the Very High Speed Integrated Circuit Hardware Description Language (VHDL). Simulations were run for the two implementations, and the results showed that a 25-fold performance increase was possible for this particular algorithm.
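The Gaussian filtering benchmark can be sketched in software as a separable two-pass convolution (the kernel radius and sigma here are arbitrary choices, not the thesis's parameters); the same row/column decomposition into multiply-accumulate chains is what maps naturally onto an FPGA pipeline.

```python
import math

# Separable Gaussian blur sketch: one 1-D pass along rows, one along
# columns, with clamped (edge-replicating) borders. Illustrative only.

def gaussian_kernel(radius, sigma):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    k = [math.exp(-(x * x) / (2 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve_rows(img, kernel):
    """Convolve each row with the kernel, clamping at the edges."""
    r = len(kernel) // 2
    out = []
    for row in img:
        n = len(row)
        out.append([
            sum(kernel[j + r] * row[min(max(i + j, 0), n - 1)]
                for j in range(-r, r + 1))
            for i in range(n)
        ])
    return out

def gaussian_blur(img, radius=1, sigma=1.0):
    """Two separable passes: rows, then (via transpose) columns."""
    k = gaussian_kernel(radius, sigma)
    tmp = convolve_rows(img, k)           # horizontal pass
    tmp = [list(c) for c in zip(*tmp)]    # transpose
    tmp = convolve_rows(tmp, k)           # vertical pass
    return [list(c) for c in zip(*tmp)]   # transpose back
```

Each output pixel needs only 2 * (2 * radius + 1) multiply-accumulates in this separable form, and in hardware the two passes become two shift-register pipelines, which is where the reported speedup comes from.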
|
190 |
Design tradeoff analysis of floating-point adder in FPGAs
Malik, Ali 19 August 2005 (has links)
Field Programmable Gate Arrays (FPGAs) are increasingly being used to design high-end, computationally intense microprocessors capable of handling both fixed- and floating-point mathematical operations. Addition is the most complex operation in a floating-point unit; it introduces major delay while occupying significant area. Over the years, the VLSI community has developed many floating-point adder algorithms, aimed mainly at reducing overall latency.
Implementing a floating-point adder efficiently on an FPGA involves significant area and performance overheads. With recent advances in FPGA architecture and area density, latency has been the main focus of attention in the drive to improve performance. Our research was oriented towards studying and implementing the standard, Leading One Predictor (LOP), and far and close data-path floating-point addition algorithms. Each algorithm has complex sub-operations that contribute significantly to the overall latency of the design. Each sub-operation was researched across different implementations and then synthesized onto a Xilinx Virtex2p FPGA device, with the best-performing variant chosen.
This thesis discusses in detail the best possible FPGA implementation of all three algorithms and will act as an important design resource. The performance criterion is latency in all cases. The algorithms are compared for overall latency, area, and levels of logic, and analyzed specifically for the Virtex2p architecture, one of the latest FPGA architectures provided by Xilinx. According to our results, the standard algorithm is the best implementation with respect to area, but has a large overall latency of 27.059 ns while occupying 541 slices. The LOP algorithm improves latency by 6.5% at the added expense of 38% more area compared to the standard algorithm. The far and close data-path implementation shows a 19% improvement in latency at the added expense of 88% more area compared to the standard algorithm. The results clearly show that for an area-efficient design the standard algorithm is the best choice, but for designs where latency is the performance criterion, far and close data-path is the best alternative. The standard and LOP algorithms were pipelined into five stages and compared with the Xilinx Intellectual Property core. The pipelined LOP gives a 22% better clock speed at an added expense of 15% more area when compared to the Xilinx Intellectual Property core, and is thus a better choice for higher-throughput applications. Test benches were also developed to test these algorithms both in simulation and in hardware.
Our work is an important design resource for the development of floating-point adder hardware on FPGAs. All sub-components within the floating-point adder and the known algorithms were researched and implemented to provide versatility and flexibility to designers, as an alternative to intellectual property cores over which they have no control. The VHDL code is open source and can be used by designers with proper reference.
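The serial data path of the standard algorithm, whose latency the LOP and far/close variants attack, can be sketched with toy unsigned operands (the 8-bit mantissa format is illustrative, not IEEE-754 or the thesis's implementation; only addition of positive numbers is modelled, so the close path's subtraction and rounding are omitted): align, add, then normalize, where the post-add normalization shift is exactly what a Leading One Predictor computes in parallel with the addition.

```python
# Toy "standard algorithm" floating-point add (positive operands only;
# 8-bit mantissa with the hidden bit included; illustrative, not IEEE-754).
MBITS = 8

def fp_add(a, b):
    """a, b are (exponent, mantissa) pairs, mantissa normalized to [128, 256)."""
    (ea, ma), (eb, mb) = a, b
    if ea < eb:                        # make a the larger-exponent operand
        (ea, ma), (eb, mb) = (eb, mb), (ea, ma)
    mb >>= (ea - eb)                   # 1) align the smaller mantissa
    m = ma + mb                        # 2) mantissa add (the long carry chain)
    e = ea
    while m >= 2 ** MBITS:             # 3) renormalize right on carry-out
        m >>= 1
        e += 1
    # With subtraction, a left shift by the leading-zero count would follow
    # here; predicting that count before the add finishes is the LOP's job.
    return e, m

print(fp_add((0, 128), (0, 128)))  # 1.0 + 1.0 -> (1, 128), i.e. 1.0 * 2^1
```

Because steps 1-3 depend on each other, a straightforward implementation serializes them, which is why the standard algorithm is small but slow and the LOP and far/close variants trade area to shorten the chain.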
|