1

FDTD METHODS USING PARALLEL COMPUTATIONS AND HARDWARE OPTIMIZATION

CULLEY, ROBERT J. 08 October 2007 (has links)
No description available.
2

A HARDWARE IMPLEMENTATION FOR MULTIPLE BACKTRACING ALGORITHM

LU, FEI January 2005 (has links)
No description available.
3

IMPLEMENTATION AND PERFORMANCE OF A HIGH-SPEED, VHDL-BASED, MULTI-MODE ARTM DEMODULATOR

Hill, Terrance, Geoghegan, Mark, Hutzel, Kevin 10 1900 (has links)
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California / Legacy telemetry systems, although widely deployed, are being severely taxed to support the high data-rate requirements of advanced aircraft and missile platforms. Increasing data rates, in conjunction with the loss of spectrum, have created a need to use the available spectrum more efficiently. In response, new modulation techniques have been developed which offer more data capacity in the same operating bandwidth. Demodulation of these new waveforms is a computationally challenging task, especially at high data rates. This paper describes the design, implementation and performance of a high-speed, multi-mode demodulator for the Advanced Range Telemetry (ARTM) program which meets these challenges.
4

AN ADVANCED RECONFIGURABLE MULTI-CHANNEL COMMUNICATION TERMINAL FOR TELEMETRY APPLICATIONS BASED ON FLEXICOM 260A

Chandran, Henry 10 1900 (has links)
International Telemetering Conference Proceedings / October 22-25, 2001 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Traditional communication hardware has focused on modular architectures. Now, with the advent of high-speed DSPs and FPGAs, a shift from traditional modular architectures to reconfigurable architectures has taken place. The nature of this architecture makes it possible to optimize various telemetry applications on a single platform. This paper describes a reconfigurable multi-channel communication system.
5

Mapping recursive functions to reconfigurable hardware

Ferizis, George, Computer Science & Engineering, Faculty of Engineering, UNSW January 2005 (has links)
Reconfigurable computing is a method of development that provides a developer with the ability to reprogram a hardware device. In the specific case of FPGAs, this allows for rapid and cost-effective implementation of hardware devices when compared to a standard ASIC design, coupled with an increase in performance when compared to software-based solutions. With the advent of development tools such as Celoxica's DK package and Xilinx's Forge package, which support languages traditionally associated with software development, a change in the skill sets required to develop FPGA solutions from hardware designers to software programmers is possible and perhaps desirable to increase the adoption of FPGA technologies. To support developers with these skill sets, tools should closely mirror current software development tools in terms of language, syntax and methodology, while at the same time transparently and automatically taking advantage of as much as possible of the increased performance that reconfigurable architectures can provide over traditional software architectures, by utilizing the parallelism and the ability to create pipelines of arbitrary depth that are not present in traditional microprocessor designs. A common feature of many programming languages that is not supported by many higher-level design tools is recursion. Recursion is a powerful method used to elegantly describe many algorithms. Recursion is typically implemented by using a stack to store arguments, context and a return address for function calls. This, however, limits the controlling hardware to running only a single function at any moment, which eliminates an algorithm's ability to take advantage of the parallelism available between successive iterations of a recursive function. This squanders the high amount of parallelism provided by the resources on the FPGA, thus reducing the performance of the recursive algorithm. 
This thesis presents a method to address the lack of support for recursion in design tools that exploits the parallelism available between recursive calls. It does this by unrolling the recursion into a pipeline, in a similar manner to the pipeline obtained from loop unrolling, and then streaming the data through the resulting pipeline. However, essential differences between loops and recursive functions, such as multiple recursive calls in a function (and hence multiple unrollings) and post-recursive statements, add further complexity to the issue of unrolling, as the pipeline may take a non-linear shape and contain heterogeneous stages. Unrolling the recursive function on the FPGA increases the parallelism available; however, the depth of the pipeline, and therefore the amount of parallelism available, is limited by the finite resources on the FPGA. To make efficient use of the resources on the FPGA, the system must be able to unroll the function in a way that best suits the input, but it must also ensure that the function is not unrolled past its maximum recursive depth. A trivial solution such as unrolling on demand introduces a latency into the system when a further instance of the function is unrolled, which reduces overall performance. To reduce this penalty it is desirable for the system to be able to predict the behaviour of the recursive function based on the input data and unroll the function to a suitable length prior to it being required. Accurate prediction is possible in cases where the condition for recursion is a simple function of the arguments; however, in cases where the condition for recursion is based on complex functions, such as the entire recursive function, accurate prediction is not possible. In such situations a heuristic is used which provides a close approximation to the correct depth of recursion at any given time. 
This prediction allows the system to reduce the performance penalty from real-time unrolling without over-utilization of the FPGA resources. Results obtained demonstrate the increase in performance for various recursive functions, obtained from the increased parallelism, when compared to a stack-based implementation on the same device. In certain instances, due to constraints on hardware availability, results were gained from device simulation using a simulator developed for this purpose. Details of this simulator are presented in this thesis.
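The core idea above — unrolling a recursion into a chain of stages so several independent calls are in flight at once — can be illustrated with a minimal software sketch. This is not the thesis's implementation: the function (`factorial`), the stage representation, and the fixed `depth` parameter are all illustrative assumptions; real hardware stages would run concurrently rather than in a Python loop.

```python
def factorial_stack(n):
    # Conventional stack-based recursion: only one call is active at a time.
    return 1 if n <= 1 else n * factorial_stack(n - 1)

def one_step(item):
    """One unrolled recursive step on an in-flight (n, accumulator) pair."""
    if item is None:
        return None
    n, acc = item
    if n > 1:
        return (n - 1, acc * n)   # recursive case: perform one call's work
    return item                    # base case: pass through unchanged

def factorial_pipeline(inputs, depth):
    """Stream one new input per cycle through `depth` unrolled stages.

    `depth` models the recursion depth the hardware was unrolled to, so it
    must be at least max(inputs) - 1; deeper calls would emerge unfinished.
    """
    regs = [None] * depth                      # pipeline registers
    results = []
    stream = [(n, 1) for n in inputs] + [None] * depth  # trailing Nones flush
    for incoming in stream:
        # Each cycle, every stage advances its item by one recursive step
        # and passes it on; a fresh input enters at stage 0.
        regs = [one_step(incoming)] + [one_step(r) for r in regs[:-1]]
        if regs[-1] is not None:
            results.append(regs[-1][1])        # item leaves the final stage
    return results
```

With `depth = 4`, the three calls `factorial(3)`, `factorial(4)` and `factorial(5)` overlap in the pipeline instead of running back-to-back on a stack, which is the source of the speed-up the abstract describes.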
6

Adaptation of The ePUMA DSP Platform for Coarse Grain Configurability

Pishgah, Sepehr January 2011 (has links)
Configurable devices have become increasingly popular because they can improve system performance in many ways. This thesis work studies how the introduction of coarse-grain configurability can improve the ePUMA, a low-power, high-speed DSP platform, in terms of performance and power consumption. The study takes two DSP algorithms, the Fast Fourier Transform (FFT) and FIR filtering, as benchmarks to study the effect of this new feature. Architectures are presented for the calculation of FFTs and FIR filters, and it is shown how they can contribute to system performance. Finally, it is suggested that coarse-grain configurability be considered as an option for improving the system.
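For readers unfamiliar with the two benchmark kernels named above, here is a minimal software sketch of each — a direct-form FIR filter and a recursive radix-2 FFT. These are textbook reference versions, not the ePUMA architectures the thesis presents.

```python
import cmath
import math

def fir_filter(samples, taps):
    """Direct-form FIR: y[n] = sum_k taps[k] * x[n-k]."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:            # samples before x[0] are taken as zero
                acc += h * samples[n - k]
        out.append(acc)
    return out

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return x[:]
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * math.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out
```

The inner multiply-accumulate of `fir_filter` and the butterfly loop of `fft_radix2` are exactly the structures a coarse-grain configurable datapath would be reconfigured between.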
7

Desenvolvimento e implementação de algoritmos de compressão aplicados à qualidade da energia elétrica

Dapper, Roque Eduardo January 2013 (has links)
Most power quality analyzers record the sampled waveform only around the moment when a disturbance, typically a transient, is detected. This limitation is largely due to the storage limits of retentive memories and the high cost they represent in an instrument. However, a new generation of analyzers is becoming increasingly common: continuous-logging power quality analyzers. This family of analyzers, besides saving reports based on the calculation of pre-defined parameters, also performs continuous storage of the sampled waveform. This approach allows new analyses to be run on the collected data as the mathematical tools for power quality analysis evolve, yielding new conclusions about an electrical system. To apply this approach, however, the information must be stored as efficiently as possible, given the large volume of data sampled over an entire analysis period. This work develops a compression algorithm for power quality records, as well as its implementation in reconfigurable hardware. The compression algorithms developed are based on a compression system composed of different compression techniques used together. The proposed methods use the Deflate algorithm for lossless compression. To improve Deflate's compression capability, transformation, polynomial approximation and data-encoding techniques are applied to reduce the entropy of the data and thus increase compression efficiency. Finally, the implementation of the polynomial and Deflate compression algorithms is presented; they were implemented in VHDL and synthesized for use on an FPGA.
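The pipeline described — a predictive transform that lowers the entropy of smooth waveform data before Deflate — can be sketched in a few lines. This is an illustrative stand-in, not the thesis's algorithm: it uses second-order differencing (equivalent to a degree-1 polynomial predictor) on a synthetic power-line-like signal, with Python's `zlib` as the Deflate implementation.

```python
import math
import struct
import zlib
from itertools import accumulate

# Synthetic waveform: 2048 int16 samples of a smooth sinusoid.
samples = [int(16000 * math.sin(2 * math.pi * 8 * i / 2048)) for i in range(2048)]
raw = struct.pack('<%dh' % len(samples), *samples)

def second_diff(xs):
    """Second-order differences; smooth signals leave small residuals."""
    d1 = [xs[0]] + [xs[i] - xs[i - 1] for i in range(1, len(xs))]
    return [d1[0]] + [d1[i] - d1[i - 1] for i in range(1, len(d1))]

resid = struct.pack('<%dh' % len(samples), *second_diff(samples))

plain = zlib.compress(raw, 9)    # Deflate directly on the raw samples
pre = zlib.compress(resid, 9)    # Deflate after the entropy-reducing transform

# The transform is lossless: two cumulative sums invert the two differences.
recovered = list(accumulate(accumulate(second_diff(samples))))
```

On this signal the preprocessed stream compresses to a fraction of the size of the raw stream, which is the effect the abstract attributes to its polynomial-approximation stage.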
8

Low-power adaptive control scheme using switching activity measurement method for reconfigurable analog-to-digital converters

Ab Razak, Mohd Zulhakimi January 2014 (has links)
Power consumption is a critical issue for portable devices. The ever-increasing demand for multimode wireless applications and the growing concerns towards power-aware green technology make dynamically reconfigurable hardware an attractive solution for overcoming the power issue, thanks to its flexibility, reusability and adaptability. During the last decade, reconfigurable analog-to-digital converters (ReADCs) have been used to support multimode wireless applications. With the ability to adaptively scale power consumption according to different operation modes, reconfigurable devices utilise the power supply efficiently. This can prolong battery life and reduce unnecessary heat emission to the environment. However, current adaptive mechanisms for ReADCs rely upon external control signals generated by digital signal processors (DSPs) in the baseband. This thesis aims to provide a single-chip solution for real-time, low-power ReADC implementations that can adaptively change the converter resolution according to signal variations without the need for baseband processing. Specifically, the thesis focuses on the analysis, design and implementation of a low-power digital controller unit for ReADCs. In this study, two important reconfigurability issues are investigated: i) the detection mechanism for an adaptive implementation, and ii) the measurement of the power and area overheads introduced by the adaptive control modules. This thesis outlines four main achievements to address these issues. The first is the development of the switching activity measurement (SWAM) method to detect different signal components based upon observation of the output of an ADC. The second is a proposed adaptive algorithm for ReADCs to dynamically adjust the resolution depending upon variations in the input signal. The third is an ASIC implementation of the adaptive control module for ReADCs; the module achieves low reconfiguration overheads in terms of area and power compared with the main analog part of a ReADC. The fourth is the development of a low-power noise detection module using a conventional ADC for signal improvement. Taken together, the findings from this study demonstrate the potential of using the switching activity information of an ADC to adaptively control the circuits, while simultaneously expanding the functionality of the ADC in electronic systems.
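The detection idea behind SWAM — inferring how busy the input is by counting bit toggles in successive ADC output codes — can be sketched as follows. The toggle-rate thresholds and resolution modes here are illustrative assumptions, not the thesis's actual adaptive algorithm.

```python
def switching_activity(codes, bits=10):
    """Total number of output bits that toggle between successive ADC codes."""
    toggles = 0
    mask = (1 << bits) - 1
    for prev, cur in zip(codes, codes[1:]):
        toggles += bin((prev ^ cur) & mask).count('1')  # popcount of changed bits
    return toggles

def choose_resolution(codes, bits=10, low=0.15, high=0.35):
    """Map the average per-sample toggle rate to a resolution mode.

    Thresholds `low`/`high` are hypothetical; a slowly varying input toggles
    few bits per sample, so a lower resolution (and lower power) suffices.
    """
    rate = switching_activity(codes, bits) / (bits * max(len(codes) - 1, 1))
    if rate < low:
        return 6
    if rate < high:
        return 8
    return 10
```

A DC-like input such as a run of identical codes produces zero switching activity and selects the lowest-power mode, while a rapidly alternating input drives the controller to full resolution — the adaptive behaviour the abstract describes, without any baseband DSP involvement.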