51

Computational Redundancy in Image Processing

Khalvati, Farzad January 2008 (has links)
This research presents a new performance improvement technique, window memoization, for software and hardware implementations of local image processing algorithms. Window memoization combines the memoization techniques proposed in software and hardware with a characteristic of image data, computational redundancy, to improve the performance (in software) and efficiency (in hardware) of local image processing algorithms. The computational redundancy of an image indicates the percentage of computations that can be skipped when performing a local image processing algorithm on the image. Our studies show that computational redundancy is inherited from two principal redundancies in image data: coding redundancy and interpixel redundancy. We have shown mathematically that the amount of coding and interpixel redundancy of an image has a positive effect on its computational redundancy: higher coding and interpixel redundancy leads to higher computational redundancy. We have also demonstrated (mathematically and empirically) that the amount of coding and interpixel redundancy of an image has a positive effect on the speedup obtained for the image by window memoization in both software and hardware. Window memoization minimizes the number of redundant computations performed on an image by identifying similar neighborhoods of pixels in the image. It uses a memory, called a reuse table, to store the results of previously performed computations. When a set of computations is performed for the first time, the result is computed and stored in the reuse table. When the same set of computations is required again, the previously calculated result is reused and the actual computations are skipped. Implementing the window memoization technique in software speeds up the computations required to complete an image processing task. In software, we have developed an optimized architecture for window memoization and applied it to six image processing algorithms: Canny edge detector, morphological gradient, Kirsch edge detector, Trajkovic corner detector, median filter, and local variance. The typical speedup factors range from 1.2 to 7.9, with a maximum of 40. We have also presented a performance model to predict the speedups obtained by window memoization in software. In hardware, we have developed an optimized architecture that embodies the window memoization technique. Our hardware design for window memoization achieves high speedups with an overhead in hardware area that is significantly less than that of conventional performance improvement techniques. As case studies in hardware, we have applied window memoization to the Kirsch edge detector and median filter. The typical and maximum speedup factors in hardware are 1.6 and 1.8, respectively, with 40% less hardware than conventional optimization techniques.
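A minimal sketch of the reuse-table idea behind window memoization, assuming a Python dictionary keyed by the flattened pixel neighborhood; the function name, window size, and the local-variance example are illustrative, not taken from the thesis:

```python
import numpy as np

def memoized_filter(image, kernel, window=3):
    """Apply a local window operator with a reuse table (memoization sketch).

    `kernel` maps a flattened pixel neighborhood (a tuple) to an output value.
    When an identical neighborhood recurs, the stored result is reused and the
    computation is skipped.
    """
    reuse_table = {}          # neighborhood -> previously computed result
    h, w = image.shape
    r = window // 2
    out = np.zeros_like(image, dtype=float)
    hits = 0
    for y in range(r, h - r):
        for x in range(r, w - r):
            key = tuple(image[y - r:y + r + 1, x - r:x + r + 1].ravel())
            if key in reuse_table:          # redundant computation: reuse
                out[y, x] = reuse_table[key]
                hits += 1
            else:                           # first occurrence: compute and store
                result = kernel(key)
                reuse_table[key] = result
                out[y, x] = result
    total = (h - 2 * r) * (w - 2 * r)
    print(f"computational redundancy exploited: {hits / total:.1%}")
    return out

# Example: local variance over 3x3 windows on a synthetic low-entropy image.
img = np.tile(np.arange(8, dtype=np.uint8), (64, 8))
memoized_filter(img, lambda nb: float(np.var(nb)))
```

On low-entropy images most neighborhoods repeat, so the reported redundancy fraction is high and most kernel evaluations are skipped.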
53

A high-speed two-step analog-to-digital converter with an open-loop residue amplifier

Dinc, Huseyin 04 April 2011 (has links)
It is well known that feedback is a valuable tool for analog designers to improve linearity and to desensitize circuit parameters to process, temperature, and supply variations. However, strong global feedback limits the operating speed of analog circuits due to stability requirements. The circuits and techniques explored in this research avoid strong-global-feedback circuits in order to achieve high conversion rates in a two-stage analog-to-digital converter (ADC). A two-step, 9-bit, complementary-metal-oxide-semiconductor (CMOS) ADC utilizing an open-loop residue amplifier is demonstrated. A background-calibration technique is proposed to generate the reference voltage used in the second stage of the ADC. This technique compensates for gain variation in the residue amplifier and thereby allows an open-loop residue-amplifier topology. Although the proposed calibration idea can be extended to multistage topologies, this design was limited to two stages. Further, the ADC exploits a high-performance double-switching front-end sample-and-hold amplifier (SHA). The proposed double-switching SHA architecture provides exceptional hold-mode isolation, so the SHA maintains the desired linearity performance over the entire Nyquist bandwidth.
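A behavioral sketch of the two-step conversion arithmetic, assuming an ideal interstage gain and an illustrative 5-bit/4-bit split; the actual bit partition and gain of the thesis design are not specified here:

```python
def two_step_adc(vin, vref=1.0, coarse_bits=5, fine_bits=4):
    """Behavioral model of a two-step ADC (illustrative bit split, not the thesis design).

    Stage 1 resolves the coarse bits; the residue is amplified and quantized by
    stage 2. An open-loop residue amplifier makes the interstage gain inexact,
    which is why the thesis calibrates the stage-2 reference in the background
    instead of relying on a precise closed-loop gain.
    """
    # Stage 1: coarse quantization and DAC reconstruction
    lsb1 = vref / 2**coarse_bits
    coarse = min(int(vin / lsb1), 2**coarse_bits - 1)
    residue = vin - coarse * lsb1

    # Interstage amplification (an open-loop amplifier would add gain error here)
    amplified = residue * 2**coarse_bits

    # Stage 2: fine quantization of the amplified residue against vref
    lsb2 = vref / 2**fine_bits
    fine = min(int(amplified / lsb2), 2**fine_bits - 1)

    return (coarse << fine_bits) | fine   # 9-bit output word

print(two_step_adc(0.62345))   # e.g. the code for a 0.623 V input with a 1 V reference
```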
54

Advanced middleware support for distributed data-intensive applications

Du, Wei 12 September 2005 (has links)
No description available.
55

An Investigation of Methods to Improve Area and Performance of Hardware Implementations of a Lattice Based Cryptosystem

Beckwith, Luke Parkhurst 05 November 2020 (has links)
With continuing research into quantum computing, current public-key cryptographic algorithms such as RSA and ECC will become insecure. These algorithms are based on the difficulty of integer factorization or discrete logarithm problems, which are hard to solve on classical computers but become easy on quantum computers. Because of this threat, government and industry are investigating new public-key standards based on mathematical assumptions that remain secure under quantum computing. This paper investigates methods of improving the area and performance of one of the proposed key-exchange algorithms, "NewHope." We describe a pipelined FPGA implementation of NewHope512cpa which dramatically increases throughput for a similar design area. Our pipelined encryption implementation achieves 652.2 Mbps and a 0.088 Mbps/LUT throughput-to-area (TPA) ratio, which are the best known results to date, and achieves an energy efficiency of 0.94 nJ/bit. This represents TPA and energy-efficiency improvements of 10.05× and 8.58×, respectively, over a non-pipelined approach. Additionally, we investigate replacing the large SHAKE XOF (hash) function with a lightweight Trivium-based PRNG, which reduces the area by 32% and improves energy efficiency by 30% for the pipelined encryption implementation, and which could be considered for future cipher specifications. / Master of Science / Cryptography is prevalent in almost every aspect of our lives. It is used to protect communication, banking information, and online transactions. Current cryptographic protections are built upon public-key encryption, which allows two people who have never communicated before to set up a secure communication channel. However, due to the nature of current cryptographic algorithms, the development of quantum computers will make it possible to break the algorithms that secure our communications. Because of this threat, new algorithms based on principles that stand up to quantum computing are being investigated to find suitable alternatives for securing our systems. These algorithms will need to be efficient to keep up with the demands of the ever-growing internet. This paper investigates four hardware implementations of a proposed quantum-secure algorithm to explore ways to make designs more efficient. The improvements are valuable for high-throughput applications, such as a server that must handle a large number of connections at once.
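Back-of-the-envelope arithmetic relating the reported figures; the derived LUT count and power are implied by the quoted metrics rather than values stated in the abstract:

```python
# Relations between the reported NewHope512cpa encryption figures.
throughput_mbps = 652.2        # reported throughput
tpa_mbps_per_lut = 0.088       # reported throughput-to-area ratio
energy_nj_per_bit = 0.94       # reported energy efficiency

luts = throughput_mbps / tpa_mbps_per_lut                     # ~7,400 LUTs of design area
power_w = energy_nj_per_bit * 1e-9 * throughput_mbps * 1e6    # ~0.61 W implied power

print(f"implied area: {luts:.0f} LUTs, implied power: {power_w:.2f} W")
```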
56

Design of low OSR, high precision analog-to-digital converters

Rajaee, Omid 30 December 2010 (has links)
Advances in electronic systems have led to demand for high-resolution, high-bandwidth analog-to-digital converters (ADCs). Oversampled ADCs are well known for high-accuracy applications since they benefit from noise shaping and usually do not require highly accurate components. However, as a consequence of oversampling, they have limited signal bandwidth. The signal bandwidth (BW) of oversampled ADCs can be increased either by increasing the sampling rate or by reducing the oversampling ratio (OSR). Reducing the OSR is the more promising route, since the sampling speed is usually limited by the technology. The advantageous properties of oversampled ADCs (e.g., low in-band quantization noise and relaxed accuracy requirements on components) are usually diminished at lower OSRs, and preserving them requires complicated and power-hungry architectures. In this thesis, different combinations of delta-sigma and pipelined ADCs are explored and new techniques for designing oversampled ADCs are proposed. A Hybrid Delta-Sigma/Pipelined (HDSP) ADC is presented. This ADC uses a pipelined ADC as the quantizer of a single-loop delta-sigma modulator and benefits from the aggressive quantization of the pipelined quantizer at low OSRs. A Noise-Shaped Pipelined ADC is proposed which exploits a delta-sigma modulator as the sub-ADC of a pipeline stage to reduce sensitivity to analog imperfections. Three prototype ADCs were fabricated in 0.18 μm CMOS technology to verify the effectiveness of the proposed techniques. The performance of these architectures is among the best reported for high-bandwidth oversampled ADCs. / Graduation date: 2011
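A generic first-order delta-sigma loop, for orientation only; the thesis architectures replace the coarse internal quantizer with a pipelined ADC, which this minimal sketch does not model:

```python
import numpy as np

def first_order_dsm(x, quantizer_bits=4):
    """Minimal first-order delta-sigma loop (generic illustration, not the HDSP design).

    The input minus the fed-back quantized output drives an integrator, so the
    quantizer error is first-order high-pass shaped out of the signal band. Using
    a more aggressive (multi-bit or pipelined) internal quantizer is what lets a
    low OSR still leave little in-band quantization noise.
    """
    levels = 2**quantizer_bits
    integ, out = 0.0, np.zeros_like(x)
    for n, xn in enumerate(x):
        integ += xn - (out[n - 1] if n else 0.0)                 # loop filter (integrator)
        out[n] = np.round(integ * (levels / 2)) / (levels / 2)   # coarse quantizer
    return out

# A low-frequency tone, heavily oversampled relative to the signal band.
t = np.arange(4096)
y = first_order_dsm(0.5 * np.sin(2 * np.pi * t / 256))
```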
57

Low power design techniques for high speed pipelined ADCs

Lingam, Naga Sasidhar 12 January 2009 (has links)
The real world is analog, but signal processing is best done in the digital domain, so the need for analog-to-digital converters (ADCs) keeps rising as more applications emerge. With the advent of mobile technology, the power consumed by electronic equipment is being driven down to extend battery life. Because of their ubiquitous presence in the signal chain, ADCs are prime blocks in which to reduce power. In this thesis, four techniques to reduce power in high-speed pipelined ADCs are proposed. The first is a capacitor- and opamp-sharing technique that reduces the load on the first-stage opamp threefold. The second is a capacitor-reset technique that allows the sample-and-hold block to be removed, saving its power. The third is a modified MDAC that accepts a rail-to-rail input swing to resolve an extra bit, eliminating a power-hungry opamp. The fourth is a hybrid architecture that uses an asynchronous SAR ADC as the back end of a pipelined ADC to save power. Measurement and simulation results that demonstrate the efficiency of the proposed techniques are presented. / Graduation date: 2009
58

A study on comparator and offset calibration techniques in high speed Nyquist ADCs

Chan, Chi Hang January 2011 (has links)
University of Macau / Faculty of Science and Technology / Department of Electrical and Electronics Engineering
59

Design of Pipelined Analog-to-Digital Converter with SI Technique in 65 nm CMOS Technology

Rajendran, Dinesh Babu January 2011 (has links)
Analog-to-digital converters (ADCs) play an important role in mixed-signal processing systems, serving as the interface between analog and digital signal processing. In the last two decades, circuits implemented with current-mode techniques have drawn much interest for sensory systems and integrated circuits. Current-mode circuits have several important advantages, such as low-voltage operation, high speed, and wide dynamic range, and find wide application in low-voltage, high-speed mixed-signal processing systems. In this thesis work, a 9-bit pipelined ADC with the switched-current (SI) technique is designed and implemented in 65 nm CMOS technology. The main focus of the work is to implement the pipelined ADC in the SI technique and to optimize it for low power. The ADC has a stage resolution of 3 bits. The proposed architecture combines a differential sample-and-hold amplifier, a current comparator, a binary-to-thermometer decoder, a differential current-steering digital-to-analog converter, delay logic, and a digital error correction block. The circuits are implemented at the transistor level in 65 nm CMOS technology, and the static and dynamic performance metrics of the pipelined ADC are evaluated. The simulations are carried out with Cadence Virtuoso Spectre Circuit Simulator 5.10, and Matlab is used to determine the ADC performance metrics.
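A hypothetical sketch of the overlap-and-add style of digital error correction mentioned above; the stage count, overlap, and code values are illustrative and not taken from the thesis design:

```python
def digital_error_correction(stage_codes, stage_bits=3, overlap=1):
    """Overlap-and-add alignment of per-stage codes (generic sketch, parameters illustrative).

    Each pipeline stage resolves `stage_bits` raw bits; successive stages overlap
    by `overlap` bit(s) so that comparator offsets in one stage can be absorbed by
    the next. The aligned codes are shifted by the effective bits per stage and
    summed to form the output word.
    """
    effective = stage_bits - overlap
    word = 0
    for i, code in enumerate(stage_codes):
        shift = effective * (len(stage_codes) - 1 - i)
        word += code << shift
    return word

# Three 3-bit stage codes with 1-bit overlap, combined into one output word.
print(digital_error_correction([5, 3, 6]))
```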
60

Energy Efficient Techniques For Algorithmic Analog-To-Digital Converters

Hai, Noman January 2011 (has links)
Analog-to-digital converters (ADCs) are key building blocks in state-of-the-art image, capacitive, and biomedical sensing applications. In these sensing applications, algorithmic ADCs are the preferred choice due to their high resolution and low area. Algorithmic ADCs are based on the same operating principle as pipelined ADCs; however, unlike pipelined ADCs, where the residue is transferred to the next stage, an N-bit algorithmic ADC reuses the same hardware N times, once for each bit of resolution. Due to this cyclic nature, many of the low-power techniques applicable to pipelined ADCs cannot be applied directly to algorithmic ADCs. Consequently, compared to pipelined ADCs, traditional implementations of algorithmic ADCs are power inefficient. This thesis presents two novel energy-efficient techniques for algorithmic ADCs. The first technique modifies the capacitor arrangement of the conventional flip-around configuration and the amplifier-sharing technique, resulting in a low-power and low-area design. The other technique is based on the unit multiplying-digital-to-analog-converter approach and exploits the power-saving advantages of the capacitor-sharing and capacitor-scaling techniques. It is shown that, compared to conventional techniques, the proposed techniques reduce the power consumption of algorithmic ADCs by more than 85%. To verify the effectiveness of these approaches, two prototype chips, a 10-bit 5 MS/s ADC and a 12-bit 10 MS/s ADC, were implemented in a 130-nm CMOS process. Detailed design considerations are discussed, as well as simulation and measurement results. According to the simulation results, both designs achieve figures of merit of approximately 60 fJ/step, making them some of the most power-efficient ADCs to date.
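A behavioral sketch of the cyclic reuse that distinguishes algorithmic ADCs from pipelined ones, followed by the figure-of-merit arithmetic; the power value used is an assumption chosen only to illustrate how a figure near 60 fJ/step arises, and nominal resolution stands in for ENOB:

```python
def algorithmic_adc(vin, vref=1.0, n_bits=10):
    """Behavioral cyclic (algorithmic) conversion: the same compare-and-double
    stage is reused once per bit, which keeps the hardware small but means many
    pipelined-ADC power tricks (per-stage scaling, stage-to-stage sharing) do not
    apply directly. Illustrative 1-bit-per-cycle model, not the prototype circuit.
    """
    residue, code = vin, 0
    for _ in range(n_bits):
        bit = 1 if residue >= vref / 2 else 0
        code = (code << 1) | bit
        residue = 2 * (residue - bit * vref / 2)   # residue amplification by 2
    return code

# Figure of merit implied by a 10-bit, 5 MS/s converter burning ~0.3 mW
# (power assumed for illustration; the thesis reports roughly 60 fJ/step).
fom_fj = 0.3e-3 / (2**10 * 5e6) * 1e15
print(algorithmic_adc(0.6318), f"FoM ~ {fom_fj:.0f} fJ/step")
```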
