121 |
Sistemas de banda ultralarga com pré-processamento. / Ultra-wideband systems with pre-processing. Angélico, Bruno Augusto 29 June 2010 (has links)
The channel impulse response of a typical ultra-wideband system is characterized by a large number of resolvable paths. For an efficient reception, the energy spread over the multipath components has to be somehow combined. Considering the downlink of a wireless personal area network, the access point is assumed to have greater processing capacity than the portable devices connected to it, such as digital cameras, cell phones and MP3 players. This work focuses on pre-processing schemes for single- and multiuser environments that efficiently combine the energy spread over the multipath components of the channel and thereby combat self-interference and multiuser interference, without substantially increasing the computational cost at the receiver (the network's portable devices). Most of the complexity is thus transferred to the transmitter (access point), so that the receiver needs only a conventional detector, or a conventional detector followed by moderate-complexity processing to mitigate the residual interference.
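A minimal sketch of one transmitter-side pre-processing scheme in the spirit of this abstract: a time-reversal (pre-RAKE) prefilter that convolves the symbols with the time-reversed channel impulse response so that multipath energy focuses at the receiver. The specific scheme, channel model, and parameters are illustrative assumptions, not the thesis's actual algorithms.

```python
import numpy as np

rng = np.random.default_rng(3)
L = 20                                    # resolvable multipath taps
h = rng.standard_normal(L) * np.exp(-0.2 * np.arange(L))  # decaying CIR

symbols = rng.choice([-1.0, 1.0], size=100)               # BPSK symbols
prefilter = h[::-1] / np.linalg.norm(h)   # time-reversed, unit-energy filter

tx = np.convolve(symbols, prefilter)      # pre-processing at the access point
rx = np.convolve(tx, h)                   # propagation through the channel

# The effective channel conv(prefilter, h) peaks at delay L-1 (the channel
# autocorrelation at zero lag), so a simple sign detector sampling at that
# delay recovers most symbols despite residual self-interference.
peak = L - 1
detected = np.sign(rx[peak:peak + len(symbols)])
print("symbol accuracy:", (detected == symbols).mean())
```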
|
122 |
Real-Time Beamforming Algorithms for the Focal L-Band Array on the Green Bank Telescope. Ruzindana, Mark William 01 December 2017 (has links)
A phased array feed (PAF) provides a contiguous, electronically synthesized wide field of view for large-dish astronomical observatories. Significant progress has been made in recent years in improving the sensitivity of PAF receivers through optimizing the design of the antenna array, cryogenic cooling of the front end, and implementation of real-time correlation and beamforming in digital signal processing. FLAG is a 19-element dual-polarized phased array with cryogenic LNAs, direct digitization of RF signals at the front end, digital signal transport over fiber, and a real-time signal processing back end with up to 150 MHz bandwidth. The digital back end includes multiple processing modes, including real-time beamforming, real-time correlation, and a separate real-time beamformer for commensal radio transient searches. Following a polyphase filterbank operation performed in field programmable gate arrays (FPGAs), beamforming, correlation, and integration are implemented on graphics processing units (GPUs) that perform parallelized operations. Parallelization greatly increases processing speed and allows for real-time signal processing. During recent test/commissioning observations with FLAG, a Tsys/efficiency of approximately 28 K was measured across the PAF field of view and operating bandwidth, corresponding to a system temperature below 20 K. To demonstrate the astronomical capability of the receiver, a pulsar (PSR B1937+21) was detected with the real-time beamformer. This thesis provides details on the development of the FLAG digital back end and the real-time beamformer, and reports on the commissioning tests of the FLAG PAF receiver developed by the National Radio Astronomy Observatory (NRAO), Green Bank Observatory (GBO), West Virginia University (WVU), and Brigham Young University for the Green Bank Telescope (GBT).
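As an illustration of the beamforming stage described above, the sketch below forms the per-channel weighted sum over the 38 element signals (19 dual-polarized elements) as a batched matrix-vector product; array dimensions, data, and weights are placeholders, not FLAG's actual configuration.

```python
import numpy as np

n_el, n_chan, n_time = 38, 64, 256        # elements, coarse channels, samples
rng = np.random.default_rng(1)

# Complex element voltages out of the polyphase filterbank (placeholder data).
x = (rng.standard_normal((n_chan, n_el, n_time))
     + 1j * rng.standard_normal((n_chan, n_el, n_time)))
# Per-channel beamforming weights (in practice derived from calibration).
w = (rng.standard_normal((n_chan, n_el))
     + 1j * rng.standard_normal((n_chan, n_el)))

# Beamformed voltage y[f, t] = w[f]^H x[f, :, t]: a batched matrix-vector
# product, which is why the operation parallelizes well on GPUs.
y = np.einsum('fe,fet->ft', w.conj(), x)

power = (np.abs(y) ** 2).mean(axis=1)     # integrated power per channel
print(power.shape)                        # (64,)
```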
|
123 |
A DETECTION AND DATA ACQUISITION SYSTEM FOR PRECISION BETA DECAY SPECTROSCOPY. Jezghani, Aaron P. 01 January 2019 (has links)
Free neutron and nuclear beta decay spectroscopy serves as a robust laboratory for investigations of the Standard Model of Particle Physics. Observables such as decay product angular correlations and energy spectra overconstrain the Standard Model and serve as a sensitive probe for Beyond the Standard Model physics. Improved measurement of these quantities is necessary to complement the TeV scale physics being conducted at the Large Hadron Collider. The UCNB, 45Ca, and Nab experiments aim to improve upon existing measurements of free neutron decay angular correlations and set new limits in the search for exotic couplings in beta decay. To achieve these experimental goals, a highly pixelated, thick silicon detector with a 100 nm entrance window has been developed for precision beta spectroscopy and the direct detection of 30 keV beta decay protons. The detector has been characterized for its performance in energy reconstruction and particle arrival time determination. A Monte Carlo simulation of signal formation in the silicon detector and propagation through the electronics chain has been written to develop optimal signal analysis algorithms for minimally biased energy and timing extraction. A tagged-electron timing test has been proposed and investigated as a means to assess the validity of these Monte Carlo efforts.
A universal platform for data acquisition (DAQ) has been designed and implemented in National Instruments' PXIe-5171R digitizer/FPGA hardware. The DAQ retains a ring buffer of the most recent 400 ms of data in all 256 channels, so that a waveform trace can be returned from any combination of pixels, at any resolution, for complete energy reconstruction. Low-threshold triggers on individual channels were implemented in the FPGA as a generic piecewise-polynomial filter for universal, real-time digital signal processing, which allows for arbitrary filter implementation on a pixel-by-pixel basis. The system is universal in the sense that it provides completely flexible, complex, and debuggable triggering at both the pixel and global level without recompiling the firmware. The culmination of this work is a system capable of a 10 keV trigger threshold, 3 keV resolution, and a maximum arrival-time systematic of 300 ps, even in the presence of large-amplitude noise components.
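As a hedged illustration of the trigger filtering described above, the sketch below applies a trapezoidal (moving-average difference) shaper, one common instance of a piecewise-polynomial FIR, to a noisy step pulse; the parameters are illustrative, not the experiment's settings.

```python
import numpy as np

def trapezoidal_filter(x, rise=32, gap=16):
    """Difference of two delayed moving averages; the output plateaus near
    the pulse amplitude while averaging down high-frequency noise."""
    kernel = np.concatenate([
        np.ones(rise) / rise,     # leading average
        np.zeros(gap),            # flat-top gap
        -np.ones(rise) / rise,    # trailing average
    ])
    return np.convolve(x, kernel, mode='same')

rng = np.random.default_rng(7)
trace = np.zeros(2048)
trace[1000:] = 1.0                          # step from a detector pulse
trace += 0.2 * rng.standard_normal(2048)    # large-amplitude noise

shaped = trapezoidal_filter(trace)
threshold = 0.5
trigger_idx = int(np.argmax(shaped > threshold))  # first sample over threshold
print(trigger_idx)
```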
|
124 |
Improving Accuracy in Logarithmic Multiplication using Operand Decomposition. Venkataraman, Mahalingam 28 March 2005 (has links)
Arithmetic operations such as multiplication and division in the binary number system are computationally complex in terms of area, delay and power. Logarithmic Number Systems (LNS) offer a viable alternative, combining the simplicity of fixed-point number systems and the precision of floating-point number systems. However, computations in LNS incur some loss of accuracy and are thus mostly limited to signal processing applications, wherein a certain amount of error is tolerable. In LNS, the cost of computation can be traded off against the level of accuracy needed. Mitchell's algorithm [Mitchell] is a simple approach commonly used for logarithmic multiplication. The method involves a high error margin due to a piecewise straight-line approximation of the logarithm curve. Thus, several methods have been proposed in the literature for improving the accuracy of Mitchell's algorithm.
In this thesis, we propose a new method for improving the accuracy of Mitchell's logarithmic multiplication using operand decomposition. The operand decomposition process decreases the number of bits with the value of '1' in the multiplicands and reduces the amount of approximation. The proposed method brings down the average error percentage of Mitchell's logarithmic multiplication by around 45%. It can be combined with previous methods to further improve the accuracy. Experimental results are presented to show that both the error range and the average error percentage can be significantly improved by using operand decomposition.
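A minimal sketch of the ideas above: Mitchell's piecewise-linear log/antilog multiplication and the operand-decomposition identity A×B = (A|B)(A&B) + (A&~B)(~A&B), which reduces the number of '1' bits in each partial multiplication. The bit widths and error experiment are illustrative, not the thesis's exact evaluation.

```python
import random

def mitchell_mul(a: int, b: int) -> int:
    """Approximate a*b via Mitchell's piecewise-linear log/antilog."""
    if a == 0 or b == 0:
        return 0
    ka, kb = a.bit_length() - 1, b.bit_length() - 1   # characteristics
    xa = a / (1 << ka) - 1.0                          # mantissas in [0, 1)
    xb = b / (1 << kb) - 1.0
    log_sum = ka + kb + xa + xb                       # approx. log2(a)+log2(b)
    k = int(log_sum)                                  # antilog: 2^k * (1+frac)
    return round((1 << k) * (1.0 + (log_sum - k)))

def od_mitchell_mul(a: int, b: int) -> int:
    """Operand decomposition: a*b = (a|b)*(a&b) + (a&~b)*(~a&b),
    with each partial product computed by Mitchell's method."""
    mask = (1 << max(a.bit_length(), b.bit_length())) - 1
    return (mitchell_mul(a | b, a & b)
            + mitchell_mul(a & ~b & mask, ~a & b & mask))

# Compare average relative error over random 8-bit operands.
errs = {"mitchell": 0.0, "od": 0.0}
trials = 10000
for _ in range(trials):
    a, b = random.randint(1, 255), random.randint(1, 255)
    exact = a * b
    errs["mitchell"] += abs(mitchell_mul(a, b) - exact) / exact
    errs["od"] += abs(od_mitchell_mul(a, b) - exact) / exact
for name, e in errs.items():
    print(f"{name}: mean relative error {100 * e / trials:.2f}%")
```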
|
125 |
A resampling theory for non-bandlimited signals and its applications : a thesis presented for the partial fulfillment of the requirements for the degree of Doctor of Philosophy in Engineering at Massey University, Wellington, New Zealand. Huang, Beilei January 2008 (has links)
Currently, digital signal processing systems typically assume that the signals are bandlimited. This is due to our knowledge being based on the uniform sampling theorem for bandlimited signals, established over 50 years ago by the works of Whittaker, Kotel'nikov and Shannon. However, in practice digital signals are mostly of finite length, and such signals are not strictly bandlimited. Furthermore, advances in electronics have led to the use of very wide bandwidth signals and systems, such as Ultra-Wide Band (UWB) communication systems with signal bandwidths of several gigahertz. Such signals can effectively be viewed as having infinite bandwidth. Thus there is a need to extend existing theory and techniques from signals of finite bandwidth to non-bandlimited signals. Two recent approaches to a more general sampling theory for non-bandlimited signals have been published. One is for signals with a finite rate of innovation. The other introduced the concept of consistent sampling, which views sampling and reconstruction as projections of signals onto subspaces spanned by the sampling (acquisition) and reconstruction (synthesis) functions. Consistent sampling is achieved if the same discrete signal is obtained when the reconstructed continuous signal is sampled. However, it has been shown that when this generalized theory is applied to the de-interlacing of video signals, incorrect results are obtained. This is because de-interlacing is essentially a resampling problem rather than a sampling problem: both the input and the output are discrete. While the theory of resampling for bandlimited signals is well established, the problem of resampling without bandlimiting constraints is largely unexplored. The aim of this thesis is to develop a resampling theory for non-bandlimited discrete signals and explore some of its potential applications. The first major contribution is the theory and techniques for designing an optimal resampling system for signals in a general Hilbert space when noise is not present. The system is optimal in the sense that the input of the system can always be obtained from the output. The theory is based on the concept of consistent resampling, which means that the same continuous signal is obtained when either the original or the resampled discrete signal is presented to the reconstruction filter. While comparing the input and output of a sampling/reconstruction system is relatively simple, since both are continuous signals, comparing the discrete input and output of a resampling system is not. The second major contribution of this thesis is the proposal of a metric that allows us to evaluate the performance of a resampling system; the performance is analyzed in the Fourier domain as well. This metric also provides a way by which different resampling algorithms can be compared effectively, and it therefore facilitates the process of choosing a proper resampling scheme for a particular purpose. Unfortunately, consistent resampling cannot always be achieved if noise is present in the signal or the system. Based on the proposed performance metric, the third major contribution of this thesis is the development of procedures for designing resampling systems in the presence of noise that are optimal in the mean squared error (MSE) sense. Both discrete and continuous noise are considered. The problem is formulated as a semi-definite program which can be solved efficiently by existing techniques.
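A finite-dimensional sketch of the consistent-sampling idea described above, assuming signals live in R^n with sampling and reconstruction modeled as matrices; this illustrates the principle only, not the thesis's resampling construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 16
S = rng.standard_normal((m, n))      # rows: sampling (acquisition) functions
Phi = rng.standard_normal((n, m))    # columns: reconstruction (synthesis) functions

x = rng.standard_normal(n)           # arbitrary signal, not bandlimited
y = S @ x                            # measured samples

# Consistency: pick coefficients c so that sampling the reconstruction
# reproduces the samples, S @ (Phi @ c) = y, i.e. c = (S Phi)^(-1) y.
c = np.linalg.solve(S @ Phi, y)
x_hat = Phi @ c                      # consistent reconstruction

assert np.allclose(S @ x_hat, y)     # sampling the reconstruction returns y
```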
The usefulness and correctness of the consistent resampling theory is demonstrated by its application to the video de-interlacing problem, image processing, the demodulation of ultra-wideband communication signals and mobile channel detection. The results show that the proposed resampling system has many advantages over existing approaches, including lower computational and time complexities, more accurate prediction of system performance, and robustness against noise.
|
126 |
Design and Implementation of a DMA Controller for Digital Signal Processor. Jiang, Guoyou January 2010 (has links)
The thesis work was conducted in the Division of Computer Engineering at the Department of Electrical Engineering, Linköping University. During the thesis work, a configurable Direct Memory Access (DMA) controller was designed and implemented. The DMA controller runs at 200 MHz in a 65 nm digital CMOS technology. The estimated gate count is 26595. The DMA controller has two address generators and can provide two clock sources; it can thus handle data reads and writes simultaneously. There are 16 channels built into the DMA controller, and the data width can be 16, 32 or 64 bits. The DMA controller supports 2D data access by configuring its intelligent linking table. The DMA is designed for advanced DSP applications and is not dedicated to a cache, which has a fixed priority.
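As a rough illustration of how a linking table can express 2D data access, the sketch below simulates a descriptor-chained strided copy in software; the field names and layout are hypothetical, not the thesis's register design.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DmaDescriptor:
    src: int            # source start index
    dst: int            # destination start index
    line_len: int       # elements per row (inner dimension)
    num_lines: int      # number of rows (outer dimension)
    src_stride: int     # elements between consecutive source rows
    dst_stride: int     # elements between consecutive destination rows
    next: Optional["DmaDescriptor"] = None   # linking-table pointer

def run_channel(desc: Optional[DmaDescriptor], mem: list) -> None:
    """Walk the descriptor chain, copying one 2D block per descriptor."""
    while desc is not None:
        for row in range(desc.num_lines):
            s = desc.src + row * desc.src_stride
            d = desc.dst + row * desc.dst_stride
            mem[d:d + desc.line_len] = mem[s:s + desc.line_len]
        desc = desc.next

# Example: copy a 4x4 tile out of an 8-wide row-major 2D array into a
# contiguous buffer starting at index 64.
mem = list(range(64)) + [0] * 16
run_channel(DmaDescriptor(src=0, dst=64, line_len=4,
                          num_lines=4, src_stride=8, dst_stride=4), mem)
print(mem[64:80])
```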
|
127 |
Aural Regeneration. Pronchuk, Myrna Lee 06 May 2012 (has links)
The aim of this thesis is to survey the abstraction of the human experience, blurring the confines between form and expression, sound and visual, experience and imitation. In establishing multiple levels of communication, I began by gathering discarded found objects, which I repurposed by building hybrid musical sculptures. The act of mark making mapped out systems and direction, and escalated into a form of hybrid musical notation. Both forms of hybrids informed each other throughout the development process. When the hybrid instruments and notation were placed in an environment together with the elements of Digital Signal Processing (DSP), a natural progression toward performance was created. The objects required interaction: to be hit, tapped, bowed and plucked, with their sounds processed through DSP and projected back to the audience, who participated in creating interactivity. In producing mechanical musical instruments, along with mark making, installation and experimental sound recordings, a platform is established that allows a dialogue between audio and visual elements and the human experience.
|
128 |
Acceleration and Integration of Sound Decoding in FPGA / Accelerering och integrering av ljudavkodning i FPGA. Holmér, Johan; Eriksson, Jesper January 2011 (has links)
The task has been to develop a network media renderer on an embedded Linux system running on a Spartan-6 FPGA. One of the challenges has been to make the best use of the limited FPGA area. MP3 has been the prioritised format. To achieve fast MP3 decoding, a MicroBlaze soft processor has been configured for speed with regard to the small area available. The software MP3 decoding process has also been accelerated with hardware. MP3 files at full quality (320 kbit/s) can be decoded within real-time requirements. Sound interface hardware has been designed to handle the decoded sound samples and convert them to the S/PDIF standard interface. UPnP commands have also been implemented in the MP3 player software to complete the renderer's network functionality.
|
129 |
Benchmarking of Sleipnir DSP Processor, ePUMA Platform. Murugesan, Somasekar January 2011 (has links)
Choosing the right processor for an embedded application, or designing a new processor, requires knowing how it stacks up against the competition, and selling a processor requires credible communication of its performance to customers; benchmarking a processor is therefore very important. Benchmarks are recognized worldwide by processor vendors and customers alike as the fact-based way to evaluate and communicate embedded processor performance. In this thesis, the benchmarking of the ePUMA multiprocessor developed by the Division of Computer Engineering, ISY, Linköping University, Sweden is described in detail. A number of typical digital signal processing algorithms were chosen as benchmarks. These benchmarks have been implemented in assembly code, with their performance measured in terms of clock cycles and root mean square error when compared with results computed in double precision. The ePUMA multiprocessor platform, which comprises the Sleipnir DSP processor and the Senior DSP processor, was used to implement the DSP algorithms. Matlab's built-in models were used as references against which the assembly implementations were compared to derive the root mean square error values of the different algorithms. The execution time for the different DSP algorithms ranged from 51 to 6148 clock cycles, and the root mean square error values varied between 0.0003 and 0.11.
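A minimal sketch of the benchmark metric described above: run a DSP kernel at reduced precision and report the root mean square error against a double-precision reference. Here numpy's FFT stands in for the Matlab reference models, and only the input is quantized, so the setup is illustrative rather than a model of Sleipnir's arithmetic.

```python
import numpy as np

def to_fixed(x, frac_bits=15):
    """Quantize to a signed fixed-point grid (e.g. Q1.15)."""
    scale = 1 << frac_bits
    return np.round(x * scale) / scale

rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, 64)

ref = np.fft.fft(x)               # double-precision reference ("Matlab model")
approx = np.fft.fft(to_fixed(x))  # same kernel on quantized input

rmse = np.sqrt(np.mean(np.abs(approx - ref) ** 2))
print(f"RMSE vs double precision: {rmse:.2e}")
```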
|
130 |
A Color Filter Array Interpolation Method Based on Sampling Theory. Glotzbach, John William 26 August 2004 (has links)
Digital cameras use a single image sensor array with a color filter array (CFA) to measure a color image. Instead of measuring a red, green, and blue value at every pixel, these cameras have a filter built onto each pixel so that only one portion of the visible spectrum is measured. To generate a full-color image, the camera must estimate the missing two values at every pixel. This process is known as color filter array interpolation.
The Bayer CFA pattern samples the green image on half of the pixels of the imaging sensor on a quincunx grid. The other half of the pixels measure the red and blue images equally on interleaved rectangular sampling grids.
This thesis analyzes this problem with sampling theory. The red and blue images are sampled at half the rate of the green image and therefore have a higher probability of aliasing in the output image. This is apparent when simple interpolation algorithms like bilinear interpolation are used for CFA interpolation.
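A minimal sketch of the bilinear baseline mentioned above, assuming an RGGB Bayer layout; the convolution masks are the standard bilinear ones, and this is the simple algorithm whose aliasing artifacts motivate the thesis.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(cfa: np.ndarray) -> np.ndarray:
    """cfa: 2-D mosaic sampled with an RGGB Bayer pattern."""
    h, w = cfa.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1   # red on even rows/cols
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1   # blue on odd rows/cols
    g_mask = 1 - r_mask - b_mask                        # quincunx green grid

    # Bilinear kernels: average the 4 nearest samples of each color plane.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    out = np.empty((h, w, 3))
    out[..., 0] = convolve(cfa * r_mask, k_rb, mode='mirror')
    out[..., 1] = convolve(cfa * g_mask, k_g, mode='mirror')
    out[..., 2] = convolve(cfa * b_mask, k_rb, mode='mirror')
    return out

# Example: demosaic a synthetic 8x8 mosaic.
rgb = bilinear_demosaic(np.random.default_rng(5).uniform(0, 1, (8, 8)))
print(rgb.shape)   # (8, 8, 3)
```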
Two reference algorithms, a projections onto convex sets (POCS) algorithm and an edge-directed algorithm by Adams and Hamilton (AH), are studied. Both algorithms address aliasing in the green image. Because of the high correlation among the red, green, and blue images, information from the red and blue images can be used to better interpolate the green image. The reference algorithms are studied to learn how this information is used. This leads to two new interpolation algorithms for the green image.
The red and blue interpolation algorithm of AH is also studied to determine how the inter-image correlation is used when interpolating these images. This study shows that because the green image is sampled at a higher rate, it retains much of the high-frequency information in the original image. This information is used to estimate aliasing in the red and blue images. We present a general algorithm based on the AH algorithm to interpolate the red and blue images. This algorithm is able to provide results that are, on average, better than both reference algorithms, POCS and AH.
|