71

High level embedded DSA system exploration and design

McAllister, J. P. January 2004 (has links)
No description available.
72

DSP architectural synthesis tools for FPGAs

Yi, Ying January 2004 (has links)
No description available.
73

Analysis-by-synthesis coding of narrowband and wideband speech at medium bit rates

Black, Alastair William January 1997 (has links)
The last few years have seen a rapid expansion in the development of efficient speech compression algorithms, fuelled primarily by the proliferation of digital mobile communication systems. Low bit rate speech coding algorithms estimate, quantise and efficiently encode the parameters of a speech production model derived from the original speech waveform. The most popular of these models is based on the technique of Linear Prediction, which has given rise to a class of speech coding algorithms known as Analysis-by-Synthesis Linear Prediction Coding (AbS-LPC). In an AbS-LPC coding system, a closed-loop optimisation procedure is used to determine the excitation signal for the Linear Prediction filter. This methodology has been the foundation of many algorithms operating at medium to low bit rates. In particular, the Codebook Excited Linear Prediction (CELP) algorithm has received much attention in recent years, culminating in numerous standards based on this principle. CELP achieves its coding efficiency and high quality by representing the excitation signal as a vector. However, in the original implementation of this algorithm the excitation search was very computationally intensive owing to the structure of the codebook. To reduce this computational complexity and improve the quality of the synthetic speech, this thesis explores various structures of secondary excitations based on sparsely populated pulsed vectors. A variable rate implementation of the CELP algorithm is also presented, in which techniques typically found in vocoders are used to classify the different types of speech accurately. These metrics are then used to vary the speech segment size and coding rate to exploit the differing regions of speech. Narrowband speech is band-limited to 300 Hz - 3.4 kHz and sampled at 8 kHz, whereas wideband speech spans 50 Hz - 7 kHz and is consequently sampled at the higher rate of 16 kHz. Wideband speech exhibits characteristics that are not normally embodied within the narrowband signal; it is these characteristics that contribute to its superior perceived quality, so it is imperative that a coding scheme preserve this information. This thesis formulates various strategies for coding wideband speech using the CELP structure, paying particular attention to preserving the information in the higher frequencies so that the overall quality of the synthetic signal is maintained. A low-delay variant of the wideband coder is also presented, in which the effects of backward LPC prediction over the full bandwidth of the signal are investigated. This results in a split-band architecture capable of producing high quality wideband speech.
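
The closed-loop excitation search at the heart of AbS-LPC coders can be sketched in a few lines of Python (a minimal illustration assuming a generic random codebook and a fixed all-pole LP filter; the sparse pulsed codebooks and the perceptual weighting used in practice, and in the thesis, are omitted):

    import numpy as np
    from scipy.signal import lfilter

    def celp_search(target, codebook, lpc):
        """Analysis-by-synthesis search: synthesise each candidate excitation
        through the LP filter 1/A(z), fit the optimal gain in the least-squares
        sense, and keep the candidate that minimises the synthesis error."""
        best_idx, best_gain, best_err = -1, 0.0, np.inf
        for i, code in enumerate(codebook):
            synth = lfilter([1.0], lpc, code)          # excitation through 1/A(z)
            energy = np.dot(synth, synth)
            if energy == 0.0:
                continue
            gain = np.dot(target, synth) / energy      # optimal gain for this vector
            err = np.sum((target - gain * synth) ** 2)
            if err < best_err:
                best_idx, best_gain, best_err = i, gain, err
        return best_idx, best_gain, best_err

    # Toy usage with random data standing in for real speech.
    rng = np.random.default_rng(0)
    lpc = np.concatenate(([1.0], -0.1 * rng.standard_normal(10)))  # A(z) coefficients
    codebook = rng.standard_normal((128, 40))                      # 128 candidate vectors
    target = lfilter([1.0], lpc, rng.standard_normal(40))
    print(celp_search(target, codebook, lpc))

The cost of this exhaustive loop is what motivates the sparse pulsed excitation structures explored in the thesis: vectors with only a few non-zero samples make each synthesis filtering step far cheaper.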
74

Characteristics of variation in production of normal and disordered fricatives using reduced-variance spectral methods

Blacklock, Oliver S. January 2004 (has links)
No description available.
75

Image segmentation using Markov random field models

Barker, Simon A. January 1998 (has links)
No description available.
76

Time frequency modelling

Coates, Mark J. January 1999 (has links)
No description available.
77

Analysis and applications of iterative decoding

Luo, Quiglin January 2005 (has links)
Iterative decoding provides a practical means of approaching the Shannon limit with acceptable complexity. By decoding in an iterative fashion, the decoding complexity is spread over time while overall optimality is still approached. Ever since the successful application of iterative decoding in turbo codes in 1993, researchers have sought to explain why iterative decoding works so well. This thesis presents a new, universal method for the analysis of iterative decoding based on cross-entropy. It is proved that the maximum a posteriori probability (MAP) decoding algorithm minimises the cross-entropy between the a priori and the extrinsic information subject to given coding constraints, and that, for a converging turbo decoder, the error-correcting ability of each decoding step can be evaluated with this cross-entropy. These theoretical results provide solid ground for analysing the convergence rate of turbo decoding, deriving Eb/N0 convergence thresholds, evaluating error performance in the "error floor" region, and designing asymmetric turbo codes. With the new method, thresholds for the convergence of turbo decoders can be predicted more tightly than with existing EXIT charts or the Gaussian approximation method. For performance evaluation in the "error floor" region, the new method provides more detailed information than bounding techniques while being much less time-consuming than direct BER simulations. An asymmetric turbo code designed with the guidance of the new method also exhibits more than 0.1 dB of gain over one guided by the classical bounding technique, in both the high and low BER regions. Unlike most conventional analysis methods, which rely on a Gaussian approximation of the distribution of the a priori/extrinsic information, full knowledge of the source bits, or both, the new method operates in a completely blind fashion. Since hybrid ARQ is expected to be the mainstream error-control technology in future high-speed wireless communication systems, this thesis also presents innovative applications of iterative decoding in bandwidth-efficient hybrid-ARQ systems. Multilevel coded modulation, which offers unequal error protection, high bandwidth efficiency and high flexibility, is employed to construct the hybrid-ARQ schemes. Multilevel HARQ schemes, including synchronous, asynchronous and adaptive multilevel HARQ, are proposed and analysed both theoretically and numerically. Significant gains are observed in comparison with conventional TCM HARQ schemes. Conventional Chase diversity combining, however, is found to apply only to the first two schemes; the dynamic packet structure of adaptive multilevel HARQ makes it inapplicable there, even though that scheme performs best in non-combining scenarios. As a solution, multistage iterative combining, based on the principles of iterative decoding, is proposed and verified with simulation results.
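
The role cross-entropy plays as a convergence measure can be illustrated with the classical cross-entropy stopping criterion attributed to Hagenauer et al., which approximates the cross-entropy between the information delivered by successive decoder iterations (a sketch only; the thesis develops its own, more general cross-entropy analysis, which is not reproduced here):

    import numpy as np

    def cross_entropy_metric(L_ext_prev, L_ext_curr, L_app):
        """Approximate cross-entropy T(i) between successive iterations:
        T(i) ~ sum_k |Le_i(k) - Le_{i-1}(k)|^2 * exp(-|L_i(k)|),
        where Le are extrinsic LLRs and L the a posteriori LLRs."""
        delta = np.abs(L_ext_curr - L_ext_prev) ** 2
        return float(np.sum(delta * np.exp(-np.abs(L_app))))

    # Typical use inside a turbo decoding loop; decoder_iteration is a
    # hypothetical placeholder for one full pass of the two SISO decoders:
    #
    # T1 = None
    # for i in range(max_iterations):
    #     L_ext_curr, L_app = decoder_iteration(received, L_ext_prev)
    #     T = cross_entropy_metric(L_ext_prev, L_ext_curr, L_app)
    #     T1 = T1 or T
    #     if T < 1e-3 * T1:    # a few orders of magnitude below T(1): converged
    #         break
    #     L_ext_prev = L_ext_curr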
78

Design and implementation of low-power CMOS analogue convolutional decoders using the modified feedback decoding algorithm

Tomatsopoulos, Billy Vasileios January 2007 (has links)
Convolutional decoders are very important in digital communication systems, especially in applications where very high noise levels are introduced on the information signal by wireless transmission. Examples of such systems are satellite communications, cellular telephony and digital audio broadcasting (DAB). The most commonly employed decoding method for convolutional codes has so far been the Viterbi algorithm (VA), mainly implemented in digital hardware to accommodate its large memory requirements. In this thesis an introduction to convolutional codes and basic digital communication systems is given, followed by an in-depth study of the VA and possible Viterbi decoder (VD) implementations. Advantages and limitations are identified in existing analogue VD designs. A recently proposed algorithm, known as the modified feedback decoding algorithm (MFDA), is then presented and clarified. The MFDA incorporates certain key features of the VA while requiring no digital memory, and therefore lends itself to an entirely analogue implementation. This in turn improves performance characteristics, effectively trading complexity and power dissipation against operating speed. The first ever realisation of the novel MFDA is also presented here. Firstly, extensive system-level simulations model the errors arising from the use of analogue circuits in practical convolutional decoders. Based on these results, a mixed-signal hard-decision modified feedback decoder (MFD) is then designed as a proof of principle, using the Austriamicrosystems (AMS) 0.6 µm CMOS technology. The fabricated chips have 100% yield, and measured results indicate a negligible loss in coding performance compared with a VD. This work paves the way for future miniaturised, ultra-low-power analogue convolutional decoders.
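
For reference, the VA that the MFDA approximates can be modelled in a few lines (a minimal hard-decision decoder for the common rate-1/2, constraint-length-3 code with octal generators (7, 5); an illustrative choice, not necessarily the code used on the fabricated chip):

    import numpy as np

    G = [0b111, 0b101]                      # generators (7, 5) octal, K = 3

    def encode(bits, state=0):
        """Rate-1/2 convolutional encoder; two output bits per input bit."""
        out = []
        for b in bits:
            reg = (b << 2) | state          # [newest bit | 2-bit state]
            out += [bin(reg & g).count("1") & 1 for g in G]
            state = reg >> 1
        return out

    def viterbi(received):
        """Hard-decision Viterbi decoding over the 4-state trellis."""
        INF = float("inf")
        metric = [0, INF, INF, INF]         # encoder starts in state 0
        paths = [[], [], [], []]
        for t in range(0, len(received), 2):
            r = received[t:t + 2]
            new_metric, new_paths = [INF] * 4, [None] * 4
            for s in range(4):
                if metric[s] == INF:
                    continue
                for b in (0, 1):            # extend each survivor by 0 and 1
                    reg = (b << 2) | s
                    out = [bin(reg & g).count("1") & 1 for g in G]
                    m = metric[s] + sum(o != x for o, x in zip(out, r))
                    ns = reg >> 1
                    if m < new_metric[ns]:  # keep the better path into each state
                        new_metric[ns], new_paths[ns] = m, paths[s] + [b]
            metric, paths = new_metric, new_paths
        return paths[int(np.argmin(metric))]

    bits = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]   # two tail zeros flush the encoder
    coded = encode(bits)
    coded[3] ^= 1                            # inject a single channel bit error
    print(viterbi(coded) == bits)            # True: the error is corrected

The survivor paths stored per state are exactly the memory that a digital VD must hold; avoiding that storage is what makes the MFDA attractive for an all-analogue realisation.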
79

Signal processing for magnetoencephalography

Clarke, Rupert Benjamin January 2010 (has links)
Magnetoencephalography (MEG) is a non-invasive technology for imaging human brain function. Contemporary methods of analysing MEG data include dipole fitting, minimum norm estimation (MNE) and beamforming. These are concerned with localising brain activity, but in isolation they do not provide concrete evidence of interaction among brain regions. Since cognitive neuroscience demands answers to this type of question, a novel signal processing framework has been developed, consisting of three stages. The first stage uses conventional MNE to separate a small number of underlying source signals from a large data set. The second stage is a novel time-frequency analysis consisting of a recursive filter bank. Finally, the filtered outputs from different brain regions are compared using a unique partial cross-correlation analysis that accounts for propagation time. The output from this final stage could be used to construct conditional independence graphs depicting the internal networks of the brain. In the second processing stage, a complementary pair of high- and low-pass filters is iteratively applied to a discrete time series. The low-pass output is critically sampled at each stage, which both removes redundant information and effectively scales the filter coefficients in time. The approach is similar to the Fast Wavelet Transform (FWT), but features a more sophisticated resampling step. This, in combination with the filter design procedure, leads to a finer frequency resolution than the FWT. The subsequent correlation analysis is unusual in that a latency estimation procedure is included to establish the probable transmission delays between regions of interest. This test statistic does not follow the same distribution as conventional correlation measures, so an empirical model has been developed to facilitate hypothesis testing.
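
The iterated split-and-decimate structure of the second stage can be sketched as follows (a generic dyadic filter bank using off-the-shelf Butterworth half-band filters from SciPy; the thesis's custom filter design and its more sophisticated resampling step are not reproduced):

    import numpy as np
    from scipy.signal import butter, lfilter

    def filter_bank(x, levels=4, order=4):
        """Iterated complementary high/low-pass split with critical sampling.
        At each level the signal is split at the half-band point; the high-pass
        branch is stored, and the low-pass branch is downsampled by 2 and fed
        to the next level, halving the analysed band each time."""
        bands = []
        for _ in range(levels):
            b_lo, a_lo = butter(order, 0.5, btype="low")   # 0.5 = half Nyquist
            b_hi, a_hi = butter(order, 0.5, btype="high")
            bands.append(lfilter(b_hi, a_hi, x))           # keep the high band
            x = lfilter(b_lo, a_lo, x)[::2]                # critically sample low band
        bands.append(x)                                    # residual low band
        return bands

    # Toy MEG-like signal: two oscillations at well-separated frequencies.
    fs = 256.0
    t = np.arange(0, 2.0, 1.0 / fs)
    x = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)
    for i, band in enumerate(filter_bank(x)):
        print(f"band {i}: {len(band)} samples")

As in the FWT, decimating the low-pass branch rescales the same filter coefficients in time at each level, so one filter pair serves every octave.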
80

DCD algorithm : architectures, FPGA implementations and applications

Liu, Jie January 2008 (has links)
In areas of signal processing and communications such as antenna array beamforming, adaptive filtering, multi-user and multiple-input multiple-output (MIMO) detection, channel estimation and equalization, echo and interference cancellation and others, solving linear systems of equations often provides the optimal performance. However, this is also a very complex operation that designers try to avoid by proposing various sub-optimal solutions. The dichotomous coordinate descent (DCD) algorithm allows linear systems of equations to be solved with high computational efficiency. It is a multiplication-free and division-free technique and is therefore well suited to hardware implementation. In this thesis, we present architectures and field-programmable gate array (FPGA) implementations of two variants of the DCD algorithm, known as the cyclic and leading DCD algorithms, for real-valued and complex-valued systems. For each of these techniques, we present architectures and implementations with different degrees of parallelism. The proposed architectures allow a trade-off between FPGA resources and computation time. The fixed-point implementations provide accuracy very close to that of their floating-point counterparts. We also show applications of the designs to complex division, antenna array beamforming and adaptive filtering. The DCD-based complex divider is based on the idea that complex division can be viewed as the problem of solving a 2x2 real-valued system of linear equations, which is done using the DCD algorithm. The new divider therefore uses no multiplication or division and, compared with the classical complex divider, requires significantly less chip area. A DCD-based minimum variance distortionless response (MVDR) beamformer employs the DCD algorithm for multiplication-free computation of the antenna array weights. An FPGA implementation of the proposed DCD-MVDR beamformer requires much less chip area and achieves much higher throughput than other implementations, while the performance of the fixed-point implementation is very close to that of a floating-point MVDR beamformer using direct matrix inversion. When the DCD algorithm is incorporated into the recursive least squares (RLS) adaptive filter, a new efficient technique, named the RLS-DCD algorithm, is derived. The RLS-DCD algorithm expresses the RLS adaptive filtering problem in terms of auxiliary normal equations with respect to increments of the filter weights; these normal equations are approximately solved using DCD iterations. The RLS-DCD algorithm is well suited to hardware implementation, and its complexity is as low as O(N^2) operations per sample in the general case and O(N) operations per sample for transversal RLS adaptive filters. The performance of the RLS-DCD algorithm, in both fixed-point and floating-point implementations, can be made arbitrarily close to that of the floating-point classical RLS algorithm. Furthermore, a new dynamically regularized RLS-DCD algorithm is also proposed to reduce the complexity of the regularized RLS problem from O(N^3) to O(N^2) in the general case and to O(N) for transversal adaptive filters. This dynamically regularized RLS-DCD algorithm is simple to implement in finite precision and requires few chip resources.
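
A minimal software model of the cyclic DCD iterations described above (an illustrative sketch under the usual assumption of a symmetric positive definite system, as arises in normal equations; in hardware, the power-of-two step sizes below reduce every multiplication by d to a bit shift):

    import numpy as np

    def dcd_cyclic(A, b, n_bits=15, amplitude=2.0, max_updates=1000):
        """Cyclic dichotomous coordinate descent for A x = b, with A symmetric
        positive definite. The step size d runs over powers of two from the
        assumed solution amplitude, so no true multiplications or divisions
        are needed in a fixed-point hardware realisation."""
        N = len(b)
        x = np.zeros(N)
        r = b.copy()                           # residual r = b - A x
        d = amplitude
        updates = 0
        for _ in range(n_bits):
            d /= 2.0                           # next bit of the solution
            improved = True
            while improved and updates < max_updates:
                improved = False
                for n in range(N):             # cyclic pass over coordinates
                    if abs(r[n]) > (d / 2.0) * A[n, n]:
                        s = np.sign(r[n])
                        x[n] += s * d          # power-of-two step on x_n
                        r -= s * d * A[:, n]   # keep the residual consistent
                        improved = True
                        updates += 1
        return x

    # Toy usage: a small SPD system of the kind that arises as normal
    # equations in beamforming or RLS adaptive filtering.
    rng = np.random.default_rng(1)
    G = rng.standard_normal((8, 4))
    A = G.T @ G + 0.1 * np.eye(4)              # regularized Gram matrix
    x_true = rng.uniform(-1, 1, 4)
    b = A @ x_true
    x = dcd_cyclic(A, b)
    print("max error:", np.max(np.abs(x - x_true)))

The update test compares the residual only against the diagonal entry A[n, n], which is what lets each coordinate decision be made with a single comparison in hardware.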
