431

Low complexity decoding of cyclic codes

Ho, Hai Pang January 1998 (has links)
This thesis presents three novel low complexity decoding algorithms for cyclic codes: the Extended Kasami Algorithm (EKA), Permutation Error Trapping (PET) and the Modified Dorsch Algorithm (MDA). The Extended Kasami Algorithm is a novel decoding algorithm combining the Error Trapping Algorithm with cover polynomial techniques. With a revised searching method to locate the best combination of cover positions, the Extended Kasami Algorithm can achieve bounded distance performance with complexity many times lower than other efficient decoding algorithms. In comparison with the Minimum Weight Decoding (MWD) Algorithm on the (31,16) BCH code, the complexity of EKA is only 5% of that of MWD at 0 dB Eb/No. Comparing EKA with the Kasami Algorithm on the (23,12) Golay code, EKA reduces the complexity consistently for all values of Eb/No. When dealing with Reed Solomon codes, it is found that the additional complexity incurred by finding the error values increases exponentially with the number of bits in each symbol. To eliminate the problem of finding the error values, Permutation Error Trapping uses a specific cyclic code property to re-shuffle symbol positions. This complements the Error Trapping approach well, and most decodable error patterns can be trapped using this simple approach. PET achieves performance close to that of MWD on the (15,9) RS code with much lower complexity. For more complex codes, such as the four-symbol-error-correcting (15,7) RS code, Modified Permutation Error Trapping combines part of the cover polynomial approach of EKA with PET, retaining good performance with low complexity. For decoding Reed Solomon codes using soft decision values, the application of a modified Dorsch Algorithm to Reed Solomon codes has been evaluated in several respects. Using a binary form of Reed Solomon codes, near maximum likelihood performance can be achieved with very few decodings.
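As an aside on the error trapping principle that both EKA and PET build on, the following is a minimal sketch (not taken from the thesis) of a plain error-trapping decoder for a binary cyclic (n, k) code: the received word is cyclically shifted until the syndrome weight drops to t or below, at which point the syndrome itself is the error pattern confined to the n - k parity positions. The (15,7) BCH code, its generator polynomial and the toy non-systematic encoding are illustrative assumptions.

```python
def poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division; polynomials are bit lists, index = degree."""
    rem = list(dividend)
    d = len(divisor) - 1                      # degree of the divisor
    for i in range(len(rem) - 1, d - 1, -1):
        if rem[i]:
            for j, b in enumerate(divisor):
                rem[i - d + j] ^= b
    return rem[:d]

def error_trap_decode(r, g, n, k, t):
    """Plain error trapping for a binary cyclic (n, k) code with generator g and
    error-correcting capability t."""
    for i in range(n):
        shifted = r[-i:] + r[:-i] if i else list(r)   # x^i * r(x) mod (x^n - 1)
        syn = poly_mod(shifted, g)
        if sum(syn) <= t:                              # errors trapped in the parity positions
            e_shift = syn + [0] * k
            e = e_shift[i:] + e_shift[:i]              # undo the cyclic shift
            return [a ^ b for a, b in zip(r, e)]
    return None                                        # pattern not trappable by shifting alone

# Illustrative example: the (15,7) binary BCH code, t = 2, g(x) = x^8 + x^7 + x^6 + x^4 + 1
n, k, t = 15, 7, 2
g = [1, 0, 0, 0, 1, 0, 1, 1, 1]
msg = [1, 0, 1, 1, 0, 0, 1]
code = [0] * n
for i, m in enumerate(msg):                            # toy non-systematic encoding: msg(x) * g(x)
    if m:
        for j, b in enumerate(g):
            code[(i + j) % n] ^= b
recv = list(code)
recv[2] ^= 1
recv[9] ^= 1                                           # two bit errors
assert error_trap_decode(recv, g, n, k, t) == code
```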
432

Texture segmentation by global optimization

Pandit, Sanjay January 1999 (has links)
This thesis is concerned with the investigation of a specific approach to the problem of texture segmentation, namely one based on the global optimization of a cost function. Many tasks in image analysis are expressed as global optimization problems in which the general issue is to find the global minimum of a cost function describing the interaction between the different variables modelling the image features and the interaction of these variables with the data in a given problem. The minimization of such a global cost function is a difficult problem, since the number of hidden variables (labels) is very large and the global cost function may have many local minima. This problem can be overcome to a large extent by using a stochastic relaxation algorithm (for example, simulated annealing). Initially, various classical techniques for texture segmentation are reviewed. Ideally, any texture segmentation algorithm should segment an image so that there is a one-to-one correspondence between the segmented edgels and the ground truth edgels. The effectiveness of an algorithm can be quantified in terms of under- and over-detection errors for each segmented output image. These measures are used throughout this thesis to quantify the quality of the results. A particular method which uses global optimization for texture segmentation is initially identified as potentially interesting, and is implemented and studied. The implementation showed that this method suffers from many shortcomings and is not as good as reported in the literature. As the general approach is a well established methodology for image processing problems, the rest of the thesis is devoted to different attempts to make this method work. The novel ideas introduced to improve the method are: an improved version of the cost function; the use of alternative statistics that characterize each texture; the use of a combination of statistics to characterize textures; the use of an implicit dictionary of penalizable label configurations, as opposed to an explicit dictionary, leading to penalties applied to anything not acceptable rather than to a selection of unacceptable configurations; the introduction of a modified transfer function that maps statistical differences to label differences; the use of a database of training patterns instead of assuming that one knows a priori which textures are present in the image to be segmented; the use of alternative time schedules with which the model is imposed on the data gradually, in a linear, a non-linear and an adaptive way; the introduction of an enhanced set of labels that allows the use of the local orientation of the boundary; and the introduction of a novel way to create new states of the system during simulated annealing, updating the values of nine label sites instead of a single label site at a time in order to accelerate convergence. The results obtained by all these modifications vastly improve the performance of the algorithm over its original version. However, the whole approach does not produce the quality of results expected for real applications, nor does it exhibit the robustness of a system that could be used in practice. The reason appears to be the bluntness of the statistical tests used to identify the boundary.
So, my conclusion is that although global optimization methods are good for edge detection where the data are the local values of the first derivative, the approach is not very appropriate for texture segmentation where one has to rely on statistical differences.
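For readers unfamiliar with the stochastic relaxation at the heart of this approach, the sketch below (a toy illustration, not the thesis's model) anneals a binary label field over windowed texture features, minimizing a cost that combines a within-class data term with a smoothness penalty on label discontinuities. The feature, cost terms, cooling schedule and single-site proposal are all illustrative assumptions; the thesis's nine-site update is a larger move of the same kind.

```python
import math
import random

def simulated_annealing(labels, cost, propose, t0=1.0, alpha=0.995, steps=20000, seed=0):
    """Generic simulated annealing: propose a local change to the label field and
    accept it with the Metropolis rule under a geometric cooling schedule."""
    rng = random.Random(seed)
    current = cost(labels)
    t = t0
    for _ in range(steps):
        cand = propose(labels, rng)
        c = cost(cand)
        if c <= current or rng.random() < math.exp((current - c) / t):
            labels, current = cand, c
        t = max(t * alpha, 1e-6)
    return labels, current

def make_cost(features, beta=2.0):
    """Within-class squared deviation of a per-window texture feature plus
    beta times the number of label discontinuities (the smoothness prior)."""
    def cost(labels):
        data = 0.0
        for c in (0, 1):
            vals = [f for f, l in zip(features, labels) if l == c]
            if vals:
                mu = sum(vals) / len(vals)
                data += sum((v - mu) ** 2 for v in vals)
        return data + beta * sum(a != b for a, b in zip(labels, labels[1:]))
    return cost

def propose(labels, rng):
    """Flip one randomly chosen label."""
    cand = list(labels)
    cand[rng.randrange(len(cand))] ^= 1
    return cand

# two synthetic 'textures': low-variance windows followed by high-variance windows
rng = random.Random(1)
features = [rng.gauss(0, 0.2) ** 2 for _ in range(20)] + [rng.gauss(0, 2.0) ** 2 for _ in range(20)]
labels0 = [rng.randrange(2) for _ in features]
labels, _ = simulated_annealing(labels0, make_cost(features), propose)
print(labels)   # expected: roughly constant on each half (class indices are arbitrary)
```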
433

Vocoder model based variable rate narrowband and wideband speech coding below 9 kbps

Stefanovic, Milos January 1999 (has links)
The past two decades have witnessed rapid growth and development within the telecommunications industry. This has been primarily fuelled by the proliferation of digital mobile communication applications and services, which have become commonplace and easily within the financial reach of businesses and the general public. Current research trends, involving integration and packetisation of voice, video and data channels into true multimedia communications, promise a similar technological revolution in the next decade. One of the key design issues of the new high quality multimedia services is a requirement for very high data rates. Whilst the available bandwidth in wire-based terrestrial networks is a relatively cheap and expandable resource, it becomes inherently limited in satellite or cellular radio systems. In order to accommodate ever-growing numbers of subscribers whilst maintaining high quality and low operational costs, it is necessary to maximise spectral efficiency and reduce power consumption. This has given rise to the rapid development of signal compression techniques, which in the speech transmission domain are known as speech coding algorithms. The research carried out for this thesis has mainly focused on the design and development of low bit rate narrowband and wideband speech coding systems which utilise a variable rate approach in order to improve their perceptual quality and reduce their transmission rates. The algorithms subsequently developed are based on existing vocoding schemes, whose rigid fixed rate structure is a major limitation to achieving higher quality and lower rates. The variable rate schemes utilise the time-varying characteristics of the speech signal, which is classified according to the segmentation algorithms developed. Two main schemes were developed, a variable bit rate scheme with an average as low as 1.35 kbps and a variable frame rate scheme with an average of 2.1 kbps, both achieving or even surpassing the subjective quality of the existing vocoding standard at 4.15 kbps. Wideband speech exhibits characteristics which are not embodied within narrowband speech and which contribute to its superior perceived quality. A very high quality wideband vocoder operating at rates (fixed and variable) below 9 kbps is presented in this thesis, in which particular attention is paid to preserving the information in the higher frequencies in order to maximise the attainable quality.
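As a toy illustration of how frame classification can drive a variable rate, the sketch below labels each frame from simple measures (short-time energy and zero-crossing rate) and selects a per-class bit allocation. The decision logic, thresholds and bit allocations are assumptions for illustration, not the segmentation algorithms or rates developed in the thesis.

```python
import numpy as np

# illustrative per-class bit allocations (bits per 20 ms frame), not the thesis's values
BITS_PER_FRAME = {"silence": 4, "unvoiced": 28, "voiced": 52}

def classify_frame(frame, energy_thresh=1e-4, zcr_thresh=0.25):
    """Crude frame classifier using short-time energy and zero-crossing rate."""
    energy = np.mean(frame ** 2)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    if energy < energy_thresh:
        return "silence"
    return "unvoiced" if zcr > zcr_thresh else "voiced"

def average_rate(speech, fs=8000, frame_ms=20):
    """Average bit rate (kbit/s) of the toy variable-rate scheme over a signal."""
    n = int(fs * frame_ms / 1000)
    frames = [speech[i:i + n] for i in range(0, len(speech) - n + 1, n)]
    bits = sum(BITS_PER_FRAME[classify_frame(f)] for f in frames)
    return bits / (len(frames) * frame_ms)     # bits per millisecond = kbit/s

# silence, then a voiced-like tone, then unvoiced-like noise
t = np.arange(3200) / 8000.0
toy = np.concatenate([np.zeros(1600),
                      0.3 * np.sin(2 * np.pi * 150 * t),
                      0.05 * np.random.default_rng(0).standard_normal(3200)])
print(average_rate(toy), "kbit/s")
```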
434

Interconnection of transputer links using a multiple bus configuration

Adda, M. January 1992 (has links)
The design of an efficient distributed memory transputer network is a difficult issue. In order to successfully construct highly concurrent systems with a large number of processors, their interconnection networks have to be as universal as possible and provide adequate connectivity for most applications. To satisfy these requirements, these communication networks should possess ease of expansion, high bandwidth, low latency, deadlock freedom and an acceptable degree of reliability. This thesis presents a new type of interconnection network based on a multiple bus organisation and routing resources (gateways) that offers significant improvements in bandwidth over previously accepted bus-oriented topologies (i.e. multi-bus and spanning bus) and in latency over most directly connected transputer networks (e.g. ring and mesh configurations). In addition, it is easier to expand than hypercube-like structures. Relatively high bandwidth, low latency, good processor scalability, semi-adaptive routing and deadlock freedom are the fundamental features by which our proposal contributes to the design of an efficient interconnection network for transputers. They have been achieved by separating the routing resources (gateways) from the computational resources (processors). Although this topology can be exploited by general purpose parallel processors based on shared or distributed memory techniques, transputers and an OCCAM-like programming methodology have been considered as a case study in this project, as this is the primary objective of the thesis. Simulation models and analytical results, mainly based on gap equations we have developed, conclusively demonstrate the superior performance of our system compared to most transputer topologies. The details of this architecture are presented in a design form that embodies many of the concepts discussed and proposed throughout the course of this research. As it is important to address each processor within the network uniquely, a dynamic address assignment algorithm that preserves the features of the proposed architecture is also suggested.
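To give a feel for the latency argument, here is a toy hop-count comparison, purely illustrative: the clustering, gateway placement and clique model of a bus are assumptions, and it ignores contention, which the thesis's gap equations and simulations address. A ring is compared with an arrangement in which small clusters of processors each share a bus with a local gateway and the gateways share a backbone bus.

```python
from collections import deque
from itertools import combinations

def bfs_hops(adj, src):
    """Hop count from src to every node of an undirected graph."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def avg_hops(adj, nodes):
    """Average hop count over all ordered pairs of the given nodes."""
    d = [bfs_hops(adj, s)[t] for s in nodes for t in nodes if s != t]
    return sum(d) / len(d)

def ring(n):
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

def multibus(clusters, per_cluster):
    """Each cluster's processors share a bus with a local gateway (bus = clique);
    the gateways share a backbone bus."""
    adj = {}
    def link(a, b):
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    for a, b in combinations([f"g{c}" for c in range(clusters)], 2):
        link(a, b)
    procs = []
    for c in range(clusters):
        members = [f"p{c}.{i}" for i in range(per_cluster)]
        procs += members
        for a, b in combinations(members + [f"g{c}"], 2):
            link(a, b)
    return adj, procs

adj, procs = multibus(4, 4)
print("16-node ring           :", avg_hops(ring(16), list(range(16))))   # about 4.3 hops
print("4 buses of 4 + gateways:", avg_hops(adj, procs))                  # 2.6 hops
```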
435

Enhanced Reed Solomon decoding using soft decision information

Mirza, Javed January 1993 (has links)
Reed Solomon codes are a well-known family of multilevel codes used in a variety of applications where errors are assumed to be bursty in nature. One of their appealing features is that they can correct random errors as well as mixtures of random and burst errors. Although the codes meet the Singleton bound with equality, they are very inefficient with respect to the Hamming bound, allowing a very high proportion of uncorrectable errors to be detected but not corrected. The algebraic decoding techniques traditionally used suffer the disadvantage that soft decision decoding cannot be achieved in a single decoding attempt, although it is possible to erase symbols of low confidence and allow the decoder to fill the erasures. Since Reed Solomon codes are Maximum Distance Separable (MDS) codes, it is possible to erase any n - k symbols (where n is the length and k the dimension of the code) and still achieve a successful decoding. In this thesis a study is made of an approach in which a large number of erasure patterns of weight 2t are generated and the decodings compared with the received sequence using hard and soft decision voting strategies. In addition, two improvements to the enhanced decoder have been investigated. The first is to compare decoded sequences bit by bit with the received sequence. The second is to incorporate soft decisions. In its most basic form the enhanced decoder is computationally inefficient, and consequently various methods were investigated to overcome this problem. This remained true after reducing the exhaustive erasure pattern set to a reduced set covering all possible combinations of 2t - 1 error patterns. Alternative algorithms investigated various ways of using the soft decision information to locate the most likely error symbols rather than relying on exhaustive decoding techniques. Although these algorithms were very efficient in terms of performance and substantially reduced the average computation, in the worst case the computational complexity they generated could exceed that of the exhaustive erasure pattern method. For comparison purposes the error correcting algorithm proposed by Chase was implemented. It was found that the performance of the enhanced decoder was slightly superior to that of the Chase algorithm, with the added advantage of a reduction in complexity.
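The MDS property the enhanced decoder relies on, that any n - k erased symbols can be refilled, is easy to see when a Reed Solomon codeword is viewed as the evaluations of a degree < k message polynomial: any k surviving positions determine that polynomial by interpolation. The sketch below (not the thesis's decoder) illustrates this for a toy (15,9) code over the prime field GF(17); practical RS decoders work over GF(2^m) and use algebraic erasure filling rather than explicit Lagrange interpolation.

```python
P = 17                                   # toy prime field GF(17)

def inv(a):
    return pow(a, P - 2, P)              # Fermat inverse

def rs_encode(msg, xs):
    """Codeword = evaluations of the degree < k message polynomial at the points xs."""
    return [sum(m * pow(x, i, P) for i, m in enumerate(msg)) % P for x in xs]

def lagrange_eval(points, x):
    """Evaluate the unique degree < k polynomial through `points` at x (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * inv(den)) % P
    return total

n, k = 15, 9                             # erase any n - k = 6 symbols and refill them
xs = list(range(1, n + 1))
code = rs_encode([3, 14, 1, 5, 9, 2, 6, 5, 3], xs)
erased = {0, 2, 5, 7, 11, 14}            # any 6 positions will do
survivors = [(xs[i], code[i]) for i in range(n) if i not in erased]
recovered = [lagrange_eval(survivors, x) for x in xs]
assert recovered == code
```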
436

Some studies of random signal analysis using simulated data

Faghih, Nezameddin January 1980 (has links)
This thesis studies some of the problems arising in the analysis of random signals. Digital computer simulations of first and second order Gaussian processes are employed for the problems requiring empirical investigation, and some exact autocorrelation functions are also used for further demonstrations. The filter characteristics required for the generation of first and second order processes with prescribed autocorrelation functions are designed, and the equations for the digital computer simulations are derived. Gaussian data are then generated for the variety of simulation studies undertaken. The statistical errors in digital estimates of probability density functions are considered. The sampling properties of autocorrelation estimates from uniformly sampled data are also studied; the theoretical and empirical estimate errors are compared, and a simplification of the complicated expression giving the expected error magnitudes is examined. The maximum determinant method of autocorrelation function extrapolation is studied empirically. The reliability test and the extrapolation errors are examined and the best choice of the truncation point is deduced. The equivalence of the maximum determinant and maximum entropy approaches is shown analytically. Some simulation examples of maximum entropy spectra and their transformations to the autocorrelation domain are also reported. A problem arising in certain situations is that the zero lag coefficient may be known, followed by a number of unknown coefficients and then by knowledge of the remaining portion of the autocorrelation function. A method of estimating the missing initial coefficients was introduced in Stone (1978), where further research on the selection of the estimates was also suggested; this and further studies of the method are reported in this thesis. The problem of aliasing is analysed and demonstrated. The effects of data interpolation on the spectral estimates are then investigated; in particular, the application of linear and cubic spline interpolation methods to the autocorrelation function and to the sampled data is considered. Finally, the thesis studies the sequential sampling scheme. Its contribution to minimizing the problem of aliasing, when the sampling interval is restricted to a minimum allowable value, is proved and demonstrated. Methods of estimating the autocorrelation function and spectra under sequential sampling are discussed and presented.
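As a brief sketch of the kind of simulation used throughout this work (parameters assumed for illustration), a first-order Gaussian process generated by a one-pole filter driven by white noise has the prescribed exponential autocorrelation a^|k|, which the biased sample estimate should reproduce for a long enough record.

```python
import numpy as np

def ar1(n, a, sigma=1.0, seed=0):
    """First-order Gaussian process x[t] = a * x[t-1] + w[t], started in its
    stationary distribution; its normalized autocorrelation is a**|k|."""
    w = np.random.default_rng(seed).normal(0.0, sigma, n)
    x = np.empty(n)
    x[0] = w[0] / np.sqrt(1.0 - a * a)
    for t in range(1, n):
        x[t] = a * x[t - 1] + w[t]
    return x

def acf(x, max_lag):
    """Biased sample autocorrelation estimate, normalized so that acf[0] = 1."""
    x = x - x.mean()
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(max_lag + 1)]) / len(x)
    return r / r[0]

a, lags = 0.8, 10
print(np.round(acf(ar1(20000, a), lags), 3))        # empirical estimate
print(np.round(a ** np.arange(lags + 1), 3))        # prescribed a**k
```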
437

Analysis-by-synthesis coding of speech signals at 8 kb/s and below

Soheili, Ramin January 1993 (has links)
The desire for instantaneous communication at any time and place has been a long-standing dream for people of different races and cultures. The post-war progress of telecommunications technology has made such dreams a reality. Nowadays, most households in the developed world are fitted with telephone sets capable of communicating with anyone across the globe. Over the last decade or so the demand for these communication services has seen a sharp rise. One factor that is beginning to constrain these desires is the available natural resources. Limited bandwidth and high public demand have resulted in a change from primitive analog-based systems to new, sophisticated digital systems. The ability to transmit information at varying bit rates (hence varying capacity) has been a major step forward in conquering the problems of channel capacity. In the case of signals such as speech, high bandwidth reductions typically result in quality degradations. Such side effects can be resolved with the use of powerful Digital Signal Processing chips, allowing complex modelling of the speech signals. In any low bit rate digital speech encoding system a mathematical model of the signal is required. The complexity of the model is reflected in the algorithm's output quality, delay, robustness to errors and computational load. The earliest digital encoding techniques, such as Pulse Code Modulation, are very simple and effective; however, they operate at high bit rates of 64 kb/s, thus occupying large channel capacities. Lately, the need for efficient speech encoding has resulted in complex time domain coding schemes known as Analysis-by-Synthesis algorithms. Such schemes are very successful in meeting the quality objectives at low bit rates of around 6 kb/s and above. In this thesis we look at several Analysis-by-Synthesis schemes and examine their problems in meeting criteria such as quality, complexity, robustness and delay. The algorithms examined are all time domain techniques with their main applications in mobile environments and PSTN services. The quality issue is assessed by looking at three major Analysis-by-Synthesis techniques (MPE-LPC, RPE-LPC and CELP, introduced in the early eighties), which utilise different glottal excitation modelling techniques. The question of robustness in mobile applications is tackled by a discussion of appropriate Forward Error Correcting codes and frame substitution/reconstruction strategies. A current requirement of PSTN services is for low delay algorithms, to avoid echo effects and the additional delay impairments caused by the use of satellite links. Since low bit rate digital schemes incorporate linear prediction techniques, resulting in long buffering and thus longer delays, backward prediction modelling is examined for achieving low delay, toll quality coding at bit rates of 8 kb/s and above.
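All of the Analysis-by-Synthesis schemes discussed share a short-term linear prediction front end; the sketch below computes the LPC coefficients of one frame and the residual that the excitation model must then approximate. The frame length, predictor order and the direct solve of the normal equations are illustrative choices; a real coder would typically use the Levinson-Durbin recursion.

```python
import numpy as np

def lpc(frame, order=10):
    """LPC coefficients by the autocorrelation method: solve the normal
    equations R a = r for the predictor x[n] ~ sum_i a[i] * x[n-1-i]."""
    x = frame * np.hamming(len(frame))
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])  # Toeplitz
    return np.linalg.solve(R, r[1:order + 1])

def residual(frame, a):
    """Short-term prediction residual (what the excitation model has to encode)."""
    p = len(a)
    pred = np.array([np.dot(a, frame[n - p:n][::-1]) for n in range(p, len(frame))])
    return frame[p:] - pred

# one 20 ms frame at 8 kHz: a 200 Hz tone plus a little noise
t = np.arange(160) / 8000.0
frame = np.sin(2 * np.pi * 200 * t) + 0.1 * np.random.default_rng(0).standard_normal(160)
a = lpc(frame)
e = residual(frame, a)
print("residual / frame energy:", np.sum(e ** 2) / np.sum(frame[len(a):] ** 2))
```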
438

Improved quality block-based low bit rate video coding

Kweh, Teck Hock January 1998 (has links)
The aim of this research is to develop algorithms for enhancing the subjective quality and coding efficiency of standard block-based video coders. In the past few years, numerous video coding standards based on a motion-compensated block-transform structure have been established, where block-based motion estimation is used for reducing the correlation between consecutive images and a block transform is used for coding the resulting motion-compensated residual images. Due to the use of predictive differential coding and variable length coding techniques, the output data rate exhibits extreme fluctuations. A rate control algorithm is devised for achieving a stable output data rate. This rate control algorithm, which is essentially a bit-rate estimation algorithm, is then employed in a bit-allocation algorithm for improving the visual quality of the coded images, based on some prior knowledge of the images. Block-based hybrid coders achieve a high compression ratio mainly due to the motion estimation and compensation stage in the coding process. The conventional bit-allocation strategy for these coders simply assigns the bits required by the motion vectors and allocates the rest to the residual image. However, at very low bit-rates this strategy is inadequate, as the motion vector bits take up a considerable portion of the total bit-rate. A rate-constrained selection algorithm is presented in which an analysis-by-synthesis approach is used for choosing the best motion vectors in terms of resulting bit rate and image quality. This selection algorithm is then also applied to mode selection, and a simple algorithm based on the above-mentioned bit-rate estimation algorithm is developed for the latter to reduce the computational complexity. For very low bit-rate applications, it is well known that block-based coders suffer from blocking artifacts. A coding mode is presented for reducing these annoying artifacts by coding a down-sampled version of the residual image with a smaller quantisation step size. Its applications to adaptive source/channel coding and to coding fast-changing sequences are examined.
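The motion estimation stage these coders rely on can be sketched as full-search block matching minimizing the sum of absolute differences (SAD). The block size, search range and synthetic frames below are illustrative assumptions, and the rate-constrained selection proposed in the thesis would add a bit-cost term to this distortion before choosing the vector.

```python
import numpy as np

def full_search(cur_block, ref, top, left, search=7):
    """Full-search block matching: the motion vector (dy, dx) minimizing SAD."""
    bs = cur_block.shape[0]
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
                continue                      # candidate block falls outside the frame
            cand = ref[y:y + bs, x:x + bs].astype(int)
            sad = np.abs(cur_block.astype(int) - cand).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

# synthetic frames: the current frame is the reference shifted by (2, -1)
ref = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, (2, -1), axis=(0, 1))
top, left, bs = 24, 24, 16
mv, sad = full_search(cur[top:top + bs, left:left + bs], ref, top, left)
print(mv, sad)   # expected: (-2, 1) with zero SAD
```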
439

A study of information fusion applied to subband speaker recognition

Higgins, Jonathan E. January 2002 (has links)
No description available.
440

Sequential detection methods for spread-spectrum code acquisition

Ravi, K. V. January 1991 (has links)
No description available.
