211. An agent-based hypermedia digital library. Salampasis, Michail. January 1997.
No description available.
212. Direct sequence spread spectrum techniques in local area networks. Smythe, Colin. January 1985.
This thesis describes the application of a direct sequence spread spectrum modulation scheme to the physical layer of a local area network, subsequently named the SS-LAN. Most present-day LANs employ one form or another of time division multiplexing, which performs well in many systems but which is limited by its very nature in real-time, time-critical and time-demanding applications. The use of spread spectrum multiplexing removes these limitations by providing a simultaneous multiple-user access capability which permits any and every node to utilise the channel independently of the activity currently being supported by that channel. The theory of spectral spreading is a consequence of the Shannon channel capacity, in which capacity may be maintained by trading signal-to-noise ratio for bandwidth. The increased bandwidth provides an increased signal dimensionality which can be used to provide noise immunity and/or a simultaneous multiple-user environment: the effects of the simultaneous users can be considered as noise from the point of view of any particular constituent signal. The use of code sequences at the physical layer of a LAN permits a wide range of mapping alternatives which can be selected according to the particular application. Each of the mapping techniques possesses the general spread spectrum properties, but certain properties can be emphasised at the expense of others. The work has involved the description of the properties of the SS-LAN coupled with the development of the mapping techniques for use in the distribution of the code sequences. This has been followed by an appraisal of a set of code sequences, which has resulted in the definition of the ideal code properties and the selection of code families for particular types of application. The top-level design specification for the hardware required in the construction of the SS-LAN has also been presented, and this has provided the basis for a simplified and idealised theoretical analysis of the performance parameters of the SS-LAN. A positive set of conclusions for the range of these parameters has been obtained, and these have been further analysed by the use of an SS-LAN computer simulation program. This program can simulate any configuration of the SS-LAN, and the results it has produced have been compared with those of the analysis and found to be in agreement. A tool for the further analysis of complex SS-LAN configurations has therefore been developed, and this will form the basis for further work.
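The capacity argument invoked here is the standard Shannon result for an additive white Gaussian noise channel; stated concretely (a textbook identity, not a result of the thesis), for bandwidth B, signal power S and one-sided noise density N0:

```latex
C = B \log_2\!\left(1 + \frac{S}{N_0 B}\right),
\qquad
\lim_{B \to \infty} C = \frac{S}{N_0}\,\log_2 e \;\approx\; 1.44\,\frac{S}{N_0}.
```

Widening the bandwidth lowers the per-hertz signal-to-noise ratio yet leaves the capacity essentially intact, and it is this surplus signal dimensionality that the SS-LAN spends on simultaneous code-division access and noise immunity.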
213. Crosstalk-resilient coding for high density digital recording. Ahmed, Mohammed Zaki. January 2003.
Increasing the track density in magnetic systems is very difficult due to inter-track interference (ITI) caused by the magnetic field of adjacent tracks. This work presents a two-track partial response class 4 (PR4) magnetic channel with linear and symmetrical ITI, and explores modulation codes, signal processing methods and error correction codes to mitigate the effects of ITI. Recording codes were investigated, and a new class of two-dimensional run-length-limited recording codes is described. The new class of codes controls the type of ITI and has been found to be about 10% more resilient to ITI than conventional run-length-limited codes. A new adaptive trellis is also described that adaptively solves for the effect of ITI; this has been found to give gains of up to 5 dB in signal-to-noise ratio (SNR) at 40% ITI. It was also found that the new class of codes was about 10% more resilient to ITI than conventional recording codes when decoded with the new trellis. Error correction coding methods were applied, and the use of Low Density Parity Check (LDPC) codes was investigated. It was found that at high SNR, conventional codes could perform as well as the new modulation codes in a combined modulation and error correction coding scheme. Results suggest that high-rate LDPC codes can mitigate the effect of ITI; however, the decoders have convergence problems beyond 30% ITI.
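A minimal sketch of the channel model described above, assuming NRZ signalling and the usual PR4 target 1 - D^2; the cross-feed fraction and all names are illustrative, not taken from the thesis:

```python
import numpy as np

def pr4(bits):
    """Ideal partial-response class 4 channel: target polynomial 1 - D^2."""
    x = 2 * np.asarray(bits, dtype=float) - 1    # map {0,1} -> NRZ levels -1/+1
    return np.convolve(x, [1.0, 0.0, -1.0])

def two_track_readback(track_a, track_b, iti=0.4):
    """Linear, symmetrical inter-track interference: each head also picks
    up a fraction `iti` of the neighbouring track's PR4 response."""
    ya, yb = pr4(track_a), pr4(track_b)
    return ya + iti * yb, yb + iti * ya

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 16)
b = rng.integers(0, 2, 16)
ra, rb = two_track_readback(a, b, iti=0.4)       # 40% ITI, as in the SNR figure above
```

A detector for this channel must equalise both the controlled PR4 inter-symbol interference and the cross-track term, which is what the adaptive trellis described in the abstract solves for.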
214. Investigation, development and application of knowledge based digital signal processing methods for enhancing human EEGs. Hellyar, Mark Tremaine. January 1991.
This thesis details the development of new and reliable techniques for enhancing the human electroencephalogram (EEG). This development has involved the incorporation of adaptive signal processing (ASP) techniques within an artificial intelligence (AI) paradigm, more closely matching the implicit signal analysis capabilities of the EEG expert. The need for EEG enhancement by removal of ocular artefact (OA) is widely recognised. However, conventional ASP techniques for OA removal fail to differentiate between OAs and some abnormal cerebral waveforms, such as frontal slow waves, and OA removal often results in the corruption of these diagnostically important cerebral waveforms. The experienced EEG expert, by contrast, is often able to differentiate between OA and abnormal slow waveforms, and between different types of OA. This expert knowledge is integrated with selectable adaptive filters in an intelligent OA removal system (IOARS). The EEG is enhanced by removing OA only when OA is identified, and by applying the OA removal algorithm pre-set for the specific OA type. Extensive EEG data acquisition has provided a database of abnormal EEG recordings from over 50 patients exhibiting a variety of cerebral abnormalities. Structured knowledge elicitation has provided over 60 production rules for OA identification in the presence of abnormal frontal slow waveforms, and for distinguishing between OA types. The IOARS was implemented on personal computer (PC) based hardware in the PROLOG and C languages. Two-second, 18-channel EEG signal segments are subjected to digital signal processing to extract salient features from the time, frequency and contextual domains. OA is identified using a forward/backward hybrid inference engine with uncertainty management, applying the elicited expert rules to the extracted signal features. Evaluation of the system has been carried out using both normal and abnormal patient EEGs, and this shows a high agreement (82.7%) in OA identification between the IOARS and an EEG expert. This novel development provides a significant improvement in OA removal and EEG signal enhancement, and will allow more reliable automated EEG analysis. The investigation detailed in this thesis has led to four papers, including one in a special proceedings of the IEE, and has been the subject of several review articles.
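The abstract does not specify the selectable adaptive filters; the conventional baseline such a system would gate is an EOG-referenced LMS canceller, sketched below (function name, tap count and step size are illustrative assumptions):

```python
import numpy as np

def lms_remove_oa(eeg, eog, taps=5, mu=0.01):
    """Conventional ocular-artefact removal: an LMS adaptive filter
    estimates the OA contribution from an EOG reference channel and
    subtracts it from the contaminated EEG channel."""
    w = np.zeros(taps)                          # adaptive filter weights
    clean = np.array(eeg, dtype=float)
    for n in range(taps, len(eeg)):
        x = np.asarray(eog[n - taps:n][::-1], dtype=float)  # recent EOG samples
        e = eeg[n] - w @ x                      # error = OA-corrected EEG sample
        w += 2 * mu * e * x                     # LMS weight update
        clean[n] = e
    return clean
```

An IOARS-style system would run the rule-based identification stage first and invoke such a canceller only on segments where OA is actually detected, which is what prevents frontal slow waves from being corrupted.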
215. Some cryptographic techniques for secure data communication. Varadharajan, Vijayaraghavan. January 1984.
No description available.
216. Channel coding techniques for a multiple track digital magnetic recording system. Davey, Paul James. January 1994.
In magnetic recording, greater areal bit-packing densities are achieved by increasing track density (reducing the space between, and the width of, the recording tracks) and/or by reducing the wavelength of the recorded information. This leads to the requirement of higher-precision tape transport mechanisms and dedicated coding circuitry. A TMS32010 digital signal processor is applied to a standard low-cost, low-precision, multiple-track, compact cassette tape recording system. Advanced signal processing and coding techniques are employed to maximise recording density and to compensate for the mechanical deficiencies of this system. Parallel software encoding/decoding algorithms have been developed for several run-length-limited modulation codes. The results for a peak detection system show that Bi-Phase L code can be reliably employed up to a data rate of 5 kbit/s per track. Development of a second system, employing a TMS32025 and sampling detection, permitted the use of adaptive equalisation to slim the readback pulse. Application of conventional read equalisation techniques, which oppose inter-symbol interference, resulted in a 30% increase in performance. Further investigation shows that greater linear recording densities can be achieved by employing partial response signalling and maximum likelihood detection. Partial response signalling schemes use controlled inter-symbol interference to increase recording density at the expense of a multi-level readback waveform, which incurs an increased noise penalty; maximum likelihood sequence detection employs soft decisions on the readback waveform to recover this loss. The associated modulation coding techniques required for optimised operation of such a system are discussed. Two-dimensional run-length-limited (d, ky) modulation codes provide a further means of increasing storage capacity in multi-track recording systems. For example, the code rate of a single-track run-length-limited code with constraints (1, 3), such as Miller code, can be increased by over 25% when using a 4-track two-dimensional code with the same d constraint and with the k constraint satisfied across a number of parallel channels. The k constraint along an individual track, kx, can be increased without loss of clock synchronisation, since the clocking information derived from frequent signal transitions can be sub-divided across a number, y, of parallel tracks in terms of a ky constraint. This permits more code words to be generated for a given (d, k) constraint in two dimensions than is possible in one dimension. This coding technique is furthered by the development of a reverse enumeration scheme based on the trellis description of the (d, ky) constraints. The application of a two-dimensional code to a high linear density system employing extended class IV partial response signalling and maximum likelihood detection is proposed. Finally, additional coding constraints to improve spectral response and error performance are discussed.
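For illustration, a sketch of the Bi-Phase L (Manchester) code named above, written in the transition (NRZI) notation in which (d, k) constraints are usually stated; the function names are assumptions, not the thesis's own software:

```python
def biphase_l_transitions(bits):
    """Bi-Phase L (Manchester) code as a transition sequence (1 = flux
    transition): every bit cell carries a mid-cell transition, plus a
    cell-boundary transition whenever two equal data bits meet.  The
    result satisfies the (d, k) = (0, 1) run-length constraint."""
    out, prev = [], None
    for b in bits:
        out.append(1 if b == prev else 0)   # boundary slot
        out.append(1)                       # mid-cell slot, always present
        prev = b
    return out

def satisfies_dk(transitions, d, k):
    """Check a one-dimensional (d, k) constraint: every gap between
    consecutive transitions holds at least d and at most k zeros."""
    gaps, run = [], 0
    for t in transitions:
        if t:
            gaps.append(run)
            run = 0
        else:
            run += 1
    return all(d <= g <= k for g in gaps)

assert satisfies_dk(biphase_l_transitions([1, 0, 1, 1, 0, 0]), d=0, k=1)
```

The two-dimensional (d, ky) idea then relaxes the per-track k: clock recovery only requires that some track in a group of y supplies a transition often enough, so each individual track's kx can grow and more code words become admissible.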
217. Applications of acousto-optic demodulation and decoding techniques. Hicks, Matthew Graham. January 1997.
This thesis describes the operation and performance of an acousto-optic demodulator system consisting of a laser source, an acousto-optic cell and a bi-cell detector. The bi-cell detector is made up of two photodiodes positioned side by side, separated by a small gap. Theory is developed to predict the following: the linear operating range for different gap sizes; absolute frequency sensitivity; system output in response to discrete phase changes; the optimum gap size for phase demodulation; absolute discrete phase change sensitivity; the performance of the system in the presence of carrier noise; and the effect of clipping the carrier signal on both frequency and phase modulated signals. A detailed model of the system has been written, using the software package Mathcad, which incorporates all the parameters that affect the performance of the physical system. The model has been used to study how the performance of the system changes as these parameters are varied. It is shown that the AO demodulator can be used in a number of ways (as a frequency demodulator, as a phase demodulator and to demodulate digitally modulated signals), and that the optimum values of some parameters differ for each application. The model is also used to investigate the response of the system to a number of the most common forms of digital modulation. It is shown that it is possible, without any a priori knowledge of the signal, to identify each of these forms of modulation and ultimately to decode messages carried on the signals. The system can also be used to measure the frequency shift in pulse Doppler radar; it is shown that the rms frequency error on a pulse using the AO demodulator is 150% better than that of existing systems. Experimental results are presented that are in good agreement with the results gained from both the theoretical and the modelled analysis of the system. Finally, suggestions are made for areas of further work on the signal processing of the output signals and on possible future uses of the demodulator.
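A simplified model in the spirit of the Mathcad model described above: the diffracted spot is taken as Gaussian, its displacement on the detector proportional to the instantaneous frequency offset, and the normalised photodiode difference forms the output. All parameter values and names are assumptions for illustration, not the thesis's own model.

```python
import numpy as np
from scipy.special import erf

def bicell_output(freq_offset, spot_radius=1.0, gap=0.2, k_deflect=0.5):
    """Normalised bi-cell response (B - A) / (A + B) for a Gaussian spot
    whose centre is deflected by k_deflect * freq_offset; the two
    photodiodes sit either side of a dead gap of width `gap`."""
    x0 = k_deflect * freq_offset
    a = 0.5 * (1 + erf((-gap / 2 - x0) / spot_radius))   # power on left diode
    b = 0.5 * (1 - erf((+gap / 2 - x0) / spot_radius))   # power on right diode
    return (b - a) / (a + b)

# Near zero offset the response is linear in frequency, which is the
# demodulator's operating region; the gap width sets where linearity ends.
for df in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(df, round(bicell_output(df), 3))
```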
218. Natural algorithms in digital filter design. Penberthy Harris, Stephen. January 2001.
Digital filters are an important part of Digital Signal Processing (DSP), which plays a vital role in the modern world, but their design is a complex task requiring a great deal of specialised knowledge. An analysis of this design process is presented, which identifies opportunities for the application of optimisation. The Genetic Algorithm (GA) and Simulated Annealing are problem-independent and increasingly popular optimisation techniques. They do not require detailed prior knowledge of the nature of a problem, and are unaffected by a discontinuous search space, unlike traditional methods such as calculus-based and hill-climbing approaches. Potential applications of these techniques to the filter design process are discussed and presented with practical results. Investigations into the design of Frequency Sampling (FS) Finite Impulse Response (FIR) filters using a hybrid GA/hill-climber proved especially successful, improving on published results. An analysis of the search space for FS filters provided useful information on the performance of the optimisation technique. The ability of the GA to trade off a filter's performance with respect to several design criteria simultaneously, without intervention by the designer, is also investigated. Methods of simplifying the design process by using this technique are presented, together with an analysis of the difficulty of the non-linear FIR filter design problem from a GA perspective. This gave an insight into the fundamental nature of the optimisation problem, and also suggested future improvements. The results gained from these investigations allowed the framework for a potential 'intelligent' filter design system to be proposed, in which embedded expert knowledge, Artificial Intelligence techniques and traditional design methods work together. This could deliver a single tool capable of designing a wide range of filters with minimal human intervention, and of proposing solutions to incomplete problems. It could also provide the basis for the development of tools for other areas of DSP system design.
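A compact sketch of a hybrid GA/hill-climber applied to frequency-sampling FIR design, in the spirit of the approach described above: the genome holds the transition-band sample values and fitness is peak stopband ripple. Filter length, band edges, population settings and all names are illustrative assumptions rather than the thesis's own configuration.

```python
import numpy as np

def fs_lowpass(trans, n_taps=33, pass_bins=6):
    """Frequency-sampling linear-phase FIR lowpass: one-sided magnitude
    samples are 1 in the passband, `trans` across the transition band and
    0 in the stopband; the taps come from the inverse real FFT."""
    half = n_taps // 2 + 1
    A = np.zeros(half)
    A[:pass_bins] = 1.0
    A[pass_bins:pass_bins + len(trans)] = trans
    k = np.arange(half)
    H = A * np.exp(-1j * np.pi * k * (n_taps - 1) / n_taps)  # linear-phase term
    return np.fft.irfft(H, n=n_taps)

def stopband_ripple(h, stop_edge=0.26):
    """Peak stopband magnitude in dB (lower is better)."""
    w = np.fft.rfftfreq(4096)                  # normalised frequency, 0..0.5
    mag = np.abs(np.fft.rfft(h, 4096))
    return 20 * np.log10(mag[w >= stop_edge].max())

def ga_hillclimb(n_trans=2, pop=30, gens=60, sigma=0.1, seed=1):
    """Hybrid optimiser: a small elitist GA explores transition-sample
    values, then a greedy hill-climb polishes the best individual."""
    rng = np.random.default_rng(seed)
    cost = lambda g: stopband_ripple(fs_lowpass(g))
    P = rng.random((pop, n_trans))
    for _ in range(gens):
        P = P[np.argsort([cost(g) for g in P])]    # rank by stopband ripple
        elite = P[: pop // 2]
        kids = np.clip(elite + sigma * rng.standard_normal(elite.shape), 0, 1)
        P = np.vstack([elite, kids])               # elitist replacement
    best = min(P, key=cost)
    best_cost = cost(best)
    for _ in range(200):                           # hill-climbing stage
        cand = np.clip(best + 0.01 * rng.standard_normal(n_trans), 0, 1)
        c = cost(cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost

best, ripple_db = ga_hillclimb()
```

The hybrid split mirrors the rationale in the abstract: the GA copes with the discontinuous, multi-modal search space, while the local hill-climb supplies the fine convergence that a GA alone lacks.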
219. Speech coding in private and broadcast networks. Suddle, Muhammad Riaz. January 1996.
No description available.
220. Soft-demodulation of QPSK and 16-QAM for turbo coded WCDMA mobile communication systems. Rosmansyah, Yusep. January 2003.
No description available.