101.
Data compression strategies for RDAT/DDS media in hostile environments. Thomas, Owen David John, January 1996.
This thesis investigates the prevention of error propagation in magnetically recorded compressed data when severe environmental conditions result in uncorrected channel errors. The tape format DDS is examined and a computer simulation of its error correction procedures is described. This software implementation uses explicit parity byte equations, and these are presented for all three Reed-Solomon codes. The simulation allows the calculation of the uncorrected error patterns when the recording is compromised, and uncorrected byte errors are determined for given initial random and burst errors. Some of the more familiar data compression algorithms are visited before the little-known adaptive Rice algorithm is described in detail. An analytic example is developed which demonstrates the coding mechanism. A synchronized piecewise compression strategy is adopted in which the synchronizing sequences are placed periodically into the compressed data stream. The synchronizing sequences are independent of the compression algorithm and may occur naturally in the compressed data stream. A cyclic count is added to the compressed data stream to number the groups of data between synchronizing sequences and prevent slippage in the data. The Rice algorithm is employed in the strategy to compress correlated physical data. A novel compressor is developed to compress mixed correlated physical data and text within the synchronization strategy. This compressor uses the Rice algorithm to compress the correlated data and a sliding window algorithm to compress the text, switching between the two algorithms as the data type varies. The sliding window compressor LZR is adopted when the same principles are applied to the robust compression of English text alone. LZR is modified to improve the compression of relatively small pieces of English text. The synchronization strategy incorporating these algorithms has been simulated computationally. This simulation is linked to that of DDS in each test performed, with both random and burst errors. The decompressed data is compared with the original. The strategy is demonstrated to be effective in preventing error propagation beyond the data immediately affected by errors, without significant damage to the compression ratio.
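To make the coding mechanism concrete, the following is a minimal sketch of Rice (Golomb-Rice) coding of non-negative integers, written in Python. It assumes the standard form of the code (a unary quotient followed by a k-bit remainder); the adaptive variant used in the thesis additionally chooses k per block of data, which is not shown, and the function names are illustrative.

```python
# Minimal Rice (Golomb-Rice) coding sketch with a fixed parameter k.
# The adaptive variant would re-estimate k for each block of samples.

def rice_encode(value: int, k: int) -> str:
    """Encode a non-negative integer: unary quotient, '0' terminator, k-bit remainder."""
    quotient = value >> k
    remainder = value & ((1 << k) - 1)
    return "1" * quotient + "0" + (format(remainder, f"0{k}b") if k else "")

def rice_decode(bits: str, k: int) -> int:
    """Decode a single Rice codeword from the start of a bit string."""
    quotient = 0
    i = 0
    while bits[i] == "1":          # count the unary prefix
        quotient += 1
        i += 1
    remainder = int(bits[i + 1:i + 1 + k], 2) if k else 0
    return (quotient << k) + remainder

if __name__ == "__main__":
    residues = [3, 10, 4, 7, 0, 12]   # small residues, as from correlated data
    k = 2
    stream = "".join(rice_encode(r, k) for r in residues)
    print(stream, rice_decode(stream, k))   # full bit stream, first decoded value
```

Small values produce short codewords, which is why the code suits correlated physical data whose sample-to-sample residues are mostly small.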
102.
Augmentative communication device design, implementation and evaluation. Plant, Richard Robert, January 1996.
The ultimate aim of this thesis was to design and implement an advanced software-based Augmentative Communication Device (ACD), or Voice Output Communication Aid (VOCA), for non-vocal Learning Disabled individuals by applying current psychological models, theories and experimental techniques. By taking account of potential users' cognitive and linguistic abilities, a symbol-based device (Easy Speaker) was produced which outputs naturalistic digitised human speech and sound and makes use of a photorealistic symbol set. In order to increase the size of the available symbol set, a hypermedia-style dynamic screen approach was employed. The relevance of the hypermedia metaphor in relation to models of knowledge representation and language processing was explored. Laboratory-based studies suggested that potential users could learn to operate the software productively and became faster and more efficient over time when performing set conversational tasks. Studies with unimpaired individuals supported the notion that digitised speech was less cognitively demanding to decode, or listen to. With highly portable, touch-based, PC-compatible systems beginning to appear, it is hoped that the otherwise silent will be able to use the software as their primary means of communication with the speaking world. Extensive field trials over a six-month period with a prototype device, conducted in collaboration with users' caregivers, strongly suggested this might be the case. Off-device improvements were also noted, suggesting that Easy Speaker, or similar software, has the potential to be used as a communication training tool. Such training would be likely to improve overall communicative effectiveness. To conclude, a model for successful ACD development was proposed.
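As an illustration only (the class and field names below are invented and not taken from Easy Speaker), a Python sketch of the hypermedia-style dynamic screen idea: each screen holds a small set of symbols, and a touch selection either plays a digitised utterance or navigates to a linked child screen, so the effective symbol set grows without crowding any single display.

```python
# Hypothetical sketch of a hypermedia-style dynamic screen structure.
# Selecting a symbol plays a pre-recorded (digitised) clip and/or moves
# to a child screen; none of these names come from the actual device.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Symbol:
    label: str                         # photorealistic image in the real device
    utterance: Optional[str] = None    # path to a digitised speech clip
    target: Optional["Screen"] = None  # hypermedia link to another screen

@dataclass
class Screen:
    name: str
    symbols: list = field(default_factory=list)

def select(symbol: Symbol, play) -> Optional[Screen]:
    """Handle a touch selection: speak and/or move to a linked screen."""
    if symbol.utterance:
        play(symbol.utterance)         # hand off to an audio back end
    return symbol.target               # None means stay on the current screen

# Example: a top-level screen linking to a "food" sub-screen.
food = Screen("food", [Symbol("drink", "audio/drink.wav"),
                       Symbol("sandwich", "audio/sandwich.wav")])
home = Screen("home", [Symbol("food", target=food),
                       Symbol("hello", "audio/hello.wav")])
next_screen = select(home.symbols[0], play=print)   # navigates to `food`
```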
103.
Public key cryptosystems: theory, application and implementation. McAuley, Anthony Joseph, January 1985.
The determination of an individual's right to privacy is mainly a non-technical matter, but the pragmatics of providing it is the central concern of the cryptographer. This thesis has sought answers to some of the outstanding issues in cryptography, in particular some of the theoretical, application and implementation problems associated with a Public Key Cryptosystem (PKC). The Trapdoor Knapsack (TK) PKC is capable of fast throughput, but suffers from serious disadvantages. In chapter two a more general approach to the TK-PKC is described, showing how the public key size can be significantly reduced. To overcome the security limitations, a new trapdoor is described in chapter three. It is based on transformations between the radix and residue number systems. Chapter four considers how cryptography can best be applied to multi-addressed packets of information. We show how security or communication network structure can be used to advantage, before proposing a new broadcast cryptosystem which is more generally applicable. Copyright is traditionally used to protect the publisher from the pirate. Chapter five shows how to protect information when it is in easily copyable digital format. Chapter six describes the potential and pitfalls of VLSI, followed in chapter seven by a model for comparing the cost and performance of VLSI architectures. Chapter eight deals with novel architectures for all the basic arithmetic operations. These architectures provide a basic vocabulary of low-complexity VLSI arithmetic structures for a wide range of applications. The design of a VLSI device, the Advanced Cipher Processor (ACP), to implement the RSA algorithm is described in chapter nine. Its heart is the modular exponential unit, which is a synthesis of the architectures in chapter eight. The ACP is capable of a throughput of 50 000 bits per second.
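For illustration, the following Python sketch shows modular exponentiation by square-and-multiply, the operation realised in hardware by the ACP's modular exponential unit; the toy RSA parameters are assumptions chosen for readability, and real moduli are of course far larger.

```python
# Software sketch of the operation at the heart of the RSA chip described
# above: modular exponentiation by square-and-multiply, processing the
# exponent one bit at a time. This shows only the underlying algorithm,
# not the dedicated VLSI arithmetic structures.

def mod_exp(base: int, exponent: int, modulus: int) -> int:
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                    # multiply in the current square
            result = (result * base) % modulus
        base = (base * base) % modulus      # square for the next exponent bit
        exponent >>= 1
    return result

# RSA with toy parameters (illustrative only):
p, q, e = 61, 53, 17
n = p * q                                   # public modulus
d = pow(e, -1, (p - 1) * (q - 1))           # private exponent
message = 42
cipher = mod_exp(message, e, n)             # encryption: m^e mod n
assert mod_exp(cipher, d, n) == message     # decryption recovers m
```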
104.
Analysis and synthesis of digital active networks. Coupe, Francis Geoffrey Armstrong, January 1979.
The analysis of digital active networks is developed in this thesis, starting from the definitions of digital amplifiers and digital amplifier arrays and concluding with the presentation of general analysis techniques for N-port digital active networks. The analysis techniques are then tested by comparing the results of practical experiments with numerical evaluations of the derived transfer functions using a computer. The basic techniques necessary for the synthesis of digital active networks are described with an example, and the thesis is concluded with a discussion of the advantages of digital active networks over their analogue equivalents.
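As a hedged indication of the kind of numerical evaluation referred to, the sketch below computes the frequency response of a z-domain transfer function H(z) = B(z)/A(z) around the unit circle so that a derived response can be compared with measurement; the first-order coefficients are placeholders, not a network analysed in the thesis.

```python
# Evaluate a derived digital transfer function H(z) = B(z)/A(z) on the
# unit circle. The coefficients below are placeholder values only.

import numpy as np

def freq_response(b, a, n_points=512):
    """Evaluate H(e^{jw}) for w in [0, pi); b and a list powers of z, highest first."""
    w = np.linspace(0.0, np.pi, n_points, endpoint=False)
    z = np.exp(1j * w)
    return w, np.polyval(b, z) / np.polyval(a, z)

# Placeholder first-order section: H(z) = (z + 1) / (z - 0.5).
w, h = freq_response(b=[1.0, 1.0], a=[1.0, -0.5])
print(abs(h[0]), abs(h[-1]))   # gain near DC and near the Nyquist frequency
```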
105.
Serial-data computation in VLSI. Smith, Stewart Gresty, January 1987.
No description available.
106.
Modelling aspects of macroeconomic behaviour in Kyrgyzstan using system dynamics. Tentieva, Gulkayr J., January 1999.
The aim of this thesis is to consider two issues that are of particular significance for macroeconomic modelling. These are the existence of post-Soviet transitional economies and the relevance of either Post-Keynesian or Neo-Classical policy advice in the context of dynamic disorder. In this work, I use a methodology called System Dynamics. This provides an alternative, interactive methodology for analysing macro-dynamics. Traditional macroeconomic tools such as Regression Analysis, Time-Series Analysis, Simultaneous Equation Models and the like require many years of unbroken data, which does not exist for transitional economies. It is shown that the different approach of System Dynamics can overcome these difficulties. Some of my models of the Kyrgyz Economy, which use quantity-rationed systems with pulse elements integrated into potential and actual excess demand levels, reveal dynamic equilibria, disequilibria and the potential for chaotic behaviour. The difficulties facing macroeconomic management in these conditions and the power of the System Dynamics modelling methodology in assisting policy formulation and evaluation are stressed. The key insights delivered by the models discussed indicate that policy targeted at reducing delay lags could be beneficial in alleviating innate tendencies in this economy towards endemic disequilibria in Aggregate Supply and Demand. Moreover, due to the potential for chaos existing in the non-linear dynamic economic relationships inherent in the models, the relevance of policy options based on either extreme Post-Keynesian or Neo-Classical thinking is questioned. Indeed, our Post-Keynesian dynamic models contain non-linear dynamic tendencies, which paradoxically yield policy implications consistent with Neo-Classical thinking.
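To indicate the flavour of the methodology (and not the thesis's quantity-rationed Kyrgyz models themselves), the sketch below simulates a single stock adjusted towards a target through a first-order production delay; the structure and parameters are generic assumptions, but lengthening the delay visibly worsens the simulated disequilibrium, which is the kind of delay-lag effect the policy insight refers to.

```python
# Generic System Dynamics stock-flow sketch: inventory is restocked towards
# a desired level through a first-order production delay, while demand
# drains it. All equations and parameters are illustrative assumptions.

def simulate(delay=4.0, dt=0.25, steps=400):
    inventory, pipeline = 100.0, 0.0          # stock and goods in production
    desired, demand = 100.0, 25.0
    history = []
    for _ in range(steps):
        shipments = min(demand, inventory / dt)              # cannot ship what is not there
        production_start = max(0.0, demand + (desired - inventory) / delay)
        completion = pipeline / delay                        # first-order delay outflow
        pipeline += (production_start - completion) * dt
        inventory += (completion - shipments) * dt
        history.append(inventory)
    return history

trace = simulate(delay=8.0)        # a longer lag produces larger, slower swings
print(min(trace), max(trace))
```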
107.
Bisimulations for concurrency. Castellani, Ilaria, January 1987.
No description available.
108.
An investigation into methods for the evolutionary development of computer-aided design systems. Trafford, David B., January 1985.
A basic requirement of all CAD systems is that the facilities offered remain relevant to the current needs of users. A characteristic of CAD system users is that their requirements continually change or, to be more accurate, evolve as their understanding of the design problem and available technology develops. This trait is exemplified by their inability to articulate requirements, both immediate and future, with any degree of confidence. Industrial experience of using the traditional methods for developing information systems, which are based upon the Linear Life Cycle (LLC) concept, has shown them to be unsuitable for CAD applications. This failure results from the premise that users' requirements may be accurately stated at the start of the cycle and will not change with time. The need for a new development strategy which supports the evolving requirements of CAD system users is therefore evident. This research resulted in the formulation of such a development strategy. It is based upon an evolutionary approach to system development in which the users' requirements are initially satisfied by the design and implementation of a pilot sub-system, which in turn forms the basis for evolution by its incremental modification and/or extension. The success of this approach principally lies in the ability to modify the software as required with the minimum of resources. A major factor determining the degree to which a system may be modified was identified to be its software configuration. A number of design techniques were proposed which contributed to highly flexible configurations, principally through the criteria for functional partitioning, the decoupling of functional modules from data storage and the method of organising the data. A new type of data structure was also devised which enabled new data entities and relationships to be added with no modification to the software structure. The development methods resulting from this research were extensively validated during the design and implementation of a large-scale industrial CAD system.
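A minimal sketch of the kind of data structure described, in which entity types and relationship types are plain run-time names so that new kinds of data can be added without modifying the storage code; the API shown is an assumption for illustration, not the design documented in the thesis.

```python
# Illustrative schema-free store: entities and relationships are plain
# records keyed by type names chosen at run time, so adding a new entity
# type requires no change to this code. Names and methods are hypothetical.

class CadStore:
    def __init__(self):
        self._entities = {}        # id -> (entity_type, attributes)
        self._relations = []       # (relation_type, from_id, to_id)
        self._next_id = 0

    def add_entity(self, entity_type: str, **attributes) -> int:
        self._next_id += 1
        self._entities[self._next_id] = (entity_type, dict(attributes))
        return self._next_id

    def relate(self, relation_type: str, from_id: int, to_id: int) -> None:
        self._relations.append((relation_type, from_id, to_id))

    def related(self, from_id: int, relation_type: str):
        """All entities reached from `from_id` via `relation_type` links."""
        return [self._entities[t] for r, f, t in self._relations
                if r == relation_type and f == from_id]

store = CadStore()
plate = store.add_entity("plate", thickness=3.0)
hole = store.add_entity("hole", diameter=6.5)   # a type unknown when the store was written
store.relate("contains", plate, hole)
print(store.related(plate, "contains"))
```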
109.
Non-linear echo cancellation based on transpose distributed arithmetic adaptive filters. Smith, Mark Jason, January 1987.
No description available.
110.
Memory and optimisation in neural network models. Forrest, B. M., January 1988.
A numerical study of two classes of neural network models is presented. The performance of Ising spin neural networks as content-addressable memories for the storage of bit patterns is analysed. By studying systems of increasing sizes, behaviour consistent with finite-size scaling, characteristic of a first-order phase transition, is shown to be exhibited by the basins of attraction of the stored patterns in the Hopfield model. A local iterative learning algorithm is then developed for these models which is shown to achieve perfect storage of nominated patterns with near-optimal content-addressability. Similar scaling behaviour of the associated basins of attraction is observed. For both this learning algorithm and the Hopfield model, by extrapolating to the thermodynamic limit, estimates are obtained for the critical minimum overlap which an input pattern must have with a stored pattern in order to successfully retrieve it. The role of a neural network as a tool for optimising cost functions of binary-valued variables is also studied. The particular application considered is that of restoring binary images which have become corrupted by noise. Image restorations are achieved by representing the array of pixel intensities as a network of analogue neurons. The performance of the network is shown to compare favourably with two other deterministic methods (a gradient descent on the same cost function and a majority-rule scheme), both in terms of restoring images and in terms of minimising the cost function. All of the computationally intensive simulations exploit the inherent parallelism in the models: both SIMD (the ICL DAP) and MIMD (the Meiko Computing Surface) machines are used.
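For concreteness, a small Python sketch of the underlying content-addressable memory: Hebbian storage of random ±1 patterns in a Hopfield network and asynchronous retrieval from a corrupted probe. The network size, pattern load and learning rule here are arbitrary small-scale assumptions, not those of the finite-size scaling study.

```python
# Minimal Hopfield content-addressable memory with +/-1 "spins": Hebbian
# storage of a few random patterns, then asynchronous updates that relax a
# noisy probe towards the nearest stored pattern.

import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weight matrix with zero diagonal (no self-coupling).
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(probe, sweeps=10):
    state = probe.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):               # asynchronous spin updates
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt 15% of one stored pattern and check the overlap after retrieval.
probe = patterns[0].copy()
flip = rng.choice(N, size=15, replace=False)
probe[flip] *= -1
overlap = recall(probe) @ patterns[0] / N           # 1.0 means perfect recall
print(overlap)
```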