  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
311

The design and implementation of the Durham university seismic processing system

Poulter, Michael John January 1982 (has links)
A NERC research grant in late 1978 permitted the Department of Geological Sciences at the University of Durham to purchase a PDP 11/34 minicomputer system. Together with a PDP 8/e already owned by the Department, this system was intended to fulfil two roles: to provide a computing tool for research into seismic reflection methods, and to provide a system for production processing of seismic reflection data acquired by the Department, mainly from marine geophysical investigations. This thesis describes the design and implementation of systems-level software that allows the computer systems to be used easily as a general research tool, and the design and implementation of a suite of programs providing the basic facilities of a seismic reflection processing system. At the end of this work it was possible to reach a number of conclusions on how both the hardware and software could be developed to provide a more powerful system in the future.
312

Fault tolerance in digital controllers using software techniques

Halse, Robert G. January 1984 (has links)
Microprocessor-based systems for controlling gas supplies require very high levels of reliability for safety reasons. Non-redundant systems are considered inadequate, and an alternative approach is necessary. In digital systems, transient faults are as much as fifty times more common than permanent faults, so mechanisms that allow recovery from transients will provide large improvements in reliability. However, effective design of recovery mechanisms requires an understanding of failure modes. The results of practical interference tests, designed to simulate transient faults, are presented. They show that corruption of the correct flow of program execution is a common failure, and that subsequent instruction fetches can be performed from any memory location. Under these conditions any value of operation code can be interpreted as an instruction, including those undeclared by the manufacturers. Four commonly used microprocessors are investigated to establish the functions of the undeclared codes, and other undeclared operations are revealed. Analyses of the sequence of events following a random jump into the four main memory areas (data, program, unused and input) are presented. Recovery from this type of execution can be achieved by adding restart codes to these areas, so that execution transfers to a recovery routine. The effect of this mechanism on the recovery process is investigated. Finally, some methods of testing systems, to check the levels of reliability improvement obtained by these techniques, are considered.
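The restart-code recovery idea described in this abstract can be sketched in a few lines. This is a hypothetical illustration only: the opcode values, memory layout and recovery address are invented, not taken from the thesis or any real processor.

```python
# Hypothetical sketch of restart-code recovery: unused memory is seeded with a
# one-byte "restart" opcode so that a corrupted program counter eventually
# fetches it and control transfers to a recovery routine. All constants are
# invented for illustration.

RESTART = 0xFF          # assumed one-byte restart/software-interrupt opcode
NOP = 0x00              # assumed one-byte no-operation opcode
RECOVERY_VECTOR = 0x10  # assumed address of the recovery routine

def build_memory(size=256):
    """Model a memory map whose unused area is seeded with restart codes."""
    mem = [NOP] * size
    for addr in range(128, size):   # addresses 128.. model the unused area
        mem[addr] = RESTART
    return mem

def run_from(mem, pc, max_steps=1000):
    """Simulate instruction fetches after a random jump to address `pc`.
    Execution falls through NOPs until a restart code transfers control
    to the recovery vector; returns the final program counter."""
    for _ in range(max_steps):
        if mem[pc % len(mem)] == RESTART:
            return RECOVERY_VECTOR  # control reaches the recovery routine
        pc += 1                     # otherwise fall through to the next fetch
    return pc  # recovery never reached within the step budget
```

With this layout, a random jump to any address eventually reaches the recovery vector, e.g. `run_from(build_memory(), 200)` returns `0x10`.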
313

Simulation of a multiprocessor computer system

Salih, A. M. January 1981 (has links)
The introduction of computers and software engineering in telephone switching systems has created the need for powerful design aids for such complex systems. Among these design aids, simulators, both real-time environment simulators and flat-level simulators, have been found particularly useful in the design and evaluation of stored program controlled switching systems. However, both types of simulator suffer from certain disadvantages. An alternative methodology for the simulation of stored program controlled switching systems is proposed in this research, based on the development of a process-based, multilevel, hierarchically structured software simulator. This methodology eliminates the disadvantages of environment and flat-level simulators. It enables the system to be modelled as a one-to-one transformation that retains the sub-system interfaces, making it easier to see the resemblance between the model and the modelled system and to incorporate design modifications and/or additions in the simulator. The methodology has been applied in building a simulation package for the System X family of exchanges. The Processor Utility Sub-system used to control the exchanges is first simulated, verified and validated. The application sub-system models are then added one level higher, resulting in an open-ended simulator having sub-system models at different levels of detail and capable of simulating any member of the System X family of exchanges. The viability of the methodology is demonstrated by conducting experiments to tune the real-time operating system and by simulating a particular exchange, the Digital Main Network Switching Centre, in order to determine its performance characteristics.
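The process-based simulation style described above can be illustrated with a minimal discrete-event kernel in which each sub-system model is a coroutine that yields the simulated time it wishes to wait. The sub-system name and timings below are invented; this is a sketch of the technique, not the thesis's simulator.

```python
import heapq

# Minimal process-based discrete-event simulator: sub-system models are
# Python generators that yield delays; the kernel resumes each process at
# its wake-up time in simulated-time order.

class Simulator:
    def __init__(self):
        self.now = 0.0
        self._queue = []   # (wake_time, seq, process) event list
        self._seq = 0
        self.log = []

    def add(self, process):
        self._schedule(self.now, process)

    def _schedule(self, t, process):
        heapq.heappush(self._queue, (t, self._seq, process))
        self._seq += 1     # tie-breaker so generators are never compared

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, proc = heapq.heappop(self._queue)
            try:
                delay = next(proc)                  # resume the model
                self._schedule(self.now + delay, proc)
            except StopIteration:
                pass                                # process finished

def call_handler(sim, name, interval, work):
    """Hypothetical exchange sub-system: accept a call, then process it."""
    while True:
        yield interval                              # wait for the next call
        sim.log.append((sim.now, name, "call accepted"))
        yield work                                  # processing time
        sim.log.append((sim.now, name, "call completed"))
```

Because each sub-system is a self-contained process, models can be composed hierarchically by adding further processes, which mirrors the one-to-one, open-ended structure the abstract describes.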
314

The extension and hardware implementation of the comprehensive integrated security system concept

Morrissey, Joseph Patrick January 1995 (has links)
The current strategy in computer networking is to increase the accessibility that legitimate users have to their respective systems and to distribute functionality. This creates a more efficient working environment: users may work from home, and organisations can make better use of their computing power. Unfortunately, a side effect of opening up computer systems and placing them on potentially global networks is that they face increased threats from uncontrolled access points, and from eavesdroppers listening to the data communicated between systems. Along with these increased threats, traditional ones such as disgruntled employees, malicious software and accidental damage must still be countered. A comprehensive integrated security system (CISS) has been developed to provide security within the Open Systems Interconnection (OSI) and Open Distributed Processing (ODP) environments. The research described in this thesis investigates alternative methods for its implementation and its optimisation through partial implementation in hardware and software, and investigates mechanisms to improve its security. A new deployment strategy for CISS is described in which functionality is divided amongst computing platforms of increasing capability within a security domain. Definitions are given of local security units, which provide terminal security; local security servers, which serve the local security units; and domain management centres, which provide security service coordination within a domain. New hardware is detailed that provides RSA and DES functionality and can be connected to Sun Microsystems workstations. The board can be used as a basic building block of CISS, providing fast cryptographic facilities, or in isolation for discrete cryptographic services. Software written for UNIX in C/C++ is described, which provides optimised security mechanisms on computer systems that do not have SBus connectivity.
A new identification/authentication mechanism is investigated that can be added to existing systems, with the potential for extension into a real-time supervision scenario. The mechanism uses keystroke analysis through the application of neural networks and genetic algorithms, and has produced very encouraging results. Finally, a new conceptual model for intrusion detection, capable of dealing with both real-time and historical evaluation, is discussed, which further enhances the CISS concept.
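The keystroke-analysis idea can be sketched as follows. The thesis applies neural networks and genetic algorithms; this simplified stand-in uses a plain mean-latency profile with a distance threshold to show the principle only. All timings and the tolerance value are invented.

```python
# Simplified keystroke-dynamics authentication: enrol a user from several
# typing samples, then accept or reject a new attempt by comparing its
# inter-key latencies against the stored profile.

def latencies(timestamps):
    """Inter-key latencies (ms) from a list of key-press timestamps."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def enrol(samples):
    """Build a reference profile: per-position mean latency over samples."""
    cols = zip(*(latencies(s) for s in samples))
    return [sum(c) / len(c) for c in cols]

def authenticate(profile, attempt, tolerance=30.0):
    """Accept if the mean absolute deviation from the profile is small."""
    obs = latencies(attempt)
    dev = sum(abs(o - p) for o, p in zip(obs, profile)) / len(profile)
    return dev <= tolerance
```

A genuine attempt whose rhythm matches the enrolled profile passes, while a markedly different rhythm is rejected; a neural-network classifier, as used in the thesis, would replace the threshold test.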
315

A distributed security architecture for large scale systems

Shepherd, Simon John January 1992 (has links)
This thesis describes the research leading from the conception, through development, to the practical implementation of a comprehensive security architecture for use within, and as a value-added enhancement to, the ISO Open Systems Interconnection (OSI) model. The Comprehensive Security System (CSS) is arranged basically as an Application Layer service but can allow any of the ISO recommended security facilities to be provided at any layer of the model. It is suitable as an 'add-on' service to existing arrangements or can be fully integrated into new applications. For large scale, distributed processing operations, a network of security management centres (SMCs) is suggested that can help to ensure that system misuse is minimised and that flexible operation is provided in an efficient manner. The background to the OSI standards is covered in detail, followed by an introduction to security in open systems. A survey of existing techniques in formal analysis and verification is then presented. The architecture of the CSS is described in terms of a conceptual model using agents and protocols, followed by an extension of the CSS concept to a large scale network controlled by SMCs. A new approach to formal security analysis is described which is based on two main methodologies. Firstly, every function within the system is built from layers of provably secure sequences of finite state machines, using a recursive function to monitor and constrain the system to the desired state at all times. Secondly, the correctness of the protocols generated by the sequences to exchange security information and control data between agents in a distributed environment is analysed in terms of a modified temporal Hoare logic. This is based on ideas concerning the validity of beliefs about the global state of a system as a result of actions performed by entities within the system, including the notion of timeliness.
The two fundamental problems in number theory upon which the assumptions about the security of the finite state machine model rest are described, together with a comprehensive survey of the very latest progress in this area. Having assumed that the two problems will remain computationally intractable in the foreseeable future, the method is then applied to the formal analysis of some of the components of the Comprehensive Security System. A practical implementation of the CSS has been achieved as a demonstration system for a network of IBM Personal Computers connected via an Ethernet LAN, which fully meets the aims and objectives set out in Chapter 1. This implementation is described, and finally some comments are made on the possible future of research into security aspects of distributed systems.
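The monitor-and-constrain idea in the finite state machine methodology can be illustrated with a toy allow-list automaton. The states, events and transition table below are invented for illustration and are not the thesis's formal model.

```python
# Toy state-machine monitor: every transition must appear in an allow-list;
# anything else is flagged and the system is pinned to a known-safe state.

ALLOWED = {
    ("idle", "authenticate"): "authenticated",
    ("authenticated", "request_key"): "key_issued",
    ("key_issued", "release"): "idle",
}
SAFE_STATE = "idle"

def monitored_run(events, state=SAFE_STATE):
    """Drive the FSM; any transition outside ALLOWED resets to SAFE_STATE."""
    violations = []
    for ev in events:
        nxt = ALLOWED.get((state, ev))
        if nxt is None:
            violations.append((state, ev))   # record the attempted breach
            state = SAFE_STATE               # constrain to the safe state
        else:
            state = nxt
    return state, violations
```

The point of the layered construction in the thesis is that each such machine is provably secure, so composing them preserves the security property; the sketch shows only the run-time monitoring half of that argument.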
316

An adaptive partial response data channel for hard disk magnetic recording

Darragh, Neil January 1994 (has links)
An adaptive data channel is proposed which is better able to deal with the variations in performance typically found in the recording components of a hard disk drive. Three such categories of variation were investigated in order to gain an understanding of their relative and absolute significance: variations over radius, along the track length, and between different head/media pairs. The variations were characterised in terms of their effects on the step-response pulse width and signal-to-noise ratio. It was found that in each of the categories investigated, significant variations could be found in both longitudinal and perpendicular recording systems which, with the exception of radial variations, were nondeterministic over different head/media pairs but were deterministic for any particular head/media pair characterised. Conventional data channel design assumes such variations are non-deterministic and is therefore designed to provide the minimum error rate performance for the worst case expected recording performance within the range of accepted manufacturing tolerance. The proposed adaptive channel works on the principle that once a particular set of recording components are assembled into the disk drive, such variations become deterministic if they are able to be characterised. Such ability is facilitated by the recent introduction of partial response signalling to hard disk magnetic recording, which brings with it the discrete-time sampler and the ability of the microprocessor to analyse signals digitally much more easily than analogue domain alternatives. Simple methods of measuring the step-response pulse width and signal-to-noise ratio with the partial response channel's electronic components are presented. The expected error rate as a function of recording density and signal-to-noise ratio is derived experimentally for the PR4 and EPR4 classes of partial response.
On the basis of this information and the recording performance it has measured, the adaptive channel is able to implement either PR4 or EPR4 signalling and at any data rate. The capacity advantage over the non-adaptive approach is investigated for the variables previously identified. It is concluded on the basis of this investigation that the proposed adaptive channel could provide significant manufacturing yield and capacity advantages over the non-adaptive approach for a modest increase in electronic complexity.
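The selection step of such an adaptive channel can be sketched as follows. The error-rate model (a Gaussian Q-function applied to an SNR margin) and every constant below are assumptions for illustration; the thesis derives its error-rate curves experimentally, and the real decision would also set the data rate.

```python
import math

# Sketch of adaptive channel-class selection: given a measured SNR and a
# target error rate, prefer the higher-density EPR4 class when its predicted
# error rate meets the target, otherwise fall back to PR4. In this assumed
# model EPR4 needs more SNR headroom (a larger detection margin) than PR4.

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def error_rate(snr_db, margin_db):
    """Assumed BER model: Q of the linear detection margin."""
    return q_func(10.0 ** ((snr_db - margin_db) / 20.0))

def choose_channel(snr_db, target_ber=1e-9,
                   pr4_margin_db=11.0, epr4_margin_db=14.0):
    """Pick the densest class whose predicted error rate meets the target."""
    if error_rate(snr_db, epr4_margin_db) <= target_ber:
        return "EPR4"
    if error_rate(snr_db, pr4_margin_db) <= target_ber:
        return "PR4"
    return "marginal"   # neither class meets the target at this SNR
```

Because the drive measures its own SNR per head/media pair and per radius, this decision can be made after assembly, which is the deterministic-once-characterised principle the abstract describes.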
317

Computer network analysis and optimisation

Ray, Gavin Peter January 1993 (has links)
This thesis presents a study and analysis of the major influences on network cost and their related performance. New methods have been devised to find solutions to network optimisation problems particular to the AT&T ISTEL networks in Europe and these are presented together with examples of their successful commercial application. Network performance is seen by the user in terms of network availability and traffic delay times. The network performance is influenced by many parameters, the dominating influences typically being the number of users accessing the network, the type of traffic demands they place upon it and the particular network configuration itself. The number of possible network configurations available to a network designer is vast if the full range of currently available equipment is taken into account. The aim of this research has been to assist in the selection of most suitable network designs for optimum performance and cost. This thesis looks at the current differing network technologies, their performance characteristics and the issues pertinent to any network design and optimisation procedures. A distinction is made between the network equipment providing user 'access' and that which constitutes the cross country, or 'core', data transport medium. This partitioning of the problem is exploited with the analysis concentrating on each section separately. The access side of the AT&T ISTEL - UK network is used as a basis for an analysis of the general access network. The aim is to allow network providers to analyse the root cause of excessive delay problems and find where small adjustments to access configurations might lead to real performance improvements from a user point of view. A method is developed to allow statistical estimates of performance and quality of service for typical access network configurations. From this a general method for the optimisation of cost expenditure and performance improvement is proposed.
The optimisation of both circuit switched and packet switched computer networks is shown to be difficult and is normally tackled by the use of complex procedures on mainframe computers. The new work carried out in this study takes a fresh look at the basic properties of networks in order to develop a new heuristic method for the design and optimisation of circuit switched core networks on a personal computer platform. A fully functional design system was developed that implements time division multiplexed core network design. The system uses both a new heuristic method for improving the quality of the designs and a new 'speed up' algorithm for reducing times to find feasible routes, thereby dramatically improving overall design times. The completed system has since been used extensively to assist in the design of commercial networks across Europe.
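A heuristic of the general kind described above can be sketched as a greedy design pass: route the largest demands first over the cheapest path that still has spare capacity on every link. The topology, costs and capacities below are invented; the thesis's actual heuristic and its 'speed up' algorithm are not reproduced here.

```python
import heapq

# Greedy core-network routing sketch: Dijkstra restricted to links with
# enough residual capacity, applied to demands in descending size order.

def cheapest_feasible_path(links, capacity, src, dst, demand):
    """Cheapest src->dst path using only links with residual >= demand."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                              # stale queue entry
        for v, cost in links.get(u, []):
            if capacity[tuple(sorted((u, v)))] < demand:
                continue                          # link too full
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def route_demands(links, capacity, demands):
    """Greedy heuristic: place the largest demands first; returns the
    chosen path (or None) per demand, in descending-size order."""
    routes = {}
    for i, (src, dst, size) in enumerate(sorted(demands, key=lambda d: -d[2])):
        path = cheapest_feasible_path(links, capacity, src, dst, size)
        routes[i] = path
        if path:
            for a, b in zip(path, path[1:]):
                capacity[tuple(sorted((a, b)))] -= size
    return routes
```

Ordering demands largest-first is a common bin-packing-style heuristic: small demands can still squeeze into leftover capacity, whereas large demands placed late often find no feasible route.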
318

Investigation into submicron track positioning and following technology for computer magnetic disks

Tan, Baolin January 1998 (has links)
In the recent past, magnetic heads with submicron trackwidths have been developed in order to increase the track density of computer magnetic disks; however, a servo control system for a submicron-trackwidth head has not previously been investigated. The main objectives of this work are to investigate and develop a new servo pattern recording model, and a new position sensor, actuator and servo controller for submicron track positioning and following on a computer hard disk with ultrahigh track density, to increase its capacity. In the position sensor study, new modes of reading and writing servo information for longitudinal and perpendicular magnetic recording have been developed. The read/write processes in the model have been studied, including the recording trackwidth, the bit length, the length and shape of the transition, the relationship between the length of the MR head and the recording wavelength, and the S/N of the readout. It has also been investigated how the servo patterns are magnetized along the radial direction by a transverse writing head aligned at right angles to the normal data head, and how the servo signals are reproduced by a transverse MR head with its stripe and pole gap tangential to the circumferential direction. It has been studied how the servo signal amplitude and linearity are affected by the length of the MR sensor and the distance between the shields of the head. Parameters such as the spacing and length of the servo-pattern elements have been optimised so as to achieve minimum jitter and maximum utilisation of the surface of the disk. The factors affecting the S/N of the position sensor (e.g. the skew angle of the head) have been analysed and demonstrated. As a further development, a buried servo method has been studied which uses a servo layer underneath the data layer, so that a continuous servo signal is obtained. A new piezo-electric bimorph actuator has been demonstrated; this can be used as a fine actuator in hard disk recording.
The linearity and delay of its response are improved by designing a drive circuit and selecting the dimensions of the bimorph element. A dual-stage actuator has been developed, and a novel integrated fine actuator using a piezo-electric bimorph has also been designed. A new type of construction for a magnetic head and actuator has been studied. A servo controller for the dual-stage actuator has been developed: a wholly digital controller for positioning and following has been designed and its performance simulated in MATLAB. A submicron servo track writer and a laser system for measuring the dynamic micro-movement of a magnetic head have been specially developed for this project. Finally, track positioning and following on 0.7 µm tracks with 7% trackwidth rms runout has been demonstrated using the new servo method while the disk was rotating at low speed. This is one of the best results in this field in the world.
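The dual-stage principle, a coarse actuator absorbing large position errors while a limited-stroke fine actuator corrects the residual, can be sketched in discrete time. The plant model, gains and stroke limit below are invented for illustration and do not reproduce the thesis's controller design.

```python
# Toy digital track-following loop: per sample, a coarse actuator makes a
# slow proportional move and a stroke-limited fine (piezo-bimorph-like)
# actuator corrects part of the remaining error.

FINE_STROKE = 1.0   # assumed +/- stroke of the fine actuator (track widths)

def dual_stage_step(position, target, coarse_gain=0.5, fine_gain=0.8):
    """One sample of a simple proportional dual-stage correction."""
    error = target - position
    coarse = coarse_gain * error                       # slow, large-range move
    residual = error - coarse                          # what the coarse stage leaves
    fine = max(-FINE_STROKE, min(FINE_STROKE, fine_gain * residual))
    return position + coarse + fine

def settle(position, target, samples=50):
    """Iterate the loop; return the trajectory of head positions."""
    traj = [position]
    for _ in range(samples):
        position = dual_stage_step(position, target)
        traj.append(position)
    return traj
```

During a long seek the fine stage saturates and the coarse stage dominates; near the track centre the fine stage takes over, which is why the combination settles faster than either stage alone. A real controller would add integral and derivative terms and model the actuator dynamics.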
319

Initial design studies for a high-speed distributed prolog database machine

Naoom, Mazin Fawzi January 1986 (has links)
No description available.
320

Characterisation of planar resonators for use in circulator hardware

Nisbet, W. T. January 1980 (has links)
No description available.
