1

A permittivity measurement system for high frequency laboratories

Marais, Johannes Izak Frederik
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2006. / The open-ended coaxial probe is revisited as a broadband measurement system for general high-frequency permittivity measurements. Three coaxial probes were developed that are suited to the measurement of both liquids and solids. The components of a permittivity measurement system were investigated and improvements were made to the coaxial probe where needed. This includes the development of a full-wave code with greatly reduced calculation time and no sacrifice in accuracy; the code allows measurements to be performed in a high-frequency laboratory and the permittivity extracted without appreciable delay. A capacitance model that better describes the impedance of an open-ended coaxial line is also suggested, and can be used for real-time permittivity extraction over a limited frequency range. Calibration formed a vital part of the project, and considerable time was spent developing TRL and SOLT calibration sets for the coaxial probe geometry. The combination of the TRL and SOLT standards also allows measurement of the residual errors after calibration, and is used in an uncertainty analysis of the extracted permittivity. Well-known materials such as PTFE, PVC, methanol and water were measured to test the probes. The measured dielectric constants are all within 3% of values quoted in the literature, and the loss terms of the samples are in good agreement with expected values.
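A minimal sketch of the kind of capacitance-model extraction the abstract describes: the aperture admittance of an open-ended coaxial probe is approximated as Y(ω) = jω(ε_r·C0 + Cf), which inverts in closed form. The constants C0, Cf and Z0 below are illustrative placeholders, not the fitted values of the probes in the thesis.

```python
import math

# Capacitance model of an open-ended coaxial probe (illustrative values only):
# aperture admittance Y(w) = j*w*(eps_r*C0 + Cf).
C0 = 0.02e-12   # F, fringing capacitance outside the probe (assumed)
Cf = 0.01e-12   # F, capacitance inside the line filling (assumed)
Z0 = 50.0       # ohm, characteristic impedance of the feed line

def gamma_from_eps(eps_r, f_hz):
    """Forward model: aperture reflection coefficient for a given permittivity."""
    w = 2 * math.pi * f_hz
    y = 1j * w * (eps_r * C0 + Cf)
    y0 = 1 / Z0
    return (y0 - y) / (y0 + y)

def eps_from_gamma(gamma, f_hz):
    """Closed-form inversion -- cheap enough for real-time extraction."""
    w = 2 * math.pi * f_hz
    y = (1 / Z0) * (1 - gamma) / (1 + gamma)
    return (y / (1j * w) - Cf) / C0
```

Because the inversion is algebraically exact, a forward-then-inverse round trip recovers the permittivity, including a complex (lossy) value such as that of water.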
2

Parameter extraction of superconducting integrated circuits

Lotter, Pierre
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2006. / Integrated circuits are expensive to manufacture, and it is important to verify the correct operation of a circuit before fabrication. Efficient yet accurate parameter extraction of post-layout designs is required for estimating circuit success rates. This thesis discusses electrical netlist and fast parameter extraction techniques suited to both intra- and inter-gate connections. This includes the use of extraction windows and look-up tables (LUTs) for accurate inductance and capacitance estimation. These techniques can readily be implemented in automated layout software where fast parameter extraction is required for timing analysis and gate placement.
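The LUT idea above can be sketched as a bilinear interpolation over a small pre-computed grid: the solver is run once per grid point offline, and extraction then reduces to a table look-up. The grid coordinates and inductance values below are made-up placeholders, not field-solver data for any real process.

```python
from bisect import bisect_right

# Illustrative LUT for inductance of a trace vs. (width, length).
# All numbers are assumed for the sketch.
widths  = [1.0, 2.0, 4.0]        # um
lengths = [10.0, 20.0, 40.0]     # um
# L_table[i][j]: inductance in pH at widths[i], lengths[j]
L_table = [[4.0, 9.0, 20.0],
           [3.5, 8.0, 18.0],
           [3.0, 7.0, 16.0]]

def lut_inductance(w, l):
    """Bilinear interpolation in the LUT -- a table look-up replaces a field solve."""
    i = min(max(bisect_right(widths, w) - 1, 0), len(widths) - 2)
    j = min(max(bisect_right(lengths, l) - 1, 0), len(lengths) - 2)
    tw = (w - widths[i]) / (widths[i + 1] - widths[i])
    tl = (l - lengths[j]) / (lengths[j + 1] - lengths[j])
    top = L_table[i][j] * (1 - tl) + L_table[i][j + 1] * tl
    bot = L_table[i + 1][j] * (1 - tl) + L_table[i + 1][j + 1] * tl
    return top * (1 - tw) + bot * tw
```

Grid points are returned exactly; off-grid queries are blended from the four surrounding entries, which is what makes the approach fast enough for placement and timing loops.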
3

Public key cryptosystems : theory, application and implementation

McAuley, Anthony Joseph January 1985
The determination of an individual's right to privacy is mainly a nontechnical matter, but the pragmatics of providing it is the central concern of the cryptographer. This thesis has sought answers to some of the outstanding issues in cryptography, in particular some of the theoretical, application and implementation problems associated with a Public Key Cryptosystem (PKC). The Trapdoor Knapsack (TK) PKC is capable of fast throughput but suffers from serious disadvantages. Chapter two describes a more general approach to the TK-PKC, showing how the public key size can be significantly reduced. To overcome the security limitations, a new trapdoor, based on transformations between the radix and residue number systems, is described in chapter three. Chapter four considers how cryptography can best be applied to multi-addressed packets of information. We show how the security or communication network structure can be used to advantage, then propose a new broadcast cryptosystem which is more generally applicable. Copyright is traditionally used to protect the publisher from the pirate; chapter five shows how to protect information when it is in easily copyable digital format. Chapter six describes the potential and pitfalls of VLSI, followed in chapter seven by a model for comparing the cost and performance of VLSI architectures. Chapter eight deals with novel architectures for all the basic arithmetic operations; these architectures provide a basic vocabulary of low-complexity VLSI arithmetic structures for a wide range of applications. The design of a VLSI device, the Advanced Cipher Processor (ACP), to implement the RSA algorithm is described in chapter nine. Its heart is the modular exponentiation unit, a synthesis of the architectures in chapter eight. The ACP is capable of a throughput of 50 000 bits per second.
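The modular exponentiation at the heart of the ACP is, in software terms, the classic square-and-multiply algorithm. The sketch below shows it together with a textbook-sized RSA round trip; the primes and exponent are the usual toy teaching values, unrelated to the thesis hardware.

```python
def modexp(base, exp, mod):
    """Right-to-left square-and-multiply modular exponentiation."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                      # multiply step when the current bit is set
            result = (result * base) % mod
        base = (base * base) % mod       # square step, once per exponent bit
        exp >>= 1
    return result

# Toy RSA round trip (textbook numbers, illustrative only):
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))        # private exponent (Python 3.8+)
m = 42
c = modexp(m, e, n)                      # encrypt
assert modexp(c, d, n) == m              # decrypt recovers the message
```

A hardware unit like the ACP performs exactly this bit-serial square/multiply loop, which is why the number of modular multiplications, not the key size alone, sets the throughput.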
4

Analysis and synthesis of digital active networks

Coupe, Francis Geoffrey Armstrong January 1979
The analysis of digital active networks is developed in this thesis, starting from the definitions of digital amplifiers and digital amplifier arrays and concluding with the presentation of general analysis techniques for N-port digital active networks. The analysis techniques are then tested by comparing the results of practical experiments with numerical evaluations of the derived transfer functions using a computer. The basic techniques necessary for the synthesis of digital active networks are described with an example, and the thesis is concluded with a discussion of the advantages of digital active networks over their analogue equivalents.
5

Nonlinear control of an industrial robot

Gilbert, James Michael January 1989
The precise control of a robot manipulator travelling at high speed constitutes a major research challenge. This is due to the nonlinear nature of the dynamics of the arm, which makes many traditional, linear control methodologies inappropriate. An alternative approach is to adopt controllers which are themselves nonlinear. Variable structure control systems provide the possibility of imposing dynamic characteristics upon a poorly modelled and time-varying system by means of a discontinuous control signal. The basic algorithm overcomes some nonlinear effects but is sensitive to Coulomb friction and actuator saturation. By augmenting this controller with compensation terms, these effects may largely be eliminated. In order to investigate these ideas, a number of variable structure control systems were applied to a low-cost industrial robot having a highly nonlinear and flexible drive system. By a combination of hardware enhancements and control system developments, an improvement in speed by a factor of approximately three was achieved while the trajectory tracking accuracy was improved by a factor of ten, compared with the manufacturer's control system. In order to achieve these improvements, it was necessary to develop a dynamic model of the arm including the effects of drive system flexibility and nonlinearities. The development of this model is reported in this thesis, as is work carried out on a comparison of numerical algorithms for the solution of differential equations with discontinuous right-hand sides, required in the computer-aided design of variable structure control systems.
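A minimal sketch of the variable structure idea described above, applied to a double-integrator joint model x'' = u: the control drives the state onto a sliding surface s = c·e + ė, and a saturation "boundary layer" of width phi softens the discontinuity to reduce chattering. The gains are illustrative, not tuned for the robot in the thesis.

```python
# Sliding-mode regulation of a double integrator x'' = u (unit inertia).
# All gains below are assumed values for the sketch.
def sat(v):
    """Saturation replaces sign(s) inside a thin boundary layer."""
    return max(-1.0, min(1.0, v))

def simulate(x0=1.0, v0=0.0, c=2.0, K=5.0, phi=0.05, dt=1e-3, steps=10000):
    x, v = x0, v0
    for _ in range(steps):
        s = c * x + v             # sliding surface s = c*e + de/dt
        u = -K * sat(s / phi)     # discontinuous law, smoothed near s = 0
        v += u * dt               # Euler step of x'' = u
        x += v * dt
    return x, v
```

Once the state reaches the surface, it slides along s ≈ 0, so the error decays like exp(-c·t) regardless of the (bounded) model uncertainty — the property that makes the approach attractive for poorly modelled drives.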
6

Robotic workcell analysis and object level programming

Monkman, Gareth John January 1990
For many years robots have been programmed at manipulator or joint level without any real thought to the implementation of sensing until errors occur during program execution. For the control of complex, or multiple robot workcells, programming must be carried out at a higher level, taking into account the possibility of error occurrence. This requires the integration of decision information based on sensory data. Aspects of robotic workcell control are explored during this work with the object of integrating the results of sensor outputs to facilitate error recovery for the purposes of achieving completely autonomous operation. Network theory is used for the development of analysis techniques based on stochastic data. Object level programming is implemented using Markov chain theory to provide fully sensor integrated robot workcell control.
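The Markov-chain view of a sensed workcell can be sketched as an absorbing chain: from each transient state the task either progresses, raises a recoverable error, or aborts, and the probability of eventual success follows from the transition probabilities. The numbers below are assumed for illustration, not data from the thesis.

```python
# Absorbing Markov chain for one workcell operation (probabilities assumed).
# "done" and "failed" are absorbing states.
P = {
    "exec":  {"exec": 0.0,  "error": 0.10, "done": 0.88, "failed": 0.02},
    "error": {"exec": 0.70, "error": 0.0,  "done": 0.0,  "failed": 0.30},
}

def success_probability(state, steps=200):
    """Probability of absorbing in 'done', computed by value iteration."""
    p = {"exec": 0.0, "error": 0.0, "done": 1.0, "failed": 0.0}
    for _ in range(steps):
        new = dict(p)
        for s, row in P.items():
            new[s] = sum(prob * p[t] for t, prob in row.items())
        p = new
    return p[state]
```

With these numbers the fixed point is p_exec = 0.88 / (1 - 0.10·0.70) ≈ 0.946, which is the kind of quantity an object-level planner can use to decide whether a recovery branch is worth attempting.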
7

Development of a heterogeneous microwave network fade simulation tool applicable to networks that span Europe

Basarudin, Hafiz January 2012
Radio communication systems operating at microwave frequencies are strongly attenuated by hydrometeors such as rain and wet snow (sleet). Hydrometeor attenuation dominates the dynamic fading of most types of radio links operating above 10 GHz, especially high-capacity, fixed, terrestrial and Earth-space links. The International Telecommunication Union Radiocommunication Sector (ITU-R) provides a set of internationally recognized models to predict annual fade distributions for a wide variety of individual radio links. However, these models are not sufficient for the design and optimisation of networks, even ones as simple as two links. There are considerable potential gains to be achieved from the optimized design of real-time or predictive Dynamic Resource Management systems, and the development of these systems requires a joint channel simulation tool applicable to arbitrary, heterogeneous networks. This thesis describes the development of a network fade simulation tool, known as GINSIM, which can simulate joint dynamic fade time-series on heterogeneous networks of arbitrary geometry, spanning Europe. GINSIM takes as input meteorological and topological data from a variety of sources and numerically calculates the joint fading on all links in a specified network. ITU-R models are used to transform rain rate into specific attenuation and to estimate the specific attenuation amplification due to non-liquid hydrometeors. The resulting simulation tool has been verified against ITU-R models of average annual fade distributions, fade slope and fade duration distributions in the southern UK. Validation has also been performed against measured terrestrial and Earth-space link data acquired in the southern UK and Scotland.
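The rain-rate-to-attenuation step mentioned above follows the power-law form used by ITU-R P.838, γ_R = k·R^α (dB/km). The k and α values below are rough placeholder coefficients; a real simulator interpolates the frequency- and polarisation-dependent tables in P.838.

```python
# Power-law specific attenuation in the style of ITU-R P.838.
# k and alpha are assumed, roughly representative of the ~20 GHz region.
k, alpha = 0.075, 1.10

def specific_attenuation(rain_rate_mm_h):
    """gamma_R = k * R**alpha, in dB/km, for rain rate R in mm/h."""
    return k * rain_rate_mm_h ** alpha

def path_attenuation(rain_rate_mm_h, path_km, reduction=1.0):
    """Total link fade: specific attenuation times an effective path length.

    'reduction' stands in for a path-length reduction factor accounting for
    the limited horizontal extent of intense rain cells.
    """
    return specific_attenuation(rain_rate_mm_h) * path_km * reduction
```

A network tool evaluates this per link against a shared, spatially correlated rain field, which is what turns independent per-link statistics into the joint fade time-series the thesis targets.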
8

Numerical modelling of the deformation of elastic material by the TLM method

Langley, Philip January 1997
The transmission line matrix (TLM) method is a numerical tool for the solution of wave and diffusion type equations. The application of TLM to physical phenomena such as heat flow and electromagnetic wave propagation is well established. A previous attempt to apply TLM models to the area of elastic wave propagation and elastic deformation had limited success. The work of this thesis extends the application base of TLM to the area of elastic deformation modelling and validates the model for several two-dimensional situations. In doing this it has been necessary to develop new nodal structures which facilitate the scaling of differential coefficients and incorporation of cross derivatives. Nodal structures which allow the modelling of two and three-dimensional, and anisotropic, elastic deformation are described. The technique is demonstrated by applying the elastic deformation model to several elastic problems. These include two-dimensional isotropic models and models of anisotropic elastic deformation. Provision is also made for the application of various boundary conditions which include displacement, force and frictional boundaries.
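The wave-type TLM scheme the thesis builds on can be illustrated in its simplest one-dimensional form: voltage pulses travel one node per timestep along link lines and reflect with coefficient -1 at short-circuit boundaries. The grid size and excitation below are arbitrary; the elastic-deformation nodes in the thesis generalise this scatter/connect cycle.

```python
# Minimal 1-D lossless TLM: counter-propagating pulse streams on N links.
N = 20
right = [0.0] * N     # pulses travelling to the right
left = [0.0] * N      # pulses travelling to the left
right[0] = 1.0        # inject a unit pulse at the left end

def step():
    """One TLM timestep: propagate pulses and reflect at short-circuit ends."""
    global right, left
    nr = [0.0] * N
    nl = [0.0] * N
    for i in range(N):
        if i + 1 < N:
            nr[i + 1] += right[i]   # right-going pulse moves one node
        else:
            nl[i] += -right[i]      # reflection coefficient -1 at right end
        if i - 1 >= 0:
            nl[i - 1] += left[i]    # left-going pulse moves one node
        else:
            nr[i] += -left[i]       # reflection coefficient -1 at left end
    right, left = nr, nl
```

The pulse crosses the mesh, reflects off each end with a sign flip, and returns to its starting state after two full traversals, with total pulse energy conserved — the discrete analogue of a wave on a shorted line.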
9

Quality of service for voice over next generation networks.

Perumal, Eugene Govindhren. January 2004
The global communications transformation is currently in progress. Packet-switched technology has moved from data-only applications into the heart of the network to take up the functions of traditional circuit-switched equipment. Voice over ATM (VoATM) and voice over IP (VoIP) are the two main alternatives for carrying voice packets over NGNs. ATM offers the advantage of its built-in quality of service mechanisms; IP, on the other hand, could not provide QoS guarantees in its traditional form, and IP QoS mechanisms evolved only in recent years. There are currently no QoS differences between next generation networks based on VoATM or VoIP; however, non-QoS considerations favour VoIP over VoATM, giving VoIP the leading edge between the two voice-over-packet technologies. In this thesis the E-Model was optimized and used to study the effects of delay, utilization and coder design on voice quality. The optimization was used to choose a coder and utilization levels under given conditions. An optimization algorithm formed from the E-Model was used to assist with the selection of parameters important to VoIP networks, including link utilization, voice coder and allowable packet loss. This research also shows that different utilization, voice coder and packet loss levels are optimal in different situations. Remote and core VoIP network simulation models were developed and used to study the complex queuing issues surrounding VoIP networks; the models examine some of the variables that need to be controlled in order to minimize delay. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2004.
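The E-Model used above works by subtracting impairment terms from a default transmission rating and mapping the resulting R-factor to an estimated MOS, in the style of ITU-T G.107. The sketch takes the impairment terms as inputs rather than computing them from delay and loss as the full model does.

```python
# Simplified E-Model sketch (ITU-T G.107 style). Impairments are inputs here.
R0 = 93.2   # default basic transmission rating in G.107

def r_factor(Is=0.0, Id=0.0, Ie_eff=0.0, A=0.0):
    """R = R0 - Is - Id - Ie_eff + A (simultaneous, delay, equipment, advantage)."""
    return R0 - Is - Id - Ie_eff + A

def mos_from_r(R):
    """Standard R-to-MOS mapping used with the E-Model."""
    if R < 0:
        return 1.0
    if R > 100:
        return 4.5
    return 1 + 0.035 * R + 7e-6 * R * (R - 60) * (100 - R)
```

With no impairments, R = 93.2 maps to a MOS of about 4.41; raising the delay impairment Id (e.g. through higher link utilization and queuing delay) pushes R and the MOS down, which is exactly the trade-off the optimization in the thesis explores.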
10

On-line estimation approaches to fault-tolerant control of uncertain systems

Klinkhieo, Supat January 2009
This thesis is concerned with fault estimation in Fault-Tolerant Control (FTC) and as such involves the joint problem of on-line estimation within an adaptive control system. The faults that are considered are significant uncertainties affecting the control variables of the process, and their estimates are used in an adaptive control compensation mechanism. The approach taken involves active FTC, as the faults can be considered as uncertainties affecting the control system. The engineering (application domain) challenges that are addressed are: (1) on-line model-based fault estimation and compensation as an FTC problem, for systems with large but bounded fault magnitudes and for which the faults can be considered as a special form of dynamic uncertainty; (2) fault-tolerance in the distributed control of uncertain interconnected systems. The thesis also describes how challenge (1) can be used in the distributed control problem of challenge (2). The basic principle adopted throughout the work is that the controller has two components, one involving the nominal control action and the second acting as an adaptive compensation for significant uncertainties and fault effects. The fault effects are a form of uncertainty which is considered too large for the application of passive FTC methods. The thesis considers several approaches to robust control and estimation: augmented state observer (ASO); sliding mode control (SMC); sliding mode fault estimation via Sliding Mode Observer (SMO); linear parameter-varying (LPV) control; two-level distributed control with learning coordination.
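The augmented state observer (ASO) idea listed above can be sketched on a scalar plant: a constant actuator fault f is treated as an extra state with dynamics f' = 0, and a Luenberger observer estimates [x, f] jointly. The plant parameters, gains and fault size below are illustrative.

```python
# Scalar ASO sketch: estimate a constant additive actuator fault f.
# Plant: x_{k+1} = a*x_k + b*(u_k + f); augmented state is [x, f].
a, b = 0.9, 1.0
L1, L2 = 0.9, 0.3      # observer gains (assumed, chosen for stable error dynamics)

def run(steps=200, f=0.5, u=0.0):
    x = 0.0            # true plant state
    xh, fh = 0.0, 0.0  # observer estimates of x and f
    for _ in range(steps):
        e = x - xh                  # output injection uses the measured x
        xh, fh = (a * xh + b * (u + fh) + L1 * e,   # predict + correct state
                  fh + L2 * e)                      # integrate fault estimate
        x = a * x + b * (u + f)     # true plant with the hidden fault
    return xh, fh
```

The estimation error obeys a linear recursion with system matrix [[a-L1, b], [-L2, 1]], which these gains make stable, so the fault estimate converges to f and can then be fed back as the adaptive compensation term the thesis describes.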
