151 |
A study of low force fabric characteristics and vibrational behaviour for automated garment handlingPollet, Didier Michel January 1998 (has links)
One of the fundamental concepts in automated garment assembly is that the orientation of a fabric panel should never be lost. However, if a panel does become distorted, several techniques, such as vision, air flotation tables, and vibratory conveyors, are available to restore the orientation. This thesis has investigated the behaviour of a fabric panel on a vibratory table. Several table parameters, such as amplitude of vibration, frequency and angle of inclination, together with some important fabric properties such as friction and compressibility, are required to understand the behaviour. However, most work on friction in textiles considers fibre-fibre or fabric-fabric friction, which is not appropriate here, and so the low-force frictional properties between unloaded fabric and engineering surfaces (i.e., aluminium, Formica and rubber) have been studied. The influence of several experimental variables on friction is demonstrated, in particular the effect of humidity and velocity. Further, an in-depth study is made of the stick-slip of fabric panels, wherein a novel measuring technique is introduced. An estimate of the damping, which is required to model the fabric, has been obtained from an in-plane vibration test. The second significant fabric property to be studied is compression, both static and impact. Again, only low-force compression tests are carried out, since these are the typical forces experienced by fabrics on a vibrating table. The static compressibility of knitted and woven materials is verified with van Wyk's equation, which gives a near indistinguishable fit with the experimental data.
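The static compression check mentioned above can be sketched numerically. The following is a minimal illustration of fitting van Wyk's inverse-cube pressure-volume relation to low-force compression data; the thickness and pressure values and the lumped constant k are hypothetical, not the thesis's measurements.

```python
import numpy as np

def van_wyk_pressure(v, v0, k):
    """van Wyk's relation: the compressive pressure on a fibre mass
    varies with the inverse cube of its volume,
    P = k * (1/v**3 - 1/v0**3), where v0 is the volume (here,
    thickness) at zero pressure and k lumps fibre modulus, mass and
    density into one constant."""
    return k * (1.0 / v**3 - 1.0 / v0**3)

# Hypothetical low-force thickness/pressure data (NOT thesis measurements):
thickness = np.array([2.0, 1.8, 1.6, 1.4, 1.2])     # mm
pressure = np.array([0.0, 0.37, 1.05, 2.26, 4.45])  # kPa

# The model is linear in k, so a least-squares slope through the origin fits it:
v0 = thickness[0]
x = 1.0 / thickness**3 - 1.0 / v0**3
k = float(x @ pressure / (x @ x))
print(f"fitted k = {k:.3f} kPa*mm^3")
```

A good fit of this one-parameter model to measured data is exactly the kind of agreement the abstract reports for knitted and woven fabrics.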
|
152 |
Time domain threshold crossing for signals in noiseAl-Jajjoka, Sam Nooh K. January 1995 (has links)
This work investigates the discrimination of times between threshold crossings for deterministic periodic signals with added band-limited noise. The methods cover very low signal-to-noise ratios (one or less). Investigation has concentrated on the theory of double threshold crossings, with special care taken over the effects of correlations in the noise on the probability of detection of double crossings. A computer program has been written to evaluate these probabilities for a wide range of signal-to-noise ratios, a wide range of signal-to-bandwidth ratios, and a range of times between crossings of up to two signal periods. Correlations due to the extreme cases of a brick-wall filter and a second-order Butterworth filter have been included; other filters can easily be included in the program. The method is simulated and demonstrated by implementation on a digital signal processor (DSP) using a TMS32020. Results from the DSP technique are in agreement with the theoretical evaluations. Probability results could be used to determine optimum time thresholds and windows for signal detection and frequency discrimination, to determine the signal length needed for adequate discrimination, and to evaluate channel capacities. The ability to treat high noise, including the exact effects of time correlations, promises new applications in electronic signal detection, communications, and pulse-discrimination neural networks.
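The threshold-crossing statistics described above can be explored with a short Monte Carlo sketch. This is an illustrative simulation, not the thesis's program: a unit sinusoid plus Gaussian noise shaped by a second-order Butterworth filter (one of the two filter cases in the text) at a signal-to-noise ratio of one; the sampling rate, cut-off frequency and threshold are chosen arbitrarily.

```python
import numpy as np
from scipy.signal import butter, lfilter

rng = np.random.default_rng(0)
fs, f0, n = 1000.0, 10.0, 200_000          # sample rate (Hz), signal freq, samples
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * f0 * t)        # deterministic periodic signal

# Band-limited noise: white Gaussian noise through a second-order
# Butterworth low-pass, scaled to a signal-to-noise ratio of one.
b, a = butter(2, 50.0 / (fs / 2))
noise = lfilter(b, a, rng.standard_normal(n))
noise *= signal.std() / noise.std()

x = signal + noise
threshold = 0.0
up = (x[:-1] < threshold) & (x[1:] >= threshold)   # upward crossings
intervals = np.diff(np.flatnonzero(up)) / fs       # times between crossings

# Fraction of crossing intervals close to the true signal period:
period = 1.0 / f0
near = float(np.mean(np.abs(intervals - period) < 0.2 * period))
print(f"fraction of intervals within 20% of the period: {near:.2f}")
```

Histogramming `intervals` over a window of up to two signal periods gives the empirical counterpart of the crossing-time probabilities the program evaluates analytically.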
|
153 |
Video object segmentation and tracking.Murugas, Themesha. 31 March 2014 (has links)
One of the more complex video processing problems currently vexing researchers is that of
object segmentation. This involves identifying semantically meaningful objects in a scene and
separating them from the background. While the human visual system is capable of performing
this task with minimal effort, development and research in machine vision is yet to yield
techniques that perform the task as effectively and efficiently. The problem is not only difficult
due to the complexity of the mechanisms involved but also because it is an ill-posed problem.
No unique segmentation of a scene exists as what is of interest as a segmented object depends
very much on the application and the scene content. In most situations a priori knowledge of the
nature of the problem is required, often depending on the specific application in which the
segmentation tool is to be used.
This research presents an automatic method of segmenting objects from a video sequence. The
intent is to extract and maintain both the shape and contour information as the object changes
dynamically over time in the sequence. A priori information is incorporated by requesting the
user to tune a set of input parameters prior to execution of the algorithm.
Motion is used as a semantic for video object extraction subject to the assumption that there is
only one moving object in the scene and the only motion in the video sequence is that of the
object of interest. It is further assumed that there is constant illumination and no occlusion of the
object.
A change detection mask is used to detect the moving object followed by morphological
operators to refine the result. The change detection mask yields a model of the moving
components; this is then compared to a contour map of the frame to extract a more accurate
contour of the moving object and this is then used to extract the object of interest itself. Since
the video object is moving as the sequence progresses, it is necessary to update the object over
time. To accomplish this, an object tracker has been implemented based on the Hausdorff
object-matching algorithm.
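The change-detection step described above can be sketched as follows; the threshold and iteration counts stand in for the user-tuned input parameters, and the two frames are a toy example rather than a real video sequence.

```python
import numpy as np
from scipy import ndimage

def change_detection_mask(prev, curr, thresh=25, iters=2):
    """Frame-difference change detection mask refined with morphological
    opening (removes isolated noise) and closing (fills small holes).
    `thresh` and `iters` stand in for the user-tuned input parameters."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    mask = diff > thresh
    mask = ndimage.binary_opening(mask, iterations=iters)
    mask = ndimage.binary_closing(mask, iterations=iters)
    return mask

# Toy frames: a bright 20x20 "object" moves 5 pixels to the right.
prev = np.zeros((100, 100), np.uint8)
curr = np.zeros((100, 100), np.uint8)
prev[40:60, 40:60] = 200
curr[40:60, 45:65] = 200
mask = change_detection_mask(prev, curr)
print("changed pixels after refinement:", int(mask.sum()))
```

In the full algorithm this refined mask would then be intersected with a contour map of the frame to recover an accurate object boundary.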
The dissertation begins with an overview of segmentation techniques and a discussion of the
approach used in this research. This is followed by a detailed description of the algorithm
covering initial segmentation, object tracking across frames and video object extraction. Finally,
the semantic object extraction results for a variety of video sequences are presented and
evaluated. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, 2005
|
154 |
Traffic modeling in mobile internet protocol : version 6.Mtika, Steve Davidson. January 2005 (has links)
Mobile Internet Protocol Version 6 (IPv6) is the new version of the Internet Protocol (IP), born out of the great success of Internet Protocol version 4 (IPv4). The motivation behind the development of the Mobile IPv6 standard stems from users' demand for mobile devices that can connect and move seamlessly across a growing number of connectivity options. It is suitable for mobility between subnets across both homogeneous and inhomogeneous media. The protocol allows a mobile node to communicate with other hosts after changing its point of attachment from one subnet to another. The huge address space available meets the requirements of the rapid development of the internet, as the number of mobile nodes increases tremendously with its expansion. The integration of mobility, security and quality of service (QoS) in Mobile IPv6 makes it an important foundation stone for building the mobile information society and the future internet. Convergence between current network technologies, the internet and mobile telephony, is taking place, but the internet's IP routing was designed to work with conventional static nodes. Mobile IPv6 is therefore considered to be one of the key technologies for realizing convergence, enabling seamless communication between fixed and mobile access networks. For this reason, there is a large body of work on location registration and mobility management, traffic modeling, QoS, routing procedures, etc. To meet the increased demand for mobile telecommunications, traffic modeling is an important step towards understanding and solving performance problems in future wireless IP networks. Understanding the nature of this traffic, identifying its characteristics, and developing appropriate traffic models coupled with appropriate mobility management architectures are of great importance to the traffic engineering and performance evaluation of these networks. 
It is imperative that the mobility management used keeps providing good performance to mobile users while keeping the network load due to signaling and packet delivery as low as possible. To reduce this load, the Internet Engineering Task Force (IETF) proposed regional mobility management. The load is reduced by allowing local migrations to be handled locally, transparent to the Home Agent and the Correspondent Node, as mobile nodes roam freely around the network. This dissertation tackles two major aspects. Firstly, we propose a dynamic regional mobility management (DRMM) architecture with the aim of minimizing network load while keeping an optimal number of access routers in the region. The mobility management is dynamic, based on the movement and population of the mobile nodes around the network. Most traffic models in telecommunication networks have been based on exponential Poisson processes. This model, however, has been shown to be unsuitable for modeling bursty IP traffic. Several approaches to modeling IP traffic with Markovian processes have been developed using the Batch Markovian Arrival Process (BMAP), characterizing arrivals as batches of sizes with different distributions. The BMAP is constructed by generalizing batch Poisson processes to allow for non-exponential times between arrivals of batches while maintaining an underlying Markovian structure. The second aspect of this dissertation covers traffic characterization. We give the analysis of an access router as a single-server queue with unlimited waiting space under a non-pre-emptive priority queuing discipline. We model the arrival process as a superposition of BMAP processes and characterize the superimposed arrival process using the BMAP representation. We derive the queue length and waiting time for this type of queuing system. 
The performance of this traffic model is evaluated by obtaining numerical results in terms of queue length and waiting time, and their distributions, for the high- and low-priority traffic. We finally present a call admission control scheme that supports QoS. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2005.
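A minimal discrete-event sketch of the queuing model above, simplified to batch Poisson arrivals (a special case of the BMAP) with two priority classes, non-pre-emptive service and exponential service times; all rates are illustrative and the batch-size distribution is arbitrary.

```python
import heapq
import random

def simulate_priority_queue(lam_hi=0.1, lam_lo=0.2, service=1.0,
                            horizon=10_000.0, seed=1):
    """Single-server, non-pre-emptive two-priority queue with batch
    Poisson arrivals -- a simple special case of the BMAP input described
    above. Class 0 is high priority. Returns mean waiting time per class."""
    random.seed(seed)
    events = []                                    # (arrival time, class)
    for cls, lam in ((0, lam_hi), (1, lam_lo)):
        t = 0.0
        while True:
            t += random.expovariate(lam)           # Poisson batch epochs
            if t >= horizon:
                break
            for _ in range(random.choice([1, 2, 3])):  # arbitrary batch size
                events.append((t, cls))
    events.sort()

    waits = {0: [], 1: []}
    queue, now, i = [], 0.0, 0                     # heap keyed (class, arrival)
    while i < len(events) or queue:
        if not queue:                              # server idle: jump ahead
            now = max(now, events[i][0])
        while i < len(events) and events[i][0] <= now:
            heapq.heappush(queue, (events[i][1], events[i][0]))
            i += 1
        cls, arr = heapq.heappop(queue)            # highest priority first
        waits[cls].append(now - arr)
        now += random.expovariate(1.0 / service)   # exponential service
    return {c: sum(w) / len(w) for c, w in waits.items()}

w = simulate_priority_queue()
print("mean waits (high, low):", w[0], w[1])
```

Under the priority discipline the high-priority class should see a markedly shorter mean wait, mirroring the high/low-priority split in the numerical results described above.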
|
155 |
Experiments in thin film deposition : plasma-based fabrication of carbon nanotubes and magnesium diboride thin films.Coetsee, Dirk. January 2004 (has links)
A simple, low-cost plasma reactor was developed for the purpose of carrying out thin film deposition experiments. The reactor is based largely on the Atmospheric Pressure Nonequilibrium Plasma (APNEP) design with a simple modification. It was used in an attempt to fabricate magnesium diboride thin films via a novel, but unsuccessful, CVD process in which plasma etching provides a precursor boron flux. Carbon nanotubes were successfully synthesised with the apparatus using a plasma-based variation of the floating catalyst or vapour phase growth method. The effect of various parameters and chemicals on the quality of nanotube production was assessed. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2004.
|
156 |
Super-orthogonal space-time turbo codes in Rayleigh fading channels.Pillai, Jayesh Narayana. January 2005 (has links)
The vision of anytime, anywhere communications, coupled with the rapid growth of
wireless subscribers and increased numbers of internet users, suggests that the
widespread demand for always-on data access is sure to be a major driver for the
wireless industry in the years to come. Among many cutting-edge wireless
technologies, a new class of transmission techniques, known as Multiple-Input
Multiple-Output (MIMO) techniques, has emerged as an important technology
leading to promising link capacity gains of several-fold increases in data rates and
spectral efficiency. While the use of MIMO techniques in the third generation (3G)
standards is minimal, it is anticipated that these technologies will play an important
role in the physical layer of fixed and fourth generation (4G) wireless systems.
Concatenated codes, a class of forward error correction codes, of which Turbo codes
are a classical example, have been shown to achieve reliable performance
approaching the Shannon limit. An effective and practical way to approach the capacity
of MIMO wireless channels is to employ space-time coding (STC). Space-Time
coding is based on introducing joint correlation in transmitted signals in both the
space and time domains. Space-Time Trellis Codes (STTCs) have been shown to
provide the best trade-off in terms of coding gain advantage, improved data rates and
computational complexity.
Super-Orthogonal Space-Time Trellis Coding (SOSTTC) is the recently proposed
form of space-time trellis coding which outperforms its predecessor. The code has a
systematic design method to maximize the coding gain for a given rate, constellation
size, and number of states. Simulation and analytical results are provided to justify the
improved performance. The main focus of this dissertation is on STTCs, SOSTTCs
and their concatenated versions in quasi-static and rapid Rayleigh fading channels.
Turbo codes and space-time codes have made significant impact in terms of the
theory and practice by closing the gap on the Shannon limit and the large capacity gains provided by the MIMO channel, respectively. However, a convincing solution
to exploit the capabilities provided by a MIMO channel would be to build the turbo
processing principle into the design of MIMO architectures. The field of concatenated
STTCs has already received much attention and has shown improved performance
over conventional STTCs. Recently, simple and double concatenated STTC
structures have been shown to provide a further performance improvement. Motivated by
this fact, two concatenated SOSTTC structures are proposed, called super-orthogonal
space-time turbo codes. The performance of these new concatenated SOSTTCs is
compared with that of concatenated STTCs and conventional SOSTTCs with
simulations in Rayleigh fading channels. It is seen that the SOST-CC system
outperforms the ST-CC system in rapid fading channels, while maintaining
similar performance in quasi-static channels. The SOST-SC system has improved
performance for larger frame lengths and overall maintains similar performance with
ST-SC systems. A further investigation of these codes with channel estimation errors
is also provided. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, 2005.
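The orthogonal space-time building block underlying such codes can be illustrated with the classical rate-1 Alamouti scheme. The sketch below is a standard 2-transmit, 1-receive BPSK simulation over quasi-static Rayleigh fading; it is not the SOSTTC or turbo-concatenated schemes proposed in the dissertation, and the SNR definition is nominal.

```python
import numpy as np

rng = np.random.default_rng(3)

def alamouti_ber(snr_db, n_pairs=50_000):
    """BER of the rate-1 Alamouti orthogonal space-time block code with
    2 TX / 1 RX antennas and BPSK over quasi-static Rayleigh fading.
    Orthogonality of the code matrix makes ML detection a simple linear
    combining step."""
    snr = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, (n_pairs, 2))
    s = 1 - 2 * bits                                   # BPSK mapping
    h = (rng.standard_normal((n_pairs, 2)) +
         1j * rng.standard_normal((n_pairs, 2))) / np.sqrt(2)
    noise = (rng.standard_normal((n_pairs, 2)) +
             1j * rng.standard_normal((n_pairs, 2))) * np.sqrt(0.5 / snr)
    # Two received samples per codeword; TX power split across 2 antennas.
    r1 = (h[:, 0] * s[:, 0] + h[:, 1] * s[:, 1]) / np.sqrt(2) + noise[:, 0]
    r2 = (-h[:, 0] * s[:, 1] + h[:, 1] * s[:, 0]) / np.sqrt(2) + noise[:, 1]
    # Linear combining (valid because the code matrix is orthogonal):
    s0_hat = np.conj(h[:, 0]) * r1 + h[:, 1] * np.conj(r2)
    s1_hat = np.conj(h[:, 1]) * r1 - h[:, 0] * np.conj(r2)
    est = np.stack([s0_hat.real, s1_hat.real], axis=1) < 0
    return float(np.mean(est != bits))

b0, b10 = alamouti_ber(0), alamouti_ber(10)
print("BER at 0 dB:", b0, " BER at 10 dB:", b10)
```

The two-branch diversity slope of the resulting BER curve is the coding-gain baseline that super-orthogonal trellis constructions then improve upon.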
|
157 |
ATM performance in rural areas of South Africa.Mbatha, Sakhiseni J. January 2005 (has links)
Rural areas in developing countries span vast areas with a variety of climatic zones,
vegetation and terrain features, which are hostile to the installation and maintenance of
telecommunication infrastructures. Provision of telecommunications services to these
areas using traditional wired telephone systems with a centralized network
architecture becomes prohibitively expensive and in many cases is not viable,
because there is no existing infrastructure and the areas are sparsely populated. Applications of
wireless systems seem to provide a cost-effective solution for such a scenario. However,
deployment of ATM in rural areas as a backbone wide area network (WAN) technology
has not been thoroughly investigated so far.
The dissertation investigates the feasibility of deploying an ATM backbone network
(WAN) in rural areas. ATM is a digital transmission service for wide
area networks providing speeds from 2 Megabits per second up to 155 Megabits per
second. Businesses and institutions that transmit extremely high volumes of virtually
error-free information at high speeds over wide area networks with high-quality,
reliable connections currently use this service.
To conserve bandwidth, the network should support a high forward bit rate, i.e. it
must convey more traffic from the base station to the user (downstream) than from
the user to the base station (upstream). This work also investigates the features of
rural areas that degrade the performance of the networks and have a negative impact
on the deployment of telecommunications services. Identification of these features
will lead to the suggestion of the most cost-effective telecommunication service.
For the purpose of evaluating the performance and feasibility of the network, modeling of
the ATM network is accomplished using the Project Estimation (ProjEstim) Simulation Tool,
a comprehensive tool for simulating large communication networks with detailed
protocol modeling and performance analysis. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, 2005.
|
158 |
Opportunistic scheduling algorithms in downlink centralized wireless networks.Yin, Rui. January 2005 (has links)
As wireless spectrum efficiency becomes increasingly important with the growing demand
for wideband wireless services, the scheduling algorithm plays an important role in the
design of advanced wireless networks. Opportunistic scheduling algorithms for wireless
communication networks under different QoS constraints have gained popularity in recent
years since they have the potential to achieve higher system performance. In this dissertation
we first formulate the framework of opportunistic scheduling algorithms. Then
we propose three new opportunistic scheduling schemes under different QoS criteria and
situations (single channel or multiple channel).
1. Temporal fairness opportunistic scheduling algorithm in the short term.
We replicate the temporal fairness opportunistic scheduling algorithm in the long
term. From simulation results we find that this algorithm improves the system
performance and complies with the temporal fairness constraint in the long term.
However, the disadvantage of this algorithm is that its allocation of system
resource (time slots) is unfair from the beginning of the simulation up to the
10000th time slot - we say it is unfair in the short term. With such a scheme, it
is possible that some users with bad channel conditions would starve for a long
time (more than a few seconds), which is undesirable for certain users (say,
real-time users). So we propose a new scheme, called the temporal fairness
opportunistic scheduling algorithm in the short term, to satisfy users'
requirements for system resource in both the short term and the long term.
Our simulation results show that the new scheme performs well with respect to both
temporal fairness constraint and system performance improvement.
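The idea of a short-term temporal fairness constraint can be sketched with a sliding-window quota. The simplified scheme below is illustrative only, with arbitrary quota, window and channel parameters; it is not the algorithm proposed in the dissertation.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(7)

def schedule(rates, quota, window):
    """Opportunistic scheduling under a short-term temporal fairness
    constraint: within every sliding window of `window` slots a user may
    hold at most a `quota` fraction of the slots; among eligible users,
    the one with the best instantaneous rate is served."""
    n_slots, n_users = rates.shape
    history = deque()                        # users served in current window
    counts = np.zeros(n_users, dtype=int)
    served = np.zeros(n_slots, dtype=int)
    for t in range(n_slots):
        eligible = counts < quota * window
        if not eligible.any():               # safety: never leave slot idle
            eligible[:] = True
        u = int(np.argmax(np.where(eligible, rates[t], -np.inf)))
        served[t] = u
        history.append(u)
        counts[u] += 1
        if len(history) > window:            # slide the fairness window
            counts[history.popleft()] -= 1
    return served

# Two users with Rayleigh-faded rates; user 0 has the better channel.
rates = np.stack([rng.rayleigh(2.0, 10_000),
                  rng.rayleigh(1.0, 10_000)], axis=1)
served = schedule(rates, quota=0.6, window=50)
share = np.bincount(served, minlength=2) / len(served)
print("long-run time shares:", share)
```

The window cap bounds the strong user's share in every 50-slot stretch, so the weak user can never starve for more than a bounded run of slots, which is the short-term property motivated above.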
2. Delay-concerned opportunistic scheduling algorithm.
While most work on opportunistic scheduling algorithms has been done under fairness
constraints at the user level, we consider users' packet delay in opportunistic scheduling.
Firstly we examine the packet delay performance under the long term temporal
fairness opportunistic scheduling (TFOL) algorithm. We also simulate the earliest
deadline-first (EDF) scheduling algorithm in the wireless environment. We find that
the disadvantage of the opportunistic scheduling algorithm is that it is unfair in
its packet delay distribution, because to improve system performance it biases
packet delay in favour of users with good channel conditions. Under the EDF
algorithm, the packet delay of users with different channel conditions is almost
the same, but overall performance is worse than that of the opportunistic
scheduling algorithm. So we propose another new
scheme which considers both users' channel conditions and packet delay. Simulation
results show that the new scheme works well with respect to both system performance
improvement and the balance of packet delay distribution.
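A standard scheme that, like the proposal above, weighs both instantaneous channel rate and packet delay is Modified Largest Weighted Delay First (M-LWDF); it is named here as a well-known point of reference, not as the dissertation's scheme. A single scheduling decision can be sketched as follows, with all numbers illustrative.

```python
import numpy as np

def mlwdf_pick(rates, mean_rates, head_delays, a):
    """One scheduling decision of Modified Largest Weighted Delay First
    (M-LWDF): score each user by a * (head-of-line delay) *
    (instantaneous rate / average rate) and serve the highest score,
    trading off channel quality against queueing delay."""
    score = a * head_delays * rates / mean_rates
    return int(np.argmax(score))

rates = np.array([1.8, 0.7, 1.1])           # instantaneous channel rates
mean_rates = np.array([1.5, 0.8, 1.0])      # long-run average rates
head_delays = np.array([0.02, 0.30, 0.05])  # head-of-line delays (s)
a = np.ones(3)                              # per-user delay weights
print("serve user", mlwdf_pick(rates, mean_rates, head_delays, a))
```

In this example user 1 is served despite having the weakest channel, because its long head-of-line delay dominates the score: exactly the delay/throughput balance discussed above.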
3. Utilitarian fairness scheduling algorithm in multiple wireless channel networks.
Existing studies have so far focused on the design of scheduling algorithms in
single wireless channel networks under fairness constraints. A common
assumption of existing designs is that only a single user can access the channel
at a given time slot. However, spread spectrum techniques are increasingly being
deployed to allow multiple data users to transmit simultaneously on a relatively
small number of separate high-rate channels. Not much work has been done on
scheduling algorithms for multiple wireless channel networks. Furthermore,
in a wire-line network, assigning a certain amount of resource to a user
guarantees that the user obtains a corresponding amount of performance, but in a
wireless network this is different because channel conditions differ among users.
Hence, in a wireless channel a user's performance does not depend directly on its
allocation of system resource. Finally, the opportunistic scheduling mechanism for
wireless communication networks is gaining popularity because it exploits
"multi-user diversity" to maximize system performance. So, considering these three
points, in the fourth section we propose a utilitarian fairness scheduling
algorithm for multiple wireless channel networks. Utilitarian fairness guarantees
that every user obtains its pre-defined performance requirement. The proposed criterion fits in with
wireless networks. We also use the opportunistic scheduling mechanism to maximize
system performance under the utilitarian fairness constraint. Simulation results show
that the new scheme works well in both utilitarian fairness and utilitarian efficiency
of system resource in the multiple wireless channel situation. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, 2005.
|
159 |
Modeling and torque control implementation for an 8/6 switched reluctance motor.Wang, Sen. 30 May 2013 (has links)
This thesis begins with a brief introduction of the basic principles of operation of SRMs, and
explains how flux characteristics are derived from voltage and current measurements, and
presents results obtained from an 8/6 SRM. Torque characteristics are derived from these
flux characteristics using both the inductance and co-energy methods. Comparison of these
results with direct torque measurements shows that the co-energy method is significantly
more accurate than the inductance method. Electrical and mechanical simulation models are
derived from inductance and torque characteristics, and implemented in Matlab/Simulink.
Simulated results are shown to agree with measurements obtained from physical locked and
free rotor alignment experiments. These models are also used to illustrate the need for
sophisticated commutation strategies and high performance current control loops to achieve
low ripple torque control.
The Matlab/Simulink models are transferred to PSCAD to compare the current control
abilities, cost, complexity and robustness of the Asymmetrical Half Bridge (AHB), n+1
switch, and C-dump SRM converter topologies. The relatively high cost of the AHB
converter is justified in terms of its robustness, simplicity and superior capabilities for
current and torque control. The torque sharing function commutation strategy for low ripple
torque control is presented and simulated with hysteresis current control for the 8/6 SRM fed
from a four phase AHB converter.
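A common choice of torque sharing function is a raised-cosine ramp that hands the torque demand from the outgoing phase to the incoming one. The sketch below is a generic illustration with arbitrary angles and torque reference; it is not necessarily the TSF profile used in the thesis.

```python
import numpy as np

def cosine_tsf(theta, theta_on, theta_ov, T_ref):
    """Cosine torque sharing function: over the overlap interval
    [theta_on, theta_on + theta_ov] the torque demand ramps from the
    outgoing phase to the incoming one with raised-cosine profiles,
    so the two phase references always sum to T_ref (zero torque
    ripple in the reference)."""
    x = np.clip((theta - theta_on) / theta_ov, 0.0, 1.0)
    rise = 0.5 * (1 - np.cos(np.pi * x))       # incoming phase share
    return T_ref * rise, T_ref * (1 - rise)    # (incoming, outgoing)

theta = np.linspace(0, 15, 100)                # degrees, one commutation
T_in, T_out = cosine_tsf(theta, theta_on=5.0, theta_ov=5.0, T_ref=2.0)
ripple = float(np.abs(T_in + T_out - 2.0).max())
print("max torque-reference ripple:", ripple)
```

The hysteresis current controllers then track the phase currents needed to realise each phase's torque reference, which is the arrangement simulated above for the four-phase AHB converter.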
A DSP implementation of the current and torque control loops is also presented and tested
under various dynamic speed and load conditions and recommendations are made for future
work. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, 2006
|
160 |
Behavioural simulation of mixed analogue/digital circuitsLong, David Ian January 1996 (has links)
Continuing improvements in integrated circuit technology have made possible the implementation of complex electronic systems on a single chip. This often requires both analogue and digital signal processing. It is essential to simulate such ICs during the design process to detect errors at an early stage. Unfortunately, the simulators that are currently available are not well suited to large mixed-signal circuits. This thesis describes the design and development of a new methodology for simulating analogue and digital components in a single, integrated environment. The methodology represents components as behavioural models that are more efficient than the circuit models used in conventional simulators. The signals that flow between models are all represented as piecewise-linear (PWL) waveforms. Since models representing digital and analogue components use the same format to represent their signals, they can be directly connected together. An object-oriented approach was used to create a class hierarchy to implement the component models. This supports rapid development of new models, since all models are derived from a common base class and inherit the methods and attributes defined in their parent classes. The signal objects are implemented with a similar class hierarchy. The development and validation of models representing various digital, analogue and mixed-signal components are described. Comparisons are made between the accuracy and performance of the proposed methodology and several commercial simulators. The development of a Windows-based demonstration simulation tool called POISE is also described. This permitted models to be tested independently and multiple models to be connected together to form structural models of complex circuits.
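The shared PWL signal representation described above can be sketched with a small class; this is a minimal illustration of the idea (an analogue ramp driving a behavioural comparator model), not the POISE implementation.

```python
import bisect

class PWLSignal:
    """Piecewise-linear waveform stored as sorted (time, value)
    breakpoints -- the common signal format that lets analogue and
    digital behavioural models connect directly."""

    def __init__(self, points):
        self.t, self.v = zip(*sorted(points))

    def at(self, time):
        """Linearly interpolate the waveform value at `time`;
        hold the end values outside the breakpoint range."""
        i = bisect.bisect_right(self.t, time) - 1
        if i < 0:
            return self.v[0]
        if i >= len(self.t) - 1:
            return self.v[-1]
        frac = (time - self.t[i]) / (self.t[i + 1] - self.t[i])
        return self.v[i] + frac * (self.v[i + 1] - self.v[i])

class Comparator:
    """Behavioural mixed-signal model: analogue PWL waveform in,
    digital levels out, no circuit-level detail."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold
    def sample(self, sig, times):
        return [1.0 if sig.at(t) > self.threshold else 0.0 for t in times]

ramp = PWLSignal([(0.0, 0.0), (1.0, 1.0)])    # analogue ramp from 0 to 1
out = Comparator().sample(ramp, [0.1, 0.4, 0.6, 0.9])
print(out)  # -> [0.0, 0.0, 1.0, 1.0]
```

Because both the analogue ramp and the comparator's digital output live in the same breakpoint format, the comparator's output could itself feed another model, which is the direct-connection property the methodology relies on.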
|