301

Supervised dictionary learning for action recognition and localization

Kumar, B. G. Vijay January 2012 (has links)
Image sequences with humans and human activities are everywhere. With the amount of produced and distributed data increasing at an unprecedented rate, there has been a lot of interest in building systems that can understand and interpret visual data, and in particular detect and recognise human actions. Dictionary-based approaches learn a dictionary from descriptors extracted from the videos in the first stage and a classifier or a detector in the second stage. The major drawback of such an approach is that the dictionary is learned in an unsupervised manner, without considering the task (classification or detection) that follows it. In this work we develop task-dependent (supervised) dictionaries for action recognition and localization, i.e., dictionaries that are best suited for the subsequent task. In the first part of the work, we propose a supervised max-margin framework for linear and non-linear Non-Negative Matrix Factorization (NMF). To achieve this, we impose max-margin constraints within the formulation of NMF and simultaneously solve for the classifier and the dictionary. The dictionary (basis matrix) thus obtained maximizes the margin of the classifier in the low-dimensional space (in the linear case) or in the high-dimensional feature space (in the non-linear case). In the second part of the work, we develop methodologies for action localization. We first propose a dictionary weighting approach in which we learn local and global weights for the dictionary by considering the localization information of the training sequences. We then extend this approach to learn a task-dependent dictionary for action localization that incorporates the localization information of the training sequences into dictionary learning. Results on publicly available datasets show that the performance of the system is improved by using the supervised information while learning the dictionary.
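The first stage described above builds on standard NMF, which factorizes a nonnegative data matrix as V ≈ WH; the thesis then augments this with max-margin constraints. The following is a minimal sketch of the unsupervised baseline only (the supervised variant is not reproduced here), using classic multiplicative updates; the toy data and all names are illustrative:

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Plain (unsupervised) multiplicative-update NMF: V ≈ W @ H,
    with W and H kept elementwise nonnegative."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    eps = 1e-9  # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy "descriptor" matrix: nonnegative random data.
V = np.abs(np.random.default_rng(1).normal(size=(20, 50)))
W, H = nmf(V, k=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In the supervised setting of the thesis, the dictionary W would additionally be coupled to a classifier through margin constraints so that the codes H become discriminative, rather than only reconstructive as here.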
302

Functional topology of networks

Zaman, Sabri-E. January 2016 (has links)
In order to utilise network resources efficiently, we need a strong knowledge of how the resources are shared and provisioned. However, this information is often unavailable due to the complexity of modern networks, the restrictive access to information describing their configurations, and accuracy/reliability issues with information provisioning methods. Here, we propose the concept of functional topologies to deduce how resources are shared between different traffic flows. A functional topology describes the dependencies between traffic flows as a graph of interactions; this is in contrast to typical network graphs that model the physical connections between network components (routers and hosts). Unlike other work relying on in-network data, this topology is constructed solely at end hosts by measuring interdependencies of traffic flows via cross-correlation analysis. In order to measure the complete set of interdependencies of traffic flows, different time intervals are used for sampling time-series data. It is shown that these time intervals are related to the maximum delays of traffic flows in the network. The results of the cross-correlation analysis are validated using the well-known inverse participation ratio (IPR). As part of the validation process, the results are analysed and compared with the dominant/important flows of the network obtained by a new technique that uses eigendecomposition and a spanning-tree algorithm. The methodology for measuring interdependencies of traffic flows is validated and evaluated using real-world data from a sensor network, as well as detailed simulations modelling different network topologies, e.g. a local area network. All the traffic-flow dependency measurements are fed into a novel algorithm to construct the functional topology of the network. Results show that the algorithm constructs an accurate functional topology of the network.
Functional topology simplifies network topology by considering only nodes that create dependencies among traffic flows. With the help of this topology, end hosts can gain insight into resource provisioning of a network without requiring ISP assistance.
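The dependency-measurement step above — declaring two flows interdependent when their traffic time series are strongly cross-correlated at some lag — can be sketched as follows. This is an illustrative reconstruction, not the thesis's code; the synthetic flows, lag window and thresholds are assumptions:

```python
import numpy as np

def max_xcorr(x, y, max_lag):
    """Peak absolute normalized cross-correlation of two flow
    time series over lags in [-max_lag, max_lag]."""
    x = (x - x.mean()) / (x.std() + 1e-12)
    y = (y - y.mean()) / (y.std() + 1e-12)
    n = len(x)
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.dot(x[lag:], y[:n - lag]) / n
        else:
            c = np.dot(x[:n + lag], y[-lag:]) / n
        best = max(best, abs(c))
    return best

rng = np.random.default_rng(0)
shared = rng.normal(size=500)                        # traffic on a shared link
a = shared + 0.3 * rng.normal(size=500)              # flow A rides the shared link
b = np.roll(shared, 3) + 0.3 * rng.normal(size=500)  # flow B: same link, delayed
c = rng.normal(size=500)                             # flow C: independent

dep_ab = max_xcorr(a, b, max_lag=10)   # strong dependency (shared resource)
dep_ac = max_xcorr(a, c, max_lag=10)   # weak: no shared resource
```

Thresholding such pairwise scores yields the edge set of the functional topology; the lag window plays the role of the maximum flow delay discussed in the abstract.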
303

Multiple antenna systems : channel capacity and low-density parity-check codes.

Byers, Geoffrey James. January 2005 (has links)
The demand for high data rate wireless communication systems is evident today, as indicated by the rapid growth in wireless subscribers and services. High data rate systems are bandwidth intensive, but bandwidth is an expensive and scarce commodity. The ability of future wireless systems to efficiently utilise the available bandwidth is therefore integral to their progress and development. The wireless communications channel is a harsh environment where time-varying multipath fading, noise and interference from other users and systems all contribute to the corruption of the received signal. It is difficult to overcome these problems and achieve the high data rates required using single antenna technology. Multiple-input multiple-output (MIMO) systems have recently emerged as a promising technique for achieving very large bandwidth efficiencies in wireless channels. Such a system employs multiple antennas at both the transmitter and the receiver. These systems exploit the spatial dimension of the wireless channel to achieve significant gains in capacity and reliability over single antenna systems and consequently achieve high data rates. MIMO systems are currently being considered for 3rd generation cellular systems. The performance of MIMO systems is heavily dependent on the environment in which the system is utilised. For this reason a realistic channel model is essential for understanding the performance of these systems. Recent studies on the capacity of MIMO channels have focused on the effect of spatial correlation, but the joint effect of spatial and temporal correlation has not been well studied. The first part of this thesis proposes a new spatially and temporally correlated MIMO channel model which considers motion of the receiver and non-isotropic scattering at both ends of the radio link. The outage capacity of this channel is examined, where the effects of antenna spacing, array angle, degree of scattering and receiver motion are investigated.
It is shown that the channel capacity still increases linearly with the number of transmit and receive antennas, despite the presence of both spatial and temporal correlation. The capacity of MIMO channels is generally investigated by simulation. Where analytical expressions have been considered for spatially correlated channels, only bounds or approximations have been used. In this thesis, closed-form analytical expressions are derived for the ergodic capacity of MIMO channels for the cases of spatial correlation at one end and at both ends of the radio link. The latter does not lend itself to numerical integration, but the former is shown to be accurate by comparison with simulation results. The proposed analysis is also very general, as it is based on the transmit and receive antenna correlation matrices. Low-density parity-check (LDPC) codes have recently been rediscovered and have been shown to approach the Shannon limit and even outperform turbo codes for long block lengths. Non-binary LDPC codes have demonstrated improved performance over binary LDPC codes in the AWGN channel. Methods to optimise non-binary LDPC codes have not been well developed; only simulation-based approaches have been employed, which are not very efficient. For this reason, a new approach is proposed which is based on extrinsic information transfer (EXIT) charts. It is demonstrated that by performing curve matching on the EXIT chart, good non-binary LDPC codes can be designed for the AWGN channel. In order to approach the theoretical capacity of MIMO channels, many space-time coded, multiple antenna (MA) systems have been considered in the literature. These systems merge channel coding and antenna diversity and exploit the benefits of both. Binary LDPC codes have demonstrated good performance in MA systems, but non-binary LDPC codes have not been considered.
Therefore, the application of non-binary LDPC codes to MA systems is investigated, where the codes are optimised for the system of interest using a simulation and EXIT chart based design approach. It is shown that non-binary LDPC codes achieve a small gain in performance over binary LDPC codes in MA systems. / Thesis (Ph.D.)-University of KwaZulu-Natal, Durban, 2005.
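The linear capacity growth claimed above is easy to reproduce for the simplest uncorrelated Rayleigh case by Monte Carlo evaluation of the classic expression C = E[log2 det(I + (SNR/nt) HH*)]. A sketch under that assumption (the thesis's spatially/temporally correlated model is not reproduced here):

```python
import numpy as np

def ergodic_capacity(nt, nr, snr_db, trials=2000, seed=0):
    """Monte Carlo estimate of ergodic MIMO capacity (bits/s/Hz) for
    i.i.d. Rayleigh flat fading with equal power per transmit antenna:
    C = E[ log2 det(I + (SNR/nt) * H @ H^H) ]."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    total = 0.0
    for _ in range(trials):
        # Complex Gaussian channel matrix, unit average power per entry.
        H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
        M = np.eye(nr) + (snr / nt) * (H @ H.conj().T)
        total += np.log2(np.linalg.det(M).real)
    return total / trials

c1 = ergodic_capacity(1, 1, snr_db=10)   # SISO baseline
c4 = ergodic_capacity(4, 4, snr_db=10)   # grows roughly linearly with antennas
```

Adding transmit/receive correlation matrices (H → R_r^{1/2} H_w R_t^{1/2}) reduces these numbers, which is the regime the thesis's closed-form expressions address.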
304

Investigating the combined appearance model for statistical modelling of facial images.

Allen, Nicholas Peter Legh. January 2007 (has links)
The combined appearance model is a linear, parameterized and flexible model which has emerged as a powerful tool for representing, interpreting, and synthesizing the complex, non-rigid structure of the human face. The inherent strength of this model arises from the utilization of a representative training set which provides a priori knowledge of the allowable appearance variation of the face. The model was introduced by Edwards et al. in 1998 as part of the Active Appearance Model framework, a template alignment algorithm which used the model to automatically locate deformable objects within images. Since this debut, the model has been utilized within a plethora of applications relating to facial image processing. In essence, the appearance model combines individual statistical models of shape and texture variation in order to produce a single model of the correlations between both shape and texture. In the context of facial modelling, this approach produces a model which is flexible in that it can accommodate the range of variation found in the face, specific in that it is restricted to only facial instances, and compact in that a new facial instance may be synthesized using a small set of parameters. It is additionally this compactness which makes it a candidate for model-based video coding. Methods used in the past to model faces are reviewed and the capabilities of the statistical model in general are investigated. Various approaches to building the intermediate linear Point Distribution Models (PDMs) and grey-level models are outlined and an approach decided upon for implementation. The respective statistical models for the Informatics and Modelling (IMM) and Extended Multi-Modal Verification for Teleservices and Securities (XM2VTS) facial databases are built using MATLAB in an approach incorporating Procrustes Analysis, Affine Transform Warping and Principal Components Analysis.
The MATLAB implementation's integrity was validated against a similar approach encountered in the literature and found to produce results within 0.59%, 0.69% and 0.69% of those published for the shape, texture and combined models respectively. The models are consequently assessed with regard to their flexibility, specificity and compactness. The results demonstrate the model's ability to be successfully constrained to the synthesis of "legal" faces, to successfully parameterize and re-synthesize new unseen images from outside the training sets, and to significantly reduce the high dimensionality of input facial images to produce a powerful, compact model. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2007
305

MIMO equalization.

Mathew, Jerry George. January 2005 (has links)
In recent years, space-time block codes (STBC) for multi-antenna wireless systems have emerged as attractive encoding schemes for wireless communications. These codes provide full diversity gain and achieve good performance with simple receiver structures without an additional increase in bandwidth or power requirements. When implemented over broadband channels, STBCs can be combined with orthogonal frequency division multiplexing (OFDM) or single carrier frequency domain (SC-FD) transmission schemes to achieve multi-path diversity and to decouple the broadband frequency selective channel into independent flat fading channels. This dissertation focuses on SC-FD transmission schemes that exploit the STBC structure to provide computationally cost-efficient receivers in terms of equalization and channel estimation. The main contributions of this dissertation are as follows: • The original SC-FD STBC receiver that benchmarks STBC in a frequency selective channel is limited to coherent detection, where knowledge of the channel state information (CSI) is assumed at the receiver. We extend this receiver to a multiple access system. Through analysis and simulations we prove that the extended system does not incur any performance penalty. This key result implies that the SC-FD STBC scheme is suitable for multiple-user systems where higher data rates are possible. • The problem of channel estimation is considered in a time and frequency selective environment. The existing receiver is based on a recursive least squares (RLS) adaptive algorithm and provides joint equalization and interference suppression. We utilize a system with perfect channel state information (CSI) to show from simulations how various design parameters for the RLS algorithm can be selected in order to get near perfect CSI performance. • The RLS receiver has two modes of operation, viz. training mode and direct decision mode.
In training mode, a block of known symbols is used to make the initial estimate. To ensure convergence of the algorithm, a re-training interval must be predefined. This results in an increase in the system overhead. A linear predictor that utilizes knowledge of the autocorrelation function for a Rayleigh fading channel is developed. The predictor is combined with the adaptive receiver to provide a bandwidth efficient receiver by decreasing the training block size. The simulation results show that the performance penalty for the new system is negligible. • Finally, a new QR-based receiver is developed to provide a more robust solution than the RLS adaptive receiver. The simulation results clearly show that the new receiver outperforms the RLS based receiver at higher Doppler frequencies, where rapid channel variations result in numerical instability of the RLS algorithm. The linear predictor is also added to the new receiver, which results in a more robust and bandwidth efficient receiver. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2005.
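The RLS recursion at the heart of the adaptive receiver can be sketched in a few lines. This is generic textbook RLS fitted to a toy linear system, not the dissertation's joint equalizer; the forgetting factor and initialization constant are assumed values:

```python
import numpy as np

def rls(X, d, lam=0.99, delta=100.0):
    """Recursive least squares: track weights w so that x_n @ w ≈ d_n.
    lam is the forgetting factor; delta scales the initial inverse
    correlation matrix P (large delta = weak prior)."""
    n_taps = X.shape[1]
    w = np.zeros(n_taps)
    P = delta * np.eye(n_taps)
    for x, dn in zip(X, d):
        g = P @ x / (lam + x @ P @ x)       # gain vector
        e = dn - w @ x                      # a-priori error
        w = w + g * e                       # weight update
        P = (P - np.outer(g, x @ P)) / lam  # inverse-correlation update
    return w

rng = np.random.default_rng(0)
w_true = np.array([0.8, -0.4, 0.2])            # toy channel/equalizer taps
X = rng.normal(size=(400, 3))                  # received-signal regressors
d = X @ w_true + 0.01 * rng.normal(size=400)   # training symbols
w = rls(X, d)                                  # converges close to w_true
```

In training mode the d_n are known pilot symbols, as the abstract describes; in direct decision mode they are replaced by the receiver's own symbol decisions, which is what makes periodic re-training necessary.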
306

Parallel implementation of fractal image compression

Uys, Ryan F. January 2000 (has links)
Fractal image compression exploits the piecewise self-similarity present in real images as a form of information redundancy that can be eliminated to achieve compression. This theory, based on Partitioned Iterated Function Systems, is presented. As an alternative to the established JPEG, it provides a similar compression-ratio to fidelity trade-off. Fractal techniques promise faster decoding and potentially higher fidelity, but the computationally intensive compression process has prevented commercial acceptance. This thesis presents an algorithm mapping the problem onto a parallel processor architecture, with the goal of reducing the encoding time. The experimental work involved implementation of this approach on the Texas Instruments TMS320C80 parallel processor system. Results indicate that the fractal compression process is unusually well suited to parallelism, with speed gains approximately linearly related to the number of processors used. Parallel processing issues such as coherency, management and interfacing are discussed. The code designed incorporates pipelining and parallelism on all conceptual and practical levels, ensuring that all resources are fully utilised and achieving close to optimal efficiency. The computational intensity was reduced by several means, including conventional classification of image sub-blocks by content, with comparisons across class boundaries prohibited. A faster approach adopted was to perform estimated comparisons between blocks based on pixel value variance, identifying candidates for more time-consuming, accurate RMS inter-block comparisons. These techniques, combined with the parallelism, allow compression of 512×512-pixel, 8-bit images in under 20 seconds, while maintaining a 30 dB PSNR. This is up to an order of magnitude faster than reported for conventional sequential processor implementations. Fractal-based compression of colour images and video sequences is also considered.
The work confirms the potential of fractal compression techniques, and demonstrates that a parallel implementation is appropriate for addressing the compression time problem. The processor system used in these investigations is faster than currently available PC platforms, but the relevance lies in the anticipation that future generations of affordable processors will exceed its performance. The advantages of fractal image compression may then be accessible to the average computer user, leading to commercial acceptance. / Thesis (M.Sc.Eng.)-University of Natal, Durban, 2000.
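The variance pre-screening idea described in this entry — cheaply rejecting domain blocks whose pixel variance is far from the range block's before running the expensive RMS comparison — can be sketched as follows. The tolerance value and toy blocks are illustrative assumptions, not the thesis's parameters:

```python
import numpy as np

def best_match(range_block, domains, var_tol=0.5):
    """Find the domain block closest to range_block in RMS error,
    but only run the RMS comparison on domains whose pixel variance
    is within a relative tolerance of the range block's variance."""
    rv = range_block.var()
    best, best_err, compared = None, np.inf, 0
    for i, d in enumerate(domains):
        if abs(d.var() - rv) > var_tol * (rv + 1e-9):
            continue                                     # cheap rejection
        compared += 1
        err = np.sqrt(np.mean((d - range_block) ** 2))   # costly RMS test
        if err < best_err:
            best, best_err = i, err
    return best, best_err, compared

rng = np.random.default_rng(0)
range_block = rng.integers(0, 256, size=(8, 8)).astype(float)
domains = [rng.integers(0, 256, size=(8, 8)).astype(float) for _ in range(25)]
domains += [np.full((8, 8), float(v)) for v in range(25)]  # flat blocks: rejected
domains[7] = range_block + rng.normal(0, 2, size=(8, 8))   # near-identical domain
idx, err, compared = best_match(range_block, domains)
```

The flat candidate blocks never reach the RMS stage, which is the kind of saving that, combined with class-restricted search and parallelism, produced the reported encoding times.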
307

Implementation of an application specific low bit rate video compression scheme.

McIntosh, Ian James. January 2001 (has links)
The trend towards digital video has created huge demands on the link bandwidth required to carry the digital stream, giving rise to growing research into video compression schemes. General video compression standards, which focus on providing the best compression for any type of video scene, have been shown to perform badly at low bit rates and thus are not often used for such applications. A suitable low bit-rate scheme would be one that achieves a reasonable degree of quality over a range of compression ratios, while perhaps being limited to a small set of specific applications. One such application-specific scheme, as presented in this thesis, is to provide a differentiated image quality, allowing a user-defined region of interest to be reproduced at a higher quality than the rest of the image. The thesis begins by introducing some important concepts that are used for video compression, followed by a survey of relevant literature concerning the latest developments in video compression research. A video compression scheme, based on the Wavelet transform and using an application-specific idea, is proposed and implemented on a digital signal processor (DSP), the Philips Trimedia TM-1300. The scheme is able to capture and compress the video stream and transmit the compressed data via a low bit-rate serial link to be decompressed and displayed on a video monitor. A wide range of flexibility is supported, with the ability to change various compression parameters 'on-the-fly'. The compression algorithm is controlled by a PC application that displays the decompressed video and the original video for comparison, while displaying useful metrics such as Peak Signal to Noise Ratio (PSNR). Details of implementation and practicality are discussed. The thesis then presents examples and results from both implementation and testing before concluding with suggestions for further improvement. / Thesis (M.Sc.Eng.)-University of Natal, Durban, 2001.
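The wavelet transform underpinning such a scheme splits an image into an approximation subband and detail subbands; on smooth content most of the energy lands in the approximation, which is what makes coarse quantization of detail coefficients outside the region of interest cheap. A one-level 2-D Haar sketch illustrating the energy compaction (the thesis's actual filter choice is not specified here):

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar wavelet transform: split the image into
    approximation (LL) and horizontal/vertical/diagonal detail subbands."""
    a = (img[0::2] + img[1::2]) / 2.0       # row pairs: average
    d = (img[0::2] - img[1::2]) / 2.0       # row pairs: difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0    # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0    # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0    # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0    # diagonal detail
    return ll, lh, hl, hh

# Smooth toy image (a linear ramp): nearly all energy stays in LL,
# so the detail subbands can be quantized coarsely at little cost.
x = np.add.outer(np.arange(16.0), np.arange(16.0))
ll, lh, hl, hh = haar2d(x)
detail_energy = (lh**2).sum() + (hl**2).sum() + (hh**2).sum()
total_energy = detail_energy + (ll**2).sum()
```

A region-of-interest coder in this spirit would allocate finer quantization steps to the coefficients whose spatial support overlaps the user-defined region and coarser steps elsewhere.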
308

Robust multivariable control design : an application to a bank-to-turn missile.

Reddi, Yashren. January 2011 (has links)
Multi-input multi-output (MIMO) control system design is much more difficult than single-input single-output (SISO) design due to the combination of cross-coupling and uncertainty. An investigation is undertaken into both the classical Quantitative Feedback Theory (QFT) and modern H-infinity frequency domain design methods. These design tools are applied to a bank-to-turn (BTT) missile plant at multiple operating points for a gain-scheduled implementation. A new method is presented that exploits both QFT and H-infinity design methods. It is shown that this method gives insight into the H-infinity design and provides a classical approach to tuning the final H-infinity controller. The use of “true” inversion-free design equations, unlike the theory that appears in current literature, is shown to provide less conservative bounds at frequencies near and beyond the gain cross-over frequency. All of the techniques investigated and presented are applied to the BTT missile to show their application to a practical problem. It was found that the H-infinity design method was able to produce satisfactory controllers at high angles of attack where no QFT solutions were found. Although an H-infinity controller was produced for all operating points except the last, the controllers were found to be of very high order, to contain very poorly damped second-order terms, and to be generally more conservative than the QFT designs. An investigation into simultaneous stabilization of multiple plants using H-infinity is also presented. Although a solution to this was not found, a strongly justified case to entice further investigation is presented. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2011.
309

Investigating the performance of generator protection relays using a real-time simulator.

Huang, Yu-Ting. January 2013 (has links)
Real-time simulators have been utilized to perform hardware-in-loop testing of protection relays and power system controllers for some years. However, hardware-in-loop testing of generator protection relays has until recently been limited by a lack of suitable dynamic models of synchronous generators in the real-time simulation environment. Historically, the Park transformation has been chosen as the mathematical approach for dynamic modelling of electrical machines in simulation programs, since it greatly simplifies the dynamic equations. However, generator internal winding faults could not be represented faithfully with the aforementioned modelling approach due to its mathematical limitations. Recently, a new real-time phase-domain, synchronous machine model has become available that allows representation of internal winding faults in the stator circuits of a synchronous machine as well as faults in the excitation systems feeding the field circuits of these machines. The development of this phase-domain synchronous machine model for real-time simulators opens up the scope for hardware-in-loop testing of generator protection relays since the performance of various generator protection elements can now be examined using the advanced features provided by the new machine model. This thesis presents a thorough, research-based analysis of the new phase-domain synchronous generator model in order to assess its suitability for testing modern generator protection schemes. The thesis reviews the theory of operation and settings calculations of the various elements present in a particular representative modern numerical generator protection relay and describes the development of a detailed, real-time digital simulation model of a multi-generator system suitable for studying the performance of the protection functions provided within this relay. 
As part of the development of this real-time model, the thesis presents a custom-developed real-time modelling approach for representing the load-dependent third-harmonic voltages present in the windings of a large synchronous generator which are needed in order to test certain types of stator-winding protection schemes. The thesis presents the results of detailed, closed-loop testing of the representative generator protection relay hardware and its settings using the developed models on a real-time digital simulator. The results demonstrate the correctness of the modelling and testing approach and show that using the phase-domain synchronous machine model, together with the supplementary models presented in the thesis, it is possible to evaluate the performance of various generator protective functions that could not otherwise have been analysed using conventional machine models and testing techniques. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2013.
310

A Kalman filter model for signal estimation in the auditory system

Hauger, Martin Manfred. January 2005 (has links)
Thesis (M. Eng.)(Electronic)--University of Pretoria, 2005. / Summaries in English and Afrikaans. Includes bibliographical references.
