11

Investigating the combined appearance model for statistical modelling of facial images.

Allen, Nicholas Peter Legh. January 2007 (has links)
The combined appearance model is a linear, parameterized and flexible model which has emerged as a powerful tool for representing, interpreting, and synthesizing the complex, non-rigid structure of the human face. The inherent strength of this model arises from the utilization of a representative training set which provides a priori knowledge of the allowable appearance variation of the face. The model was introduced by Edwards et al. in 1998 as part of the Active Appearance Model framework, a template alignment algorithm which used the model to automatically locate deformable objects within images. Since this debut, the model has been utilized within a plethora of applications relating to facial image processing. In essence, the appearance model combines individual statistical models of shape and texture variation in order to produce a single model of correlations between both shape and texture. In the context of facial modelling, this approach produces a model which is flexible in that it can accommodate the range of variation found in the face, specific in that it is restricted to only facial instances, and compact in that a new facial instance may be synthesized using a small set of parameters. It is additionally this compactness which makes it a candidate for model-based video coding. Methods used in the past to model faces are reviewed and the capabilities of the statistical model in general are investigated. Various approaches to building the intermediate linear Point Distribution Models (PDMs) and grey-level models are outlined and an approach decided upon for implementation. The respective statistical models for the Informatics and Mathematical Modelling (IMM) and Extended Multi-Modal Verification for Teleservices and Security (XM2VTS) facial databases are built using MATLAB in an approach incorporating Procrustes Analysis, affine transform warping and Principal Components Analysis. The MATLAB implementation's integrity was validated against a similar approach encountered in the literature and found to produce results within 0.59%, 0.69% and 0.69% of those published for the shape, texture and combined models respectively. The models are consequently assessed with regard to their flexibility, specificity and compactness. The results demonstrate the model's ability to be successfully constrained to the synthesis of "legal" faces, to successfully parameterize and re-synthesize new unseen images from outside the training sets, and to significantly reduce the high dimensionality of input facial images to produce a powerful, compact model. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2007.
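For readers unfamiliar with the construction, the core of the combined model — PCA on the aligned shapes, PCA on the shape-normalised textures, then a second PCA on the weighted, concatenated parameter vectors — can be sketched in a few lines. This is a minimal illustration only, not the author's MATLAB implementation; the retained-variance fraction, the variance-based weighting of the shape parameters, and all names are assumptions.

```python
import numpy as np

def pca(data, var_frac=0.98):
    """PCA keeping enough components to explain var_frac of the variance."""
    mean = data.mean(axis=0)
    _, s, Vt = np.linalg.svd(data - mean, full_matrices=False)
    eigvals = (s ** 2) / (len(data) - 1)
    k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), var_frac)) + 1
    return mean, Vt[:k], eigvals[:k]

def build_combined_model(shapes, textures):
    """shapes: (n, 2m) Procrustes-aligned landmarks; textures: (n, p)
    shape-normalised grey-level vectors (after affine warping to the mean shape)."""
    s_mean, Ps, s_eig = pca(shapes)
    g_mean, Pg, g_eig = pca(textures)
    bs = (shapes - s_mean) @ Ps.T        # per-example shape parameters
    bg = (textures - g_mean) @ Pg.T      # per-example texture parameters
    # Weight shape parameters so shape and texture variances are commensurate
    w = np.sqrt(g_eig.sum() / s_eig.sum())
    b = np.hstack([w * bs, bg])          # concatenated parameter vectors
    return pca(b)                        # combined (appearance) model
```
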
12

MIMO equalization.

Mathew, Jerry George. January 2005 (has links)
In recent years, space-time block codes (STBC) for multi-antenna wireless systems have emerged as attractive encoding schemes for wireless communications. These codes provide full diversity gain and achieve good performance with simple receiver structures, without any additional increase in bandwidth or power requirements. When implemented over broadband channels, STBCs can be combined with orthogonal frequency division multiplexing (OFDM) or single carrier frequency domain (SC-FD) transmission schemes to achieve multi-path diversity and to decouple the broadband frequency-selective channel into independent flat fading channels. This dissertation focuses on SC-FD transmission schemes that exploit the STBC structure to provide computationally cost-efficient receivers in terms of equalization and channel estimation. The main contributions in this dissertation are as follows:
• The original SC-FD STBC receiver that benchmarks STBC in a frequency-selective channel is limited to coherent detection, where knowledge of the channel state information (CSI) is assumed at the receiver. We extend this receiver to a multiple access system. Through analysis and simulations we prove that the extended system does not incur any performance penalty. This key result implies that the SC-FD STBC scheme is suitable for multiple-user systems where higher data rates are possible.
• The problem of channel estimation is considered in a time- and frequency-selective environment. The existing receiver is based on a recursive least squares (RLS) adaptive algorithm and provides joint equalization and interference suppression. We utilize a system with perfect channel state information (CSI) to show from simulations how various design parameters for the RLS algorithm can be selected in order to approach perfect-CSI performance.
• The RLS receiver has two modes of operation, namely training mode and decision-directed mode. In training mode, a block of known symbols is used to make the initial estimate. To ensure convergence of the algorithm a re-training interval must be predefined, which increases the system overhead. A linear predictor that utilizes knowledge of the autocorrelation function for a Rayleigh fading channel is developed. The predictor is combined with the adaptive receiver to provide a bandwidth-efficient receiver by decreasing the training block size. The simulation results show that the performance penalty for the new system is negligible.
• Finally, a new QR-based receiver is developed to provide a more robust alternative to the RLS adaptive receiver. The simulation results clearly show that the new receiver outperforms the RLS-based receiver at higher Doppler frequencies, where rapid channel variations result in numerical instability of the RLS algorithm. The linear predictor is also added to the new receiver, which results in a more robust and bandwidth-efficient receiver. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2005.
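As a point of reference for the RLS-based receivers discussed above, the core recursion of a complex-valued RLS adaptive filter is sketched below. This is the textbook update only, not the dissertation's joint equalization/interference-suppression receiver; the forgetting factor and variable names are assumptions.

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One step of the complex RLS recursion.
    w: weight column vector, P: inverse input-correlation matrix,
    x: input (regressor) column vector, d: desired symbol, lam: forgetting factor."""
    k = (P @ x) / (lam + (x.conj().T @ P @ x).item())  # gain vector
    e = d - (w.conj().T @ x).item()                    # a priori error
    w = w + k * np.conj(e)                             # weight update
    P = (P - k @ (x.conj().T @ P)) / lam               # inverse-correlation update
    return w, P, e
```
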
13

Parallel implementation of fractal image compression

Uys, Ryan F. January 2000 (has links)
Fractal image compression exploits the piecewise self-similarity present in real images as a form of information redundancy that can be eliminated to achieve compression. The theory, based on Partitioned Iterated Function Systems (PIFS), is presented. As an alternative to the established JPEG, it provides a similar compression-ratio-to-fidelity trade-off. Fractal techniques promise faster decoding and potentially higher fidelity, but the computationally intensive compression process has prevented commercial acceptance. This thesis presents an algorithm mapping the problem onto a parallel processor architecture, with the goal of reducing the encoding time. The experimental work involved implementation of this approach on the Texas Instruments TMS320C80 parallel processor system. Results indicate that the fractal compression process is unusually well suited to parallelism, with speed gains approximately linearly related to the number of processors used. Parallel processing issues such as coherency, management and interfacing are discussed. The code designed incorporates pipelining and parallelism on all conceptual and practical levels, ensuring that all resources are fully utilised and achieving close to optimal efficiency. The computational intensity was reduced by several means, including conventional classification of image sub-blocks by content, with comparisons across class boundaries prohibited. A faster approach adopted was to perform estimate comparisons between blocks based on pixel-value variance, identifying candidates for the more time-consuming, accurate RMS inter-block comparisons. These techniques, combined with the parallelism, allow compression of 512×512-pixel, 8-bit images in under 20 seconds while maintaining a 30 dB PSNR. This is up to an order of magnitude faster than reported for conventional sequential processor implementations. Fractal-based compression of colour images and video sequences is also considered. The work confirms the potential of fractal compression techniques, and demonstrates that a parallel implementation is appropriate for addressing the compression time problem. The processor system used in these investigations is faster than currently available PC platforms, but the relevance lies in the anticipation that future generations of affordable processors will exceed its performance. The advantages of fractal image compression may then be accessible to the average computer user, leading to commercial acceptance. / Thesis (M.Sc.Eng.)-University of Natal, Durban, 2000.
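The variance pre-screening step mentioned above — cheap comparisons that shortlist candidates before the expensive RMS search — can be illustrated with a short sketch. This is a simplified illustration under assumed names and parameters, not the thesis's TMS320C80 implementation; a complete encoder would also fit a contrast/brightness transform per candidate before the RMS comparison.

```python
import numpy as np

def variance_screen(range_block, domain_blocks, top_k=8):
    """Cheap pre-screen: keep only the domain blocks whose pixel-value
    variance is closest to that of the range block."""
    rv = range_block.var()
    dv = domain_blocks.reshape(len(domain_blocks), -1).var(axis=1)
    return np.argsort(np.abs(dv - rv))[:top_k]

def best_match(range_block, domain_blocks):
    """Full RMS comparison, restricted to the variance-screened candidates."""
    idx = variance_screen(range_block, domain_blocks)
    errs = [np.sqrt(np.mean((domain_blocks[i] - range_block) ** 2)) for i in idx]
    return idx[int(np.argmin(errs))]
```
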
14

Implementation of an application specific low bit rate video compression scheme.

McIntosh, Ian James. January 2001 (has links)
The trend towards digital video has created huge demands on the link bandwidth required to carry the digital stream, giving rise to growing research into video compression schemes. General video compression standards, which focus on providing the best compression for any type of video scene, have been shown to perform badly at low bit rates and thus are not often used for such applications. A suitable low bit-rate scheme would be one that achieves a reasonable degree of quality over a range of compression ratios, while perhaps being limited to a small set of specific applications. One such application-specific scheme, as presented in this thesis, is to provide a differentiated image quality, allowing a user-defined region of interest to be reproduced at a higher quality than the rest of the image. The thesis begins by introducing some important concepts that are used for video compression, followed by a survey of relevant literature concerning the latest developments in video compression research. A video compression scheme, based on the wavelet transform and using this application-specific idea, is proposed and implemented on a digital signal processor (DSP), the Philips TriMedia TM-1300. The scheme is able to capture and compress the video stream and transmit the compressed data via a low bit-rate serial link, to be decompressed and displayed on a video monitor. A wide range of flexibility is supported, with the ability to change various compression parameters 'on-the-fly'. The compression algorithm is controlled by a PC application that displays the decompressed video and the original video for comparison, while displaying useful rate metrics such as Peak Signal-to-Noise Ratio (PSNR). Details of implementation and practicality are discussed. The thesis then presents examples and results from both implementation and testing before concluding with suggestions for further improvement. / Thesis (M.Sc.Eng.)-University of Natal, Durban, 2001.
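The PSNR metric reported by the PC application has the standard definition; a minimal sketch for 8-bit frames is shown below (the function name and default peak value are assumptions).

```python
import numpy as np

def psnr(original, decoded, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two 8-bit frames."""
    diff = original.astype(np.float64) - decoded.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```
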
15

Robust multivariable control design: an application to a bank-to-turn missile.

Reddi, Yashren. January 2011 (has links)
Multi-input multi-output (MIMO) control system design is much more difficult than single-input single-output (SISO) design due to the combination of cross-coupling and uncertainty. An investigation is undertaken into both the classical Quantitative Feedback Theory (QFT) and modern H-infinity frequency domain design methods. These design tools are applied to a bank-to-turn (BTT) missile plant at multiple operating points for a gain-scheduled implementation. A new method is presented that exploits both QFT and H-infinity design methods. It is shown that this method gives insight into the H-infinity design and provides a classical approach to tuning the final H-infinity controller. The use of "true" inversion-free design equations, unlike the theory that appears in current literature, is shown to provide less conservative bounds at frequencies near and beyond the gain crossover frequency. All of the techniques investigated and presented are applied to the BTT missile to show their application to a practical problem. It was found that the H-infinity design method was able to produce satisfactory controllers at high angles of attack, where no QFT solutions were found. Although an H-infinity controller was produced for all operating points except the last, the controllers were found to be of very high order, to contain very poorly damped second-order terms, and to be generally more conservative than the QFT designs. An investigation into simultaneous stabilization of multiple plants using H-infinity is also presented. Although a solution to this was not found, a strongly justified case to entice further investigation is presented. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2011.
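The kind of frequency-domain robustness check that QFT formalises — a closed-loop magnitude bound that must hold over every plant case in an uncertainty template — can be illustrated numerically. The sketch below is generic: the first-order plant family, the PI controller, the 1.46 (≈3 dB) peak bound and all names are assumptions, not the missile design or bounds from the thesis.

```python
import numpy as np

def meets_bound(plants, controller, freqs, m_max=1.46):
    """Check a QFT-style closed-loop magnitude bound |L/(1+L)| <= m_max
    over every plant case in the template and every trial frequency.
    plants, controller: callables mapping s = jw to a complex gain."""
    worst = 0.0
    for wf in freqs:
        s = 1j * wf
        for plant in plants:
            L = plant(s) * controller(s)        # open-loop response
            worst = max(worst, abs(L / (1 + L)))
    return worst <= m_max, worst

# Example: first-order plant family with gain uncertainty, PI controller
plants = [lambda s, k=k: k / (s + 1) for k in (0.5, 1.0, 2.0)]
ctrl = lambda s: 2.0 + 1.0 / s
ok, peak = meets_bound(plants, ctrl, np.logspace(-2, 2, 200))
```
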
16

Investigating the performance of generator protection relays using a real-time simulator.

Huang, Yu-Ting. January 2013 (has links)
Real-time simulators have been utilized to perform hardware-in-the-loop testing of protection relays and power system controllers for some years. However, hardware-in-the-loop testing of generator protection relays has until recently been limited by a lack of suitable dynamic models of synchronous generators in the real-time simulation environment. Historically, the Park transformation has been chosen as the mathematical approach for dynamic modelling of electrical machines in simulation programs, since it greatly simplifies the dynamic equations. However, generator internal winding faults could not be represented faithfully with this modelling approach due to its mathematical limitations. Recently, a new real-time phase-domain synchronous machine model has become available that allows representation of internal winding faults in the stator circuits of a synchronous machine, as well as faults in the excitation systems feeding the field circuits of these machines. The development of this phase-domain synchronous machine model for real-time simulators opens up the scope for hardware-in-the-loop testing of generator protection relays, since the performance of various generator protection elements can now be examined using the advanced features provided by the new machine model. This thesis presents a thorough, research-based analysis of the new phase-domain synchronous generator model in order to assess its suitability for testing modern generator protection schemes. The thesis reviews the theory of operation and settings calculations of the various elements present in a particular representative modern numerical generator protection relay, and describes the development of a detailed, real-time digital simulation model of a multi-generator system suitable for studying the performance of the protection functions provided within this relay. As part of the development of this real-time model, the thesis presents a custom-developed real-time modelling approach for representing the load-dependent third-harmonic voltages present in the windings of a large synchronous generator, which are needed in order to test certain types of stator-winding protection schemes. The thesis presents the results of detailed, closed-loop testing of the representative generator protection relay hardware and its settings using the developed models on a real-time digital simulator. The results demonstrate the correctness of the modelling and testing approach, and show that using the phase-domain synchronous machine model, together with the supplementary models presented in the thesis, it is possible to evaluate the performance of various generator protective functions that could not otherwise have been analysed using conventional machine models and testing techniques. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2013.
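One stator-winding protection scheme that the third-harmonic model enables is the classic neutral/terminal third-harmonic ratio check used for 100% stator earth-fault coverage. The sketch below is a simplified illustration of that principle only; the pickup value and signal names are assumptions, not the relay settings or models used in the thesis.

```python
def third_harmonic_trip(v3_neutral, v3_terminal, pickup=0.15):
    """Simplified ratio element: under normal load the third-harmonic voltage
    divides between the neutral and terminal ends of the stator winding; a
    ground fault near the neutral collapses v3_neutral, driving the ratio
    below the pickup and asserting a trip."""
    total = v3_neutral + v3_terminal
    return total > 0 and (v3_neutral / total) < pickup
```
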
17

Constant modulus based blind adaptive multiuser detection.

January 2004 (has links)
Signal processing techniques such as multiuser detection (MUD) have the capability of greatly enhancing the performance and capacity of future generation wireless communications systems. Blind adaptive MUDs have many favourable qualities, and their application to DS-CDMA systems has attracted a lot of attention. The constant modulus algorithm is widely deployed in blind channel equalization applications. The central premise of this thesis is that the constant modulus cost function is very suitable for the purposes of blind adaptive MUD for future generation wireless communications systems. To prove this point, the adaptive performance of blind (and non-blind) adaptive MUDs is derived analytically for all the schemes that can be made to fit the same generic structure as the constant modulus scheme. For the first time, both the relative and absolute performance levels of the different adaptive algorithms are computed, which gives insights into the performance levels of the different blind adaptive MUD schemes and demonstrates the merit of the constant modulus based schemes. The adaptive performance of the blind adaptive MUDs is quantified using the excess mean square error (EMSE) as a metric, and is derived for the steady-state, tracking, and transient stages of the adaptive algorithms. If constant modulus based MUDs are suitable for future generation wireless communications systems, then they should also be capable of suppressing multi-rate DS-CDMA interference and also demonstrate the ability to suppress narrow-band interference (NBI) that arises in overlay systems. Multi-rate DS-CDMA provides the capability of transmitting at various bit rates and quality of service levels over the same air interface. Limited spectrum availability may lead to the implementation of overlay systems whereby wide-band CDMA signals are co-located with existing narrow-band services. Both overlay systems and multi-rate DS-CDMA are important features of future generation wireless communications systems. The interference patterns generated by both multi-rate DS-CDMA and digital NBI are cyclostationary (or periodically time-varying), and traditional MUD techniques do not take this into account and are thus suboptimal. Cyclic MUDs, although suboptimal, do take the cyclostationarity of the interference into account, but to date no cyclic MUDs based on the constant modulus cost function have been proposed. This thesis thus derives novel, blind adaptive, cyclic MUDs based on the constant modulus cost function, for direct implementation on the FREquency SHift (FRESH) filter architecture. The FRESH architecture provides a modular and thus flexible implementation (in terms of computational complexity) of a periodically time-varying filter. The operation of the blind adaptive MUD on these reduced-complexity architectures is also explored. The robustness of the new cyclic MUD is proven via a rigorous mathematical proof. An alternative architecture to the FRESH filter is the filter bank. Using the previously derived analytical framework for the adaptive performance of MUDs, the relative performance of the adaptive algorithms on the FRESH and filter bank architectures is examined. Prior to this thesis, no conclusions could be drawn as to which architecture would yield superior performance.
The performance analysis of the adaptive algorithms is also extended in this thesis in order to consider the effects of timing jitter at the receiver, signature waveform mismatch, and other pertinent issues that arise in realistic implementation scenarios. Thus, through a careful analytical approach, which is verified by computer simulation results, the suitability of constant modulus based MUDs is established in this thesis. / Thesis (Ph.D.)-University of KwaZulu-Natal, Durban, 2004.
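The constant modulus cost function J = E[(|y|² − R₂)²] that underlies these detectors leads to a very simple stochastic-gradient update. The sketch below is the textbook CMA step for a linear filter, shown only to make the cost function concrete; the step size and names are assumptions, and the thesis's cyclic FRESH-filter detectors are considerably more elaborate.

```python
import numpy as np

def cma_step(w, x, mu=1e-3, R2=1.0):
    """One constant-modulus update for a linear detector.
    w: weight vector, x: received vector, mu: step size,
    R2: dispersion constant (1.0 for unit-modulus symbols)."""
    y = np.vdot(w, x)                 # filter output y = w^H x
    e = (np.abs(y) ** 2 - R2) * y     # constant-modulus error term
    w = w - mu * np.conj(e) * x       # stochastic-gradient descent step
    return w, y
```
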
18

Traffic modelling and analysis of next generation networks.

Walingo, Tom. January 2008 (has links)
Wireless communication systems have demonstrated tremendous growth over the last decade, and this growth continues unabated worldwide. The networks have evolved from analogue-based first generation systems to third generation systems and beyond. We are envisaging a Next Generation Network (NGN) that should deliver anything, anywhere, anytime, with full quality of service (QoS) guarantees. Delivering anything anywhere anytime is a challenge that is a focus for many researchers. Careful teletraffic design is required for this ambitious project to be realized. This research goes through the protocol choices, design factors, performance measures and the teletraffic analysis necessary to make the project feasible. The first significant contribution of this thesis is the development of a Call Admission Control (CAC) model as a means of achieving QoS in NGNs. The proposed CAC model uses an expanded set of admission control parameters. The existing CAC schemes focus on one major QoS parameter for CAC; the Code Division Multiple Access (CDMA) based models focus on the signal-to-interference ratio (SIR), while the Asynchronous Transfer Mode (ATM) based models focus on delay. A key element of NGNs is the inter-working of many protocols, and hence the need for a diverse set of admission control parameters. The developed CAC algorithm uses an expanded set of admission control parameters (SIR, delay, etc.). The admission parameters can be generalized as broadly as the design engineer might require for a particular traffic class without rendering the analysis intractable. The second significant contribution of this thesis is the presentation of a complete teletraffic analytical model for an NGN. The NGN network features the following issues: firstly, the NGN call admission control algorithm, with expanded admission control parameters; secondly, multiple traffic types, with their diverse demands; thirdly, NGN protocol issues such as CDMA's soft capacity; and finally, scheduling on both the wired and wireless links. A full teletraffic analysis with all analytical challenges is presented. The analysis shows that an NGN teletraffic model with more traffic parameters performs better than a model with fewer traffic parameters. The third contribution of the thesis is the extension of the model to traffic arrivals that are not purely Markovian. This work presents a complete teletraffic analytical model with Batch Markovian Arrival Process (BMAP) traffic statistics, unlike the conventional Markovian types. The Markovian traffic models are deployed for analytical simplicity at the expense of realistic traffic types. With CAC, the BMAP processes become non-homogeneous. The analysis of homogeneous BMAP processes is extended to non-homogeneous processes for the teletraffic model in this thesis. This is done while incorporating all the features of the NGN network. A feasible analytical model for an NGN must combine factors from all the areas of the protocol stack. Most models only consider physical layer issues such as SIR or network layer issues such as packet delay. They either address call-level issues or packet-level issues on the network. The fourth contribution has been to incorporate the issues of the transport layer into the admission control algorithm. A complete teletraffic analysis of our network with the effects of the transport layer protocol, the Transmission Control Protocol (TCP), is performed. This is done over a wireless channel.
The wireless link and the protocol are mathematically modelled; thereafter, the protocol's effect on network performance is thoroughly presented. / Thesis (Ph.D.)-University of KwaZulu-Natal, Durban, 2008.
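The expanded-parameter admission test can be pictured as a conjunction of per-parameter checks, one per QoS dimension (SIR, delay, capacity, and so on). The following sketch is only an illustration of that structure, with invented classes and thresholds; the thesis's CAC model is an analytical teletraffic model, not this code.

```python
from dataclasses import dataclass

@dataclass
class CallRequest:
    rate: float       # requested bit rate (kb/s)
    sir_min: float    # minimum tolerable SIR (dB)
    delay_max: float  # maximum tolerable packet delay (ms)

@dataclass
class CellState:
    free_capacity: float      # residual link capacity (kb/s)
    sir_if_admitted: float    # predicted SIR with the new call added (dB)
    delay_if_admitted: float  # predicted delay with the new call added (ms)

def admit(call: CallRequest, cell: CellState) -> bool:
    """Admit only if every parameter in the expanded QoS set is satisfied."""
    return (cell.sir_if_admitted >= call.sir_min
            and cell.delay_if_admitted <= call.delay_max
            and cell.free_capacity >= call.rate)
```
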
19

The design of high-voltage planar transistors with specific reference to the collector region.

Smithies, Stafford Alun. January 1984 (has links)
The thesis represents a major contribution to the understanding of the design and fabrication of high-voltage planar silicon bipolar transistors, and reports on the original research carried out and the special methods evolved leading to the successful design, development and industrialization of two highly specialized transistors. The development of these transistors, destined for high-reliability applications in subscriber telephone systems, was funded by the South African Department of Posts and Telecommunications. The first device developed was a discrete transistor meeting the requirements of a singularly difficult specification that included the following:
• An accurately controlled upper limit to quasi-saturation operation, so that above a collector-emitter voltage of 4 V at 60 mA the device characteristics should be extremely linear.
• An extremely small range of acceptable gains, with lower and upper limits of 80 and 180 respectively.
• Both accurately reproducible and high breakdown voltages exceeding 200 V.
• The ability to withstand 100 W pulses of 10 µs duration at a case temperature of 95 °C and a collector-emitter voltage of 130 V.
The second device represents a design and development breakthrough resulting in a unique high-voltage integrated Darlington transistor incorporating the following design features. The standard discrete high-voltage transistors used initially in the Darlington application were found to fail frequently due to an external breakdown mechanism under lightning surge conditions, which are common in South Africa. To overcome this weakness, the integrated Darlington incorporates a special clamping circuit to absorb the surge energy non-destructively within the bulk of the device and thereby prevent external breakdown. To act as an electrostatic shielding system, a new 'inverted metallization structure' was developed and incorporated in the Darlington transistor design. With this structure it was possible to realize transistors with a combination of extremely high gains, approaching 10⁵, and very low collector-emitter leakage currents, often lower than 1 nA at an applied 240 V; no device with comparable properties has been reported elsewhere. During the development of the integrated Darlington it was recognized that there was a need for a simple yet accurate method of predicting quasi-saturation operation. This consideration led to the development of a totally new, user-orientated, graphical model for predicting the gain of a transistor when operating in the quasi-saturation mode: a model involving the use of entirely new yet easily measured parameters. The model was successfully applied to the verification of the Darlington design and the optimization of processing parameters for the device. Although undertaken in a research environment, the projects were handled under pressures normally associated with industrial conditions. Time schedules were constrained, and this influenced design strategy. As a consequence, however, the need arose to develop fast and efficient design aids, since much of the theoretical design was implemented for production without recourse to long-term experimental verification in the laboratory. Whilst the author viewed this approach as less than ideal, the successful production of almost two million of these highly specialized devices, including both types, has lent authority to the design techniques developed.
In spite of the industry-like pressures imposed during the course of the work, many aspects of the development programmes were further investigated and refined by research that would have been omitted had the author accepted the realization of a working device as the only goal. This research has not only contributed to the production of devices of exceptionally high quality, but has also produced a wealth of new information valuable to future designers. These aids include a new and highly accurate correction for the parasitic collector resistance of a transistor; design data for the specification of epitaxial layers for transistors with collector-emitter breakdown voltages ranging between 5 V and 800 V; information on Gate Associated Transistor (GAT) structures; and the entirely new graphical method, mentioned above, for modelling saturation effects in bipolar transistors. Process development was successfully carried out within the strict confines of compatibility with available equipment, and the prerequisite that the existing production of low-voltage bipolar integrated circuits should in no way be compromised. Successful transfer of the technology, followed by industrialization, has demonstrated the effectiveness of a method developed by the author for the rapid communication and dissemination of appropriate information in a system without precedents for such procedures. Listed below are other examples showing that useful information was gathered and new techniques developed. Emitter-region defects associated with the metallization process were identified. Test data were used to monitor project performance and in the development of data management techniques. Interaction with the author resulted in the establishment of the first Quality Assurance and Audit function for microelectronics activities by the Department of Posts and Telecommunications in the Republic of South Africa. The group formed had the authority to handle the certification of semiconductor capabilities and the qualification for service of semiconductor components. An entirely new continuous failure analysis programme was introduced covering both the products manufactured and similar types from other sources: a programme that has brought to light the major failure mechanisms in the high-voltage transistors. On the basis of the knowledge gained during the research and development programmes it has been possible to make recommendations, substantiated by preliminary investigations, for further original research work on a new type of negative-resistance high-voltage device. This would initially be destined for use in subscriber telephones to improve their immunity to surges, and it would form the basis of the development of a totally new type of interface circuit with in-built protection against surges, for application at the subscriber line interface in electronic exchanges. / Thesis (Ph.D.) - University of Natal, Durban, 1984.
20

A structure from motion solution to head pose recovery for model-based video coding.

Heathcote, Jonathan Michael. January 2005 (has links)
Current hybrid coders such as H.261/263/264 or MPEG-1/-2 cannot always offer high quality-to-compression ratios for video transfer over the (low-bandwidth) wireless channels typical of handheld devices (such as smartphones and PDAs). Often these devices are utilised in videophone and teleconferencing scenarios, where the subjects of interest in the scene are people's faces. In these cases, an alternative coding scheme known as Model-Based Video Coding (MBVC) can be employed. MBVC systems for face scenes utilise geometrically and photorealistically accurate computer graphic models to represent head-and-shoulders views of people in a scene. High compression ratios are achieved at the encoder by extracting and transmitting only the parameters which represent the explicit shape and motion changes occurring on the face in the scene. With some a priori knowledge (such as the MPEG-4 standard for facial animation parameters), the transmitted parameters can be used at the decoder to accurately animate the graphical model, and a synthesised version of the scene (originally appearing at the encoder) can be output. Primary components for facial re-animation at the decoder are a set of local and global motion parameters extracted from the video sequence appearing at the encoder. Local motion describes the changes in facial expression occurring on the face. Global motion describes the three-dimensional motion of the entire head as a rigid object. Extraction of this three-dimensional global motion is often called head tracking. This thesis focuses on the tracking of rigid head pose in a monocular video sequence. The system framework utilises the recursive Structure from Motion (SfM) method of Azarbayejani and Pentland. Integral to the SfM solution are a large number of manually selected two-dimensional feature points, which are tracked throughout the sequence using an efficient image registration technique. The trajectories of the feature points are simultaneously processed by an extended Kalman filter (EKF) to stably recover camera geometry and the rigid three-dimensional structure and pose of the head. To improve estimation accuracy and stability, adaptive estimation is harnessed within the Kalman filter by dynamically varying the noise associated with each of the feature measurements. A closed-loop approach is used to constrain feature tracking in each frame: the Kalman filter's estimates of the motion and structure of the face are used to predict the trajectory of the features, thereby constraining the search space for the next frame in the video sequence. Further robustness in feature tracking is achieved through the integration of a linear appearance basis to accommodate variations in illumination or changes in aspect of the face. Synthetic experiments are performed for both the SfM and the feature tracking algorithm. The accuracy of the SfM solution is evaluated against synthetic ground truth. Further experimentation demonstrates the stability of the framework under significant noise corruption of the arriving measurement data. The accuracy of obtained pixel measurements in the feature tracking algorithm is also evaluated against known ground truth. Additional experiments confirm feature tracking stability despite significant changes in target appearance. Experiments with real video sequences illustrate the robustness of the complete head tracker to partial occlusions of the face. The SfM solution (including two-dimensional tracking) runs near real time at 12 Hz.
The limits of pitch, yaw and roll (rotational) recovery are 45°, 45° and 90° respectively. Large translational recovery (especially depth) is also demonstrated. The estimated motion trajectories are validated against (publicly available) ground-truth motion captured using a commercial magnetic orientation tracking system. Rigid re-animation of an overlaid wire-frame face model is further used as a visually subjective analysis technique. These combined results serve to confirm the suitability of the proposed head tracker as the global (rigid) motion estimator in an MBVC system. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2005.
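At the heart of the recursive SfM framework is the standard EKF predict/update cycle, with the measurement-noise covariance R varied per feature to realise the adaptive weighting described above. The sketch below shows only the generic cycle, with Jacobians supplied by the caller; the specific state parameterisation of Azarbayejani and Pentland is not reproduced here, and all names are assumptions.

```python
import numpy as np

def ekf_step(x, P, z, f, h, F, H, Q, R):
    """One EKF cycle. x, P: state estimate and covariance; z: stacked 2-D
    feature measurements; f, h: process and measurement functions;
    F, H: their Jacobians (F at x, H at the predicted state);
    Q, R: process and measurement noise covariances."""
    # Predict
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update
    innov = z - h(x_pred)                   # measurement residual
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ innov
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```
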
