  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Bi-modal biometrics authentication on iris and signature.

Viriri, Serestina. January 2010
Multi-modal biometrics is one of the most promising avenues to address the performance problems in biometrics-based personal authentication systems. While uni-modal biometric systems have bolstered personal authentication better than traditional security methods, the main challenges remain the restricted degrees of freedom, non-universality and spoof attacks of the traits. In this research work, we investigate the performance improvement in bi-modal biometrics authentication systems based on a physiological trait, the iris, and a behavioral trait, the signature. We investigate a model to detect the largest non-occluded rectangular part of the iris as a region of interest (ROI) from which iris features are extracted by a cumulative-sums-based grey change analysis algorithm and Gabor filters. In order to address the space complexity of biometric systems, we propose two majority-vote-based algorithms which compute prototype iris feature codes as the reliable specimen templates. Experiments yielded a success rate of 99.6%. A text-based directional signature verification algorithm is investigated. The algorithm verifies signatures even when they are composed of symbols and special unconstrained cursive characters which are superimposed and embellished. The experimental results show that the proposed approach has an improved true positive rate of 94.95%. A user-specific weighting technique, based on the different degrees of importance of the iris and signature traits for an individual, is proposed. Then, an intelligent dual ν-support vector machine (2ν-SVM) based fusion algorithm is used to integrate the weighted match scores of the iris and signature modalities at the matching score level. The bi-modal biometrics system obtained a false rejection rate (FRR) of 0.008, and a false acceptance rate (FAR) of 0.001. / Thesis (Ph.D.)-University of KwaZulu-Natal, Westville, 2010.
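The matching-score-level fusion described above can be sketched as a weighted sum of normalised match scores. This is a minimal illustration only: the weights and threshold below are hypothetical, and the thesis itself learns the fusion with a 2ν-SVM rather than a fixed rule.

```python
def fuse_scores(iris_score, sig_score, w_iris=0.6, w_sig=0.4, threshold=0.5):
    """Weighted-sum fusion of two normalised match scores in [0, 1].

    w_iris / w_sig model user-specific trait weights (hypothetical
    values); the decision threshold is likewise illustrative.
    """
    assert abs(w_iris + w_sig - 1.0) < 1e-9  # weights form a convex combination
    fused = w_iris * iris_score + w_sig * sig_score
    return fused, fused >= threshold
```

Per-user weights would be tuned from the relative reliability of each trait for that individual, which is the role the user-score-based weighting plays above.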

On case representation and indexing in a case-based reasoning system for waste management.

Wortmann, Karl Lyndon. January 1997
Case-Based Reasoning is a fairly new Artificial Intelligence technique which makes use of past experience as the basis for solving new problems. Typically, a case-based reasoning system stores actual past problems and solutions in memory as cases. Due to its ability to reason from actual experience and to save solved problems, and thus learn automatically, case-based reasoning has been found to be applicable to domains for which techniques such as rule-based reasoning have traditionally not been well suited, such as experience-rich, unstructured domains. This applicability has led to it becoming a viable new artificial intelligence topic from both a research and an application perspective. This dissertation concentrates on researching and implementing indexing techniques for case-based reasoning. Case representation is researched as a requirement for implementing indexing techniques, and pre-transportation decision making for hazardous waste handling is used as the domain for applying and testing the techniques. The field of case-based reasoning is covered in general, and case representation and indexing are researched in detail. A single case representation scheme was designed and implemented. Five indexing techniques were designed, implemented and tested. Their effectiveness is assessed in relation to each other and to other reasoners, and the implications of their use as the basis for a case-based reasoning intelligent decision support system for pre-transportation decision making in hazardous waste handling are briefly assessed. / Thesis (M.Sc.)-University of Natal, Pietermaritzburg, 1997.
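Nearest-neighbour retrieval over feature-indexed cases, the core operation such a reasoner performs, can be sketched as follows. The feature names, weights and waste-handling cases are invented for illustration and do not reproduce the dissertation's representation scheme or its five indexing techniques.

```python
def similarity(probe, case, weights):
    # Weighted fraction of indexed features on which the probe problem
    # and a stored case agree.
    total = sum(weights.values())
    score = sum(w for f, w in weights.items() if probe.get(f) == case.get(f))
    return score / total

def retrieve(probe, case_base, weights):
    # Return the stored case most similar to the probe (nearest neighbour);
    # its recorded solution is then adapted to the new problem.
    return max(case_base, key=lambda c: similarity(probe, c, weights))
```

Indexing techniques differ mainly in how they avoid scoring every case in the base; this exhaustive scan is the simplest baseline.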

Planarity testing and embedding algorithms.

Carson, D. I. January 1990
This thesis deals with several aspects of planar graphs, and some of the problems associated with non-planar graphs. Chapter 1 is devoted to introducing some of the fundamental notation and tools used in the remainder of the thesis. Graphs serve as useful models of electronic circuits. It is often of interest to know if a given electronic circuit has a layout on the plane so that no two wires cross. In Chapter 2, three efficient algorithms are described for determining whether a given 2-connected graph (which may model such a circuit) is planar. The first planarity testing algorithm uses a path addition approach. Although this algorithm is efficient, it does not have linear complexity. However, the second planarity testing algorithm has linear complexity, and uses a recursive fragment addition technique. The last planarity testing algorithm also has linear complexity, and relies on a relatively new data structure called PQ-trees which have several important applications to planar graphs. This algorithm uses a vertex addition technique. Chapter 3 further develops the idea of modelling an electronic circuit using a graph. Knowing that a given electronic circuit may be placed in the plane with no wires crossing is often insufficient. For example, some electronic circuits often have in excess of 100 000 nodes. Thus, obtaining a description of such a layout is important. In Chapter 3 we study two algorithms for obtaining such a description, both of which rely on the PQ-tree data structure. The first algorithm determines a rotational embedding of a 2-connected graph. Given a rotational embedding of a 2-connected graph, the second algorithm determines if a convex drawing of a graph is possible. If a convex drawing is possible, then we output the convex drawing. In Chapter 4, we concern ourselves with graphs that have failed a planarity test of Chapter 2. This is of particular importance, since complex electronic circuits often do not allow a layout on the plane. 
We study three different ways of approaching the problem of an electronic circuit modelled on a non-planar graph, all of which use the PQ-tree data structure. We study an algorithm for finding an upper bound on the thickness of a graph, an algorithm for determining the subgraphs of a non-planar graph which are subdivisions of the Kuratowski graphs K5 and K3,3, and lastly we present a new algorithm for finding an upper bound on the genus of a non-planar graph. / Thesis (M.Sc.)-University of Natal, Durban, 1990.
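A quick necessary condition behind such planarity tests follows from Euler's formula: a simple connected planar graph with v ≥ 3 vertices has at most 3v − 6 edges, and at most 2v − 4 if it is also bipartite. The sketch below applies only this edge count; it is far weaker than the path-, fragment- and vertex-addition algorithms of Chapter 2, which decide planarity exactly.

```python
def maybe_planar(v, e, bipartite=False):
    """Necessary (not sufficient) edge-count test from Euler's formula.

    Returns False only when the graph certainly cannot be planar;
    True means the edge count alone does not rule planarity out.
    """
    if v < 3:
        return True  # bound only applies for v >= 3
    bound = 2 * v - 4 if bipartite else 3 * v - 6
    return e <= bound
```

K5 fails the general bound (10 > 9) and K3,3 fails the bipartite bound (9 > 8), which is why Kuratowski's theorem singles out exactly these two graphs; note that K3,3 passes the general bound, showing the test is only necessary.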

Qualitative and structural analysis of video sequences.

Brits, Alessio. 17 October 2013
This thesis analyses videos in two distinct ways so as to improve both human understanding and the computer description of events that unfold in video sequences. Qualitative analysis can be used to understand a scene in which many details are not needed. However, for there to be an accurate interpretation of a scene, a computer system has to first evaluate discretely the events in a scene. Such a method must involve structural features and the shapes of the objects in the scene. In this thesis we perform qualitative analysis on a road scene and generate terms that can be understood by humans and that describe the status of the traffic and its congestion. Areas in the video that contain vehicles are identified regardless of scale. The movement of the vehicles is further identified and a rule-based technique is used to accurately determine the status of the traffic and its congestion. Occlusion is a common problem in tracking for scene analysis. A novel technique is developed to vertically separate groups of people in video sequences. A histogram is generated based on the shape of a group of people and its valleys are identified. A vertical seam for each valley is then detected using the intensity of the edges. This is then used as the separation boundary between the different individuals, and can improve the tracking of people in a crowd. Both techniques achieve good results, with the qualitative analysis accurately describing the status and congestion of a traffic scene, while the structural analysis can separate a group of people into distinctly separate persons. / Thesis (M.Sc.)-University of KwaZulu-Natal, Westville, 2011.
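The histogram-valley idea for separating a group can be sketched on a binary silhouette mask: project foreground pixels onto columns and take interior local minima of the profile as candidate separation seams. This simplified sketch omits the edge-intensity seam refinement described above.

```python
def vertical_profile(mask):
    # Column-wise count of foreground pixels (1s) in a binary silhouette
    # mask given as a list of rows; this is the shape histogram.
    return [sum(row[c] for row in mask) for c in range(len(mask[0]))]

def find_valleys(profile):
    # Interior strict local minima of the profile: columns where the
    # silhouette pinches in, i.e. candidate boundaries between people.
    return [i for i in range(1, len(profile) - 1)
            if profile[i] < profile[i - 1] and profile[i] < profile[i + 1]]
```

On real footage the profile would be smoothed first, and each valley would seed a seam search guided by edge strength rather than a straight vertical cut.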

Face recognition with Eigenfaces : a detailed study.

Vawda, Nadeem. January 2012
With human society becoming increasingly computerised, the use of biometrics to automatically establish the identity of an individual is of great interest in a wide variety of applications. Facial appearance is an appealing biometric, on account of its relatively non-intrusive nature. As such, automated face recognition systems have been the subject of much research in recent years. This dissertation describes the development of a fully automatic face recognition system, and provides an analysis of its performance under various different operating conditions, in comparison with results published in prior literature. In addition to giving a detailed description of the mathematical underpinnings of the techniques used by the system, we discuss the practical considerations involved in implementing the described techniques. The system presented here uses the eigenface approach to representing facial features. A number of different recognition techniques have been implemented and evaluated. These include a number of variants of the original eigenface technique proposed by Turk and Pentland, as well as a related technique based on the probabilistic approach of Moghaddam et al. Due to the wide range of datasets used to evaluate face recognition systems in the literature, it is difficult to reliably compare the performance of different systems. The system described here has been tested with datasets encompassing a wide range of different conditions, allowing us to draw conclusions about how the characteristics of the test data affect the results that are obtained. The performance of this system is comparable to other eigenface-based systems documented in the literature, achieving success rates in the region of 85% for large datasets under controlled conditions. However, performance was observed to degrade significantly when testing with more free-form images; in particular, the effects of ageing on facial appearance were noted to cause problems for the system. 
This suggests that the matter of ageing is still a fruitful direction for further research. / Thesis (M.Sc.)-University of KwaZulu-Natal, Westville, 2012.
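The eigenface representation at the heart of such a system reduces to principal component analysis of mean-centred face vectors; a minimal sketch using an SVD follows, with random vectors standing in for flattened face images (the gallery, dimensions and component count are all illustrative).

```python
import numpy as np

def eigenfaces(images, k):
    """Compute the mean face and top-k eigenfaces of a face gallery.

    images: (n, d) array, one flattened image per row. The rows of the
    returned (k, d) basis are the principal directions ("eigenfaces")
    of the mean-centred gallery.
    """
    mean = images.mean(axis=0)
    centred = images - mean
    # SVD of the centred data: rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, basis):
    # Coordinates of a face in the k-dimensional "face space".
    return basis @ (face - mean)
```

Recognition then amounts to projecting a probe image and finding the nearest gallery face in this low-dimensional space, which is the step the different eigenface variants above refine.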

Representation of regular formal languages.

Safla, Aslam. 17 May 2014
This dissertation presents three different approaches to representing regular formal languages: regular expressions, finite acceptors and regular grammars. We define how each method is used to represent a language, and the methods for translating a language from one representation to another. A toolkit is then presented which allows the user to input their definition of a language using any of the three models, and to translate the representation of the language from one model to another. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2014.
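Of the three representations, a finite acceptor is the most direct to execute; a deterministic acceptor for the regular language of strings over {a, b} with an even number of a's (an invented example language, not one from the dissertation) can be sketched as:

```python
def accepts(delta, start, accepting, word):
    """Run a deterministic finite acceptor over a word.

    delta maps (state, symbol) -> state; the word is accepted iff the
    run ends in an accepting state.
    """
    state = start
    for symbol in word:
        state = delta[(state, symbol)]
    return state in accepting

# Two states tracking the parity of a's seen so far.
EVEN_AS = {
    ("even", "a"): "odd",  ("odd", "a"): "even",
    ("even", "b"): "even", ("odd", "b"): "odd",
}
```

A regular expression for the same language, or a regular grammar generating it, would be translated into such a transition table by the conversion methods the toolkit implements.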

Shot classification in broadcast soccer video.

Guimaraes, Lionel. January 2013
Event understanding, the automatic generation of human-relatable descriptions of events from video sequences, is an open problem in computer vision research with many applications in the sports domain, such as indexing and retrieval systems for sports video. Background modelling and shot classification of broadcast video are important steps in event understanding in video sequences. Shot classification seeks to identify shots, i.e. the labelling of continuous frame sequences captured by a single camera action as, for example, long shot, close-up or audience shot, while background modelling seeks to classify pixels in an image as foreground or background. Many features used for shot classification are built upon the background model, so background modelling is an essential part of shot classification. This dissertation reports on an investigation into techniques and procedures for background modelling and classification of shots in broadcast soccer videos. Broadcast video refers to video which would typically be viewed by a person at home on their television set, and imposes constraints that are often not considered in many approaches to event detection. In this work we analyse the performance of two background modelling techniques appropriate for broadcast video, the colour distance model and the Gaussian mixture model. The performance of the background models depends on correctly set parameters. Some techniques offer better updating schemes and thus adapt better to the changing conditions of a game; some are shown to be more robust to changes in broadcast technique and are therefore of greater value in shot classification. Our results show the colour distance model slightly outperformed the Gaussian mixture model, with both techniques performing similarly to results found in the literature. Many features useful for shot classification are proposed in the literature. 
This dissertation identifies these features and presents a detailed analysis and comparison of various features appropriate for shot classification in broadcast soccer video. Once a feature set is established, a classifier is required to determine a shot class based on the extracted features. We establish the feature set and decision tree parameters that result in the best performance, and then use a combined feature set to train a neural network to classify shots. The combined feature set in conjunction with the neural network classifier proved effective in classifying shots and in some situations outperformed techniques found in the literature. / Thesis (M.Sc.)-University of KwaZulu-Natal, Durban, 2012.
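A rule-based shot classifier over background-model features can be sketched as follows. The two features used here (fraction of pitch-coloured pixels and relative size of the largest foreground object) are typical of the literature, but the thresholds are purely illustrative, and the dissertation trains decision trees and a neural network rather than hand-set rules.

```python
def classify_shot(grass_ratio, max_object_frac):
    """Toy rule-based soccer shot classifier.

    grass_ratio: fraction of frame pixels matching the pitch colour model.
    max_object_frac: fraction of the frame covered by the largest
    foreground object. Both thresholds are invented for illustration.
    """
    if grass_ratio < 0.1:
        return "audience"   # little or no pitch visible
    if max_object_frac > 0.3:
        return "close-up"   # a single player dominates the frame
    return "long"           # wide view of the pitch
```

Both features depend on the background model, which is why the quality of the colour distance or Gaussian mixture model feeds directly into classification accuracy.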

Modelling with Mathematica.

Murrell, Hugh. January 1994
In this thesis a number of mathematical models are investigated with the aid of the modelling package Mathematica. Some of the models are of a mechanical nature and some of the models are laboratories that have been constructed for the purpose of assisting researchers in a particular field. In the early sections of the thesis mechanical models are investigated. After the equations of motion for the model have been presented, Mathematica is employed to generate solutions which are then used to drive animations of the model. The frames of the animations are graphical snapshots of the model in motion. Mathematica proves to be an ideal tool for this type of modelling since it combines algebraic, numeric and graphics capabilities on one platform. In the later sections of this thesis, Mathematica laboratories are created for investigating models in two different fields. The first laboratory is a collection of routines for performing Phase-Plane analysis of planar autonomous systems of ordinary differential equations. A model of a mathematical concept called a bifurcation is investigated and an animation of this mathematical event is produced. The second laboratory is intended to help researchers in the tomography field. A standard filtered back-projection algorithm for reconstructing images from their projections is implemented. In the final section of the thesis an indication of how the tomography laboratory could be used is presented. Wavelet theory is used to construct a new filter that could be used in filtered back-projection tomography. / Thesis (Ph.D.)-University of Natal, Durban, 1994.
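The phase-plane computations in the first laboratory amount to numerically integrating a planar autonomous system and plotting the resulting orbits. A minimal sketch follows, in Python rather than Mathematica, using forward Euler on a damped oscillator (an example system, not one from the thesis):

```python
def trajectory(f, x0, y0, h=0.01, steps=2000):
    # Forward-Euler integration of a planar autonomous system
    # (x', y') = f(x, y); returns the orbit as a list of points.
    x, y = x0, y0
    path = [(x, y)]
    for _ in range(steps):
        dx, dy = f(x, y)
        x, y = x + h * dx, y + h * dy
        path.append((x, y))
    return path

def damped_oscillator(x, y):
    # x'' + 0.5 x' + x = 0 rewritten as a first-order planar system;
    # its phase portrait is a spiral sink at the origin.
    return y, -x - 0.5 * y
```

Sweeping a grid of initial conditions and overlaying the orbits gives the phase portrait; watching how the portrait changes as a parameter varies is how a bifurcation animation like the one described above is built.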

A practical investigation of meteor-burst communications.

Melville, Stuart William. January 1991
This study considers the meteor-burst communication (MBC) environment at three levels. At the lowest level, the trails themselves are studied and analysed. Then individual links are studied in order to determine the data throughput and wait time that might be expected at various data rates. Finally, at the top level, MBC networks are studied in order to provide information on the effects of routing strategies, topologies, and connectivity in such networks. A significant amount of theoretical work has been done in the classification of meteor trails and the analysis of the throughput potential of the channel. At the same time, the issues of wait time on MBC links and of MBC network strategies have been largely ignored. The work presented here is based on data captured on actual monitoring links, and is intended to provide both an observational comparison to theoretical predictions in the well-researched areas, and a source of base information for the others. Chapter 1 of this thesis gives an overview of the field of meteor-burst communications. Prior work in the field is discussed, as are the advantages and disadvantages of the channel, and current application areas. Chapter 2 describes work done on the classification of observed meteor trails into distinctive 'families'. The rule-based system designed for this task is discussed, as well as the eventual classification schema produced, which is far more comprehensive and consistent than previously proposed schemas. Chapter 3 deals with the throughput potential of the channel, based on the observed trails. A comparison to predicted results, both for fixed and adaptive data rates, is made, with some notable differences between predicted and observed results highlighted. The trail families with the largest contribution to the throughput capacity of the channel are identified. Chapter 4 deals with wait time in meteor-burst communications. 
The data rates at which wait time is minimised in the links used are found, and compared to the rates at which throughput was optimised. These are found to be very different, as indeed are the contributions of the various trail families at these rates. Chapter 5 describes a software system designed to analyse the effect of routing strategies in MBC networks, and presents initial results derived from this system. Certain features of the channel, in particular its sporadic nature, are shown to have significant effects on network performance. Chapter 6 continues the presentation of network results, specifically concentrating on the effect of topologies and connectivity within MBC networks. Chapter 7 concludes the thesis, highlighting suggested areas for further research as well as summarising the more important results presented. / Thesis (Ph.D.)-University of Natal, Durban, 1991.
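The fixed-data-rate throughput question of Chapter 3 can be illustrated with the standard underdense-trail model, in which received power decays exponentially after the trail forms. The decay constant and the linear rate-to-threshold model below are invented numbers for illustration, not measurements from the monitoring links.

```python
import math

def bits_per_trail(rate, p0=1.0, tau=0.5, c=1e-3):
    """Bits carried by one underdense trail at a fixed data rate.

    Assumes received power decays as p0 * exp(-t / tau) and that the
    minimum usable power grows linearly with rate (c * rate); both
    assumptions are illustrative, not fitted to observed trails.
    """
    usable = tau * math.log(p0 / (c * rate))  # time until power < threshold
    return rate * usable if usable > 0 else 0.0
```

Raising the rate shortens the usable portion of each trail, so bits per trail peaks at an intermediate rate; as the study above reports, the rate that minimises wait time can be very different from the one that maximises throughput.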

Ontology driven multi-agent systems : an architecture for sensor web applications.

Moodley, Deshendran. January 2009
Advances in sensor technology and space science have resulted in the availability of vast quantities of high quality earth observation data. This data can be used for monitoring the earth and to enhance our understanding of natural processes. Sensor Web researchers are working on constructing a worldwide computing infrastructure that enables dynamic sharing and analysis of complex heterogeneous earth observation data sets. Key challenges that are currently being investigated include data integration; service discovery, reuse and composition; semantic interoperability; and system dynamism. Two emerging technologies that have shown promise in dealing with these challenges are ontologies and software agents. This research investigates how these technologies can be integrated into an Ontology Driven Multi-Agent System (ODMAS) for the Sensor Web. The research proposes an ODMAS framework and an implemented middleware platform, the Sensor Web Agent Platform (SWAP). SWAP deals with ontology construction, ontology use, and agent based design, implementation and deployment. It provides a semantic infrastructure, an abstract architecture, an internal agent architecture and a Multi-Agent System (MAS) middleware platform. Distinguishing features include: the incorporation of Bayesian networks to represent and reason about uncertain knowledge; ontologies to describe system entities such as agent services, interaction protocols and agent workflows; and a flexible adapter-based MAS platform that facilitates agent development, execution and deployment. SWAP aims to guide and ease the design, development and deployment of dynamic alerting and monitoring applications. The efficacy of SWAP is demonstrated by two satellite image processing applications, viz. wildfire detection and informal settlement monitoring. This approach can provide significant benefits to a wide range of Sensor Web users. 
These include: developers for deploying agents and agent based applications; end users for accessing, managing and visualising information provided by real time monitoring applications, and scientists who can use the Sensor Web as a scientific computing platform to facilitate knowledge sharing and discovery. An Ontology Driven Multi-Agent Sensor Web has the potential to forever change the way in which geospatial data and knowledge is accessed and used. This research describes this far reaching vision, identifies key challenges and provides a first step towards the vision. / Thesis (Ph.D.)-University of KwaZulu-Natal, 2009.
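The Bayesian-network reasoning mentioned above reduces, in the simplest two-node case, to Bayes' rule; a toy posterior for a wildfire alert given a hotspot detection follows, with all probabilities invented for illustration (they are not values from the SWAP platform).

```python
def posterior_fire(p_fire=0.01, p_hot_given_fire=0.9, p_hot_given_clear=0.05):
    """Posterior P(fire | hotspot detected) by Bayes' rule.

    A two-node network: fire -> hotspot. The prior and likelihoods are
    made-up illustrative numbers.
    """
    evidence = (p_hot_given_fire * p_fire
                + p_hot_given_clear * (1.0 - p_fire))
    return p_hot_given_fire * p_fire / evidence
```

Even a 90%-sensitive detector yields only a modest posterior here, because fires are rare and false hotspots are not; this is exactly the kind of uncertainty an alerting agent must weigh before raising an alarm.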
