1

On case representation and indexing in a case-based reasoning system for waste management.

Wortmann, Karl Lyndon. January 1997
Case-Based Reasoning is a fairly new Artificial Intelligence technique which uses past experience as the basis for solving new problems. Typically, a case-based reasoning system stores actual past problems and solutions in memory as cases. Because it can reason from actual experience and save solved problems, thereby learning automatically, case-based reasoning has been found applicable to domains for which techniques such as rule-based reasoning have traditionally not been well suited, such as experience-rich, unstructured domains. This applicability has made it a viable new artificial intelligence topic from both a research and an application perspective. This dissertation concentrates on researching and implementing indexing techniques for case-based reasoning. Case representation is researched as a prerequisite for implementing indexing techniques, and pre-transportation decision making for hazardous waste handling is used as the domain for applying and testing the techniques. The field of case-based reasoning was covered in general, while case representation and indexing were researched in detail. A single case representation scheme was designed and implemented, and five indexing techniques were designed, implemented and tested. Their effectiveness is assessed in relation to each other and to other reasoners, and the implications of their use as the basis for a case-based reasoning intelligent decision support system for pre-transportation decision making for hazardous waste handling are briefly assessed. / Thesis (M.Sc.)-University of Natal, Pietermaritzburg, 1997.
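To illustrate the kind of flat-index retrieval step a case-based reasoner performs, here is a minimal Python sketch; the attribute names (waste_type, flammable, volume), weights and cases are hypothetical and are not taken from the dissertation, which designed its own representation scheme and five indexing techniques.

```python
# Minimal flat-index CBR retrieval sketch (illustrative only; attribute names
# such as "waste_type" are hypothetical, not taken from the dissertation).

def similarity(query, case, weights):
    """Weighted count of matching attribute values, normalised to [0, 1]."""
    score = sum(w for attr, w in weights.items()
                if query.get(attr) == case["problem"].get(attr))
    return score / sum(weights.values())

def retrieve(query, case_base, weights, k=1):
    """Return the k stored cases most similar to the query problem."""
    ranked = sorted(case_base,
                    key=lambda c: similarity(query, c, weights),
                    reverse=True)
    return ranked[:k]

case_base = [
    {"problem": {"waste_type": "solvent", "flammable": True,  "volume": "drum"},
     "solution": "UN1993 packaging, segregate from oxidisers"},
    {"problem": {"waste_type": "acid",    "flammable": False, "volume": "drum"},
     "solution": "corrosive-rated container, neutralisation plan"},
]
weights = {"waste_type": 2.0, "flammable": 1.0, "volume": 0.5}

query = {"waste_type": "solvent", "flammable": True, "volume": "bulk"}
best = retrieve(query, case_base, weights, k=1)[0]
print(best["solution"])
```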
2

Bi-modal biometrics authentication on iris and signature.

Viriri, Serestina. January 2010
Multi-modal biometrics is one of the most promising avenues to address the performance problems in biometrics-based personal authentication systems. While uni-modal biometric systems have bolstered personal authentication better than traditional security methods, the main challenges remain the restricted degrees of freedom, non-universality and spoof attacks of the traits. In this research work, we investigate the performance improvement in bi-modal biometrics authentication systems based on a physiological trait, the iris, and a behavioral trait, the signature. We investigate a model to detect the largest non-occluded rectangular part of the iris as a region of interest (ROI), from which iris features are extracted by a cumulative-sums-based grey change analysis algorithm and Gabor Filters. In order to address the space complexity of biometric systems, we propose two majority-vote-based algorithms which compute prototype iris feature codes as reliable specimen templates. Experiments achieved a success rate of 99.6%. A text-based directional signature verification algorithm is investigated. The algorithm verifies signatures even when they are composed of symbols and special unconstrained cursive characters which are superimposed and embellished. The experimental results show that the proposed approach has an improved true positive rate of 94.95%. A user-specific weighting technique, the user-score-based approach, which assigns different degrees of importance to the iris and signature traits of an individual, is proposed. Then, an intelligent dual ν-support vector machine (2ν-SVM) based fusion algorithm is used to integrate the weighted match scores of the iris and signature modalities at the matching score level. The bi-modal biometrics system obtained a false rejection rate (FRR) of 0.008 and a false acceptance rate (FAR) of 0.001. / Thesis (Ph.D)-University of KwaZulu-Natal, Westville, 2010.
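A minimal sketch of match-score-level fusion, the step at which the two modalities are integrated; the thesis fuses the weighted scores with a 2ν-SVM, whereas the weighted sum and threshold below are only an illustrative baseline, with made-up weights and scores.

```python
# Match-score-level fusion sketch: a user-specific weighted sum of iris and
# signature scores followed by a threshold. The thesis fuses weighted scores
# with a 2nu-SVM; the weights, scores and threshold here are illustrative.

def fuse(iris_score, signature_score, w_iris, w_sig):
    """Combine normalised match scores (each in [0, 1]) with user weights."""
    assert abs(w_iris + w_sig - 1.0) < 1e-9
    return w_iris * iris_score + w_sig * signature_score

def authenticate(iris_score, signature_score, weights, threshold=0.7):
    fused = fuse(iris_score, signature_score, *weights)
    return fused >= threshold

# A user whose iris images are reliable but whose signature varies a lot
# might be assigned a higher iris weight.
user_weights = (0.7, 0.3)
print(authenticate(iris_score=0.82, signature_score=0.55, weights=user_weights))
```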
3

Planarity testing and embedding algorithms.

Carson, D. I. January 1990
This thesis deals with several aspects of planar graphs, and some of the problems associated with non-planar graphs. Chapter 1 is devoted to introducing some of the fundamental notation and tools used in the remainder of the thesis. Graphs serve as useful models of electronic circuits. It is often of interest to know whether a given electronic circuit has a layout on the plane so that no two wires cross. In Chapter 2, three efficient algorithms are described for determining whether a given 2-connected graph (which may model such a circuit) is planar. The first planarity testing algorithm uses a path addition approach; although this algorithm is efficient, it does not have linear complexity. The second planarity testing algorithm has linear complexity, and uses a recursive fragment addition technique. The last planarity testing algorithm also has linear complexity, and relies on a relatively new data structure called the PQ-tree, which has several important applications to planar graphs; this algorithm uses a vertex addition technique. Chapter 3 further develops the idea of modelling an electronic circuit using a graph. Knowing that a given electronic circuit may be placed in the plane with no wires crossing is often insufficient: some electronic circuits have in excess of 100 000 nodes, so obtaining a description of such a layout is important. In Chapter 3 we study two algorithms for obtaining such a description, both of which rely on the PQ-tree data structure. The first algorithm determines a rotational embedding of a 2-connected graph. Given a rotational embedding of a 2-connected graph, the second algorithm determines whether a convex drawing of the graph is possible; if it is, the convex drawing is output. In Chapter 4, we concern ourselves with graphs that have failed a planarity test of Chapter 2. This is of particular importance, since complex electronic circuits often do not allow a layout on the plane. We study three different ways of approaching the problem of an electronic circuit modelled on a non-planar graph, all of which use the PQ-tree data structure: an algorithm for finding an upper bound on the thickness of a graph, an algorithm for determining the subgraphs of a non-planar graph which are subdivisions of the Kuratowski graphs K5 and K3,3, and lastly a new algorithm for finding an upper bound on the genus of a non-planar graph. / Thesis (M.Sc.)-University of Natal, Durban, 1990.
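For readers who want to experiment with these ideas, the networkx library exposes a planarity check that also returns a rotational (combinatorial) embedding, or a Kuratowski subgraph for a non-planar input. It is not the PQ-tree algorithm described in the thesis, but it exercises the same concepts; a small sketch:

```python
# Sketch: testing planarity and extracting a rotational embedding with
# networkx. This is a library routine, not the PQ-tree method of the thesis.
import networkx as nx

G = nx.complete_graph(4)          # K4 is planar
is_planar, embedding = nx.check_planarity(G)
print(is_planar)                  # True
# The embedding stores, for each vertex, a cyclic order of its neighbours,
# i.e. a rotational embedding as discussed in Chapter 3.
print(list(embedding.neighbors_cw_order(0)))

K5 = nx.complete_graph(5)         # K5 is one of the Kuratowski graphs
is_planar, kuratowski = nx.check_planarity(K5, counterexample=True)
print(is_planar)                  # False
print(kuratowski.edges())         # a subdivision of K5 or K3,3
```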
4

Qualitative and structural analysis of video sequences.

Brits, Alessio. 17 October 2013
This thesis analyses videos in two distinct ways so as to improve both human understanding and the computer description of events that unfold in video sequences. Qualitative analysis can be used to understand a scene when fine detail is not needed. However, for there to be an accurate interpretation of a scene, a computer system has to first evaluate the events in the scene discretely; such a method must involve structural features and the shapes of the objects in the scene. In this thesis we perform qualitative analysis on a road scene and generate terms that can be understood by humans and that describe the status of the traffic and its congestion. Areas in the video that contain vehicles are identified regardless of scale. The movement of the vehicles is further identified, and a rule-based technique is used to accurately determine the status of the traffic and its congestion. Occlusion is a common problem in tracking for scene analysis. A novel technique is developed to vertically separate groups of people in video sequences. A histogram is generated based on the shape of a group of people and its valleys are identified. A vertical seam for each valley is then detected using the intensity of the edges, and this seam is used as the separation boundary between the different individuals. This can improve the tracking of people in a crowd. Both techniques achieve good results: the qualitative analysis accurately describes the status and congestion of a traffic scene, while the structural analysis can separate a group of people into distinct individuals. / Thesis (M.Sc.)-University of KwaZulu-Natal, Westville, 2011.
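The vertical-separation idea can be sketched with a column histogram of the group's binary silhouette: valleys in the histogram suggest boundaries between people. The sketch below uses a straight vertical cut at each valley rather than the edge-intensity seam described in the abstract, and the toy mask is invented for illustration.

```python
# Sketch of the vertical-separation idea: sum foreground pixels per column,
# find valleys in that histogram, and split the group at those columns.
import numpy as np

def split_columns(mask, min_gap=5):
    """Return column indices at local minima of the column-sum histogram."""
    histogram = mask.sum(axis=0).astype(float)   # foreground pixels per column
    valleys = []
    for x in range(1, len(histogram) - 1):
        if histogram[x] < histogram[x - 1] and histogram[x] <= histogram[x + 1]:
            if not valleys or x - valleys[-1] >= min_gap:
                valleys.append(x)
    return valleys

# Two overlapping "people" as a toy binary mask (rows x columns).
mask = np.zeros((40, 30), dtype=np.uint8)
mask[5:35, 3:12] = 1      # person 1
mask[8:35, 14:30] = 1     # person 2
print(split_columns(mask))   # [12]: the valley column between the two people
```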
5

Random generation of finite automata over the domain of the regular languages

Raitt, Lesley Anne 12 1900
Thesis (MSc)--University of Stellenbosch, 2007. / ENGLISH ABSTRACT: The random generation of finite automata over the domain of their graph structures is a well-known problem. However, random generation of finite automata over the domain of the regular languages has not been studied in such detail. Random generation algorithms designed for this domain would be useful for the investigation of the properties of the regular languages associated with the finite automata. We studied the existing enumerations and algorithms to randomly generate UDFAs and binary DFAs as they pertain to the domain of the regular languages. We evaluated the algorithms experimentally across the domain of the regular languages for small values of n and found the distributions non-uniform. Therefore, for UDFAs, we derived an algorithm for the random generation of UDFAs over the domain of the regular languages from Domaratzki et al.'s [9] enumeration of that domain. Furthermore, for binary DFAs, we concluded that for large values of n the bijection method is a viable means of randomly generating binary DFAs over the domain of the regular languages. We also looked at the random generation of union-UNFAs and -UNFAs across the domain of the regular languages; our study of these UNFAs took all possible variables for the generation of UNFAs into account. The random generation of UNFAs over the domain of the regular languages is an open problem. / AFRIKAANSE OPSOMMING: Die ewekansige generasie van eindige toestand outomate (eto's) oor die domein van hul grafiekstrukture is 'n bekende probleem. Nieteenstaande het die ewekansige generasie van eindige toestand outomate oor die domein van die regulêre tale nie soveel aandag gekry nie. Algoritmes wat eindige toestand outomate ewekansig genereer oor die domein van die regulêre tale sal nuttig wees om die ondersoek van die eienskappe van regulêre tale, wat met eto's verbind is, te bewerkstellig. Ons het die bestaande aftellings en algoritmes vir die ewekansige generasie van deterministiese eindige toestand outomate (deto's) met een en twee alfabetiese simbole, soos dit betrekking het op die domein van die regulêre tale, bestudeer. Ons het die algoritmes eksperimenteel beoordeel oor die domein van die regulêre tale vir outomate met min toestande en bevind dat die verspreiding nie eenvormig is nie. Daarom het ons 'n algoritme afgelei vir die ewekansige generasie van deto's met een alfabetsimbool oor die domein van die regulêre tale van Domaratzki et al. [9] se aftelling. Bowendien, in die geval van deto's met twee alfabetsimbole met 'n groot hoeveelheid toestande is die bijeksie-metode 'n goeie algoritme om te gebruik vir die ewekansige generasie van hierdie deto's oor die domein van die regulêre tale. Ons het ook die ewekansige generasie van unie-nie-deterministiese eindige toestand outomate en -nie-deterministiese eindige toestand outomate oor die domein van die regulêre tale bestudeer. Ons studie van hierdie neto's het alle moontlike veranderlikes in ag geneem. Die ewekansige generering van neto's oor die domein van die regulêre tale is 'n ope probleem.
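As a point of reference, the "graph structure" domain mentioned at the start of the abstract can be sampled in a few lines of Python: the transitions and accepting states of a complete binary DFA are chosen uniformly at random. This sketch does not implement the thesis's algorithms; it only illustrates the structure-uniform sampling that is not language-uniform, since many structures accept the same language.

```python
# Sketch: uniform random generation of a complete binary DFA over its graph
# structure (the "well-known" domain the abstract contrasts with). This is
# NOT uniform over the regular languages.
import random

def random_binary_dfa(n, seed=None):
    """Complete DFA with states 0..n-1, alphabet {0, 1}, start state 0."""
    rng = random.Random(seed)
    delta = {(q, a): rng.randrange(n) for q in range(n) for a in (0, 1)}
    finals = {q for q in range(n) if rng.random() < 0.5}
    return delta, finals

def accepts(delta, finals, word):
    q = 0
    for a in word:
        q = delta[(q, a)]
    return q in finals

delta, finals = random_binary_dfa(4, seed=42)
print(accepts(delta, finals, [0, 1, 1, 0]))
```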
6

Automated program generation: bridging the gap between model and implementation

Bezuidenhout, Johannes Abraham 02 1900
Thesis (MSc)--University of Stellenbosch, 2007. / ENGLISH ABSTRACT: The general goal of this thesis is the investigation of a technique that allows model checking to be directly integrated into the software development process, preserving the benefits of model checking while addressing some of its limitations. A technique was developed that allows a complete executable implementation to be generated from an enhanced model specification. This included the development of a program, the Generator, that completely automates the generation process. In addition, it is illustrated how structuring the specification as a transition system formally separates the control flow from the details of manipulating data. This simplifies the verification process, which is focused on checking control flow in detail. By combining this structuring approach with automated implementation generation we ensure that the verified system behaviour is preserved in the actual implementation. An additional benefit is that data manipulation, which is generally not suited to model checking, is restricted to separate, independent code fragments that can be verified using verification techniques for sequential programs. These data manipulation code segments can also be optimised for the implementation without affecting the verification of the control structure. This technique was used to develop a reactive system, an FTP server, and this experiment illustrated that efficient code can be automatically generated while preserving the benefits of model checking. / AFRIKAANSE OPSOMMING: Hierdie tesis ondersoek 'n tegniek wat modeltoetsing laat deel uitmaak van die sagteware-ontwikkelingsproses, en sodoende betroubaarheid verbeter terwyl sekere tekortkominge van die tradisionele modeltoetsing proses aangespreek word. Die tegniek wat ontwikkel is maak dit moontlik om 'n volledige uitvoerbare implementasie vanaf 'n gespesialiseerde model spesifikasie te genereer. Om die implementasie-generasie stap ten volle te outomatiseer is 'n program, die Generator, ontwikkel. Daarby word dit ook gewys hoe die kontrolevloei op 'n formele manier geskei kan word van data-manipulasie deur gebruik te maak van 'n staatoorgangsstelsel struktureringsbenadering. Dit vereenvoudig die verifikasie proses, wat fokus op kontrolevloei. Deur dié struktureringsbenadering te kombineer met outomatiese implementasie-generasie, word verseker dat die geverifieerde stelsel se gedrag behou word in die finale implementasie. 'n Bykomende voordeel is dat data-manipulasie, wat gewoonlik nie geskik is vir modeltoetsing nie, beperk word tot aparte, onafhanklike kode segmente wat geverifieer kan word deur gebruik te maak van verifikasie tegnieke vir sekwensiële programme. Hierdie data-manipulasie kode segmente kan ook geoptimeer word vir die implementasie sonder om die verifikasie van die kontrole struktuur te beïnvloed. Hierdie tegniek word gebruik om 'n reaktiewe stelsel, 'n FTP bediener, te ontwikkel, en dié eksperiment wys dat doeltreffende kode outomaties gegenereer kan word terwyl die voordele van modeltoetsing behou word.
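The structuring idea, control flow expressed as a transition system with data manipulation confined to small independent fragments, can be sketched as a plain transition table. This is an illustration of the approach, not output of the thesis's Generator, and the FTP-like states and events are invented.

```python
# Sketch of the structuring idea: control flow is a plain transition table
# (the part that would be model checked), while data manipulation is confined
# to small independent functions referenced from the table.

def send_file(ctx):
    """Data-manipulation fragment; can be verified and optimised on its own."""
    print(f"sending {ctx['filename']} ...")

# (state, event) -> (next_state, action)
TRANSITIONS = {
    ("idle",      "connect"): ("connected", None),
    ("connected", "retr"):    ("sending",   send_file),
    ("sending",   "done"):    ("connected", None),
    ("connected", "quit"):    ("idle",      None),
}

def step(state, event, ctx):
    next_state, action = TRANSITIONS[(state, event)]
    if action is not None:
        action(ctx)
    return next_state

state, ctx = "idle", {"filename": "report.txt"}
for event in ["connect", "retr", "done", "quit"]:
    state = step(state, event, ctx)
print(state)   # back in "idle"
```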
7

Component-based face recognition.

Dombeu, Jean Vincent Fonou. January 2008
Component-based automatic face recognition has been of interest to a growing number of researchers in the past fifteen years. However, the main challenge remains the automatic extraction of facial components for recognition in different face orientations without any human intervention, or any assumption about the location of these components. In this work, we investigate a solution to this problem. Facial components (eyes, nose, and mouth) are first detected in different orientations of the face. To ensure that the detected components are appropriate for recognition, a Support Vector Machine (SVM) classifier is applied to identify facial components that have been accurately detected. Thereafter, features are extracted from the correctly detected components by Gabor Filters and Zernike Moments combined: Gabor Filters are used to extract the texture characteristics of the eyes, and Zernike Moments are applied to compute the shape characteristics of the nose and the mouth. The texture and shape features are concatenated and normalized to build the final feature vector of the input face image. Experiments show that our feature extraction strategy is robust; it also provides a more compact representation of face images and achieves an average recognition rate of 95% in different face orientations. / Thesis (M.Sc.)-University of KwaZulu-Natal, 2008.
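The fusion step, concatenating normalised texture and shape features into one face descriptor, can be sketched as follows; the feature values are random placeholders standing in for the Gabor and Zernike computations, and the vector sizes are assumptions rather than values from the thesis.

```python
# Sketch of the feature-fusion step: texture features (eye regions) and shape
# features (nose and mouth) are normalised and concatenated into one
# descriptor. The inputs are placeholders, not real Gabor/Zernike outputs.
import numpy as np

def normalise(v):
    """Scale a feature vector to zero mean and unit length."""
    v = v - v.mean()
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def build_descriptor(eye_texture_feats, nose_shape_feats, mouth_shape_feats):
    parts = [normalise(np.asarray(p, dtype=float))
             for p in (eye_texture_feats, nose_shape_feats, mouth_shape_feats)]
    return np.concatenate(parts)

# Hypothetical precomputed features for one face image.
eyes  = np.random.rand(40)   # Gabor filter-bank responses (texture)
nose  = np.random.rand(12)   # Zernike moments (shape)
mouth = np.random.rand(12)   # Zernike moments (shape)
descriptor = build_descriptor(eyes, nose, mouth)
print(descriptor.shape)      # (64,)
```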
8

The applicability of case-based reasoning to software cost estimation.

January 2002
The nature and competitiveness of the modern software development industry demands that software engineers be able to make accurate and consistent software cost estimates. Traditionally, software cost estimates have been derived with algorithmic cost estimation models such as COCOMO and Function Point Analysis. However, researchers have shown that existing software cost estimation techniques fail to produce accurate and consistent software cost estimates. Improving the reliability of software cost estimates would facilitate cost savings, improved delivery time and better quality software developments. To this end, considerable research has been conducted into finding alternative software cost estimation models that are able to produce better quality software cost estimates. Researchers have suggested a number of alternative models for this problem area. One of the most promising alternatives is Case-Based Reasoning (CBR), a machine learning paradigm that makes use of past experiences to solve new problems. CBR has been proposed as a solution since it is highly suited to weak-theory domains, where the relationships between cause and effect are not well understood. The aim of this research was to determine the applicability of CBR to software cost estimation. This was accomplished in part through a thorough investigation of the theoretical and practical background to CBR, software cost estimation and current research on CBR applied to software cost estimation. This provided a foundation for the development of experimental CBR software cost estimation models, with which an empirical evaluation of this technology applied to software cost estimation was performed. In addition, several regression models were developed, against which the effectiveness of the CBR system could be evaluated. The architecture of the CBR models developed facilitated the investigation of the effects of case granularity on the quality of the results obtained from them. Traditionally, researchers in this field have made use of poorly populated datasets, which did not accurately reflect the true nature of the software development industry; for the purposes of this research, however, an extensive database of 300 software development projects was obtained on which the experiments were performed. The results obtained through experimentation indicated that the CBR models that were developed performed similarly to, and in some cases better than, those developed by other researchers. In terms of the quality of the results produced, the best CBR model was able to significantly outperform the estimates produced by the best regression model. Increased case granularity was also shown to result in better quality predictions by the CBR models. These promising results experimentally validated CBR as an applicable software cost estimation technique. In addition, it was shown that CBR has a number of methodological advantages over traditional cost estimation techniques. / Thesis (M.Sc.)-University of Natal, Pietermaritzburg, 2002.
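The core of a CBR (analogy-based) estimator can be sketched as a k-nearest-neighbour lookup over past projects; the four-project case base and its feature set below are invented for illustration and are not the 300-project database used in the thesis.

```python
# Sketch of analogy-based (CBR-style) effort estimation: estimate a new
# project's effort as the mean effort of its k nearest past projects.
import numpy as np

def knn_estimate(query, cases, efforts, k=2):
    """cases: (n, d) feature matrix; efforts: (n,) known efforts."""
    scale = cases.max(axis=0) - cases.min(axis=0)   # normalise each feature
    scale[scale == 0] = 1.0
    dists = np.linalg.norm((cases - query) / scale, axis=1)
    nearest = np.argsort(dists)[:k]
    return efforts[nearest].mean()

# Features: [size in function points, team size]; effort in person-months.
cases   = np.array([[100, 4], [250, 6], [400, 10], [600, 12]], dtype=float)
efforts = np.array([ 12.0,     30.0,     55.0,      90.0])

query = np.array([300, 8], dtype=float)
print(knn_estimate(query, cases, efforts, k=2))   # mean effort of 2 analogues
```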
9

Devising a common vocabulary for a knowledge management university intranet.

Mahomva, Sarudzai. January 2003
For the past few years, the University of Natal has been using an HTML-driven InnerWeb as its intranet system. The advantages of database-driven intranet technologies over static HTML pages are now well established, and it was felt that the University should change to a database-driven intranet system which would better serve the needs of the University community. The first part of this study was conducted to establish user perceptions and requirements of such an intranet. Results from this study suggested that the functionalities and needs expressed by participants are synonymous with functionalities offered by database-driven intranets. The second part of this study was therefore to follow up and prioritise the identified requirements for the main intranet interface, to establish a controlled vocabulary, and to investigate current debate on the possibilities and limitations of intranets as a knowledge management tool. Part of the study took cognisance of Stokes' use-inspired research premise by adapting a constructivist research philosophy as well as Van den Akker's development research strategy to guide the study. The eclectic mixed methodology suggested by Reeves guided the research design for this study. Thus data gathering methods which included group and on-line card sorting, semi-structured interviews, category membership expectation tests and prototype validation were used to validate each stage of the development process. Data analysis procedures included using Microsoft Excel to calculate the total score assigned to each item for possible inclusion on the intranet, cluster analysis using IBM EZSort software, analysing interview transcripts using QSR NVivo software, and simple eyeballing of the category membership expectation data. The initial 93 items for possible inclusion, identified in the first part of the study, were reduced to 60 items. Some distinct themes which were identified include research activities, library, social notices, corporate notices, learning activities, University Policies and Procedures, student activities, staff activities and on-line collaboration. The results of this study suggest that it is challenging to establish a vocabulary which is common to the majority of prospective users; thus, some of the suggested vocabulary for category labels did not have majority consensus. This study also suggests that participants expect a process-driven intranet which offers multidimensional access points and multiple ways to navigate, which implies analysing the same data from different viewpoints. Participants want more from an intranet than simple document publishing, though a few cannot see the intranet as anything beyond a document retrieval tool. The study suggests that users have different needs which could be better addressed by offering customisation and personalisation functionalities to suit users' individual needs. Participants expect to use the intranet as a reliable institutional memory which offers seamless remote access to synchronous and asynchronous communication tools, access to various forms of digital media, interactive on-line administration functionalities, as well as access to on-line academic-related activities. / Thesis (M.A.)-University of Natal, Durban, 2003.
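One way to reproduce the cluster-analysis step on card-sort data is to convert "how often were two items grouped together" counts into distances and cluster them hierarchically; the items and proportions below are invented, and this generic approach stands in for whatever IBM EZSort does internally.

```python
# Sketch of clustering card-sort data: turn co-occurrence proportions into a
# distance matrix and apply average-linkage hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

items = ["exam timetable", "library catalogue", "leave forms", "staff payslips"]
# co[i, j] = fraction of participants who sorted items i and j into one group
co = np.array([
    [1.0, 0.2, 0.1, 0.0],
    [0.2, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.8],
    [0.0, 0.1, 0.8, 1.0],
])
distance = 1.0 - co
np.fill_diagonal(distance, 0.0)

Z = linkage(squareform(distance), method="average")
labels = fcluster(Z, t=0.5, criterion="distance")
for item, label in zip(items, labels):
    print(label, item)   # items frequently grouped together share a label
```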
10

Spectral techniques for roughness estimation.

Lewis, Mark. January 2001
Roughness is a relatively untouched field, considering its significance to natural scientists. In this thesis, mathematical techniques for measuring the roughness of signals are constructed and investigated. Both one-dimensional and two-dimensional signals are tackled. Applications include geological profiles and biological surfaces; the mathematical techniques include Fourier and Wavelet Transforms. / Thesis (M.Sc.)-University of Natal, Durban, 2001.
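One common spectral roughness estimator fits the slope of a profile's power spectrum on a log-log scale, a flatter spectrum indicating a rougher signal; the sketch below is a generic illustration under that assumption and not necessarily the construction developed in the thesis.

```python
# Sketch of a spectral roughness measure for a 1-D profile: the log-log slope
# of the power spectrum (steeper decay means a smoother signal).
import numpy as np

def spectral_slope(signal, dx=1.0):
    """Return the log-log slope of the power spectral density."""
    signal = np.asarray(signal, dtype=float) - np.mean(signal)
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=dx)
    mask = freqs > 0                      # skip the zero-frequency bin
    slope, _ = np.polyfit(np.log(freqs[mask]), np.log(power[mask]), 1)
    return slope

rng = np.random.default_rng(0)
rough  = rng.standard_normal(1024)        # white noise: roughly flat spectrum
smooth = np.cumsum(rough)                 # integrated noise: roughly 1/f^2
print(spectral_slope(rough), spectral_slope(smooth))   # near 0 vs near -2
```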
