31

Vector representations of structured data

Mintram, Robert C. January 2002 (has links)
The connectionist approach to creating vector representations (VREPs) of structured data is usually implemented by artificial neural network (ANN) architectures. ANNs are trained on a representative corpus and can then demonstrate some degree of generalization to novel data. In this context, structured data are typically trees, the leaf nodes of which are assigned some n-element (often binary) vector representation. The strategy used to encode the leaf data and the width of the consequent vectors can have an impact on the encoding performance of the ANN architecture. In this thesis the architecture of principal interest is called simplified recursive auto-associative memory, (S)RAAM, which was devised to provide a theoretical model for another architecture called recursive auto-associative memory, RAAM. Research continues in RAAMs in terms of improving their learning ability, understanding the features that are encoded and improving generalization. (S)RAAM is a mathematical model that lends itself more readily to addressing these issues. Usually ANNs designed to encode structured data will, as a result of training, simultaneously create an encoder function to transform the data into vectors and a decoder function to perform the reverse transformation. (S)RAAM as a model of this process was designed to follow this paradigm. It is shown that this is not strictly necessary and that encoder and decoder functions can be created at separate times, their connection being maintained by the data upon which they operate. This leads to a new, more versatile model called, in this thesis, the General Encoder Decoder, GED. The GED, like (S)RAAM, is implemented as an algorithm rather than a neural network architecture. The thesis contends that the broad scope of the GED model makes it a versatile experimental vehicle supporting research into key properties of VREPs. In particular these properties include the strategy used to encode the leaf tokens within tree structures and the features of these structures that are preferentially encoded.
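To make the encoder/decoder pairing concrete, the following is a minimal sketch (Python with NumPy) of a RAAM-style recursive encoder and decoder over binary trees. The weights are random and untrained, so it only illustrates the structure of the two functions, not the (S)RAAM or GED algorithms described in the thesis; the vector width and all names are illustrative assumptions.

```python
# Minimal, illustrative sketch of a RAAM-style recursive encoder/decoder for
# binary trees (untrained random weights; names are hypothetical, not from the thesis).
import numpy as np

rng = np.random.default_rng(0)
DIM = 8                                  # width of every vector representation

# One weight matrix compresses two child vectors into one parent vector,
# a second expands a parent vector back into two child vectors.
W_enc = rng.normal(scale=0.1, size=(DIM, 2 * DIM))
W_dec = rng.normal(scale=0.1, size=(2 * DIM, DIM))

def encode(tree):
    """Recursively map a binary tree to a single DIM-wide vector.

    Leaves are given directly as DIM-wide numpy arrays; internal nodes are
    (left, right) tuples.
    """
    if isinstance(tree, np.ndarray):     # leaf token: already a vector
        return tree
    left, right = tree
    children = np.concatenate([encode(left), encode(right)])
    return np.tanh(W_enc @ children)     # compress two children into one vector

def decode(vec):
    """Expand a vector back into a (left, right) pair of child vectors."""
    children = np.tanh(W_dec @ vec)
    return children[:DIM], children[DIM:]

# Example: encode the tree ((A, B), C) where A, B, C are random leaf encodings.
A, B, C = (rng.normal(size=DIM) for _ in range(3))
root = encode(((A, B), C))
left_vec, right_vec = decode(root)
print(root.shape, left_vec.shape, right_vec.shape)   # (8,) (8,) (8,)
```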
32

Text segmentation in web images using colour perception and topological features

Karatzas, Dimosthenis A. January 2003 (has links)
The research presented in this thesis addresses the problem of text segmentation in Web images. Text is routinely created in image form (headers, banners etc.) on Web pages, as an attempt to overcome the stylistic limitations of HTML. This text, however, has a potentially high semantic value in terms of indexing and searching for the corresponding Web pages. As current search engine technology does not allow for text extraction and recognition in images, text in image form is ignored. Moreover, it is desirable to obtain a uniform representation of all visible text of a Web page (for applications such as voice browsing or automated content analysis). This thesis presents two methods for text segmentation in Web images using colour perception and topological features. The nature of Web images and the problems they pose for text segmentation are described, and a study is performed to assess the magnitude of the problem and establish the need for automated text segmentation methods. Two segmentation methods are subsequently presented: the Split-and-Merge segmentation method and the Fuzzy segmentation method. Although approached in a distinctly different way in each method, the safe assumption that a human being should be able to read the text in any given Web image is the foundation of both methods’ reasoning. This anthropocentric character, along with the use of topological features of connected components, constitutes the underlying working principle of both methods. An approach for classifying the connected components resulting from the segmentation methods as either characters or parts of the background is also presented.
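As a rough illustration of the kind of colour-plus-connected-component reasoning involved, the sketch below (Python with scikit-learn and SciPy) clusters pixel colours and then labels and size-filters connected components. It is not the thesis's Split-and-Merge or Fuzzy method; the synthetic image, cluster count and thresholds are assumptions made purely for the example.

```python
# Illustrative sketch only: colour clustering followed by connected-component
# labelling, as a rough stand-in for colour-based segmentation of web images.
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic "web image": light background with two dark rectangular blobs
# standing in for character strokes, plus a little noise.
img = np.full((60, 120, 3), 230, dtype=float)
img[20:40, 10:30] = (40, 40, 160)        # first "character"
img[20:40, 50:90] = (40, 40, 160)        # second "character"
img += rng.normal(scale=5.0, size=img.shape)

# 1. Group pixels into a small number of perceptually similar colour classes.
pixels = img.reshape(-1, 3)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(img.shape[:2])

# 2. For each colour class, extract connected components and keep those whose
#    size suggests a character-like region rather than background or noise.
components = []
for c in range(kmeans.n_clusters):
    mask = labels == c
    cc, n = ndimage.label(mask)
    for i in range(1, n + 1):
        size = int((cc == i).sum())
        if 50 < size < 2000:             # crude size filter (assumed thresholds)
            components.append((c, i, size))

print(components)                        # e.g. [(1, 1, 400), (1, 2, 800)]
```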
33

Fingerprint-based biometric recognition allied to fuzzy-neural feature classification

Mohamed, Suliman M. January 2002 (has links)
The research investigates fingerprint recognition as one of the most reliable biometric identification methods. An automatic process for identifying humans based on fingerprints requires the input fingerprint to be matched with a large number of fingerprints in a database. To reduce the search time and computational complexity, it is desirable to classify the database of fingerprints in an accurate and consistent manner so that the input fingerprint is matched only against a subset of the fingerprints in the database. In this regard, the research addresses fingerprint classification. The goal is to improve the accuracy and speed of existing automatic fingerprint identification algorithms. The investigation is based on analysis of fingerprint characteristics and feature classification using neural network and fuzzy-neural classifiers. The methodology developed comprises image processing, computation of a directional field image, singular-point detection, and feature vector encoding. The statistical distribution of feature vectors was analysed using SPSS. Three types of classifiers, namely multi-layered perceptrons (MLP), radial basis function (RBF) networks and fuzzy-neural (FNN) methods, were implemented. The developed classification systems were tested and evaluated on 4,000 fingerprint images from the NIST-4 database. For the five-class problem, classification accuracy of 96.2% for FNN, 96.07% for MLP and 84.54% for RBF was achieved, without any rejection. The FNN and MLP classification results are significant in comparison with existing studies, which have been reviewed.
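As an illustration of the final classification stage only, the sketch below trains a multi-layered perceptron on placeholder feature vectors for a five-class problem (Python with scikit-learn). The feature-extraction pipeline, network size and data here are assumptions; it is not the FNN/MLP/RBF configuration evaluated on NIST-4 in the thesis.

```python
# Hedged sketch: a multi-layered perceptron over fixed-length fingerprint
# feature vectors for a five-class problem. The random feature vectors below
# are placeholders for the thesis's directional-field/singular-point features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
N_CLASSES = 5                            # arch, tented arch, left loop, right loop, whorl
N_SAMPLES, N_FEATURES = 1000, 64         # placeholder dataset shape

# Placeholder feature vectors: each class gets its own random mean so the
# classes are separable enough for the example to learn something.
means = rng.normal(scale=2.0, size=(N_CLASSES, N_FEATURES))
y = rng.integers(0, N_CLASSES, size=N_SAMPLES)
X = means[y] + rng.normal(size=(N_SAMPLES, N_FEATURES))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"five-class accuracy on held-out placeholder data: {clf.score(X_test, y_test):.3f}")
```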
34

Decision tree simplification for classifier ensembles

Ardeshir, G. January 2002 (has links)
Design of ensemble classifiers involves three factors: 1) a learning algorithm to produce a classifier (base classifier), 2) an ensemble method to generate diverse classifiers, and 3) a combining method to combine the decisions made by the base classifiers. With regard to the first factor, a good choice for constructing a classifier is a decision tree learning algorithm. However, a possible problem with this learning algorithm is its complexity, which has previously been addressed only in the context of pruning methods for individual trees. Furthermore, the ensemble method may require the learning algorithm to produce a complex classifier. Considering the fact that the performance of simplification methods as well as ensemble methods changes from one domain to another, our main contribution is to address a simplification method (post-pruning) in the context of ensemble methods including Bagging, Boosting and Error-Correcting Output Codes (ECOC). Using a statistical test, the performance of ensembles made by Bagging, Boosting and ECOC, as well as of five pruning methods in the context of ensembles, is compared. In addition to the implementation, a supporting theory called margin is discussed and the relationship of pruning to bias and variance is explained. For ECOC, the effect of parameters such as code length and size of the training set on the performance of pruning methods is also studied. Decomposition methods such as ECOC are considered as a solution to reduce the complexity of multi-class problems in many real problems such as face recognition. Focusing on the decomposition methods, AdaBoost.OC, which is a combination of Boosting and ECOC, is compared with the pseudo-loss based version of Boosting, AdaBoost.M2. In addition, the influence of pruning on the performance of ensembles is studied. Motivated by the result that both pruned and unpruned ensembles made by AdaBoost.OC have similar accuracy, pruned ensembles are compared with ensembles of single-node decision trees. This leads to the hypothesis that ensembles of simple classifiers may give better performance, as shown for AdaBoost.OC on the identification problem in face recognition. The implication is that, in some problems, achieving the best accuracy of an ensemble requires selecting the base classifier complexity.
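A brief hedged sketch of the kind of comparison involved: bagged ensembles of pruned versus unpruned decision trees (Python with scikit-learn). Cost-complexity pruning via ccp_alpha stands in here for the post-pruning methods and statistical tests studied in the thesis; the dataset and parameters are assumptions made for illustration only.

```python
# Hedged sketch: Bagging ensembles of pruned versus unpruned decision trees.
# scikit-learn's cost-complexity pruning (ccp_alpha) is only a stand-in for
# the post-pruning methods examined in the thesis.
from sklearn.datasets import load_digits
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)      # a small multi-class benchmark

for name, ccp_alpha in [("unpruned", 0.0), ("pruned", 0.005)]:
    base = DecisionTreeClassifier(ccp_alpha=ccp_alpha, random_state=0)
    ensemble = BaggingClassifier(estimator=base, n_estimators=25, random_state=0)
    scores = cross_val_score(ensemble, X, y, cv=5)
    print(f"{name:9s} bagged trees: mean accuracy {scores.mean():.3f}")
```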
35

Pose estimation using the EM algorithm

Moss, Simon January 2002 (has links)
No description available.
36

Generating references in hierarchical domains : the case of document deixis

Paraboni, Ivandré January 2003 (has links)
No description available.
37

Modelling social interaction attitudes in multi-agent systems

Kalenka, Susanne January 2001 (has links)
No description available.
38

  • Context-assisted learning in artificial neural networks

Koetsier, Jos January 2003 (has links)
No description available.
39

Evolutionary and conventional reinforcement learning in multi agent systems for social simulation

Miramontes Hercog, Luis January 2003 (has links)
No description available.
40

Automated prototype induction

González Rodríguez, Inés January 2002 (has links)
No description available.
