111

Explorations into the behaviour-oriented nature of intelligence : fuzzy behavioural maps

Gonzalez de Miguel, Ana Maria January 2003 (has links)
This thesis explores the behaviour-oriented nature of intelligence and presents the definition and use of Fuzzy Behavioural Maps (FBMs) as a flexible development framework for providing complex autonomous agent behaviour. The thesis provides a proof-of-concept for simple FBMs, including experimental results in mobile robotics and fuzzy logic control. This practical work covers the design of a collision avoidance behaviour for a mobile robot using a simple FBM, and its implementation using a Fuzzy Logic Controller (FLC). The FBM incorporates three causally related sensorimotor activities (moving around, perceiving obstacles and varying speed). The collision avoidance FBM is designed in more detail using fuzzy relations (between levels of perception, motion and variation of speed) in the form of fuzzy control rules. The FLC stores and manipulates these fuzzy control (FBM) rules using fuzzy inference mechanisms and other related implementation parameters (fuzzy sets and fuzzy logic operators). The resulting FBM-FLC architecture controls the behaviour patterns of the agent: its fuzzy inference mechanisms determine the level of activation of each FBM node while driving appropriate control actions over the creature's motors. The thesis validates (demonstrates the general fitness of) this control architecture through various pilot tests (computer simulations). This practical work also serves to emphasise some benefits of using FLC techniques to implement FBMs (e.g. the flexibility of fuzzy aggregation methods and fuzzy granularity). More generally, the thesis presents and validates an FBM framework for developing more complex autonomous agent behaviour. This framework represents a top-down approach to deriving behaviour-based (BB) models using generic FBMs, levels of abstraction and refinement stages. Its main purpose is to capture and model behavioural dynamics at different levels of abstraction, through different levels of refinement. Most obviously, the framework maps required behaviours into connection structures of causally related behaviour-producing modules; but the main idea is to follow as many refinement stages as required to complete the development process. These refinement stages help to identify lower-level design parameters (i.e. control actions) rather than linguistic variables, fuzzy sets or fuzzy inference mechanisms, and they facilitate the definition of the behaviours selected at the first levels of abstraction. Further, the thesis proposes taking the FBM framework down to the implementation levels required to build a BB control architecture, and provides an application case study describing how to develop a complex, non-hierarchical, multi-agent behaviour system using the refinement capabilities of the FBM framework. Finally, the thesis introduces some more general ideas about using this framework to cope with current complexity issues around the behaviour-oriented nature of intelligence.
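The rule-based pipeline the abstract describes (fuzzify a perception, fire causally related fuzzy control rules, defuzzify into a motor command) can be sketched compactly. The membership functions, rule base and numeric ranges below are illustrative assumptions, not the thesis's actual FBM-FLC parameters:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def collision_avoidance_speed(distance_m):
    """Map a perceived obstacle distance (metres) to a speed set-point in [0, 1]."""
    # Fuzzify the perception input into three linguistic terms.
    near = tri(distance_m, -0.5, 0.0, 1.0)
    medium = tri(distance_m, 0.5, 1.5, 2.5)
    far = tri(distance_m, 2.0, 3.0, 5.0)

    # Rule base: IF near THEN slow; IF medium THEN cruise; IF far THEN fast.
    # Each consequent is summarised by the centroid of its output fuzzy set.
    rules = [(near, 0.1), (medium, 0.5), (far, 0.9)]

    # Weighted-centroid defuzzification over the fired rules.
    total = sum(weight for weight, _ in rules)
    if total == 0.0:
        return 0.5  # no rule fires: fall back to a neutral speed
    return sum(weight * centroid for weight, centroid in rules) / total

for d in (0.2, 1.0, 2.5, 4.0):
    print(f"distance={d:.1f} m -> speed={collision_avoidance_speed(d):.2f}")
```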
112

Interpretable classification model for automotive material fatigue

Lee, Kee Khoon January 2002 (has links)
No description available.
113

Modular neural networks for analysis of flow cytometry data

Autret, Arnaud January 2003 (has links)
Predicting environmental hazards and estimating the impact of human activities on the marine ecosystem have multiplied scientists' need for sample analysis. The classical microscopic approach is time consuming and wastes the talent and intellectual abilities of trained specialists. Scientists therefore developed an automated optical tool, the Flow Cytometer (FC), to analyse samples quickly and in large quantities. The flow cytometer has been successfully applied to real phytoplankton studies; however, the data extracted from samples still require analysis. Artificial Neural Networks (ANNs) are one of the tools applied to FC data analysis. Despite several successful applications, ANNs have not been widely adopted by the marine biologist community, as they do not allow the number of species in the classification problem to be changed without retraining the full system from scratch. Training is time consuming and requires expertise in ANNs. Moreover, most ANN paradigms cannot cope effectively with unknown data, such as data coming from new phytoplankton species or from species outside the scope of the studies. This project developed a new ANN technique based on a modular architecture that removes the need for retraining and allows unknowns to be detected and rejected. Furthermore, the Support Vector Machine architecture is applied in this domain for the first time and compared against another ANN paradigm, Radial Basis Function Networks. The results show that the modular architecture is able to deal effectively with new data, which can be incorporated into the ANN architecture without fully retraining the system.
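A minimal sketch of the modular idea: one independently trained module per species, so adding a species adds one module without touching the others, and a sample that no module accepts is rejected as unknown. Using one-class SVMs as the module internals is an assumption here, not the thesis's actual architecture:

```python
import numpy as np
from sklearn.svm import OneClassSVM

class ModularClassifier:
    def __init__(self):
        self.modules = {}  # species name -> one-class model

    def add_species(self, name, samples):
        """Train a module for one species; existing modules are untouched."""
        self.modules[name] = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(samples)

    def classify(self, x):
        """Return the best-scoring accepting module, or 'unknown'."""
        scores = {n: m.decision_function(x.reshape(1, -1))[0]
                  for n, m in self.modules.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "unknown"

rng = np.random.default_rng(0)
clf = ModularClassifier()
clf.add_species("species_a", rng.normal(0.0, 0.3, (200, 2)))
clf.add_species("species_b", rng.normal(3.0, 0.3, (200, 2)))
print(clf.classify(np.array([0.1, -0.2])))   # near species_a's cluster
print(clf.classify(np.array([10.0, 10.0])))  # rejected as unknown
```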
114

Constructing 3D faces from natural language interface

Ahmad, Salman January 2002 (has links)
This thesis presents a system by which 3D images of human faces can be constructed using a natural language interface. The driving force behind the project was the need to create a system whereby a machine could produce artistic images from verbal or composed descriptions. This research is the first to look at constructing and modifying facial image artwork through a natural language interface. Specialised modules have been developed to control the geometry of 3D polygonal head models in a commercial modeller from natural language descriptions. These modules were produced from research on human physiognomy, 3D modelling techniques and tools, facial modelling and natural language processing.
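The core mapping, from description words to numeric geometry parameters, can be sketched as below. The vocabulary, parameter names and step sizes are hypothetical, and the thesis drives a commercial 3D modeller rather than a flat parameter dictionary:

```python
# Each known adjective nudges one head-model parameter by a signed amount.
VOCABULARY = {
    "wide":   ("face_width",  +0.2),
    "narrow": ("face_width",  -0.2),
    "long":   ("face_height", +0.2),
    "big":    ("nose_scale",  +0.3),
    "small":  ("nose_scale",  -0.3),
}

def apply_description(params, description):
    """Update geometry parameters from a free-text description."""
    for word in description.lower().replace(",", " ").split():
        if word in VOCABULARY:
            name, delta = VOCABULARY[word]
            params[name] = params.get(name, 1.0) + delta
    return params

face = {"face_width": 1.0, "face_height": 1.0, "nose_scale": 1.0}
print(apply_description(face, "a wide face with a big nose"))
# {'face_width': 1.2, 'face_height': 1.0, 'nose_scale': 1.3}
```

A real interface needs syntactic analysis to attach each adjective to the feature it modifies; this word-level lookup only illustrates the description-to-geometry direction of the pipeline.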
115

Localisation for virtual environments

Law, Robin Ren-Pei January 2002 (has links)
No description available.
116

eXtensible business reporting language semantic error checking for accounting information systems

Vipoopinyo, Jarupa January 2013 (has links)
The financial reporting world has recently faced a number of changes due to the impact of the Internet. Today, the revolution in business communication is accelerating, and more data is being shared by a large number of users beyond a company's internal management, including clients, business partners, financial market analysts, investors and government regulators. These changes have led to the development of eXtensible Business Reporting Language (XBRL), an open-source Internet-based financial reporting language. XBRL is an extension of eXtensible Markup Language (XML) that provides machine-readable tags for each individual data element in each financial statement. XBRL is likely to be used as a platform that offers universal standards for defining business information. It can ease the preparation, analysis and exchange of business information along each part of the financial reporting supply chain and across companies around the world, and it can increase efficiency for all users of business data. This study analysed the accuracy of XBRL outputs by conducting a literature review and by checking the accuracy of publicly available XBRL filings submitted by real companies. It found many errors in these public XBRL documents, caused either by a few common basic mistakes or by errors in related financial information. The study therefore aimed to discover the possible causes of errors in XBRL filings and to find a way to detect them. Consequently, a semantic checking system was developed to detect XBRL errors and so enhance the accuracy of financial statements. To develop the system, the results of an error-finding analysis were combined, filtered and classified into categories of errors, integrating accounting, business and technology knowledge. A process flow for the semantic checking system was created to clarify both the method and the rules; the rules were then set up to address the different kinds of errors, each of which has a different method of management and reduction. The semantic checking system was built around the information specification of XBRL filings and designed to be practical for users by presenting the relationship between real data and accounting practice. Moreover, a prototype was produced and a case study method applied to evaluate the system, ensuring the accuracy of the semantic checking. Finally, this semantic checking system has been shown to improve the accuracy of XBRL filings. It also emphasises the importance of employing XBRL preparers who are aware of all of the possible issues that may arise while preparing an XBRL-based filing submission.
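The distinction between syntactic and semantic validity can be illustrated with a small sketch: facts extracted from a filing are checked against accounting relationships rather than schema rules alone. The tag names, values and the two rules below are illustrative assumptions, not the thesis's actual rule set:

```python
facts = {  # element name -> reported value, as if parsed from a filing
    "Assets": 500_000,
    "Liabilities": 320_000,
    "StockholdersEquity": 170_000,
    "CashAndCashEquivalents": -12_000,
}

def check_semantics(facts, tolerance=1):
    errors = []
    # Rule 1: the balance-sheet identity must hold.
    a, l, e = (facts.get(k, 0) for k in
               ("Assets", "Liabilities", "StockholdersEquity"))
    if abs(a - (l + e)) > tolerance:
        errors.append(f"Assets {a} != Liabilities {l} + Equity {e}")
    # Rule 2: certain elements should never be negative (a common sign error).
    for tag in ("Assets", "CashAndCashEquivalents"):
        if facts.get(tag, 0) < 0:
            errors.append(f"{tag} is negative: {facts[tag]}")
    return errors

for err in check_semantics(facts):
    print("SEMANTIC ERROR:", err)
```

Both reported errors here would pass a purely syntactic XML/XBRL schema check, which is exactly the gap semantic checking is meant to close.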
117

Adaptive resonance theory : theory and application to synthetic aperture radar

Saddington, P. January 2002 (has links)
Artificial Neural Networks are massively parallel systems constructed from many simple processing elements called neurons, connected via weights. This structure is inspired by the current understanding of how biological networks function. Since the 1980s, research into this field has exploded into the hive of activity that currently surrounds neural networks and intelligent systems. The work in this thesis is concerned with one particular artificial neural network: Adaptive Resonance Theory (ART). It is an unsupervised neural network that attempts to solve the stability-plasticity dilemma. The model is, however, limited by a few serious problems that restrict its use in real-life situations: the network's ability to cluster consistently with its behaviour on uncorrupted inputs is severely handicapped when the input is subject to even modest amounts of noise. The work detailed herein attempts to improve ART's behaviour towards noisy inputs. Novel equations are developed and described that improve the network's performance when the system is subject to noisy inputs. One of the novel equations, affecting vigilance, makes a significant improvement over the originators' equations and can cope with 16% target noise before results fall to the same values as the standard equation. The novel work is tested using a real-life (not simulated) data set from the MSTAR database. Synthetic Aperture Radar targets are clustered and then subjected to noise before being re-presented to the network. These data simulate a typical environment where a clustering or classifying module would be needed for object recognition; such a module could then be used in an Automatic Target Recognition (ATR) system. Once the noise problem is mitigated, Adaptive Resonance Theory neural networks could play important roles in ATR systems due to their low computational complexity and memory requirements compared with other clustering techniques. Keywords: Adaptive Resonance Theory, clustering consistency, neural network, automatic target recognition, noisy inputs.
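The vigilance test at the heart of ART, which the novel equations above modify, can be sketched in its textbook ART-1 form: each binary input searches the committed categories and either resonates with one that passes the vigilance threshold or commits a new category. The parameter values and fast-learning update below are standard simplifications, not the thesis's improved equations:

```python
import numpy as np

def art1_cluster(inputs, rho=0.5):
    """Cluster binary vectors; rho is the vigilance parameter in (0, 1]."""
    prototypes = []
    labels = []
    for x in inputs:
        # Search categories in order of choice strength |x AND w| / (alpha + |w|).
        order = sorted(range(len(prototypes)),
                       key=lambda j: -np.sum(x & prototypes[j]) /
                                      (0.5 + np.sum(prototypes[j])))
        for j in order:
            match = np.sum(x & prototypes[j]) / max(np.sum(x), 1)
            if match >= rho:                       # vigilance test passed:
                prototypes[j] = x & prototypes[j]  # resonance, fast learning
                labels.append(j)
                break
        else:                                      # no category matched:
            prototypes.append(x.copy())            # commit a new one
            labels.append(len(prototypes) - 1)
    return labels

data = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1], [0, 1, 1, 1]])
print(art1_cluster(data, rho=0.5))  # e.g. [0, 0, 1, 1]: two categories
```

Raising rho makes the match test stricter and splits the data into more, tighter clusters; the stability of this test under input noise is precisely what the thesis sets out to improve.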
118

Efficient system identification based on root cepstral deconvolution

Sarpal, Sanjeev January 2003 (has links)
This thesis summarizes approximately three years of research on signal modelling for the purposes of system identification. Improvements in signal modelling techniques have been encouraged over the years by society's demand for more efficient ways of accessing information; as a consequence, several modelling/compression techniques in both the time domain and the frequency domain have been developed as possible solutions. Cepstral deconvolution is a frequency domain modelling technique that has been successfully applied to many diverse fields, such as speech and seismic analysis. Thus far, cepstral modelling performance has been assessed empirically, relying on the judgement of the designer. A novel method for measuring root cepstral pole-zero modelling performance is therefore proposed, introducing a cost function applied directly to the root cepstral domain. This makes it possible to demonstrate the optimized modelling of a pole-zero model and to show that its performance is superior to that of an FIR Wiener filter and LPC. The optimized modelling of speech data is considered using a special form of the developed cost function, and it is demonstrated that the modelling performance of the root cepstral method is superior to that of the real (magnitude) cepstrum and LPC. A novel method of model order identification for use with time domain modelling methods, based around z-plane root cepstral plots, is also developed and discussed. It is demonstrated that the positions of a model or plant's poles and zeros may be determined by visual inspection of the resulting z-plane plot. However, performance in noise was inferior to that of LPC, leading to difficulties when trying to determine the model's order. Finally, an investigation into the poor phase modelling performance of the algorithm when modelling signals comprised of multiple excitations is presented. It is demonstrated that all DFT/FFT-based analysis techniques are fundamentally flawed due to discontinuities; as a consequence, a simple pre-filtering algorithm is presented as a possible solution.
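For orientation, the root cepstrum replaces the log compression of the classical cepstrum with a small power gamma, raising the spectrum to a root before the inverse transform. The sketch below, including the choice of gamma and the toy source-filter signal, is an illustrative assumption rather than the thesis's algorithm:

```python
import numpy as np

def root_cepstrum(x, gamma=0.1):
    """Root cepstrum of a signal x; as gamma -> 0 it approaches log behaviour."""
    spectrum = np.fft.fft(x)
    # Raise the complex spectrum to the power gamma (principal branch),
    # then return to the quefrency domain.
    return np.fft.ifft(spectrum.astype(complex) ** gamma)

def classical_cepstrum(x):
    """Real cepstrum via the log magnitude spectrum, for comparison."""
    return np.fft.ifft(np.log(np.abs(np.fft.fft(x)) + 1e-12)).real

# A toy signal: an impulse train convolved with a short decaying filter,
# i.e. the kind of source-filter product that deconvolution tries to separate.
n = np.arange(256)
excitation = (n % 32 == 0).astype(float)
h = 0.8 ** np.arange(8)
x = np.convolve(excitation, h)[:256]

print(root_cepstrum(x)[:4])
print(classical_cepstrum(x)[:4])
```

Because both transforms turn convolution into addition (approximately, for the root case), the excitation and filter contributions separate along the quefrency axis, which is what makes cepstral deconvolution usable for system identification.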
119

Object recognition by region matching using relaxation with relational constraints

Ahmadyfard, Alireza January 2003 (has links)
Our objective in this thesis is to develop a method for establishing an object recognition system based on the matching of image regions. A region is segmented from the image based on the colour homogeneity of its pixels. The method can be applied to a number of computer vision applications, such as object recognition (in general) and image retrieval. The motivation for using regions as image primitives is that they can be represented invariantly under a group of geometric transformations and are stable under scaling. We model each object of interest in our database using a single frontal image. The recognition task is to determine the presence of object(s) of interest in scene images. We propose a novel method for affine invariant representation of image regions in the form of an Attributed Relational Graph (ARG). To make image regions comparable for matching, we project each region to an affine invariant space and describe it using a set of unary measurements. The distinctiveness of these features is enhanced by describing the relation between each region and its neighbours. We limit ourselves to low-order relations, binary relations, to minimise the combinatorial complexity of both feature extraction and model matching, and to maximise the probability of the features being observed. We propose two sets of binary measurements: geometric relations between pairs of regions, and the colour profile on the line connecting the centroids of regions. We demonstrate that the former measurements are very discriminative when the shape of segmented regions is informative; however, they are susceptible to distortion of region boundaries as a result of severe geometric transformations. In contrast, the colour profile binary measurements are very robust. Using this representation we construct a graph to represent the regions in the scene image, referred to as the scene graph. Similarly, a graph containing the regions of all object models is constructed and referred to as the model graph. We treat object recognition as the problem of matching the scene graph and model graphs, and adopt the probabilistic relaxation labelling technique for this purpose, modified to cope better with image segmentation errors. The implemented algorithm is evaluated under affine transformation, occlusion, illumination change and cluttered scenes. Good recognition performance, even under severe scaling and in cluttered scenes, is reported. Key words: Region Matching, Object Recognition, Relaxation Labelling, Affine Invariant.
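The probabilistic relaxation labelling step can be sketched as an iterative update: each scene region holds a probability distribution over model labels, repeatedly rescaled by the support it receives from compatible labellings of its neighbours. The compatibility matrix and initial probabilities below are toy values, and the update rule is the standard textbook form rather than the thesis's modified one:

```python
import numpy as np

def relaxation_labelling(P, R, neighbours, iters=20):
    """P: (n_regions, n_labels) initial label probabilities.
    R[(i, j)]: (n_labels, n_labels) compatibility of labels on regions i, j."""
    for _ in range(iters):
        Q = np.ones_like(P)
        for i in range(P.shape[0]):
            for j in neighbours[i]:
                Q[i] *= R[(i, j)] @ P[j]      # support from neighbour j
        P = P * Q
        P /= P.sum(axis=1, keepdims=True)     # renormalise each region
    return P

# Two neighbouring scene regions, two candidate model labels.
P0 = np.array([[0.6, 0.4], [0.5, 0.5]])
compat = np.array([[0.9, 0.1], [0.1, 0.9]])   # matching labels are compatible
R = {(0, 1): compat, (1, 0): compat}
print(relaxation_labelling(P0, R, neighbours={0: [1], 1: [0]}))
```

The weak unary preference of region 0 propagates to its undecided neighbour, so both converge to the same label; in the thesis the unary and binary measurements described above supply these probabilities and compatibilities.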
120

Automatic architecture selection for probability density function estimation in computer vision

Sadeghi, Mohammad T. January 2002 (has links)
In this thesis, the problem of probability density function estimation using finite mixture models is considered. Gaussian mixture modelling is used to provide a semi-parametric density estimate for a given data set. The fundamental problem with this approach is that the number of mixture components required to describe the data adequately is not known in advance. In this work, a predictive validation technique [91] is studied and developed into a useful, operational tool that automatically selects the number of components for Gaussian mixture models. The predictive validation test approves a candidate model if, for the set of events it tries to predict, the predicted frequencies derived from the model match the empirical frequencies derived from the data set. A model selection algorithm based on the validation test is developed which prevents both over-fitting and under-fitting, and the influence of the various parameters in the model selection method is investigated in order to develop it into a robust operational tool. The capability of the proposed method in real-world applications is examined on the problem of face image segmentation for the automatic initialisation of lip tracking systems. A segmentation approach is proposed based on Gaussian mixture modelling of the pixels' RGB values using the predictive validation technique, with the lip region segmentation based on the estimated model. First, a grouping of the model components is performed using a novel approach; the resulting groups then form the basis of a Bayesian decision-making system which labels the pixels in the mouth area as lip or non-lip. The experimental results demonstrate the superiority of the method over conventional clustering approaches. To improve the method computationally, an image sampling technique based on Sobol sequences is applied. The image modelling process is also strengthened by incorporating spatial contextual information using two different methods: a Neighbourhood Expectation Maximisation technique, and a spatial clustering method based on a Gibbs/Markov random field modelling approach. Both methods are developed within the proposed modelling framework, and the results obtained on the lip segmentation application suggest that spatial context is beneficial.
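The predictive validation loop can be sketched as: fit mixtures of increasing order and accept the smallest one whose predicted event frequencies match held-out empirical frequencies. Binning a one-dimensional feature and the acceptance threshold below are simplifying assumptions; the actual test statistic and event set in [91] and the thesis will differ:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def predictive_validation_gmm(train, val, max_k=6, threshold=0.05, bins=10):
    """Return the smallest k whose predicted bin frequencies match val."""
    edges = np.quantile(val, np.linspace(0.0, 1.0, bins + 1))
    empirical = np.histogram(val, bins=edges)[0] / len(val)
    for k in range(1, max_k + 1):
        gmm = GaussianMixture(n_components=k, random_state=0).fit(train)
        # Predicted frequency per bin, estimated from a large model sample.
        sample = gmm.sample(50_000)[0].ravel()
        predicted = np.histogram(sample, bins=edges)[0] / len(sample)
        if np.abs(predicted - empirical).max() < threshold:
            return k, gmm          # accepted: predictions match the data
    return max_k, gmm              # fall back to the largest model tried

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 0.5, 600), rng.normal(2, 0.5, 400)])
rng.shuffle(data)
k, _ = predictive_validation_gmm(data[:700, None], data[700:])
print("selected components:", k)   # expected: 2 for this bimodal data
```

Stopping at the first accepted model is what guards against over-fitting: larger mixtures would also pass the test, but they are never reached.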
