About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Adaptation of reference patterns in word-based speech recognition

McInnes, F. R. January 1988
The word-based approach to automatic speech recognition is one which has received attention from many researchers and has been exploited in various practical applications. A typical recognition system has a set of stored reference patterns, one or more for each word in the vocabulary to be recognised. These reference patterns are formed from training utterances supplied before a recognition session begins, either by the intended user of the system or, for a speaker-independent system, by a representative set of speakers. When the system is used for recognition, each new input utterance is compared with the stored patterns and is recognised as the word (or sequence of words) for which the minimal value of a distance (dissimilarity) measure, or equivalently the maximal likelihood, is obtained. The comparison of the input with the reference patterns is typically accomplished by an algorithm incorporating dynamic programming, which finds the optimal alignment of input and reference patterns and the corresponding distance or likelihood. This approach to recognition, in its basic form, retains the same reference patterns unchanged throughout the recognition of any sequence of input utterances. Thus the recognition system has no capability of learning from the new utterances presented during a recognition session. If a recognition system can be made to adapt its reference patterns during its operation, to incorporate information from the recognised utterances, then this may be expected to allow progressive improvement of the modelling of the words (as pronounced by the current speaker), and hence enhancement of the accuracy of recognition - provided that the adaptation of incorrect words' reference patterns in cases of misrecognition can be prevented or kept to a sufficiently low level.
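The dynamic-programming alignment described above can be sketched as a minimal dynamic time warping (DTW) distance. This is an illustrative 1-D version with invented function names, not the thesis's implementation, which operates on multi-dimensional acoustic feature frames:

```python
def dtw_distance(input_seq, reference):
    """Dynamic time warping distance between two 1-D feature sequences."""
    n, m = len(input_seq), len(reference)
    INF = float("inf")
    # cost[i][j]: minimal accumulated distance aligning prefixes i and j
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(input_seq[i - 1] - reference[j - 1])  # local frame distance
            cost[i][j] = d + min(cost[i - 1][j],      # repeat reference frame
                                 cost[i][j - 1],      # skip reference frame
                                 cost[i - 1][j - 1])  # diagonal match
    return cost[n][m]

def recognise(input_seq, templates):
    """Return the vocabulary word whose template gives the minimal distance."""
    return min(templates, key=lambda w: dtw_distance(input_seq, templates[w]))
```

The recogniser simply takes the minimum over all stored templates, as the abstract describes.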
By adaptation, speaker-specific initial reference patterns can be made more reliably representative of the speaker's typical pronunciations, by the use of data from additional utterances of the words; and speaker-independent reference patterns can be made speaker-specific through the incorporation of information from utterances by the speaker currently using the recognition system. Adaptation can also permit the dynamic adjustment of reference patterns to track any gradual drift, or systematic difference from one occasion to another, in the speaker's voice or pronunciation or in the level and characteristics of background noise. In this thesis, the development of an isolated word recognition system which incorporates various adaptation options is described, and the results of experiments to measure the effects of adaptation are presented and discussed. Both supervised adaptation (which is controlled by feedback from the user as to the correctness or incorrectness of each recognition) and unsupervised adaptation (without such feedback) are explored. The adaptation operates by a weighted averaging of the current reference pattern (template) with the recognised input. Two main weighting options have been defined: one which results in optimisation of the templates for the speaker's typical realisations of the words (if these are assumed to be invariant in time), and one which results in tracking of gradual variations in time. Various values of the relative weights on the existing template and on the input have been tested. Adaptation has been applied both to speaker-specific initial templates and to speaker-independent ones. In each case, the statistical significances of comparative results are computed from the means and variations across a set of test speakers. A compensation technique has been introduced, whereby the distance obtained in matching a template with an input utterance is adjusted according to the number of times that template has been adapted. 
This is necessary because adaptation reduces the typical distances obtained for the adapted template even when this template does not correspond to the correct recognition of the input. Appropriate values of the compensation parameters, to optimise the recognition performance, have been found for various adaptation options. The main conclusions from the experiments are that adaptation, especially supervised adaptation, can yield consistent and useful improvements in the performance of an isolated word recognition system, and that the application of appropriate word distance compensation is important for the attainment of the maximum benefit from the adaptation. Possible refinements and extensions of the adaptation technique are discussed. Results of a limited evaluation of template adaptation in a connected word recognition system are presented. Other aspects of the recognition system which are described and discussed include an efficient multiple-stage decision procedure and some features of the user-system interface design.
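The weighted-averaging adaptation and the distance compensation described above might be sketched as follows. The fixed-weight versus running-mean interpretation of the two weighting options, the frame-by-frame (rather than DTW-aligned) averaging, and the additive compensation form are all assumptions for illustration, not the thesis's exact formulas:

```python
def adapt_template(template, utterance, alpha=0.2):
    """Weighted average of the current template with the recognised input.
    A small fixed alpha tracks gradual drift in the speaker's voice; a
    decreasing alpha (e.g. 1/(n+1) after n adaptations) would instead
    optimise for time-invariant typical pronunciations. Assumes
    pre-aligned, equal-length feature sequences for simplicity."""
    return [(1 - alpha) * t + alpha * u for t, u in zip(template, utterance)]

def compensated_distance(raw_distance, n_adaptations, c=0.5):
    """Distance compensation: adapted templates tend to yield lower raw
    distances even against the wrong word, so penalise by adaptation
    count. The additive form and parameter c are illustrative only."""
    return raw_distance + c * n_adaptations
```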

Agency-based Integration of Aesthetic Criteria within an Interactive Evolutionary Design Environment

Machwe, Azahar Tekchand January 2008
Traditional interactive evolutionary design systems combine a user-based fitness function with an evolutionary search process. Effective integration of machine-based tools, human designers and real-world design processes requires a higher level of information exchange between the user and the design system. This dual requirement of increasing the connectivity between the machine and the user as well as incorporating human preferences with machine-based fitness evaluations is the main focus of this research. There are two problems in implementing the above, namely the problem of representation as well as user fatigue resulting from design evaluations. The initial work involved an integration of component-based representation, software agents and machine learning with an evolutionary programming algorithm for a relatively simple bridge design problem (the Bridge Design System) with both human and machine based evaluation. The main research contribution of the Bridge Design System was the integration of component-based representation and the machine learning sub-system. The component-based representation addresses the problem of representation. The machine learning sub-system provides a possible solution to the user fatigue problem. The Bridge Design System was extended to tackle a more complex 3-D design problem related to urban-furniture design. To enhance the interactivity and usability of the system, population clustering based on solution similarity was introduced within the urban-furniture design system. The user fatigue issue was addressed further through population clustering, which allowed users to work with larger population sizes than usual. Clustering also allowed the identification of features present in high-performance as well as user-preferred solutions.

Interpreting modal natural deduction as resolution

Robinson, David Edward Ashdown January 2009
This thesis studies deduction systems for modal logics and the relation between them. Natural deduction systems give proofs that are close to human reasoning but are not well suited to automation, while refutation systems are well suited to automation but their inference steps are not close to human informal reasoning. This thesis will introduce a natural deduction calculus with a resolution rule that gives a good framework for simulating different calculi and studying their properties. We show that this calculus is able to directly simulate a tableau calculus for modal logic using two different search strategies. We then introduce an ordered hyperresolution calculus for modal logic K using a structural transformation to preserve the structure of input formulae. We show that there is a mapping from derivations in the ordered hyperresolution calculus to derivations in the natural deduction calculus and a further mapping in the other direction. The hyperresolution calculus is a standard calculus, and we show that it is therefore possible to automatically generate proofs close to human reasoning using already existing, fast theorem provers. We give extensions of the structural transformation to a number of extensions of K and show that the mappings in both directions still hold. Since we have two simulations in a common framework, the relation between the tableau and resolution simulations is considered.
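The resolution rule underlying such calculi can be illustrated for plain propositional clauses. This is a minimal sketch of binary resolution, not the modal natural-deduction-with-resolution calculus that the thesis defines:

```python
def resolve(clause1, clause2):
    """All propositional resolvents of two clauses.
    A clause is a frozenset of literals; a literal is a (name, polarity)
    pair, so ("p", True) is p and ("p", False) is ¬p. Resolving on a
    complementary pair removes both literals and unions the remainders."""
    resolvents = []
    for (name, pol) in clause1:
        if (name, not pol) in clause2:
            resolvent = (clause1 - {(name, pol)}) | (clause2 - {(name, not pol)})
            resolvents.append(frozenset(resolvent))
    return resolvents
```

For example, resolving {p, q} with {¬p} yields {q}; deriving the empty clause signals a refutation.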

Agent Risk Management in Electronic Markets Using Option Derivatives

Espinosa, Omar Baqueiro January 2008
In this thesis I present a framework for intelligent software agents to manage risk in electronic marketplaces using Option Derivatives. To compare the performance of agents that trade Option Derivatives with agents not using them, I create a simulation of a financial marketplace in which software agents are vested with decision rules for buying and selling assets and Options. The motivation of my work is the need for risk management mechanisms in those Multi-Agent Systems where resources are allocated according to a market mechanism. Autonomous agents participating in such markets need to consider the risks to which they are exposed when trading in them, and to take actions to manage those risks. This thesis considers the hypothesis that software agents can benefit from trading Option Derivatives, using them as a tool to manage their exposure to uncertainty in the market. The main contributions of this thesis are: First, an abstract framework of an Option trading market is developed. This framework serves as a foundation for the implementation of computational Option trading mechanisms in systems using Market-Based resource allocation. The framework can be incorporated into existing Market-Based systems using the traded resources as the underlying assets for the Option market. Within the framework, four basic Option trading strategies are introduced, some of which reason about the risks exposed by their actions. These strategies are provided as a foundation for the development of more complex strategies that maximise the utility of the trading agents by the use of Options. The second contribution of this thesis is the analysis of the results from simulation experiments performed with the implementation of a software Multi-Agent System based on the developed Option trading framework. The system was developed in Java using the Repast simulation platform. The experiments were used to test the performance of the developed trading strategies.
This research shows that agents which traded Options by choosing actions aiming to minimize their risk performed significantly better than agents using other trading strategies, in the majority of the experiments. Agents using this risk-minimizing strategy also observed a lower correlation between the asset price and their returns for the majority of the scenarios tested. Agents which traded Options aiming to maximize their returns performed better than their peers in the scenarios where the asset price volatility was high. Finally, it was also observed that the performance differential of the strategies increased as the uncertainty about the future price of the asset was increased.
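The risk-management role of Options described above can be illustrated with expiry payoffs. This is a toy sketch with invented function names; the thesis's agents use richer decision rules than a single hedged position:

```python
def call_payoff(spot, strike):
    """Value at expiry of a call Option: the right to buy at the strike."""
    return max(spot - strike, 0.0)

def put_payoff(spot, strike):
    """Value at expiry of a put Option: the right to sell at the strike."""
    return max(strike - spot, 0.0)

def protective_put_value(spot, strike, premium):
    """Value at expiry of holding the asset plus a put on it: the put caps
    losses below the strike, at the cost of the premium paid up front.
    This is one simple way an agent can trade Options to bound its
    downside exposure to asset-price uncertainty."""
    return spot + put_payoff(spot, strike) - premium
```

Whatever the final price, the hedged position is worth at least the strike minus the premium, which is the sense in which Options manage exposure to uncertainty.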

Cross language information retrieval using ontologies

Abusalah, Mustafa A. January 2008
The basic idea behind a Cross Language Information Retrieval (CLIR) system is to retrieve documents in a language different from the query. Therefore translation is needed before matching of query and document terms can take place. This translation process tends to cause a reduction in the retrieval effectiveness of CLIR as compared to monolingual Information Retrieval systems. The research introduces a new CLIR approach, producing a unique CLIR system based on multilingual Arabic/English ontologies; the ontology is used for query expansion and translation. Both Arabic and English ontologies are mapped using unique automatic ontology mapping tools that will be introduced in this study as well. This research addresses lexical ambiguity problems caused by erroneous translations. To prevent this, the study proposed developing a CLIR system based on a multilingual ontology to create a mapping that will solve the lexical ambiguity problem. This study also uses ontology semantic relations to expand the query to produce a better formulated query and gain better results. Finally a weighting algorithm is applied to the result set of the proposed system and results are compared to a state-of-the-art baseline CLIR system that uses a dictionary as a translation base. The CLIR system was implemented in the travel domain and two ontologies were developed. A unique ontology mapping tool was also developed to map the two ontologies. The experimental work described consists of the design, development, and evaluation of the proposed CLIR system. The evaluation of the proposed system demonstrates that its retrieval effectiveness outperformed the baseline system after running two human-centered experiments. Relevancy judgments were measured and the results produced indicated that the proposed system is more effective than the baseline system.
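Ontology-based query expansion and translation of the kind described might look like the following toy sketch. The ontology entries, their structure, and the function names are invented placeholders, not the thesis's travel-domain ontologies or mapping tool:

```python
# Hypothetical toy ontology: each English concept carries an Arabic label
# and semantically related concepts used for query expansion.
ONTOLOGY = {
    "hotel": {"translation": "فندق", "related": ["accommodation", "resort"]},
    "accommodation": {"translation": "إقامة", "related": []},
    "resort": {"translation": "منتجع", "related": []},
}

def expand_and_translate(query_terms):
    """Expand the query via ontology semantic relations, then translate
    each expanded term, so matching happens in the document language."""
    expanded = set(query_terms)
    for term in query_terms:
        expanded.update(ONTOLOGY.get(term, {}).get("related", []))
    return {ONTOLOGY[t]["translation"] for t in expanded if t in ONTOLOGY}
```

Expansion before translation is what lets related concepts disambiguate and enrich the translated query, which is the effect the abstract attributes to the ontology.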

Distributed control architecture

Rawlings, Trevor January 2009
This document describes the development and testing of a novel Distributed Control Architecture (DCA). The DCA developed during the study is an attempt to turn the components used to construct unmanned vehicles into a network of intelligent devices, connected using standard networking protocols. The architecture exists at both a hardware and software level and provides a communication channel between control modules, actuators and sensors. A single unified mechanism for connecting sensors and actuators to the control software will reduce the technical knowledge required by platform integrators and allow control systems to be rapidly constructed in a Plug and Play manner. DCA uses standard networking hardware to connect components, removing the need for custom communication channels between individual sensors and actuators. The use of a common architecture for the communication between components should make it easier for software to dynamically determine the vehicle's current capabilities and increase the range of processing platforms that can be utilised. Implementations of the architecture currently exist for Microsoft Windows, Windows Mobile 5, Linux and Microchip dsPIC30 microcontrollers. Conceptually, DCA exposes the functionality of each networked device as objects with interfaces and associated methods. Allowing each object to expose multiple interfaces allows for future upgrades without breaking existing code. In addition, the use of common interfaces should help facilitate component reuse, unit testing and make it easier to write generic reusable software.
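The object/interface view of networked devices could be sketched as below. This is a Python analogy with invented class names, not the actual DCA implementations, which target Windows, Linux and dsPIC30 microcontrollers:

```python
from abc import ABC, abstractmethod

class RangeSensor(ABC):
    """One interface a networked device may expose."""
    @abstractmethod
    def read_range_m(self) -> float: ...

class TemperatureSensor(ABC):
    """A second interface; a device may expose several at once, so new
    interfaces can be added later without breaking existing clients."""
    @abstractmethod
    def read_temp_c(self) -> float: ...

class SonarDevice(RangeSensor, TemperatureSensor):
    """A device object exposing two interfaces with associated methods."""
    def read_range_m(self) -> float:
        return 4.2   # stub; a real device would answer over the network
    def read_temp_c(self) -> float:
        return 18.0  # stub

def capabilities(device):
    """Dynamically discover which interfaces a device supports, as the
    control software must do to determine the vehicle's capabilities."""
    return [cls.__name__ for cls in (RangeSensor, TemperatureSensor)
            if isinstance(device, cls)]
```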

Evolutionary building layout optimisation

Manthilake, Inoka January 2011
Space layout planning (SLP) is the organisation of functional/living spaces (spatial units, SUs) and corridors/access paths of a building satisfying requirements (e.g. accessibility, adjacency) to achieve design goals (e.g. minimising unutilised space and travelling cost). Out of many ways of arranging SUs, a human designer may consider only a handful of alternatives due to resource limitations (e.g. time and effort). To facilitate this task, decision support for SLP design can be obtained using computer technology. Although the problem is highly combinatorial, many attempts have been made to automate SLP. However, in the majority of these the SUs are arranged in a fixed building footprint/boundary, which may limit exploration of the entire solution space. Thus, the aim is to develop a space layout optimisation system that allows SUs to position themselves in a building site to satisfy design goals. The objectives of the research are to: understand architectural SLP and optimisation; assess the need for automation of SLP optimisation; explore methods to formulate the SLP optimisation problem; develop a prototype system to optimise SLP based on building design guidelines; and evaluate performance for its strengths and weaknesses using case studies. As early stages of building design are found to be most effective in reducing the environmental impact and costs, provision is also made for integrating these aspects in SLP. To address the first three objectives, a literature review was conducted. The main finding of this was the current need for an optimisation tool for SLP. It also revealed that genetic algorithms (GA) are widely used and show promise in optimisation. Then, a prototype space layout optimisation system (Sl-Opt) was developed using a real-valued GA and was programmed in Java. Constrained optimisation was employed where adjacency and accessibility needs were modelled as constraints, and the objective was to minimise the spread area of the layout.
Following this, using an office layout with 8 SUs, Sl-Opt was evaluated for its performance. Results of the designed experiment and subsequent statistical tests showed that the selected parameters of GA operators influence optimisation collectively. Finally, using the best parameter set, strengths and weaknesses of Sl-Opt were evaluated using two case studies: a hospital layout problem with 31 SUs and a problem with 10 non-rectangular SUs. Findings revealed that using the selected GA parameters Sl-Opt can successfully solve small-scale problems of fewer than about 10 SUs. For larger problems, the parameters need to be altered. Case studies also revealed that the system is capable of solving problems with non-rectangular SUs with varied orientations. Sl-Opt appears to have potential as a building layout decision support tool, and in addition, integration of other aspects such as energy efficiency and cost is possible.
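A real-valued GA of the kind Sl-Opt builds on can be sketched minimally. This toy minimises bounding-box spread area only, treats each SU as a unit square, and omits the adjacency and accessibility constraints the thesis models; all parameter values are illustrative:

```python
import random

def spread_area(layout):
    """Objective: bounding-box area of unit-square SUs at (x, y) positions."""
    xs = [p[0] for p in layout]
    ys = [p[1] for p in layout]
    return (max(xs) - min(xs) + 1) * (max(ys) - min(ys) + 1)

def evolve(n_sus=4, pop_size=30, gens=200, sigma=0.5, seed=0):
    """Minimal real-valued GA: tournament selection plus Gaussian mutation,
    keeping the better of parent and child each generation."""
    rnd = random.Random(seed)
    pop = [[(rnd.uniform(0, 10), rnd.uniform(0, 10)) for _ in range(n_sus)]
           for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            parent = min(rnd.sample(pop, 3), key=spread_area)  # tournament
            child = [(x + rnd.gauss(0, sigma), y + rnd.gauss(0, sigma))
                     for x, y in parent]                       # real-valued mutation
            nxt.append(min([parent, child], key=spread_area))  # elitist keep
        pop = nxt
    return min(pop, key=spread_area)
```

In a fuller system the fitness would also penalise violated adjacency/accessibility constraints, which is how Sl-Opt's constrained optimisation is described.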

Adaptive Function Modal Learning Neural Networks

Kang, Miao January 2011
Modal learning is a neural network term that refers to a single neural network combining more than one mode of learning, with the aim of achieving more powerful learning results than a network with only a single mode of learning. This thesis introduces a novel modal learning Adaptive Function Neural Network (ADFUNN) with the aim of overcoming the linear inseparability limitation in a single-weight-layer supervised network. Adaptation in the function mode of learning within individual neurons is carried out in parallel with the traditional weights adaptation mode of learning between neurons, thus producing a more powerful, flexible form of learning. ADFUNN employs modifiable piecewise linear neuron activation functions and meanwhile adapts the weights using a modified delta learning rule. Experimental results show the single-layer ADFUNN is highly effective at assimilating and generalising on many linearly inseparable problems, such as the Iris dataset and a natural language phrase recognition task. A multi-layer approach, Multi-layer ADFUNN (MADFUNN), is introduced to solve highly complex datasets. It aims to find a suitably restricted subset of neuron activation functions which has a good representational capacity and enables efficient learning for complex models with large datasets. Experiments on analytical function recognition and letter image recognition are solved by MADFUNN with high levels of recognition. In order to further explore modal learning, ADFUNN is combined with an unsupervised modal learning neural network called Snap-Drift (Palmer-Brown and Lee) to create Snap-Drift ADFUNN (SADFUNN). It is used to solve an optical and pen-based handwritten digit recognition task from the UCI machine learning repository and exhibits more powerful generalisation ability than MLPs. An additional benefit of ADFUNN, as well as MADFUNN and SADFUNN, is that the learned functions can support intelligent data analysis: the learned activation function curves reveal much useful information about the data.
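The parallel weight/function adaptation might be sketched for a single neuron as follows. The lookup-table layout, linear interpolation, and learning rates are illustrative assumptions in the spirit of ADFUNN, not the published algorithm:

```python
class AdaptiveNeuron:
    """Single neuron with a modifiable piecewise linear activation function:
    both the weights and the function's sample points are adapted."""
    def __init__(self, n_inputs, n_points=11, lo=-5.0, hi=5.0):
        self.w = [0.0] * n_inputs
        self.lo, self.hi = lo, hi
        self.step = (hi - lo) / (n_points - 1)
        self.f = [0.0] * n_points   # activation values at the sample points

    def _index(self, a):
        a = min(max(a, self.lo), self.hi)
        return min(int((a - self.lo) / self.step), len(self.f) - 2)

    def activation(self, a):
        """Piecewise linear interpolation between adjacent sample points."""
        i = self._index(a)
        t = (min(max(a, self.lo), self.hi) - (self.lo + i * self.step)) / self.step
        return (1 - t) * self.f[i] + t * self.f[i + 1]

    def train(self, x, target, lr_f=0.1, lr_w=0.01):
        a = sum(wi * xi for wi, xi in zip(self.w, x))   # weighted activation
        err = target - self.activation(a)
        i = self._index(a)
        self.f[i] += lr_f * err        # function mode: adapt the two nearest
        self.f[i + 1] += lr_f * err    # sample points of the activation curve
        self.w = [wi + lr_w * err * xi
                  for wi, xi in zip(self.w, x)]          # weight mode: delta rule
```

After training, inspecting `self.f` shows the learned activation curve, which is the data-analysis benefit the abstract mentions.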

Local pattern mining in multi-relational data

Spyropoluou, Eirini January 2013
Multi-relational data mining has so far been synonymous with methods based on Inductive Logic Programming (ILP), which discover frequent first-order logic rules in the data. This is due to the fact that ILP conveniently captures the multi-relational structure, while there has not been a suitable pattern syntax extension of an itemset for the case of multi-relational data. Local pattern mining methods have mostly focused on mining a single relation. A common strategy for mining multi-relational data (MRD) has been to apply frequent itemset mining on the join of all database relations. However, when flattening the data in this way, important structural information is lost and itemsets do not capture all the associations in the data. This thesis describes our research that led to a new approach for local pattern mining in multi-relational data. The final result of this research is summarised as follows. We define the new pattern syntax of Maximal Complete Connected Subsets (MCCSs) for MRD with binary relations, which captures well the structure of the original data. We additionally propose the generalisation of MCCSs, called N-MCCSs, for MRD containing relations of any arity. We demonstrate how N-MCCSs contain tiles [27] and n-sets [16] as special cases. Furthermore, we propose RMiner, an efficient algorithm to mine MCCSs, and N-RMiner, an efficient algorithm to mine N-MCCSs. We show experimentally that N-RMiner, while applicable to MRD in general, when applied to a single n-ary relation considerably outperforms the state-of-the-art algorithm for mining n-sets [16] on real-world datasets. Finally, this work is incorporated into a general data mining framework for quantifying the subjective interestingness of patterns based on the prior information of the user.
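For a single binary relation, the "complete" and "maximal" parts of such a pattern can be sketched via a closure operation, as in formal concept analysis. This is a simplification for illustration; the thesis's MCCS syntax additionally handles connectedness across multiple relations:

```python
def is_complete(entities, attributes, relation):
    """A pattern is 'complete' when every entity has every attribute,
    i.e. all (entity, attribute) pairs are present in the relation."""
    return all((e, a) in relation for e in entities for a in attributes)

def close(entities, all_attributes, all_entities, relation):
    """Closure: take all attributes shared by the given entities, then all
    entities having all of those attributes. A complete pattern equal to
    its own closure cannot be extended, i.e. it is maximal."""
    attrs = {a for a in all_attributes
             if all((e, a) in relation for e in entities)}
    ents = {e for e in all_entities
            if all((e, a) in relation for a in attrs)}
    return ents, attrs
```

Flattening by joining relations would lose exactly the per-relation pair structure that the completeness test above depends on.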

Constraint based event recognition for information extraction

Crowe, J. D. M. January 1996
A common feature of news reports is the reference to events other than the one which is central to the discourse. Previous research has suggested Gricean explanations for this; more generally, the phenomenon has been referred to simply as "journalistic style". Whatever the underlying reasons, recent investigations into information extraction have emphasised the need for a better understanding of the mechanisms that can be used to recognise and distinguish between multiple events in discourse. Existing information extraction systems approach the problem of event recognition in a number of ways. However, although frameworks and techniques for black-box evaluations of information extraction systems have been developed in recent years, almost no attention has been given to the evaluation of techniques for event recognition, despite general acknowledgement of the inadequacies of current implementations. Not only is it unclear which mechanisms are useful, but there is also little consensus as to how such mechanisms could be compared. This thesis presents a formalism for representing event structure, and introduces an evaluation metric through which a range of event recognition mechanisms are quantitatively compared. These mechanisms are implemented as modules within the CONTESS event recognition system, and explore the use of linguistic phenomena such as temporal phrases, locative phrases and cue phrases, as well as various discourse structuring heuristics. Our results show that, whilst temporal and cue phrases are consistently useful in event recognition, locative phrases are better ignored. A number of further linguistic phenomena and heuristics are examined, providing an insight into their value for event recognition purposes.
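A cue-phrase-based recognition mechanism of the kind evaluated might look like this toy sketch. The cue list and segmentation rule are invented for illustration; the CONTESS modules are considerably richer and also weigh temporal phrases and discourse heuristics:

```python
# Hypothetical cue phrases signalling a shift to a different event.
CUE_PHRASES = ("meanwhile", "previously", "last year", "in other news")

def segment_events(sentences):
    """Group sentences into event segments, starting a new segment
    whenever a sentence opens with a cue phrase."""
    segments, current = [], []
    for s in sentences:
        if current and s.lower().startswith(CUE_PHRASES):
            segments.append(current)
            current = []
        current.append(s)
    if current:
        segments.append(current)
    return segments
```

An evaluation metric of the kind the thesis introduces would then compare such machine-produced segmentations against a gold-standard event structure.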
