
The computational application of bilattice logic to natural reasoning

Schoter, Andreas January 1996
Chapter 1 looks at natural reasoning. It begins by considering the inferences that people make, particularly in terms of how those inferences differ from what is sanctioned by classical logic. I then consider the role of logic in relation to psychology and compare this relationship with the competence/performance distinction from syntax. I discuss four properties of natural reasoning that I believe are key to any theory: specifically partiality, paraconsistency, relevance and defeasibility. I then discuss whether these are semantic properties or pragmatic ones, and conclude by describing a new view of logic and inference prevalent in some contemporary writings. Chapter 2 looks at some of the existing formal approaches to the four properties. For each property I present the basic idea in formal terms, and then discuss a number of systems from the literature. Each section concludes with a brief discussion of the importance of the given property in the field of computation. Chapter 3 develops the formal system used in this thesis: an evidential, bilattice-based logic (EBL). I begin by presenting the mathematical preliminaries, and then show how the four properties of natural reasoning can be captured. The details of the logic itself are presented, beginning with the syntax and then moving on to the semantics. The role of pragmatic inferences in the logic is considered and a formal solution is advanced. I conclude by comparing EBL to some of the logics discussed in Chapter 2. Chapter 4 rounds off Part 1 by considering the implementation of the logic and some of its computational properties. It begins by considering the application of evidential bilattice logic to logic programming; it extends Fitting's work in this area to construct a programming language, QLOG2. I give some examples of this language in use. 
The QLOG2 language is then used as part of the implementation of the EBL system itself: I describe the details of this implementation and then give some examples of the system in use. The chapter concludes with an informal presentation of some basic complexity results for logical closure in EBL, based on the given implementation. Chapter 5 presents some interesting data from linguistics that reflects some of the principles of natural reasoning; in particular I concentrate on implicatures and presuppositions. I begin by describing the data and then consider a number of approaches from both the logical and the linguistic literature. Chapter 6 uses the logic developed in Chapter 3 to analyse the data presented in Chapter 5. I consider the basic inference cases, and then move on to more complex examples involving contextual interactions. The results are quite successful, and add weight to Mercer's quest for a common logical semantics for entailment and presupposition. All of the examples considered in this chapter can be handled by the implemented system described in Chapter 4. Finally, Chapter 7 rounds off by presenting some further areas of research raised by this investigation; in particular, the issues of quantification and modality are discussed.
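As a rough illustration of the kind of structure an evidential bilattice logic builds on, the following sketch models Belnap's four-valued bilattice FOUR, the simplest bilattice of evidence. The pair encoding and function names are my own illustrative choices, not the thesis's EBL implementation.

```python
# A minimal sketch of Belnap's four-valued bilattice FOUR. Each truth
# value pairs evidence-for with evidence-against, which directly captures
# partiality (NONE) and paraconsistency (BOTH: contradiction without
# explosion). Illustrative model only, not the thesis's EBL system.

NONE = (0, 0)   # no evidence either way
TRUE = (1, 0)
FALSE = (0, 1)
BOTH = (1, 1)   # contradictory evidence

def meet_t(a, b):
    """Conjunction in the truth ordering: least evidence-for,
    most evidence-against."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def join_t(a, b):
    """Disjunction in the truth ordering."""
    return (max(a[0], b[0]), min(a[1], b[1]))

def join_k(a, b):
    """Knowledge-ordering join: accumulate evidence from both sources."""
    return (max(a[0], b[0]), max(a[1], b[1]))

def neg(a):
    """Negation swaps evidence-for and evidence-against."""
    return (a[1], a[0])
```

Combining conflicting sources with `join_k(TRUE, FALSE)` yields `BOTH` rather than triviality, which is the paraconsistent behaviour the abstract highlights.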

Evaluation of preprocessors for neural network speaker verification

Salleh, Sheikh-Hussain January 1997
A hybrid network is proposed for speaker verification (SV). It consists of self-organized neural networks and a network of multi-layer perceptrons (MLPs). The benefits of this SV system are its speed and simplicity. The first experiment used a neural network model (NNM) with frame labelling performed from a client codebook, known as NNM-C. This model outperformed the HMM (Hidden Markov Model). The second set of experiments used the NNM with frame labelling from the client and impostor codebooks, known as NNM-CI. The third experiment used a new speaker verification method (HMM-MLP), which combines an HMM-based preprocessor with an MLP; the output scores of the HMM were used as the inputs to the MLP. Compared with the NNM-C it is more successful, since it combines the HMM's capability for time alignment of the speech signals with the efficient discrimination capability of the neural networks. Moreover, the method achieves better performance by using more than one feature set for each set of preprocessed parameters, exploiting the fact that different feature sets can produce different partitionings of the vector space. The most important contribution of this research was the development and refinement of the NNM SV system, which incorporates the client barcode into the system design. The idea is to use a correlation value to measure the degree of similarity or dissimilarity between the client and the impostor as added information to aid the learning process. This measure provides a scalar evaluation of how well the client or the impostor utterance correlates with the client barcode. Integrating the client barcode into the system design has been shown to provide certain advantages for specific client speakers.
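The correlation measure described in the abstract can be sketched roughly as follows. The feature vectors, threshold and function names are hypothetical stand-ins, since the abstract does not specify the barcode format or the exact correlation used.

```python
# Sketch of scoring an utterance against a stored client "barcode" via
# Pearson correlation. The barcode representation and the acceptance
# threshold are invented for illustration.
import math

def pearson(x, y):
    """Pearson correlation between two equal-length feature vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def verify(utterance, client_barcode, threshold=0.5):
    """Accept the speaker if the utterance correlates strongly
    with the client barcode."""
    return pearson(utterance, client_barcode) >= threshold
```

The scalar score makes it easy to fold "how client-like is this utterance" into a learning process as side information, which is the role the abstract describes.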

A query-based approach to ontologies using the theory of institutions

Pokrywczynski, Daniel January 2010
In recent years we can observe an increasing interest in using ontologies in different branches of science and commerce. This includes disciplines like medicine, bio-informatics, the semantic web, artificial intelligence, and software engineering, to name a few. The need to use ontologies in new and evolving applications requires ontologies to evolve. Typical modifications of ontologies include extending an ontology with new axioms, extracting a module (by which we mean a self-sufficient part), and merging two ontologies together. While performing these operations one usually wants to know whether the semantics of the ontologies are, in some sense, preserved. As the number of ontology applications grew, so did the number of formalisms for ontology formulation. But this increasing number of ontology languages, while helping to develop ontologies and answering the various needs of users, turned out to be a potential source of problems as well. This becomes evident when one is working with multiple ontologies. For instance, when merging two ontologies one not only has to make sure that unwanted consequences are not entailed as a result of this operation but one may also have to solve the problem of these ontologies being given in different formalisms. Even within one formal language, different ontologies may use different vocabularies. Again, different vocabularies make ontologies difficult to use together. Similar problems arise when one wants to compare two ontologies or use an ontology to answer a query that may be given in a different formalism, or that may use a different vocabulary. In the literature, modularity of ontologies and extending, merging and comparing ontologies have received a lot of attention, but usually these problems are considered within one formalism only. On the other hand, the problem of comparing and combining ontologies formulated in distinct formalisms has not yet been deeply analysed. 
In our work we consider the issues of querying, merging and comparing ontologies in a more general way. In particular, we investigate how one can query an ontology if the query and the ontology are formulated in different formalisms and possibly different vocabularies. We research how to compare and how to merge ontologies if they are formulated in distinct formalisms and vocabularies. To make this possible we start by presenting an abstract view of ontologies; instead of focusing on the axioms inside the ontology, in our approach we look at its consequences within certain query languages. Then we use the theory of institutions to define the consequence relation in a way that does not depend on a particular formal language. Thanks to that, ontologies and queries no longer have to be formulated in the same formal language; moreover, the ontology and the query may be formulated with the use of different vocabularies. This provides the first steps towards a formalism that allows us to compare and combine arbitrary ontologies. As the next step we introduce a structure which allows us to work with multiple ontologies, and we formulate the notions of entailment and inseparability of ontologies relative to a signature of interest in a way that does not depend on a particular formalism. This structure allows us to compare and combine arbitrary ontologies. Furthermore, we show how an abstract description logic can be extended to a description logic with individuals in a systematic and uniform way. We also investigate the relations between description logics and their counterparts with individuals. Thanks to that we are able to use ontologies together with sets of assertions (ABoxes) to answer queries about individuals. Again, we provide a structure allowing for answering queries about individuals originally formulated in a different formal language than the ontology and the ABox, although we assume that ABoxes and ontologies are formulated in the same language. 
We also present a formulation of entailment and inseparability of ontologies based on instance checking, as the one based on subsumption is not strong enough if we consider ontologies together with ABoxes. This formulation is also presented in a way that does not depend on a particular formal language. Finally, we investigate the problem of entailment with respect to some vocabulary E formulated in the lightweight description logic EL and prove that the corresponding decision problem is ExpTime-complete. This extends the result presented by Lutz and Wolter [61] for the description logic EL.
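The notion of signature-relative inseparability can be illustrated with a deliberately tiny model in which an ontology is a set of subsumption pairs and entailment is transitive closure. Real description-logic reasoning is far richer, so this is only a conceptual sketch with invented concept names.

```python
# Toy illustration of inseparability relative to a signature of interest:
# two ontologies are inseparable w.r.t. signature S if they have the same
# consequences formulated over S alone. "Entailment" here is just the
# transitive closure of subsumption pairs (A, B) meaning "A is a B".

def closure(axioms):
    """Transitive closure of a set of subsumption pairs."""
    result = set(axioms)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(result):
            for (c, d) in list(result):
                if b == c and (a, d) not in result:
                    result.add((a, d))
                    changed = True
    return result

def consequences_over(axioms, signature):
    """Consequences mentioning only names in the signature of interest."""
    return {(a, b) for (a, b) in closure(axioms)
            if a in signature and b in signature}

def inseparable(ont1, ont2, signature):
    return consequences_over(ont1, signature) == consequences_over(ont2, signature)
```

For example, {Cat ⊑ Mammal, Mammal ⊑ Animal} and {Cat ⊑ Animal} are inseparable over the signature {Cat, Animal}, but separable once Mammal is added to the signature, which is exactly why the signature parameter matters.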

Performance modelling of applications in a smart environment

Chen, Xiao January 2013
In today’s world, advanced computing technology is widely used to improve our living conditions and facilitate people’s daily activities. Smart environment technology, encompassing a variety of smart devices and intelligent systems, is being researched to provide an easy, comfortable and intelligent living environment. This thesis investigates several technologies relevant to the design of a smart environment. It also explores different modelling approaches, including formal methods and discrete event simulation. The core contents of the thesis are the performance evaluation of scheduling policies and capacity planning strategies. The main contribution is the development of a modelling approach for smart hospital environments. The thesis also provides valuable experience in the formal modelling and simulation of large scale systems. The chief findings are that the dynamic scheduling policy proves to be the most efficient approach in the scheduling process, and that a capacity scheme is verified as optimal for obtaining high work efficiency under limited human resources. The main methods used for the performance modelling are Performance Evaluation Process Algebra (PEPA) and discrete event simulation, with which a great deal of modelling work was completed. For the analysis, we adopt both numerical analysis based on PEPA models and statistical measurements in the simulation.
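A minimal discrete-event sketch of the kind of scheduling comparison described above might look like this. The two policies, the job data and the waiting-time measure are invented for illustration; they are not taken from the thesis's PEPA models or simulations.

```python
# Toy discrete-event comparison of a static round-robin policy against a
# dynamic policy that assigns each arriving task to whichever resource
# (e.g. a member of hospital staff) becomes free first. Arrival and
# service times are made-up example data.

def simulate(jobs, n_servers, dynamic=True):
    """jobs: list of (arrival_time, service_time), in arrival order.
    Returns the mean waiting time over all jobs."""
    free_at = [0.0] * n_servers   # when each server next becomes idle
    total_wait = 0.0
    for i, (arrival, service) in enumerate(jobs):
        if dynamic:
            # dynamic policy: pick the earliest-available server
            s = min(range(n_servers), key=lambda k: free_at[k])
        else:
            s = i % n_servers      # static round-robin assignment
        start = max(arrival, free_at[s])
        total_wait += start - arrival
        free_at[s] = start + service
    return total_wait / len(jobs)
```

On a workload with one long job, the dynamic policy routes later short jobs around the busy server, while round-robin blindly queues behind it, which is the kind of efficiency gap the thesis's findings point to.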

Cross organisational compatible workflows generation and execution

Saleem, Mohammad January 2012
With the development of the internet and electronics, demand for electronic and online commerce has increased. This has, in turn, increased the demand for business process automation. Workflow has established itself as the technology used for business process automation. Since business organisations must work in coordination with many other organisations in order to succeed, their workflows are expected to collaborate with those of other organisations. Collaborating organisations can only proceed in business if they have compatible workflows; therefore, there is a need for cross-organisational workflow collaboration. The dynamism and complexity of online and electronic business, together with high demand from the market, leave workflows prone to frequent changes. If a workflow changes, it has to be re-engineered as well as reconciled with the workflows of the collaborating organisations. To avoid continuous re-engineering and reconciliation of workflows, and to reuse existing units of work, the focus has recently shifted from modelling workflows to automatic workflow generation. Workflows must proceed to runtime execution; otherwise, the effort invested in build-time workflow modelling is wasted. Therefore, workflow management and collaboration systems must support workflow enactment and runtime workflow collaboration. Although substantial research has been done on build-time workflow collaboration, automatic workflow generation, workflow enactment and runtime workflow collaboration, the integration of these highly inter-dependent aspects of workflow has not been considered in the literature. The research work presented in this thesis investigates the integration of these different aspects. 
The main focus of the research presented in this thesis is the creation of a framework that is able to generate multiple sets of compatible workflows for multiple collaborating organisations from their OWL-S process definitions and high-level goals. The proposed framework also supports runtime enactment and runtime collaboration of the generated workflows.
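The generation step can be sketched as a toy planner over service preconditions and effects, which is the kind of information an OWL-S process model provides. The greedy forward-chaining strategy, service names and fact atoms below are illustrative assumptions, not the framework's actual algorithm.

```python
# Sketch of goal-driven workflow generation: given service definitions
# with preconditions and effects, chain services forwards from an initial
# state until a high-level goal holds. Services and facts are invented.

def generate_workflow(services, initial, goal):
    """services: {name: (preconditions, effects)} as frozensets of facts.
    initial, goal: sets of facts. Greedy forward chaining; returns an
    ordered list of service names, or None if the goal is unreachable."""
    state, plan = set(initial), []
    while not goal <= state:
        progressed = False
        for name, (pre, eff) in services.items():
            if pre <= state and not eff <= state:
                state |= eff
                plan.append(name)
                progressed = True
                break
        if not progressed:
            return None   # goal unreachable with these services
    return plan
```

A `None` result is the interesting case for collaboration: it signals that the participating organisations' service definitions cannot be chained into a compatible workflow for the stated goal.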

Cooperative problem-solving using assumption-based truth maintenance

Dugdale, Julie Anne January 1994
Classical expert systems have been criticised for ignoring the problem-solving ability of the user. The ramifications of this are more than just a reduced problem-solving capability. By excluding the knowledge of the user, the knowledge-base of the system is incomplete, as it is infeasible to capture all the relevant factors. Furthermore, users become alienated because they do not have the opportunity to adapt the situation according to their skills. In many cases the conclusions of the expert system are rejected, or the user's responsibility is abrogated, because the user cannot influence the decision. In response to these criticisms a new type of system is emerging - the Cooperative Problem-solving System. Such systems provide a dynamic interactive environment in which the user and the system work together to derive a solution. A cooperative approach is appropriate in two situations. The first is when a problem can be broken down into sub-problems which can then be assigned to the various participants. The second is when the relative merits of independently derived solutions need to be investigated by participants in order to arrive at a solution that is mutually acceptable. This thesis is concerned with cooperation of the latter category, and the cooperative system developed in this work is the first to study cooperation in this respect. The domain chosen to implement the cooperative problem-solving system is investment management. The process of investment management described in this work is based upon the approach used by the Product Operations division of International Computers Limited (ICL). Investment management is ideal because of the nature of the reasoning used and the type of cooperative interaction that takes place. Until now, the application of such a system to investment management has not been explored. 
Previous methods for analysing cooperation focus on the identification and assignment of individual tasks to the various agents. These methods are therefore inappropriate to the interpretation of cooperation used in this work. The functions necessary to provide a cooperative environment have been identified by developing a new approach to analysing cooperation. Central to this approach are the transcripts obtained from management meetings. This data was supplemented by devising a case-study and running simulated meetings. Seven functions that a cooperative problem-solving system should provide were identified from the transcripts: information provision, problem-solving, explanation, user-modelling, constraint recognition, problem-modelling, and confirmation. The individual identification of these functions is not new. The novelty in this work stems from the collective use of the functions for cooperation.
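The assumption-based truth maintenance machinery underlying such a system can be sketched minimally as label propagation over justifications: each datum carries the sets of assumptions under which it holds, so independently derived, possibly conflicting solutions can be compared. This sketch omits nogood handling and label minimisation from de Kleer's full ATMS.

```python
# Minimal ATMS-style label propagation. justifications is a list of
# (antecedents, consequent) pairs; assumptions is a set of assumption
# names. Returns, for every derivable node, the set of environments
# (frozensets of assumptions) that support it.

def propagate(justifications, assumptions):
    labels = {a: {frozenset([a])} for a in assumptions}
    changed = True
    while changed:
        changed = False
        for ants, cons in justifications:
            if all(a in labels for a in ants):
                # combine one supporting environment per antecedent
                combos = {frozenset()}
                for a in ants:
                    combos = {c | e for c in combos for e in labels[a]}
                if combos - labels.get(cons, set()):
                    labels.setdefault(cons, set()).update(combos)
                    changed = True
    return labels
```

Because each conclusion keeps every environment that supports it, participants can see which solutions rest on which assumptions and negotiate over the assumptions rather than the conclusions, which fits the style of cooperation studied here.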

Stability analysis of dynamical neural networks

Arik, S. January 1997
No description available.

A machine learning system for the automatic identification of text structure, and application to research article abstracts in computer science

Anthony, L. E. January 2002
Teaching learners about the common structural patterns used in different types of texts, such as the abstract and introduction of research papers, has proved successful in many reading and writing courses. However, a major problem faced by researchers when analyzing texts is the vast amount of time needed to conduct the analysis. This has led to many studies reporting only 'preliminary' findings, based on a small corpus of target texts. In this thesis, I propose a computer system that uses machine learning to automatically identify the structure of texts, enabling researchers to quickly and effectively process very large corpora. The system also has applications in the classroom as a teacher resource when evaluating and selecting texts that highlight certain structural features, and as a student resource when conducting data-driven learning. To test the system, it was applied to research article abstracts in computer science journals and found to be fast and accurate. It was also assessed by a practicing teacher and graduate school student, and shown to be flexible, easy to use, and a practical aid in the classroom.
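One plausible way to realise such automatic structure identification is a per-sentence naive Bayes classifier over rhetorical moves. The move labels and training sentences below are toy stand-ins for a real annotated corpus, and the thesis's actual learning method may differ.

```python
# Sketch: classify each sentence of an abstract into a rhetorical move
# (e.g. "background" vs "method") using naive Bayes over word counts
# learned from labelled examples. Training data is invented.
import math
from collections import Counter, defaultdict

def train(labelled):
    """labelled: list of (sentence, move). Returns word counts per move
    and total word counts per move."""
    counts, totals = defaultdict(Counter), Counter()
    for sent, move in labelled:
        words = sent.lower().split()
        counts[move].update(words)
        totals[move] += len(words)
    return counts, totals

def classify(sentence, counts, totals):
    """Naive Bayes with add-one smoothing over the training vocabulary."""
    vocab = len({w for c in counts.values() for w in c})
    best, best_lp = None, float("-inf")
    for move in counts:
        lp = 0.0
        for w in sentence.lower().split():
            lp += math.log((counts[move][w] + 1) / (totals[move] + vocab))
        if lp > best_lp:
            best, best_lp = move, lp
    return best
```

Running the classifier over every sentence of every abstract in a corpus is what turns a weeks-long manual move analysis into a batch job, which is the time saving the abstract emphasises.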

Applying artificial intelligence techniques to data distribution

O'Neill, J. January 2014
Automatic data distribution is one of the most crucial issues preventing the development of a fully automatic parallelisation environment. Researchers have proposed solutions that utilise artificial intelligence (AI) technology, including expert systems and neural networks, to try to solve the problem. In this research project, alternative artificial intelligence techniques, namely Genetic Algorithms (GAs) and Ant Colony Optimisation (ACO), are investigated to determine whether their use would be beneficial in the data distribution process. A data distribution tool has been developed for each technique in order to verify the detailed analysis. The tools were tested using 300 example loops and the results show that the introduction of these techniques was successful in determining an appropriate data partition and distribution strategy for all 300 test cases. Furthermore, a novel hyper-heuristic approach to the data distribution problem involving case-based reasoning is also investigated. The aim of the hyper-heuristic approach is to select the most appropriate heuristic to apply to a particular problem. The approach has been verified by the development of a case-based reasoning tool that chooses an appropriate heuristic based on previous experience. Results show that the approach is effective at identifying similar cases in the case base and choosing the most appropriate heuristic to apply.
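A genetic algorithm for data distribution can be sketched as follows. Encoding one distribution scheme per array dimension is a common choice for this problem, but the cost function here is invented purely to make the search concrete; a real tool would derive costs from the loop's communication pattern.

```python
# Toy GA that evolves a choice of distribution scheme ("block" or
# "cyclic") per array dimension so as to minimise an assumed
# communication-cost function. The cost model is illustrative only.
import random

SCHEMES = ("block", "cyclic")

def cost(chromosome):
    """Hypothetical cost model: pretend the loop's access pattern makes
    'block' cheapest in dimension 0 and 'cyclic' cheapest in dimension 1."""
    penalty = 0
    if chromosome[0] != "block":
        penalty += 10
    if chromosome[1] != "cyclic":
        penalty += 10
    return penalty

def evolve(n_dims=2, pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(SCHEMES) for _ in range(n_dims)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]        # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_dims)      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:              # mutation
                i = rng.randrange(n_dims)
                child[i] = rng.choice(SCHEMES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)
```

Swapping the fitness function for one derived from pheromone trails rather than crossover would give the ACO variant; the chromosome encoding stays the same.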

Adaptive Function Modal Learning Neural Networks

Kang, Miao January 2011
Modal learning is a neural network term that refers to a single neural network combining more than one mode of learning, with the aim of achieving more powerful results than a network with only a single mode of learning. This thesis introduces a novel modal learning Adaptive Function Neural Network (ADFUNN), with the aim of overcoming the linear inseparability limitation of a single-weight-layer supervised network. Adaptation in the function mode of learning within individual neurons is carried out in parallel with the traditional weight-adaptation mode of learning between neurons, producing a more powerful, flexible form of learning. ADFUNN employs modifiable piecewise-linear neuron activation functions while adapting the weights using a modified delta learning rule. Experimental results show the single-layer ADFUNN is highly effective at assimilating and generalising on many linearly inseparable problems, such as the Iris dataset and a natural language phrase recognition task. A multi-layer approach, Multi-layer ADFUNN (MADFUNN), is introduced to solve highly complex datasets. It aims to find a suitably restricted subset of neuron activation functions which has good representational capacity and enables efficient learning for complex models with large datasets. Experiments on analytical function recognition and letter image recognition are solved by MADFUNN with high levels of recognition. In order to further explore modal learning, ADFUNN is combined with an unsupervised modal learning neural network called Snap-Drift (Palmer-Brown and Lee) to create Snap-Drift ADFUNN (SADFUNN). It is used to solve an optical and pen-based handwritten digit recognition task from the UCI machine learning repository and exhibits more powerful generalisation ability than MLPs. An additional benefit of ADFUNN, as well as MADFUNN and SADFUNN, is that the learned functions can support intelligent data analysis: the learned activation function curves reveal much useful information about the data.
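The adaptive-function idea can be sketched with a neuron whose activation is a linearly interpolated lookup table adapted alongside the weights. The learning rates, table resolution and update details below are illustrative guesses at the scheme, not the thesis's exact formulation.

```python
# Sketch of an ADFUNN-style neuron: a piecewise-linear activation stored
# as a table of output values at fixed input points. Both the table
# entries (function mode) and the weights (weight mode) are nudged by a
# delta-style rule. All hyperparameters are illustrative.

class AdfunnNeuron:
    def __init__(self, n_inputs, n_points=11, lo=-2.0, hi=2.0):
        self.w = [0.1] * n_inputs
        self.lo, self.hi = lo, hi
        self.f = [0.0] * n_points          # modifiable activation table

    def _interp(self, x):
        """Piecewise-linear activation: table lookup + interpolation."""
        n = len(self.f)
        t = (x - self.lo) / (self.hi - self.lo) * (n - 1)
        t = min(max(t, 0.0), n - 1.0)
        i = min(int(t), n - 2)
        frac = t - i
        return self.f[i] * (1 - frac) + self.f[i + 1] * frac, i, frac

    def output(self, xs):
        s = sum(w * x for w, x in zip(self.w, xs))
        return self._interp(s)[0]

    def train_step(self, xs, target, lr_f=0.5, lr_w=0.05):
        s = sum(w * x for w, x in zip(self.w, xs))
        y, i, frac = self._interp(s)
        err = target - y
        # function mode: move the two surrounding table points toward target
        self.f[i] += lr_f * err * (1 - frac)
        self.f[i + 1] += lr_f * err * frac
        # weight mode: ordinary delta-rule nudge
        for k, x in enumerate(xs):
            self.w[k] += lr_w * err * x
```

Because the learned table is just a curve over the net input, it can be plotted and inspected directly, which is the "intelligent data analysis" benefit the abstract mentions.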
