61

An analysis of the use of general methods to assess the difficulty of function optimization for genetic algorithms

Quick, R. J. January 2001
No description available.
62

Application of genetic algorithms to open pit design and scheduling

Surme, Turan January 1996
No description available.
63

A general methodology for designing and developing Intelligent Database Decision Aids, with application to medicine

Jackson, Susan Amani January 1995
After more than twenty years of development effort in expert and knowledge-based applications, there are indications of a growing uncertainty in the practical potential of such systems, especially within medical domains. Many difficulties have arisen from attempting to model human experts. These are particularly evident when considering unformalised or non-standardised domains, which are common characteristics of many specialist medical fields. Moreover, little attention has been paid to the prospective users of these systems, the tasks they routinely undertake, or the environment in which they must operate. These factors have all contributed to the continued lack of success of such systems. This research reviews the difficulties encountered during the development of knowledge-based systems and conventional systems. From these studies, the importance of fully considering end-users and their needs became evident. It also became apparent that there is currently a lack of techniques available to medical investigators that would allow them to quickly, easily and thoroughly analyse the information they collect during their research studies. However, the ability to undertake such reviews is crucial if consultants are not only to extend their knowledge of their domain through exploration but also to evolve agreed operational practices. This standardisation of approach would lead to a rationalisation of the tests and procedures routinely undertaken, which in turn would save time and money and reduce patient discomfort. Consequently, this research also examines the intended user group, the typical procedures followed and the common tasks undertaken during clinical trials, as well as the environment in which the user group operates. These studies uncovered the typical facilities and assistance required by such investigators. From this information, a general methodology, characterising the processes involved in the construction of a generic Intelligent Database Decision Aid (IDDA), was developed. A suite of computer-based tools then evolved to facilitate the tailoring of such a system by a domain expert, who may be a naive computer user, for a specific investigation. These tools would thus give total control of a study to the domain expert and permit an IDDA system to be quickly and easily constructed for each new investigation. The approach was evaluated by utilising test cases drawn primarily from the medical domain. However, as the methodology was based upon commonly accepted investigative procedures, it was also reviewed in other domains to test its wider applicability. All of the IDDA systems were successfully constructed and the feedback obtained from the trials was very positive, both for the approach adopted and the various IDDA end-systems produced. Therefore, the general methodology proposed by this research has been shown to be effective and its benefits can now begin to be realised.
64

Application of artificial neural networks and the wavelet transform for pattern recognition, noise reduction and data compression

Choudhry, Muhammad Din January 2000
The theory of Artificial Neural Networks (ANNs) has not provided an exact method of weight training. Training is mostly done by iterative trial-and-error minimisation methods, which do not allow ANNs to learn incrementally over time. In this thesis, it is shown that the weights successfully produced by an error-minimisation method are nothing more than scaled versions of the respective components of the sample pattern, and that such training methods leave a chance for a neuron to be deceived. An exact method of weight construction is developed in the form of a system of linear equations. A new linear classifier ANN and a number of thresholding procedures are developed. It is proved that the Hopfield network and the Boltzmann machine do not qualify as reasonable networks. A generalised multiclass linear classifier ANN is developed, combining a newly developed multiclass linear ANN with a newly developed multiclass XOR classifier ANN. A biological neuromuscular system is interpreted as a multiclass linear classifier ANN. A new technique for pattern recognition, especially for images, is presented with a software check. The technique minimises the design topology of ANNs and enables them to classify scaled, mirror-image, and noisy versions of the sample pattern. The Continuous Wavelet Transform (CWT), the Discrete Wavelet Transform, and wavelet decomposition are connected by developing an extendable and intensifiable system of six particular Gaussian wavelets. A binary transform applicable to every real function is developed. The confusing automatic nature of the CWT is explained, along with a new style of defining wavelets. The application of the wavelet transforms to noise reduction and data compression/expansion is explained, and their performance is checked with self-developed software. A modification of the CWT is made to ease its application through ANNs. The ANNs are developed and their performance is checked against the self-developed software. A new multiresolution zoom-out wavelet transform is developed which expands data without smoothing it. A new wavelet is deduced from the smoothing average filter. Some two-dimensional wavelets for noise reduction and data compression/expansion are developed in the same style and their performance is checked with the self-developed software. An ANN for the CWT using a newly developed two-dimensional wavelet is developed and its activation is explained. Data compression by locating the peaks and troughs of the data and setting the other elements to zero is performed with a guarantee of reconstruction. The new wavelet transform is modified to reconstruct the data between peaks and troughs. Peak- and trough-detecting ANNs are developed and their performance is checked against the self-developed software. Procedures for classification are presented with a self-developed software check. The theory of ANNs requires bit-wise parallel adders and multiplexers. A parallel adder circuit is developed by combining some newly developed basic units for this purpose.
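To make the idea of exact weight construction concrete, the following is a minimal sketch of building linear-classifier weights by solving a system of linear equations rather than by iterative minimisation. The patterns, targets and bias handling are illustrative assumptions, not the scheme developed in the thesis:

```python
import numpy as np

# Four sample patterns (rows) with a bias column appended.
patterns = np.array([[0.0, 0.0],
                     [0.0, 1.0],
                     [1.0, 0.0],
                     [1.0, 1.0]])
X = np.hstack([patterns, np.ones((4, 1))])   # add bias input

# Desired pre-threshold outputs for each pattern (here: logical OR).
targets = np.array([-1.0, 1.0, 1.0, 1.0])

# Solve X w = t directly; no iterative trial-and-error training.
w, *_ = np.linalg.lstsq(X, targets, rcond=None)

# Classify by thresholding the linear response at zero.
print(np.sign(X @ w))   # -> [-1.  1.  1.  1.]
```

The point of the sketch is only that, for a linear unit, the weights satisfying the sample constraints can be obtained in one step as the solution of linear equations.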
65

Combining domain expert knowledge with neural networks for predicting corporate bankruptcies

Nasir, M. L. January 2000
No description available.
66

Incremental activity and plan recognition for human teams

Masato, Daniele January 2012
Anticipating human subjects' intentions and information needs is considered one of the ultimate goals of Artificial Intelligence. Activity and plan recognition contribute to this goal by studying how low-level observations about subjects and the environment in which they act can be linked to a high-level plan representation. This task is challenging in a dynamic and uncertain environment; the environment may change while the subjects are reasoning about it, and the effects of the subjects' interactions cannot be predicted with certainty. Humans generally struggle to enact plans and maintain situation awareness in such circumstances, even when they work in teams towards a common objective. Intelligent software assistants can support human teams by monitoring their activities and plan progress, thus relieving them of some of the cognitive burden they experience. The assistants' design needs to take into account that teams can form and disband quickly in response to environmental changes, and that the course of action may change during plan execution. It is also crucial to efficiently and incrementally process a stream of observations in order to enable online prediction of those intentions and information needs. In this thesis we propose an incremental approach for team composition and activity recognition based on probabilistic graphical models. We show that this model can successfully learn team formations and behaviours in highly dynamic domains, and that classification can be performed in polynomial time. We evaluate our model within a simulated scenario provided by an open-source computer game. In addition, we discuss an incremental approach to plan recognition that exploits the results yielded by activity recognition to assess a team's course of action. We show how this model can account for incomplete or inconsistent knowledge about recognised activities, and how it can be integrated into an existing mechanism for plan recognition.
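As an illustration of incremental recognition in polynomial time, here is a minimal sketch using a hidden Markov model, one kind of probabilistic graphical model. The states, observations and probabilities are invented for the example and are not the model developed in the thesis:

```python
import numpy as np

states = ["patrol", "assault", "retreat"]          # hypothetical team activities
T = np.array([[0.80, 0.15, 0.05],                  # P(next activity | current)
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
# P(observation | activity) for two coarse observations.
E = {"moving_slow": np.array([0.7, 0.2, 0.4]),
     "moving_fast": np.array([0.3, 0.8, 0.6])}

belief = np.full(len(states), 1.0 / len(states))   # uniform prior

def update(belief, obs):
    """One incremental filtering step: predict, weight by the
    observation likelihood, renormalise. Each step costs
    O(|states|^2), so a stream of n observations is processed
    in polynomial time overall."""
    predicted = belief @ T
    weighted = predicted * E[obs]
    return weighted / weighted.sum()

for obs in ["moving_fast", "moving_fast", "moving_slow"]:
    belief = update(belief, obs)
    print(obs, "->", dict(zip(states, belief.round(3))))
```

Because the belief is updated one observation at a time, the classifier never reprocesses the history, which is the essence of the incremental setting described above.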
67

Adaptive estimation techniques for resident space object characterization

LaPointe, Jamie 26 January 2017
This thesis investigates using adaptive estimation techniques to determine unknown model parameters, such as size and surface material reflectivity, while estimating the position, velocity, attitude, and attitude rates of a resident space object. This work focuses on the application of these methods to the space situational awareness problem.

This thesis proposes a unique method of implementing a top-level gating network in a dual-layer hierarchical mixture of experts. In addition, it proposes a decaying learning parameter for use in both the single-layer mixture of experts and the dual-layer hierarchical mixture of experts. Both are compared to multiple model adaptive estimation in estimating resident space object parameters such as size and reflectivity. The hierarchical mixture of experts consists of macromodes, each of which can estimate a different parameter in parallel. Each macromode is a single-layer mixture of experts with unscented Kalman filters used as the experts. A gating network in each macromode determines a gating weight, which is used as a hypothesis tester; the macromode gating weights then feed a top-level gating weight that determines which macromode contains the most probable model. The measurements consist of astrometric and photometric data from non-resolved observations of the target, gathered via a telescope with a charge-coupled device camera. Each filter receives the same measurement sequence. The apparent-magnitude measurement model uses the Ashikhmin-Shirley bidirectional reflectance distribution function. The measurements, process models, and the additional shape, mass, and inertia characteristics allow the algorithm to predict the state and select the most probable fit to the size and reflectance characteristics based on the statistics of the measurement residuals and innovation covariance. A simulation code is developed to test these adaptive estimation techniques, and the feasibility of the methods is demonstrated in this thesis.
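The gating step described above can be sketched as follows. The residuals and innovation covariances are made-up stand-ins for the outputs of a bank of unscented Kalman filters, and this is a generic mixture/MMAE-style update, not the thesis's exact implementation:

```python
import numpy as np

def gaussian_likelihood(residual, S):
    """N(residual; 0, S): likelihood of an m-dimensional measurement
    residual given the filter's innovation covariance S."""
    m = residual.size
    norm = np.sqrt((2 * np.pi) ** m * np.linalg.det(S))
    return np.exp(-0.5 * residual @ np.linalg.inv(S) @ residual) / norm

# Residuals and innovation covariances from three candidate models
# (e.g. three hypothesised sizes/reflectivities of the space object).
residuals = [np.array([0.1, -0.2]),
             np.array([1.5, 1.1]),
             np.array([0.4, 0.3])]
covs = [np.eye(2) * 0.5] * 3

probs = np.full(3, 1.0 / 3.0)   # prior model probabilities
likes = np.array([gaussian_likelihood(r, S) for r, S in zip(residuals, covs)])
probs = probs * likes / np.sum(probs * likes)   # Bayesian reweighting

print("model probabilities:", probs.round(3))   # model 0 fits best
```

Small residuals relative to the innovation covariance raise a model's weight; repeating the update over the measurement sequence concentrates probability on the most likely parameter hypothesis, which is the role of the gating weights in each macromode.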
68

Design and implementation of an English to Arabic machine translation (MEANA MT)

Alneami, Ahmed H. January 2001
A new system for Arabic Machine Translation (called MEANA MT) has been built. The system analyses English as the source language and converts the given sentences into Arabic. It contains three sets of grammar rules governing the parsing, transformation and generation phases. In the system, word-sense ambiguity and some pragmatic patterns were resolved. A new two-way (analysis/generation) computational lexicon dealing with the morphological analysis of the Arabic language has been created. The lexicon contains a set of rules governing the morphological inflection and derivation of Arabic nouns, verbs, the verb "to be", the verb "not to be" and pronouns. It generates Arabic word forms and their inflectional affixes, such as plural and gender morphemes as well as attached pronouns, each according to its rules, and it cannot parse or generate unacceptable word inflections. The system is capable of dealing with vowelised Arabic words by parsing the vowel marks attached to the letters. Semantic value pairs were developed to capture word sense and other morphological features, e.g. gender, number and tense. The system can parse and generate some pragmatic sentences and phrases such as proper names, titles, acknowledgements, dates, telephone numbers and addresses. A Lexical Functional Grammar (LFG) formalism is used to combine the syntactic, morphological and semantic features. The grammar rules of the system were implemented and compiled in Common Lisp, based on Tomita's Generalised LR parsing algorithm augmented by pseudo- and full-unification packages. After parsing, the constituents of the English sentence are represented as Feature Structures (F-Structures). These take part in the transfer and generation process, which uses transformation grammar rules to change the English F-Structure into an Arabic F-Structure whose features are suitable for the Arabic generation grammar to build the required Arabic sentence. The system has been tested on three domains (sentences and phrases): the first is a selected children's story, the second consists of semantic sentences, and the third of pragmatic sentences. This research could be considered a complete solution for a personal MT system for small messages and sublanguage domains.
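The transfer step from an English F-Structure to an Arabic one can be illustrated with a minimal sketch. The feature names, the tiny bilingual lexicon and the single rule below are hypothetical and far simpler than the grammar developed in the thesis:

```python
# An English F-Structure as produced by parsing, modelled as a nested dict.
english_fs = {
    "pred": "read",
    "tense": "past",
    "subj": {"pred": "girl", "num": "sg", "gender": "fem"},
    "obj":  {"pred": "book", "num": "sg"},
}

# Hypothetical bilingual lexicon from English predicates to Arabic roots.
lexicon = {"read": "qara'a", "girl": "bint", "book": "kitaab"}

def transfer(fs):
    """Swap predicates via the lexicon, copy tense and number, and
    propagate the subject's gender to the verb so the Arabic generation
    grammar can inflect for agreement."""
    return {
        "pred": lexicon[fs["pred"]],
        "tense": fs["tense"],
        "agr": {"num": fs["subj"]["num"],
                "gender": fs["subj"].get("gender", "masc")},
        "subj": dict(fs["subj"], pred=lexicon[fs["subj"]["pred"]]),
        "obj":  dict(fs["obj"], pred=lexicon[fs["obj"]["pred"]]),
    }

print(transfer(english_fs))
```

The resulting Arabic F-Structure carries the agreement features the generation grammar needs, which mirrors the role of the transformation phase between parsing and generation described above.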
69

Reverse engineering an active eye

Schmidt-Cornelius, Hanson January 2002
No description available.
70

An ontology model supporting multiple ontologies for knowledge sharing

Tamma, Valentina A. M. January 2001
No description available.
