491. Neuro-fuzzy based intelligent approaches to nonlinear system identification and forecasting. Alshejari, Abeer. January 2018.
Nearly three decades ago, nonlinear system identification consisted of several ad-hoc approaches restricted to a very limited class of systems. With the advent of soft computing methodologies such as neural networks and fuzzy logic, combined with optimization techniques, a much wider class of systems can now be handled. Complex systems may be of diverse characteristics and nature: linear or nonlinear, continuous or discrete, time varying or time invariant, static or dynamic, short term or long term, central or distributed, predictable or unpredictable, ill or well defined. Neurofuzzy hybrid modelling approaches have been developed as an ideal technique for utilising both linguistic values and numerical data. This thesis focuses on the development of advanced neurofuzzy modelling architectures and their application to real case studies. Three requirements have been identified as desirable characteristics for such a design: a model needs to have a minimum number of rules; a model needs to be generic, acting either as a Multi-Input-Single-Output (MISO) or a Multi-Input-Multi-Output (MIMO) identification model; and a model needs to have a versatile nonlinear membership function. Initially, a MIMO Adaptive Fuzzy Logic System (AFLS) model, which incorporates a prototype defuzzification scheme and a fuzzification layer that is more efficient than that of Takagi–Sugeno–Kang (TSK) based systems, was developed for the detection of meat spoilage using Fourier transform infrared (FTIR) spectroscopy. The identification strategy involved not only the classification of beef fillet samples into their respective quality class (i.e. fresh, semi-fresh and spoiled), but also the simultaneous prediction of their associated microbiological population directly from FTIR spectra.
In the case of AFLS, the number of memberships for each input variable is directly associated with the number of rules; hence, the “curse of dimensionality” problem was significantly reduced. Results confirmed the advantage of the proposed scheme over the Adaptive Neurofuzzy Inference System (ANFIS), Multilayer Perceptron (MLP) and Partial Least Squares (PLS) techniques applied to the same case study. In the case of MISO systems, the TSK-based structure has been utilized in many neurofuzzy systems, such as ANFIS. At the next stage of the research, an Adaptive Fuzzy Inference Neural Network (AFINN) was developed for monitoring the spoilage of minced beef using multispectral imaging information. This model, which follows the TSK structure, incorporates a clustering pre-processing stage for the definition of fuzzy rules, while its final fuzzy rule base is determined by competitive learning. In this case study, the AFINN model was also able to predict, for the first time in the literature, the beef’s temperature directly from imaging information. Results again proved the superiority of the adopted model. Extending this line of research and adopting specific design concepts from the previous case studies, the Asymmetric Gaussian Fuzzy Inference Neural Network (AGFINN) architecture was developed according to the design principles above. A clustering preprocessing scheme is applied to minimise the number of fuzzy rules. AGFINN incorporates features from the AFLS concept, by having the same number of rules and fuzzy memberships. In contrast to the widely used standard symmetric Gaussian membership functions, AGFINN utilizes an asymmetric function as its input linguistic nodes. Since the asymmetric Gaussian membership function is more variable and flexible than the traditional one, it can partition the input space more effectively. AGFINN can be built either as a MISO or as a MIMO system.
In the MISO case, a TSK defuzzification scheme has been implemented, and two different learning algorithms have been developed. AGFINN has been tested on real datasets related to electricity price forecasting for the ISO New England Power Distribution System. Its performance was compared against a number of alternative models, including ANFIS, AFLS, MLP and Wavelet Neural Network (WNN), and proved to be superior. The concept of asymmetric functions proved to be a valid hypothesis, and it could certainly find application in other architectures, such as Fuzzy Wavelet Neural Network models, through the design of a suitably flexible wavelet membership function. AGFINN’s MIMO characteristics also make the proposed architecture suitable for a larger range of applications and problems.
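The asymmetric Gaussian membership idea described above can be sketched as a function with independent left and right spreads about a centre. This is a minimal illustration, not AGFINN's exact parameterisation; the names `sigma_left` and `sigma_right` are assumptions for the sketch.

```python
import numpy as np

def asym_gaussian(x, c, sigma_left, sigma_right):
    """Asymmetric Gaussian membership: a separate spread on each side of centre c.

    With sigma_left == sigma_right this reduces to the standard symmetric
    Gaussian membership function; unequal spreads give the extra flexibility
    to partition a skewed input space more effectively.
    """
    x = np.asarray(x, dtype=float)
    # Choose the spread according to which side of the centre x falls on
    sigma = np.where(x < c, sigma_left, sigma_right)
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)
```

Membership is 1 at the centre and decays at a different rate on each side, which is the source of the added flexibility claimed for AGFINN's linguistic nodes.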
492. Musical instrument modelling using digital waveguides. Aird, Marc-Laurent. January 2002.
No description available.
493. Use of algebraically independent numbers in computation. Elsonbaty, Ahmed. January 2004.
No description available.
494. Building abstractable story components with institutions and tropes. Thompson, Matthew. January 2018.
Though much research has gone into tackling the problem of creating interactive narratives, no software has yet emerged that story authors can use to create these new types of narratives without having to learn a programming language or narrative formalism. Widely used formalisms in interactive narrative research, such as Propp's Morphology of the Folktale and Lehnert's Plot Units, allow users to compose stories out of pre-defined components, but do not allow them to define their own story components, or to create abstractions by embedding components inside other components. Current tools for interactive narrative authoring, such as those that use Young's Mimesis architecture or Facade's drama manager approach, direct intelligent agents playing the roles of characters through the use of planners. Though these systems can handle player interactions and adapt the story around them, they are inaccessible to story authors who lack technical or programming ability. This thesis proposes the use of story tropes to informally describe story components. We introduce TropICAL, a controlled natural language system for the creation of tropes, which allows non-programmer story authors to describe their story components informally. Inspired by Propp's Morphology, this language allows for the creation of new story components and of abstractions that embed existing components inside new ones. Our TropICAL language compiles to the input language of an Answer Set solver, which represents the story components in terms of a formal normative framework and hence allows for the automated verification of story paths. These paths can be visualised as branching tree diagrams in the StoryBuilder tool, so that authors can see the effect of adding different tropes to their stories, aiding the process of authoring interactive narratives.
We evaluate the suitability of these tools for interactive story construction through a thematic analysis of story authors’ completion of story-authoring tasks using TropICAL and StoryBuilder. The participants complete tasks in which they have to describe stories with different degrees of complexity, finally requiring them to reuse existing tropes in their own trope abstractions. The thematic analysis identifies and examines the themes and patterns that emerge from the story authors’ use of the tool, revealing that non-programmer story authors are able to create their own stories using tropes without having to learn a strict narrative formalism.
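The embedding of story components inside other components can be illustrated with a minimal sketch. The trope names and event strings below are invented for illustration, and the real system compiles tropes to Answer Set Programming rather than flattening them in Python.

```python
from dataclasses import dataclass, field

@dataclass
class Trope:
    """A story component: a named sequence of events, possibly embedding sub-tropes."""
    name: str
    events: list = field(default_factory=list)  # each item is a string or a nested Trope

    def flatten(self):
        """Expand embedded tropes depth-first into one linear sequence of events."""
        out = []
        for e in self.events:
            if isinstance(e, Trope):
                out.extend(e.flatten())
            else:
                out.append(e)
        return out

# A hypothetical quest trope reusing a previously defined "Villainy" sub-trope
villainy = Trope("Villainy", ["villain harms victim"])
quest = Trope("HerosQuest", [villainy, "hero departs", "hero returns"])
```

The point of the sketch is the abstraction step: `villainy` is authored once and embedded by name inside `quest`, mirroring how existing tropes are reused inside new trope definitions.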
495. Matrix iterative methods for elliptic differential equations. Nichols, Nancy. January 1966.
No description available.
496. Semantic tagging of medical narratives using SNOMED CT. Hina, Saman. January 2013.
In the medical domain, semantic analysis is critical for several research questions, of interest not only to healthcare researchers but also to NLP researchers. Yet most of the data exists in the form of medical narratives. Semantic analysis of medical narratives is required to identify semantic information and classify it into semantic categories. This semantic analysis is useful for domain users as well as non-domain users for further investigations. The main objective of this research is to develop a generic semantic tagger for medical narratives using a tag set derived from SNOMED CT®, an international healthcare terminology. Towards this objective, the key hypothesis is that it is possible to identify semantic information (paraphrases of concepts, abbreviations of concepts and complex multiword concepts) in medical narratives and classify it with globally known semantic categories through analysis of an authentic corpus of medical narratives and the language of SNOMED CT®. This research began with an investigation of the use of SNOMED CT® for the identification of concepts in medical narratives, which resulted in the derivation of a tag set. Later in this research, this tag set was used to develop three gold standard datasets. One of these datasets required anonymization because it contained four protected health information (PHI) categories; therefore, a separate module was developed for the anonymization of these PHI categories. After the anonymization, a generic annotation scheme was developed and evaluated for the annotation of the three gold standard datasets. One of the gold standard datasets was used to develop generic rule-patterns for the semantic tagger, while the other two were used for its evaluation. Besides evaluation using the gold standard datasets, the semantic tagger was compared with three systems based on different methods, and shown to outperform them.
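A much-simplified illustration of rule-pattern semantic tagging follows. The patterns and category labels are invented for this sketch; the thesis derives its actual tag set from SNOMED CT® and an annotated gold-standard corpus.

```python
import re

# Hypothetical rule patterns mapping surface forms (including an abbreviation)
# to SNOMED CT-style semantic categories
RULES = [
    (re.compile(r"\b(MI|myocardial infarction)\b", re.I), "Disorder"),
    (re.compile(r"\b(aspirin|paracetamol)\b", re.I), "Pharmaceutical/biologic product"),
    (re.compile(r"\b(chest|abdomen)\b", re.I), "Body structure"),
]

def tag(text):
    """Return (matched_span, category) pairs for every rule match in a narrative."""
    hits = []
    for pattern, category in RULES:
        for m in pattern.finditer(text):
            hits.append((m.group(0), category))
    # Order hits by their position in the narrative
    return sorted(hits, key=lambda h: text.find(h[0]))
```

A real tagger of this kind also has to handle paraphrases and complex multiword concepts, which is why the thesis builds its rule-patterns from corpus analysis rather than a fixed lexicon.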
497. Backward fuzzy rule interpolation. Jin, Shangzhu. January 2015.
No description available.
498. Detection & modelling of the distribution of linear structures in mammographic images. Hadley, Edward Michael. January 2013.
Mammographic risk assessment is concerned with estimating the probability of a woman developing breast cancer, with the aim of improving the likelihood of early detection of breast cancers. The leading factor in determining risk is breast density, which has been shown to be the most accurate measure of mammographic risk; however, it has more recently been suggested that the density (and possibly the distribution) of linear structures such as ducts and blood vessels within the breast is also related to mammographic risk. The purpose of this project is to investigate those relationships and the possibility of including this information in an automated risk assessment system. A methodology is developed for detecting the linear structures in 2D mammograms. This information is used to calculate the density of linear structures, which is used in a risk classifier. Results show that a classifier based on the density of linear structures outperforms a classifier based on breast density (64% correct BIRADS classification using linear density compared with 53% using breast density), and that a classifier combining both factors outperforms both individual classifiers (74% correct BIRADS classification), suggesting that linear density is related to risk and provides useful information for risk assessment. The investigation into the distribution of linear structures focuses on 3D tomosynthesis images. The linear structure detection methodology is developed for use in 3D, and a graph representation of the linear structures is extracted. Information from this graph relating to the distribution of linear structures is used for classification. The results of this classification (79% correct BIRADS classification) suggest that the distribution of linear structures is also related to risk and that this information provides additional risk-related information useful for risk assessment.
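Combining breast density and linear-structure density into one classifier can be sketched minimally as follows. This is a nearest-class-mean stand-in on invented feature values; the thesis's actual classifier and BIRADS feature data are not reproduced here.

```python
import numpy as np

def nearest_mean_classify(train_X, train_y, x):
    """Assign x (e.g. [breast_density, linear_density]) to the nearest class mean.

    A deliberately simple stand-in: each BIRADS class is summarised by the mean
    of its training feature vectors, and a new case takes the label of the
    closest mean in Euclidean distance.
    """
    classes = sorted(set(train_y))
    means = {c: np.mean([f for f, y in zip(train_X, train_y) if y == c], axis=0)
             for c in classes}
    return min(classes, key=lambda c: np.linalg.norm(np.asarray(x, float) - means[c]))

# Hypothetical two-feature training data for two BIRADS classes
train_X = [[0.10, 0.10], [0.20, 0.15], [0.80, 0.90], [0.70, 0.85]]
train_y = [1, 1, 4, 4]
```

The abstract's result (74% combined vs 64% and 53% for the individual features) corresponds to feeding both features jointly, as in the two-element vectors above, rather than either feature alone.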
499. Intrinsically motivated developmental learning of communication in robotic agents. Sheldon, Michael. January 2013.
This thesis is concerned with the emergence of communication in artificial agents as an integrated part of a more general developmental progression. We demonstrate how early gestural communication can emerge out of sensorimotor exploration before moving on to linguistic communication. We then show how communicative abilities can feed back into more general motor learning. We take a cumulative developmental approach, with two different robotic platforms undergoing a series of psychologically inspired developmental stages. These begin with the robot learning about its own body's capabilities and limitations, then on to object interaction, the learning of proto-imperative pointing and early language learning. Finally this culminates in more complex object interaction in the form of learning to build stacks of objects, with the linguistic capabilities developed earlier being used to help guide the robot's learning. This developmental progression is supported by a schema learning mechanism which constructs a hierarchy of competencies capable of dealing with problems of gradually increasing complexity. To allow for the learning of general concepts we introduce an algorithm for the generalisation of schemas from a small number of examples through parameterisation. Throughout the robot's development its actions are driven by an intrinsic motivation system designed to mimic the play-like behaviour seen in infants. We suggest a possible approach to intrinsic motivation in a schema learning system and demonstrate how this can lead to the rapid unsupervised learning of both specific experiences and general concepts.
500. 3D laser scanner development and analysis. Liu, Junjie. January 2013.
This PhD project is a collaboration between Smart Light Devices, Ltd. in Aberdeen and Aberystwyth University on the development of 3D laser scanners, with the ultimate aim of inspecting underwater oil and gas pipes and structures. At the end of this project, a workable and fully functional 3D laser scanner is to be developed. This PhD project puts a particular emphasis on the engineering and implementation of the scanner according to real applications' requirements. Our 3D laser scanner is based on the principle of triangulation and its high accuracy over short-range scanning. Accurate 3D data can be obtained from the triangle formed by the scanner's camera lens, the laser source, and the object being scanned. Once the distance between the camera lens and the laser source (the stereo baseline) is known and the laser projection angle has been measured by the goniometer, the X, Y, Z coordinates of the object surface can be obtained through trigonometry. The development of this 3D laser scanner involves many issues and tasks, including image noise removal, laser peak detection, corner detection, camera calibration and 3D reconstruction. These issues and tasks have been addressed, analysed and improved upon during the PhD period. Firstly, Sparse Code Shrinkage (SCS) image de-noising is implemented, since it is one of the most suitable de-noising methods for our laser images with a dark background and a white laser stripe. Secondly, since there are already plenty of methods for corner and laser peak detection, it is necessary to compare and evaluate which is the most suitable for our 3D laser scanner; comparative studies are therefore carried out and their results are presented in this thesis. Thirdly, our scanner is based on laser triangulation; in this case, the laser projection angle α and the baseline distance D from the centre of the camera lens to the laser source play a crucial role in 3D reconstruction.
However, these two parameters are hard to measure directly, and there are no particular tools designed for this purpose. Thus, a new approach is proposed in this thesis to estimate them, combining camera calibration results with a precise linear stage. Fourthly, it is very expensive to customize an accurate positional pattern for camera calibration; due to budget limits, this pattern is printed by a printer or even painted on paper or a whiteboard, which is inaccurate and contains errors in absolute distance and location. An iterative camera calibration method is therefore proposed; it can compensate for up to 10% error, and the calibration parameters remain stable. Finally, in underwater applications, the light travel angle changes from water to air, which makes the normal calibration method less accurate. Hence, a new approach is proposed to compensate for the difference between the estimated and real distances in 3D reconstruction with normal calibration parameters. Experimental results show the proposed methods reduce the 3D distance error down to ±0.2 mm underwater. Overall, the developed scanning systems have been successfully applied in several real scanning and 3D modelling projects, such as a mooring chain, an underwater pipeline surface and a reducer. Positive feedback has been received from these projects; the scanning results satisfy the resolution and accuracy requirements.
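The triangulation relationship described above can be made concrete with a small sketch. The angle conventions are assumptions for this illustration: both the camera's line of sight and the laser projection angle are measured from the baseline joining the camera lens centre and the laser source.

```python
import math

def triangulate(baseline, cam_angle_deg, laser_angle_deg):
    """Locate a surface point from the triangle formed by camera, laser and target.

    baseline        : distance D between camera lens centre and laser source
    cam_angle_deg   : angle at the camera between the baseline and the line of sight
    laser_angle_deg : laser projection angle at the source, measured from the baseline

    Returns (x, z): position along the baseline and perpendicular depth,
    with the camera at the origin.
    """
    a = math.radians(cam_angle_deg)
    b = math.radians(laser_angle_deg)
    # Law of sines: the angle at the target point is pi - a - b,
    # so the camera-to-target range is D * sin(b) / sin(a + b)
    r = baseline * math.sin(b) / math.sin(a + b)
    return r * math.cos(a), r * math.sin(a)
```

With a 2 m baseline and both angles at 45°, the target sits 1 m along the baseline and 1 m deep, which is easy to verify by symmetry; in practice α comes from the goniometer and D from the calibration procedure described above.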