371.
Plan recognition and discourse analysis: an integrated approach for understanding dialogues / Litman, Diane Judith, January 1985
Thesis (Ph. D.)--University of Rochester, 1985. / Bibliography: p. 179-183.

372.
Intelligent spatial decision support systems / Sandhu, Raghbir Singh, January 1998
This thesis investigates the conceptual and methodological issues in the development of Intelligent Spatial Decision Support Systems (ISDSS). These are spatial decision support systems (SDSS) integrating intelligent systems techniques (Genetic Algorithms, Neural Networks, Expert Systems, Fuzzy Logic and Nonlinear methods) with traditional modelling and statistical methods for the analysis of spatial problems. The principal aim of this work is to verify the feasibility of heterogeneous systems for spatial decision support that combine traditional numerical techniques with intelligent techniques, providing performance and functionality superior to those achieved through the use of traditional methods alone. This thesis is composed of four distinct sections: (i) a taxonomy covering the employment of intelligent systems techniques in specific applications of geographical information systems and SDSS; (ii) the development of a prototype ISDSS; (iii) application of the prototype ISDSS to modelling the spatiotemporal dynamics of high technology industry in the South-East of England; and (iv) the development of ISDSS architectures utilising interapplication communication techniques. Existing approaches for implementing modelling tools within SDSS and GIS generally fall into one of two schemes - loose coupling or tight coupling - both of which involve a tradeoff between generality and speed of data interchange. In addition, these schemes make little use of distributed processing resources. A prototype ISDSS was developed in collaboration with KPMG Peat Marwick's High Technology Practice as a general-purpose spatiotemporal analysis tool with particular regard to modelling high technology industry. The GeoAnalyser system furnishes the user with animation and time plotting tools for observing spatiotemporal dynamics; such tools are typically not found in existing SDSS or GIS. Furthermore, GeoAnalyser employs the client/server model of distributed computing to link the front-end client application with the back-end modelling component contained within the server application. GeoAnalyser demonstrates a hybrid approach to spatial problem solving - the application utilises a nonlinear model for the temporal evolution of spatial variables and a genetic algorithm for calibrating the model in order to establish a good fit to the dataset under investigation. Several novel architectures are proposed for ISDSS based on existing distributed systems technologies. These architectures are assessed in terms of user interface, data and functional integration. Implementation issues are also discussed. The research contributions of this work are four-fold: (i) it lays the foundation for ISDSS as a distinct type of system for spatial decision support by examining the user interface, performance and methodological requirements of such systems; (ii) it explores a new approach for linking modelling techniques and SDSS; (iii) it investigates the possibility of modelling high technology industry; and (iv) it details novel architectures for ISDSS based on distributed systems.
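
The abstract names the calibration technique without giving its implementation. The following is a minimal sketch of the general pattern only: a genetic algorithm fitting the parameters of a hypothetical nonlinear growth model to observed data. The logistic model form, the fitness function and the GA settings are all illustrative assumptions, not GeoAnalyser's actual code.

```python
import math
import random

def model(params, t):
    """Hypothetical logistic-style growth curve: y(t) = K / (1 + exp(-r * (t - t0)))."""
    K, r, t0 = params
    return K / (1 + math.exp(-r * (t - t0)))

def fitness(params, observations):
    """Negative sum of squared errors against observed (t, y) pairs."""
    return -sum((model(params, t) - y) ** 2 for t, y in observations)

def calibrate(observations, pop_size=50, generations=200, sigma=0.1):
    """GA calibration: truncation selection, uniform crossover, Gaussian mutation."""
    pop = [[random.uniform(0, 100), random.uniform(0, 1), random.uniform(0, 20)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, observations), reverse=True)
        parents = pop[: pop_size // 2]           # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(genes) for genes in zip(a, b)]   # uniform crossover
            children.append([g + random.gauss(0, sigma) for g in child])  # mutation
        pop = parents + children
    return max(pop, key=lambda p: fitness(p, observations))

# e.g. calibrate against a synthetic series of (year, firm count) pairs:
data = [(t, 80 / (1 + math.exp(-0.4 * (t - 10)))) for t in range(20)]
print(calibrate(data))  # typically recovers parameters near K=80, r=0.4, t0=10
```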

373.
Perception modelling using type-2 fuzzy sets / John, Robert, January 2000
Type-1 fuzzy logic has, for over thirty years, provided an approach for modelling uncertainty and imprecision. This methodology has a history of successful applications in a number of areas - particularly control. However, type-1 fuzzy systems are essentially 'crisp' in nature. This is not only paradoxical but also raises concerns for knowledge representation and inferencing. In particular, type-1 fuzzy logic is flawed when representing perceptions such as colour, beauty and comfort, since these perceptions do not have a measurable domain. This fundamental paradox is tackled in this research by employing a type-2 fuzzy paradigm. The investigation of the type-2 approach concludes that the uncertainty or imprecision that exists in most real problems can be more effectively modelled by a type-2 approach. The research reported in this thesis explores the properties of type-2 fuzzy sets as well as showing how useful they can be for knowledge representation and inferencing. It is shown that type-2 fuzzy sets have an important role to play in modelling perceptions. Results are given of using type-2 fuzzy sets to represent perceptions of a medical expert for shin image analysis, indicating that the type-2 fuzzy paradigm is particularly helpful for perception representation. A methodology has been developed that allows linguistic inputs to an adaptive system that implements a type-2 fuzzy system (the Adaptive Fuzzy Perception Learner, AFPL). In this thesis, the rationale and full mathematical detail of the AFPL is presented. The approach has been applied successfully to the so-called linguistic AND (analogous to the Boolean AND) as an aid to illustrating the methodology. Results are presented of applying the method to a real problem of classifying the acceptability of a car based on perceptions that describe certain features of the car. The AFPL is applied to this large, complex set of data, where the inputs to the network are linguistic. A detailed evaluation of the AFPL is given, with recommendations for its effective use. The results indicate that we now truly have an approach for learning the perceptions and relations in a type-2 fuzzy system.
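
For readers unfamiliar with the type-2 paradigm, a minimal sketch may help. Interval type-2 fuzzy sets, a common simplification of the general type-2 sets studied in the thesis, assign each element an interval of membership grades rather than a single crisp number; the meet operation under the min t-norm then plays the role of a fuzzy AND. The class and example below are illustrative assumptions, not the AFPL implementation.

```python
from dataclasses import dataclass

@dataclass
class IT2Grade:
    """Interval type-2 membership grade: an interval [lower, upper] of
    primary memberships rather than a single crisp number, capturing
    uncertainty that a type-1 grade cannot represent."""
    lower: float
    upper: float

def meet(a: IT2Grade, b: IT2Grade) -> IT2Grade:
    """Meet under the min t-norm: the interval type-2 analogue of fuzzy
    AND, applied bound by bound."""
    return IT2Grade(min(a.lower, b.lower), min(a.upper, b.upper))

# Two perception grades, e.g. "comfortable" AND "spacious" for a car:
print(meet(IT2Grade(0.4, 0.7), IT2Grade(0.5, 0.9)))
# -> IT2Grade(lower=0.4, upper=0.7)
```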

374.
Intelligent process planning for rapid prototyping / Gault, Rosemary S., January 2000
No description available.

375.
Modelling continuous sequential behaviour to enhance training and generalization in neural networks / Chen, Lihui, January 1993
This thesis takes a conceptual and empirical approach to embodying the modelling of continuous sequential behaviour in neural learning. The aim is to enhance the feasibility of training and the capacity for generalisation. Examining the sequential aspects of the passing of time in a neural network suggests that the usual goal weight condition may be altered to model these aspects. The notion of a goal weight path is introduced and a path-based backpropagation (PBP) framework is proposed. Two models using PBP have been investigated in the thesis. One is called Feedforward Continuous BackPropagation (FCBP), which is a generalization of conventional backpropagation; the other is called Recurrent Continuous BackPropagation (RCBP), which provides a neural dynamic system for I/O associations. Both models make use of the continuity underlying analogue-binary associations and analogue-analogue associations within a fixed neural network topology. A graphical simulator, cbptool, for Sun workstations has been designed and implemented to support the research. The capabilities of FCBP and RCBP have been explored through experiments, and the results confirm the modelling theory. The fundamental alteration made to conventional backpropagation brings substantial improvements in training and generalization, enhancing the power of backpropagation.
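
The path-based scheme itself cannot be reconstructed from the abstract alone, but the conventional backpropagation loop that FCBP generalises can be sketched. The toy task, network size and learning rate below are assumptions; the comment marks where a path-based scheme would depart from the fixed-goal baseline.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (100, 2))
y = (X[:, :1] * X[:, 1:] > 0).astype(float)  # toy analogue-binary association

W1 = rng.normal(0.0, 0.5, (2, 8))            # one hidden layer of 8 units
W2 = rng.normal(0.0, 0.5, (8, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    h = sigmoid(X @ W1)                      # forward pass
    out = sigmoid(h @ W2)
    d_out = (out - y) * out * (1 - out)      # backpropagated error (MSE loss)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Conventional BP steps toward a single fixed goal: the weights
    # reached at convergence. A path-based scheme (PBP) would instead
    # steer the weights along a goal weight path over time.
    W1 -= 1.0 * (X.T @ d_h) / len(X)
    W2 -= 1.0 * (h.T @ d_out) / len(X)
```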

376.
Explorations into the behaviour-oriented nature of intelligence : fuzzy behavioural maps / Gonzalez de Miguel, Ana Maria, January 2003
This thesis explores the behaviour-oriented nature of intelligence and presents the definition and use of Fuzzy Behavioural Maps (FBMs) as a flexible development framework for providing complex autonomous agent behaviour. It provides a proof of concept for simple FBMs, including some experimental results in Mobile Robotics and Fuzzy Logic Control. This practical work shows the design of a collision avoidance behaviour (of a mobile robot) using a simple FBM and the implementation of this behaviour using a Fuzzy Logic Controller (FLC). The FBM incorporates three causally related sensorimotor activities (moving around, perceiving obstacles and varying speed). This Collision Avoidance FBM is designed (in more detail) using fuzzy relations (between levels of perception, motion and variation of speed) in the form of fuzzy control rules. The FLC stores and manipulates these fuzzy control (FBM) rules using fuzzy inference mechanisms and other related implementation parameters (fuzzy sets and fuzzy logic operators). The resulting FBM-FLC architecture controls the behaviour patterns of the agent. Its fuzzy inference mechanisms determine the level of activation of each FBM node while driving appropriate control actions over the creature's motors. The thesis validates (demonstrates the general fitness of) this control architecture through various pilot tests (computer simulations). This practical work also serves to emphasise some benefits in the use of FLC techniques to implement FBMs (e.g. flexibility of the fuzzy aggregation methods and fuzzy granularity). More generally, the thesis presents and validates an FBM Framework for developing more complex autonomous agent behaviour. This framework represents a top-down approach to deriving behaviour-based (BB) models using generic FBMs, levels of abstraction and refinement stages. Its major scope is to capture and model behavioural dynamics at different levels of abstraction (through different levels of refinement). Most obviously, the framework maps some required behaviours into connection structures of behaviour-producing modules that are causally related. But the main idea is to follow as many refinement stages as required to complete the development process. These refinement stages help to identify lower-level design parameters (i.e. control actions) rather than linguistic variables, fuzzy sets or fuzzy inference mechanisms. They facilitate the definition of the behaviours selected at the first levels of abstraction. Further, the thesis proposes taking the FBM Framework into the implementation levels that are required to build a BB control architecture and provides an application case study. This describes how to develop a complex, non-hierarchical, multi-agent behaviour system using the refinement capabilities of the FBM Framework. Finally, the thesis introduces some more general ideas about the use of this framework to cope with some current complexity issues around the behaviour-oriented nature of intelligence.
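
A minimal sketch of the kind of fuzzy control rules described for the Collision Avoidance FBM follows: perceived obstacle distance drives a variation of speed through two Mamdani-style rules. The membership shapes, rule consequents and defuzzification method are illustrative assumptions, not the thesis's FBM-FLC.

```python
def falling(x, a, b):
    """Membership 1 at or below a, falling linearly to 0 at b."""
    return max(0.0, min(1.0, (b - x) / (b - a)))

def rising(x, a, b):
    """Membership 0 at or below a, rising linearly to 1 at b."""
    return max(0.0, min(1.0, (x - a) / (b - a)))

def avoid_speed(obstacle_dist):
    """Two fuzzy rules, defuzzified by a weighted average of the
    speeds each rule recommends."""
    near = falling(obstacle_dist, 0.3, 1.0)  # obstacle perceived as "near"
    far = rising(obstacle_dist, 0.5, 2.0)    # obstacle perceived as "far"
    # Rule 1: IF obstacle is near THEN speed is slow (0.1 m/s)
    # Rule 2: IF obstacle is far  THEN speed is fast (1.0 m/s)
    activation = near + far
    return (near * 0.1 + far * 1.0) / activation if activation else 0.0

print(avoid_speed(0.2))  # deep in "near" -> 0.1
print(avoid_speed(1.9))  # fully "far"    -> 1.0
```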

377.
Lower Bound Resource Requirements for Machine Intelligence / Gilmanov, Timur, 06 December 2018
Recent advancements in technology and the field of artificial intelligence provide a platform for new applications in a wide range of areas, including healthcare, engineering, vision, and natural language processing, that would have been considered unattainable one or two decades ago. With an expected compound annual growth rate of 50% during the years 2017–2021, the field of global artificial intelligence is set to observe increases in computational complexity and in the amount of sensor data processed.

In spite of the advancements in the field, truly intelligent machine behavior operating in real time is yet an unachieved milestone. First, in order to quantify such behavior, a definition of machine intelligence would be required, which has not been agreed upon by the community at large. Second, delivering full machine intelligence, as defined in this work, is beyond the scope of today's cutting-edge high-performance computing machines.

One important aspect of machine intelligent systems is their resource requirements and the limitations that today's and future machines could impose on such systems. The goal of this research effort is to provide an estimate of the lower bound resource requirements for machine intelligence. A working definition of machine intelligence for the purposes of this research is provided, along with definitions of an abstract architecture, workflow, and performance model. Combined, these tools allow an estimate of resource requirements for problems of machine intelligence, and provide an estimate of such requirements in the future.

378.
Real-Time Individual Thermal Preferences Prediction Using Visual Sensors / Cosma, Andrei Claudiu, 19 December 2018
The thermal comfort of a building's occupants is an important aspect of building design. Providing an increased level of thermal comfort is critical given that humans spend the majority of the day indoors, and that their well-being, productivity, and comfort depend on the quality of these environments. In today's world, Heating, Ventilation, and Air Conditioning (HVAC) systems deliver heated or cooled air based on a fixed operating point or target temperature; individuals or building managers are able to adjust this operating point through human communication of dissatisfaction. Currently, there is a lack of automatic detection of an individual's thermal preferences in real time, and of the integration of these measurements into an HVAC system controller.

To address this, a non-invasive approach to automatically predict personal thermal comfort and the mean time to discomfort in real time is proposed and studied in this thesis. The goal of this research is to explore the consequences of human body thermoregulation on skin temperature and tone as a means to predict thermal comfort. For this reason, the temperature information extracted from multiple local body parts, and the skin tone information extracted from the face, will be investigated as a means to model individual thermal preferences.

In a first study, we proposed a real-time system for individual thermal preferences prediction in transient conditions using temperature values from multiple local body parts. The proposed solution consists of a novel visual sensing platform, which we called RGB-DT, that fused information from three sensors: a color camera, a depth sensor, and a thermographic camera. This platform was used to extract skin and clothing temperature from multiple local body parts in real time. Using this method, personal thermal comfort was predicted with more than 80% accuracy, while mean time to warm discomfort was predicted with more than 85% accuracy.

In a second study, we introduced a new visual sensing platform and method that uses a single thermal image of the occupant to predict personal thermal comfort. We focused on close-up images of the occupant's face to extract fine-grained details of the skin temperature. We extracted manually selected features, as well as a set of automated features. Results showed that the automated features outperformed the manual features in all the tests that were run, and that these features predicted personal thermal comfort with more than 76% accuracy.

The last proposed study analyzed the thermoregulation activity at the face level to predict skin temperature in the context of thermal comfort assessment. This solution uses a single color camera to model thermoregulation based on the side effects of vasodilatation and vasoconstriction. To achieve this, new methods to isolate the skin tone response to an individual's thermal regulation were explored. The relation between the extracted skin tone measurement and the skin temperature was analyzed using a regression model.

Our experiments showed that a thermal model generated using non-invasive and contactless visual sensors could be used to accurately predict individual thermal preferences in real time. Therefore, instantaneous feedback with respect to the occupants' thermal comfort can be provided to the HVAC system controller to adjust the room temperature.
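
As a sketch of the overall pattern (predicting a thermal preference label from temperatures extracted at local body parts), a small supervised classifier can stand in for the full pipeline. The feature set, the toy data and the choice of a random forest are assumptions for illustration, not the RGB-DT system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-frame features (deg C), as a visual pipeline might
# extract them; columns: [face_temp, hand_temp, clothing_temp, setpoint]
X = np.array([
    [34.1, 30.2, 28.0, 21.0],
    [35.8, 33.5, 30.1, 26.0],
    [33.2, 28.9, 27.2, 19.0],
    [35.1, 32.8, 29.5, 25.0],
])
# Labels: -1 = prefers warmer, 0 = comfortable, +1 = prefers cooler
y = np.array([-1, 1, -1, 0])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# The predicted preference would be fed back to the HVAC controller:
print(clf.predict([[34.5, 31.0, 28.5, 22.0]]))
```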

379.
The Impact of Cost on Feature Selection for Classifiers / McCrae, Richard, 20 December 2018
Supervised machine learning models are increasingly being used for medical diagnosis. The diagnostic problem is formulated as a binary classification task in which trained classifiers make predictions based on a set of input features. In diagnosis, these features are typically procedures or tests with associated costs. The cost of applying a trained classifier for diagnosis may be estimated as the total cost of obtaining values for the features that serve as inputs for the classifier. Obtaining classifiers based on a low-cost set of input features with acceptable classification accuracy is of interest to practitioners and researchers. What makes this problem even more challenging is that the costs associated with features vary with patients and service providers and change over time.

This dissertation aims to address this problem by proposing a method for obtaining low-cost classifiers that meet specified accuracy requirements under dynamically changing costs. Given a set of relevant input features and accuracy requirements, the goal is to identify all qualifying classifiers based on subsets of the feature set. Then, for any arbitrary costs associated with the features, the cost of the classifiers may be computed and candidate classifiers selected based on the cost-accuracy tradeoff. Since the number of relevant input features k tends to be large for typical diagnosis problems, training and testing classifiers based on all 2^k - 1 possible non-empty subsets of features is computationally prohibitive. Under the reasonable assumption that the accuracy of a classifier is no lower than that of any classifier based on a subset of its input features, this dissertation aims to develop an efficient method to identify all qualifying classifiers.

This study used two types of classifiers—artificial neural networks and classification trees—that have proved promising for numerous problems as documented in the literature. The approach was to measure the accuracy obtained with the classifiers when all features were used. Then, reduced thresholds of accuracy were arbitrarily established that could be satisfied with subsets of the complete feature set. Threshold values for three measures—true positive rates, true negative rates, and overall classification accuracy—were considered for the classifiers. Two cost functions were used for the features: one used unit costs and the other random costs. Additional manipulation of costs was also performed.

The order in which features were removed was found to have a material impact on the effort required (removing the most important features first was most efficient; removing the least important features first was least efficient). The accuracy and cost measures were combined to produce a Pareto-optimal frontier. There were consistently few elements on this frontier: at most 15 subsets were on the frontier even when there were hundreds of thousands of acceptable feature sets. Most of the computational time is taken by training and testing the models. Given costs, models on the Pareto-optimal frontier can be efficiently identified and presented to decision makers. Both the neural networks and the decision trees performed in a comparable fashion, suggesting that any classifier could be employed.
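
Once candidate classifiers have been trained and their costs computed, the Pareto-optimal frontier itself is straightforward to extract: keep every classifier not dominated by another that is at least as cheap and at least as accurate. The sketch below shows one way to do this; the candidate tuples are illustrative.

```python
def pareto_frontier(candidates):
    """candidates: iterable of (name, cost, accuracy) tuples.
    Returns the non-dominated subset, cheapest first."""
    frontier = []
    best_acc = float("-inf")
    # After sorting by (cost, -accuracy), a candidate is dominated exactly
    # when some cheaper-or-equal one already achieves at least its accuracy.
    for name, cost, acc in sorted(candidates, key=lambda c: (c[1], -c[2])):
        if acc > best_acc:
            frontier.append((name, cost, acc))
            best_acc = acc
    return frontier

models = [("A", 10, 0.91), ("B", 25, 0.93), ("C", 25, 0.90), ("D", 40, 0.92)]
print(pareto_frontier(models))  # [('A', 10, 0.91), ('B', 25, 0.93)]
```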

380.
Cross-Lingual Word Sense Disambiguation for Low-Resource Hybrid Machine Translation / Rudnick, Alexander James, 08 January 2019
This thesis argues that cross-lingual word sense disambiguation (CL-WSD) can be used to improve lexical selection for machine translation when translating from a resource-rich language into an under-resourced one, especially when relatively little bitext is available. In CL-WSD, we perform word sense disambiguation, considering the senses of a word to be its possible translations into some target language, rather than using a sense inventory developed manually by lexicographers.

Using explicitly trained classifiers that make use of source-language context and of resources for the source language can help machine translation systems make better decisions when selecting target-language words. This is especially the case when the alternative is hand-written lexical selection rules developed by researchers with linguistic knowledge of the source and target languages, but also true when lexical selection would be performed by a statistical machine translation system, when there is a relatively small amount of available target-language text for training language models.

In this work, I present the Chipa system for CL-WSD and apply it to the task of translating from Spanish to Guarani and Quechua, two indigenous languages of South America. I demonstrate several extensions to the basic Chipa system, including techniques that allow us to benefit from the wealth of available unannotated Spanish text and existing text analysis tools for Spanish, as well as approaches for learning from bitext resources that pair Spanish with languages unrelated to our intended target languages. Finally, I provide proof-of-concept integrations of Chipa with existing machine translation systems, of two completely different architectures.
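
The core CL-WSD formulation can be sketched concretely: for one ambiguous source word, train a classifier whose labels are the translations observed in aligned target sentences and whose features come from the source-sentence context. The toy Spanish examples and the bag-of-ngrams plus logistic-regression pipeline below are illustrative assumptions, not Chipa's code or data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training instances for the ambiguous Spanish word "banco": each source
# context is paired with the translation observed in the aligned target
# sentence (the word's "sense" in the CL-WSD setting).
contexts = [
    "me sente en el banco de la plaza",
    "el banco aprobo el prestamo ayer",
    "deposite el dinero en el banco",
    "el banco de madera estaba roto",
]
translations = ["bench", "bank", "bank", "bench"]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(contexts, translations)
# A money-related context should select the "bank" translation:
print(clf.predict(["guarde el dinero en el banco"]))
```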