281 |
Um estudo sobre objetos com comportamento inteligente / A study on objects with intelligent behavior. Amaral, Janete Pereira do, January 1993 (has links)
Several studies have sought to define structures for building software development environments (SEEs). Some of them point to the need to provide such environments with intelligence, so that they can effectively coordinate and assist the software development process. The object-oriented paradigm (OOP) has been used to implement intelligent systems under different approaches, and has also been tried as a structure for building environments. The approach of building systems in which intelligence is distributed among their elements, as proposed by Hewitt, Minsky, and Lieberman, suggests modelling objects that act as problem solvers, working cooperatively to reach the system's goals, and applying this approach to the construction of intelligent environments. This dissertation presents a study of the use of the OOP in the implementation of intelligent systems and proposes an extension to the object concept. The extension is intended to give objects flexible behavior, autonomy in their actions, the ability to acquire new knowledge, and interaction with the external environment. Objects with these characteristics make it possible to build modular, evolvable intelligent systems, which eases their design, implementation, and maintenance.

To clarify the terms used throughout the dissertation, the basic concepts of the OOP and its main extensions are discussed. Several views of intelligence and intelligent behavior are presented, emphasizing knowledge, learning, and flexible behavior; the latter results from the acquisition of new knowledge and from the analysis of environment conditions. To provide background for the analysis of the representational characteristics of the OOP, the main knowledge representation schemes and some problem-solving strategies used in intelligent systems are presented. The use of the OOP as a knowledge representation scheme is analyzed, highlighting its advantages and shortcomings. A survey of proposals for using the OOP to implement intelligent systems is synthesized, with the goal of identifying the mechanisms employed in building such systems; it reveals a tendency to support the distributed-intelligence approach by combining the knowledge structuring provided by the OOP with positive characteristics of other paradigms.

A model of objects with intelligent behavior is then proposed. Besides the declarative and procedural aspects of knowledge, represented through instance variables and methods, the model encapsulates mechanisms to provide autonomy and flexible behavior, to allow the acquisition of new knowledge, and to support communication with users. To provide autonomy, a message manager was designed that receives the requests sent to the object, places them in a queue, and serves them according to the object's knowledge and its analysis of environment conditions. Using logic programming resources, behavior is made flexible through behavioral rules evaluated by backward chaining. New knowledge is acquired by adding facts, procedures, and behavioral rules to the object's knowledge base, or removing them from it. To provide assistance and report on their activities, objects exhibit the firing status of their behavioral rules, together with lists of the requests they have granted and of those still kept in their message queue. To exercise the proposed model, a prototype of an intelligent assistant for the activities of the software development process was implemented in Smalltalk/V, with logic programming resources integrated through Prolog/V. The experience gained with the model showed the feasibility of adding these complementary characteristics to the OOP object model and the simplicity of implementing them with multiparadigm resources. The model is therefore a viable alternative for the construction of intelligent environments.
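The prototype itself was written in Smalltalk/V with Prolog/V; the Python sketch below is only a rough illustration of the mechanism described above, under a deliberately simplified rule format: an object that queues incoming requests and grants them by backward chaining over its own facts and behavioral rules. All class, method, and rule names are hypothetical, not taken from the dissertation.

```python
from collections import deque

class IntelligentObject:
    """Illustrative only: an object that encapsulates a knowledge base,
    backward-chaining behavioral rules, and a message manager."""

    def __init__(self):
        self.facts = set()        # declarative knowledge
        self.rules = []           # behavioral rules: (goal, [subgoals])
        self.queue = deque()      # message manager: pending requests
        self.served = []          # requests already granted

    def assert_fact(self, fact):          # knowledge acquisition
        self.facts.add(fact)

    def add_rule(self, goal, subgoals):   # behavioral rule acquisition
        self.rules.append((goal, list(subgoals)))

    def prove(self, goal):                # naive backward chaining (acyclic rules assumed)
        if goal in self.facts:
            return True
        return any(head == goal and all(self.prove(g) for g in body)
                   for head, body in self.rules)

    def request(self, goal):              # other objects send requests here
        self.queue.append(goal)

    def run(self):                        # grant what can be proven, keep the rest queued
        pending = deque()
        while self.queue:
            goal = self.queue.popleft()
            (self.served if self.prove(goal) else pending).append(goal)
        self.queue = pending

    def report(self):                     # status shown to the user
        return {"granted": self.served, "still queued": list(self.queue)}

obj = IntelligentObject()
obj.assert_fact("spec_approved")
obj.add_rule("start_coding", ["spec_approved"])
obj.request("start_coding")
obj.request("deploy")                     # cannot be proven yet, stays queued
obj.run()
print(obj.report())   # {'granted': ['start_coding'], 'still queued': ['deploy']}
```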
|
282 |
SDIP: um ambiente inteligente para a localização de informações na internet / SDIP: an intelligent system to discover information on the internet. Fernandez, Luis Fernando Nunes, January 1995 (has links)
The goal of the work described in detail in this text is to implement an intelligent system capable of assisting its users in locating and retrieving information on the Internet. To reach this goal, we built a system that offers its users two distinct but integrated kinds of interface: natural language and graphical (based on menus, windows, and so on). In addition, searches are carried out in an intelligent way, that is, based on the knowledge managed by the system, which is built and structured dynamically by the user. The work is logically structured in four parts:

1. An introductory study of the most widely used information search and retrieval systems available on the Internet at the time. With the growth of the network, the quantity and variety of the information it holds and makes available to users has increased enormously, and the systems that give access to this information, distributed over hundreds of servers around the world, have diversified accordingly. To situate the reader, the Archie, Gopher, WAIS, and WWW systems are discussed in detail.

2. An introductory study of Discourse Representation Theory (DRT). In broad terms, DRT is a formalism for representing discourse that uses models for the semantic evaluation of the structures it generates. Being introductory, this study covers only the aspects of discourse representation proposed by the theory, with emphasis on the representation of simple sentences, notably those of interest to the system.

3. A detailed study of the implementation, describing each of the processes that make up the system: the Archie Process, which implements the facilities that allow the system to interact with Archie servers; the FTP Process, which allows SDIP to retrieve remote files using the standard Internet protocol FTP (File Transfer Protocol); the Front-end and SABI Interface, which support bibliographic queries to the SABI system installed at the Universidade Federal do Rio Grande do Sul; the Electronic Mail Server, which implements an alternative interface to the system through electronic mail messages carrying the user's query and the system's response; the Graphical Interface, which offers users a graphical environment for interacting with the system; and the Intelligent Process, the module that implements the intelligent part of the system, providing, for instance, the facilities for interpreting sentences written in Portuguese.

4. Finally, the epilogue of the work presents examples that illustrate the use of the facilities offered by SDIP's graphical environment.

Briefly, user commands and queries can be formulated in two distinct ways. In the first, the system acts only as an intermediary for access to the Archie and SABI servers, offering users a graphical environment for interacting with these two systems. In the second, users formulate their queries or commands as natural-language sentences; in this case, when the input is a query, the system uses its knowledge base to refine it and thus locate the information that best meets the user's needs.
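Part 2 discusses DRT only at the level needed by the system; purely to illustrate what a Discourse Representation Structure for a simple sentence might look like in code, here is a minimal Python sketch. The referent and predicate names are hypothetical and do not come from SDIP.

```python
from dataclasses import dataclass, field

@dataclass
class DRS:
    """Minimal Discourse Representation Structure: discourse referents
    plus conditions over them (illustrative only)."""
    referents: set = field(default_factory=set)
    conditions: list = field(default_factory=list)

    def merge(self, other: "DRS") -> "DRS":
        # DRS merge: union of referents, concatenation of conditions
        return DRS(self.referents | other.referents,
                   self.conditions + other.conditions)

# "A user retrieves a file about Prolog"
sentence = DRS({"x", "y"}, [("user", "x"), ("file", "y"), ("retrieve", "x", "y")])
modifier = DRS(set(), [("about", "y", "prolog")])
print(sentence.merge(modifier))
```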
|
283 |
Organização de conhecimento e informações para integração de componentes em um arcabouço de projeto orientado para a manufatura / Organization of knowledge and information for integrating components in a design-for-manufacturing framework. Ramos, André Luiz Tietböhl, January 2015
The constant evolution of design methods, technologies, and associated tools gives the designer greater capability, but it also considerably increases the requirements for interfacing with and controlling the set of design components. This aspect is typical of Design for Manufacturing (DFM), where many distinct components exist; each existing or future component may have a different focus and, consequently, distinct information, usage, and execution requirements. This work proposes the comprehensive use of flexible conceptual standards for information and control in a Design for Manufacturing architecture. The main objective is to support DFM analysis and resolution, and the design activity itself, by structuring and proposing a solution for a relevant DFM issue: the structuring of the context of DFM information (or knowledge). To demonstrate the relevance of correctly contextualizing and using information in the DFM domain, the framework, which is focused on machining processes, implements the following design activities: a tolerancing model; a cost model based on material removal processes; a tool accessibility model that takes the part being designed into account; a model of the availability of machines and tools; and a material analysis model. More broadly, the need to understand the distinct types and forms of DFM information requires that a design framework be able to manage different contexts in which design information is used. This is a relevant issue, since several DFM activities should ideally be included in the design process; each of them typically has distinct data and knowledge requirements, or design contextualizations, which the current information architecture, STEP, can handle only in part, and each may have, or need, a different way of understanding DFM information (its information context).

The framework handles these information context concepts through ontologies targeted at the DFM domain. It is expected that this will lead to a better understanding and use of the intrinsic information interfaces that exist in this domain and, through that, to DFM systems that are more flexible and more effective in how they use information.
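The thesis realizes these contexts with STEP data and DFM ontologies; only to make the idea of activity-specific information contexts concrete, the toy Python structure below maps each machining-related DFM activity to the data its context needs. Every concept and field name here is hypothetical, not taken from the thesis.

```python
# Hypothetical mini-ontology of DFM information contexts (illustration only).
DFM_CONTEXTS = {
    "Tolerancing":       {"needs": ["geometry", "datum_scheme"]},
    "Cost":              {"needs": ["material", "removal_volume", "machine_rate"]},
    "ToolAccessibility": {"needs": ["geometry", "tool_library"]},
    "Availability":      {"needs": ["machine_pool", "tool_library"]},
    "MaterialAnalysis":  {"needs": ["material"]},
}

def build_context(activity: str, design_data: dict) -> dict:
    """Expose to an activity only the slice of design data its context requires,
    and report what is still missing."""
    needed = DFM_CONTEXTS[activity]["needs"]
    return {
        "activity": activity,
        "inputs": {k: design_data[k] for k in needed if k in design_data},
        "missing": [k for k in needed if k not in design_data],
    }

print(build_context("Cost", {"material": "Al-6061", "removal_volume": 120.0}))
# -> inputs for material and removal_volume; 'machine_rate' reported as missing
```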
|
284 |
USING MACHINE LEARNING TECHNIQUES TO IMPROVE STATIC CODE ANALYSIS TOOLS USEFULNESS. Enas Ahmad Alikhashashneh (7013450), 16 October 2019 (has links)
This dissertation proposes an approach that uses Machine Learning (ML) techniques to reduce, as far as possible, the cost of manually inspecting the large number of false positive warnings reported by Static Code Analysis (SCA) tools. The proposed approach neither assumes the use of a particular SCA tool nor depends on the specific programming language used to write the target source code or application. To reduce the number of false positive warnings, we first evaluated a number of SCA tools in terms of software engineering metrics using a well-known synthetic code base, the Juliet test suite. From this evaluation, we concluded that SCA tools report plenty of false positive warnings that require manual inspection. We then generated a number of datasets from source code that forced the SCA tool to produce either true positive, false positive, or false negative warnings. These datasets were used to train four ML classifiers to classify the warnings collected from the synthetic source code. From the experimental results, we observed that the classifier built using the Random Forests (RF) technique outperformed the other classifiers. Lastly, using this classifier together with an instance-based transfer learning technique, we ranked a number of warnings aggregated from various open-source software projects. The experimental results show that the proposed approach to reducing the cost of manually inspecting false positive warnings outperformed a random ranking algorithm and was highly correlated with the ranked list generated by the optimal ranking algorithm.
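As a rough sketch of the classification-and-ranking step, assuming scikit-learn and entirely synthetic warning features (the dissertation's real features and datasets are not reproduced here), the following shows a Random Forest trained to separate true from false positive warnings and then used to rank unseen warnings by their probability of being true positives.

```python
# Sketch only: synthetic stand-in for SCA warning features, not the real data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 4))                         # e.g. complexity, fan-in, ... (hypothetical)
y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(int)  # 1 = true positive warning

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))

# Rank unseen warnings so reviewers inspect likely real defects first.
scores = clf.predict_proba(X_te)[:, 1]
ranking = np.argsort(-scores)
print("inspect first:", ranking[:5])
```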
|
285 |
Etude d’une méthodologie pour la construction d’un système de télésurveillance médicale : application à une plateforme dédiée au maintien et au suivi à domicile de personnes atteintes d’insuffisance cardiaque / Toward a methodology for the construction of a telemonitoring system: application to a platform dedicated to home monitoring of people with heart failure. Ahmed Benyahia, Amine, 27 May 2015 (has links)
This thesis, carried out within the framework of the French "investissements d'avenir" E-care project, proposes a methodological process to facilitate the analysis and design of medical telemonitoring systems for the early detection of warning signs preceding any complication. The proposed methodology is based on a multi-agent system that uses several types of ontologies associated with an expert system. The multi-agent system is well suited to medical telemonitoring, with a distributed architecture that provides autonomy and responsiveness at the deployment sites, in particular patients' homes. The process identifies the generic and the specific aspects of each system. The architectures designed in this way take into account all of the patient's data: profile, medical history, drug treatments, physiological and behavioral data, as well as data about the patient's environment and lifestyle. These architectures must also be open, so that they can accommodate new data sources.

The methodology was applied to the E-care project to define its information system. This information system is composed of two types of ontologies representing the relevant knowledge, together with an expert system for detecting risk situations. A problem ontology was first built to manage the system, its actors, and their tasks. Three domain ontologies were then built to represent diseases, drugs, and cardiovascular risk factors. The expert system uses inference rules defined in collaboration with medical experts, drawing on their knowledge and on good-practice guidelines in cardiology. The methodology also defined the system architecture, which consists of four types of autonomous agents: sensors that take the physiological measurements; a gateway that collects data from the sensors and transmits them from the patients' homes to the server; a server that processes the data and gives access to them; and a database that provides secure storage of patient data.

The E-care system was tested and validated using tests and simulations inspired by real cases. An experiment was then carried out to validate the various system components in a real telemonitoring setting. This experiment has gone through two phases: the first took place at the University Hospital of Strasbourg (CHRU de Strasbourg), and the second is under way in the patients' homes.
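As a purely illustrative sketch of how such inference rules might flag a risk situation from daily measurements (the thresholds and rule names below are invented for the example and are not clinical guidance from the thesis):

```python
# Toy rule base: each rule is a name plus a condition over one day's measurements.
RULES = [
    ("rapid_weight_gain",     lambda m: m.get("weight_gain_kg_3d", 0.0) >= 2.0),
    ("resting_tachycardia",   lambda m: m.get("heart_rate", 0) > 100),
    ("low_oxygen_saturation", lambda m: m.get("spo2", 100) < 92),
]

def detect_risk(measurements: dict) -> list:
    """Return the names of all rules that fire, to be reported to medical staff."""
    return [name for name, condition in RULES if condition(measurements)]

print(detect_risk({"weight_gain_kg_3d": 2.4, "heart_rate": 88, "spo2": 95}))
# -> ['rapid_weight_gain']
```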
|
286 |
Web ontologies e rappresentazione della conoscenza. Concetti e strumenti per la didattica / Web Ontologies and Knowledge Representation. CARMINATI, VERA MARIA, 02 April 2007 (has links)
This work highlights the mutual implications of two worlds, technology and education, with respect to themes of shared interest: the semantic evolution of the Web through the use of computational ontologies, and the complex relationship between knowledge representation formalisms and didactics.

The historical reconstruction of the relationships among didactics, technologies, and the systems used to express and communicate knowledge leads us to place the Semantic Web within the archaeology of the forms of knowledge representation, examined with attention to their didactic potential and in relation to the evolution of Western culture. The work aims to provide a model for reading the intersections between Web ontologies and the sciences of education.

With regard to the international panorama of educational research on these themes, some significant experiences of applying the ontological approach in e-learning environments and systems are singled out and described, in order to ground the proposed theoretical discussion in the reality of applications and tools.
|
287 |
Rechnerunterstützung für die Suche nach verarbeitungstechnischen Prinziplösungen / Computer support for the search for principle solutions in processing technology. Majschak, Jens-Peter, 20 March 2013 (has links) (PDF)
The file made available here is unfortunately not complete; for technical reasons, the following appendices are not included:

Appendix 3: Concept hierarchy "verarbeitungstechnische Funktion" (processing function), p. 141
Appendix 4: Concept hierarchy "Eigenschaftsänderung" (property change), p. 144
Appendix 5: Concept hierarchy "Verarbeitungsgut" (material being processed), p. 149
Appendix 6: Concept hierarchy "Verarbeitungstechnisches Prinzip" (processing principle), p. 151

Please consult the print edition, which can be found in the holdings of the SLUB Dresden: http://slubdd.de/katalog?TN_libero_mab21079933
|
288 |
Visual problem solving in autism, psychometrics, and AI: the case of the Raven's Progressive Matrices intelligence test. Kunda, Maithilee, 03 April 2013 (has links)
Much of cognitive science research and almost all of AI research into problem solving has focused on the use of verbal or propositional representations. However, there is significant evidence that humans solve problems using different representational modalities, including visual or iconic ones. In this dissertation, I investigate visual problem solving from the perspectives of autism, psychometrics, and AI.
Studies of individuals on the autism spectrum show that they often use atypical patterns of cognition, and anecdotal reports have frequently mentioned a tendency to "think visually." I examined one precise characterization of visual thinking in terms of iconic representations. I then conducted a comprehensive review of data on several cognitive tasks from the autism literature and found numerous instances indicating that some individuals with autism may have a disposition towards visual thinking.
One task, the Raven's Progressive Matrices test, is of particular interest to the field of psychometrics, as it represents one of the single best measures of general intelligence that has yet been developed. Typically developing individuals are thought to solve the Raven's test using largely verbal strategies, especially on the more difficult subsets of test problems. In line with this view, computational models of information processing on the Raven's test have focused exclusively on propositional representations. However, behavioral and fMRI studies of individuals with autism suggest that these individuals may use instead a predominantly visual strategy across most or all test problems.
To examine visual problem solving on the Raven's test, I first constructed a computational model, called the Affine and Set Transformation Induction (ASTI) model, which uses a combination of affine transformations and set operations to solve Raven's problems using purely pixel-based representations of problem inputs, without any propositional encoding. I then performed four analyses using this model.
First, I tested the model against three versions of the Raven's test, to determine the sufficiency of visual representations for solving this type of problem. The ASTI model successfully solves 50 of the 60 problems on the Standard Progressive Matrices (SPM) test, comparable in performance to the best computational models that use propositional representations. Second, I evaluated model robustness in the face of changes to the representation of pixels and visual similarity. I found that varying these low-level representational commitments causes only small changes in overall performance. Third, I performed successive ablations of the model to create a new classification of problem types, based on which transformations are necessary and sufficient for finding the correct answer. Fourth, I examined if patterns of errors made on the SPM can provide a window into whether a visual or verbal strategy is being used. While many of the observed error patterns were predicted by considering aspects of the model and of human behavior, I found that overall error patterns do not seem to provide a clear indicator of strategy type.
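To give a flavor of this pixel-based style of computation (this is not the ASTI model itself, only a toy sketch under simplified assumptions, with a deliberately small transform set and tiny binary images), the following Python fragment induces the transformation that best maps one pixel matrix to another, applies it to a third matrix, and selects the answer candidate most similar to the prediction.

```python
import numpy as np

def similarity(a, b):
    return float((a == b).mean())           # pixel-overlap score in [0, 1]

TRANSFORMS = {                               # a few candidate transformations
    "identity": lambda m: m,
    "rot90":    np.rot90,
    "flip_ud":  np.flipud,
    "flip_lr":  np.fliplr,
}

def solve(A, B, C, answers):
    """Induce the transform that best maps A to B, apply it to C,
    and pick the answer most similar to the predicted image."""
    best = max(TRANSFORMS, key=lambda t: similarity(TRANSFORMS[t](A), B))
    predicted = TRANSFORMS[best](C)
    return max(range(len(answers)), key=lambda i: similarity(answers[i], predicted))

A = np.array([[1, 1, 1], [0, 0, 0], [0, 0, 0]])
B = np.flipud(A)                             # hidden rule: flip top-to-bottom
C = np.array([[1, 0, 0], [1, 0, 0], [0, 0, 0]])
answers = [np.rot90(C), np.flipud(C), C]
print(solve(A, B, C, answers))               # -> 1 (the flipped candidate)
```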
The main contributions of this dissertation include: (1) a rigorous definition and examination of a disposition towards visual thinking in autism; (2) a sufficiency proof, through the construction of a novel computational model, that visual representations can successfully solve many Raven's problems; (3) a new, data-based classification of problem types on the SPM; (4) a new classification of conceptual error types on the SPM; and (5) a methodology for analyzing, and an analysis of, error patterns made by humans and computational models on the SPM. More broadly, this dissertation contributes significantly to our understanding of visual problem solving.
|
289 |
Automated Theorem Proving for General Game Playing. Haufe, Sebastian, 10 July 2012 (has links) (PDF)
While automated game playing systems like Deep Blue perform excellently within their domain, handling a different game or even a slight change of rules is impossible without the programmer's intervention. Considered a great challenge for Artificial Intelligence, General Game Playing is concerned with the development of techniques that enable computer programs to play arbitrary, possibly unknown n-player games given nothing but the game rules in a tailor-made description language. A key to success in this endeavour is the ability to reliably extract hidden game-specific features from a given game description automatically. An informed general game player can efficiently play a game by exploiting structural game properties to choose the currently most appropriate algorithm, to construct a suited heuristic, or to apply techniques that reduce the search space. In addition, an automated method for property extraction can provide valuable assistance for the discovery of specification bugs during game design by providing information about the mechanics of the currently specified game description. The recent extension of the description language to games with incomplete information and elements of chance further induces the need for the detection of game properties involving player knowledge in several stages of the game.
In this thesis, we develop a formal proof method for the automatic acquisition of rich game-specific invariance properties. To this end, we first introduce a simple yet expressive property description language to address knowledge-free game properties, which may involve arbitrary finite sequences of successive game states. We specify a semantics based on state transition systems over the Game Description Language, and develop a provably correct formal theory which allows us to show the validity of game properties, with respect to this semantics, across all reachable game states. Our proof theory does not require visiting every single reachable state. Instead, it applies an induction principle over the game rules based on the generation of answer set programs, allowing any off-the-shelf answer set solver to be applied to verify invariance properties in practice, even in complex games whose state space cannot be fully explored. To account for the recent extension of the description language to games with incomplete information and elements of chance, we extend our induction method, again in a provably correct way, to properties involving player knowledge. An extensive evaluation shows its practical applicability even in complex games.
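The thesis avoids enumerating the state space by proving invariants inductively over the game rules with answer set programs; purely by way of contrast, and only for a toy game small enough to enumerate (the game and invariant below are invented for this illustration, not written in GDL), a brute-force check of what "holds in every reachable state" means might look like this:

```python
from collections import deque

INITIAL = (0, 3)                       # toy game state: (token position, moves left)

def legal_moves(state):
    pos, moves = state
    if moves == 0:
        return []                      # terminal state
    return [(pos + d, moves - 1) for d in (-1, +1) if 0 <= pos + d < 4]

def invariant(state):
    pos, _ = state
    return 0 <= pos < 4                # "the token never leaves the 4-cell board"

def holds_in_all_reachable_states(initial):
    seen, frontier = {initial}, deque([initial])
    while frontier:                    # breadth-first search over reachable states
        state = frontier.popleft()
        if not invariant(state):
            return False
        for nxt in legal_moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True

print(holds_in_all_reachable_states(INITIAL))   # True for this toy game
```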
|
290 |
Age-related differences in deceit detection: The role of emotion recognition. Tehan, Jennifer R., 17 April 2006 (has links)
This study investigated whether age differences in deceit detection are related to impairments in emotion recognition. Key cues to deceit are facial expressions of emotion (Frank and Ekman, 1997). The aging literature has shown an age-related decline in decoding emotions (e.g., Malatesta, Izard, Culver, and Nicolich, 1987). In the present study, 354 participants were presented with 20 interviews and asked to decide whether each man was lying or telling the truth. Ten interviews involved a crime and ten a social opinion. Each participant was assigned to one of three presentation conditions: 1) visual only, 2) audio only, or 3) audio-visual. For the crime interviews, age-related impairments in emotion recognition hindered older adults in the visual-only condition. In the opinion-topic interviews, older adults exhibited a truth bias, which rendered them worse at detecting deceit than young adults. Cognitive and dispositional variables did not help to explain the age differences in the ability to detect deceit.
|