151. Knowledge-Based General Game Playing. Schiffel, Stephan, 14 June 2012.
The goal of General Game Playing (GGP) is to develop a system that is able to play previously unseen games well, automatically and solely on the basis of the rules of the game.
In contrast to traditional game playing programs, a general game player cannot be given game-specific knowledge.
Instead, the program has to discover this knowledge on its own and use it to play the game well without human intervention.
In this thesis, we present such a program together with general methods that solve a variety of knowledge discovery problems in GGP.
Our main contributions are methods for the automatic construction of heuristic evaluation functions, the automated discovery of game structures, a system for proving properties of games, and symmetry detection and exploitation for general games.
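Heuristic evaluation functions of the kind constructed here are typically plugged into a depth-limited game tree search. The following is a hedged sketch, not Fluxplayer's actual code: a negamax-style search over an assumed game interface (the method names `is_terminal`, `score`, `legal_moves`, `apply` and `heuristic` are illustrative), with a toy Nim game standing in for a real GGP game description.

```python
def minimax(state, depth, game):
    """Depth-limited negamax: returns (value, best_move) from the
    perspective of the player to move."""
    if game.is_terminal(state):
        return game.score(state), None
    if depth == 0:
        return game.heuristic(state), None  # constructed evaluation function
    best_value, best_move = None, None
    for move in game.legal_moves(state):
        value, _ = minimax(game.apply(state, move), depth - 1, game)
        value = -value  # the opponent's gain is our loss
        if best_value is None or value > best_value:
            best_value, best_move = value, move
    return best_value, best_move

class Nim:
    """Toy game: remove 1 or 2 stones; whoever takes the last stone wins."""
    def is_terminal(self, state):
        return state == 0
    def score(self, state):
        return -1  # the previous player took the last stone, so the mover lost
    def heuristic(self, state):
        return 0  # neutral estimate below the depth limit
    def legal_moves(self, state):
        return [m for m in (1, 2) if m <= state]
    def apply(self, state, move):
        return state - move

print(minimax(4, 10, Nim()))  # (1, 1): taking one stone wins from 4 stones
```

A general game player cannot hand-craft `heuristic` per game; constructing it automatically from the rules is exactly the contribution summarized above.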
152. Towards Next Generation Sequential and Parallel SAT Solvers / Hin zur nächsten Generation Sequentieller und Paralleler SAT-Solver. Manthey, Norbert, 08 January 2015.
This thesis improves SAT solving technology in two major areas: sequential SAT solving and parallel SAT solving.
To better understand sequential SAT algorithms, the abstract reduction system Generic CDCL is introduced, with which the soundness of solving techniques can be modeled. Next, the conflict-driven clause learning (CDCL) algorithm is extended with three techniques, local look-ahead, local probing and all-UIP learning, that allow more global reasoning during search. These techniques improve the performance of the sequential SAT solver Riss. Then, the formula simplification techniques bounded variable addition, covered literal elimination and an advanced cardinality constraint extraction are introduced. With these techniques, the reasoning of the overall SAT solving tool chain becomes stronger than plain resolution. When they are applied in the formula simplification tool Coprocessor before Riss solves a formula, the performance improves further.
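Underlying all of these techniques is the basic search loop of unit propagation plus branching. As a minimal, hedged illustration of that core (a plain recursive DPLL sketch without clause learning, far simpler than the CDCL implemented in Riss):

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL SAT solver. Clauses are lists of non-zero ints in
    DIMACS style: literal v means "variable v is true", -v means false.
    Returns a satisfying assignment {var: bool} or None."""
    if assignment is None:
        assignment = {}

    def value(lit):  # truth value of a literal, or None if unassigned
        var = abs(lit)
        if var not in assignment:
            return None
        return assignment[var] == (lit > 0)

    # Simplify: drop satisfied clauses, strip falsified literals.
    simplified = []
    for clause in clauses:
        vals = [value(l) for l in clause]
        if True in vals:
            continue  # clause already satisfied
        remaining = [l for l, v in zip(clause, vals) if v is None]
        if not remaining:
            return None  # conflict: every literal of the clause is false
        simplified.append(remaining)
    if not simplified:
        return assignment  # all clauses satisfied

    # Unit propagation: a unit clause forces its literal.
    for clause in simplified:
        if len(clause) == 1:
            lit = clause[0]
            return dpll(simplified, {**assignment, abs(lit): lit > 0})

    # Branch on the first unassigned variable.
    lit = simplified[0][0]
    for val in (True, False):
        result = dpll(simplified, {**assignment, abs(lit): val})
        if result is not None:
            return result
    return None

# (p or q) and (not p or q) and (not q or r)
print(dpll([[1, 2], [-1, 2], [-2, 3]]))  # {1: True, 2: True, 3: True}
```

CDCL extends this loop with learned conflict clauses and non-chronological backtracking; the techniques above strengthen it further still.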
Due to the increasing number of cores in CPUs, the scalable parallel SAT solving approach of iterative partitioning has been implemented for the multi-core architecture in Pcasso. Related work on parallel SAT solving has been studied to extract the main ideas that can improve Pcasso. Besides parallel formula simplification with bounded variable elimination, the major extension is extended clause sharing with level-based clause tagging, which builds the basis for conflict-driven node killing. The latter makes it possible to better identify unsatisfiable search space partitions. Another improvement combines scattering and look-ahead into a superior search space partitioning function. In combination with Coprocessor, the introduced extensions increase the performance of the parallel solver Pcasso. The implemented system turns out to be scalable on the multi-core architecture; hence, iterative partitioning is interesting for future parallel SAT solvers.
The implemented solvers participated in international SAT competitions. In 2013 and 2014 Pcasso showed a good performance. Riss in combination with Coprocessor won several first, second and third prizes, including two Kurt Gödel medals. Hence, the introduced algorithms improved modern SAT solving technology.
153. Context-aware anchoring, semantic mapping and active perception for mobile robots. Günther, Martin, 30 November 2021.
An autonomous robot that acts in a goal-directed fashion requires a world model of the elements that are relevant to the robot's task. In real-world, dynamic environments, the world model has to be created and continually updated from uncertain sensor data. The symbols used in plan-based robot control have to be anchored to detected objects. Furthermore, robot perception is not only a bottom-up and passive process: Knowledge about the composition of compound objects can be used to recognize larger-scale structures from their parts. Knowledge about the spatial context of an object and about common relations to other objects can be exploited to improve the quality of the world model and can inform an active search for objects that are missing from the world model. This thesis makes several contributions to address these challenges: First, a model-based semantic mapping system is presented that recognizes larger-scale structures like furniture based on semantic descriptions in an ontology. Second, a context-aware anchoring process is presented that creates and maintains the links between object symbols and the sensor data corresponding to those objects while exploiting the geometric context of objects. Third, an active perception system is presented that actively searches for a required object while being guided by the robot's knowledge about the environment.
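The anchoring problem of linking object symbols to detections can be sketched, under strong simplifying assumptions (2D positions, greedy nearest-neighbour matching, a hypothetical distance gate), as follows; this illustrates the general idea, not the context-aware process of the thesis:

```python
import math

def anchor(symbols, detections, max_dist=0.5):
    """Greedy anchoring sketch: link each known object symbol to the
    nearest unclaimed detection within max_dist (an assumed gate, in
    metres).  symbols: {name: (x, y)} last known positions;
    detections: list of (x, y) positions from the current sensor frame.
    Returns updated positions plus unmatched detections (new candidates)."""
    unclaimed = list(range(len(detections)))
    updated = {}
    for name, pos in symbols.items():
        best, best_d = None, max_dist
        for i in unclaimed:
            d = math.dist(pos, detections[i])
            if d <= best_d:
                best, best_d = i, d
        if best is not None:
            updated[name] = detections[best]  # re-anchor symbol to detection
            unclaimed.remove(best)
        else:
            updated[name] = pos  # not observed this frame: keep old estimate
    new_objects = [detections[i] for i in unclaimed]
    return updated, new_objects

symbols = {"cup": (0.0, 0.0), "plate": (1.0, 0.0)}
detections = [(0.1, 0.0), (2.0, 2.0)]
print(anchor(symbols, detections))
```

Exploiting geometric context, as described above, replaces the naive distance gate with knowledge about where an object is likely to be relative to others.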
154. Imitation Learning of Motor Skills for Synthetic Humanoids. Ben Amor, Heni, 12 November 2010.
This thesis addresses the question of how to teach dynamic motor skills to synthetic humanoids. A general approach based on imitation learning is presented and evaluated on a number of synthetic humanoids and a number of different motor skills. The approach allows motor skills to be specified intuitively and naturally, without the need for expert knowledge. Using this approach, we show that various important problems in robotics and computer animation can be tackled, including the synthesis of natural grasping, the synthesis of locomotion behavior, and physical interaction between humans and robots.
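As a deliberately simple stand-in for imitation learning (not the method of the thesis), a motor skill can be extracted from several demonstrations by pointwise averaging of equally long joint trajectories:

```python
def learn_skill(demonstrations):
    """Imitation learning sketch: build a motor skill as the pointwise
    mean of several demonstrated trajectories.  Each demonstration is a
    list of joint positions of equal length; real methods would instead
    fit a generalizable movement model to the demonstrations."""
    n = len(demonstrations)
    length = len(demonstrations[0])
    return [sum(d[t] for d in demonstrations) / n for t in range(length)]

demos = [[0.0, 0.5, 1.0], [0.2, 0.5, 0.8]]
print(learn_skill(demos))  # [0.1, 0.5, 0.9]
```

Averaging only reproduces a single mean movement; the appeal of imitation learning as described above is that it generalizes demonstrated skills to new situations.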
155. Belief Change in Reasoning Agents: Axiomatizations, Semantics and Computations. Jin, Yi, 17 January 2007.
The capability of changing beliefs upon new information in a rational and efficient way is crucial for an intelligent agent. Belief change has therefore been one of the central research fields in Artificial Intelligence (AI) for over two decades. In the AI literature, two different kinds of belief change operations have been intensively investigated: belief update, which deals with situations where the new information describes changes of the world, and belief revision, which assumes the world is static. As another important research area in AI, reasoning about actions mainly studies the problem of representing and reasoning about the effects of actions. These two research fields are closely related and share a common underlying principle: an agent should change its beliefs (knowledge) as little as possible whenever an adjustment is necessary. This opens the possibility of reusing the ideas and results of one field in the other, and vice versa. This thesis aims to develop a general framework and devise computational models that are applicable in reasoning about actions. Firstly, I propose a new framework for iterated belief revision by adding a new postulate to the existing AGM/DP postulates; it provides general criteria for the design of iterated revision operators. Secondly, based on the new framework, a concrete iterated revision operator is devised. The semantic model of the operator provides clear intuitions and helps to show that it satisfies the desirable postulates. I also show that the computational model of the operator is almost optimal in time and space complexity. In order to deal with the belief change problem in multi-agent systems, I introduce the concept of mutual belief revision, which is concerned with information exchange among agents. A concrete mutual revision operator is devised by generalizing the iterated revision operator.
Likewise, a semantic model is used to show the intuition behind the mutual revision operator and many of its properties, and the complexity of its computational model is formally analyzed. Finally, I present a belief update operator that takes into account two important problems of reasoning about actions, namely disjunctive updates and domain constraints. Again, the update operator is presented with both a semantic model and a computational model.
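A common semantic model for iterated revision ranks possible worlds by implausibility (Spohn-style ordinal conditional functions). The following hedged sketch implements one such conditionalization operator; it illustrates the general idea, not the specific operator devised in the thesis:

```python
def revise(kappa, evidence, strength=1):
    """Iterated revision sketch on an ordinal conditional function (OCF).
    kappa: {world: rank}, where rank 0 marks the most plausible worlds
    and higher ranks mark increasingly implausible ones.
    evidence: the set of worlds compatible with the new information.
    After revision, the best evidence worlds move to rank 0, and all
    non-evidence worlds end up `strength` ranks behind them."""
    best_in = min(kappa[w] for w in evidence)
    best_out = min((kappa[w] for w in kappa if w not in evidence), default=None)
    new_kappa = {}
    for world, rank in kappa.items():
        if world in evidence:
            new_kappa[world] = rank - best_in      # shift evidence worlds down
        else:
            new_kappa[world] = rank - best_out + strength  # push the rest back
    return new_kappa

# Worlds named by the propositions true in them; revise by evidence "q holds".
kappa = {"pq": 1, "p": 0, "q": 2, "none": 3}
print(revise(kappa, {"pq", "q"}))  # {'pq': 0, 'p': 1, 'q': 1, 'none': 4}
```

Because the output is again a ranking, the operator can be applied repeatedly, which is exactly what the iterated-revision postulates discussed above constrain.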
156. Knowledge-Based General Game Playing. Schiffel, Stephan, 29 July 2011.
The goal of General Game Playing (GGP) is to develop a system that is able to play previously unseen games well, automatically and solely on the basis of the rules of the game.
In contrast to traditional game playing programs, a general game player cannot be given game-specific knowledge.
Instead, the program has to discover this knowledge on its own and use it to play the game well without human intervention.
In this thesis, we present such a program together with general methods that solve a variety of knowledge discovery problems in GGP.
Our main contributions are methods for the automatic construction of heuristic evaluation functions, the automated discovery of game structures, a system for proving properties of games, and symmetry detection and exploitation for general games.

1. Introduction
2. Preliminaries
3. Components of Fluxplayer
4. Game Tree Search
5. Generating State Evaluation Functions
6. Distance Estimates for Fluents and States
7. Proving Properties of Games
8. Symmetry Detection
9. Related Work
10. Discussion
157. Erweiterungen des fallbasierten Schließens zur prognostischen Fundierung von Planungsaufgaben - Konzeption und prototypische Implementierung am Beispiel von Kapazitätsnachfrageprognosen zur Fundierung der Kapazitätsplanung auf verschiedenen Ebenen hochschulinterner Planungssysteme. Pöppelmann, Daniel, 09 March 2015.
This thesis proposes the concept of a composite decision support system (DSS), consisting of a knowledge-driven and a data-driven component, which supports decision makers in capacity planning at the various levels of a university-internal planning system with forecast demand figures concerning the core process of teaching.
The core of the composite DSS is a knowledge-driven component that, based on the paradigm of case-based reasoning (CBR), enables the prediction of individual study paths for all students of a university. To this end, experience regarding the module and exam enrollments of alumni and advanced students, represented as cases, is reused. The domain-specific adaptations and extensions of the CBR paradigm comprise, first, the representation of experience with heterogeneous temporal reference through cases. In this regard, a dynamic assignment of case attributes to the description and solution components of a case, depending on the problem to be solved, is designed; in addition, a mechanism is provided for representing time-dependent attributes in both the description and the solution of a case. Second, the CBR cycle, the problem-solving process used in the concept to create forecasts, is extended. The extensions comprise in particular the automated detection of (forecast) cases to be solved, the verification and adjustment of generated forecasts by means of a rule-based system backed by domain knowledge from an ontology, and the time-asynchronous involvement of a large number of students as domain experts for adjusting individually forecast study paths.
A data-driven component forms the second part of the composite DSS. It serves to provide the results of the knowledge-driven component in a form usable by decision makers in university-internal capacity planning. To this end, the forecasts created by the knowledge-driven component are loaded into a multidimensionally modeled data mart and made available for analysis at various aggregation levels through analytical and standard reports.
To evaluate the concept, it is implemented prototypically using the University of Osnabrück as an example. The evaluation focuses on the criterion of forecast accuracy, which is operationalized through various quality measures based on the forecast error. These are determined by forecast simulations with the prototype and are interpreted and assessed on the basis of an interpretation scheme as well as by comparison with the results of a reference method.
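The retrieve and reuse steps of the CBR cycle can be sketched as a k-nearest-neighbour forecast over case attributes; the attribute names below (semester, grade, credits) are purely illustrative, not the prototype's actual case model:

```python
def retrieve(case_base, query, k=3):
    """CBR retrieval sketch: find the k most similar past student cases.
    Cases and the query are dicts of numeric attributes; similarity is
    an inverse distance over the attributes shared with the query."""
    def similarity(case):
        shared = [a for a in query if a in case]
        dist = sum(abs(case[a] - query[a]) for a in shared)
        return 1.0 / (1.0 + dist)
    return sorted(case_base, key=similarity, reverse=True)[:k]

def reuse(neighbours, target="credits_next_semester"):
    """Reuse sketch: forecast the target as the mean of the neighbours'
    solutions (a real system would adapt, verify and revise this value)."""
    values = [c[target] for c in neighbours if target in c]
    return sum(values) / len(values)

case_base = [
    {"semester": 3, "grade": 2.0, "credits_next_semester": 30},
    {"semester": 3, "grade": 2.3, "credits_next_semester": 24},
    {"semester": 7, "grade": 1.5, "credits_next_semester": 12},
]
query = {"semester": 3, "grade": 2.1}
print(reuse(retrieve(case_base, query, k=2)))  # 27.0
```

The thesis's extensions slot in around this loop: detecting which forecast cases need solving, checking the reused solution against an ontology-backed rule base, and revising it with student feedback.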
158. Eine funktionale Methode der Wissensrepräsentation. Oertel, Wolfgang, 01 March 2024.
The aim of this work is to develop a knowledge representation model that is particularly suited to describing objects with complex structure. The starting point is a characterization of the problem of knowledge representation. From the presentation of a discourse domain typical for the field of computer-aided design, gear unit design, requirements are derived for models describing complexly structured objects in knowledge bases. The main part of the work consists in the development of a functional knowledge representation model that meets these requirements. At the same time, the model enables an efficient implementation of knowledge-based systems on the basis of the programming language LISP, as well as establishing relationships to data models on the one hand and to knowledge representation models, in particular first-order predicate logic, on the other. With reference to database technology, the structure of knowledge base systems is described. An essential aspect of the work is to show the possibility of, and a way of, formalizing the knowledge of a design engineer and mapping it into a knowledge base.

1. Introduction
2. Knowledge representation in technical systems
3. Example discourse domains
4. The functional knowledge representation model
5. Relationships between first-order predicate logic and the functional knowledge representation model
6. Structure of knowledge base systems
7. Application of the functional knowledge representation model to the implementation of knowledge-based systems
8. Concluding remarks
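A functional knowledge representation in the spirit of the LISP-based model can be sketched in a few lines: slots of an object either store a value or compute one on demand as a function of the other slots. The gear attributes below are illustrative assumptions, not the model's actual vocabulary:

```python
class Frame:
    """Functional representation sketch: an object is a set of named
    slots whose values are either stored constants or functions that
    derive a value from the frame itself on demand."""
    def __init__(self, **slots):
        self.slots = slots

    def get(self, name):
        value = self.slots[name]
        return value(self) if callable(value) else value

# A gear whose pitch diameter is derived functionally from other slots.
gear = Frame(
    teeth=20,
    module=2.5,  # gear module in mm (a gearing term, not a Python module)
    pitch_diameter=lambda f: f.get("teeth") * f.get("module"),
)
print(gear.get("pitch_diameter"))  # 50.0
```

Storing derivations as functions rather than precomputed values is what lets a knowledge base of complexly structured design objects stay consistent when a base slot changes.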
159. Knowledge-Based Analysis and Synthesis of Complex Graphic Objects. Oertel, Wolfgang, 27 June 2024.
A software concept is described that combines computer graphics and artificial intelligence to support practical graphic systems in checking, correcting or generating their spatiotemporal objects with the help of knowledge-based inferences. The unified approach is demonstrated in four quite different applications.
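The knowledge-based checking of graphic objects can be sketched as a rule base of predicates run over a scene. The object attributes and rules below are illustrative assumptions, not the described system's actual knowledge base:

```python
def check(objects, rules):
    """Checking sketch: run each rule (a predicate paired with a
    violation message) over a scene of graphic objects and collect
    the violations found."""
    violations = []
    for rule, message in rules:
        for obj in objects:
            if not rule(obj):
                violations.append((obj["id"], message))
    return violations

scene = [
    {"id": "rect1", "x": 10, "y": 5, "width": 40, "height": 20},
    {"id": "rect2", "x": -3, "y": 0, "width": 0, "height": 15},
]
rules = [
    (lambda o: o["width"] > 0 and o["height"] > 0, "degenerate extent"),
    (lambda o: o["x"] >= 0 and o["y"] >= 0, "outside the canvas"),
]
print(check(scene, rules))
# rect2 violates both rules
```

Correction and generation then follow the same pattern with inference rules that repair or construct objects instead of merely flagging them.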
160. Cognitive Computing. 11 November 2015.
"Cognitive Computing" has initiated a new era in computer science. Cognitive computers are not rigidly programmed computers anymore, but they learn from their interactions with humans, from the environment and from information. They are thus able to perform amazing tasks on their own, such as driving a car in dense traffic, piloting an aircraft in difficult conditions, taking complex financial investment decisions, analysing medical-imaging data, and assist medical doctors in diagnosis and therapy. Cognitive computing is based on artificial intelligence, image processing, pattern recognition, robotics, adaptive software, networks and other modern computer science areas, but also includes sensors and actuators to interact with the physical world.
Cognitive computers – also called "intelligent machines" – emulate human cognitive, mental and intellectual capabilities. They aim to do for human mental power (the ability to use our brain to understand and influence our physical and information environment) what the steam engine and the combustion engine did for muscle power. We can expect a massive impact of cognitive computing on life and work. Many modern complex infrastructures, such as the electricity distribution grid, railway networks, the road traffic infrastructure, information analysis (big data), the health care system, and many more will rely on intelligent decisions taken by cognitive computers.
A drawback of cognitive computers will be a shift in employment opportunities: a rising number of tasks will be taken over by intelligent machines, erasing entire job categories (such as cashiers, mail clerks, call and customer assistance centres, taxi and bus drivers, pilots, grid operators, air traffic controllers, …).
A possibly dangerous risk of cognitive computing is the threat that "super-intelligent machines" pose to mankind. As soon as they are sufficiently intelligent, deeply networked and have access to the physical world, they may endanger many areas of human supremacy and possibly even eliminate humans.
Cognitive computing technology is based on new software architectures – the “cognitive computing architectures”. Cognitive architectures enable the development of systems that exhibit intelligent behaviour.