31

Effective partial ontology mapping in a pervasive computing environment

Kong, Choi-yu., 江采如. January 2004 (has links)
published_or_final_version / abstract / Computer Science and Information Systems / Master / Master of Philosophy
32

Information Representation and Computation of Spike Trains in Reservoir Computing Systems with Spiking Neurons and Analog Neurons

Almassian, Amin 23 March 2016 (has links)
Real-time processing of space-and-time-variant signals is imperative for perception and real-world problem-solving. In the brain, spatio-temporal stimuli are converted into spike trains by sensory neurons and projected to the neurons in subcortical and cortical layers for further processing. Reservoir Computing (RC) is a neural computation paradigm that is inspired by cortical Neural Networks (NN). It is promising for real-time, on-line computation of spatio-temporal signals. An RC system incorporates a Recurrent Neural Network (RNN) called the reservoir, whose state is changed by a trajectory of perturbations caused by a spatio-temporal input sequence. A trained, non-recurrent, linear readout layer interprets the dynamics of the reservoir over time. The Echo-State Network (ESN) [1] and the Liquid-State Machine (LSM) [2] are two popular and canonical types of RC system. The former uses non-spiking analog sigmoidal neurons – and, more recently, Leaky Integrator (LI) neurons – and a normalized random connectivity matrix in the reservoir, whereas the reservoir in the latter is composed of Leaky Integrate-and-Fire (LIF) neurons, distributed in a 3-D space, which are connected with dynamic synapses through a probability function. The major difference between analog neurons and spiking neurons lies in their neuron-model dynamics and their inter-neuron communication mechanism. However, RC systems share a mysterious common property: they exhibit the best performance when reservoir dynamics undergo criticality [1–6] – governed by the reservoirs' connectivity parameters, |λmax| ≈ 1 in ESN, λ ≈ 2 and w in LSM – which is referred to as the edge of chaos in [3–5]. In this study, we are interested in exploring the possible reasons for this commonality, despite the differences imposed by the different neuron types on the reservoir dynamics. We address this concern from the perspective of the information representation in both spiking and non-spiking reservoirs. We measure the Mutual Information (MI) between the state of the reservoir and a spatio-temporal spike-train input, as well as that between the reservoir and a linearly inseparable function of the input, temporal parity. In addition, we derive a Mean Cumulative Mutual Information (MCMI) quantity from MI to measure the amount of stable memory in the reservoir and its correlation with temporal-parity task performance. We complement our investigation by conducting isolated spoken-digit recognition and spoken-digit sequence-recognition tasks. We hypothesize that a performance analysis of these two tasks will agree with our MI and MCMI results with regard to the impact of stable memory on task performance. It turns out that, in all reservoir types and in all the tasks conducted, reservoir performance peaks when the amount of stable memory in the reservoir is maximized. Likewise, in the chaotic regime (when the network connectivity parameter is greater than a critical value), the absence of stable memory in the reservoir appears to be an evident cause of the performance decrease in all conducted tasks. Our results also show that the reservoir with LIF neurons possesses a higher stable memory of the input (quantified by input-reservoir MCMI) and outperforms the reservoirs with analog sigmoidal and LI neurons in processing the temporal-parity and spoken-digit recognition tasks. From an efficiency standpoint, the reservoir with 100 LIF neurons outperforms the reservoir with 500 LI neurons in spoken-digit recognition tasks.
The sigmoidal reservoir falls short of solving this task. The optimum input-reservoir MCMIs and output-reservoir MCMIs we obtained for the reservoirs with LIF, LI, and sigmoidal neurons are 4.21, 3.79, and 3.71, and 2.92, 2.51, and 2.47, respectively. In our isolated spoken-digit recognition experiments, the maximum mean performances achieved by the reservoirs with N = 500 LIF, LI, and sigmoidal neurons are 97%, 79%, and 2%, respectively. The reservoirs with N = 100 neurons could solve the task with 80%, 68%, and 0.9%, respectively. Our study sheds light on the impact of the information representation and memory of the reservoir on the performance of RC systems. The results of our experiments reveal the advantage of using LIF neurons in RC systems for computing spike trains to solve memory-demanding, real-world, spatio-temporal problems. Our findings have applications in engineering nano-electronic RC systems that can be used to solve real-world spatio-temporal problems.
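For readers unfamiliar with the reservoir setup this abstract refers to, the sketch below shows an ESN-style reservoir in which the recurrent weight matrix is rescaled so that |λmax| ≈ 1 and a linear readout is trained on the collected states. It is only an illustration of the general technique under our own assumptions (tanh units, a ridge-regression readout, a toy windowed-parity target); none of the code, names, or parameter choices come from the thesis.

```python
# Minimal ESN-style reservoir sketch (illustrative only, not the thesis code).
# The reservoir weight matrix is rescaled so that |lambda_max| ~= 1, the
# ESN criticality condition mentioned in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_neurons=100, n_inputs=1, spectral_radius=0.95, density=0.1):
    W = rng.standard_normal((n_neurons, n_neurons))
    W *= rng.random((n_neurons, n_neurons)) < density      # sparse random connectivity
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # set |lambda_max|
    W_in = rng.uniform(-1, 1, (n_neurons, n_inputs))
    return W, W_in

def run_reservoir(W, W_in, inputs, leak=1.0):
    """Collect reservoir states for an input sequence (analog / LI-style neurons)."""
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        pre = W @ x + W_in @ np.atleast_1d(u)
        x = (1 - leak) * x + leak * np.tanh(pre)   # leak = 1.0 gives plain tanh units
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Linear readout trained by ridge regression on the collected states."""
    S = states
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ targets)

# Toy usage: a parity-of-recent-inputs target as a stand-in for temporal parity.
u = rng.integers(0, 2, 500)
y = np.array([u[max(0, t - 2):t + 1].sum() % 2 for t in range(len(u))])
X = run_reservoir(*make_reservoir(), u)
w_out = train_readout(X, y)
print("train accuracy:", np.mean((X @ w_out > 0.5) == y))
```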
33

The Perceived Impact of Technology-Based Informal Learning on Membership Organizations

Unknown Date (has links)
Educational leadership goes beyond the boundaries of the classroom; skills needed for talent development professionals in business closely align with those needed in traditional educational leadership positions as both are responsible for the development and growth of others. Traditionally, the role of professional membership associations or organizations such as the Association for Talent Development (ATD, formerly known as the American Society for Training and Development), the group dedicated to individuals in the field of workplace learning and development, is to provide learning opportunities, set standards, identify best practices in their respective fields, and allow members to network with other professionals who share their interests. However, with the rampant increase in the use of technology and social networking, individuals are now able to access a vast majority of information for free online via tools such as LinkedIn, Facebook, Google, and YouTube. Where has this left organizations that typically charged for access to this type of information in the past? Surveys and interviews were conducted with ATD members in this mixed-methods study to answer the following research questions: 1. What are the perceptions of Association for Talent Development (ATD) members regarding the effect of technology-based informal learning on the role of ATD? 2. How do ATD members utilize technology for informal learning? 3. Are there factors such as gender, age, ethnicity, educational level, or length of time in the field that predict a member's likelihood to utilize technology for informal learning? 4. Are there certain ATD competency areas for which informal learning is preferred over non-formal or formal learning? The significance of the study includes the identification of how the Association for Talent Development (ATD, formerly ASTD) can continue to support professionals in our constantly evolving technological society as well as advancing the field by contributing research connecting informal learning with technology to membership organization roles. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2015. / FAU Electronic Theses and Dissertations Collection
34

A predicated network formalism for commonsense reasoning.

January 2000 (has links)
Chiu, Yiu Man Edmund. / Thesis submitted in: December 1999. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (leaves 269-248). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgments --- p.iii / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- The Beginning Story --- p.2 / Chapter 1.2 --- Background --- p.3 / Chapter 1.2.1 --- History of Nonmonotonic Reasoning --- p.3 / Chapter 1.2.2 --- Formalizations of Nonmonotonic Reasoning --- p.6 / Chapter 1.2.3 --- Belief Revision --- p.13 / Chapter 1.2.4 --- Network Representation of Knowledge --- p.17 / Chapter 1.2.5 --- Reference from Logic Programming --- p.21 / Chapter 1.2.6 --- Recent Work on Network-type Automatic Reasoning Systems --- p.22 / Chapter 1.3 --- A Novel Inference Network Approach --- p.23 / Chapter 1.4 --- Objectives --- p.23 / Chapter 1.5 --- Organization of the Thesis --- p.24 / Chapter 2 --- The Predicate Inference Network PIN --- p.25 / Chapter 2.1 --- Preliminary Terms --- p.26 / Chapter 2.2 --- Overall Structure --- p.27 / Chapter 2.3 --- Object Layer --- p.29 / Chapter 2.3.1 --- Virtual Object --- p.31 / Chapter 2.4 --- Predicate Layer --- p.33 / Chapter 2.4.1 --- Node Values --- p.34 / Chapter 2.4.2 --- Information Source --- p.35 / Chapter 2.4.3 --- Belief State --- p.36 / Chapter 2.4.4 --- Predicates --- p.37 / Chapter 2.4.5 --- Prototypical Predicates --- p.37 / Chapter 2.4.6 --- Multiple Inputs for a Single Belief --- p.39 / Chapter 2.4.7 --- External Program Call --- p.39 / Chapter 2.5 --- Variable Layer --- p.40 / Chapter 2.6 --- Inter-Layer Links --- p.42 / Chapter 2.7 --- Chapter Summary --- p.43 / Chapter 3 --- Computation for PIN --- p.44 / Chapter 3.1 --- Computation Functions for Propagation --- p.45 / Chapter 3.1.1 --- Computational Functions for Combinative Links --- p.45 / Chapter 3.1.2 --- Computational Functions for Alternative Links --- p.49 / Chapter 3.2 --- Applying the Computation Functions --- p.52 / Chapter 3.3 --- Relations Represented in PIN --- p.55 / Chapter 3.3.1 --- Relations Represented by Combinative Links --- p.56 / Chapter 3.3.2 --- Relations Represented by Alternative Links --- p.59 / Chapter 3.4 --- Chapter Summary --- p.61 / Chapter 4 --- Dynamic Knowledge Update --- p.62 / Chapter 4.1 --- Operations for Knowledge Update --- p.63 / Chapter 4.2 --- Logical Expression --- p.63 / Chapter 4.3 --- Applicability of Operators --- p.64 / Chapter 4.4 --- Add Operation --- p.65 / Chapter 4.4.1 --- Add a fully instantiated single predicate proposition with no virtual object --- p.66 / Chapter 4.4.2 --- Add a fully instantiated pure disjunction --- p.68 / Chapter 4.4.3 --- Add a fully instantiated expression which is a conjunction --- p.71 / Chapter 4.4.4 --- Add a human biased relation --- p.74 / Chapter 4.4.5 --- Add a single predicate expression with virtual objects --- p.76 / Chapter 4.4.6 --- Add a IF-THEN rule --- p.80 / Chapter 4.5 --- Remove Operation --- p.88 / Chapter 4.5.1 --- Remove a Belief --- p.88 / Chapter 4.5.2 --- Remove a Rule --- p.91 / Chapter 4.6 --- Revise Operation --- p.94 / Chapter 4.6.1 --- Revise a Belief --- p.94 / Chapter 4.6.2 --- Revise a Rule --- p.96 / Chapter 4.7 --- Consistency Maintenance --- p.97 / Chapter 4.7.1 --- Logical Suppression --- p.98 / Chapter 4.7.2 --- Example on Handling Inconsistent Information --- p.99 / Chapter 4.8 --- Chapter Summary --- p.102 / Chapter 5 --- Knowledge Query --- p.103 / Chapter 5.1 --- Domains of Quantification --- p.104 / Chapter 5.2 --- Reasoning through
Recursive Rules --- p.109 / Chapter 5.2.1 --- Infinite Looping Control --- p.110 / Chapter 5.2.2 --- Proof of the finite termination of recursive rules --- p.111 / Chapter 5.3 --- Query Functions --- p.117 / Chapter 5.4 --- Type I Queries --- p.119 / Chapter 5.4.1 --- Querying a Simple Single Predicate Proposition (Type I) --- p.122 / Chapter 5.4.2 --- Querying a Belief with Logical Connective(s) (Type I) --- p.128 / Chapter 5.5 --- Type II Queries --- p.132 / Chapter 5.5.1 --- Querying Single Predicate Expressions (Type II) --- p.134 / Chapter 5.5.2 --- Querying an Expression with Logical Connectives (Type II) --- p.143 / Chapter 5.6 --- Querying an Expression with Virtual Objects --- p.152 / Chapter 5.6.1 --- Type I Queries Involving Virtual Object --- p.152 / Chapter 5.6.2 --- Type II Queries involving Virtual Objects --- p.156 / Chapter 5.7 --- Chapter Summary --- p.157 / Chapter 6 --- Uniqueness and Finite Termination --- p.159 / Chapter 6.1 --- Proof Structure --- p.160 / Chapter 6.2 --- Proof for Completeness and Finite Termination of Domain Searching Procedure --- p.161 / Chapter 6.3 --- Proofs for Type I Queries --- p.167 / Chapter 6.3.1 --- Proof for Single Predicate Expressions --- p.167 / Chapter 6.3.2 --- Proof of Type I Queries on Expressions with Logical Connectives --- p.172 / Chapter 6.3.3 --- General Proof for Type I Queries --- p.174 / Chapter 6.4 --- Proofs for Type II Queries --- p.175 / Chapter 6.4.1 --- Proof for Type II Queries on Single Predicate Expressions --- p.176 / Chapter 6.4.2 --- Proof for Type II Queries on Disjunctions --- p.178 / Chapter 6.4.3 --- Proof for Type II Queries on Conjunctions --- p.179 / Chapter 6.4.4 --- General Proof for Type II Queries --- p.181 / Chapter 6.5 --- Proof for Queries Involving Virtual Objects --- p.182 / Chapter 6.6 --- Uniqueness and Finite Termination of PIN Queries --- p.183 / Chapter 6.7 --- Chapter Summary --- p.184 / Chapter 7 --- Lifschitz's Benchmark Problems --- p.185 / Chapter 7.1 --- Structure --- p.186 / Chapter 7.2 --- Default Reasoning --- p.186 / Chapter 7.2.1 --- Basic Default Reasoning --- p.186 / Chapter 7.2.2 --- Default Reasoning with Irrelevant Information --- p.187 / Chapter 7.2.3 --- Default Reasoning with Several Defaults --- p.188 / Chapter 7.2.4 --- Default Reasoning with a Disabled Default --- p.190 / Chapter 7.2.5 --- Default Reasoning in Open Domain --- p.191 / Chapter 7.2.6 --- Reasoning about Unknown Exceptions I --- p.193 / Chapter 7.2.7 --- Reasoning about Unknown Exceptions II --- p.194 / Chapter 7.2.8 --- Reasoning about Unknown Exceptions III --- p.196 / Chapter 7.2.9 --- Priorities between Defaults --- p.198 / Chapter 7.2.10 --- Priorities between Instances of a Default --- p.199 / Chapter 7.2.11 --- Reasoning about Priorities --- p.199 / Chapter 7.3 --- Inheritance --- p.200 / Chapter 7.3.1 --- Linear Inheritance --- p.200 / Chapter 7.3.2 --- Tree-Structured Inheritance --- p.202 / Chapter 7.3.3 --- One-Step Multiple Inheritance --- p.203 / Chapter 7.3.4 --- Multiple Inheritance --- p.204 / Chapter 7.4 --- Uniqueness of Names --- p.205 / Chapter 7.4.1 --- Unique Names Hypothesis for Objects --- p.205 / Chapter 7.4.2 --- Unique Names Hypothesis for Functions --- p.206 / Chapter 7.5 --- Reasoning about Action --- p.206 / Chapter 7.6 --- Autoepistemic Reasoning --- p.206 / Chapter 7.6.1 --- Basic Autoepistemic Reasoning --- p.206 / Chapter 7.6.2 --- Autoepistemic Reasoning with Incomplete Information --- p.207 / Chapter 7.6.3 --- Autoepistemic Reasoning with Open Domain --- p.207 / 
Chapter 7.6.4 --- Autoepistemic Default Reasoning --- p.208 / Chapter 8 --- Comparison with PROLOG --- p.214 / Chapter 8.1 --- Introduction of PROLOG --- p.215 / Chapter 8.1.1 --- Brief History --- p.215 / Chapter 8.1.2 --- Structure and Inference --- p.215 / Chapter 8.1.3 --- Why Compare PIN with Prolog --- p.216 / Chapter 8.2 --- Representation Power --- p.216 / Chapter 8.2.1 --- Close World Assumption and Negation as Failure --- p.216 / Chapter 8.2.2 --- Horn Clauses --- p.217 / Chapter 8.2.3 --- Quantification --- p.218 / Chapter 8.2.4 --- Build-in Functions --- p.219 / Chapter 8.2.5 --- Other Representation Issues --- p.220 / Chapter 8.3 --- Inference and Query Processing --- p.220 / Chapter 8.3.1 --- Unification --- p.221 / Chapter 8.3.2 --- Resolution --- p.222 / Chapter 8.3.3 --- Computation Efficiency --- p.225 / Chapter 8.4 --- Knowledge Updating and Consistency Issues --- p.227 / Chapter 8.4.1 --- PIN and AGM Logic --- p.228 / Chapter 8.4.2 --- Knowledge Merging --- p.229 / Chapter 8.5 --- Chapter Summary --- p.229 / Chapter 9 --- Conclusion and Discussion --- p.230 / Chapter 9.1 --- Conclusion --- p.231 / Chapter 9.1.1 --- General Structure --- p.231 / Chapter 9.1.2 --- Representation Power --- p.231 / Chapter 9.1.3 --- Inference --- p.232 / Chapter 9.1.4 --- Dynamic Update and Consistency --- p.233 / Chapter 9.1.5 --- Soundness and Completeness Versus Efficiency --- p.233 / Chapter 9.2 --- Discussion --- p.234 / Chapter 9.2.1 --- Different Selection Criteria --- p.234 / Chapter 9.2.2 --- Link Order --- p.235 / Chapter 9.2.3 --- Inheritance Reasoning --- p.236 / Chapter 9.3 --- Future Work --- p.237 / Chapter 9.3.1 --- Implementation --- p.237 / Chapter 9.3.2 --- Application --- p.237 / Chapter 9.3.3 --- Probabilistic and Fuzzy PIN --- p.238 / Chapter 9.3.4 --- Temporal Reasoning --- p.238 / Bibliography --- p.239
35

Perspectives on belief and change

Aucher, Guillaume, n/a January 2008 (has links)
This thesis is about logical models of belief (and knowledge) representation and belief change. This means that we propose logical systems which are intended to represent how agents perceive a situation and reason about it, and how they update their beliefs about this situation when events occur. These agents can be machines, robots, human beings... but they are assumed to be somehow autonomous. The way a fixed situation is perceived by agents can be represented by statements about the agents' beliefs: for example "agent A believes that the door of the room is open" or "agent A believes that her colleague is busy this afternoon". "Logical systems" means that agents can reason about the situation and their beliefs about it: if agent A believes that her colleague is busy this afternoon, then agent A infers that he will not visit her this afternoon. We moreover often assume that our situations involve several agents which interact with each other. So these agents have beliefs about the situation (such as "the door is open") but also about the other agents' beliefs: for example, agent A might believe that agent B believes that the door is open. These kinds of beliefs are called higher-order beliefs. Epistemic logic [Hintikka, 1962; Fagin et al., 1995; Meyer and van der Hoek, 1995], the logic of belief and knowledge, can capture all these phenomena and will be our main starting point to model such fixed ("static") situations. Uncertainty can of course be expressed by beliefs and knowledge: for example, agent A being uncertain whether her colleague is busy this afternoon can be expressed by "agent A does not know whether her colleague is busy this afternoon". But we sometimes need to enrich and refine the representation of uncertainty: for example, even if agent A does not know whether her colleague is busy this afternoon, she might consider it more probable that he is actually busy. So other logics have been developed to deal more adequately with the representation of uncertainty, such as probabilistic logic, fuzzy logic or possibilistic logic, and we will refer to some of them in this thesis (see [Halpern, 2003] for a survey on reasoning about uncertainty). But things become more complex when we introduce events and change into the picture. Issues arise even if we assume that there is a single agent. Indeed, if the incoming information conveyed by the event is coherent with the agent's beliefs then the agent can just add it to her beliefs. But if the incoming information contradicts the agent's beliefs then the agent has somehow to revise her beliefs, and as it turns out there is no obvious way to decide what her resulting beliefs should be. Solving this problem was the goal of the logic-based belief revision theory developed by Alchourrón, Gärdenfors and Makinson (to which we will refer by the term AGM) [Alchourrón et al., 1985; Gärdenfors, 1988; Gärdenfors and Rott, 1995]. Their idea is to introduce "rationality postulates" that specify which belief revision operations can be considered as being "rational" or reasonable, and then to propose specific revision operations that fulfill these postulates. However, AGM does not consider situations where the agent might also have some uncertainty about the incoming information: for example, agent A might be uncertain, due to some noise, whether her colleague told her that he would visit her on Tuesday or on Thursday. In this thesis we also investigate this kind of phenomenon.
Things are even more complex in a multi-agent setting because the way agents update their beliefs depends not only on their beliefs about the event itself but also on their beliefs about the way the other agents perceived the event (and so about the other agents' beliefs about the event). For example, during a private announcement of a piece of information to agent A, the beliefs of the other agents actually do not change because they believe nothing is actually happening; but during a public announcement all the agents' beliefs might change because they all believe that an announcement has been made. Such subtleties have been dealt with in a field called dynamic epistemic logic (Gerbrandy and Groeneveld, 1997; Baltag et al., 1998; van Ditmarsch et al., 2007b). The idea is to represent by an event model how the event is perceived by the agents and then to define a formal update mechanism that specifies how the agents update their beliefs according to this event model and their previous representation of the situation. Finally, the issues concerning belief revision that we raised in the single-agent case are still present in the multi-agent case. So this thesis is more generally about information and information change. However, we will not deal with problems of how to store information in machines or how to actually communicate information. Such problems have been dealt with in information theory [Cover and Thomas, 1991] and Kolmogorov complexity theory [Li and Vitányi, 1993]. We will just assume that such mechanisms are already available and start our investigations from there. Studying and proposing logical models for belief change and belief representation has applications in several areas. First in artificial intelligence, where machines or robots need to have a formal representation of the surrounding world (which might involve other agents), and formal mechanisms to update this representation when they receive incoming information. Such formalisms are crucial if we want to design autonomous agents able to act autonomously in the real world or in a virtual world (such as on the internet). Indeed, the representation of the surrounding world is essential for a robot in order to reason about the world and plan actions in order to achieve goals... and it must be able to update and revise its representation of the world itself in order to cope autonomously with unexpected events. Second in game theory (and consequently in economics), where we need to model games involving several agents (players) having beliefs about the game and about the other agents' beliefs (such as agent A believes that agent B has the ace of spades, or agent A believes that agent B believes that agent A has the ace of hearts...), and how they update their representation of the game when events (such as privately showing a card or putting a card on the table) occur. Third in cognitive psychology, where we need to model as accurately as possible the epistemic states of human agents and the dynamics of belief and knowledge in order to explain and describe cognitive processes. The thesis is organized as follows. In Chapter 2, we first recall epistemic logic. Then we observe that representing an epistemic situation involving several agents depends very much on the modeling point of view one takes. For example, in a poker game the representation of the game will be different depending on whether the modeler is a poker player playing in the game or the card dealer who knows exactly what the players' cards are.
In this thesis, we will carefully distinguish these different modeling approaches and the different kinds of formalisms they give rise to. In fact, the interpretation of a formalism relies quite a lot on the nature of these modeling points of view. Classically, in epistemic logic, the models built are supposed to be correct and represent the situation from an external and objective point of view. We call this modeling approach the perfect external approach. In Chapter 2, we study the modeling point of view of a particular modeler-agent involved in the situation with other agents (and so having a possibly erroneous perception of the situation). We call this modeling approach the internal approach. We propose a logical formalism based on epistemic logic that this agent uses to represent "for herself" the surrounding world. We then set some formal connections between the internal approach and the (perfect) external approach. Finally, we axiomatize our logical formalism and show that the resulting logic is decidable. In Chapter 3, we first recall dynamic epistemic logic as viewed by Baltag, Moss and Solecki (to which we will refer by the term BMS). Then we study in which cases seriality of the accessibility relations of epistemic models is preserved during an update, first for the full updated model and then for generated submodels of the full updated model. Finally, observing that the BMS formalism follows the (perfect) external approach, we propose an internal version of it, just as we proposed an internal version of epistemic logic in Chapter 2. In Chapter 4, we still follow the internal approach and study the particular case where the event is a private announcement. We first show, thanks to our study in Chapter 3, that in a multi-agent setting, expanding in the AGM style corresponds to performing a private announcement in the BMS style. This indicates that generalizing AGM belief revision theory to a multi-agent setting amounts to studying private announcement. We then generalize the AGM representation theorems to the multi-agent case. Afterwards, in the spirit of the AGM approach, we go beyond the AGM postulates and investigate multi-agent rationality postulates specific to our multi-agent setting, inspired by the fact that the kind of phenomenon we study is private announcement. Finally, we provide an example of a revision operation that we apply to a concrete example. In Chapter 5, we follow the (perfect) external approach and enrich the BMS formalism with probabilities. This enables us to provide a fine-grained account of how human agents interpret events involving uncertainty and how they revise their beliefs. Afterwards, we review different principles for the notion of knowledge that have been proposed in the literature and show how some principles that we argue to be reasonable ones can all be captured in our rich and expressive formalism. Finally, we extend our general formalism to a multi-agent setting. In Chapter 6, we still follow the (perfect) external approach and enrich our dynamic epistemic language with converse events. This language is interpreted on structures with accessibility relations for both beliefs and events, unlike the BMS formalism where events and beliefs are not on the same formal level. Then we propose principles relating events and beliefs and provide a complete characterization, which yields a new logic EDL.
Finally, we show that BMS can be translated into our new logic EDL thanks to the converse operator: this device enables us to translate the structure of the event model directly within a particular axiomatization of EDL, without having to refer to a particular event model in the language (as done in BMS). In Chapter 7 we summarize our results and give an overview of remaining technical issues and some desiderata for future directions of research. Parts of this thesis are based on publications, but we emphasize that they have been entirely rewritten in order to make this thesis an integrated whole. Sections 4.2.2 and 4.3 of Chapter 4 are based on [Aucher, 2008]. Sections 5.2, 5.3 and 5.5 of Chapter 5 are based on [Aucher, 2007]. Chapter 6 is based on [Aucher and Herzig, 2007].
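As a concrete illustration of the event-model update mechanism this abstract describes (the BMS-style product update), the sketch below builds the updated model from a static Kripke model and an event model using the standard textbook definition: the new worlds are the pairs (w, e) whose precondition holds, and an agent relates two pairs exactly when she relates both their world components and their event components. The encoding and the private-announcement example are our own simplification, not the thesis's formalism.

```python
# Illustrative sketch of a BMS-style product update (standard definition,
# not code from the thesis): new worlds are pairs (w, e) with w satisfying
# the precondition of e; agent a relates (w, e) to (w2, e2) iff she relates
# w to w2 and e to e2.
from itertools import product

def product_update(worlds, rel, val, events, erel, pre):
    """worlds: world names; rel[a]: set of (w, w2) pairs; val[w]: atoms true at w;
    events: event names; erel[a]: set of (e, e2) pairs; pre[e]: atom required
    for e to occur (None means always applicable)."""
    new_worlds = [(w, e) for w, e in product(worlds, events)
                  if pre[e] is None or pre[e] in val[w]]
    new_rel = {a: {((w, e), (w2, e2))
                   for (w, e), (w2, e2) in product(new_worlds, new_worlds)
                   if (w, w2) in rel[a] and (e, e2) in erel[a]}
               for a in rel}
    new_val = {(w, e): val[w] for (w, e) in new_worlds}
    return new_worlds, new_rel, new_val

# Private announcement of p to agent A: B believes nothing happened ("skip").
worlds = ["u", "v"]                                   # p true in u, false in v
val = {"u": {"p"}, "v": set()}
rel = {"A": {(w, w2) for w in worlds for w2 in worlds},   # A initially unsure about p
       "B": {(w, w2) for w in worlds for w2 in worlds}}
events = ["ann_p", "skip"]
pre = {"ann_p": "p", "skip": None}
erel = {"A": {("ann_p", "ann_p"), ("skip", "skip")},      # A sees which event occurred
        "B": {("ann_p", "skip"), ("skip", "skip")}}       # B thinks only "skip" occurred
W2, R2, V2 = product_update(worlds, rel, val, events, erel, pre)
print(sorted(W2))   # worlds of the updated model
```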
36

Catastrophic forgetting and the pseudorehearsal solution in Hopfield networks

McCallum, Simon, n/a January 2007 (has links)
Most artificial neural networks suffer from the problem of catastrophic forgetting, where previously learnt information is suddenly and completely lost when new information is learnt. Memory in real neural systems does not appear to suffer from this unusual behaviour. In this thesis we discuss the problem of catastrophic forgetting in Hopfield networks, and investigate various potential solutions. We extend the pseudorehearsal solution of Robins (1995), enabling it to work in this attractor network, and compare the results with the unlearning procedure proposed by Crick and Mitchison (1983). We then explore a familiarity measure based on the energy profile of the learnt patterns. By using the ratio of high-energy to low-energy parts of the network we can robustly distinguish the learnt patterns from the large number of spurious "fantasy" patterns that are common in these networks. This energy ratio measure is then used to improve the pseudorehearsal solution so that it can store 0.3N patterns in the Hopfield network, significantly more than previously proposed solutions to catastrophic forgetting. Finally, we explore links between the mechanisms investigated in this thesis and the consolidation of newly learnt material during sleep.
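To make the energy-based familiarity idea concrete, here is a toy Hopfield network with Hebbian storage, asynchronous recall, and the standard energy function E(s) = -1/2 s^T W s on which such an energy-ratio measure would be computed. It is a generic illustration under common textbook conventions, not the pseudorehearsal implementation from the thesis.

```python
# Toy Hopfield network: Hebbian storage, asynchronous recall, and the
# standard energy function (our illustration, not the thesis code).
import numpy as np

rng = np.random.default_rng(1)

def train_hebbian(patterns):
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)                        # no self-connections
    return W

def energy(W, s):
    return -0.5 * s @ W @ s                       # lower for well-learnt patterns

def recall(W, s, steps=5):
    s = s.copy()
    for _ in range(steps):
        for i in rng.permutation(len(s)):         # asynchronous unit updates
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Store a few +/-1 patterns and probe with a noisy copy of the first one.
patterns = rng.choice([-1, 1], size=(3, 100))
W = train_hebbian(patterns)
probe = patterns[0].copy()
probe[:10] *= -1                                  # flip 10 bits of noise
restored = recall(W, probe)
print("recovered stored pattern:", np.array_equal(restored, patterns[0]))
print("energy of learnt vs random state:",
      energy(W, patterns[0]), energy(W, rng.choice([-1, 1], 100)))
```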
37

Decentralising the codification of rules in a decision support expert knowledge base

De Kock, Erika. January 2003 (has links)
Thesis (M. Sc.(Computer Science))--University of Pretoria, 2003. / Includes bibliographical references.
38

Debugging and repair of description logic ontologies.

Moodley, Kodylan. January 2010 (has links)
In logic-based Knowledge Representation and Reasoning (KRR), ontologies are used to represent knowledge about a particular domain of interest in a precise way. The building blocks of ontologies include concepts, relations and objects. Those can be combined to form logical sentences which explicitly describe the domain. With this explicit knowledge one can perform reasoning to derive knowledge that is implicit in the ontology. Description Logics (DLs) are a group of knowledge representation languages with such capabilities that are suitable to represent ontologies. The process of building ontologies has been greatly simplified with the advent of graphical ontology editors such as SWOOP, Protégé and OntoStudio. The result of this is that there are a growing number of ontology engineers attempting to build and develop ontologies. It is frequently the case that errors are introduced while constructing the ontology, resulting in undesirable pieces of implicit knowledge that follow from the ontology. As such there is a need to extend current ontology editors with tool support to aid these ontology engineers in correctly designing and debugging their ontologies. Errors such as unsatisfiable concepts and inconsistent ontologies frequently occur during ontology construction. Ontology Debugging and Repair is concerned with helping the ontology developer to eliminate these errors from the ontology. Much emphasis, in current tools, has been placed on giving explanations as to why these errors occur in the ontology. Less emphasis has been placed on using this information to suggest efficient ways to eliminate the errors. Furthermore, these tools focus mainly on the errors of unsatisfiable concepts and inconsistent ontologies. In this dissertation we fill an important gap in the area by contributing an alternative approach to ontology debugging and repair for the more general error of a list of unwanted sentences. Errors such as unsatisfiable concepts and inconsistent ontologies can be represented as unwanted sentences in the ontology. Our approach not only considers the explanation of the unwanted sentences but also the identification of repair strategies to eliminate these unwanted sentences from the ontology. / Thesis (M.Sc.)-University of KwaZulu-Natal, Westville, 2010.
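To illustrate the general shape of the repair problem described above (eliminating an unwanted sentence by removing as few axioms as possible), the sketch below uses toy subsumption axioms and a transitive-closure "reasoner" as a stand-in for a real DL reasoner. It is our own simplified illustration of the idea, not the algorithm contributed by the dissertation.

```python
# Schematic axiom-level repair for an unwanted entailment. Axioms are simple
# subsumptions ("A", "B") meaning A is subsumed by B, and entailment is just
# transitive closure, a toy stand-in for a description logic reasoner. The
# brute-force search over removal sets is only a sketch of the general idea.
from itertools import combinations

def entails(axioms, goal):
    """Toy reasoner: does goal = (X, Y) follow by chaining subsumptions?"""
    x, y = goal
    reachable, frontier = {x}, [x]
    while frontier:
        c = frontier.pop()
        for (a, b) in axioms:
            if a == c and b not in reachable:
                reachable.add(b)
                frontier.append(b)
    return y in reachable

def naive_repairs(ontology, unwanted, max_size=2):
    """Smallest removal sets that make the unwanted sentence underivable."""
    for k in range(1, max_size + 1):
        found = [set(r) for r in combinations(ontology, k)
                 if not entails([a for a in ontology if a not in r], unwanted)]
        if found:
            return found
    return []

# Toy ontology in which the unwanted subsumption Student <= Employee follows.
onto = [("Student", "Person"), ("Person", "TaxPayer"),
        ("PhDStudent", "Student"), ("Person", "Employee")]
print(naive_repairs(onto, ("Student", "Employee")))
```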
39

A functional theory of creative reading : process, knowledge, and evaluation

Moorman, Kenneth Matthew 08 1900 (has links)
No description available.
40

A study on object-oriented knowledge representation

Salgado-Arteaga, Francisco January 1995 (has links)
This thesis is a study on object-oriented knowledge representation. The study defines the main concepts of the object model. It also shows pragmatically the use of object-oriented methodology in the development of a concrete software system designed as the solution to a specific problem. The problem is to simulate the interaction between several animals and various other objects that exist in a room. The proposed solution is an artificial intelligence (AI) program designed according to the object-oriented model, which closely simulates objects in the problem domain. The AI program is conceived as an inference engine that maps together a given knowledge base with a database. The solution is based conceptually on the five major elements of the model, namely abstraction, encapsulation, modularity, hierarchy, and polymorphism. The study introduces a notation of class diagrams and frames to capture the essential characteristics of the system defined by analysis and design. The solution to the problem allows the application of any object-oriented programming language. Common Lisp Object System (CLOS) is the language used for the implementation of the software system included in the appendix. / Department of Computer Science
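As a rough flavour of the kind of object model described above (animals and other objects interacting in a room, relying on hierarchy and polymorphism), here is a small sketch rendered in Python for brevity; the thesis itself uses CLOS, and all class and method names here are illustrative rather than taken from the thesis.

```python
# Tiny sketch of an object model for animals and objects in a room
# (illustrative Python; the thesis implementation is in CLOS).
class RoomObject:
    def __init__(self, name, position):
        self.name, self.position = name, position     # encapsulated state

    def interact(self, other):
        return f"{self.name} ignores {other.name}"    # default behaviour

class Animal(RoomObject):                             # hierarchy: Animal is a RoomObject
    def interact(self, other):                        # polymorphism: overridden behaviour
        return f"{self.name} sniffs {other.name}"

class Dog(Animal):
    def interact(self, other):
        if isinstance(other, Animal):
            return f"{self.name} chases {other.name}"
        return super().interact(other)

room = [Dog("Rex", (0, 0)), Animal("Cat", (1, 2)), RoomObject("Chair", (3, 1))]
for a in room:
    for b in room:
        if a is not b:
            print(a.interact(b))
```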
