  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Realizations of common channeling constraints in constraint satisfaction: theory and algorithms.

January 2006
Lam Yee Gordon. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (leaves 109-117). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Constraint Satisfaction Problems --- p.1 / Chapter 1.2 --- Motivations and Goals --- p.2 / Chapter 1.3 --- Outline of the Thesis --- p.4 / Chapter 2 --- Background --- p.5 / Chapter 2.1 --- CSP --- p.5 / Chapter 2.2 --- Classes of Variable --- p.6 / Chapter 2.3 --- Solution of a CSP --- p.7 / Chapter 2.4 --- Constraint Solving Techniques --- p.8 / Chapter 2.4.1 --- Local Consistencies --- p.8 / Chapter 2.4.2 --- Constraint Tightness --- p.10 / Chapter 2.4.3 --- Tree Search --- p.10 / Chapter 2.5 --- Graph --- p.14 / Chapter 3 --- Common Channeling Constraints --- p.16 / Chapter 3.1 --- Models --- p.16 / Chapter 3.2 --- Channeling Constraints --- p.17 / Chapter 3.2.1 --- Int-Int Channeling Constraint (II) --- p.18 / Chapter 3.2.2 --- Set-Int Channeling Constraint (SI) --- p.21 / Chapter 3.2.3 --- Set-Set Channeling Constraint (SS) --- p.24 / Chapter 3.2.4 --- Int-Bool Channeling Constraint (IB) --- p.25 / Chapter 3.2.5 --- Set-Bool Channeling Constraint (SB) --- p.27 / Chapter 3.2.6 --- Discussions --- p.29 / Chapter 4 --- Realization in Existing Solvers --- p.31 / Chapter 4.1 --- Implementation by if-and-only-if constraint --- p.32 / Chapter 4.1.1 --- Realization of iff in CHIP, ECLiPSe, and SICStus Prolog --- p.32 / Chapter 4.1.2 --- Realization of iff in Oz and ILOG Solver --- p.32 / Chapter 4.2 --- Implementations by Element Constraint --- p.38 / Chapter 4.2.1 --- Realization of ele in CHIP, ECLiPSe, and SICStus Prolog --- p.40 / Chapter 4.2.2 --- Realization of ele in Oz and ILOG Solver --- p.40 / Chapter 4.3 --- Global Constraint Implementations --- p.41 / Chapter 4.3.1 --- Realization of glo in CHIP, SICStus Prolog, and ILOG Solver --- p.42 / Chapter 5 --- Consistency Levels --- p.43 / Chapter 5.1 --- Int-Int Channeling (II) --- p.44 /
Chapter 5.2 --- Set-Int Channeling (SI) --- p.49 / Chapter 5.3 --- Set-Set Channeling Constraints (SS) --- p.53 / Chapter 5.4 --- Int-Bool Channeling (IB) --- p.55 / Chapter 5.5 --- Set-Bool Channeling (SB) --- p.57 / Chapter 5.6 --- Discussion --- p.59 / Chapter 6 --- Algorithms and Implementation --- p.61 / Chapter 6.1 --- Source of Inefficiency --- p.62 / Chapter 6.2 --- Generalized Element Constraint Propagators --- p.63 / Chapter 6.3 --- Global Channeling Constraint --- p.66 / Chapter 6.3.1 --- Generalization of Existing Global Channeling Constraints --- p.66 / Chapter 6.3.2 --- Maintaining GAC on Int-Int Channeling Constraint --- p.68 / Chapter 7 --- Experiments --- p.72 / Chapter 7.1 --- Int-Int Channeling Constraint --- p.73 / Chapter 7.1.1 --- Efficient AC implementations --- p.74 / Chapter 7.1.2 --- GAC Implementations --- p.75 / Chapter 7.2 --- Set-Int Channeling Constraint --- p.83 / Chapter 7.3 --- Set-Set Channeling Constraint --- p.89 / Chapter 7.4 --- Int-Bool Channeling Constraint --- p.89 / Chapter 7.5 --- Set-Bool Channeling Constraint --- p.91 / Chapter 7.6 --- Discussion --- p.93 / Chapter 8 --- Related Work --- p.101 / Chapter 8.1 --- Empirical Studies --- p.101 / Chapter 8.2 --- Theoretical Studies --- p.102 / Chapter 8.3 --- Applications --- p.103 / Chapter 8.4 --- Other Kinds of Channeling Constraints --- p.104 / Chapter 9 --- Concluding Remarks --- p.106 / Chapter 9.1 --- Contributions --- p.106 / Chapter 9.2 --- Future Work --- p.108 / Bibliography --- p.109
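For orientation (an illustrative sketch of my own, not code from the thesis): the int-int channeling constraint studied in this record links a permutation viewpoint x to its inverse viewpoint y by requiring x[i] = j if and only if y[j] = i. A brute-force check of that condition, and the unique channel partner of a permutation, can be written as:

```python
def ii_channel(x, y):
    """Int-int channeling: x[i] == j holds exactly when y[j] == i."""
    n = len(x)
    return all((x[i] == j) == (y[j] == i) for i in range(n) for j in range(n))

def inverse(x):
    """The unique y satisfying the channeling constraint for a permutation x."""
    y = [0] * len(x)
    for i, j in enumerate(x):
        y[j] = i
    return y
```

For permutation assignments the constraint holds exactly when y is the inverse of x, which is why a solver propagating it can prune both viewpoints at once.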
162

ASLP: a list processor for artificial intelligence applications.

January 1990 (has links)
by Cheang Sin Man. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1990. / Bibliography: leaves 137-140. / ABSTRACT --- p.i / ACKNOWLEDGEMENTS --- p.ii / TABLE OF CONTENTS --- p.iii / Chapter CHAPTER 1 --- INTRODUCTION --- p.1 / Chapter 1.1 --- Lisp as an AI Programming Language --- p.1 / Chapter 1.2 --- Assisting List Processing with Hardware --- p.2 / Chapter 1.3 --- Simulation Study --- p.2 / Chapter 1.4 --- Implementation --- p.3 / Chapter 1.4.1 --- Hardware --- p.3 / Chapter 1.4.2 --- Software --- p.3 / Chapter 1.5 --- Performance --- p.4 / Chapter CHAPTER 2 --- LISP AND EXISTING LISP MACHINES --- p.5 / Chapter 2.1 --- Lisp and its Internal Structure --- p.5 / Chapter 2.1.1 --- The List Structure in Lisp --- p.5 / Chapter 2.1.2 --- Data Types in Lisp --- p.7 / Chapter 2.1.3 --- Lisp Functions --- p.8 / Chapter 2.1.4 --- Storage Management of Lisp --- p.9 / Chapter 2.2 --- Existing Lisp Machines --- p.11 / Chapter 2.2.1 --- Types of AI Architecture --- p.11 / Language-Based architecture --- p.11 / Knowledge-Based architecture --- p.12 / Semantic networks --- p.12 / Chapter 2.2.2 --- Lisp Machines --- p.12 / Solving problems of Lisp --- p.13 / Chapter 2.2.3 --- Classes of Lisp Machines --- p.14 / Two M Lisp machine examples --- p.15 / A class P machine example --- p.17 / A class S machine example --- p.17 / The best class for Lisp --- p.19 / Chapter 2.3 --- Execution Time Analysis of a Lisp System --- p.20 / Chapter 2.3.1 --- CPU Time Statistics --- p.20 / Chapter 2.3.2 --- Statistics Analysis --- p.24 / Chapter CHAPTER 3 --- OVERALL ARCHITECTURE OF THE ASLP --- p.27 / Chapter 3.1 --- An Arithmetical & Symbolical List Processor --- p.27 / Chapter 3.2 --- Multiple Memory Modules --- p.30 / Chapter 3.3 --- Large Number of Registers --- p.31 / Chapter 3.4 --- Multiple Buses --- p.34 / Chapter 3.5 --- Special Function Units --- p.35 / Chapter CHAPTER 4 --- PARALLELISM IN THE ASLP --- p.36 / Chapter 4.1 --- Parallel Data Movement --- p.36 / Chapter 4.2 --- Wide 
Memory Modules --- p.37 / Chapter 4.3 --- Parallel Memory Access --- p.39 / Chapter 4.3.1 --- Parallelism and Pipelining --- p.39 / Chapter 4.4 --- Pipelined Micro-Instructions --- p.40 / Chapter 4.4.1 --- Memory access pipelining --- p.41 / Chapter 4.5 --- Performance Estimation --- p.44 / Chapter 4.6 --- Parallel Execution with the Host Computer --- p.45 / Chapter CHAPTER 5 --- SIMULATION STUDY OF THE ASLP --- p.47 / Chapter 5.1 --- Why Simulation is needed for the ASLP? --- p.47 / Chapter 5.2 --- The Structure of the HOCB Simulator --- p.48 / Chapter 5.2.1 --- Activity-Oriented Simulation for the ASLP --- p.50 / Chapter 5.3 --- The Hardware Object Declaration Method --- p.50 / Chapter 5.4 --- A Register-Level Simulation of the ASLP --- p.53 / Chapter 5.4.1 --- A List Function Simulation --- p.54 / Chapter CHAPTER 6 --- DESIGN AND IMPLEMENTATION OF THE ASLP --- p.57 / Chapter 6.1 --- Hardware --- p.57 / Chapter 6.1.1 --- Microprogrammable Controller --- p.57 / The instruction cycle of the micro-controller --- p.59 / Chapter 6.1.2 --- Chip Selection and Allocation --- p.59 / Chapter 6.2 --- Software --- p.61 / Chapter 6.2.1 --- Instruction Passing --- p.61 / Chapter 6.2.2 --- Microprogram Development --- p.62 / Microprogram field definition --- p.64 / Micro-assembly language --- p.65 / Macro-instructions --- p.65 / Down-loading of Micro-Codes --- p.66 / Interfacing to C language --- p.66 / A Turbo C Function Library --- p.67 / Chapter CHAPTER 7 --- PERFORMANCE EVALUATION OF THE ASLP --- p.68 / Chapter 7.1 --- Micro-Functions in the ASLP --- p.68 / Chapter 7.2 --- Functions in the C Library --- p.71 / Chapter CHAPTER 8 --- FUNCTIONAL EVALUATION OF THE ASLP --- p.77 / Chapter 8.1 --- A Relational Database on the ASLP --- p.77 / Chapter 8.1.1 --- Data Representation --- p.77 / Chapter 8.1.2 --- Performance of the Database System --- p.79 / Chapter 8.2 --- Other Potential Applications --- p.80 / Chapter CHAPTER 9 --- FUTURE DEVELOPMENT OF THE ASLP --- p.81 /
Chapter 9.1 --- An Expert System Shell on the ASLP --- p.81 / Chapter 9.1.1 --- Definition of Objects --- p.81 / Chapter 9.1.2 --- Knowledge Representation --- p.84 / Chapter 9.1.3 --- Knowledge Representation in the ASLP --- p.85 / Chapter 9.1.4 --- Overall Structure --- p.88 / Chapter 9.2 --- Reducing the Physical Size by Employing VLSIs --- p.89 / Chapter CHAPTER 10 --- CONCLUSION --- p.92 / Chapter APPENDIX A --- BLOCK DIAGRAM --- p.95 / Chapter APPENDIX B --- ASLP CIRCUIT DIAGRAMS --- p.97 / Chapter APPENDIX C --- ASLP PC-BOARD LAYOUTS --- p.114 / Chapter APPENDIX D --- MICRO-CONTROL SIGNAL ASSIGNMENT --- p.121 / Chapter APPENDIX E --- MICRO-FIELD DEFINITION --- p.124 / Chapter APPENDIX F --- MACRO DEFINITION --- p.133 / Chapter APPENDIX G --- REGISTER ASSIGNMENT --- p.134 / PUBLICATIONS --- p.136 / REFERENCES --- p.137
163

Learning algorithms for non-overlapped trees of probabilistic logic neurons.

January 1990
by Law Hing Man, Hudson. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1990. / Bibliography: leaves 109-112. / Acknowledgements / Abstract / Chapter Chapter I. --- Introduction --- p.1 / Chapter 1.1 --- Overview of the thesis --- p.2 / Chapter 1.2 --- Organization of the thesis --- p.7 / Chapter Chapter II. --- Artificial Neural Networks --- p.9 / Chapter 2.1 --- Architectures of Artificial Neural Networks --- p.10 / Chapter 2.1.1 --- Neuron Models --- p.10 / Chapter 2.1.2 --- Network Models --- p.12 / Chapter 2.2 --- Learning algorithms --- p.13 / Chapter Chapter III. --- From Logic Neuron to Non-Overlapped Trees --- p.15 / Chapter 3.1 --- Deterministic Logic Neuron (DLN) --- p.15 / Chapter 3.2 --- Probabilistic Logic Neuron (PLN) --- p.20 / Chapter 3.2.1 --- Well-behaved learning of orthogonal patterns in PLN network --- p.23 / Chapter 3.2.2 --- Well-behaved learning algorithm for non-orthogonal patterns --- p.23 / Chapter 3.3 --- Non-Overlapped Trees --- p.28 / Chapter 3.3.1 --- Homogeneous learning algorithm --- p.30 / Chapter 3.3.2 --- An external comparator --- p.34 / Chapter 3.3.3 --- Problems solved by NOTPLN --- p.35 / Chapter Chapter IV. --- Properties of NOTPLN --- p.37 / Chapter 4.1 --- Noise Insensitivity --- p.37 / Chapter 4.1.1 --- Noise insensitivity with one bit noise --- p.38 / Chapter 4.1.2 --- Noise insensitivity under different noise distributions --- p.40 / Chapter 4.2 --- Functionality --- p.46 / Chapter 4.3 --- Capacity --- p.49 / Chapter 4.4 --- Distributed representation --- p.50 / Chapter 4.5 --- Generalization --- p.51 / Chapter 4.5.1 --- Text-to-Phoneme Problem --- p.52 / Chapter 4.5.2 --- Automobile Learning --- p.53 / Chapter Chapter V. 
--- Learning Algorithms --- p.54 / Chapter 5.1 --- Presentation methods --- p.54 / Chapter 5.2 --- Learning algorithms --- p.56 / Chapter 5.2.1 --- Heterogeneous algorithm --- p.57 / Chapter 5.2.2 --- Conflict reduction algorithm --- p.61 / Chapter 5.3 --- Side effects of learning algorithms --- p.68 / Chapter 5.3.1 --- Existence of Side Effects --- p.68 / Chapter 5.3.2 --- Removal of Side Effects --- p.69 / Chapter Chapter VI. --- Practical Considerations --- p.71 / Chapter 6.1 --- Input size constraint --- p.71 / Chapter 6.2 --- Limitations of functionality --- p.72 / Chapter 6.3 --- Thermometer code --- p.72 / Chapter 6.4 --- Output definitions --- p.73 / Chapter 6.5 --- More trees for one bit --- p.74 / Chapter 6.6 --- Repeated recall --- p.75 / Chapter Chapter VII. --- Implementation and Simulations --- p.78 / Chapter 7.1 --- Implementation --- p.78 / Chapter 7.2 --- Simulations --- p.81 / Chapter 7.2.1 --- Parity learning --- p.81 / Chapter 7.2.2 --- Performance of learning algorithms under different Hamming distances --- p.82 / Chapter 7.2.3 --- Performance of learning algorithms with different output size --- p.83 / Chapter 7.2.4 --- Numerals recognition and noise insensitivity --- p.84 / Chapter 7.2.5 --- Automobile learning and generalization --- p.86 / Chapter Chapter VIII. --- Spoken Numerals Recognition System based on NOTPLN --- p.89 / Chapter 8.1 --- End-point detection --- p.90 / Chapter 8.2 --- Linear Predictive Analysis --- p.91 / Chapter 8.3 --- Formant Frequency Extraction --- p.93 / Chapter 8.4 --- Coding --- p.95 / Chapter 8.5 --- Results and discussion --- p.96 / Chapter Chapter IX.
--- Concluding Remarks --- p.97 / Chapter 9.1 --- Revisit of the contributions of the thesis --- p.97 / Chapter 9.2 --- Further research --- p.99 / Chapter Appendix A --- Equation for calculating the probability of random selection --- p.102 / Chapter Appendix B --- Training sets with different Hamming distances --- p.103 / Chapter Appendix C --- Set of numerals with their associated binary values --- p.107 / References --- p.109
164

A fuzzy constraint satisfaction approach to achieving stability in dynamic constraint satisfaction problems.

January 2001 (has links)
by Wong, Yin Pong Anthony. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 101-107). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Constraint Satisfaction Problems --- p.2 / Chapter 1.2 --- Solution Stability in Dynamic Constraint Satisfaction Problems --- p.3 / Chapter 1.3 --- Motivation of the Research --- p.5 / Chapter 1.4 --- Overview of the Thesis --- p.5 / Chapter 2 --- Related Work --- p.7 / Chapter 2.1 --- Complete Search Algorithms --- p.7 / Chapter 2.1.1 --- DnAC-4 --- p.8 / Chapter 2.1.2 --- ac --- p.9 / Chapter 2.1.3 --- DnAC-6 --- p.9 / Chapter 2.2 --- Algorithms for Stability --- p.10 / Chapter 2.2.1 --- Bellicha --- p.10 / Chapter 2.2.2 --- Dynamic Dynamic Backtracking --- p.11 / Chapter 2.2.3 --- Wallace and Freuder --- p.12 / Chapter 2.2.4 --- Unimodular Probing --- p.13 / Chapter 2.2.5 --- Train Rescheduling --- p.14 / Chapter 2.3 --- Constrained Optimization Algorithms --- p.14 / Chapter 2.3.1 --- Guided Local Search --- p.14 / Chapter 2.3.2 --- Anytime CSA with Iterative Deepening --- p.15 / Chapter 2.4 --- A Real-life Application --- p.16 / Chapter 3 --- Background --- p.17 / Chapter 3.1 --- Fuzzy Constraint Satisfaction Problems --- p.17 / Chapter 3.2 --- Fuzzy GENET --- p.19 / Chapter 3.2.1 --- Network Architecture --- p.19 / Chapter 3.2.2 --- Convergence Procedure --- p.21 / Chapter 3.3 --- Deficiency in Fuzzy GENET --- p.24 / Chapter 3.4 --- Rectification of Fuzzy GENET --- p.26 / Chapter 4 --- Using Fuzzy GENET for Solving Stability Problems --- p.30 / Chapter 4.1 --- Modelling Stability Problems as FCSPs --- p.30 / Chapter 4.2 --- Extending Fuzzy GENET for Solving Stability Problems --- p.36 / Chapter 4.3 --- Experiments --- p.38 / Chapter 4.3.1 --- Dynamic CSP Generation --- p.39 / Chapter 4.3.2 --- Problems Using Hamming Distance Function --- p.41 / Chapter 4.3.2.1 --- Variation in Number of Variables --- p.42 / Chapter 4.3.2.2 --- 
Variation in Domain Size --- p.45 / Chapter 4.3.2.3 --- Variation in Density and Tightness --- p.47 / Chapter 4.3.3 --- Comparison in Using Different Thresholds --- p.47 / Chapter 4.3.4 --- Problems Using Manhattan Distance Function --- p.50 / Chapter 5 --- Enhancement of the Modelling Scheme --- p.56 / Chapter 5.1 --- Distance Bound --- p.56 / Chapter 5.2 --- Enhancement of Convergence Procedure --- p.57 / Chapter 5.3 --- Comparison with Optimal Solutions --- p.60 / Chapter 5.4 --- Comparison with Fuzzy GENET(dcsp) --- p.64 / Chapter 5.4.1 --- Medium-sized Problems --- p.64 / Chapter 5.4.2 --- The 150-10-15-15 Problem --- p.67 / Chapter 5.4.3 --- Variation in Density and Tightness --- p.73 / Chapter 5.4.4 --- Variation in Domain Size --- p.76 / Chapter 5.5 --- Analysis of Fuzzy GENET(dcsp2) --- p.94 / Chapter 6 --- Conclusion --- p.98 / Chapter 6.1 --- Contributions --- p.98 / Chapter 6.2 --- Future Work --- p.99 / Bibliography --- p.101
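The stability objective this record optimizes can be illustrated with a small sketch (my own assumptions, not the thesis's Fuzzy GENET machinery): given the solution of the original problem and candidate solutions of the changed problem, stability under the Hamming distance function means preferring the candidate that disturbs the fewest variable assignments.

```python
def hamming_distance(old_sol, new_sol):
    """Number of variables whose values differ between two assignments."""
    return sum(old_sol[v] != new_sol[v] for v in old_sol)

def most_stable(old_sol, candidates):
    """Among solutions of the changed problem, pick the one closest to old_sol."""
    return min(candidates, key=lambda s: hamming_distance(old_sol, s))
```

Swapping in a Manhattan distance over numeric domains gives the other distance function the experiments compare.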
165

Model induction: a new source of model redundancy for constraint satisfaction problems.

January 2002
Law Yat Chiu. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. / Includes bibliographical references (leaves 85-89). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Related Work --- p.4 / Chapter 2.1 --- Equivalence of CSPs --- p.4 / Chapter 2.2 --- Dual Viewpoint --- p.4 / Chapter 2.3 --- CSP Reformulation --- p.5 / Chapter 2.4 --- Multiple Modeling --- p.5 / Chapter 2.5 --- Redundant Modeling --- p.6 / Chapter 2.6 --- Minimal Combined Model --- p.6 / Chapter 2.7 --- Permutation CSPs and Channeling Constraints --- p.6 / Chapter 3 --- Background --- p.8 / Chapter 3.1 --- From Viewpoints to CSP Models --- p.8 / Chapter 3.2 --- Constraint Satisfaction Techniques --- p.10 / Chapter 3.2.1 --- Backtracking Search --- p.11 / Chapter 3.2.2 --- Consistency Techniques and Constraint Propagation --- p.12 / Chapter 3.2.3 --- Incorporating Consistency Techniques into Backtracking Search --- p.18 / Chapter 4 --- Model Induction --- p.21 / Chapter 4.1 --- Channeling Constraints --- p.21 / Chapter 4.2 --- Induced Models --- p.22 / Chapter 4.3 --- Properties --- p.30 / Chapter 5 --- Exploiting Redundancy from Model Induction --- p.35 / Chapter 5.1 --- Combining Redundant Models --- p.35 / Chapter 5.1.1 --- Model Intersection --- p.36 / Chapter 5.1.2 --- Model Channeling --- p.38 / Chapter 5.2 --- Three New Forms of Model Redundancy --- p.39 / Chapter 5.3 --- Experiments --- p.42 / Chapter 5.3.1 --- Langford's Problem --- p.44 / Chapter 5.3.2 --- Random Permutation CSPs --- p.53 / Chapter 5.3.3 --- Golomb Rulers --- p.72 / Chapter 5.3.4 --- Circular Golomb Rulers --- p.74 / Chapter 5.3.5 --- All-Interval Series Problem --- p.78 / Chapter 6 --- Concluding Remarks --- p.82 / Chapter 6.1 --- Contributions --- p.82 / Chapter 6.2 --- Future Work --- p.83
166

Disjunctive argumentation semantics (DAS) for reasoning over distributed uncertain knowledge.

January 1998
by Benson, Ng Hin Kwong. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (leaves 111-117). / Abstract also in Chinese. / Chapter 1 --- Introduction --- p.9 / Chapter 1.1 --- Our approach --- p.11 / Chapter 1.2 --- Organization of the thesis --- p.12 / Chapter 2 --- Logic Programming --- p.13 / Chapter 2.1 --- Logic programming in Horn clauses --- p.14 / Chapter 2.1.1 --- Problem with incomplete information --- p.15 / Chapter 2.1.2 --- Problem with inconsistent information --- p.15 / Chapter 2.1.3 --- Problem with indefinite information --- p.16 / Chapter 2.2 --- Logic programming in non-Horn clauses --- p.16 / Chapter 2.2.1 --- Reasoning under incomplete information --- p.17 / Chapter 2.2.2 --- Reasoning under inconsistent information --- p.17 / Chapter 2.2.3 --- Reasoning under indefinite information --- p.20 / Chapter 2.3 --- Coexistence of incomplete, inconsistent and indefinite information --- p.21 / Chapter 2.4 --- Stable semantics --- p.22 / Chapter 2.5 --- Well-founded semantics --- p.23 / Chapter 2.6 --- Chapter summary --- p.25 / Chapter 3 --- Argumentation --- p.26 / Chapter 3.1 --- Toulmin's informal argumentation model --- p.27 / Chapter 3.2 --- Rescher's formal argumentation model --- p.28 / Chapter 3.3 --- Argumentation in AI research --- p.30 / Chapter 3.3.1 --- Poole's Logical Framework for Default Reasoning --- p.30 / Chapter 3.3.2 --- Inheritance Reasoning Framework of Touretzky et al. --- p.31 / Chapter 3.3.3 --- Pollock's Theory of Defeasible Reasoning --- p.32 / Chapter 3.3.4 --- Dung's Abstract Argumentation Framework --- p.33 / Chapter 3.3.5 --- Lin and Shoham's Argument System --- p.35 / Chapter 3.3.6 --- Vreeswijk's Abstract Argumentation --- p.35 / Chapter 3.3.7 --- Kowalski and Toni's Uniform Argumentation --- p.36 / Chapter 3.3.8 --- John Fox's Qualitative Argumentation --- p.37 / Chapter 3.3.9 --- Thomas Gordon's Pleading Games --- p.38 / Chapter 3.3.10 --- Chris Reed's Persuasive Dialogue --- p.39 / Chapter 3.3.11 --- Ronald Loui's Argument Game --- p.39 / Chapter 3.3.12 --- Verheij's Reason-Based Logics and CumulA --- p.40 / Chapter 3.3.13 --- Prakken's Defeasible Argumentation --- p.40 / Chapter 3.3.14 --- Summary of existing frameworks --- p.41 / Chapter 3.4 --- Chapter summary --- p.42 / Chapter 4 --- Disjunctive Argumentation Semantics I --- p.46 / Chapter 4.1 --- Background --- p.47 / Chapter 4.2 --- Definition --- p.48 / Chapter 4.3 --- Conflicts within a KBS --- p.52 / Chapter 4.4 --- Conflicts between KBSs --- p.54 / Chapter 4.4.1 --- Credulous View --- p.56 / Chapter 4.4.2 --- Skeptical View --- p.57 / Chapter 4.4.3 --- Generalized Skeptical View --- p.58 / Chapter 4.5 --- Semantics --- p.60 / Chapter 4.6 --- Dialectical proof theory --- p.61 / Chapter 4.7 --- Relation to existing framework --- p.61 / Chapter 4.8 --- Issue on paraconsistency --- p.63 / Chapter 4.9 --- An illustrative example --- p.63 / Chapter 4.10 --- Chapter summary --- p.65 / Chapter 5 --- Disjunctive Argumentation Semantics II --- p.67 / Chapter 5.1 --- Background --- p.68 / Chapter 5.2 --- Definition --- p.70 / Chapter 5.2.1 --- Rules --- p.70 / Chapter 5.2.2 --- Splits --- p.71 / Chapter 5.3 --- Conflicts --- p.74 / Chapter 5.3.1 --- Undercut conflicts --- p.75 / Chapter 5.3.2 --- Rebuttal conflicts --- p.76 / Chapter 5.3.3 --- Thinning conflicts --- p.78 / Chapter 5.4 --- Semantics --- p.80 / Chapter 5.5 --- Relation to existing frameworks ---
p.81 / Chapter 5.6 --- Issue on paraconsistency --- p.82 / Chapter 5.7 --- An illustrative example --- p.83 / Chapter 5.8 --- Chapter summary --- p.85 / Chapter 6 --- Evaluation --- p.86 / Chapter 6.1 --- Introduction --- p.86 / Chapter 6.2 --- Methodology --- p.87 / Chapter 6.3 --- DAS I --- p.88 / Chapter 6.3.1 --- Inoue's Benchmark problems --- p.88 / Chapter 6.3.2 --- Sherlock Holmes' problems --- p.96 / Chapter 6.4 --- DAS II --- p.100 / Chapter 6.4.1 --- Inoue's benchmark problems --- p.100 / Chapter 6.4.2 --- Sherlock Holmes' problem --- p.103 / Chapter 6.5 --- Analysis --- p.103 / Chapter 6.5.1 --- Possible extension --- p.104 / Chapter 6.6 --- Chapter summary --- p.106 / Chapter 7 --- Conclusion --- p.108 / Chapter 7.0.1 --- Possible extension of the present work --- p.109 / Bibliography --- p.117 / Chapter A --- First Order Logic (FOL) --- p.118 / Chapter B --- DAS-I Proof --- p.121 / Chapter B.1 --- Monotone proof --- p.121 / Chapter B.2 --- Soundness proof --- p.122 / Chapter B.3 --- Completeness proof --- p.123 / Chapter C --- Sherlock Holmes' Silver Blaze Excerpts --- p.125 / Chapter C.1 --- Double life --- p.125 / Chapter C.2 --- Poison stable boy --- p.125
167

Skald: Exploring Story Generation and Interactive Storytelling by Reconstructing Minstrel

Tearse, Brandon. 16 February 2019
Within the realm of computational story generation sits Minstrel, a decades-old system which was once used to explore the idea that, under the correct conditions, novel stories can be generated by taking an existing story and replacing some of its elements with similar ones found in a different story. This concept would eventually fall within the bounds of a strategy known as Case-Based Reasoning (CBR), in which problems are solved by recalling solutions to past problems (the cases) and mutating the recalled cases in order to create an appropriate solution to the current problem. This dissertation uses a rational reconstruction of Minstrel called Minstrel Remixed, a handful of upgraded variants of Minstrel Remixed, and a pair of similar but unrelated storytelling systems to explore various characteristics of Minstrel-style storytelling systems.

In the first part of this dissertation I define the class of storytelling systems that are similar to Minstrel. This definition allows me to compare the features of these systems and discuss the various strengths and weaknesses of the variants. Furthermore, I briefly describe the rational reconstruction of Minstrel and then provide a detailed overview of the inner workings of the resulting system, Minstrel Remixed.

Once Minstrel Remixed was complete, I chose to upgrade it in order to explore the set of stories that it could produce and the ways the system could be altered or reconfigured with the goal of intentionally influencing the set of possible outputs. This investigation resulted in two new storytelling systems called Conspiracy Forever and Problem Planets. The second portion of this dissertation discusses these systems as well as a number of discoveries about the strengths and weaknesses of Minstrel-style storytelling systems in general. More specifically, I discuss that (1) a human reader's capacity for creating patterns out of an assortment of statements is incredibly useful, and output should be crafted to use this potential; (2) Minstrel-style storytelling tends to be amnesiac and does a poor job of creating long stories that remain cohesive; and (3) the domain that a storytelling system works from is incredibly important and must be well engineered. I continue by discussing the methods that I discovered for cleaning up and maintaining a domain, and conclude with a section covering interviews with other storytelling system creators about the strengths and weaknesses of their systems in light of my findings about Minstrel Remixed.

In the final portion of this document I create a framework of six interrelated attributes of stories (length, coherence, creativity, complexity, contextuality, and consolidation) and use it, along with the lessons from the first two portions of the dissertation, to discuss the strengths and weaknesses of this class of CBR systems when applied to both static story generation and interactive storytelling. These systems seem to have a fixed amount of expressive power: although they can be tweaked to produce, for example, longer or more consolidated stories, these improvements always come with a reduction in complexity, coherence, or one of the other attributes. Further discussion of the output power of this class of storytelling systems revolves around the primary factor limiting their potential, namely the fact that they have no understanding of the symbols and patterns they manipulate. Finally, I introduce a number of strategies that I found fruitful for increasing the 'output power' of the system and working around the lack of commonsense reasoning, chiefly improving the domain and adding new subsystems.
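The retrieve-and-adapt loop that defines CBR can be sketched in a few lines (a toy illustration under my own assumptions; Minstrel's actual TRAM-based adaptation is far richer than a dictionary substitution):

```python
def retrieve(case_library, query):
    """Recall the stored case whose problem description best matches the query."""
    def score(case):
        return sum(1 for k, v in query.items() if case["problem"].get(k) == v)
    return max(case_library, key=score)

def adapt(case, substitutions):
    """Mutate the recalled solution by replacing elements with similar ones."""
    solution = dict(case["solution"])
    solution.update(substitutions)
    return solution
```

The weaknesses discussed above follow directly from this shape: the system only shuffles symbols it has stored, with no model of what they mean.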
168

Machine Learning and Network-Based Systems Toxicology Modeling of Chemotherapy-Induced Peripheral Neuropathy

Bloomingdale, Peter. 21 March 2019
The overarching goal of my thesis work was to use a combination of mathematical and experimental models in an effort to resolve chemotherapy-induced peripheral neuropathy (CIPN), one of the most common adverse effects of cancer chemotherapy. In chapter two, we developed quantitative structure-toxicity relationship (QSTR) models using machine learning algorithms that enable the prediction of peripheral neuropathy incidence solely from a chemical's molecular structure. The QSTR models enable the prediction of clinical neurotoxicity, which could be useful in early drug discovery to screen out compounds that are highly neurotoxic and to identify safer drug candidates to move forward into further development. The QSTR model was used to suggest modifications to the molecular structure of bortezomib that may reduce the number of patients who develop peripheral neuropathy from bortezomib therapy. In the third chapter, we conducted a network-based comparative systems pharmacology analysis of proteasome inhibitors. The idea behind this work was to use in silico pharmacological interaction networks to elucidate the neurotoxic differences between bortezomib and carfilzomib. Our theoretical results suggested the importance of the unfolded protein response in bortezomib neurotoxicity and that the mechanisms of neurotoxicity by proteasome inhibitors closely resemble the pathogenesis of Guillain-Barré syndrome caused by the Epstein-Barr virus. In chapter four we wrote a review article to introduce the concept of Boolean network modeling in systems pharmacology. Because of the lack of knowledge about the parameter values that govern the cellular dynamic processes involved in peripheral nerve damage, the development of a quantitative systems pharmacology model would not be feasible.
Therefore, in chapter five, we developed a Boolean network-based systems pharmacology model of intracellular signaling and gene regulation in peripheral neurons. The model was used to simulate the neurotoxic effects of bortezomib and to identify potential treatment strategies for proteasome-inhibitor-induced peripheral neuropathy. A novel combinatorial treatment strategy was identified that consists of a TNFα inhibitor, an NMDA receptor antagonist, and a reactive oxygen species inhibitor. Our subsequent goals were aimed at translating this finding in the hope of one day impacting human health. Initially we had proposed to use three separate agents for each of these targets; however, the clinical administration of three agents to prevent the neurotoxicity of one is likely unfeasible. We then came across a synthetic cannabinoid derivative, dexanabinol, that promiscuously inhibits all three of these targets and was previously developed for its intended use to treat traumatic brain injury. We believed this drug candidate was worth investigating because of its overlapping pharmacological activity with the targets suggested by the network analyses, its previously established favorable safety profile in humans, its notable in vitro/in vivo neuroprotective properties, and the rising interest in the therapeutic potential of cannabinoids to treat CIPN. In chapter six we assessed the efficacy of dexanabinol for preventing the neurotoxic effects of bortezomib in various experimental models. Because of the limited translatability of 2D cell culture techniques, we investigated the pharmacodynamics of dexanabinol using a microphysiological model of the peripheral nerve. Bortezomib caused a reduction in electrophysiological endpoints, which were partially restored by dexanabinol. In chapter seven we evaluated the possible interaction of dexanabinol with the anti-cancer effects of bortezomib.
We observed no significant differences in tumor volume between bortezomib alone and bortezomib in combination with dexanabinol in a multiple myeloma mouse model. Lastly, we are currently investigating the efficacy of dexanabinol in a well-established rat model of bortezomib-induced peripheral neuropathy. We believe that positive results would warrant a clinical trial. In conclusion, the statistical and mechanistic models of peripheral neuropathy developed here could be used to reduce the overall burden of CIPN through the design of safer chemotherapeutics and the discovery of novel neuroprotective treatment strategies.
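To make the Boolean network idea concrete, here is a hypothetical three-node toy (my own wiring for illustration, not the dissertation's neuronal signaling model): each node is on or off, and a synchronous step recomputes every node from the current state.

```python
def step(state, rules):
    """One synchronous update: every node recomputes from the current state."""
    return {node: rule(state) for node, rule in rules.items()}

# Hypothetical wiring: drug exposure induces ROS, and ROS drives TNF signaling.
rules = {
    "drug": lambda s: s["drug"],   # exposure held constant
    "ROS":  lambda s: s["drug"],
    "TNF":  lambda s: s["ROS"],
}
```

Starting from drug on and everything else off, ROS switches on after one step and TNF after two; forcing a node to stay off models an inhibitor, which is the kind of in silico knockout screen such models support.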
169

Spiking Neural Networks and Sparse Deep Learning

Tavanaei, Amirhossein. 23 March 2019
This document proposes new methods for training multi-layer and deep spiking neural networks (SNNs), specifically spiking convolutional neural networks (CNNs). Training a multi-layer spiking network poses difficulties because the output spikes do not have derivatives, and the backpropagation method commonly used for non-spiking networks is not easily applied. Our methods use novel versions of the brain-like, local learning rule named spike-timing-dependent plasticity (STDP) that incorporates supervised and unsupervised components. Our method starts with conventional learning methods and converts them to spatio-temporally local rules suited for SNNs.

The training uses two components, for unsupervised feature extraction and supervised classification. The first component consists of new STDP rules for spike-based representation learning that train convolutional filters and initial representations. The second introduces new STDP-based supervised learning rules for spike pattern classification via an approximation to gradient descent obtained by combining the STDP and anti-STDP rules. Specifically, the STDP-based supervised learning model approximates gradient descent by using temporally local STDP rules. Stacking these components implements a novel sparse, spiking deep learning model. Our spiking deep learning model is a variation of spiking CNNs of integrate-and-fire (IF) neurons with performance comparable to state-of-the-art deep SNNs. The experimental results show the success of the proposed model for image classification. Our network architecture is the only spiking CNN that provides bio-inspired STDP rules in a hierarchy of feature extraction and classification in an entirely spike-based framework.
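A minimal sketch of the pair-based STDP rule underlying such methods (the generic textbook form with made-up constants, not the dissertation's exact rule): a presynaptic spike shortly before a postsynaptic spike strengthens the synapse, the reverse timing weakens it, and the effect decays exponentially with the timing gap.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one spike pair; dt = t_post - t_pre in milliseconds."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # pre before post: potentiate
    return -a_minus * math.exp(dt / tau)       # post before pre: depress
```

The temporal locality visible here (only the pair's timing gap matters) is what makes STDP-style rules attractive as a substitute for backpropagation in spiking networks.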
170

Communicating Plans in Ad Hoc Multiagent Teams

Santarra, Trevor. 16 April 2019
With the rising use of autonomous agents in robotic and software settings, agents may be required to cooperate in teams while having little or no information regarding the capabilities of their teammates. In these ad hoc settings, teams must collaborate on the fly, having no prior opportunity for coordination. Prior research in this area commonly either assumes that communication between agents is impossible given their heterogeneous design or leaves communication as an open problem. Typically, to accurately predict a teammate's behavior at a future point in time, ad hoc agents leverage models learned from past experience and attempt to infer a teammate's intended strategy by observing its current course of action. However, these approaches can fail to arrive at accurate policy predictions, leaving the coordinating agent uncertain and unable to adapt to its teammates' plans. We introduce the problem of communicating minimal sets of teammate policies in order to provide information for collaboration in such ad hoc environments. We demonstrate how an agent may determine what information it should solicit from its peers, but further illustrate how optimal solutions to this problem have intractable computational requirements. Nonetheless, through the characterization of this difficulty, we identify strategies that permit approximate or heuristic approaches, allowing the practical application of this capacity in ad hoc teams.
