11

Attribute Exploration on the Web

Jäschke, Robert, Rudolph, Sebastian 28 May 2013
We propose an approach for supporting attribute exploration by web information retrieval, in particular by posing appropriate queries to search engines, crowdsourcing systems, and the linked open data cloud. We discuss the general assumptions underlying this approach and the degree to which they can be taken for granted.
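As a rough illustration of the idea, the sketch below asks a public SPARQL endpoint for counterexamples to a candidate implication: entities that have every premise property but lack some conclusion property. The endpoint, helper name, and chosen properties are illustrative assumptions, not the authors' implementation.

```python
import requests  # assumes the `requests` package is available

DBPEDIA = "https://dbpedia.org/sparql"  # public linked-open-data endpoint

def find_counterexamples(premise, conclusion, limit=5):
    """Search the LOD cloud for entities having every premise property
    but lacking some conclusion property; any hit refutes the
    implication 'premise -> conclusion'."""
    has = " ".join(f"?x {p} [] ." for p in premise)
    lacks = " ".join(f"FILTER NOT EXISTS {{ ?x {q} [] }}" for q in conclusion)
    query = (
        "PREFIX dbo: <http://dbpedia.org/ontology/> "
        f"SELECT DISTINCT ?x WHERE {{ {has} {lacks} }} LIMIT {limit}"
    )
    resp = requests.get(DBPEDIA, params={
        "query": query,
        "format": "application/sparql-results+json",
    })
    resp.raise_for_status()
    return [b["x"]["value"] for b in resp.json()["results"]["bindings"]]

# Does everything with a capital and a currency also have an official language?
print(find_counterexamples(premise=["dbo:capital", "dbo:currency"],
                           conclusion=["dbo:officialLanguage"]))
```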
12

An Approach to Incremental Learning Good Classification Tests

Naidenova, Xenia, Parkhomenko, Vladimir 28 May 2013
An algorithm for the incremental mining of implicative logical rules is proposed. It is based on constructing good classification tests. The incremental approach to constructing these rules makes it possible to reveal the interdependence between two fundamental components of human thinking: pattern recognition and knowledge acquisition.
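The paper's notion of a "good classification test" is formally defined there; the toy sketch below (all names illustrative) only conveys the incremental flavor: a test is modeled as an attribute set whose extent lies entirely in the target class, and a newly arriving counterexample triggers a minimal specialization.

```python
def is_test(attrs, examples, labels, target):
    """attrs qualifies as a test if every example containing all of attrs
    carries the target label (and at least one example does)."""
    extent = [lab for ex, lab in zip(examples, labels) if attrs <= ex]
    return bool(extent) and all(lab == target for lab in extent)

def update_tests(tests, examples, labels, new_ex, new_lab, target, universe):
    """Revise the current tests after a single new example arrives."""
    examples.append(new_ex)
    labels.append(new_lab)
    revised = set()
    for t in tests:
        if is_test(t, examples, labels, target):
            revised.add(t)  # the new example did not break this test
        else:
            # a counterexample broke t: specialize it with one attribute
            # that the counterexample lacks, keeping only valid refinements
            for m in universe - new_ex:
                cand = frozenset(t | {m})
                if is_test(cand, examples, labels, target):
                    revised.add(cand)
    return revised
```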
13

Parallelisierung von Algorithmen zur Nutzung auf Architekturen mit Teilwortparallelität / Parallelization of Algorithms for Use on Architectures with Subword Parallelism

Schaffer, Rainer 12 October 2010
Technological progress permits the implementation of increasingly complex processor architectures on a single chip. One trend of recent years is the integration of more and more execution units on one chip. This poses new challenges for mapping algorithms onto such architectures, since all execution units should be used efficiently while the algorithm executes. The focus of this dissertation is the exploitation of the parallelism of processor arrays with subword parallelism. Such architectures permit parallel processing on several levels, so a mapping strategy was developed that places particular emphasis on subword parallelism. The strategy is based on the methods of processor array design. Processor arrays are regularly arranged processor elements that communicate only with their neighboring elements; data input and output are handled by the processor elements at the border of the array. Each processor element may contain several functional units, which execute the computational operations of the algorithm. Subword parallelism denotes the ability to split the data path of a functional unit into several narrow data paths for the parallel processing of data of small word width. The developed mapping strategy consists of two steps, "Preprocessing" and "Multi-Level Modified Copartitioning" (MMC). Preprocessing transforms the algorithm in such a way that the result can be mapped quickly and efficiently onto the target architecture; for this purpose, an optimization problem was formulated that determines the transformation parameters step by step. The Multi-Level Modified Copartitioning then adapts the algorithm to the target architecture stage by stage. Beyond that, the mapping method exploits the local registers in the processor elements and adapts the algorithm to the memory architecture to which the processor array is connected. The first MMC level transforms an algorithm with single-data operations into one with subword-parallel operations; the second level adapts the algorithm to the local registers and to the processor array; further copartitioning levels can be used to adapt the algorithm to the memory architecture.
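Subword parallelism itself can be demonstrated without special hardware. The following sketch uses the classic SWAR ("SIMD within a register") masking trick to perform two independent 16-bit additions with a single 32-bit integer addition, which is the kind of lane-wise operation the mapping strategy targets; it illustrates the concept only and is not part of the dissertation's method.

```python
HI = 0x8000_8000            # top bit of each 16-bit lane in a 32-bit word
LO = 0xFFFF_FFFF ^ HI       # the lower 15 bits of each lane

def packed_add16(x, y):
    """Two 16-bit additions in one 32-bit add: mask off the lane top bits
    so no carry can cross a lane boundary, then restore them via XOR."""
    s = (x & LO) + (y & LO)     # carries stop at bit 15 / bit 31
    return s ^ ((x ^ y) & HI)   # lane-wise top bit = x ^ y ^ carry-in

# lanes (0xFFFF, 0x0001) + (0x0001, 0x0002) -> (0x0000, 0x0003): no carry leaks
assert packed_add16(0xFFFF_0001, 0x0001_0002) == 0x0000_0003
```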
14

Learning Description Logic Knowledge Bases from Data Using Methods from Formal Concept Analysis

Distel, Felix 29 June 2011
Description Logics (DLs) are a class of knowledge representation formalisms that can represent terminological and assertional knowledge using a well-defined semantics. Often, knowledge engineers are experts in their own fields, but not in logics, and require assistance in the process of ontology design. This thesis presents three methods that can extract terminological knowledge from existing data and thereby assist in the design process. They are based on similar formalisms from Formal Concept Analysis (FCA), in particular the Next-Closure Algorithm and Attribute-Exploration. The first of the three methods computes terminological knowledge from the data, without any expert interaction. The two other methods use expert interaction where a human expert can confirm each terminological axiom or refute it by providing a counterexample. These two methods differ only in the way counterexamples are provided.
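Both interactive methods rest on enumerating closed sets in the lectic order produced by Next-Closure. A compact sketch of that algorithm follows, with the closure operator given by the usual double derivation in a formal context; the toy context is invented for illustration.

```python
def next_closure(A, M, closure):
    """Lectically next closed set after A, where the list M fixes the
    attribute order; returns None when A is the last closed set."""
    for i in range(len(M) - 1, -1, -1):
        m = M[i]
        if m in A:
            A = A - {m}
        else:
            B = closure(A | {m})
            if not (B - A) & set(M[:i]):  # B adds nothing smaller than m
                return B
    return None

def all_closed_sets(M, closure):
    A = closure(set())
    while A is not None:
        yield A
        A = next_closure(A, M, closure)

# toy formal context: object -> set of attributes it has
ctx = {"g1": {"a", "b"}, "g2": {"b", "c"}, "g3": {"a", "c"}}
M = ["a", "b", "c"]

def closure(A):
    """A'': attributes shared by all objects that have every attribute in A."""
    extent = [g for g, attrs in ctx.items() if A <= attrs]
    return set(M) if not extent else set.intersection(*(ctx[g] for g in extent))

for c in all_closed_sets(M, closure):
    print(sorted(c))
```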
15

Formal Concept Analysis Methods for Description Logics / Formale Begriffsanalyse Methoden für Beschreibungslogiken

Sertkaya, Baris 09 July 2008
This work presents two main contributions to Description Logics (DLs) research by means of Formal Concept Analysis (FCA) methods: supporting the bottom-up construction of DL knowledge bases, and completing DL knowledge bases. Its contribution to FCA research concerns the computational complexity of computing generators of closed sets.
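For orientation on the FCA side, a "generator" of a closed set C is a subset whose closure is C; the brute-force sketch below (toy closure operator invented for illustration) enumerates the inclusion-minimal ones, and the point of the thesis's complexity analysis is precisely that such enumeration is expensive in general.

```python
from itertools import combinations

def minimal_generators(closed, closure):
    """All inclusion-minimal G subseteq closed with closure(G) == closed."""
    closed = set(closed)
    gens = [set(g)
            for k in range(len(closed) + 1)
            for g in combinations(sorted(closed), k)
            if closure(set(g)) == closed]
    return [g for g in gens if not any(h < g for h in gens)]

# toy closure: {a, b, c} is generated by {a} alone or by {b, c} together
def closure(A):
    return {"a", "b", "c"} if ("a" in A or {"b", "c"} <= A) else set(A)

print(minimal_generators({"a", "b", "c"}, closure))  # [{'a'}, {'b', 'c'}]
```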
16

Standard and Non-standard reasoning in Description Logics / Standard- und Nicht-Standard-Inferenzen in Beschreibungslogiken

Brandt, Sebastian-Philipp 23 May 2006
The present work deals with Description Logics (DLs), a class of knowledge representation formalisms used to represent and reason about classes of individuals and relations between such classes in a formally well-defined way. We provide novel results in three main directions. (1) Tractable reasoning revisited: in the 1990s, DL research largely answered the question of practically relevant yet tractable DL formalisms in the negative. Motivated by novel application domains, especially the life sciences, and a surprising tractability result by Baader, we revisit this question, this time looking in a new direction: general terminologies (TBoxes) defined over the DL EL, and extensions of both. As the main positive result, we devise EL++(D)-CBoxes as a tractable DL formalism with optimal expressivity, in the sense that any additional standard DL constructor, any extension of the TBox formalism, and any more powerful concrete domain makes reasoning intractable. (2) Non-standard inferences for knowledge maintenance: non-standard inferences, such as matching, can support domain experts in maintaining DL knowledge bases in a structured and well-defined way. In order to extend their availability and promote their use, the present work advances the state of the art of non-standard inferences both in theory and in implementation. Our main results are implementations and performance evaluations of known matching algorithms for the DLs ALE and ALN, optimal non-deterministic polynomial-time algorithms for matching under acyclic side conditions in ALN and its sublanguages, and optimal algorithms for matching w.r.t. cyclic (and hybrid) EL-TBoxes. (3) Non-standard inferences over general concept inclusion (GCI) axioms: the utility of GCIs in modern DL knowledge bases and the relevance of non-standard inferences to knowledge maintenance naturally raise the question of a tractable DL formalism in which both can be provided. As the main result, we propose hybrid EL-TBoxes as a solution to this hitherto open question.
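For context, subsumption over general TBoxes in plain EL is decidable in polynomial time by a saturation procedure with a handful of completion rules. The sketch below follows that published completion-rule scheme for normalized EL TBoxes; it is not the thesis's richer EL++(D)-CBox formalism.

```python
def classify_el(concepts, subs, conjs, ex_rhs, ex_lhs):
    """Naive fixpoint over the EL completion rules for a normalized TBox.
    subs:   (A, B)       for A ⊑ B
    conjs:  (A1, A2, B)  for A1 ⊓ A2 ⊑ B
    ex_rhs: (A, r, B)    for A ⊑ ∃r.B
    ex_lhs: (r, A, B)    for ∃r.A ⊑ B
    Returns S with S[A] = all derived superconcepts of A."""
    S = {A: {A} for A in concepts}
    R = set()                       # derived edges (A, r, B): A ⊑ ∃r.B
    changed = True
    while changed:
        changed = False
        for A in concepts:
            new = {Y for (X, Y) in subs if X in S[A]}
            new |= {Y for (X1, X2, Y) in conjs if X1 in S[A] and X2 in S[A]}
            edges = {(A, r, Y) for (X, r, Y) in ex_rhs if X in S[A]}
            if not new <= S[A]:
                S[A] |= new
                changed = True
            if not edges <= R:
                R |= edges
                changed = True
        for (A, r, B) in list(R):
            for (r2, X, Y) in ex_lhs:
                if r2 == r and X in S[B] and Y not in S[A]:
                    S[A].add(Y)     # rule for ∃r.X ⊑ Y
                    changed = True
    return S

# A ⊑ ∃r.B together with ∃r.B ⊑ C entails A ⊑ C:
S = classify_el(["A", "B", "C"], subs=[], conjs=[],
                ex_rhs=[("A", "r", "B")], ex_lhs=[("r", "B", "C")])
assert "C" in S["A"]
```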
17

Anchoring Symbols to Percepts in the Fluent Calculus / Verankern von Objektsymbolen mithilfe des Fluentenkalküls

Fichtner, Matthias 04 January 2010
An abstract knowledge representation for cognitive robots, as used for reasoning and planning, typically relies on symbols denoting objects of the world and states of affairs. The process of creating and maintaining the correct connection between a symbol denoting an object and its corresponding perceptual image (called a percept), both referring to the same physical object, is called symbol anchoring. Most current cognitive systems implement an ad hoc solution that may work for the specific intended application under certain conditions. In contrast, we suggest a formal and general approach to the symbol anchoring problem that improves on previous approaches in flexibility, applicability, and expressiveness, and that completely automates the process of determining and maintaining all plausible hypotheses of correspondences between object symbols and percepts of physical objects. Being based on the first-order logical Fluent Calculus, our approach inherits its rich expressiveness with respect to knowledge representation and reasoning. Implementing all required symbol anchoring functionalities, our approach also complies with fundamental concepts of phenomenalism, representationalism, and the sense-data theory of the philosophy of cognition.
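Leaving the Fluent Calculus formalization aside, the bookkeeping behind anchoring can be pictured as maintaining, for each object symbol, the set of percepts whose observed attributes are consistent with what is asserted about the symbol. The matching criterion and all names below are illustrative assumptions, not the thesis's method.

```python
def consistent(symbol_props, percept_props, tol=0.1):
    """A percept is a candidate for a symbol if every property asserted
    about the symbol is (approximately) observed in the percept."""
    for key, val in symbol_props.items():
        if key not in percept_props:
            return False
        obs = percept_props[key]
        if isinstance(val, (int, float)):
            if abs(obs - val) > tol * max(abs(val), 1):
                return False
        elif obs != val:
            return False
    return True

def update_hypotheses(symbols, percepts):
    """Recompute all plausible symbol-to-percept correspondences."""
    return {name: {pid for pid, props in percepts.items()
                   if consistent(sprops, props)}
            for name, sprops in symbols.items()}

symbols = {"cup1": {"color": "red", "diameter": 0.08}}
percepts = {"p7": {"color": "red", "diameter": 0.082, "x": 1.3, "y": 0.4}}
print(update_hypotheses(symbols, percepts))  # {'cup1': {'p7'}}
```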
