About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

MArachna: A Semantic Analysis of Mathematical Language for a Computer-Based Information Retrieval System

Natho, Nicole. Unknown Date (PDF)
Techn. Universität Berlin, Diss., 2005.
12

Context-Related, Workflow-Based Assessment Procedures on the Basis of Semantic Knowledge Bases

Molch, Silke January 2015
This contribution presents and prototypically demonstrates application and deployment scenarios for complex, context-related real-time assessment and evaluation procedures in operational process management for interdisciplinary, holistic planning. To this end, the underlying structural and procedural prerequisites and the methods employed are briefly explained, and their coordinated interplay across the overall course of action is demonstrated.
13

Ideas Become Projects, Become Results, Become Ideas

Heller, Lambert; Hoffmann, Tracy 11 April 2017
How can librarians help shape the digital commons in the modern information society? What opportunities arise from cooperating with online communities such as Wikipedia? Who actually creates knowledge in a library? In search of answers to these questions, librarians and Wikipedia volunteers met in December of last year for the first WikiLibrary Barcamp in Dresden. / Contains the contributions: Libraries on the Net – an Island in the Ocean of Free Knowledge? The Barcamp Feeling – a First-Hand Report
14

WebKnox: Web Knowledge Extraction

Urbansky, David 21 August 2009 (PDF)
This thesis focuses on entity and fact extraction from the web. Different knowledge representations and techniques for information extraction are discussed before the design of a knowledge extraction system, called WebKnox, is introduced. The main contributions of this thesis are the trust ranking of extracted facts using a self-supervised learning loop and the extraction system itself, with its composition of known and refined extraction algorithms. The techniques used show an improvement in precision and recall for most entity and fact extraction tasks compared to the chosen baseline approaches.
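The trust-ranking idea can be sketched concretely: source trust and fact trust reinforce each other in a loop, with a fact trusted in proportion to the trust of the sources asserting it and vice versa. The following minimal Python sketch illustrates such a mutually reinforcing loop; the data, the names, and the specific update rule are invented for illustration and are not WebKnox's actual algorithm.

```python
from collections import defaultdict

# Illustrative input: (source, entity, attribute, value) tuples, as if
# produced by several extraction algorithms. Purely invented data.
extractions = [
    ("site_a", "Jupiter", "moons", "95"),
    ("site_b", "Jupiter", "moons", "95"),
    ("site_c", "Jupiter", "moons", "67"),
    ("site_a", "Mars",    "moons", "2"),
    ("site_c", "Mars",    "moons", "2"),
]

source_trust = defaultdict(lambda: 0.5)    # every source starts at neutral trust

for _ in range(10):                        # iterate until the scores stabilise
    # 1. Trust of a fact: trust mass of the sources asserting it, normalised
    #    over all competing values for the same (entity, attribute) slot.
    votes, totals = defaultdict(float), defaultdict(float)
    for src, ent, attr, val in extractions:
        votes[(ent, attr, val)] += source_trust[src]
    for (ent, attr, val), v in votes.items():
        totals[(ent, attr)] += v
    fact_trust = {k: v / totals[k[:2]] for k, v in votes.items()}

    # 2. Trust of a source: average trust of the facts it asserted.
    per_source = defaultdict(list)
    for src, ent, attr, val in extractions:
        per_source[src].append(fact_trust[(ent, attr, val)])
    for src, scores in per_source.items():
        source_trust[src] = sum(scores) / len(scores)

for fact, t in sorted(fact_trust.items(), key=lambda kv: -kv[1]):
    print(fact, round(t, 3))
```

In this toy run the agreed-upon values ("95" and "2") end up with higher trust than the outlier ("67"), which is the qualitative behaviour a self-supervised trust loop is after.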
15

Consistency-Preserving Techniques for Generatable Knowledge Bases for the Design of Embedded Systems

Sporer, Mathias 18 February 2008 (PDF)
The design process of information-processing systems is characterized by the description of storing, processing, and transmitting components on different levels of abstraction. In the past, tools have been developed for specific application domains as well as for the respective abstraction levels; they support the system designer from the requirements specification phase down to implementation and functional testing. In the design of complex systems in general and embedded systems in particular, additional problems arise: reusing components from earlier designs, transforming design knowledge across the boundaries of the abstraction levels, and integrating a variable number of domain-specific tools into the design process. A precondition for a correct design is the application-invariant preservation of the integrity of all involved design data, regardless of their representation and of which sources (such as databases, XML files, or conventional host file systems) provide them. After a discussion of the notion of integrity for conventional information systems and the extensions necessary for embedded systems, methods for modelling the design process are presented. They allow a knowledge base optimally matched to the specific development task to be generated and continuously adapted to new requirements from external tools and design methods, without requiring the user to have detailed knowledge of the underlying data model. The generatability of the knowledge base and its tools rests on a metamodel that builds on an extensible object algebra for describing the structure and behaviour of information-processing systems and that can be transformed into domain-specific target systems.
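As a loose illustration of application-invariant integrity checking over design data that may originate from databases, XML files, or plain file systems, the following Python sketch evaluates declarative integrity rules against a tiny, already normalised knowledge base. The record type, the rules, and all names are assumptions made for this sketch; they stand in for, but are not, the metamodel and object algebra of the thesis.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Iterable, List, Optional, Tuple

@dataclass
class Component:
    name: str
    abstraction_level: int          # e.g. 0 = behavioural, 1 = register transfer
    refines: Optional[str] = None   # name of the component this one refines

# An integrity rule pairs a human-readable message with a predicate that must
# hold over the whole knowledge base, regardless of where the records came from.
Rule = Tuple[str, Callable[[Dict[str, Component]], bool]]

def check_integrity(kb: Dict[str, Component], rules: Iterable[Rule]) -> List[str]:
    """Return the messages of all violated rules; an empty list means consistent."""
    return [msg for msg, holds in rules if not holds(kb)]

rules: List[Rule] = [
    ("every refinement target must exist in the knowledge base",
     lambda kb: all(c.refines in kb for c in kb.values() if c.refines)),
    ("refinement must descend exactly one abstraction level",
     lambda kb: all(kb[c.refines].abstraction_level == c.abstraction_level - 1
                    for c in kb.values() if c.refines and c.refines in kb)),
]

kb = {
    "alu_beh": Component("alu_beh", 0),
    "alu_rtl": Component("alu_rtl", 1, refines="alu_beh"),
}
print(check_integrity(kb, rules))   # [] -> all integrity rules hold
```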
16

Verification of Data-aware Business Processes in the Presence of Ontologies

Santoso, Ario 14 November 2016 (PDF)
The interplay of data, processes, and structural knowledge in modeling complex enterprise systems is a challenging task that has led to the study of combining formalisms from knowledge representation, database theory, and process management. Moreover, to ensure system correctness, formal verification comes into play as a promising approach offering well-established techniques. In line with this, significant results have been obtained within the research on data-aware business processes, which studies the marriage between the static and dynamic aspects of a system within a unified framework. However, several limitations are still present. The various formalisms for data-aware processes that have been studied typically use a simple mechanism for specifying the system dynamics. The majority of works also assume a rather simple treatment of inconsistency (i.e., rejecting inconsistent system states). Much of the research in this area that considers structural domain knowledge also assumes that such knowledge remains fixed along the system evolution (context-independent), which might be too restrictive. Moreover, the information model of data-aware processes sometimes relies on relatively simple structures. This can cause an abstraction gap between the high-level conceptual view that business stakeholders have and the low-level representation of information. When it comes to verification, taking all of the aspects above into account makes the problem more challenging. In this thesis, we investigate the verification of data-aware processes in the presence of ontologies while at the same time addressing all of the limitations above. Specifically, we provide the following contributions: (1) We propose a formal framework called Golog-KABs (GKABs) by leveraging state-of-the-art formalisms for data-aware processes equipped with ontologies. GKABs enable us to specify semantically rich data-aware business processes, where the system dynamics are specified using a high-level action language inspired by the Golog programming language. (2) We propose a parametric execution semantics for GKABs that elegantly accommodates a plethora of inconsistency-aware semantics based on the well-known notion of repair, which leads us to consider several variants of inconsistency-aware GKABs. (3) We enhance GKABs towards context-sensitive GKABs, which take contextual information into account during the system evolution. (4) We marry these two settings and introduce inconsistency-aware context-sensitive GKABs. (5) We introduce the so-called Alternating-GKABs, which allow for a more fine-grained analysis over the evolution of inconsistency-aware context-sensitive systems. (6) In addition to GKABs, we introduce a novel framework called Semantically-Enhanced Data-Aware Processes (SEDAPs) that, by utilizing ontologies, enables a high-level conceptual view over the evolution of the underlying system. We provide not only theoretical results but also an implementation of SEDAPs. Finally, we provide numerous reductions for the verification of sophisticated first-order temporal properties over all of the settings above and show that verification can be addressed using existing techniques developed for Data-Centric Dynamic Systems (a well-established data-aware processes framework), under suitable boundedness assumptions on the number of objects freshly introduced into the system while it evolves. Notably, all proposed GKAB extensions have no negative impact on computational complexity.
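The "well-known notion of repair" used in contribution (2) can be illustrated in isolation: a repair of an inconsistent fact set is a maximal subset that satisfies the constraints. The brute-force Python sketch below computes all repairs under toy denial constraints; it conveys only the notion and none of the description-logic or GKAB machinery of the thesis.

```python
from itertools import combinations

# Toy denial constraints: each is a set of facts that must not all hold together.
constraints = [
    {("status", "order1", "shipped"), ("status", "order1", "cancelled")},
]

# An inconsistent fact set (a toy "ABox"): order1 cannot be both shipped
# and cancelled at the same time.
facts = {
    ("status", "order1", "shipped"),
    ("status", "order1", "cancelled"),
    ("customer", "order1", "alice"),
}

def consistent(subset):
    """A fact set is consistent if it contains no denial constraint in full."""
    return not any(c <= subset for c in constraints)

def repairs(facts):
    """All maximal consistent subsets of `facts` (brute force, exponential)."""
    found = []
    for k in range(len(facts), -1, -1):            # try larger subsets first
        for subset in map(set, combinations(facts, k)):
            # A consistent subset is maximal iff it is not strictly contained
            # in a repair already found at a larger size.
            if consistent(subset) and not any(subset < r for r in found):
                found.append(subset)
    return found

for r in repairs(facts):
    print(sorted(r))
```

Here the two repairs keep the customer fact and exactly one of the two conflicting status facts; repair-based semantics then reason over such maximal consistent subsets instead of rejecting the inconsistent state outright.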
17

Towards a Unifying Visualization Ontology

Voigt, Martin; Polowinski, Jan 13 April 2011 (PDF)
Although many terminologies, taxonomies, and even initial ontologies for visualization have been suggested, there is still no unified, formal knowledge representation covering the various fields of this interdisciplinary domain. We moved a step towards such an ontology by systematically reviewing existing models and classifications, identifying important fields, and discussing inconsistently used terms. Finally, we specified an initial visualization ontology which can be used for both the classification and the synthesis of graphical representations. Our ontology can also serve the visualization community as a foundation to further formalize, align, and unify its existing and future knowledge.
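To give a flavour of what a formal visualization ontology usable for both classification and synthesis can look like, here is a minimal sketch using rdflib; the namespace, class, and property names are invented for this sketch and are not the ontology specified by the authors.

```python
from rdflib import Graph, Namespace, RDF, RDFS, OWL

# Hypothetical namespace; not the authors' vocabulary.
VIS = Namespace("http://example.org/vis#")

g = Graph()
g.bind("vis", VIS)

# A tiny class hierarchy: graphic representations and what they encode with.
for cls in (VIS.GraphicRepresentation, VIS.BarChart,
            VIS.DataVariable, VIS.VisualAttribute):
    g.add((cls, RDF.type, OWL.Class))
g.add((VIS.BarChart, RDFS.subClassOf, VIS.GraphicRepresentation))

# An object property linking a representation to the visual attribute it uses;
# a synthesis component could query such triples to assemble a chart.
g.add((VIS.encodesWith, RDF.type, OWL.ObjectProperty))
g.add((VIS.encodesWith, RDFS.domain, VIS.GraphicRepresentation))
g.add((VIS.encodesWith, RDFS.range, VIS.VisualAttribute))

print(g.serialize(format="turtle"))
```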
18

Consistency-Preserving Techniques for Generatable Knowledge Bases for the Design of Embedded Systems

Sporer, Mathias 16 July 2007
The design process of information-processing systems is characterized by the description of storing, processing, and transmitting components on different levels of abstraction. In the past, tools have been developed for specific application domains as well as for the respective abstraction levels; they support the system designer from the requirements specification phase down to implementation and functional testing. In the design of complex systems in general and embedded systems in particular, additional problems arise: reusing components from earlier designs, transforming design knowledge across the boundaries of the abstraction levels, and integrating a variable number of domain-specific tools into the design process. A precondition for a correct design is the application-invariant preservation of the integrity of all involved design data, regardless of their representation and of which sources (such as databases, XML files, or conventional host file systems) provide them. After a discussion of the notion of integrity for conventional information systems and the extensions necessary for embedded systems, methods for modelling the design process are presented. They allow a knowledge base optimally matched to the specific development task to be generated and continuously adapted to new requirements from external tools and design methods, without requiring the user to have detailed knowledge of the underlying data model. The generatability of the knowledge base and its tools rests on a metamodel that builds on an extensible object algebra for describing the structure and behaviour of information-processing systems and that can be transformed into domain-specific target systems.
19

Verification of Data-aware Business Processes in the Presence of Ontologies

Santoso, Ario 13 May 2016
The interplay of data, processes, and structural knowledge in modeling complex enterprise systems is a challenging task that has led to the study of combining formalisms from knowledge representation, database theory, and process management. Moreover, to ensure system correctness, formal verification comes into play as a promising approach offering well-established techniques. In line with this, significant results have been obtained within the research on data-aware business processes, which studies the marriage between the static and dynamic aspects of a system within a unified framework. However, several limitations are still present. The various formalisms for data-aware processes that have been studied typically use a simple mechanism for specifying the system dynamics. The majority of works also assume a rather simple treatment of inconsistency (i.e., rejecting inconsistent system states). Much of the research in this area that considers structural domain knowledge also assumes that such knowledge remains fixed along the system evolution (context-independent), which might be too restrictive. Moreover, the information model of data-aware processes sometimes relies on relatively simple structures. This can cause an abstraction gap between the high-level conceptual view that business stakeholders have and the low-level representation of information. When it comes to verification, taking all of the aspects above into account makes the problem more challenging. In this thesis, we investigate the verification of data-aware processes in the presence of ontologies while at the same time addressing all of the limitations above. Specifically, we provide the following contributions: (1) We propose a formal framework called Golog-KABs (GKABs) by leveraging state-of-the-art formalisms for data-aware processes equipped with ontologies. GKABs enable us to specify semantically rich data-aware business processes, where the system dynamics are specified using a high-level action language inspired by the Golog programming language. (2) We propose a parametric execution semantics for GKABs that elegantly accommodates a plethora of inconsistency-aware semantics based on the well-known notion of repair, which leads us to consider several variants of inconsistency-aware GKABs. (3) We enhance GKABs towards context-sensitive GKABs, which take contextual information into account during the system evolution. (4) We marry these two settings and introduce inconsistency-aware context-sensitive GKABs. (5) We introduce the so-called Alternating-GKABs, which allow for a more fine-grained analysis over the evolution of inconsistency-aware context-sensitive systems. (6) In addition to GKABs, we introduce a novel framework called Semantically-Enhanced Data-Aware Processes (SEDAPs) that, by utilizing ontologies, enables a high-level conceptual view over the evolution of the underlying system. We provide not only theoretical results but also an implementation of SEDAPs. Finally, we provide numerous reductions for the verification of sophisticated first-order temporal properties over all of the settings above and show that verification can be addressed using existing techniques developed for Data-Centric Dynamic Systems (a well-established data-aware processes framework), under suitable boundedness assumptions on the number of objects freshly introduced into the system while it evolves. Notably, all proposed GKAB extensions have no negative impact on computational complexity.
20

WebKnox: Web Knowledge Extraction

Urbansky, David 26 January 2009
This thesis focuses on entity and fact extraction from the web. Different knowledge representations and techniques for information extraction are discussed before the design of a knowledge extraction system, called WebKnox, is introduced. The main contributions of this thesis are the trust ranking of extracted facts using a self-supervised learning loop and the extraction system itself, with its composition of known and refined extraction algorithms. The techniques used show an improvement in precision and recall for most entity and fact extraction tasks compared to the chosen baseline approaches.
