About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A framework for semantic web implementation based on context-oriented controlled automatic annotation.

Hatem, Muna Salman, January 2009
The Semantic Web is the vision of the future Web. Its aim is to enable machines to process Web documents in a way that allows software to "understand" the meaning of the document contents. Each document on the Semantic Web is to be enriched with metadata that expresses the semantics of its contents. Many infrastructures, technologies and standards have been developed and have proven their theoretical value for the Semantic Web, yet very few applications have been created, and most current Semantic Web applications were developed for research purposes. This project investigates the major factors restricting the widespread adoption of Semantic Web applications. We identify the two most important requirements for a successful implementation as the automatic production of semantically annotated documents and the creation and maintenance of a semantics-based knowledge base. This research proposes a framework for Semantic Web implementation based on context-oriented controlled automatic annotation; we call the framework the Semantic Web Implementation Framework (SWIF) and the system that implements it the Semantic Web Implementation System (SWIS). The proposed architecture provides for a Semantic Web implementation of stand-alone websites that automatically annotates Web pages before they are uploaded to an intranet or the Internet, and maintains persistent storage of Resource Description Framework (RDF) data for both the domain memory, denoted Control Knowledge, and the metadata of the Web site's pages. We believe that the presented implementation of the major parts of SWIS is competitive with current state-of-the-art annotation tools and knowledge management systems, because it handles input documents in the context in which they are created and automatically learns and verifies knowledge using only the available computerized corporate databases. In this work, we introduce the concept of Control Knowledge (CK), which represents the application's domain memory, and use it to verify the extracted knowledge. Learning is based on the number of occurrences of the same piece of information in different documents. We introduce the concept of verifiability in the context of annotation, comparing the meaning of the extracted text with the information in the CK through the proposed database table Verifiability_Tab. We use the linguistic concept of thematic roles to identify the correct meaning of words in text documents, which supports correct relation extraction; the verb lexicon used contains the argument structure of each verb together with the thematic structure of its arguments. We also introduce a new method to chunk conjoined statements and identify the missing subject of the resulting clauses. We use semantic classes of verbs, each relating a list of verbs to a single property in the ontology, which helps disambiguate the verb in the input text and enables better information extraction and annotation. Consequently, we propose the following definition for the annotated document, sometimes called the "Intelligent Document": "The Intelligent Document is the document that clearly expresses its syntax and semantics for human use and software automation." This work introduces a promising improvement to the quality of the automatically generated annotated document and of the automatically extracted information in the knowledge base.
Our approach to applying Semantic Web technology opens new opportunities for diverse application areas; e-learning applications, for example, can be greatly improved and become more effective.
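To make the annotation step the abstract describes more concrete, the following minimal Python sketch (using rdflib) shows how extracted (subject, verb, object) statements could be mapped to ontology properties through verb semantic classes and stored as RDF page metadata. The namespace, verb table and function names are hypothetical illustrations, not the author's SWIS code.

    # Illustrative sketch only: map extracted (subject, verb, object) statements to
    # ontology properties via hypothetical verb semantic classes, then store the
    # result as RDF metadata for the page that produced them.
    from rdflib import Graph, Namespace, URIRef

    EX = Namespace("http://example.org/ontology#")   # assumed example ontology

    # Hypothetical semantic classes: several verbs share one ontology property.
    VERB_CLASS = {
        "teaches": EX.teaches, "lectures": EX.teaches, "instructs": EX.teaches,
        "manages": EX.manages, "heads": EX.manages, "supervises": EX.manages,
    }

    def annotate_page(page_uri, statements):
        """Turn extracted (subject, verb, object) strings into RDF page metadata."""
        g = Graph()
        g.bind("ex", EX)
        page = URIRef(page_uri)
        for subj, verb, obj in statements:
            prop = VERB_CLASS.get(verb.lower())
            if prop is None:                      # verb outside known semantic classes
                continue
            s = EX[subj.replace(" ", "_")]
            o = EX[obj.replace(" ", "_")]
            g.add((s, prop, o))
            g.add((s, EX.extractedFrom, page))    # provenance: which page asserted it
        return g

    g = annotate_page("http://example.org/staff.html",
                      [("Dr Smith", "lectures", "Knowledge Engineering")])
    print(g.serialize(format="turtle"))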
2

Modélisation des signes dans les ontologies biomédicales pour l'aide au diagnostic / Modeling signs in biomedical ontologies to support diagnosis.

Donfack Guefack, Pierre Sidoine V., 20 December 2013
Introduction: Making a reliable medical diagnosis requires identifying the patient's disease from the observation of signs and symptoms. Ontologies provide an adequate and efficient formalism for representing biomedical knowledge; however, classical ontologies cannot represent the knowledge involved in the medical diagnostic process, namely probabilistic knowledge and imprecise or vague knowledge. Material and methods: We propose general knowledge representation methods for building ontologies suited to medical diagnosis. They allow representing: (a) imprecise or vague knowledge by discretising concepts (defining several distinct categories using threshold values, or representing the various possible modalities); (b) probabilistic knowledge (the sensitivity and specificity of signs for diseases, and the prevalence of diseases in a given population) by reifying relations of arity greater than 2; (c) absent signs by relations; and (d) the knowledge involved in diagnostic reasoning, including reasoning on absent signs, by SWRL rules. An abductive reasoning engine and a probabilistic reasoning engine were designed and implemented. The methods were evaluated on real patient records.
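As an illustration of the reification idea in point (b) above, the sketch below encodes the sensitivity and specificity of a sign for a disease as a dedicated association node rather than as a simple binary relation; the namespace and class and property names are assumed for the example, not taken from the thesis's actual ontology.

    # Illustrative reification of a relation of arity > 2: the association between a
    # sign and a disease carries probabilistic attributes (sensitivity, specificity),
    # so it is modelled as its own node instead of a direct sign->disease property.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    ONTO = Namespace("http://example.org/diagnosis#")   # assumed example namespace

    g = Graph()
    g.bind("onto", ONTO)

    assoc = ONTO.assoc_fever_flu                         # reified association node
    g.add((assoc, RDF.type, ONTO.SignDiseaseAssociation))
    g.add((assoc, ONTO.hasSign, ONTO.Fever))
    g.add((assoc, ONTO.hasDisease, ONTO.Influenza))
    g.add((assoc, ONTO.sensitivity, Literal(0.86, datatype=XSD.decimal)))
    g.add((assoc, ONTO.specificity, Literal(0.55, datatype=XSD.decimal)))

    # Disease prevalence for a given population, attached directly to the disease.
    g.add((ONTO.Influenza, ONTO.prevalence, Literal(0.02, datatype=XSD.decimal)))

    print(g.serialize(format="turtle"))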
Results: These methods were applied to three domains (plasma cell diseases, dental emergencies, and traumatic knee injuries), for which ontological models were developed. Evaluation gave an average rate of 89.34% correct diagnoses. Discussion-Conclusion: Unlike the model proposed by Fenz, which supports only probabilistic reasoning, and the model proposed by García-Crespo, which expresses probabilities outside the ontological model, the proposed methods yield a single model usable for both abductive and probabilistic reasoning. Using such a system in practice will first require integrating it into the hospital information system so that information from the electronic patient record can be exploited automatically; this integration could be eased by the use of the ontology on which the system is based.
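To give a feel for the probabilistic side of the reasoning described above, here is a small naive-Bayes-style scoring sketch that combines disease prevalence with sign sensitivity and specificity for present and absent signs. It is an assumed simplification for illustration (conditional independence of signs given the disease), not the engine implemented in the thesis, and the figures are made up.

    # Naive-Bayes-style diagnostic scoring (illustrative simplification):
    #   P(D | signs) is proportional to P(D) * product of P(sign_i | D)
    # A present sign contributes its sensitivity, an absent sign contributes
    # (1 - sensitivity); the "no disease" branch uses specificity symmetrically.

    def posterior(prevalence, sign_stats, present, absent):
        """sign_stats maps each sign to a (sensitivity, specificity) pair."""
        p_d, p_not_d = prevalence, 1.0 - prevalence
        for sign in present:
            sens, spec = sign_stats[sign]
            p_d *= sens                 # P(sign present | disease)
            p_not_d *= (1.0 - spec)     # P(sign present | no disease)
        for sign in absent:
            sens, spec = sign_stats[sign]
            p_d *= (1.0 - sens)         # P(sign absent | disease)
            p_not_d *= spec             # P(sign absent | no disease)
        return p_d / (p_d + p_not_d)

    stats = {"fever": (0.86, 0.55), "cough": (0.80, 0.60)}   # made-up figures
    print(round(posterior(0.02, stats, present=["fever", "cough"], absent=[]), 3))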
