1.
Construction, Évolution et Visualisation de Topic Maps contextualisées / Construction, evolution and visualisation of contextualized Topic Maps. Khelifa, Lydia Nadia, 18 December 2014.
This thesis addresses the construction and evolution of Topic Maps as semantic resources used to describe and organise multidisciplinary and multilingual content. The Topic Map is designed to handle variation in the meaning of terms so as to support better information retrieval within that content, and an approach for visualising the Topic Map is also proposed. The problem addressed here arose from FSP-Maghreb, a Franco-Maghrebi project initiated by the FMSH (Fondation Maison des Sciences Humaines et Sociales) to promote exchanges between French and Maghrebi researchers and the sharing of knowledge in the humanities and social sciences. The project consists in building and deploying a multilingual, multicultural Wiktionnaire (an electronic dictionary implemented with semantic wiki technology) for the humanities and social sciences.
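Topic Maps handle this kind of contextual variation through scope, which restricts a name or occurrence to the contexts (language, discipline) in which it is valid. The following Python sketch is purely illustrative and not code from the thesis; the topic, the scope labels and the subset-based matching rule are invented assumptions:

```python
# Illustrative sketch only (not from the thesis): a topic whose names are
# scoped by language and discipline, one simple reading of how Topic Maps
# scope can capture variation of meaning across contexts.
topic = {
    "id": "culture",
    "names": [
        {"value": "culture", "scope": {"lang:fr", "domain:sociologie"}},
        {"value": "culture", "scope": {"lang:en", "domain:sociology"}},
        {"value": "thaqafa", "scope": {"lang:ar"}},          # invented example
    ],
}

def names_in_context(topic, context):
    # A name applies when its scope is contained in the query context
    # (simplified matching rule assumed here).
    return [n["value"] for n in topic["names"] if n["scope"] <= set(context)]

print(names_in_context(topic, {"lang:fr", "domain:sociologie"}))  # ['culture']
```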
2.
Využití XSLT při zpracování Topic Maps / Using XSLT to process Topic Maps. Janeček, Petr, January 2007.
This master's thesis examines the possibilities of using XSLT 2.0 to process Topic Maps. The aim is to describe the benefits of combining XSLT and Topic Maps in a web environment. All related technologies are described, in particular TMAPI, XSLT, XPath and XTM, the XML format for storing topic maps. To verify these possibilities, an interface analogous to the TMAPI interface is created in XSLT 2.0, along with a sample stylesheet that uses it to transform XTM into HTML. The combination of these technologies was found to be well suited to the web environment.
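As a rough illustration of the XTM-to-HTML pipeline the thesis describes, the sketch below runs an XSLT stylesheet over an XTM file with lxml. Note that lxml/libxslt implements XSLT 1.0, whereas the thesis targets XSLT 2.0 (typically run with a processor such as Saxon); the file name and stylesheet are invented examples, not the thesis's templates:

```python
# A shape-only sketch: transform an XTM topic map into an HTML list of topic
# names. lxml supports XSLT 1.0 only; the thesis works with XSLT 2.0.
from lxml import etree

stylesheet = etree.XML("""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xtm="http://www.topicmaps.org/xtm/">
  <xsl:output method="html"/>
  <xsl:template match="/xtm:topicMap">
    <ul>
      <!-- one list item per topic name in the map -->
      <xsl:for-each select="xtm:topic/xtm:name/xtm:value">
        <li><xsl:value-of select="."/></li>
      </xsl:for-each>
    </ul>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(stylesheet)
doc = etree.parse("topicmap.xtm")   # hypothetical XTM 2.0 input file
print(str(transform(doc)))
```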
3.
Natūralios kalbos apdorojimo terminų ontologija: kūrimo problemos ir jų sprendimo būdai / Ontology of natural language processing terms: development issues and their solutions. Ramonas, Vilmantas, 17 June 2010.
This work discusses the development of an ontology of natural language processing (NLP) terms, the problems encountered, and ways of solving them. For this purpose, 217 NLP terms were collected from different sources and translated into Lithuanian; the translation problems are briefly discussed. Both computational and philosophical ontologies are described, together with their similarities and differences, and the philosophical view of the similarity of concepts and things is treated in more detail, as background needed to understand the principles on which computational ontologies are built. The term NLP itself is also examined: what NLP comprises, which natural language processing technologies have already been developed and which are still under development.
Having chosen the structure and principles of Topic Maps for the ontology of NLP terms, the principles of building Topic Maps (TM) are described at length, along with the main TM building blocks: topics, topic names, associations, roles played in associations, and others.
From the collected terms a tree was then drawn, initially keeping the structure found in the sources. It became clear that the number of terms had to be reduced and the structure inherited from the sources abandoned, so only 69 terms, assumed to be the most important, were retained and divided into groups by assigning them a dozen or so types. In search of a still better classification, each term was then given one or more meta-descriptions that characterise it best (e.g. machine translation: translation, high level of automation), and all the meta-descriptions were grouped into 7 top-level groups... [see full text]
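For readers unfamiliar with the Topic Maps building blocks listed above, the following sketch shows, in plain Python data structures, how one of the NLP terms from the abstract could be modelled as a topic taking part in typed associations with roles. The identifiers and association types are invented for illustration and do not come from the thesis:

```python
# Illustrative only: topics, topic names, associations and association roles,
# applied to one NLP term mentioned in the abstract. All identifiers invented.
topics = {
    "machine-translation": {"names": ["machine translation", "mašininis vertimas"]},
    "nlp-application":     {"names": ["NLP application"]},
    "translation":         {"names": ["translation"]},
}

associations = [
    {"type": "instance-of",
     "roles": [("instance", "machine-translation"), ("class", "nlp-application")]},
    {"type": "specialisation-of",
     "roles": [("narrower", "machine-translation"), ("broader", "translation")]},
]

def players(assoc_type, role):
    """All topics playing a given role in associations of a given type."""
    return [p for a in associations if a["type"] == assoc_type
            for r, p in a["roles"] if r == role]

print(players("instance-of", "instance"))   # ['machine-translation']
```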
4.
Extracting metadata from textual documents and utilizing metadata for adding textual documents to an ontology. Caubet, Marc; Cifuentes, Mònica, January 2006.
The term Ontology is borrowed from philosophy, where an ontology is a systematic account of Existence. In Computer Science, an ontology is a tool enabling the effective use of information, making it understandable and accessible to the computer. For these reasons, the study of ontologies has attracted growing interest in recent years. Our motivation is to create a tool able to build ontologies from a set of textual documents. We present a prototype implementation which extracts metadata from textual documents and uses that metadata to add the documents to an ontology. In this paper we investigate which techniques are available and which ones we have used to address the problem. Finally, we present a program written in Java which builds ontologies from textual documents using our approach.
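The prototype described above is written in Java and its actual techniques are not detailed in the abstract; the toy Python sketch below only illustrates the general idea of extracting crude keyword metadata from a text and attaching the document to ontology concepts whose labels match. The concept identifiers, labels and matching rule are invented assumptions:

```python
# Toy sketch (not the thesis prototype): keyword extraction plus label matching.
import re
from collections import Counter

ontology = {  # invented concept ids -> label sets
    "concept:ontology": {"ontology", "ontologies"},
    "concept:metadata": {"metadata"},
    "concept:document": {"document", "documents", "text"},
}

def extract_metadata(text, top_n=20):
    """Crude metadata: the most frequent word tokens in the document."""
    words = re.findall(r"[a-z]+", text.lower())
    return [w for w, _ in Counter(words).most_common(top_n)]

def attach_to_concepts(text):
    keywords = set(extract_metadata(text))
    return [cid for cid, labels in ontology.items() if labels & keywords]

doc = "An ontology makes document metadata understandable to the computer."
print(attach_to_concepts(doc))   # all three invented concepts match here
```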
5.
Podpora sémantiky v CMS Drupal / Support of Semantics in CMS Drupal. Kubaliak, Lukáš, January 2011.
This work concerns the support of semantics in well-known content management systems. It describes the possible uses of these technologies and their public availability. We find that today's technologies and methods are still at the stage of being introduced to the public. To address semantic support in CMS Drupal, we developed a tool that extends its support for semantic formats: it allows CMS Drupal to export its information in the Topic Maps format, using XTM files.
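The exporter itself is a Drupal module (PHP); the Python sketch below only illustrates what "export content as XTM" amounts to: each content node becomes a topic with a name and an occurrence pointing back at the node. The node data are invented, and a fully conformant XTM 2.0 occurrence would also carry a type, which is omitted here for brevity:

```python
# Simplified, illustrative XTM-style export of CMS content (not the Drupal module).
import xml.etree.ElementTree as ET

nodes = [  # invented sample content
    {"id": "node-1", "title": "Topic Maps",   "url": "http://example.org/node/1"},
    {"id": "node-2", "title": "Semantic Web", "url": "http://example.org/node/2"},
]

XTM = "http://www.topicmaps.org/xtm/"
ET.register_namespace("", XTM)
topic_map = ET.Element("{%s}topicMap" % XTM, {"version": "2.0"})

for node in nodes:
    topic = ET.SubElement(topic_map, "{%s}topic" % XTM, {"id": node["id"]})
    name = ET.SubElement(topic, "{%s}name" % XTM)
    ET.SubElement(name, "{%s}value" % XTM).text = node["title"]
    occurrence = ET.SubElement(topic, "{%s}occurrence" % XTM)   # type omitted
    ET.SubElement(occurrence, "{%s}resourceRef" % XTM, {"href": node["url"]})

print(ET.tostring(topic_map, encoding="unicode"))
```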
6.
Konzeption und Implementierung einer semantischen Suchmaschine für Topic Maps / Design and implementation of a semantic search engine for Topic Maps. Windisch, Sven, 27 February 2018.
In recent years, Topic Maps technology has become increasingly important among data integration technologies. For querying information directly from a Topic Map, the Topic Maps query language TMQL provides a powerful tool; to use it, however, the user must both know the query language and be familiar with the schema of the Topic Map. A search engine is therefore needed that allows even inexperienced users to search the Topic Maps data. After an introduction to the relevant Topic Maps fundamentals, several indexing algorithms specialised for Topic Maps data are examined; the indexing of virtually merged Topic Maps constitutes a special case, for which various possible solutions are investigated. On the basis of the search engine library Lucene, a semantic search engine is developed that exploits the elements inherent in Topic Maps, carrying both explicit and implicit meaning, both during indexing and when weighting the search results. In addition, a general model for describing Topic-Maps-based facets is presented; building on it, ways of generating generic facets are examined, and a method for defining domain-specific facets with the help of the query language TMQL is designed and explained. A prototype interface through which the resulting search engine can be used in Topic-Maps-based web applications, realised as a new package for the RTM middleware, demonstrates how easily the developed search engine can be integrated into existing web applications.
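The search engine itself is built on the Java library Lucene; the schematic Python sketch below only conveys the core idea the abstract mentions: indexing a topic's names and occurrence texts as separate fields and weighting name matches more heavily when ranking results. The sample topics, field boosts and scoring rule are invented:

```python
# Schematic only (the thesis uses Lucene): field-weighted search over topics.
TOPICS = {  # invented sample data
    "puccini": {"names": ["Giacomo Puccini"], "occurrences": ["Italian opera composer"]},
    "tosca":   {"names": ["Tosca"],           "occurrences": ["Opera by Giacomo Puccini"]},
}

FIELD_BOOST = {"names": 2.0, "occurrences": 1.0}   # names count for more

def search(query):
    terms = query.lower().split()
    scores = {}
    for topic_id, fields in TOPICS.items():
        score = sum(boost
                    for field, boost in FIELD_BOOST.items()
                    for text in fields[field]
                    for term in terms if term in text.lower())
        if score:
            scores[topic_id] = score
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(search("puccini"))   # [('puccini', 2.0), ('tosca', 1.0)]
```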
7.
Designing a Griotte for the Global Village: Increasing the Evidentiary Value of Oral Histories for Use in Digital Libraries. Dunn, Rhonda Thayer, August 2011.
A griotte in West African culture is a female professional storyteller, responsible for preserving a tribe's history and genealogy by relaying its folklore in oral and musical recitations. Similarly, Griotte is an interdisciplinary project that seeks to foster collaboration between tradition bearers, subject experts, and computer specialists in an effort to build high quality digital oral history collections. To accomplish this objective, this project preserves the primary strength of oral history, namely its ability to disclose "our" intangible culture, and addresses its primary criticism, namely its dubious reliability due to reliance on human memory and integrity. For a theoretical foundation and a systematic model, William Moss's work on the evidentiary value of historical sources is employed. Using his work as a conceptual framework, along with Semantic Web technologies (e.g. Topic Maps and ontologies), a demonstrator system is developed to provide digital oral history tools to a "sample" of the target audience(s).
This demonstrator system is evaluated via two methods: 1) a case study conducted to employ the system in the actual building of a digital oral history collection (this step also created sample data for the following assessment), and 2) a survey which involved a task-based evaluation of the demonstrator system. The results of the survey indicate that integrating oral histories with documentary evidence increases the evidentiary value of oral histories. Furthermore, the results imply that individuals are more likely to use oral histories in their work if their evidentiary value is increased. The contributions of this research – primarily in the area of organizing metadata on the World Wide Web – and considerations for future research are also provided.
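As a purely illustrative reading of the abstract's central idea, linking oral histories to documentary evidence so their evidentiary value can be surfaced, the sketch below pairs each (invented) interview claim with the documents said to corroborate it. It is not the Griotte system and does not implement Moss's framework:

```python
# Invented records; a minimal sketch of 'oral history + corroborating documents'.
interviews = {
    "oh-17": {"claim": "The mill closed in 1952.",
              "corroborated_by": ["newspaper-1952-04-02", "county-record-388"]},
    "oh-18": {"claim": "The schoolhouse burned twice.",
              "corroborated_by": []},
}

def evidentiary_note(item):
    n = len(item["corroborated_by"])
    return "corroborated by %d document(s)" % n if n else "uncorroborated (memory only)"

for oid, item in interviews.items():
    print(oid, "-", item["claim"], evidentiary_note(item))
```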
8.
Scalable Preservation, Reconstruction, and Querying of Databases in terms of Semantic Web Representations. Stefanova, Silvia, January 2013.
This Thesis addresses how Semantic Web representations, in particular RDF, can enable flexible and scalable preservation, recreation, and querying of databases. An approach has been developed for selective scalable long-term archival of relational databases (RDBs) as RDF, implemented in the SAQ (Semantic Archive and Query) system. The archival of user-specified parts of an RDB is specified using an extension of SPARQL, A-SPARQL. SAQ automatically generates an RDF view of the RDB, the RD-view. The result of an archival query is RDF triples stored in: i) a data archive file containing the preserved RDB content, and ii) a schema archive file containing sufficient meta-data to reconstruct the archived database. To achieve scalable data preservation and recreation, SAQ uses special query rewriting optimizations for the archival queries. It was experimentally shown that they improve query execution and archival time compared with naïve processing. The performance of SAQ was compared with that of other systems supporting SPARQL queries to views of existing RDBs. When an archived RDB is to be recreated, the reloader module of SAQ first reads the schema archive file and executes a schema reconstruction algorithm to automatically construct the RDB schema. The thus created RDB is populated by reading the data archive and converting the read data into relational attribute values. For scalable recreation of RDF archived data we have developed the Triple Bulk Load (TBL) approach where the relational data is reconstructed by using the bulk load facility of the RDBMS. Our experiments show that the TBL approach is substantially faster than the naïve Insert Attribute Value (IAV) approach, despite the added sorting and post-processing. To view and query semi-structured Topic Maps data as RDF the prototype system TM-Viewer was implemented. A declarative RDF view of Topic Maps, the TM-view, is automatically generated by the TM-viewer using a developed conceptual schema for the Topic Maps data model. To achieve efficient query processing of SPARQL queries to the TM-view query rewrite transformations were developed and evaluated. It was shown that they significantly improve the query execution time. / eSSENCE
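The row-to-triple mapping at the heart of this archival approach can be pictured with a small rdflib sketch: each relational row becomes an RDF resource with one triple per column, which can then be queried with SPARQL. This is not the SAQ system and not A-SPARQL, and the table, URIs and data are invented:

```python
# Minimal illustration of 'relational rows as RDF triples', queried with SPARQL.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/archive/")   # invented namespace
g = Graph()

rows = [{"id": 1, "name": "Ada",   "dept": "CS"},
        {"id": 2, "name": "Grace", "dept": "CS"}]

for row in rows:                                 # one resource per row
    subject = EX["employee/%d" % row["id"]]
    g.add((subject, RDF.type, EX.Employee))
    g.add((subject, EX.name, Literal(row["name"])))
    g.add((subject, EX.dept, Literal(row["dept"])))

query = """
PREFIX ex: <http://example.org/archive/>
SELECT ?name WHERE { ?e a ex:Employee ; ex:dept "CS" ; ex:name ?name }
"""
for (name,) in g.query(query):
    print(name)                                  # Ada, Grace
```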
9.
Introduction de raisonnement dans un outil industriel de gestion des connaissances / Introducing reasoning into an industrial knowledge management tool. Carloni, Olivier, 24 November 2008.
The thesis work presented in this document concerns the design of an annotation validation and enrichment service for an industrial knowledge management tool based on the Topic Maps (TM) language. Since such a service requires reasoning over the knowledge, the TM language had to be given a formal semantics. This was achieved through a reversible transformation from TM to the logical formalism of conceptual graphs, which offers a graphical representation of knowledge (and Topic Maps can easily be given one). The solution was implemented in two applications, one designed for media monitoring and the other for the promotion of tourism resources. Schematically, annotations are extracted automatically from documents according to the domain concerned (news/economy or tourism) and added to the knowledge base. They are then passed to the enrichment and validation service, which completes them with new knowledge and decides on their validity, and finally returns the result of the enrichment and validation to the knowledge base.
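To give a flavour of what mapping Topic Maps constructs to conceptual graphs involves, the sketch below turns a single (invented) association from the tourism application's domain into a conceptual-graph-style relation node whose role-labelled edges point to concept nodes. It is only a schematic illustration, not the reversible transformation defined in the thesis:

```python
# Schematic only: one Topic Maps association rendered as a CG-style relation node.
def association_to_cg(association):
    relation = {"relation": association["type"], "edges": []}
    for role, player in association["roles"]:
        relation["edges"].append({"role": role, "concept": player})
    return relation

association = {"type": "located-in",        # invented tourism example
               "roles": [("located", "hotel-miramar"), ("container", "agadir")]}

print(association_to_cg(association))
# {'relation': 'located-in', 'edges': [{'role': 'located', 'concept': 'hotel-miramar'},
#                                      {'role': 'container', 'concept': 'agadir'}]}
```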
10.
Weaving the semantic web: Contributions and insights. Cregan, Anne, Computer Science & Engineering, Faculty of Engineering, UNSW, January 2008.
The semantic web aims to make the meaning of data on the web explicit and machine processable. Harking back to Leibniz in its vision, it imagines a world of interlinked information that computers 'understand' and 'know' how to process based on its meaning. Spearheaded by the World Wide Web Consortium, the ontology languages OWL and RDF form the core of the current technical offerings. RDF has successfully enabled the construction of virtually unlimited webs of data, whilst OWL gives the ability to express complex relationships between RDF data triples. However, the formal semantics of these languages limit themselves to that aspect of meaning that can be captured by mechanical inference rules, leaving many open questions as to other aspects of meaning and how they might be made machine processable. The Semantic Web has faced a number of problems that are addressed by the included publications. Its germination within academia and logical semantics has seen it struggle to become familiar, accessible and implementable for the general IT population, so an overview of semantic technologies is provided. Faced with competing 'semantic' languages, such as the ISO's Topic Map standards, a method for building ISO-compliant Topic Maps in the OWL DL language has been provided, enabling them to take advantage of the more mature OWL language and tools. Supplementation with rules is needed to deal with many real-world scenarios, and this is explored as a practical exercise. The available syntaxes for OWL have hindered domain experts in ontology building, so a natural language syntax for OWL designed for use by non-logicians is offered and compared with similar offerings. In recent years, the proliferation of ontologies has resulted in far more than are needed in any given domain space, so a mechanism is proposed to facilitate the reuse of existing ontologies by giving contextual information and leveraging social factors to encourage wider adoption of common ontologies and achieve interoperability. Lastly, the question of meaning is addressed in relation to the need to define one's terms and to ground one's symbols by anchoring them effectively, ultimately providing the foundation for evolving a 'Pragmatic Web' of action.
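The abstract mentions a method for building ISO-compliant Topic Maps in OWL DL; the rdflib sketch below only shows the general flavour of such a rendering, with a topic type as an OWL class, a topic as an instance of that class, and its base name as an rdfs:label. The URIs are invented and this is not Cregan's actual mapping:

```python
# Flavour only: a topic and its type expressed as OWL/RDF statements.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/tm2owl/")    # invented namespace
g = Graph()

g.add((EX.Composer, RDF.type, OWL.Class))                     # topic type -> owl:Class
g.add((EX.puccini, RDF.type, EX.Composer))                    # topic as instance
g.add((EX.puccini, RDFS.label, Literal("Giacomo Puccini", lang="en")))  # base name

print(g.serialize(format="turtle"))
```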