31

Annotation et recherche contextuelle des documents multimédias socio-personnels / Context-aware annotation and retrieval of socio-personal multimedia documents

Lajmi, Sonia 11 March 2011
The objective of this thesis is to provide user-centred means of representing, acquiring, enriching and exploiting the metadata that describe socio-personal multimedia documents. To achieve this goal, we propose an annotation model, called SeMAT, with a new vision of the snapshot context, and we use external semantic resources such as GeoNames and Wikipedia to enrich the annotations automatically from the captured contextual elements.
To strengthen the semantic aspect of the annotations, we model the notion of a social profile with Semantic Web tools, focusing in particular on social relationships and on a reasoning mechanism that infers new, non-explicit social relationships. The proposed model, called SocialSphere, provides a way of personalising annotations according to the person viewing the documents (the viewer); examples of personalised annotations are user objects (e.g. home, work) or social dimensions (e.g. my mother, my husband's cousin). In this context, we propose an algorithm, called SQO, that suggests to the viewer, according to his or her social profile, social dimensions describing the actors of a multimedia document. To suggest events describing multimedia documents, we reuse the experience of the user and of his social network by producing association rules. In the last part, we address the problem of matching a query against the social graph. We reduce this matching problem to a partial sub-graph isomorphism problem and propose an algorithm, called h-Pruning, that performs an approximate matching between the nodes of the two graphs: the pattern graph (representing the query) and the social graph. For the implementation, we built a prototype with two components: a mobile component that captures the contextual elements when socio-personal multimedia documents are created, and a web component that assists the user in annotating and browsing these documents. The evaluation was carried out on a test collection built from the Flickr social media service. The tests showed: (i) the efficiency of our social-graph search approach in terms of execution time; (ii) the effectiveness of our event suggestion approach (we confirmed our hypothesis by demonstrating a co-occurrence between the spatio-temporal context and events); and (iii) the efficiency of our social dimension suggestion approach in terms of execution time.
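As a rough illustration of the graph-matching step described above (and not the thesis's actual h-Pruning algorithm), the following Python sketch matches a small query pattern graph against a social graph using simple degree-based pruning; the graphs and node names are invented.

```python
# Illustrative sketch (not the thesis's h-Pruning): naive matching of a small
# query pattern graph against a social graph, with degree-based pruning.
# Graphs are plain adjacency dicts; node names stand in for people/objects.

def subgraph_matches(pattern, graph):
    """Yield mappings {pattern node -> graph node} preserving pattern edges."""
    p_nodes = list(pattern)

    def extend(mapping):
        if len(mapping) == len(p_nodes):
            yield dict(mapping)
            return
        p = p_nodes[len(mapping)]
        for g in graph:
            if g in mapping.values():
                continue
            # Prune candidates with too few neighbours to host the pattern node.
            if len(graph[g]) < len(pattern[p]):
                continue
            # Every already-mapped pattern neighbour must be a graph neighbour.
            if all(mapping[q] in graph[g] for q in pattern[p] if q in mapping):
                mapping[p] = g
                yield from extend(mapping)
                del mapping[p]

    yield from extend({})

# Toy example: "a person connected to two other people" matched in a small graph.
pattern = {"x": {"y", "z"}, "y": {"x"}, "z": {"x"}}
social  = {"alice": {"bob", "carol"}, "bob": {"alice"}, "carol": {"alice"}}
print(next(subgraph_matches(pattern, social)))
# -> {'x': 'alice', 'y': 'bob', 'z': 'carol'}
```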
32

Intégration des approches ontologiques et d'ingénierie dirigée par les modèles pour la résolution de problèmes d'interopérabilité / Integration of model driven engineering and ontology approaches for solving interoperability issues

Liu, Hui 13 October 2011
When enterprises collaborate with each other to achieve their business objectives, interoperability problems are encountered. In order to solve these problems, this thesis analyzes five related research domains: collaborative business processes, MDA, SOA, ESB and ontology. On this basis, we propose a framework integrating these five domains for IT (information technology) solutions to interoperability problems. To realize the framework, we propose a Process-Based Method for Enterprise Interoperability (PBMEI), which uses collaborative processes to represent collaboration requirements between enterprises. PBMEI transforms collaborative processes into several executable interoperability processes through model transformations. In PBMEI, an ontology is used to annotate the collaborative processes; during the transformation, new ontological information is added to the processes to make them executable. To support the execution of the interoperability processes, we designed an ontology-based and goal-driven (OBGD) semantic service bus. This bus is based on a symmetric mechanism for OBGD service invocation. The mechanism relies on an extension of SOAP (Simple Object Access Protocol) composed of three parts: the OBGD message format, the OBGD module and the OBGD processing model. It provides three transparency properties (location, semantics and technique) that are essential to the execution of interoperability processes. The bus also supports federated deployment for inter-enterprise interoperability. Together, PBMEI and the OBGD bus constitute a federated approach to solving interoperability problems.
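To make the idea of a SOAP header extension more concrete, here is a minimal, purely illustrative Python sketch that builds a SOAP envelope whose header carries a goal description a semantic bus could use to select a concrete service; the namespace, element names and fields are invented and do not correspond to the actual OBGD message format.

```python
# Illustrative sketch only: a SOAP envelope whose header carries goal-driven
# routing information, in the spirit of a SOAP header extension.
import xml.etree.ElementTree as ET

SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
OBGD = "http://example.org/obgd"  # hypothetical extension namespace

ET.register_namespace("soap", SOAP)
ET.register_namespace("obgd", OBGD)

envelope = ET.Element(f"{{{SOAP}}}Envelope")
header = ET.SubElement(envelope, f"{{{SOAP}}}Header")

# Hypothetical goal description the bus would use to select a concrete service.
goal = ET.SubElement(header, f"{{{OBGD}}}Goal")
ET.SubElement(goal, f"{{{OBGD}}}ConceptURI").text = "http://example.org/onto#ShipOrder"
ET.SubElement(goal, f"{{{OBGD}}}Input").text = "http://example.org/onto#PurchaseOrder"

body = ET.SubElement(envelope, f"{{{SOAP}}}Body")
ET.SubElement(body, f"{{{OBGD}}}Payload").text = "order-42"

print(ET.tostring(envelope, encoding="unicode"))
```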
33

Semantically-enriched and semi-autonomous collaboration framework for the Web of Things: design, implementation and evaluation of a multi-party collaboration framework with semantic annotation and representation of sensors in the Web of Things and a case study on disaster management

Amir, Mohammad January 2015
This thesis proposes a collaboration framework for the Web of Things based on the concepts of Service-oriented Architecture and integrated with Semantic Web technologies, offering new possibilities for efficient asset management during operations requiring multi-actor collaboration. The motivation for the project comes from the rise in disasters where effective cross-organisation collaboration can increase the efficiency of critical information dissemination. Organisational boundaries of participants, as well as their IT capabilities and trust issues, hinder the deployment of a multi-party collaboration framework, thereby preventing timely dissemination of critical data. In order to tackle some of these issues, this thesis proposes a new collaboration framework consisting of a resource-based data model, a resource-oriented access control mechanism and semantic technologies utilising the Semantic Sensor Network Ontology, which can be used simultaneously by multiple actors without impacting each other’s networks and thus increase the efficiency of disaster management and relief operations. The generic design of the framework enables future extensions, thus enabling its exploitation across many application domains. The performance of the framework is evaluated in two areas: the capability of the access control mechanism to scale with an increasing number of devices, and the capability of the semantic annotation process to increase in efficiency as more information is provided. The results demonstrate that the proposed framework is fit for purpose.
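As an indication of what such semantic annotation of sensors can look like in practice, the following sketch (assuming Python with the rdflib library; the sensor and observation identifiers are invented) describes a single sensor reading with the W3C SOSA/SSN vocabulary:

```python
# Minimal sketch of annotating a sensor reading with the SOSA/SSN vocabulary so
# that several organisations can interpret it uniformly. Names under
# example.org are hypothetical.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("http://example.org/disaster/")  # hypothetical resource namespace

g = Graph()
g.bind("sosa", SOSA)
g.bind("ex", EX)

# Describe the sensor and the property it observes.
g.add((EX.waterLevelSensor1, RDF.type, SOSA.Sensor))
g.add((EX.waterLevelSensor1, SOSA.observes, EX.riverWaterLevel))
g.add((EX.riverWaterLevel, RDF.type, SOSA.ObservableProperty))

# Attach one observation made by that sensor.
g.add((EX.obs1, RDF.type, SOSA.Observation))
g.add((EX.obs1, SOSA.madeBySensor, EX.waterLevelSensor1))
g.add((EX.obs1, SOSA.hasSimpleResult, Literal("3.7", datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```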
34

Anotação semântica baseada em ontologia: um estudo do português brasileiro em documentos históricos do final do século XIX / Ontology-based semantic annotation: a study of Brazilian Portuguese in historical documents from the end of the 19th century

Pereira, Juliana Wolf 01 July 2014
This dissertation presents an approach to automatic semantic annotation in historical documents from the end of the 19th century that discuss the constitution of the mother tongue, the Portuguese language in Brazil. The objective is to generate a set of documents semantically annotated in accordance with a domain ontology. To provide this domain ontology, the Instrumento Linguístico ontology was built, and it supported the automatic semantic annotation process. The results obtained with the annotation were compared with a gold standard and showed a high degree of agreement, with F1-scores between 0.86 and 1.00. In addition, it was possible to locate new documents about the domain under discussion in a sample of the Revistas Brazileiras. These results demonstrate the effectiveness of the automatic semantic annotation approach.
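As a minimal sketch of the evaluation measure used above, the following Python snippet computes an F1-score between a set of predicted annotations and a gold standard; the example annotations are invented.

```python
# Minimal sketch of scoring annotations against a gold standard with F1.
def f1_score(predicted, gold):
    """F1 over two sets of (span, concept) annotations."""
    true_pos = len(predicted & gold)
    if true_pos == 0:
        return 0.0
    precision = true_pos / len(predicted)
    recall = true_pos / len(gold)
    return 2 * precision * recall / (precision + recall)

# Hypothetical annotations: (text span, ontology concept) pairs.
gold = {("lingua patria", "Conceito:LinguaNacional"),
        ("gramatica", "Conceito:InstrumentoLinguistico")}
predicted = {("lingua patria", "Conceito:LinguaNacional")}

print(f1_score(predicted, gold))  # 2 * 1.0 * 0.5 / 1.5 = 0.667
```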
35

Extrakce strukturovaných dat z českého webu s využitím extrakčních ontologií / Extracting Structured Data from Czech Web Using Extraction Ontologies

Pouzar, Aleš January 2012
The presented thesis deals with the task of automatic information extraction from HTML documents for two selected domains: laptop offers extracted from e-shops and freely published job offers extracted from company web sites. The extraction process outputs structured data of high granularity grouped into data records, in which a corresponding semantic label is assigned to each data item. The task was performed using the extraction system Ex, which combines two approaches: manually written rules and supervised machine-learning algorithms. Thanks to expert knowledge in the form of extraction rules, the lack of training data could be overcome. The rules are independent of the specific formatting structure, so that one extraction model can be used for a heterogeneous set of documents. The results achieved for laptop offers showed that an extraction ontology describing one or a few product types can be combined with wrapper induction methods to automatically extract offers for all product types on a web scale with minimum human effort.
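The following Python sketch illustrates, in a purely simplified form, the idea behind hand-written extraction rules that assign semantic labels to data items independently of page formatting; the rules and the sample offer are invented and far simpler than a real extraction ontology.

```python
# Illustrative sketch: regular-expression rules that assign semantic labels to
# data items found in a product description, independent of HTML formatting.
import re

RULES = {  # hypothetical rules, not the Ex system's actual rule language
    "ram":       re.compile(r"\b(\d+)\s*GB\s*RAM\b", re.I),
    "display":   re.compile(r"\b(\d{2}(?:\.\d)?)\s*(?:\"|inch)", re.I),
    "price_czk": re.compile(r"\b(\d[\d ]*)\s*K[cč]\b", re.I),
}

def extract(text):
    """Return a data record mapping semantic labels to extracted values."""
    record = {}
    for label, pattern in RULES.items():
        match = pattern.search(text)
        if match:
            record[label] = match.group(1).strip()
    return record

offer = "Acer Aspire 5, 15.6\" display, 8 GB RAM, cena 14 990 Kč"
print(extract(offer))  # {'ram': '8', 'display': '15.6', 'price_czk': '14 990'}
```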
36

Knowledge Extraction for Hybrid Question Answering

Usbeck, Ricardo 18 May 2017
Since the proposal of hypertext by Tim Berners-Lee to his employer CERN on March 12, 1989, the World Wide Web has grown to more than one billion Web pages and is still growing. With the later proposed Semantic Web vision, Berners-Lee et al. suggested an extension of the existing (Document) Web to allow better reuse, sharing and understanding of data. Both the Document Web and the Web of Data (the current implementation of the Semantic Web) grow continuously. This is a mixed blessing, as the two forms of the Web grow concurrently and most commonly contain different pieces of information. Modern information systems must thus bridge a Semantic Gap to allow holistic and unified access to information, independent of the representation of the data. One way to bridge the gap between the two forms of the Web is the extraction of structured data, i.e., RDF, from the growing amount of unstructured and semi-structured information (e.g., tables and XML) on the Document Web. Note that unstructured data here stands for any type of textual information such as news, blogs or tweets. While extracting structured data from unstructured data allows the development of powerful information systems, it requires high-quality and scalable knowledge extraction frameworks to lead to useful results. The dire need for such approaches has led to the development of a multitude of annotation frameworks and tools. However, most of these approaches are not evaluated on the same datasets or using the same measures. The resulting Evaluation Gap needs to be tackled by a concise evaluation framework to foster fine-grained and uniform evaluations of annotation tools and frameworks over any knowledge base. Moreover, with the constant growth of data and the ongoing decentralization of knowledge, intuitive ways for non-experts to access the generated data are required. Humans have adapted their search behavior to current Web data through access paradigms such as keyword search so as to retrieve high-quality results, and most Web users consequently expect only Web documents in return. However, humans think and most commonly express their information needs in natural language rather than in keyword phrases. Answering complex information needs often requires the combination of knowledge from various, differently structured data sources. Thus, we observe an Information Gap between natural-language questions and current keyword-based search paradigms, which in addition do not make use of the available structured and unstructured data sources. Question Answering (QA) systems provide an easy and efficient way to bridge this gap by allowing data to be queried in natural language, thus reducing (1) a possible loss of precision and (2) a potential loss of time while reformulating the search intention into a machine-readable form. Furthermore, QA systems enable natural-language queries to be answered with concise results instead of links to verbose Web documents. Additionally, they allow as well as encourage the access to and the combination of knowledge from heterogeneous knowledge bases (KBs) within one answer. Consequently, three main research gaps are considered and addressed in this work. First, addressing the Semantic Gap between the unstructured Document Web and the Web of Data requires the development of scalable and accurate approaches for the extraction of structured data in RDF. This research challenge is addressed by several approaches within this thesis.
This thesis presents CETUS, an approach for recognizing entity types to populate RDF KBs. Furthermore, our knowledge-base-agnostic disambiguation framework AGDISTIS can efficiently detect the correct URIs for a given set of named entities. Additionally, we introduce REX, a Web-scale framework for RDF extraction from semi-structured (i.e., templated) websites, which makes use of the semantics of the reference knowledge base to check the extracted data. The ongoing research on closing the Semantic Gap has already yielded a large number of annotation tools and frameworks. However, these approaches are currently still hard to compare since the published evaluation results are calculated on diverse datasets and evaluated based on different measures. On the other hand, the issue of comparability of results is not to be regarded as being intrinsic to the annotation task. Indeed, it is now well established that scientists spend between 60% and 80% of their time preparing data for experiments. Data preparation being such a tedious problem in the annotation domain is mostly due to the different formats of the gold standards as well as the different data representations across reference datasets. We tackle the resulting Evaluation Gap in two ways: First, we introduce a collection of three novel datasets, dubbed N3, to leverage the possibility of optimizing NER and NED algorithms via Linked Data and to ensure maximal interoperability to overcome the need for corpus-specific parsers. Second, we present GERBIL, an evaluation framework for semantic entity annotation. The rationale behind our framework is to provide developers, end users and researchers with easy-to-use interfaces that allow for the agile, fine-grained and uniform evaluation of annotation tools and frameworks on multiple datasets. The decentralized architecture behind the Web has led to pieces of information being distributed across data sources with varying structure. Moreover, the increasing demand for natural-language interfaces, as depicted by current mobile applications, requires systems to deeply understand the underlying user information need. Consequently, a natural-language interface for asking questions requires a hybrid approach to data usage, i.e., simultaneously performing a search on full texts and on semantic knowledge bases. To close the Information Gap, this thesis presents HAWK, a novel entity search approach developed for hybrid QA that combines structured RDF and unstructured full-text data sources.
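As a rough sketch of the hybrid idea, the following snippet (assuming Python with the SPARQLWrapper library and the public DBpedia endpoint) combines a structured constraint with a keyword filter over textual abstracts in one query; the query is a simplified stand-in for, not an example of, HAWK's actual query generation.

```python
# Rough sketch of hybrid querying: a structured triple pattern plus a
# keyword-style filter over unstructured abstract text, both in one query.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?city ?abstract WHERE {
        ?city a dbo:City ;                    # structured part of the question
              dbo:country dbr:Germany ;
              dbo:abstract ?abstract .
        FILTER (lang(?abstract) = "en")
        FILTER (CONTAINS(LCASE(?abstract), "book fair"))  # full-text-style part
    }
    LIMIT 5
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["city"]["value"])
```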
38

From Legal Contracts to Formal Specifications

Soavi, Michele 27 October 2022
The challenge of implementing and executing a legal contract in a machine has gained significant interest recently with the advent of blockchain, smart contracts, LegalTech and IoT technologies. Popular software engineering methods, including agile ones, are unsuitable for such outcome-critical software. Instead, formal specifications are crucial for implementing smart contracts to ensure that they capture the intentions of the stakeholders and that their execution is compliant with the terms and conditions of the original natural-language legal contract. This thesis concerns supporting the semi-automatic generation of formal specifications of legal contracts written in natural language (NL). The main contribution is a framework, named Contratto, in which the transformation process from NL to a formal specification is subdivided into five steps: (1) identification of ambiguous terms in the contract and manual disambiguation; (2) structural and semantic annotation of the legal contract; (3) discovery of relationships among the concepts identified in step (2); (4) formalization of the terms used in the NL text into a domain model; (5) generation of formal expressions that describe what programmers should implement in a smart contract. A systematic literature review on the main topic of the thesis was performed to support the definition of the framework. Requirements were derived from standard business contracts for a preliminary implementation of tools that support the transformation process, particularly step (2). A prototype environment was proposed to semi-automate the transformation process, although significant manual intervention is still required. The preliminary evaluation confirms that the annotation tool can perform the annotation as well as human annotators, albeit novice ones.
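As a purely illustrative sketch of the final step (generating formal expressions from annotated clauses), the following Python snippet renders a hypothetical annotated obligation as a simple deontic-style expression; the annotation fields and output notation are invented and much simpler than the Contratto framework.

```python
# Illustrative sketch only: from an annotated contract clause to a simple
# deontic-style formal expression (invented notation).
from dataclasses import dataclass

@dataclass
class AnnotatedClause:
    modality: str   # "obligation", "permission" or "prohibition"
    debtor: str     # party that owes the behaviour
    action: str     # annotated verb phrase
    deadline: str   # annotated temporal expression

MODAL = {"obligation": "O", "permission": "P", "prohibition": "F"}

def to_formal(clause: AnnotatedClause) -> str:
    """Render the clause as a simple deontic-style expression."""
    return (f"{MODAL[clause.modality]}[{clause.debtor}]"
            f"({clause.action}) before {clause.deadline}")

clause = AnnotatedClause("obligation", "Seller", "deliver(goods)", "2023-01-31")
print(to_formal(clause))  # O[Seller](deliver(goods)) before 2023-01-31
```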
