191 |
Predição de tags usando linked data: um estudo de caso no banco de dados Arquigrafia / Tag prediction using linked data: a case study in the Arquigrafia database. Souza, Ricardo Augusto Teixeira de, 17 December 2013 (has links)
Dada a grande quantidade de conteúdo criado por usuários na Web, uma proposta para ajudar na busca e organização é a criação de sistemas de anotações (tagging systems), normalmente na forma de palavras-chave, extraídas do próprio conteúdo ou sugeridas por visitantes. Esse trabalho aplica um algoritmo de mineração de dados em um banco de dados RDF, contendo instâncias que podem fazer referências à rede Linked Data do DBpedia, para recomendação de tags utilizando as medidas de similaridade taxonômica, relacional e literal de descrições RDF. O banco de dados utilizado é o Arquigrafia, um sistema de banco de dados na Web cujo objetivo é catalogar imagens de projetos arquitetônicos, e que permite que visitantes adicionem tags às imagens. Foram realizados experimentos para a avaliação da qualidade das recomendações de tags realizadas considerando diferentes modelos do Arquigrafia incluindo o modelo estendido do Arquigrafia que faz referências ao DBpedia. Os resultados mostram que a qualidade da recomendação de determinadas tags pode melhorar quando consideramos diferentes modelos (com referências à rede Linked Data do DBpedia) na fase de aprendizado. / Given the huge amount of content created by users on the Web, one way to support search and organization is the creation of tagging systems, usually based on keywords extracted from the content itself or suggested by visitors. This work applies a data mining algorithm to an RDF database, whose instances can reference the DBpedia Linked Data repository, in order to recommend tags using taxonomic, relational, and literal similarity measures over RDF descriptions. The database used is Arquigrafia, a Web database system whose goal is to catalog images of architectural projects and which allows visitors to add tags to those images. Experiments were performed to evaluate the quality of the tag recommendations, considering different models of Arquigrafia's database, including an extended model with references to DBpedia.
The results show that the quality of the recommendations for some tags can be improved when we consider different models (with references to the DBpedia Linked Data repository) in the learning phase.
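To make the similarity-based recommendation idea concrete, here is a minimal sketch, not the thesis's actual algorithm: resources are reduced to sets of RDF property values, a simple Jaccard overlap stands in for the taxonomic/relational/literal measures, and tags are pooled from the most similar resources. All resource descriptions and tags below are invented for illustration.

```python
# Illustrative sketch of similarity-based tag recommendation over RDF-like
# descriptions. Jaccard overlap is a simplification of the taxonomic,
# relational, and literal measures used in the thesis; the data are invented.

def jaccard(a, b):
    """Set-overlap similarity between two property-value sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def recommend_tags(target, tagged_resources, k=2):
    """Rank known resources by similarity to `target` and pool their tags."""
    scored = sorted(
        tagged_resources,
        key=lambda r: jaccard(target["properties"], r["properties"]),
        reverse=True,
    )
    tags = []
    for r in scored[:k]:
        for t in r["tags"]:
            if t not in tags:
                tags.append(t)
    return tags

resources = [
    {"properties": {"dbpedia:Modernism", "arch:concrete"},
     "tags": ["modernism", "brutalism"]},
    {"properties": {"dbpedia:Baroque", "arch:stone"},
     "tags": ["baroque"]},
]
new_image = {"properties": {"dbpedia:Modernism", "arch:glass"}}
print(recommend_tags(new_image, resources, k=1))
```

Extending a resource's property set with DBpedia references, as the thesis does, gives the similarity measure more shared values to match on.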
|
192 |
Consequence-based reasoning for SRIQ ontologies. Bate, Andrew, January 2016 (has links)
Description logics (DLs) are knowledge representation formalisms with numerous applications and well-understood model-theoretic semantics and computational properties. SRIQ is a DL that provides the logical underpinning for the semantic web language OWL 2, which is the W3C standard for knowledge representation on the web. A central component of most DL applications is an efficient and scalable reasoner, which provides services such as consistency testing and classification. Despite major advances in DL reasoning algorithms over the last decade, however, ontologies are still encountered in practice that cannot be handled by existing DL reasoners. Consequence-based calculi are a family of reasoning techniques for DLs. Such calculi have proved very effective in practice and enjoy a number of desirable theoretical properties. Up to now, however, they were proposed for either Horn DLs (which do not support disjunctive reasoning), or for DLs without cardinality constraints. In this thesis we present a novel consequence-based algorithm for TBox reasoning in SRIQ - a DL that supports both disjunctions and cardinality constraints. Combining the two features is non-trivial since the intermediate consequences that need to be derived during reasoning cannot be captured using DLs themselves. Furthermore, cardinality constraints require reasoning over equality, which we handle using the framework of ordered paramodulation - a state-of-the-art method for equational theorem proving. We thus obtain a calculus that can handle an expressive DL, while still enjoying all the favourable properties of existing consequence-based algorithms, namely optimal worst-case complexity, one-pass classification, and pay-as-you-go behaviour. To evaluate the practicability of our calculus, we implemented it in Sequoia - a new DL reasoning system. 
Empirical results show substantial robustness improvements over well-established algorithms and implementations, and performance competitive with closely related work.
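The flavor of consequence-based classification can be suggested with a greatly simplified sketch: forward derivation of all entailed atomic subsumptions from a set of "A is subsumed by B" axioms. The real SRIQ calculus additionally handles disjunction, cardinality constraints, and equality reasoning; the toy TBox below is invented.

```python
# Toy illustration of consequence-based classification: saturate a set of
# atomic subsumption axioms under the transitivity rule A⊑B, B⊑D ⟹ A⊑D.
# This covers only a tiny Horn fragment of what the SRIQ calculus derives.

def classify(axioms):
    """Return the transitive closure of atomic subsumption pairs."""
    entailed = set(axioms)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(entailed):
            for (c, d) in list(entailed):
                if b == c and (a, d) not in entailed:
                    entailed.add((a, d))   # derived consequence
                    changed = True
    return entailed

tbox = {("Student", "Person"), ("Person", "Agent")}
print(("Student", "Agent") in classify(tbox))  # a derived subsumption
```

One-pass classification, as in the calculus above, means all such consequences are derived in a single saturation rather than by repeated subsumption tests.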
|
193 |
Une approche sémantique pour l’exploitation de données environnementales : application aux données d’un observatoire / A semantic-based approach to exploit environmental data: application to an observatory’s data. Tran, Ba Huy, 23 November 2017 (has links)
La nécessité de collecter des observations sur une longue durée pour la recherche sur des questions environnementales a entrainé la mise en place de Zones Ateliers par le CNRS. Ainsi, depuis plusieurs années, de nombreuses bases de données à caractère spatio-temporel sont collectées par différentes équipes de chercheurs. Afin de faciliter les analyses transversales entre différentes observations, il est souhaitable de croiser les informations provenant de ces sources de données. Néanmoins, chacune de ces sources est souvent construite de manière indépendante de l'une à l'autre, ce qui pose des problèmes dans l'analyse et l'exploitation. De ce fait, cette thèse se propose d'étudier les potentialités des ontologies à la fois comme objets de modélisation, d'inférence, et d'interopérabilité. L'objectif est de fournir aux experts du domaine une méthode adaptée permettant d'exploiter l'ensemble de données collectées. Étant appliquées dans le domaine environnemental, les ontologies doivent prendre en compte des caractéristiques spatio-temporelles de ces données. Vu le besoin d'une modélisation des concepts et des opérateurs spatiaux et temporaux, nous nous appuyons sur la solution de réutilisation des ontologies de temps et de l'espace. Ensuite, une approche d'intégration de données spatio-temporelles accompagnée d'un mécanisme de raisonnement sur leurs relations a été introduite. Enfin, les méthodes de fouille de données ont été adoptées aux données spatio-temporelles sémantiques pour découvrir de nouvelles connaissances à partir de la base de connaissances. L'approche a ensuite été mise en application au sein du prototype Geminat qui a pour but d'aider à comprendre les pratiques agricoles et leurs relations avec la biodiversité dans la zone atelier Plaine et Val de Sèvre. 
De l'intégration de données à l'analyse de connaissances, celui-ci offre les éléments nécessaires pour exploiter des données spatio-temporelles hétérogènes ainsi qu'en extraire de nouvelles connaissances. / The need to collect long-term observations for research on environmental issues led to the establishment of "Zones Ateliers" by the CNRS. Thus, for several years, many databases of a spatio-temporal nature have been collected by different teams of researchers. To facilitate transversal analysis of the different observations, it is desirable to cross-reference information from these data sources. Nevertheless, these sources are constructed independently of one another, which raises problems of data heterogeneity in the analysis. Therefore, this thesis proposes to study the potential of ontologies as objects of modeling, inference, and interoperability. The aim is to provide experts in the field with a suitable method for exploiting heterogeneous data. Being applied in the environmental domain, the ontologies must take into account the spatio-temporal characteristics of these data. Given the need to model spatial and temporal concepts and operators, we rely on reusing existing ontologies of time and space. Then, a spatio-temporal data integration approach with a mechanism for reasoning over the relations among these data was introduced. Finally, data mining methods were adapted to spatio-temporal RDF data to discover new knowledge from the knowledge base. The approach was then applied within the Geminat prototype, which aims to help understand farming practices and their relationships with biodiversity in the "zone atelier Plaine et Val de Sèvre". From data integration to knowledge analysis, it provides the elements necessary to exploit heterogeneous spatio-temporal data and to discover new knowledge from them.
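The kind of qualitative temporal reasoning such an approach layers on top of a time ontology can be suggested with a toy sketch: classifying pairs of observation intervals with a few Allen-style relations. The interval data and the reduced set of relations are invented for illustration; real systems typically use the full OWL-Time relation vocabulary.

```python
# Toy sketch of qualitative temporal reasoning over observation intervals,
# using a reduced set of Allen-style relations. Intervals are (start, end)
# pairs; the observation data below are invented.

def relation(a, b):
    """Classify interval a relative to interval b."""
    if a[1] < b[0]:
        return "before"
    if a[0] > b[1]:
        return "after"
    if a[0] >= b[0] and a[1] <= b[1]:
        return "during"
    return "overlaps"

sowing_campaign = (2010, 2011)   # hypothetical agricultural observation
bird_survey = (2009, 2015)       # hypothetical biodiversity observation
print(relation(sowing_campaign, bird_survey))
```

Materializing such relations between observations is what lets the integrated knowledge base answer cross-source queries like "which practices coincided with this survey period".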
|
194 |
Ontology Learning and Information Extraction for the Semantic Web. Kavalec, Martin, January 2006 (has links)
The work gives an overview of its three main topics: the Semantic Web, information extraction, and ontology learning. A method for identifying relevant information on web pages is described and experimentally tested on pages of companies offering products and services. The method is based on an analysis of sample web pages and their position in the Open Directory catalogue. Furthermore, a modification of an association rule mining algorithm is proposed and experimentally tested. In addition to identifying a relation between ontology concepts, it suggests a possible naming for the relation.
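The support/confidence computation at the core of association rule mining can be sketched in a few lines. The "transactions" below, sets of items observed together on sample pages, and the example rule are invented for illustration; they are not the thesis's data.

```python
# Minimal sketch of the support and confidence measures behind association
# rule mining. Transactions are sets of items; the page data are invented.

def support(itemset, transactions):
    """Fraction of transactions containing every item of `itemset`."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(lhs, rhs, transactions):
    """Confidence of the rule lhs -> rhs."""
    return support(lhs | rhs, transactions) / support(lhs, transactions)

pages = [  # items co-occurring on hypothetical company pages
    {"product", "price"},
    {"product", "price", "order"},
    {"product", "contact"},
]
print(round(confidence({"product"}, {"price"}, pages), 2))
```

A modification like the one proposed in the thesis would additionally inspect the words surrounding co-occurring concepts to propose a label for the mined relation.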
|
195 |
Representação da informação dinâmica em ambientes digitais. Camila Ribeiro, 09 August 2013 (has links)
Este trabalho é um estudo exploratório interdisciplinar, pois converge de duas áreas não pertencentes à mesma classe acadêmica, Ciência da Informação (CI) e Ciência da Computação. O objetivo é, além de estudar a representação no ambiente virtual, encontrar uma forma de representar a informação não textual (multimídia) que atenda essas "novas necessidades" e possibilidades que a Web Semântica requer no desenvolvimento de contextos com uso do XML. Conforme a complexidade dos documentos multimodais que envolvem textos, vídeos e imagens descritos em mais de um formato, a opção para a interoperabilidade da descrição foi representar o contexto destes documentos com uso de ontologia. Através de uma metodologia de pesquisa qualitativa de análise exploratória e descritiva, apresentam-se ontologias que permitam que esta descrição seja feita em padrões convencionais, mas interoperáveis, de formatos de descrição, e que possam atingir um conjunto de objetos multimodais. A descrição desta ontologia, em dois formatos interoperáveis, MARC21 e Dublin Core, foi criada utilizando o software Protégé; e para validação da ontologia, foram feitas 3 aplicações práticas com vídeos acadêmicos (uma aula, um trabalho de conclusão de curso e uma defesa de dissertação de mestrado), que possuem imagens retiradas dos slideshows e compostas num documento final. O resultado alcançado é uma representação dinâmica de vídeo, que faça as relações com os outros objetos que o vídeo traz, além da interoperabilidade dos formatos de descrição, tais como: Dublin Core e MARC21. / This work is an exploratory interdisciplinary study, since it brings together two academic areas: Information Science (IS) and Computer Science. Besides studying representation in the virtual environment, it aims to find a way of representing non-textual (multimedia) information that meets the new needs and possibilities that the Semantic Web requires in contexts developed with XML.
Given the complexity of multimodal documents that combine text, videos, and images described in more than one format, an ontology was chosen to represent the descriptions interoperably. Through qualitative research using exploratory and descriptive analysis, ontologies are presented that allow conventional but interoperable description formats to cover a set of multimodal objects. This ontology description was made in two interoperable formats, MARC21 and Dublin Core, and was created using the Protégé software. To validate the ontology, it was applied to three academic videos (a lecture, an undergraduate final project defense, and a master's thesis defense), each containing images taken from slideshows and composed into a final document. The result is a dynamic video representation that relates the video to its other objects, in addition to the interoperability of description formats such as Dublin Core and MARC21.
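The interoperability between the two description formats can be suggested with a small sketch: the same video metadata expressed as Dublin Core elements and mapped to MARC21 tags. The tag correspondences shown (title to 245, creator to 100, date to 260) follow the common DC-to-MARC crosswalk, but the record itself and the flat-dictionary representation are invented for the example.

```python
# Illustrative sketch of description-format interoperability: one metadata
# record in Dublin Core elements, mapped to MARC21 tags via a crosswalk.
# The record is invented; only the main DC elements are covered.

dc_record = {
    "dc:title": "Master's defense video",   # hypothetical academic video
    "dc:creator": "Ribeiro, Camila",
    "dc:date": "2013",
}

dc_to_marc = {  # common crosswalk correspondences
    "dc:title": "245",    # Title Statement
    "dc:creator": "100",  # Main Entry, Personal Name
    "dc:date": "260",     # Publication date
}

marc_record = {dc_to_marc[k]: v for k, v in dc_record.items()}
print(marc_record["245"])
```

An ontology generalizes this idea: instead of a hard-coded dictionary, the correspondence between elements is stated declaratively and can be reasoned over.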
|
196 |
Serviços semânticos: uma abordagem RESTful. / Semantic web services: a RESTful approach. Otávio Freitas Ferreira Filho, 06 April 2010 (has links)
Este trabalho foca na viabilização do desenvolvimento de serviços semânticos de acordo com o estilo arquitetural REST. Mais especificamente, considera-se a realização REST baseada no protocolo HTTP, resultando em serviços semânticos RESTful. A viabilização de serviços semânticos tem sido tema de diversas publicações no meio acadêmico. Porém, a grande maioria dos esforços considera apenas os serviços desenvolvidos sob o estilo arquitetural RPC, através do protocolo SOAP. A abordagem RPC, fortemente incentivada pela indústria de software, é perfeitamente realizável em termos tecnológicos, mas agrega computações e definições desnecessárias, o que resulta em serviços mais complexos, com baixo desempenho e pouca escalabilidade. O fato é que serviços REST compõem a maioria dos serviços disponibilizados na Web 2.0, nome amplamente adotado para referenciar a atual fase da Web, notoriamente focada na geração colaborativa de conteúdo. A proposta oferecida por este trabalho utiliza uma seleção específica de linguagens e protocolos já existentes, reforçando sua realizabilidade. Utiliza-se a linguagem OWL-S como ontologia de serviços e a linguagem WADL para a descrição sintática dos mesmos. O protocolo HTTP é utilizado na transferência das mensagens, na definição da ação a ser executada e no escopo de execução desta ação. Identificadores URI são utilizados na definição da interface de acesso ao serviço. A compilação final dá origem à ontologia RESTfulGrounding, uma especialização de OWL-S. / The proposal is to allow the development of semantic Web services according to an architectural style called REST. More specifically, it considers a REST implementation based on the HTTP protocol, resulting in RESTful Semantic Web Services. The development of semantic Web services has been the subject of various academic papers. However, the predominant effort considers Web services designed according to another architectural style named RPC, mainly through the SOAP protocol.
The RPC approach, strongly promoted by the software industry, adds unnecessary processing and definitions that make Web services more complex than desired; as a result, services end up less scalable and slower than they could be. In fact, REST services form the majority of Web services developed within the Web 2.0 context, an environment clearly focused on user-generated content and social aspects. The proposal presented here makes use of a specific selection of existing languages and protocols, reinforcing its feasibility. Firstly, OWL-S is used as the base ontology for services, whereas WADL is used to describe them syntactically. Secondly, the HTTP protocol is used for transferring messages, defining the action to be executed, and defining the execution scope. Finally, URI identifiers are responsible for specifying the service interface. The final compilation results in an ontology named RESTfulGrounding, which extends OWL-S.
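The REST principle the work builds on, that each HTTP verb applied to a resource URI maps to one uniform operation, can be sketched with a small in-memory dispatcher. The resource names, dispatch table, and status codes used are illustrative; a real RESTful semantic service would additionally publish OWL-S/WADL descriptions of these operations.

```python
# Illustrative sketch of REST's uniform interface: HTTP verb + resource URI
# determine the operation. In-memory storage; resource names are invented.

resources = {}
next_id = 1

def handle(method, uri, body=None):
    global next_id
    if method == "POST" and uri == "/books":
        resources[next_id] = body
        next_id += 1
        return 201, next_id - 1                     # 201 Created + new id
    if method == "GET" and uri.startswith("/books/"):
        key = int(uri.rsplit("/", 1)[1])
        return (200, resources[key]) if key in resources else (404, None)
    if method == "DELETE" and uri.startswith("/books/"):
        resources.pop(int(uri.rsplit("/", 1)[1]), None)
        return 204, None                            # 204 No Content
    return 405, None                                # 405 Method Not Allowed

status, book_id = handle("POST", "/books", {"title": "REST in Practice"})
print(status, handle("GET", f"/books/{book_id}"))
```

The contrast with RPC is visible here: the interface is the fixed verb set plus URIs, rather than an ever-growing list of service-specific operation names.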
|
197 |
Construindo ontologias a partir de recursos existentes: uma prova de conceito no domínio da educação. / Building ontologies from existing resources: a proof of concept in the education domain. Regina Claudia Cantele, 07 April 2009 (has links)
Na Grécia antiga, Aristóteles (384-322 aC) reuniu todo conhecimento de sua época para criar a Enciclopédia. Na última década surgiu a Web Semântica representando o conhecimento organizado em ontologias. Na Engenharia de Ontologias, o Aprendizado de Ontologias reúne os processos automáticos ou semi-automáticos de aquisição de conhecimento a partir de recursos existentes. Por outro lado, a Engenharia de Software faz uso de vários padrões para permitir a interoperabilidade entre diferentes ferramentas como os criados pelo Object Management Group (OMG) Model Driven Architecture (MDA), Meta Object Facility (MOF), Ontology Definition Metamodel (ODM) e XML Metadata Interchange (XMI). Já o World Wide Web Consortium (W3C) disponibilizou uma arquitetura em camadas com destaque para a Ontology Web Language (OWL). Este trabalho propõe um framework para reunir estes conceitos fundamentado no ODM, no modelo OWL, na correspondência entre metamodelos, nos requisitos de participação para as ferramentas e na seqüência de atividades a serem aplicadas até obter uma representação inicial da ontologia. Uma prova de conceito no domínio da Educação foi desenvolvida para testar esta proposta. / In ancient Greece, Aristotle (384-322 BCE) endeavored to collect all the existing knowledge of his time to create the Encyclopedia. In the last decade, Berners-Lee and collaborators envisioned the Web as a structured repository organized in what they called the Semantic Web. Usually, domain knowledge is organized in ontologies; as a consequence, a great number of researchers in Ontology Engineering are working on methods and techniques to build ontologies. Ontology Learning comprises the automatic or semi-automatic processes that acquire knowledge from existing resources.
On the other hand, Software Engineering uses a collection of theories, methodologies, and techniques to support information abstraction, and several standards promoted by the Object Management Group (OMG) have been used to allow interoperability between different tools: Model Driven Architecture (MDA), Meta Object Facility (MOF), Ontology Definition Metamodel (ODM), and XML Metadata Interchange (XMI). The World Wide Web Consortium (W3C) released a layered architecture for implementing the Semantic Web, with emphasis on the Web Ontology Language (OWL). A framework was developed to combine these concepts, based on ODM, on the OWL model, on the correspondence between metamodels, and on participation requirements for tools; in it, a sequence of steps was defined to be applied until an initial representation of the ontology is obtained. A proof of concept in the Education domain was developed to test this proposal.
|
198 |
Educação a distância e a WEB Semântica: modelagem ontológica de materiais e objetos de aprendizagem para a plataforma COL. / e-Learning and the Semantic Web: ontological modeling of learning materials and objects for the CoL platform. Moysés de Araujo, 11 September 2003 (has links)
A World Wide Web está se tornando uma grande biblioteca virtual, onde a informação sobre qualquer assunto está disponível a qualquer hora e em qualquer lugar, com ou sem custo, criando oportunidades em várias áreas do conhecimento humano, dentre as quais a Educação não é exceção. Embora muitas aplicações educacionais baseadas na Web tenham sido desenvolvidas nos últimos anos, alguns problemas nesta área não foram resolvidos, entre as quais está a pesquisa de materiais e objetos de aprendizagem mais inteligentes e eficientes, pois como as informações na World Wide Web não são estruturadas e organizadas, as máquinas não podem compreender e nem interpretar o significado das informações semânticas. Para dar uma nova infra-estrutura para a World Wide Web está surgindo uma nova tecnologia conhecida com Web Semântica, cuja finalidade é estruturar e organizar as informações para buscas mais inteligentes e eficientes, utilizando-se principalmente do conceito de ontologia. Este trabalho apresenta uma proposta de modelagem ontológica de materiais e objetos de aprendizagem baseada nas tecnologias da Web Semântica para a plataforma de ensino a distância CoL - Cursos on LARC. Esta proposta estende esta plataforma adicionando-lhe a capacidade de organizar e estruturar seus materiais de aprendizagem, de forma a que pesquisas mais inteligentes e estruturadas possam ser realizadas, nestes materiais e propiciando a possibilidade de reutilização do conteúdo desses materiais. / The World Wide Web is turning into a huge virtual library, where information about any subject is available at any time and in any place, with or without cost, creating opportunities in several areas of human knowledge. Education is no exception among these areas. Although many Web-based educational applications have been developed in recent years, some problems in this area have not yet been solved.
Among these is the search for more intelligent and effective learning objects and materials: since information on the World Wide Web is neither structured nor organized, machines can neither understand nor interpret its semantic meaning. To give the World Wide Web a new infrastructure, a new technology known as the Semantic Web is being developed. It aims to structure and organize information for more intelligent and effective searches, mainly through the concept of ontology. This work presents an ontological modeling of learning materials and objects, based on Semantic Web technologies, for the distance education platform CoL (Courses on LARC). This proposal extends the platform, adding the ability to organize and structure its learning materials so that more intelligent and structured searches can be performed on them, and making it possible to reuse their contents.
|
199 |
Integração de recursos da web semântica e mineração de uso para personalização de sites / Integrating semantic web resources and web usage mining for website personalization. Rigo, Sandro Jose, January 2008 (has links)
Um dos motivos para o crescente desenvolvimento da área de mineração de dados encontra-se no aumento da quantidade de documentos gerados e armazenados em formato digital, estruturados ou não. A Web contribui sobremaneira para este contexto e, de forma coerente com esta situação, observa-se o surgimento de técnicas específicas para utilização nesta área, como a mineração de estrutura, de conteúdo e de uso. Pode-se afirmar que esta crescente oferta de informação na Web cria o problema da sobrecarga cognitiva. A Hipermídia Adaptativa permite minorar este problema, com a adaptação de hiperdocumentos e hipermídia aos seus usuários segundo suas necessidades, preferências e objetivos. De forma resumida, esta adaptação é realizada relacionando-se informações sobre o domínio da aplicação com informações sobre o perfil de usuários. Um dos tópicos importantes de pesquisa em sistemas de Hipermídia Adaptativa encontra-se na geração e manutenção do perfil dos usuários. Dentre as abordagens conhecidas, existe um contínuo de opções, variando desde cadastros de informações preenchidos manualmente, entrevistas, até a aquisição automática de informações com acompanhamento do uso da Web. Outro ponto fundamental de pesquisa nesta área está ligado à construção das aplicações, sendo que recursos da Web Semântica, como ontologias de domínio ou anotações semânticas de conteúdo podem ser observados no desenvolvimento de sistemas de Hipermídia Adaptativa. Os principais motivos para tal podem ser associados com a inerente flexibilidade, capacidade de compartilhamento e possibilidades de extensão destes recursos. Este trabalho descreve uma arquitetura para a aquisição automática de perfis de classes de usuários, a partir da mineração do uso da Web e da aplicação de ontologias de domínio. 
O objetivo principal é a integração de informações semânticas, obtidas em uma ontologia de domínio descrevendo o site Web em questão, com as informações de acompanhamento do uso obtidas pela manipulação dos dados de sessões de usuários. Desta forma é possível identificar mais precisamente os interesses e necessidades de um usuário típico. Integra o trabalho a implementação de aplicação de Hipermídia Adaptativa a partir de conceitos de modelagem semântica de aplicações, com a utilização de recursos de serviços Web, para validação experimental da proposta. / One of the reasons for the increasing development observed in the data mining area is the growth in the quantity of documents generated and stored in digital format, structured or not. The Web plays a central role in this context, and some specific techniques can be observed, such as structure, content, and usage mining. This increasing offer of information on the Web brings about the problem of cognitive overload. Adaptive Hypermedia allows a reduction of this problem, since the contents of selected documents are presented in accordance with the user's needs, preferences, and objectives. Briefly put, this adaptation is carried out on the basis of the relationship between information concerning the application domain and information concerning the user profile. One important point in Adaptive Hypermedia systems research is the generation and maintenance of user profiles. Some approaches seek to create the user profile from registration data, others incorporate the results of interviews, and some aim at the automatic acquisition of information by following Web usage. Another fundamental research point is related to the construction of the applications, where the use of Semantic Web resources, such as semantic annotations and domain ontologies, can be observed. This work describes an architecture for the automatic acquisition of user-class profiles, using domain ontologies and Web usage mining.
The main objective is the integration of usage data, obtained from user sessions, with a semantic description of the website in question, obtained from a domain ontology. This way it is possible to identify more precisely the interests and needs of a typical user. The work also includes the implementation of an Adaptive Hypermedia application, based on concepts of semantic application modeling and using Web services resources, for experimental validation of the proposal.
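The integration step described above can be suggested with a minimal sketch: page-visit sessions are enriched with the ontology concepts annotating each page, and the concept frequencies form a profile for the user class. The page-to-concept mapping and the session data below are invented; in the thesis these come from a real domain ontology describing the site.

```python
# Illustrative sketch of combining Web usage mining with a domain ontology:
# sessions of page visits are lifted to ontology concepts, whose frequencies
# form a class profile. The annotations and sessions are invented.
from collections import Counter

page_concepts = {  # hypothetical ontology annotations of site pages
    "/laptops/x1": ["Hardware", "Laptop"],
    "/laptops/x2": ["Hardware", "Laptop"],
    "/support/faq": ["Support"],
}

def profile(sessions):
    """Aggregate concept frequencies over all sessions of a user class."""
    counts = Counter()
    for session in sessions:
        for page in session:
            counts.update(page_concepts.get(page, []))
    return counts

sessions = [["/laptops/x1", "/support/faq"], ["/laptops/x2"]]
print(profile(sessions).most_common(1))  # dominant interest of this class
```

Working at the concept level, rather than raw URLs, is what lets the profile generalize: two different laptop pages both count toward the same interest.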
|
200 |
Template-Based Question Answering over Linked Data using Recursive Neural Networks. January 2018 (has links)
abstract: The Semantic Web contains large amounts of related information in the form of knowledge graphs such as DBpedia. These knowledge graphs are typically enormous and are not easily accessible to users, as they require specialized knowledge of query languages (such as SPARQL) as well as deep familiarity with the ontologies used by these knowledge graphs. So, to make these knowledge graphs more accessible (even for non-experts), several question answering (QA) systems have been developed over the last decade. Due to the complexity of the task, several approaches have been undertaken that include techniques from natural language processing (NLP), information retrieval (IR), machine learning (ML), and the Semantic Web (SW). At a higher level, most question answering systems approach the task as a conversion from the natural language question to its corresponding SPARQL query. These systems then utilize the query to retrieve the desired entities or literals. One approach to solving this problem, used by most systems today, is to apply deep syntactic and semantic analysis on the input question to derive the SPARQL query. This has resulted in the evolution of natural language processing pipelines that have common characteristics such as answer type detection, segmentation, phrase matching, part-of-speech tagging, named entity recognition, named entity disambiguation, syntactic or dependency parsing, semantic role labeling, etc.
This has led to NLP pipeline architectures that integrate components that each solve a specific aspect of the problem and pass on their results to subsequent components for further processing, e.g., DBpedia Spotlight for named entity recognition, RelMatch for relational mapping, etc. A major drawback of this approach is error propagation, a common problem in NLP: mistakes early in the pipeline can adversely affect successive steps further down the pipeline. Another approach is to use query templates, either manually generated or extracted from existing benchmark datasets such as Question Answering over Linked Data (QALD), to generate the SPARQL queries: essentially a set of predefined queries with various slots that need to be filled. This approach turns the question answering problem into a classification task, where the system needs to match the input question to the appropriate template (class label).
This thesis proposes a neural network approach to automatically learn to classify natural language questions into their corresponding templates using recursive neural networks. An obvious advantage of using neural networks is the elimination of the need for laborious feature engineering, which can be cumbersome and error prone. The input question is encoded into a vector representation. The model is trained and evaluated on the LC-QuAD dataset (Large-scale Complex Question Answering Dataset). The dataset was created explicitly for machine-learning-based QA approaches for learning complex SPARQL queries. It consists of 5000 questions along with their corresponding SPARQL queries over the DBpedia dataset, spanning 5042 entities and 615 predicates. These queries were annotated based on 38 unique templates that the model attempts to classify. The resulting model is evaluated against both the LC-QuAD dataset and the Question Answering over Linked Data (QALD-7) dataset.
The recursive neural network achieves a template classification accuracy of 0.828 on the LC-QuAD dataset and an accuracy of 0.618 on the QALD-7 dataset. When the top-2 most likely templates are considered, the model achieves an accuracy of 0.945 on the LC-QuAD dataset and 0.786 on the QALD-7 dataset.
After slot filling, the overall system achieves a macro F-score of 0.419 on the LC-QuAD dataset and a macro F-score of 0.417 on the QALD-7 dataset. / Dissertation/Thesis / Masters Thesis Software Engineering 2018
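The template-plus-slot-filling step described above can be sketched concretely: once a classifier has picked a template, entity and predicate slots are filled in to produce the final SPARQL query. The two templates and the linked entity/predicate below are invented stand-ins for the 38 LC-QuAD templates; the classifier itself is omitted.

```python
# Illustrative sketch of the slot-filling stage of template-based QA:
# a chosen template id plus linked entity/predicate URIs yield SPARQL.
# The templates shown are simplified examples, not LC-QuAD's actual set.

templates = {
    "simple": "SELECT ?x WHERE {{ <{e}> <{p}> ?x }}",
    "count":  "SELECT (COUNT(?x) AS ?n) WHERE {{ ?x <{p}> <{e}> }}",
}

def build_query(template_id, entity, predicate):
    """Fill the entity and predicate slots of the selected template."""
    return templates[template_id].format(e=entity, p=predicate)

q = build_query("simple",
                "http://dbpedia.org/resource/Berlin",
                "http://dbpedia.org/ontology/country")
print(q)
```

In the full system, the recursive neural network replaces the hand-picked `template_id` with a learned classification of the input question.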
|