  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Linked Data : ligação de dados bibliográficos / Linked Data: linking bibliographic data

Arakaki, Felipe Augusto. January 2016 (has links)
Orientadora: Plácida Leopoldina Ventura Amorim da Costa Santos / Banca: José Eduardo Santarém Segundo / Banca: Rogério Aparecido Sá Ramalho / Abstract: Information units require constant updating in their use of available technologies in order to optimize the management of informational resources. The proposals of the Semantic Web, and the possibilities that Linked Data offers for describing collections and for library cataloging, emerge as important instruments for managing and linking bibliographic data. The research question of this study is: what are the possibilities for linking bibliographic data under Linked Data practices? The general objective is to analyze the main aspects of the Linked Data proposal in order to promote the linking and interoperability of bibliographic data on the Web. The methodology is a qualitative, exploratory study conducted through a literature review on Linked Data. The relevance of the proposal lies in contributing theoretical knowledge about the instruments that guide the linking of bibliographic-domain data on the Web, bringing benefits both to users and to the cataloger's practice. As results, initiatives working to structure their catalogs for Linked Data were identified, most notably the work of the Library of Congress and of the Online Computer Library Center (OCLC) in the United States, and the work of Europeana. At first sight, structuring library data is meticulous, detailed work. With the use of Semantic Web technologies and the principles of Linked Data, however, record construction gains flexibility: a cataloger can establish connections between resources through unique identifiers, sparing the cataloger from re-describing a resource already described elsewhere. (Complete abstract: click electronic access below) / Mestre
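The linking mechanism the abstract describes — connecting records through unique identifiers so a cataloger need not re-describe a resource held elsewhere — can be sketched with a toy triple store. This is an illustrative sketch only; all URIs, prefixes, and record identifiers below are hypothetical, not taken from the dissertation.

```python
# Toy triple store illustrating Linked Data-style linking of bibliographic
# records through unique identifiers (URIs). All URIs are hypothetical.
SAME_AS = "owl:sameAs"

triples = {
    # Local catalog record, described once in the local dataset
    ("lib:rec/151", "dc:title", "Linked Data: ligacao de dados bibliograficos"),
    ("lib:rec/151", "dc:creator", "auth:arakaki"),
    # Instead of re-describing the author, link to an external authority
    ("auth:arakaki", SAME_AS, "viaf:0000001"),
    # Facts published by the (hypothetical) external authority dataset
    ("viaf:0000001", "schema:name", "Arakaki, Felipe Augusto"),
}

def properties_of(subject, data):
    """Collect predicate/object pairs for a subject, following owl:sameAs
    links one hop so externally published facts are merged in."""
    result = {(p, o) for s, p, o in data if s == subject}
    aliases = {o for s, p, o in data if s == subject and p == SAME_AS}
    for alias in aliases:
        result |= {(p, o) for s, p, o in data if s == alias}
    return result

merged = properties_of("auth:arakaki", triples)
```

Following the `owl:sameAs` link pulls in the authority's `schema:name` without the local catalog ever restating it, which is the reuse the abstract points to.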
152

Metodologia de avaliação de qualidade de dados no contexto do linked data / A methodology for data-quality evaluation in the Linked Data context

Melo, Jessica Oliveira de Souza Ferreira. January 2017 (has links)
Orientador: José Eduardo Santarém Segundo / Banca: Silvana Aparecida Borsetti Gregório Vidotti / Banca: Leonardo Castro Botega / Abstract: The Semantic Web suggests the use of standards and technologies that give data structure and semantics, so that computational agents can perform intelligent, automatic processing to accomplish specific tasks. In this context, the Linked Open Data (LOD) project was created as an initiative to promote the publication of Linked Data. With the evident growth of data published as Linked Data, quality has become essential for such datasets to meet the basic goals of the Semantic Web: quality problems in published datasets hinder not only their direct use but also the applications built on them. Since data made available as Linked Data enables a favorable environment for intelligent applications, quality problems can also hinder or prevent the integration of data coming from different datasets. The literature applies several quality dimensions in the context of Linked Data; this research examines the applicability of those dimensions to the quality evaluation of linked data. It therefore proposes a methodology for quality evaluation of Linked Data datasets and establishes a model of what can be considered data quality in the context of the Semantic Web and Linked Data. An exploratory and descriptive approach was adopted to establish the quality problems, dimensions, and requirements, combined with quantitative methods for assigning quality indexes. The work resulted in the definition of seven quality dimensions applicable to the Linked Data domain and 14 formulas for quantifying the quality of datasets on scientific publications. Finally, a proof of concept was developed in which the proposed quality-assessment methodology was applied to a dataset promoted by the LOD. (Complete abstract: click electronic access below) / Mestre
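The dissertation defines quantitative formulas for its quality dimensions; those formulas are not reproduced in the abstract, but the general idea of a per-dataset quality index can be illustrated with a completeness ratio, a measure commonly used in Linked Data quality work. The expected-property list and records below are invented for illustration.

```python
# Illustrative completeness measure for a Linked Data dataset: the fraction
# of expected properties each record actually fills in. This is a generic
# example of a quality-index formula, not the dissertation's own.
EXPECTED = ["dc:title", "dc:creator", "dc:date", "dc:subject"]

records = [
    {"dc:title": "A", "dc:creator": "X", "dc:date": "2017", "dc:subject": "LD"},
    {"dc:title": "B", "dc:creator": "Y"},  # missing date and subject
]

def completeness(record, expected=EXPECTED):
    """Share of expected properties present and non-empty in one record."""
    filled = sum(1 for prop in expected if record.get(prop))
    return filled / len(expected)

def dataset_completeness(recs):
    """Mean per-record completeness, used as a quality index in [0, 1]."""
    return sum(completeness(r) for r in recs) / len(recs)

score = dataset_completeness(records)  # (1.0 + 0.5) / 2 = 0.75
```

A methodology like the one proposed would define one such formula per dimension (completeness, consistency, timeliness, and so on) and aggregate them into dataset-level indexes.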
153

OntoBacen: uma ontologia para gestão de riscos do sistema financeiro brasileiro / OntoBacen: an ontology for risk management in the Brazilian financial system

Filipe Ricardo Polizel 17 March 2016 (has links)
The 2008 global crisis boosted the advancement of governance policies for the global financial system. Some of these policies involve reformulating information-management processes, and in this scenario of technological restructuring several initiatives aim to solve some of the well-known problems. To enable the adoption of an integrated and robust global financial system, large technology companies and financial institutions have joined efforts to better meet the sector's needs. The architecture of the World Wide Web is a constant in these initiatives, and some of them seek the benefits promised by semantic technologies, such as their high capacity for integrating heterogeneous data and the use of inference algorithms to deduce new information. The goal of this study is to use ontologies and semantic technologies such as OWL in financial-system risk management, particularly to verify their applicability to the risk-management policies specific to the Brazilian financial system, as established by the norms published by the Central Bank of Brazil (BACEN).
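OntoBacen's actual axioms are not given in the abstract. As a schematic illustration of the kind of classification an OWL reasoner performs over such an ontology, the sketch below encodes a capital-adequacy rule as a plain function; the 8% threshold and the institution data are illustrative assumptions, not BACEN's actual norms or the thesis's model.

```python
# Schematic illustration of ontology-style classification: an OWL reasoner
# infers class membership from axioms; here an equivalent rule is applied
# directly. The 8% minimum ratio is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Institution:
    name: str
    capital: float               # regulatory capital
    risk_weighted_assets: float  # denominator of the adequacy ratio

def classify(inst, minimum_ratio=0.08):
    """Classify an institution as 'Adequate' or 'UnderCapitalized' from
    its capital-to-risk-weighted-assets ratio."""
    ratio = inst.capital / inst.risk_weighted_assets
    return "Adequate" if ratio >= minimum_ratio else "UnderCapitalized"

bank_a = Institution("Bank A", capital=12.0, risk_weighted_assets=100.0)
bank_b = Institution("Bank B", capital=5.0, risk_weighted_assets=100.0)
```

In an OWL ontology the same rule would be stated declaratively (e.g. as a class restriction on the ratio property), letting a reasoner derive the classification rather than hard-coding it.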
154

ACLRO: An Ontology for the Best Practice in ACLR Rehabilitation

Phalakornkule, Kanitha 10 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / With the rise of big data and the demand for leveraging artificial intelligence (AI), healthcare requires more knowledge sharing that offers machine-readable semantic formalization. Even though some applications allow shared-data interoperability, standards such as ICD-9/10 and LOINC still lack formal machine-readable semantics. With an ontology, it becomes possible to represent shared conceptualizations, as SNOMED CT does. Nevertheless, SNOMED CT mainly focuses on electronic health record (EHR) documentation and evidence-based practice. Moreover, because an ontology is independent of the quality of any particular dataset, it can enhance advanced AI technologies, such as machine learning (ML), by providing a reusable knowledge framework. Developing a machine-readable, sharable semantic knowledge model that incorporates external evidence and an individual practice's values would create a new revolution in best-practice medicine. The purpose of this research is to implement a sharable ontology for best practice in healthcare, with anterior cruciate ligament reconstruction (ACLR) as a case study. The ontology represents knowledge derived from both evidence-based practice (EBP) and practice-based evidence (PBE). First, the study presents how the domain-specific knowledge model is built using a combination of the Toronto Virtual Enterprise (TOVE) methodology and a bottom-up approach. Then, a top-down approach is proposed using Open Biological and Biomedical Ontology (OBO) Foundry ontologies that adhere to the Basic Formal Ontology (BFO) framework. In this step, the EBP, PBE, and statistics ontologies are developed independently. Next, the study integrates these individual ontologies into the final ACLR Ontology (ACLRO), a more meaningful model that supports reusability and eases model expansion, since the classes can grow independently of one another. Finally, the study employs a use case and DL queries for model validation.
The study's innovation is to present an ontology implementation for best-practice medicine and to demonstrate how it can be applied in a real-world setting with semantic information. The ACLRO simultaneously emphasizes knowledge representation in health intervention, statistics, research design, and external research evidence, while constructing classes for data-driven, patient-focused processes that allow explicit knowledge sharing independent of technology. Additionally, the model synthesizes multiple related ontologies, which leads to the successful application of best-practice medicine.
155

Leveraging Schema Information For Improved Knowledge Graph Navigation

Chittella, Rama Someswar 02 August 2019 (has links)
No description available.
156

The application of Web Ontology Language for information sharing in the dairy industry /

Gao, Yongchun, 1977- January 2005 (has links)
No description available.
157

GraphQL2RDF : A proof-of-concept method to expose GraphQL data to the Semantic Web

Nilsson, Anton January 2021 (has links)
The Semantic Web was introduced to bring structure to the Web, with the goal of allowing computer agents to traverse the Web and carry out tasks for human users. In the Semantic Web, data is stored using the RDF data model. The purpose of this study is to explore the possibility of exposing GraphQL data to the Semantic Web through a data-to-data translation inspired by Ontology-Based Data Access (OBDA). This was done by introducing GraphQL2RDF, a proof-of-concept method to materialize GraphQL data as RDF triples. GraphQL2RDF uses two mapping schemas: a GraphQL-mapping schema annotated with directives to filter and select GraphQL data, and an RDF-mapping schema to specify which RDF triples to create. The filtering directives are modeled on the SQL WHERE clause. The approach is demonstrated in a library use case, in which library data exposed through a GraphQL endpoint is mapped into RDF. GraphQL2RDF demonstrates a method for exposing GraphQL data as RDF while imposing a minimal set of requirements on the GraphQL endpoint. Future work includes improving the model and extending the translation method toward an OBDA approach that does not require full materialization of the RDF data.
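The materialization step described — turning a GraphQL response into RDF triples under a field-to-predicate mapping — can be sketched with plain dictionaries. The response shape, field names, and predicate IRIs below are invented for illustration and are not GraphQL2RDF's actual mapping syntax, which is expressed through two annotated mapping schemas.

```python
# Sketch of materializing a GraphQL JSON response as RDF-like triples.
# Response shape and predicate IRIs are hypothetical; GraphQL2RDF drives
# this step from a GraphQL-mapping schema and an RDF-mapping schema.
graphql_response = {
    "books": [
        {"id": "b1", "title": "Weaving the Web", "authorName": "Berners-Lee"},
        {"id": "b2", "title": "Linked Data", "authorName": "Heath"},
    ]
}

# field -> predicate IRI (the role played by the RDF-mapping schema)
PREDICATES = {"title": "dct:title", "authorName": "dct:creator"}

def materialize(response, entity_key, id_field="id", base="ex:book/"):
    """Emit one (subject, predicate, object) triple per mapped field,
    minting the subject IRI from the entity's identifier."""
    triples = []
    for obj in response[entity_key]:
        subject = base + obj[id_field]
        for field, predicate in PREDICATES.items():
            if field in obj:
                triples.append((subject, predicate, obj[field]))
    return triples

rdf = materialize(graphql_response, "books")
```

Because the translation touches only the endpoint's JSON output, it imposes no requirements on the GraphQL server itself, which mirrors the minimal-requirements goal stated in the abstract.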
158

Ontology-Based Free-Form Query Processing for the Semantic Web

Vickers, Mark S. 23 June 2006 (has links) (PDF)
With the onset of the Semantic Web, the problem of making semantic content effectively searchable by the general public emerges. Demanding an understanding of ontologies or familiarity with a new query language would likely frustrate Semantic Web users and prevent widespread adoption. Given this need, this thesis describes AskOntos, a system that uses extraction ontologies to convert conjunctive, free-form queries into structured queries over semantically annotated web pages. AskOntos then executes these structured queries and returns answers as tables of extracted values. In the experiments conducted, AskOntos translated queries with 88% precision and 81% recall.
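AskOntos's extraction ontologies are not shown in the abstract; a minimal sketch of the general idea — recognizing known concepts and values in a free-form query and emitting a structured query — might look as follows. The vocabulary, predicate names, and parsing rules are invented for illustration.

```python
# Minimal sketch of free-form-to-structured query translation by matching
# tokens against known ontology values. Vocabulary is hypothetical; AskOntos
# itself uses extraction ontologies for this recognition step.
import re

# ontology values the recognizer knows: surface form -> (predicate, value)
VALUES = {"honda": ("hasMake", "Honda"), "toyota": ("hasMake", "Toyota")}

def to_structured(query):
    """Return predicate -> constraint pairs extracted from free text."""
    structured = {}
    for token in re.findall(r"[a-z0-9]+", query.lower()):
        if token in VALUES:
            pred, val = VALUES[token]
            structured[pred] = val
    # a number following 'under' is read as a price ceiling
    m = re.search(r"under \$?(\d+)", query.lower())
    if m:
        structured["hasPrice"] = ("<", int(m.group(1)))
    return structured

q = to_structured("Find me a Honda under $12000")
```

A real extraction ontology generalizes this far beyond keyword lookup, pairing each concept with recognizers for its lexical appearances, but the output is the same kind of structured, executable query.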
159

Machine Learning-Based Ontology Mapping Tool to Enable Interoperability in Coastal Sensor Networks

Bheemireddy, Shruthi 11 December 2009 (has links)
In today's world, ontologies are widely used for data-integration tasks and for solving information-heterogeneity problems on the Web because of their capability to give information explicit meaning. The growing need to resolve heterogeneities between different information systems within a domain of interest has led to the rapid development of individual ontologies by different organizations. These ontologies, each designed for a particular task, can be unique representations of their project's needs. Integrating distributed and heterogeneous ontologies by finding semantic correspondences between their concepts has therefore become the key to achieving interoperability among different representations. In this thesis, an advanced instance-based ontology-matching algorithm is proposed to enable data-integration tasks in ocean sensor networks, whose data are highly heterogeneous in syntax, structure, and semantics. It provides a solution to the ontology-mapping problem in such systems based on machine-learning and string-based methods.
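The abstract mentions both machine-learning and string-based matching methods; the string-based part can be illustrated with the standard library's `difflib`. The two concept lists below are hypothetical sensor-ontology fragments, not the thesis's actual ontologies.

```python
# Illustrative string-based concept matching between two (hypothetical)
# coastal-sensor ontologies, using difflib sequence similarity.
from difflib import SequenceMatcher

ONTOLOGY_A = ["SeaSurfaceTemperature", "WindSpeed", "Salinity"]
ONTOLOGY_B = ["sea_surface_temp", "wind_velocity", "water_salinity"]

def similarity(a, b):
    """Normalized string similarity in [0, 1], ignoring case and '_'."""
    norm = lambda s: s.replace("_", "").lower()
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

def best_matches(src, dst, threshold=0.6):
    """For each source concept, keep the best-scoring target concept,
    provided its similarity clears the threshold."""
    pairs = {}
    for a in src:
        best = max(dst, key=lambda b: similarity(a, b))
        if similarity(a, best) >= threshold:
            pairs[a] = best
    return pairs

mapping = best_matches(ONTOLOGY_A, ONTOLOGY_B)
```

An instance-based matcher goes further by comparing the data values populating each concept (units, ranges, distributions), which is what makes it robust when concept names alone are too heterogeneous to align.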
160

Developing a Semantic Web Crawler to Locate OWL Documents

Koron, Ronald Dean 18 September 2012 (has links)
No description available.
