491

Um processo de software e um modelo ontológico para apoio ao desenvolvimento de aplicações sensíveis a contexto / A software process and an ontological model for supporting the development of context-aware applications

Renato de Freitas Bulcão Neto 13 December 2006
In order to provide services adapted to users' tasks, context-aware applications exploit context information, that is, any information considered relevant to characterize the entities of a user-computer interaction, such as user identity or user location. This thesis addresses the lack of a software-process-based approach that accounts for the inherent complexity of developing context-aware software. The problem is tackled along three lines of investigation: context information modeling, services for processing context information, and a software process for context-aware computing. The contributions of this thesis include: (i) the POCAp software process (Process for Ontological Context-aware Applications) to support the construction of ontology-based context-aware applications; (ii) the SeCoM context information model (Semantic Context Model), based on ontologies and Semantic Web standards; (iii) the SCK configurable services infrastructure (Semantic Context Kernel) for interpreting context information backed by ontological context models such as SeCoM; (iv) an instantiation of the POCAp process in which an application is extended with context information based on the SeCoM model and integrated with services of the SCK infrastructure; and (v) the identification of design issues related to inference over ontological context information.
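As a rough illustration of the kind of ontology-based context representation this abstract describes (a user's identity and location expressed as RDF), the following Python sketch uses rdflib. The ctx namespace and its class and property names are invented for this example; the actual SeCoM ontology defines its own vocabulary.

```python
from rdflib import RDF, Graph, Literal, Namespace, URIRef

# Hypothetical namespace; the actual SeCoM ontology defines its own vocabulary.
CTX = Namespace("http://example.org/context#")

g = Graph()
g.bind("ctx", CTX)

user = URIRef("http://example.org/people/alice")
room = URIRef("http://example.org/places/room-42")

# Two pieces of context information about a user-computer interaction:
# the identity and the current location of the user.
g.add((user, RDF.type, CTX.Person))
g.add((user, CTX.name, Literal("Alice")))
g.add((room, RDF.type, CTX.Location))
g.add((user, CTX.locatedIn, room))

print(g.serialize(format="turtle"))
```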
492

Populando ontologias através de informações em HTML - o caso do currículo lattes / Populating ontologies using HTML information - the Currículo Lattes case

André Casado Castaño 06 May 2008
The Lattes Platform is today the main database of Brazilian researchers' résumés. It stores, in a standardized form, professional and academic data, bibliographic production records, and other information about these researchers. From the Lattes résumé database, several types of consolidated reports can be generated. The tools available for the Lattes Platform cannot detect some of the problems that emerge when generating consolidated reports, such as duplicate citations or bibliographic productions classified differently by each author, which leads to an incorrect total number of publications. Because of this, the generated reports must be revised by the researchers, and the flaws of this process are the main motivation for this project. In this work we use résumés from the Lattes Platform as a source of information to populate an ontology, which is then used mainly as a database for generating reports. We analyze the whole process of extracting information from HTML files and the subsequent processing needed to insert it correctly into the ontology, according to its semantics. With the ontology correctly populated, we also show some queries that can be run, and we analyze the methods and approaches used throughout the process, discussing their strengths and weaknesses in order to detail the difficulties involved in automatically populating (instantiating) an ontology.
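The abstract does not reproduce the extraction pipeline itself; as a minimal sketch of the general idea, assuming a hypothetical HTML structure in which each publication sits in a list item with a publication class, the following Python code parses the markup with BeautifulSoup and asserts one ontology individual per publication with rdflib.

```python
from bs4 import BeautifulSoup
from rdflib import RDF, Graph, Literal, Namespace

# Hypothetical vocabulary; the thesis maps Lattes data to its own ontology.
LATTES = Namespace("http://example.org/lattes#")

# Hypothetical markup; real Lattes pages have a far richer structure.
html = """
<ul>
  <li class="publication">Populating Ontologies from HTML, 2008</li>
  <li class="publication">Semantic Queries over CV Data, 2007</li>
</ul>
"""

g = Graph()
g.bind("lattes", LATTES)

soup = BeautifulSoup(html, "html.parser")
for i, item in enumerate(soup.select("li.publication")):
    pub = LATTES[f"publication-{i}"]
    g.add((pub, RDF.type, LATTES.BibliographicProduction))
    g.add((pub, LATTES.citation, Literal(item.get_text(strip=True))))

print(g.serialize(format="turtle"))
```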
493

Extração e consulta de informações do Currículo Lattes baseada em ontologias / Ontology-based Queries and Information Extraction from the Lattes CV

Eduardo Ferreira Galego 06 November 2013
The Lattes Platform is an excellent database of researchers for Brazilian society, adopted by most Brazilian funding agencies, universities, and research institutes. However, it is limited when it comes to displaying summarized data about a group of people, such as a research department or the students supervised by one or more professors. Several projects have already proposed solutions to this problem, some of them developing ontologies for this research domain. This work aims to integrate all the functionality of these tools into a single solution, SOS Lattes. We present the results obtained in developing this solution and show how ontologies help to identify data inconsistencies, support queries for building consolidated reports, and provide inference rules for correlating multiple databases. This work also aims to contribute to the expansion and dissemination of the Semantic Web by creating a tool that can extract data from Web pages and make their semantic structure available. The knowledge gained during this research may be useful for the development of new tools operating in different environments.
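To illustrate the kind of consolidated-report query such an ontology enables, the sketch below runs a SPARQL aggregation over a toy graph with rdflib. The vocabulary is invented for the example and is not taken from SOS Lattes; the point is that counting with DISTINCT avoids the duplicate-publication problem mentioned above.

```python
from rdflib import RDF, Graph, Namespace

EX = Namespace("http://example.org/lattes#")
g = Graph()

# Toy data: two researchers sharing one co-authored publication (p2).
for person, pubs in [("alice", ["p1", "p2"]), ("bob", ["p2", "p3"])]:
    g.add((EX[person], RDF.type, EX.Researcher))
    for p in pubs:
        g.add((EX[person], EX.authorOf, EX[p]))

# COUNT(DISTINCT ...) counts the co-authored publication only once.
query = """
PREFIX ex: <http://example.org/lattes#>
SELECT (COUNT(DISTINCT ?pub) AS ?total) WHERE {
    ?researcher a ex:Researcher ;
                ex:authorOf ?pub .
}
"""
for row in g.query(query):
    print(f"Distinct publications: {row.total}")  # prints 3, not 4
```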
494

Extrakce informací z webu založená na ontologiích / Ontology Based Information Extraction from the Web

Buba, Vojtěch January 2017
The main aim of this thesis is the extraction of information from the Web based on conceptual modeling with ontologies. Its practical goal is the implementation of a tool that processes an input ontology and allows further editing through a graphical user interface. The reader is introduced to languages for writing ontologies, such as RDF, RDFS, and OWL. Two extraction methods that use ontologies to describe the extracted information are also explained. The final solution is designed to meet all the needs of the extraction task defined by Ing. Radek Burget, Ph.D. The output of the tool is an extraction task definition compatible with FITLayout, a framework developed at FIT BUT.
495

Analyse de trajectoires de soins à partir de bases de données médico-administratives : apport d'un enrichissement par des connaissances biomédicales issues du Web des données / Care trajectory analysis using medico-administrative data: the contribution of knowledge-based enrichment from the Linked Data

Rivault, Yann 28 January 2019
Reusing healthcare administrative databases for public health research is relevant and opens new perspectives. In pharmacoepidemiology, these data make it possible to study, at large scale, the health status, diseases, and care consumption of a population. Processing these data is nevertheless limited by complexities inherent in their accounting origin: these information systems were initially designed for billing purposes and their interoperability is limited, which raises new challenges in terms of representation, integration, exploration, and analysis. This thesis deals with the joint use of healthcare administrative databases and biomedical knowledge for the study of patient care trajectories. This covers both (1) the exploration and identification, through queries, of relevant care trajectories in voluminous data flows, and (2) the analysis of the retained trajectories. Semantic Web technologies and biomedical ontologies from the Linked Data made it possible to explore medico-administrative data efficiently, identifying care trajectories that contain a drug interaction or a potential contraindication between a prescribed drug and the patient's state of health. We also developed the R package queryMed to make medical ontologies more accessible to public health researchers. Beyond identifying interesting trajectories, knowledge about the medical nomenclatures used in these databases enriched existing methods for analyzing care trajectories so that their complexity is better taken into account, notably by integrating semantic similarities between medical concepts. Semantic Web technologies were also used to explore the results obtained.
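One common family of measures for the semantic similarities mentioned above is path-based similarity over an ontology's is-a hierarchy. The sketch below implements the Wu-Palmer measure over an invented toy drug hierarchy in plain Python; the thesis itself works with real nomenclatures published as Linked Data and may use different measures.

```python
# Toy is-a hierarchy (child -> parent); real work would use medical
# nomenclatures such as ATC or ICD published as Linked Data.
PARENT = {
    "aspirin": "nsaid",
    "ibuprofen": "nsaid",
    "nsaid": "analgesic",
    "paracetamol": "analgesic",
    "analgesic": "drug",
}

def ancestors(concept):
    """Return the path from a concept up to the root, concept first."""
    path = [concept]
    while path[-1] in PARENT:
        path.append(PARENT[path[-1]])
    return path

def depth(concept):
    return len(ancestors(concept)) - 1  # the root has depth 0

def wu_palmer(c1, c2):
    """Wu-Palmer similarity: 2*depth(LCS) / (depth(c1) + depth(c2))."""
    anc1 = set(ancestors(c1))
    # The first shared ancestor on the path to the root is the deepest one.
    lcs = next(a for a in ancestors(c2) if a in anc1)
    return 2 * depth(lcs) / (depth(c1) + depth(c2))

print(wu_palmer("aspirin", "ibuprofen"))    # ~0.67 (share 'nsaid')
print(wu_palmer("aspirin", "paracetamol"))  # 0.4  (share 'analgesic')
```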
496

Community-Driven Engineering of the DBpedia Infobox Ontology and DBpedia Live Extraction

Stadler, Claus 23 November 2017
The DBpedia project aims at extracting information from the semi-structured data present in Wikipedia articles, interlinking it with other knowledge bases, and publishing this information freely as RDF on the Web. So far, the DBpedia project has succeeded in creating one of the largest knowledge bases on the Data Web, which is used in many applications and research prototypes. However, the manual effort required to produce and publish a new version of the dataset, which was already partially outdated the moment it was released, has been a drawback. Additionally, the maintenance of the DBpedia Ontology, an ontology serving as a structural backbone for the extracted data, made the release cycles even more heavyweight. In the course of this thesis, we make two contributions. Firstly, we develop a wiki-based solution for maintaining the DBpedia Ontology; by allowing anyone to edit, we aim to distribute the maintenance work among the DBpedia community. Secondly, we extend DBpedia with a Live Extraction Framework, which is capable of extracting RDF data from articles that have recently been edited on the English Wikipedia. By automatically making this RDF data public in near real time, namely via SPARQL and Linked Data, we overcome many of the drawbacks of the former release cycles.
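As a small usage example of the public SPARQL access referred to above, the following Python snippet queries the main DBpedia endpoint with SPARQLWrapper. The query and resource are illustrative only; the live-extraction data has historically been served from a separate endpoint, so the URL below points at the standard public one.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Public DBpedia SPARQL endpoint; the live-extraction data has historically
# been hosted separately (e.g. under live.dbpedia.org).
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?abstract WHERE {
      <http://dbpedia.org/resource/Leipzig> dbo:abstract ?abstract .
      FILTER (lang(?abstract) = "en")
    }
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["abstract"]["value"][:120], "...")
```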
497

Xodx – Konzeption und Implementierung eines Distributed Semantic Social Network Knotens / Xodx – Design and Implementation of a Distributed Semantic Social Network Node

Arndt, Natanael 26 February 2018
This work covers the operation of a node in a Distributed Semantic Social Network. The node provides functions for creating a personal description, managing friendship relations, and communicating with other participants in the network. The resulting implementation is already in practical use on low-powered, inexpensive, energy-efficient hardware. In addition, its scaling behavior was examined in a test setup with several nodes.
498

Using Semantic Web Technology in Requirements Specifications

Kroha, Petr, Labra Gayo, José Emilio 05 November 2008
In this report, we investigate how the methods developed for the Semantic Web could be used in capturing, modeling, developing, checking, and validating requirements specifications. Requirements specification is a complex and time-consuming process. The goal is to describe exactly what the user wants and needs before the next phase of the software development cycle starts. Any failure or mistake in a requirements specification is very expensive, because it causes the development of software parts that are not compatible with the real needs of the user and must be reworked later. When the analysis phase of a project starts, analysts discuss the problem to be solved with the customer (users, domain experts) and then write down the requirements found in the form of a textual description. This is a form the customer can understand. However, any textual description of requirements can be (and usually is) incorrect, incomplete, ambiguous, and inconsistent. Later on, the analyst specifies a UML model based on the requirements description he wrote before. However, users and domain experts cannot validate the UML model, as most of them do not understand (semi-)formal languages such as UML. It is well known that the most expensive failures in software projects have their roots in requirements specifications. Misunderstanding between analysts, experts, users, and customers (stakeholders) is very common and brings projects over budget. The goal of this investigation is to perform some (at least partial) checking and validation of the UML model using a predefined domain-specific ontology in OWL, and to carry out further checking using assertions in description logic. As described in our previous papers, we have implemented a tool with a module (a computational-linguistics component) that can generate a textual requirements description from information in UML models, so that the stakeholders can read it and decide whether the analyst's understanding is right or how far it differs from their own. We argue that the feedback produced by checking the UML model (using ontologies and OWL DL reasoning) can have an important impact on the quality of the resulting requirements. This report contains a description and explanation of methods developed and used in Semantic Web technology and a proposed concept for their use in requirements specification. It was written during my sabbatical in Oviedo and should serve as a starting point for theses of our students, who will implement the ideas described here and run experiments concerning the efficiency of the proposed method.
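The checking idea proposed in the report, validating a model against a domain ontology with OWL DL reasoning, can be sketched roughly as follows in Python with owlready2. The ontology, classes, and disjointness constraint are invented for this example, and the actual approach operates on UML models rather than on toy individuals.

```python
from owlready2 import (AllDisjoint, OwlReadyInconsistentOntologyError,
                       Thing, get_ontology, sync_reasoner)

onto = get_ontology("http://example.org/requirements.owl")

with onto:
    class Customer(Thing): pass
    class Supplier(Thing): pass
    # Domain rule from the (hypothetical) OWL ontology:
    # no entity may be both a Customer and a Supplier.
    AllDisjoint([Customer, Supplier])

    # An individual as it might be derived from the analyst's UML model.
    acme = Customer("acme")
    acme.is_a.append(Supplier)  # contradicts the disjointness axiom

# sync_reasoner() runs HermiT, which owlready2 ships as a JAR,
# so a Java runtime must be installed.
try:
    sync_reasoner()
except OwlReadyInconsistentOntologyError:
    print("The model violates the domain ontology: Customer/Supplier clash.")
```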
499

Sacherschliessung 2 ½ / Subject Indexing 2 ½

Schneider, René 10 August 2009
René Schneider related the currently much-discussed concept of a "Library 2.0" to the history of the Web and to its current and possible future developments. The focus was on Web 2.0 tools (folksonomies, RSS feeds, widgets, mash-ups) and their realization in the library context, as well as on the relationship between the so-called Web 3.0, or Semantic Web, and subject indexing. Subject indexing was then considered as an interface problem, i.e., from the perspective of designing user-friendly front ends, and it was discussed to what extent the use of Web 2.0 and Web 3.0 technology can help convey the informational added value of subject indexing. René Schneider as a visionary: for him, the Internet is one big library, and all of its users are its librarians.
500

Relational Learning and Optimization in the Semantic Web

Fischer, Thomas 07 July 2011
In this paper, the author presents his current research topic, research objectives, and research questions. The paper motivates the integration of implicit background knowledge into data mining and optimization techniques based on Semantic Web knowledge bases. Furthermore, it outlines work from related research areas and states the research methodology.
