  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A More Decentralized Vision for Linked Data

Polleres, Axel, Kamdar, Maulik R., Fernandez Garcia, Javier David, Tudorache, Tania, Musen, Mark A. 25 June 2018 (has links) (PDF)
In this deliberately provocative position paper, we claim that ten years into Linked Data there are still (too?) many unresolved challenges towards arriving at a truly machine-readable and decentralized Web of data. We take a deeper look at the biomedical domain - currently, one of the most promising "adopters" of Linked Data - if we believe the ever-present "LOD cloud" diagram. Herein, we try to highlight and exemplify key technical and non-technical challenges to the success of LOD, and we outline potential solution strategies. We hope that this paper will serve as a discussion basis for a fresh start towards more actionable, truly decentralized Linked Data, and as a call to the community to join forces. / Series: Working Papers on Information Systems, Information Business and Operations
2

A More Decentralized Vision for Linked Data

Polleres, Axel, Kamdar, Maulik R., Fernandez Garcia, Javier David, Tudorache, Tania, Musen, Mark A. January 2018 (has links) (PDF)
We claim that ten years into Linked Data there are still many unresolved challenges towards arriving at a truly machine-readable and decentralized Web of data. With a focus on the biomedical domain, currently one of the most promising adopters of Linked Data, we highlight and exemplify key technical and non-technical challenges to the success of Linked Data, and we outline potential solution strategies.
3

HEALTH AND WELLNESS INFORMATION SYSTEM

Rangel, Monica 01 June 2019 (has links)
The greatest wealth is health. It is sometimes said that your health is a function of what you are not doing, not what you are currently doing. The degree to which individuals can attain, process, and comprehend the health information and services they need to make proper health decisions is vital for optimal health and well-being. This project documents the analysis, design, development, and implementation of a prototype web-based, data-driven health and wellness system targeted at college students. The architecture for this system uses business intelligence to develop a smart online platform for real-time analysis based on inputs entered by its users. The objective is to develop modules that provide meal plan options that dietitians can recommend to students, while also providing a standard wellness health check. This also promotes constant awareness for students with specialized health diets. The health and wellness history of each student is collected and stored for generating progress and wellness reports for end users. The dietitian can monitor the user in real time through the data collected and stored in the data server. Users can monitor their own progress. The system incorporates user context and feedback to personalize each user's lifestyle. Implementation of this system provides a complete and easy-to-use integrated system that promotes the process of analyzing wellness and improving the user's overall health. The system is designed for a non-clinical setting and is hence more lifestyle-oriented than other health-oriented systems, making it more relevant and convenient to students' everyday lives.
4

The dynamic integration of distributed GIS through semantic Web services

Shi, Xuan, January 1900 (has links)
Thesis (Ph. D.)--West Virginia University, 2007. / Title from document title page. Document formatted into pages; contains x, 171 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 165-171).
5

Web and mobile application for effective use of working hours

Cvrček, David January 2018 (has links)
Cvrček, D. Web and mobile application for effective use of working hours. Diploma thesis. Brno, 2018. This diploma thesis focuses on improving work efficiency. The goal is to study the existing Pomodoro technique and to design and implement a mobile and web application that gives the user an overview of performed work and its efficiency. In addition, a smartwatch application is implemented to track user behavior and deliver related personal notifications. All described applications were tested on a group of selected users.
6

Knowledge Extraction for Hybrid Question Answering

Usbeck, Ricardo 22 May 2017 (has links) (PDF)
Since Tim Berners-Lee's proposal of hypertext to his employer CERN on March 12, 1989, the World Wide Web has grown to more than one billion Web pages and is still growing. With the later proposed Semantic Web vision, Berners-Lee et al. suggested an extension of the existing (Document) Web to allow better reuse, sharing and understanding of data. Both the Document Web and the Web of Data (the current implementation of the Semantic Web) grow continuously. This is a mixed blessing, as the two forms of the Web grow concurrently and most commonly contain different pieces of information. Modern information systems must thus bridge a Semantic Gap to allow holistic and unified access to information about a particular topic, independent of how the data is represented. One way to bridge the gap between the two forms of the Web is the extraction of structured data, i.e., RDF, from the growing amount of unstructured and semi-structured information (e.g., tables and XML) on the Document Web. Note that unstructured data stands for any type of textual information, such as news, blogs or tweets. While extracting structured data from unstructured data allows the development of powerful information systems, it requires high-quality and scalable knowledge extraction frameworks to lead to useful results. The dire need for such approaches has led to the development of a multitude of annotation frameworks and tools. However, most of these approaches are not evaluated on the same datasets or using the same measures. The resulting Evaluation Gap needs to be tackled by a concise evaluation framework to foster fine-grained and uniform evaluations of annotation tools and frameworks over arbitrary knowledge bases. Moreover, with the constant growth of data and the ongoing decentralization of knowledge, intuitive ways for non-experts to access the generated data are required.
Humans have adapted their search behavior to current Web data through access paradigms such as keyword search so as to retrieve high-quality results. Hence, most Web users only expect Web documents in return. However, humans think and most commonly express their information needs in natural language rather than in keyword phrases. Answering complex information needs often requires the combination of knowledge from various, differently structured data sources. Thus, we observe an Information Gap between natural-language questions and current keyword-based search paradigms, which in addition do not make use of the available structured and unstructured data sources. Question Answering (QA) systems provide an easy and efficient way to bridge this gap by allowing data to be queried via natural language, thus reducing (1) a possible loss of precision and (2) a potential loss of time while reformulating the search intention into a machine-readable form. Furthermore, QA systems answer natural language queries with concise results instead of links to verbose Web documents. Additionally, they allow as well as encourage the access to and combination of knowledge from heterogeneous knowledge bases (KBs) within one answer. Consequently, three main research gaps are considered and addressed in this work: First, addressing the Semantic Gap between the unstructured Document Web and the Web of Data requires the development of scalable and accurate approaches for the extraction of structured data in RDF. This research challenge is addressed by several approaches within this thesis. This thesis presents CETUS, an approach for recognizing entity types to populate RDF KBs. Furthermore, our knowledge-base-agnostic disambiguation framework AGDISTIS can efficiently detect the correct URIs for a given set of named entities.
Additionally, we introduce REX, a Web-scale framework for RDF extraction from semi-structured (i.e., templated) websites, which makes use of the semantics of the reference knowledge base to check the extracted data. The ongoing research on closing the Semantic Gap has already yielded a large number of annotation tools and frameworks. However, these approaches are currently still hard to compare, since the published evaluation results are calculated on diverse datasets and evaluated based on different measures. On the other hand, the issue of comparability of results is not to be regarded as intrinsic to the annotation task. Indeed, it is now well established that scientists spend between 60% and 80% of their time preparing data for experiments. That data preparation is such a tedious problem in the annotation domain is mostly due to the different formats of the gold standards as well as the different data representations across reference datasets. We tackle the resulting Evaluation Gap in two ways: First, we introduce a collection of three novel datasets, dubbed N3, to leverage the possibility of optimizing NER and NED algorithms via Linked Data and to ensure maximal interoperability, overcoming the need for corpus-specific parsers. Second, we present GERBIL, an evaluation framework for semantic entity annotation. The rationale behind our framework is to provide developers, end users and researchers with easy-to-use interfaces that allow for the agile, fine-grained and uniform evaluation of annotation tools and frameworks on multiple datasets. The decentralized architecture behind the Web has led to pieces of information being distributed across data sources with varying structure. Moreover, the increasing demand for natural-language interfaces, as evidenced by current mobile applications, requires systems to deeply understand the underlying user information need.
In conclusion, a natural language interface for asking questions requires a hybrid approach to data usage, i.e., simultaneously performing a search on full texts and semantic knowledge bases. To close the Information Gap, this thesis presents HAWK, a novel entity search approach developed for hybrid QA that combines structured RDF and unstructured full-text data sources.
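The hybrid data-usage idea summarized in the abstract above, answering a question by consulting both a structured knowledge base and full-text documents, can be illustrated with a toy sketch. The knowledge base, document store, and scoring scheme below are invented for illustration and are not HAWK's actual pipeline:

```python
# Toy hybrid lookup: structured facts plus full-text evidence.
kb = {  # structured facts: (subject, predicate) -> object
    ("Berlin", "capitalOf"): "Germany",
    ("Paris", "capitalOf"): "France",
}
documents = {
    "doc1": "Berlin is the capital and largest city of Germany.",
    "doc2": "Paris hosts the Louvre museum.",
}

def hybrid_answer(entity: str, predicate: str):
    """Combine a KB lookup with full-text search; candidates supported
    by both sources receive a higher (made-up) confidence score."""
    kb_hit = kb.get((entity, predicate))
    text_hits = [doc for doc, text in documents.items() if entity in text]
    score = (2 if kb_hit else 0) + len(text_hits)
    return kb_hit, text_hits, score

print(hybrid_answer("Berlin", "capitalOf"))  # ('Germany', ['doc1'], 3)
```

The point of the sketch is only the combination step: neither source alone answers every question, but a candidate confirmed by both ranks first.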
7

Methodology for Conflict Detection and Resolution in Semantic Revision Control Systems

Hensel, Stephan, Graube, Markus, Urbas, Leon January 2016 (has links)
Revision control mechanisms are a crucial part of information systems to keep track of changes. They are a key requirement for the industrial application of technologies like Linked Data, which provides the possibility to integrate data from different systems and domains in a semantic information space. A corresponding semantic revision control system must have the same functionality as established systems (e.g., Git or Subversion). There is also a need for branching to enable parallel work on the same data or concurrent access to it, which directly introduces the requirement of supporting merges. This paper presents an approach that makes it possible to merge branches and to detect inconsistencies before creating the merged revision. We use a structural analysis of triple differences as the smallest comparison unit between the branches. The detected differences can be accumulated into high-level changes, which is an essential step towards semantic merging. We implemented our approach as a prototypical extension of the revision control system R43ples to show proof of concept.
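The structural analysis of triple differences described in this abstract can be sketched in a few lines. Each branch is modeled as a plain set of RDF triples; the function names and the single conflict rule shown here are simplified illustrations, not the R43ples implementation:

```python
# A branch is a set of (subject, predicate, object) triples.

def branch_diff(base: set, branch: set):
    """Added and removed triples of a branch relative to the common base."""
    return branch - base, base - branch

def detect_conflicts(base: set, left: set, right: set):
    """Report cases where both branches assigned different new objects to
    the same (subject, predicate); such structural conflicts must be
    resolved before the merged revision is created."""
    left_add, _ = branch_diff(base, left)
    right_add, _ = branch_diff(base, right)
    conflicts = set()
    for s, p, o1 in left_add:
        for s2, p2, o2 in right_add:
            if (s, p) == (s2, p2) and o1 != o2:
                conflicts.add((s, p, o1, o2))
    return conflicts

base = {("ex:pump1", "rdfs:label", '"Pump"')}
left = {("ex:pump1", "rdfs:label", '"Pump A"')}   # one branch renames
right = {("ex:pump1", "rdfs:label", '"Pump B"')}  # the other renames differently
print(detect_conflicts(base, left, right))
```

Accumulating such triple-level differences into high-level changes, as the paper describes, would then operate on the output of this comparison.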
8

Enriching the Web of Data with topics and links

Böhm, Christoph January 2013 (has links)
This thesis presents novel ideas and research findings for the Web of Data, a global data space spanning many so-called Linked Open Data sources. Linked Open Data adheres to a set of simple principles that allow easy access to and reuse of data published on the Web. Linked Open Data is by now an established concept, and many (mostly academic) publishers have adopted the principles, building a powerful web of structured knowledge available to everybody. However, so far, Linked Open Data does not yet play a significant role among the common web technologies that currently facilitate a high-standard Web experience. In this work, we thoroughly discuss the state of the art for Linked Open Data and highlight several shortcomings, some of which we tackle in the main part of this work. First, we propose a novel type of data source meta-information, namely the topics of a dataset. This information could be published with dataset descriptions and support a variety of use cases, such as data source exploration and selection. For topic retrieval, we present an approach coined Annotated Pattern Percolation (APP), which we evaluate with respect to topics extracted from Wikipedia portals. Second, we contribute to entity linking research by presenting an optimization model for joint entity linking, showing its hardness, and proposing three heuristics implemented in the LINked Data Alignment (LINDA) system. Our first solution can exploit multi-core machines, whereas the second and third approaches are designed to run in a distributed shared-nothing environment. We discuss and evaluate the properties of our approaches, leading to recommendations as to which algorithm to use in a specific scenario. The distributed algorithms are among the first of their kind, i.e., approaches for joint entity linking in a distributed fashion.
Also, we illustrate that we can tackle the entity linking problem at very large scale, with data comprising more than 100 million entity representations from a large number of sources. Finally, we approach a sub-problem of entity linking, namely the alignment of concepts. We again target a method that looks at the data in its entirety and does not neglect existing relations. Also, this concept alignment method must execute very fast to serve as preprocessing for further computations. Our approach, called Holistic Concept Matching (HCM), achieves the required speed by grouping the input through comparison of so-called knowledge representations. Within the groups, we perform complex similarity computations, draw relation conclusions, and detect semantic contradictions. The quality of our result is again evaluated on a large and heterogeneous dataset from the real Web. In summary, this work contributes a set of techniques for enhancing the current state of the Web of Data. All approaches have been tested on large and heterogeneous real-world input.
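The speed-through-grouping idea behind HCM, bucketing concepts by a cheap key and running expensive comparisons only within each bucket, can be sketched roughly as follows. The bucketing key and the similarity measure here are illustrative assumptions, not HCM's actual knowledge representations:

```python
from collections import defaultdict
from difflib import SequenceMatcher

def bucket_key(description: str) -> str:
    """Cheap grouping key: the longest (assumed most informative) token."""
    return max(description.lower().split(), key=len)

def group_concepts(concepts):
    """Bucket (name, description) pairs so that expensive comparisons
    only run within a bucket, never across the whole input."""
    groups = defaultdict(list)
    for name, desc in concepts:
        groups[bucket_key(desc)].append((name, desc))
    return groups

def match_within_groups(groups, threshold=0.6):
    """Pairwise similarity inside each bucket; report likely alignments."""
    matches = []
    for members in groups.values():
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                sim = SequenceMatcher(None, members[i][1], members[j][1]).ratio()
                if sim >= threshold:
                    matches.append((members[i][0], members[j][0]))
    return matches

concepts = [
    ("db:Car", "motorized road vehicle"),
    ("yago:Automobile", "motorized road vehicle for transport"),
    ("db:Bird", "feathered winged animal"),
]
print(match_within_groups(group_concepts(concepts)))
```

The quadratic comparison cost is paid only per bucket, which is what makes the holistic approach fast enough to serve as preprocessing.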
9

Adaptable metadata creation for the Web of Data

Enoksson, Fredrik January 2014 (has links)
One approach to managing collections is to create data about the things in them. This descriptive data is called metadata, and the term is used in this thesis as a collective noun, i.e., no plural form exists. A library is a typical example of an organization that uses metadata to manage a collection of books. The metadata about a book describes certain attributes of it, for example who the author is. Metadata also makes it possible to judge whether a book is interesting without having to deal with the book itself. The metadata of the things in a collection is a representation of the collection that is easier to deal with than the collection itself. Nowadays metadata is often managed in computer-based systems that enable search and the sorting of search results according to different principles. Metadata can be created both by computers and by humans. This thesis deals with certain aspects of the human activity of creating metadata and includes an explorative study of this activity. The increasing amount of public information being produced must also be easily accessible, and therefore the situation where metadata is part of the Semantic Web has been an important consideration in this thesis. This situation is also referred to as the Web of Data or Linked Data. With the Web of Data, metadata records that used to live in isolation from each other can now be linked together over the web. This will probably change not only what kind of metadata is created, but also how it is created. This thesis describes the construction and use of a framework called Annotation Profiles, a set of artifacts developed to enable a metadata creation environment that is adaptable with respect to what metadata can be created. The main artifact is the Annotation Profile Model (APM), a model that holds enough information for a software application to generate a customized metadata editor from it.
An instance of this model is called an annotation profile, which can be seen as a configuration for metadata editors. What metadata can be edited in a metadata editor can thus be changed without modifying the code of the application. Two code libraries that implement the APM have been developed and evaluated, both internally within the research group where they were developed and externally via interviews with software developers who have used one of the libraries. Another artifact presented is a protocol for remotely updating RDF metadata when it is edited through a metadata editor. It is also described how the APM opens up possibilities for end-user development, which is one of the avenues of pursuit in future research related to the APM.
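The configuration-driven idea behind annotation profiles can be sketched in a few lines: the fields of a metadata editor are generated from a declarative profile rather than hard-coded, so changing what metadata can be edited requires no change to the application code. The profile schema below is invented for illustration and is not the APM's actual format:

```python
# Hypothetical annotation profile: a declarative description of the
# metadata fields an editor should offer.
profile = {
    "title":   {"label": "Title", "datatype": "string", "required": True},
    "creator": {"label": "Author", "datatype": "string", "required": True},
    "issued":  {"label": "Publication date", "datatype": "date", "required": False},
}

def generate_editor_fields(profile: dict):
    """Turn a profile into field descriptors a UI layer could render;
    the widget mapping is a made-up example."""
    fields = []
    for prop, spec in profile.items():
        fields.append({
            "property": prop,
            "widget": "date-picker" if spec["datatype"] == "date" else "text-input",
            "label": spec["label"] + (" *" if spec["required"] else ""),
        })
    return fields

for field in generate_editor_fields(profile):
    print(field)
```

Swapping in a different profile dictionary changes the editor without touching `generate_editor_fields`, which is the adaptability the thesis describes.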
10

Personalization strategies in e-commerce: web log analysis as an instrument of personalization within eCRM

Mayer, Thomas. January 2007 (has links)
Universität Freiburg (Breisgau), Diss., 2006.
