71

Linked Enterprise Data als semantischer, integrierter Informationsraum für die industrielle Datenhaltung / Linked Enterprise Data as a semantic, integrated information space for industrial data management

Graube, Markus 01 March 2018 (has links)
Zunehmende Vernetzung und gesteigerte Flexibilität in Planungs- und Produktionsprozessen sind die notwendigen Antworten auf die gesteigerten Anforderungen an die Industrie in Bezug auf Agilität und Einführung von Mehrwertdiensten. Dafür ist eine stärkere Digitalisierung aller Prozesse und Vernetzung mit den Informationshaushalten von Partnern notwendig. Heutige Informationssysteme sind jedoch nicht in der Lage, die Anforderungen eines solchen integrierten, verteilten Informationsraums zu erfüllen. Ein vielversprechender Kandidat ist jedoch Linked Data, das aus dem Bereich des Semantic Web stammt. Aus diesem Ansatz wurde Linked Enterprise Data entwickelt, welches die Werkzeuge und Prozesse so erweitert, dass ein für die Industrie nutzbarer und flexibler Informationsraum entsteht. Kernkonzept dabei ist, dass die Informationen aus den Spezialwerkzeugen auf eine semantische Ebene gehoben, direkt auf Datenebene verknüpft und für Abfragen sicher bereitgestellt werden. Dazu kommt die Erfüllung industrieller Anforderungen durch die Bereitstellung des Revisionierungswerkzeugs R43ples, der Integration mit OPC UA über OPCUA2LD, der Anknüpfung an industrielle Systeme (z.B. an COMOS), einer Möglichkeit zur Modelltransformation mit SPARQL sowie feingranularen Informationsabsicherung eines SPARQL-Endpunkts. / Increasing collaboration in production networks and increased flexibility in planning and production processes are responses to the increased demands on industry regarding agility and the introduction of value-added services. A solution is the digitalisation of all processes and a deeper connectivity to the information resources of partners. However, today’s information systems are not able to meet the requirements of such an integrated, distributed information space. A promising candidate is Linked Data, which comes from the Semantic Web area. Based on this approach, Linked Enterprise Data was developed, which expands the existing tools and processes. 
Thus, an information space can be created that is usable and flexible for industry. The core idea is to raise information from legacy tools to a semantic level, link it directly on the data level, even across organizational boundaries, and make it securely available for queries. This includes the fulfillment of industrial requirements through the provision of the revision tool R43ples, the integration with OPC UA via OPCUA2LD, the connection to industrial systems (for example, COMOS), a means of model transformation with SPARQL, as well as fine-grained information protection for a SPARQL endpoint.
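The core idea above, lifting tool data to triples and linking it directly on the data level, can be sketched in plain Python. This is a toy illustration, not the thesis's implementation: all identifiers and predicates below are invented, and a real deployment would use an RDF store with a SPARQL endpoint.

```python
# Minimal sketch: information from two engineering tools lifted to triples
# (subject, predicate, object) and joined on a shared semantic level.
# All URIs below are illustrative, not taken from the thesis.

tool_a = {  # e.g. exported from a plant-design tool
    ("plant:Pump7", "rdf:type", "eng:Pump"),
    ("plant:Pump7", "eng:ratedPower", "15kW"),
}
tool_b = {  # e.g. exported from a maintenance system
    ("maint:P-007", "maint:lastService", "2017-11-02"),
}
# Links created directly on the data level:
links = {("plant:Pump7", "owl:sameAs", "maint:P-007")}

def merged_view(graphs, links):
    """Union of all triples plus sameAs links, forming one information space."""
    space = set().union(*graphs) | links
    # Resolve sameAs links: copy statements across linked identifiers.
    for s, p, o in list(space):
        for a, _, b in links:
            if s == b:
                space.add((a, p, o))
    return space

space = merged_view([tool_a, tool_b], links)
# The maintenance date is now reachable from the plant identifier:
assert ("plant:Pump7", "maint:lastService", "2017-11-02") in space
```

Resolving the owl:sameAs links this way is what lets a query against one tool's identifier see facts recorded by another tool.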
72

[en] ENRICHING AND ANALYZING SEMANTIC TRAJECTORIES WITH LINKED OPEN DATA / [pt] ENRIQUECENDO E ANALISANDO TRAJETÓRIAS SEMÂNTICAS COM DADOS ABERTOS INTERLIGADOS

LIVIA COUTO RUBACK RODRIGUES 26 February 2018 (has links)
[pt] Os últimos anos testemunharam o uso crescente de dispositivos que rastreiam objetos móveis: equipamentos com GPS e telefones móveis, veículos ou outros sensores da Internet das Coisas, além de dados de localização de check-ins de redes sociais. Estes dados de mobilidade são representados como trajetórias, e armazenam a sequência de posições de um objeto móvel. Porém, estas sequências representam somente os dados de posição originais, que precisam ser semanticamente enriquecidos para permitir tarefas de análise e apoiar um entendimento profundo sobre o comportamento do movimento. Um outro espaço de dados global sem precedentes tem crescido rapidamente, a Web de Dados, graças à iniciativa de Dados Interligados. Estes dados semânticos ricos e livremente disponíveis fornecem uma nova maneira de enriquecer dados de trajetória. Esta tese apresenta contribuições para os desafios que surgem considerando este cenário. Em primeiro lugar, a tese investiga como dados de trajetória podem se beneficiar da iniciativa de dados interligados, guiando todo o processo de enriquecimento semântico utilizando fontes de dados externas. Em segundo lugar, aborda o tópico de computação de similaridade entre entidades representadas como dados interligados com o objetivo de computar a similaridade entre trajetórias semanticamente enriquecidas. A novidade da abordagem apresentada nesta tese consiste em considerar as características relevantes das entidades como listas ranqueadas. Por último, a tese aborda a computação da similaridade entre trajetórias enriquecidas comparando a similaridade entre todas as entidades representadas como dados interligados que representam as trajetórias enriquecidas. / [en] The last years witnessed a growing number of devices that track moving objects: personal GPS equipped devices and GSM mobile phones, vehicles or other sensors from the Internet of Things but also the location data deriving from the Social Networks check-ins. 
These mobility data are represented as trajectories, recording the sequence of locations of the moving object. However, these sequences only represent the raw location data, and they need to be semantically enriched to be meaningful in analysis tasks and to support a deep understanding of movement behavior. Another unprecedented global data space that is growing at a fast pace is the Web of Data, thanks to the emergence of the Linked Data initiative. These freely available, semantically rich datasets provide a novel way to enhance trajectory data. This thesis presents contributions to the challenges that arise from this scenario. First, it investigates how trajectory data may benefit from the Linked Data initiative by guiding the whole trajectory enrichment process with the use of external datasets. Then, it addresses the pivotal topic of similarity computation between Linked Data entities, with the final objective of computing the similarity between semantically enriched trajectories. The novelty of the approach is to consider the relevant entity features as ranked lists. Finally, the thesis targets the computation of the similarity between enriched trajectories by comparing the similarity of the Linked Data entities that represent them.
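The ranked-list idea can be illustrated with a toy measure in Python: score the agreement of two ranked feature lists by averaging their overlap at every depth, so that agreement near the top of the lists counts for more. This is only a sketch of the rank-aware intuition; the feature names are invented, and the measure actually used in the thesis may differ.

```python
def ranked_overlap(a, b):
    """Average overlap of two ranked feature lists at increasing depths.
    A simplified stand-in for rank-aware similarity measures."""
    depth = max(len(a), len(b))
    total = 0.0
    for d in range(1, depth + 1):
        total += len(set(a[:d]) & set(b[:d])) / d
    return total / depth

# Two entities described by ranked feature lists (illustrative names):
sim = ranked_overlap(["population", "area", "mayor"],
                     ["population", "mayor", "gdp"])
# (1/1 + 1/2 + 2/3) / 3 = 13/18 ≈ 0.72
```

Identical lists score 1.0, disjoint lists 0.0, and swapping items near the top costs more than swapping items near the bottom.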
73

Peer-to-peer, multi-agent interaction adapted to a web architecture

Bai, Xi January 2013 (has links)
The Internet and Web have brought in a new era of information sharing and opened up countless opportunities for people to rethink and redefine communication. With the development of network-related technologies, a Client/Server architecture has become dominant in the application layer of the Internet. Nowadays network nodes are behind firewalls and Network Address Translations, and the centralised design of the Client/Server architecture limits communication between users on the client side. Achieving the conflicting goals of data privacy and data openness is difficult and in many cases the difficulty is compounded by the differing solutions adopted by different organisations and companies. Building a more decentralised or distributed environment for people to freely share their knowledge has become a pressing challenge and we need to understand how to adapt the pervasive Client/Server architecture to this more fluid environment. This thesis describes a novel framework by which network nodes or humans can interact and share knowledge with each other through formal service-choreography specifications in a decentralised manner. The platform allows peers to publish, discover and (un)subscribe to those specifications in the form of Interaction Models (IMs). Peer groups can be dynamically formed and disbanded based on the interaction logs of peers. IMs are published in HTML documents as normal Web pages indexable by search engines and associated with lightweight annotations which semantically enhance the embedded IM elements and at the same time make IM publications comply with the Linked Data principles. The execution of IMs is decentralised on each peer via conventional Web browsers, potentially giving the system access to a very large user community. In this thesis, after developing a proof-of-concept implementation, we carry out case studies of the resulting functionality and evaluate the implementation across several metrics. 
An increasing number of service providers have begun to look for customers proactively, and we believe that in the near future we will not search for services but rather services will find us through our peer communities. Our approaches show how a peer-to-peer architecture for this purpose can be obtained on top of a conventional Client/Server Web infrastructure.
74

Examination of the epidemiology of acute myocardial infarction in England using linked hospital and mortality data

Smolina, Ekaterina January 2011 (has links)
Background: Acute myocardial infarction (AMI) is a major public health concern. There are limited recent national-level population-based epidemiological data on AMI in England. As a result, the current burden of disease is difficult to quantify. Aim: This thesis addresses gaps in knowledge on AMI in England. It aims to provide a comprehensive analysis of AMI epidemiology over the last decade. Methods: This is a population-based study using person-linked routine hospital and mortality data for England for the period from 1 April 1998 to 31 March 2008. Main outcome measures include: trends in event rate, case fatality, and mortality for AMI, as well as trends in characteristics of, and hospital care for, the AMI patient population between 1999 and 2007; rates of occurrence and case fatality for first and recurrent AMI in 2007; and five-year survival and risk of a second AMI for 2003 to 2007. Results: Total age-standardised AMI mortality rate fell by around half, while the age-standardised event rate and case fatality rate each declined by around one third between 1999 and 2007. Approximately half of the decline in AMI mortality was attributed to a decline in event rate and half to improved survival. During the 2000s, the hospitalised AMI patient population became increasingly elderly, presented with more comorbidities, underwent more revascularisation procedures, and spent less time in hospital. In 2007, approximately 90,000 AMIs occurred in England, of which around one third were fatal, one in seven were reinfarctions, and three quarters were AMIs in those aged 65 years and older. Among 30-day survivors of a first AMI, around one in three men and one in four women died within five years, and about one in eight men and one in six women experienced a second AMI in the same time period. Conclusions: There have been substantial improvements in AMI occurrence, survival, and mortality over the last decade in England. 
This was driven by improvements in prevention and acute medical treatment. The results in this thesis emphasise the importance of both.
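The age-standardised rates central to these results come from direct standardisation: each age band's crude rate is weighted by a fixed reference population, so that rates from populations with different age structures become comparable. A minimal sketch with invented numbers (not the thesis's data):

```python
def age_standardised_rate(events, person_years, std_pop):
    """Direct standardisation: weight each age band's crude rate
    (events / person-years) by a reference population.
    All figures below are illustrative, not from the thesis."""
    assert len(events) == len(person_years) == len(std_pop)
    weighted = sum(e / py * w for e, py, w in zip(events, person_years, std_pop))
    return weighted / sum(std_pop)

# Two age bands with crude rates 0.001 and 0.01, reference weights 60:40
rate = age_standardised_rate([100, 500], [100_000, 50_000], [60_000, 40_000])
# 0.001 * 0.6 + 0.01 * 0.4 = 0.0046
```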
75

[en] ENLIDA: ENRICHMENT OF LINKED DATA CUBE DESCRIPTIONS / [pt] ENLIDA: ENRIQUECIMENTO DAS DESCRIÇÕES DE LINKED DATA CUBES

XIMENA ALEXANDRA CABRERA TAPIA 12 January 2015 (has links)
[pt] O termo dados interligados refere-se a conjuntos de triplas RDF organizados segundo certos princípios que facilitam a publicação e o acesso a dados por meio da infraestrutura da Web. Os princípios para organização de dados interligados são de grande importância pois oferecem uma forma de minimizar o problema de interoperabilidade entre bancos de dados expostos na Web. Este trabalho propõe enriquecer um banco de dados que contém descrições em RDF de cubos de dados, interligando seus componentes com entidades definidas em fontes de dados externas através de triplas owl:sameAs. O trabalho propõe uma arquitetura composta por dois componentes principais, o enriquecedor automático e o enriquecedor manual. O primeiro componente gera triplas owl:sameAs automaticamente enquanto que o segundo componente permite ao usuário definir manualmente as ligações. Em conjunto, estes componentes facilitam a definição de cubos de dados de acordo com os princípios de dados interligados / [en] The term Linked Data refers to a set of RDF triples organized according to certain principles that facilitate the publishing and consumption of data using the Web infrastructure. The importance of the Linked Data principles stems from the fact that they offer a way to minimize the interoperability problem between databases exposed on the Web. This dissertation proposes to enrich a database that contains Linked Data cube descriptions by interconnecting the components of the data cubes with entities defined in external data sources, using owl:sameAs triples. The dissertation proposes an architecture consisting of two major components, the automatic enriching component and the manual enriching component. The first component automatically generates owl:sameAs triples, while the second component helps the user manually define owl:sameAs triples that the automatic component was not able to uncover. 
Together, these components therefore facilitate the definition of data cubes according to the Linked Data principles.
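The automatic enriching component is described as generating owl:sameAs triples. A deliberately naive sketch of such a matcher, using exact label matching, is shown below; the URIs and labels are invented, and the actual component is more sophisticated than this.

```python
def auto_enrich(local, external):
    """Naive automatic enricher: link a local cube component to an external
    entity when their normalised labels match exactly. Real matchers use
    fuzzier evidence; identifiers here are invented for illustration."""
    norm = lambda s: s.strip().lower()
    ext_index = {norm(label): uri for uri, label in external.items()}
    return {(uri, "owl:sameAs", ext_index[norm(label)])
            for uri, label in local.items() if norm(label) in ext_index}

local = {"cube:dim/city": "Rio de Janeiro"}          # component -> label
external = {"dbpedia:Rio_de_Janeiro": "rio de janeiro"}  # entity -> label
links = auto_enrich(local, external)
```

Pairs the automatic matcher cannot resolve are exactly the ones the manual enriching component hands over to the user.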
76

Hledání a vytváření relací mezi sloupci v CSV souborech s využitím Linked Dat / Discovering and Creating Relations among CSV Columns Using Linked Data Knowledge Bases

Brodec, Václav January 2019 (has links)
A large amount of data produced by governmental organizations is accessible in the form of tables encoded as CSV files. Semantic table interpretation (STI) strives to transform them into linked data in order to make them more useful. As a significant portion of the tabular data is of a statistical nature, and therefore consists predominantly of numeric values, it is paramount to possess effective means for interpreting relations between the entities and their numeric properties as captured in the tables. As the current general-purpose STI tools infer the annotations of the columns almost exclusively from numeric objects of RDF triples already present in the linked data knowledge bases, they are unable to handle unknown input values. This leaves them with weak evidence for their suggestions. On the other hand, known techniques focusing on the numeric values also have their downsides. Either their background knowledge representation is built in a top-down manner from general knowledge bases, which do not reflect the domain of the input and in turn do not contain the values in a recognizable form, or they do not make use of the context provided by the general STI tools. This causes them to mismatch annotations of columns that consist of similar values but have entirely different meanings. This thesis addresses the...
77

Integrating Linked Data search results using statistical relational learning approaches

Al Shekaili, Dhahi January 2017 (has links)
Linked Data (LD) follows the web in providing low barriers to publication, and in deploying web-scale keyword search as a central way of identifying relevant data. As in the web, searches initially identify results in broadly the form in which they were published, and the published form may be provided to the user as the result of a search. This will be satisfactory in some cases, but the diversity of publishers means that the results of the search may be obtained from many different sources, and described in many different ways. As such, there seems to be an opportunity to add value to search results by providing users with an integrated representation that brings together features from different sources. This involves an on-the-fly and automated data integration process being applied to search results, which raises the question as to what technologies might be most suitable for supporting the integration of LD search results. In this thesis we take the view that the problem of integrating LD search results is best approached by assimilating different forms of evidence that support the integration process. In particular, this dissertation shows how Statistical Relational Learning (SRL) formalisms (viz., Markov Logic Networks (MLN) and Probabilistic Soft Logic (PSL)) can be exploited to assimilate different sources of evidence in a principled way and to beneficial effect for users. Specifically, in this dissertation we consider syntactic evidence derived from LD search results and from matching algorithms, semantic evidence derived from LD vocabularies, and user evidence, in the form of feedback. 
This dissertation makes the following key contributions: (i) a characterisation of key features of LD search results that are relevant to their integration, and a description of some initial experiences in the use of MLN for interpreting search results; (ii) a PSL rule-base that models the uniform assimilation of diverse kinds of evidence; (iii) an empirical evaluation of how the contributed MLN and PSL approaches perform in terms of their ability to infer a structure for integrating LD search results; and (iv) concrete examples of how populating such inferred structures for presentation to the end user is beneficial, as well as guiding the collection of feedback whose assimilation further improves search results presentation.
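PSL works with weighted rules over truth values in [0, 1]. A toy Python sketch of the assimilation idea follows: a weighted combination of syntactic, semantic, and feedback evidence for one candidate match. The weights and scores are invented, and real PSL performs joint inference over all candidates rather than per-candidate averaging.

```python
def combine_evidence(evidence, weights):
    """Toy weighted assimilation of [0, 1] evidence scores for one candidate
    match, loosely in the spirit of PSL's weighted rules (weights invented)."""
    total_w = sum(weights[k] for k in evidence)
    return sum(weights[k] * v for k, v in evidence.items()) / total_w

# User feedback weighted most heavily, string similarity least:
weights = {"syntactic": 1.0, "semantic": 2.0, "feedback": 4.0}
score = combine_evidence(
    {"syntactic": 0.8, "semantic": 0.6, "feedback": 1.0}, weights)
# (1.0*0.8 + 2.0*0.6 + 4.0*1.0) / 7.0 = 6.0/7 ≈ 0.857
```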
78

Softwarová architektura otevřené veřejné správy / Software architecture of open government

Kroupa, Tomáš January 2012 (has links)
Public administration holds a large amount of information whose value is not yet being exploited. Applying Open Data and Linked Data principles would make it possible not only to publish this information effectively, but also to exploit its value. The aim of this thesis is to analyse the current situation, assess and discuss the barriers, and suggest solutions for applying these principles in the public administration of the Czech Republic.
79

Template-Based Question Answering over Linked Data using Recursive Neural Networks

January 2018 (has links)
abstract: The Semantic Web contains large amounts of related information in the form of knowledge graphs such as DBpedia. These knowledge graphs are typically enormous and are not easily accessible for users, as they require specialized knowledge of query languages (such as SPARQL) as well as deep familiarity with the ontologies used by these knowledge graphs. To make these knowledge graphs more accessible, even for non-experts, several question answering (QA) systems have been developed over the last decade. Due to the complexity of the task, several approaches have been undertaken that include techniques from natural language processing (NLP), information retrieval (IR), machine learning (ML) and the Semantic Web (SW). At a high level, most question answering systems approach the task as a conversion from the natural language question to its corresponding SPARQL query. These systems then utilize the query to retrieve the desired entities or literals. One approach, used by most systems today, is to apply deep syntactic and semantic analysis on the input question to derive the SPARQL query. This has resulted in the evolution of natural language processing pipelines with common characteristics such as answer type detection, segmentation, phrase matching, part-of-speech tagging, named entity recognition, named entity disambiguation, syntactic or dependency parsing, semantic role labeling, etc. This has led to NLP pipeline architectures that integrate components that each solve a specific aspect of the problem and pass their results to subsequent components for further processing, e.g., DBpedia Spotlight for named entity recognition and RelMatch for relational mapping. A major drawback of this approach is error propagation, a common problem in NLP: mistakes made early in the pipeline can adversely affect successive steps further down.
Another approach is to use query templates, either manually generated or extracted from existing benchmark datasets such as Question Answering over Linked Data (QALD), to generate the SPARQL queries: essentially a set of predefined queries with various slots that need to be filled. This approach turns the question answering problem into a classification task, where the system needs to match the input question to the appropriate template (class label). This thesis proposes a neural network approach to automatically learn to classify natural language questions into their corresponding templates using recursive neural networks. An obvious advantage of using neural networks is that they eliminate the need for laborious feature engineering, which can be cumbersome and error-prone. The input question is encoded into a vector representation. The model is trained and evaluated on the LC-QuAD dataset (Large-scale Complex Question Answering Dataset), which was created explicitly for machine-learning-based QA approaches that learn complex SPARQL queries. The dataset consists of 5000 questions along with their corresponding SPARQL queries over the DBpedia dataset, spanning 5042 entities and 615 predicates. These queries were annotated with 38 unique templates that the model attempts to classify. The resulting model is evaluated against both the LC-QuAD dataset and the Question Answering over Linked Data (QALD-7) dataset. The recursive neural network achieves a template classification accuracy of 0.828 on the LC-QuAD dataset and 0.618 on the QALD-7 dataset. When the top-2 most likely templates are considered, the model achieves an accuracy of 0.945 on the LC-QuAD dataset and 0.786 on the QALD-7 dataset. After slot filling, the overall system achieves a macro F-score of 0.419 on the LC-QuAD dataset and a macro F-score of 0.417 on the QALD-7 dataset. / Dissertation/Thesis / Masters Thesis Software Engineering 2018
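Once a question has been classified into a template, the remaining work is slot filling: instantiating a predefined SPARQL query with the recognized entities. A minimal sketch of that final step follows; the template and entity URI are illustrative, not taken from LC-QuAD.

```python
# One predefined SPARQL template with a named slot (illustrative, not
# an actual LC-QuAD template). Double braces escape literal SPARQL braces.
TEMPLATES = {
    "capital_of": "SELECT ?x WHERE {{ <{entity}> dbo:capital ?x }}",
}

def build_query(template_id, slots):
    """Fill the slots of the template chosen by the classifier."""
    return TEMPLATES[template_id].format(**slots)

# e.g. classifier output "capital_of" + entity linker output for "France":
q = build_query("capital_of",
                {"entity": "http://dbpedia.org/resource/France"})
# SELECT ?x WHERE { <http://dbpedia.org/resource/France> dbo:capital ?x }
```

This separation is what lets the neural classifier stay small: it only has to pick one of 38 template labels, while entity linking fills the slots independently.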
80

Frameworks for Personalized Privacy and Privacy Auditing

Samavi, M. Reza 13 August 2013 (has links)
As individuals are increasingly benefiting from the use of online services, there are growing concerns about the treatment of personal information. Society’s ongoing response to these concerns often gives rise to privacy policies expressed in legislation and regulation. These policies are written in natural language (or legalese) as privacy agreements that users must agree to, or presented as a set of privacy settings and options that users must opt in or out of in order to receive the service they want. But comprehensibility of privacy policies and settings is becoming increasingly challenging as agreements become longer and there are many privacy options to choose from. Additionally, organizations face the challenge of assuring compliance with policies that govern collecting, using, and sharing of personal data. This thesis proposes frameworks for personalized privacy and privacy auditing to address these two problems. In this thesis, we focus our investigation on the comprehensibility issues of personalized privacy using the concrete application domain of personal health data as recorded in systems known as personal health records (PHR). We develop the Privacy Goals and Settings Mediator (PGSM) model, which is based on i* multi-agent modelling techniques, as a way to help users comprehend privacy settings when employing multiple services over a web platform. Additionally, the PGSM model helps privacy experts contribute their privacy knowledge to the users’ privacy decision-making task. To address the privacy auditing problem, we propose two light-weight ontologies, L2TAP and SCIP, that are designed for deployment as Linked Data, an emerging standard for representing and publishing web data. L2TAP (Linked Data Log to Transparency, Accountability and Privacy) provides flexible and extensible provenance-enabled logging of privacy events. 
SCIP (Simple Contextual Integrity Privacy) provides a simple target for mapping the key concepts of Contextual Integrity and enables SPARQL query-based solutions for two important privacy processes: compliance checking and obligation derivation. This thesis validates the premise of PHR users’ privacy concerns, attitudes and behaviour through an empirical study. The usefulness of the PGSM model for privacy experts is evaluated through interviews with experts. Finally, the scalability and practical benefits of L2TAP+SCIP for log-based privacy auditing are validated experimentally.
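A toy version of the compliance-checking process can be written over a plain event log: every access must be preceded by a matching consent. The schema below is invented for illustration; L2TAP and SCIP express both the log and the check as Linked Data and SPARQL instead.

```python
def compliant(log):
    """Toy compliance check: every 'access' event must be preceded by a
    'consent' event for the same (user, purpose) pair. The event schema is
    invented; L2TAP logs these as provenance-enabled RDF events."""
    consents = set()
    for event in log:  # log is assumed time-ordered
        key = (event["user"], event["purpose"])
        if event["kind"] == "consent":
            consents.add(key)
        elif event["kind"] == "access" and key not in consents:
            return False
    return True

ok_log = [
    {"kind": "consent", "user": "alice", "purpose": "billing"},
    {"kind": "access",  "user": "alice", "purpose": "billing"},
]
bad_log = [{"kind": "access", "user": "bob", "purpose": "ads"}]
```

In the thesis's setting the same question becomes a SPARQL query over the L2TAP log, which is what makes the audit independent of any one application's code.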
