11 |
A goal directed learning agent for the Semantic Web. Grimnes, Gunnar Aastrand. January 2008 (has links)
This thesis is motivated by the need for autonomous agents on the Semantic Web to be able to learn. The Semantic Web is an effort to extend the existing Web with machine-understandable information, thus enabling intelligent agents to understand the content of web pages and help users carry out tasks online. For such autonomous personal agents working on a world-wide Semantic Web we make two observations. Firstly, every user is different and the Semantic Web will never cater for them all; therefore, it is crucial for an agent to be able to learn from the user and the world around it to provide a personalised view of the web. Secondly, due to the immense amount of information available on the world-wide Semantic Web, an agent cannot read and process all available data. We argue that to deal with this information overload a goal-directed approach is needed: an agent must be able to reason about the external world, its internal state and the actions available, and only carry out the actions that help achieve the current goal. In the first part of this thesis we explore the application of two machine learning techniques to Semantic Web data. Firstly, we investigate the classification of Semantic Web resources: we discuss the issues of mapping Semantic Web formats to an input representation suitable for a selection of well-known algorithms, and outline the requirements for these algorithms to work well in a Semantic Web context. Secondly, we consider the clustering of Semantic Web resources. Here we focus on the definition of the similarity between two resources, and on how we can determine what part of a large Semantic Web graph is relevant to a single resource. In the second part of the thesis we describe our goal-directed learning agent Smeagol.
We present explicit definitions of the classification and clustering techniques devised in the first part of the thesis, allowing Smeagol to use a planning approach to create plans of actions that may fulfil a given top-level goal. We also investigate different ways that Smeagol can dynamically replan when steps within the initial plan fail and show that Smeagol can offer plausible learned answers to a given query, even when no explicit correct answer exists.
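The mapping problem the abstract raises — getting graph-shaped RDF data into the flat feature vectors that well-known classifiers expect — can be sketched roughly as follows. This is an illustrative reconstruction under invented example data, not the thesis's actual encoding:

```python
# Sketch: turn RDF-style triples into binary feature vectors for classification.
# Each resource becomes a vector indexed over the (predicate, object) pairs
# observed anywhere in the data.

def vectorise(triples, resources):
    """triples: iterable of (subject, predicate, object) strings."""
    features = sorted({(p, o) for _, p, o in triples})
    index = {f: i for i, f in enumerate(features)}
    vectors = {}
    for r in resources:
        vec = [0] * len(features)
        for s, p, o in triples:
            if s == r:
                vec[index[(p, o)]] = 1
        vectors[r] = vec
    return features, vectors

# Invented example data: two resources described by type and subject.
triples = [
    ("ex:paper1", "rdf:type", "ex:Article"),
    ("ex:paper1", "dc:subject", "SemanticWeb"),
    ("ex:paper2", "rdf:type", "ex:Article"),
    ("ex:paper2", "dc:subject", "MachineLearning"),
]
features, vectors = vectorise(triples, ["ex:paper1", "ex:paper2"])
```

Each vector can then be handed to an off-the-shelf classifier; the open questions the abstract points to are which predicates to include and how far into the surrounding graph to follow them.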
|
12 |
iSEE: A Semantic Sensors Selection System for Healthcare. Jean Paul, Bambanza. January 2016 (has links)
The massive use of Internet-based connectivity of devices such as smartphones and sensors has led to the emergence of the Internet of Things (IoT). Healthcare is one of the areas in which the deployment of IoT-based applications is becoming most successful. However, the deployment of IoT in healthcare faces one major challenge: the selection of IoT devices by stakeholders (for example, patients, caregivers, health professionals and government agencies) from the set of available IoT devices, based on a disease (for example, asthma) or on various healthcare scenarios (for example, disease management, prevention and rehabilitation). Since healthcare stakeholders currently do not have enough knowledge about IoT, the IoT device selection process has to proceed in a way that allows users to obtain detailed information about IoT devices, for example Quality of Service (QoS) parameters, cost, availability (manufacturer), device placement and associated disease. To address this challenge, this thesis work proposes, develops and validates a novel semantic sensor selection system (iSEE) for healthcare. This thesis also develops an iSEE system prototype and a Smart Healthcare Ontology (SHO). A Java application is built to allow users to query the developed SHO in an efficient way. The iSEE system is evaluated based on query response time and the result set for the queries. Further, we evaluate SHO using Competency Questions (CQs). The conducted evaluations show that our iSEE system can be used efficiently to support stakeholders within the healthcare domain.
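The selection task described above amounts to querying sensor descriptions against a disease and QoS constraints. A minimal sketch of that selection logic, with invented sensor records and property names standing in for individuals in the ontology (the thesis itself queries an OWL ontology from Java):

```python
# Sketch: select sensors for a disease subject to a QoS (accuracy) bound.
# Plain dicts stand in for sensor individuals described in the ontology.
sensors = [
    {"name": "PeakFlowMeter", "disease": "asthma", "accuracy": 0.95, "cost": 40},
    {"name": "PulseOximeter", "disease": "asthma", "accuracy": 0.90, "cost": 25},
    {"name": "GlucoseMonitor", "disease": "diabetes", "accuracy": 0.97, "cost": 60},
]

def select_sensors(sensors, disease, min_accuracy):
    """Return sensors matching `disease` and the accuracy bound, cheapest first."""
    hits = [s for s in sensors
            if s["disease"] == disease and s["accuracy"] >= min_accuracy]
    return sorted(hits, key=lambda s: s["cost"])

chosen = select_sensors(sensors, "asthma", 0.9)
```

In the real system the same question would be posed as a query over SHO, so that class hierarchies (e.g. grouping sensor types) are resolved by the reasoner rather than by string matching.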
|
13 |
Exploring potential improvements to term-based clustering of web documents. Aračić, Damir. January 2007 (has links) (PDF)
Thesis (M.S.)--Washington State University, December 2007. / Includes bibliographical references (p. 66-69).
|
14 |
Meta-Metadata: An Information Semantic Language and Software Architecture for Collection Visualization Application. Mathur, Abhinav. December 2009 (has links)
Information collection and discovery tasks involve the aggregation and manipulation of information resources. An information resource is a location from which a human gathers data to contribute to his/her understanding of something significant. Repositories of information resources include the Google search engine, the ACM Digital Library, Wikipedia, Flickr, and IMDB. Information discovery tasks involve having new ideas in contexts of information collecting.

The information one needs to collect is large, diverse, and hard to keep track of. The heterogeneity and scale also make it difficult to write software to support information collection and discovery tasks. Metadata is a structured means for describing information resources. It forms the basis of digital libraries and search engines.

As metadata is often called "data about data," we define meta-metadata as a formal means for describing metadata, expressed as an XML-based language. We consider the lifecycle of metadata in information collection and discovery tasks and develop a meta-metadata architecture which deals with the data structures for the representation of metadata inside programs, extraction from information resources, rules for presentation to users, and logic that defines how an application needs to operate on metadata. Semantic actions for an information resource collection are steps taken to generate representative objects, including the formation of iconographic image and text surrogates, associated with metadata.

The meta-metadata language serves as a layer of abstraction between information resources, power users, and application developers. A power user can enhance an existing collection visualization application by authoring meta-metadata for a new information resource without modifying the application source code. The architecture provides a set of interfaces for semantic actions which different information discovery and visualization applications can implement according to their own custom requirements. Application developers can modify the implementation of these semantic actions to change the behavior of their application, regardless of the information resource.

We have used our architecture in combinFormation, an information discovery and collection visualization application, and validated it through a user study.
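The idea of authoring extraction rules in XML rather than in application source code can be sketched as follows. The element and attribute names here are invented for illustration; they are not the actual meta-metadata language:

```python
# Sketch: parse a hypothetical meta-metadata description and apply its
# extraction rules to a resource, without touching application code.
import xml.etree.ElementTree as ET

# Invented meta-metadata: declares which fields to pull from a resource.
MMD = """
<meta_metadata name="acm_article">
  <field name="title" xpath="title"/>
  <field name="author" xpath="author"/>
</meta_metadata>
"""

# A toy resource page to extract from.
RESOURCE = "<page><title>Semantic Actions</title><author>A. Mathur</author></page>"

def extract(mmd_xml, resource_xml):
    """Pull the fields named by the meta-metadata out of the resource."""
    rules = ET.fromstring(mmd_xml)
    page = ET.fromstring(resource_xml)
    record = {}
    for field in rules.findall("field"):
        node = page.find(field.get("xpath"))
        record[field.get("name")] = node.text if node is not None else None
    return record

record = extract(MMD, RESOURCE)
```

A power user supporting a new repository would add a new `meta_metadata` description; the extraction code above stays unchanged, which is the layering the abstract describes.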
|
15 |
Semantic methods for execution level business process modeling: modeling support through process verification and service composition. Weber, Ingo M. January 2009 (has links)
Also published as: Karlsruhe, Univ., Diss., 2009
|
16 |
A framework and methodology for ontology mediation through semantic and syntactic mapping. Muthaiyah, Saravanan. January 2008 (has links)
Thesis (Ph. D.)--George Mason University, 2008. / Vita: p. 177. Thesis director: Larry Kerschberg. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Information Technology. Title from PDF t.p. (viewed July 3, 2008). Includes bibliographical references (p. 169-176). Also issued in print.
|
17 |
Applying Semantic Wiki Technology to Corporate Metadata Management: An Implementation Project at Bayer CropScience. Egloff, Mark. January 2008 (has links) (PDF)
Master's thesis, Univ. of St. Gallen, 2008.
|
18 |
A Semantic Framework for Integrating and Publishing Linked Data on the Web. January 2016 (has links)
abstract: The Semantic Web is the web of data; it provides a common framework and technologies for sharing and reusing data in various applications. In Semantic Web terminology, linked data is the term used to describe a method of exposing and connecting data on the web from different sources. The purpose of linked data is to publish data in an open and standard format and to link it with existing data on the Linked Open Data Cloud. The goal of this thesis is to develop a semantic framework for integrating and publishing linked data on the web. Traditionally, integrating data from multiple sources involves an Extract-Transform-Load (ETL) framework that generates datasets for analytics and visualization. This thesis proposes introducing a semantic component into the ETL framework to semi-automate the generation and publishing of linked data. Various existing ETL tools and data integration techniques are analyzed and their deficiencies identified. The thesis derives a set of requirements for the semantic ETL framework by conducting a manual process to integrate data from various sources such as weather, holidays, airports, and flight arrivals, departures and delays. The research questions addressed are: (i) to what extent can the integration, generation, and publishing of linked data to the cloud using a semantic ETL framework be automated? (ii) does the use of semantic technologies produce a richer data model and better-integrated data? Details of the methodology, data collection, and an application that uses the generated linked data are presented. Evaluation is done by comparing the traditional data integration approach with the semantic ETL approach in terms of the effort involved in integration, the data model generated, and querying of the generated data. / Dissertation/Thesis / Masters Thesis Computer Science 2016
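The semantic component proposed above amounts to a transform step that emits subject-predicate-object triples instead of rows. A minimal stdlib sketch, with an invented namespace and flight record (the thesis works with real weather and flight data):

```python
# Sketch: the "transform" step of a semantic ETL pipeline, mapping a flat
# record to triples ready for publishing as linked data.
EX = "http://example.org/"  # invented namespace for illustration

def record_to_triples(record, key):
    """Mint a subject URI from `key`, then emit one triple per remaining field."""
    subject = EX + str(record[key])
    return [(subject, EX + field, value)
            for field, value in record.items() if field != key]

flight = {"flight_id": "AA100", "origin": "JFK", "delay_minutes": 12}
triples = record_to_triples(flight, "flight_id")
```

The linking step that follows would replace literal values such as "JFK" with URIs already published on the Linked Open Data Cloud, which is where the integration across weather, holiday and flight sources happens.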
|
19 |
CoreSec: An Ontology for the Information Security Domain. Ribeiro de Azevedo, Ryan. 31 January 2008 (has links)
Previous issue date: 2008 / In heterogeneous corporate environments, the sharing of resources for problem solving is strongly tied to information security. A critical aspect for organisations is the need for effective and efficient acquisition and distribution of knowledge about risks, vulnerabilities and threats that may be exploited, cause security incidents and negatively impact the business. The various environments of human activity need transparent means to plan and manage problems related to information security, and there is a significant increase in the complexity of designing and planning security, requiring that means of manipulating this information be adopted. To this end, this dissertation proposes an ontology for the computer security domain, named CoreSec. The study aims to demonstrate that once knowledge is formalised, it becomes possible to reuse it, perform inference over it and process it computationally, and it also becomes amenable to communication between humans and intelligent agents. Our proposal considers that information security will be more efficient if it is based on a formal model of domain information, such as an ontology, which can be applied to support the activities of security officers in risk analysis and assessment, the elicitation of security requirements, vulnerability analysis, and the development of more specific ontologies for the information security domain.
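The claim that formalised knowledge supports inference can be illustrated with the simplest case: deriving implicit superclass facts from an explicit subclass hierarchy. The class names below are invented for illustration and are not taken from CoreSec:

```python
# Sketch: tiny taxonomy-style inference, in the spirit of reasoning over a
# security ontology. Each entry maps a class to its direct superclass.
SUBCLASS = {
    "SQLInjection": "InjectionAttack",
    "InjectionAttack": "Threat",
    "WeakPassword": "Vulnerability",
}

def ancestors(cls, subclass=SUBCLASS):
    """All superclasses reachable by following the subclass relation upward."""
    out = []
    while cls in subclass:
        cls = subclass[cls]
        out.append(cls)
    return out

inferred = ancestors("SQLInjection")
```

A reasoner over a real OWL ontology does far more (property restrictions, consistency checking), but this transitive closure is the kind of implicit fact — "an SQL injection is a threat" — that formalisation makes machine-derivable.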
|
20 |
From the Wall to the Web: A Microformat for Visual Art. Bukva, Emir. 07 December 2009 (has links)
No description available.
|