61 |
Semantic Web Enabled Composition of Web Services. Medjahed, Brahim. 30 April 2004 (has links)
In this dissertation, we present a novel approach for the automatic composition of Web services on the envisioned Semantic Web. Automatic service composition requires dealing with three major research thrusts: semantic description of Web services, composability of participant services, and generation of composite service descriptions.
This dissertation deals with the aforementioned research issues. We first propose an ontology-based framework for organizing and describing semantic Web services. We introduce the concept of community to cluster Web services based on their domain of interest. Each community is defined as an instance of an ontology called the community ontology. We then propose a composability model to check whether semantic Web services can be combined, thereby avoiding unexpected failures at run time. The model defines formal safeguards for meaningful composition through the use of composability rules. We also introduce the notions of composability degree and tau-composability to cater for partial and total composability. Based on the composability model, we propose a set of algorithms that automatically generate detailed descriptions of composite services from high-level specifications of composition requests. We introduce a Quality of Composition (QoC) model to assess the quality of the generated composite services. The techniques presented in this dissertation are implemented in WebDG, a prototype for accessing e-government Web services. Finally, we conduct an extensive performance study (analytical and experimental) of the proposed composition algorithms. / Ph. D.
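As a rough, self-contained illustration of what a composability check of this flavor might look like, the sketch below tests whether one service's outputs semantically cover another's inputs through a toy concept hierarchy, and reports a composability degree as the fraction of satisfied rules. The ontology, the services, and the two rules are invented assumptions, not the dissertation's actual model.

    # Hypothetical sketch of a composability check between two Web services.
    # The concept hierarchy, services, and rules are illustrative assumptions.

    # Toy concept hierarchy: child -> parent.
    SUBCLASS_OF = {
        "ZipCode": "Location",
        "City": "Location",
        "BenefitRecord": "Document",
    }

    def subsumes(general, specific):
        """True if `specific` equals or is a descendant of `general`."""
        while specific is not None:
            if specific == general:
                return True
            specific = SUBCLASS_OF.get(specific)
        return False

    def composable(provider_outputs, consumer_inputs):
        """Each required input must be covered by some provider output."""
        return all(
            any(subsumes(inp, out) for out in provider_outputs)
            for inp in consumer_inputs
        )

    def composability_degree(rules):
        """Fraction of composability rules that hold for a service pair."""
        return sum(1 for rule in rules if rule()) / len(rules)

    locator = {"outputs": ["ZipCode"], "mode": "one-way"}
    benefits = {"inputs": ["Location"], "mode": "one-way"}

    rules = [
        lambda: composable(locator["outputs"], benefits["inputs"]),  # message rule
        lambda: locator["mode"] == benefits["mode"],                 # binding rule
    ]
    print("composability degree:", composability_degree(rules))  # 1.0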
|
62 |
A Semantic Web-Based Digital Library Infrastructure to Facilitate Computational Epidemiology. Hasan, S. M. Shamimul. 15 September 2017 (has links)
Computational epidemiology generates and utilizes massive amounts of data. There are two primary categories of datasets: reported and synthetic. Reported data include epidemic data published by organizations (e.g., the WHO, the CDC, and national ministries and departments of health) during and following actual outbreaks, while synthetic datasets comprise spatially explicit synthetic populations, labeled social contact networks, multi-cell statistical experiments, and output data generated from the execution of computer simulation experiments. The discipline of computational epidemiology encounters numerous challenges because of the size, volume, and dynamic nature of both types of datasets.
In this dissertation, we present semantic web-based schemas to organize diverse reported and synthetic computational epidemiology datasets. The schemas have three layers: conceptual, logical, and physical. The conceptual layer provides data abstraction by exposing common entities and properties to the end user. The logical layer captures the fragmentation and linking aspects of the datasets. The physical layer covers their storage aspects. Mapping files can be generated from the schemas, which are flexible and can grow.
The schemas presented include data linking approaches that can connect large-scale and widely varying epidemic datasets. This linked data leads to an integrated knowledge base, enabling an epidemiologist to ask complex queries that employ multiple datasets. We demonstrate the utility of our knowledge base by developing a query bank, which represents typical analyses carried out by an epidemiologist when planning for or responding to an epidemic. By running queries with different data mapping techniques, we demonstrate the performance of various tools. The empirical results show that leveraging semantic web technology is an effective strategy for reasoning over multiple datasets simultaneously, developing network queries pertinent to epidemic analysis, and conducting the kinds of realistic studies undertaken in an epidemic investigation. The performance of queries varies according to the choice of hardware, underlying database, and resource description framework (RDF) engine. We provide application programming interfaces (APIs) on top of our linked datasets, which an epidemiologist can use for information retrieval without detailed knowledge of the underlying datasets. The proposed semantic web-based digital library infrastructure can be highly beneficial to epidemiologists as they work to comprehend disease propagation for timely outbreak detection and efficient disease control. / PHD / Computational epidemiology generates and utilizes massive amounts of data, and the field faces numerous challenges because of the volume and dynamic nature of the datasets utilized. There are two primary categories of datasets. The first contains epidemic datasets tracking actual outbreaks of disease, which are reported by governments, private companies, and associated parties. The second category is synthetic data created through computer simulation. We present semantic web-based schemas to organize diverse reported and synthetic computational epidemiology datasets. The schemas are flexible in use and scale, and utilize data linking approaches that can connect large-scale and widely varying epidemic datasets. This linked data leads to an integrated knowledge base, enabling an epidemiologist to ask complex queries that employ multiple datasets. This ability helps epidemiologists better understand disease propagation, for efficient outbreak detection and disease control activities.
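As a rough illustration of the kind of query such a linked knowledge base supports, the sketch below uses rdflib to join a reported case count with a synthetic population size for one region. The vocabulary (ex:Region, ex:caseCount, ex:syntheticPopulation) is invented for illustration and is not the dissertation's actual schema.

    # Hypothetical sketch: querying a small linked epidemiology graph with
    # rdflib, joining a reported dataset with a synthetic one.
    from rdflib import Graph, Literal, Namespace, RDF
    from rdflib.namespace import XSD

    EX = Namespace("http://example.org/epi#")
    g = Graph()

    # Reported data: case counts per region.
    g.add((EX.regionA, RDF.type, EX.Region))
    g.add((EX.regionA, EX.caseCount, Literal(120, datatype=XSD.integer)))
    # Synthetic data: population size per region, from a simulation dataset.
    g.add((EX.regionA, EX.syntheticPopulation, Literal(10000, datatype=XSD.integer)))

    # A query that spans both datasets: crude attack rate per region.
    q = """
    PREFIX ex: <http://example.org/epi#>
    SELECT ?region ?cases ?pop
    WHERE {
        ?region a ex:Region ;
                ex:caseCount ?cases ;
                ex:syntheticPopulation ?pop .
    }
    """
    for region, cases, pop in g.query(q):
        print(region, int(cases) / int(pop))  # 0.012 for regionA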
|
63 |
Semantic information systems engineering: a query-based approach for semi-automatic annotation of web services. Al Asswad, Mohammad Mourhaf. January 2011 (has links)
There has been increasing interest in Semantic Web services (SWS) as a proposed solution to facilitate automatic discovery, composition and deployment of existing syntactic Web services. Successful implementation and wider adoption of SWS by research and industry depend profoundly, however, on the existence of effective and easy-to-use methods for describing service semantics. Unfortunately, Web service semantic annotation is currently performed by manual means. Manual annotation is a difficult, error-prone and time-consuming task, and few approaches exist that aim to semi-automate it. Existing approaches are difficult to use since they require ontology building. Moreover, these approaches employ ineffective matching methods and suffer from the Low Percentage Problem, which arises when only a small number of service elements, in comparison to the total number of elements, are annotated in a given service. This research addresses the Web service annotation problem by developing a semi-automatic annotation approach that allows SWS developers to annotate their syntactic services effectively and easily. The proposed approach does not require application ontologies to model service semantics. Instead, a standard query template is used: this template is filled with data and semantics extracted from WSDL files in order to produce query instances. The input of the annotation approach is the WSDL file of a candidate service and a set of ontologies; the output is an annotated WSDL file. The proposed approach is composed of five phases: (1) concept extraction; (2) concept filtering and query filling; (3) query execution; (4) results assessment; and (5) SAWSDL annotation. The query execution engine makes use of name-based and structural matching techniques. The name-based matching is carried out by CN-Match, a novel matching method and tool that is developed and evaluated in this research. The proposed annotation approach is evaluated using a set of existing Web services and ontologies, with Precision (P), Recall (R), F-Measure (F) and the percentage of annotated elements as evaluation metrics. The evaluation reveals that the proposed approach is effective: relative to manual results, accurate and almost complete annotation results are obtained. In addition, a high percentage of annotated elements is achieved because the approach makes use of effective ontology extension mechanisms.
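As a loose sketch of the first phase (concept extraction), the fragment below pulls element names out of a toy WSDL snippet and splits camelCase identifiers into candidate concept tokens for later name-based matching. The WSDL content and the tokenization rule are illustrative assumptions, not the thesis's actual algorithm or the CN-Match tool.

    # Hypothetical sketch of WSDL concept extraction: collect element names
    # and split camelCase identifiers into lowercase candidate concepts.
    import re
    import xml.etree.ElementTree as ET

    WSDL = """<definitions xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <xsd:element name="getPatientRecord"/>
      <xsd:element name="patientBirthDate"/>
    </definitions>"""

    XSD_NS = "{http://www.w3.org/2001/XMLSchema}"

    def extract_concepts(wsdl_text):
        root = ET.fromstring(wsdl_text)
        names = [el.attrib["name"] for el in root.iter(f"{XSD_NS}element")]
        # "getPatientRecord" -> ["get", "patient", "record"]
        return {
            name: [t.lower() for t in re.findall(r"[A-Z]?[a-z]+", name)]
            for name in names
        }

    print(extract_concepts(WSDL))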
|
64 |
Modelagem de contexto utilizando ontologias. / Context modeling using ontologies. Ponce Escobedo, Edgardo Paúl. 05 May 2008 (has links)
Advances in microelectronics have yielded smaller devices with greater computing and communication power. A pervasive environment contains many kinds of devices, such as sensors, actuators, consumer electronics, and mobile devices, that interact with a person naturally once the context is known. The diversity of devices and information in a pervasive environment introduces an interoperability problem, and such environments are dynamic because of user mobility and the variety of devices. In this work, we propose a semantic context model that enables interoperability and supports the dynamism of the pervasive environment. The proposed model incorporates features of context models developed in previous work and integrates them with models of people's preferences, privacy policies, and services. The context model was shown to be adequate through its application in a case study and through tests. We show that a context model using ontologies and Semantic Web services can handle incomplete and inconsistent information, and supports the interoperability and dynamism of the pervasive environment.
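A minimal sketch of what an ontology-based context model instance can look like in rdflib is shown below. The class and property names (ctx:Person, ctx:locatedIn, ctx:prefers, ctx:sharesLocationWith) are invented for illustration, not the thesis's actual ontology.

    # Hypothetical sketch of a context model instance using rdflib.
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    CTX = Namespace("http://example.org/context#")
    g = Graph()
    g.bind("ctx", CTX)

    # A tiny context ontology: classes and one property.
    g.add((CTX.Person, RDF.type, RDFS.Class))
    g.add((CTX.Room, RDF.type, RDFS.Class))
    g.add((CTX.locatedIn, RDFS.domain, CTX.Person))
    g.add((CTX.locatedIn, RDFS.range, CTX.Room))

    # Context facts: a user, a location, a preference, a privacy policy.
    g.add((CTX.alice, RDF.type, CTX.Person))
    g.add((CTX.alice, CTX.locatedIn, CTX.livingRoom))
    g.add((CTX.alice, CTX.prefers, Literal("dim lighting")))
    g.add((CTX.alice, CTX.sharesLocationWith, CTX.familyOnly))

    print(g.serialize(format="turtle"))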
|
66 |
Using Semantic Web Technologies for Classification Analysis in Social Networks. Opuszko, Marek. 12 March 2012 (has links) (PDF)
The Semantic Web enables people and computers to interact and exchange information, and various machine learning applications have been designed on top of Semantic Web technologies. Particularly noteworthy is the possibility of creating complex metadata descriptions for any problem domain, based on pre-defined ontologies. In this paper we evaluate the use of a semantic similarity measure, based on pre-defined ontologies, as input to a classification analysis. A link prediction between actors of a social network is performed, which could serve as a recommendation system. We measure the prediction performance of ontology-based metadata modeling as well as feature vector modeling. The findings demonstrate that prediction accuracy based on ontology-based metadata is comparable to traditional approaches, and show that data mining using ontology-based metadata is a very promising approach.
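To make the idea concrete, here is a minimal, self-contained sketch of an ontology-based similarity driving a link prediction. The toy concept hierarchy, the Wu-Palmer measure, and the decision threshold are assumptions chosen for illustration and may differ from the measure the paper evaluates.

    # Hypothetical sketch: ontology-based similarity feeding a link predictor.

    # Toy concept hierarchy: child -> parent (root has parent None).
    PARENT = {
        "JazzFan": "MusicFan", "RockFan": "MusicFan",
        "MusicFan": "Person", "Person": None,
    }

    def depth(c):
        d = 0
        while PARENT[c] is not None:
            c, d = PARENT[c], d + 1
        return d

    def lca(a, b):
        """Lowest common ancestor of two concepts."""
        ancestors = set()
        while a is not None:
            ancestors.add(a)
            a = PARENT[a]
        while b not in ancestors:
            b = PARENT[b]
        return b

    def wu_palmer(a, b):
        """Wu-Palmer similarity: 2*depth(lca) / (depth(a) + depth(b))."""
        total = depth(a) + depth(b)
        return 2 * depth(lca(a, b)) / total if total else 1.0

    # Actors annotated with ontology concepts; predict a link when the
    # average pairwise similarity of their annotations clears a threshold.
    actors = {"anna": ["JazzFan"], "ben": ["RockFan"], "carl": ["Person"]}

    def predict_link(x, y, threshold=0.4):
        sims = [wu_palmer(a, b) for a in actors[x] for b in actors[y]]
        return sum(sims) / len(sims) >= threshold

    print(predict_link("anna", "ben"))   # True: siblings under MusicFan (0.5)
    print(predict_link("anna", "carl"))  # False: related only via the root (0.0)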
|
67 |
Using Semantic Web Services For Data Integration In Banking Domain. Okat, Caglar. 01 May 2010 (has links) (PDF)
A semantic-model-oriented transformation mechanism is developed for centralizing intra-enterprise data integration. Such a mechanism is especially crucial in the banking domain, which is selected for this study. A new domain ontology is constructed to provide the basis for annotations. A bottom-up approach is preferred for semantic annotations in order to utilize existing web service definitions. Transformations between syntactic web service XML responses and semantic model concepts are defined in transformation files, which are stored and executed in a separate central transformation repository to enhance abstraction and reusability. An RDF store is implemented to hold the transformed RDF data, and the inference power of the semantic model is demonstrated by executing semantic queries against it.
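A minimal sketch of this kind of lifting step, under invented names: the fragment below maps a toy banking web service XML response to RDF triples with rdflib and queries the result. The XML shape, vocabulary, and mapping rule are illustrative assumptions, not the thesis's transformation files.

    # Hypothetical sketch: lifting a syntactic XML response into RDF.
    import xml.etree.ElementTree as ET
    from rdflib import Graph, Literal, Namespace, RDF

    BANK = Namespace("http://example.org/bank#")

    RESPONSE = """<getAccountResponse>
      <account id="acc-42">
        <owner>Jane Doe</owner>
        <balance>1500.00</balance>
      </account>
    </getAccountResponse>"""

    def lift(xml_text):
        """Map one XML response to RDF using a fixed, hand-written rule."""
        g = Graph()
        for acc in ET.fromstring(xml_text).iter("account"):
            node = BANK[acc.attrib["id"]]
            g.add((node, RDF.type, BANK.Account))
            g.add((node, BANK.owner, Literal(acc.findtext("owner"))))
            g.add((node, BANK.balance, Literal(float(acc.findtext("balance")))))
        return g

    g = lift(RESPONSE)
    for row in g.query(
        "PREFIX bank: <http://example.org/bank#> "
        "SELECT ?o WHERE { ?a a bank:Account ; bank:owner ?o }"
    ):
        print(row.o)  # Jane Doe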
|
68 |
An ontology based approach towards a universal description framework for home networks. Docherty, Liam S. January 2009 (has links)
Current home networks typically involve two or more machines sharing network resources. The vision for the home network has grown from a simple computer network to everyday appliances embedded with network capabilities. In this environment, devices and services within the home can interoperate, regardless of protocol or platform. Network clients can discover required resources by performing network discovery over component descriptions. Common approaches to this discovery process involve simple matching of keywords or attribute/value pairings. Interest emerging from the Semantic Web community has led to ontology languages being applied to network domains, providing a logical and semantically rich approach to both describing and discovering network components. In much of the existing work within this domain, developers have focused on defining new description frameworks in isolation from existing protocol frameworks and vocabularies. This work proposes an ontology-based description framework which takes the ontology approach to the next step, incorporating existing description frameworks into the ontology-based framework and allowing discovery mechanisms to cover multiple existing domains. In this manner, existing protocols and networking approaches can participate in semantically rich discovery processes. The framework also includes a system architecture developed for the purpose of reconciling existing home network solutions with the ontology-based discovery process. This work also describes an implementation of the approach, deployed within a home-network environment and involving existing home networking frameworks, protocols and components, allowing the claims of this work to be examined and evaluated from a 'real-world' perspective.
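As a rough sketch of such protocol-agnostic discovery, the fragment below describes devices from two different protocol families against one toy ontology and finds the right one with a single SPARQL query. Every name here (hn:MediaRenderer, hn:supportsFormat, the device individuals) is an invented illustration, not the thesis's framework.

    # Hypothetical sketch: semantically rich discovery over device
    # descriptions lifted from different protocol families.
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    HN = Namespace("http://example.org/homenet#")
    g = Graph()

    # Ontology: a UPnP renderer and a DLNA TV are both MediaRenderers.
    g.add((HN.UPnPRenderer, RDFS.subClassOf, HN.MediaRenderer))
    g.add((HN.DLNATelevision, RDFS.subClassOf, HN.MediaRenderer))

    # Device descriptions, e.g. lifted from existing discovery protocols.
    g.add((HN.livingRoomTV, RDF.type, HN.DLNATelevision))
    g.add((HN.livingRoomTV, HN.supportsFormat, Literal("video/mp4")))
    g.add((HN.kitchenSpeaker, RDF.type, HN.UPnPRenderer))
    g.add((HN.kitchenSpeaker, HN.supportsFormat, Literal("audio/mpeg")))

    # Discovery query: any MediaRenderer that can play mp4 video,
    # regardless of which protocol family described it.
    q = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX hn: <http://example.org/homenet#>
    SELECT ?dev WHERE {
        ?cls rdfs:subClassOf hn:MediaRenderer .
        ?dev a ?cls ;
             hn:supportsFormat "video/mp4" .
    }
    """
    for (dev,) in g.query(q):
        print(dev)  # http://example.org/homenet#livingRoomTV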
|
69 |
Linked Data Quality Assessment and its Application to Societal Progress Measurement. Zaveri, Amrapali. 19 May 2015 (has links) (PDF)
In recent years, the Linked Data (LD) paradigm has emerged as a simple mechanism for employing the Web as a medium for data and knowledge integration, where both documents and data are linked. Moreover, the semantics and structure of the underlying data are kept intact, making this the Semantic Web. LD essentially entails a set of best practices for publishing and connecting structured data on the Web, which allows publishing and exchanging information in an interoperable and reusable fashion. Many different communities on the Internet, such as geographic, media, life sciences and government, have already adopted these LD principles. This is confirmed by the dramatically growing Linked Data Web, where currently more than 50 billion facts are represented.
With the emergence of Web of Linked Data, there are several use cases, which are possible due to the rich and disparate data integrated into one global information space. Linked Data, in these cases, not only assists in building mashups by interlinking heterogeneous and dispersed data from multiple sources but also empowers the uncovering of meaningful and impactful relationships. These discoveries have paved the way for scientists to explore the existing data and uncover meaningful outcomes that they might not have been aware of previously.
In all these use cases utilizing LD, one crippling problem is the underlying data quality. Incomplete, inconsistent or inaccurate data affects the end results gravely, making them unreliable. Data quality is commonly conceived as fitness for use, be it for a certain application or use case; datasets that contain quality problems may still be useful for certain applications, depending on the use case at hand. Thus, LD consumption has to deal with the problem of getting the data into a state in which it can be exploited for real use cases. Insufficient data quality can be caused by the LD publication process or can be intrinsic to the data source itself.
A key challenge is to assess the quality of datasets published on the Web and make this quality information explicit. Assessing data quality is a particular challenge in LD because the underlying data stems from a set of multiple, autonomous and evolving data sources. Moreover, the dynamic nature of LD makes assessing quality crucial for measuring how accurately the data represents the real world. On the document Web, data quality can only be indirectly or vaguely defined, but LD requires more concrete and measurable data quality metrics, including correctness of facts with respect to the real world, adequacy of semantic representation, quality of interlinks, interoperability, timeliness, and consistency with regard to implicit information. Even though data quality is an important concept in LD, few methodologies have been proposed to assess the quality of these datasets.
Thus, in this thesis, we first unify 18 data quality dimensions and provide a total of 69 metrics for the assessment of LD. The first methodology employs LD experts for the assessment. This assessment is performed with the help of the TripleCheckMate tool, which was developed specifically to assist LD experts in assessing the quality of a dataset, in this case DBpedia. The second methodology is a semi-automatic process in which the first phase involves the detection of common quality problems through the automatic creation of an extended schema for DBpedia, and the second phase involves the manual verification of the generated schema axioms. Thereafter, we employ the wisdom of the crowd, i.e., workers on online crowdsourcing platforms such as Amazon Mechanical Turk (MTurk), to assess the quality of DBpedia. We then compare the two approaches (the previous assessment by LD experts and the assessment by MTurk workers in this study) in order to measure the feasibility of each type of user-driven data quality assessment methodology.
Additionally, we evaluate another semi-automated methodology for LD quality assessment, which also involves human judgement. In this semi-automated methodology, selected metrics are formally defined and implemented as part of a tool, namely R2RLint. The user is provided not only the results of the assessment but also the specific entities that cause the errors, helping users understand and fix the quality issues. Finally, we consider a domain-specific use case that consumes LD and depends on data quality. In particular, we identify four LD sources, assess their quality using the R2RLint tool and then utilize them in building the Health Economic Research (HER) Observatory. We show the advantages of this semi-automated assessment over the other types of quality assessment methodologies discussed earlier. The Observatory aims at evaluating the impact of research development on the economic and healthcare performance of each country per year. We illustrate the usefulness of LD in this use case and the importance of quality assessment for any data analysis.
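For a feel of what a concrete metric can look like, the sketch below computes two deliberately simplified measures over an rdflib graph: completeness as the fraction of typed resources carrying a required property, and interlinking as the fraction with an owl:sameAs link, and then reports the offending entities. These toy definitions are assumptions for illustration; they are not the thesis's 69 metrics or the R2RLint implementation.

    # Hypothetical sketch of two simplified Linked Data quality metrics.
    from rdflib import Graph, Literal, Namespace, RDF
    from rdflib.namespace import OWL, RDFS

    EX = Namespace("http://example.org/data#")
    g = Graph()
    g.add((EX.city1, RDF.type, EX.City))
    g.add((EX.city1, RDFS.label, Literal("Leipzig")))
    g.add((EX.city1, OWL.sameAs, EX.dbpediaLeipzig))
    g.add((EX.city2, RDF.type, EX.City))  # no label, no sameAs link

    cities = set(g.subjects(RDF.type, EX.City))

    def coverage(prop):
        """Fraction of cities that have at least one value for `prop`."""
        have = sum(1 for c in cities if (c, prop, None) in g)
        return have / len(cities)

    print("completeness (rdfs:label):", coverage(RDFS.label))  # 0.5
    print("interlinking (owl:sameAs):", coverage(OWL.sameAs))  # 0.5

    # Point the user at the entities causing the problem.
    missing = [c for c in cities if (c, RDFS.label, None) not in g]
    print("entities missing labels:", missing)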
|
70 |
An ontology for enhancing automation and interoperability in Enterprise Crowdsourcing Environments. Hetmank, Lars. 17 November 2014 (has links) (PDF)
Enterprise crowdsourcing transforms the way in which traditional business tasks can be processed by harnessing the collective intelligence and workforce of a large and often diversified group of people. At present, data and information residing within enterprise crowdsourcing systems and other business applications are insufficiently interlinked and are rarely made publicly available in an open and semantically structured manner – neither on the corporate intranet nor on the World Wide Web (WWW). However, the semantic annotation of enterprise crowdsourcing activities is a promising research and application domain. The Semantic Web and its related technologies, methods and principles for publishing structured data offer an extension of the traditional layout-oriented Web that can provide more intelligent and complex services.
This technical report describes the efforts toward a universal and lightweight yet powerful Semantic Web vocabulary for the domain of enterprise crowdsourcing. As a methodology for developing the vocabulary, the approach of ontology engineering is applied. To illustrate the purpose and to limit the scope of the ontology, several informal competency questions as well as functional and non-functional requirements are presented. The subsequent conceptualization of the ontology applies different sources of knowledge and considers various perspectives. A set of semantic entities is derived from a review of existing crowdsourcing applications and of recent crowdsourcing literature. During the domain capture, all partial results of the review are integrated into a consistent data dictionary and structured as a UML data schema. The designed ontology includes 24 classes, 22 object properties and 30 datatype properties to describe the key aspects of a crowdsourcing model (CSM). To demonstrate technical feasibility, the ontology is implemented using the Web Ontology Language (OWL). Finally, the ontology is evaluated by transforming informal competency questions into formal ones, comparing the ontology to existing semantic vocabularies, and calculating ontology metrics. Evidence is shown that the CSM ontology covers the key representational needs of the enterprise crowdsourcing domain. At the end of the technical report, current limitations are illustrated and directions for future research are proposed.
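To give a flavor of what such an OWL implementation and a formalized competency question can look like, here is a small rdflib sketch. The class and property names (csm:Task, csm:performs, csm:amount) are plausible stand-ins invented for illustration, not the report's actual vocabulary.

    # Hypothetical sketch: declaring a few crowdsourcing-model (CSM) terms
    # in OWL and running one formalized competency question.
    from rdflib import Graph, Namespace, RDF, RDFS
    from rdflib.namespace import OWL, XSD

    CSM = Namespace("http://example.org/csm#")
    g = Graph()
    g.bind("csm", CSM)
    g.bind("owl", OWL)

    # Classes.
    for cls in (CSM.Task, CSM.Worker, CSM.Reward):
        g.add((cls, RDF.type, OWL.Class))

    # An object property linking workers to tasks.
    g.add((CSM.performs, RDF.type, OWL.ObjectProperty))
    g.add((CSM.performs, RDFS.domain, CSM.Worker))
    g.add((CSM.performs, RDFS.range, CSM.Task))

    # A datatype property for the reward amount.
    g.add((CSM.amount, RDF.type, OWL.DatatypeProperty))
    g.add((CSM.amount, RDFS.domain, CSM.Reward))
    g.add((CSM.amount, RDFS.range, XSD.decimal))

    # Competency question, formalized: "Which workers perform which tasks?"
    g.add((CSM.worker1, CSM.performs, CSM.task1))
    q = """
    PREFIX csm: <http://example.org/csm#>
    SELECT ?worker ?task WHERE { ?worker csm:performs ?task }
    """
    for worker, task in g.query(q):
        print(worker, task)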
|