171 |
Descoberta e composição de serviços web semânticos através de algoritmo genético baseado em tipos abstratos de dados. / Discovery and composition of semantic web services through genetic algorithms based on abstract data types. Soares, Elvys Alves, 13 November 2009
The Semantic Web is an extension of the current Web in which the availability of information is expected to enable cooperation between humans and, above all, machines. The creation of standards that express shared meaning enables the construction of applications that solve integration, collaboration, and automation problems already identified by the scientific community and by technology consumers.
The use of Web Services has brought several advances in this direction, and their annotation in semantic terms, turning them into Semantic Web Services, makes the Semantic Web vision feasible. Several technologies enable the creation of such elements and their use as basic building blocks for developing applications embedded in the Web. Given the fast growth in the number of services, approaches that effectively solve the problem of integrating and using these services, with quality guarantees and acceptable response times, become necessary.
This work proposes the modeling of a software solution to the problem of the discovery and composition of Semantic Web Services through a genetic algorithm based on abstract data types. A tool implementation using the OWL and OWL-S languages and the OWL-S API is also proposed, and the formal definition of the problem, the expectations of the scientific community regarding the solutions developed, and the results obtained concerning the feasibility of the proposal are presented.
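The abstract names the technique but not its mechanics. As a loose, hypothetical illustration of how a genetic algorithm can search for a service composition, consider the sketch below; the toy service table, chromosome encoding, fitness function, and operators are invented for this sketch and are not the thesis's ADT-based formulation:

```python
import random

# Invented toy registry: service id -> (input concepts, output concepts).
SERVICES = {
    "s1": ({"City"}, {"Airport"}),
    "s2": ({"Airport"}, {"Flight"}),
    "s3": ({"City"}, {"Hotel"}),
}

def fitness(chromosome, provided, requested):
    """Fraction of requested concepts produced by executing the chain in order."""
    known = set(provided)
    for sid in chromosome:
        inputs, outputs = SERVICES[sid]
        if inputs <= known:        # the service is invocable with what is known
            known |= outputs
    return len(known & requested) / len(requested)

def evolve(provided, requested, pop_size=20, generations=50, length=3):
    pop = [[random.choice(list(SERVICES)) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, provided, requested), reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(length)] = random.choice(list(SERVICES))  # point mutation
            children.append(child)
        pop = survivors + children
    return pop[0]

best = evolve({"City"}, {"Flight", "Hotel"})
print(best, fitness(best, {"City"}, {"Flight", "Hotel"}))
```

A real formulation would also need crossover, penalties for overly long chains, and semantic matching of concepts rather than exact set comparison.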
|
172 |
An Evaluation Platform for Semantic Web Technology. Åberg, Cécile, January 2006
The vision of the Semantic Web aims at enhancing today's Web in order to provide a more efficient and reliable environment for both providers and consumers of Web resources (i.e., information and services). To deploy the Semantic Web, various technologies have been developed, such as machine-understandable description languages, language parsers, goal matchers, and resource composition algorithms. Since the Semantic Web is just emerging, each technology tends to make assumptions about different aspects of the Semantic Web's architecture and use, such as the kind of applications that will be deployed, the resource descriptions, the consumers' and providers' requirements, and the existence and capabilities of other technologies. In order to ensure the deployment of a robust and useful Semantic Web and of the applications that will rely on it, several aspects of the technologies must be investigated: whether the assumptions made are reasonable, whether the existing technologies allow construction of a usable Semantic Web, and how to systematically identify which technology to use when designing new applications. In this thesis we provide a means of investigating these aspects for service discovery, a critical task in the context of the Semantic Web. We propose a simulation and evaluation platform for evaluating current and future Semantic Web technology with different resource sets and consumer and provider requirements. For this purpose we provide a model to represent the Semantic Web, a model of the evaluation platform, an implementation of the evaluation platform as a multi-agent system, and an illustrative use of the platform to evaluate service discovery technology in a travel scenario. The implementation of the platform shows the feasibility of our evaluation approach. We show how the platform provides a controlled setting to support the systematic identification of bottlenecks and other challenges for new Semantic Web applications. Finally, the evaluation shows that the platform can be used to assess technology with respect to both hardware issues, such as the kind and number of computers involved in a discovery scenario, and other issues, such as the quality of the service discovery result.
|
173 |
Ubiquitous Semantic Applications. Ermilov, Timofey, 18 December 2014
As Semantic Web technology evolves, many open areas emerge and attract growing research attention. In addition to the quickly expanding Linked Open Data (LOD) cloud, various embeddable metadata formats (e.g., RDFa, microdata) are becoming more common. Corporations are already using the existing Web of Data to create technologies that were not possible before; IBM's Watson, an artificial intelligence computer system capable of answering questions posed in natural language, is a prominent example.
On the other hand, ubiquitous devices carrying a large number of sensors and integrated components are becoming increasingly powerful, fully featured computing platforms in our pockets and homes. For many people, smartphones and tablet computers have already replaced traditional computers as their window to the Internet and to the Web. Hence, managing and presenting information that is useful to a user is a main requirement for today's smartphones, and providing access to the emerging Web of Data from ubiquitous devices is becoming extremely important.
In this thesis we investigate how ubiquitous devices can interact with the Semantic Web. We identified five different approaches for bringing the Semantic Web to ubiquitous devices, and we outline and discuss the challenges of implementing these approaches in Section 1.2. We describe a conceptual framework for ubiquitous semantic applications in Chapter 4. We distinguish three client approaches for accessing semantic data from ubiquitous devices, depending on how much of the semantic data processing is performed on the device itself (thin, hybrid, and fat clients); these are discussed in Chapter 5, together with solutions to the related challenges. Two provider approaches (fat and hybrid) can be distinguished for exposing data from ubiquitous devices on the Semantic Web; these are discussed in Chapter 6, together with solutions to the related challenges. We conclude with a discussion of each of the thesis's contributions and propose future work for each of the discussed approaches in Chapter 7.
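To make the thin/fat client distinction concrete, here is a minimal sketch assuming Python with the requests and rdflib libraries and DBpedia as an example data source; the thesis's own implementations may differ:

```python
import requests
from rdflib import Graph

QUERY = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?label WHERE { <http://dbpedia.org/resource/Leipzig> rdfs:label ?label } LIMIT 5
"""

def thin_client(endpoint="https://dbpedia.org/sparql"):
    """Thin client: all semantic processing happens on a remote SPARQL endpoint."""
    resp = requests.get(endpoint, params={"query": QUERY},
                        headers={"Accept": "application/sparql-results+json"})
    resp.raise_for_status()
    return [b["label"]["value"] for b in resp.json()["results"]["bindings"]]

def fat_client(resource="http://dbpedia.org/resource/Leipzig"):
    """Fat client: the device fetches the RDF and evaluates the query locally."""
    g = Graph()
    g.parse(resource)  # assumes the URI dereferences to an RDF serialization
    return [str(row[0]) for row in g.query(QUERY)]

# A hybrid client would sit in between, e.g. querying remotely but caching
# and post-processing triples on the device.
```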
|
174 |
Linked Data Quality Assessment and its Application to Societal Progress Measurement. Zaveri, Amrapali, 17 April 2015
In recent years, the Linked Data (LD) paradigm has emerged as a simple mechanism for employing the Web as a medium for data and knowledge integration in which both documents and data are linked. Moreover, the semantics and structure of the underlying data are kept intact, making this the Semantic Web. LD essentially entails a set of best practices for publishing and connecting structured data on the Web, which allows publishing and exchanging information in an interoperable and reusable fashion. Many different communities on the Internet, such as geographic, media, life sciences, and government, have already adopted these LD principles. This is confirmed by the dramatically growing Linked Data Web, where currently more than 50 billion facts are represented.
With the emergence of the Web of Linked Data, several use cases become possible thanks to the rich and disparate data integrated into one global information space. Linked Data, in these cases, not only assists in building mashups by interlinking heterogeneous and dispersed data from multiple sources but also empowers the uncovering of meaningful and impactful relationships. These discoveries have paved the way for scientists to explore existing data and uncover meaningful outcomes they might not have been aware of previously.
In all these use cases utilizing LD, one crippling problem is the underlying data quality. Incomplete, inconsistent, or inaccurate data gravely affects the end results, making them unreliable. Data quality is commonly conceived as fitness for use, be it for a certain application or use case; datasets containing quality problems may still be useful for certain applications, depending on the use case at hand. Thus, LD consumption has to deal with the problem of getting the data into a state in which it can be exploited for real use cases. Insufficient data quality can be caused either by the LD publication process or can be intrinsic to the data source itself.
A key challenge is to assess the quality of datasets published on the Web and to make this quality information explicit. Assessing data quality is particularly challenging in LD because the underlying data stems from a set of multiple, autonomous, and evolving data sources. Moreover, the dynamic nature of LD makes quality assessment crucial for measuring how accurately the real world is represented. On the document Web, data quality can only be defined indirectly or vaguely, but LD requires more concrete and measurable data quality metrics, such as correctness of facts with respect to the real world, adequacy of semantic representation, quality of interlinks, interoperability, timeliness, or consistency with regard to implicit information. Even though data quality is an important concept in LD, few methodologies have been proposed to assess the quality of these datasets.
Thus, in this thesis, we first unify 18 data quality dimensions and provide a total of 69 metrics for the assessment of LD. The first methodology employs LD experts for the assessment, performed with the help of the TripleCheckMate tool, which was developed specifically to assist LD experts in assessing the quality of a dataset, in this case DBpedia. The second methodology is a semi-automatic process in which the first phase involves the detection of common quality problems through the automatic creation of an extended schema for DBpedia, and the second phase involves the manual verification of the generated schema axioms. Thereafter, we employ the wisdom of the crowd, i.e., workers on online crowdsourcing platforms such as Amazon Mechanical Turk (MTurk), to assess the quality of DBpedia. We then compare the two approaches (the previous assessment by LD experts and the assessment by MTurk workers in this study) in order to measure the feasibility of each type of user-driven data quality assessment methodology.
Additionally, we evaluate another semi-automated methodology for LD quality assessment, which also involves human judgement. In this methodology, selected metrics are formally defined and implemented as part of a tool, namely R2RLint. The user is provided not only the results of the assessment but also the specific entities that cause the errors, helping users understand and fix the quality issues. Finally, we consider a domain-specific use case that consumes LD and depends on data quality. In particular, we identify four LD sources, assess their quality using the R2RLint tool, and then utilize them in building the Health Economic Research (HER) Observatory, which aims at evaluating the impact of research development on the economic and healthcare performance of each country per year. We show the advantages of this semi-automated assessment over the other types of quality assessment methodologies discussed earlier, and we illustrate the usefulness of LD in this use case and the importance of quality assessment for any data analysis.
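As a flavor of what a single automatable metric can look like, the sketch below computes a toy "labeling completeness" score with rdflib. The metric definition is an invented simplification for illustration, not the actual implementation of TripleCheckMate or R2RLint:

```python
from rdflib import Graph, URIRef
from rdflib.namespace import RDF, RDFS

def labeling_completeness(g: Graph) -> float:
    """Toy metric: fraction of typed resources that also carry an rdfs:label."""
    typed = {s for s in g.subjects(RDF.type, None) if isinstance(s, URIRef)}
    if not typed:
        return 1.0
    labeled = {s for s in typed if g.value(s, RDFS.label) is not None}
    return len(labeled) / len(typed)

g = Graph()
g.parse("http://dbpedia.org/resource/Leipzig")  # any dereferenceable LD resource
print(f"labeling completeness: {labeling_completeness(g):.2%}")
```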
|
175 |
An ontology for enhancing automation and interoperability in Enterprise Crowdsourcing Environments. Hetmank, Lars, January 2014
Enterprise crowdsourcing transforms the way in which traditional business tasks can be processed by harnessing the collective intelligence and workforce of a large and often diversified group of people. At present, data and information residing within enterprise crowdsourcing systems and other business applications are insufficiently interlinked and are rarely made publicly available in an open and semantically structured manner, neither on the corporate intranet nor on the World Wide Web (WWW). However, the semantic annotation of enterprise crowdsourcing activities is a promising research and application domain. The Semantic Web and its related technologies, methods, and principles for publishing structured data offer an extension of the traditional layout-oriented Web that can provide more intelligent and complex services.
This technical report describes the efforts toward a universal and lightweight yet powerful Semantic Web vocabulary for the domain of enterprise crowdsourcing. As a methodology for developing the vocabulary, the approach of ontology engineering is applied. To illustrate the purpose and to limit the scope of the ontology, several informal competency questions as well as functional and non-functional requirements are presented. The subsequent conceptualization of the ontology draws on different sources of knowledge and considers various perspectives. A set of semantic entities is derived from a review of existing crowdsourcing applications and of recent crowdsourcing literature. During the domain capture, all partial results of the review are integrated into a consistent data dictionary and structured as a UML data schema. The designed ontology includes 24 classes, 22 object properties, and 30 datatype properties to describe the key aspects of a crowdsourcing model (CSM). To demonstrate the technical feasibility, the ontology is implemented using the Web Ontology Language (OWL). Finally, the ontology is evaluated by means of transforming informal to formal competency questions, comparing it to existing semantic vocabularies, and calculating ontology metrics. Evidence is shown that the CSM ontology covers the key representational needs of the enterprise crowdsourcing domain. At the end of the technical report, current limitations are illustrated and directions for future research are proposed.
Table of Contents
List of Figures
List of Tables
List of Code Listings
List of Abbreviations
Abstract
1 Introduction
2 Research Objective
3 Ontology Engineering
4 Purpose and Scope
4.1 Informal Competency Questions
4.2 Requirements
4.2.1 Functional Requirements
4.2.2 Non-Functional Requirements
5 Ontology Development
5.1 Conceptualization
5.1.1 System Review
5.1.2 Literature Review
5.2 Domain Capture
5.3 Integration
5.3.1 Semantic Vocabularies and Standards
5.3.2 Implications for the Design
5.4 Implementation
6 Evaluation
6.1 Transforming Informal to Formal Competency Questions
6.2 Comparing the Ontology to other Semantic Vocabularies
6.3 Calculating Ontology Metrics
7 Conclusion
8 References
Appendix A (System Review)
Appendix B (Crowdsourcing Taxonomies)
Appendix C (Data Dictionary)
Appendix D (Semantic Vocabularies)
Appendix E (CSM Ontology Source Code)
Appendix F (Sample Data Instance 1)
Appendix G (Sample Data Instance 2)
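The report's actual OWL source code is in Appendix E. Purely as a sketch of the kind of vocabulary described (classes, object properties, datatype properties), the following rdflib snippet builds a tiny fragment with invented names under an example namespace:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS, XSD

# Hypothetical namespace and names; see Appendix E for the real CSM vocabulary.
CSM = Namespace("http://example.org/csm#")

g = Graph()
g.bind("csm", CSM)
g.add((CSM.Task, RDF.type, OWL.Class))
g.add((CSM.Worker, RDF.type, OWL.Class))
g.add((CSM.performedBy, RDF.type, OWL.ObjectProperty))  # object property: Task -> Worker
g.add((CSM.performedBy, RDFS.domain, CSM.Task))
g.add((CSM.performedBy, RDFS.range, CSM.Worker))
g.add((CSM.reward, RDF.type, OWL.DatatypeProperty))     # datatype property on Task
g.add((CSM.reward, RDFS.domain, CSM.Task))
g.add((CSM.reward, RDFS.range, XSD.decimal))

print(g.serialize(format="turtle"))
```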
|
176 |
Enhancing Automation and Interoperability in Enterprise Crowdsourcing Environments. Hetmank, Lars, 01 September 2016
The last couple of years have seen a fascinating evolution. While the early Web predominantly focused on human consumption of Web content, the widespread dissemination of social software and Web 2.0 technologies enabled new forms of collaborative content creation and problem solving. These new forms often utilize the principles of collective intelligence, a phenomenon that emerges from a group of people who either cooperate or compete with each other to create a result that is better or more intelligent than any individual result (Leimeister, 2010; Malone, Laubacher, & Dellarocas, 2010). Crowdsourcing has recently gained attention as one of the mechanisms that taps into the power of web-enabled collective intelligence (Howe, 2008). Brabham (2013) defines it as “an online, distributed problem-solving and production model that leverages the collective intelligence of online communities to serve specific organizational goals” (p. xix). Well-known examples of crowdsourcing platforms are Wikipedia, Amazon Mechanical Turk, and InnoCentive.
Since the emergence of the term crowdsourcing in 2006, one popular misconception has been that crowdsourcing relies largely on an amateur crowd rather than a pool of professionally skilled workers (Brabham, 2013). While this may be true for low-cognition tasks, such as tagging a picture or rating a product, it is often not true for complex problem-solving and creative tasks, such as developing a new computer algorithm or creating an impressive product design. This raises the question of how to efficiently allocate an enterprise crowdsourcing task to appropriate members of the crowd. The sheer number of crowdsourcing tasks available at crowdsourcing intermediaries makes it especially challenging for workers to identify a task that matches their skills, experiences, and knowledge (Schall, 2012, p. 2).
An explanation of why the identification of appropriate expert knowledge plays a major role in crowdsourcing is partly given by Condorcet’s jury theorem (Sunstein, 2008, p. 25). The theorem states that if the average participant in a binary decision process is more likely to be correct than incorrect, then as the number of participants increases, so does the probability that the aggregate arrives at the right answer. When assuming that a suitable participant for a task is more likely to give a correct answer or solution than an unsuitable one, efficient task recommendation becomes crucial to improve the aggregated results of crowdsourcing processes. Although some assumptions of the theorem, such as independent votes, binary decisions, and homogeneous groups, are often unrealistic in practice, it illustrates the importance of optimized task allocation and group formation that consider the task requirements and the workers’ characteristics.
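In formal terms, a standard textbook statement of the theorem (not a formulation taken from the thesis) is: for an odd number n of independent voters, each correct with probability p, the majority verdict is correct with probability

```latex
\[
  P_n \;=\; \sum_{k=\frac{n+1}{2}}^{n} \binom{n}{k}\, p^{k} (1-p)^{\,n-k},
  \qquad p > \tfrac{1}{2} \;\Longrightarrow\; \lim_{n \to \infty} P_n = 1 .
\]
```

For example, with p = 0.6 a single voter is right 60% of the time, while a majority of three is right with probability 3(0.6)^2(0.4) + (0.6)^3 = 0.648, and this probability keeps growing with group size.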
Ontologies are widely applied to support semantic search and recommendation mechanisms (Middleton, De Roure, & Shadbolt, 2009). However, little research has investigated the potential and the design of an ontology for the domain of enterprise crowdsourcing. The author of this thesis argues in favor of enhancing the automation and interoperability of an enterprise crowdsourcing environment by introducing a semantic vocabulary in the form of an expressive but easy-to-use ontology. The deployment of a semantic vocabulary for enterprise crowdsourcing is likely to provide several technical and economic benefits for an enterprise. These benefits were the main drivers of the efforts made during the research project of this thesis:
1. Task allocation: With the utilization of the semantics, requesters are able to form smaller task-specific crowds that perform tasks at lower costs and in less time than larger crowds. A standardized and controlled vocabulary allows requesters to communicate specific details about a crowdsourcing activity within a web page along with other existing displayed information. This has advantages for both contributors and requesters. On the one hand, contributors can easily and precisely search for tasks that correspond to their interests, experiences, skills, knowledge, and availability. On the other hand, crowdsourcing systems and intermediaries can proactively recommend crowdsourcing tasks to potential contributors (e.g., based on their social network profiles).
2. Quality control: Capturing and storing crowdsourcing data increases the overall transparency of the entire crowdsourcing activity and thus allows for a more sophisticated quality control. Requesters are able to check the consistency and receive appropriate support to verify and validate crowdsourcing data according to defined data types and value ranges. Before involving potential workers in a crowdsourcing task, requesters can also judge their trustworthiness based on previous accomplished tasks and hence improve the recruitment process.
3. Task definition: A standardized set of semantic entities supports the configuration of a crowdsourcing task. Requesters can evaluate historical crowdsourcing data to get suggestions for equal or similar crowdsourcing tasks, for example, which incentive or evaluation mechanism to use. They may also decrease their time to configure a crowdsourcing task by reusing well-established task specifications of a particular type.
4. Data integration and exchange: Applying a semantic vocabulary as a standard format for describing enterprise crowdsourcing activities allows not only crowdsourcing systems inside but also crowdsourcing intermediaries outside the company to extract crowdsourcing data from other business applications, such as project management, enterprise resource planning, or social software, and use it for further processing without retyping and copying the data. Additionally, enterprise or web search engines may exploit the structured data and provide enhanced search, browsing, and navigation capabilities, for example, clustering similar crowdsourcing tasks according to the required qualifications or the offered incentives.
Summary: Hetmank, L. (2014). Enhancing Automation and Interoperability in Enterprise Crowdsourcing Environments (Summary).
Article 1: Hetmank, L. (2013). Components and Functions of Crowdsourcing Systems – A Systematic Literature Review. In 11th International Conference on Wirtschaftsinformatik (WI). Leipzig.
Article 2: Hetmank, L. (2014). A Synopsis of Enterprise Crowdsourcing Literature. In 22nd European Conference on Information Systems (ECIS). Tel Aviv.
Article 3: Hetmank, L. (2013). Towards a Semantic Standard for Enterprise Crowdsourcing – A Scenario-based Evaluation of a Conceptual Prototype. In 21st European Conference on Information Systems (ECIS). Utrecht.
Article 4: Hetmank, L. (2014). Developing an Ontology for Enterprise Crowdsourcing. In Multikonferenz Wirtschaftsinformatik (MKWI). Paderborn.
Article 5: Hetmank, L. (2014). An Ontology for Enhancing Automation and Interoperability in Enterprise Crowdsourcing Environments (Technical Report).
Retrieved from http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-155187.
|
177 |
Ontologien als semantische Zündstufe für die digitale Musikwissenschaft? / Ontologies as a semantic ignition stage for digital musicology? Münnich, Stefan, 20 December 2019
Ontologies play a crucial role in the formalised representation of knowledge and information as well as in the infrastructure of the so-called Semantic Web. Despite early initiatives driven by libraries and memory institutions, German-language musicology as a whole has approached the subject only very hesitantly. Taking stock of the field, the author explains basic concepts, challenges, and approaches to ontology modelling, and identifies promising models and already tested use cases for a ‘semantic’ digital musicology.
|
178 |
A framework for semantic web implementation based on context-oriented controlled automatic annotation. Hatem, Muna Salman, January 2009
The Semantic Web is the vision of the future Web. Its aim is to enable machines to process Web documents in a way that makes it possible for computer software to "understand" the meaning of the document contents. Each document on the Semantic Web is to be enriched with metadata that express the semantics of its contents. Many infrastructures, technologies, and standards have been developed and have proven their theoretical use for the Semantic Web, yet very few applications have been created; most current Semantic Web applications were developed for research purposes. This project investigates the major factors restricting the wide spread of Semantic Web applications. We identify the two most important requirements for a successful implementation as the automatic production of semantically annotated documents and the creation and maintenance of a semantics-based knowledge base. This research proposes a framework for Semantic Web implementation based on context-oriented controlled automatic annotation; for short, we call the framework the Semantic Web Implementation Framework (SWIF) and the system that implements it the Semantic Web Implementation System (SWIS). The proposed architecture provides for a Semantic Web implementation of stand-alone websites that automatically annotates Web pages before they are uploaded to the intranet or Internet, and maintains persistent storage of Resource Description Framework (RDF) data both for the domain memory, denoted Control Knowledge, and for the metadata of the Web site's pages. We believe that the presented implementation of the major parts of SWIS introduces a system competitive with current state-of-the-art annotation tools and knowledge management systems, because it handles input documents in the context in which they are created, in addition to automatically learning and verifying knowledge using only the available computerized corporate databases. In this work, we introduce the concept of Control Knowledge (CK), which represents the application's domain memory, and use it to verify the extracted knowledge. Learning is based on the number of occurrences of the same piece of information in different documents. We introduce the concept of Verifiability in the context of annotation, realized by comparing the extracted text's meaning with the information in the CK and by using the proposed database table Verifiability_Tab. We use the linguistic concept of Thematic Role to investigate and identify the correct meaning of words in text documents, which helps correct relation extraction. The verb lexicon used contains the argument structure of each verb together with the thematic structure of the arguments. We also introduce a new method to chunk conjoined statements and to identify the missing subject of the produced clauses. We use semantic classes of verbs, each relating a list of verbs to a single property in the ontology, which helps disambiguate the verb in the input text and enables better information extraction and annotation. Consequently, we propose the following definition for the annotated document, or what is sometimes called the 'Intelligent Document': 'The Intelligent Document is the document that clearly expresses its syntax and semantics for human use and software automation.' This work introduces a promising improvement to the quality of the automatically generated annotated document and of the automatically extracted information in the knowledge base.
Our approach to using Semantic Web technology opens new opportunities for diverse areas of application; e-learning applications, for example, can be greatly improved and become more effective.
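As a rough, hypothetical sketch of the verb-semantic-class idea described above (all lexicon entries and property names below are invented; the thesis's lexicon additionally records the argument and thematic structure of each verb):

```python
# Hypothetical lexicon: each semantic class groups verbs that all map to one
# ontology property.
VERB_CLASSES = {
    "employment": ({"hire", "employ", "recruit"}, "ex:employs"),
    "authorship": ({"write", "author", "compose"}, "ex:creatorOf"),
}

def property_for_verb(verb: str):
    """Resolve a (lemmatized) verb to its ontology property via its semantic class."""
    for class_name, (verbs, prop) in VERB_CLASSES.items():
        if verb.lower() in verbs:
            return class_name, prop
    return None

def annotate(subject: str, verb: str, obj: str):
    """Emit a candidate triple if the verb is covered; else defer to verification."""
    match = property_for_verb(verb)
    if match is None:
        return None  # unknown verb: left for the Control Knowledge / manual check
    _, prop = match
    return (subject, prop, obj)

print(annotate("ACME", "employ", "Alice"))  # ('ACME', 'ex:employs', 'Alice')
```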
|
179 |
Serviços Web Semânticos: da modelagem à composição / Semantic web services: from modeling to composition. Prazeres, Cássio Vinícius Serafim, 31 July 2009
The automation of the discovery, composition, and invocation of Web Services is an important step toward the success of the Semantic Web. When no single Web Service satisfies the functionality required by a user, an alternative is to combine existing services that solve parts of the problem in order to reach a complete solution. Web Services composition can be achieved manually or automatically. When composing services manually, Web Service developers can take advantage of their expertise and knowledge about the services being composed and about the target service. This thesis addresses issues and presents contributions related to the process of automating Web Services composition. The automatic composition of Web Services requires the description and publication of the services in a way that models the knowledge (explicit semantics) that a developer uses to perform manual composition. Automatic Web Service discovery, based on the services' semantic descriptions, is also a crucial step toward automatic composition, because it is the preceding stage in which candidate services for the composition are selected. Research on Semantic Web Services explores the use of Semantic Web technologies to enrich Web Service descriptions with explicit semantics. Three main lines of investigation are adopted in this thesis: Semantic Web Services modeling; automatic discovery of Semantic Web Services; and automatic composition of Semantic Web Services. The main contributions of this thesis include: the RALOWS platform for modeling Web applications as Semantic Web Services, with applications for remote experimentation as a case study; an algorithm for the automatic discovery of Semantic Web Services; a graph-based, minimum-cost-path approach to the automatic composition of Semantic Web Services; and an infrastructure and tools to support the description, publishing, discovery, and composition of Semantic Web Services.
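To illustrate the graph and minimum-cost-path idea named in the contributions, here is a hedged toy sketch: a uniform-cost search in which a state is the set of concepts known so far and each invocable service adds its outputs at a cost. The registry, costs, and state encoding are invented for this sketch; the thesis's actual model may differ substantially.

```python
import heapq

# Invented toy registry: name -> (input concepts, output concepts, cost).
SERVICES = {
    "geocode":  ({"Address"}, {"Coordinates"}, 1.0),
    "forecast": ({"Coordinates"}, {"Weather"}, 2.0),
    "cityOf":   ({"Address"}, {"City"}, 1.0),
    "cityWx":   ({"City"}, {"Weather"}, 5.0),
}

def compose(provided, requested):
    """Minimum-cost chain of services turning `provided` concepts into `requested`."""
    start = frozenset(provided)
    queue = [(0.0, start, [])]            # (accumulated cost, known concepts, plan)
    best = {start: 0.0}
    while queue:
        cost, known, plan = heapq.heappop(queue)
        if requested <= known:
            return cost, plan
        for name, (ins, outs, c) in SERVICES.items():
            if ins <= known and not outs <= known:
                nxt = frozenset(known | outs)
                if cost + c < best.get(nxt, float("inf")):
                    best[nxt] = cost + c
                    heapq.heappush(queue, (cost + c, nxt, plan + [name]))
    return None  # no composition reaches the requested concepts

print(compose({"Address"}, {"Weather"}))  # (3.0, ['geocode', 'forecast'])
```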
|