  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Towards pragmatic interoperability to support scientific workflows development

Neiva, Frâncila Weidt 30 September 2015 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Providing interoperability support that considers only the format and meaning (i.e., syntax and semantics) of data exchange is not enough to achieve effective and meaningful collaboration. Pragmatic interoperability has been identified as a key requirement to foster collaboration in a distributed environment. However, fulfilling this requirement is not a trivial task. The aim of this study is to propose and evaluate a solution to support the implementation of pragmatic interoperability in a collaborative system. The proposed solution was implemented and evaluated in an open-source, web-based software ecosystem that supports the collaborative development of scientific workflows, called ECOS Collaborative PL-Science.
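The layering the abstract describes (syntax, semantics, pragmatics) can be made concrete with a small sketch: a message envelope that carries not only data and its meaning but also the sender's intent and context, so the receiver can act on *why* the data was sent. This is an illustrative model only, not the ECOS Collaborative PL-Science implementation; all names (`Message`, `dispatch`, the intent labels) are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    """A message envelope covering the three interoperability layers."""
    payload: dict                 # syntactic layer: the raw data exchanged
    schema: str                   # semantic layer: what the fields mean
    intent: str = "inform"        # pragmatic layer: why the data is sent
    context: dict = field(default_factory=dict)  # e.g. workflow step, sender role

def dispatch(message: Message) -> str:
    """Route a message according to its pragmatic intent, not just its content."""
    handlers = {
        "inform": lambda m: f"stored result from step {m.context.get('step')}",
        "request-review": lambda m: f"notified reviewer about {m.schema}",
    }
    handler = handlers.get(message.intent)
    if handler is None:
        return "ignored: unknown intent"
    return handler(message)

msg = Message(
    payload={"temperature": 21.5},
    schema="sensor-reading/v1",
    intent="inform",
    context={"step": "calibration"},
)
```

Two systems that already agree on `payload` and `schema` would still mishandle this message without the `intent` field, which is the gap pragmatic interoperability addresses.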
12

Abordagem para guiar a reprodução de experimentos computacionais: aplicações em biologia computacional

Knop, Igor de Oliveira 31 March 2016 (has links)
Systems Biology is one of the most powerful emerging areas of the third millennium, combining, in an interdisciplinary way, knowledge and tools from Biology, Computer Science, Medicine, Chemistry, and Engineering. However, the continued development of computational experiments is accompanied by problems such as the manual integration of simulation and analysis tools, the loss of models due to software obsolescence, and the difficulty of reproducing experiments for lack of details about the execution environment used. Most quantitative models published in Biology are lost because they are either no longer available or insufficiently characterized for reproduction. This work proposes an approach to guide the recording of in silico experiments with a focus on their reproduction. The approach involves the creation of a series of annotations during computational modeling, supported by a software environment in which the researcher carries out tool integration, process description, and experiment execution. The goal is to capture the modeling process noninvasively, increase the exchange of knowledge, allow repetition and validation of results, and reduce rework in interdisciplinary research groups. A prototype environment was built, and two different tool workflows were integrated as case studies. The first uses models and tools from cardiac electrophysiology to build new applications on top of the environment. The second presents a new use of system dynamics metamodeling to simulate the response of the innate immune system in a planar section of tissue. Complete capture of the simulation and output-processing workflows was observed in both control experiments. The environment allowed experiments to be reproduced and adapted at three levels: creating new experiments with the same structure as the original; defining new applications that use variations of the original experiment's structure; and reusing the workflow while changing the original models and conditions.
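One ingredient of experiment reproduction mentioned above, recording the execution environment alongside each run, can be sketched as a non-invasive annotation step: a helper that snapshots interpreter, platform, command, and parameters into JSON. This is a generic illustration under assumed field names, not the thesis's actual annotation format:

```python
import json
import platform
import sys
from datetime import datetime, timezone

def snapshot_environment(command: str, parameters: dict) -> str:
    """Record the details needed to re-run an in silico experiment as JSON."""
    record = {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "python_version": sys.version.split()[0],  # interpreter used for the run
        "platform": platform.system(),             # OS the experiment ran on
        "command": command,                        # how the simulation was invoked
        "parameters": parameters,                  # experiment-specific settings
    }
    return json.dumps(record, indent=2, sort_keys=True)

# A hypothetical simulation run being annotated:
note = snapshot_environment("simulate_tissue --steps 500", {"grid": [100, 100]})
```

Storing such a record next to every result is what makes an experiment "sufficiently characterized for reproduction" in the abstract's terms.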
13

Uma arquitetura de baixo acoplamento para execução de padrões de controle de fluxo em grades / A loosely coupled architecture to run workflow control-flow patterns in grid

Alexandre Ricardo Nardi 27 April 2009 (has links)
The use of workflow control-flow patterns in e-Science applications improves productivity by allowing the scientist to concentrate on his or her own research area. However, the use of workflow control-flow patterns for execution in grids remains an open question. This work presents a loosely coupled and extensible architecture that allows patterns to be executed with or without a grid, transparently to the scientist. It also describes the Combined Join Pattern, which addresses parallelization scenarios commonly found in e-Science applications. The architecture is expected to support the scientist's work by offering greater flexibility in grid usage and in the representation of parallelization scenarios.
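The abstract does not define the Combined Join Pattern itself, but the general shape of a control-flow join (fan out branches in parallel, synchronize before continuing) can be sketched as follows; `run_parallel_then_join` and its arguments are hypothetical names for illustration, not the thesis's API:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_parallel_then_join(tasks, join_fn):
    """Fan out tasks in parallel and synchronize at a join before continuing.

    Each task is a no-argument callable (a workflow branch); join_fn combines
    the branch results once all of them have completed.
    """
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = [pool.submit(task) for task in tasks]
        # as_completed yields branches in finish order; the join waits for all.
        results = [future.result() for future in as_completed(futures)]
    return join_fn(results)

# Three trivial branches joined by summation:
total = run_parallel_then_join([lambda: 1, lambda: 2, lambda: 3], join_fn=sum)
```

A grid-aware architecture like the one described would swap the thread pool for grid job submission while keeping the same join semantics visible to the scientist.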
14

Digital Provenance Techniques and Applications

Amani M Abu Jabal (9237002) 13 August 2020 (has links)
This thesis describes a data provenance framework and other associated frameworks for utilizing provenance for data quality and reproducibility. We first identify the requirements for the design of a comprehensive provenance framework which can be applicable to various applications, supports a rich set of provenance metadata, and is interoperable with other provenance management systems. We then design and develop a provenance framework, called SimP, addressing such requirements. Next, we present four prominent applications and investigate how provenance data can be beneficial to such applications. The first application is the quality assessment of access control policies. Towards this, we design and implement the ProFact framework which uses provenance techniques for collecting comprehensive data about actions which were either triggered due to a network context or a user (i.e., a human or a device) action. Provenance data are used to determine whether the policies meet the quality requirements. ProFact includes two approaches for policy analysis: structure-based and classification-based. For the structure-based approach, we design tree structures to organize and assess the policy set efficiently. For the classification-based approach, we employ several classification techniques to learn the characteristics of policies and predict their quality. In addition, ProFact supports policy evolution and the assessment of its impact on the policy quality. The second application is workflow reproducibility. Towards this, we implement ProWS which is a provenance-based architecture for retrieving workflows. Specifically, ProWS transforms data provenance into workflows and then organizes data into a set of indexes to support efficient querying mechanisms. ProWS supports composite queries on three types of search criteria: keywords of workflow tasks, patterns of workflow structure, and metadata about workflows (e.g., how often a workflow was used). 
The third application is access control policy reproducibility. Towards this, we propose a novel framework, Polisma, which generates attribute-based access control policies from data, namely from logs of historical access requests and their corresponding decisions. Polisma combines data mining, statistical, and machine learning techniques, and capitalizes on potential context information obtained from external sources (e.g., LDAP directories) to enhance the learning process. The fourth application is policy reproducibility through knowledge and experience transfer. Towards this, we propose a novel framework, FLAP, which transfers attribute-based access control policies between different parties in a collaborative environment, while minimizing data sharing and supporting policy adaptation to resolve conflicts. All frameworks are evaluated with respect to performance and accuracy.
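The ProWS idea of organizing workflow data into indexes for composite keyword queries can be illustrated with a minimal inverted index over workflow tasks. This sketch is an assumption about the general technique, not ProWS's actual data structures:

```python
from collections import defaultdict

def build_task_index(workflows: dict) -> dict:
    """Map each task keyword to the set of workflows containing it."""
    index = defaultdict(set)
    for wf_name, tasks in workflows.items():
        for task in tasks:
            index[task].add(wf_name)
    return index

def query(index: dict, *keywords: str) -> set:
    """Composite query: workflows whose tasks match ALL given keywords."""
    sets = [index.get(keyword, set()) for keyword in keywords]
    return set.intersection(*sets) if sets else set()

# Three hypothetical workflows and their task keywords:
idx = build_task_index({
    "wf1": ["align", "filter", "plot"],
    "wf2": ["align", "plot"],
    "wf3": ["filter"],
})
```

Structural patterns and usage metadata, the other two search criteria mentioned, would each get their own index queried the same way and intersected with this one.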
15

Desenvolvimento de técnica para recomendar atividades em workflows científicos: uma abordagem baseada em ontologias / Development of a strategy to scientific workflow activities recommendation: An ontology-based approach

Khouri, Adilson Lopes 16 March 2016 (has links)
The number of activities provided by scientific workflow management systems is large, which requires scientists to know many of them in order to take advantage of the reusability of these systems. To minimize this problem, the literature presents techniques to recommend activities during scientific workflow construction. This project specified and developed a hybrid activity recommendation system that considers activity frequency, inputs and outputs, and ontological annotations. In addition, it models activity recommendation as a classification and regression problem, tested using five classifiers; five regressors; a composite SVM classifier, which uses the results of the other classifiers and regressors to recommend; and Rotation Forest, an ensemble of classifiers. The proposed technique was compared with related techniques from the literature and with the individual classifiers and regressors using 10-fold cross-validation, achieving an MRR at least 70% greater than those obtained by the other techniques.
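The evaluation metric used here, Mean Reciprocal Rank (MRR), has a standard definition that is easy to state in code: for each query, take the reciprocal of the rank at which the first relevant item appears in the recommendation list, then average over queries. A minimal sketch (the example rankings are invented, not the thesis's data):

```python
def mean_reciprocal_rank(ranked_lists, relevant_items):
    """MRR: average of 1/rank of the first relevant item in each ranking."""
    total = 0.0
    for ranking, relevant in zip(ranked_lists, relevant_items):
        reciprocal = 0.0  # stays 0 if the relevant item never appears
        for position, item in enumerate(ranking, start=1):
            if item == relevant:
                reciprocal = 1.0 / position
                break
        total += reciprocal
    return total / len(ranked_lists)

# Three queries whose relevant activity "a" appears at ranks 1, 2, and 3:
mrr = mean_reciprocal_rank(
    [["a", "b", "c"], ["b", "a", "c"], ["c", "b", "a"]],
    ["a", "a", "a"],
)
```

An MRR close to 1.0 means the desired activity is usually recommended first, which is why a 70% improvement in MRR is a meaningful gain for this task.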
16

Gestion multisite de workflows scientifiques dans le cloud / Multisite management of scientific workflows in the cloud

Liu, Ji 03 November 2016 (has links)
Large-scale in silico scientific experiments generally contain multiple computational activities to process big data. Scientific workflows (SWfs) enable scientists to model these data processing activities. Since SWfs deal with large amounts of data, data-intensive SWfs are an important issue. In a data-intensive SWf, the activities are related by data or control dependencies, and one activity may consist of multiple tasks that process different parts of the experimental data. In order to execute data-intensive SWfs automatically, Scientific Workflow Management Systems (SWfMSs) can be used to exploit High Performance Computing (HPC) environments provided by a cluster, grid, or cloud. In addition, SWfMSs generate provenance data for tracing the execution of SWfs. Since a cloud offers stable services, diverse resources, and virtually infinite computing and storage capacity, it has become an interesting infrastructure for SWf execution. Clouds basically provide three types of services: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). SWfMSs can be deployed in the cloud using Virtual Machines (VMs) to execute data-intensive SWfs. With the pay-as-you-go model, cloud users do not need to buy physical machines, and machine maintenance is ensured by the cloud providers. Nowadays, a cloud is typically made of several sites (or data centers), each with its own resources and data. Since a data-intensive SWf may process data distributed across different sites, SWf execution should be adapted to multisite clouds using distributed computing and storage resources.
In this thesis, we study methods to execute data-intensive SWfs in a multisite cloud environment. Some SWfMSs already exist, but most of them are designed for computer clusters, grids, or a single cloud site. In addition, the existing approaches are limited to static computing resources or single-site execution. We propose SWf partitioning algorithms and a task scheduling algorithm for SWf execution in a multisite cloud. Our proposed algorithms can significantly reduce the overall SWf execution time in a multisite cloud. In particular, we propose a general solution based on multi-objective scheduling to execute SWfs in a multisite cloud. The solution is composed of a cost model, a VM provisioning algorithm, and an activity scheduling algorithm. The VM provisioning algorithm uses our cost model to generate VM provisioning plans for executing SWfs at a single cloud site. The activity scheduling algorithm enables SWf execution with the minimum cost, composed of execution time and monetary cost, in a multisite cloud. We carried out extensive experiments, and the results show that our algorithms can considerably reduce the overall cost of SWf execution in a multisite cloud.
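A multi-objective cost model of the kind described, combining execution time and monetary cost into a single score, can be sketched as a weighted sum of normalized objectives. The weights, normalization bounds, and plan fields below are illustrative assumptions, not the thesis's actual cost model:

```python
def weighted_cost(exec_time_s, monetary_cost, w_time=0.5, w_money=0.5,
                  max_time_s=3600.0, max_cost=10.0):
    """Normalize each objective to [0, 1], then combine with user-chosen weights."""
    return w_time * (exec_time_s / max_time_s) + w_money * (monetary_cost / max_cost)

def best_plan(plans):
    """Pick the VM provisioning plan with the lowest combined cost."""
    return min(plans, key=lambda p: weighted_cost(p["time_s"], p["money"]))

# Two hypothetical provisioning plans: cheap-but-slow vs. fast-but-expensive.
plan = best_plan([
    {"name": "small-vms", "time_s": 3000, "money": 2.0},
    {"name": "large-vms", "time_s": 900, "money": 6.0},
])
```

With equal weights the faster plan wins here; a user who raises `w_money` expresses a preference for cheaper execution, which is the trade-off a multi-objective scheduler exposes.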
18

Distributed knowledge sharing and production through collaborative e-Science platforms

Gaignard, Alban 15 March 2013 (has links) (PDF)
This thesis addresses the issues of coherent distributed knowledge production and sharing in the Life-science area. In spite of the continuously increasing computing and storage capabilities of computing infrastructures, the management of massive scientific data through centralized approaches has become inappropriate for several reasons: (i) they do not guarantee the autonomy of data providers, which are constrained, for either ethical or legal reasons, to keep control over the data they host; and (ii) they do not scale and adapt to the massive scientific data produced through e-Science platforms. In the context of the NeuroLOG and VIP Life-science collaborative platforms, we address, on the one hand, the distribution and heterogeneity issues underlying the sharing of possibly sensitive resources and, on the other hand, automated knowledge production through the usage of these e-Science platforms, to ease the exploitation of the massively produced scientific data. We rely on an ontological approach for knowledge modeling and propose, based on Semantic Web technologies, to (i) extend these platforms with efficient, static and dynamic, transparent federated semantic querying strategies, and (ii) extend their data processing environments, using both provenance information captured at run time and domain-specific inference rules, to automate the semantic annotation of in silico experiment results. The results of this thesis have been evaluated on the Grid'5000 distributed and controlled infrastructure.
They contribute to addressing three of the main challenging issues faced in the area of computational science platforms through (i) a model for secured collaborations and a distributed access control strategy allowing for the setup of multi-centric studies while still considering competitive activities, (ii) semantic experiment summaries, meaningful from the end-user perspective, aimed at easing the navigation into massive scientific data resulting from large-scale experimental campaigns, and (iii) efficient distributed querying and reasoning strategies, relying on Semantic Web standards, aimed at sharing capitalized knowledge and providing connectivity towards the Web of Linked Data.
19

Composer-science: um framework para a composição de workflows científicos

Silva, Laryssa Aparecida Machado da 05 July 2010 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / An important concept in e-Science research is that of scientific workflows, which are usually long and consist of several applications that, together, represent a scientific experiment. One possibility for assisting in the definition of these scientific workflows is to use tools that add semantics to the composition process. Semantic Web services offer technologies that are highly favorable to their composition into more complex processes, such as the use of Web standards, platform independence, programming language independence, the possibility of distributed processing, and, above all, semantic resources that enable automatic discovery, composition, and invocation. With the aim of assisting in the discovery of Web services for scientific workflow composition, we propose the development of a framework, named Composer-Science, that searches for semantic Web services and composes them, thus defining a scientific workflow. The overall objective of Composer-Science is to allow the researcher to semantically describe a scientific workflow and, based on this description, to automate, through semantic Web services and ontologies, the semantic search for services in repositories and the generation of scientific workflows from their composition. This overall objective can be broken down into specific ones: the registration and storage of domain ontologies (OWL) and semantic annotations of Web services (OWL-S) in the framework's distributed repositories (databases); semantic search over the distributed repositories, based on requirements provided by the researcher, in order to discover semantic Web services that match them; syntactic analysis, based on structural requirements (input and output data), together with semantic analysis of the discovered services, in order to obtain their possible compositions; and the generation of WS-BPEL workflow models from these compositions. The models generated by the framework can thus be used in Scientific Workflow Management Systems (SWfMSs) and composed with other workflow models.
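The core composition step, chaining services whose outputs match the inputs of the next service, can be sketched with a greedy type-matching loop. This ignores the ontological matching that Composer-Science performs over OWL-S annotations and uses plain string types as a stand-in; all service names are invented:

```python
def compose(services, source_type, target_type):
    """Greedy chain: repeatedly pick a service whose input matches the current type.

    Returns the ordered list of service names forming the composition,
    or None if no chain from source_type to target_type exists.
    """
    chain, current = [], source_type
    remaining = list(services)
    while current != target_type:
        match = next((s for s in remaining if s["input"] == current), None)
        if match is None:
            return None  # dead end: no service consumes the current type
        chain.append(match["name"])
        remaining.remove(match)
        current = match["output"]
    return chain

# Hypothetical services described only by their input/output types:
workflow = compose(
    [
        {"name": "fetch", "input": "id", "output": "sequence"},
        {"name": "align", "input": "sequence", "output": "alignment"},
        {"name": "render", "input": "alignment", "output": "figure"},
    ],
    source_type="id",
    target_type="figure",
)
```

Replacing the string equality test with an ontology-aware subsumption check is what turns this syntactic matcher into the semantic discovery the abstract describes.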
20

Uma abordagem baseada em padrões para o intercâmbio entre especificações de workflows científicos

Bastos, Bruno Fernandes 20 August 2015 (has links)
Scientific workflows have been used to solve complex problems in different areas. Scientific Workflow Management Systems (SWfMSs) are used to specify and manage these workflows. However, each SWfMS may have different characteristics and its own workflow specification language, making the reuse of workflows across different SWfMSs difficult. The lack of semantic standardization makes this reuse even harder, since workflow modeling elements available in some SWfMSs may not be mappable onto others. The use of an intermediate language for the interchange of scientific workflows eases the reuse of workflows developed in different SWfMSs, as it allows the definition of a common framework for them. Nonetheless, such a language does not by itself prevent the loss of semantic information during a specification transformation between SWfMSs, since it must be robust enough to represent the semantics of diverse workflows developed in different systems. The identification of patterns in scientific workflows can help make explicit the semantic information that is most important for their construction. Thus, the purpose of this work is to offer a pattern-based approach for the interchange of scientific workflow specifications, employing an intermediate language that supports semantic information obtained through the description of workflow patterns. This thesis analyses the results obtained by applying the proposed approach to workflow specifications stored in the myExperiment repository.
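The round trip through an intermediate representation that this approach relies on can be sketched with plain dictionaries: lift a system-specific specification into a neutral, pattern-annotated form, then re-materialize it for another system. The field names and the pattern classification below are hypothetical, chosen only to show the shape of the transformation:

```python
def to_intermediate(native_workflow: dict) -> dict:
    """Lift a (hypothetical) native spec into a neutral, pattern-annotated form."""
    steps = native_workflow["steps"]
    return {
        "name": native_workflow["title"],
        # A pattern annotation preserves semantics the target language may lack:
        "pattern": "sequence" if len(steps) > 1 else "single-task",
        "tasks": [{"id": i, "op": step} for i, step in enumerate(steps)],
    }

def from_intermediate(intermediate: dict) -> dict:
    """Re-materialize the neutral form for another (hypothetical) target system."""
    return {
        "title": intermediate["name"],
        "steps": [task["op"] for task in intermediate["tasks"]],
    }

original = {"title": "qc-pipeline", "steps": ["trim", "align", "count"]}
round_tripped = from_intermediate(to_intermediate(original))
```

A lossless round trip like this one is the easy case; the thesis's contribution is keeping the `pattern` annotations expressive enough that semantics survive even when the target SWfMS models control flow differently.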
