31

Scientific Application: Reengineering to Add Workflow Concepts

Marques, Thiago Manhente de Carvalho, 17 January 2017
The use of workflow techniques in scientific computing is widely adopted for running experiments and building in silico models. By analysing challenges faced by a scientific application in the geosciences domain, we noticed that workflows could be used to represent the geological models created with the application and to ease the development of features that meet those challenges. Most work and tools in the scientific-workflow field, however, are designed for distributed computing contexts such as web services and grid computing, which makes them difficult to use or integrate within simpler scientific applications. In this dissertation, we discuss how to make the composition and representation of workflows viable within an existing scientific application. We describe a conceptual architecture for a workflow engine designed for use inside a stand-alone application, together with an implementation model for a C++ application that uses Petri nets to model a workflow and C++ functions to represent tasks. As a proof of concept, we implemented this workflow model in an existing application and analysed its impact on the application.
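The dissertation's engine itself is not shown here, but the idea it describes (a Petri net whose transitions are bound to C++ task functions, embedded in a stand-alone application) can be illustrated with a minimal sketch. Everything below (type names, the firing loop, the example tasks) is invented for illustration, not taken from the dissertation:

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// A minimal place/transition Petri net driving C++ task functions.
struct Transition {
    std::vector<std::string> inputs;   // places consumed on firing
    std::vector<std::string> outputs;  // places produced on firing
    std::function<void()> task;        // the C++ function bound to this step
};

class WorkflowEngine {
    std::map<std::string, int> marking_;   // tokens per place
    std::vector<Transition> transitions_;
public:
    void addTokens(const std::string& place, int n) { marking_[place] += n; }
    void addTransition(Transition t) { transitions_.push_back(std::move(t)); }

    bool enabled(const Transition& t) const {
        for (const auto& p : t.inputs)
            if (marking_.count(p) == 0 || marking_.at(p) < 1) return false;
        return true;
    }

    // Fire enabled transitions until the net quiesces.
    void run() {
        bool fired = true;
        while (fired) {
            fired = false;
            for (const auto& t : transitions_) {
                if (!enabled(t)) continue;
                for (const auto& p : t.inputs)  --marking_[p];
                t.task();                       // execute the bound task
                for (const auto& p : t.outputs) ++marking_[p];
                fired = true;
            }
        }
    }
};

int main() {
    WorkflowEngine wf;
    wf.addTransition({{"raw"}, {"filtered"}, []{ std::cout << "filter data\n"; }});
    wf.addTransition({{"filtered"}, {"model"}, []{ std::cout << "build model\n"; }});
    wf.addTokens("raw", 1);
    wf.run();  // prints: filter data, then build model
}
```

A real embedded engine would also need error handling and persistence, but the token-driven firing loop above is the essence of using a Petri net as the workflow model.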
32

Visual Workflows for Oil and Gas Exploration

Hollt, Thomas, 14 April 2013
The most important resources for meeting today's energy demands are fossil fuels such as oil and natural gas. When exploiting hydrocarbon reservoirs, a detailed and credible model of the subsurface structures is crucial for planning the path of the borehole and minimizing economic and ecological risks. Before that, the placement and the operations of oil rigs must be planned carefully, as off-shore oil exploration is vulnerable to hazards caused by strong currents; the oil and gas industry therefore relies on accurate ocean forecasting systems when planning its operations. This thesis presents visual workflows for creating subsurface models and for planning the placement and operations of off-shore structures. Creating a credible subsurface model poses two major challenges: first, the structures in highly ambiguous seismic data are interpreted in the time domain; second, a velocity model has to be built from this interpretation to match the model to depth measurements from wells. If a match cannot be obtained at all positions, the interpretation has to be updated, going back to the first step. This results in a lengthy back and forth between the steps or, in many cases, an unphysical velocity model. We present a novel, integrated approach to interactively creating subsurface models from reflection seismics that couples interpretation of the seismic data, using an interactive horizon-extraction technique based on piecewise global optimization, with velocity modeling. Computing and visualizing, on the fly, the effects of changes to the interpretation and velocity model on the depth-converted model yields an integrated feedback loop that enables a completely new connection between seismic data in the time domain and well data in the depth domain. For planning the operations of off-shore structures, we present a novel integrated visualization system for interactive visual analysis of the ensemble simulations used in ocean forecasting, i.e., simulations of sea surface elevation. Changes in sea surface elevation are a good indicator of the movement of loop-current eddies, and our approach enables their interactive exploration and analysis. We support analysis of the spatial domain, for planning the placement of structures, as well as detailed exploration of the temporal evolution at any chosen position, for predicting critical ocean states that require the shutdown of rig operations. We illustrate this using a real-world simulation of the Gulf of Mexico.
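As a toy illustration of the depth-conversion step at the heart of this feedback loop, the following sketch converts a horizon picked in two-way travel time to depth using a layered interval-velocity model. The layering and velocities are made up, and the thesis's actual technique is far richer:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Toy time-to-depth conversion: a horizon picked in two-way travel time (s)
// is converted to depth (m) by accumulating interval velocities (m/s);
// two-way time is halved because the wave travels down and back up.
double timeToDepth(double twt,
                   const std::vector<double>& layerTwt,   // layer base, two-way time
                   const std::vector<double>& layerVel) { // interval velocity per layer
    double depth = 0.0, tPrev = 0.0;
    for (size_t i = 0; i < layerTwt.size() && tPrev < twt; ++i) {
        double tBase = std::min(layerTwt[i], twt);
        depth += layerVel[i] * (tBase - tPrev) / 2.0;
        tPrev = tBase;
    }
    return depth;
}

int main() {
    // Two layers: 0-1.0 s at 2000 m/s, 1.0-2.0 s at 3000 m/s.
    std::vector<double> twt = {1.0, 2.0}, vel = {2000.0, 3000.0};
    std::printf("horizon at 1.5 s -> %.0f m\n", timeToDepth(1.5, twt, vel)); // 1750 m
}
```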
33

Pingo: A Framework for the Management of Storage of Intermediate Outputs of Computational Workflows

January 2017
Scientific workflows allow scientists to easily model and express all the steps of a data-processing task, typically as a directed acyclic graph (DAG). These workflows are made of a collection of tasks that usually take a long time to compute and that produce a considerable amount of intermediate datasets. Because of the nature of scientific exploration, a scientific workflow can be modified and re-run multiple times, and new workflows may be created that make use of past intermediate datasets. Storing intermediate datasets therefore has the potential to save computation time. Since storage is limited, one main problem needing a solution is determining which intermediate datasets to save at creation time so as to minimize the computational time of the workflows to be run in the future. This thesis proposes the design and implementation of Pingo, a system capable of managing the computation of scientific workflows as well as the storage, provenance, and deletion of intermediate datasets. Pingo uses the history of workflows submitted to the system to predict the datasets most likely to be needed in the future, and it subjects dataset-deletion decisions to the optimization of the computational time of future workflows. / Master's Thesis, Computer Science, 2017
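The abstract does not spell out Pingo's selection algorithm, but the stated objective (keep the intermediate datasets most likely to save future computation, under a storage budget) suggests a benefit-per-byte heuristic. The sketch below is purely illustrative and is not Pingo's actual method:

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// Illustrative retention heuristic (not Pingo's actual algorithm):
// keep the datasets with the highest expected recomputation time saved
// per byte of storage, until the storage budget is exhausted.
struct Dataset {
    std::string name;
    double reuseProbability;   // estimated from workflow submission history
    double recomputeSeconds;   // time to regenerate from upstream tasks
    std::uint64_t sizeBytes;
};

std::vector<Dataset> selectToKeep(std::vector<Dataset> ds, std::uint64_t budgetBytes) {
    auto benefitPerByte = [](const Dataset& d) {
        return d.reuseProbability * d.recomputeSeconds / double(d.sizeBytes);
    };
    std::sort(ds.begin(), ds.end(), [&](const Dataset& a, const Dataset& b) {
        return benefitPerByte(a) > benefitPerByte(b);
    });
    std::vector<Dataset> kept;
    std::uint64_t used = 0;
    for (const auto& d : ds) {
        if (used + d.sizeBytes > budgetBytes) continue;  // skip what doesn't fit
        used += d.sizeBytes;
        kept.push_back(d);
    }
    return kept;
}

int main() {
    std::vector<Dataset> ds = {
        {"aligned_reads", 0.8, 3600.0, 5ull << 30},   // 5 GiB, expensive to redo
        {"raw_filtered",  0.2,  600.0, 20ull << 30},  // 20 GiB, cheap to redo
    };
    for (const auto& d : selectToKeep(ds, 8ull << 30))
        std::cout << "keep " << d.name << "\n";  // keeps aligned_reads
}
```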
34

Publication and integration of scientific workflows on the Web

Pastorello Júnior, Gilberto Zonta, 04 June 2005
Advisor: Claudia Maria Bauzer Medeiros / Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação / Scientific activities involve complex multidisciplinary processes and demand cooperative work. This entails a series of open problems in supporting such work, ranging from data and process management to appropriate user interfaces for software. This work contributes solutions to some of these problems. It focuses on improving the mechanisms for documenting processes and on making it possible to publish and integrate them on the Web, which eases the specification and execution of distributed processes on the Web as well as the reuse of those specifications. The work is based on Semantic Web standards, aiming at interoperability, and on the use of scientific workflows for modeling processes and using them on the Web. The main contributions are: (i) a data model, built on Semantic Web standards, for representing scientific workflows and storing them in a database; the model induces a workflow specification method that favors reuse and integration of specifications; (ii) a comparative analysis of proposed standards for representing workflows in XML; (iii) a Web-centered architecture for the management of documents (mainly workflows); and (iv) a partial implementation of this architecture. The work uses environmental planning as its target domain to elicit requirements and validate the proposal. / Master's in Computer Science (Information Systems)
35

Soft proofing using liquid crystal displays

Leckner, Sara, January 2004
Development of colour management systems, the level of standardisation, and the embedding of colour-management facilities into computer operating systems and software enable successful future interoperability of colour reproduction in the graphic arts industry. Yet colour reproduction from one medium to another still gives rise to inconsistencies. This thesis investigates colour management and control processes in premedia and press workflows in graphic arts production, including standards, instruments, and procedures. The goal is to find methods for higher efficiency and control of colour print media production processes, aiming to increase colour consistency and process automation and to reduce overheads. The focus is on the control of colour data by displays in prepress processes producing low-quality paper products. In this respect, the greatest interest of this thesis is in the technical and visual characteristics of displays with respect to the reproduction of colour, especially desktop Thin Film Transistor Liquid Crystal Displays (TFT-LCDs) compared with portable TFT-LCDs and Cathode Ray Tube (CRT) monitors. To reach this goal, the thesis builds on a literature survey and on empirical studies. The empirical studies include both qualitative and quantitative methods, organised into three parts:
- Colour process management: case studies analysing the implementation of colour management in entire graphic arts production workflows.
- Display technology: LCD and CRT displays examined through measurements to establish their fundamental strengths and weaknesses in reproducing colours.
- Comparison of reproduction: a perceptual experiment to determine the ability of the disparate components of a colour management system to co-operate and match reproduced colour, according to the perceived preference of observers.
It was found that in most cases consistent colour fidelity depends on the knowledge and experience of the actors involved in the production process, including the use of routines and equipment. Lack of these factors is not necessarily fatal for the final low-quality paper colour product, but it obstructs automation. In addition, increased digitalisation will increase the importance of displays in such processes. The results show that CRTs and desktop LCDs meet most of the demands of colour reproduction in various areas of low-quality paper production, e.g. newspaper production. However, some fundamental aspects, such as low digital input values, viewing angles, and colour temperature (matters of characterisation and calibration), still need development. Concerning soft proofing, the match between hard and soft copies is similar for CRTs and LCDs for high-quality paper originals if the luminance of the LCD is decreased to the luminance levels of CRTs. Soft proofing of low-quality papers gives an equally lower matching agreement for both CRT and LCD, in this case with the luminance of the LCD set higher (e.g. about twice the luminance levels of CRTs). Keywords: displays, LCD, CRT, premedia, prepress, soft proof, workflows, newspaper, colour management systems, colour control, colour reproduction
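As a rough illustration of the characterisation step discussed above, a common display-characterisation model linearises each channel with a gamma curve and then maps linear RGB to CIE XYZ with a 3x3 matrix built from the measured primaries. The gamma and matrix below are sRGB-like placeholders, not measurements from the thesis:

```cpp
#include <cmath>
#include <cstdio>

// Simple gamma-plus-matrix display characterisation: linearise each RGB
// channel, then map to CIE XYZ via a matrix of the display's primaries.
// Values here are placeholder sRGB/D65 numbers, not measured ones.
void rgbToXyz(double r, double g, double b, double xyz[3]) {
    const double gamma = 2.2;
    const double M[3][3] = {
        {0.4124, 0.3576, 0.1805},
        {0.2126, 0.7152, 0.0722},
        {0.0193, 0.1192, 0.9505},
    };
    double lin[3] = {std::pow(r, gamma), std::pow(g, gamma), std::pow(b, gamma)};
    for (int i = 0; i < 3; ++i)
        xyz[i] = M[i][0] * lin[0] + M[i][1] * lin[1] + M[i][2] * lin[2];
}

int main() {
    double xyz[3];
    rgbToXyz(1.0, 1.0, 1.0, xyz);  // white point: roughly D65 (0.95, 1.00, 1.09)
    std::printf("X=%.3f Y=%.3f Z=%.3f\n", xyz[0], xyz[1], xyz[2]);
}
```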
36

Dynamic Workflows and Advanced Data Management for Problem Solving Environments

Moisa, Dan, 13 May 2004
Workflow management in problem solving environments (PSEs) is an emerging topic that aims to combine the data-oriented and execution-oriented views of scientific experiments, and to closely integrate the processes underlying the practice of computational science with the software artifacts that constitute the PSE. This thesis presents a workflow management solution called BREW (BetteR Experiments through Workflow management) that provides functionality along four dimensions: component and installation management, experiment execution management, data management, and full-fledged workflow management. BREW builds upon EMDAG, a first-generation experiment management system designed at Virginia Tech that provided rudimentary facilities supporting only the first two. BREW provides a complete dynamic workflow management solution wherein the PSE user can compose arbitrary scientific experiments and specify their intended dynamic behavior to an extent not previously possible. Along with the design details of the BREW system, the thesis identifies important tradeoffs underlying workflow management for PSEs and presents two case studies involving large-scale data assimilation in bioinformatics experiments. / Master of Science
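One concrete reading of "dynamic behavior" is a workflow whose next step is chosen at run time from a step's own output, for example iterating data assimilation until a quality threshold is reached. The sketch below invents all names and logic for illustration; it is not BREW's actual interface:

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <string>

// Illustrative dynamic workflow: each step returns the name of the next
// step, so the experiment's path is decided at run time by the data.
// Step names, the quality measure, and the threshold are all invented.
int main() {
    double quality = 0.0;
    std::map<std::string, std::function<std::string()>> steps = {
        {"assimilate", [&] { quality += 0.4;  return std::string("check"); }},
        {"check",      [&] { return quality < 0.9 ? std::string("assimilate")
                                                  : std::string("done"); }},
    };
    for (std::string s = "assimilate"; s != "done"; s = steps[s]())
        std::cout << "running " << s << " (quality " << quality << ")\n";
}
```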
37

Workflow technology for complex socio-technical systems

Bassil, Sarita, January 2004
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
38

Predictive performance analysis of workflows using mean field theory

Caro, Waldir Edison Farfán, 17 April 2017
Business processes play a very important role in industry, especially given the evolution of information technology. Cloud computing platforms, for example, with on-demand allocation of computing resources, enable the execution of highly requested processes. It is therefore necessary to define the execution environment of a process so that resources are used optimally and the correct functionality of the process is guaranteed. In this context, various methods have been proposed to model business processes and analyze their quantitative and qualitative properties. There are, however, challenges that may restrict the application of these methods, especially for high-demand processes (such as workflows with numerous instances) that rely on limited resources. The performance analysis of workflows with numerous instances through analytical modeling is the object of study of this work. Such analysis typically uses mathematical models based on Markovian techniques (stochastic systems), which suffer from the state-space explosion problem. Mean field theory, however, indicates that the behavior of a stochastic system can, under certain conditions, be approximated by that of a deterministic system, avoiding the state-space explosion. In this work we use that strategy: based on the formal definition of the deterministic approximation and its conditions of existence, we develop a method to represent workflows, and their resources, as ordinary differential equations describing a deterministic system. Once the deterministic approximation is defined, we perform the performance analysis on the deterministic model, verifying that the results obtained are a good approximation of the stochastic solution.
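A minimal sketch of such a deterministic (fluid, mean-field) approximation: two workflow stages with limited server pools, modeled as ODEs for the expected queue lengths and integrated with explicit Euler. The rates and capacities are invented, not taken from the dissertation:

```cpp
#include <algorithm>
#include <cstdio>

// Illustrative mean-field model of a two-stage workflow:
// x1, x2 = expected number of instances queued at stage 1 and 2;
// each stage has a limited pool of servers (the shared resource),
// so its completion rate saturates at mu_i * c_i. Rates are made up.
int main() {
    double x1 = 0.0, x2 = 0.0;
    const double lambda = 8.0;            // instance arrival rate (1/s)
    const double mu1 = 1.0, mu2 = 2.0;    // per-server service rates
    const double c1 = 10.0, c2 = 6.0;     // servers at each stage
    const double dt = 1e-3;               // Euler step

    for (int step = 0; step < 100000; ++step) {  // integrate to t = 100 s
        double done1 = mu1 * std::min(x1, c1);   // stage-1 throughput
        double done2 = mu2 * std::min(x2, c2);   // stage-2 throughput
        x1 += dt * (lambda - done1);             // dx1/dt = lambda - done1
        x2 += dt * (done1 - done2);              // dx2/dt = done1 - done2
    }
    // Fixed point: done1 = lambda gives x1 = 8; done2 = lambda gives x2 = 4.
    std::printf("steady state: x1 = %.2f, x2 = %.2f\n", x1, x2);
}
```

Solving these two ODEs replaces enumerating the Markov chain over every possible (x1, x2) configuration, which is exactly how the mean-field approach avoids the state-space explosion.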
39

Development of a technique for recommending activities in scientific workflows: an ontology-based approach

Khouri, Adilson Lopes, 16 March 2016
The number of activities provided by scientific workflow management systems is large, requiring scientists to know many of them to take advantage of the reusability of these systems. To minimize this problem, the literature presents techniques for recommending activities during scientific workflow construction. This project specified and developed a hybrid activity recommendation system that considers information on activity frequency, inputs, and outputs, as well as ontological annotations. It also models activity recommendation as a classification and regression problem, tested using five classifiers; five regressors; a composite SVM classifier, which uses the results of the other classifiers and regressors to recommend; and a Rotation Forest ensemble of classifiers. The proposed technique was compared with other techniques from the literature and with the individual classifiers and regressors using 10-fold cross-validation, yielding more precise recommendations, with an MRR at least 70% higher than those obtained by the other techniques.
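MRR here is the standard metric: the mean, over recommendation queries, of one over the rank of the first correct suggestion. A small self-contained sketch with made-up activity names:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Mean Reciprocal Rank: for each construction step, take 1/rank of the
// first recommended activity matching the one actually used next, then
// average over all steps. The data below is illustrative.
double meanReciprocalRank(const std::vector<std::vector<std::string>>& recs,
                          const std::vector<std::string>& actualNext) {
    double sum = 0.0;
    for (size_t q = 0; q < recs.size(); ++q)
        for (size_t r = 0; r < recs[q].size(); ++r)
            if (recs[q][r] == actualNext[q]) { sum += 1.0 / double(r + 1); break; }
    return sum / double(recs.size());
}

int main() {
    std::vector<std::vector<std::string>> recs = {
        {"align", "filter", "plot"},   // correct answer ranked 2nd -> 1/2
        {"plot", "align", "filter"},   // correct answer ranked 1st -> 1
    };
    std::vector<std::string> actual = {"filter", "plot"};
    std::printf("MRR = %.3f\n", meanReciprocalRank(recs, actual));  // 0.750
}
```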
40

Multisite management of scientific workflows in the cloud

Liu, Ji, 03 November 2016
Large-scale in silico scientific experiments generally contain multiple computational activities that process big data. Scientific workflows (SWfs) enable scientists to model these data-processing activities. Since SWfs deal with large amounts of data, data-intensive SWfs are an important issue. In a data-intensive SWf, activities are related by data or control dependencies, and one activity may consist of multiple tasks that process different parts of the experimental data. In order to execute data-intensive SWfs automatically, Scientific Workflow Management Systems (SWfMSs) can be used to exploit High Performance Computing (HPC) environments provided by a cluster, grid, or cloud. In addition, SWfMSs generate provenance data for tracing the execution of SWfs. Since a cloud offers stable services, diverse resources, and virtually infinite computing and storage capacity, it has become an interesting infrastructure for SWf execution. Clouds basically provide three types of services: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). SWfMSs can be deployed in the cloud using Virtual Machines (VMs) to execute data-intensive SWfs. With the pay-as-you-go model, cloud users do not need to buy physical machines, and machine maintenance is ensured by the cloud providers. Nowadays, a cloud is typically made of several sites (or data centers), each with its own resources and data. Since a data-intensive SWf may process data distributed at different sites, its execution should be adapted to multisite clouds while using distributed computing and storage resources. In this thesis, we study methods for executing data-intensive SWfs in a multisite cloud environment. Some SWfMSs already exist, but most are designed for computer clusters, grids, or a single cloud site, and existing approaches are limited to static computing resources or single-site execution. We propose SWf partitioning algorithms and a task scheduling algorithm for SWf execution in a multisite cloud; the proposed algorithms can significantly reduce the overall SWf execution time. In particular, we propose a general solution based on multi-objective scheduling for executing SWfs in a multisite cloud. The solution is composed of a cost model, a VM provisioning algorithm, and an activity scheduling algorithm. The VM provisioning algorithm uses our cost model to generate VM provisioning plans for executing SWfs at a single cloud site. The activity scheduling algorithm enables SWf execution with minimum cost, composed of execution time and monetary cost, in a multisite cloud. We carried out extensive experiments, and the results show that our algorithms can considerably reduce the overall cost of SWf execution in a multisite cloud.
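The thesis's cost model is not reproduced here, but its stated shape (a cost combining execution time and monetary cost, minimized when scheduling activities across sites) can be sketched as a normalized weighted sum. The weights, estimates, and greedy per-activity choice below are illustrative assumptions, not the thesis's actual algorithms:

```cpp
#include <cstdio>
#include <limits>
#include <string>
#include <vector>

// Illustrative multisite choice: assign an activity to the site that
// minimizes cost = w_t * time/T_max + w_m * money/M_max, i.e. a weighted
// sum of normalized execution time and monetary cost. All numbers made up.
struct Site { std::string name; double secondsPerTask, dollarsPerTask; };

int bestSite(const std::vector<Site>& sites, double wTime, double wMoney,
             double tMax, double mMax) {
    int best = 0;
    double bestCost = std::numeric_limits<double>::max();
    for (size_t i = 0; i < sites.size(); ++i) {
        double cost = wTime * sites[i].secondsPerTask / tMax
                    + wMoney * sites[i].dollarsPerTask / mMax;
        if (cost < bestCost) { bestCost = cost; best = int(i); }
    }
    return best;
}

int main() {
    std::vector<Site> sites = {{"us-east", 120.0, 0.40}, {"eu-west", 90.0, 0.55}};
    // Normalize by the worst case at any site; weight 70% time, 30% money.
    int s = bestSite(sites, 0.7, 0.3, 120.0, 0.55);
    std::printf("schedule activity on %s\n", sites[s].name.c_str());  // eu-west
}
```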
