  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Techniques for Efficient Execution of Large-Scale Scientific Workflows in Distributed Environments

Kalayci, Selim 14 November 2014 (has links)
Scientific exploration demands heavy usage of computational resources for large-scale and deep analysis in many different fields. The complexity or sheer scale of a computational study can often be encapsulated in the form of a workflow made up of numerous dependent components. Due to its decomposable and parallelizable nature, the components of a scientific workflow may be mapped onto a distributed resource infrastructure to reduce time to results. However, the resource infrastructure may be heterogeneous, dynamic, and under diverse administrative control. Workflow management tools help manage the various aspects of the lifecycle of such complex applications. One fundamental aspect that has to be handled as smoothly and efficiently as possible is the run-time coordination of workflow activities (i.e., workflow orchestration). Our efforts in this study focus on improving the workflow orchestration process in such dynamic and distributed resource environments. We tackle three main aspects of this process and provide contributions in each. Our first contribution increases scalability and site autonomy in situations where the mapped components of a workflow span several heterogeneous administrative domains; we devise and implement a generic decentralization framework for the orchestration of workflows under such conditions. Our second contribution addresses the issues that arise from the dynamic nature of such environments; we provide generic adaptation mechanisms that are highly transparent and substantially less intrusive with respect to the rest of the executing workflow. Our third contribution improves the efficiency of orchestrating large-scale parameter-sweep workflows; by exploiting their specific characteristics, we provide generic optimization patterns that are applicable to most instances of such workflows.
We also discuss implementation issues and details that arise as we provide our contributions in each situation.
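The abstract describes parameter-sweep workflows only at a high level; as a minimal, hypothetical illustration of the fan-out pattern such workflows exhibit (the task name and parameters below are illustrative, not taken from the thesis), a sweep node can be expanded into independent task invocations:

```python
from itertools import product

def expand_sweep(task, params):
    """Expand a parameter-sweep node into one task invocation per
    combination of parameter values (Cartesian product)."""
    names = sorted(params)
    return [
        {"task": task, "args": dict(zip(names, values))}
        for values in product(*(params[n] for n in names))
    ]

# A sweep over 2 x 3 parameter values yields 6 independent invocations,
# which a workflow engine can dispatch to distributed resources.
tasks = expand_sweep("simulate", {"temp": [300, 310], "seed": [1, 2, 3]})
```

Because the resulting invocations are mutually independent, an orchestrator is free to batch, reorder, or co-locate them, which is what makes sweep-specific optimization patterns possible.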
22

Méthodologie et composants pour la mise en oeuvre de workflows scientifiques / Methodology and components for scientific workflow building

Lin, Yuan 07 December 2011 (has links)
For many years, the life and environmental sciences (biology, natural risks, remote sensing, etc.) have accumulated observational data and implemented a great variety of related processing applications. Scientists working in these domains must ground their analyses in experimental validations, which require more or less complex processing chains (experimental protocols). The concept of "workflow" was introduced as a global, general notion and refined into the "scientific workflow". Current systems, however, remain difficult to grasp for scientists whose expertise is not directly related to computer science engineering. The approach we propose, in terms of methodology and components, offers a solution to this problem. The initial hypothesis rests on the vision of a user who conceives his work in three stages: 1) the conception stage, which consists of defining an abstract business model of a workflow; 2) the intermediate stage, which instantiates the previously defined abstract model by locating the various existing resources within what we call the work context; the definition, verification, and validation of concrete models rely on expert knowledge and on the compatibility of the model's elements; 3) the dynamic stage, which consists of executing the validated concrete model with a workflow execution engine. The thesis focuses mainly on the problems raised in the first two stages (conception and intermediate). Based on an analysis of existing work, we develop the various building blocks: a workflow meta-model and its associated language, the work context, a resource graph, and the handling of incompatible compositions. The approach was validated in several target domains: biology, natural risks, and remote sensing. A prototype has been developed; it provides the following functionalities: design and saving of abstract workflow models, description and location of (data / application) resources, and verification of the validity of concrete workflow models.
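The record does not detail how concrete-model validity is checked; as a rough, hypothetical sketch of the idea (each step's output must be compatible with the next step's input; the resource names and type signatures below are invented for illustration), such a check might look like:

```python
def validate_chain(chain, signatures):
    """Check that each step's output type matches the next step's input
    type -- a toy version of concrete-workflow validity checking."""
    for prev, nxt in zip(chain, chain[1:]):
        if signatures[prev][1] != signatures[nxt][0]:
            return False, (prev, nxt)  # first incompatible pair found
    return True, None

# Hypothetical (input type, output type) signatures for three resources.
sigs = {
    "extract": ("raw", "table"),
    "normalize": ("table", "table"),
    "classify": ("table", "labels"),
}
ok, bad = validate_chain(["extract", "normalize", "classify"], sigs)
```

A real implementation would consult the resource graph and expert-provided compatibility rules rather than bare type strings, but the shape of the check is the same.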
23

Neuro-Integrative Connectivity: A Scientific Workflow-Based Neuroinformatics Platform For Brain Network Connectivity Studies Using EEG Data

Socrates, Vimig 28 August 2019 (has links)
No description available.
24

Design und Management von Experimentier-Workflows / Design and management of experimentation workflows

Kühnlenz, Frank 27 November 2014 (has links)
Experimentation in this work means performing experiments based on computer-based models that abstractly describe the structure, behavior, and environment of a system. For several reasons, a model of the system is studied in place of the system itself. Systematic experimentation with varying model input parameter assignments typically leads to a large number of potentially long-running experiments that must be planned, documented, executed automatically, monitored, and evaluated. A common problem is that experimenters (who are usually not computer scientists) lack adequate means of expression (e.g., to express variations of parameter assignments) to describe their experimentation processes formally so that a computer system can execute them automatically while preserving reproducibility, re-usability, and comprehensibility. The new approach is to identify general experimentation workflow concepts as a specialization of scientific workflows and to formalize them as a meta-model-based domain-specific language (DSL), here called the Experimentation Language (ExpL). ExpL includes general workflow concepts, such as control flow and the composition of activities, as well as some new declarative language elements, and allows experimentation workflows to be modeled on a framework-independent, conceptual level. Hence, re-using, publishing, and sharing an experimentation workflow with other scientists is no longer limited to a particular framework. ExpL is always used within a specific experimentation domain that has its own requirements for configuration and evaluation methods. To address this domain specificity, this work separates these two concerns into two further, dependent domain-specific languages: one for configuration and one for evaluation.
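ExpL itself is a meta-model-based DSL and is not reproduced in this record; as a deliberately simplified, hypothetical sketch of the control-flow concepts it formalizes (composition of activities with dependencies), a toy runner over declaratively specified activities might look like:

```python
def run_workflow(activities, deps):
    """Execute activities in an order that respects declared dependencies
    (a minimal stand-in for a workflow engine's control flow)."""
    done, order = set(), []
    def visit(name):
        if name in done:
            return
        for dep in deps.get(name, []):
            visit(dep)  # run prerequisites first
        done.add(name)
        order.append(name)
        activities[name]()
    for name in activities:
        visit(name)
    return order

log = []
acts = {
    "configure": lambda: log.append("configure"),
    "simulate": lambda: log.append("simulate"),
    "evaluate": lambda: log.append("evaluate"),
}
order = run_workflow(acts, {"simulate": ["configure"], "evaluate": ["simulate"]})
```

In the thesis's terms, the configuration and evaluation activities would themselves be described in the two dependent DSLs rather than as inline functions.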
25

An effective method to optimize docking-based virtual screening in a clustered fully-flexible receptor model deployed on cloud platforms / Um método efetivo para otimizar a triagem virtual baseada em docagem de um modelo de receptor totalmente flexível agrupado utilizando computações em nuvem

De Paris, Renata 28 October 2016 (has links)
Conselho Nacional de Pesquisa e Desenvolvimento Científico e Tecnológico - CNPq / The use of conformations obtained from molecular dynamics trajectories in molecular docking experiments is the most accurate approach to simulating the behavior of receptors and ligands in molecular environments. However, such simulations are computationally expensive, and their complete execution may become infeasible due to the large amount of structural information considered to represent the explicit flexibility of receptors. In addition, the computational demand increases when Fully-Flexible Receptor (FFR) models are routinely applied to the screening of large compound libraries. This study presents a novel method to optimize docking-based virtual screening of FFR models by reducing the size of FFR models at docking runtime and by scaling docking workflow invocations out onto virtual machines on cloud platforms. For this purpose, we developed e-FReDock, a cloud-based scientific workflow that assists in faster high-throughput docking simulations of flexible receptors and ligands. e-FReDock is based on a parameter-free selective method to perform ensemble docking experiments with multiple ligands from a clustered FFR model. The e-FReDock input data was generated by applying six clustering methods to partition conformations with different structural features in their substrate-binding cavities, aiming to identify groups of snapshots with favorable interactions for specific ligands at docking runtime. Experimental results show the high quality of the Reduced Fully-Flexible Receptor (RFFR) models achieved by e-FReDock in two distinct sets of analyses. The first shows that e-FReDock preserves the quality of the FFR model between 84.00% and 94.00%, while its dimensionality is reduced by 49.68% on average. The second reports that the resulting RFFR models reach better docking results than those obtained from the rigid version of the FFR model for 97.00% of the ligands tested.
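The thesis applies six clustering methods to MD conformations; as a deliberately simplified, hypothetical illustration of the underlying idea (group snapshots by a binding-cavity feature and keep one representative per group to reduce the ensemble), a toy reduction over scalar feature values might look like:

```python
def reduce_ensemble(snapshots, k):
    """Partition sorted snapshot feature values into k contiguous groups
    and keep one representative (the group median) per group -- a toy
    stand-in for the clustering used to build a reduced (RFFR) model."""
    ordered = sorted(snapshots)
    size = len(ordered) // k
    groups = [ordered[i * size:(i + 1) * size] for i in range(k - 1)]
    groups.append(ordered[(k - 1) * size:])  # last group takes the remainder
    return [g[len(g) // 2] for g in groups]

# 12 hypothetical snapshot feature values reduced to 3 representatives.
reps = reduce_ensemble([4.1, 4.0, 4.2, 6.9, 7.1, 7.0, 9.8, 10.1, 10.0,
                        4.3, 7.2, 9.9], k=3)
```

Real conformations are high-dimensional and the thesis compares several proper clustering algorithms; the point here is only that docking against a few representatives per group is far cheaper than docking against every snapshot.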
26

Estratégia computacional para avaliação de propriedades mecânicas de concreto de agregado leve / A computational strategy for evaluating the mechanical properties of lightweight aggregate concrete

Bonifácio, Aldemon Lage 16 March 2017 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Concrete made from lightweight aggregates, or structural lightweight concrete, is considered a versatile construction material, widely used throughout the world in many areas of civil construction, such as prefabricated buildings, offshore platforms, and bridges, among others. However, modeling the mechanical properties of this type of concrete, such as the modulus of elasticity and the compressive strength, is complex, mainly due to the intrinsic heterogeneity of the material's components. A predictive model of the mechanical properties of lightweight aggregate concrete can help reduce project time and cost by providing essential data for structural calculations. To this end, this work develops a computational strategy for evaluating the mechanical properties of lightweight concrete by combining computational modeling of concrete via the Finite Element Method (FEM) with computational intelligence methods via Support Vector Regression (SVR) and Artificial Neural Networks (ANN). In addition, based on the scientific workflow and many-task computing approaches, a computational tool was developed to facilitate and automate the execution of the numerical scientific experiments for predicting the mechanical properties.
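The thesis itself uses SVR and ANN models trained on FEM results; as a much simpler, hypothetical stand-in that illustrates the same property-prediction idea (the density/modulus pairs below are invented, not thesis data), an ordinary least-squares fit can map a mixture feature to a mechanical property:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b -- a deliberately simple
    stand-in for the SVR/ANN predictors the thesis combines with FEM data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx  # slope and intercept

# Hypothetical pairs: aggregate density (kg/m^3) vs. elastic modulus (GPa).
a, b = fit_line([1200, 1400, 1600, 1800], [12.0, 14.0, 16.0, 18.0])
pred = a * 1500 + b  # predicted modulus for an unseen density
```

SVR and neural networks replace this line with nonlinear functions of many mixture features, but the workflow role is identical: train on simulated or measured samples, then predict properties for new mixtures without re-running the expensive model.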
