21

Neuro-Integrative Connectivity: A Scientific Workflow-Based Neuroinformatics Platform For Brain Network Connectivity Studies Using EEG Data

Socrates, Vimig 28 August 2019
No description available.
22

Design und Management von Experimentier-Workflows [Design and Management of Experimentation Workflows]

Kühnlenz, Frank 27 November 2014
Experimentation in this work means performing experiments on computer-based models, where these models abstractly describe the structure, behaviour, and environment of a system. For several reasons, a model of the system is studied in place of the system itself. Systematic experimentation with varying model input parameter assignments typically leads to very many, potentially long-running experiments that must be planned, documented, executed automatically, monitored, and evaluated. A frequent problem is that the experimenter (who is usually not a computer scientist) lacks adequate means of expression (e.g., for variations of parameter assignments) to describe experimentation processes formally so that a computer system can execute them automatically, while preserving comprehensibility, re-usability, and reproducibility. The new approach is to identify general experimentation workflow concepts as a specialization of scientific workflows and to formalize them as a meta-model-based domain-specific language (DSL), called the Experimentation Language (ExpL). ExpL includes general workflow concepts, such as control flow and the composition of activities, as well as some new declarative language elements, and allows experimentation workflows to be modeled on a framework-independent, conceptual level. As a result, re-using and publishing experimentation workflows is no longer hindered by being bound to a particular framework. ExpL is always used within a concrete experimentation domain that places specific requirements on configuration and evaluation methods. To handle this domain specificity, this work proposes separating the two concerns into two additional, dependent domain-specific languages (DSLs): one for configuration and one for evaluation.
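The core idea of systematic parameter variation can be illustrated with a minimal sketch. This is purely illustrative and is not ExpL syntax; the model function, parameter names, and values are hypothetical stand-ins, not taken from the thesis.

```python
# Illustrative only: a generic parameter-variation experiment loop.
# ExpL expresses such workflows declaratively; this sketch just shows
# the underlying concept. Model and parameters are hypothetical.
from itertools import product
import csv

def run_model(arrival_rate, num_servers):
    """Hypothetical stand-in for a computer-based model of a system."""
    # e.g., a toy queueing model: utilization = load / service capacity
    return {"utilization": arrival_rate / (num_servers * 2.0)}

# Plan: a full factorial design over the input parameter assignments.
variations = {
    "arrival_rate": [0.5, 1.0, 1.5],
    "num_servers": [1, 2, 4],
}

# Execute and document every experiment run.
with open("experiments.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["arrival_rate", "num_servers", "utilization"])
    for arrival_rate, num_servers in product(*variations.values()):
        result = run_model(arrival_rate, num_servers)
        writer.writerow([arrival_rate, num_servers, result["utilization"]])
```

Even this tiny design yields nine runs; real studies multiply quickly, which is why the thesis argues for planning, monitoring, and documenting such workflows in a dedicated, framework-independent language rather than ad-hoc scripts.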
23

Estratégia computacional para avaliação de propriedades mecânicas de concreto de agregado leve [Computational Strategy for Evaluating Mechanical Properties of Lightweight Aggregate Concrete]

Bonifácio, Aldemon Lage 16 March 2017
Concrete made from lightweight aggregates, or structural lightweight concrete, is considered a versatile construction material, widely used around the world in many areas of civil construction, such as prefabricated buildings, offshore platforms, and bridges. However, modeling the mechanical properties of this type of concrete, such as the modulus of elasticity and the compressive strength, is complex, mainly because of the intrinsic heterogeneity of the material's components. A predictive model of the mechanical properties of lightweight aggregate concrete can help reduce project time and cost by providing essential data for structural calculations. To this end, this work develops a computational strategy for evaluating the mechanical properties of lightweight concrete by combining computational modeling of the concrete via the Finite Element Method (FEM) with the computational intelligence methods Support Vector Regression (SVR) and Artificial Neural Networks (ANN). In addition, based on the scientific workflow and many-task computing approaches, a computational tool was developed to facilitate and automate the execution of the numerical scientific experiments for predicting the mechanical properties. (Funded by CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior.)
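As an illustration of the SVR component of such a strategy, the following is a minimal scikit-learn sketch. The feature set (cement content, water/cement ratio, aggregate density) and the training data are hypothetical assumptions for demonstration, not the thesis's actual dataset or pipeline.

```python
# Minimal sketch: predicting compressive strength with Support Vector
# Regression, in the spirit of the approach described above.
# Feature names and data are hypothetical, not the thesis's dataset.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical mix features: cement (kg/m^3), water/cement ratio,
# lightweight-aggregate density (kg/m^3).
X = np.array([
    [350, 0.45, 1200],
    [400, 0.40, 1100],
    [300, 0.55, 1300],
    [450, 0.35, 1000],
])
y = np.array([28.0, 35.0, 22.0, 41.0])  # compressive strength (MPa)

# Feature scaling matters for SVR's RBF kernel, hence the pipeline.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X, y)

print(model.predict([[380, 0.42, 1150]]))  # predicted strength for a new mix
```

In a many-task setting of the kind the abstract describes, each FEM simulation would contribute one labeled sample, and a workflow tool would fan out the simulations and collect their results into such a training set.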
24

Predictive Resource Management for Scientific Workflows

Witt, Carl Philipp 21 July 2020
Scientific experiments produce data at unprecedented volumes and resolutions. Extracting insights from large sets of raw data requires complex analysis workflows, and scientific workflows enable such data analyses at scale. To achieve scalability, most workflow management systems are designed as an additional layer on top of distributed resource managers, such as batch schedulers or distributed data processing frameworks. However, like those resource managers, they do not automatically determine the amount of resources required to execute the individual tasks in a workflow. The status quo is that workflow management systems delegate the challenge of estimating resource usage to the user. This limits the performance and ease of use of scientific workflow management systems, as users often lack the time, expertise, or incentives to estimate resource usage accurately. This thesis investigates how to learn and predict resource usage during workflow execution. In contrast to prior work, it takes an integrated perspective on prediction and scheduling, which introduces various challenges, such as quantifying the effects of prediction errors on system performance. The main contributions are: 1. A survey of peak memory usage prediction in batch processing environments, providing an overview of prior machine learning approaches, commonly used features, evaluation metrics, and data sets. 2. A static workflow scheduling method that uses statistical methods to predict which scheduling decisions can be improved. 3. A feedback-based approach to scheduling and predictive resource allocation, extensively evaluated using simulation; the results provide insights into the desirable characteristics of scheduling heuristics and prediction models. 4. A prediction model that reduces memory wastage by taking into account the asymmetric costs of overestimation and underestimation, as well as the follow-up costs of prediction errors.
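The asymmetry in contribution 4 can be sketched briefly: underestimating peak memory typically causes the task to fail and be retried, wasting the whole attempt, while overestimating only wastes the unused headroom. The following is a hedged illustration of that principle with a pinball-style loss, not the thesis's actual model; the cost weights and data are assumptions.

```python
# Sketch of an asymmetric loss for memory-allocation prediction:
# underestimation (task fails, attempt is wasted) is penalized more
# heavily than overestimation (only headroom is wasted).
# The weights are illustrative assumptions, not the thesis's values.
import numpy as np

def asymmetric_loss(predicted_mb, actual_mb, under_weight=4.0, over_weight=1.0):
    """Pinball-style loss: errors in each direction carry different costs."""
    error = predicted_mb - actual_mb  # negative means underestimation
    return np.where(error < 0, under_weight * -error, over_weight * error)

predictions = np.array([2000, 4000, 8000])   # allocated memory (MB)
actuals = np.array([2500, 3500, 8000])       # measured peak usage (MB)
print(asymmetric_loss(predictions, actuals)) # -> [2000. 500. 0.]
```

A model trained against such a loss is pushed toward slight overestimation, which matches the intuition that a failed, retried task costs far more than a modest over-allocation.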
