841

ZIH-Info

13 July 2020 (has links)
- State Minister visit on AI research - Virtual summer school - ScaDS.AI - Telephony migration: public exchange lines to VoIP - Shutdown of Unix Mail in September 2020 - New hardware for AI research - Course offerings in the TU continuing education catalog - Accessibility of digital services - Events
842

Dynamische Lastbalancierung und Modellkopplung zur hochskalierbaren Simulation von Wolkenprozessen / Dynamic load balancing and model coupling for highly scalable simulation of cloud processes

Lieber, Matthias 03 September 2012 (has links)
Current forecast models insufficiently represent the complex interactions of aerosols, clouds, and precipitation. Simulations with a spectral description of cloud processes allow more detailed forecasts, but they are far more computationally expensive. Reducing the runtime of such simulations requires a highly parallel execution. This thesis presents a concept for coupling spectral cloud microphysics models with atmospheric models that allows efficient utilization of today's available parallelism on the order of 100,000 processor cores. Due to the strong workload variations, highly scalable dynamic load balancing of the cloud microphysics model is essential to reach this goal. It is achieved through a hierarchical partitioning method based on space-filling curves. Furthermore, a highly scalable combination of dynamic load balancing and model coupling is enabled by an efficient method for regularly determining the intersections between different partitionings. By using high performance computers efficiently, the results of this thesis enable the application of spectral cloud microphysics models to the simulation of realistic scenarios on high-resolution grids.
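To illustrate the space-filling-curve idea mentioned in the abstract, the following Python sketch orders grid cells along a Z-order (Morton) curve and splits the resulting one-dimensional sequence by cumulative workload. This is only a minimal illustration of the general technique, not the hierarchical partitioning scheme developed in the thesis; the function names, the uniform 2D grid, and the per-cell cost values are assumptions made for the example.

```python
# Minimal sketch of load balancing along a space-filling curve (Z-order/Morton).
# Illustrative only -- not the hierarchical scheme of the thesis.

def morton_key(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of (x, y) to get the cell's position on the Z-order curve."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

def partition(cells, workload, num_ranks):
    """Split cells (list of (x, y)) into num_ranks parts of roughly equal total workload."""
    ordered = sorted(cells, key=lambda c: morton_key(*c))
    total = sum(workload[c] for c in ordered)
    target = total / num_ranks
    parts, acc, rank = [[] for _ in range(num_ranks)], 0.0, 0
    for c in ordered:
        # move on to the next rank once its share of the total workload is reached
        if acc >= (rank + 1) * target and rank < num_ranks - 1:
            rank += 1
        parts[rank].append(c)
        acc += workload[c]
    return parts

# Example: an 8x8 grid whose lower-left quadrant is 10x more expensive, split over 4 ranks
cells = [(x, y) for x in range(8) for y in range(8)]
cost = {c: 10.0 if (c[0] < 4 and c[1] < 4) else 1.0 for c in cells}
print([round(sum(cost[c] for c in p), 1) for p in partition(cells, cost, 4)])
```

Because the curve preserves spatial locality, consecutive curve segments map to compact regions of the grid, which is what keeps repartitioning and neighbor communication affordable when the workload shifts between time steps.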
843

ZIH-Info

18 May 2021 (has links)
- New machine learning cluster for ScaDS.AI - Research into the COVID-19/immune system interaction - Signing PDFs on Linux - New features in the Enterprise Cloud - New newsletter tool at TU Dresden - Expansion of video conferencing services - Feedback on IT services - Research data implementation program - Events
844

ZIH-Info

18 May 2021 (has links)
- Service Desk availability - New domain provider (registrar) - Services for Schullogin - New look for FAQs in the ticket system - White paper on sustainable software - Insights into research data practice - Girls'Day 2021 as an online event - Data Science Challenge - Events
845

ZIH-Info

18 May 2021 (has links)
- New IT regulations adopted - OpenVPN celebrates its first anniversary - BigBlueButton: positive image prevails - Completion of the PRESTIGE project - Expansion of the research data contact point - Summer school "Mathematics of Life" - HPC status conference with two TUD talks - 100th anniversary of Prof. Dr. N. J. Lehmann - Events
846

ZIH-Info

18 May 2021 (has links)
- News in the voice service (VoIP) - Digital examinations on the rise - Windows terminal server farm - Jitsi video conferencing system - New storage system for virtualization - New Exchange 2016 mail server cluster - University collections online: robotron*Daphne - Workshop "Transparency of AI Systems" - Events
847

Accelerated In-situ Workflow of Memory-aware Lattice Boltzmann Simulation and Analysis

Yuankun Fu (10223831) 29 April 2021 (has links)
As high performance computing systems advance from petascale to exascale, scientific workflows that integrate simulation with visualization and analysis are a key factor for scientific campaigns. As one approach to studying fluid behavior, computational fluid dynamics (CFD) simulations have progressed rapidly over the past several decades and have transformed many fields. The lattice Boltzmann method (LBM) is an evolving CFD approach that significantly reduces the complexity of conventional CFD methods and can simulate complex fluid flow phenomena at lower computational cost. This research focuses on accelerating the workflow of LBM simulation and data analysis.

I start with how to effectively integrate each component of a workflow at extreme scales. First, we design an in-situ workflow benchmark that integrates seven state-of-the-art in-situ workflow systems with three synthetic applications, two real-world CFD applications, and the corresponding data analysis. Detailed performance analysis using visualized tracing shows that even the fastest existing workflow system still has 42% overhead. I then develop a novel minimized end-to-end workflow system, Zipper, which combines the fine-grained task parallelism of full asynchrony with pipelining. In addition, I design a novel concurrent data transfer optimization that employs a multi-threaded work-stealing algorithm to transfer data over both the network and the parallel file system. It reduces the data transfer time by up to 32%, especially when the simulation application is stalled. An investigation of the speedup using OmniPath network tools shows that network congestion is alleviated by up to 80%. Finally, the scalability of the Zipper system is verified by a performance model and various large-scale workflow experiments on two HPC systems using up to 13,056 cores. Zipper is the fastest workflow system and outperforms the second fastest by up to 2.2 times.

After minimizing the end-to-end time of the LBM workflow, I turn to accelerating the memory-bound LBM algorithms. We first design novel parallel 2D memory-aware LBM algorithms. I then extend them to 3D memory-aware LBM algorithms that combine single-copy distribution, single sweep, the swap algorithm, prism traversal, and the merging of multiple temporal time steps. Strong-scalability experiments on three HPC systems show that the 2D and 3D memory-aware LBM algorithms outperform the fastest existing LBM by up to 4 times and 1.9 times, respectively. The reasons for the speedup are explained by theoretical algorithm analysis. Experimental roofline charts on modern CPU architectures show that the memory-aware LBM algorithms improve the arithmetic intensity (AI) of the fastest existing LBM by up to 4.6 times.
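The two-channel transfer idea can be pictured with a small sketch: sender threads, one per channel, drain a shared queue of data chunks, so whichever channel is idle picks up the next pending chunk. This is a simplified, shared-queue stand-in for the multi-threaded work-stealing transfer described above, not Zipper's implementation; `send_over_network` and `write_to_parallel_fs` are hypothetical placeholders.

```python
# Simplified sketch of draining one queue of data chunks over two channels at once.
# Not Zipper's implementation; both "send" functions are hypothetical placeholders.
import queue
import threading

def send_over_network(chunk):
    """Placeholder for a transfer over the interconnect (e.g. sockets or RDMA)."""
    pass

def write_to_parallel_fs(chunk):
    """Placeholder for staging the chunk through the parallel file system."""
    pass

def transfer_all(chunks):
    work = queue.Queue()
    for c in chunks:
        work.put(c)

    def worker(send):
        while True:
            try:
                chunk = work.get_nowait()  # both workers pull from the same queue
            except queue.Empty:
                return                     # queue drained: this channel is done
            send(chunk)
            work.task_done()

    threads = [threading.Thread(target=worker, args=(send_over_network,)),
               threading.Thread(target=worker, args=(write_to_parallel_fs,))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

transfer_all([f"chunk-{i}" for i in range(16)])
```

If one channel stalls, the other simply keeps consuming chunks, which is the intuition behind transferring over both paths concurrently.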
848

ZIH-Info

23 June 2021 (has links)
- Windows Server licenses - BigBlueButton update - Virtual exhibition booth ZIH@ISC2021 - Planning new HPC training offerings at ZIH - Continuation of the ZIH colloquium - TU Lectures Corona - HPCN Workshop 2021 - Schaufler Lab@TU Dresden events - Events
849

Energy Measurements of High Performance Computing Systems: From Instrumentation to Analysis

Ilsche, Thomas 31 July 2020 (has links)
Energy efficiency is a major criterion for computing in general and High Performance Computing in particular. When optimizing for energy efficiency, it is essential to measure the underlying metric: energy consumption. To fully leverage energy measurements, their quality needs to be well-understood. To that end, this thesis provides a rigorous evaluation of various energy measurement techniques. I demonstrate how the deliberate selection of instrumentation points, sensors, and analog processing schemes can enhance the temporal and spatial resolution while preserving a well-known accuracy. Further, I evaluate a scalable energy measurement solution for production HPC systems and address its shortcomings. Such high-resolution and large-scale measurements present challenges regarding the management of large volumes of generated metric data. I address these challenges with a scalable infrastructure for collecting, storing, and analyzing metric data. With this infrastructure, I also introduce a novel persistent storage scheme for metric time series data, which allows efficient queries for aggregate timelines. To ensure that it satisfies the demanding requirements for scalable power measurements, I conduct an extensive performance evaluation and describe a productive deployment of the infrastructure. Finally, I describe different approaches and practical examples of analyses based on energy measurement data. In particular, I focus on the combination of energy measurements and application performance traces. However, interweaving fine-grained power recordings and application events requires accurately synchronized timestamps on both sides. To overcome this obstacle, I develop a resilient and automated technique for time synchronization, which utilizes cross-correlation of a specifically influenced power measurement signal (a minimal sketch of this idea follows the table of contents below).
Ultimately, this careful combination of sophisticated energy measurements and application performance traces yields a detailed insight into application and system energy efficiency at full-scale HPC systems and down to millisecond-range regions.

Table of contents:
1 Introduction
2 Background and Related Work
   2.1 Basic Concepts of Energy Measurements
      2.1.1 Basics of Metrology
      2.1.2 Measuring Voltage, Current, and Power
      2.1.3 Measurement Signal Conditioning and Analog-to-Digital Conversion
   2.2 Power Measurements for Computing Systems
      2.2.1 Measuring Compute Nodes using External Power Meters
      2.2.2 Custom Solutions for Measuring Compute Node Power
      2.2.3 Measurement Solutions of System Integrators
      2.2.4 CPU Energy Counters
      2.2.5 Using Models to Determine Energy Consumption
   2.3 Processing of Power Measurement Data
      2.3.1 Time Series Databases
      2.3.2 Data Center Monitoring Systems
   2.4 Influences on the Energy Consumption of Computing Systems
      2.4.1 Processor Power Consumption Breakdown
      2.4.2 Energy-Efficient Hardware Configuration
   2.5 HPC Performance and Energy Analysis
      2.5.1 Performance Analysis Techniques
      2.5.2 HPC Performance Analysis Tools
      2.5.3 Combining Application and Power Measurements
   2.6 Conclusion
3 Evaluating and Improving Energy Measurements
   3.1 Description of the Systems Under Test
   3.2 Instrumentation Points and Measurement Sensors
      3.2.1 Analog Measurement at Voltage Regulators
      3.2.2 Instrumentation with Hall Effect Transducers
      3.2.3 Modular Instrumentation of DC Consumers
      3.2.4 Optimal Wiring for Shunt-Based Measurements
      3.2.5 Node-Level Instrumentation for HPC Systems
   3.3 Analog Signal Conditioning and Analog-to-Digital Conversion
      3.3.1 Signal Amplification
      3.3.2 Analog Filtering and Analog-To-Digital Conversion
      3.3.3 Integrated Solutions for High-Resolution Measurement
   3.4 Accuracy Evaluation and Calibration
      3.4.1 Synthetic Workloads for Evaluating Power Measurements
      3.4.2 Improving and Evaluating the Accuracy of a Single-Node Measuring System
      3.4.3 Absolute Accuracy Evaluation of a Many-Node Measuring System
   3.5 Evaluating Temporal Granularity and Energy Correctness
      3.5.1 Measurement Signal Bandwidth at Different Instrumentation Points
      3.5.2 Retaining Energy Correctness During Digital Processing
   3.6 Evaluating CPU Energy Counters
      3.6.1 Energy Readouts with RAPL
      3.6.2 Methodology
      3.6.3 RAPL on Intel Sandy Bridge-EP
      3.6.4 RAPL on Intel Haswell-EP and Skylake-SP
   3.7 Conclusion
4 A Scalable Infrastructure for Processing Power Measurement Data
   4.1 Requirements for Power Measurement Data Processing
   4.2 Concepts and Implementation of Measurement Data Management
      4.2.1 Message-Based Communication between Agents
      4.2.2 Protocols
      4.2.3 Application Programming Interfaces
      4.2.4 Efficient Metric Time Series Storage and Retrieval
      4.2.5 Hierarchical Timeline Aggregation
   4.3 Performance Evaluation
      4.3.1 Benchmark Hardware Specifications
      4.3.2 Throughput in Symmetric Configuration with Replication
      4.3.3 Throughput with Many Data Sources and Single Consumers
      4.3.4 Temporary Storage in Message Queues
      4.3.5 Persistent Metric Time Series Request Performance
      4.3.6 Performance Comparison with Contemporary Time Series Storage Solutions
      4.3.7 Practical Usage of MetricQ
   4.4 Conclusion
5 Energy Efficiency Analysis
   5.1 General Energy Efficiency Analysis Scenarios
      5.1.1 Live Visualization of Power Measurements
      5.1.2 Visualization of Long-Term Measurements
      5.1.3 Integration in Application Performance Traces
      5.1.4 Graphical Analysis of Application Power Traces
   5.2 Correlating Power Measurements with Application Events
      5.2.1 Challenges for Time Synchronization of Power Measurements
      5.2.2 Reliable Automatic Time Synchronization with Correlation Sequences
      5.2.3 Creating a Correlation Signal on a Power Measurement Channel
      5.2.4 Processing the Correlation Signal and Measured Power Values
      5.2.5 Common Oversampling of the Correlation Signals at Different Rates
      5.2.6 Evaluation of Correlation and Time Synchronization
   5.3 Use Cases for Application Power Traces
      5.3.1 Analyzing Complex Power Anomalies
      5.3.2 Quantifying C-State Transitions
      5.3.3 Measuring the Dynamic Power Consumption of HPC Applications
   5.4 Conclusion
6 Summary and Outlook
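The cross-correlation time synchronization described in the abstract above can be illustrated with a short sketch: a known on/off load pattern is imprinted on the measured power signal, and the lag that maximizes the cross-correlation between the reference pattern and the recorded trace gives the clock offset. This only illustrates the general principle (the thesis' method additionally has to handle resampling, noise, and drift); the signal shapes and names are assumptions made for the example.

```python
# Minimal sketch of time synchronization by cross-correlating a known load pattern
# with a recorded power trace. Illustrative only.
import numpy as np

def estimate_offset(reference: np.ndarray, recorded: np.ndarray, sample_rate: float) -> float:
    """Return the offset in seconds by which 'recorded' lags 'reference'."""
    ref = reference - reference.mean()
    rec = recorded - recorded.mean()
    corr = np.correlate(rec, ref, mode="full")   # cross-correlation over all lags
    lag = np.argmax(corr) - (len(ref) - 1)       # lag in samples
    return lag / sample_rate

# Example: a square-wave marker that appears in the recorded trace 0.25 s later
rate = 1000.0  # samples per second
marker = np.tile(np.concatenate([np.ones(50), np.zeros(50)]), 5)
trace = np.concatenate([np.zeros(250), marker, np.zeros(250)]) + 0.01 * np.random.randn(1000)
print(estimate_offset(marker, trace, rate))      # ~0.25
```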
850

Modélisation et simulation de l’écoulement diphasique dans les moteurs-fusées à propergol solide par des approches eulériennes polydispersées en taille et en vitesse / Eulerian modeling and simulation of two-phase flows in solid rocket motors taking into account size polydispersion and droplet trajectory crossing

Dupif, Valentin 22 June 2018 (has links)
The massive amount of aluminum oxide particles carried in the internal flow of solid rocket motors significantly influences the flow and the motor's behavior in every operating regime. The objective of this PhD thesis is to improve the Eulerian two-phase flow models available in CEDRE, ONERA's semi-industrial CFD code for energetics, by introducing the possibility of a local velocity dispersion of the particles in addition to the size dispersion already taken into account in the code, while keeping the system of equations mathematically well-posed. This new feature enables the model to treat anisotropic particle trajectory crossings, which is a key difficulty of classical Eulerian models for droplets of moderately large inertia.

In addition to the design and detailed analysis of a class of models based on moment methods, the work focuses on solving the resulting systems of equations in industrial configurations. To this end, a new class of accurate and realizable numerical schemes for transporting the particles in both physical space and phase space is developed. These schemes ensure the robustness of the simulation despite various singularities (including shocks, delta-shocks, zero-pressure regions, and vacuum) while retaining second-order accuracy for regular solutions. The developments are carried out in two and three dimensions, including the two-dimensional axisymmetric framework, on general unstructured meshes.

The ability of the numerical schemes to maintain a high level of accuracy while remaining robust under all conditions is a key aspect of industrial simulations of the internal flow of solid rocket motors. To assess this, the in-house research code SIERRA, originally designed at ONERA in the 1990s for studying combustion instabilities in solid propulsion, was rewritten, restructured, and extended in order to compare two generations of models and numerical methods and to serve as a test bed before integration into CEDRE. The results confirm the efficiency of the chosen numerical strategy and the need to introduce, for axisymmetric simulations, a specific boundary condition developed in this thesis. In particular, the effects of both the model and the numerical method are detailed in the context of unsteady simulations of the internal flow of solid rocket motors. Through this approach, the links between fundamental aspects of modeling and numerical schemes, as well as their consequences for applications, are brought to light.
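As general background (and not the specific closures developed in the thesis), the simplest Eulerian description such spray models start from is the pressureless, monokinetic system for a single droplet size class:

$$
\partial_t n + \nabla_x \cdot (n\,\mathbf{u}) = 0, \qquad
\partial_t (n\,\mathbf{u}) + \nabla_x \cdot (n\,\mathbf{u}\otimes\mathbf{u}) = n\,\frac{\mathbf{u}_g - \mathbf{u}}{\tau_p},
$$

where $n$ is the droplet number density, $\mathbf{u}$ the droplet velocity, $\mathbf{u}_g$ the carrier-gas velocity, and $\tau_p$ the droplet relaxation time. Because this system carries no velocity dispersion, crossing droplet jets collapse into delta-shocks; the moment closures studied in the thesis add a local velocity dispersion precisely to lift that limitation while keeping the transported moments realizable.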
