1

A Lawn in the Sky

Hutchinson, Simon 03 October 2013
"A Lawn in the Sky" is a musical drama in two acts on a libretto by Katherine Hollander. The piece is based on the true story of Lieutenant Onoda Hiroo, a Japanese "straggler" who refused to believe that Japan had surrendered in World War II and continued to wage guerilla warfare in the jungles of the Philippines until 1974. The librettist constructed this fictionalized account drawing from information in newspaper articles and Onoda's memoir, No Surrender: My Thirty-Year War. While both Ms. Hollander and I referred to these historical sources, the story is a work of fiction, including a total cast of nine characters, several purely fictional. These roles are supported by an ensemble of Western instruments: flute, clarinet, saxophone, oboe, bassoon, percussion, piano, and contrabass; Japanese instruments: nohkan, shamisen, and taiko; and fixed media electronics. This mixed ensemble parallels the characters' divergent views of reality and offers opportunities for multidimensional commentary on both the libretto and the story. Included with this document is a supplemental zip file which contains the audio samples and sample players for the electronic portion of the score.
2

Determination of Stellar Parameters through the Use of All Available Flux Data and Model Spectral Energy Distributions

Ekanayake, Gemunu 01 January 2017
Basic stellar atmospheric parameters, such as effective temperature, surface gravity, and metallicity, play a vital role in the characterization of the various stellar populations in the Milky Way. These parameters can be measured through one or more observational techniques, such as spectroscopy, photometry, and interferometry. Finding new and innovative ways to combine these observational data to derive reliable stellar parameters, and using them to characterize stellar populations in our galaxy, is the main goal of this thesis. Our initial work, based on spectroscopic and photometric data available in the literature, had the objective of calibrating stellar parameters against the full range of available flux observations, from the far-UV to the far-IR. We estimate probability distributions of the stellar parameters using Bayesian inference, rather than relying on point estimates. We applied these techniques to blue straggler stars (BSSs) in the Galactic field, which are thought to be a product of mass transfer in binary star systems. Using photometry from the SDSS and GALEX surveys, we identified 85 stars with a UV excess in their spectral energy distributions (SEDs), an indication of a hot white dwarf (WD) companion to the BSS. To determine the parameter distributions (mass, temperature, and age) of the WD companions, we developed algorithms that fit binary model atmospheres to the observed SEDs. The WD mass distribution peaks at 0.4 M⊙, suggesting that the primary formation channel of field BSSs is Case-B mass transfer, i.e., mass transfer that occurs when the donor star is in the red giant phase of its evolution. Based on stellar evolutionary models, we estimate a lower limit on the binary mass-transfer efficiency of β ~ 0.5. Next, we focused on the Canis Major overdensity (CMO), a substructure located at low Galactic latitude in the Milky Way, where the interstellar reddening E(B−V) due to dust is significantly high. In this study we estimated the reddening, metallicity distribution, and kinematics of the CMO using a sample of red clump (RC) stars. The average E(B−V) (~0.19) is consistent with that measured from the Schlegel dust maps (Schlegel et al. 1998). The overall metallicity and kinematic distributions agree with previous estimates for disk stars, but the measured mean alpha-element abundance is larger than the value expected for disk stars.
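As a toy illustration of the SED-based Bayesian inference described above (a sketch only, not the thesis's pipeline, which fits full model-atmosphere SEDs rather than blackbodies), the snippet below infers a posterior over effective temperature from four broadband fluxes; the filter wavelengths, 5% flux errors, and flat prior are all assumptions made for the example.

```python
# Toy Bayesian SED fit: posterior over effective temperature from broadband
# fluxes using blackbody models. Illustrative assumptions throughout.
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck const, light speed, Boltzmann const (SI)

def planck(wl_m, teff):
    """Blackbody spectral radiance B_lambda(T) at wavelength wl_m (metres)."""
    return (2 * H * C**2 / wl_m**5) / (np.exp(H * C / (wl_m * K * teff)) - 1)

def scale_factor(f_obs, f_mod, sigma):
    """Least-squares normalization of the model onto the observed fluxes."""
    return np.sum(f_obs * f_mod / sigma**2) / np.sum(f_mod**2 / sigma**2)

wl = np.array([0.23e-6, 0.48e-6, 0.62e-6, 2.2e-6])  # assumed NUV, g, r, K bands
rng = np.random.default_rng(0)
f_true = planck(wl, 6400.0)                          # synthetic "observed" star
f_obs = f_true * (1 + 0.05 * rng.normal(size=wl.size))
sigma = 0.05 * f_obs                                 # assumed 5% flux errors

teff_grid = np.linspace(4000.0, 9000.0, 501)         # flat prior over this grid
chi2 = np.empty_like(teff_grid)
for i, t in enumerate(teff_grid):
    f_mod = planck(wl, t)
    a = scale_factor(f_obs, f_mod, sigma)            # absorbs radius/distance scale
    chi2[i] = np.sum(((f_obs - a * f_mod) / sigma) ** 2)

posterior = np.exp(-0.5 * (chi2 - chi2.min()))       # likelihood x flat prior
posterior /= posterior.sum() * (teff_grid[1] - teff_grid[0])
print(f"posterior mode: {teff_grid[np.argmax(posterior)]:.0f} K")
```

Extending this to a binary (BSS + WD) model, as the thesis does, amounts to summing two independently scaled model SEDs before computing the chi-square.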
3

Energy-efficient Straggler Mitigation for Big Data Applications on the Clouds

Phan, Tien-Dat 30 November 2017
Energy consumption is an important concern for large-scale Big Data processing systems, as it translates into a huge monetary cost. Due to hardware heterogeneity and contention between concurrent workloads, stragglers (i.e., tasks that run significantly slower than other tasks) can severely increase a job's execution time and energy consumption. Consequently, straggler mitigation has become an important technique for improving the performance of large-scale Big Data processing systems. Typically, it consists of two phases: straggler detection and straggler handling. In the detection phase, slow tasks (e.g., tasks with speed or progress below the average) are marked as stragglers.
Then, stragglers are handled using the speculative execution technique. With this technique, a copy of the detected straggler is launched in parallel with the straggler in the expectation that it will finish earlier, thus reducing the straggler's execution time. Although a large number of studies have been proposed to improve the performance of Big Data applications using speculative execution, few of them have studied the energy efficiency of their solutions. To address this gap, we conduct an experimental study to fully understand the impact of straggler mitigation techniques on the performance and the energy consumption of Big Data processing systems. We observe that current straggler mitigation techniques are not energy efficient, which motivates further study aimed at more energy-efficient straggler mitigation. For straggler detection, we introduce a novel framework for comprehensively characterizing and evaluating straggler detection mechanisms. Accordingly, we propose a new energy-driven straggler detection mechanism, implement it in Hadoop, and demonstrate that it achieves higher energy efficiency than state-of-the-art mechanisms. For straggler handling, we present a new speculative copy allocation method that takes into consideration the impact of resource heterogeneity on performance and energy consumption. Finally, we introduce an energy-efficient straggler handling mechanism that provides more resource availability for launching speculative copies by adopting a dynamic resource reservation approach; a trace-driven simulation demonstrates that it brings a substantial improvement in energy efficiency.
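To make the detection phase concrete, here is a minimal sketch of average-based straggler detection in the spirit described above. It is not the thesis's Hadoop implementation; the `Task` record, the progress-rate heuristic, and the `slowness_margin` threshold are illustrative assumptions.

```python
# Minimal sketch of average-based straggler detection (assumed names/thresholds).
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    progress: float   # fraction of work completed, in [0, 1]
    runtime_s: float  # seconds elapsed since the task started

def detect_stragglers(tasks, slowness_margin=0.5):
    """Flag running tasks whose progress rate falls below
    (1 - slowness_margin) times the mean progress rate."""
    running = [t for t in tasks if t.runtime_s > 0 and t.progress < 1.0]
    if not running:
        return []
    rate = {t.task_id: t.progress / t.runtime_s for t in running}
    mean_rate = sum(rate.values()) / len(rate)
    threshold = (1.0 - slowness_margin) * mean_rate
    return [t for t in running if rate[t.task_id] < threshold]

tasks = [Task("map_00", 0.90, 60.0),
         Task("map_01", 0.85, 60.0),
         Task("map_02", 0.20, 60.0)]  # contended node -> slow progress
for t in detect_stragglers(tasks):
    print(f"launch speculative copy of {t.task_id}")  # -> map_02
```

A real scheduler would additionally check that a free container exists before paying the energy cost of a speculative copy, which is precisely the resource-availability concern the handling mechanism above addresses.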
4

Scalability Analysis and Optimization for Large-Scale Deep Learning

Pumma, Sarunya 03 February 2020
Despite its growing importance, scalable deep learning (DL) remains a difficult challenge. Scalability of large-scale DL is constrained by many factors, including those deriving from data movement and data processing. DL frameworks rely on large volumes of data being fed to the computation engines for processing. However, current hardware trends show that data movement is already one of the slowest components in modern high performance computing systems, and this gap is only going to widen in the future. This includes data movement from the filesystem, within the network subsystem, and even within the node itself, all of which limit the scalability of DL frameworks on large systems. Even after data is moved to the computational units, managing this data is not easy. Modern DL frameworks use multiple components---such as graph scheduling, neural network training, gradient synchronization, and input pipeline processing---to process this data in an asynchronous, uncoordinated manner, which results in straggler processes and consequently computational imbalance, further limiting scalability. This thesis studies a subset of the large body of data movement and data processing challenges that exist in modern DL frameworks. In the first study, we investigate file I/O constraints that limit the scalability of large-scale DL. We first analyze the Caffe DL framework with the Lightning Memory-Mapped Database (LMDB), one of the most widely used file I/O subsystems in DL frameworks, to understand the causes of file I/O inefficiencies. Based on our analysis, we propose LMDBIO---an optimized I/O plugin for scalable DL that addresses the various shortcomings in existing file I/O for DL. Our experimental results show that LMDBIO significantly outperforms LMDB in all cases and improves overall application performance by up to 65-fold on 9,216 CPUs of the Blues and Bebop supercomputers at Argonne National Laboratory. Our second study deals with the computational imbalance problem in data processing. In most DL systems, the simultaneous and asynchronous execution of multiple data-processing components on shared hardware resources causes these components to contend with one another, leading to severe computational imbalance and degraded scalability. We propose several novel optimizations that minimize resource contention and improve performance by up to 35% for training various neural networks on 24,576 GPUs of the Summit supercomputer at Oak Ridge National Laboratory---the world's largest supercomputer at the time of writing of this thesis. / Doctor of Philosophy / Deep learning is a method for computers to automatically extract complex patterns and trends from large volumes of data. It is a popular methodology that we use every day when we talk to Apple Siri or Google Assistant, when we use self-driving cars, or when we watched IBM Watson be crowned champion of Jeopardy! While deep learning is integrated into our everyday life, it is a complex problem that has attracted the attention of many researchers. Executing deep learning is a highly computationally intensive problem. On traditional computers, such as a generic laptop or desktop machine, the computation for large deep learning problems can take years or decades to complete. Consequently, supercomputers, which are machines with massive computational capability, are leveraged for deep learning workloads.
The world's fastest supercomputer today, for example, is capable of performing almost 200 quadrillion floating point operations every second. While that is impressive, for large problems, unfortunately, even the fastest supercomputers today are not fast enough. The problem is not that they lack computational capability, but that deep learning problems inherently rely on a lot of data---the entire concept of deep learning centers around the computer studying a huge volume of data and drawing trends from it. Moving and processing this data, unfortunately, is much slower than the computation itself, and with current hardware trends it is not expected to get much faster in the future. This thesis aims at making deep learning executions on large supercomputers faster. Specifically, it looks at two pieces associated with managing data: (1) data reading---how to quickly read large amounts of data from storage, and (2) computational imbalance---how to ensure that the different processors on the supercomputer are not waiting for each other and thus wasting time. We first analyze each performance problem to identify its root cause. Then, based on the analysis, we propose several novel techniques to solve the problem. With our optimizations, we are able to significantly improve the performance of deep learning execution on a number of supercomputers, including Blues and Bebop at Argonne National Laboratory, and Summit---the world's fastest supercomputer---at Oak Ridge National Laboratory.
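As context for the file I/O study, the sketch below shows a naive way to shard LMDB reads across data-parallel workers using the stock `lmdb` Python binding. It is not LMDBIO; the database path and the rank/world-size values are assumptions made for the example.

```python
# Naive sharded read of an LMDB database across data-parallel workers.
# NOT LMDBIO: just an illustration of the access pattern it must improve.
import lmdb

def read_shard(db_path, rank, world_size):
    """Return the records assigned round-robin to this worker."""
    env = lmdb.open(db_path, readonly=True, lock=False)
    samples = []
    with env.begin() as txn:
        for i, (key, value) in enumerate(txn.cursor()):
            if i % world_size == rank:   # keep every world_size-th record
                samples.append((key, value))
    env.close()
    return samples

# e.g., worker 2 of 8 keeps records 2, 10, 18, ... (assumed path and sizes)
shard = read_shard("/data/train_lmdb", rank=2, world_size=8)
print(f"rank 2 holds {len(shard)} records")
```

Note that every worker still cursors over the full keyspace to find its records, so I/O work is duplicated rather than divided---the kind of sequential-access inefficiency that motivates an optimized plugin such as LMDBIO.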
5

The Physics of Mergers: Theoretical and Statistical Techniques Applied to Stellar Mergers in Dense Star Clusters

Leigh, William Nathan 10 1900
In this thesis, we present theoretical and statistical techniques broadly related to systems of dynamically-interacting particles. We apply these techniques to observations of dense star clusters in order to study gravitational interactions between stars. These include both long- and short-range interactions, as well as encounters leading to direct collisions and mergers. The latter have long been suspected to be an important formation channel for several curious types of stars whose origins are unknown. The former drive the structural evolution of star clusters and, by leading to their eventual dissolution and the subsequent dispersal of their stars throughout the Milky Way Galaxy, have played an important role in shaping its history. Within the last few decades, theoretical work has painted a comprehensive picture of the evolution of star clusters. And yet, we still lack direct observational confirmation that many of the processes thought to be driving this evolution are actually occurring. The results presented in this thesis have connected several of these processes to real observations of star clusters, in many cases for the first time. This has allowed us to directly link the observed properties of several stellar populations to the physical processes responsible for their origins.

We present a new method of quantifying the frequency of encounters involving single, binary, and triple stars using an adaptation of the classical mean free path approximation. With this technique, we have shown that dynamical encounters involving triple stars occur commonly in star clusters, and that they are likely to be an important dynamical channel for stellar mergers. This is a new result with important implications for the origins of several peculiar types of stars (and binary stars), in particular blue stragglers. We further present several new statistical techniques that are broadly applicable to systems of dynamically-interacting particles composed of several different populations. These are applied to observations of star clusters in order to obtain quantitative constraints on the degree to which dynamical interactions affect the relative sizes and spatial distributions of their different stellar populations. To this end, we perform an extensive analysis of a large sample of colour-magnitude diagrams taken from the ACS Survey for Globular Clusters. The results of this analysis can be summarized as follows: (1) We have compiled a homogeneous catalogue of stellar populations, including main-sequence, main-sequence turn-off, red giant branch, horizontal branch, and blue straggler stars. (2) With this catalogue, we have quantified the effects of cluster dynamics in determining the relative sizes and spatial distributions of these stellar populations. (3) These results are particularly interesting for blue stragglers, since they provide compelling evidence that blue stragglers are descended from binary stars. (4) Our analysis of the main-sequence populations is consistent with a remarkably universal initial stellar mass function in old massive star clusters in the Milky Way---a new result with important implications for our understanding of star formation in the early Universe and, more generally, the history of our Galaxy. Finally, we describe how the techniques presented in this thesis are ideally suited for application to a number of other outstanding puzzles of modern astrophysics, including chemical reactions in the interstellar medium and mergers between galaxies in galaxy clusters and groups. / Doctor of Philosophy (PhD)
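To illustrate the mean free path approximation at the heart of the encounter-rate method (a back-of-the-envelope sketch with assumed fiducial cluster values, not the thesis's full single-binary-triple treatment), the rate at which one star encounters another within a distance r_min is Γ ≈ n Σ v_rel, with the cross-section Σ enhanced by gravitational focusing.

```python
# Sketch of the mean-free-path encounter rate with gravitational focusing.
# Fiducial numbers are assumptions, not values from the thesis.
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
MSUN = 1.989e30      # solar mass [kg]
PC = 3.086e16        # parsec [m]
AU = 1.496e11        # astronomical unit [m]
YR = 3.156e7         # year [s]

def encounter_rate(n_pc3, v_kms, m_tot_msun, r_min_au):
    """Encounters per year within r_min for one star:
    Gamma = n * Sigma * v, with
    Sigma = pi * r_min^2 * (1 + 2*G*M_tot / (r_min * v^2))."""
    n = n_pc3 / PC**3
    v = v_kms * 1.0e3
    m = m_tot_msun * MSUN
    r = r_min_au * AU
    sigma = math.pi * r**2 * (1.0 + 2.0 * G * m / (r * v**2))
    return n * sigma * v * YR

# Assumed dense core: 1e5 stars/pc^3, v = 10 km/s, two 0.8 Msun stars,
# counting approaches within 1 AU (close enough to matter for mergers).
rate = encounter_rate(1e5, 10.0, 1.6, 1.0)
print(f"~{rate:.1e} encounters/yr, i.e. one every ~{1.0 / rate:.1e} yr")
```

The focusing term 2GM_tot/(r_min v²) dominates for slow, close encounters, which is why dense, low-velocity-dispersion cores are efficient factories for collisions and mergers.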
6

Spectroscopy of Binaries in Globular Clusters

Giesers, Benjamin David 13 December 2019
No description available.
