1

General Resource Management for Computationally Demanding Scientific Software

Xinchen Guo (13965024), 17 October 2022
Many scientific problems contain nonlinear systems of equations that require multiple iterations to reach converged results. Such a software pattern follows the bulk synchronous parallel model: an iteration is a superstep, which includes computation on local data, global communication to update data for the next iteration, and synchronization between iterations. In modern HPC environments, MPI is used to distribute data and OpenMP is used to accelerate the computation on each process's local data. More MPI processes increase the cost of communication and synchronization, whereas more OpenMP threads increase the overhead of multithreading. A proper combination of MPI and OpenMP is therefore critical to accelerating each superstep, and proper orchestration of MPI processes and OpenMP threads is needed to use the underlying hardware resources efficiently.

Purdue's multi-purpose nanodevice simulation tool NEMO5 distributes the computation of independent spectral points across MPI processes, and the computation of each spectral point is accelerated with OpenMP threads. A few examples of resource utilization optimizations are presented. One type of simulation applies the non-equilibrium Green's function method to make accurate predictions about drug molecules. Our profiling results suggest the optimum combination uses more MPI processes and fewer OpenMP threads. However, NEMO5's memory usage spikes sharply for each spectral point. Because HPC nodes lack the swap space to absorb such spikes without running out of memory, this behavior limits the concurrency of the spectral point calculation.

A distributed resource management framework is proposed and developed to automatically and dynamically manage memory and CPU usage. The concurrent calculation of spectral points is pipelined to avoid simultaneous peak memory usage, which allows more MPI processes and fewer OpenMP threads for higher parallel efficiency. Automatic CPU usage adjustment also reduces the time needed to fill and drain the calculation pipeline. The resource management framework requires minimal code intrusion, successfully speeds up the calculation, and can be generalized to other simulation software.
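The pipelining strategy described above can be illustrated with a token-passing sketch in MPI: each rank waits for its predecessor to get past its memory spike before allocating its own, so peak allocations never coincide while the long compute phases still overlap. This is a minimal sketch under assumed behavior, not NEMO5's implementation; the two phase functions and the single-token policy are hypothetical stand-ins.

```cpp
// Sketch: serialize transient memory spikes across MPI ranks with a token.
#include <mpi.h>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

// Hypothetical stand-ins for the application's real phases.
void allocate_spectral_point(std::vector<double>& buf) {
    buf.assign(std::size_t(1) << 26, 0.0);   // transient ~0.5 GB spike
}
double solve_spectral_point(const std::vector<double>& buf) {
    double acc = 0.0;                        // long, thread-friendly compute
    for (std::size_t i = 0; i < buf.size(); ++i) acc += std::sin(double(i));
    return acc;
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Wait for the predecessor to clear its memory spike before allocating.
    int token = 0;
    if (rank > 0)
        MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    std::vector<double> buf;
    allocate_spectral_point(buf);            // only one rank peaks at a time

    // Release the successor as soon as the spike has passed, so its
    // allocation overlaps with this rank's long solve.
    if (rank + 1 < size)
        MPI_Send(&token, 1, MPI_INT, rank + 1, 0, MPI_COMM_WORLD);

    std::printf("rank %d: %f\n", rank, solve_spectral_point(buf));
    MPI_Finalize();
    return 0;
}
```

Run with, e.g., `mpirun -np 8 ./a.out`: the allocation spikes are serialized while the solves proceed concurrently, which is the effect the proposed framework automates and generalizes.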
2

UVLabel: A Tool for the Future of Interferometry Analysis

January 2019
Abstract: UVLabel was created to enable radio astronomers to view and annotate their own data so that they can expand their future research paths. It simplifies the data rendering process by providing a simple user interface for accessing sections of their data, and it provides a labeling feature for tracking trends in the data. The tool was developed following the incremental development process in order to quickly create a functional and testable tool; the incremental process also allowed feedback from radio astronomers to help guide the project's development. UVLabel provides both a functional product and a modifiable, scalable code base for radio-astronomer developers, giving astronomers who study various kinds of interferometric data the ability to label it. The tool can then be used to improve filtering methods, pursue machine learning solutions, and discover new trends. Finally, UVLabel will be open source, putting customization, scalability, and adaptability in the hands of these researchers. (Master's Thesis, Software Engineering, 2019)
3

Simulating Atmosphere and the TolTEC Detector Array for Data Reduction Pipeline Evaluation

January 2019
Abstract: TolTEC is a three-color millimeter-wavelength camera currently being developed for the Large Millimeter Telescope (LMT) in Mexico. Synthesizing data from previous astronomy cameras as well as knowledge of atmospheric physics, I have developed a simulation of TolTEC's data collection on the LMT. The simulation was built from smaller sub-projects that informed the development with an understanding of the detector array, the time streams for astronomical mapping, and the science behind Lumped Element Kinetic Inductance Detectors (LEKIDs). Additionally, key aspects of software development processes were integrated into the scientific development process to streamline collaboration across multiple universities and to plan for integration on the servers at the LMT. This work benefits the data reduction pipeline team by enabling them to develop their software efficiently and test it on simulated data. (Master's Thesis, Software Engineering, 2019)
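To give a concrete sense of what such a simulation produces, the sketch below generates a single detector time stream: a Gaussian beam response from a source crossing, added to AR(1) noise standing in for slow atmospheric drift, plus white detector noise. The sample rate, noise model, and amplitudes are illustrative assumptions, not the actual TolTEC simulation.

```cpp
// Sketch: one simulated detector time stream (source + atmosphere + noise).
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const int n = 4096;               // number of samples
    const double fs = 122.0;          // sample rate in Hz (assumed)
    std::mt19937 rng(42);
    std::normal_distribution<double> white(0.0, 1.0);

    double atm = 0.0;
    const double alpha = 0.999;       // near 1 -> strong low-frequency power
    for (int i = 0; i < n; ++i) {
        double t = i / fs;
        // AR(1) process approximating 1/f-like atmospheric drift.
        atm = alpha * atm + std::sqrt(1.0 - alpha * alpha) * white(rng);
        // Gaussian beam response as the telescope scans across a source.
        double source = 5.0 * std::exp(-0.5 * std::pow((t - 16.0) / 0.5, 2.0));
        std::printf("%.6f %.6f\n", t, source + 2.0 * atm + 0.1 * white(rng));
    }
    return 0;
}
```

A data reduction pipeline can then be evaluated on such streams, since the injected source parameters are known exactly.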
4

Information Diffusion in Social Media: Modeling and Analysis

Guille, Adrien, 25 November 2014
Social media have greatly modified the way we produce, diffuse, and consume information, and have become powerful information vectors. The goal of this thesis is to help in understanding the phenomenon of information diffusion in social media by providing means of modeling and analysis.

First, we propose MABED (Mention-Anomaly-Based Event Detection), a statistical method for automatically detecting the events that most interest social media users from the stream of messages they publish. In contrast with existing methods, it does not focus only on the textual content of messages but also leverages the frequency of the social interactions that occur between users. MABED also differs from the literature in that it dynamically estimates the period of time during which each event is discussed, rather than assuming a fixed, predefined duration for all events. Second, we propose T-BASIC (Time-Based ASynchronous Independent Cascades), a probabilistic model based on the network structure underlying social media for predicting information diffusion, more specifically the evolution over time of the number of users who relay a given piece of information. In contrast with similar models that are also based on the network structure, the probability that a piece of information propagates from one user to another is not fixed but depends on time. We also describe a procedure for inferring the latent parameters of the model, which we formulate as functions of observable characteristics of social media users. Third, we propose SONDY (SOcial Network DYnamics), a free and extensible software package that implements state-of-the-art methods for mining the data generated by social media, i.e., the messages published by users and the structure of the social network that interconnects them. As opposed to existing academic tools that focus either on analyzing messages or on analyzing the network, SONDY permits the joint analysis of these two types of data through the analysis of influence with respect to each detected event.

The experiments, conducted on data collected from Twitter, demonstrate the relevance of our proposals and shed light on properties that give us a better understanding of the mechanisms underlying information diffusion. First, we compare the performance of MABED against that of methods from the literature and find that taking into account the frequency of social interactions between users leads to more accurate event detection and improved robustness in the presence of noisy content. We also show that MABED helps with the interpretation of detected events by providing clearer textual and more precise temporal descriptions. Second, we demonstrate the relevance of the procedure we propose for estimating the pairwise diffusion probabilities on which T-BASIC relies: we illustrate the predictive power of the selected user characteristics and compare the performance of the proposed estimation method against that of state-of-the-art methods. We also show the importance of having non-constant diffusion probabilities, which allows T-BASIC to incorporate the variation of users' level of receptivity over time, and we study how, and to what extent, the social, topical, and temporal characteristics of users impact information diffusion. Third, we illustrate with various scenarios the usefulness of SONDY, both for non-experts, thanks to its advanced user interface and adapted visualizations, and for researchers, thanks to its application programming interface.
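The distinctive ingredient of T-BASIC, diffusion probabilities that vary with time instead of being fixed per edge, can be sketched with a small discrete-time independent cascade. The toy graph and the exponentially decaying probability are illustrative assumptions; the thesis instead formulates the probabilities as functions of observable user characteristics.

```cpp
// Sketch: independent cascade with time-dependent diffusion probabilities.
#include <cmath>
#include <cstdio>
#include <random>
#include <utility>
#include <vector>

// Probability that u relays to v at step t: a base weight damped by a
// receptivity factor decaying over time (hypothetical functional form).
double diffusion_prob(double base_weight, int t) {
    return base_weight * std::exp(-0.3 * t);
}

int main() {
    // Tiny directed graph: node -> list of (neighbor, base weight).
    std::vector<std::vector<std::pair<int, double>>> graph = {
        {{1, 0.8}, {2, 0.5}}, {{3, 0.6}}, {{3, 0.4}, {4, 0.7}}, {{4, 0.5}}, {}};

    std::mt19937 rng(7);
    std::uniform_real_distribution<double> coin(0.0, 1.0);

    std::vector<bool> informed(graph.size(), false);
    std::vector<int> frontier = {0};   // seed user
    informed[0] = true;

    for (int t = 0; !frontier.empty(); ++t) {
        std::printf("t=%d: %zu user(s) relayed the information\n",
                    t, frontier.size());
        std::vector<int> next;
        for (int u : frontier)
            for (auto [v, w] : graph[u])
                // One activation attempt per newly informed user and
                // neighbor, with success probability depending on time.
                if (!informed[v] && coin(rng) < diffusion_prob(w, t)) {
                    informed[v] = true;
                    next.push_back(v);
                }
        frontier = std::move(next);
    }
    return 0;
}
```

Printing the frontier size per step traces exactly the quantity T-BASIC predicts: the evolution over time of the number of users relaying the information.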
