1

The Trouble with Diversity: Fork-Join Networks with Heterogeneous Customer Population

Nguyen, Viên, October 1900
Consider a feedforward network of single-server stations populated by multiple job types. Each job requires the completion of a number of tasks whose order of execution is determined by a set of deterministic precedence constraints. The precedence requirements allow some tasks to be done in parallel (in which case tasks would "fork") and require that others be processed sequentially (where tasks may "join"). Jobs of a given type share the same precedence constraints, interarrival time distributions, and service time distributions, but these characteristics may vary across different job types. We show that the heavy traffic limit of certain processes associated with heterogeneous fork-join networks can be expressed as a semimartingale reflected Brownian motion with polyhedral state space. The polyhedral region typically has many more faces than its dimension, and the description of the state space becomes quite complicated in this setting. One can interpret the proliferation of additional faces in heterogeneous fork-join networks as (i) articulations of the fork and join constraints, and (ii) results of the disordering effects that occur when jobs fork and join in their sojourns through the network.
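The fork and join precedence constraints can be made concrete with a small example. The following is a minimal, hypothetical sketch (the task names and structure are illustrative, not taken from the thesis): one job type whose constraints let two tasks fork from a common predecessor and then join at a final task.

```java
import java.util.*;

/** Hypothetical job type: tasks b and c fork from a (may run in parallel);
 *  task d joins on both branches. */
public class ForkJoinPrecedence {
    public static void main(String[] args) {
        Map<String, List<String>> prereqs = new LinkedHashMap<>();
        prereqs.put("a", List.of());          // root task
        prereqs.put("b", List.of("a"));       // fork: b and c depend only on a
        prereqs.put("c", List.of("a"));
        prereqs.put("d", List.of("b", "c"));  // join: d waits for both branches

        // Execute the job in waves: each wave holds every task whose
        // prerequisites are already done, so forked tasks run together.
        Set<String> done = new HashSet<>();
        while (done.size() < prereqs.size()) {
            List<String> wave = new ArrayList<>();
            for (Map.Entry<String, List<String>> e : prereqs.entrySet())
                if (!done.contains(e.getKey()) && done.containsAll(e.getValue()))
                    wave.add(e.getKey());
            System.out.println("run in parallel: " + wave);  // [a], then [b, c], then [d]
            done.addAll(wave);
        }
    }
}
```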
2

Escalonamento on-line eficiente de programas fork-join recursivos do tipo divisão e conquista em MPI / Efficient on-line scheduling of recursive divide-and-conquer fork-join programs in MPI

Mor, Stefano Drimon Kurz, January 2010
This Master's dissertation proposes two new algorithms to make the on-line scheduling of tasks with strict dependencies more efficient on clusters of computers that use an implementation of MPI (up to version 2.1) as their message-passing middleware. The algorithms target programs written in the fork/join programming model in which the fork operation is applied to a recursive function call. They are: 1. RatMD, implemented as a library of map-reduce primitives that works with any MPI implementation, whatever the version of the standard, and used to minimize the execution time of a parallel computation; and 2. RtMPD, implemented as a distributed system of dynamically created process-manager daemons on top of MPICH2 (which implements MPI-2), and used to allow larger instances of dynamic parallel programs to be executed. Both algorithms are based on work stealing, the most widespread load-balancing strategy in the literature, and a theoretical cost model is presented for each. Experimental results fall within the calculated theoretical bounds. RatMD reduces execution time by up to 80% relative to the usual round-robin-based algorithm, while keeping speedup close to linear and space complexity identical to the popular round-robin implementation. RtMPD delivers at least the same performance as the canonical MPICH2 scheduler while roughly doubling the physical limit on the number of processes executed simultaneously on each node.
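The fork-over-a-recursive-call pattern that both algorithms target has the same shape as the one used by shared-memory fork/join runtimes such as Java's ForkJoinPool, whose scheduler is likewise based on work stealing. The sketch below only illustrates that pattern; it is not an implementation of RatMD or RtMPD, and the array-summing task is an invented example.

```java
import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

/** Divide-and-conquer fork/join: each recursive call may be forked as an
 *  independent task and joined later; the pool's work-stealing scheduler
 *  balances the resulting tasks across worker threads. */
public class SumTask extends RecursiveTask<Long> {
    private final long[] data;
    private final int lo, hi;

    SumTask(long[] data, int lo, int hi) { this.data = data; this.lo = lo; this.hi = hi; }

    @Override
    protected Long compute() {
        if (hi - lo <= 1_000) {                        // small enough: solve sequentially
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(data, lo, mid);
        left.fork();                                   // fork the recursive call
        long right = new SumTask(data, mid, hi).compute();
        return right + left.join();                    // join the forked branch
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        Arrays.fill(data, 1L);
        long total = ForkJoinPool.commonPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(total);                     // prints 1000000
    }
}
```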
3

Habanero-Scala: A Hybrid Programming Model Integrating Fork/Join and Actor Models

Imam, Shams, 24 July 2013
This study presents a hybrid concurrent programming model combining the previously developed Fork-Join model (FJM) and Actor model (AM). With the advent of multi-core computers, there is a renewed interest in programming models that reduce the burden of reasoning about and writing efficient concurrent programs. The proposed hybrid model shows how the divide-and-conquer approach of the FJM and the no-shared-mutable-state, event-driven philosophy of the AM can be combined to solve certain classes of problems more efficiently and productively than either of the aforementioned models individually. The hybrid model adds actor creation and coordination to the FJM, while also enabling parallelization within actors. This study uses the Habanero-Java and Scala programming languages as the base for the FJM and AM respectively, and provides an implementation of the hybrid model as an extension of the Scala language called Habanero-Scala. The hybrid model adds to the foundations of parallel programs, and to the tools available to the programmer to aid in productivity and performance while developing parallel software.
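As a rough illustration of the hybrid idea (a hypothetical Java sketch, not Habanero-Scala's actual API): an actor keeps its state private by processing messages one at a time on its own mailbox thread, while each message handler is free to exploit fork/join parallelism internally.

```java
import java.util.Arrays;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/** Hypothetical hybrid actor: messages are handled sequentially (no shared
 *  mutable state), but the handling of a single message is parallelized. */
public class ParallelActor {
    private final ExecutorService mailbox = Executors.newSingleThreadExecutor();
    private long processed = 0;   // actor-private state, touched only by the mailbox thread

    void send(long[] message) {
        mailbox.submit(() -> {
            // Parallelism *within* the actor: the parallel stream runs its
            // subtasks on the common ForkJoinPool.
            long sum = Arrays.stream(message).parallel().sum();
            processed++;          // sequential message processing preserves actor semantics
            System.out.println("message " + processed + ": sum = " + sum);
        });
    }

    public static void main(String[] args) throws InterruptedException {
        ParallelActor actor = new ParallelActor();
        long[] msg = new long[100_000];
        Arrays.fill(msg, 2L);
        actor.send(msg);
        actor.send(msg);
        actor.mailbox.shutdown();
        actor.mailbox.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```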
4

Highly available storage with minimal trust

Mahajan, Prince, 5 July 2012
Storage services form the core of modern Internet-based services spanning commercial, entertainment, and social-networking sectors. High availability is crucial for these services, as even an hour of unavailability can cost them millions of dollars in lost revenue. Unfortunately, it is difficult to build highly available storage services that provide useful correctness properties. Both benign faults (system crashes, power outages, etc.) and Byzantine faults (memory or disk corruption, software or configuration errors, etc.) plague the availability of these services. Furthermore, the goal of high availability conflicts with our desire to provide good performance and strong correctness guarantees. For example, the Consistency, Availability, and Partition-resilience (CAP) theorem states that a storage service that must be available despite network partitions cannot enforce strong consistency. Similarly, the tradeoff between latency and durability dictates that a low-latency service cannot ensure durability in the presence of data-center-wide failures. This dissertation explores the theoretical and practical limits of storage services that can be safe and live despite the presence of benign and Byzantine faults. On the practical front, we use cloud storage as a deployment model to build Depot, a highly available storage service that addresses the above challenges. Depot minimizes the trust clients have to put in the third-party storage provider. As a result, Depot clients can continue functioning despite benign or Byzantine faults of the cloud servers. Yet, Depot provides stronger availability, durability, and consistency properties than those provided by many existing cloud deployments, without incurring prohibitive performance cost. For example, in contrast to Amazon S3's eventual consistency, Depot provides a variation of causal consistency on each volume, while tolerating Byzantine faults. On the theoretical front, we explore consistency-availability tradeoffs. Tradeoffs between consistency and availability have proved useful for designers in deciding how much to strengthen consistency if high availability is desired, or how much to compromise availability if strong consistency is essential. We explore the limits of such tradeoffs by attempting to answer the question: what are the semantics that can be implemented without compromising availability? In this work, we investigate this question for both fail-stop and Byzantine failure models. An immediate benefit of answering this question is that we can compare and contrast the consistency provided by Depot with that achievable by an optimal implementation. More crucially, this result complements the CAP theorem: while the CAP theorem defines a set of properties that cannot be achieved, this work identifies the limits of properties that can be achieved.
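Depot's protocol is not reproduced here, but the general mechanism commonly used to track causal order between updates, a version vector, can be sketched as follows (an illustrative sketch under generic assumptions, not Depot's actual design):

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative version vector: per node, the number of that node's updates
 *  observed so far. Comparing two vectors tells whether one update causally
 *  precedes the other or whether they are concurrent (e.g., issued on
 *  different sides of a network partition). */
public class VersionVector {
    private final Map<String, Long> clock = new HashMap<>();

    /** Record one local update at the given node. */
    void increment(String node) { clock.merge(node, 1L, Long::sum); }

    /** True iff this vector is <= the other in every component (causally no later). */
    boolean precedesOrEquals(VersionVector other) {
        for (Map.Entry<String, Long> e : clock.entrySet())
            if (e.getValue() > other.clock.getOrDefault(e.getKey(), 0L)) return false;
        return true;
    }

    /** Concurrent updates: neither causally precedes the other. */
    boolean concurrentWith(VersionVector other) {
        return !this.precedesOrEquals(other) && !other.precedesOrEquals(this);
    }

    public static void main(String[] args) {
        VersionVector a = new VersionVector(), b = new VersionVector();
        a.increment("client1");                  // update seen only by client1
        b.increment("client2");                  // independent update by client2
        System.out.println(a.concurrentWith(b)); // true: concurrent, must be reconciled
    }
}
```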
