1 |
Implementace paralelního zpracování dotazů v databázovém systému PostgreSQL / Implementation of parallel query processing in PostgreSQL
Vojtek, Daniel, January 2011 (has links)
Title: Implementation of parallel query processing in PostgreSQL
Author: Bc. Daniel Vojtek
Department: Department of Software Engineering
Supervisor: Mgr. Július Štroffek
Supervisor's e-mail address: julo@stroffek.cz
Abstract: Parallel query processing can help with processing the huge amounts of data stored in database systems. The aim of this diploma thesis was to explore the possibilities and to analyze, design and finally implement parallel query processing in the open source database system PostgreSQL. I used the Master/Worker design pattern, in which a standard PostgreSQL backend process acts as the master. As workers I used processes created by the postmaster. In the thesis I focused on preparing the infrastructure necessary for parallel processing. I defined a new top-level memory context over shared memory, which allows efficient and convenient memory allocations. Then I implemented the creation of new worker processes, based on the master process's requirements. To be able to control these workers I defined controlling structures using state machines. Then I implemented a parallel sort operation and the SQL operator UNION ALL using this infrastructure. The result of this diploma thesis is not only the implementation of the infrastructure and some parallel operations, but also a description of the problems encountered during the...
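As a rough illustration of the master/worker split used for a parallel sort (outside PostgreSQL, in Python rather than C, so every name below is invented for the sketch): a master partitions the input, worker processes sort their partitions independently, and the master merges the sorted runs.

    # Illustrative master/worker parallel sort; a stand-alone sketch, not the
    # thesis's PostgreSQL backend code.
    import heapq
    import random
    from multiprocessing import Pool

    def sort_chunk(chunk):
        # One worker's job: sort its partition independently of the others.
        return sorted(chunk)

    def parallel_sort(values, n_workers=4):
        # "Master": split the input into roughly equal partitions.
        chunk_size = max(1, len(values) // n_workers)
        chunks = [values[i:i + chunk_size]
                  for i in range(0, len(values), chunk_size)]
        # "Workers": separate processes each sort one partition.
        with Pool(n_workers) as pool:
            runs = pool.map(sort_chunk, chunks)
        # "Master": merge the sorted runs into the final ordering.
        return list(heapq.merge(*runs))

    if __name__ == "__main__":
        data = [random.randint(0, 10_000) for _ in range(100_000)]
        assert parallel_sort(data) == sorted(data)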
|
2 |
Online Task Scheduling on Heterogeneous Clusters: An Experimental Study
Rosenvinge, Einar Magnus, January 2004 (has links)
We study the problem of scheduling applications composed of a large number of tasks on heterogeneous clusters. Tasks are identical, independent of each other, and can hence be computed in any order. The goal is to execute all the tasks as quickly as possible. We use the Master-Worker paradigm, where tasks are maintained by the master, which hands out batches of a variable number of tasks to requesting workers. We introduce a new scheduling strategy, the Monitor strategy, and compare it to other strategies suggested in the literature. An image filtering application, known as matched filtering, has been used to compare the different strategies. Our implementation involves data-staging techniques in order to circumvent the possible bottleneck incurred by the master, and multi-threading to prevent possible processor idleness.
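The batch-oriented hand-out described above can be sketched as a demand-driven queue: idle workers request work and the master answers with a batch whose size it chooses per request. The shrinking batch-size rule below is a placeholder assumption for the sketch, not the Monitor strategy itself.

    # Demand-driven master handing out variable-size batches to requesting workers.
    import threading

    def make_master(tasks, n_workers):
        # The master holds the task pool and hands out a batch per request.
        remaining = list(tasks)
        lock = threading.Lock()

        def next_batch():
            # Called by an idle worker; returns a (possibly empty) batch.
            with lock:
                if not remaining:
                    return []
                size = max(1, len(remaining) // (2 * n_workers))  # placeholder rule
                batch = remaining[:size]
                del remaining[:size]
                return batch

        return next_batch

    def worker(next_batch, compute, results):
        # A worker keeps requesting batches until the master has nothing left.
        while True:
            batch = next_batch()
            if not batch:
                return
            for task in batch:
                results.append(compute(task))

    if __name__ == "__main__":
        next_batch = make_master(range(1000), n_workers=4)
        results = []
        threads = [threading.Thread(target=worker,
                                    args=(next_batch, lambda t: t * t, results))
                   for _ in range(4)]
        for th in threads:
            th.start()
        for th in threads:
            th.join()
        print(len(results))  # 1000 results collected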
|
3 |
Effective Resource Management for Master-Worker Applications in Opportunistic Environments
Heymann Pignolo, Elisa, 05 November 2002 (has links)
This work focuses on the use of opportunistic environments, which are characterized by harnessing the idle time of machines to execute user jobs. Managing the concurrent jobs that constitute a parallel application is an integral part of such non-dedicated systems. In these environments the resource manager has a twofold goal: to provide reasonable execution times (users want their jobs finished as soon as possible) and to provide good efficiency, i.e., good resource usage, which is the main goal of a system that aims at high throughput. The development of effective resource management for parallel applications running on opportunistic systems involves a great number of issues. In particular, this work deals with three of them:
- Determining and allocating the number of machines, from the pool belonging to the opportunistic system, needed to execute an application with both a good execution time and good efficiency.
- Scheduling application tasks onto the assigned computational resources.
- Reducing the negative effects on an application when a machine belonging to the non-dedicated environment is reclaimed by its owner and must therefore be released by the task running on it.
Throughout this work, master-worker applications have been considered. In these applications a master sends tasks to workers and collects the results; this process is repeated over a number of cycles until an end condition is reached. To assign the tasks of a master-worker application to the machines of an opportunistic environment, we proposed a dynamic scheduling policy called Random & Average. This policy was first evaluated by simulation, and the results showed that it behaves similarly to policies that require advance information about the execution time of the tasks. We also derived the existence of the ideal interval, the range between the minimum and maximum numbers of machines that execute the application with a good trade-off between execution time and efficiency. We then proposed an algorithm for dynamically adjusting the number of machines used by any master-worker application so that it stays within the ideal interval. This strategy was implemented and evaluated on both homogeneous and heterogeneous environments with a master-worker image-thinning application. In an opportunistic environment, machines can join and leave the computation as they are released or reclaimed by their owners. When a machine is reclaimed, the job running on it must be stopped and must vacate the machine immediately. If this job belongs to a parallel application, the performance of the whole application is negatively affected. We evaluated this impact and, to alleviate it, proposed strategies based on using extra machines and on task replication. These strategies were first evaluated by simulation and then implemented and tested in a real environment.
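The trade-off behind the ideal interval can be made concrete with a deliberately crude model: adding machines keeps shortening the cycle but pays a growing coordination cost, so efficiency decays. The sketch below (all constants and thresholds are invented; the thesis derives the interval from measurements, not from this formula) prints the machine counts that satisfy both an execution-time and an efficiency threshold.

    # Toy cost model behind the "ideal interval": with n machines a cycle costs
    # the per-machine share of the work plus a coordination overhead in n.
    W = 1000.0   # total work per cycle (arbitrary units, invented)
    o = 2.0      # per-machine coordination overhead (invented)

    def exec_time(n):
        return W / n + o * n

    def efficiency(n):
        # Speedup over one machine divided by the number of machines used.
        return exec_time(1) / (n * exec_time(n))

    t_best = min(exec_time(n) for n in range(1, 201))
    interval = [n for n in range(1, 201)
                if exec_time(n) <= 1.25 * t_best and efficiency(n) >= 0.60]
    print(interval[0], interval[-1])   # lower and upper bound of the toy interval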
|
4 |
Development Of A Grid-aware Master Worker Framework For Artificial Evolution
Ketenci, Ahmet, 01 December 2010 (has links) (PDF)
The Genetic Algorithm (GA) has become a very popular tool for various kinds of problems, including optimization problems with wide search spaces, for which grid-search techniques are usually infeasible or ineffective at finding a solution that is good enough. The most computationally intensive component of a GA is the calculation of the goodness (fitness) of candidate solutions. However, since the fitness calculation of one individual does not depend on that of any other, this process can be parallelized easily.
The easiest way to obtain large amounts of computational power is to use a grid. Grids are composed of multiple clusters and can therefore offer many more resources than a single cluster. On the other hand, a grid may not be the easiest environment in which to develop parallel programs, because of the lack of tools and libraries that can be used for communication among processes.
In this work, we introduce a new framework, GridAE, for GA applications. GridAE uses the master-worker model for parallelization and offers a GA library to users. It also abstracts the message-passing process away from users. Moreover, it provides both a command-line interface and a web interface for job management. These properties make the framework usable even for developers with limited parallel programming or grid computing experience. The performance of GridAE is tested with a shape optimization problem, and the results show that the framework is better suited to problems with large populations.
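The parallelization GridAE exploits, farming out independent fitness evaluations from a master, can be sketched with one toy GA generation. The sphere fitness function and all GA parameters below are invented stand-ins for whatever a user would plug into the framework, and local processes stand in for grid workers.

    # One GA generation with master/worker fitness evaluation: the master holds
    # the population, worker processes evaluate fitness independently.
    import random
    from multiprocessing import Pool

    def fitness(individual):
        # Toy objective: minimise the sum of squares (lower is better).
        return sum(x * x for x in individual)

    def next_generation(population, pool, mutation=0.1):
        scores = pool.map(fitness, population)       # parallel fitness evaluation
        ranked = [ind for _, ind in sorted(zip(scores, population))]
        parents = ranked[: len(ranked) // 2]         # truncation selection
        children = []
        while len(children) < len(population):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [x + random.gauss(0, mutation) for x in child]
            children.append(child)
        return children

    if __name__ == "__main__":
        pop = [[random.uniform(-5, 5) for _ in range(10)] for _ in range(40)]
        with Pool(4) as pool:
            for _ in range(20):
                pop = next_generation(pop, pool)
            print(min(pool.map(fitness, pop)))       # best fitness found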
|
5 |
An Analysis of an Interrupt-Driven Implementation of the Master-Worker Model with Application-Specific Coprocessors
Hickman, Joseph, 17 January 2008 (links)
In this thesis, we present a versatile parallel programming model composed of an individual general-purpose processor aided by several application-specific coprocessors. These computing units operate under a simplification of the master-worker model. The user-defined coprocessors may be either homogeneous or heterogeneous. We analyze system performance with regard to system size and task granularity, and we present experimental results to determine the optimal operating conditions. Finally, we consider the suitability of this approach for scientific simulations — specifically for use in agent-based models of biological systems. / Master of Science
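The interplay of system size and task granularity can be illustrated with a back-of-the-envelope model in which every completed task costs the master a fixed interrupt-handling time. All numbers below are invented and contention is ignored; the model only shows why fine-grained tasks stop paying off sooner as coprocessors are added.

    # Master-side interrupt overhead: each task completion occupies the master
    # for h seconds, so it absorbs at most 1/h completions per second no matter
    # how many coprocessors are attached.
    def saturation_point(task_time, h=0.001, n_max=64):
        # Smallest number of coprocessors at which the master becomes the limit.
        for n in range(1, n_max + 1):
            if n / task_time >= 1.0 / h:
                return n
        return None  # the master never saturates within n_max coprocessors

    for task_time in (0.1, 0.01, 0.002):   # coarse, medium and fine task grains
        print(task_time, saturation_point(task_time))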
|
6 |
SCALABLE AND FAULT TOLERANT HIERARCHICAL B&B ALGORITHMS FOR COMPUTATIONAL GRIDS
Bendjoudi, Ahcène, 24 April 2012 (links) (PDF)
Solving combinatorial optimization problems exactly with Branch and Bound (B&B) algorithms requires an exorbitant amount of computing resources. Today, this power is provided by large-scale environments such as computational grids. However, grids raise new challenges: scalability, heterogeneity and fault tolerance. Most B&B algorithms revisited for computational grids are based on the Master-Worker paradigm, which limits their scalability. Moreover, fault tolerance is rarely addressed in these works. In this thesis we propose three main contributions: P2P-B&B, H-B&B and FTH-B&B. P2P-B&B is a framework based on the Master-Worker paradigm that addresses scalability by reducing the frequency of task requests and by allowing direct communication between workers. H-B&B also addresses scalability; unlike the approaches proposed in the literature, H-B&B is fully dynamic and adaptive, i.e., it takes into account the dynamic acquisition of computing resources. FTH-B&B is based on new fault-tolerance mechanisms that build and maintain a balanced hierarchy and minimize redundant work when tasks are saved and restored. The proposed approaches were implemented with the ProActive grid platform and applied to the Flow-Shop scheduling problem. Large-scale experiments carried out on the Grid'5000 testbed demonstrated the efficiency of the proposed approaches.
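One ingredient named above, saving and restoring tasks so that little work is redone after a failure, can be reduced to a minimal single-process sketch: the worker periodically checkpoints its position, and a restarted run resumes from the last checkpoint. The file name and checkpoint format are invented; this is not the FTH-B&B mechanism, only the underlying idea.

    # Minimal checkpoint/restore sketch to limit redundant work after a failure.
    import json, os

    CKPT = "task_0.ckpt"   # invented checkpoint file name for the sketch

    def run_task(items, checkpoint_every=100):
        start, partial = 0, 0
        if os.path.exists(CKPT):               # a restarted worker resumes here
            with open(CKPT) as f:
                state = json.load(f)
            start, partial = state["next"], state["partial"]
        for i in range(start, len(items)):
            partial += items[i] * items[i]      # the "work" done for one item
            if (i + 1) % checkpoint_every == 0:
                with open(CKPT, "w") as f:      # save progress periodically
                    json.dump({"next": i + 1, "partial": partial}, f)
        if os.path.exists(CKPT):
            os.remove(CKPT)                     # task finished: discard checkpoint
        return partial

    if __name__ == "__main__":
        print(run_task(list(range(10_000))))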
|
7 |
Master/worker parallel discrete event simulation
Park, Alfred John, 16 December 2008 (links)
The execution of parallel discrete event simulation across metacomputing infrastructures is examined. A master/worker architecture for parallel discrete event simulation is proposed, providing robust execution under a dynamic set of services with system-level support for fault tolerance, semi-automated client-directed load balancing, portability across heterogeneous machines, and the ability to run codes on idle or time-sharing clients without significant interaction by users. Research questions and challenges associated with the work distribution paradigm, the targeted computational domain, performance metrics, and the intended class of applications are analyzed and discussed. A portable web-services approach to master/worker parallel discrete event simulation is proposed and evaluated, with subsequent optimizations to increase the efficiency of large-scale simulation execution through distributed master service design and intrinsic overhead reduction. New techniques are proposed and examined for addressing the challenges of optimistic parallel discrete event simulation across metacomputing infrastructures, such as rollbacks and message unsending, using an inherently different computation paradigm built on master services and time windows. Results indicate that a master/worker approach utilizing loosely coupled resources is a viable means of high-throughput parallel discrete event simulation, by enhancing existing computational capacity or providing alternate execution capability for less time-critical codes.
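One of the ideas mentioned above, letting a master service bound how far execution may run ahead by releasing events per time window, can be caricatured as follows. The sketch is deliberately conservative and sequential: it omits rollbacks, message unsending and the web-services machinery, and every name in it is invented.

    # Caricature of time-window dispatch: the "master" releases only the pending
    # events whose timestamps fall in [now, now + window), then advances the window.
    import heapq

    def run(events, window=10.0):
        pending = list(events)            # (timestamp, payload) pairs
        heapq.heapify(pending)
        now, processed = 0.0, []
        while pending:
            window_end = now + window
            batch = []
            while pending and pending[0][0] < window_end:
                batch.append(heapq.heappop(pending))   # released to the workers
            if not batch:
                now = pending[0][0]                    # empty window: jump ahead
                continue
            for ts, payload in batch:                  # "workers" handle the batch
                processed.append((ts, payload))
            now = window_end
        return processed

    if __name__ == "__main__":
        evs = [(t * 3.7 % 50, f"e{t}") for t in range(20)]
        out = run(evs)
        assert [ts for ts, _ in out] == sorted(ts for ts, _ in out)
        print(len(out))   # all 20 events processed in timestamp order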
|
8 |
Méthodes hybrides parallèles pour la résolution de problèmes d'optimisation combinatoire : application au clustering sous contraintes / Parallel hybrid methods for solving combinatorial optimization problems: application to clustering under constraints
Ouali, Abdelkader, 03 July 2017 (links)
Combinatorial optimization problems have become the target of much scientific research because of their importance in solving academic problems and real problems encountered in engineering and industry. Solving these problems with exact methods is often intractable because of the exorbitant processing times these methods would require to reach the optimal solution(s). In this thesis we were interested in both the algorithmic context of solving combinatorial problems and the modeling context of these problems. At the algorithmic level, we explored hybrid methods, which excel in their ability to make exact and approximate methods cooperate in order to rapidly produce high-quality solutions. At the modeling level, we worked on the specification and exact resolution of complex pattern-set mining problems, in particular by studying scaling issues on large databases.
On the one hand, we proposed a first parallelization of the DGVNS algorithm, called CPDGVNS, which explores in parallel the different clusters of the tree decomposition, sharing the best overall solution on a master-worker model. Two other strategies, called RADGVNS and RSDGVNS, were proposed that improve the frequency with which intermediate solutions are exchanged between the different processes. Experiments carried out on hard combinatorial problems show the suitability and effectiveness of our parallel methods. On the other hand, we proposed a hybrid approach combining techniques from both Integer Linear Programming (ILP) and pattern mining. Our approach is complete and takes advantage of the general ILP framework (which provides a high level of flexibility and expressiveness) and of specialized heuristics for data mining (to improve computing times). In addition to the general pattern-set mining framework, two problems were studied in particular: conceptual clustering and the tiling problem. The experiments carried out showed the contribution of our proposal with respect to constraint-based approaches and specialized heuristics.
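The solution sharing in CPDGVNS can be caricatured with a round-based toy: each worker runs a local search restricted to its own block of variables (standing in for a cluster of the tree decomposition), starting from the master's incumbent, and after every round the master keeps the best solution found. The objective function, the blocks and the round structure below are invented for illustration and do not reproduce the DGVNS decomposition.

    # Round-based master/worker solution sharing over per-worker "clusters".
    import random
    from concurrent.futures import ProcessPoolExecutor

    random.seed(0)                                   # keep data identical in workers
    A = [random.randint(0, 9) for _ in range(40)]    # toy problem data

    def cost(x):
        # Toy objective with a coupling term between neighbouring variables.
        return sum((xi - ai) ** 2 for xi, ai in zip(x, A)) + \
               sum((x[i] - x[i + 1]) ** 2 for i in range(len(x) - 1))

    def local_search(args):
        x, block, steps = args
        x = list(x)
        best = cost(x)
        for _ in range(steps):
            i = random.choice(block)                 # move only inside the block
            old = x[i]
            x[i] = random.randint(0, 9)
            c = cost(x)
            if c < best:
                best = c
            else:
                x[i] = old                           # reject worsening moves
        return best, x

    if __name__ == "__main__":
        blocks = [list(range(i, i + 10)) for i in range(0, 40, 10)]
        incumbent = [0] * 40
        with ProcessPoolExecutor(max_workers=4) as ex:
            for _ in range(5):                       # rounds of cooperation
                results = list(ex.map(local_search,
                                      [(incumbent, b, 500) for b in blocks]))
                best_cost, incumbent = min(results)  # master keeps the best
        print(cost(incumbent))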
|