51

Self-optimization of infrastructure and platform resources in cloud computing / Auto-optimisation des ressources de l’infrastructure et de la plate-forme dans le Cloud Computing

Zhang, Bo 12 December 2016 (has links)
Elasticity is considered an important solution for handling performance issues in scalable distributed systems. However, most elasticity research concerns only the automatic provisioning and de-provisioning of resources, and ignores the utilization of the resources once provisioned. This can lead to resource leaks while redundant resources are provisioned, causing unnecessary expenditure. To avoid such leaks and redundancy, my research therefore focuses on maximizing resource utilization through self-management of resources. In this thesis, addressing the diverse problems of resource usage and allocation in different layers, I propose two resource management approaches, corresponding to the infrastructure and platform layers respectively. To overcome infrastructure limitations, I propose CloudGC, a middleware service that aims to free occupied resources by recycling idle VMs. In the platform layer, a self-balancing approach is introduced to adjust the Hadoop configuration at runtime, thereby avoiding memory loss and dynamically optimizing Hadoop performance. Finally, this thesis addresses rapid service deployment, which is also an elasticity issue. A new tool, named "hadoop-benchmark", applies Docker to accelerate the installation of Hadoop clusters and provides a set of Docker images containing several well-known Hadoop benchmarks. The assessments show that these approaches and this tool achieve resource management and self-optimization across the various layers, and thus facilitate the elasticity of infrastructure and platform in scalable platforms such as Cloud computing.
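As an illustration of the infrastructure-layer idea, the following is a minimal sketch of the kind of idle-VM recycling loop the CloudGC description suggests. The `cloud` object and its methods (`list_vms`, `cpu_usage`, `snapshot`, `shutdown`) are hypothetical stand-ins for a real IaaS API, and the thresholds are arbitrary assumptions, not values from the thesis.

```python
import time

IDLE_CPU_THRESHOLD = 0.05    # below 5% average CPU counts as idle (assumed value)
IDLE_WINDOW_SECONDS = 600    # a VM must stay idle this long before being recycled

def recycle_idle_vms(cloud, now=None):
    """Snapshot and shut down VMs that have been idle for a full window.

    `cloud` is a hypothetical IaaS handle exposing list_vms(), cpu_usage(vm),
    snapshot(vm) and shutdown(vm); each vm is a mutable dict of metadata.
    """
    now = now if now is not None else time.time()
    for vm in cloud.list_vms():
        if cloud.cpu_usage(vm) < IDLE_CPU_THRESHOLD:
            idle_since = vm.setdefault("idle_since", now)
            if now - idle_since >= IDLE_WINDOW_SECONDS:
                cloud.snapshot(vm)   # keep state so the VM can be restored on demand
                cloud.shutdown(vm)   # free the occupied infrastructure resources
        else:
            vm.pop("idle_since", None)  # activity observed: reset the idle clock
```

Run periodically, such a loop reclaims capacity from VMs that merely tick over, while the snapshots keep them recoverable on demand.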
52

Non-monotonic trust management for distributed systems

Dong, Changyu January 2009 (has links)
No description available.
53

Control of large distributed systems using games with pure strategy Nash equilibria

Chapman, Archie C. January 2009 (has links)
Control mechanisms for optimisation in large distributed systems cannot be constructed using traditional methods of control, because such systems are typically characterised by distributed information and costly and/or noisy communication. Furthermore, noisy observations and dynamism are also inherent to these systems, so their control mechanisms need to be flexible, agile and robust in the face of these characteristics. In such settings, a good control mechanism should satisfy four design requirements: (i) it should produce high-quality solutions, (ii) it should be robust and flexible in the face of additions, removals and failures of components, (iii) it should operate by making limited use of communication, and (iv) its operation should be computationally feasible. Against this background, in order to satisfy these requirements, in this thesis we adopt a design approach based on dividing control over the system across a team of self-interested agents. Such multi-agent systems (MAS) are naturally distributed (matching the application domains in question), and by pursuing their own private goals, the agents can collectively implement robust, flexible and scalable control mechanisms. In more detail, the design approach we adopt is (i) to use games with pure strategy Nash equilibria as a framework or template for constructing the agents' utility functions, such that good solutions to the optimisation problem arise at the pure strategy Nash equilibria of the game, and (ii) to derive distributed techniques for solving the games for their Nash equilibria. The specific problems we tackle fall into four main topics. First, we investigate a class of local algorithms for distributed constraint optimisation problems (DCOPs). We introduce a unifying analytical framework for studying such algorithms, and develop a parameterisation of the algorithm design space, which represents a mapping from the algorithms' components to their performance according to each of our design requirements. Second, we develop a game-theoretic control mechanism for distributed dynamic task allocation and scheduling problems. The model in question is an expansion of DCOPs to encompass dynamic problems, and the control mechanism we derive builds on the insights from our first topic to address our four design requirements. Third, we elaborate a general class of problems including DCOPs with noisy rewards and state observations, which are realistic traits of great concern in real-world problems, and derive control mechanisms for these environments. These control mechanisms allow the agents either to learn their reward functions or to decide when to make observations of the world's state and/or communicate their beliefs over the state of the world, in such a manner that they perform well according to our design requirements. Fourth, we derive an optimal algorithm for computing and optimising over pure strategy Nash equilibria in games with sparse interaction structure. By exploiting the structure present in many multi-agent interactions, this distributed algorithm can efficiently compute equilibria that optimise various criteria, thus reducing the computational burden on any one agent and operating with less communication than an equivalent centralised algorithm. For each of these topics, the control mechanisms we derive are developed so that they perform well according to all four of our design requirements.
In sum, by making the above contributions to these specific topics, we demonstrate that the general approach of using games with pure strategy Nash equilibria as a template for designing MAS produces good control mechanisms for large distributed systems.
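To make the template concrete, here is a minimal sketch (not the thesis's algorithms) of best-response dynamics, the simplest local procedure for reaching a pure strategy Nash equilibrium. Convergence is guaranteed for finite potential games, a class that covers games constructed so that individual utilities align with the global objective, though not for games in general.

```python
def best_response_dynamics(n_agents, n_actions, utility, max_rounds=1000):
    """Iterate best responses until no agent can improve unilaterally.

    utility(i, joint) is agent i's payoff for the joint action tuple `joint`.
    Returns a pure strategy Nash equilibrium, or None on non-convergence.
    """
    joint = [0] * n_agents                      # arbitrary initial joint action
    for _ in range(max_rounds):
        stable = True
        for i in range(n_agents):
            options = (tuple(joint[:i] + [a] + joint[i + 1:]) for a in range(n_actions))
            best = max(options, key=lambda j: utility(i, j))
            if best[i] != joint[i]:
                joint[i] = best[i]              # agent i deviates to its best response
                stable = False
        if stable:                              # every action is a best response...
            return tuple(joint)                 # ...so this is a pure Nash equilibrium
    return None

# Example: a two-agent coordination game where matching actions pays 1.
eq = best_response_dynamics(2, 2, lambda i, j: 1.0 if j[0] == j[1] else 0.0)
```

Each agent only needs to evaluate its own utility over its own action set, which is what makes this style of procedure attractive for the distributed settings described above.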
54

A network aware adaptable application using a novel content scheduling and delivery scheme

Abdul Rahman, Abdul Muin January 2006 (has links)
The aim of this research is to investigate techniques that allow networked applications to adapt to the network conditions between end nodes in order to maintain a reasonable Quality of Service, and to design, develop and test techniques for achieving such adaptability through a novel content scheduling and delivery scheme. To achieve this adaptation, information about network conditions, both static and dynamic, must first be gathered. Since substantial research has already been conducted in this area, the task was to review existing network measurement techniques and adopt a suitable one for the subsequent research. The research is therefore concerned more with how to realize these techniques in practical terms and make the network parameters accessible to the applications that will adapt based on them. A network measurement service utilizing a standard measurement tool was proposed, developed, tested and subsequently used throughout the project. In this way, the implementation helped in understanding the impact of network measurement on the overall performance of the system and which network metrics are essential to help an application make better adaptation decisions. The project then developed and showcased an adaptable network application using a novel scheme in which content was restructured and its delivery rescheduled, taking account of the available bandwidth, the content's structure, size and order of importance, and user-specified deadlines, making use of the network measurement service. In doing so, the project sought to show how and when adaptation can be applied, and its potential benefits or otherwise compared with conventional applications based on best-effort systems. The project proved that, by adapting according to this scheme in the event of poor network performance, user-specified deadlines can be satisfied by reducing the load: content of high importance is delivered first, while content of lower importance is delivered during idle time or the user's reading time, or dropped if the deadline cannot be met. In most cases, content of high importance is delivered faster in the adaptable system than in a conventional best-effort system.
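The scheme's core decision can be sketched as follows: given the measured bandwidth and a user deadline, deliver the most important content first and defer whatever cannot fit. The item format and the byte-budget heuristic below are assumptions for illustration, not the thesis's exact scheduler.

```python
def schedule_content(items, bandwidth_bps, deadline_s):
    """Split content into what to send before the deadline and what to defer.

    `items` is a list of (name, size_bytes, importance) tuples; bandwidth_bps
    is the measured available bandwidth in bits per second.
    """
    budget = bandwidth_bps * deadline_s / 8                     # bytes deliverable in time
    ranked = sorted(items, key=lambda it: it[2], reverse=True)  # most important first
    send_now, deferred, used = [], [], 0
    for name, size, importance in ranked:
        if used + size <= budget:
            send_now.append(name)
            used += size
        else:
            deferred.append(name)   # delivered during idle/reading time, or dropped
    return send_now, deferred

# Example: on a 1 Mbit/s link with a 10 s deadline, only ~1.25 MB fits,
# so the large archive is deferred while the important summary goes first.
now, later = schedule_content(
    [("summary.html", 200_000, 3), ("figures.zip", 2_000_000, 2), ("logo.png", 50_000, 1)],
    1_000_000, 10)
```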
55

Autonomous grid scheduling using probabilistic job runtime scheduling

Lazarević, Aleksandar January 2008 (has links)
Computational Grids are evolving into a global, service-oriented architecture: a universal platform for delivering future computational services to a range of applications of varying complexity and resource requirements. The thesis focuses on developing a new scheduling model for general-purpose utility clusters based on the concept of user-requested job completion deadlines. In such a system, a user would be able to request that each job finish by a certain deadline, and possibly at a certain monetary cost. Implementing deadline scheduling depends on the ability to predict the execution time of each queued job, and on an adaptive scheduling algorithm able to use those predictions to maximise deadline adherence. The thesis proposes novel solutions to these two problems and documents their implementation in a largely autonomous and self-managing way. The starting point of the work is an extensive analysis of a representative Grid workload, revealing consistent workflow patterns, usage cycles and correlations between the execution times of jobs and the properties commonly collected by the Grid middleware for accounting purposes. An automated approach is proposed to identify these dependencies and use them to partition the highly variable workload into subsets of more consistent and predictable behaviour. A range of time-series forecasting models, applied in this context for the first time, were used to model job execution times as a function of their historical behaviour and associated properties. Based on the resulting predictions of job runtimes, a novel scheduling algorithm estimates the latest job start time necessary to meet the requested deadline and sorts the queue accordingly to minimise the amount of deadline overrun. The proposed approach was tested using an actual job trace collected from a production Grid facility. The best-performing execution time predictor (the auto-regressive moving average method), coupled with workload partitioning based on three simultaneous job properties, returned a median absolute percentage error centroid of only 4.75%. This level of prediction accuracy enabled the proposed deadline scheduling method to reduce the average deadline overrun time ten-fold compared to the benchmark batch scheduler. Overall, the thesis demonstrates that deadline scheduling of computational jobs on the Grid is achievable using statistical forecasting of job execution times based on historical information. The proposed approach is easily implementable, substantially self-managing and better matched to the human workflow, making it well suited for implementation in the utility Grids of the future.
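The queue-ordering step can be sketched independently of the forecasting model: compute each job's latest feasible start time as deadline minus predicted runtime, and sort on it. Here `predict_runtime` stands in for the thesis's ARMA-based forecaster, and the job fields are assumptions for illustration.

```python
import time

def order_queue(jobs, predict_runtime, now=None):
    """Sort queued jobs by latest start time = deadline - predicted runtime.

    `jobs` is a list of dicts with 'id' and 'deadline' (epoch seconds);
    `predict_runtime(job)` returns a runtime estimate in seconds. Jobs whose
    latest start time has already passed are flagged as certain to overrun.
    """
    now = now if now is not None else time.time()

    def latest_start(job):
        return job["deadline"] - predict_runtime(job)

    ordered = sorted(jobs, key=latest_start)       # least slack runs first
    at_risk = [j["id"] for j in ordered if latest_start(j) < now]
    return ordered, at_risk
```

Ordering by latest start time is the classical least-laxity idea; the contribution described in the abstract lies in making the runtime predictions accurate enough (median error under 5%) for this ordering to pay off.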
56

Κατηγοριοποίηση επικοινωνιακών και υπολογιστικών εργασιών σε GRID συστήματα / Classification of communication and computational tasks in GRID systems

Οικονομάκος, Μιχαήλ 27 February 2009 (has links)
In this diploma thesis we deal with an innovative technology, that of computational grids. Specifically, we study the behaviour of such systems with respect to how the various jobs submitted to them are executed. First, after a general overview of what grids are, how they are implemented and what needs they serve, we describe a particular computational grid installed at the University of Patras, on the premises of the Department of Computer Engineering and Informatics, which since the beginning of 2006 has been successfully integrated into a pan-European grid infrastructure within the EGEE project (Enabling Grids for E-science in Europe). In the remainder of our study, we deal with the classification and modelling of communication and computational jobs at the Patras node. In more detail, after describing the components that make up the local node and developing the methodology for collecting and statistically processing its data (in the form of log files), we focus on the procedure followed in the international literature for classifying such jobs. We then apply the above to our own case and, finally, compare our results with methods proposed by other researchers for classifying such jobs. The comparison leads us to the safe conclusion that the model we propose for job classification is more efficient and simpler than those proposed to date. This study contributes to a fuller understanding of the behaviour of a grid system, which is particularly important when designing scheduling algorithms for such systems. The ultimate goal remains fair treatment of users and maximum efficiency of the whole infrastructure.
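A heavily simplified sketch of the log-driven classification step: read job records from the node's accounting logs and split them into communication-bound and computation-bound classes. The CSV columns and the bytes-per-CPU-second threshold are hypothetical, since the abstract does not specify the model's exact features.

```python
import csv

def classify_jobs(log_path, bytes_per_cpu_second=1_000_000):
    """Split accounting-log jobs into communication- vs computation-bound.

    Assumes a CSV log with 'job_id', 'cpu_seconds' and 'net_bytes' columns;
    a job is called communication-bound when it moves more than the threshold
    number of bytes per second of CPU time (an illustrative heuristic).
    """
    comm, comp = [], []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            ratio = float(row["net_bytes"]) / max(float(row["cpu_seconds"]), 1e-9)
            (comm if ratio > bytes_per_cpu_second else comp).append(row["job_id"])
    return comm, comp
```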
57

Ordonnancement multi-critère sur Clouds / Multi-criteria scheduling on Clouds

Kessaci, Yacine 28 November 2013 (has links)
Cloud computing has emerged during the last decade and is now widely adopted in several IT areas. It offers resources, market-oriented or not, as services that can be consumed in a ubiquitous, flexible and transparent way. In this PhD thesis, we deal with scheduling, one of the major cloud computing issues. According to the targeted cloud configuration, we have identified three levels of scheduling: service level, task level and virtual machine level. We revisit the problem modeling, the design and the implementation of multi-objective metaheuristics for each scheduling level of the cloud. The proposed metaheuristic-based schedulers address different criteria including energy consumption, greenhouse gas emissions, profit and QoS (cost and response time). We prove their adaptability to cloud constraints by integrating them as part of the OpenNebula cloud manager. Moreover, our schedulers have been extensively experimented with using realistic cloud configurations on Grid'5000, considered as an infrastructure as a service (IaaS), and concrete scenarios based on Amazon EC2 instances and prices. The reported results show that our proposed methods outperform existing scheduling approaches on all the previously cited criteria.
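Whatever the metaheuristic, multi-objective selection reduces to Pareto dominance over the criteria vectors. Below is a minimal sketch of that step, assuming all objectives are expressed as minimisations (profit folded in as its negative); the tuple layout is an assumption for illustration, not the thesis's encoding.

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (all minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Filter (schedule, objectives) pairs down to the non-dominated set.

    Objectives might be (energy, co2_emissions, -profit, cost, response_time).
    Note dominates(v, v) is False, so each candidate survives self-comparison.
    """
    return [(s, obj) for s, obj in candidates
            if not any(dominates(other, obj) for _, other in candidates)]
```

A multi-objective metaheuristic then evolves candidate schedules while repeatedly applying this filter, leaving the operator a front of trade-offs rather than a single score.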
58

Parameterised verification of randomised distributed systems using state-based models

Graham, Douglas January 2008 (has links)
Model checking is a powerful technique for the verification of distributed systems but is limited to verifying systems with a fixed number of processes. The verification of a system for an arbitrary number of processes is known as the parameterised model checking problem and is, in general, undecidable. Parameterised model checking has been studied in depth for non-probabilistic distributed systems. We extend some of this work in order to tackle the parameterised model checking problem for distributed protocols that exhibit probabilistic behaviour, a problem that has not been widely addressed to date. In particular, we consider the application of network invariants and explicit induction to the parameterised verification of state-based models of randomised distributed systems. We demonstrate the use of network invariants by constructing invariant models for non-probabilistic and probabilistic forms of a simple counter token ring protocol. We show that proving properties of the invariants equates to proving properties of the token ring protocol for any number of processes. The use of induction is considered for the verification of a class of randomised distributed systems. These systems, termed degenerative, have the property that a model of a system with given communication graph eventually behaves like a model of a system with a reduced graph, where reduction is by removal of a set of nodes. We distinguish between deterministically, probabilistically and semi-degenerative systems, according to the manner in which a system degenerates. For the former two classes we describe induction schemas for reasoning about models of these systems over arbitrary communication graphs. We show that certain properties hold for models of such systems with any graph if they hold for all models of a system with some base graph and demonstrate this via case studies: two randomised leader election protocols. We illustrate how induction can also be employed to prove properties of semi-degenerative systems by considering a simple gossip protocol.
59

Combining meta information management and reflection in an architecture for configurable and reconfigurable middleware

Costa, Fabio Moreira January 2001 (has links)
No description available.
60

Service-oriented grids and problem solving environments

Fairman, Matthew J. January 2004 (has links)
The Internet's continued rapid growth is creating an untapped environment containing a large quantity of highly capable computing resources suitable for exploitation in existing capacity-constrained and new, innovative capability-driven distributed applications. The Grid is a new computing model that has emerged to harness these resources in a manner that fits the problem-solving process needs of the computational engineering design community. That community's unique requirements have created specific challenges for Grid technologies: to bring interoperability, stability, scalability and flexibility, in addition to transparent integration and generic access to disparate computing resources within and across institutional boundaries. The emergence of maturing, open-standards-based service-oriented (SO) technologies has fulfilled the fundamental requirement of interoperability, leaves a flexible framework onto which sophisticated system architectures may be built, and provides a suitable base for the development of future Grid technologies. The work presented in this thesis is motivated by the desire to identify, understand and resolve important challenges involved in constructing Grid-enabled Problem Solving Environments (PSEs) using SO technologies. The work explains why such technologies are appropriate for Grid computing and successfully demonstrates their application and benefits in the scenarios of the Computational Micromagnetics and Grid-enabled Engineering Optimisation and Design Search (Geodise) systems. The experience gained through this work can also serve as a reference for future applications of Grid computing in other areas.
