About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

Optimisation multicritère sous incertitudes : un algorithme de descente stochastique / Multiobjective optimization under uncertainty : a stochastic descent algorithm

Mercier, Quentin 10 October 2018 (has links)
This thesis deals with unconstrained multiobjective optimization when the objectives are written as expectations of random functions. The randomness is modelled through random variables and is assumed not to affect the optimization variables of the problem. A descent algorithm is proposed that yields the Pareto solutions without having to estimate the expectations. Using results from convex analysis, it is possible to construct a common descent vector, that is, a descent vector for all the objectives simultaneously, for a given draw of the random variables. An iterative sequence is then built by descending along this common descent vector computed at the current point, for a single independent draw of the random variables at each step. This construction avoids the costly estimation of the expectations at every iteration of the algorithm. It is possible to prove the mean-square and almost-sure convergence of the sequence towards Pareto solutions of the problem, as well as a convergence rate result when the step-size sequence is well chosen. After proposing several numerical enhancements of the algorithm, it is tested on multiple test cases against two classical algorithms from the literature. The results of the three algorithms are compared using measures devised for multiobjective optimization and analysed through performance profiles. Methods are then proposed to handle two types of constraint and are illustrated on mechanical structure optimization problems. The first method penalises the objective functions using exact penalty functions when the constraint is deterministic. When a constraint is expressed as a probability, it is replaced by an additional objective: the probability is reformulated as the expectation of an indicator function, and the resulting problem is solved with the algorithm proposed in the thesis without estimating the probability during the optimization process.
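The common descent vector at the heart of this algorithm can be illustrated for two objectives. The sketch below is a toy illustration under our own assumptions, not the thesis's code: for two gradients, the minimum-norm point of their convex hull has a closed form, and the test problem with its Pareto set [-1, 1] is invented for the example.

```python
import random

def common_descent_direction(g1, g2):
    # Min-norm point of the segment [g1, g2]: the t in [0, 1] minimising
    # (t*g1 + (1-t)*g2)**2 has a closed form for two scalar gradients.
    denom = (g1 - g2) ** 2
    t = 0.5 if denom == 0 else min(1.0, max(0.0, (g2 - g1) * g2 / denom))
    return t * g1 + (1 - t) * g2

def smgd(stoch_grads, x0, n_iter=20000, step0=0.5, seed=1):
    # One independent draw of the random variable per iteration, and a
    # Robbins-Monro step size; no expectation is ever estimated.
    rng = random.Random(seed)
    x = x0
    for k in range(1, n_iter + 1):
        w = rng.gauss(0.0, 1.0)                  # single draw, shared by both objectives
        g1, g2 = stoch_grads(x, w)
        x -= (step0 / k) * common_descent_direction(g1, g2)
    return x

# Toy problem: f1 = E[(x - 1 + W)^2], f2 = E[(x + 1 + W)^2] with W ~ N(0, 1);
# the Pareto set of the expected objectives is the interval [-1, 1].
x_star = smgd(lambda x, w: (2 * (x - 1 + w), 2 * (x + 1 + w)), x0=2.0)
```

For more than two objectives, the minimum-norm element of the convex hull of the gradients no longer has a closed form and must be found by solving a small quadratic program at each iteration.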
172

An Exploration Of Heterogeneous Networks On Chip

Grimm, Allen Gary 01 January 2011 (has links)
As the number of cores on a single chip continues to grow, communication increasingly becomes the bottleneck to performance. Networks on Chip (NoC) is an interconnection paradigm showing promise to allow system size to increase while maintaining acceptable performance. One of the challenges of this paradigm is constructing the network of inter-core connections. Using the traditional wire interconnect as long-range links is proving insufficient due to the increase in relative delay as miniaturization progresses, while novel link types are capable of delivering single-hop long-range communication. We investigate the potential benefits of constructing networks with many link types applied to heterogeneous NoCs and hypothesize that a network with many link types available can achieve higher performance at a given cost than its homogeneous counterpart. To investigate NoCs with heterogeneous links, a multiobjective evolutionary algorithm is given a heterogeneous set of links and optimizes the number and placement of those links in an NoC using the objectives of cost, throughput, and energy as a representative set of an NoC's quality. The types of links used and their topology are explored as a consequence of the properties of the available links and the preferences set on the objectives. As the platform of experimentation, the Complex Network Evolutionary Algorithm (CNEA) and the associated Complex Network Framework (CNF) are developed. CNEA is a multiobjective evolutionary algorithm built on the ParadisEO framework to facilitate the construction of optimized networks. CNF is designed and used to model and evaluate networks according to: the cost of a given topology; performance in terms of throughput and energy consumption; and graph-theoretic metrics including average distance and degree, length, and link distributions.
It is shown that optimizing complex networks for cost as a function of total link length and average distance creates a power-law link-length distribution. This offers a way to decrease the average distance of a network for a given cost when compared to random networks or the standard mesh network. We then explore the use of several types of constrained-length links in the same optimization problem and find that, when given access to all link types, we obtain networks that have the same or smaller average distance for a given cost than any network produced with access to only one link type. We then introduce traffic on the networks with an interconnect-based, packet-level, shortest-path-routed traffic model, and find that heterogeneous networks can achieve a throughput as good as or better than their homogeneous counterparts using the same total link length. Finally, these results are confirmed by augmenting a wire-based mesh network with non-traditional link types and finding significant increases in the overall performance of that network.
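The cost/average-distance trade-off described above can be made concrete on a small grid. This is a hedged sketch, not CNF code: hop-count BFS for average distance, Manhattan wire length for cost, and the single long-range "express" link are all illustrative assumptions.

```python
from collections import deque
from itertools import product

def average_distance(nodes, edges):
    # Mean shortest-path hop count over all reachable node pairs (BFS per source).
    adj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    total = pairs = 0
    for s in nodes:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def link_cost(edges):
    # Cost modelled as total Manhattan wire length on the grid.
    return sum(abs(a[0] - b[0]) + abs(a[1] - b[1]) for a, b in edges)

# 4x4 mesh of cores, then the same mesh augmented with one long-range link.
nodes = list(product(range(4), range(4)))
mesh = [((x, y), (x + 1, y)) for x, y in product(range(3), range(4))] + \
       [((x, y), (x, y + 1)) for x, y in product(range(4), range(3))]
augmented = mesh + [((0, 0), (3, 3))]
```

The long link buys a smaller average distance at a higher link cost, which is exactly the kind of trade-off the evolutionary search navigates.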
173

Coordinated Regional and City Planning Using a Genetic Algorithm

Lowry, Michael B. 22 June 2004 (has links) (PDF)
Improved methods of planning are needed to deal with today's issues of traffic congestion, sprawl, and loss of greenspace. Past research and recent legislation call for new methods that consider a regional perspective. Regional planning faces two difficult questions:

1. Is it possible to achieve regional goals without infringing upon the local autonomy of city planners?
2. Is it possible to objectively analyze the thousands, even millions, of land use and transportation plans to find the best design?

Metropolitan regions across the country have made great efforts to answer the first question. Unfortunately, effective methods for harmonizing the goals of regional and city planners have not been developed. Likewise, efforts have been made to introduce objectivity into the planning process; however, current methods remain subjective because there is no way to efficiently analyze the millions of alternative plans for objective decision-making. This thesis presents a new approach to regional planning that provides an affirmative answer to both questions. The first question is answered through a unique problem formulation and a corresponding three-stage process that compels coordination between regional and city planners. Regional goals are achieved because they are cast as objectives and constraints in stage one; local autonomy is preserved because some decisions are left to the city planners in the second stage; and the third stage allows for negotiation between the regional and city planners. The second question is answered through the use of a genetic algorithm, which provides the means to objectively consider millions of plans and find the best ones. The new approach is demonstrated on the main metropolitan region of Utah and a local city center within the region. The results from the case study provide valuable lessons concerning land use and transportation planning that can be applied to other regions experiencing rapid growth.
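The mechanism of casting a regional goal as a stage-one constraint inside a genetic algorithm can be sketched as follows. This is a toy illustration only, not the thesis's formulation: the bitstring land-use encoding, parcel benefit values, development cap, and death penalty are all invented for the example.

```python
import random

def fitness(plan, values, cap):
    # Regional goal cast as a stage-one constraint: at most `cap` parcels
    # developed; a death penalty keeps infeasible plans from reproducing.
    if sum(plan) > cap:
        return -1
    return sum(v for v, bit in zip(values, plan) if bit)

def genetic_search(values, cap, pop_size=40, gens=60, seed=7):
    rng = random.Random(seed)
    n = len(values)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: fitness(p, values, cap), reverse=True)
        survivors = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.2:                # occasional bit-flip mutation
                i = rng.randrange(n)
                child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda p: fitness(p, values, cap))

# Ten candidate parcels with benefit values 0..9; develop at most three.
best = genetic_search(list(range(10)), cap=3)
```

A real planning chromosome would encode land-use types and transportation choices per zone rather than a develop/preserve bit, but the selection-crossover-mutation loop is the same.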
174

A Flexible Decision Support System for Steel Hot Rolling Mill Scheduling

Cowling, Peter I. January 2003 (has links)
A steel hot rolling mill subjects steel slabs to high temperatures and pressures in order to form steel coils. We describe the scheduling problem for a steel hot rolling mill and detail the operation of a commercial decision support system which provides semi-automatic schedules, comparing its operation with existing manual planning procedures. This commercial system is currently in use in several steel mills worldwide. The system features a very detailed multiobjective model of the steel hot rolling process, solved using a variety of bespoke local and tabu search heuristics; we describe both this model and the heuristics used to solve it. The production environment is highly unstable, with frequent unforeseen events interrupting planned production, and we describe how the scheduling system's models, algorithms, and interfaces have been developed to handle this instability. We consider in particular the impact on existing planning and production systems and the qualitative improvements which result from the system's implementation.
175

Advances in aircraft design: multiobjective optimization and a markup language

Deshpande, Shubhangi Govind 23 January 2014 (has links)
Today's modern aerospace systems exhibit strong interdisciplinary coupling and require a multidisciplinary, collaborative approach. Analysis methods that were once considered feasible only for advanced and detailed design are now available and even practical at the conceptual design stage. This changing philosophy for conducting conceptual design poses additional challenges beyond those encountered in low-fidelity aircraft design. This thesis takes some steps towards bridging the gaps in existing technologies and advancing the state of the art in aircraft design. The first part of the thesis proposes a new Pareto front approximation method for multiobjective optimization problems. The method employs a hybrid optimization approach using two derivative-free direct search techniques, and is intended for solving blackbox simulation-based multiobjective optimization problems with possibly nonsmooth functions, where the analytical form of the objectives is not known and/or the evaluation of the objective function(s) is very expensive (very common in multidisciplinary design optimization). A new adaptive weighting scheme is proposed to convert a multiobjective optimization problem into a single-objective optimization problem. Results show that the method achieves an arbitrarily close approximation to the Pareto front with a good collection of well-distributed nondominated points. The second part deals with the interdisciplinary data communication issues involved in a collaborative multidisciplinary aircraft design environment. Efficient transfer, sharing, and manipulation of design and analysis data in a collaborative environment demand a formal structured representation of the data. XML, a W3C recommendation, is one such standard, bringing a number of powerful capabilities that alleviate interoperability issues.
A compact, generic, and comprehensive XML schema for an aircraft design markup language (ADML) is proposed here to provide a common language for data communication, and to improve efficiency and productivity within a multidisciplinary, collaborative environment. An important feature of the proposed schema is its very expressive and efficient low-level schemata. As a proof of concept, the schema is used to encode an entire Convair B-58. As the complexity of the models and the number of disciplines increase, the reduction in effort to exchange data models and analysis results in ADML also increases. / Ph. D.
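The two ingredients named above, a derivative-free direct search and a weight-based scalarization, can be sketched minimally. This is a stand-in under stated assumptions: compass search substitutes for the thesis's hybrid direct search pair, and a uniform weight sweep substitutes for its adaptive weighting scheme.

```python
def compass_search(f, x0, step=1.0, tol=1e-6):
    # Derivative-free direct search: poll the 2n coordinate directions,
    # halve the step when no poll point improves the incumbent.
    x, fx, n = list(x0), f(x0), len(x0)
    while step > tol:
        improved = False
        for i in range(n):
            for s in (step, -step):
                y = x[:]
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step /= 2
    return x

def weighted_front(objectives, x0, n_weights=11):
    # Scalarise with a sweep of convex weights and collect the optima.
    front = []
    for k in range(n_weights):
        w = k / (n_weights - 1)
        scalar = lambda x, w=w: w * objectives[0](x) + (1 - w) * objectives[1](x)
        xmin = compass_search(scalar, x0)
        front.append(tuple(g(xmin) for g in objectives))
    return front

# Convex toy pair: sqrt(f1) + sqrt(f2) = 2 along the whole Pareto front.
f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2
f2 = lambda x: (x[0] + 1) ** 2 + x[1] ** 2
front = weighted_front((f1, f2), [3.0, 2.0])
```

On a nonconvex front a fixed weight sweep misses points and clusters others, which is precisely what an adaptive weighting scheme is designed to correct.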
176

Solution of Constrained Clustering Problems through Homotopy Tracking

Easterling, David R. 15 January 2015 (has links)
Modern machine learning methods depend on active optimization research to improve the set of methods available for the efficient and effective extraction of information from large datasets. This, in turn, requires an intense and rigorous study of optimization methods and their possible applications to crucial machine learning problems in order to advance the potential benefits of the field. This thesis provides a study of several modern optimization techniques and supplies a mathematical inquiry into the effectiveness of homotopy methods for attacking a fundamental machine learning problem, effective clustering under constraints. The first part of this thesis provides an empirical survey of several popular optimization algorithms, along with one cutting-edge approach. These algorithms are tested against deeply challenging real-world problems with vast numbers of local minima, and the benefits of each are compared and contrasted when confronted with problems of different natures. The second part of this thesis proposes a new homotopy map for use with constrained clustering problems. The thesis explores the connections between the map and the problem, providing several theorems to justify the use of the map, and uses modern homotopy tracking software to compare an optimization that employs the map with several modern approaches to solving the same problem. / Ph. D.
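The idea of homotopy tracking can be illustrated generically; this is not the thesis's map, and the scalar target equation is invented. A convex homotopy deforms a trivially solvable system into the system of interest while the root is followed numerically.

```python
def track_homotopy(g, dg, x0, steps=100, newton_iters=5):
    # Convex homotopy H(x, t) = (1 - t)*(x - x0) + t*g(x): at t = 0 the
    # root is x0 (trivial system); at t = 1 it is a root of g.
    # Predictor: reuse the previous point; corrector: Newton on H(., t).
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):
            h = (1 - t) * (x - x0) + t * g(x)
            dh = (1 - t) + t * dg(x)
            x -= h / dh
    return x

# Toy target system: the scalar equation x^3 - 2x - 5 = 0.
root = track_homotopy(lambda x: x ** 3 - 2 * x - 5,
                      lambda x: 3 * x ** 2 - 2, x0=1.0)
```

Production trackers follow the path in arclength with adaptive step control and handle turning points, which this fixed-step scalar sketch deliberately ignores.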
177

Optimisation du développement de nouveaux produits dans l'industrie pharmaceutique par algorithme génétique multicritère / Multiobjective optimization of New Product Development in the pharmaceutical industry

Perez Escobedo, José Luis 03 June 2010 (has links)
New Product Development (NPD) is a challenging problem in the pharmaceutical industry, owing to the characteristics of the development pipeline: the presence of uncertainty, the high capital costs involved, the interdependency between projects, the limited availability of resources, the overwhelming number of decisions due to the length of the time horizon (about 10 years), and the combinatorial nature of a portfolio. Formally, the NPD problem can be stated as follows: select a set of R&D projects from a pool of candidates so as to satisfy several criteria (economic profitability, time to market) while coping with the uncertain nature of the projects. More precisely, the recurrent key issues are to determine which projects to develop once target molecules have been identified, their launch order, and the level of resources to assign. In this context, the proposed approach combines discrete-event stochastic simulation (a Monte Carlo approach) with a multiobjective genetic algorithm (NSGA-II, Non-dominated Sorting Genetic Algorithm II) to optimize this highly combinatorial portfolio management problem. An object-oriented model previously developed for batch plant scheduling and design, easily reusable in both its structure and its operating logic, is extended to the management of new products. Two case studies illustrate and validate the approach. The simulation study highlights three performance evaluation criteria for decision making: the net present value (NPV) of a sequence, its associated risk, defined as the number of positive occurrences of the NPV among the samples, and the time to market. These criteria are used in the multiobjective formulation of the optimization problem. In that context, genetic algorithms are particularly attractive because of their ability to produce the Pareto front directly and to handle the combinatorial aspect. NSGA-II is adapted to the treated case to take into account both the number of products in a sequence and the drug release order. An analysis performed on a representative case study, over different pairs of criteria for bi- and tricriteria optimization, shows the optimization strategy to be efficient and particularly elitist in detecting the sequences worth considering by the decision makers: only a few sequences are detected. Among them, large portfolios cause resource queues and launch delays and are eliminated by the bicriteria optimization strategy, while small portfolios, which reduce queuing and time to launch, are preferred. Time proves to be an important criterion to optimize simultaneously with the NPV and risk criteria, demonstrating the value of tricriteria optimization. Finally, the order in which drugs are released in the pipeline is a major variable, as in shop scheduling problems.
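The three criteria that the simulation estimates for a candidate sequence can be sketched with a bare-bones Monte Carlo model. This is a hedged toy, not the thesis's discrete-event simulator: the project tuples, discount rate, and single development line are invented assumptions.

```python
import random

def simulate_sequence(seq, projects, rate=0.10, n_samples=2000, seed=3):
    # projects[name] = (cost, payoff, p_success, (dmin, dmax)).
    # Projects are developed one after another on a single resource line.
    rng = random.Random(seed)
    npvs, launch_times = [], []
    for _ in range(n_samples):
        t = npv = 0.0
        for name in seq:
            cost, payoff, p_success, (dmin, dmax) = projects[name]
            npv -= cost / (1 + rate) ** t            # spend at development start
            t += rng.uniform(dmin, dmax)             # random development duration
            if rng.random() < p_success:
                npv += payoff / (1 + rate) ** t      # revenue at launch
        npvs.append(npv)
        launch_times.append(t)
    mean_npv = sum(npvs) / n_samples
    risk = sum(v > 0 for v in npvs) / n_samples      # share of profitable runs
    time_to_market = sum(launch_times) / n_samples
    return mean_npv, risk, time_to_market

projects = {"A": (10.0, 100.0, 0.5, (1.0, 3.0)),
            "B": (5.0, 15.0, 0.9, (1.0, 2.0))}
mean_npv, risk, ttm = simulate_sequence(("A", "B"), projects)
```

A genetic algorithm would then vary both the subset of projects and their order, calling a simulator like this as the (noisy) evaluation of each candidate sequence.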
178

Optimisation multiobjectif de réseaux de transport de gaz naturel / Multiobjective optimization of natural gas transportation networks

Hernandez-Rodriguez, Guillermo 19 September 2011 (has links)
The optimization of a natural gas transportation network (NGTN) is typically a multiobjective optimization problem, involving for instance the minimization of energy consumption at the compressor stations and the maximization of gas delivery. However, very few works on multiobjective optimization of gas pipeline networks are reported in the literature. This work therefore aims to provide a general framework for formulating and solving multiobjective optimization problems related to NGTNs. In the first part of the study, the NGTN model is described. Then, various multiobjective optimization techniques belonging to the two main classes commonly used in engineering, scalarization methods on the one hand and evolutionary procedures on the other, are presented. From a comparative study performed on two mathematical examples and five process engineering problems (including an NGTN), a variant of the multiobjective genetic algorithm NSGA-II outperforms the classical scalarization methods (weighted sum and epsilon-constraint), and was therefore selected for the tricriteria optimization of an NGTN. First, the single-objective problem of minimizing the fuel consumption in the compressor stations is solved. Then a biobjective problem is presented, in which the fuel consumption is minimized and the gas mass-flow delivery at the end-points of the network is maximized; the nondominated solutions are displayed in the form of a Pareto front. Finally, the impact of hydrogen injection into the NGTN is studied by introducing a third criterion: the percentage of injected hydrogen, to be maximized. In both multiobjective cases, generic multicriteria decision-making tools are implemented to identify the best solutions among those displayed on the Pareto fronts.
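The epsilon-constraint scalarization mentioned above can be illustrated on an analytic toy. This sketch is an assumption-laden stand-in: exhaustive search over a 1-D grid replaces the NLP solver, and the two quadratics have nothing to do with the gas-network model.

```python
def epsilon_constraint_front(f1, f2, candidates, n_eps=9):
    # Sweep the bound on f2 between its best and worst achievable values;
    # for each epsilon keep the candidate minimising f1 among those with
    # f2 <= epsilon (exhaustive search stands in for a real solver).
    lo = min(f2(x) for x in candidates)
    hi = max(f2(x) for x in candidates)
    front = []
    for k in range(n_eps):
        eps = lo + (hi - lo) * k / (n_eps - 1)
        feasible = [x for x in candidates if f2(x) <= eps]
        if feasible:
            best = min(feasible, key=f1)
            front.append((f1(best), f2(best)))
    return sorted(set(front))

# Analytic toy: two conflicting quadratics over a 1-D decision grid.
f1 = lambda x: x * x
f2 = lambda x: (x - 2.0) ** 2
grid = [i / 100 for i in range(-100, 301)]
front = epsilon_constraint_front(f1, f2, grid)
```

Each sweep point yields one (weakly) Pareto-optimal pair, so sorting the collected pairs by the first objective leaves the second objective monotonically decreasing.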
179

Optimisation multicritère de réseaux d'eau / Multiobjective optimization of water networks

Boix, Marianne 28 September 2011 (has links)
This study presents the multiobjective optimization of industrial water networks through mathematical programming. A wide range of examples is processed in order to propose feasible solutions to varied network problems. An industrial network is composed of fixed numbers of process units, regeneration units, and contaminants, characterized by a priori defined maximal inlet and outlet contaminant concentrations. The aim is both to determine which water flows circulate between units and to allocate them while several objectives are optimized: fresh water flow-rate (F1), regenerated water flow-rate (F2), number of interconnections (F3), energy consumption (F4), and number of heat exchangers (F5) are all minimized. The multiobjective optimization is based on the epsilon-constraint strategy, developed from a lexicographic method, which leads to Pareto fronts. Monocontaminant networks are addressed with a mixed-integer linear programming (MILP) model, using an original formulation based on partial water flow-rates. The results obtained are in good agreement with the literature data and validate the method. The set of potential network solutions is provided in the form of a Pareto front. An innovative strategy based on the global equivalent cost (GEC) guides the choice of one network among these solutions and turns out to be effective for selecting a good network from a practical point of view. When the industrial network involves several contaminants, the formulation changes from MILP to MINLP (mixed-integer nonlinear programming). With the same strategy as in the monocontaminant case, the networks obtained are topologically simpler than those in the literature and have the advantage of avoiding very low flow-rates. A MILP model is then developed to optimize heat and water networks simultaneously. Among several examples, the real case of a paper mill is studied; this work improves previous literature solutions by 2 to 10% in cost and 7 to 15% in energy consumption. The methodology is finally extended to the optimization of eco-industrial parks, allowing an ecological and economic solution to be selected among a set of proposed configurations. Several configurations are studied regarding the place of the regeneration units in the symbiosis. The best network is obtained when each industry in the park owns its regeneration unit, yielding a gain of about 13% for each company. Finally, when heat is combined with water in the eco-park network, a gain of 11% is obtained compared with the case where the companies are considered individually.
180

Allocation optimale multicontraintes des workflows aux ressources d’un environnement Cloud Computing / Multi-constrained optimal allocation of workflows to Cloud Computing resources

Yassa, Sonia 10 July 2014 (has links)
Cloud computing is increasingly recognized as a new way to use on-demand computing, storage, and network services in a transparent and efficient way. In this thesis, we address the problem of scheduling workflows on the distributed heterogeneous infrastructures of cloud computing. Existing workflow scheduling approaches for the cloud mainly focus on the biobjective optimization of makespan and cost. We propose new workflow scheduling algorithms based on metaheuristics. Our algorithms are able to handle more than two QoS (quality of service) metrics, namely makespan, cost, reliability, availability and, in the case of physical resources, energy. In addition, they address several constraints according to the requirements specified in the SLA (service level agreement). Our algorithms have been evaluated by simulation, using (1) synthetic workflows and real-world scientific workflows with different structures as applications, and (2) the characteristics of Amazon EC2 services as cloud resources. The results obtained show the effectiveness of our algorithms when dealing with multiple QoS metrics. They produce one or more solutions, some of which outperform the solution produced by the HEFT heuristic on all the QoS metrics considered, including the makespan, for which HEFT is expected to give good results.
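Since HEFT serves as the baseline above, a compact sketch of its two phases (upward ranking, then greedy earliest-finish-time placement) may help. The DAG and cost tables below are invented, and real HEFT implementations add details omitted here, such as insertion-based slot search.

```python
def heft(tasks, succ, costs):
    # tasks: list of ids; succ[u] = [(v, comm_cost), ...]; costs[t] gives
    # the runtime of t on each processor.
    preds = {t: [] for t in tasks}
    for u, kids in succ.items():
        for v, comm in kids:
            preds[v].append((u, comm))
    rank = {}
    def upward(t):
        # Upward rank: mean runtime plus the heaviest path to an exit task.
        if t not in rank:
            avg = sum(costs[t]) / len(costs[t])
            rank[t] = avg + max((comm + upward(v)
                                 for v, comm in succ.get(t, [])), default=0.0)
        return rank[t]
    order = sorted(tasks, key=upward, reverse=True)   # decreasing upward rank
    n_proc = len(next(iter(costs.values())))
    proc_free = [0.0] * n_proc
    finish, where = {}, {}
    for t in order:
        choices = []
        for p in range(n_proc):
            # data from a parent on another processor pays its comm cost
            ready = max((finish[u] + (comm if where[u] != p else 0.0)
                         for u, comm in preds[t]), default=0.0)
            start = max(ready, proc_free[p])
            choices.append((start + costs[t][p], p))
        eft, p = min(choices)                          # earliest finish time wins
        finish[t], where[t] = eft, p
        proc_free[p] = eft
    return max(finish.values()), where

# Diamond DAG a -> {b, c} -> d on two heterogeneous processors.
succ = {"a": [("b", 1.0), ("c", 1.0)], "b": [("d", 1.0)], "c": [("d", 1.0)]}
costs = {"a": [2.0, 3.0], "b": [2.0, 2.0], "c": [3.0, 1.0], "d": [2.0, 2.0]}
makespan, placement = heft(["a", "b", "c", "d"], succ, costs)
```

HEFT optimizes makespan alone; extending the placement decision to reliability, availability, energy, and SLA constraints is exactly where metaheuristic schedulers take over.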
