51

Machine learning based on Hawkes processes and stochastic optimization / Apprentissage automatique avec les processus de Hawkes et l'optimisation stochastique

Bompaire, Martin 05 July 2019 (has links)
The common thread of this thesis is the study of Hawkes processes. These point processes decrypt the cross-causality that occurs across several event series: they retrieve the influence that the events of one series have on the future events of all series. For example, in the context of social networks, they describe how likely an action of a certain user (such as a Tweet) is to trigger reactions from the others.

The first chapter consists of a general introduction on point processes, followed by a focus on Hawkes processes and more specifically on the properties of the widely used exponential-kernel parametrization. In the following chapter, we introduce an adaptive penalization technique to model, with Hawkes processes, information propagation on social networks. This penalization is able to take into account prior knowledge of the social network characteristics, such as the sparse interactions between users or the community structure, and to reflect them in the estimated model. Our technique uses data-driven weighted penalties induced by a careful analysis of the generalization error.

Next, we focus on convex optimization and recall the recent progress made with stochastic first-order methods using variance-reduction techniques. The fourth chapter is dedicated to adapting these techniques to optimize the most commonly used goodness-of-fit criterion for Hawkes processes. Indeed, this criterion does not meet the gradient-Lipschitz assumption required by the latest first-order methods. Thus, we work under another smoothness assumption and obtain a linear convergence rate for a shifted version of Stochastic Dual Coordinate Ascent that improves on the current state of the art. Besides, such objectives include many linear constraints that are easily violated by classic first-order algorithms, whereas in the Fenchel dual problem these constraints are easier to deal with. Hence, our algorithm's robustness is comparable to that of second-order methods, which are very expensive in high dimensions.

Finally, the last chapter introduces a new statistical learning library for Python 3 with a particular emphasis on time-dependent models, including tools for generalized linear models and survival analysis. Called tick, this library relies on a C++ implementation and state-of-the-art optimization algorithms to provide very fast computations in a single-node multi-core setting. Open-sourced and published on GitHub, this library has been used all along this thesis to perform benchmarks and experiments.
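The exponential-kernel parametrization studied in the first chapter admits a very compact simulation scheme. The following is a minimal, self-contained sketch of a univariate Hawkes process simulated by Ogata's thinning algorithm; it is illustrative only, not the thesis's (or tick's) implementation, and all parameter values are invented.

```python
import numpy as np

def hawkes_intensity(mu, alpha, beta, events, t):
    """Conditional intensity lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta (t - t_i))."""
    ev = np.asarray(events)
    return mu + alpha * np.sum(np.exp(-beta * (t - ev[ev < t])))

def simulate_hawkes_exp(mu, alpha, beta, horizon, seed=0):
    """Ogata's thinning: between events the exponential kernel only decays,
    so the intensity just after the current time upper-bounds the future one."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while True:
        # Valid upper bound even if t itself is an event (its jump adds alpha).
        lam_bar = hawkes_intensity(mu, alpha, beta, events, t) + alpha
        t += rng.exponential(1.0 / lam_bar)
        if t > horizon:
            break
        # Accept the candidate with probability lambda(t) / lam_bar.
        if rng.random() * lam_bar <= hawkes_intensity(mu, alpha, beta, events, t):
            events.append(t)
    return np.array(events)

times = simulate_hawkes_exp(mu=1.0, alpha=0.8, beta=2.0, horizon=200.0)
```

With branching ratio alpha/beta = 0.4 < 1 the process is stationary, with long-run rate mu / (1 - alpha/beta).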
52

Industrial Demand Response in the Primary Reserve Markets : A case study on Holmen’s Pulp and Paper Mill

Tomasini, Federica January 2019 (has links)
This thesis stems from the interest of the Holmen group in investigating the opportunities available to large electricity consumers in the Swedish primary reserve markets. The study focuses on one of Holmen's paper mills and aims at identifying a load inside the production process that is suitable for providing frequency containment services to the grid. The evaluation of the mill's consumption profile and the technical requirements of the reserve market led to the identification of the electric boiler, coupled with a steam accumulator, as the most appropriate load.

Five case studies simulating the participation of the mill in different energy and reserve markets have been evaluated. For each case a linear optimization problem has been formulated. The first simulation represents the current practice of the mill in relation to the energy purchased on the spot market (in the following it will also be referred to as the reference case). The second case study (II c.s.) integrates the use of the steam accumulator as a tool to perform thermal load shifting. In the third case study (III c.s.) the mill is modelled to bid on the spot and primary reserve markets by offering some capacity of the electric boiler. The last two case studies (IV and V c.s.) recall the first and last previously mentioned, but also include the possibility of having energy imbalance; this means that the imbalance settlement operated by eSett produces an additional cost or profit for the mill.

The last three problem formulations fall under the definition of stochastic problems, since two random variables are present, namely the average hourly frequency value and the imbalance settlement price. The uncertainty of these variables is represented through scenarios.

The outcome derived from the combination of the results for the winter and summer cases shows that each strategy brings an economic saving when compared to the reference case (I c.s.). The less interesting strategies are the ones that do not involve the reserve market, leading to savings of about 0.03% (II c.s.) and 0.06% (IV c.s.) on the overall yearly energy cost. Conversely, by offering FCR-N capacity, the cost of electricity can be cut by 5.15% (III c.s.) and 6.69% (V c.s.), respectively without and with the imbalance settlement.
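The thermal load shifting of case study II boils down to a small linear program: run the electric boiler in cheap hours and let the steam accumulator carry the heat into expensive ones. The sketch below is a deliberately tiny stand-in for the thesis's model; prices, capacities, and the constant steam demand are all invented.

```python
import numpy as np
from scipy.optimize import linprog

prices = np.array([3.0, 1.0, 4.0, 1.0, 5.0])  # hourly spot prices (invented)
demand = 1.0      # constant steam demand per hour
p_max = 2.0       # electric boiler capacity per hour
s_max = 2.0       # steam accumulator capacity, initially empty

T = len(prices)
L = np.tril(np.ones((T, T)))           # cumulative-sum operator
cum_d = demand * np.arange(1, T + 1)   # cumulative demand

# Storage balance s_t = sum_{tau<=t} p_tau - t*demand must stay in [0, s_max]:
#   L p >= cum_d   and   L p <= s_max + cum_d
res = linprog(
    c=prices,
    A_ub=np.vstack([-L, L]),
    b_ub=np.concatenate([-cum_d, s_max + cum_d]),
    bounds=[(0.0, p_max)] * T,
    method="highs",
)
```

On this instance the optimizer fills the accumulator in the 1-priced hours and idles the boiler in the 4- and 5-priced ones, beating the "buy flat every hour" schedule.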
53

[pt] DIMENSIONAMENTO DE FROTA MARÍTIMA SOB INCERTEZA EM UMA EMPRESA BRASILEIRA DE PETRÓLEO / [en] MARITIME FLEET SIZING UNDER UNCERTAINTY IN A BRAZILIAN OIL COMPANY

DANILO BAPTISTA MAROJA 06 April 2020 (has links)
[en] The inherent volatility of the maritime freight market and the uncertainties related to the expected transport demand contribute to the complexity of the fleet size and mix problem. This work addresses the maritime fleet renewal problem of a Brazilian oil and gas company for the transportation of oil products on cabotage and international voyages. To this end, we present a two-stage stochastic mixed-integer programming model capable of recommending which chartering contracts to sign, considering uncertainties in freight market levels and in the forecasted transported volume. The model is able to provide fleet compositions that meet the problem specifications; however, for the cases analysed, little gain potential was observed when comparing the stochastic solutions with solutions based on the expected values of the parameters. This work highlights a situation in which evaluating the computationally less demanding Wait-and-See (WS) and Expected Value of Expected Solution (EEV) solutions is useful to estimate the potential gain of the stochastic model solution.
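The WS and EEV values mentioned above bracket the here-and-now stochastic optimum: for a maximization problem, WS ≥ RP ≥ EEV. A toy illustration on a newsvendor-style chartering stand-in (all numbers invented, not the thesis's model):

```python
import numpy as np

# Toy two-stage problem: charter q units at unit cost, earn revenue on min(q, demand).
price, cost = 10.0, 6.0
scenarios = np.array([4.0, 8.0, 30.0])  # equally likely demand scenarios (invented)

def profit(q, d):
    return price * np.minimum(q, d) - cost * q

candidates = np.arange(0, 31)

# RP: best single decision against the full distribution (here-and-now optimum).
rp = max(profit(q, scenarios).mean() for q in candidates)

# WS: clairvoyant bound -- optimize separately in each scenario, then average.
ws = np.mean([max(profit(q, d) for q in candidates) for d in scenarios])

# EEV: evaluate the decision that is optimal for the *expected* demand.
q_ev = scenarios.mean()
eev = profit(q_ev, scenarios).mean()
```

Here WS − RP is the expected value of perfect information, and RP − EEV is the value of the stochastic solution — the gain potential the thesis found to be small in its cases.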
54

Optimization and Algorithms for Wireless Networks: Enhancing Problem Solvability, Channel Bonding Under Demand Stochasticity, and Receiver Characteristic Awareness

Abdelfattah, Amr Nabil A. 10 January 2018 (has links)
5G networks appear on the horizon with demanding Quality of Service (QoS) requirements, such as aggregate data rate and latency. Managing such networks, in either a distributed or a centralized manner, so as to best utilize the available scarce resources is still a big challenge, and better mechanisms are needed for resource allocation. In this dissertation, we discuss three distinct research problems related to this theme.

The first part addresses enhancing the solvability of network optimization problems. For the class of problems studied, we show that a traditionally formulated model is insufficient from a problem-solving perspective: when the size of the problem increases, even state-of-the-art optimizers cannot obtain an optimal solution because of memory constraints. We show that augmenting the model with suitable additional constraints and structure enables the optimizer to derive optimal solutions, or to significantly reduce the optimality gap.

The second problem is optimal channel bonding in wireless LANs under demand uncertainty. An access point (AP) can aggregate multiple contiguous channels to satisfy demand. We discuss how to optimally utilize the available frequency bands under uncertainty in AP demand using two stochastic optimization frameworks: a static scheme, which minimizes the total occupied bandwidth while satisfying the demand of each AP with probability at least β, and an adaptive scheme, which adjusts the bandwidth allocation in response to AP demand variations. Given its complexity, we propose a novel framework to solve the adaptive stochastic optimization problem efficiently.

The third problem is to allocate resources with receiver-characteristic awareness in a multiple radio access technology environment. We propose a novel adjacent channel interference (ACI)-aware joint channel and power allocation framework that takes into account receiver imperfections arising from (i) imperfect image frequency rejection and (ii) analog-to-digital converter aliasing. As the overall problem is a mixed-integer linear program (MILP), which is NP-hard, we develop an efficient algorithm to solve it. / Ph. D.
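For a single AP, the static scheme's chance constraint "satisfy demand with probability at least β" reduces to provisioning the β-quantile of the AP's demand distribution. A hypothetical sketch (the demand model and per-channel rate are invented, and the real problem couples many APs over contiguous channels):

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 0.95
rate_per_channel = 100.0  # Mb/s per bonded 20 MHz channel (assumed)
demand_mbps = rng.lognormal(mean=5.0, sigma=0.5, size=10_000)  # AP demand scenarios

# Smallest channel count whose aggregate rate covers the beta-quantile of demand.
n_channels = int(np.ceil(np.quantile(demand_mbps, beta) / rate_per_channel))
satisfied = np.mean(demand_mbps <= n_channels * rate_per_channel)
```

The empirical satisfaction probability `satisfied` is then at least β by construction; the thesis's static scheme additionally minimizes total occupied bandwidth across all APs jointly.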
55

Service ORiented Computing EnviRonment (SORCER) for Deterministic Global and Stochastic Optimization

Raghunath, Chaitra 13 September 2015 (has links)
With rapid growth in the complexity of large scale engineering systems, the application of multidisciplinary analysis and design optimization (MDO) in the engineering design process has garnered much attention. MDO addresses the challenge of integrating several different disciplines into the design process. Primary challenges of MDO include computational expense and poor scalability. The introduction of a distributed, collaborative computational environment results in better utilization of available computational resources, reducing the time to solution, and enhancing scalability. SORCER, a Java-based network-centric computing platform, enables analyses and design studies in a distributed collaborative computing environment. Two different optimization algorithms widely used in multidisciplinary engineering design---VTDIRECT95 and QNSTOP---are implemented on a SORCER grid. VTDIRECT95, a Fortran 95 implementation of D. R. Jones' algorithm DIRECT, is a highly parallelizable derivative-free deterministic global optimization algorithm. QNSTOP is a parallel quasi-Newton algorithm for stochastic optimization problems. The purpose of integrating VTDIRECT95 and QNSTOP into the SORCER framework is to provide load balancing among computational resources, resulting in a dynamically scalable process. Further, the federated computing paradigm implemented by SORCER manages distributed services in real time, thereby significantly speeding up the design process. Results are included for an aircraft design application. / Master of Science
56

Approaches to Joint Base Station Selection and Adaptive Slicing in Virtualized Wireless Networks

Teague, Kory Alan 19 November 2018 (has links)
Wireless network virtualization is a promising avenue of research for next-generation 5G cellular networks. This work investigates the problem of selecting base stations to construct virtual networks for a set of service providers, and of adaptively slicing the resources between the service providers to satisfy their demands. A two-stage stochastic optimization framework is introduced to solve this problem, and two methods are presented for approximating the stochastic model. The first method uses a sampling approach applied to the deterministic equivalent program of the stochastic model. The second method uses a genetic algorithm for base station selection and adaptive slicing via a single-stage linear optimization problem. A number of scenarios are simulated using a log-normal model designed to emulate demand from real-world cellular networks. Simulations indicate that the first approach can provide a reasonably tight solution, but is constrained because its time expense grows exponentially with the number of parameters. The second approach provides a significant improvement in run time at the cost of marginal error. / Master of Science / 5G, the next-generation cellular network standard, promises significant improvements over current-generation standards. For 5G to be successful, these must be accompanied by similarly significant efficiency improvements. Wireless network virtualization is a promising technology that has been shown to improve the cost efficiency of current-generation cellular networks. By abstracting a physical resource, such as a cell-tower base station, from the use of that resource, virtual resources are formed. This work investigates the problem of selecting virtual resources (e.g., base stations) to construct virtual wireless networks with minimal cost, and of slicing the selected resources among individual networks to optimally satisfy their demands.
This problem is framed in a stochastic optimization framework and two approaches are presented for approximation. The first approach converts the framework into a deterministic equivalent and reduces it to a tractable form. The second approach uses a genetic algorithm to approximate resource selection. Approaches are simulated and evaluated utilizing a demand model constructed to emulate the statistics of an observed real world urban network. Simulations indicate that the first approach can provide a reasonably tight solution with significant time expense, and that the second approach provides a solution in significantly less time with the introduction of marginal error.
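The genetic-algorithm approach of the second method can be sketched on a toy instance: bitstrings encode which base stations to open, and fitness is the opening cost plus the expected penalty for unserved demand across scenarios. Everything below (costs, capacities, demand model, GA settings) is invented for illustration and is not the thesis's formulation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical candidates: opening cost and capacity of each base station.
open_cost = np.array([4.0, 3.0, 5.0, 2.5, 6.0, 3.5])
capacity = np.array([10.0, 8.0, 14.0, 6.0, 16.0, 9.0])
scenarios = rng.lognormal(mean=3.0, sigma=0.4, size=200)  # total demand scenarios
penalty = 1.0                                             # cost per unit of unmet demand

def cost(x):
    """Opening cost plus expected penalty for demand exceeding opened capacity."""
    unmet = np.maximum(scenarios - capacity @ x, 0.0)
    return open_cost @ x + penalty * unmet.mean()

n, pop_size = len(open_cost), 30
pop = rng.integers(0, 2, size=(pop_size, n)).astype(float)
pop[0] = 1.0  # seed the population with "open everything"

for _ in range(60):
    fit = np.array([cost(x) for x in pop])
    elite = pop[fit.argmin()].copy()                    # elitism: keep the best
    a, b = rng.integers(0, pop_size, (2, pop_size))     # tournament selection
    parents = np.where((fit[a] < fit[b])[:, None], pop[a], pop[b])
    cut = rng.integers(1, n, pop_size)                  # one-point crossover
    mask = np.arange(n)[None, :] < cut[:, None]
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    flip = rng.random(children.shape) < 0.05            # bit-flip mutation
    children[flip] = 1.0 - children[flip]
    children[0] = elite
    pop = children

best = pop[np.argmin([cost(x) for x in pop])]
best_cost = cost(best)
```

Because the elite individual is carried forward each generation, the returned cost never exceeds that of the seeded "open everything" solution; in the thesis the selected stations are then sliced among providers by a linear program.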
57

Résolution de grands problèmes en optimisation stochastique dynamique et synthèse de lois de commande / Solving large-scale dynamic stochastic optimization problems

Girardeau, Pierre 17 December 2010 (has links)
Le travail présenté ici s'intéresse à la résolution numérique de problèmes de commande optimale stochastique de grande taille. Nous considérons un système dynamique, sur un horizon de temps discret et fini, pouvant être influencé par des bruits exogènes et par des actions prises par le décideur. L'objectif est de contrôler ce système de sorte à minimiser une certaine fonction objectif, qui dépend de l'évolution du système sur tout l'horizon. Nous supposons qu'à chaque instant des observations sont faites sur le système, et éventuellement gardées en mémoire. Il est généralement profitable, pour le décideur, de prendre en compte ces observations dans le choix des actions futures. Ainsi sommes-nous à la recherche de stratégies, ou encore de lois de commandes, plutôt que de simples décisions. Il s'agit de fonctions qui à tout instant et à toute observation possible du système associent une décision à prendre. Ce manuscrit présente trois contributions. La première concerne la convergence de méthodes numériques basées sur des scénarios. Nous comparons l'utilisation de méthodes basées sur les arbres de scénarios aux méthodes particulaires. Les premières ont été largement étudiées au sein de la communauté "Programmation Stochastique". Des développements récents, tant théoriques que numériques, montrent que cette méthodologie est mal adaptée aux problèmes à plusieurs pas de temps. Nous expliquons ici en détails d'où provient ce défaut et montrons qu'il ne peut être attribué à l'usage de scénarios en tant que tel, mais plutôt à la structure d'arbre. En effet, nous montrons sur des exemples numériques comment les méthodes particulaires, plus récemment développées et utilisant également des scénarios, ont un meilleur comportement même avec un grand nombre de pas de temps. La deuxième contribution part du constat que, même à l'aide des méthodes particulaires, nous faisons toujours face à ce qui est couramment appelé, en commande optimale, la malédiction de la dimension. 
Lorsque la taille de l'état servant à résumer le système est de trop grande taille, on ne sait pas trouver directement, de manière satisfaisante, des stratégies optimales. Pour une classe de systèmes, dits décomposables, nous adaptons des résultats bien connus dans le cadre déterministe, portant sur la décomposition de grands systèmes, au cas stochastique. L'application n'est pas directe et nécessite notamment l'usage d'outils statistiques sophistiqués afin de pouvoir utiliser la variable duale qui, dans le cas qui nous intéresse, est un processus stochastique. Nous proposons un algorithme original appelé Dual Approximate Dynamic Programming (DADP) et étudions sa convergence. Nous appliquons de plus cet algorithme à un problème réaliste de gestion de production électrique sur un horizon pluri-annuel. La troisième contribution de la thèse s'intéresse à une propriété structurelle des problèmes de commande optimale stochastique : la question de la consistance dynamique d'une suite de problèmes de décision au cours du temps. Notre but est d'établir un lien entre la notion de consistance dynamique, que nous définissons de manière informelle dans le dernier chapitre, et le concept de variable d'état, qui est central dans le contexte de la commande optimale. Le travail présenté est original au sens suivant. Nous montrons que, pour une large classe de modèles d'optimisation stochastique n'étant pas a priori consistants dynamiquement, on peut retrouver la consistance dynamique quitte à étendre la structure d'état du système / This work is intended at providing resolution methods for Stochastic Optimal Control (SOC) problems. We consider a dynamical system on a discrete and finite horizon, which is influenced by exogenous noises and actions of a decision maker. The aim is to minimize a given function of the behaviour of the system over the whole time horizon. 
We suppose that, at every instant, the decision maker is able to make observations on the system and even to keep some in memory. Since it is generally profitable to take these observations into account in order to draw further actions, we aim at designing decision rules rather than simple decisions. Such rules map to every instant and every possible observation of the system a decision to make. The present manuscript presents three main contributions. The first is concerned with the study of scenario-based solving methods for SOC problems. We compare the use of the so-called scenario trees technique to the particle method. The first one has been widely studied among the Stochastic Programming community and has been somehow popular in applications, until recent developments showed numerically as well as theoretically that this methodology behaved poorly when the number of time steps of the problem grows. We here explain this fact in details and show that this negative feature is not to be attributed to the scenario setting, but rather to the use of a tree structure. Indeed, we show on numerical examples how the particle method, which is a newly developed variational technique also based on scenarios, behaves in a better way even when dealing with a large number of time steps. The second contribution starts from the observation that, even with particle methods, we are still facing some kind of curse of dimensionality. In other words, decision rules intrisically suffer from the dimension of their domain, that is observations (or state in the Dynamic Programming framework). For a certain class of systems, namely decomposable systems, we adapt results concerning the decomposition of large-scale systems which are well known in the deterministic case to the SOC case. The application is not straightforward and requires some statistical analysis for the dual variable, which is in our context a stochastic process. 
We propose an original algorithm called Dual Approximate Dynamic Programming (DADP) and study its convergence. We also apply DADP to a real-life power management problem. The third contribution is concerned with a rather structural property for SOC problems: the question of dynamic consistency for a sequence of decision making problems over time. Our aim is to establish a link between the notion of time consistency, that we loosely define in the last chapter, and the central concept of state structure within optimal control. This contribution is original in the following sense. Many works in the literature aim at finding optimization models which somehow preserve the "natural" time consistency property for the sequence of decision making problems. On the contrary, we show for a broad class of SOC problems which are not a priori time-consistent that it is possible to regain this property by simply extending the state structure of the model
58

[en] ALLOCATION OF FIRM CAPACITY RIGHTS AMONG THERMAL PLANTS: A GAME THEORETICAL APPROACH / [pt] APLICAÇÃO DE TEORIA DE JOGOS À ALOCAÇÃO DE CAPACIDADE FIRME EM UM SISTEMA TÉRMICO

GUSTAVO ALBERTO AMARAL AYALA 17 October 2008 (has links)
[en] The objective of this work is to investigate the application of different methodologies for allocating firm capacity rights among thermal plants using a game-theoretic framework, and the consequences for cooperation among the agents. It is shown that there is no single optimal approach to this allocation, but there are criteria to verify whether a given approach presents any inadequate aspect. One of these criteria is justice, or fairness; it is shown that this sense of justice is equivalent to the core condition of a cooperative game, in which no subgroup subsidizes another. The calculation of firm capacity is formulated as a linear program, and the advantages and disadvantages of different allocation methods (marginal allocation, incremental allocation, Nucleolus, Shapley) are investigated. The computational effort of these methods grows exponentially with the number of agents, so it is shown that the Aumann-Shapley (AS) scheme provides a more efficient solution to the problem of allocating capacity rights. Numerical results comparing the allocations produced by these methods are presented for generic small systems.
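The Shapley allocation discussed above averages each plant's marginal contribution to firm capacity over all orders of arrival; its factorial cost in the number of agents is what motivates the Aumann-Shapley scheme. A self-contained sketch on an invented three-plant game:

```python
from itertools import permutations

# Invented characteristic function: v(S) = firm capacity of coalition S (MW).
v = {
    frozenset(): 0.0,
    frozenset({1}): 10.0, frozenset({2}): 20.0, frozenset({3}): 30.0,
    frozenset({1, 2}): 35.0, frozenset({1, 3}): 45.0, frozenset({2, 3}): 55.0,
    frozenset({1, 2, 3}): 70.0,
}
players = [1, 2, 3]

def shapley(v, players):
    """Average each player's marginal contribution over all join orders."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in players}

phi = shapley(v, players)
```

The allocation is efficient by construction (the shares sum to v of the grand coalition); for n plants the loop visits n! orders, which is the exponential growth the abstract refers to.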
59

Designing Two-Echelon Distribution Networks under Uncertainty / Design de réseaux de distribution à deux échelons sous incertitude

Ben Mohamed, Imen 27 May 2019 (has links)
Avec la forte croissance du e-commerce et l'augmentation continue de la population des villes impliquant des niveaux de congestion plus élevés, les réseaux de distribution doivent déployer des échelons supplémentaires pour offrir un ajustement dynamique aux besoins des entreprises au cours du temps et faire face aux aléas affectant l’activité de distribution. Dans ce contexte, les praticiens s'intéressent aux réseaux de distribution à deux échelons. Dans cette thèse, nous commençons par présenter une revue complète des problèmes de design des réseaux de distribution et souligner des caractéristiques essentielles de modélisation. Ces aspects impliquent la structure à deux échelons, l’aspect multi-période, l’incertitude et les méthodes de résolution. Notre objectif est donc, d’élaborer un cadre complet pour le design d’un réseau de distribution efficace à deux échelons, sous incertitude et multi-périodicité, dans lequel les produits sont acheminés depuis les plateformes de stockage (WP) vers les plateformes de distribution (DP) avant d'être transportés vers les clients. Ce cadre est caractérisé par une hiérarchie temporelle entre le niveau de design impliquant des décisions relatives à la localisation des plateformes et à la capacité allouée aux DPs sur une échelle de temps annuelle, et le niveau opérationnel concernant des décisions journalières de transport. % sur une base journalière.Dans une première étude, nous introduisons le cadre complet pour le problème de design de réseaux de distribution à deux échelons avec une demande incertaine, une demande et un coût variables dans le temps. Le problème est formulé comme un programme stochastique à plusieurs étapes. Il implique au niveau stratégique des décisions de localisation des DPs ainsi que des décisions d'affectation des capacités aux DPs sur plusieurs périodes de design, et au niveau opérationnel des décisions de transport sous forme d'arcs origine-destination. 
Ensuite, nous proposons deux modèles alternatifs basés sur la programmation stochastique à deux étapes avec recours, et les résolvons par une approche de décomposition de Benders intégrée à une technique d’approximation moyenne d’échantillon (SAA). Par la suite, nous nous intéressons à la livraison du dernier kilomètre dans un contexte urbain où les décisions de transport dans le deuxième échelon sont caractérisées par des tournées de véhicules. Un problème multi-période stochastique de localisation-routage à deux échelons avec capacité (2E-SM-CLRP) est défini, dans lequel les décisions de localisation concernent les WPs et les DPs. Le modèle est un programme stochastique à deux étapes avec recours en nombre entier. Nous développons un algorithme de décomposition de Benders. Les décisions de localisation et de capacité sont déterminées par la solution du problème maître de Benders. Le sous-problème résultant est un problème multi-dépôt de tournées de véhicule avec des dépôts et véhicules capacitaires qui est résolu par un algorithme de branch-cut-and-price.Enfin, nous étudions le cadre à plusieurs étapes proposé pour le problème stochastique multi-période de design de réseaux de distribution à deux échelons et évaluons sa tractabilité. Pour ceci, nous développons une heuristique à horizon glissant qui permet d’obtenir des bornes de bonne qualité et des solutions de design pour le modèle à plusieurs étapes. / With the high growth of e-commerce and the continuous increase in cities population contrasted with the rising levels of congestion, distribution schemes need to deploy additional echelons to offer more dynamic adjustment to the requirement of the business over time and to cope with all the random factors. 
In this context, two-echelon distribution networks are nowadays investigated by practitioners.

In this thesis, we first present a comprehensive survey of distribution network design problems and point out critical modeling features, namely the two-echelon structure, the multi-period setting, uncertainty, and solution approaches. Our aim is to propose a comprehensive framework for the design of an efficient two-echelon distribution network under multi-period and stochastic settings, in which products are directed from warehouse platforms (WPs) to distribution platforms (DPs) before being transported to customers. A temporal hierarchy distinguishes the design level, dealing with facility-location and capacity decisions over a set of design periods, from the operational level, which involves transportation decisions on a daily basis.

Then, we introduce the comprehensive framework for the two-echelon distribution network design problem under uncertain demand and time-varying demand and cost, formulated as a multi-stage stochastic program. This work looks at a generic case for the deployment of a retailer's distribution network. The problem involves, at the strategic level, decisions on the number and location of DPs along the set of design periods, as well as capacity-assignment decisions to calibrate DP throughput capacity. The operational decisions related to transportation are modeled as origin-destination arcs. Subsequently, we propose alternative modeling approaches based on two-stage stochastic programming with recourse, and solve the resulting models using a Benders decomposition approach integrated with a sample average approximation (SAA) technique.

Next, we are interested in last-mile delivery in an urban context, where the transportation decisions involved in the second echelon are addressed through multi-drop routes.
A two-echelon stochastic multi-period capacitated location-routing problem (2E-SM-CLRP) is defined, in which facility-location decisions concern both WPs and DPs. We model the problem as a two-stage stochastic program with integer recourse. To solve the 2E-SM-CLRP, we develop a Benders decomposition algorithm: the location and capacity decisions are fixed from the solution of the Benders master problem, and the resulting subproblem is a capacitated vehicle-routing problem with capacitated multiple depots (CVRP-CMD), solved using a branch-cut-and-price algorithm.

Finally, we focus on the multi-stage framework proposed for the stochastic multi-period two-echelon distribution network design problem and evaluate its tractability. A scenario tree is built to handle the set of scenarios representing demand uncertainty. We present a compact formulation and develop a rolling horizon heuristic to produce design solutions for the multi-stage model. It provides good-quality bounds in reasonable computational time.
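The sample average approximation step used in this abstract can be illustrated in miniature: SAA replaces the expectation over uncertain demand with an average over a finite sample, and the first-stage decision is then chosen against that empirical objective. The sketch below is a hypothetical single-platform capacity problem with illustrative data, not the thesis model:

```python
import random

def saa_capacity(candidates, demand_sampler, n_samples, cap_cost, short_cost, seed=0):
    """Sample average approximation for a toy single-platform capacity problem.

    First-stage decision: installed capacity x, at cap_cost per unit.
    Second-stage recourse: penalty short_cost per unit of unmet demand.
    The expectation over demand is replaced by an average over n_samples draws.
    """
    rng = random.Random(seed)
    sample = [demand_sampler(rng) for _ in range(n_samples)]

    def saa_objective(x):
        # Empirical recourse cost: average shortage penalty over the sample.
        recourse = sum(short_cost * max(0.0, d - x) for d in sample) / n_samples
        return cap_cost * x + recourse

    return min(candidates, key=saa_objective)

# Hypothetical data: demand uniform on [50, 150], capacity costs 1 per unit,
# each unit of unmet demand costs 4; search capacity on a coarse grid.
best = saa_capacity(
    candidates=range(0, 201, 10),
    demand_sampler=lambda rng: rng.uniform(50, 150),
    n_samples=2000,
    cap_cost=1.0,
    short_cost=4.0,
)
```

With a finer candidate grid or a larger sample, the SAA solution approaches the true stochastic optimum (here, near 125 by the newsvendor critical-ratio argument); in the thesis this idea is combined with Benders decomposition to handle the full network model.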
60

Análise de processos de cenarização na geração hidroenergética. / Analysis of scenario processes in hydropower generation.

Vilhena, Frederico Abdo de 25 September 2014 (has links)
O planejamento de médio e longo prazo da operação hidrelétrica brasileira consiste em um problema de grande porte que envolve muitas variáveis, entre as quais se destacam as vazões afluentes aos reservatórios. Estas vazões devem ser estimadas com o objetivo de caracterizar a oferta futura de eletricidade em um horizonte de planejamento. Dentre as possíveis abordagens existentes para estimar estas vazões, destaca-se a abordagem estocástica, que permite considerar as variáveis em função de sua distribuição probabilística e busca contemplar o universo mais provável de manifestações. A abordagem estocástica pode se utilizar de modelos estocásticos, que costumam ser caracterizados através de árvores de cenários, as quais representam o universo de possibilidades de ocorrências. No entanto, devido à elevada dimensionalidade a que o processo estocástico pode chegar ao se considerar árvores muito grandes, torna-se necessária a utilização de técnicas complementares que visem à redução do número de cenários. Com base nesta contextualização, esta dissertação aborda o processo de otimização estocástica do planejamento da geração hidrelétrica, considerando árvores de cenários e técnicas de redução de cenários, e utilizando a modelagem de otimização da geração desenvolvida no SSD HIDROTERM, em linguagem GAMS. Como estudo de caso, foram desenvolvidos e adaptados algoritmos de otimização estocástica que consideram árvores com elevado número de cenários, gerados por meio de modelos estocásticos autorregressivos periódicos (PAR); sobre estas árvores, foi ainda aplicada a ferramenta de redução de cenários por agrupamento SCENRED, desenvolvida em GAMS.
As análises de sensibilidade realizadas visaram: validar o processo proposto de otimização estocástica; analisar os efeitos da utilização de diferentes árvores reduzidas de cenários de vazões, o impacto da consideração de diferentes horizontes de planejamento e a influência do regime hidrológico nos principais resultados do processo de otimização; além de estudar as vantagens e desvantagens deste processo para o planejamento da operação hidrelétrica. Os resultados indicam que o processo de otimização estocástica é eficaz ao considerar as aleatoriedades envolvidas na previsão de vazões afluentes. Os resultados também confirmaram tendências já esperadas no processo de otimização estocástica, como o fato de que, quanto maior a árvore de cenários, mais precisos e estáveis tendem a ser os resultados, e quanto mais cenários envolvidos, maior o tempo de processamento requerido. Neste contexto, a utilização da ferramenta de redução SCENRED permitiu reduções significativas no tamanho da árvore de cenários sem ocasionar perdas na qualidade e na estabilidade da solução, além de viabilizar a aplicação do algoritmo de otimização estocástica proposto.

/ The medium- and long-term planning of the Brazilian hydroelectric operation is a large-scale problem involving many variables, among which the inflows to the reservoirs stand out. These inflows need to be estimated in order to characterize the future availability of electricity over a planning horizon. Among the existing approaches to estimating these inflows, the stochastic approach stands out: it considers the variables according to their probability distributions and aims to cover the most likely universe of outcomes. The stochastic approach can rely on stochastic models, which are often characterized by scenario trees representing the universe of possible occurrences.
However, due to the high dimensionality that the stochastic process can reach when considering very large trees, it becomes necessary to use complementary techniques aimed at reducing the number of scenarios. Based on this context, this dissertation discusses the stochastic optimization of hydroelectric generation planning, considering scenario trees and scenario reduction techniques, using the generation optimization model developed in the DSS HIDROTERM, written in the GAMS language. As a case study, stochastic optimization algorithms were developed and adapted to handle trees with a large number of scenarios, generated by periodic autoregressive (PAR) stochastic models; the clustering-based scenario reduction tool SCENRED, also written in GAMS, was then applied to these trees. The sensitivity analyses aimed to: validate the proposed stochastic optimization process; analyze the effects of using different reduced trees of inflow scenarios, the impact of considering different planning horizons, and the influence of the hydrological regime on the main results of the optimization process; and study the advantages and disadvantages of this process for hydroelectric operation planning. The results indicate that the stochastic optimization process effectively accounts for the randomness involved in predicting reservoir inflows. They also confirmed expected trends in stochastic optimization, such as the fact that the larger the scenario tree, the more accurate and stable the results tend to be, but also the greater the processing time required. In this context, the reduction tool SCENRED allowed significant reductions in the size of the scenario tree without loss of solution quality or stability, enabling the application of the proposed stochastic optimization algorithm.
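The scenario reduction step performed by a tool like SCENRED can be sketched with the classic distance-based rule: repeatedly delete the scenario that is cheapest to remove (its probability times the distance to its nearest remaining scenario) and reassign its probability to that nearest neighbour. The following standalone sketch, assuming one-dimensional inflow scenarios and illustrative data, is a simplification and not the GAMS/SCENRED implementation:

```python
def reduce_scenarios(scenarios, probs, keep):
    """Distance-based scenario reduction (simplified backward reduction).

    The cost of removing scenario i is probs[i] * distance to its nearest
    remaining scenario; the deleted probability is reassigned to that
    nearest neighbour, so the reduced set still sums to one.
    """
    kept = list(range(len(scenarios)))
    p = list(probs)
    while len(kept) > keep:
        best_i, best_cost, best_j = None, None, None
        for i in kept:
            # Nearest other kept scenario to i.
            j = min((k for k in kept if k != i),
                    key=lambda k: abs(scenarios[i] - scenarios[k]))
            cost = p[i] * abs(scenarios[i] - scenarios[j])
            if best_cost is None or cost < best_cost:
                best_i, best_cost, best_j = i, cost, j
        kept.remove(best_i)
        p[best_j] += p[best_i]  # probability redistribution rule
    return [(scenarios[i], p[i]) for i in kept]

# Hypothetical monthly inflow scenarios (hm^3), equally likely.
inflows = [100.0, 102.0, 150.0, 148.0, 300.0]
reduced = reduce_scenarios(inflows, [0.2] * 5, keep=3)
# → [(102.0, 0.4), (148.0, 0.4), (300.0, 0.2)]
```

Note how the extreme scenario (300.0) survives the reduction because it is far from all others: the redistribution rule preserves total probability while keeping the scenarios that matter most for the distance between the original and reduced distributions.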
