About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Server allocation subject to variance constraints

Ansell, Philip Stephen January 1999 (has links)
No description available.
2

An optimisation-based approach to FKPP-type equations

Driver, David Philip January 2018 (has links)
In this thesis, we study a class of reaction-diffusion equations of the form $\frac{\partial u}{\partial t} = \mathcal{L}u + \phi u - \tfrac{1}{k} u^{k+1}$, where $\mathcal{L}$ is the stochastic generator of a Markov process, $\phi$ is a function of the space variables and $k\in \mathbb{R}\backslash\{0\}$. An important example, arising when $k > 0$, is the FKPP-type equation. We also give an example from the theory of utility maximisation in which such equations arise with $k < 0$. We introduce a new representation of the solution of the equation as the optimal value of an optimal control problem, together with a second representation that can be seen as a dual to the first optimisation problem. We note that this is a new type of dual problem and compare it to the standard Lagrangian dual formulation. By choosing controls in the optimisation problems we obtain upper and lower bounds on the solution to the PDE. We use these bounds to study the speed of the wave front of the PDE in the case when $\mathcal{L}$ is the generator of a suitable Lévy process.
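The classical FKPP special case (taking $\mathcal{L}$ to be the Laplacian, $\phi = 1$ and $k = 1$) can be illustrated numerically. The finite-difference sketch below is not taken from the thesis; the grid spacing, time step and step initial condition are illustrative choices. It shows the travelling wave whose front speed the bounds above are used to study.

```python
def fkpp_step(u, dx, dt):
    """One explicit Euler step of u_t = u_xx + u - u^2, i.e. the equation
    above with L the Laplacian, phi = 1 and k = 1; boundary values held fixed."""
    new = u[:]
    for i in range(1, len(u) - 1):
        lap = (u[i + 1] - 2.0 * u[i] + u[i - 1]) / dx ** 2
        new[i] = u[i] + dt * (lap + u[i] - u[i] ** 2)
    return new

# A front between the stable state u = 1 and the unstable state u = 0
# invades the right half of the domain at speed close to the minimal speed 2.
dx, dt = 0.2, 0.005                       # dt/dx^2 = 0.125, explicit scheme stable
x = [-50.0 + dx * i for i in range(501)]
u = [1.0 if xi < 0.0 else 0.0 for xi in x]
for _ in range(800):                      # evolve to time t = 4
    u = fkpp_step(u, dx, dt)

# Locate the front as the grid point where u crosses 1/2; it has moved right.
front = x[min(range(len(u)), key=lambda i: abs(u[i] - 0.5))]
```

At time $t = 4$ the half-height point of the wave sits several units to the right of its starting position, consistent with a front speed approaching 2 from below.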
3

On the complexity of energy landscapes : algorithms and a direct test of the Edwards conjecture

Martiniani, Stefano January 2017 (has links)
When the states of a system can be described by the extrema of a high-dimensional function, the characterisation of its complexity, i.e. the enumeration of the accessible stable states, can be reduced to a sampling problem. In this thesis a robust numerical protocol is established, capable of producing numerical estimates of the total number of stable states for a broad class of systems, and of computing the a priori probability of observing any given state. The approach is demonstrated in the context of computing the configurational entropy of two- and three-dimensional jammed packings. By means of numerical simulation we show the extensivity of the granular entropy, as proposed by S.F. Edwards, for three-dimensional jammed soft-sphere packings, and produce a direct test of the Edwards conjecture for the equivalent two-dimensional systems. We find that Edwards' hypothesis of equiprobability of all jammed states holds only at the (un)jamming density, which is precisely the point of practical significance for many granular systems. Furthermore, two new recipes for the computation of high-dimensional volumes are presented that improve on the established approach, either by providing more statistically robust estimates of the volume or by exploiting the trajectories of the paths of steepest descent. Both methods also produce, as a natural by-product, unprecedented detail on the structure of high-dimensional basins of attraction. Finally, we present a novel Monte Carlo algorithm to tackle problems with fluctuating weight functions. The method is shown to improve accuracy in the computation of the 'volume' of high-dimensional 'fluctuating' basins of attraction and to identify transition states along known reaction coordinates. We argue that the approach can be extended to optimising the experimental conditions for observing certain phenomena for which individual measurements are stochastic and provide little guidance.
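The core sampling idea, mapping random configurations to their local minima by steepest descent and reading basin volumes off hit frequencies, can be sketched on a toy one-dimensional double-well landscape. Everything below (the energy function, step size, sample count) is an illustrative stand-in, not the thesis's protocol for jammed packings.

```python
import random

def grad(x):
    # Energy f(x) = (x**2 - 1)**2 has two minima, at x = -1 and x = +1.
    return 4.0 * x * (x * x - 1.0)

def descend(x, lr=0.01, steps=1000):
    """Steepest descent: map a sampled configuration to its local minimum."""
    for _ in range(steps):
        x -= lr * grad(x)
    return round(x)                     # converged x is close to -1 or +1

rng = random.Random(0)
n = 2000
hits = {-1: 0, 1: 0}
for _ in range(n):
    m = descend(rng.uniform(-2.0, 2.0))
    hits[m] = hits.get(m, 0) + 1

# Basin 'volume' estimate: hit frequency times the sampling-box volume (here 4).
vol_plus = hits[1] / n * 4.0
```

By symmetry each basin should occupy half the box, so the estimate of the right basin's volume should come out close to 2; the same frequency-counting logic, applied in very high dimension, is what turns state enumeration into a sampling problem.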
4

Optimisation-based retrofit of heat-integrated distillation systems

Enriquez Gutierrez, Victor Manuel January 2016 (has links)
Distillation systems consist of one or more distillation columns, in which a mixture is separated into higher-value products, and a heat exchanger network (HEN) that recovers and reuses heat within the system. For example, crude oil distillation systems comprise crude oil distillation units (CDU), in which crude oil is distilled into products for downstream processing, a HEN and a furnace. Heat-integrated distillation systems present complex interactions between the distillation columns and HEN. These interactions, together with the many degrees of freedom and process constraints, make it challenging to retrofit existing distillation processes or modify their operating conditions. Retrofit designs aim to re-use existing equipment when process objectives change, for example to increase throughput, improve product quality, or reduce energy consumption or environmental impact. To achieve these retrofit objectives, operational, structural and/or flowsheet modifications to the overall system (distillation columns and HEN) may be considered, subject to specifications and system constraints. This work proposes an optimisation-based approach to retrofit design for the capacity expansion of heat-integrated distillation systems, with a particular focus on crude oil distillation systems. Existing retrofit approaches found in the open research literature consider operational optimisation, replacing column internals, adding preflash or prefractionation units and HEN retrofit to increase the capacity of existing systems. Constraints considered usually relate to the distillation column hydraulic limits, product quality specifications and heat exchanger performance (e.g. minimum temperature approach and pressure drop).
However, no existing methodologies consider these possible modifications simultaneously; thus, beneficial interactions between flowsheet modifications, operational changes, heat integration and equipment modifications may be missed. In this work, retrofit design solutions for crude oil distillation are developed using a stochastic optimisation framework implemented in MATLAB to optimise the system operating parameters and to propose flowsheet, column and HEN modifications. Within the framework, the optimiser can propose the addition of a preflash unit, modifications to the CDU internals and changes to its operating conditions; the separation system is then simulated using Aspen HYSYS (via the MATLAB interface) and the hydraulic performance of the column is analysed using published hydraulic correlations. The optimiser also proposes modifications to the HEN (i.e. installed heat transfer area, HEN structure and operating conditions), which is then simulated to evaluate heating and cooling utility demand. Both simulated annealing and global search optimisation algorithms are applied to identify the optimal design and operating conditions that meet the production requirements and product specifications. Industrially relevant case studies demonstrate the effectiveness and benefits of using the proposed retrofit approach. The case studies illustrate that combined structural and operational modifications can be effectively and systematically identified to debottleneck an existing crude oil distillation system with a relatively short payback time, while simultaneously reducing energy consumption per barrel of crude oil processed.
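The stochastic search at the heart of such a framework can be sketched generically. The simulated-annealing skeleton below minimises a hypothetical quadratic "retrofit cost" under a penalised constraint standing in for a hydraulic limit; the objective, move size and cooling schedule are all illustrative inventions, not the authors' MATLAB/HYSYS implementation, in which each cost evaluation would be a full flowsheet simulation.

```python
import math
import random

def anneal(f, x0, neighbour, t0=1.0, cooling=0.995, iters=4000, seed=1):
    """Generic simulated annealing: accept worse moves with probability
    exp(-delta / T) and cool the temperature T geometrically."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbour(x, rng)
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

def cost(x):
    # Toy stand-in for a retrofit cost: quadratic operating cost plus a
    # penalty when a hypothetical hydraulic limit x[0] + x[1] <= 3 is violated.
    base = (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2
    penalty = 100.0 * max(0.0, x[0] + x[1] - 3.0) ** 2
    return base + penalty

def neighbour(x, rng):
    # Perturb one decision variable at a time by a small random amount.
    i = rng.randrange(2)
    y = list(x)
    y[i] += rng.uniform(-0.2, 0.2)
    return y

best, fbest = anneal(cost, [0.0, 0.0], neighbour)
```

The constrained optimum sits on the limit at roughly (1.5, 1.5) with cost about 0.5; the annealer's penalty term plays the role that hydraulic and product-quality constraints play in the actual framework.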
5

Supply chain design and distribution planning under supply uncertainty : Application to bulk liquid gas distribution

Dubedout, Hugues 03 June 2013 (has links) (PDF)
The distribution of cryogenic liquids in bulk, by tanker truck, is a particular case of a freight distribution supply chain. Traditionally, these optimisation problems are treated under certainty assumptions. However, a large part of real-world optimisation problems are subject to significant uncertainties due to noisy, approximated or unknown objective functions, data and/or environment parameters. In this research we investigate both robust and stochastic solutions. We study an inventory routing problem (IRP) and a production planning and customer allocation problem. For the former, we present a robust methodology with an advanced scenario generation technique. We show that, with minimal cost increase, we can significantly reduce the impact of a plant outage on the supply chain. We also show how the solution generation used in this method can be applied to the deterministic version of the problem to create an efficient GRASP, significantly improving the results of the existing algorithm. The production planning and customer allocation problem aims at making tactical decisions over a longer time horizon. We propose a single-period, two-stage stochastic model, where the first-stage decisions represent the initial decisions taken for the entire period and the second-stage decisions represent the recovery actions taken after an outage. We aim at making a tool that can be used both for decision making and for supply chain analysis; therefore, we present not only the optimised solution but also key performance indicators. We show on multiple real-life test cases that it is often possible to find solutions where a plant outage has only a minimal impact.
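The two-stage structure described above, deciding now and recovering after the outage is revealed, can be sketched with a scenario-based toy model. The supply scenarios, costs and demand below are invented for illustration; a real instance would replace the grid search with a mathematical programming solver.

```python
# Hypothetical scenarios: (available plant supply, probability) after the
# uncertainty is revealed - full production, partial outage, full outage.
scenarios = [(100.0, 0.7), (40.0, 0.2), (0.0, 0.1)]
demand = 80.0
hold = 1.0      # cost per unit of safety stock carried (first stage)
recourse = 5.0  # cost per unit bought from a backup source (second stage)

def expected_cost(stock):
    """First-stage holding cost plus expected second-stage recourse cost."""
    cost = hold * stock
    for supply, p in scenarios:
        shortfall = max(0.0, demand - supply - stock)
        cost += p * recourse * shortfall
    return cost

# First stage: choose the safety stock before the outage is revealed.
best_stock = min(range(0, 101), key=expected_cost)
```

Here the expected cost decreases until the stock covers the partial-outage scenario and rises afterwards, so the optimal hedge is 40 units: the first-stage decision deliberately over-stocks against outages it may never see, which is exactly the trade-off the thesis's stochastic model captures.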
6

Water Allocation Under Uncertainty – Potential Gains from Optimisation and Market Mechanisms

Starkey, Stephen Robert January 2014 (has links)
This thesis first develops a range of wholesale water market design options, based on an optimisation approach to market-clearing, as in electricity markets, focusing on the extent to which uncertainty is accounted for in bidding, market-clearing and contract formation. We conclude that the most promising option is bidding for, and trading, a combination of fixed and proportionally scaled contract volumes, which are based on optimised outputs. Other options include those which are based on a post-clearing fit (e.g. regression) to the natural optimised outputs, or constraining the optimisation such that cleared allocations are in the contractual form required by participants. Alternatively, participants could rely on financial markets to trade instruments, but informed by a centralised market-clearing simulation. We then describe a computational modelling system, using Stochastic Constructive Dynamic Programming (CDDP), and use it to assess the importance of modelling uncertainty, and correlations, in reservoir optimisation and/or market-clearing, under a wide range of physical and economic assumptions, with or without a market. We discuss a number of bases of comparison, but focus on the benefit gain achieved as a proportion of the perfectly competitive market value (price times quantity), calculated using the market clearing price from Markov Chain optimisation. With inflow and demand completely out of phase, high inflow seasonality and volatility, and a constant elasticity of -0.5, the greatest contribution of stochastic (Markov) optimisation, as a proportion of market value was 29%, when storage capacity was only 25% of mean monthly inflow, and with effectively unlimited release capacity. This proportional gain fell only slowly for higher storage capacities, but nearly halved for lower release capacities, around the mean monthly inflow, mainly because highly constrained systems produce high prices, and hence raise market value. 
The highest absolute gain was actually when release capacity was only 75% of mean monthly inflow. On average, over a storage capacity range from 2% to 1200%, and release capacity range from 100% to 400%, times the mean monthly inflow, the gains from using Markov Chain and Stochastic Independent optimisation, rather than deterministic optimisation, were 18% and 13% of market value, respectively. As expected, the gains from stochastic optimisation rose rapidly for lower elasticities, and when vertical steps were added to the demand curve. But they became nearly negligible when (the absolute value of) elasticity rose to 0.75 and beyond, inflow was in-phase with demand, or the range of either seasonal variation or intra-month variability reduced to ±50% of the mean monthly inflow. Still, our results indicate that there are a wide range of reservoir and economic systems where accounting for uncertainty directly in the water allocation process could result in significant gains, whether in a centrally controlled or market context. Price and price risk, which affect individual participants, were significantly more sensitive. Our hope is that this work helps inform parties who are considering enhancing their water allocation practices with improved stochastic optimisation, and potentially market based mechanisms.
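The kind of stochastic reservoir optimisation compared above can be sketched as value iteration over a discretised storage level with random inflows. The sketch below is a generic stochastic dynamic program, not the thesis's CDDP system; the storage grid, inflow distribution, discount factor and concave square-root benefit curve are all illustrative choices.

```python
import math

storages = range(0, 11)                     # discretised storage, 0..10 units
inflows = [(0, 0.3), (2, 0.4), (4, 0.3)]    # (random inflow, probability)
beta = 0.95                                 # monthly discount factor

def benefit(release):
    # Concave release benefit, a stand-in for a demand curve with negative elasticity.
    return math.sqrt(release)

def value_iteration(iters=200):
    """Bellman iteration: v(s) = max over release r of benefit(r) + beta E v(s')."""
    v = {s: 0.0 for s in storages}
    for _ in range(iters):
        new = {}
        for s in storages:
            best = -1.0
            for r in range(0, s + 1):           # release cannot exceed storage
                ev = 0.0
                for q, p in inflows:            # expectation over random inflow
                    s_next = min(10, s - r + q) # spill above storage capacity
                    ev += p * v[s_next]
                best = max(best, benefit(r) + beta * ev)
            new[s] = best
        v = new
    return v

v = value_iteration()
```

The converged value function is increasing in storage, and the marginal value of water (differences of v) plays the role of the market-clearing price in the optimisation-based market designs discussed above. Deterministic optimisation would replace the inflow expectation with its mean, which is where the gains reported in the thesis come from.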
7

Autour De L'Usage des gradients en apprentissage statistique / Around the Use of Gradients in Machine Learning

Massé, Pierre-Yves 14 December 2017 (has links)
We prove a local convergence theorem for RTRL, the classical optimisation algorithm for dynamical systems, in a nonlinear setting. RTRL is an online algorithm, but it must maintain a large amount of information, which makes it unfit to train even moderately large learning models. The NBT algorithm remedies this by maintaining a small, unbiased random approximation of this information. We also prove convergence, with probability arbitrarily close to one, of NBT to the local optimum reached by RTRL. We further formalise the LLR algorithm and study it experimentally on synthetic data. LLR adaptively updates the step size of a gradient descent by performing gradient descent on the step size itself. It thus gives a partial answer to the problem of choosing the step size numerically, a choice that strongly influences the descent procedure and otherwise requires a potentially lengthy empirical search by the practitioner.
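The idea behind LLR, a gradient descent on the step size of a gradient descent, resembles what is elsewhere called hypergradient descent. A minimal sketch on f(x) = x^2 follows; the meta step size and the clamp on the step size are illustrative safeguards, not details taken from the thesis.

```python
def grad(x):
    return 2.0 * x                     # gradient of the objective f(x) = x^2

x, eta, meta = 5.0, 0.01, 0.002        # iterate, step size, meta step size
g_prev = 0.0
for _ in range(200):
    g = grad(x)
    # Hypergradient of the loss w.r.t. eta is -g * g_prev: grow eta while
    # successive gradients agree, shrink it when they oppose each other.
    # The clamp to [1e-6, 0.4] is a practical safeguard, not part of LLR.
    eta = min(0.4, max(1e-6, eta + meta * g * g_prev))
    x -= eta * g
    g_prev = g
```

Starting from a deliberately tiny step size, the meta-update quickly inflates it to a useful value and the iterate converges to the minimum, which is the practical point: the practitioner no longer has to hand-tune the step size in advance.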
8

Employees Provident Fund (EPF) Malaysia : generic models for asset and liability management under uncertainty

Sheikh Hussin, Siti Aida January 2012 (has links)
We describe the Employees Provident Fund (EPF) Malaysia. We explain Defined Contribution and Defined Benefit pension funds and examine their similarities and differences. We also briefly discuss and compare EPF schemes in four Commonwealth countries. A family of stochastic programming models is developed for the Employees Provident Fund Malaysia. This is a family of ex-ante decision models whose main aim is to manage, that is, balance, assets and liabilities. The decision models comprise Expected Value Linear Programming, Two-Stage Stochastic Programming with recourse, Chance Constrained Programming and Integrated Chance Constraints Programming. For the last three decision models we use scenario generators which capture the uncertainties of asset returns, salary contributions and lump-sum liability payments. These scenario generation models for assets and liabilities were developed and calibrated using historical data. The resulting decisions are evaluated with in-sample analysis using typical risk-adjusted performance measures, and out-of-sample testing is carried out with a larger set of generated scenarios. The benefits of two-stage stochastic programming over deterministic approaches for asset allocation, as well as the amount of borrowing needed for each pre-specified growth dividend, are demonstrated. The contributions of this thesis are i) an insightful overview of EPF; ii) construction of scenarios for asset returns and liabilities with different values of growth dividend, combining a Markov population model with a salary growth model and retirement payments; iii) construction and analysis of generic ex-ante decision models taking into consideration uncertain asset returns and uncertain liabilities; iv) testing and performance evaluation of these decisions in an ex-post setting.
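A chance-constrained decision of the kind listed above can be sketched by generating return scenarios and checking the shortfall probability of candidate allocations. The asset returns, liability level and 10% shortfall tolerance below are invented for illustration and bear no relation to EPF's actual figures or the thesis's calibrated scenario generators.

```python
import random

rng = random.Random(42)
budget, liability = 100.0, 95.0

# Hypothetical one-period scenarios: a fixed safe return paired with a
# lognormal risky return (mean log-return 5%, volatility 20%).
scenarios = [(1.03, rng.lognormvariate(0.05, 0.2)) for _ in range(10000)]

def shortfall_prob(w_risky):
    """Estimated P(end-of-period assets < liability) for a given risky weight."""
    short = sum(
        1
        for safe, risky in scenarios
        if budget * ((1.0 - w_risky) * safe + w_risky * risky) < liability
    )
    return short / len(scenarios)

# Chance constraint: P(assets >= liability) >= 0.9, i.e. shortfall prob <= 0.1.
# Take the largest risky weight on a coarse grid that still satisfies it.
feasible = [w / 20.0 for w in range(21) if shortfall_prob(w / 20.0) <= 0.10]
w_max = max(feasible)
```

The constraint caps the risky exposure well below 100%: beyond a certain weight the estimated shortfall probability exceeds the tolerance. An integrated chance constraint would instead bound the expected size of the shortfall rather than its probability.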
10

Algorithmes stochastiques pour l'apprentissage, l'optimisation et l'approximation du régime stationnaire / Stochastic algorithms for learning, optimization and approximation of the steady regime

Saadane, Sofiane 02 December 2016 (has links)
In this thesis, we study several topics around stochastic algorithms, and we therefore begin the manuscript with general background on these algorithms, giving historical results to set the framework of our work. We then study a bandit algorithm due to Narendra and Shapiro, whose objective is to determine which of several sources is the most profitable to the user while avoiding spending too much time testing the less profitable ones. Our goal is first to understand the structural weaknesses of this algorithm, and then to propose an optimal procedure for the regret, a quantity measuring the performance of a bandit algorithm. In our results, we propose an algorithm called the over-penalised NS algorithm, which attains a minimax-optimal regret bound through a fine study of the stochastic algorithm underlying the procedure. A second contribution gives rates of convergence for the process appearing in the study of the convergence in law of the over-penalised NS algorithm. The particularity of this algorithm is that it does not converge in law towards a diffusion, as most stochastic algorithms do, but towards a non-diffusive jump process, which makes the study of convergence to equilibrium more technical; we employ a coupling technique to study this convergence. The second part of the thesis concerns the optimisation of a function by means of a stochastic algorithm. We study a stochastic version of the deterministic heavy ball method with friction. The particularity of this algorithm is its dynamics, which averages over the whole past of the trajectory. The procedure relies on a so-called memory function which, depending on the form it takes, yields interesting behaviours. In our study, two types of memory prove relevant: exponential and polynomial. We first establish convergence results in the general case where the function to be minimised is non-convex. For strongly convex functions, we obtain convergence rates that are optimal in a sense we define. The study ends with a convergence-in-law result for the suitably renormalised process. The third part deals with McKean-Vlasov processes, introduced by Anatoly Vlasov and first studied by Henry McKean to model the distribution function of plasma. Our objective is to propose a stochastic algorithm capable of approximating the invariant measure of the process. Methods for approximating an invariant measure are known for diffusions and some other processes, but the particularity of the McKean-Vlasov process is that it is not a linear diffusion: like the heavy ball process, it has memory. We therefore develop an alternative method, introducing the notion of asymptotic pseudo-trajectories in order to obtain an efficient procedure.
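The stochastic heavy ball with exponential memory studied in the second part can be sketched as follows: the velocity is an exponentially weighted average of past noisy gradients, so the whole past of the trajectory enters through the memory term. The objective, step sizes and noise level are illustrative choices, not the thesis's setting.

```python
import random

def noisy_grad(x, rng):
    # Strongly convex objective f(x) = x^2 observed through noisy gradients.
    return 2.0 * x + rng.gauss(0.0, 0.1)

rng = random.Random(3)
x, v = 5.0, 0.0
alpha, beta_mem = 0.05, 0.9   # step size and exponential-memory weight
for _ in range(2000):
    g = noisy_grad(x, rng)
    v = beta_mem * v + (1.0 - beta_mem) * g   # exponential memory of gradients
    x -= alpha * v                            # heavy-ball position update
```

With exponential memory the influence of old gradients decays geometrically; a polynomial memory, the other regime studied in the thesis, would weight the past more heavily and lead to different convergence rates.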
