  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Mechanism Design Theory for Service Contracts

Hong, Sukhwa 05 October 2015
This paper presents a novel approach for designing and optimizing maintenance service contracts through the application of mechanism design theory. When offering a contract to its customer, the maintenance service provider seeks to specify contract terms, such as price, service features, and incentives, that maximize the provider's profit, satisfy customer needs, allocate risks effectively, and mitigate moral hazards. Optimal contract design has to account for asymmetric information and uncertainties associated with customer characteristics and behaviors. We illustrate our mechanism design approach by applying it to the contract design challenge of a gas turbine manufacturer that also provides maintenance services for its aircraft engines. In our solution approach, we compute an optimal set of contracts. The entire set is presented to the customer and is designed such that the customer will accept one of the contract alternatives without negotiations. In addition to eliminating the costs and delays associated with negotiations, this approach also reveals the customer's private information to the service provider, which the provider can use to its benefit in maintenance management and future contract renewals. Furthermore, we design and incorporate win-win incentive mechanisms into the contracts, which reward the customer for actions that reduce maintenance costs. We present a deterministic and a stochastic mechanism design model, the latter accounting for uncertainties associated with customer actions, engine performance, and maintenance costs during the contract execution phase. / Master of Science
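The menu-of-contracts idea can be illustrated with a textbook two-type screening sketch (not the thesis's gas-turbine model): the provider prices a menu so that each customer type self-selects one contract, revealing its private information without negotiation. All valuations, probabilities, and costs below are invented.

```python
import itertools

# Textbook two-type screening sketch: a provider designs a menu of
# (service level, price) contracts so that each customer type self-selects
# one contract.  Valuations, cost, and type probabilities are invented.
THETAS = {"low": 1.0, "high": 2.0}    # private per-unit valuations
PROBS = {"low": 0.5, "high": 0.5}     # provider's beliefs over the types
COST = 0.6                            # provider's cost per unit of service

def value(theta, q):                  # customer's gross utility from level q
    return theta * q - 0.5 * q * q

grid = [i / 10 for i in range(41)]    # candidate service levels 0.0 .. 4.0
best = None
for q_lo, q_hi in itertools.product(grid, repeat=2):
    t_lo = value(THETAS["low"], q_lo)          # low type: zero surplus (IR binds)
    # the high type must not prefer mimicking the low contract (IC binds):
    t_hi = value(THETAS["high"], q_hi) - (value(THETAS["high"], q_lo) - t_lo)
    if t_hi < 0:
        continue
    # the low type must not prefer the high contract either:
    if value(THETAS["low"], q_hi) - t_hi > value(THETAS["low"], q_lo) - t_lo + 1e-9:
        continue
    profit = (PROBS["low"] * (t_lo - COST * q_lo)
              + PROBS["high"] * (t_hi - COST * q_hi))
    if best is None or profit > best[0]:
        best = (profit, (q_lo, t_lo), (q_hi, t_hi))

profit, lo, hi = best
print(f"menu: low={lo}, high={hi}, expected profit={profit:.3f}")
```

With these numbers the optimal menu excludes the low type entirely (its contract is (0, 0)) and sells the efficient level 1.4 to the high type: a standard screening outcome where serving the low type would cost more in information rent than it earns.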

Stochastic optimization problems: decomposition and coordination under risk

Gérard, Henri 26 October 2018
We consider stochastic optimization and game theory problems with risk measures. In the first part, we focus on time consistency. We begin by proving an equivalence between time-consistent mappings and the existence of a nested formula. Motivated by well-known examples in risk measures, we investigate three classes of mappings: translation-invariant, Fenchel-Moreau transform, and supremum mappings. Then, we extend the concept of time consistency to player consistency, by replacing sequential time with any unordered set and mappings with any binary relations. Finally, we show how player consistency relates to sequential and parallel forms of decomposition in optimization. In the second part, we study how risk measures affect the multiplicity of equilibria in dynamic game problems in complete and incomplete markets. We design an example where the introduction of risk measures leads to the existence of three equilibria instead of one in the risk-neutral case. We analyze the ability of two different algorithms to recover the different equilibria, and we discuss links between player consistency and equilibrium problems in games. In the third part, we study distributionally robust optimization in machine learning. Using convex risk measures, we provide a unified framework and propose an adapted algorithm covering three ambiguity sets discussed in the literature.

Adaptive sampling-based motion planning with control barrier functions

Ahmad, Ahmad Ghandi 27 September 2021
In this thesis we modify a sampling-based motion planning algorithm to improve sampling efficiency. First, we modify the RRT* motion planning algorithm with a local motion planner that guarantees collision-free state trajectories without explicitly checking for collision with obstacles. The control trajectories are generated by solving a sequence of quadratic programs with Control Barrier Function (CBF) constraints. If the control trajectories satisfy the CBF constraints, the state trajectories are guaranteed to stay in the free subset of the state space. Second, we use a stochastic optimization algorithm to adapt the sampling density function of RRT* to increase the probability of sampling in promising regions of the configuration space. In our approach, the nonparametric generalized cross-entropy (GCE) method is used for importance sampling, where a subset of the sampled RRT* trajectories is incrementally exploited to adapt the density function. The modified algorithms, CBF-RRT* and Adaptive CBF-RRT*, are demonstrated with numerical examples using unicycle dynamics. Adaptive CBF-RRT* has been shown to yield lower-cost paths with fewer tree vertices than CBF-RRT*. / 2022-03-27T00:00:00Z
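The CBF "safety filter" idea behind the local planner can be sketched in its simplest form: single-integrator dynamics and one circular obstacle, with the quadratic program replaced by its closed-form solution (projection onto a single half-space). The obstacle, gain, and horizon are invented; the thesis itself solves QPs under unicycle dynamics.

```python
# Minimal CBF safety-filter sketch: keep h(x) >= 0 forward invariant by
# enforcing  dh/dt + ALPHA * h(x) >= 0  on the applied control.  For a
# single integrator (xdot = u) with one constraint, the QP reduces to a
# closed-form projection of the nominal control onto a half-space.
OBSTACLE, RADIUS, ALPHA = (2.0, 0.0), 1.0, 1.0

def h(x):                          # barrier function: positive outside obstacle
    return (x[0] - OBSTACLE[0])**2 + (x[1] - OBSTACLE[1])**2 - RADIUS**2

def safe_control(x, u_nom):
    """Min-norm correction of u_nom so that grad_h . u + ALPHA * h(x) >= 0."""
    a = (2 * (x[0] - OBSTACLE[0]), 2 * (x[1] - OBSTACLE[1]))  # grad h
    b = -ALPHA * h(x)                                         # need a.u >= b
    slack = a[0] * u_nom[0] + a[1] * u_nom[1] - b
    if slack >= 0:                                            # already safe
        return u_nom
    n2 = a[0] * a[0] + a[1] * a[1]                            # project onto
    return (u_nom[0] - slack * a[0] / n2,                     # the half-space
            u_nom[1] - slack * a[1] / n2)

x, goal, dt = [0.0, 0.05], (4.0, 0.0), 0.05
for _ in range(400):               # drive toward a goal behind the obstacle;
    u_nom = (goal[0] - x[0], goal[1] - x[1])   # the filter bends the path
    u = safe_control(x, u_nom)
    x[0] += dt * u[0]
    x[1] += dt * u[1]

print("final state:", [round(v, 3) for v in x], "h:", round(h(x), 3))
```

The filtered trajectory slides tangentially around the obstacle and still reaches the goal, without any explicit collision check along the way.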

Business models for emission management

Novotná, Tereza January 2021
This thesis focuses on multistage stochastic programming models for CO2 emissions management. It presents multistage stochastic programming in general and the portfolio selection problem with a utility function, as well as newer approaches such as risk measures, chance constraints, and second-order stochastic dominance. To test these models, the problems are reformulated to fit generated scenario trees; for some models the dimension of the scenario tree must also be reduced, so techniques for scenario tree dimensionality reduction are discussed. All of these approaches are applied to real data from the power energy sector, where decisions about allowances are particularly important because companies in this sector are not granted any free allowances.
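A toy two-stage version of the allowance-purchasing decision makes the risk-measure approach concrete (invented numbers, not the thesis data): buy q allowances now at a known price, buy any shortfall next year at an uncertain spot price, and choose q to minimize the Conditional Value-at-Risk (CVaR) of the total cost.

```python
# Toy two-stage allowance purchase under spot-price uncertainty, optimized
# for CVaR of total cost.  Prices, demand, and scenarios are invented.
P_NOW, DEMAND, ALPHA = 100.0, 10.0, 0.9
SCENARIOS = [(0.6, 80.0), (0.3, 100.0), (0.1, 180.0)]   # (prob, spot price)

def cvar(outcomes, alpha):
    """CVaR_alpha: expected cost within the worst (1 - alpha) tail."""
    tail = 1.0 - alpha
    acc, left = 0.0, tail
    for p, c in sorted(outcomes, key=lambda pc: -pc[1]):  # worst cost first
        take = min(p, left)
        acc += take * c
        left -= take
        if left <= 1e-12:
            break
    return acc / tail

best_q, best_risk = None, float("inf")
for q in [i * 0.5 for i in range(21)]:                    # buy 0 .. 10 now
    outcomes = [(p, P_NOW * q + spot * (DEMAND - q)) for p, spot in SCENARIOS]
    risk = cvar(outcomes, ALPHA)
    if risk < best_risk:
        best_q, best_risk = q, risk

print(f"buy now: {best_q} allowances, CVaR_{ALPHA} of cost: {best_risk:.1f}")
```

With these numbers the expected spot price is 96, so a risk-neutral buyer would postpone every purchase; CVaR with alpha = 0.9 instead selects the full hedge of 10 allowances, illustrating how the choice of risk measure changes the decision.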

Retrospective Approximation for Smooth Stochastic Optimization

David T Newton (15369535) 30 April 2023
<p>Stochastic Gradient Descent (SGD) is a widely-used iterative algorithm for solving stochastic optimization problems for a smooth (and possibly non-convex) objective function via queries from a first-order stochastic oracle.</p> <p>In this dissertation, we critically examine SGD's choice of executing a single step as opposed to multiple steps between subsample updates. Our investigation leads naturally to generalizing SGD into Retrospective Approximation (RA) where, during each iteration, a deterministic solver executes possibly multiple steps on a subsampled deterministic problem and stops when further solving is deemed unnecessary from the standpoint of statistical efficiency. RA thus leverages what is appealing for implementation: during each iteration, a solver, e.g., L-BFGS with backtracking line search, is used as-is, and the subsampled objective function is solved only to the extent necessary. We develop a complete theory using the relative error of the observed gradients as the principal object, demonstrating that almost-sure and L1 consistency of RA are preserved under especially weak conditions when sample sizes are increased at appropriate rates. We also characterize the iteration and oracle complexity (for linear and sublinear solvers) of RA, and identify two practical termination criteria, one of which we show leads to optimal complexity rates. The message from extensive numerical experiments is that the ability of RA to incorporate existing second-order deterministic solvers in a strategic manner is useful both in terms of algorithmic trajectory and in dispensing with hyper-parameter tuning.</p>
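The single-step-versus-multi-step contrast can be sketched on a toy problem. Everything here (the uniform noise model, the sample-size doubling and tolerance halving, the plain gradient-descent inner solver) is an illustrative choice, not the dissertation's actual schedule or solver:

```python
import random

# Sketch of the Retrospective Approximation (RA) template on a toy problem:
# minimize E[(x - Z)^2] with Z ~ Uniform(2, 4), so the minimizer is x* = 3.
# Each outer iteration draws a larger subsample, and a *deterministic*
# solver runs as many steps as needed on that fixed subsample -- in
# contrast with SGD's single step per draw.
random.seed(0)

def inner_solve(sample, x0, tol):
    """Deterministic gradient descent on the fixed subsampled objective."""
    x = x0
    while True:
        grad = sum(2.0 * (x - z) for z in sample) / len(sample)
        if abs(grad) < tol:       # stop once solved "to statistical necessity"
            return x
        x -= 0.4 * grad           # contraction: x moves 80% toward sample mean

x, n, tol = 0.0, 8, 0.5
for k in range(8):
    sample = [random.uniform(2.0, 4.0) for _ in range(n)]
    x = inner_solve(sample, x, tol)   # warm-start from the previous solution
    n, tol = 2 * n, tol / 2           # grow sample size, tighten tolerance

print(f"RA estimate after 8 outer iterations: {x:.3f} (true minimizer: 3)")
```

The key design choice RA exposes is the pairing of sample-size growth with solver tolerance: solving a small subsample to high accuracy wastes work, while solving a large subsample loosely wastes data.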

Stochastic Frank-Wolfe Algorithm: Uniform Sampling Without Replacement

Håkman, Olof January 2023
The Frank-Wolfe (FW) optimization algorithm, due to its projection-free property, has gained popularity in recent years, with typical applications in the field of machine learning. In the stochastic setting, it is still relatively understudied in comparison to the more expensive projected method of Stochastic Gradient Descent (SGD). We propose a novel Stochastic Frank-Wolfe (SFW) algorithm, inspired by SGDo, where random sampling is considered without replacement. In analogy to this abbreviation, we call the proposed algorithm SFWo. We consider a convex setting of gradient-Lipschitz (smooth) functions over compact domains. Depending on experiment design (LASSO, Matrix Sensing, Matrix Completion), the SFWo algorithm exhibits faster or matching empirical convergence, and a tendency to be bounded in suboptimality by its precursor SFW. Benchmarks on both synthetic and real-world data show that SFWo improves on the number of stochastic gradient evaluations needed to achieve the same guarantee as SFW.
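The without-replacement idea can be sketched in a simplified stochastic Frank-Wolfe loop (not the thesis's algorithm or experiments): each epoch shuffles the data once and sweeps it in minibatches, so every sample is used exactly once per epoch, and the linear minimization oracle over the l1-ball keeps the method projection-free. The toy data, step size 2/(t+2), and batch size are illustrative choices.

```python
import random

# Stochastic Frank-Wolfe with without-replacement (shuffled) minibatches on
# least squares over the l1-ball of radius r.  Toy Gaussian data.
random.seed(1)
n, d, r, batch = 200, 5, 2.0, 20
x_true = [1.5, -0.5, 0.0, 0.0, 0.0]              # ||x_true||_1 = r
A = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
b = [sum(a[j] * x_true[j] for j in range(d)) for a in A]

def objective(x):
    return sum((sum(A[i][j] * x[j] for j in range(d)) - b[i]) ** 2
               for i in range(n)) / (2 * n)

x, t = [0.0] * d, 0
for epoch in range(30):
    order = list(range(n))
    random.shuffle(order)                        # sampling without replacement
    for start in range(0, n, batch):
        rows = order[start:start + batch]
        g = [0.0] * d                            # minibatch gradient
        for i in rows:
            resid = sum(A[i][j] * x[j] for j in range(d)) - b[i]
            for j in range(d):
                g[j] += A[i][j] * resid / len(rows)
        k = max(range(d), key=lambda j: abs(g[j]))   # LMO on the l1-ball:
        s = [0.0] * d
        s[k] = -r if g[k] > 0 else r                 # best vertex is +-r*e_k
        gamma = 2.0 / (t + 2)                        # standard FW step size
        x = [(1 - gamma) * xj + gamma * sj for xj, sj in zip(x, s)]
        t += 1

print("estimate:", [round(v, 2) for v in x],
      "objective:", round(objective(x), 4))
```

Each iterate is a convex combination of l1-ball vertices, so feasibility never requires a projection; only the gradient sampling scheme distinguishes the with- and without-replacement variants.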

Saddle point techniques in convex composite and error-in-measurement optimization

He, Niao 07 January 2016
This dissertation aims to develop efficient algorithms with improved scalability and stability properties for large-scale optimization and optimization under uncertainty, and to bridge some of the gaps between modern optimization theories and recent applications emerging in the Big Data environment. To this end, the dissertation is dedicated to two important subjects: i) large-scale convex composite optimization and ii) error-in-measurement optimization. In spite of the different natures of these two topics, the common denominator lies in their accommodation of the systematic use of saddle point techniques for mathematical modeling and numerical processing. The main body can be split into three parts. In the first part, we consider a broad class of variational inequalities with composite structures, allowing us to cover the saddle point/variational analogies of classical convex composite minimization (i.e., the sum of a smooth convex function and a simple nonsmooth convex function). We develop novel composite versions of the state-of-the-art Mirror Descent and Mirror Prox algorithms aimed at solving this type of problem. We demonstrate that the algorithms inherit the favorable efficiency estimates of their prototypes when solving structured variational inequalities. Moreover, we develop several variants of the composite Mirror Prox algorithm along with their corresponding complexity bounds, allowing the algorithm to handle the case of an imprecise prox mapping as well as the case when the operator is represented by an unbiased stochastic oracle. In the second part, we investigate four general types of large-scale convex composite optimization problems: (a) multi-term composite minimization, (b) linearly constrained composite minimization, (c) norm-regularized nonsmooth minimization, and (d) maximum likelihood Poisson imaging. 
We demonstrate that the composite Mirror Prox, when integrated with saddle point techniques and other algorithmic tools, can solve all these optimization problems with the best convergence rates known so far. Our main contributions are as follows. First, for problems of type (a), we develop an optimal algorithm by integrating the composite Mirror Prox with a saddle point reformulation based on exact penalty. Second, for problems of type (b), we develop a novel algorithm that reduces the problem to solving a "small series" of saddle point subproblems and achieves an optimal, up to log factors, complexity bound. Third, for problems of type (c), we develop a Semi-Proximal Mirror-Prox algorithm by leveraging the saddle point representation and linear minimization over the problem's domain, attaining optimality in both the number of calls to the first-order oracle representing the objective and the number of calls to the linear minimization oracle representing the problem's domain. Lastly, for problems of type (d), we show that the composite Mirror Prox, when applied to the saddle point reformulation, circumvents the difficulty of the objective's non-Lipschitz continuity and exhibits a better convergence rate than the typical rate for nonsmooth optimization. We conduct extensive numerical experiments and illustrate the practical potential of our algorithms in a wide spectrum of applications in machine learning and image processing. In the third part, we examine error-in-measurement optimization, referring to decision-making problems with data subject to measurement errors; such problems arise naturally in a number of important applications, such as privacy learning, signal processing, and portfolio selection. 
Due to the postulated observation scheme and the specific structure of the problem, straightforward application of standard stochastic optimization techniques such as Stochastic Approximation (SA) and Sample Average Approximation (SAA) is out of the question. Our goal is to develop computationally efficient and, hopefully, not too conservative data-driven techniques applicable to a broad scope of problems and allowing for theoretical performance guarantees. We present two such approaches: one relying on a fully algorithmic calculus of saddle point representations of convex-concave functions, and the other on a general approximation scheme for convex stochastic programming. Both approaches allow us to convert the problem of interest to a form amenable to SA or SAA. The latter developments focus primarily on two important applications: affine signal processing and indirect support vector machines.
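The core (non-composite) Mirror Prox extragradient scheme is compact enough to sketch on a bilinear saddle point min_x max_y x^T A y over two probability simplexes, with the entropy prox making each prox step a multiplicative update. The 3x3 game, step size, and horizon are illustrative; the composite and stochastic variants developed in the dissertation are omitted.

```python
import math

# Mirror Prox (extragradient) with entropic prox on a matrix game.
A = [[0.0, 1.0, -1.0],
     [-1.0, 0.0, 1.0],
     [1.0, -1.0, 0.0]]            # rock-paper-scissors payoff matrix
ETA, T = 0.5, 200

def mat_vec(M, v):                # (A v)_i  -- gradient in x
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def mat_t_vec(M, v):              # (A^T v)_j -- gradient in y
    return [sum(M[i][j] * v[i] for i in range(3)) for j in range(3)]

def prox(p, g, eta=ETA):          # entropic prox step: multiplicative update
    w = [pi * math.exp(-eta * gi) for pi, gi in zip(p, g)]
    s = sum(w)
    return [wi / s for wi in w]

x, y = [1/3] * 3, [1/3] * 3
avg_x, avg_y = [0.0] * 3, [0.0] * 3
for _ in range(T):
    xh = prox(x, mat_vec(A, y))                     # 1) extrapolation step
    yh = prox(y, [-g for g in mat_t_vec(A, x)])
    x = prox(x, mat_vec(A, yh))                     # 2) update with gradients
    y = prox(y, [-g for g in mat_t_vec(A, xh)])     #    taken at the midpoint
    avg_x = [a + v / T for a, v in zip(avg_x, xh)]  # guarantee holds for the
    avg_y = [a + v / T for a, v in zip(avg_y, yh)]  # averaged midpoints

gap = max(mat_t_vec(A, avg_x)) - min(mat_vec(A, avg_y))  # duality gap
print("approximate equilibrium:", [round(v, 3) for v in avg_x],
      "duality gap:", round(gap, 4))
```

The averaged midpoint iterates enjoy the O(1/T) duality-gap guarantee, which is the property the composite versions in the dissertation are built to preserve.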

Stochastic activity networks

Sůva, Pavel January 2011
In the present work, stochastic network models representing a project as a set of activities are studied, as well as different approaches to these models. The critical path method, stochastic network models with probability constraints, finding a reference project duration, worst-case analysis in stochastic networks, and optimization of the parameters of the probability distributions of the activity durations are studied. The use of stochastic network models in telecommunications networks is also briefly presented. In a numerical study, some of these models are implemented and the related numerical results are analyzed.
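The critical path computation is short enough to sketch, together with a Monte Carlo illustration of why stochastic durations matter: since the makespan is a maximum over path lengths (a convex function of the durations), the expected makespan exceeds the makespan computed from expected durations. The network and durations below are invented.

```python
import random

# Critical path method (CPM) on a small activity-on-node project, plus a
# Monte Carlo estimate of the expected makespan under random durations.
PRED = {"A": [], "B": [], "C": ["A"], "D": ["A", "B"], "E": ["C", "D"]}
MEAN = {"A": 3.0, "B": 4.0, "C": 2.0, "D": 1.0, "E": 2.0}
ORDER = ["A", "B", "C", "D", "E"]        # a topological order of activities

def makespan(dur):
    finish = {}
    for act in ORDER:                    # earliest-finish forward pass
        start = max((finish[p] for p in PRED[act]), default=0.0)
        finish[act] = start + dur[act]
    return max(finish.values())

det = makespan(MEAN)                     # CPM with expected durations

random.seed(0)
trials = 20000
mc = 0.0
for _ in range(trials):                  # durations uniform +-50% around mean
    dur = {a: random.uniform(0.5 * m, 1.5 * m) for a, m in MEAN.items()}
    mc += makespan(dur) / trials

print(f"CPM makespan at mean durations: {det}; "
      f"Monte Carlo expected makespan: {mc:.2f}")
```

This network has two critical paths of equal length (A-C-E and B-D-E), which makes the gap between the two numbers especially visible: whichever path happens to be longer dominates in each scenario.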

Decentralized optimization for energy efficiency under stochasticity

Pacaud, François 25 October 2018
New energy systems are designed to absorb a large share of renewable energy in a decentralized fashion. Their optimized management raises specific issues. We study their mathematical formulation as large-scale multistage stochastic optimization problems, focusing on time and space decomposition methods in a stochastic setting. In the first part of this manuscript, Time decomposition in optimization and management of home microgrids, we apply stochastic optimization algorithms to the management of small-scale microgrids. We compare different optimization algorithms on two examples: a domestic microgrid equipped with a micro combined heat and power generator and a battery, and a domestic microgrid equipped with a battery and solar panels. In the second part, Mixing time and spatial decomposition in large-scale optimization problems, we extend the previous studies to larger microgrids, where different units and storage devices are connected together. As a direct resolution of such large-scale systems by Dynamic Programming is intractable, we propose original algorithms mixing time decomposition on the one hand, and price and resource spatial decomposition on the other hand. In the third part, Contributions to Stochastic Dual Dynamic Programming, we focus on the Stochastic Dual Dynamic Programming (SDDP) algorithm, a well-known algorithm for solving multistage stochastic optimization problems. We present a new stopping criterion based on a dual version of SDDP, which gives a deterministic upper bound for the primal problem.
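The Dynamic Programming baseline whose poor scaling motivates the decomposition methods can be shown in miniature: a single battery, three time steps, and made-up prices and demand scenarios. The sketch assumes an unlimited charge rate and free disposal of surplus energy; real microgrid models are far larger, which is exactly why direct DP becomes intractable.

```python
# Backward-induction DP for a tiny battery-management problem: buy energy
# at a time-varying price to serve an uncertain demand, using the battery
# to shift purchases away from the expensive final stage.
SOC = [0, 1, 2, 3, 4]             # battery state of charge (kWh), discretized
PRICES = [1.0, 1.0, 5.0]          # energy price at each stage
DEMANDS = [(0.5, 1), (0.5, 2)]    # (probability, demand in kWh) per stage

V = {s: 0.0 for s in SOC}         # terminal value function
for t in reversed(range(len(PRICES))):
    newV = {}
    for s in SOC:                 # Bellman backup at stage t, state s
        best = float("inf")
        for nxt in SOC:           # decision: state of charge to end the stage at
            cost = 0.0
            for p, d in DEMANDS:  # probabilities sum to 1, so V[nxt] is
                buy = max(d + nxt - s, 0)            # added once in expectation
                cost += p * (PRICES[t] * buy + V[nxt])
            best = min(best, cost)
        newV[s] = best
    V = newV

print(f"expected cost starting from an empty battery: {V[0]}")
```

The state-space grid here has 5 points; with many interconnected units and storages, the grid grows exponentially, and that curse of dimensionality is what the price and resource decompositions (and SDDP's cutting-plane value functions) are designed to sidestep.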

Study of reliability applied to real-time optimization of the operation of water supply networks

Odan, Frederico Keizo 28 June 2013
This research studies reliability applied to the real-time optimization of the operation of water supply networks (WSN). The aim is a robust real-time optimization of operations, achieved through techniques that account for the inherent uncertainty of a real WSN. To accomplish this, a demand forecaster and a hydraulic simulator are associated with the optimization model. The forecaster produces demand estimates for the planning horizon, which serve as input to the simulator, so that optimized operational strategies can be determined to meet the predicted demand. The AMALGAM optimization method ("A Multialgorithm Genetically Adaptive Method") was implemented, together with the computational routines needed to integrate the hydraulic simulator (EPANET 2) and the demand forecaster based on the Dynamic Artificial Neural Network (DAN2). The developed model was applied to the Eliana, Iguatemi, and Martinez supply sectors, which are operated by the Autonomous Department of Water and Sewer (DAAE) of Araraquara, SP. The water network models were calibrated using flow and pressure head data collected in field campaigns. The resulting operational strategies were compared with the operations practiced by DAAE, yielding reductions in energy consumption costs of 14%, 13%, and 30% for the Eliana, Iguatemi, and Martinez sectors, respectively.
