  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Retrospective Approximation for Smooth Stochastic Optimization

David T Newton (15369535) 30 April 2023 (has links)
Stochastic Gradient Descent (SGD) is a widely used iterative algorithm for solving stochastic optimization problems with a smooth (and possibly non-convex) objective function via queries to a first-order stochastic oracle. In this dissertation, we critically examine SGD's choice of executing a single step, as opposed to multiple steps, between subsample updates. Our investigation leads naturally to generalizing SGD into Retrospective Approximation (RA), where, during each iteration, a deterministic solver executes possibly multiple steps on a subsampled deterministic problem and stops when further solving is deemed unnecessary from the standpoint of statistical efficiency. RA thus leverages what is appealing for implementation: during each iteration, a solver (e.g., L-BFGS with backtracking line search) is used as is, and the subsampled objective function is solved only to the extent necessary. We develop a complete theory using the relative error of the observed gradients as the principal object, demonstrating that almost-sure and L1 consistency of RA are preserved under especially weak conditions when sample sizes are increased at appropriate rates. We also characterize the iteration and oracle complexity of RA (for linear and sublinear solvers) and identify two practical termination criteria, one of which we show leads to optimal complexity rates. The message from extensive numerical experiments is that RA's ability to incorporate existing second-order deterministic solvers in a strategic manner is useful both in terms of algorithmic trajectory and from the standpoint of dispensing with hyperparameter tuning.
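The outer structure described in the abstract (a deterministic solver run on a fixed subsample, stopped early, with sample sizes growing across outer iterations) can be sketched as follows. The toy gradient-descent inner solver, step size, and tolerances are illustrative assumptions, not the dissertation's actual choices:

```python
import numpy as np

def retrospective_approximation(grad_oracle, x0, sample_sizes,
                                inner_tol=1e-3, inner_steps=50, lr=0.1, rng=None):
    """RA sketch: each outer iteration fixes a subsample, then runs a
    deterministic solver (plain gradient descent here) on the resulting
    sample-average problem until its gradient is small."""
    rng = np.random.default_rng(rng)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for m in sample_sizes:                      # sample sizes increase across iterations
        xi = rng.standard_normal(m)             # subsample held fixed for this iteration
        for _ in range(inner_steps):            # deterministic inner solver
            g = grad_oracle(x, xi)              # gradient of the subsampled objective
            if np.linalg.norm(g) < inner_tol:   # stop: further solving is unnecessary
                break
            x = x - lr * g
    return x

# toy problem: minimize E[0.5*(x - xi)^2] with xi ~ N(0, 1), so x* = 0
x_star = retrospective_approximation(lambda x, xi: x - xi.mean(),
                                     5.0, [100, 1000, 10000], rng=0)
```

Each outer iteration solves the subsampled problem only as accurately as its sample size warrants, which is the statistical-efficiency trade-off the dissertation formalizes.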
12

Stochastic Frank-Wolfe Algorithm : Uniform Sampling Without Replacement

Håkman, Olof January 2023 (has links)
The Frank-Wolfe (FW) optimization algorithm, owing to its projection-free property, has gained popularity in recent years, with typical applications in the field of machine learning. In the stochastic setting it is still relatively understudied compared to the more expensive projected method of Stochastic Gradient Descent (SGD). We propose a novel Stochastic Frank-Wolfe (SFW) algorithm inspired by SGDo, a variant of SGD in which random sampling is done without replacement. In analogy with that abbreviation, we call the proposed algorithm SFWo. We consider a convex setting of gradient-Lipschitz (smooth) functions over compact domains. Depending on the experiment design (LASSO, Matrix Sensing, Matrix Completion), the SFWo algorithm exhibits faster or matching empirical convergence and a tendency to be bounded in suboptimality by its precursor SFW. Benchmarks on both synthetic and real-world data show that SFWo improves on the number of stochastic gradient evaluations needed to achieve the same guarantee as SFW.
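A minimal sketch of without-replacement sampling in stochastic Frank-Wolfe, on a least-squares (LASSO-style) objective over an l1-ball; the problem instance, the 2/(t+2) step-size rule, and the single-sample oracle are illustrative assumptions rather than the thesis's exact SFWo specification:

```python
import numpy as np

def sfw_without_replacement(A, b, radius, epochs=20, rng=None):
    """Stochastic Frank-Wolfe over the l1-ball {x : ||x||_1 <= radius},
    drawing one sample per step uniformly WITHOUT replacement
    (one shuffled pass over the data per epoch)."""
    rng = np.random.default_rng(rng)
    n, d = A.shape
    x = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):          # without-replacement sampling
            g = A[i] * (A[i] @ x - b[i])      # stochastic gradient of 0.5*(a_i.x - b_i)^2
            j = np.argmax(np.abs(g))          # linear minimization oracle on the l1-ball:
            s = np.zeros(d)                   # the minimizing vertex is a signed,
            s[j] = -radius * np.sign(g[j])    # scaled coordinate vector
            gamma = 2.0 / (t + 2)             # classical Frank-Wolfe step size
            x = (1 - gamma) * x + gamma * s   # projection-free convex update
            t += 1
    return x
```

The iterate stays feasible by construction (a convex combination of feasible points), which is the projection-free property the abstract refers to.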
13

Mechanism Design Theory for Service Contracts

Hong, Sukhwa 05 October 2015 (has links)
This paper presents a novel approach for designing and optimizing maintenance service contracts through the application of mechanism design theory. When offering a contract to its customer, the maintenance service provider seeks to specify contract terms - such as price, service features and incentives - that maximize the provider's profit, satisfy customer needs, allocate risks effectively and mitigate moral hazards. Optimal contract design has to account for asymmetric information and uncertainties associated with customer characteristics and behaviors. We illustrate our mechanism design approach by applying it to the contract design challenge of a gas turbine manufacturer, which also provides maintenance services for its aircraft engines. In our solution approach, we compute an optimal set of contracts. The entire set is presented to the customer and is designed such that the customer will accept one of the contract alternatives without negotiations. In addition to eliminating the costs and delays associated with negotiations, this approach also reveals the customer's private information to the service provider, which the provider can use to its benefit in maintenance management and future contract renewals. Furthermore, we design and incorporate win-win incentive mechanisms into the contracts, which reward the customer for actions that reduce maintenance costs. We present a deterministic and a stochastic mechanism design model, the latter accounting for uncertainties associated with customer actions, engine performance, and maintenance costs during the contract execution phase. / Master of Science
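The self-selection idea behind presenting the whole contract menu can be illustrated with a toy two-type screening example; the utility form theta*sqrt(service) - price and the numbers below are hypothetical, not taken from the gas-turbine case study:

```python
import math

def best_choice(theta, menu):
    """Index of the contract a customer of type theta picks from a menu
    of (service_level, price) pairs, assuming utility theta*sqrt(q) - p;
    returns None if every contract violates individual rationality."""
    utils = [theta * math.sqrt(q) - p for q, p in menu]
    best = max(range(len(menu)), key=lambda k: utils[k])
    return best if utils[best] >= 0 else None

# hypothetical menu designed so each type self-selects its own contract,
# revealing its private type to the provider without negotiation
menu = [(1.0, 0.9), (4.0, 2.8)]       # (service level, price)
assert best_choice(1.0, menu) == 0    # low type picks the basic contract
assert best_choice(2.0, menu) == 1    # high type picks the premium contract
```

Incentive compatibility here means each type prefers its intended contract; choosing the menu so these constraints hold while maximizing provider profit is the mechanism design problem the paper solves.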
14

Saddle point techniques in convex composite and error-in-measurement optimization

He, Niao 07 January 2016 (has links)
This dissertation aims to develop efficient algorithms with improved scalability and stability properties for large-scale optimization and optimization under uncertainty, and to bridge some of the gaps between modern optimization theory and recent applications emerging in the Big Data environment. To this end, the dissertation is dedicated to two important subjects: i) large-scale convex composite optimization and ii) error-in-measurement optimization. In spite of the different natures of these two topics, the common denominator lies in their accommodation of the systematic use of saddle point techniques for mathematical modeling and numerical processing. The main body can be split into three parts. In the first part, we consider a broad class of variational inequalities with composite structures, allowing us to cover the saddle point/variational analogies of classical convex composite minimization (i.e., the sum of a smooth convex function and a simple nonsmooth convex function). We develop novel composite versions of the state-of-the-art Mirror Descent and Mirror Prox algorithms aimed at solving problems of this type. We demonstrate that the algorithms inherit the favorable efficiency estimates of their prototypes when solving structured variational inequalities. Moreover, we develop several variants of the composite Mirror Prox algorithm along with their corresponding complexity bounds, allowing the algorithm to handle the case of imprecise prox mappings as well as the case when the operator is represented by an unbiased stochastic oracle. In the second part, we investigate four general types of large-scale convex composite optimization problems: (a) multi-term composite minimization, (b) linearly constrained composite minimization, (c) norm-regularized nonsmooth minimization, and (d) maximum-likelihood Poisson imaging.
We demonstrate that the composite Mirror Prox, when integrated with saddle point techniques and other algorithmic tools, can solve all these optimization problems with the best convergence rates known so far. Our main contributions in this direction are as follows. First, for problems of type (a), we develop an optimal algorithm by integrating the composite Mirror Prox with a saddle point reformulation based on exact penalty. Second, for problems of type (b), we develop a novel algorithm that reduces the problem to solving a "small series" of saddle point subproblems and achieves an optimal, up to log factors, complexity bound. Third, for problems of type (c), we develop a Semi-Proximal Mirror-Prox algorithm by leveraging the saddle point representation and linear minimization over the problem's domain, attaining optimality in both the number of calls to the first-order oracle representing the objective and the number of calls to the linear minimization oracle representing the problem's domain. Last, for problems of type (d), we show that the composite Mirror Prox, when applied to the saddle point reformulation, circumvents the difficulty with the non-Lipschitz continuity of the objective and exhibits a better convergence rate than the typical rate for nonsmooth optimization. We conduct extensive numerical experiments and illustrate the practical potential of our algorithms in a wide spectrum of applications in machine learning and image processing. In the third part, we examine error-in-measurement optimization, referring to decision-making problems with data subject to measurement errors; such problems arise naturally in a number of important applications, such as privacy learning, signal processing, and portfolio selection.
Due to the postulated observation scheme and the specific structure of the problem, straightforward application of standard stochastic optimization techniques such as Stochastic Approximation (SA) and Sample Average Approximation (SAA) is out of the question. Our goal is to develop computationally efficient and, hopefully, not too conservative data-driven techniques applicable to a broad scope of problems and allowing for theoretical performance guarantees. We present two such approaches: one relies on a fully algorithmic calculus of saddle point representations of convex-concave functions, and the other on a general approximation scheme for convex stochastic programming. Both approaches allow us to convert the problems of interest into a form amenable to SA or SAA. The latter developments focus primarily on two important applications: affine signal processing and indirect support vector machines.
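To give a flavor of the saddle point machinery, the following is a minimal entropic Mirror Prox (extragradient) sketch for a bilinear matrix game min_x max_y x^T A y over probability simplexes; this toy problem stands in for, and is far simpler than, the composite variational inequalities treated in the dissertation:

```python
import numpy as np

def mirror_prox_game(A, steps=200, eta=0.1):
    """Entropic Mirror Prox for min_x max_y x^T A y over simplexes.
    Each iteration takes an extrapolation step, then re-updates using
    gradients evaluated at the extrapolated point; the ergodic average
    of the extrapolated points is returned."""
    m, n = A.shape
    x, y = np.full(m, 1.0 / m), np.full(n, 1.0 / n)

    def prox(z, g):                    # entropic prox: multiplicative update
        w = z * np.exp(-eta * g)       # on the simplex, then renormalize
        return w / w.sum()

    xs, ys = [], []
    for _ in range(steps):
        u = prox(x, A @ y)             # extrapolation (gradient in x is A y)
        v = prox(y, -(A.T @ x))        # y ascends, hence the minus sign
        x = prox(x, A @ v)             # update with gradients taken at (u, v)
        y = prox(y, -(A.T @ u))
        xs.append(u)
        ys.append(v)
    return np.mean(xs, axis=0), np.mean(ys, axis=0)
```

For a bilinear game, the duality gap of the averaged iterates decays at the O(1/T) rate that the favorable efficiency estimates mentioned above refer to.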
15

Stochastic activity networks

Sůva, Pavel January 2011 (has links)
In the present work, stochastic network models representing a project as a set of activities are studied, along with different approaches to these models: the critical path method, stochastic network models with probability constraints, finding a reference project duration, worst-case analysis in stochastic networks, and optimization of the parameters of the probability distributions of the activity durations. The use of stochastic network models in telecommunication networks is also briefly presented. In a numerical study, some of these models are implemented and the related numerical results are analyzed.
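A stochastic activity network of this kind can be sketched with a small Monte Carlo simulation; triangular activity durations and the toy network below are illustrative assumptions (tasks must be listed in topological order):

```python
import numpy as np

def simulate_project(tasks, n_sims=5000, rng=None):
    """Sample the project duration of a stochastic activity network.
    tasks maps name -> (low, mode, high, predecessors); durations are
    triangular, and each activity starts when all predecessors finish."""
    rng = np.random.default_rng(rng)
    out = np.empty(n_sims)
    for s in range(n_sims):
        finish = {}
        for name, (low, mode, high, preds) in tasks.items():  # topological order
            start = max((finish[p] for p in preds), default=0.0)
            finish[name] = start + rng.triangular(low, mode, high)
        out[s] = max(finish.values())                         # longest-path duration
    return out

# A -> B in series, C in parallel; the critical path is A-B
tasks = {"A": (1, 2, 3, []), "B": (2, 3, 4, ["A"]), "C": (0.5, 1.0, 1.5, [])}
durations = simulate_project(tasks, rng=0)
```

Replacing the sampled durations with their means recovers the deterministic critical path method mentioned above, while quantiles of `durations` support probability-constrained formulations.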
16

Decentralized optimization for energy efficiency under stochasticity

Pacaud, François 25 October 2018 (has links)
New energy systems are designed to absorb a large share of renewable energy in a decentralized fashion. Their optimized management raises specific issues. In this thesis, we study the mathematical formulation of such problems as large-scale multistage stochastic optimization problems, focusing on time and space decomposition methods in a stochastic setting. In the first part of this manuscript, Time decomposition in optimization and management of home microgrids, we apply stochastic optimization algorithms to the management of small-scale microgrids. We compare different optimization algorithms on two examples: a domestic microgrid equipped with a micro combined heat and power generator and a battery, and a domestic microgrid equipped with a battery and solar panels. In the second part, Mixing time and spatial decomposition in large-scale optimization problems, we extend the previous studies to larger microgrids, where different units and storage devices are connected together. As a direct resolution of such large-scale systems by Dynamic Programming is intractable, we propose original algorithms mixing time decomposition on the one hand with price and resource spatial decomposition on the other. In the third part, Contributions to Stochastic Dual Dynamic Programming, we focus on the Stochastic Dual Dynamic Programming (SDDP) algorithm, a well-known method for solving multistage stochastic optimization problems. We present a new stopping criterion based on a dual version of SDDP, which gives a deterministic upper bound for the primal problem.
17

Study of reliability applied to real-time optimization of the operation of water supply networks

Odan, Frederico Keizo 28 June 2013 (has links)
This research studies reliability applied to the real-time optimization of the operation of water supply networks (WSN). The goal is a robust real-time optimization of operations, achieved through techniques that account for the inherent uncertainty of a real WSN. To accomplish this task, the optimization model is coupled with a demand forecaster and a hydraulic simulator. The forecaster produces the future demand for the planning horizon, which serves as input for the simulator, so that optimized operational strategies can be determined to meet the predicted demand. The AMALGAM optimization method ("A Multialgorithm Genetically Adaptive Method") was implemented, together with the computational routines needed to integrate the hydraulic simulator (EPANET 2) and the demand forecaster based on the dynamic neural network DAN2. The developed model was applied to the Eliana, Iguatemi, and Martinez supply sectors, which are operated by the Autonomous Department of Water and Sewer (DAAE) of Araraquara, SP. The water network models were calibrated using pressure and flow data collected in field campaigns. The resulting operational strategies were compared with the operations practiced by DAAE, yielding reductions in energy consumption cost of 14%, 13%, and 30% for the Eliana, Iguatemi, and Martinez sectors, respectively.
18

Quasi Newton Methods for Stochastic Programming Problems

Ovcin Zoran 19 July 2016 (has links)
The problem under consideration is the unconstrained minimization problem. In the deterministic case, such problems are successfully solved by iterative quasi-Newton methods. Here we investigate the stochastic case, in which the available values of the objective function and its gradient are affected by noise. A new approach for determining the step length along the descent direction is used: it combines the line search method and the stochastic approximation method so as to retain the good properties of both and make the procedure more efficient. The method is tested in combination with three choices of descent direction; convergence of the new method is proved, and tests on a large number of standard test problems demonstrate its efficiency.
Also, a new, affordable procedure based on applying a fixed Newton method to a sequence of equilibrium problems generated by simulation, as they arise in neoclassical economics, is introduced, and its convergence conditions are derived. The numerical results show a clear difference in the quality of information obtained by solving a sequence of problems, which models the random system more precisely, compared with solving a single equilibrium problem; for this task, the fixed Newton method yields large savings in CPU time relative to Newton's method. The first part of the thesis gives a general theoretical introduction. The second part surveys relevant results from the literature together with two original results. The third part presents the results of the numerical tests.
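The hybrid step-length idea (a backtracking line search whose accepted step is capped by a classical stochastic-approximation schedule) can be sketched as follows; the plain negative-gradient direction, noise level, and a0/(k+1) schedule are illustrative assumptions, since the thesis combines the rule with several quasi-Newton directions:

```python
import numpy as np

def noisy_descent(f, grad, x0, iters=300, a0=1.0, c=1e-4, rng=None):
    """Descent with noisy gradients: an Armijo backtracking line search
    chooses the step, but the trial step is capped by the stochastic
    approximation schedule a0/(k+1), which keeps the steps summable
    enough to tolerate the noise."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        g = grad(x) + 0.01 * rng.standard_normal(x.shape)  # noisy gradient oracle
        d = -g                                             # descent direction
        t = a0 / (k + 1)                                   # SA cap on the step length
        while f(x + t * d) > f(x) + c * t * (g @ d) and t > 1e-8:
            t *= 0.5                                       # Armijo backtracking
        x = x + t * d
    return x
```

Swapping `d = -g` for a quasi-Newton direction (e.g., a BFGS update of an approximate inverse Hessian) leaves the step-length rule unchanged, which is what makes the combination modular.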
19

MODIFYING SIGNAL RETIMING PROCEDURES AND POLICIES: A CASE OF HIGH-FIDELITY MODELING WITH MEDIUM-RESOLUTION DATA

Unknown Date (has links)
Signal retiming, or the signal optimization process, has not changed much over the last few decades. Traditional procedures rely on low-resolution data and a low-fidelity modeling approach. Signal timing plans developed in this way always require fine-tuning once deployed in the field, which calls into question the very benefits of signal optimization. New trends suggest the use of high-resolution data, which are not easily available. At the same time, many improvements could be made if the traditional signal retiming process were modified to use medium-resolution data and high-fidelity modeling. This study covers such an approach, in which a traditional retiming procedure is modified to utilize large medium-resolution data sets, high-fidelity simulation models, and powerful stochastic optimization to develop robust signal timing plans. The study covers a 28-intersection urban corridor in Southeastern Florida. Medium-resolution data are used to identify peak-hour, Day-Of-Year (DOY) representative volumes for the major seasons. Both low-fidelity and high-fidelity models are developed and calibrated with high precision to match field signal operations. Then, using traditional and stochastic optimization tools, signal timing plans are developed and tested in microsimulation. The findings reveal shortcomings of the traditional approach. Signal timing plans developed with the medium-resolution data and high-fidelity modeling approach reduce average delay by 5%-26%. Travel times on the corridor are usually reduced by up to 10.5%, and the final solution does not transfer delay onto neighboring streets (measured through latent delay), which is also decreased by 10%-49% compared with the traditional results. In general, the novel approach has shown great potential; the next step should be field testing and validation. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2019. / FAU Electronic Theses and Dissertations Collection
20

Robust approaches in portfolio optimization with stochastic dominance

Kozmík, Karel January 2019 (has links)
We use the modern approach of stochastic dominance in portfolio optimization, where we want the portfolio to dominate a benchmark. Since the distribution of returns is often only estimated from data, we look for the worst-case distribution among those that differ from the empirical distribution by at most a predefined value. First, we define in what sense a distribution is the worst for first- and second-order stochastic dominance. For second-order stochastic dominance, we use two different formulations of the worst case. We derive the robust stochastic dominance test for all the mentioned approaches and find the worst-case distribution as the optimal solution of a non-linear maximization problem. Then we derive programs that maximize an objective function over the portfolio weights with robust stochastic dominance in the constraints. We consider robustness either in returns or in probabilities, for both first- and second-order stochastic dominance. To the best of our knowledge, no such programs have been derived before. We apply all the derived optimization programs to real-life data, specifically to returns of the assets in the Dow Jones Industrial Average, and we analyze the problems in detail using optimal solutions of the optimization programs with multiple setups. The portfolios calculated using...
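For intuition, an empirical second-order stochastic dominance check over equiprobable scenarios can be sketched as follows (the portfolio dominates the benchmark iff its expected shortfall E[(t - X)+] is no larger at every threshold t; for empirical distributions the shortfall functions are piecewise linear, so checking t at the pooled scenario values suffices):

```python
import numpy as np

def ssd_dominates(x, y):
    """True iff equiprobable scenarios x dominate scenarios y in the
    second order: E[(t - x)+] <= E[(t - y)+] for every threshold t."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    ts = np.concatenate([x, y])   # kink points of both shortfall functions
    shortfall = lambda z: np.maximum(ts[None, :] - z[:, None], 0.0).mean(axis=0)
    return bool(np.all(shortfall(x) <= shortfall(y) + 1e-12))

# a mean-preserving contraction second-order dominates its spread
assert ssd_dominates([1.0, 1.0], [0.0, 2.0])
assert not ssd_dominates([0.0, 2.0], [1.0, 1.0])
```

The robust versions studied in the thesis replace the empirical scenarios (or their probabilities) by worst-case counterparts within a prescribed distance before imposing such a dominance constraint.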
