  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Kvazi Njutnovi postupci za probleme stohastičkog programiranja / Quasi Newton Methods for Stochastic Programming Problems

Ovcin Zoran 19 July 2016 (has links)
The problem under consideration is the unconstrained minimization problem. In the deterministic case, such problems are commonly solved with iterative quasi-Newton methods. Here the stochastic case is investigated, where only noise-corrupted values of the objective function and its gradient are available. A new approach for choosing the step length along the descent direction is used, combining line search and the stochastic approximation method so as to retain the good properties of both and achieve better efficiency. Convergence of the new method is proved, and its efficiency is demonstrated on a large number of standard test problems in combination with several choices of descent direction.
Also, an affordable procedure based on applying the fixed Newton method to a sequence of equilibrium problems generated by simulation is introduced for modeling a random system in Neoclassical economics more precisely, and its convergence conditions are derived. On this task the fixed Newton method achieves large CPU-time savings relative to Newton's method, and the numerical results show a clear difference in the quality of information obtained by solving a sequence of problems rather than a single equilibrium problem. The first part of the thesis gives a general theoretical introduction; the second surveys relevant results from the literature together with two original results; the third presents the numerical tests.
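The combined step-length rule described in this abstract can be illustrated with a toy sketch (not the thesis's actual algorithm; all names and constants are hypothetical): a one-dimensional secant quasi-Newton iteration on a noisy quadratic that first attempts an Armijo backtracking line search on the noisy function values and, when the search fails, falls back to a diminishing stochastic-approximation step of length 1/k.

```python
import random

def noisy_f(x, sigma=1e-3):
    # objective with additive noise: f(x) = (x - 3)^2
    return (x - 3.0) ** 2 + random.gauss(0.0, sigma)

def noisy_grad(x, sigma=1e-3):
    # gradient observed through noise
    return 2.0 * (x - 3.0) + random.gauss(0.0, sigma)

def noisy_quasi_newton(x0, iters=100, c1=1e-4):
    random.seed(0)
    x, B = x0, 1.0              # B approximates the Hessian (scalar secant update)
    for k in range(1, iters + 1):
        g = noisy_grad(x)
        d = -g / B              # quasi-Newton descent direction
        # first try an Armijo backtracking line search on the noisy values ...
        alpha, accepted = 1.0, False
        fx = noisy_f(x)
        for _ in range(10):
            if noisy_f(x + alpha * d) <= fx + c1 * alpha * g * d:
                accepted = True
                break
            alpha *= 0.5
        # ... and fall back to a diminishing stochastic-approximation step
        if not accepted:
            alpha = 1.0 / k
        x_new = x + alpha * d
        s, y = x_new - x, noisy_grad(x_new) - g
        if abs(s) > 1e-12 and y * s > 0:   # safeguarded secant (1-D BFGS) update
            B = y / s
        x = x_new
    return x
```

With a fixed random seed the iterates settle near the true minimizer x = 3 despite the noise, and the 1/k fallback keeps the method from stalling when noisy comparisons reject every trial step.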
22

Modifying Signal Retiming Procedures and Policies: A Case of High-Fidelity Modeling with Medium-Resolution Data

Unknown Date (has links)
Signal retiming, or the signal optimization process, has not changed much over the last few decades. Traditional procedures rely on low-resolution data and a low-fidelity modeling approach. Signal timing plans developed this way almost always require fine-tuning once deployed in the field, which calls the very benefits of signal optimization into question. New trends suggest the use of high-resolution data, which are not easily available. At the same time, many improvements could be made if the traditional signal retiming process were modified to include the use of medium-resolution data and high-fidelity modeling. This study covers such an approach, where a traditional retiming procedure is modified to utilize large medium-resolution data sets, high-fidelity simulation models, and powerful stochastic optimization to develop robust signal timing plans. The study covers a 28-intersection urban corridor in Southeastern Florida. Medium-resolution data are used to identify peak-hour, Day-Of-Year (DOY) representative volumes for major seasons. Both low-fidelity and high-fidelity models are developed and calibrated with high precision to match field signal operations. Then, by using traditional and stochastic optimization tools, signal timing plans are developed and tested in microsimulation. The findings reveal shortcomings of the traditional approach. Signal timing plans developed from medium-resolution data and the high-fidelity modeling approach reduce average delay by 5%-26%. Travel times on the corridor are usually reduced by up to 10.5%, and the final solution does not transfer delay to neighboring streets: latent delay is also decreased by 10%-49% compared with the traditional results. In general, the novel approach shows great potential; the next step should be field testing and validation. / Includes bibliography. / Thesis (M.S.)--Florida Atlantic University, 2019. / FAU Electronic Theses and Dissertations Collection
23

Robustní přístupy v optimalizaci portfolia se stochastickou dominancí / Robust approaches in portfolio optimization with stochastic dominance

Kozmík, Karel January 2019 (has links)
We use the modern approach of stochastic dominance in portfolio optimization, where we want the portfolio to dominate a benchmark. Since the distribution of returns is often only estimated from data, we look for the worst-case distribution that differs from the empirical distribution by at most a predefined amount. First, we define in what sense a distribution is worst for first- and second-order stochastic dominance. For second-order stochastic dominance, we use two different formulations of the worst case. We derive the robust stochastic dominance test for all the mentioned approaches and find the worst-case distribution as the optimal solution of a non-linear maximization problem. We then derive programs that maximize an objective function over the portfolio weights with robust stochastic dominance in the constraints, considering robustness either in returns or in probabilities for both first- and second-order stochastic dominance. To the best of our knowledge, no such programs have been derived before. We apply all the derived optimization programs to real-life data, specifically to returns of the assets in the Dow Jones Industrial Average, and analyze the problems in detail using optimal solutions of the optimization programs with multiple setups. The portfolios calculated using...
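The dominance constraint underlying these programs can be checked on empirical data with a short sketch. This is the plain, non-robust second-order test under equiprobable scenarios, not the robust formulations derived in the thesis; it assumes the known result (due to Dentcheva and Ruszczyński) that for discrete distributions it suffices to test thresholds at the benchmark's realizations.

```python
def expected_shortfall_below(returns, t):
    # E[(t - R)+] for an equiprobable empirical distribution of returns
    return sum(max(t - r, 0.0) for r in returns) / len(returns)

def ssd_dominates(portfolio, benchmark):
    """True if `portfolio` second-order stochastically dominates `benchmark`
    (equiprobable scenarios; for discrete distributions it suffices to
    check the thresholds at the benchmark's realizations)."""
    return all(
        expected_shortfall_below(portfolio, t)
        <= expected_shortfall_below(benchmark, t) + 1e-12
        for t in benchmark
    )
```

A robust version in the thesis's spirit would re-run this test against the worst-case perturbation of the returns or probabilities rather than the empirical distribution itself.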
24

Optimization under uncertainty: conic programming representations, relaxations, and approximations

Xu, Guanglin 01 August 2017 (has links)
In practice, the presence of uncertain parameters in optimization problems introduces new modeling and solvability challenges in operations research. Three main paradigms have been proposed for optimization problems under uncertainty: stochastic programming, robust optimization, and sensitivity analysis. In this thesis, we examine, improve, and combine the latter two paradigms in several relevant models and applications. In the second chapter, we study a two-stage adjustable robust linear optimization problem in which the right-hand sides are uncertain and belong to a compact, convex, and tractable uncertainty set. Under standard and simple assumptions, we reformulate the two-stage problem as a copositive optimization program, which in turn leads to a class of tractable semidefinite-based approximations that are at least as strong as the affine policy, a well-studied tractable approximation in the literature. We examine our approach on several examples from the literature, and the results demonstrate that our tractable approximations significantly improve on the affine policy. In particular, our approach recovers the optimal values of a class of instances of increasing size for which the affine policy admits an arbitrarily large gap. In the third chapter, we leverage the concept of robust optimization to conduct sensitivity analysis of the optimal value of a linear program (LP). In particular, we propose a framework for sensitivity analysis of LP problems that allows simultaneous perturbations in the objective coefficients and right-hand sides, where the perturbations are modeled in a compact, convex, and tractable uncertainty set. This framework unifies and extends multiple approaches to LP sensitivity analysis in the literature and has close ties to worst-case LP and two-stage adjustable linear programming. We define the best-case and worst-case LP optimal values over the uncertainty set.
As the concept aligns well with the general spirit of robust optimization, we call our approach robust sensitivity analysis. While the best-case and worst-case optimal values are difficult to compute in general, we prove that they equal the optimal values of two separate, but related, copositive programs. We then develop tight, tractable conic relaxations to bound the best-case and worst-case optimal values, respectively. We also develop techniques to assess the quality of the bounds, and we validate our approach computationally on several examples from, and inspired by, the literature. We find that the bounds are very strong in practice and, in particular, are at least as strong as known results for specific cases from the literature. In the fourth chapter of this thesis, we study the expected optimal value of a mixed 0-1 programming problem with uncertain objective coefficients following a joint distribution. We assume that the true distribution is not known exactly, but that a set of independent samples can be observed. Using the Wasserstein metric, we construct an ambiguity set centered at the empirical distribution of the observed samples and containing, with high confidence, all distributions that could have generated them. The problem of interest is to bound the expected optimal value over the Wasserstein ambiguity set. Under standard assumptions, we reformulate the problem as a copositive programming problem, which naturally leads to a tractable semidefinite-based approximation. We compare our approach with a moment-based approach from the literature on two applications; the numerical results illustrate the effectiveness of our approach. Finally, we conclude the thesis with remarks on some interesting open questions in the field of optimization under uncertainty, pointing out topics that could potentially be studied with copositive programming techniques.
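The Wasserstein ambiguity set of the fourth chapter can be made concrete with a minimal sketch. This covers only the one-dimensional, equal-weight empirical case and only the membership test for the ambiguity ball, not the copositive reformulation; the function names are hypothetical.

```python
def wasserstein_1d(xs, ys):
    # 1-Wasserstein distance between two equal-size empirical samples:
    # for equal weights it is the mean absolute difference of sorted
    # order statistics
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

def in_ambiguity_set(candidate, observed, radius):
    # a candidate empirical distribution belongs to the ambiguity set
    # iff it lies within the Wasserstein ball around the observed samples
    return wasserstein_1d(candidate, observed) <= radius
```

Bounding the expected optimal value then means optimizing over every distribution passing this membership test, which is what the copositive reformulation in the abstract makes tractable.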
25

Individual and institutional asset liability management

Hainaut, Donatien 25 September 2007 (has links)
One of the classical problems in finance is that of an economic unit who aims to maximize expected lifetime utility from consumption and/or terminal wealth through effective asset-liability management. The purpose of this thesis is to determine the optimal investment strategies, from the point of view of their economic utility, for individual and institutional investors such as pension funds.
26

Demand Effects in Productivity and Efficiency Analysis

Lee, Chia-Yen May 2012 (has links)
Demand fluctuations bias the measurement of productivity and efficiency. This dissertation describes three ways to characterize the effect of demand fluctuations. First, a two-dimensional efficiency decomposition (2DED) of profitability is proposed for manufacturing, service, or hybrid production systems to account for the demand effect. The first dimension identifies four components of efficiency, capacity design, demand generation, operations, and demand consumption, using Network Data Envelopment Analysis (Network DEA). The second dimension decomposes the efficiency measures and integrates them into a profitability efficiency framework. Thus, each component's profitability change can be analyzed in terms of technical efficiency change, scale efficiency change, and allocative efficiency change. Second, this study proposes a proactive DEA model to account for demand fluctuations and proposes input or output adjustments to maximize effective production. Demand fluctuations lead to variations in output levels, affecting measures of technical efficiency. In the short run, firms can adjust their variable resources to address demand fluctuations and perform more efficiently. Proactive DEA is a short-run capacity planning method proposed to provide decision support to a firm interested in improving the effectiveness of a production system under demand uncertainty, using a stochastic programming DEA (SPDEA) approach. This method improves decision making related to short-run capacity expansion and estimates the expected value of effectiveness given demand. In the third part of the dissertation, a Nash-Cournot equilibrium is identified for an oligopolistic market. The standard assumption in the efficiency literature that firms desire to produce on the production frontier may not hold in an oligopolistic market, where the production decisions of all firms determine the market price: an increase in a firm's output level leads to a lower market clearing price and potentially lower profits. Models of both the production possibility set and the inverse demand function are used to identify a Nash-Cournot equilibrium and improvement targets, which may not lie on the strongly efficient production frontier. This behavior is referred to as rational inefficiency because the firm reduces its productivity level in order to increase profits.
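The Nash-Cournot setting can be sketched with a best-response iteration under a linear inverse demand P(Q) = a - bQ and constant marginal costs. This is an illustrative toy model, not the dissertation's DEA-based formulation; the linear demand and cost values are assumptions.

```python
def cournot_equilibrium(a, b, costs, iters=200):
    """Gauss-Seidel best-response iteration for a Cournot oligopoly with
    inverse demand P(Q) = a - b*Q and constant marginal costs `costs`."""
    q = [0.0] * len(costs)
    for _ in range(iters):
        for i, c in enumerate(costs):
            q_others = sum(q) - q[i]
            # profit_i = (a - b*(q_i + q_others) - c) * q_i;
            # the first-order condition gives the best response below
            q[i] = max((a - c - b * q_others) / (2 * b), 0.0)
    return q
```

For a symmetric duopoly with a = 100, b = 1, and marginal costs of 10, each firm's equilibrium output is (a - c)/(3b) = 30, below the output a price-taking frontier firm would choose; producing less is exactly the "rational inefficiency" the abstract describes.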
27

Wind Power Integration in Power Systems with Transmission Bottlenecks

Matevosyan, Julija January 2006 (has links)
During the last two decades, increasing electricity demand and environmental concern have resulted in fast growth of power production from renewable sources. Wind power is one of the most efficient alternatives. Due to the rapid development of wind turbine technology and the increasing size of wind farms, wind power plays a significant part in the power production mix of Germany, Spain, Denmark, and some other countries. The best conditions for the development of wind farms are in remote, open areas with low population density. The transmission system in such areas might not be dimensioned to accommodate additional large-scale power infeed. Furthermore, part of the existing transmission capacity might already be reserved for conventional power plants situated in the same area. In this thesis four alternatives for large-scale wind power integration in areas with transmission bottlenecks are considered. The first possibility is to revise the methods for calculating available transmission capacity. The second solution is to reinforce the network. This alternative, however, may be expensive and time consuming. As wind power production depends on the wind speed, the full-load hours of a wind turbine generator are only 2000-4000 hours per year, so reinforcing a transmission network in order to remove a bottleneck completely is often not economically justified. Curtailing wind energy during congestion situations is then the third solution, allowing large-scale wind power integration with less or no grid reinforcement. The fourth solution is to store excess wind energy. Pumped hydro storage and battery storage for large-scale wind farms are still rather expensive options, but existing conventional power plants with fast production control capabilities and sufficient storage capacity, e.g., hydro power plants, could be used for this purpose.
As there is much research on the first two alternatives, the thesis reviews them and summarizes the main conclusions of the existing work. The thesis is then directed towards developing methods for estimating wind energy curtailments, evaluating the possibility of storing wind energy in hydro reservoirs, and short-term hydro power production planning that considers coordination with wind power. Additionally, the thesis elaborates and analyzes a strategy that minimizes the imbalance costs of a wind power utility trading wind power on the short-term power market.
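In its simplest hourly form, the curtailment estimate of the third alternative reduces to clipping wind infeed against the transmission capacity left over after conventional reservations. This is a deliberately simplified sketch with hypothetical parameter names; the thesis's estimation methods are far more detailed.

```python
def curtailed_energy(wind_mw, limit_mw, reserved_mw):
    """Total wind energy (MWh, one value per hour) curtailed when wind
    infeed plus capacity reserved for conventional plants exceeds the
    transmission limit of the bottleneck."""
    return sum(max(w + reserved_mw - limit_mw, 0.0) for w in wind_mw)
```

Storing the clipped energy in a hydro reservoir instead of spilling it is, in this simplified picture, what the fourth alternative adds.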
28

A Study on Urban Water Reuse Management Modeling

Zhang, Changyu January 2005 (has links)
This research deals with urban water reuse planning and management modeling in the context of sustainable development. Rapid urbanization and population growth have presented a great challenge to urban water resources management. As water reuse may alleviate pollution loads and enhance water supply sources, water reuse is being recognized as a sustainable urban water management strategy and is becoming increasingly attractive in urban water resources management. An efficient water reuse planning and management model is of significance in promoting water reuse practices. This thesis introduces an urban water reuse management and planning model using optimization methods with an emphasis on modeling uncertainty issues associated with water demand and water quality. The model is developed in conjunction with the overall urban water system with considerations over water supply, water demand, water distribution, water quality, and wastewater treatment and discharge. The objective of the model is to minimize the overall cost of the system subject to technological, societal and environmental constraints. Uncertainty issues associated with water demand and treatment quality are modeled by introducing stochastic programming methods, namely, two-stage stochastic recourse programming and chance-constraint programming. The model is capable of identifying and evaluating water reuse in urban water systems to optimize the allocation of urban water resources with regard to uncertainties. It thus provides essential information in planning and managing urban water reuse systems towards a more sustainable urban water resources management. An application was presented in order to demonstrate the modeling process and to analyze the impact of uncertainties.
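The two-stage recourse idea in this abstract can be sketched for a single reuse-capacity decision: build capacity now (first stage), then pay a penalty for unmet demand in each scenario (second stage). This toy model, with hypothetical cost names, only illustrates the structure, not the thesis's full urban water system model.

```python
def expected_cost(capacity, demand_scenarios, probs, build_cost, shortfall_cost):
    # two-stage recourse: pay to build `capacity` now, then pay a
    # penalty for any demand the reuse system cannot cover per scenario
    recourse = sum(p * shortfall_cost * max(d - capacity, 0.0)
                   for d, p in zip(demand_scenarios, probs))
    return build_cost * capacity + recourse

def best_capacity(candidates, demand_scenarios, probs, build_cost, shortfall_cost):
    # first-stage decision minimizing total expected cost
    return min(candidates,
               key=lambda c: expected_cost(c, demand_scenarios, probs,
                                           build_cost, shortfall_cost))
```

A chance-constrained variant, also mentioned in the abstract, would instead require that demand be covered with at least a prescribed probability rather than penalizing shortfalls.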
30

Building Networks in the Face of Uncertainty

Gupta, Shubham January 2011 (has links)
The subject of this thesis is the study of approximation algorithms for some network design problems in the face of uncertainty. We consider two widely studied models of handling uncertainty: robust optimization and stochastic optimization. We study a robust version of the well-studied Uncapacitated Facility Location Problem (UFLP). In this version, once the set of facilities to be opened is decided, an adversary may close at most β facilities; the clients must then be assigned to the remaining open facilities. The performance of a solution is measured by the worst possible set of facilities that the adversary may close. We introduce a novel LP for the problem and provide an LP rounding algorithm for the case where all facilities have the same opening cost. We also study the two-stage stochastic version of the Steiner Tree Problem. In this version, the set of terminals to be covered is not known in advance; instead, a probability distribution over the possible sets of terminals is known. One is allowed to build a partial solution in the first stage at a low cost and, when the exact scenario to be covered becomes known in the second stage, to extend the solution by building a recourse network, albeit at a higher cost. The aim is to construct a solution of low expected cost. We provide an LP rounding algorithm for this problem that beats the current best known LP-rounding-based approximation algorithm.
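The adversary's move in the robust UFLP can be evaluated by brute force for small instances. This is an illustrative sketch of the worst-case performance measure only, not the LP rounding algorithm; the cost matrix and names are hypothetical.

```python
from itertools import combinations

def assignment_cost(open_facilities, assign_cost):
    # each client is assigned to its cheapest remaining open facility;
    # assign_cost[client][facility] is the client-facility cost
    return sum(min(row[f] for f in open_facilities) for row in assign_cost)

def worst_case_cost(open_facilities, assign_cost, beta):
    """Adversary closes exactly `beta` of the open facilities so as to
    maximize the clients' total assignment cost (closing fewer can only
    lower the cost, so this is the worst case over "at most beta")."""
    worst = 0.0
    for closed in combinations(open_facilities, beta):
        remaining = [f for f in open_facilities if f not in closed]
        if remaining:
            worst = max(worst, assignment_cost(remaining, assign_cost))
    return worst
```

A robust solution is then one whose `worst_case_cost` plus opening costs is small, which is exactly the objective the abstract's LP targets.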
