71 |
Optimal Portfolio in Outperforming Its Liability Benchmark for a Defined Benefit Pension Plan. 李意豐, Yi-Feng Li. Unknown Date (has links)
Abstract
Under a defined benefit pension plan, this paper investigates the fund manager's optimal portfolio for maximizing the managerial objective before the worst funding-shortfall scenario occurs. The fund ratio process is defined as the ratio of the current fund value to the liability benchmark, and the manager maximizes the probability of achieving the predetermined operating target before the specified worst fund ratio is reached. The time-varying investment opportunity set includes risk-free cash, bonds, and stocks. This study formulates the optimization problem as a stochastic control model and solves it by dynamic programming. The results show that the manager's optimal strategy comprises a hedging component that minimizes the variation of the fund ratio, a component related to risk preference and the intertemporal opportunity set, and a hedging component related to the model's state variables. The Markov chain approximation method is used to compute numerical solutions of the stochastic control problem; the results show that the fund manager must hold a large position in bonds and that the investment horizon has a substantial effect on the optimal investment decision.
Keywords: shortfall, defined benefit, liability benchmark, stochastic control, dynamic programming. / Abstract
This paper analyzes the portfolio problem of a pension fund manager in a defined benefit pension scheme who seeks to maximize the probability of reaching his managerial goal before the worst-case shortfall scenario occurs. The fund ratio process, defined as the ratio between the fund level and its accrued liability benchmark, is managed to maximize the probability that the predetermined target is achieved before the ratio falls below an intolerable boundary. The time-varying opportunity set in our study includes risk-free cash, bonds, and a stock index. The problem is formulated within a stochastic control framework and solved through dynamic programming. In this study, the optimal portfolio is characterized by three components: a liability-hedging component, an intertemporal hedging component against changes in the opportunity set, and a temporal hedging component that minimizes the variation in fund ratio growth. Markov chain approximation methods are employed to approximate the stochastic control solutions numerically. The results show that fund managers should hold large proportions of bonds and that the time horizon plays a crucial role in constructing the optimal portfolio.
Keywords: shortfall; defined benefit; liability benchmark; stochastic control; dynamic programming.
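A schematic formulation consistent with the abstract above (the notation and boundary levels are illustrative assumptions, not taken from the thesis):

```latex
% F_t: fund ratio, i.e., fund value A_t under portfolio \pi over the liability
% benchmark L_t; u and l are the managerial target and the intolerable
% shortfall boundary, with l < F_0 < u.
\[
F_t = \frac{A_t^{\pi}}{L_t}, \qquad
\tau_u = \inf\{t \ge 0 : F_t \ge u\}, \qquad
\tau_l = \inf\{t \ge 0 : F_t \le l\}.
\]
% The manager maximizes the probability of hitting the target before the
% shortfall boundary; dynamic programming turns this into an HJB equation
% for V on (l, u) with boundary conditions V(l) = 0 and V(u) = 1.
\[
V(f) = \sup_{\pi} \, \mathbb{P}\bigl(\tau_u < \tau_l \mid F_0 = f\bigr).
\]
```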
|
72 |
DYNAMIC PRODUCTION PLANNING WITH SUBCONTRACTING. Wu, Yih-Bor. January 1987 (has links)
This research is concerned with scheduling production over a finite planning horizon in a capacitated manufacturing facility. It is assumed that a second source of supply is available by means of subcontracting and that demand varies over time. The problem is to establish the production level in the facility and/or the ordering quantity from the subcontractor for each period in the planning horizon. First, the cost functions are analyzed and two types of realistic production cost models are identified. Mathematical models are then developed for two different problems. One is a single-criterion problem aimed at minimizing the total production and inventory costs. The other is a bicriterion problem which seeks the efficient frontier with respect to the total cost and the number of subcontracting orders, both to be minimized over the planning horizon. For each of the above, two methods are presented: a general dynamic programming approach and an improved dynamic programming approach (a shortest path method). Several results are obtained for reducing the computation required to solve these problems. Based on these results, algorithms are developed for both problems, and their computational complexity is analyzed. Two heuristic rules are then suggested for obtaining near-optimal solutions to the first problem with less computation. Both rules have been tested extensively and the results indicate the advantages of using them. One of these rules is useful for solving the uncapacitated problem faster without losing optimality. The above results are then extended to other cases where some of the assumptions of the original problem are relaxed. Finally, we study the multi-item lot-sizing problem with a subcontracting option and propose a heuristic for solving it by a Lagrangean relaxation approach. We demonstrate that, with an additional capacity constraint in the dual problem, the feasible solution and the lower bound obtained in each iteration converge much faster than without it. Tests on randomly generated problems show that most of the solutions obtained from the heuristic are very close to the best lower bound obtained from the dual problem within a limited number of iterations.
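A minimal dynamic-programming sketch of the single-criterion problem described above, under illustrative assumptions (linear production, subcontracting, and holding costs with a fixed setup charge, integer quantities, and a bound on inventory); the thesis's actual cost models and state definitions may differ:

```python
# Single-item, finite-horizon lot sizing with an in-house capacity limit and a
# subcontracting option. State = inventory entering a period; decisions =
# in-house production x (at most `capacity`) and subcontracted quantity y.

def plan(demand, capacity, setup, c_prod, c_sub, c_hold, max_inv):
    T = len(demand)
    INF = float("inf")
    # best[t][i] = minimal cost of covering demand in periods t..T-1 when
    # entering period t with inventory i.
    best = [[INF] * (max_inv + 1) for _ in range(T)] + [[0.0] * (max_inv + 1)]
    choice = [[None] * (max_inv + 1) for _ in range(T)]

    for t in range(T - 1, -1, -1):
        for inv in range(max_inv + 1):
            for x in range(capacity + 1):                 # in-house production
                for y in range(demand[t] + max_inv + 1):  # subcontracted amount
                    end_inv = inv + x + y - demand[t]
                    if end_inv < 0 or end_inv > max_inv:
                        continue                          # demand must be met
                    cost = (setup * (x > 0) + c_prod * x + c_sub * y
                            + c_hold * end_inv + best[t + 1][end_inv])
                    if cost < best[t][inv]:
                        best[t][inv] = cost
                        choice[t][inv] = (x, y)
    return best[0][0], choice

if __name__ == "__main__":
    total, _ = plan(demand=[4, 6, 3, 8], capacity=5, setup=10.0,
                    c_prod=2.0, c_sub=3.5, c_hold=0.5, max_inv=10)
    print(f"minimum total cost: {total:.1f}")
```

The shortest-path view mentioned in the abstract arises from the same structure: each (period, inventory) pair is a node and each feasible (x, y) decision an arc weighted by its one-period cost.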
|
73 |
Short-term operation of surface reservoirs within long-term goals. Estalrich-Lopez, Juan. January 1989 (has links)
A stochastic dynamic programming model (called P.B.S.D.P.), based on treating peak discharge and time between peaks as two stochastic variables, has been used to model and solve a reservoir operation problem. This conceptualization of the physical reality makes it possible to solve, in this order, the tactical and the strategic operation of surface reservoirs. The P.B.S.D.P. model has been applied to the Sau reservoir in the northeastern corner of Spain. The results showed a significant improvement over the currently used operation procedure, yielding yearly average electricity production that comes within roughly 6% of the maximum electricity production that could have been achieved.
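A generic stochastic dynamic programming recursion of the kind such a reservoir model rests on (the state, decision, and benefit symbols below are illustrative, not the thesis's notation):

```latex
% S_t: storage at stage t; r_t: release decision, constrained to the feasible
% set R(S_t); q_t: stochastic inflow (here driven by peak discharge and the
% time between peaks); B_t: stage benefit, e.g., energy generated.
\[
V_t(S_t) \;=\; \max_{r_t \in \mathcal{R}(S_t)}
\; \mathbb{E}_{q_t}\!\left[ B_t(S_t, r_t, q_t) + V_{t+1}\bigl(S_t + q_t - r_t\bigr) \right],
\qquad V_{T+1}(\cdot) \equiv 0,
\]
% solved backwards in time; the resulting policy r_t^*(S_t) gives the
% short-term (tactical) releases consistent with the long-term goals.
```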
|
74 |
Design and Analysis of Decision Rules via Dynamic Programming. Amin, Talha M. 24 April 2017 (has links)
The areas of machine learning, data mining, and knowledge representation use many different formats to represent information. Among these formats, decision rules are the most expressive and the most easily understood by humans. In this thesis, we use dynamic programming to design and analyze decision rules. The use of dynamic programming allows us to work with decision rules in ways that were previously possible only by brute-force methods.
Our algorithms allow us to describe the set of all rules for a given decision table. Further, we can perform multi-stage optimization by repeatedly reducing this set so that it contains only the rules that are optimal with respect to selected criteria. One way we apply this study is to generate small systems with short rules by simulating a greedy algorithm for the set cover problem. We also compare the maximum path lengths (depth) of deterministic and non-deterministic decision trees (a non-deterministic decision tree is effectively a complete system of decision rules) for Boolean functions.
Another area of advancement is the presentation of algorithms for constructing Pareto optimal points for rules and rule systems. This allows us to study the existence of “totally optimal” decision rules (rules that are simultaneously optimal with respect to multiple criteria). We also use Pareto optimal points to compare and rate greedy heuristics with respect to two criteria at once. Another application of Pareto optimal points is the study of trade-offs between cost and uncertainty, which allows us to find reasonable systems of decision rules that strike a balance between length and accuracy.
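A small sketch of the Pareto-front idea used above: given candidate rules scored by two criteria to be minimized (say, length and uncertainty), keep only the non-dominated points. The rule encoding and the numeric values are purely illustrative assumptions.

```python
# A point (length, uncertainty) is Pareto optimal if no other point is at
# least as good on both criteria and strictly better on at least one.

def pareto_points(points):
    """points: iterable of (length, uncertainty) pairs, both to be minimized."""
    pts = sorted(set(points))               # sort by length, then uncertainty
    front = []
    best_uncertainty = float("inf")
    for length, uncertainty in pts:
        if uncertainty < best_uncertainty:  # strictly improves the 2nd criterion
            front.append((length, uncertainty))
            best_uncertainty = uncertainty
    return front

if __name__ == "__main__":
    candidate_rules = [(2, 0.30), (3, 0.10), (2, 0.25), (5, 0.10), (4, 0.05)]
    print(pareto_points(candidate_rules))   # [(2, 0.25), (3, 0.10), (4, 0.05)]
```

A "totally optimal" rule in the sense above would correspond to the degenerate case where the front collapses to a single point that minimizes both criteria simultaneously.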
|
75 |
Optimizing Trading Decisions for Hydro Storage Systems using Approximate Dual Dynamic Programming. Löhndorf, Nils; Wozabal, David; Minner, Stefan. 22 August 2013 (has links) (PDF)
We propose a new approach to optimize the operations of hydro storage systems with multiple connected reservoirs whose operators participate in wholesale electricity markets. Our formulation integrates short-term intraday with long-term interday decisions. The intraday problem considers bidding decisions as well as storage operation during the day and is formulated as a stochastic program. The interday problem is modeled as a Markov decision process of managing storage operation over time, for which we propose integrating stochastic dual dynamic programming with approximate dynamic programming. We show that the approximate solution converges towards an upper bound of the optimal solution. To demonstrate the efficiency of the solution approach, we fit an econometric model to actual price and inflow data and apply the approach to a case study of an existing hydro storage system. Our results indicate that the approach is tractable for a real-world application and that the gap between the theoretical upper bound and a simulated lower bound decreases sufficiently fast. (authors' abstract)
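One way to picture the value-function approximation described above: stochastic dual dynamic programming replaces the exact interday value function with a piecewise-linear envelope of cuts that is refined over iterations. The notation below is generic SDDP notation used for illustration, not the paper's own.

```latex
% R_t: reservoir (storage) state entering day t; for a profit-maximization
% problem with concave value-to-go V_t, each cut (alpha_t^k, beta_t^k)
% collected in a backward pass is an over-estimating supporting hyperplane.
\[
V_t(R_t) \;\approx\; \widehat{V}_t(R_t)
\;=\; \min_{k=1,\dots,K}\Bigl\{ \alpha_t^{k} + \bigl(\beta_t^{k}\bigr)^{\!\top} R_t \Bigr\},
\qquad \widehat{V}_t \;\ge\; V_t .
\]
% Simulating the induced policy yields a lower bound on the optimal value,
% while the cut approximation at the first stage gives an upper bound; the
% gap between the two is the convergence measure referred to in the abstract.
```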
|
76 |
Indifference pricing of natural gas storage contracts. Löhndorf, Nils; Wozabal, David. January 2017 (has links) (PDF)
Natural gas markets are incomplete due to physical limitations and low liquidity, but most valuation approaches for natural gas storage contracts assume a complete market. We propose an alternative approach based on indifference pricing, which does not require this assumption but entails the solution of a high-dimensional stochastic-dynamic optimization problem under a risk measure. To solve this problem, we develop a method combining stochastic dual dynamic programming with a novel quantization method that approximates the continuous process of natural gas prices by a discrete scenario lattice. In a computational experiment, we demonstrate that our solution method can handle the high dimensionality of the optimization problem and that solutions are near-optimal. We then compare our approach with rolling intrinsic valuation, which is widely used in the industry, and show that the rolling intrinsic value is sub-optimal under market incompleteness, unless the decision-maker is perfectly risk-averse. We strengthen this result by conducting a backtest using historical data that compares both trading strategies. The results show that up to 40% more profit can be made by using our indifference pricing approach.
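For orientation, the utility-indifference (buyer's) price of a storage contract can be written as below. This is the standard textbook definition; the utility function, wealth process, and payoff symbols are assumptions for illustration, not necessarily the paper's own formulation, which works with a risk measure.

```latex
% W_T^{\theta}(x): terminal wealth from trading strategy \theta with initial
% capital x in the incomplete gas market; G_T: cumulative payoff from
% operating the storage contract; U: the decision-maker's utility function.
% The indifference price p* makes the agent indifferent between buying the
% contract at p* and not holding it at all:
\[
\sup_{\theta} \, \mathbb{E}\Bigl[ U\bigl( W_T^{\theta}(w - p^{*}) + G_T \bigr) \Bigr]
\;=\;
\sup_{\theta} \, \mathbb{E}\Bigl[ U\bigl( W_T^{\theta}(w) \bigr) \Bigr].
\]
% In a complete market p* collapses to the usual no-arbitrage value; under
% incompleteness it depends on risk preferences, which is why a rolling
% intrinsic value can be sub-optimal.
```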
|
77 |
Essays in Computational Macroeconomics and Finance. Hull, Isaiah. January 2013 (has links)
Thesis advisor: Peter N. Ireland / This dissertation examines three topics in computational macroeconomics and finance. The first two chapters are closely linked, and the third chapter covers a separate topic in finance. Throughout the dissertation, I place a strong emphasis on constructing computational tools and modeling devices and using them in appropriate applications. The first chapter examines how a central bank's choice of interest rate rule impacts the rate of mortgage default and welfare. In this chapter, a quantitative equilibrium (QE) model is constructed that incorporates incomplete markets, aggregate uncertainty, overlapping generations, and realistic mortgage structure. Through a series of counterfactual simulations, five things are demonstrated: 1) nominal interest rate rules that exhibit cyclical behavior increase the average default rate and lower average welfare; 2) welfare can be substantially improved by adopting a modified Taylor rule that stabilizes house prices; 3) a decrease in the length of the interest rate cycle will tend to increase the average default rate; 4) if the business and housing cycles are not aligned, then aggressive inflation targeting will tend to increase the mortgage default rate; and 5) placing a legal cap on loan-to-value ratios will lower the average default rate and lessen the intensity of extreme events. In addition to these findings, this paper also incorporates an important mechanism for default, which had not previously been included in the QE literature: default spikes happen when income falls and home equity is degraded at the same time. The paper concludes with a policy recommendation for central banks: if they wish to avoid crises in which many households default simultaneously, they should either adopt a rule that generates interest rates with slow-moving cycles or use a modified Taylor rule that also targets house price growth.
The second chapter generalizes the solution method used in the first and compares it to more common techniques used in the computational macroeconomics literature, including the parameterized expectations approach (PEA), projection methods, and value function iteration. In particular, this chapter compares the speed and accuracy of the aforementioned methods to an alternative method that was introduced separately by Judd (1998), Sutton and Barto (1998), and Van Roy et al. (1997), but was not developed into a general solution method until Powell (2007) introduced it to the operations research literature. This approach involves rewriting the Bellman equation in terms of the post-decision state variables, rather than the pre-decision state variables, as is done in standard dynamic programming applications in economics. I show that this approach yields considerable performance benefits over common global solution methods when the state space is large and has the added benefit of not forcing modelers to assume a data generating process for shocks. In addition, I construct two new algorithms that take advantage of this approach to solve heterogeneous agent models.
Finally, the third chapter imports the SIR model from mathematical epidemiology and uses it to construct a model of financial epidemics. In particular, the paper demonstrates how the SIR model can be microfounded in an economic context to make predictions about financial epidemics, such as the spread of asset-backed securities (ABS) and exchange-traded funds (ETFs), the proliferation of zombie financial institutions, and the expansion of financial bubbles and mean-reverting fads. The paper proceeds by developing the 1-host SIR model for economic and financial contexts and then moves on to demonstrate how to work with the multi-host version of the model. In addition to showing how the SIR framework can be used to model economic interactions, it will also: 1) show how it can be simulated; 2) use it to develop and estimate a sufficient statistic for the spread of a financial epidemic; and 3) show how policymakers can impose the financial analog of herd immunity, that is, prevent the spread of a financial epidemic without completely banning the asset or behavior associated with the epidemic. Importantly, the paper focuses on developing a neutral framework to describe financial epidemics that can be either bad or good. That is, the general framework can be applied to epidemics that constitute a mean-reverting fad or an informational bubble but ultimately yield little value and shrink in importance, or to epidemics that are long-lasting and yield a new financial instrument that generates permanent efficiency gains or previously unrealized hedging opportunities. / Thesis (PhD) — Boston College, 2013. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Economics.
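The standard one-host SIR system that the third chapter builds on is reproduced below. The financial reading of the compartments given in the comments (non-adopters, current holders, abandoners of an asset or behavior) is an illustrative gloss, not a claim about the dissertation's exact microfoundation.

```latex
% S: susceptible (has not yet adopted the asset or behavior), I: infected
% (currently holds and propagates it), R: recovered (has abandoned it);
% beta: transmission rate, gamma: recovery rate, N = S + I + R constant.
\[
\frac{dS}{dt} = -\beta \frac{S I}{N}, \qquad
\frac{dI}{dt} = \beta \frac{S I}{N} - \gamma I, \qquad
\frac{dR}{dt} = \gamma I .
\]
% The basic reproduction number governs whether the epidemic spreads
% (R_0 > 1) or dies out; pushing the effective reproduction number below one
% is the "herd immunity" style intervention mentioned above.
\[
R_0 = \frac{\beta}{\gamma}.
\]
```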
|
78 |
Parametric RNA Partition Function Algorithms. Ding, Yang. January 2010 (has links)
Thesis advisor: Peter Clote / In addition to the well-characterized messenger RNA, transfer RNA, and ribosomal RNA, many new classes of noncoding RNA (ncRNA) have been discovered in the past few years. ncRNA has been shown to play important roles in multiple regulation and development processes. The increasing need for RNA structural analysis software presents great opportunities for computational biologists. In this thesis I present three highly non-trivial parametric RNA structural analysis algorithms: 1) RNAhairpin and RNAmultiloop, which calculate partition functions with respect to hairpin number, multiloop number, and multiloop order; 2) RNAshapeEval, which is based upon partition function calculation with respect to a fixed abstract shape; and 3) RNAprofileZ, which calculates the expected partition function and ensemble free energy given an RNA position weight matrix. I also describe applications of this software to biological problems, including evaluating the propensity of purine riboswitch aptamer full-alignment sequences to adopt their consensus shape, building hairpin and multiloop profiles for certain Rfam families, and predicting tRNA and pseudoknotted RNA secondary structures. These algorithms hold promise for a broad range of biological problems such as structural motif search, ncRNA gene finding, and canonical and pseudoknotted secondary structure prediction. / Thesis (MS) — Boston College, 2010. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Biology.
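For context, the Boltzmann partition function over RNA secondary structures and its parametric variant with respect to hairpin number can be written as follows. The notation is standard; the exact recursions and energy model used by RNAhairpin are not reproduced here, and the symbols are illustrative.

```latex
% S(a): set of secondary structures of sequence a; E(s): free energy of
% structure s; R: gas constant; T: temperature; h(s): number of hairpins in s.
\[
Z \;=\; \sum_{s \in \mathcal{S}(a)} e^{-E(s)/RT},
\qquad
\mathbb{P}(s) \;=\; \frac{e^{-E(s)/RT}}{Z}.
\]
% Parametric version: a polynomial in x whose k-th coefficient is the
% partition function restricted to structures with exactly k hairpins.
\[
Z(x) \;=\; \sum_{s \in \mathcal{S}(a)} x^{\,h(s)}\, e^{-E(s)/RT}
\;=\; \sum_{k \ge 0} Z_k\, x^{k},
\qquad
Z_k = \sum_{\substack{s \in \mathcal{S}(a) \\ h(s) = k}} e^{-E(s)/RT}.
\]
```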
|
79 |
Chromosome classification and speech recognition using inferred Markov networks with empirical landmarks. January 1993 (has links)
by Law Hon Man. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. / Includes bibliographical references (leaves 67-70).
Contents:
Chapter 1: Introduction (p.1)
Chapter 2: Automated Chromosome Classification (p.4)
  2.1 Procedures in Chromosome Classification (p.6)
  2.2 Sample Preparation (p.7)
  2.3 Low Level Processing and Measurement (p.9)
  2.4 Feature Extraction (p.11)
  2.5 Classification (p.15)
Chapter 3: Inference of Markov Networks by Dynamic Programming (p.17)
  3.1 Markov Networks (p.18)
  3.2 String-to-String Correction (p.19)
  3.3 String-to-Network Alignment (p.21)
  3.4 Forced Landmarks in String-to-Network Alignment (p.31)
Chapter 4: Landmark Finding in Markov Networks (p.34)
  4.1 Landmark Finding without a priori Knowledge (p.34)
  4.2 Chromosome Profile Processing (p.37)
  4.3 Analysis of Chromosome Networks (p.39)
  4.4 Classification Results (p.45)
Chapter 5: Speech Recognition using Inferred Markov Networks (p.48)
  5.1 Linear Predictive Analysis (p.48)
  5.2 TIMIT Speech Database (p.50)
  5.3 Feature Extraction (p.51)
  5.4 Empirical Landmarks in Speech Networks (p.52)
  5.5 Classification Results (p.55)
Chapter 6: Conclusion (p.57)
  6.1 Suggested Improvements (p.57)
  6.2 Concluding Remarks (p.61)
Appendix A (p.63)
References (p.67)
|
80 |
A comparative study of several dynamic time warping algorithms for speech recognition. Myers, Cory S. January 1980 (has links)
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1980. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING. / Includes bibliographical references. / by Cory S. Myers. / M.S.
|