31 |
Essays in Financial Engineering. Ahn, Andrew. January 2014.
This thesis consists of three essays in financial engineering. In particular we study problems in option pricing, stochastic control and risk management.
In the first essay, we develop an accurate and efficient pricing approach for options on leveraged ETFs (LETFs). Our approach allows us to price these options quickly and in a manner that is consistent with the underlying ETF price dynamics. The numerical results also demonstrate that LETF option prices are model-dependent, particularly in high-volatility environments.
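The consistency requirement between the LETF and the underlying ETF is easiest to see in the constant-volatility baseline, where a continuously rebalanced LETF admits a closed form: its terminal value is the beta-th power of the underlying return times a volatility-dependent decay factor, which is one source of the model dependence. A minimal Monte Carlo sketch with hypothetical parameters (not the thesis's pricing method, and fees ignored):

```python
import numpy as np

rng = np.random.default_rng(0)
r, sigma, T, beta = 0.05, 0.3, 1.0, 3.0   # hypothetical risk-free rate, vol, horizon, 3x leverage
S0 = L0 = 100.0
W_T = np.sqrt(T) * rng.standard_normal(100_000)

# Underlying ETF under GBM, and the LETF under continuous rebalancing (dL/L = beta * dS/S)
S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * W_T)
L_T = L0 * np.exp((beta * r - 0.5 * beta**2 * sigma**2) * T + beta * sigma * W_T)

# Known identity: L_T/L0 = (S_T/S0)**beta * exp(0.5*(beta - beta**2)*sigma**2*T);
# the exponential factor is the volatility decay that leverage induces
pred = L0 * (S_T / S0) ** beta * np.exp(0.5 * (beta - beta**2) * sigma**2 * T)
assert np.allclose(L_T, pred)

# An at-the-money LETF call priced by plain Monte Carlo under this baseline
K = 100.0
call = np.exp(-r * T) * np.maximum(L_T - K, 0.0).mean()
```

Under stochastic-volatility models the decay factor becomes random, which is why prices diverge across models in high-volatility regimes.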
In the second essay, we extend a linear programming (LP) technique for approximately solving high-dimensional control problems in a diffusion setting. The original LP technique applies to finite-horizon problems in which the horizon T is exponentially distributed; we extend the approach to fixed-horizon problems. We then apply these techniques to dynamic portfolio optimization problems and evaluate their performance using convex duality methods. The numerical results suggest that the LP approach is a very promising one for tackling high-dimensional control problems.
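The LP view of dynamic programming is easiest to see in a finite-state analogue (not the diffusion setting of the essay): the value function is the smallest function that dominates the Bellman operator, so it solves a linear program with one constraint per state-action pair. A sketch with made-up MDP data, cross-checked against value iteration:

```python
import numpy as np
from scipy.optimize import linprog

# Tiny 3-state, 2-action discounted MDP with made-up data
gamma = 0.9
P = np.array([[[0.9, 0.1, 0.0],    # P[a, s, s']
               [0.1, 0.8, 0.1],
               [0.0, 0.1, 0.9]],
              [[0.2, 0.6, 0.2],
               [0.3, 0.4, 0.3],
               [0.5, 0.3, 0.2]]])
r = np.array([[1.0, 0.5, 0.0],     # r[a, s]
              [0.8, 1.2, 0.3]])
nS, nA = 3, 2

# Exact LP: minimize sum_s V(s) subject to V(s) >= r(a,s) + gamma * P(a,s,:) @ V
A_ub, b_ub = [], []
for a in range(nA):
    for s in range(nS):
        A_ub.append(gamma * P[a, s] - np.eye(nS)[s])
        b_ub.append(-r[a, s])
res = linprog(np.ones(nS), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * nS)
V_lp = res.x

# Cross-check against value iteration
V = np.zeros(nS)
for _ in range(1_000):
    V = np.max(r + gamma * P @ V, axis=0)
assert np.allclose(V_lp, V, atol=1e-5)
```

The approximate LP approach of the essay restricts V to a low-dimensional basis so the method scales to high-dimensional problems; the sketch above shows only the exact, small-state version of the idea.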
In the final essay, we propose a factor model-based approach for performing scenario analysis in a risk management context. We argue that our approach addresses some important drawbacks to a standard scenario analysis and, in a preliminary numerical investigation with option portfolios, we show that it produces superior results as well.
|
32 |
Stochastic Approximation Algorithms in the Estimation of Quasi-Stationary Distribution of Finite and General State Space Markov Chains. Zheng, Shuheng. January 2014.
This thesis studies stochastic approximation algorithms for estimating the quasi-stationary distribution of Markov chains. Existing numerical linear algebra methods and probabilistic methods might be computationally demanding and intractable in large state spaces. We take our motivation from a heuristic described in the physics literature and use the stochastic approximation framework to analyze and extend it.
The thesis begins with the finite-dimensional setting. The finite-dimensional quasi-stationary estimation algorithm was proposed in the physics literature by [#latestoliveira, #oliveiradickman1, #dickman]; however, no proof was given there, and it was not recognized as a stochastic approximation algorithm. This and related schemes were analyzed in the context of urn problems, where the consistency of the estimator is shown [#aldous1988two, #pemantle, #athreya]. The rate of convergence is studied by [#athreya] in special cases only. The first chapter provides a different proof of the algorithm's consistency and establishes a rate of convergence in more generality than [#athreya]. It turns out that the rate of convergence is fast only when a certain restrictive eigenvalue condition is satisfied. Using iterate averaging, the algorithm can be modified to eliminate the eigenvalue condition.
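The physics-literature heuristic analyzed in the chapter can be sketched for a tiny absorbing chain: run the chain, and on absorption restart from a state drawn from the current empirical estimate, which is updated with a 1/n step size. The two-state kernel below is made up, and its quasi-stationary distribution (QSD) is available in closed form as the normalized left Perron eigenvector for comparison; this is a sketch of the scheme, not the thesis's analysis.

```python
import numpy as np

# Sub-stochastic kernel among transient states (rows sum to < 1; the deficit
# is the per-step absorption probability). Made-up numbers.
Q = np.array([[0.5, 0.3],
              [0.2, 0.6]])
nS = Q.shape[0]
cum = np.cumsum(Q, axis=1)

# Reference answer: the QSD is the normalized left Perron eigenvector of Q
vals, vecs = np.linalg.eig(Q.T)
qsd = np.real(vecs[:, np.argmax(np.real(vals))])
qsd = qsd / qsd.sum()

rng = np.random.default_rng(1)
mu = np.full(nS, 1.0 / nS)          # running estimate of the QSD
x = 0
for n in range(1, 200_001):
    u = rng.random()
    if u < cum[x, -1]:
        x = int(np.searchsorted(cum[x], u))   # ordinary transition
    else:
        x = rng.choice(nS, p=mu)              # absorbed: restart from the estimate
    mu += (np.eye(nS)[x] - mu) / n            # 1/n stochastic-approximation step

assert np.allclose(mu, qsd, atol=0.05)
```

For this kernel the QSD is (0.4, 0.6); the eigenvalue condition the chapter identifies governs how quickly the 1/n iterates concentrate around it.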
The thesis then moves on to the general state space, discrete-time Markov chain setting. In this setting, the stochastic approximation framework does not have a strong theory in the current literature, so several of the convergence results have to be adapted because the iterates of our algorithm are measure-valued. The chapter formulates the quasi-stationary estimation algorithm in this setting. We then extend the ODE method of [#kushner2003stochastic] and prove the consistency of the algorithm. Through the proof, several non-restrictive conditions required for convergence of the algorithm are identified.
Finally, the thesis tests the algorithm by running some numerical experiments. The examples are designed to test the algorithm in various edge cases. The algorithm is also empirically compared against the Fleming-Viot method.
|
33 |
The Theory of Systemic Risk. Chen, Chen. January 2014.
Systemic risk is an issue of great concern in modern financial markets as well as, more broadly, in the management of complex business and engineering systems. It refers to the risk of collapse of an entire complex system as a result of the actions taken by the individual component entities or agents that comprise the system. We investigate the topic of systemic risk from the perspectives of measurement, structural sources, and risk factors. In particular, we propose an axiomatic framework for the measurement and management of systemic risk based on the simultaneous analysis of outcomes across agents in the system and over scenarios of nature. Our framework defines a broad class of systemic risk measures that accommodate a rich set of regulatory preferences. This general class captures, as special cases, many specific measures of systemic risk that have recently been proposed, and highlights their implicit assumptions. Moreover, the systemic risk measures that satisfy our conditions yield decentralized decompositions, i.e., the systemic risk can be decomposed into risk due to individual agents. Furthermore, one can associate a shadow price for systemic risk to each agent that correctly accounts for the externalities of the agent's individual decision-making on the entire system. Also, we provide a structural model for a financial network consisting of a set of firms holding common assets. In the model, endogenous asset prices are captured by the market clearing condition when the economy is in equilibrium. The key ingredients of the financial market captured in this model include the general portfolio choice flexibility of firms given posted asset prices and economic states, and the mark-to-market wealth of firms. We analyze price sensitivity and characterize the key features of financial holding networks that minimize systemic risk, as a function of overall leverage.
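One simple member of the "aggregate outcomes across agents, then measure risk over scenarios" family that such a framework axiomatizes uses a tail-conditional (CVaR-style) scenario measure. In that case the decentralized decomposition is explicit: each agent's contribution is its expected loss on the tail scenarios, and the contributions sum exactly to the systemic measure. A sketch with simulated, hypothetical loss data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_agents, n_scen = 4, 200_000
# Hypothetical correlated losses: one common factor plus idiosyncratic noise
common = rng.standard_normal((n_scen, 1))
losses = 0.6 * common + 0.8 * rng.standard_normal((n_scen, n_agents))

total = losses.sum(axis=1)                 # aggregate outcome across agents
alpha = 0.95
var = np.quantile(total, alpha)
tail = total >= var
systemic = total[tail].mean()              # CVaR of the aggregate loss

# Decentralized decomposition: per-agent expected loss on the tail scenarios
contrib = losses[tail].mean(axis=0)
assert abs(contrib.sum() - systemic) < 1e-8
```

The per-agent contributions play the role of the shadow prices mentioned in the abstract: they price each agent's externality on the system-wide tail.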
Finally, we propose a framework to estimate risk measures based on risk factors. By introducing a class of factor-separable risk measures, we connect the acceptance set of the original risk measure to the acceptance sets of the factor-separable risk measures. We demonstrate that tight bounds for factor-separable coherent risk measures can be explicitly constructed.
|
34 |
Essays on Inventory Management and Conjoint Analysis. Chen, Yupeng. January 2015.
With recent theoretical and algorithmic advancements, modern optimization methodologies have seen a substantial expansion of modeling power and have been applied to solve challenging problems in impressively diverse areas. This dissertation aims to extend the modeling frontier of optimization methodologies in two exciting fields: inventory management and conjoint analysis. Although the three essays concern distinct applications using different optimization methodologies, they share a unifying theme, which is to develop intuitive models using advanced optimization techniques to solve problems of practical relevance.
The first essay (Chapter 2) applies robust optimization to solve a single-installation inventory model with non-stationary uncertain demand. A classical problem in operations research, this inventory model becomes very challenging to analyze when lost-sales dynamics, nonzero fixed ordering cost, and positive lead time are introduced. In this essay, we propose a robust cycle-based control policy, built on an innovative decomposition idea, to solve a family of variants of this model. The policy is simple, flexible, and easily implementable, and numerical experiments suggest that it has very promising empirical performance. The policy can be used both when excess demand is backlogged and when it is lost, with nonzero fixed ordering cost, and when the lead time is nonzero. The policy decisions are computed by solving a collection of linear programs even when there is a positive fixed ordering cost. The policy also extends in a very simple manner to the joint pricing and inventory control problem.
The second essay (Chapter 3) applies sparse machine learning to model multimodal continuous heterogeneity in conjoint analysis. Consumers' heterogeneous preferences can often be represented using a multimodal continuous heterogeneity (MCH) distribution. One interpretation of MCH is that the consumer population consists of a few distinct segments, each of which contains a heterogeneous sub-population. Modeling MCH raises considerable challenges, as both across-segment and within-segment heterogeneity need to be accounted for. In this essay, we propose an innovative sparse learning approach for modeling MCH and apply it to conjoint analysis, where adequate modeling of consumer heterogeneity is critical. The sparse learning approach models MCH via a two-stage divide-and-conquer framework: we first decompose the consumer population by recovering a set of candidate segmentations using structured sparsity modeling, and then use each candidate segmentation to develop a set of individual-level representations of MCH. We select the optimal individual-level representation of MCH and the corresponding optimal candidate segmentation using cross-validation. Two notable features of our approach are that it accommodates both across-segment and within-segment heterogeneity and that it endogenously imposes an adequate amount of shrinkage to recover the individual-level partworths. We empirically validate the performance of the sparse learning approach using extensive simulation experiments and two empirical conjoint data sets.
The third essay (Chapter 4) applies dynamic discrete choice models to investigate the impact of return policies on consumers' product purchase and return behavior. Return policies are ubiquitous in the marketplace, allowing consumers to use and evaluate a product before fully committing to purchase. Despite the clear practical relevance of return policies, however, few studies have provided empirical assessments of how consumers' purchase and return decisions respond to the return policies facing them. In this essay, we model consumers' purchase and return decisions using a dynamic discrete choice model with forward-looking behavior and Bayesian learning. More specifically, we postulate that consumers' purchase and return decisions are optimal solutions to an underlying dynamic expected utility maximization problem, in which consumers learn their true evaluations of products through usage in a Bayesian manner and make purchase and return decisions to maximize the expected present value of utility; return policies impact these decisions by entering the problem as constraints.
Our proposed model provides a behaviorally plausible approach to examining the impact of return policies on consumers' purchase and return behavior.
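The dynamics that make the first essay's inventory model hard, namely lost sales and a fixed ordering cost, can be illustrated by simulating a plain order-up-to policy with zero lead time. This is not the essay's robust cycle-based policy, and all numbers are made up; it only shows the cost components the model trades off.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 1_000
S, K, c, h, p = 20, 10.0, 1.0, 0.1, 2.0   # order-up-to level, fixed cost, unit, holding, lost-sale penalty
inv, cost, demand_total, sales_total = 0.0, 0.0, 0, 0
for t in range(T):
    order = S - inv                        # order up to S (zero lead time for simplicity)
    if order > 0:
        cost += K + c * order
        inv = float(S)
    d = rng.poisson(18)                    # stationary demand here; the essay allows non-stationary
    sales = min(inv, d)
    cost += h * (inv - sales) + p * (d - sales)   # holding on leftovers, penalty on lost sales
    inv -= sales
    demand_total += d
    sales_total += sales

fill_rate = sales_total / demand_total
```

With a positive fixed cost K, ordering every period is wasteful, which is exactly why the optimal policy structure becomes cycle-like and why the essay's LP-based decomposition is useful.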
|
35 |
Perfect Simulation and Deployment Strategies for Detection. Wallwater, Aya. January 2015.
This dissertation contains two parts. The first part provides the first algorithm that, under minimal assumptions, allows one to simulate the stationary waiting-time sequence of a single-server queue backwards in time, jointly with the input processes of the queue (inter-arrival and service times).
The single-server queue is useful in applications of DCFTP (Dominated Coupling From The Past), a well-known protocol for unbiased simulation from steady-state distributions. Our algorithm terminates in finite time assuming only a finite mean of the inter-arrival and service times. In order to simulate the single-server queue in stationarity until the first idle period, in finite expected termination time, we require the existence of a finite variance. This requirement is also necessary for such an idle time (which is a natural coalescence time in DCFTP applications) to have a finite mean. Thus, in this sense, our algorithm is applicable under minimal assumptions.
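The backward-in-time construction rests on a standard duality for the single-server queue: the Lindley waiting-time recursion run forward equals the running maximum of the time-reversed random walk of service-minus-interarrival increments. A quick numerical check of that identity on a stable instance with made-up rates (this is the structure the algorithm exploits, not the algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
S = rng.exponential(0.8, n)   # service times, mean 0.8
A = rng.exponential(1.0, n)   # inter-arrival times, mean 1.0 (so the queue is stable)

# Lindley recursion run forward: W_{i+1} = max(W_i + S_i - A_i, 0), W_0 = 0
W = np.zeros(n + 1)
for i in range(n):
    W[i + 1] = max(W[i] + S[i] - A[i], 0.0)

# Duality: W_n equals the running maximum of the time-reversed random walk
# of increments X_i = S_i - A_i (with 0 included for the empty prefix)
X = S - A
reversed_walk = np.concatenate([[0.0], np.cumsum(X[::-1])])
assert abs(W[n] - reversed_walk.max()) < 1e-6
```

In DCFTP the reversed walk is what one simulates "into the past", and the first idle period (W hitting 0) supplies the coalescence time mentioned above.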
The second part studies the behavior of diffusion processes in a random environment.
We consider an adversary that moves in a given domain, and our goal is to produce an optimal strategy to detect and neutralize it by a given deadline. We assume that the target's dynamics follow a diffusion process whose parameters are informed by available intelligence. One chapter is dedicated to the rigorous formulation of the detection problem, an introduction of several frameworks in which our methods apply, and a discussion of the challenges of finding the analytical optimal solution. In the following chapter, we present our main result: a large deviations characterization of the adversary's survival probability under a given strategy. This result later gives rise to asymptotically efficient Monte Carlo algorithms.
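Crude Monte Carlo estimation of the survival probability, which the large deviations result is designed to accelerate in the rare-event regime, can be sketched as follows. The setup below (one static sensor, standard Brownian adversary, made-up parameters) is a simplification of the deployments the thesis optimizes:

```python
import numpy as np

rng = np.random.default_rng(5)
n_paths, n_steps, dt = 5_000, 200, 0.01
# Adversary diffuses (standard 2D Brownian motion) from the origin;
# one static sensor sits at (1, 0). All parameters are hypothetical.
steps = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps, 2))
paths = np.cumsum(steps, axis=1)
sensor = np.array([1.0, 0.0])
closest = np.linalg.norm(paths - sensor, axis=2).min(axis=1)  # closest approach per path

def survival_prob(radius):
    """Fraction of paths never entering the sensor's detection disk by the deadline."""
    return float((closest > radius).mean())

p_small, p_large = survival_prob(0.1), survival_prob(0.3)
assert 0.0 <= p_large <= p_small <= 1.0
```

When the survival probability is tiny, plain averaging like this needs enormous sample sizes, which is exactly the regime where the large deviations rate enables importance-sampling estimators.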
|
36 |
Recursive Utility with Narrow Framing: Properties and Applications. Guo, Jing. January 2017.
We study the total utility of an agent in a model of narrow framing with constant elasticity of intertemporal substitution, constant relative risk aversion, and an infinite time horizon. In a finite-state Markovian setting, we prove that the total utility exists uniquely when the agent derives nonnegative utility from gains and losses incurred by holding risky assets, and that the total utility can be non-existent or non-unique otherwise. Moreover, we prove that the utility, when it exists uniquely, can be computed by a recursive algorithm from any starting point. We then consider a portfolio selection problem with narrow framing and solve it by proving that the corresponding dynamic programming equation has a unique solution. Finally, we propose a new model of narrow framing in which the agent's total utility exists uniquely in general.
Barberis and Huang (2009, J. Econ. Dynam. Control, vol. 33, no. 8, pp. 1555-1576) propose a preference model that allows for narrow framing, and this model has been successfully applied to explain individuals' attitudes toward timeless gambles and high equity premia in the market. To uniquely define the utility process in this preference model and to yield a unique solution when the model is applied to portfolio selection problems, one needs to impose some restrictions on the model parameters, which are too tight for many financial applications. We propose a modification of Barberis and Huang's model and show that the modified model admits a unique utility process and a unique solution in portfolio selection problems. Moreover, the modified model is more tractable than Barberis and Huang's when applied to portfolio selection and asset pricing.
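The "recursive algorithm from any starting point" claim can be illustrated with a plain Epstein-Zin-type recursion on a two-state chain; the gain-loss (narrow framing) term of the actual model is omitted here, and all numbers are hypothetical, so this is only a sketch of the fixed-point computation, not the thesis's model.

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])          # Markov transition probabilities
c = np.array([1.0, 2.0])            # consumption in each state
beta, rho, gamma = 0.95, 0.5, 2.0   # discount factor, 1/EIS, relative risk aversion

def recursion(V):
    # certainty equivalent of next-period utility, then CES time aggregation
    ce = (P @ V ** (1 - gamma)) ** (1 / (1 - gamma))
    return ((1 - beta) * c ** (1 - rho) + beta * ce ** (1 - rho)) ** (1 / (1 - rho))

# Iterate from two very different starting points; both converge to the
# same fixed point, illustrating the "any starting point" property
V1, V2 = np.ones(2), 10.0 * np.ones(2)
for _ in range(5_000):
    V1, V2 = recursion(V1), recursion(V2)
assert np.allclose(V1, V2, atol=1e-8)
```

Adding the gain-loss utility term is what breaks this clean contraction structure in general, which is why the thesis needs the nonnegativity condition (or its modified model) for existence and uniqueness.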
|
37 |
Production Planning with Risk Hedging. Wang, Liao. January 2017.
We study production planning integrated with risk hedging in a continuous-time stochastic setting. The (cumulative) demand process is modeled as a sum of two components: the demand rate is a general function of the price of a tradable financial asset (which follows its own stochastic process), and the noise component follows an independent Brownian motion. There are two decisions: a production quantity set at the beginning of the planning horizon, and a dynamic hedging strategy maintained throughout the horizon. Thus, the total terminal wealth has two components: the production payoff, and the profit or loss from the hedging strategy.
The production quantity and hedging strategy are jointly optimized under the mean-variance and the shortfall criteria. For each risk objective, we derive the optimal hedging strategy in closed form and express the associated minimum risk as a function of the production quantity; the latter is then further optimized. With both production and hedging (jointly) optimized, we provide a complete characterization of the efficient frontier. By quantifying the risk reduction contributed by the hedging strategy, we demonstrate its substantial improvement over a production-only decision.
To derive the mean-variance hedging strategy, we use a numeraire-based approach, and the derived optimal strategy consists of a risk mitigation component and an investment component. For the shortfall criterion, a convex duality method is used, and the optimal strategy takes the form of a put option plus a digital option, which combine to close the gap to the target left by production alone.
Furthermore, we extend the models and results to allow multiple products, with demand rates depending on multiple assets. We also extend them by allowing the asset price to follow various stochastic processes other than the geometric Brownian motion.
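The risk reduction from hedging is visible already in a one-period caricature: because demand is correlated with a tradable asset, shorting the regression beta of the production payoff on the asset strictly reduces variance in-sample. This is not the dynamic mean-variance or shortfall strategy of the thesis, and all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
# One-period caricature: the asset's P&L drives demand, plus independent noise
asset = rng.standard_normal(n)                      # asset P&L per unit held
demand = 50 + 8 * asset + 5 * rng.standard_normal(n)
q = 50.0                                            # production quantity
payoff = 4.0 * np.minimum(demand, q) - 1.5 * q      # sales revenue minus production cost

# Minimum-variance static hedge: short the regression beta of payoff on asset P&L
beta = ((payoff - payoff.mean()) * (asset - asset.mean())).mean() / asset.var()
hedged = payoff - beta * asset
assert hedged.var() < payoff.var()
```

In the continuous-time model the same idea becomes a dynamic strategy, and the residual (unhedgeable) variance comes from the independent Brownian noise in demand.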
|
38 |
Control and optimization approaches for energy-limited systems: applications to wireless sensor networks and battery-powered vehicles. Pourazarm, Sepideh. 10 March 2017.
This dissertation studies control and optimization approaches to obtain energy-efficient and reliable routing schemes for battery-powered systems in network settings.
First, incorporating a non-ideal battery model, the lifetime maximization problem for static wireless sensor networks is investigated. Adopting an optimal control approach, it is shown that there exists a time-invariant optimal routing vector in a fixed-topology network. Furthermore, under very mild conditions, this optimal policy is robust with respect to the battery model used. Then, the lifetime maximization problem is investigated for networks with a mobile source node. Redefining the network lifetime, two versions of the problem are studied: when there is no prior knowledge of the source node's motion dynamics, and when the source node's trajectory is known in advance. For both cases, the problems are formulated in the optimal control framework. For the former, the solution reduces to a sequence of nonlinear programming problems solved online as the source node trajectory evolves. For the latter, an explicit offline numerical solution is required.
Second, the problem of routing vehicles with limited energy through a network with inhomogeneous charging nodes is studied. The goal is to minimize the total elapsed time, including traveling and recharging time, for vehicles to reach their destinations. Adopting a game-theoretic approach, the problem is investigated from two points of view: user-centric and system-centric. The former is first formulated as a mixed-integer nonlinear programming problem; then, by exploiting properties of an optimal solution, it is reduced to a lower-dimensional problem. For the latter, grouping vehicles into subflows and including traffic congestion effects, a system-wide optimization problem is defined. Both problems are also studied in a dynamic programming framework.
Finally, the thesis quantifies the Price of Anarchy (POA) in transportation networks using actual traffic data. The goal is to compare network performance under user-optimal and system-optimal policies. First, user equilibrium flows and origin-destination demands are estimated for the Eastern Massachusetts transportation network using speed and capacity datasets. Then, obtaining socially optimal flows by solving a system-centric problem, the POA is estimated.
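The POA being estimated has a textbook benchmark: in Pigou's two-link network, selfish (user-equilibrium) routing costs 4/3 of the system optimum. A sketch of that toy computation (this is the classic example, not the Eastern Massachusetts estimation):

```python
from scipy.optimize import minimize_scalar

# Pigou's two-link network: unit demand, latencies l1(x) = 1 and l2(x) = x
def total_cost(x2):          # x2 = flow routed on the variable-latency link
    return (1.0 - x2) * 1.0 + x2 * x2

# User equilibrium: travelers switch links until latencies equalize,
# which pushes all flow onto link 2 (its latency then matches link 1's)
ue_cost = total_cost(1.0)

# System optimum: minimize total travel cost directly
so = minimize_scalar(total_cost, bounds=(0.0, 1.0), method='bounded')
poa = ue_cost / so.fun       # equals 4/3 for this network
assert abs(poa - 4.0 / 3.0) < 1e-3
```

Real networks replace the two links with estimated BPR-style latency functions, and the UE and SO flows each come from solving a (much larger) convex program, but the ratio being reported is the same.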
|
39 |
An Exact Optimization Approach for Relay Node Location in Wireless Sensor Networks. January 2019.
Abstract: I study the problem of locating relay nodes (RNs) to improve the connectivity of a set of already deployed sensor nodes (SNs) in a wireless sensor network (WSN). This is known as the Relay Node Placement Problem (RNPP). In this problem, one or more nodes called base stations (BSs) serve as the collection points of all the information captured by SNs. SNs have limited transmission range, and hence signals are transmitted from the SNs to the BS through multi-hop routing. As a result, the WSN is said to be connected if there exists a path from each SN to the BS through which signals can be hopped. The communication range of each node is modeled as a disk of known radius, such that two nodes are said to communicate if their communication disks overlap. The goal is to locate a given number of RNs anywhere in the continuous space of the WSN to maximize the number of SNs connected (i.e., maximize the network connectivity). To solve this problem, I propose an integer programming based approach that iteratively approximates the Euclidean distance needed to enforce sensor communication. This is achieved through a cutting-plane approach with a polynomial-time separation algorithm that identifies distance violations. I illustrate the use of my algorithm on large-scale instances of up to 75 nodes, which can be solved in less than 60 minutes. The proposed method shows solution times many times faster than an alternative nonlinear formulation.
Dissertation/Thesis: Masters Thesis, Industrial Engineering, 2019.
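The connectivity objective can be made concrete with a disk-graph reachability check: an SN counts as connected when a multi-hop path of overlapping communication disks reaches the BS. A minimal sketch of the evaluation step only, with made-up coordinates (the thesis's contribution is the cutting-plane optimization over RN locations, not shown here):

```python
import numpy as np
from collections import deque

def connected_sns(sns, rns, bs, radius):
    """Count SNs with a multi-hop path to the BS in the disk graph."""
    pts = np.vstack([bs, sns, rns])
    n = len(pts)
    # Two nodes communicate if they are within `radius` of each other
    adj = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2) <= radius
    seen, q = {0}, deque([0])          # node 0 is the BS; BFS from it
    while q:
        u = q.popleft()
        for v in range(n):
            if adj[u, v] and v not in seen:
                seen.add(v)
                q.append(v)
    return sum(1 for v in seen if 1 <= v <= len(sns))   # count only SNs

bs = np.array([[0.0, 0.0]])
sns = np.array([[1.0, 0.0], [3.0, 0.0]])        # second SN is out of range for radius 1.5
assert connected_sns(sns, np.empty((0, 2)), bs, 1.5) == 1
relay = np.array([[2.0, 0.0]])                  # one RN bridges the gap
assert connected_sns(sns, relay, bs, 1.5) == 2
```

The hard part the integer program handles is choosing the RN coordinates in continuous space; the Euclidean distance constraints that drive this adjacency test are what the cutting planes iteratively approximate.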
|
40 |
CONTINUOUS TIME PROGRAMMING WITH NONLINEAR CONSTRAINTS. Unknown Date.
Source: Dissertation Abstracts International, Volume 34-08, Section B, page 3953. Thesis (Ph.D.), The Florida State University, 1973.
|