381

Convex Optimization Algorithms and Recovery Theories for Sparse Models in Machine Learning

Huang, Bo January 2014 (has links)
Sparse modeling is a rapidly developing topic that arises frequently in areas such as machine learning, data analysis and signal processing. One important application of sparse modeling is the recovery of a high-dimensional object from a relatively small number of noisy observations, which is the main focus of Compressed Sensing, Matrix Completion (MC) and Robust Principal Component Analysis (RPCA). However, the power of sparse models is hampered by the unprecedented size of the data that has become increasingly available in practice. Therefore, it has become increasingly important to better harness convex optimization techniques to take advantage of any underlying "sparsity" structure in problems of extremely large size. This thesis focuses on two main aspects of sparse modeling. From the modeling perspective, it extends convex programming formulations for matrix completion and robust principal component analysis problems to the case of tensors, and derives theoretical guarantees for exact tensor recovery under a framework of strongly convex programming. On the optimization side, an efficient first-order algorithm with the optimal convergence rate is proposed and studied for a wide range of linearly constrained sparse modeling problems.
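The following is a minimal sketch of a first-order method for a simple sparse model, not the tensor algorithm developed in the thesis: ISTA applied to the LASSO problem min_x 0.5*||Ax - b||^2 + lam*||x||_1, recovering a sparse vector from a small number of noisy linear measurements. Problem sizes and the regularization weight below are arbitrary assumptions for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iters=500):
    """Iterative shrinkage-thresholding (ISTA) for the LASSO objective."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)               # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Recover a sparse vector from a small number of noisy linear measurements.
rng = np.random.default_rng(0)
n, p, k = 60, 200, 5                           # measurements, dimension, sparsity
A = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true + 0.01 * rng.standard_normal(n)
x_hat = ista(A, b, lam=0.02)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```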
382

High-Dimensional Portfolio Management: Taxes, Execution and Information Relaxations

Wang, Chun January 2014 (has links)
Portfolio management has always been a key topic in finance research. While many researchers have studied portfolio management problems, most of the work to date assumes trading is frictionless. This dissertation presents our investigation of optimal trading policies and our efforts to apply duality methods based on information relaxations to portfolio problems where the investor manages multiple securities and confronts trading frictions, in particular capital gains taxes and execution costs. In Chapter 2, we consider dynamic asset allocation problems where the investor is required to pay capital gains taxes on her investment gains. This is a very challenging problem because the tax to be paid whenever a security is sold depends on the tax basis, i.e. the price(s) at which the security was originally purchased. This feature results in high-dimensional and path-dependent problems which cannot be solved exactly except in the case of very stylized problems with just one or two securities and relatively few time periods. The asset allocation problem with taxes has several variations depending on: (i) whether we use the exact or average tax-basis and (ii) whether we allow the full use of losses (FUL) or the limited use of losses (LUL). We consider all of these variations in this chapter but focus mainly on the exact and average-cost tax-basis LUL cases since these problems are the most realistic and generally the most challenging. We develop several sub-optimal trading policies for these problems and use duality techniques based on information relaxations to assess their performance. Our numerical experiments consider problems with as many as 20 securities and 20 time periods. The principal contribution of this chapter is in demonstrating that much larger problems can now be tackled through the use of sophisticated optimization techniques and duality methods based on information relaxations. We show, in fact, that the dual formulations of exact tax-basis problems are much easier to solve than the corresponding primal problems. Indeed, we can easily solve dual problem instances where the number of securities and time periods is much larger than 20. We also note, however, that while the average tax-basis problem is relatively easier to solve in general, its corresponding dual problem instances are non-convex and more difficult to solve. We therefore propose an approach for the average tax-basis dual problem that enables valid dual bounds to still be obtained. In Chapter 3, we consider a portfolio execution problem where a possibly risk-averse agent needs to trade a fixed number of shares in multiple stocks over a short time horizon. Our price dynamics can capture linear but stochastic temporary and permanent price impacts as well as stochastic volatility. In general, however, it is not possible to solve for the optimal policy in this model, even numerically, and so we must instead search for good sub-optimal policies. Our principal policy is a variant of an open-loop feedback control (OLFC) policy and we show how the corresponding OLFC value function may be used to construct good primal and dual bounds on the optimal value function. The dual bound is constructed using the recently developed duality methods based on information relaxations. One of the contributions of this chapter is the identification of sufficient conditions to guarantee convexity, and hence tractability, of the associated dual problem instances.
That said, we do not claim that the only plausible models are those where all dual problem instances are convex. We also show that it is straightforward to include a non-linear temporary price impact as well as return predictability in our model. We demonstrate numerically that good dual bounds can be computed quickly even when nested Monte-Carlo simulations are required to estimate the so-called dual penalties. These results suggest that the dual methodology can be applied in many models where closed-form expressions for the dual penalties cannot be computed. In Chapter 4, we apply duality methods based on information relaxations to dynamic zero-sum games. We show these methods can easily be used to construct dual lower and upper bounds for the optimal value of these games. In particular, these bounds can be used to evaluate sub-optimal policies for zero-sum games when calculating the optimal policies and game value is intractable.
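For readers unfamiliar with information-relaxation duality, the sketch below illustrates the basic mechanics on a toy optimal stopping problem (an assumed example, not the portfolio problems studied in the thesis): any feasible, non-anticipative policy yields a lower bound on the optimal value, while relaxing non-anticipativity and applying a dual penalty (here the trivial zero penalty) yields an upper bound.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_paths = 10, 20_000
# Toy problem: choose one time to stop and collect the current payoff.
payoffs = rng.exponential(scale=1.0, size=(n_paths, T))

def threshold_policy_value(path, thresh=2.0):
    """A simple feasible (non-anticipative) policy: stop once the payoff exceeds 2."""
    for x in path:
        if x >= thresh:
            return x
    return path[-1]                            # forced to stop at the horizon

# Any feasible policy yields a lower bound on the optimal value.
lower = np.mean([threshold_policy_value(p) for p in payoffs])

# Information relaxation with the trivial zero penalty: let the decision maker
# see the whole path, so the inner problem is just the path-wise maximum.
upper = payoffs.max(axis=1).mean()

print(f"policy value (lower bound)      ~ {lower:.3f}")
print(f"zero-penalty dual (upper bound) ~ {upper:.3f}")
```

Tighter upper bounds come from non-trivial dual penalties, typically built from approximate value functions; in Chapter 3 that role is played by the OLFC value function.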
383

Sequential Optimization in Changing Environments: Theory and Application to Online Content Recommendation Services

Gur, Yonatan January 2014 (has links)
Recent technological developments allow the online collection of valuable information that can be efficiently used to optimize decisions "on the fly" and at a low cost. These advances have greatly influenced the decision-making process in various areas of operations management, including pricing, inventory, and retail management. In this thesis we study methodological as well as practical aspects arising in online sequential optimization in the presence of such real-time information streams. On the methodological front, we study aspects of sequential optimization in the presence of temporal changes, such as designing decision-making policies that adapt to temporal changes in the underlying environment (that drives performance) when only partial information about this changing environment is available, and quantifying the added complexity in sequential decision-making problems when temporal changes are introduced. On the applied front, we study practical aspects associated with a class of online services that focus on creating customized recommendations (e.g., Amazon, Netflix). In particular, we focus on online content recommendations, a new class of online services that allows publishers to direct readers from articles they are currently reading to other web-based content they may be interested in, by means of links attached to said article. In the first part of the thesis we consider a non-stationary variant of a sequential stochastic optimization problem, where the underlying cost functions may change along the horizon. We propose a measure, termed the "variation budget," that controls the extent of said change, and study how restrictions on this budget impact achievable performance. As a yardstick to quantify performance in non-stationary settings we propose a regret measure relative to a dynamic oracle benchmark. We identify sharp conditions under which it is possible to achieve long-run-average optimality, and more refined performance measures such as rate optimality that fully characterize the complexity of such problems. In doing so, we also establish a strong connection between two rather disparate strands of literature: adversarial online convex optimization, and the more traditional stochastic approximation paradigm (couched in a non-stationary setting). This connection is the key to deriving well-performing policies in the latter, by leveraging the structure of optimal policies in the former. Finally, tight bounds on the minimax regret allow us to quantify the "price of non-stationarity," which mathematically captures the added complexity embedded in a temporally changing environment versus a stationary one. In the second part of the thesis we consider another core stochastic optimization problem couched in a multi-armed bandit (MAB) setting. We develop a MAB formulation that allows for a broad range of temporal uncertainties in the rewards, characterize the (regret) complexity of this class of MAB problems by establishing a direct link between the extent of allowable reward "variation" and the minimal achievable worst-case regret, and provide an optimal policy that achieves that performance. Similarly to the first part of the thesis, our analysis draws concrete connections between two strands of literature: the adversarial and the stochastic MAB frameworks. The third part of the thesis studies applied optimization aspects arising in online content recommendations, which allow web-based publishers to direct readers from articles they are currently reading to other web-based content.
We study the content recommendation problem and its unique dynamic features from both theoretical and practical perspectives. Using a large data set of browsing history at major media sites, we develop a representation of content along two key dimensions: clickability, the likelihood that a reader clicks to an article when it is recommended; and engageability, the likelihood that a reader clicks from an article when it hosts a recommendation. Based on this representation, we propose a class of user path-focused heuristics whose purpose is to ensure a high instantaneous probability of clicking on recommended articles while also optimizing engagement along the future path. We rigorously quantify the performance of these heuristics and validate their impact through a live experiment. This part of the thesis is based on a collaboration with a leading provider of content recommendations to online publishers.
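A minimal sketch of one simple way to cope with a bounded variation budget is to restart a standard bandit policy periodically so that stale estimates are discarded. The restarting epsilon-greedy rule and the reward process below are illustrative assumptions, not the exact policies or instances analyzed in the thesis.

```python
import numpy as np

def restarting_eps_greedy(means_over_time, block=500, eps=0.1, seed=0):
    """Epsilon-greedy, restarted every `block` rounds so stale estimates are dropped."""
    rng = np.random.default_rng(seed)
    T, K = means_over_time.shape
    total = 0.0
    for t in range(T):
        if t % block == 0:                       # restart: forget old statistics
            counts, sums = np.zeros(K), np.zeros(K)
        if rng.random() < eps or counts.min() == 0:
            arm = int(rng.integers(K))           # explore
        else:
            arm = int(np.argmax(sums / counts))  # exploit current estimates
        reward = rng.normal(means_over_time[t, arm], 0.1)
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total

# Two arms whose mean rewards slowly swap over the horizon (bounded variation).
T = 5000
t = np.linspace(0.0, 1.0, T)
means = np.column_stack([0.5 + 0.4 * t, 0.9 - 0.4 * t])
print("cumulative reward:", restarting_eps_greedy(means))
```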
384

Studies in Stochastic Networks: Efficient Monte-Carlo Methods, Modeling and Asymptotic Analysis

Dong, Jing January 2014 (has links)
This dissertation contains two parts. The first part develops a series of bias-reduction techniques for: point processes on stable unbounded regions, the steady-state distribution of infinite-server queues, the steady-state distributions of multi-server loss queues and loss networks, and sample paths of stochastic differential equations. These techniques can be applied for efficient performance evaluation and optimization of the corresponding stochastic models. We perform a detailed running-time analysis, under heavy traffic, of the perfect sampling algorithms for infinite-server queues and multi-server loss queues and prove that the algorithms achieve a nearly optimal order of complexity. The second part aims to model and analyze the load-dependent slowdown effect in service systems. One important phenomenon we observe in such systems is bi-stability, where the system alternates randomly between two performance regions. We conduct a heavy-traffic asymptotic analysis of the system dynamics and provide operational solutions to avoid the bad performance region.
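As a small illustration of simulation-based performance evaluation for infinite-server queues, the sketch below runs a crude transient simulation of an M/G/infinity queue and checks it against the exact Poisson steady state; it is an assumed toy example, not the thesis's perfect sampling algorithms. The arrival rate, service-time law and horizon are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, mean_service, horizon = 5.0, 2.0, 2000.0

# Poisson arrivals on [0, horizon] with lognormal service times of mean 2.
n_arrivals = rng.poisson(lam * horizon)
arrivals = np.sort(rng.uniform(0.0, horizon, n_arrivals))
services = rng.lognormal(mean=np.log(mean_service) - 0.125, sigma=0.5, size=n_arrivals)
departures = arrivals + services

# Number in system at late observation times, when the queue is roughly stationary.
t_obs = np.linspace(0.5 * horizon, horizon, 200)
in_system = [(arrivals <= t).sum() - (departures <= t).sum() for t in t_obs]

print("simulated mean number in system   :", np.mean(in_system))
print("exact steady-state mean (lam*E[S]):", lam * mean_service)
```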
385

Methods for Pricing Pre-Earnings Equity Options and Leveraged ETF Options

Santoli, Marco January 2015 (has links)
In this thesis, we present several analytical and numerical methods for two financial engineering problems: 1) accounting for the impact of an earnings announcement on the price and implied volatility of the associated equity options, and 2) analyzing the price dynamics of leveraged exchange-traded funds (LETFs) and the valuation of LETF options. Our pricing models capture the main characteristics of these options, along with jumps and stochastic volatility in the underlying asset. We illustrate our results through numerical implementation and calibration using market data. In the first part, we model the pricing of equity options around an earnings announcement (EA). Empirical studies have shown that an earnings announcement can lead to an immediate price shock to the company stock. Since many companies also have options written on their stocks, the option prices should reflect the uncertain price impact of an upcoming EA before expiration. To represent the shock due to earnings, we incorporate a random jump on the announcement date in the dynamics of the stock price. We consider different distributions of the scheduled earnings jump as well as different underlying stock price dynamics before and after the EA date. Our main contributions include analytical option pricing formulas when the underlying stock price follows the Kou model along with a double-exponential or Gaussian EA jump on the announcement date. Furthermore, we derive analytic bounds and asymptotics for the pre-EA implied volatility under various models. The calibration results demonstrate an adequate fit of the entire implied volatility surface prior to an announcement. The comparison of the risk-neutral distribution of the EA jump to its historical counterpart is also discussed. Moreover, we discuss the valuation and exercise strategy of pre-EA American options, and present an analytical approximation and numerical results. The second part focuses on the analysis of LETFs. We start by providing a quantitative risk analysis of LETFs with an emphasis on the impact of leverage ratios and investment horizons. Given an investment horizon, different leverage ratios imply different levels of risk. Therefore, the idea of an admissible range of leverage ratios is introduced. For an admissible leverage ratio, the associated LETF satisfies a given risk constraint based on, for example, the value-at-risk (VaR) and conditional VaR. Moreover, we discuss the concept of an admissible risk horizon so that the investor can control risk exposure by selecting an appropriate holding period. The intra-horizon risk is calculated, showing that higher leverage can significantly increase the probability of an LETF value hitting a lower level. This leads us to evaluate a stop-loss/take-profit strategy for LETFs and determine the optimal take-profit level given a stop-loss risk constraint. In addition, the impact of volatility exposure on the returns of different LETF portfolios is investigated. In the last chapter, we study the pricing of options written on LETFs. Since LETFs on the same reference index share the same source of risk, it is important to consider a consistent pricing methodology for these options. In addition, LETFs can theoretically experience a loss greater than 100%. In practice, some LETF providers design the fund so that the daily returns are capped both downward and upward. We incorporate these features and model the reference index by a stochastic volatility model with jumps.
An efficient numerical algorithm based on transform methods to value options under this model is presented. We illustrate the accuracy of our pricing algorithm by comparing it to existing methods. Calibration using empirical option data shows the impact of leverage ratio on the implied volatility. Our method is extended to price American-style LETF options.
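To illustrate one of the LETF risk features discussed above, the sketch below simulates a daily-rebalanced leveraged fund under plain geometric Brownian motion and compares its horizon return with beta times the index return. The model, parameters and leverage ratio are assumptions for illustration only; the thesis's model additionally includes jumps, stochastic volatility and capped daily returns.

```python
import numpy as np

rng = np.random.default_rng(3)
beta, sigma, mu, days, n_paths = 3.0, 0.4, 0.05, 252, 20_000
dt = 1.0 / 252

# Daily index returns under GBM; the LETF earns beta times each daily return.
index_ret = np.exp((mu - 0.5 * sigma**2) * dt
                   + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, days))) - 1.0
letf_value = np.prod(1.0 + beta * index_ret, axis=1)     # daily rebalancing
index_value = np.prod(1.0 + index_ret, axis=1)

print("mean LETF one-year return    :", letf_value.mean() - 1.0)
print("beta times mean index return :", beta * (index_value.mean() - 1.0))
print("P(LETF loses more than 50%)  :", np.mean(letf_value < 0.5))
```

The gap between the two quantities, and the sizable probability of a large drawdown at high leverage, is the kind of horizon-dependent risk that motivates the admissible leverage ratio and intra-horizon analysis above.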
386

Optimal Multiple Stopping Approach to Mean Reversion Trading

Li, Xin January 2015 (has links)
This thesis studies the optimal timing of trades under mean-reverting price dynamics subject to fixed transaction costs. We first formulate an optimal double stopping problem whereby a speculative investor can choose when to enter and subsequently exit the market. The investor's value functions and optimal timing strategies are derived when prices are driven by an Ornstein-Uhlenbeck (OU), exponential OU, or Cox-Ingersoll-Ross (CIR) process. Moreover, we analyze a related optimal switching problem that involves an infinite sequence of trades. In addition to solving for the value functions and optimal switching strategies, we identify the conditions under which the double stopping and switching problems admit the same optimal entry and/or exit timing strategies. A number of extensions are also considered, such as incorporating a stop-loss constraint, or a minimum holding period under the OU model. A typical solution approach for optimal stopping problems is to study the associated free boundary problems or variational inequalities (VIs). For the double optimal stopping problem, we apply a probabilistic methodology and rigorously derive the optimal price intervals for market entry and exit. A key step of our approach involves a transformation, which in turn allows us to characterize the value function as the smallest concave majorant of the reward function in the transformed coordinate. In contrast to the variational inequality approach, this approach directly constructs the value function as well as the optimal entry and exit regions, without a priori conjecturing a candidate value function or timing strategy. Having solved the optimal double stopping problem, we then apply our results to deduce a similar solution structure for the optimal switching problem. We also verify that our value functions solve the associated VIs. Among our results, we find that under OU or CIR price dynamics, the optimal stopping problems admit the typical buy-low-sell-high strategies. However, when the prices are driven by an exponential OU process, the investor generally enters when the price is low, but may find it optimal to wait if the current price is sufficiently close to zero. In other words, the continuation (waiting) region for entry is disconnected. A similar phenomenon is observed in the OU model with stop-loss constraint. Indeed, the entry region is again characterized by a bounded price interval that lies strictly above the stop-loss level. As for the exit timing, a higher stop-loss level always implies a lower optimal take-profit level. In all three models, numerical results are provided to illustrate the dependence of timing strategies on model parameters.
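A minimal sketch of the trading setting: simulate an Ornstein-Uhlenbeck price path and evaluate a fixed buy-low/sell-high band with a fixed transaction cost per trade. The entry and exit levels and all parameters below are arbitrary assumptions for illustration; the thesis derives the optimal thresholds.

```python
import numpy as np

rng = np.random.default_rng(4)
theta, mu, sigma = 2.0, 1.0, 0.3            # OU speed, mean level, volatility
dt, n_steps = 1.0 / 252, 252 * 20
cost, entry, exit_ = 0.01, 0.85, 1.15       # assumed (non-optimal) trade levels

# Exact OU transition: X_{t+dt} = mu + (X_t - mu) e^{-theta dt} + Gaussian noise.
x = np.empty(n_steps)
x[0] = mu
std = sigma * np.sqrt((1.0 - np.exp(-2.0 * theta * dt)) / (2.0 * theta))
for t in range(1, n_steps):
    x[t] = mu + (x[t - 1] - mu) * np.exp(-theta * dt) + std * rng.standard_normal()

pnl, holding = 0.0, False
for price in x:
    if not holding and price <= entry:
        pnl -= price + cost                 # enter (buy) and pay the fixed cost
        holding = True
    elif holding and price >= exit_:
        pnl += price - cost                 # exit (sell) and pay the fixed cost
        holding = False
# Any position still open at the end is simply ignored in this sketch.
print("P&L over 20 years of simulated prices:", pnl)
```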
387

Smart Grid Risk Management

Abad Lopez, Carlos Adrian January 2015 (has links)
Current electricity infrastructure is being stressed from several directions -- high demand, unreliable supply, extreme weather conditions, and accidents, among others. Infrastructure planners have traditionally focused only on the cost of the system; today, resilience and sustainability are becoming increasingly important. In this dissertation, we develop computational tools for efficiently managing electricity resources to help create a more reliable and sustainable electrical grid. The tools we present in this work will help electric utilities coordinate demand to allow the smooth and large-scale integration of renewable sources of energy into traditional grids, as well as provide infrastructure planners and operators in developing countries with a framework for making informed planning and control decisions in the presence of uncertainty. Demand-side management is considered the most viable solution for maintaining grid stability as generation from intermittent renewable sources increases. Demand-side management, particularly demand response (DR) programs that attempt to alter the energy consumption of customers either by using price-based incentives or up-front power interruption contracts, is more cost-effective and sustainable in addressing short-term supply-demand imbalances than the alternative of increasing fossil fuel-based fast spinning reserves. An essential step in compensating participating customers and benchmarking the effectiveness of DR programs is to be able to independently detect the load reduction from observed meter data. Electric utilities implementing automated DR programs through direct load control switches are also interested in detecting the reduction in demand to efficiently pinpoint non-functioning devices and reduce maintenance costs. We develop sparse optimization methods for detecting a small change in the demand for electricity of a customer in response to a price change or signal from the utility, dynamic learning methods for scheduling the maintenance of direct load control switches whose operating state is not directly observable and can only be inferred from the metered electricity consumption, and machine learning methods for accurately forecasting the load of hundreds of thousands of residential, commercial and industrial customers. These algorithms have been implemented in the software system provided by AutoGrid, Inc., and this system has helped several utilities in the Pacific Northwest, Oklahoma, California and Texas provide more reliable power to their customers at significantly reduced prices. Providing power to widely spread-out communities in developing countries using the conventional power grid is not economically feasible. The most attractive alternative source of affordable energy for these communities is solar micro-grids. We discuss risk-aware robust methods to optimally size and operate solar micro-grids in the presence of uncertain demand and uncertain renewable generation. These algorithms help system operators increase their revenue while making their systems more resilient to inclement weather conditions.
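The sketch below conveys the flavor of the sparse-regression idea for DR detection: metered load is modeled as a known baseline plus a sparse deviation, and an L1 penalty flags the few hours in which consumption genuinely changed. The data, baseline shape, event magnitude and thresholds are all made-up assumptions, and this is a generic formulation, not AutoGrid's production algorithm.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
hours = 24 * 7
baseline = 2.0 + 0.8 * np.sin(2 * np.pi * np.arange(hours) / 24)   # assumed daily shape
true_reduction = np.zeros(hours)
true_reduction[60:64] = -1.0                                        # 4-hour DR event
metered = baseline + true_reduction + 0.1 * rng.standard_normal(hours)

# Regress the residual (metered minus baseline) on hourly indicators with an L1
# penalty, so only hours with a genuine change receive a nonzero coefficient.
residual = metered - baseline
model = Lasso(alpha=0.002, fit_intercept=False).fit(np.eye(hours), residual)
flagged = np.flatnonzero(model.coef_ < -0.3)
print("hours flagged as a DR load reduction:", flagged)
```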
388

Decision Making with Coupled Learning: Applications in Inventory Management and Auctions

Chaneton, Juan Manuel January 2015 (has links)
Operational decisions can be complicated by the presence of uncertainty. In many cases, there exist means to reduce uncertainty, though these may come at a cost. Decision makers then face the dilemma of acting based on current, incomplete information versus investing in trying to minimize uncertainty. Understanding the impact of this trade-off on decisions and performance is the central topic of this thesis. When attempting to construct probabilistic models based on data, operational decisions often affect the amount and quality of data that is collected. This introduces an exploration-exploitation trade-off between decisions and information collection. Much of the literature has sought to understand how operational decisions should be modified to incorporate this trade-off. While studying two well-known operational problems, we ask an even more basic question: does the exploration-exploitation trade-off matter in the first place? In the first two parts of this thesis we focus on this question in the context of the newsvendor problem and sequential auctions with incomplete private information. We first analyze the well-studied stationary multi-period newsvendor problem, in which a retailer sells perishable items and unmet demand is lost and unobserved. This latter limitation, referred to as demand censoring, is what introduces the exploration-exploitation trade-off in this problem. We focus on two questions: i.) what is the value of accounting for the exploration-exploitation trade-off; and, ii.) what is the cost imposed by having access only to sales data as opposed to underlying demand samples? Quite remarkably, we show that, for a broad family of tractable cases, there is essentially no exploration-exploitation trade-off; i.e., there is almost no value of accounting for the impact of decisions on information collection. Moreover, we establish that losses due to demand censoring (as compared to having full access to demand samples) are limited, but these are of higher order than those due to ignoring the exploration-exploitation trade-off. In other words, efforts aimed at improving information collection concerning lost sales are more valuable than analytic or computational efforts to pin down the optimal policy in the presence of censoring. In the second part of this thesis we examine the problem of an agent bidding on a sequence of repeated auctions for an item. The agent does not fully know his own valuation of the object and he can only collect information if he wins an auction. This coupling introduces the exploration-exploitation trade-off in this problem. We study the value of accounting for information collection on decisions and find that: i.) in general the exploration-exploitation trade-off cannot be ignored (that is, in some cases ignoring exploration can substantially affect rewards), but ii.) for a broad class of instances, ignoring exploration can indeed produce nearly optimal results. We characterize this class through a set of conditions on the problem primitives, and we demonstrate with examples that these are satisfied for common settings found in the literature. In the third part of this thesis we study the impact of uncertainty in the context of inventory record inaccuracies in inventory management systems. Record inaccuracies, mismatches between physical and recorded inventory, are frequently encountered in practice and can markedly affect revenues. 
Most of the literature is devoted to analyzing the cost-benefit relationship between investing in means to reduce inaccuracies and accounting for them in operational decisions. We focus on the less explored approach of using available data to reduce the uncertainty in inventory. In practice, collecting Point Of Sale (POS) data is substantially simpler than collecting stock information. We propose a model in which inventory is regarded as a virtually unobservable quantity and POS data is used to infer its state over time. Additionally, our method also works as an effective estimator of censored demand in the presence of inaccurate records. We test our methodology with extensive numerical experiments based on both simulated and actual retailing data. The results show that it is remarkably effective in inferring unobservable past statistics and predicting future stock status, even in the presence of severe data misspecifications.
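For context, the sketch below shows the newsvendor setting with demand censoring: a myopic policy that re-estimates its order quantity from observed (censored) sales, ignoring the exploration-exploitation trade-off. The Poisson demand, prices and starting stock are assumed for illustration; this is a toy, not the thesis's analysis.

```python
import numpy as np

rng = np.random.default_rng(6)
periods, price, cost = 200, 5.0, 3.0
critical_fractile = (price - cost) / price        # optimal service level (no salvage)
demand = rng.poisson(lam=50, size=periods)

q, sales_history, profit = 60.0, [], 0.0
for d in demand:
    sold = min(d, q)                              # unmet demand is lost and unobserved
    profit += price * sold - cost * q
    sales_history.append(sold)
    # Myopic update: quantile of *sales* (censored data), ignoring exploration.
    q = np.quantile(sales_history, critical_fractile)

print("average profit per period:", profit / periods)
print("final order quantity     :", q)
print("ideal order quantity     :", np.quantile(demand, critical_fractile))
```

Because lost sales are unobserved, the sales-based quantile tends to drift below the ideal demand quantile; the thesis quantifies how much of this loss is due to censoring itself versus ignoring the exploration-exploitation trade-off.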
389

Stochastic Networks: Modeling, Simulation Design and Risk Control

Li, Juan January 2015 (has links)
This dissertation studies stochastic network problems that arise in various areas with important industrial applications. Due to uncertainty in both external and internal variables, these networks are exposed to the risk of failure with a certain probability, which, in many cases, is very small. It is thus desirable to develop efficient simulation algorithms to study the stability of these networks and provide guidance for risk control. Chapter 2 models equilibrium allocations in a distribution network as the solution of a linear program (LP) which minimizes the cost of unserved demands across nodes in the network. Assuming that the demands are random (following a jointly Gaussian law), we study the probability that the optimal cost exceeds a large threshold, which is a rare event. Our contribution is the development of importance sampling and conditional Monte Carlo algorithms for estimating this probability. We establish the asymptotic efficiency of our algorithms and present numerical results that demonstrate their strong performance. Chapter 3 studies an insurance-reinsurance network model that deals with default contagion risks, with a particular aim of capturing cascading effects at the time of defaults. We capture these effects through an equilibrium allocation of settlements, obtained as the unique optimal solution of an optimization problem. We obtain an asymptotic description of the most likely ways in which the default of a specific group of participants can occur, by solving a multidimensional knapsack integer programming problem. We also propose a class of strongly efficient Monte Carlo estimators for computing the expected loss of the network conditioned on the failure of a specific set of companies. Chapter 4 discusses control schemes for maintaining a low failure probability of a transmission-system power line. We construct a stochastic differential equation to describe the temperature evolution in a line subject to stochastic exogenous factors such as ambient temperature, and present a solution to the resulting stochastic heat equation. A number of control algorithms designed to limit the probability that a line exceeds its critical temperature are provided.
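A minimal sketch of the importance-sampling idea behind such rare-event estimators, on a one-dimensional toy with a mean-shifted (exponentially tilted) proposal rather than the network estimators developed in Chapter 2: the sampling distribution is moved onto the rare set and each sample is reweighted by the likelihood ratio.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
a, n = 5.0, 100_000                        # threshold and sample size

# Naive Monte Carlo almost never observes the event {X > a}.
naive = (rng.standard_normal(n) > a).mean()

# Importance sampling: draw from N(a, 1) and reweight by the likelihood ratio
# phi(x) / phi(x - a) = exp(-a*x + a^2/2).
x = a + rng.standard_normal(n)
weights = np.exp(-a * x + 0.5 * a**2)
is_est = np.mean((x > a) * weights)

print("exact P(X > a)     :", norm.sf(a))
print("naive Monte Carlo  :", naive)
print("importance sampling:", is_est)
```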
390

Ranking Algorithms on Directed Configuration Networks

Chen, Ningyuan January 2015 (has links)
In recent decades, complex real-world networks, such as social networks, the World Wide Web, financial networks, etc., have become a popular subject for both researchers and practitioners. This is largely due to the advances in computing power and big-data analytics. A key issue in analyzing these networks is the centrality of nodes, and ranking algorithms, such as Google's PageRank, are designed to measure it. We analyze the asymptotic distribution of the rank of a randomly chosen node, computed by a family of ranking algorithms (including PageRank) on a random graph, when the size of the network grows to infinity. We propose a configuration model generating the structure of a directed graph given in- and out-degree distributions of the nodes. The algorithm guarantees the generated graph to be simple (without self-loops or multiple edges in the same direction) for a broad spectrum of degree distributions, including power-law distributions. A power-law degree distribution is referred to as the scale-free property and is observed in many real-world networks. On the random graph G_n = (V_n, E_n) generated by the configuration model, we study the distribution of the ranks, which solve R_i = ∑_{j: (j,i) ∈ E_n} C_j R_j + Q_i for every node i, where C_j is a weight and Q_i a personalization value. We show that as the size of the graph n → ∞, the rank of a randomly chosen node converges weakly to the endogenous solution of the stochastic fixed-point equation R =_D ∑_{i=1}^N C_i R_i + Q, where (Q, N, {C_i}) is a random vector and {R_i} are i.i.d. copies of R, independent of (Q, N, {C_i}). The proof of this main result proceeds in three steps. First, we show that the rank of a randomly chosen node can be approximated by applying the ranking algorithm on the graph for finitely many iterations. Second, by coupling the graph to a branching tree that is governed by the empirical size-biased distribution, we approximate the finitely iterated ranking algorithm by the rank of the root node of the branching tree. Finally, we prove that the rank of the root of the branching tree converges to that of a limiting weighted branching process, which is independent of n and solves the stochastic fixed-point equation. Our result formalizes the well-known heuristic that a network often locally possesses a tree-like structure. We present a numerical example showing that the approximation is very accurate for English Wikipedia (over 5 million pages). To draw a sample from the endogenous solution of the stochastic fixed-point equation, one can run linear branching recursions on a weighted branching process. We provide an iterative simulation algorithm based on bootstrapping. Compared to naive Monte Carlo, our algorithm reduces the complexity from exponential to linear in the number of recursions. We show that as the bootstrap sample size tends to infinity, the sample drawn according to our algorithm converges to the target distribution in the Kantorovich-Rubinstein distance and the estimator is consistent.
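A minimal sketch of the ranking recursion itself, on a small dense random graph rather than the directed configuration model studied in the thesis: iterate R_i = ∑_{j→i} C_j R_j + Q_i to its fixed point, with PageRank as the special case C_j = c/out-degree(j) and Q_i = (1-c)/n. The graph density and damping factor below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)
n, c = 50, 0.85
adj = (rng.random((n, n)) < 0.1).astype(float)   # adj[j, i] = 1 if edge j -> i
np.fill_diagonal(adj, 0.0)
out_deg = np.maximum(adj.sum(axis=1), 1.0)       # avoid division by zero for sinks

C = c / out_deg                                  # per-node damping weights C_j
Q = np.full(n, (1.0 - c) / n)                    # personalization values Q_i

R = Q.copy()
for _ in range(100):                             # finitely many iterations suffice
    R_new = adj.T @ (C * R) + Q
    if np.max(np.abs(R_new - R)) < 1e-12:
        break
    R = R_new
print("top-5 ranked nodes:", np.argsort(R)[-5:][::-1])
```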
