391

Two Essays in Financial Engineering

Yang, Linan January 2015
This dissertation consists of two parts.

In the first part, we investigate the potential impact of wrong-way risk on the credit valuation adjustment (CVA) of a derivatives portfolio. A CVA is an adjustment applied to the value of a derivative contract or a portfolio of derivatives to account for counterparty credit risk, and measuring it requires combining models of market and credit risk. Wrong-way risk refers to the possibility that a counterparty's likelihood of default increases with the market value of the exposure. We develop a method for bounding wrong-way risk, holding fixed the marginal models for market and credit risk while varying the dependence between them. Given simulated paths of the two models, we solve a linear program to find the worst-case CVA resulting from wrong-way risk. We analyze properties of the solution and prove convergence of the estimated bound as the number of paths increases. Because the worst case can be overly pessimistic, we extend the procedure to a tempered CVA by penalizing the deviation of the joint model of market and credit risk from a reference model. By varying the penalty for deviations, we can sweep out the full range of possible CVA values for different degrees of wrong-way risk. Here, too, we prove convergence of the estimate of the tempered CVA and of the joint distribution that attains it. Our method addresses an important source of model risk in counterparty risk measurement.

In the second part, we study investors' trading behavior in a model of realization utility. We assume that investors' trading decisions are driven not only by the utility of consumption and terminal wealth, but also by the utility burst from realizing a gain or a loss. More precisely, we consider a dynamic trading problem in which an investor decides when to purchase and sell a stock to maximize her wealth utility and realization utility, with her reference point adapting asymmetrically to the stock's gains and losses. We study, both theoretically and numerically, the optimal trading strategies and asset pricing implications of two types of agents: adaptive agents, who anticipate the future adaptation of their reference points, and naive agents, who fail to do so. We find that an adaptive agent sells the stock more frequently when it is at a gain than a naive agent does, and that in equilibrium the adaptive agent demands a higher risk premium for the stock than the naive agent does. Moreover, compared to a non-adaptive agent, whose reference point does not change with the stock's gains and losses, both the adaptive and naive agents sell the stock less frequently, and the naive agent requires the same risk premium as the non-adaptive agent.
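The worst-case computation in the first part reduces to a transportation-style linear program once paths are simulated: hold the two marginals fixed and let the LP choose the adversarial joint distribution. The sketch below illustrates that idea with made-up toy data; the variable names, sizes, and the uniform default-time marginal are illustrative assumptions, not anything from the dissertation.

```python
# Minimal sketch of a worst-case CVA linear program: fix the marginal laws of
# market exposure paths and default times, maximize expected discounted loss.
# All names and the toy data are illustrative, not the dissertation's code.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, T = 50, 10                      # m simulated market paths, T default periods
e = rng.lognormal(size=(m, T))     # e[i, j]: discounted positive exposure if the
                                   # counterparty defaults in period j on path i
q = np.full(T, 1.0 / T)            # marginal default-time probabilities (toy: uniform)

# Decision variables p[i, j] >= 0, flattened row-major.
# Row sums: sum_j p[i, j] = 1/m (market marginal);
# column sums: sum_i p[i, j] = q[j] (credit marginal).
A_eq = np.zeros((m + T, m * T))
for i in range(m):
    A_eq[i, i * T:(i + 1) * T] = 1.0
for j in range(T):
    A_eq[m + j, j::T] = 1.0
b_eq = np.concatenate([np.full(m, 1.0 / m), q])

# linprog minimizes, so negate the objective to maximize expected loss.
res = linprog(-e.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print("worst-case CVA estimate:", -res.fun)
```

The tempered version described above would replace this hard maximization with a penalty on the divergence of p from a reference joint model, sweeping out intermediate CVA values as the penalty weight varies.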
392

Heavy Tails and Instabilities in Large-Scale Systems with Failures

Skiani, Evangelia January 2015
Modern engineering systems, e.g., wireless communication networks, distributed computing systems, etc., are characterized by high variability and susceptibility to failures. Failure recovery is required to guarantee the successful operation of these systems. One straightforward and widely used mechanism is to restart interrupted jobs from the beginning after a failure occurs. In network design, retransmissions are the primary building blocks of the network architecture that guarantee data delivery in the presence of channel failures. Retransmissions have recently been identified as a new origin of power laws in modern information networks; in particular, it was discovered that retransmissions give rise to long tails (delays) and possibly zero throughput. To this end, we investigate the impact of the 'retransmission phenomenon' on the performance of failure-prone systems and propose adaptive solutions to address emerging instabilities.

The preceding finding of power law phenomena due to retransmissions holds under the assumption that data sizes have infinite support. In practice, however, data sizes are upper bounded, 0 ≤ L ≤ b; e.g., WaveLAN's maximum transfer unit is 1500 bytes, YouTube videos are of limited duration, e-mail attachments cannot exceed 10MB, etc. To this end, we first provide a uniform characterization of the entire body of the distribution of the number of retransmissions, which can be represented as a product of a power law and the Gamma distribution. This rigorous approximation clearly demonstrates the transition from a power law main body to an exponential tail. Furthermore, the results highlight the importance of wisely determining the size of data fragments in order to accommodate the performance needs of these systems, and they provide the appropriate tools for this fragmentation.

Second, we extend the analysis to the practically important case of correlated channels, using modulated processes (e.g., Markov modulated) to capture the underlying dependencies. Our study shows that the tails of the retransmission and delay distributions are asymptotically insensitive to the channel correlations and are determined by the state that generates the lightest tail in the independent-channel case. This insight benefits both capacity planning and channel modeling, since the independent model is sufficient and the correlation details do not matter. However, this finding may be overly optimistic when the best state is atypical, since the effects of 'bad' states may still degrade performance.

Third, we examine the effects of scheduling policies in queueing systems with failures and restarts. Fair sharing, e.g., processor sharing (PS), is a widely accepted approach to resource allocation among multiple users. We revisit the well-studied M/G/1 PS queue with a new focus on server failures and restarts. Interestingly, we discover a new phenomenon: PS-based scheduling induces complete instability in the presence of retransmissions, regardless of how low the traffic load may be. This occurs even when the job sizes are bounded/fragmented, e.g., deterministic. This work demonstrates that scheduling one job at a time, as in first-come first-served, achieves a larger stability region and should be preferred in these systems.

Last, we delve into the area of distributed computing and study the effects of commonly used mechanisms, i.e., restarts, fragmentation, and replication, especially in cloud computing services. We evaluate the efficiency of these techniques under different assumptions on the data streams and discuss the corresponding optimization problem. These findings are useful for optimal resource allocation and fault tolerance in rapidly developing computing networks. Beyond networking and distributed computing systems, the aforementioned results improve our understanding of failure recovery management in large manufacturing and service systems, e.g., call centers. Scalable solutions to this problem increase in significance as these systems continuously grow in scale and complexity. The new phenomena and the techniques developed herein provide new insights in the areas of parallel computing, probability and statistics, as well as financial engineering.
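The retransmission phenomenon, and the transition from a power-law body to an exponential tail under bounded data sizes, is easy to see by simulation. The sketch below uses a standard restart model with exponential job sizes and channel availability periods; all parameters are illustrative assumptions rather than values from the thesis.

```python
# Monte Carlo sketch of the restart/retransmission model: a job of random
# size L is retransmitted from scratch until a single channel availability
# period A covers it whole. Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def retransmissions(mu=1.0, lam=0.5, bound=np.inf, n_jobs=100_000):
    """Return the number of attempts N needed per job.

    L ~ Exp(mu) truncated at `bound` (bound=inf is the unbounded case, where
    P(N > n) decays as a power law with exponent mu/lam); each attempt draws
    an independent availability period A ~ Exp(lam)."""
    L = np.minimum(rng.exponential(1 / mu, n_jobs), bound)
    counts = np.zeros(n_jobs, dtype=int)
    active = np.ones(n_jobs, dtype=bool)
    while active.any():
        counts[active] += 1
        A = rng.exponential(1 / lam, active.sum())
        finished = A >= L[active]
        idx = np.flatnonzero(active)
        active[idx[finished]] = False
    return counts

# Unbounded sizes: heavy (power-law) tail for N. Bounded sizes: the tail of N
# transitions from a power-law body to an exponential decay.
for b in (np.inf, 5.0):
    N = retransmissions(bound=b)
    print(f"bound={b}: P(N > 10) = {(N > 10).mean():.4f}, max N = {N.max()}")
```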
393

Coordinated dynamic planning for air and space operations / CDASOCS

Wroten, Matthew Christian January 2005
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2005. Includes bibliographical references (p. 148-150).

Planners of military air and space operations in a battlefield environment seek to allocate resources against targets in a way that best achieves the objectives of the commander. In future conflicts, the presence of new types of assets, such as tactical space-based sensors and Operationally Responsive Spacelift (ORS) assets, will add complexity to air and space operations decisions. To best achieve objectives, planners of different types of assets will likely need to work collaboratively when formulating tasking for their resources. The purpose of this research is to investigate the challenges of air and space collaboration and to quantify its potential benefit. We model a future threat scenario involving a rogue nation with Weapons of Mass Destruction (WMD) capability and a significant air defense force. We consider three separately controlled resource groups (aircraft, satellites, and ORS assets) to combat the target threat. In addition, we formulate a top-level coordination controller whose job is to effect collaborative decision-making among the resource groups. Using a combination of pre-existing software and new algorithms, we develop the Coordinated Dynamic Air and Space Operations Control System (CDASOCS), which simulates controller-generated plans in a battlefield environment recurring over multiple planning periods. New algorithms are presented for both the top-level coordination controller and the ORS controller. The benefits of resource coordination in CDASOCS are demonstrated in three main experiments along with several parameter variation tests.

by Matthew Christian Wroten. S.M.
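As a loose illustration of the allocation subproblem such planners face (not CDASOCS itself, whose algorithms are not reproduced here), one can solve a toy asset-to-target assignment with made-up effectiveness numbers:

```python
# Toy illustration of a resource-to-target assignment: task each asset to the
# target that maximizes total expected value destroyed. All numbers are
# invented for the example; CDASOCS's own algorithms are far richer.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
value = rng.uniform(1, 10, size=6)           # value of each target
p_kill = rng.uniform(0.2, 0.9, size=(6, 6))  # p_kill[a, t]: asset a vs target t

# Expected value destroyed if asset a is tasked against target t.
payoff = p_kill * value                      # broadcasts value over targets
assets, targets = linear_sum_assignment(payoff, maximize=True)
for a, t in zip(assets, targets):
    print(f"asset {a} -> target {t}, expected value {payoff[a, t]:.2f}")
```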
394

Distributed averaging in dynamic networks

Rajagopalan, Shreevatsa January 2010
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (p. 39-40).

The question of computing the average of numbers present at the nodes of a network in a distributed manner, using gossip or message-passing algorithms, has been of great recent interest across disciplines: algorithms, control and robotics, estimation, social networks, etc. It has served as a non-trivial, representative model for an important class of questions arising in these disciplines, guiding intellectual progress over the past few decades. In most of these applications there is inherent dynamics, such as changes in the network topology in terms of communication links, changes in the values of the numbers present at nodes, and nodes joining or leaving. The effect of dynamics in the communication links on the design and analysis of averaging algorithms is reasonably well understood, e.g. [14][2][8][4]. However, little is known about the effect of other forms of dynamics. In this thesis, we study the effect of such dynamics in the context of maintaining an average in the network. Specifically, we design a dynamics-aware message-passing or gossip algorithm that maintains a good estimate of the average in the presence of continuous change in the numbers at the nodes. Clearly, in the presence of such dynamics the best one can hope for is a tradeoff between the accuracy of each node's estimate of the average at each time instant and the rate of the dynamics. For our algorithm, we characterize this tradeoff and establish that it is near optimal. The dependence of the accuracy of the algorithm on the rate of dynamics, as well as on the underlying graph structure, is quantified.

by Shreevatsa Rajagopalan. S.M.
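A minimal sketch of the setting, assuming the standard mass-conserving trick of folding each local value change into the node's running estimate; the thesis's actual algorithm and its analysis are more refined than this:

```python
# Pairwise gossip averaging while node values drift continuously. Each node
# keeps a value v[i] (changing over time) and an estimate y[i] of the network
# average. Update rule and parameters are textbook assumptions, not the thesis's.
import numpy as np

rng = np.random.default_rng(3)
n, steps, drift = 100, 20_000, 0.01
v = rng.normal(size=n)   # each node's local number (changes over time)
y = v.copy()             # each node's running estimate of the network average

for t in range(steps):
    dv = drift * rng.normal(size=n)    # continuous change in local numbers
    v += dv
    y += dv                            # fold local change into the estimate,
                                       # so sum(y) tracks sum(v) exactly
    i, j = rng.choice(n, size=2, replace=False)
    y[i] = y[j] = 0.5 * (y[i] + y[j])  # one pairwise gossip exchange
    if t % 5000 == 0:
        # accuracy: spread of local estimates around the current true average
        print(f"t={t}: max estimate error = {np.abs(y - v.mean()).max():.3f}")
```

The faster the drift relative to the gossip rate, the larger the steady-state error, which is exactly the accuracy-versus-rate-of-dynamics tradeoff the thesis characterizes.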
395

Delay characterization and prediction in major U.S. airline networks

Hanley, Zebulon James January 2015
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2015. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 101-102).

This thesis expands on models that predict delays within the National Airspace System (NAS) in the United States. We propose a new method to predict the expected behavior of the NAS throughout the course of an entire day after only a few flying hours have elapsed. We do so by using k-means clustering to classify daily NAS behavior into a small set of most commonly seen snapshots, and then using random forests to map the delay behavior experienced early in the day to the most similar NAS snapshot, from which we make our type-of-day prediction for the NAS. By noon EST, we are able to predict the NAS type-of-day with 85% accuracy. We then incorporate these NAS type-of-day predictions into previously proposed models that predict the delay on specific origin-destination (OD) pairs within the U.S. a certain number of hours into the future. The predictions use local delay variables, such as the current delay on specific OD pairs and at airports, as well as network-level variables such as the NAS type-of-day. These OD-pair delay prediction models use random forests to make classification and regression predictions. We study the effects of changes in the classification threshold, the prediction horizon, the inclusion of the NAS type-of-day, and the use of wheels-off/on, actual, and scheduled gate departure and arrival times. Lastly, we explore how the delay behavior of the NAS has changed over the last ten years and how well the models perform on new data.

by Zebulon James Hanley. S.M.
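The two-stage pipeline can be sketched with standard tooling on synthetic data; everything below (cluster count, feature construction, the synthetic delay profiles) is an illustrative assumption, not the thesis's data or settings:

```python
# Sketch of the type-of-day pipeline: cluster whole-day delay profiles into a
# few snapshots, then predict the cluster label from early-day observations.
# Synthetic data stands in for the real NAS records.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
days, airports = 1000, 30
full_day = rng.gamma(2.0, 15.0, size=(days, airports))        # daily delay profiles
early_day = full_day * rng.uniform(0.2, 0.5, size=(days, 1))  # by-noon signal

# Stage 1: classify whole-day NAS behavior into k commonly seen snapshots.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(full_day)
labels = kmeans.labels_

# Stage 2: predict the type-of-day from what has been observed by noon.
X_tr, X_te, y_tr, y_te = train_test_split(early_day, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("type-of-day accuracy:", clf.score(X_te, y_te))
```

In the thesis, the predicted type-of-day label then feeds into downstream OD-pair delay models as a network-level feature alongside the local delay variables.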
396

Exact Methods for Solving Single-Vehicle Pickup and Delivery Problems in Real Time

O'Neil, Ryan James 02 March 2019
The Traveling Salesman Problem with Pickup and Delivery (TSPPD) describes the problem of finding a minimum cost path in which pickups precede their associated deliveries. The TSPPD is particularly important in the growing field of Dynamic Pickup and Delivery Problems (DPDP). These include the many-to-many Dial-A-Ride Problems (DARP) of companies such as Uber and Lyft, and the Meal Delivery Routing Problem (MDRP) of Grubhub. We examine exact methods for solving TSPPDs where orders from different pickup locations can be carried simultaneously. Our interest lies in solving such problems for real-time applications, in which finding high quality solutions quickly (often in less than a second) is more important than proving the optimality of such solutions.

We begin by considering enumeration, Constraint Programming (CP) with and without Assignment Problem (AP) inference duals, Mixed Integer Programming (MIP), and hybrid methods combining CP and MIP. Our CP formulations examine multiple techniques for ensuring pickup and delivery precedence relationships. We then propose a new MIP formulation for the Asymmetric Traveling Salesman Problem with Pickup and Delivery, along with valid inequalities for the Sarin-Sherali-Bhootra formulation. We study these models in their complete forms, relax complicating constraints of these models, and compare their performance. Finally, we examine the use of low-width Multivalued Decision Diagrams (MDDs) in a branch-and-bound, with and without AP inference duals, as a primal heuristic for finding high quality solutions to TSPPDs within strict time budgets.

In our results and conclusions, we attempt to provide guidance about which of these methods may be most appropriate for fast TSPPD solving given various time budgets and problem sizes. We present computational results showing the promise of our new MIP formulations when applied to pickup and delivery problems. Finally, we show that hybridized low-width MDDs can be more effective than similarly structured hybrid CP techniques for real-time combinatorial decision making.
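For scale, a brute-force enumeration baseline for a tiny TSPPD instance looks as follows. It respects the pickup-before-delivery precedence but is nothing like the CP/MIP/MDD methods studied in the dissertation; the instance data are made up:

```python
# Brute-force TSPPD baseline: enumerate all tours from the depot and keep the
# cheapest one in which every pickup precedes its associated delivery.
# Feasible only for tiny instances; exact methods in the thesis scale further.
from itertools import permutations
import math

# depot is point 0; pickup -> delivery pairs index into the point list
points = [(0, 0), (1, 5), (4, 1), (2, 2), (5, 4)]  # depot, p1, d1, p2, d2
pairs = [(1, 2), (3, 4)]

def cost(tour):
    legs = [0, *tour, 0]  # leave from and return to the depot
    return sum(math.dist(points[a], points[b]) for a, b in zip(legs, legs[1:]))

def feasible(tour):
    pos = {v: i for i, v in enumerate(tour)}
    return all(pos[p] < pos[d] for p, d in pairs)

best = min((t for t in permutations(range(1, len(points))) if feasible(t)),
           key=cost)
print("best tour:", best, "cost:", round(cost(best), 3))
```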
397

Sparse learning : statistical and optimization perspectives

Dedieu, Antoine January 2018
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018. Cataloged from PDF version of thesis. Includes bibliographical references (pages 101-109).

In this thesis, we study the computational and statistical aspects of several sparse models when the number of samples and/or features is large. We propose new statistical estimators and build new computational algorithms, borrowing tools and techniques from convex and discrete optimization. First, we explore an Lq-regularized version of the best subset selection procedure, which mitigates the poor statistical performance of the best-subsets estimator in low SNR regimes. We explore the statistical and empirical properties of the estimator, especially in comparison with best-subsets selection, the Lasso, and Ridge. Second, we propose new computational algorithms for a family of penalized linear Support Vector Machine (SVM) problems with a hinge loss function and sparsity-inducing regularizations. Our methods bring together techniques from column (and constraint) generation and modern first-order methods for non-smooth convex optimization. These two components complement each other's strengths, leading to improvements of two orders of magnitude over commercial LP solvers. Third, we present a novel framework inspired by hierarchical Bayesian modeling to predict user session length on online streaming services. The time spent by a user on a platform depends upon user-specific latent variables which are learned via hierarchical shrinkage. Our framework incorporates flexible parametric/nonparametric models on the covariates and outperforms state-of-the-art estimators in terms of efficiency and predictive performance on real-world datasets from the internet radio company Pandora Media Inc.

by Antoine Dedieu. S.M.
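A minimal sketch of an L2-regularized best-subset fit in the spirit of the first chapter, using exhaustive search over supports (feasible only for small p; the data, penalty weight, and selection criterion are illustrative assumptions, and the thesis's estimator and algorithms are more sophisticated):

```python
# Ridge-regularized best subset by exhaustive search: for each support of
# size k, solve a ridge problem on that support and keep the best objective.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
n, p, k, lam = 100, 10, 3, 1.0
beta_true = np.zeros(p)
beta_true[:k] = [3.0, -2.0, 1.5]
X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(scale=2.0, size=n)   # low-ish SNR

best_obj, best_support, best_coef = np.inf, None, None
for support in combinations(range(p), k):
    Xs = X[:, support]
    # ridge solve on the candidate support: (Xs'Xs + lam I) b = Xs'y
    b = np.linalg.solve(Xs.T @ Xs + lam * np.eye(k), Xs.T @ y)
    obj = np.sum((y - Xs @ b) ** 2) + lam * (b @ b)
    if obj < best_obj:
        best_obj, best_support, best_coef = obj, support, b

print("selected support:", best_support, "coefficients:", np.round(best_coef, 2))
```

The L2 term shrinks the selected coefficients, which is what helps best-subsets in low-SNR regimes where the unregularized estimator overfits.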
398

Essays in financial engineering

Haugh, Martin B. (Martin Brendan) January 2001
Thesis (Ph.D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2001. Includes bibliographical references (p. 109-115).

This thesis consists of three essays that apply techniques of operations research to problems in financial engineering. In particular, we study problems in portfolio optimization and options pricing.

The first essay is motivated by the fact that derivative securities are equivalent to specific dynamic trading strategies in complete markets. This suggests the possibility of constructing buy-and-hold portfolios of options that mimic certain dynamic investment policies, e.g., asset-allocation rules. We explore this possibility by solving the following problem: given an optimal dynamic investment policy, find a set of options at the start of the investment horizon that comes closest to the optimal dynamic investment policy. We solve this problem for several combinations of preferences, return dynamics, and optimality criteria, and show that under certain conditions a portfolio consisting of just a few European options is an excellent substitute for considerably more complex dynamic investment policies.

In the second essay, we develop a method for pricing and exercising high-dimensional American options. The approach is based on approximate dynamic programming, using nonlinear regression to approximate the value function. Using the approximate dynamic programming solutions, we construct upper and lower bounds on the option prices. These bounds can be evaluated by Monte Carlo simulation, and they are general enough to be used in conjunction with other approximate methods for pricing American options. We characterize the theoretical worst-case performance of the pricing bounds and examine how they may be used for hedging and exercising the option. We also discuss the implications for the design of the approximate pricing algorithm and illustrate its performance on a set of sample problems in which we price call options on the maximum and the geometric mean of a collection of stocks.

The third essay explores the possibility of solving high-dimensional portfolio optimization problems using approximate dynamic programming. In particular, we employ approximate value iteration, where the portfolio strategy at each time period is obtained using quadratic approximations to the approximate value function. We then compare the resulting solution to the best heuristic strategies available. Though the approximate dynamic programming solutions are often competitive, they are sometimes dominated by the best heuristic strategy; on such occasions we conclude that inaccuracies in the quadratic approximations are responsible for the poor performance. Finally, we compare our results to other recent work in this area and suggest possible methods for improving these algorithms.

by Martin B. Haugh. Ph.D.
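The second essay's regression-based approximate dynamic programming belongs to the least-squares Monte Carlo family. The simplified sketch below prices a single-asset American put by backward regression on simulated paths, yielding a lower-bound-flavor estimate; the thesis's upper/lower bounds and high-dimensional payoffs are considerably more involved, and all parameters here are illustrative assumptions:

```python
# Longstaff-Schwartz-style sketch: simulate paths, then step backward,
# regressing the discounted continuation value on the stock price and
# exercising where intrinsic value beats the fitted continuation value.
import numpy as np

rng = np.random.default_rng(6)
S0, K, r, sigma, T, steps, paths = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 100_000
dt = T / steps
disc = np.exp(-r * dt)

# geometric Brownian motion paths at times dt, 2*dt, ..., T
z = rng.standard_normal((paths, steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))

payoff = np.maximum(K - S[:, -1], 0.0)          # exercise value at maturity
for t in range(steps - 2, -1, -1):
    payoff *= disc                               # discount one step back
    itm = K - S[:, t] > 0                        # regress only in-the-money paths
    x = S[itm, t]
    coef = np.polyfit(x, payoff[itm], 2)         # quadratic continuation estimate
    cont = np.polyval(coef, x)
    exercise = K - x
    ex_now = exercise > cont
    payoff[np.flatnonzero(itm)[ex_now]] = exercise[ex_now]

print("American put price (regression-based estimate):", round(disc * payoff.mean(), 3))
```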
399

Tractability through approximation : a study of two discrete optimization problems

Farahat, Amr January 2004
Thesis (Ph.D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2004. Includes bibliographical references.

This dissertation consists of two parts. In the first part, we address a class of weakly-coupled multi-commodity network design problems characterized by restrictions on path flows and 'soft' demand requirements. In the second part, we address the abstract problem of maximizing non-decreasing submodular functions over independence systems, which arises in a variety of applications such as combinatorial auctions and facility location. Our objective is to develop approximate solution procedures suitable for large-scale instances that provide a continuum of trade-offs between accuracy and tractability.

In Part I, we review the application of Dantzig-Wolfe decomposition to mixed-integer programs. We then define a class of multi-commodity network design problems that are weakly coupled in the flow variables. We show that this problem is NP-complete, and proceed to develop an approximation/reformulation solution approach based on Dantzig-Wolfe decomposition. We apply these ideas to the specific problem of airline fleet assignment, with the goal of creating models that incorporate more realistic revenue functions. This yields a new formulation of the problem with a provably stronger linear programming relaxation, and we provide empirical evidence that it performs better than other models proposed in the literature.

In Part II, we investigate the performance of a family of greedy-type algorithms for the problem of maximizing submodular functions over independence systems. Building on pioneering work by Conforti, Cornuéjols, Fisher, Jenkyns, Nemhauser, Wolsey, and others, we analyze a greedy algorithm that incrementally augments the current solution by adding subsets of arbitrary variable cardinality. This generalizes the standard best-in greedy algorithm, at one extreme, and complete enumeration, at the other extreme. We derive worst-case approximation guarantees on the solution produced by such an algorithm for matroids. We then define a continuous relaxation of the original problem and show that some of the derived bounds apply with respect to the relaxed problem. We also report a new bound for independence systems. These bounds extend, and in some cases strengthen, previously known results for the standard best-in greedy.

by Amr Farahat. Ph.D.
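The baseline that Part II generalizes, the standard best-in greedy for a monotone submodular function over an independence system, can be sketched as follows. The cardinality constraint (the simplest independence system) and the weighted-coverage objective are illustrative choices, not the dissertation's setting:

```python
# Standard best-in greedy for monotone submodular maximization under a
# cardinality constraint: repeatedly add the element with the largest
# marginal gain. The thesis's variant adds whole subsets per step instead.
def greedy_max(ground, f, k):
    S = set()
    while len(S) < k:
        gains = {e: f(S | {e}) - f(S) for e in ground - S}
        e, gain = max(gains.items(), key=lambda kv: kv[1])
        if gain <= 0:
            break
        S.add(e)
    return S

sets = {1: {"a", "b"}, 2: {"b", "c", "d"}, 3: {"d", "e"}, 4: {"a", "e", "f"}}

def coverage(S):
    """Monotone submodular objective: number of elements covered by chosen sets."""
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

print(greedy_max(set(sets), coverage, 2))   # e.g. {2, 4}, covering 6 elements
```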
400

Robust model selection and outlier detection in linear regressions

McCann, Lauren January 2006
Thesis (Ph.D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2006. Includes bibliographical references (p. 191-196).

In this thesis, we study the problems of robust model selection and outlier detection in linear regression. The results of data analysis based on linear regressions are highly sensitive to model choice and to the existence of outliers in the data. This thesis aims to help researchers choose the correct model when their data could be contaminated with outliers, to detect possible outliers in their data, and to study the impact that such outliers have on their analysis.

First, we discuss the problem of robust model selection. Many methods for performing model selection were designed with the standard error model ... and least squares estimation in mind. These methods often perform poorly on real-world data, which can include outliers. Robust model selection methods aim to protect us from outliers and to capture the model that represents the bulk of the data. We review the currently available model selection algorithms (both non-robust and robust) and present five new algorithms. Our algorithms aim to improve upon the currently available algorithms in terms of both accuracy and computational feasibility. We demonstrate the improved accuracy of our algorithms via a simulation study and a study on a real-world data set.

Finally, we discuss the problem of outlier detection. In addition to model selection, outliers can adversely influence many other outcomes of regression-based data analysis. We describe a new outlier diagnostic tool, which we call diagnostic data traces. This tool can be used to detect outliers and study their influence on a variety of regression statistics. We demonstrate our tool on several data sets that are considered benchmarks in the field of outlier detection.

by Lauren McCann. Ph.D.
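As a small illustration of the kind of regression outlier detection discussed here (the thesis's diagnostic data traces are a richer tool than this), one can fit a robust line and flag points with large scaled residuals:

```python
# Flag outliers in a linear regression: fit a robust (Huber) line so gross
# errors do not pull the fit, then flag points whose residuals are large
# relative to a robust spread estimate. Data are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(7)
n = 100
X = rng.uniform(0, 10, size=(n, 1))
y = 2.0 * X[:, 0] + 1.0 + rng.normal(scale=0.5, size=n)
y[:5] += 15.0                    # contaminate five points with gross errors

robust = HuberRegressor().fit(X, y)
resid = y - robust.predict(X)
# scale residuals by the MAD, a robust estimate of spread
mad = 1.4826 * np.median(np.abs(resid - np.median(resid)))
outliers = np.flatnonzero(np.abs(resid / mad) > 3.5)
print("flagged points:", outliers)   # should recover indices 0..4
```

An ordinary least squares fit on the same data would be dragged toward the contaminated points, masking exactly the outliers one wants to detect, which is the motivation for robust methods throughout the thesis.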
