71. Portfolio risk minimization under departures from normality
Lauprête, Geoffrey J. (Geoffrey Jean), 1972-. January 2001.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2001. Includes bibliographical references (p. 206-210).

This thesis revisits the portfolio selection problem in cases where returns cannot be modeled as Gaussian. The emphasis is on the development of financially intuitive and statistically sound approaches to portfolio risk minimization. When returns exhibit asymmetry, we propose using a quantile-based measure of risk which we call shortfall. Shortfall is related to Value-at-Risk and Conditional Value-at-Risk, and can be tuned to capture tail risk. We formulate the sample shortfall minimization problem as a linear program. Using results from empirical process theory, we derive a central limit theorem for the shortfall portfolio estimator. We warn about the statistical pitfalls of portfolio selection based on the minimization of rare events, which happens to be the case when shortfall is tuned to focus on extreme tail risk. In the presence of heavy tails and tail dependence, we show that portfolios based on the minimization of alternative robust measures of risk may in fact have lower variance than those based on the minimization of sample variance. We show that minimizing the sample mean absolute deviation yields portfolios that are asymptotically more efficient than those based on the minimization of the sample variance, when returns have a multivariate Student-t distribution with degrees of freedom less than or equal to 6. This motivates our consideration of other robust measures of risk, for which we present linear and quadratic programming formulations. We carry out experiments on simulated and historical data, illustrating the fact that the efficiency gained by considering robust measures of risk may be substantial. Finally, when the number of return observations is of the same order of magnitude as, or smaller than, the dimension of the portfolio being estimated, we investigate the applicability of regularization to sample risk minimization. We examine both L1- and L2-regularization. We interpret regularization from a Bayesian perspective, and provide an algorithm for choosing the regularization parameter. We validate the use of regularization in portfolio selection on simulated and historical data, and conclude that regularization can yield portfolios with smaller risk, and in particular smaller variance.

by Geoffrey J. Lauprête. Ph.D.
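
The shortfall measure above is closely related to Conditional Value-at-Risk, whose empirical minimization admits the well-known Rockafellar-Uryasev linear program. As a rough illustration of how a quantile-based risk measure reduces to an LP, here is a minimal Python/SciPy sketch; the long-only constraint, the tail level eps, and all names are assumptions for illustration, not the thesis's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

def min_shortfall_portfolio(returns, eps=0.05):
    """Minimize the empirical eps-shortfall (CVaR-style) of portfolio losses.

    Decision vector: [w (d weights) | alpha (quantile) | u (N slacks)].
    LP:  min  alpha + (1 / (eps * N)) * sum(u)
         s.t. u_i >= -(returns[i] @ w) - alpha,  u >= 0,
              sum(w) == 1,  0 <= w <= 1   (long-only: an assumption).
    """
    N, d = returns.shape
    c = np.concatenate([np.zeros(d), [1.0], np.full(N, 1.0 / (eps * N))])
    # u_i >= -r_i.w - alpha   <=>   -r_i.w - alpha - u_i <= 0
    A_ub = np.hstack([-returns, -np.ones((N, 1)), -np.eye(N)])
    b_ub = np.zeros(N)
    A_eq = np.concatenate([np.ones(d), [0.0], np.zeros(N)])[None, :]
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * d + [(None, None)] + [(0.0, None)] * N
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:d], res.x[d]  # weights and the estimated quantile
```

Given a T-by-d matrix of historical returns, the function yields the risk-minimizing weights together with the estimated quantile alpha.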
72. Optimizing safety stock placement in general network supply chains
Lesnaia, Ekaterina. January 2004.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2004. Includes bibliographical references (p. 207-214).

The amount of safety stock to hold at each stage in a supply chain is an important problem for a manufacturing company that faces uncertain demand and needs to provide a high level of service to its customers. The amount of stock held should be small to minimize holding and storage costs while retaining the ability to serve customers on time and satisfy most, if not all, of the demand. This thesis analyzes this problem by utilizing the framework of deterministic service time models and provides an algorithm for safety stock placement in general-network supply chains. We first show that the general problem is NP-hard. Next, we develop several conditions that characterize an optimal solution of the general-network problem. We find that we can identify all possible candidates for the optimal service times for a stage by constructing paths from the stage to each other stage in the supply chain. We use this construct, namely these paths, as the basis for a branch and bound algorithm for the general-network problem. To generate the lower bounds, we create and solve a spanning-tree relaxation of the general-network problem. We provide a polynomial algorithm to solve these spanning tree problems. We perform a set of computational experiments to assess the performance of the general-network algorithm and to determine how to set various parameters for the algorithm. In addition to the general network case, we consider two-layer network problems. We develop a specialized branch and bound algorithm for these problems and show computationally that it is more efficient than the general-network algorithm applied to the two-layer networks.

by Ekaterina Lesnaia. Ph.D.
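
A toy dynamic program for the serial-line special case of the guaranteed-service framework gives a feel for the service-time decisions involved; the thesis's branch and bound handles general networks, and the cost form, dynamics, and parameter names below are illustrative assumptions.

```python
import math

def serial_safety_stock(T, h, sigma, z, s_max):
    """Toy guaranteed-service DP for a serial line. T[j]: processing
    time at stage j; h[j]: holding cost; sigma: demand std per period;
    z: service factor. Stage j quotes outbound service time s and sees
    inbound service time si from upstream; its safety stock cost is
    h[j] * z * sigma * sqrt(si + T[j] - s). Stage 0 sources raw
    material instantly; the last stage must quote s = 0."""
    n, INF = len(T), float("inf")
    f = [[INF] * (s_max + 1) for _ in range(n)]  # f[j][s]: best cost, stages 0..j
    for s in range(s_max + 1):
        tau = T[0] - s
        if tau >= 0:  # net replenishment time must be nonnegative
            f[0][s] = h[0] * z * sigma * math.sqrt(tau)
    for j in range(1, n):
        for s in range(s_max + 1):
            for si in range(s_max + 1):  # upstream's quoted service time
                tau = si + T[j] - s
                if tau >= 0 and f[j - 1][si] < INF:
                    cost = f[j - 1][si] + h[j] * z * sigma * math.sqrt(tau)
                    f[j][s] = min(f[j][s], cost)
    return f[-1][0]  # final stage serves customers immediately
```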
73. Modeling social response to the spread of an infectious disease
Evans, Jane A. (Jane Amanda). January 2012.
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2012. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (p. 85-88).

With the globalization of culture and economic trade, it is increasingly important not only to detect outbreaks of infectious disease early, but also to anticipate the social response to the disease. In this thesis, we use social network analysis and data mining methods to model negative social response (NSR), where a society demonstrates strain associated with a disease. Specifically, we apply real-world biosurveillance data on over 11,000 initial events to: 1) describe how negative social response spreads within an outbreak, and 2) analytically predict negative social response to an outbreak. In the first approach, we developed a meta-model that describes the interrelated spread of disease and NSR over a network. This model is based on both a susceptible-infective-recovered (SIR) epidemiology model and a social influence model. It accurately captured the collective behavior of a complex epidemic, providing insights on the volatility of social response. In the second approach, we introduced a multi-step joint methodology to improve the detection and prediction of rare NSR events. The methodology significantly reduced the incidence of false positives over a more conventional supervised learning model. We found that social response to the spread of an infectious disease is predictable, despite the seemingly random occurrence of these events. Together, both approaches offer a framework for expanding a society's critical biosurveillance capability.

by Jane A. Evans. S.M.
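
To give a flavor of the coupled disease/NSR dynamics, here is a deliberately crude discrete-time sketch in which an SIR recursion drives a scalar social-response index through an influence gain and a decay factor. The functional form and all parameters are invented for illustration; the thesis's meta-model is considerably richer.

```python
def sir_with_nsr(beta, gamma, kappa, decay, steps, s0=0.99, i0=0.01):
    """Discrete-time SIR recursion coupled to a scalar negative-social-
    response index: NSR is excited by new infections (gain kappa) and
    decays geometrically (factor decay). Returns the full trajectory."""
    s, i, r, nsr = s0, i0, 0.0, 0.0
    trajectory = []
    for _ in range(steps):
        new_inf = beta * s * i                  # new infections this step
        s, i, r = s - new_inf, i + new_inf - gamma * i, r + gamma * i
        nsr = decay * nsr + kappa * new_inf     # response echoes new cases
        trajectory.append((s, i, r, nsr))
    return trajectory
```

For instance, sir_with_nsr(0.4, 0.1, 5.0, 0.8, 200) produces an NSR trace that peaks around the time new infections peak and then decays.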
74. Exploration vs. exploitation: reducing uncertainty in operational problems
Shaposhnik, Yaron. January 2016.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2016. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 205-207).

Motivated by several core operational applications, we introduce a class of multistage stochastic optimization models that capture a fundamental tradeoff between performing work under uncertainty (exploitation) and investing resources to reduce the uncertainty in the decision making (exploration/testing). Unlike existing models, in which the exploration-exploitation tradeoffs typically relate to learning the underlying distributions, the models we introduce assume a known probabilistic characterization of the uncertainty, and focus on the tradeoff of learning exact realizations. In the first part of the thesis (Chapter 2), we study a class of scheduling problems that capture common settings in service environments in which the service provider must serve a collection of jobs that have a priori uncertain processing times and priorities (modeled as weights). In addition, the service provider must decide how to dynamically allocate capacity between processing jobs and testing jobs to learn more about their respective processing times and weights. We obtain structural results on optimal policies that provide managerial insights, efficient optimal and near-optimal algorithms, and a quantification of the value of testing. In the second part of the thesis (Chapter 3), we generalize the model introduced in the first part by studying how to prioritize testing when jobs have different uncertainties. We model differences in uncertainty using the convex order, a general relation between distributions which implies that the variance of one distribution is higher than that of the other. Using an analysis based on the concept of mean-preserving local spread, we show that the structure of the optimal policy generalizes that of the initial model, where jobs were homogeneous and had equal weights. Finally, in the third part of the thesis (Chapter 4), we study a broad class of stochastic combinatorial optimization problems that can be formulated as linear programs whose objective coefficients are random variables that can be tested, and whose constraint polyhedron has the structure of a polymatroid. We characterize the optimal policy and show that similar types of policies optimally govern testing decisions in this setting as well.

by Yaron Shaposhnik. Ph. D.
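
A stylized rendering of the tradeoff in the scheduling setting: serve jobs by the classic weighted-shortest-expected-processing-time (WSEPT) index, but pay a testing cost to resolve a job's duration when its uncertainty is large. This is a heuristic sketch under assumed two-point processing-time distributions, not one of the optimal policies derived in the thesis.

```python
import random

def wsept_with_testing(jobs, test_time, spread_threshold, seed=0):
    """jobs: list of (w, lo, hi), a weight and two equally likely positive
    processing times. Test (pay test_time, learn the true duration)
    exactly those jobs whose spread hi - lo exceeds spread_threshold;
    value untested jobs at their mean. Then serve in WSEPT order and
    return the total weighted completion time. A heuristic sketch; the
    thesis characterizes genuinely optimal testing policies."""
    random.seed(seed)
    clock, sequenced = 0.0, []
    for w, lo, hi in jobs:
        if hi - lo > spread_threshold:       # exploration: resolve it
            clock += test_time
            p = random.choice([lo, hi])      # revealed realization
        else:                                # exploitation: trust the mean
            p = (lo + hi) / 2.0
        sequenced.append((w, p))
    sequenced.sort(key=lambda wp: wp[0] / wp[1], reverse=True)  # WSEPT
    total = 0.0
    for w, p in sequenced:
        clock += p
        total += w * clock
    return total
```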
75. Studies integrating geometry, probability, and optimization under convexity
Nogueira, Alexandre Belloni. January 2006.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2006. Includes bibliographical references (p. 197-202).

Convexity has played a major role in a variety of fields over the past decades. Nevertheless, the convexity assumption continues to reveal new theoretical paradigms and applications. This dissertation explores convexity at the intersection of three fields, namely, geometry, probability, and optimization. We study in depth a variety of geometric quantities. These quantities are used to describe the behavior of different algorithms. In addition, we investigate how to algorithmically manipulate these geometric quantities. This leads to algorithms capable of transforming ill-behaved instances into well-behaved ones. In particular, we provide probabilistic methods that carry out such a task efficiently by exploiting the geometry of the problem. More specific contributions of this dissertation are as follows. (i) We conduct a broad exploration of the symmetry function of convex sets and propose efficient methods for its computation in the polyhedral case. (ii) We also relate the symmetry function to the computational complexity of an interior-point method for solving a homogeneous conic system. (iii) Moreover, we develop a family of pre-conditioners based on the symmetry function and projective transformations for such interior-point methods; the implementation of the pre-conditioners relies on geometric random walks. (iv) We develop the analysis of the re-scaled perceptron algorithm for a linear conic system, a method in which a sequence of linear transformations is used to increase a condition measure associated with the problem. (v) Finally, we establish properties relating a probability density induced by an arbitrary norm to the geometry of its support. This is used to construct an efficient simulated annealing algorithm to test whether a convex set is bounded, where the set is represented only by a membership oracle.

by Alexandre Belloni Nogueira. Ph.D.
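
Contribution (i) has a particularly clean characterization worth recording: for P = {y : Ay <= b} with interior point x, sym(x) = min_i (b_i - a_i'x) / (delta_i - a_i'x), where delta_i = max{a_i'y : y in P}, so one LP per constraint suffices. A sketch assuming P is bounded and full-dimensional:

```python
import numpy as np
from scipy.optimize import linprog

def symmetry(A, b, x):
    """Symmetry function of x in P = {y : A y <= b}: one LP per
    constraint gives delta_i = max of a_i . y over P, then
    sym(x) = min_i (b_i - a_i . x) / (delta_i - a_i . x).
    Assumes P bounded and full-dimensional with x in its interior."""
    n = A.shape[1]
    ratios = []
    for a_i, b_i in zip(A, b):
        res = linprog(-a_i, A_ub=A, b_ub=b,
                      bounds=[(None, None)] * n, method="highs")
        delta = -res.fun                 # max of a_i . y over P
        denom = delta - a_i @ x
        if denom > 1e-12:                # else this face never limits reflection
            ratios.append((b_i - a_i @ x) / denom)
    return min(ratios)
```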
76. Information and decentralization in inventory, supply chain, and transportation systems
Roels, Guillaume. January 2006.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2006. Includes bibliographical references (p. 199-213).

This thesis investigates the impact of lack of information and decentralization of decision-making on the performance of inventory, supply chain, and transportation systems. In the first part of the thesis, we study two extensions of a classic single-item, single-period inventory control problem: the "newsvendor problem." We first analyze the newsvendor problem when the demand distribution is only partially specified by some moments and shape parameters. We determine order quantities that are robust, in the sense that they minimize the newsvendor's maximum regret about not acting optimally, and we compute the maximum value of additional information. The minimax regret approach is scalable to solve large practical problems, such as those arising in network revenue management, since it combines an efficient solution procedure with very modest data requirements. We then analyze the newsvendor problem when the inventory decision-making is decentralized. In supply chains, inventory decisions often result from complex negotiations among supply partners and might therefore lead to a loss of efficiency (in terms of profit loss). We quantify the loss of efficiency of decentralized supply chains that use price-only contracts under the following configurations: series, assembly, competitive procurement, and competitive distribution. In the second part of the thesis, we characterize the dynamic nature of traffic equilibria in a transportation network. Using the theory of kinematic waves, we derive an analytical model for traffic delays capturing the first-order traffic dynamics and the impact of shock waves. We then incorporate the travel-time model within a dynamic user equilibrium setting and illustrate how the model applies to solve a large network assignment problem.

by Guillaume Roels. Ph.D.
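
The minimax-regret idea can be illustrated by brute force: score each candidate order quantity by its worst-case regret across a set of plausible demand distributions. The thesis handles moment- and shape-constrained families analytically; the grid search below is only a conceptual sketch with invented inputs.

```python
import numpy as np

def minimax_regret_order(demand_scenarios, price, cost, q_grid):
    """Pick the order quantity minimizing worst-case regret over a
    finite set of plausible demand distributions (each given as an
    array of equally likely demand samples). Brute-force sketch only."""
    def profit(q, d):
        return price * np.minimum(q, d).mean() - cost * q
    # best achievable profit per scenario (hindsight benchmark)
    best = [max(profit(q, d) for q in q_grid) for d in demand_scenarios]
    worst_regret = [
        max(bb - profit(q, d) for bb, d in zip(best, demand_scenarios))
        for q in q_grid
    ]
    return q_grid[int(np.argmin(worst_regret))]
```

For example, minimax_regret_order([np.random.gamma(2, 50, 1000), np.random.uniform(0, 250, 1000)], 10, 6, np.arange(0, 250, 5)) balances regret across the two candidate demand models.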
77. United States Air Force fighter jet maintenance models: effectiveness of index policies
Kessler, John M. (John Michael). January 2013.
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2013. Cataloged from PDF version of thesis. "June 2013." Includes bibliographical references (pages 107-109).

As some of the most technically complex systems in the world, United States fighter aircraft require a complex logistics system to sustain their reliable operation and ensure that day-to-day Air Force missions can be satisfied. While the study of this logistics system has received much attention from academics and practitioners, most of the focus has been on the availability of spare parts, which are indeed essential for the smooth operation of the fighter aircraft. In recent years, however, there has been an increasing awareness that maintenance resources are an equally important enabler and should be considered together with inventory issues. The maintenance resources required to repair the fighter aircraft are expensive and therefore limited. Moreover, various types of maintenance compete for the same resources. It is therefore imperative that the allocation of maintenance resources is done as efficiently as possible. In this thesis, we study two areas of fighter aircraft maintenance that could significantly benefit from improved resource allocation and scheduling strategies. We use quantitative and qualitative data from Air Force databases and logistics personnel to develop an innovative modeling framework to capture these challenging maintenance problems. This modeling framework is based on a generalization of the well-known multi-armed bandit superprocess problem. Using these models, we develop index policies which provide intuitive, easily implemented, and effective rules for scheduling maintenance activities and allocating maintenance resources. These policies seem to improve on existing best practices within the Air Force, and perform well in extensive data-driven simulated computational experiments. The first area is focused on the challenges of scheduling maintenance for the low observable (stealth) capabilities of the F-22 Raptor, specifically maintenance of the outer coating of the aircraft that is essential to maintain its radar invisibility. In particular, we generate index policies that efficiently schedule which aircraft should enter low observable maintenance, how long they should be worked on, and which aircraft should fly in order to maximize the stealth capability of the fleet. Second, we model the maintenance process of the F100-229 engine, which is the primary propulsion method used in the F-16C/D and F-15E aircraft. In particular, we generate index policies to decide which engines should take priority over others, and whether or not certain components of the engines should be repaired or replaced. The policies address both elective (planned) and unplanned maintenance tasks.

by John M. Kessler. S.M.
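
As a caricature of an index policy for the low-observable maintenance problem: each tail number carries a coating-condition score, flying wears it down, hangar time restores it, and each period the limited repair slots go to the aircraft with the highest projected gain. Every rate and rule below is an invented illustration, not the thesis's calibrated model or its bandit-based indices.

```python
def allocate_fleet(fleet, hangar_slots, sorties_needed):
    """fleet: list of dicts {"tail": str, "lo_score": float in [0, 1]}.
    Fly the aircraft with the best coatings; among the rest, repair the
    ones whose score a maintenance period can improve the most (the
    'index'). Wear and repair rates are invented for illustration."""
    FLY_WEAR, REPAIR_GAIN = 0.05, 0.15
    ranked = sorted(fleet, key=lambda a: a["lo_score"], reverse=True)
    flyers, grounded = ranked[:sorties_needed], ranked[sorties_needed:]
    # index: condition recoverable in one period, capped at a perfect coating
    grounded.sort(key=lambda a: min(REPAIR_GAIN, 1.0 - a["lo_score"]),
                  reverse=True)
    in_repair = grounded[:hangar_slots]
    for a in flyers:
        a["lo_score"] = max(0.0, a["lo_score"] - FLY_WEAR)
    for a in in_repair:
        a["lo_score"] = min(1.0, a["lo_score"] + REPAIR_GAIN)
    return [a["tail"] for a in flyers], [a["tail"] for a in in_repair]
```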
78. Advanced mixed-integer programming formulations: methodology, computation, and application
Huchette, Joseph Andrew. January 2018.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 193-203).

This thesis introduces systematic ways to use mixed-integer programming (MIP) to solve difficult nonconvex optimization problems arising in application areas as varied as operations, robotics, power systems, and machine learning. Our goal is to produce MIP formulations that perform extremely well in practice, requiring us to balance qualities often in opposition: formulation size, strength, and branching behavior. We start by studying a combinatorial framework for building MIP formulations, and present a complete graphical characterization of its expressive power. Our approach allows us to produce strong and small formulations for a variety of structures, including piecewise linear functions, relaxations for multilinear functions, and obstacle avoidance constraints. Second, we present a geometric way to construct MIP formulations, and use it to investigate the potential advantages of general integer (as opposed to binary) MIP formulations. We are able to apply our geometric construction method to piecewise linear functions and annulus constraints, producing small, strong general integer MIP formulations that induce favorable behavior in a branch-and-bound algorithm. Third, we perform an in-depth computational study of MIP formulations for nonconvex piecewise linear functions, showing that the new formulations devised in this thesis outperform existing approaches, often substantially (e.g. solving to optimality in orders of magnitude less time). We also highlight how high-level, easy-to-use computational tools, built on top of the JuMP modeling language, can help make these advanced formulations accessible to practitioners and researchers. Furthermore, we study high-dimensional piecewise linear functions arising in the context of deep learning, and develop a new strong formulation and valid inequalities for this structure. We close the thesis by answering a speculative question: Given a disjunctive constraint, what can we reasonably sacrifice in order to construct MIP formulations with very few integer variables? We show that, if we allow our formulations to introduce spurious "integer holes" in their interior, we can produce strong formulations for any disjunctive constraint with only two integer variables and a linear number of inequalities (and reduce this further to a constant number for specific structures). We provide a framework to encompass these MIP-with-holes formulations, and show how to modify standard MIP algorithmic tools such as branch-and-bound and cutting planes to handle the holes.

by Joseph Andrew Huchette. Ph. D.
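
For context, here is the textbook "convex combination" (lambda) MIP formulation of a univariate piecewise linear function, the kind of baseline the thesis's formulations improve upon. The sketch uses the PuLP modeling library rather than the JuMP tooling mentioned above, and minimizing a stand-alone PWL function substitutes for embedding it in a larger model.

```python
from pulp import LpMinimize, LpProblem, LpVariable, lpSum

def minimize_pwl(breakpoints, values):
    """Lambda (convex combination) formulation of the piecewise linear
    function f interpolating (breakpoints[i], values[i]); binaries
    select the active segment. Minimizes f(x) over its domain as a
    stand-alone demo; in practice the same constraints would sit
    inside a larger model."""
    n = len(breakpoints)
    prob = LpProblem("pwl_demo", LpMinimize)
    lam = [LpVariable(f"lam_{i}", lowBound=0) for i in range(n)]
    z = [LpVariable(f"z_{j}", cat="Binary") for j in range(n - 1)]
    prob += lpSum(v * l for v, l in zip(values, lam))  # objective = f(x)
    prob += lpSum(lam) == 1
    prob += lpSum(z) == 1                    # exactly one active segment
    prob += lam[0] <= z[0]                   # adjacency: lam_i > 0 only
    for i in range(1, n - 1):                # next to the chosen segment
        prob += lam[i] <= z[i - 1] + z[i]
    prob += lam[n - 1] <= z[n - 2]
    prob.solve()
    x = sum(bp * l.value() for bp, l in zip(breakpoints, lam))
    return x, sum(v * l.value() for v, l in zip(values, lam))
```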
79. Essays on the empirical properties of stock and mutual fund returns
Taylor, Jonathan David, 1969-. January 2000.
Thesis (Ph.D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2000. Includes bibliographical references (leaves 95-102).

Survivorship bias influences statistical inference in finance. Through a series of Monte Carlo simulations in the style of Brown, Goetzmann, Ibbotson, and Ross (1992), we study the sampling distribution of the mean return, standard deviation, beta, Fama & MacBeth (1973) t-statistic, and Jegadeesh & Titman (1993) momentum strategy return in progressively truncated datasets. Survivor-biased datasets have higher mean returns, lower return standard deviations, and lower betas than the full sample. Beta has no explanatory power even when the CAPM is true, a finding virtually unaffected by survivorship bias. Returns to a momentum strategy are positive even when stock idiosyncratic returns are serially and cross-sectionally uncorrelated, but survivorship bias overestimates the returns and underestimates the beta of the strategy.

by Jonathan David Taylor. Ph.D.
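
The mechanics of such a simulation study fit in a few lines: generate fund return paths, discard any path that breaches a survival barrier, and compare survivor statistics with the full sample. The barrier, horizon, and return parameters below are arbitrary stand-ins, not those of the thesis.

```python
import numpy as np

def survivorship_demo(n_funds=1000, periods=60, mu=0.0, sigma=0.05, seed=0):
    """Simulate monthly fund returns, kill any fund whose cumulative
    value ever drops more than 30% below par, and compare mean returns.
    The survivor mean is biased upward relative to the full sample."""
    rng = np.random.default_rng(seed)
    returns = rng.normal(mu, sigma, size=(n_funds, periods))
    value_paths = np.cumprod(1.0 + returns, axis=1)
    survived = value_paths.min(axis=1) > 0.7       # survival barrier
    return returns.mean(), returns[survived].mean()
```

With mu = 0, the survivor mean comes out strictly positive: conditioning on survival alone manufactures apparent skill.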
80. An approximate dynamic programming approach to discrete optimization
Demir, Ramazan. January 2000.
Thesis (Ph.D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2000. Includes bibliographical references (leaves 181-189).

We develop Approximate Dynamic Programming (ADP) methods for integer programming problems. We describe and investigate parametric, nonparametric, and base-heuristic learning approaches to approximating the value function in order to break the curse of dimensionality. Through an extensive computational study we illustrate that our ADP approach to integer programming competes successfully with existing methodologies, including state-of-the-art commercial packages like CPLEX. Our benchmarks for comparison are solution quality, running time, and robustness (i.e., small deviations in the computational resources, such as running time, across instances of the same size). In this thesis, we focus in particular on knapsack problems and the binary integer programming problem. We explore an integrated approach to solving discrete optimization problems by unifying optimization techniques with statistical learning. Overall, this research illustrates that ADP is a promising technique, providing near-optimal solutions within a reasonable amount of computation time, especially for large-scale problems with thousands of variables and constraints. Thus, Approximate Dynamic Programming can be considered a new alternative to existing approximate methods for discrete optimization problems.

by Ramazan Demir. Ph.D.
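
One simple instance of the base-heuristic flavor of value-function approximation, applied to the 0/1 knapsack problem: approximate the value-to-go of the remaining items by the greedy fractional bound and decide take/skip accordingly. A sketch of the general idea under assumed positive weights, not the thesis's algorithms.

```python
def adp_knapsack(values, weights, capacity):
    """Scan items in order; approximate the value-to-go of the
    remaining items by the greedy fractional (LP relaxation) bound,
    and take an item iff taking looks at least as good as skipping
    under that approximation."""
    def value_to_go(start, cap):
        order = sorted(range(start, len(values)),
                       key=lambda j: values[j] / weights[j], reverse=True)
        total = 0.0
        for j in order:
            if weights[j] <= cap:
                cap -= weights[j]
                total += values[j]
            else:
                total += values[j] * cap / weights[j]  # fractional fill
                break
        return total

    cap, chosen, collected = capacity, [], 0
    for i in range(len(values)):
        if weights[i] > cap:
            continue
        take = values[i] + value_to_go(i + 1, cap - weights[i])
        skip = value_to_go(i + 1, cap)
        if take >= skip:
            cap -= weights[i]
            chosen.append(i)
            collected += values[i]
    return collected, chosen
```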