221. Transient analysis of D(t)/M(t)/1 queuing system with applications to computing airport delays. Gupta, Shubham (January 2010).
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010. Includes bibliographical references (p. 44-45).

This thesis is motivated by the desire to estimate air traffic delays at airports under a range of assumptions about the predictability of (a) inter-arrival times of demands (arrivals and departures) and (b) service times of aircraft movements (landings and takeoffs). It consists of two main parts. In the first, a transient analysis of a D(t)/M(t)/1 queuing system is presented. The reason for focusing on such a system is that it may be useful in evaluating some of the benefits of a future Air Traffic Management (ATM) system, such as the Next Generation Air Transportation System (NGATS or NextGen) currently being developed in the United States. One of the main features of these future ATM systems will be high predictability and regularity of the inter-arrival times of airport demands, i.e., a nearly deterministic demand process. This will be achieved through significant reductions in aircraft trajectory uncertainty, with the expectation that airport delays will also decrease substantially as a result. We develop a novel, computationally efficient numerical approach for solving D(t)/M(t)/1 queuing systems with general, dynamic demand and service rates. We also discuss the complexity of the approach and some characteristics of the resulting solutions. In the second part of the thesis, we use a set of models of dynamic queuing systems, in addition to our D(t)/M(t)/1 model, to explore the range of values that airport delays may assume under different sets of assumptions about the level of uncertainty associated with demand inter-arrival times and with service times. We thus compute airport delays under different queuing systems in a dynamic setting (where demand and service rates are time-varying) to capture the entire range of uncertainties expected during the deployment of various future ATM system technologies. The specific additional models we consider are: a deterministic D(t)/D(t)/1 model, in which it is assumed that airport demands for landings and takeoffs occur exactly as scheduled; and an M(t)/Ek(t)/1 model which, because of the "richness" of the family of Erlang distributions, Ek, can be used to approximate most M(t)/G(t)/1 models that may arise in airport applications. These models, when used together, provide bounds on estimated airport delays, with the D(t)/D(t)/1 model most likely to offer a lower bound and the M(t)/M(t)/1 model (i.e., the special case of M(t)/Ek(t)/1 with k = 1) an upper bound. We show through a set of examples based on a few of the busiest airports in the United States that: the closeness of the delay estimates provided by the different models depends on the level of congestion at an airport and the relative shapes of the dynamic profiles of capacity and demand at the airport; the difference (on a percentage basis) between the estimates provided by the deterministic model and the stochastic ones is largest for uncongested airports and decreases as the level of congestion increases; D(t)/M(t)/1 and M(t)/D(t)/1 produce estimates of the same order of magnitude, and reflect delays in the presence of "moderate" uncertainty at an airport; and delays under a D(t)/M(t)/1 queuing system are always higher than under an M(t)/D(t)/1 system.
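A minimal sketch of the deterministic D(t)/D(t)/1 bounding model described above: delays follow from cumulative demand and capacity curves via a discrete-time fluid recursion. The hourly profiles below are invented for illustration and are not data from the thesis.

```python
import numpy as np

# Illustrative 24-hour profiles (movements per hour); made-up numbers.
demand = np.array([10, 8, 6, 5, 5, 8, 30, 45, 50, 48, 46, 50,
                   52, 50, 48, 50, 52, 50, 45, 40, 30, 20, 15, 12])
capacity = np.full(24, 44)  # constant capacity of 44 movements per hour

def fluid_average_delay(demand, capacity):
    """D(t)/D(t)/1 fluid approximation: the queue grows by the hourly excess
    of demand over capacity and can never be negative."""
    queue, total_wait = 0.0, 0.0
    for d, c in zip(demand, capacity):
        queue = max(queue + d - c, 0.0)
        total_wait += queue               # aircraft-hours waited this hour
    return total_wait / demand.sum()      # average delay per aircraft (hours)

print(f"average delay: {fluid_average_delay(demand, capacity):.2f} hours")
```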
222. Essays on inventory, pricing and financial trading strategies. Lu, Ye (January 2009).
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2009. Includes bibliographical references (p. 83-85).

In a multi-product market, if one product stocks out, consumers may substitute to competing products. In this thesis, we use an axiomatic approach to characterize a price-dependent demand substitution rule, and provide a necessary and sufficient condition for demand models where our demand substitution rule applies. Our results can serve as a link between the pricing and inventory literature, and enable the study of joint pricing and inventory coordination and competition. I demonstrate the impact of this axiomatic approach on the joint pricing and inventory coordination model by incorporating the price-dependent demand substitution rule, and illustrate that if the axiomatic approach is acceptable, the optimal strategy and corresponding expected profit are quite different from those of models that ignore stockout demand substitution. I use this price-dependent demand substitution rule to model the joint pricing and inventory game, and study the existence of Nash equilibrium in this game. In the second part of this thesis, I consider the problem of dynamically trading a security over a finite time horizon. The model assumes that a trader has a "safe price" for the security, which is the highest price that the trader is willing to pay for this security in each time period. A trader's order has both temporary (short-term) and permanent (long-term) impact on the security price, and the security price may increase after the trader's order to a point where it is above the safe price. Given a safe price constraint for the current time period, I characterize the optimal policy for the trader to maximize the total number of securities he can buy over a fixed time horizon. In particular, I consider a greedy policy, which involves at each stage buying a quantity that drives the temporary price to the safe price. I show that the greedy policy is not always optimal and provide conditions under which the greedy policy is optimal. I also provide bounds on the performance of the greedy policy relative to the performance of the optimal policy.
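A sketch of the greedy policy just described, under an assumed linear price-impact model (temporary impact lam, permanent impact gamma); the thesis's exact price dynamics are not reproduced here.

```python
def greedy_buy(p0, safe, lam, gamma, periods):
    """Each period, buy the quantity that drives the temporary price exactly
    to the safe price; only the permanent impact carries over to the next
    period. Linear impact is a simplifying assumption for illustration."""
    price, total = p0, 0.0
    for _ in range(periods):
        qty = max((safe - price) / lam, 0.0)  # temporary price = price + lam*qty
        total += qty
        price += gamma * qty                  # permanent impact persists
    return total

# Example: start at 100 with safe price 101.
print(greedy_buy(p0=100.0, safe=101.0, lam=0.5, gamma=0.1, periods=10))
```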
223. Correlation decay and decentralized optimization in graphical models. Weber, Théophane (January 2010).
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010. Includes bibliographical references (p. 213-229) and index.

Many models of optimization, statistics, social organizations and machine learning capture local dependencies by means of a network that describes the interconnections and interactions of different components. However, in most cases, optimization or inference on these models is hard due to the dimensionality of the networks. This is so even when using algorithms that take advantage of the underlying graphical structure. Approximate methods are therefore needed. The aim of this thesis is to study such large-scale systems, focusing on the question of how randomness affects the complexity of optimizing in a graph; of particular interest is the phenomenon known as correlation decay, namely, that the influence of a node on another node of the network decreases quickly as the distance between them grows. In the first part of this thesis, we develop a new message-passing algorithm for optimization in graphical models. We formally prove a connection between the correlation decay property and (i) the near-optimality of this algorithm, as well as (ii) the decentralized nature of optimal solutions. In the context of discrete optimization with random costs, we develop a technique for establishing that a system exhibits correlation decay. We illustrate the applicability of the method by giving concrete results for the cases of uniform and Gaussian distributed cost coefficients in networks with bounded connectivity. In the second part, we pursue similar questions in a combinatorial optimization setting: we consider the problem of finding a maximum weight independent set in a bounded degree graph, when the node weights are i.i.d. random variables. Surprisingly, we discover that the problem becomes tractable for certain distributions. Specifically, we construct a PTAS for the case of exponentially distributed weights and arbitrary graphs with degree at most 3, and obtain generalizations for higher degrees and different distributions. At the same time, we prove that no PTAS exists for the case of exponentially distributed weights for graphs with sufficiently large but bounded degree, unless P = NP. Next, we shift our focus to graphical games, which are a game-theoretic analog of graphical models. We establish a connection between the problem of finding an approximate Nash equilibrium in a graphical game and the problem of optimization in graphical models. We use this connection to re-derive NashProp, a message-passing algorithm which computes Nash equilibria for graphical games on trees; we also suggest several new search algorithms for graphical games in general networks. Finally, we propose a definition of correlation decay in graphical games, and establish that the property holds in a restricted family of graphical games. The last part of the thesis is devoted to a particular application of graphical models and message-passing algorithms to the problem of early prediction of Alzheimer's disease. To this end, we develop a new measure of synchronicity between different parts of the brain, and apply it to electroencephalogram data. We show that the resulting prediction method outperforms a vast number of other EEG-based measures in the task of predicting the onset of Alzheimer's disease.
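For intuition about message passing for optimization in graphical models: on a tree, maximum weight independent set is solved exactly by a two-state leaf-to-root recursion, which cavity-style algorithms for general graphs generalize. A minimal sketch (the example instance is invented):

```python
def mwis_tree(adj, weights, root=0):
    """Maximum-weight independent set on a tree: each node passes up the best
    value of its subtree with the node taken vs. skipped."""
    def solve(v, parent):
        take, skip = float(weights[v]), 0.0
        for u in adj[v]:
            if u == parent:
                continue
            t, s = solve(u, v)
            take += s           # if v is in the set, children must stay out
            skip += max(t, s)   # otherwise each child subtree chooses freely
        return take, skip
    return max(solve(root, -1))

# Path 0-1-2-3 with weights 3, 5, 5, 3: optimal value is 8 (e.g., nodes 0 and 2).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(mwis_tree(adj, [3, 5, 5, 3]))
```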
224. Portfolio risk minimization under departures from normality. Lauprête, Geoffrey J. (Geoffrey Jean) (January 2001).
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2001. Includes bibliographical references (p. 206-210).

This thesis revisits the portfolio selection problem in cases where returns cannot be modeled as Gaussian. The emphasis is on the development of financially intuitive and statistically sound approaches to portfolio risk minimization. When returns exhibit asymmetry, we propose using a quantile-based measure of risk which we call shortfall. Shortfall is related to Value-at-Risk and Conditional Value-at-Risk, and can be tuned to capture tail risk. We formulate the sample shortfall minimization problem as a linear program. Using results from empirical process theory, we derive a central limit theorem for the shortfall portfolio estimator. We warn about the statistical pitfalls of portfolio selection based on the minimization of rare events, which happens to be the case when shortfall is tuned to focus on extreme tail risk. In the presence of heavy tails and tail dependence, we show that portfolios based on the minimization of alternative robust measures of risk may in fact have lower variance than those based on the minimization of sample variance. We show that minimizing the sample mean absolute deviation yields portfolios that are asymptotically more efficient than those based on the minimization of the sample variance, when returns have a multivariate Student-t distribution with degrees of freedom less than or equal to 6. This motivates our consideration of other robust measures of risk, for which we present linear and quadratic programming formulations. We carry out experiments on simulated and historical data, illustrating the fact that the efficiency gained by considering robust measures of risk may be substantial. Finally, when the number of return observations is of the same order of magnitude as, or smaller than, the dimension of the portfolio being estimated, we investigate the applicability of regularization to sample risk minimization. We examine both L1- and L2-regularization. We interpret regularization from a Bayesian perspective, and provide an algorithm for choosing the regularization parameter. We validate the use of regularization in portfolio selection on simulated and historical data, and conclude that regularization can yield portfolios with smaller risk, and in particular smaller variance.
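The abstract notes that sample shortfall minimization can be written as a linear program. The thesis's exact formulation is not reproduced here, but since shortfall is related to Conditional Value-at-Risk, the standard sample CVaR-minimization LP of Rockafellar and Uryasev illustrates the structure:

```latex
\min_{w,\;\alpha,\;u}\quad \alpha + \frac{1}{(1-\beta)N}\sum_{i=1}^{N} u_i
\qquad \text{s.t.}\quad u_i \ge -r_i^{\top} w - \alpha,\quad u_i \ge 0,
\quad \mathbf{1}^{\top} w = 1,
```

where the $r_i$ are $N$ sampled return vectors, $w$ the portfolio weights, $\beta$ the tail level, and $\alpha$ at optimality a $\beta$-quantile (Value-at-Risk) of the loss.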
225. Optimizing safety stock placement in general network supply chains. Lesnaia, Ekaterina (January 2004).
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2004. Includes bibliographical references (p. 207-214).

The amount of safety stock to hold at each stage in a supply chain is an important problem for a manufacturing company that faces uncertain demand and needs to provide a high level of service to its customers. The amount of stock held should be small to minimize holding and storage costs while retaining the ability to serve customers on time and satisfy most, if not all, of the demand. This thesis analyzes this problem by utilizing the framework of deterministic service time models and provides an algorithm for safety stock placement in general-network supply chains. We first show that the general problem is NP-hard. Next, we develop several conditions that characterize an optimal solution of the general-network problem. We find that we can identify all possible candidates for the optimal service times for a stage by constructing paths from the stage to each other stage in the supply chain. We use this construct, namely these paths, as the basis for a branch and bound algorithm for the general-network problem. To generate the lower bounds, we create and solve a spanning-tree relaxation of the general-network problem. We provide a polynomial algorithm to solve these spanning tree problems. We perform a set of computational experiments to assess the performance of the general-network algorithm and to determine how to set various parameters for the algorithm. In addition to the general-network case, we consider two-layer network problems. We develop a specialized branch and bound algorithm for these problems and show computationally that it is more efficient than the general-network algorithm applied to the two-layer networks.
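For context on the deterministic service time framework: in the classical guaranteed-service formulation of Graves and Willems, on which this line of work builds, each stage $j$ quotes a service time $S_j$, is quoted an inbound service time $SI_j$, and holds safety stock covering demand variability over its net replenishment time, giving a problem of roughly the form

```latex
\min_{S,\,SI}\;\sum_{j} h_j\, z_j \sigma_j \sqrt{SI_j + T_j - S_j}
\qquad \text{s.t.}\quad SI_j \ge S_i \;\;\forall (i,j)\in A,\qquad
0 \le S_j \le SI_j + T_j,
```

with processing times $T_j$, holding costs $h_j$, safety factors $z_j$, demand standard deviations $\sigma_j$, and arcs $A$ of the supply network (demand-facing stages additionally face caps on their quoted service times). Minimizing this concave square-root objective over a polyhedron is what makes general networks hard and motivates the branch and bound approach.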
226. Modeling social response to the spread of an infectious disease. Evans, Jane A. (Jane Amanda) (January 2012).
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2012. Includes bibliographical references (p. 85-88).

With the globalization of culture and economic trade, it is increasingly important not only to detect outbreaks of infectious disease early, but also to anticipate the social response to the disease. In this thesis, we use social network analysis and data mining methods to model negative social response (NSR), where a society demonstrates strain associated with a disease. Specifically, we apply real-world biosurveillance data on over 11,000 initial events to: 1) describe how negative social response spreads within an outbreak, and 2) analytically predict negative social response to an outbreak. In the first approach, we developed a meta-model that describes the interrelated spread of disease and NSR over a network. This model is based on both a susceptible-infective-recovered (SIR) epidemiology model and a social influence model. It accurately captured the collective behavior of a complex epidemic, providing insights on the volatility of social response. In the second approach, we introduced a multi-step joint methodology to improve the detection and prediction of rare NSR events. The methodology significantly reduced the incidence of false positives over a more conventional supervised learning model. We found that social response to the spread of an infectious disease is predictable, despite the seemingly random occurrence of these events. Together, both approaches offer a framework for expanding a society's critical biosurveillance capability.
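A toy sketch of coupling an SIR epidemic to a social-response state: NSR rises with new infections and decays over time. The scalar coupling below (kappa, delta) is purely illustrative; the thesis's meta-model combines SIR dynamics with a social influence model over a network.

```python
def sir_with_nsr(beta=0.3, gamma=0.1, kappa=2.0, delta=0.2, days=200, i0=1e-3):
    """Discrete-time SIR plus a toy negative-social-response (NSR) state."""
    s, i, r, nsr = 1.0 - i0, i0, 0.0, 0.0
    history = []
    for _ in range(days):
        new_inf = beta * s * i           # new infections this period
        new_rec = gamma * i              # new recoveries this period
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        nsr += kappa * new_inf - delta * nsr   # strain rises, then decays
        history.append((s, i, r, nsr))
    return history

print(max(h[3] for h in sir_with_nsr()))   # peak social strain
```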
227. Exploration vs. exploitation: reducing uncertainty in operational problems. Shaposhnik, Yaron (January 2016).
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2016. Includes bibliographical references (pages 205-207).

Motivated by several core operational applications, we introduce a class of multistage stochastic optimization models that capture a fundamental tradeoff between performing work under uncertainty (exploitation) and investing resources to reduce the uncertainty in the decision making (exploration/testing). Unlike existing models, in which the exploration-exploitation tradeoffs typically relate to learning the underlying distributions, the models we introduce assume a known probabilistic characterization of the uncertainty, and focus on the tradeoff of learning exact realizations. In the first part of the thesis (Chapter 2), we study a class of scheduling problems that capture common settings in service environments, in which the service provider must serve a collection of jobs that have a-priori uncertain processing times and priorities (modeled as weights). In addition, the service provider must decide how to dynamically allocate capacity between processing jobs and testing jobs to learn more about their respective processing times and weights. We obtain structural results of optimal policies that provide managerial insights, efficient optimal and near-optimal algorithms, and quantification of the value of testing. In the second part of the thesis (Chapter 3), we generalize the model introduced in the first part by studying how to prioritize testing when jobs have different uncertainties. We model differences in uncertainty using the convex order, a general relation between distributions which implies that the variance of one distribution is higher than that of the other. Using an analysis based on the concept of mean-preserving local spread, we show that the structure of the optimal policy generalizes that of the initial model, where jobs were homogeneous and had equal weights. Finally, in the third part of the thesis (Chapter 4), we study a broad class of stochastic combinatorial optimization problems that can be formulated as linear programs whose objective coefficients are random variables that can be tested, and whose constraint polyhedron has the structure of a polymatroid. We characterize the optimal policy and show that similar types of policies optimally govern testing decisions in this setting as well.
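For reference, the classical no-testing benchmark for scheduling models of the kind studied in Chapter 2 is Smith's weighted-shortest-processing-time rule: with weights and processing times fully known, sequencing jobs in decreasing weight-to-processing-time order minimizes total weighted completion time. A sketch (framing this as the point of comparison for the value of testing is our gloss, not the thesis's algorithm):

```python
def wspt_order(jobs):
    """Smith's rule: sort jobs by decreasing weight / processing-time ratio."""
    return sorted(jobs, key=lambda j: j[0] / j[1], reverse=True)

def total_weighted_completion_time(sequence):
    clock, objective = 0.0, 0.0
    for weight, proc_time in sequence:
        clock += proc_time
        objective += weight * clock
    return objective

jobs = [(3.0, 2.0), (1.0, 1.0), (4.0, 4.0)]   # (weight, processing time) pairs
print(total_weighted_completion_time(wspt_order(jobs)))
```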
228. Studies integrating geometry, probability, and optimization under convexity. Nogueira, Alexandre Belloni (January 2006).
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2006. Includes bibliographical references (p. 197-202).

Convexity has played a major role in a variety of fields over the past decades. Nevertheless, the convexity assumption continues to reveal new theoretical paradigms and applications. This dissertation explores convexity in the intersection of three fields, namely, geometry, probability, and optimization. We study in depth a variety of geometric quantities. These quantities are used to describe the behavior of different algorithms. In addition, we investigate how to algorithmically manipulate these geometric quantities. This leads to algorithms capable of transforming ill-behaved instances into well-behaved ones. In particular, we provide probabilistic methods that carry out such a task efficiently by exploiting the geometry of the problem. More specific contributions of this dissertation are as follows. (i) We conduct a broad exploration of the symmetry function of convex sets and propose efficient methods for its computation in the polyhedral case. (ii) We also relate the symmetry function with the computational complexity of an interior-point method to solve a homogeneous conic system. (iii) Moreover, we develop a family of pre-conditioners based on the symmetry function and projective transformations for such an interior-point method. The implementation of the pre-conditioners relies on geometric random walks. (iv) We develop the analysis of the re-scaled perceptron algorithm for a linear conic system. In this method a sequence of linear transformations is used to increase a condition measure associated with the problem. (v) Finally, we establish properties relating a probability density induced by an arbitrary norm and the geometry of its support. This is used to construct an efficient simulated annealing algorithm to test whether a convex set is bounded, where the set is represented only by a membership oracle.
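To ground item (iv): the classical perceptron, sketched below for the homogeneous feasibility problem "find x with Ax > 0", is the baseline whose re-scaled variant the thesis analyzes; re-scaling periodically applies linear transformations that enlarge the feasible cone's width. The instance is invented for illustration.

```python
import numpy as np

def perceptron(A, max_updates=100_000):
    """Classical perceptron: repeatedly add a (normalized) violated row of A
    to the iterate until A x > 0 holds for every row."""
    x = np.zeros(A.shape[1])
    for _ in range(max_updates):
        violated = np.flatnonzero(A @ x <= 0)
        if violated.size == 0:
            return x
        row = A[violated[0]]
        x = x + row / np.linalg.norm(row)   # step into the violated half-space
    return None   # no feasible point found within the update budget

A = np.array([[1.0, 0.2], [0.3, 1.0], [1.0, -0.4]])
print(perceptron(A))
```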
229. Information and decentralization in inventory, supply chain, and transportation systems. Roels, Guillaume (January 2006).
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2006. Includes bibliographical references (p. 199-213).

This thesis investigates the impact of lack of information and decentralization of decision-making on the performance of inventory, supply chain, and transportation systems. In the first part of the thesis, we study two extensions of a classic single-item, single-period inventory control problem: the "newsvendor problem." We first analyze the newsvendor problem when the demand distribution is only partially specified by some moments and shape parameters. We determine order quantities that are robust, in the sense that they minimize the newsvendor's maximum regret about not acting optimally, and we compute the maximum value of additional information. The minimax regret approach is scalable to solve large practical problems, such as those arising in network revenue management, since it combines an efficient solution procedure with very modest data requirements. We then analyze the newsvendor problem when the inventory decision-making is decentralized. In supply chains, inventory decisions often result from complex negotiations among supply partners and might therefore lead to a loss of efficiency (in terms of profit loss). We quantify the loss of efficiency of decentralized supply chains that use price-only contracts under the following configurations: series, assembly, competitive procurement, and competitive distribution. In the second part of the thesis, we characterize the dynamic nature of traffic equilibria in a transportation network. Using the theory of kinematic waves, we derive an analytical model for traffic delays capturing the first-order traffic dynamics and the impact of shock waves. We then incorporate the travel-time model within a dynamic user equilibrium setting and illustrate how the model applies to solve a large network assignment problem.
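A point of comparison for the partially specified newsvendor: Scarf's classical distribution-free order quantity maximizes worst-case profit over all demand distributions with known mean and standard deviation. Note the thesis uses minimax regret, a different robustness criterion; Scarf's rule is sketched only as the best-known relative.

```python
from math import sqrt

def scarf_quantity(mu, sigma, price, cost):
    """Scarf's max-min newsvendor quantity for known mean and std. dev."""
    r = (price - cost) / cost          # profit-to-cost ratio
    return mu + 0.5 * sigma * (sqrt(r) - 1.0 / sqrt(r))

# With a thin margin (r < 1) the rule orders below the mean.
print(scarf_quantity(mu=100.0, sigma=20.0, price=15.0, cost=10.0))
```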
230. United States Air Force fighter jet maintenance models: effectiveness of index policies. Kessler, John M. (John Michael) (January 2013).
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2013. Includes bibliographical references (pages 107-109).

As some of the most technically complex systems in the world, United States fighter aircraft require a complex logistics system to sustain their reliable operation and ensure that the day-to-day Air Force missions can be satisfied. While the study of this complex logistics system has received much attention from academics and practitioners, most of the focus has been on the availability of spare parts, which are indeed essential for the smooth operation of the fighter aircraft. However, in recent years there has been an increasing awareness that maintenance resources are an equally important enabler and should be considered together with inventory issues. The maintenance resources required to repair the fighter aircraft are expensive and therefore limited. Moreover, there are various types of maintenance that compete for the same resources. It is therefore imperative that the allocation of maintenance resources is done as efficiently as possible. In this thesis, we study two areas of fighter aircraft maintenance that could significantly benefit from improved resource allocation and scheduling strategies. We use quantitative and qualitative data from Air Force databases and logistics personnel to develop an innovative modeling framework to capture these challenging maintenance problems. This modeling framework is based on a generalization of the well-known multi-armed bandit superprocess problem. Using these models, we develop index policies which provide intuitive, easily implemented, and effective rules for scheduling maintenance activities and allocating maintenance resources. These policies seem to improve on existing best practices within the Air Force, and perform well in extensive data-driven simulated computational experiments. The first area is focused on the challenges of scheduling maintenance for the low observable (stealth) capabilities of the F-22 Raptor, specifically, maintenance of the outer coating of the aircraft that is essential to maintain its radar invisibility. In particular, we generate index policies that efficiently schedule which aircraft should enter low observable maintenance, how long they should be worked on, and which aircraft should fly, in order to maximize the stealth capability of the fleet. Second, we model the maintenance process of the F100-229 engine, which is the primary propulsion method used in the F-16C/D and F-15E aircraft. In particular, we generate index policies to decide which engines should take priority over others, and whether or not certain components of the engines should be repaired or replaced. The policies address both elective (planned) and unplanned maintenance tasks.
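The shape of an index policy like those described above: score every open task with a single number and give the scarce repair capacity to the highest scores each period. The skeleton below is generic; the index and task-progress functions are placeholders, not the indices derived in the thesis.

```python
import random

def run_index_policy(tasks, index_fn, capacity, periods, advance_fn):
    """Each period, rank open tasks by their current index and allocate the
    limited maintenance capacity to the top-ranked ones."""
    for _ in range(periods):
        open_tasks = [t for t in tasks if not t["done"]]
        for task in sorted(open_tasks, key=index_fn, reverse=True)[:capacity]:
            advance_fn(task)

# Toy usage with an assumed index: priority per unit of remaining work.
tasks = [{"priority": random.uniform(1, 5), "work": random.randint(1, 4),
          "done": False} for _ in range(6)]
run_index_policy(
    tasks,
    index_fn=lambda t: t["priority"] / max(t["work"], 1),
    capacity=2,
    periods=10,
    advance_fn=lambda t: t.update(work=t["work"] - 1, done=t["work"] <= 1),
)
print([t["done"] for t in tasks])
```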