41. Tractable multi-product pricing under discrete choice models / Keller, Philipp W. (Philipp Wilhelm), 1982-. January 2013.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2013. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 199-204). / We consider a retailer offering an assortment of differentiated substitutable products to price-sensitive customers. Prices are chosen to maximize profit, subject to inventory/capacity constraints, as well as more general constraints. The profit is not even a quasi-concave function of the prices under the basic multinomial logit (MNL) demand model. Linear constraints can induce a non-convex feasible region. Nevertheless, we show how to efficiently solve the pricing problem under three important, more general families of demand models. Generalized attraction (GA) models broaden the range of nonlinear responses to changes in price. We propose a reformulation of the pricing problem over demands (instead of prices) which is convex. We show that the constrained problem under MNL models can be solved in a polynomial number of Newton iterations. In experiments, our reformulation is solved in seconds rather than days by commercial software. For nested-logit (NL) demand models, we show that the profit is concave in the demands (market shares) when all the price-sensitivity parameters are sufficiently close. The closed-form expressions for the Hessian of the profit that we derive can be used with general-purpose nonlinear solvers. For the special (unconstrained) case already considered in the literature, we devise an algorithm that requires no assumptions on the problem parameters. The class of generalized extreme value (GEV) models includes the NL as well as the cross-nested logit (CNL) model. There is generally no closed-form expression for the profit in terms of the demands. We nevertheless show how the gradient and Hessian can be computed for use with general-purpose solvers. We show that the objective of a transformed problem is nearly concave when all the price sensitivities are close. For the unconstrained case, we develop a simple and surprisingly efficient first-order method. Our experiments suggest that it always finds a global optimum, for any model parameters. We apply the method to mixed logit (MMNL) models, by showing that they can be approximated with CNL models. With an appropriate sequence of parameter scalings, we conjecture that the solution found is also globally optimal. / by Philipp Wilhelm Keller. / Ph.D.
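The convex reformulation over demands that this abstract describes can be sketched in the basic MNL case as follows; the notation (valuations a_i, price sensitivity β, unit costs c_i) is ours, chosen for illustration, and the thesis's treatment of GA, NL, and GEV models is far more general.

```latex
% Sketch (our notation): MNL market shares for products i = 1..n at prices p_i,
% with a no-purchase alternative indexed by 0.
\[
  q_i(p) = \frac{e^{a_i - \beta p_i}}{1 + \sum_{j=1}^{n} e^{a_j - \beta p_j}},
  \qquad q_0 = 1 - \sum_{i=1}^{n} q_i .
\]
% Inverting the share equations expresses prices in terms of demands,
\[
  p_i(q) = \frac{1}{\beta}\left( a_i - \ln \frac{q_i}{q_0} \right),
\]
% so the profit, which is not quasi-concave in p, becomes
\[
  \pi(q) = \sum_{i=1}^{n} \bigl( p_i(q) - c_i \bigr)\, q_i ,
\]
% a concave function of q on the unit simplex; linear constraints on demands
% keep the feasible region convex.
```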
42. Learning with structured decision constraints / Goh, Chong Yang. January 2018.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 119-125). / This thesis addresses several prediction and estimation problems under structured decision constraints. We consider them in two parts below. Part 1 focuses on supervised learning problems with constrained output spaces. We approach it in two ways. First, we consider an algorithmic framework that is based on minimizing estimated conditional risk functions. With this approach, we first estimate the conditional expected loss (i.e., conditional risk) function by regression, and then minimize it to predict an output. We analyze statistical and computational properties of this approach, and demonstrate empirically that it can adapt better to certain loss functions compared to methods that directly minimize surrogates of empirical risks. Second, we consider a constraint-embedding approach for reducing prediction time. The idea is to express the output constraints in terms of the model parameters, so that computational burdens are shifted from prediction to training. Specifically, we demonstrate how certain logical constraints in multilabel classification, such as implication, transitivity and mutual exclusivity, can be embedded in convex cones under a class of linear structured prediction models. The approach is also applicable to general affine constraints in vector regression tasks. Part 2 concerns the estimation of a rank-based choice model under substitution constraints. Our motivating application is to estimate the primary demand for a bike-share service using censored data of commuters' trips. We model commuter arrivals with a Poisson process and characterize their trip preferences with a probability mass function (PMF) over rankings of origin-destination pairs. Estimating the arrival rate and PMF, however, is challenging due to the factorial growth of the number of rankings. To address this, we reduce the parameter dimension by (i) finding sparse representations efficiently, and (ii) constraining trip substitutions spatially according to the bike-share network. We also derive an iterative estimation procedure based on difference-of-convex programming. Our method is effective in recovering the primary demand and computationally tractable on a city scale, as we demonstrate on a bike-share service in Boston. / by Chong Yang Goh. / Ph. D.
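As a concrete toy illustration of the conditional-risk approach in Part 1: regress the loss of each feasible output on the features, then predict by minimizing the estimated risk over the constrained output set. Everything below (data, loss, model class, the implication constraint) is a hypothetical setup of ours, not the thesis's experiments.

```python
# Sketch: prediction by minimizing an estimated conditional risk (toy setup).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                 # features
Y = (X[:, :3] > 0).astype(int)                # multilabel outputs (3 labels)

# Constrained output space: label 2 implies label 1 (an implication constraint).
candidates = [y for y in np.ndindex(2, 2, 2) if not (y[2] == 1 and y[1] == 0)]

def loss(y, z):                               # Hamming loss between label vectors
    return float(np.sum(np.asarray(y) != np.asarray(z)))

# Step 1: for each feasible output z, regress the conditional risk E[loss(Y, z) | x].
models = {
    z: RandomForestRegressor(n_estimators=50, random_state=0)
          .fit(X, np.array([loss(y, z) for y in Y]))
    for z in candidates
}

# Step 2: predict by minimizing the estimated risk over the feasible outputs.
def predict(x):
    return min(candidates, key=lambda z: models[z].predict(x.reshape(1, -1))[0])

print(predict(X[0]))
```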
43. Degradable airline scheduling : an approach to improve operational robustness and differentiate service quality / Kang, Laura Sumi, 1977-. January 2004.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2004. / Includes bibliographical references (p. 113-118). / We present a methodology for deriving robust airline schedules that are not vulnerable to disruptions caused by bad weather. In this methodology, the existing schedule is partitioned into independent sub-schedules or layers, prioritized on the basis of revenue, that provide airlines with a clear delay/cancellation policy and may enable them to market and sell tickets for flight legs based on passenger preference for reliability. We present three different ways to incorporate degradability into the scheduling process: (1) between flight scheduling and fleet assignment (degradable schedule partitioning model), (2) with fleet assignment (degradable fleet assignment model), and (3) with aircraft routing (degradable aircraft routing model). Each problem is modeled as an integer program. Search algorithms are applied to the degradable aircraft routing model, which has a large number of decision variables. Results indicate that we can successfully assign flight legs with high-revenue itineraries to the higher priority layer without adding aircraft or changing the schedule, and differentiate the service quality for passengers in different priority layers. Passengers in the high priority layers have much less delay and fewer cancellations than passengers in low priority layers, even during bad weather. In terms of recovery cost, which includes lost revenue, operational cost savings, and crew delay cost, degradable airline schedules can save up to $30,000 per day. Degradable airline schedules are especially cost-effective when bad weather affects an airport whose capacity it sharply reduces. / by Laura Sumi Kang. / Ph.D.
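The layer-partitioning idea can be caricatured as a tiny integer program: pick which flight legs go into the protected (high priority) layer so as to maximize protected revenue. The data and the single capacity constraint below are hypothetical stand-ins for the fleet-assignment and aircraft-routing constraints the thesis actually uses.

```python
# Toy degradable-schedule partition (hypothetical data; the thesis's models use
# fleet-assignment / aircraft-routing constraints, not a simple cap).
import pulp

revenue = {"BOS_ORD": 120_000, "ORD_SFO": 95_000, "BOS_LGA": 40_000, "LGA_DCA": 25_000}
cap_layer1 = 2                                    # toy limit on protected-layer legs

prob = pulp.LpProblem("degradable_partition", pulp.LpMaximize)
x = {f: pulp.LpVariable(f"x_{f}", cat="Binary") for f in revenue}  # 1 = protected

prob += pulp.lpSum(revenue[f] * x[f] for f in revenue)             # protected revenue
prob += pulp.lpSum(x.values()) <= cap_layer1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({f: int(x[f].value()) for f in revenue})
```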
44. Improving behavioral decision making in operations and food safety management / Wang, Shujing (Ph. D., Massachusetts Institute of Technology). January 2018.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 127-138). / In Chapter 1, we design controlled laboratory experiments to investigate whether and how process-driven discussions may improve information sharing, information integration, and the resulting group performance in collective decision making. We first investigate three different conditions on group discussion and find that while suggesting a process (without enforcement) significantly enhances information sharing, enforcing the process is a critical enabler for the group to integrate the shared information into its final decision making and hence improves group performance. We then replicate our results with a more complex task in the context of operations management, and find evidence that the mechanism by which process induces better performance is reduced task difficulty and hence lower mental effort. Chapters 2 and 3 focus on supply-chain and governance measures to improve food safety management. In Chapter 2, we employ Heckman's sample selection framework to investigate whether and how structural properties of China's farming supply chains and the strength of governance within the regions in which the supply chains operate jointly influence the risks of economically motivated adulteration (EMA) of food products. We find that both supply chain dispersion and weak local governance are associated with higher EMA risks. In Chapter 3, we further explore a set of innovative transparency measures for quantifying the strength of governance in food safety, designed using data scraped from relevant government websites. / by Shujing Wang. / Ph. D.
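For readers unfamiliar with the Heckman framework used in Chapter 2, here is a generic two-step sketch (probit selection equation, then an outcome regression augmented with the inverse Mills ratio) on simulated data; the variables are ours and bear no relation to the thesis's EMA dataset.

```python
# Generic Heckman two-step on simulated data (illustrative only).
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 2000
z = rng.normal(size=(n, 2))                         # selection covariates
x = z[:, :1]                                        # outcome covariate
u = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)
selected = z @ np.array([1.0, -0.5]) + u[:, 0] > 0  # outcome observed only if selected
y = 2.0 + 1.5 * x[:, 0] + u[:, 1]                   # latent outcome

# Step 1: probit for selection, then the inverse Mills ratio.
probit = sm.Probit(selected.astype(float), sm.add_constant(z)).fit(disp=0)
xb = probit.fittedvalues                            # linear predictor
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: OLS on the selected sample with the IMR as a correction regressor.
X2 = sm.add_constant(np.column_stack([x[selected], imr[selected]]))
print(sm.OLS(y[selected], X2).fit().params)         # [const, slope, IMR coefficient]
```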
45. Algorithmic and game-theoretic perspectives on scheduling / Uhan, Nelson A. (Nelson Alexander). January 2008.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2008. / Includes bibliographical references (p. 103-110). / In this thesis, we study three problems related to various algorithmic and game-theoretic aspects of scheduling. First, we apply ideas from cooperative game theory to study situations in which a set of agents faces supermodular costs. These situations appear in a variety of scheduling contexts, as well as in some settings related to facility location and network design. Although cooperation is unlikely when costs are supermodular, in some situations, the failure to cooperate may give rise to negative externalities. We study the least core value of a cooperative game -- the minimum penalty we need to charge a coalition for acting independently that ensures the existence of an efficient and stable cost allocation -- as a means of encouraging cooperation. We show that computing the least core value of supermodular cost cooperative games is strongly NP-hard, and design an approximation framework for this problem that, in the end, yields a (3 + ε)-approximation algorithm. We also apply our approximation framework to obtain better results for two special cases of supermodular cost cooperative games that arise from scheduling and matroid optimization. Second, we focus on the classic precedence-constrained single-machine scheduling problem with the weighted sum of completion times objective. We focus on so-called 0-1 bipartite instances of this problem, a deceptively simple class of instances that has virtually the same approximability behavior as arbitrary instances. In the hope of improving our understanding of these instances, we use models from random graph theory to look at these instances with a probabilistic lens. First, we show that for almost all 0-1 bipartite instances, the decomposition technique of Sidney (1975) does not yield a non-trivial decomposition. Second, for almost all 0-1 bipartite instances, we give a lower bound on the integrality gap of various linear programming relaxations of this problem. Finally, we show that for almost all 0-1 bipartite instances, all feasible schedules are arbitrarily close to optimal. Finally, we consider the problem of minimizing the sum of weighted completion times in a concurrent open shop environment. We present some interesting properties of various linear programming relaxations for this problem, and give a combinatorial primal-dual 2-approximation algorithm. / by Nelson A. Uhan. / Ph.D.
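For reference, the least core value discussed above has a standard linear-programming definition (with exponentially many constraints, which is where the hardness comes from); the notation below is the usual one for cost games.

```latex
% Least core of a cost cooperative game (N, c): the smallest penalty z* for which
% an efficient allocation x exists that overcharges no coalition S by more than z*.
\[
  z^* \;=\; \min_{x \in \mathbb{R}^N,\; z \in \mathbb{R}} \; z
  \quad \text{s.t.} \quad
  x(N) = c(N), \qquad
  x(S) \le c(S) + z \quad \forall\, S \subsetneq N,\ S \neq \emptyset,
\]
% where x(S) = \sum_{i \in S} x_i. The thesis shows computing z* is strongly
% NP-hard when c is supermodular, and gives a (3 + \varepsilon)-approximation.
```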
46. Estimation of sell-up potential in airline revenue management systems / Guo, Jingqiang Charles. January 2008.
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2008. / Includes bibliographical references (p. 69-71). / The growth of Low Fare Carriers (LFCs) has encouraged many airlines to remove fare restrictions (such as advance purchase requirements and Saturday-night stays) on many of their fare class products, leading to the simplification of fare structures in competitive markets. In the most extreme case, these markets have fare structures that are unrestricted; the fare class products differ only by price since they all lack restrictions. In these unrestricted markets, passengers buy the lowest possible fare product since there are no longer any restrictions that prevent them from doing so. A forecasting method known as "Q-forecasting" takes into account the sell-up potential of passengers in forecasting the demand in each of the fare products in such markets. Sell-up occurs when passengers, upon being denied their original fare class choice, decide to pay more for the next available fare class so long as the price remains below their maximum willingness to pay. Quantifying this sell-up potential, using either estimated or input values, is thus crucial in helping airlines increase revenues when competing in unrestricted fare markets. A simulation model known as the Passenger Origin-Destination Simulator (PODS) contains the following 3 sell-up estimation methods: (i) Direct Observation (DO), (ii) Forecast Prediction (FP), and (iii) Inverse Cumulative (IC). The goal of this thesis is thus to investigate and compare the revenue performance of the 3 sell-up estimation methods. These methods are tested in a 2-airline (consisting of AL1 and AL2) unrestricted network under different RM fare class optimization scenarios. Both estimated and input sell-up values are tested on AL1, whereas only input sell-up values are tested on AL2. The findings of the simulations indicate that using FP typically results in the highest revenues for AL1 among all 3 sell-up estimation methods. When compared against simple RM fare class threshold methods that do not consider sell-up, using FP results in up to a 3% revenue gain for AL1. Under some fare class optimization scenarios, using FP instead of input sell-up values even results in a revenue increase of close to 1%. These findings suggest that FP is robust enough under a range of fare class optimizers to be used by airlines as a sell-up estimator in unrestricted fare environments so as to raise revenues. / by Jingqiang Charles Guo. / S.M.
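For context, the PODS literature commonly parameterizes sell-up with an exponential willingness-to-pay curve calibrated by "FRAT5", the fare ratio at which half the passengers sell up. The formula below is our paraphrase of that standard model, not an expression quoted from this thesis.

```latex
% Common exponential sell-up model used with Q-forecasting (our paraphrase):
% probability that a passenger willing to pay the lowest fare f_Q sells up
% to a class with fare f,
\[
  P_{\text{sell-up}}(f) \;=\;
  \exp\!\left( -\,\frac{\ln 2}{\mathrm{FRAT5} - 1}
       \left( \frac{f}{f_Q} - 1 \right) \right),
\]
% so that P = 1 at f = f_Q and P = 1/2 when f / f_Q = FRAT5.
```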
47. Transient analysis of D(t)/M(t)/1 queuing system with applications to computing airport delays / Gupta, Shubham (Ph. D., Massachusetts Institute of Technology). January 2010.
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 44-45). / This thesis is motivated by the desire to estimate air traffic delays at airports under a range of assumptions about the predictability of (a) inter-arrival times of demands (arrivals and departures) and (b) service times of aircraft movements (landings and takeoffs). It consists of two main parts. In the first, a transient analysis of a D(t)/M(t)/1 queuing system is presented. The reason for focusing on such a system is that it may be useful in evaluating some of the benefits of a future Air Traffic Management (ATM) system, such as the Next Generation Air Transportation System (NGATS or NextGen) currently being developed in the United States. One of the main features of these future ATM systems will be high predictability and regularity of the inter-arrival times of airport demands, i.e., a nearly deterministic demand process. This will be achieved through significant reductions in aircraft trajectory uncertainty, with the expectation that airport delays will also decrease substantially as a result. We develop a novel, computationally efficient numerical approach for solving D(t)/M(t)/1 queuing systems with general, dynamic demand and service rates. We also discuss the complexity of the approach and some characteristics of the resulting solutions. In the second part of the thesis, we use a set of models of dynamic queuing systems, in addition to our D(t)/M(t)/1 model, to explore the range of values that airport delays may assume under different sets of assumptions about the level of uncertainty associated with demand inter-arrival times and with service times. We thus compute airport delays under different queuing systems in a dynamic setting (where demand and service rates are time-varying) to capture the entire range of uncertainties expected during the deployment of various future ATM system technologies. The specific additional models we consider are: a deterministic D(t)/D(t)/1 model, in which airport demands for landings and takeoffs are assumed to occur exactly as scheduled; and an M(t)/Ek(t)/1 model which, because of the "richness" of the family of Erlang distributions, Ek, can be used to approximate most M(t)/G(t)/1 models that may arise in airport applications. These models, when used together, provide bounds on estimated airport delays, with the D(t)/D(t)/1 model most likely to offer a lower bound and the M(t)/M(t)/1 model (i.e., the special case of M(t)/Ek(t)/1 with k = 1) an upper bound. We show through a set of examples based on a few of the busiest airports in the United States that: the closeness of the delay estimates provided by the different models depends on the level of congestion at an airport and the relative shapes of the dynamic profiles of capacity and demand at the airport; the difference (on a "percentage" basis) between the estimates provided by the deterministic model and the stochastic ones is largest for uncongested airports and decreases as the level of congestion increases; D(t)/M(t)/1 and M(t)/D(t)/1 produce estimates of the same order of magnitude, and reflect delays in the presence of "moderate" uncertainty at an airport; and delays under a D(t)/M(t)/1 queuing system are always higher than under an M(t)/D(t)/1 system. / by Shubham Gupta. / S.M.
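To give a flavor of such transient computations, here is a deliberately simplified D/M/1 recursion with constant rates (our toy, not the thesis's algorithm, which handles time-varying D(t)/M(t)/1 systems): track the queue-length distribution at successive deterministic arrival epochs, using the fact that with exponential services the number of departures while the server stays busy is Poisson.

```python
# Toy transient D/M/1 recursion: deterministic inter-arrival time tau, service rate mu.
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

def step(pmf, tau, mu, maxn):
    """Evolve the queue-length pmf from just before one arrival to just before the next."""
    lam = mu * tau
    new = [0.0] * (maxn + 1)
    for k, pk in enumerate(pmf):
        if pk == 0.0:
            continue
        n = min(k + 1, maxn)                  # the arrival joins the system
        for j in range(1, n + 1):             # j left behind means n - j departures
            new[j] += pk * poisson_pmf(n - j, lam)
        new[0] += pk * (1.0 - sum(poisson_pmf(d, lam) for d in range(n)))
    return new

pmf = [1.0] + [0.0] * 50                      # start empty
for _ in range(100):                          # 100 deterministic arrivals
    pmf = step(pmf, tau=1.0, mu=1.1, maxn=50)
print(sum(j * p for j, p in enumerate(pmf)))  # mean number seen by an arrival
```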
48. Essays on inventory, pricing and financial trading strategies / Lu, Ye (Ph. D., Massachusetts Institute of Technology). January 2009.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2009. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 83-85). / In a multi-product market, if one product stocks out, consumers may substitute to competing products. In this thesis, we use an axiomatic approach to characterize a price-dependent demand substitution rule, and provide a sufficient and necessary condition for demand models where our demand substitution rule applies. Our results can serve as a link between the pricing and inventory literature, and enable the study of joint pricing and inventory coordination and competition. I demonstrate the impact of this axiomatic approach on the joint pricing and inventory coordination model by incorporating the price-dependent demand substitution rule, and illustrate that if the axiomatic approach is acceptable, the optimal strategy and corresponding expected profit are quite different than in models that ignore stockout demand substitution. I use this price-dependent demand substitution rule to model the joint pricing and inventory game, and study the existence of Nash equilibrium in this game. In the second part of this thesis, I consider the problem of dynamically trading a security over a finite time horizon. The model assumes that a trader has a "safe price" for the security, which is the highest price that the trader is willing to pay for this security in each time period. A trader's order has both temporary (short term) and permanent (long term) impact on the security price, and the security price may increase after the trader's order, to a point where it is above the safe price. Given a safe price constraint for the current time period, I characterize the optimal policy for the trader to maximize the total number of securities he can buy over a fixed time horizon. In particular, I consider a greedy policy, which involves at each stage buying a quantity that drives the temporary price to the security's safe price. I show that the greedy policy is not always optimal and provide conditions under which the greedy policy is optimal. I also provide bounds on the performance of the greedy policy relative to the performance of the optimal policy. / by Ye Lu. / Ph.D.
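The greedy policy described above is easy to sketch under an assumed linear impact model (the impact structure and parameters here are our illustrative choices): in each period, buy the largest quantity whose temporary price does not exceed the safe price, after which only the permanent component of the impact persists. The abstract's point is that such myopic buying need not be optimal, since today's permanent impact raises every future price.

```python
# Greedy buying to a safe-price cap under linear impact (illustrative assumptions:
# temporary impact eta per share within a period, permanent impact gamma per share).
def greedy_schedule(p0, safe, periods, eta=0.01, gamma=0.004):
    price, plan = p0, []
    for _ in range(periods):
        q = max((safe - price) / eta, 0.0)    # drive the temporary price to the cap
        plan.append(q)
        price += gamma * q                    # only the permanent impact persists
    return plan

plan = greedy_schedule(p0=100.0, safe=100.5, periods=5)
print([round(q, 1) for q in plan], round(sum(plan), 1))
```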
49. Correlation decay and decentralized optimization in graphical models / Weber, Theophane. January 2010.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 213-229) and index. / Many models of optimization, statistics, social organizations and machine learning capture local dependencies by means of a network that describes the interconnections and interactions of different components. However, in most cases, optimization or inference on these models is hard due to the dimensionality of the networks. This is so even when using algorithms that take advantage of the underlying graphical structure. Approximate methods are therefore needed. The aim of this thesis is to study such large-scale systems, focusing on the question of how randomness affects the complexity of optimizing in a graph; of particular interest is the phenomenon known as correlation decay, namely, that the influence of a node on another node of the network decreases quickly as the distance between them grows. In the first part of this thesis, we develop a new message-passing algorithm for optimization in graphical models. We formally prove a connection between the correlation decay property and (i) the near-optimality of this algorithm, as well as (ii) the decentralized nature of optimal solutions. In the context of discrete optimization with random costs, we develop a technique for establishing that a system exhibits correlation decay. We illustrate the applicability of the method by giving concrete results for the cases of uniform and Gaussian distributed cost coefficients in networks with bounded connectivity. In the second part, we pursue similar questions in a combinatorial optimization setting: we consider the problem of finding a maximum weight independent set in a bounded degree graph, when the node weights are i.i.d. random variables. Surprisingly, we discover that the problem becomes tractable for certain distributions. Specifically, we construct a PTAS for the case of exponentially distributed weights and arbitrary graphs with degree at most 3, and obtain generalizations for higher degrees and different distributions. At the same time we prove that no PTAS exists for the case of exponentially distributed weights for graphs with sufficiently large but bounded degree, unless P=NP. Next, we shift our focus to graphical games, which are a game-theoretic analog of graphical models. We establish a connection between the problem of finding an approximate Nash equilibrium in a graphical game and the problem of optimization in graphical models. We use this connection to re-derive NashProp, a message-passing algorithm which computes Nash equilibria for graphical games on trees; we also suggest several new search algorithms for graphical games in general networks. Finally, we propose a definition of correlation decay in graphical games, and establish that the property holds in a restricted family of graphical games. The last part of the thesis is devoted to a particular application of graphical models and message-passing algorithms to the problem of early prediction of Alzheimer's disease. To this end, we develop a new measure of synchronicity between different parts of the brain, and apply it to electroencephalogram data. We show that the resulting prediction method outperforms a vast number of other EEG-based measures in the task of predicting the onset of Alzheimer's disease. / by Théophane Weber. / Ph.D.
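A minimal instance of the message-passing computations the thesis builds on is exact maximum weight independent set on a tree, solved by passing (take, skip) values from the leaves to the root; the toy instance below is ours, and the thesis's cavity-expansion analysis concerns bounded-degree graphs with random weights.

```python
# Max-weight independent set on a tree via leaf-to-root message passing (exact on trees).
def mwis_tree(adj, w, root=0):
    def dp(v, parent):
        take, skip = w[v], 0.0        # best value with v in / out of the set
        for u in adj[v]:
            if u == parent:
                continue
            t, s = dp(u, v)
            take += s                 # neighbors of a taken node must be skipped
            skip += max(t, s)
        return take, skip
    return max(dp(root, -1))

adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}   # tree edges 0-1, 0-2, 2-3
w = {0: 2.0, 1: 1.5, 2: 1.0, 3: 2.5}
print(mwis_tree(adj, w))                        # 4.5: take nodes 0 and 3
```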
50. Portfolio risk minimization under departures from normality / Lauprête, Geoffrey J. (Geoffrey Jean), 1972-. January 2001.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2001. / Includes bibliographical references (p. 206-210). / This thesis revisits the portfolio selection problem in cases where returns cannot be modeled as Gaussian. The emphasis is on the development of financially intuitive and statistically sound approaches to portfolio risk minimization. When returns exhibit asymmetry, we propose using a quantile-based measure of risk which we call shortfall. Shortfall is related to Value-at-Risk and Conditional Value-at-Risk, and can be tuned to capture tail risk. We formulate the sample shortfall minimization problem as a linear program. Using results from empirical process theory, we derive a central limit theorem for the shortfall portfolio estimator. We warn about the statistical pitfalls of portfolio selection based on the minimization of rare events, which happens to be the case when shortfall is tuned to focus on extreme tail risk. In the presence of heavy tails and tail dependence, we show that portfolios based on the minimization of alternative robust measures of risk may in fact have lower variance than those based on the minimization of sample variance. We show that minimizing the sample mean absolute deviation yields portfolios that are asymptotically more efficient than those based on the minimization of the sample variance, when returns have a multivariate Student-t distribution with degrees of freedom less than or equal to 6. This motivates our consideration of other robust measures of risk, for which we present linear and quadratic programming formulations. We carry out experiments on simulated and historical data, illustrating the fact that the efficiency gained by considering robust measures of risk may be substantial. Finally, when the number of return observations is of the same order of magnitude as, or smaller than, the dimension of the portfolio being estimated, we investigate the applicability of regularization to sample risk minimization. We examine both L1- and L2-regularization. We interpret regularization from a Bayesian perspective, and provide an algorithm for choosing the regularization parameter. We validate the use of regularization in portfolio selection on simulated and historical data, and conclude that regularization can yield portfolios with smaller risk, and in particular smaller variance. / by Geoffrey J. Lauprête. / Ph.D.
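Quantile-based risk minimization of the kind described above is typically written as a linear program in the style of Rockafellar and Uryasev's CVaR formulation; the notation below is ours, and the thesis's shortfall measure may differ in its details.

```latex
% Sample CVaR-style LP over n return observations r_1, ..., r_n at tail level alpha:
% choose portfolio weights w and threshold t to minimize average tail loss.
\[
  \min_{w,\, t,\, u \ge 0} \;\; t + \frac{1}{\alpha n} \sum_{i=1}^{n} u_i
  \quad \text{s.t.} \quad
  u_i \ge -\,w^{\top} r_i - t \;\; (i = 1, \dots, n), \qquad
  w^{\top} \mathbf{1} = 1 .
\]
% The mean-absolute-deviation criterion mentioned in the abstract linearizes
% similarly, with u_i bounding |w^T r_i - (mean portfolio return)|.
```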