61

Optimization of airport terminal-area air traffic operations under uncertain weather conditions

Pfeil, Diana Michalek January 2011 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2011. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (p. 153-158). / Convective weather is responsible for large delays and widespread disruptions in the U.S. National Airspace System, especially during summer. Although Air Traffic Flow Management algorithms exist to schedule and route traffic in the face of disruptions, they require reliable forecasts of airspace capacity. However, there exists a gap between the spatial and temporal accuracy of aviation weather forecasts (and existing capacity models) and what these algorithms assume. In this thesis we consider the problem of integrating currently available convective weather forecasts with air traffic management in terminal airspace (near airports). We first demonstrate how raw convective weather forecasts, which provide deterministic predictions of the Vertically Integrated Liquid (the precipitation content in a column of airspace), can be translated into reliable and accurate probabilistic forecasts of whether or not a terminal-area route will be blocked. Given a flight route through the terminal-area, we apply techniques from machine learning to determine the probability that the route will be open in actual weather. This probabilistic route blockage predictor is then used to optimize terminal-area operations. We develop an integer programming formulation for a 2-dimensional model of terminal airspace that dynamically moves arrival and departure routes to maximize expected capacity. Experiments using real weather scenarios on stormy days show that our algorithms recommend that a terminal-area route be modified 30% of the time, opening up 13% more available routes during these scenarios. The error rate is low, with only 5% of cases corresponding to a modified route being blocked while the original route is in fact open. In addition, for routes predicted to be open with probability 0.95 or greater by our method, 96% of these routes are indeed open (on average) in the weather that materializes. In the final part of the thesis we consider more realistic models of terminal airspace routing and structure. We develop an A*-based routing algorithm that identifies 3-D routes through airspace that adhere to physical aircraft constraints during climb and descent, are conflict-free, and are likely to avoid convective weather hazards. The proposed approach is aimed at improving traffic manager decision-making in today's operational environment. / by Diana Michalek Pfeil. / Ph.D.
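A minimal sketch of the A*-routing idea mentioned at the end of the abstract, reduced here to a 2-D grid in which cells whose forecast Vertically Integrated Liquid (VIL) exceeds an assumed threshold are treated as blocked. The grid, threshold, and unit step costs are invented for illustration; the thesis's algorithm handles 3-D routes and aircraft performance constraints.

```python
# Illustrative sketch (not the thesis's algorithm): A* search on a 2-D grid where
# cells whose forecast VIL exceeds a threshold are treated as blocked.
import heapq

def a_star_route(vil, start, goal, vil_threshold=3.0):
    """Find a shortest 4-connected path from start to goal avoiding high-VIL cells."""
    rows, cols = len(vil), len(vil[0])
    def heuristic(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])  # Manhattan distance
    frontier = [(heuristic(start), 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and vil[nr][nc] < vil_threshold:
                heapq.heappush(frontier, (cost + 1 + heuristic((nr, nc)),
                                          cost + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no weather-avoiding route exists

# Toy forecast: a storm cell blocks the middle of the grid.
vil = [[0, 0, 0, 0],
       [0, 5, 5, 0],
       [0, 5, 0, 0],
       [0, 0, 0, 0]]
print(a_star_route(vil, (0, 0), (3, 3)))
```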
62

Tractable multi-product pricing under discrete choice models

Keller, Philipp W. (Philipp Wilhelm), 1982- January 2013 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2013. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 199-204). / We consider a retailer offering an assortment of differentiated substitutable products to price-sensitive customers. Prices are chosen to maximize profit, subject to inventory/capacity constraints, as well as more general constraints. The profit is not even a quasi-concave function of the prices under the basic multinomial logit (MNL) demand model. Linear constraints can induce a non-convex feasible region. Nevertheless, we show how to efficiently solve the pricing problem under three important, more general families of demand models. Generalized attraction (GA) models broaden the range of nonlinear responses to changes in price. We propose a reformulation of the pricing problem over demands (instead of prices) which is convex. We show that the constrained problem under MNL models can be solved in a polynomial number of Newton iterations. In experiments, our reformulation is solved in seconds rather than days by commercial software. For nested-logit (NL) demand models, we show that the profit is concave in the demands (market shares) when all the price-sensitivity parameters are sufficiently close. The closed-form expressions for the Hessian of the profit that we derive can be used with general-purpose nonlinear solvers. For the special (unconstrained) case already considered in the literature, we devise an algorithm that requires no assumptions on the problem parameters. The class of generalized extreme value (GEV) models includes the NL as well as the cross-nested logit (CNL) model. There is generally no closed-form expression for the profit in terms of the demands. We nevertheless show how the gradient and Hessian can be computed for use with general-purpose solvers. We show that the objective of a transformed problem is nearly concave when all the price sensitivities are close. For the unconstrained case, we develop a simple and surprisingly efficient first-order method. Our experiments suggest that it always finds a global optimum, for any model parameters. We apply the method to mixed logit (MMNL) models, by showing that they can be approximated with CNL models. With an appropriate sequence of parameter scalings, we conjecture that the solution found is also globally optimal. / by Philipp Wilhelm Keller. / Ph.D.
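For context, the standard change of variables behind a "reformulation over demands" for the basic MNL model, written in generic notation with a common price sensitivity (the symbols below are illustrative and not taken from the thesis):

```latex
% Generic MNL change of variables; \alpha_i, \beta, c_i are illustrative parameters.
\[
  q_i(p) \;=\; \frac{e^{\alpha_i - \beta p_i}}{1 + \sum_{j} e^{\alpha_j - \beta p_j}},
  \qquad
  q_0 \;=\; 1 - \sum_i q_i .
\]
% Inverting the share equations gives prices as a function of demands,
\[
  p_i(q) \;=\; \frac{1}{\beta}\Bigl(\alpha_i - \ln \frac{q_i}{q_0}\Bigr),
\]
% so the profit, written over demands rather than prices,
\[
  \Pi(q) \;=\; \sum_i q_i \bigl(p_i(q) - c_i\bigr)
         \;=\; \sum_i q_i \Bigl(\tfrac{1}{\beta}\bigl(\alpha_i - \ln q_i + \ln q_0\bigr) - c_i\Bigr),
\]
% is known to be concave on the simplex, which is what makes the problem tractable.
```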
63

Learning with structured decision constraints

Goh, Chong Yang January 2018 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 119-125). / This thesis addresses several prediction and estimation problems under structured decision constraints. We consider them in two parts below. Part 1 focuses on supervised learning problems with constrained output spaces. We approach it in two ways. First, we consider an algorithmic framework that is based on minimizing estimated conditional risk functions. With this approach, we first estimate the conditional expected loss (i.e., conditional risk) function by regression, and then minimize it to predict an output. We analyze statistical and computational properties of this approach, and demonstrate empirically that it can adapt better to certain loss functions compared to methods that directly minimize surrogates of empirical risks. Second, we consider a constraint-embedding approach for reducing prediction time. The idea is to express the output constraints in terms of the model parameters, so that computational burdens are shifted from prediction to training. Specifically, we demonstrate how certain logical constraints in multilabel classification, such as implication, transitivity and mutual exclusivity, can be embedded in convex cones under a class of linear structured prediction models. The approach is also applicable to general affine constraints in vector regression tasks. Part 2 concerns the estimation of a rank-based choice model under substitution constraints. Our motivating application is to estimate the primary demand for a bike-share service using censored data of commuters' trips. We model commuter arrivals with a Poisson process and characterize their trip preferences with a probability mass function (PMF) over rankings of origin-destination pairs. Estimating the arrival rate and PMF, however, is challenging due to the factorial growth of the number of rankings. To address this, we reduce the parameter dimension by (i) finding sparse representations efficiently, and (ii) constraining trip substitutions spatially according to the bike-share network. We also derive an iterative estimation procedure based on difference-of-convex programming. Our method is effective in recovering the primary demand and computationally tractable on a city scale, as we demonstrate on a bike-share service in Boston. / by Chong Yang Goh. / Ph. D.
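A toy sketch of the conditional-risk-minimization idea from Part 1: for each candidate output, regress the loss it would incur against the features, then predict by minimizing the estimated conditional risk over the (possibly constrained) output set. The data, loss function, and constraint below are invented; the thesis develops and analyzes the general framework.

```python
# Toy conditional-risk minimization: one regressor per candidate output estimates
# the conditional expected loss; prediction picks the feasible output minimizing it.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, outputs = 500, [0, 1, 2]                       # three candidate outputs
X = rng.normal(size=(n, 3))
y_true = (X[:, 0] > 0).astype(int) + (X[:, 1] > 0).astype(int)   # true label in {0,1,2}

def loss(y_hat, y):                               # asymmetric loss: under-prediction is costly
    return 2.0 * max(y - y_hat, 0) + 1.0 * max(y_hat - y, 0)

# Fit one conditional-risk regressor per candidate output.
risk_models = {}
for y_hat in outputs:
    targets = np.array([loss(y_hat, y) for y in y_true])
    risk_models[y_hat] = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, targets)

def predict(x, feasible=outputs):
    """Minimize the estimated conditional risk over the feasible output set."""
    x = np.asarray(x).reshape(1, -1)
    return min(feasible, key=lambda y_hat: risk_models[y_hat].predict(x)[0])

print(predict([1.0, -0.5, 0.2]), predict([1.0, 1.0, 0.2], feasible=[0, 1]))
```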
64

Degradable airline scheduling : an approach to improve operational robustness and differentiate service quality

Kang, Laura Sumi, 1977- January 2004 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2004. / Includes bibliographical references (p. 113-118). / We present a methodology for deriving robust airline schedules that are not vulnerable to disruptions caused by bad weather. In this methodology, the existing schedule is partitioned into independent sub-schedules or layers - prioritized on the basis of revenue - that provide airlines with a clear delay/cancellation policy and may enable them to market and sell tickets for flight legs based on passenger preference for reliability. We present three different ways to incorporate degradability into the scheduling process: (1) between flight scheduling and fleet assignment (degradable schedule partitioning model), (2) with fleet assignment (degradable fleet assignment model), and (3) with aircraft routing (degradable aircraft routing model). Each problem is modeled as an integer program. Search algorithms are applied to the degradable aircraft routing model, which has a large number of decision variables. Results indicate that we can successfully assign flight legs with high revenue itineraries to the higher priority layer without adding aircraft or changing the schedule, and differentiate the service quality for passengers in different priority layers. Passengers in the high priority layers have much less delay and fewer cancellations than passengers in low priority layers, even during bad weather. In terms of recovery cost, which includes lost revenue, operational cost savings and crew delay cost, degradable airline schedules can save up to $30,000 per day. Degradable airline schedules have a cost-saving effect especially when bad weather affects an airport whose capacity is sharply reduced under such conditions. / by Laura Sumi Kang. / Ph.D.
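A toy sketch of the layer-assignment idea: assign each flight leg to the high-priority or low-priority layer so as to protect the most revenue in the high-priority layer, subject to a cap on the aircraft that layer may use. The flights, revenues, aircraft counts, and the single-constraint model below are invented and far simpler than the thesis's three formulations; PuLP is assumed to be installed.

```python
# Toy layer-assignment integer program (illustration only, not the thesis's model).
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

flights = ["F1", "F2", "F3", "F4", "F5"]
revenue = {"F1": 120, "F2": 80, "F3": 200, "F4": 60, "F5": 150}   # itinerary revenue ($k)
aircraft = {"F1": 1, "F2": 1, "F3": 2, "F4": 1, "F5": 2}          # aircraft-days needed
high_priority_fleet = 3                                            # aircraft reserved for layer 1

model = LpProblem("degradable_layering", LpMaximize)
x = {f: LpVariable(f"high_{f}", cat=LpBinary) for f in flights}    # 1 = high-priority layer

model += lpSum(revenue[f] * x[f] for f in flights)                 # protect the most revenue
model += lpSum(aircraft[f] * x[f] for f in flights) <= high_priority_fleet

model.solve()
print({f: int(x[f].value()) for f in flights})
```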
65

Improving behavioral decision making in operations and food safety management

Wang, Shujing, Ph. D. Massachusetts Institute of Technology January 2018 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 127-138). / In Chapter 1, we design controlled laboratory experiments to investigate whether and how process-driven discussions may improve information sharing, information integration, and the resulting group performance in collective decision making. We first investigate three different conditions on group discussion and find that while suggesting a process (without enforcement) significantly enhances information sharing, enforcing the process is a critical enabler for the group to integrate the shared information into its final decision making and hence improves group performance. We then replicate our results with a more complex task in the context of operations management, and find evidence that the underlying mechanism for the process to induce better performance is reduced difficulty, and hence lowered mental effort, involved in the task. Chapters 2 and 3 focus on supply-chain and governance measures to improve food safety management. In Chapter 2, we employ Heckman's sample selection framework to investigate whether and how structural properties of China's farming supply chains and the strength of governance within the regions in which the supply chains operate jointly influence the risks of economically motivated adulteration (EMA) of food products. We find that both supply chain dispersion and weak local governance are associated with higher EMA risks. In Chapter 3, we further explore a set of innovative transparency measures for quantifying the strength of governance in food safety, which are designed based on data scraped from relevant government websites. / by Shujing Wang. / Ph. D.
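For readers unfamiliar with Heckman's sample selection framework, a minimal two-step sketch on synthetic data (the chapter's actual specification, covariates, and data are not reproduced here; all variables below are invented).

```python
# Heckman two-step on synthetic data: probit for selection, then an outcome
# regression augmented with the inverse Mills ratio.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=(n, 1))                    # selection covariate
x = rng.normal(size=(n, 1))                    # outcome covariate
u = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)  # correlated errors
select = (0.5 + 1.0 * z[:, 0] + u[:, 0] > 0)   # selection equation
y = 1.0 + 2.0 * x[:, 0] + u[:, 1]              # outcome (observed only if selected)

# Step 1: probit on the full sample, then the inverse Mills ratio.
Z = sm.add_constant(z)
probit = sm.Probit(select.astype(float), Z).fit(disp=0)
xb = Z @ probit.params
mills = norm.pdf(xb) / norm.cdf(xb)

# Step 2: OLS on the selected sample, with the Mills ratio as an extra regressor.
X = sm.add_constant(np.column_stack([x[select], mills[select]]))
ols = sm.OLS(y[select], X).fit()
print(ols.params)                              # intercept, slope, Mills-ratio coefficient
```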
66

Algorithmic and game-theoretic perspectives on scheduling

Uhan, Nelson A. (Nelson Alexander) January 2008 (has links)
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2008. / Includes bibliographical references (p. 103-110). / In this thesis, we study three problems related to various algorithmic and game-theoretic aspects of scheduling. First, we apply ideas from cooperative game theory to study situations in which a set of agents faces supermodular costs. These situations appear in a variety of scheduling contexts, as well as in some settings related to facility location and network design. Although cooperation is unlikely when costs are supermodular, in some situations, the failure to cooperate may give rise to negative externalities. We study the least core value of a cooperative game -- the minimum penalty we need to charge a coalition for acting independently that ensures the existence of an efficient and stable cost allocation -- as a means of encouraging cooperation. We show that computing the least core value of supermodular cost cooperative games is strongly NP-hard, and design an approximation framework for this problem that, in the end, yields a (3 + [epsilon])-approximation algorithm. We also apply our approximation framework to obtain better results for two special cases of supermodular cost cooperative games that arise from scheduling and matroid optimization. Second, we focus on the classic precedence-constrained single-machine scheduling problem with the weighted sum of completion times objective. We focus on so-called 0-1 bipartite instances of this problem, a deceptively simple class of instances that has virtually the same approximability behavior as arbitrary instances. In the hope of improving our understanding of these instances, we use models from random graph theory to look at these instances with a probabilistic lens. First, we show that for almost all 0-1 bipartite instances, the decomposition technique of Sidney (1975) does not yield a non-trivial decomposition. / (cont.) Second, for almost all 0-1 bipartite instances, we give a lower bound on the integrality gap of various linear programming relaxations of this problem. Finally, we show that for almost all 0-1 bipartite instances, all feasible schedules are arbitrarily close to optimal. Finally, we consider the problem of minimizing the sum of weighted completion times in a concurrent open shop environment. We present some interesting properties of various linear programming relaxations for this problem, and give a combinatorial primal-dual 2-approximation algorithm. / by Nelson A. Uhan. / Ph.D.
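A small brute-force illustration of the least core value for a toy 4-player cost game (the thesis shows the general problem is strongly NP-hard for supermodular cost games; the quadratic coalition cost below is invented, and enumerating all coalitions is only feasible at this tiny scale).

```python
# Least core value of a toy cost game: minimize z such that an efficient allocation x
# satisfies x(S) <= c(S) + z for every proper, nonempty coalition S.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

players = [0, 1, 2, 3]
def cost(S):
    return len(S) ** 2                        # invented supermodular-looking cost

n = len(players)
c_obj = np.zeros(n + 1); c_obj[-1] = 1.0      # variables: x_0..x_{n-1}, z; minimize z

A_ub, b_ub = [], []
for size in range(1, n):                      # all proper, nonempty coalitions
    for S in combinations(players, size):
        row = np.zeros(n + 1)
        row[list(S)] = 1.0                    # x(S)
        row[-1] = -1.0                        # ... - z
        A_ub.append(row); b_ub.append(cost(S))

A_eq = [np.append(np.ones(n), 0.0)]           # efficiency: x(N) = c(N)
b_eq = [cost(players)]

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * (n + 1))
print("least core value:", res.x[-1], "allocation:", res.x[:-1])
```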
67

Estimation of sell-up potential in airline revenue management systems

Guo, Jingqiang Charles January 2008 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2008. / Includes bibliographical references (p. 69-71). / The growth of Low Fare Carriers (LFCs) has encouraged many airlines to remove fare restrictions (such as advance purchase requirements and Saturday-night stays) on many of their fare class products, leading to the simplification of fare structures in competitive markets. In the most extreme case, these markets have fare structures that are unrestricted; the fare class products differ only by price since they all lack restrictions. In these unrestricted markets, passengers buy the lowest possible fare product since there are no longer any restrictions that prevent them from doing so. A forecasting method known as "Q-forecasting" takes into account the sell-up potential of passengers in forecasting the demand in each of the fare products in such markets. Sell-up occurs when passengers, upon being denied their original fare class choice, decide to pay more for the next available fare class so long as the price remains below their maximum willingness to pay. Quantifying this sell-up potential, using either estimated or input values, is thus crucial in helping airlines increase revenues when competing in unrestricted fare markets. A simulation model known as the Passenger Origin-Destination Simulator (PODS) contains the following 3 sell-up estimation methods: (i) Direct Observation (DO), (ii) Forecast Prediction (FP), and (iii) Inverse Cumulative (IC). The goal of this thesis is thus to investigate and compare the revenue performance of the 3 sell-up estimation methods. These methods are tested in a 2-airline (consisting of AL1 and AL2) unrestricted network under different RM fare class optimization scenarios. / (cont.) Both estimated and input sell-up values are tested on AL1 whereas only input sell-up values are tested on AL2. The findings of the simulations indicate that using FP typically results in the highest revenues for AL1 among all 3 sell-up estimation methods. When compared against simple RM fare class threshold methods that do not consider sell-up, using FP results in up to a 3% revenue gain for AL1. Under some fare class optimization scenarios, using FP instead of input sell-up values even results in a revenue increase of close to 1%. These findings suggest that FP is robust enough under a range of fare class optimizers to be used by airlines as a sell-up estimator in unrestricted fare environments so as to raise revenues. / by Jingqiang Charles Guo. / S.M.
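As background, one common exponential ("FRAT5"-style) parameterization of a sell-up curve, shown only as an illustration: the probability that a passenger willing to buy the lowest fare also buys a higher class decays with the fare ratio, with FRAT5 defined as the ratio at which half of passengers sell up. The fares and FRAT5 value below are invented; the thesis's DO, FP, and IC methods estimate such inputs from booking data rather than assuming them.

```python
# Illustrative exponential sell-up curve (assumed parameterization, not from the thesis).
def sellup_probability(fare_ratio, frat5):
    """P(passenger sells up to a class priced at fare_ratio x the lowest fare)."""
    if fare_ratio <= 1.0:
        return 1.0
    return 0.5 ** ((fare_ratio - 1.0) / (frat5 - 1.0))

lowest_fare = 100.0
fares = [100.0, 130.0, 180.0, 250.0, 400.0]   # fare classes, lowest to highest
frat5 = 2.0                                   # 50% of passengers sell up at 2x the lowest fare

for fare in fares:
    p = sellup_probability(fare / lowest_fare, frat5)
    print(f"fare {fare:6.0f}  ratio {fare / lowest_fare:4.2f}  P(sell-up) = {p:.3f}")
```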
68

Transient analysis of D(t)/M(t)/1 queuing system with applications to computing airport delays

Gupta, Shubham, Ph. D. Massachusetts Institute of Technology January 2010 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 44-45). / This thesis is motivated by the desire to estimate air traffic delays at airports under a range of assumptions about the predictability of (a) inter-arrival times of demands (arrivals and departures) and (b) service times of aircraft movements (landings and takeoffs). It consists of two main parts. In the first, a transient analysis of a D(t)/M(t)/1 queuing system is presented. The reason for focusing on such a system is that it may be useful in evaluating some of the benefits of a future Air Traffic Management (ATM) system, such as the Next Generation Air Transportation System (NGATS or NextGen) currently being developed in the United States. One of the main features of these future ATM systems will be high predictability and regularity of the inter-arrival times of airport demands, i.e., a nearly deterministic demand process. This will be achieved through significant reductions in aircraft trajectory uncertainty, with the expectation that airport delays will also decrease substantially as a result. We develop a novel, computationally-efficient numerical approach for solving D(t)/M(t)/1 queuing systems with general, dynamic demand and service rates. We also discuss the complexity of the approach and some characteristics of the resulting solutions. In the second part of the thesis, we use a set of models of dynamic queuing systems, in addition to our D(t)/M(t)/1 model, to explore the range of values that airport delays may assume under different sets of assumptions about the level of uncertainty associated with demand inter-arrival times and with service times. We thus compute airport delays under different queuing systems in a dynamic setting (where demand and service rates are time-varying) to capture the entire range of uncertainties expected during the deployment of various future ATM system technologies. The specific additional models we consider are: a deterministic D(t)/D(t)/1 model in which it is assumed that airport demands for landings and takeoffs occur exactly as scheduled; and an M(t)/Ek(t)/1 model which, because of the "richness" of the family of Erlang distributions, Ek, can be used to approximate most M(t)/G(t)/1 models that may arise in airport applications. It can be seen that these models, when used together, provide bounds on estimated airport delays, with the D(t)/D(t)/1 model most likely to offer a lower bound and the M(t)/M(t)/1 model (i.e., the special case of M(t)/Ek(t)/1 with k = 1), an upper bound. We show through a set of examples based on a few of the busiest airports in the United States that: the closeness of the delay estimates provided by the different models depends on the level of congestion at an airport and the relative shapes of the dynamic profiles of capacity and demand at the airport; the difference (on a "percentage" basis) between the estimates provided by the deterministic model and the stochastic ones is largest for uncongested airports and decreases as the level of congestion increases; D(t)/M(t)/1 and M(t)/D(t)/1 produce estimates of the same order of magnitude, and reflect delays in the presence of "moderate" uncertainty at an airport; and delays under a D(t)/M(t)/1 queuing system are always higher than under a M(t)/D(t)/1 system. / by Shubham Gupta. / S.M.
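A simple simulation contrasting deterministic and exponential service times under scheduled (deterministic) arrivals, included only to illustrate why the deterministic D/D/1 model tends to bound delays from below; this is not the thesis's numerical method, and the rates and horizon below are invented.

```python
# Single-server queue with deterministic arrivals: deterministic vs exponential service.
import random

def average_delay(num_flights=500, headway=1.0, mean_service=0.9,
                  deterministic_service=False, seed=1):
    """Average queueing delay for flights arriving every `headway` minutes."""
    random.seed(seed)
    server_free_at = 0.0
    total_wait = 0.0
    for k in range(num_flights):
        arrival = k * headway                          # deterministic (scheduled) demand
        start = max(arrival, server_free_at)           # wait if the runway is busy
        service = (mean_service if deterministic_service
                   else random.expovariate(1.0 / mean_service))
        server_free_at = start + service
        total_wait += start - arrival
    return total_wait / num_flights

print("D/D/1 average delay:", average_delay(deterministic_service=True))
print("D/M/1 average delay:", average_delay(deterministic_service=False))
```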
69

Essays on inventory, pricing and financial trading strategies

Lu, Ye, Ph. D. Massachusetts Institute of Technology January 2009 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2009. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 83-85). / In a multi-product market, if one product stocks out, consumers may substitute to competing products. In this thesis, we use an axiomatic approach to characterize a price-dependent demand substitution rule, and provide a sufficient and necessary condition for demand models where our demand substitution rule applies. Our results can serve as a link between the pricing and inventory literature, and enable the study of joint pricing and inventory coordination and competition. I demonstrate the impact of this axiomatic approach on the joint pricing and inventory coordination model by incorporating the price-dependent demand substitution rule, and illustrate that if the axiomatic approach is acceptable, the optimal strategy and corresponding expected profit are quite different than models that ignore stockout demand substitution. I use this price-dependent demand substitution rule to model the joint pricing and inventory game, and study the existence of Nash equilibrium in this game. In the second part of this thesis, I consider the problem of dynamically trading a security over a finite time horizon. The model assumes that a trader has a "safe price" for the security, which is the highest price that the trader is willing to pay for this security in each time period. A trader's order has both temporary (short term) and permanent (long term) impact on the security price and the security price may increase after the trader's order, to a point where it is above the safe price. Given a safe price constraint for the current time period, I characterize the optimal policy for the trader to maximize the total number of securities he can buy over a fixed time horizon. / (cont.) In particular, I consider a greedy policy, which involves at each stage buying a quantity that drives the temporary price to the security's safe price. I show that the greedy policy is not always optimal and provide conditions under which the greedy policy is optimal. I also provide bounds on the performance of the greedy policy relative to the performance of the optimal policy. / by Ye Lu. / Ph.D.
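A sketch of the greedy policy under an invented linear price-impact model: each period the trader buys just enough to push the temporary price up to the safe price, and only the permanent component of that impact persists. The impact parameters and prices are illustrative; the thesis characterizes when such a greedy rule is, and is not, optimal.

```python
# Greedy trading under an assumed linear temporary/permanent price-impact model.
def greedy_buy(safe_price, start_price, periods, temp_impact=0.02, perm_impact=0.01):
    """Total quantity bought by the greedy policy over a fixed horizon."""
    price = start_price
    total_quantity = 0.0
    for t in range(periods):
        if price >= safe_price:
            break                                       # cannot buy without breaching the cap
        quantity = (safe_price - price) / temp_impact   # drive the temporary price to the cap
        total_quantity += quantity
        price += perm_impact * quantity                 # only the permanent impact persists
        print(f"period {t}: buy {quantity:.1f}, next base price {price:.2f}")
    return total_quantity

print("total bought:", greedy_buy(safe_price=101.0, start_price=100.0, periods=5))
```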
70

Correlation decay and decentralized optimization in graphical models

Weber, Theophane January 2010 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 213-229) and index. / Many models of optimization, statistics, social organizations and machine learning capture local dependencies by means of a network that describes the interconnections and interactions of different components. However, in most cases, optimization or inference on these models is hard due to the dimensionality of the networks. This is so even when using algorithms that take advantage of the underlying graphical structure. Approximate methods are therefore needed. The aim of this thesis is to study such large-scale systems, focusing on the question of how randomness affects the complexity of optimizing in a graph; of particular interest is the study of a phenomenon known as correlation decay, namely, the phenomenon where the influence of a node on another node of the network decreases quickly as the distance between them grows. In the first part of this thesis, we develop a new message-passing algorithm for optimization in graphical models. We formally prove a connection between the correlation decay property and (i) the near-optimality of this algorithm, as well as (ii) the decentralized nature of optimal solutions. In the context of discrete optimization with random costs, we develop a technique for establishing that a system exhibits correlation decay. We illustrate the applicability of the method by giving concrete results for the cases of uniform and Gaussian distributed cost coefficients in networks with bounded connectivity. In the second part, we pursue similar questions in a combinatorial optimization setting: we consider the problem of finding a maximum weight independent set in a bounded degree graph, when the node weights are i.i.d. random variables. / (cont.) Surprisingly, we discover that the problem becomes tractable for certain distributions. Specifically, we construct a PTAS for the case of exponentially distributed weights and arbitrary graphs with degree at most 3, and obtain generalizations for higher degrees and different distributions. At the same time we prove that no PTAS exists for the case of exponentially distributed weights for graphs with sufficiently large but bounded degree, unless P=NP. Next, we shift our focus to graphical games, which are a game-theoretic analog of graphical models. We establish a connection between the problem of finding an approximate Nash equilibrium in a graphical game and the problem of optimization in graphical models. We use this connection to re-derive NashProp, a message-passing algorithm which computes Nash equilibria for graphical games on trees; we also suggest several new search algorithms for graphical games in general networks. Finally, we propose a definition of correlation decay in graphical games, and establish that the property holds in a restricted family of graphical games. The last part of the thesis is devoted to a particular application of graphical models and message-passing algorithms to the problem of early prediction of Alzheimer's disease. To this end, we develop a new measure of synchronicity between different parts of the brain, and apply it to electroencephalogram data. We show that the resulting prediction method outperforms a vast number of other EEG-based measures in the task of predicting the onset of Alzheimer's disease. / by Théophane Weber. / Ph.D.
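A baseline greedy heuristic for maximum-weight independent set with i.i.d. exponential node weights on a small bounded-degree graph, included only to make the problem in the second part concrete; it is not the PTAS constructed in the thesis, and the graph below is invented.

```python
# Greedy heuristic for maximum-weight independent set with exponential weights.
import random

def greedy_mwis(adjacency, weights):
    """Repeatedly pick the heaviest remaining node and discard its neighbors."""
    remaining = set(adjacency)
    chosen = []
    while remaining:
        v = max(remaining, key=lambda u: weights[u])
        chosen.append(v)
        remaining -= adjacency[v] | {v}
    return chosen

random.seed(0)
# A 6-cycle: every node has degree 2 (within the degree-3 regime discussed above).
adjacency = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
weights = {i: random.expovariate(1.0) for i in range(6)}

independent_set = greedy_mwis(adjacency, weights)
print("chosen nodes:", independent_set,
      "total weight:", sum(weights[v] for v in independent_set))
```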
