About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
201

Route optimization under uncertainty for Unmanned Underwater Vehicles / Route optimization under uncertainty for UUVs

Cates, Jacob Roy January 2011 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2011. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 87). / As technology continues to advance, the military seeks to reduce the risk service members endure by using unmanned vehicles. These unmanned vehicles will, over time, become more independent and trusted in operational use. The goal of this thesis is to improve the intelligence an unmanned vehicle applies to its decision making, keeping pace with its ever-increasing capabilities. In this thesis, we assume an Unmanned Underwater Vehicle (UUV) is given tasks and must decide which ones to perform (or not perform) and in which order. If there is enough time and energy to perform all of the tasks, then the UUV only needs to solve a traveling salesman problem to find the best order. We focus on a tightly constrained situation, where the UUV must choose which tasks to perform to collect the highest reward. In prize-collecting traveling salesman problems, authors often dismiss stochastic problem parameters, settling for the expected values of random variables. In this thesis a more rigorous probabilistic model is formulated which establishes a guaranteed confidence level for the probability that the UUV violates mission-critical time and energy constraints. The formulation developed takes the stochasticity of the problem parameters into account and produces solutions which are robust. The thesis first presents a linear programming problem which calculates the transition probabilities for a specific route. This linear programming problem is then used to create a constraint which forces the UUV to choose a route that maintains an appropriate confidence level for satisfying the time and energy constraints. Once the exact model is created, heuristics are discussed and analyzed. The heuristics are designed to provide "good" solutions for larger problems while maintaining a relatively low run time. / by Jacob Roy Cates. / S.M.
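A minimal sketch of the tightly constrained selection problem described above, assuming toy data: it enumerates task subsets and orders, and checks the time-budget chance constraint by Monte Carlo sampling rather than the thesis's linear-programming construction. All rewards, travel times, the 120-minute budget, and the 0.9 confidence level are hypothetical.

```python
import itertools, random

random.seed(0)

# Hypothetical task data: (reward, mean_minutes, sd_minutes) per task,
# plus symmetric travel times between the base (node 0) and tasks.
tasks = {1: (5, 30, 5), 2: (8, 45, 10), 3: (3, 20, 4)}
travel = {(0, 1): 10, (0, 2): 25, (0, 3): 15, (1, 2): 12, (1, 3): 8, (2, 3): 20}
t = lambda a, b: travel[(min(a, b), max(a, b))]

BUDGET, ALPHA, N_SAMPLES = 120.0, 0.9, 2000  # minutes, confidence, MC draws

def feasible_prob(route):
    """Monte Carlo estimate of P(total mission time <= BUDGET)."""
    ok = 0
    for _ in range(N_SAMPLES):
        path = [0] + list(route) + [0]
        total = sum(t(a, b) for a, b in zip(path, path[1:]))
        total += sum(random.gauss(tasks[i][1], tasks[i][2]) for i in route)
        ok += total <= BUDGET
    return ok / N_SAMPLES

best = (0.0, ())
for k in range(1, len(tasks) + 1):
    for subset in itertools.combinations(tasks, k):
        for route in itertools.permutations(subset):
            reward = sum(tasks[i][0] for i in route)
            if reward > best[0] and feasible_prob(route) >= ALPHA:
                best = (reward, route)

print("best reward %.0f via route %s" % best)
```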
202

Reverse logistics for consumer electronics : forecasting failures, managing inventory, and matching warranties

Calmon, André du Pin January 2015 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2015. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 147-150). / The goal of this thesis is to describe, model, and optimize reverse logistics systems commonly used in the Consumer Electronics industry. The context and motivation for this work stem from a collaboration with an industrial partner, a Fortune 500 company that sells consumer electronics and is one of the top retailers in its sector. The thesis is divided into three parts. In the first part of the thesis we model and analyze the problem of forecasting failures of new products. When a new device is introduced to the market there is limited information available about its failure time distribution since most devices have yet to fail. However, there is extensive failure time data for prior devices, as well as evidence that the failure time distribution for new devices can be forecast from the data for prior devices. In this setting, we propose two strategies for forecasting the failure distribution of new products that leverage the censored failure observations for the new devices as well as the massive amount of data collected for prior devices. We validate these strategies using data from our industrial partner and from a social enterprise located in the Boston area. The second part of the thesis concerns inventory management in a reverse logistics system that supports warranty returns and replacements for a consumer electronic device. This system is a closed-loop supply chain since failed devices are refurbished and are kept in inventory to be used as replacement devices or are sold through a side-sales channel. Furthermore, managing inventory in this system is challenging due to the short life-cycle of this type of device and the rapidly declining value of the inventory that could potentially be sold. We propose a stochastic model that captures the inventory dynamics of this system, including the limited life-cycle and the declining value of inventory that can be sold off. We characterize the structure of the optimal policy for this problem. In addition, we introduce two heuristics: (i) a certainty-equivalent approximation, which leads to a simple closed-form policy; and (ii) a dual-balancing heuristic, which results in a more tractable newsvendor-type model. We also develop a robust version of this model in order to obtain bounds on the overall performance of the system. The performance of these heuristics is analyzed using data from our industrial partner. The final part of the thesis concerns the problem faced by a consumer electronics retailer when matching devices in inventory to customers. More specifically, we analyze a setting where there are two warranties in place: (i) the consumer warranty, offered by the retailer to the consumer, and (ii) the Original Equipment Manufacturer (OEM) warranty, offered by the OEM to the retailer. Both warranties are valid for a limited period (usually 12 months), and once warranties expire, the coverage to replace or repair a faulty device ends. Thus, a customer does not receive a replacement if he/she is out of consumer warranty, and the retailer cannot send the device to the OEM for repairs if it is out of OEM warranty.
The retailer would ideally like the two warranties for a device to be matched, i.e., the customer would have the same time left in his consumer warranty as the device has left in the OEM warranty. A mismatch between these warranties can impose costs on the retailer beyond the usual processing costs of warranty requests. Namely, since a device can fail multiple times during its life-cycle, the replacement devices sent to customers who file warranty requests can lead to out-of-OEM-warranty returns. In order to mitigate the number of out-of-OEM-warranty returns, we propose an online algorithm to match customers who have filed warranty claims to refurbished devices in inventory. The algorithm matches the oldest devices in inventory to the oldest customers in each period. We characterize the competitive ratio of this algorithm and, through numerical experiments using historical data, demonstrate that it can significantly reduce out-of-warranty returns compared to our partner's current strategy. / by Andre du Pin Calmon. / Ph. D.
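The matching rule stated above (oldest devices to oldest customers, per period) is simple enough to sketch directly; sorted pairing is the natural greedy implementation. The records below are hypothetical, and a real system would repeat this each period as claims and refurbished inventory arrive.

```python
# Hypothetical records: months of consumer warranty left for customers with
# open claims, and months of OEM warranty left for refurbished devices.
customers = [("c1", 2), ("c2", 9), ("c3", 5)]
devices = [("d1", 1), ("d2", 6), ("d3", 11)]

# Oldest-to-oldest rule: sort both sides by remaining coverage, pair in order.
pairs = zip(sorted(customers, key=lambda c: c[1]),
            sorted(devices, key=lambda d: d[1]))
for (cid, cw), (did, dw) in pairs:
    print(f"{cid} ({cw} mo left) <- {did} ({dw} mo OEM left), gap {abs(cw - dw)} mo")
```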
203

Robust reconnaissance asset planning under uncertainty

Culver, David M. (David Martin) January 2013 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2013. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 105-107). / This thesis considers the tactical reconnaissance asset allocation problem in military operations. Specifically, this thesis presents methods to optimize, under uncertain conditions, tactical reconnaissance asset allocation in order to maximize, within acceptable levels of asset risk exposure, the expected total information collection value. We propose a deterministic integer optimization formulation and two robust mixed-integer optimization extensions to address this problem. Robustness is applied to our model using both polyhedral and ellipsoidal uncertainty sets, resulting in tractable mixed-integer linear and second-order cone problems. We show through experimentation that robust optimization leads to overall improvements in solution quality compared to non-robust and typical human-generated plans. Additionally, we show that by using our robust models, military planners can ensure better solution feasibility compared to non-robust planning methods even if they seriously misjudge their knowledge of the enemy and the battlefield. We also compare the trade-offs of using polyhedral and ellipsoidal uncertainty sets. In our tests, the model using ellipsoidal uncertainty sets provided better-quality solutions, at the cost of longer average solution times than the polyhedral uncertainty set model. Lastly, we outline a special case of our models that improves solution time at the cost of some solution quality. / by David M. Culver. / S.M.
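For intuition on the two uncertainty sets, here is a hedged numerical sketch of the robust counterpart of a single uncertain linear constraint; the coefficients, budget Γ, and radius Ω are invented, and the thesis's actual models are mixed-integer linear and second-order cone problems rather than this feasibility check.

```python
import numpy as np

# One uncertain constraint (a + u)ᵀx <= b, with u drawn from an uncertainty set.
a_nom = np.array([3.0, 2.0])   # nominal coefficients
delta = np.array([0.5, 0.4])   # maximum deviation per coefficient
b = 10.0
x = np.array([1.5, 2.0])       # candidate plan to certify

# Polyhedral (budget) counterpart: at most Gamma coefficients deviate fully,
# so the worst case adds the Gamma largest terms delta_i * |x_i|.
Gamma = 1
worst_poly = a_nom @ x + sum(sorted(delta * np.abs(x), reverse=True)[:Gamma])

# Ellipsoidal counterpart: deviations u with ||u||_2 <= Omega (scaled by delta),
# whose worst case adds Omega times the 2-norm of delta_i * x_i.
Omega = 1.0
worst_ellip = a_nom @ x + Omega * np.linalg.norm(delta * x)

print("robust-feasible (polyhedral):", worst_poly <= b)
print("robust-feasible (ellipsoidal):", worst_ellip <= b)
```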
204

Resource allocation in stochastic processing networks : performance and scaling

Zhong, Yuan, Ph.D. Massachusetts Institute of Technology. Operations Research Center January 2012 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2012. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 189-193). / This thesis addresses the design and analysis of resource allocation policies in large-scale stochastic systems, motivated by examples such as the Internet, cloud facilities, wireless networks, etc. A canonical framework for modeling many such systems is provided by "stochastic processing networks" (SPN) (Harrison [28, 29]). In this context, the key operational challenge is efficient and timely resource allocation. We consider two important classes of SPNs: switched networks and bandwidth-sharing networks. Switched networks are constrained queueing models that have been used successfully to describe the detailed packet-level dynamics in systems such as input-queued switches and wireless networks. Bandwidth-sharing networks have primarily been used to capture the long-term behavior of the flow-level dynamics in the Internet. In this thesis, we develop novel methods to analyze the performance of existing resource allocation policies, and we design new policies that achieve provably good performance. First, we study performance properties of so-called Maximum-Weight-α (MW-α) policies in switched networks, and of α-fair policies in bandwidth-sharing networks, both of which are well-known families of resource allocation policies, parametrized by a positive parameter α > 0. We study both their transient properties as well as their steady-state behavior. In switched networks, under an MW-α policy with α ≥ 1, we obtain bounds on the maximum queue size over a given time horizon, by means of a maximal inequality derived from the standard Lyapunov drift condition. As a corollary, we establish the full state space collapse property when α > 1. In the steady-state regime, for any α ≥ 0, we obtain explicit exponential tail bounds on the queue sizes, by relying on a norm-like Lyapunov function, different from the standard Lyapunov function used in the literature. Methods and results are largely parallel for bandwidth-sharing networks. Under an α-fair policy with α ≥ 1, we obtain bounds on the maximum number of flows in the network over a given time horizon, and hence establish the full state space collapse property when α ≥ 1. In the steady-state regime, using again a norm-like Lyapunov function, we obtain explicit exponential tail bounds on the number of flows, for any α > 0. As a corollary, we establish the validity of the diffusion approximation developed by Kang et al. [32], in steady state, for the case α = 1. Second, we consider the design of resource allocation policies in switched networks. At a high level, the central performance questions of interest are: what is the optimal scaling behavior of policies in large-scale systems, and how can we achieve it? More specifically, in the context of general switched networks, we provide a new class of online policies, inspired by the classical insensitivity theory for product-form queueing networks, which admits explicit performance bounds. These policies achieve optimal queue-size scaling, in the conventional heavy-traffic regime, for a class of switched networks, thus settling a conjecture (documented in [51]) on queue-size scaling in input-queued switches.
In the particular context of input-queued switches, we consider the scaling behavior of queue sizes, as a function of the port number n and the load factor ρ. In particular, we consider the special case of uniform arrival rates, and we focus on the regime where ρ = 1 - 1/f(n), with f(n) ≥ n. We provide a new class of policies under which the long-run average total queue size scales as O(n^1.5 f(n) log f(n)). As a corollary, when f(n) = n, the long-run average total queue size scales as O(n^2.5 log n). This is a substantial improvement upon prior works [44], [52], [48], [39], where the same quantity scales as O(n^3) (ignoring logarithmic dependence on n). / by Yuan Zhong. / Ph.D.
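A small simulation sketch of the MW-α policy discussed above, for an n-port input-queued switch with uniform Bernoulli arrivals; the brute-force enumeration of matchings is only workable for toy n, and the load, horizon, and α value are assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, alpha, T = 3, 1.0, 10_000
lam = 0.9 / n                 # uniform per-pair arrival rate, load rho = 0.9
Q = np.zeros((n, n))          # Q[i, j]: packets at input i destined to output j

def mw_alpha_matching(Q, alpha):
    """Matching (permutation) maximizing sum_i Q[i, sigma(i)]^alpha."""
    best, best_w = None, -1.0
    for sigma in itertools.permutations(range(n)):
        w = sum(Q[i, sigma[i]] ** alpha for i in range(n))
        if w > best_w:
            best, best_w = sigma, w
    return best

for _ in range(T):
    Q += rng.random((n, n)) < lam          # Bernoulli arrivals per port pair
    sigma = mw_alpha_matching(Q, alpha)    # schedule under MW-alpha
    for i in range(n):                     # serve one packet per matched pair
        if Q[i, sigma[i]] > 0:
            Q[i, sigma[i]] -= 1

print("average queue per port pair:", Q.mean())
```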
205

Robust optimization, game theory, and variational inequalities

Aghassi, Michele Leslie January 2005 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2005. / Includes bibliographical references (p. 193-109). / We propose a robust optimization approach to analyzing three distinct classes of problems related to the notion of equilibrium: the nominal variational inequality (VI) problem over a polyhedron, the finite game under payoff uncertainty, and the network design problem under demand uncertainty. In the first part of the thesis, we demonstrate that the nominal VI problem is in fact a special instance of a robust constraint. Using this insight and duality-based proof techniques from robust optimization, we reformulate the VI problem over a polyhedron as a single-level (and many-times continuously differentiable) optimization problem. This reformulation applies even if the associated cost function has an asymmetric Jacobian matrix. We give sufficient conditions for the convexity of this reformulation and thereby identify a class of VIs, of which monotone affine (and possibly asymmetric) VIs are a special case, which may be solved using widely available and commercial-grade convex optimization software. In the second part of the thesis, we propose a distribution-free model of incomplete-information games, in which the players use a robust optimization approach to contend with payoff uncertainty. / (cont.) Our "robust game" model relaxes the assumptions of Harsanyi's Bayesian game model, and provides an alternative, distribution-free equilibrium concept, for which, in contrast to ex post equilibria, existence is guaranteed. We show that computation of "robust-optimization equilibria" is analogous to that of Nash equilibria of complete-information games. Our results cover incomplete-information games either involving or not involving private information. In the third part of the thesis, we consider uncertainty on the part of a mechanism designer. Specifically, we present a novel, robust optimization model of the network design problem (NDP) under demand uncertainty and congestion effects, and under either system-optimal or user-optimal routing. We propose a corresponding branch-and-bound algorithm which comprises the first constructive use of the price of anarchy concept. In addition, we characterize conditions under which the robust NDP reduces to a less computationally demanding problem, either a nominal counterpart or a single-level quadratic optimization problem. Finally, we present a novel traffic "paradox," illustrating counterintuitive behavior of changes in cost relative to changes in demand. / by Michele Leslie Aghassi. / Ph.D.
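As an illustration of what solving a monotone affine VI entails, here is a sketch using the classical extragradient method rather than the thesis's duality-based single-level convex reformulation; the matrix, box feasible set, and step size are assumptions (the symmetric part of M is positive semidefinite and the step is below 1/||M||, so the iteration converges).

```python
import numpy as np

# Monotone affine VI: find x* in X with (M x* + q)ᵀ(x - x*) >= 0 for all x in X.
# X is the box [0,1]^2 here, so projection is a clip; M is asymmetric, as in
# the class of VIs the thesis treats, but its symmetric part is PSD (monotone).
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
q = np.array([-1.0, -1.0])
F = lambda x: M @ x + q
proj = lambda x: np.clip(x, 0.0, 1.0)

x = np.zeros(2)
step = 0.2                           # < 1/||M|| = 1/sqrt(5), ensures convergence
for _ in range(500):
    y = proj(x - step * F(x))        # predictor step
    x = proj(x - step * F(y))        # corrector (extragradient) step

print("approximate VI solution:", x)  # interior solution solves M x = -q
```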
206

Online optimization in routing and scheduling

Wagner, Michael R. (Michael Robert), 1978- January 2006 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2006. / Includes bibliographical references (leaves 169-176). / In this thesis we study online optimization problems in routing and scheduling. An online problem is one where the problem instance is revealed incrementally. Decisions can (and sometimes must) be made before all information is available. We design and analyze (polynomial-time) online algorithms for a variety of problems. We utilize worst-case competitive ratio (and relaxations thereof), asymptotic and Monte Carlo simulation analyses in our study of these algorithms. The focus of this thesis is on online routing problems in arbitrary metric spaces. We begin our study with online versions of the Traveling Salesman Problem (TSP) and the Traveling Repairman Problem (TRP). We then generalize these basic problems to allow for precedence constraints, capacity constraints and multiple vehicles. We give the first competitive ratio results for many new online routing problems. We then consider resource augmentation, where we give the online algorithm additional resources: faster servers, larger capacities, more servers, less restrictive constraints and advanced information. We derive new worst-case bounds that are relaxations of the competitive ratio. / (cont.) We also study the (stochastic) asymptotic properties of these algorithms - introducing stochastic structure to the problem data, unknown and unused by the online algorithm. In a variety of situations we show that many online routing algorithms are (quickly) asymptotically optimal, almost surely, and we characterize the rates of convergence. We also study classic machine sequencing problems in an online setting. Specifically, we look at deterministic and randomized algorithms for the problems of scheduling jobs with release dates on single and parallel machines, with and without preemption, to minimize the sum of weighted completion times. We derive improved competitive ratio bounds and we show that many well-known machine scheduling algorithms are almost surely asymptotically optimal under general stochastic assumptions. For both routing and sequencing problems, we complement these theoretical derivations with Monte Carlo simulation results. / by Michael Robert Wagner. / Ph.D.
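A toy sketch of one natural online routing strategy in this spirit: replan an optimal tour over unserved requests whenever a new request is released. It is restricted to a 1-D metric space with brute-force tour computation, and is illustrative rather than one of the thesis's analyzed algorithms; the request data are invented.

```python
import itertools

# Requests arrive over time on the real line; the server moves at unit speed
# from the origin. Each request is (release time, location).
requests = [(0.0, 3.0), (1.0, -2.0), (4.0, 5.0)]

t, pos, pending, i = 0.0, 0.0, [], 0
while i < len(requests) or pending:
    # absorb any request released by the current time (or wait if idle)
    if i < len(requests) and (not pending or requests[i][0] <= t):
        t = max(t, requests[i][0]); pending.append(requests[i][1]); i += 1
        continue
    # replan: optimal visiting order over pending requests from current position
    order = min(itertools.permutations(pending),
                key=lambda p: sum(abs(b - a) for a, b in zip((pos,) + p, p)))
    nxt = order[0]
    arrive = t + abs(nxt - pos)
    # travel toward the next request, but stop early if a new one is released
    if i < len(requests) and requests[i][0] < arrive:
        dt = requests[i][0] - t
        pos += dt if nxt > pos else -dt
        t = requests[i][0]
    else:
        pos, t = nxt, arrive
        pending.remove(nxt)

print("all requests served; back at origin by time", t + abs(pos))
```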
207

Optimization-simulation framework to optimize hospital bed allocation in academic medical centers

Vanden Berg, Andrew M January 2018 (has links)
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 99-100). / Congestion, overcrowding, and increasing patient wait times are major challenges that many large academic medical centers currently face. To address these challenges, hospitals must effectively utilize available beds through proper strategic bed allocation and robust operational day-to-day bed assignment policies. Since patient daily demand for beds is highly variable, it is frequent that the physical capacity allocated to a given clinical service is not sufficient to accommodate all of the patients who belong to that service. This situation can lead to extensive patient wait times in various locations in the hospital (e.g., the emergency department), as well as clinically and operationally undesirable misplacements of patients in hospital floors/beds that are managed by clinical services other than the ones to which the patients belong. In this thesis, we develop an optimization-simulation framework to optimize the bed allocation at Mass General Hospital. Detailed, data-driven simulation suggests that the newly proposed bed allocation would lead to a significant reduction in patient intra-day wait time in the emergency department and other hospital locations, as well as a major reduction in the misplacements of patients in the Medicine service, which is the largest service in the hospital. We employ a two-pronged approach. First, we develop a detailed simulation model of the entire hospital that can be used to assess the effectiveness of day-to-day operational bed assignment policies given a specific bed allocation. However, the simulation does not allow tractable optimization that seeks to find the best bed allocation among all possible allocations. This motivates the development of a network-flow/network-design-inspired mixed-integer program that approximates the operational performance of bed allocations and allows us to effectively search for approximately the best allocation. The mixed-integer program can be solved via a scenario-sampling approach to provide candidate bed allocations. These are then tested and evaluated via the simulation model. These tools facilitate expert discussions on how to modify the existing bed allocation at MGH to improve the day-to-day performance of the bed assignment process. / by Andrew M. Vanden Berg. / S.M.
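The two-pronged loop described above can be summarized in a skeleton like the following, where the simulation, demand rates, service names, and candidate-generation step are all hypothetical stand-ins; in particular, random sampling replaces the network-design MIP that generates candidates in the thesis.

```python
import random

random.seed(1)
services = ["medicine", "surgery", "cardiology"]
TOTAL_BEDS = 60

def simulate(alloc, days=200):
    """Toy stand-in for the hospital simulation: binomial daily demand per
    service (rates hypothetical); unmet demand counts as a misplacement."""
    rate = {"medicine": 30, "surgery": 15, "cardiology": 10}
    misplaced = 0
    for _ in range(days):
        for s in services:
            demand = sum(random.random() < rate[s] / TOTAL_BEDS
                         for _ in range(TOTAL_BEDS))
            misplaced += max(0, demand - alloc[s])
    return misplaced

# Candidate allocations would come from the MIP; here we sample feasible
# three-way splits of the beds and keep the best under the simulation metric.
best = None
for _ in range(50):
    cut1, cut2 = sorted(random.sample(range(1, TOTAL_BEDS), 2))
    alloc = dict(zip(services, (cut1, cut2 - cut1, TOTAL_BEDS - cut2)))
    score = simulate(alloc)
    if best is None or score < best[0]:
        best = (score, alloc)

print("best allocation found:", best[1], "misplacements:", best[0])
```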
208

Application of robust statistics to asset allocation models

Zhou, Xinfeng January 2006 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2006. / Includes bibliographical references (p. 105-107). / Many strategies for asset allocation involve the computation of expected returns and the covariance or correlation matrix of financial instrument returns. How much of each instrument to own is determined by an attempt to minimize risk (the variance of linear combinations of investments in these financial assets) subject to various constraints such as a given level of return, concentration limits, etc. The expected returns and the covariance matrix contain many parameters to estimate, and two main problems arise. First, the data will very likely have outliers that will seriously affect the covariance matrix. Second, with so many parameters to estimate, a large number of observations are required, and the nature of markets may change substantially over such a long period. In this thesis we use robust covariance procedures, such as FAST-MCD, quadrant-correlation-based covariance and 2D-Huber-based covariance, to address the first problem, and regularization (Bayesian) methods that fully utilize the market weights of all assets for the second. High-breakdown, affine-equivariant robust methods are effective, but tend to be costly when cross-validation is required to determine regularization parameters. / (cont.) We, therefore, also consider non-affine-invariant robust covariance estimation. When back-tested on market data, these methods appear to be effective in improving portfolio performance. In conclusion, robust asset allocation methods have great potential to improve risk-adjusted portfolio returns and therefore deserve further exploration in investment management research. / by Xinfeng Zhou. / S.M.
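FAST-MCD is available off the shelf (scikit-learn's MinCovDet), so a minimal sketch of the outlier-robustness comparison is possible with real APIs; the synthetic returns, outlier pattern, and minimum-variance weighting rule below are assumptions for illustration.

```python
import numpy as np
from sklearn.covariance import EmpiricalCovariance, MinCovDet

rng = np.random.default_rng(0)
# Synthetic daily returns for 4 correlated assets, with gross outliers injected.
R = rng.multivariate_normal(np.zeros(4), 0.0001 * (np.eye(4) + 0.3), size=500)
R[::97] += 0.05 * rng.standard_normal((R[::97].shape[0], 4))

def min_var_weights(cov):
    """Fully invested minimum-variance portfolio: w proportional to cov^{-1} 1."""
    w = np.linalg.solve(cov, np.ones(len(cov)))
    return w / w.sum()

cov_sample = EmpiricalCovariance().fit(R).covariance_
cov_robust = MinCovDet(random_state=0).fit(R).covariance_  # FAST-MCD estimate

print("sample-cov weights:", np.round(min_var_weights(cov_sample), 3))
print("MCD weights:       ", np.round(min_var_weights(cov_robust), 3))
```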
209

Algorithms for large-scale personalization

Li, Andrew A. (Andrew Andi) January 2018 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 191-205). / The term personalization typically refers to the activity of online recommender systems, and while product and content personalization is now ubiquitous in e-commerce, systems today remain relatively primitive: they are built on a small fraction of available data, run with heuristic algorithms, and restricted to e-commerce applications. This thesis addresses key challenges and new applications for modern, large-scale personalization. In particular, this thesis is outlined as follows: First, we formulate a generic, flexible framework for learning from matrix-valued data, including the kinds of data commonly collected in e-commerce. Underlying this framework is a classic de-noising problem called tensor recovery, for which we provide an efficient algorithm, called Slice Learning, that is practical for massive datasets. Further, we establish near-optimal recovery guarantees that represent an order improvement over the best available results for this problem. Experimental results from a music recommendation platform are shown. Second, we apply this de-noising framework to new applications in precision medicine where data are routinely complex and in high dimensions. We describe a simple, accurate proteomic blood test (a 'liquid biopsy') for cancer detection that relies on de-noising via the Slice Learning algorithm. Experiments on plasma from healthy patients that were later diagnosed with cancer demonstrate that our test achieves diagnostically significant sensitivities and specificities for many types of cancers in their earliest stages. Third, we present an efficient, principled approach to operationalizing recommendations, i.e. the decision of exactly what items to recommend. Motivated by settings such as online advertising where the space of items is massive and recommendations must be made in milliseconds, we propose an algorithm that simultaneously achieves two important properties: (1) sublinear runtime and (2) a constant-factor guarantee under a wide class of choice models. Our algorithm relies on a new sublinear time sampling scheme, which we develop to solve a class of problems that subsumes the classic nearest neighbor problem. Results from a massive online content recommendation firm are given. Fourth, we address the problem of cost-effectively executing a broad class of computations on commercial cloud computing platforms, including the computations typically done in personalization. We formulate this as a resource allocation problem and introduce a new approach to modeling uncertainty - the Data-Driven Prophet Model - that treads the line between stochastic and adversarial modeling, and is amenable to the common situation where stochastic modeling is challenging, despite the availability of copious historical data. We propose a simple, scalable algorithm that is shown to be order-optimal in this setting. Results from experiments on a commercial cloud platform are shown. / by Andrew A. Li. / Ph. D.
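As a rough stand-in for the de-noising step, the sketch below truncates each slice of a synthetic low-rank-plus-noise tensor to rank r via its SVD. This is generic low-rank de-noising, not the Slice Learning algorithm itself (which shares structure across slices), and all dimensions and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "users x items x context" tensor: rank-r CP signal plus noise.
n, m, k, r = 50, 40, 5, 3
U = rng.standard_normal((n, r))
V = rng.standard_normal((m, r))
W = rng.standard_normal((k, r))
signal = np.einsum("ir,jr,kr->ijk", U, V, W)
noisy = signal + 0.5 * rng.standard_normal((n, m, k))

# De-noise each frontal slice independently by truncated SVD.
denoised = np.empty_like(noisy)
for s in range(k):
    u, sv, vt = np.linalg.svd(noisy[:, :, s], full_matrices=False)
    denoised[:, :, s] = (u[:, :r] * sv[:r]) @ vt[:r]

err = lambda X: np.linalg.norm(X - signal) / np.linalg.norm(signal)
print(f"relative error: noisy {err(noisy):.3f} -> de-noised {err(denoised):.3f}")
```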
210

Regulating exploration in multi-armed bandit problems with time patterns and dying arms

Tracà, Stefano January 2018 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 65-70). / In retail, there are predictable yet dramatic time-dependent patterns in customer behavior, such as periodic changes in the number of visitors, or increases in customers just before major holidays. The standard paradigm of multi-armed bandit analysis does not take these known patterns into account. This means that for applications in retail, where prices are fixed for periods of time, current bandit algorithms will not suffice. This work provides a framework and methods that take the time-dependent patterns into account. In the corrected methods, exploitation (greed) is regulated over time, so that more exploitation occurs during higher-reward periods, and more exploration occurs in periods of low reward. In order to understand why regret is reduced with the corrected methods, a set of bounds on the expected regret is presented, and insights into why we would want to exploit during periods of high reward are discussed. When the set of available options changes over time, mortal bandit algorithms have proven to be extremely useful in a number of settings, for example, for providing news article recommendations, or running automated online advertising campaigns. Previous work on this problem showed how to regulate exploration of new arms when they have recently appeared, but it does not adapt when arms are about to disappear. Since in most applications we can determine either exactly or approximately when arms will disappear, we can leverage this information to improve performance: we should not be exploring arms that are about to disappear. For this framework as well, adaptations of algorithms and regret bounds are provided. The proposed methods perform well in experiments, and were inspired by a high-scoring entry in the Exploration and Exploitation 3 contest using data from Yahoo! Front Page. That entry heavily used time-series methods to regulate greed over time, which was substantially more effective than other contextual bandit methods. / by Stefano Tracà. / Ph. D.
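A sketch of the central idea, assuming a known periodic reward multiplier and an ε-greedy base algorithm (the thesis's methods and regret bounds are more refined): exploration is throttled in high-reward periods and expanded in low-reward ones. The arm means, period shape, and constants are all invented.

```python
import random

random.seed(0)
K, T = 3, 5000
true_means = [0.3, 0.5, 0.6]
# Known periodic reward multiplier, e.g. weekday vs. holiday traffic (assumed).
multiplier = lambda t: 2.0 if (t // 500) % 2 else 1.0

counts, values = [0] * K, [0.0] * K
regret = 0.0
for t in range(T):
    m = multiplier(t)
    eps = 0.2 / m                      # explore less when the period pays more
    if random.random() < eps or 0 in counts:
        arm = random.randrange(K)      # explore
    else:
        arm = max(range(K), key=lambda a: values[a])  # exploit
    reward = m * (random.random() < true_means[arm])
    counts[arm] += 1
    values[arm] += (reward / m - values[arm]) / counts[arm]  # de-seasonalized mean
    regret += m * (max(true_means) - true_means[arm])

print(f"cumulative regret with time-regulated greed: {regret:.1f}")
```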
