111

A robust optimization approach to finance

Pachamanova, Dessislava A. (Dessislava Angelova), 1975- January 2002 (has links)
Thesis (Ph.D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2002. / Includes bibliographical references (p. 137-141). / An important issue in real-world optimization problems is how to treat uncertain coefficients. Robust optimization is a modeling methodology that takes a deterministic view: the optimal solution is required to remain feasible for any realization of the uncertain coefficients within prescribed uncertainty sets. The focus of this thesis is on robust linear programming problems in which the uncertainty sets are polytopes. The assumption of polyhedral uncertainty leads to compact, efficiently solvable linear formulations. In the first part of the thesis, we study special types of polyhedral uncertainty sets that allow for incorporating moment information about the distribution of the uncertain coefficients, and for controlling the tradeoff between robustness and optimality. We provide probabilistic guarantees on the feasibility of optimal solutions obtained with such uncertainty sets for any realization of the uncertain coefficients. We then illustrate the versatility of robust polyhedral formulations by studying three financial applications: single period portfolio optimization, multiperiod portfolio management, and credit risk estimation. In the area of single period portfolio optimization, we propose ways of modeling inaccuracy in parameter estimates, and explore the benefits of robust optimal strategies through computational experiments with the statistical estimation of a particular measure of portfolio risk - sample shortfall. We emphasize the advantages of linear, as opposed to nonlinear, robust formulations in large portfolio problems with integrality constraints. / (cont.) 
In the area of multiperiod portfolio management, we propose robust polyhedral formulations that use some minimal information about long-term direction of movement of asset returns to make informed decisions about portfolio rebalancing over the short term. The suggested formulations allow for including considerations of transaction costs and taxes while keeping the dimension of the problem low. In the area of credit risk estimation, we propose a model for estimating the survival probability distribution and the fair prices of credit risky bonds from market prices of similar credit risky securities. We address the issue of uncertainty in key parameters of the model, such as discount factors, by using robust optimization modeling. We also suggest a method for classification of credit risky bonds based on integer programming techniques. / by Dessislava A. Pachamanova. / Ph.D.
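The interval (box) special case of the polyhedral uncertainty described above fits in a few lines. This is a hedged toy sketch, not the thesis's formulations: the nominal returns and deviation half-widths are invented numbers, and the point is only that the robust counterpart of a linear constraint with box uncertainty is itself a linear program.

```python
# Toy sketch (not the thesis's formulation): a single-period portfolio
# problem where asset returns are only known to lie in intervals
# [r_nom - delta, r_nom + delta].  For long-only weights the worst case
# is attained at the lower endpoints, so the robust counterpart is a
# plain LP, illustrating why interval/polyhedral uncertainty keeps the
# problem linear.
import numpy as np
from scipy.optimize import linprog

r_nom = np.array([0.10, 0.05])   # nominal (estimated) returns, invented
delta = np.array([0.08, 0.01])   # estimation-error half-widths, invented

# maximize worst-case return (r_nom - delta) @ x  s.t.  sum(x) = 1, x >= 0
res = linprog(c=-(r_nom - delta),            # linprog minimizes
              A_eq=[[1.0, 1.0]], b_eq=[1.0],
              bounds=[(0.0, 1.0)] * 2)

print(res.x)        # robust weights favor the asset with less uncertainty
print(-res.fun)     # guaranteed worst-case return
```

Because the worst case of a linear function over a box is attained at a vertex, robustness here costs nothing in tractability; that is the property the thesis exploits with general polytopes.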
112

Potential benefits of information sharing during the arrival process at hub airports

Andersson, Kari January 2000 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2000. / Includes bibliographical references (p. 81-82). / This thesis explores the benefits of increasing communication and collaboration between airlines and air traffic controllers during the arrival process at hub airports. In particular, this study estimates operational improvements, measured in passenger minutes of delay saved, from providing airlines with more accurate landing time estimates and from allowing airlines to influence the sequence of incoming traffic. To estimate these potential benefits, the Airline Sequencing model was developed to simulate airline decisions regarding ground operations. The model has been calibrated and validated to reflect airline decision making. Scenario analyses were conducted using the model to estimate the delay savings that an airline could realize through the use of more accurate landing time estimates as well as through the potential ability to influence the sequence of arriving traffic. The current results indicate that increased communication and collaboration could significantly decrease delays. Over a time horizon of 3.25 hours, a decrease in the standard deviation in landing time estimate errors of two minutes could prevent between 500 and 2000 passenger minutes of delay. Allowing an airline to shift each arriving aircraft's landing time up to 5 minutes can save between 4000 and 7000 passenger minutes of delay over a time period of 3.25 hours. The potential savings depend on the delay conditions of the time horizon considered in the model. These results indicate that the potential reduction in delay could be significant. Further investigation into the feasibility of the communication and collaboration suggested in this thesis is warranted. / by Kari Andersson. / S.M.
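As a hedged illustration of why sequencing authority matters (this is not the Airline Sequencing model from the thesis, and the flight data are invented), total passenger-minutes of delay can be viewed as a weighted sum of completion times, and resequencing by the classical ratio rule can only help:

```python
# Toy illustration: if each arrival occupies the runway for `sep` minutes
# and carries `pax` passengers, total passenger-minutes of delay is a
# weighted sum of completion times.  Ordering by sep/pax (Smith's rule
# for weighted completion time) minimizes that sum.
def passenger_minutes(order):
    t, total = 0, 0
    for sep, pax in order:
        t += sep          # this flight lands t minutes after the first slot
        total += t * pax  # every passenger on board absorbs t minutes
    return total

flights = [(3, 50), (2, 200), (4, 120)]   # (runway minutes, passengers), invented
fcfs = passenger_minutes(flights)
smith = passenger_minutes(sorted(flights, key=lambda f: f[0] / f[1]))
print(fcfs, smith)   # the resequenced order never does worse
```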
113

Optimal routes for electric vehicles facing uncertainty, congestion, and energy constraints

Fontana, Matthew William January 2013 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2013. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 165-170). / There are many benefits of owning a battery electric vehicle, including zero tailpipe emissions, potential independence from oil, lower fuel costs, and the option to recharge the battery at home. However, a significant concern about owning a battery electric vehicle is range anxiety: the fear that the battery will run out of charge before the driver reaches his or her destination. We address range anxiety by providing a robust optimization framework to give drivers confidence that they can reach their destinations in a reasonable amount of time with enough energy in the battery, even when there is uncertainty in travel time and energy consumption on the roads. The robust optimization appropriately incorporates uncertainty without significantly increasing the complexity of the problem. This thesis describes the optimization framework and shows how to use it on real-world examples to find appropriate routes; the application of robust optimization to the problem is central throughout. We develop an energy model, an optimization-based formulation using robust optimization, and algorithms to quickly find good routes for battery electric vehicles. The combination of using robust optimization, the A-Star algorithm to find shortest paths, and Lagrangian relaxation allows us to solve the problem in seconds or less. For one example start and destination, our algorithms required less than 2 seconds for each instance (energy consumption limit). In addition, for example trips, we compute a Pareto frontier to illustrate the time-energy tradeoff from driving different routes.
We use Lagrangian relaxation to provide lower bounds and estimates that suggest that our algorithms produce near-optimal solutions. We apply our methodology to example trips in Massachusetts and Michigan to demonstrate its practicality and its potential for real-world use. Future work could continue to improve the modeling accuracy and include algorithmic enhancements to further improve running time, especially for larger networks. / by Matthew William Fontana. / Ph.D.
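A minimal sketch of the Lagrangian-relaxation idea, under stated assumptions: plain Dijkstra stands in for A*, the network and its (minutes, kWh) arc labels are invented, and a coarse grid over the multiplier replaces the thesis's bound-driven search.

```python
# Hedged sketch of the core mechanism only: fold the energy constraint
# into the arc costs with a multiplier lam (Lagrangian relaxation), solve
# an ordinary shortest-path problem for each lam, and search over lam.
import heapq

def dijkstra(graph, src, dst, lam):
    # arc weight = time + lam * energy
    dist = {src: (0.0, 0.0, 0.0)}  # node -> (penalized cost, time, energy)
    pq = [(0.0, 0.0, 0.0, src)]
    while pq:
        d, t, e, u = heapq.heappop(pq)
        if u == dst:
            return t, e
        if d > dist.get(u, (float("inf"),))[0]:
            continue
        for v, (tt, ee) in graph[u].items():
            nd = d + tt + lam * ee
            if nd < dist.get(v, (float("inf"),))[0]:
                dist[v] = (nd, t + tt, e + ee)
                heapq.heappush(pq, (nd, t + tt, e + ee, v))

# invented toy network: arcs labeled (minutes, kWh)
graph = {
    "A": {"B": (10, 5.0), "C": (25, 2.0)},
    "B": {"D": (10, 5.0)},
    "C": {"D": (25, 2.0)},
    "D": {},
}
budget = 6.0          # battery energy available, kWh (invented)
best = None
for lam in [0.0, 1.0, 2.0, 4.0, 8.0]:   # coarse search over the multiplier
    time, energy = dijkstra(graph, "A", "D", lam)
    if energy <= budget and (best is None or time < best[0]):
        best = (time, energy)
print(best)   # fastest route found that respects the energy budget
```

Each value of the multiplier trades time against energy; the thesis additionally makes the arc costs robust to uncertainty and uses the relaxation to certify near-optimality.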
114

Data-driven models for uncertainty and behavior

Gupta, Vishal, Ph. D. Massachusetts Institute of Technology January 2014 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2014. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 173-180). / The last decade has seen an explosion in the availability of data. In this thesis, we propose new techniques to leverage these data to tractably model uncertainty and behavior. Specifically, this thesis consists of three parts: In the first part, we propose a novel schema for utilizing data to design uncertainty sets for robust optimization using hypothesis testing. The approach is flexible and widely applicable, and robust optimization problems built from our new data-driven sets are computationally tractable, both theoretically and practically. Optimal solutions to these problems enjoy a strong, finite-sample probabilistic guarantee. Computational evidence from classical applications of robust optimization (queueing and portfolio management) confirms that our new data-driven sets significantly outperform traditional robust optimization techniques whenever data is available. In the second part, we examine in detail an application of the above technique to the unit commitment problem. Unit commitment is a large-scale, multistage optimization problem under uncertainty that is critical to power system operations. Using real data from the New England market, we illustrate how our proposed data-driven uncertainty sets can be used to build high-fidelity models of the demand for electricity, and that the resulting large-scale, mixed-integer adaptive optimization problems can be solved efficiently. With respect to this second contribution, we propose new data-driven solution techniques for this class of problems inspired by ideas from machine learning.
Extensive historical back-testing confirms that our proposed approach generates high-quality solutions that compare well with state-of-the-art methods. In the third part, we focus on behavioral modeling. Utility maximization (single agent case) and equilibrium modeling (multi-agent case) are by far the most common behavioral models in operations research. By combining ideas from inverse optimization with the theory of variational inequalities, we develop an efficient, data-driven technique for estimating the primitives of these models. Our approach supports both parametric and nonparametric estimation through kernel learning. We prove that our estimators enjoy a strong generalization guarantee even when the model is misspecified. Finally, we present computational evidence from applications in economics and transportation science illustrating the effectiveness of our approach and its scalability to large-scale instances. / by Vishal Gupta. / Ph. D.
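A simplified stand-in for the construction above: the thesis derives its sets from formal hypothesis tests with finite-sample guarantees, whereas the quantile box below is only illustrative plumbing, with invented sample data.

```python
# Simplified stand-in (not the thesis's construction): build a
# per-coordinate box uncertainty set from empirical quantiles of the
# data, then evaluate the worst case of a linear cost over that box.
# The real data-driven sets are tighter and carry guarantees.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=[1.0, 2.0], scale=[0.5, 1.5], size=(500, 2))  # invented samples

lo = np.quantile(data, 0.05, axis=0)   # lower edge of the box, per coordinate
hi = np.quantile(data, 0.95, axis=0)   # upper edge

x = np.array([0.6, 0.4])               # a fixed decision, for illustration
# worst case of u @ x over the box: pick hi where x >= 0, lo where x < 0
worst = np.where(x >= 0, hi, lo) @ x
print(lo, hi, worst)
```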
115

Vignettes on robust combinatorial optimization

Udwani, Rajan January 2018 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 137-142). / In this thesis, we design and analyze algorithms for robust combinatorial optimization in various settings. First, we consider the problem of simultaneously maximizing multiple objectives, all monotone submodular, subject to a cardinality constraint. We focus on the case where the number of objectives is super-constant yet much smaller than the cardinality of the chosen set. We propose several algorithms (including one with the best achievable asymptotic guarantee for the problem). Experiments on synthetic data show that a heuristic based on our more practical and fast algorithm outperforms current practical algorithms in all cases considered. Next, we study the problem of robust maximization of a single monotone submodular function in scenarios where after choosing a feasible set of elements, some elements from the chosen set are adversarially removed. Under some restriction on the number of elements that can be removed, we give the first constant factor approximation algorithms as well as the best possible asymptotic approximation in certain cases. We also give a black box result for the much more general setting of deletion-robust maximization subject to an independence system. Lastly, we consider a robust appointment scheduling problem where the goal is to design simple appointment systems that try to achieve both high server utilization as well as short waiting times, under uncertainty in job processing times. When the order of jobs is fixed and one seeks to find optimal appointment duration for jobs, we give a simple heuristic that achieves the first constant factor (2) approximation. 
We also give closed form optimal solutions in various special cases that supersede previous work. For the setting where order of jobs is also flexible and under-utilization costs are homogeneous, it was previously shown that an EPTAS exists. We instead focus on simple and practical heuristics, and find a ratio based ordering which is 1.0604 approximate, improving on previous results for similarly practical heuristics. / by Rajan Udwani. / Ph. D.
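For context, the classical greedy rule for a single monotone submodular objective under a cardinality constraint, which the robust variants above build on, can be sketched on a toy coverage instance (not from the thesis):

```python
# Background building block, not the thesis's algorithms: the classical
# greedy rule for maximizing a monotone submodular function under a
# cardinality constraint, shown on a coverage function, where greedy is
# (1 - 1/e)-approximate.  The robust variants in the thesis defend this
# kind of value against adversarial deletions.
def coverage(sets_chosen, universe_sets):
    covered = set()
    for s in sets_chosen:
        covered |= universe_sets[s]
    return len(covered)

def greedy(universe_sets, k):
    chosen = []
    for _ in range(k):
        base = coverage(chosen, universe_sets)
        # pick the element with the largest marginal coverage gain
        gains = {s: coverage(chosen + [s], universe_sets) - base
                 for s in universe_sets if s not in chosen}
        chosen.append(max(gains, key=gains.get))
    return chosen

sets_ = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6, 7}, "d": {1, 7}}
picked = greedy(sets_, k=2)
print(picked, coverage(picked, sets_))
```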
116

Robust planning for unmanned underwater vehicles / Robust planning for UUVs

Frost, Emily Anne January 2013 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2013. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 59-60). / In this thesis, I design and implement a novel method of schedule and path selection between predetermined waypoints for unmanned underwater vehicles under uncertainty. The problem is first formulated as a mixed-integer optimization model and subsequently uncertainty is addressed using a robust optimization approach. Solutions were tested through simulation and computational results are presented which indicate that the robust approach handles larger problems than could previously be solved in a reasonable running time while preserving a high level of robustness. This thesis demonstrates that the robust methods presented can solve realistic-sized problems in reasonable runtimes - a median of ten minutes and a mean of thirty minutes for 32 tasks - and that the methods perform well both in terms of expected reward and robustness to disturbances in the environment. The latter two results are obtained by simulating solutions given by the deterministic method, a naive robust method, and finally the two restricted affine robust policies. The two restricted affine policies consistently show an expected reward of nearly 100%, while the deterministic and naive robust methods achieve approximately 50% of maximum reward possible. / by Emily Anne Frost. / S.M.
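A brute-force miniature of the selection problem, purely illustrative: the task rewards and worst-case durations are invented, and the thesis's mixed-integer model and affine robust policies are replaced by subset enumeration, which only works at toy scale.

```python
# Toy stand-in for the thesis's mixed-integer robust model: choose a
# subset of tasks for the vehicle, requiring the schedule to fit the
# mission time budget even when every task runs at its worst-case
# duration.  Brute force over subsets suffices at this size.
from itertools import combinations

tasks = [(10, 45), (7, 25), (6, 30), (5, 20)]  # (reward, worst-case minutes), invented
budget = 80                                    # mission time budget, minutes

best_value, best_set = 0, ()
for r in range(len(tasks) + 1):
    for subset in combinations(range(len(tasks)), r):
        worst_time = sum(tasks[i][1] for i in subset)
        value = sum(tasks[i][0] for i in subset)
        if worst_time <= budget and value > best_value:
            best_value, best_set = value, subset
print(best_set, best_value)
```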
117

Stochastic models and data driven simulations for healthcare operations

Anderson, Ross Michael January 2014 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2014. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 251-257). / This thesis considers problems in two areas of healthcare operations: Kidney Paired Donation (KPD) and scheduling medical residents in hospitals. In both areas, we explore the implications of policy change through high fidelity simulations. We then build stochastic models to provide strategic insight into how policy decisions affect the operations of these healthcare systems. KPD programs enable patients with living but incompatible donors (referred to as patient-donor pairs) to exchange kidneys with other such pairs in a centrally organized clearing house. Exchanges involving two or more pairs are performed by arranging the pairs in a cycle, where the donor from each pair gives to the patient from the next pair. Alternatively, a so-called altruistic donor can be used to initiate a chain of transplants through many pairs, ending on a patient without a willing donor. In recent years, the use of chains has become pervasive in KPD, with chains now accounting for the majority of KPD transplants performed in the United States. A major focus of our work is to understand why long chains have become the dominant method of exchange in KPD, and how to best integrate their use into exchange programs. In particular, we are interested in policies that KPD programs use to determine which exchanges to perform, which we refer to as matching policies. First, we devise a new algorithm using integer programming to maximize the number of transplants performed on a fixed pool of patients, demonstrating that matching policies which must solve this problem are implementable.
Second, we evaluate the long run implications of various matching policies, both through high fidelity simulations and analytic models. Most importantly, we find that: (1) using long chains results in more transplants and reduced waiting time, and (2) the policy of maximizing the number of transplants performed each day is as good as any batching policy. Our theoretical results are based on introducing a novel model of a dynamically evolving random graph. The analysis of this model uses classical techniques from Erdos-Renyi random graph theory as well as tools from queueing theory including Lyapunov functions and Little's Law. In the second half of this thesis, we consider the problem of how hospitals should design schedules for their medical residents. These schedules must have capacity to treat all incoming patients, provide quality care, and comply with regulations restricting shift lengths. In 2011, the Accreditation Council for Graduate Medical Education (ACGME) instituted a new set of regulations on duty hours that restrict shift lengths for medical residents. We consider two operational questions for hospitals in light of these new regulations: will there be sufficient staff to admit all incoming patients, and how will the continuity of patient care be affected, particularly on the first day of a patient's hospital stay, when such continuity is critical? To address these questions, we built a discrete event simulation tool using historical data from a major academic hospital, and compared several policies relying on both long and short shifts. The simulation tool was used to inform staffing level decisions at the hospital, which was transitioning away from long shifts. Use of the tool led to the following strategic insights. We found that schedules based on shorter, more frequent shifts actually led to a larger admitting capacity.
At the same time, such schedules generally reduce the continuity of care by most metrics when the departments operate at normal loads. However, in departments which operate at the critical capacity regime, we found that even the continuity of care improved in some metrics for schedules based on shorter shifts, due to a reduction in the use of overtime doctors. We develop an analytically tractable queueing model to capture these insights. The analysis of this model requires analyzing the steady-state behavior of the fluid limit of a queueing system, and proving a so called "interchange of limits" result. / by Ross Michael Anderson. / Ph. D.
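A miniature of the matching problem described above, with an invented compatibility graph; the thesis solves realistic pools with integer programming, while here both cycle enumeration and cycle selection are brute force (and chains are omitted):

```python
# Miniature KPD matching: enumerate 2- and 3-cycles in a directed
# compatibility graph of patient-donor pairs, then pick a vertex-disjoint
# set of cycles maximizing transplants performed.
from itertools import combinations, permutations

# arcs[i] = pairs whose patient can receive pair i's donor kidney (invented)
arcs = {0: {1}, 1: {0, 2}, 2: {3}, 3: {1}}
nodes = list(arcs)

cycles = []
for a, b in combinations(nodes, 2):              # 2-cycles
    if b in arcs[a] and a in arcs[b]:
        cycles.append((a, b))
for combo in combinations(nodes, 3):             # 3-cycles
    for a, b, c in permutations(combo):
        if b in arcs[a] and c in arcs[b] and a in arcs[c]:
            cycles.append((a, b, c))
            break

best, best_count = (), 0
for r in range(len(cycles) + 1):
    for chosen in combinations(cycles, r):
        used = [v for cyc in chosen for v in cyc]
        if len(used) == len(set(used)) and len(used) > best_count:
            best, best_count = chosen, len(used)
print(cycles, best, best_count)
```

At realistic scale the number of cycles explodes, which is exactly why the thesis formulates cycle selection as an integer program.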
118

Multiserver queueing systems in heavy traffic

Eschenfeldt, Patrick Clark January 2017 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2017. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 107-109). / In the study of queueing systems, a question of significant current interest is that of large scale behavior, where the size of the system increases without bound. This regime has become increasingly relevant with the rise of massive distributed systems like server farms, call centers, and health care management systems. To minimize underutilization of resources, the specific large scale regime of most interest is one in which the work to be done increases as processing capability increases. In this thesis, we characterize the behavior of two such large scale queueing systems. In the first part of the thesis we consider a Join the Shortest Queue (JSQ) policy in the so-called Halfin-Whitt heavy traffic regime. We establish that a scaled process counting the number of idle servers and queues of length two weakly converges to a two-dimensional reflected Ornstein-Uhlenbeck process, while processes counting longer queues converge to a deterministic system decaying to zero in constant time. This limiting system is similar to that of the traditional Halfin-Whitt model in its basic performance measures, but there are key differences in the queueing behavior of the JSQ model. In particular, only a vanishing fraction of customers will have to wait, but those who do will incur a constant order waiting time. In the second part of the thesis we consider a widely studied so-called "supermarket model" in which arriving customers join the shortest of d randomly selected queues.
Assuming Poisson arrivals at rate nλ_n and exponentially distributed service times with rate 1, our heavy traffic regime is described by λ_n → 1 as n → ∞. We give a simple expectation argument establishing that queues have steady state length at least i* = log_d(1/(1 − λ_n)) with probability approaching one as n → ∞. Our main result for this system concerns the detailed behavior of queues with length smaller than i*. Assuming λ_n converges to 1 at rate at most √n, we show that the dynamics of such queues do not follow a diffusion process, as is typical for queueing systems in heavy traffic, but are described instead by a deterministic infinite system of linear differential equations, after an appropriate rescaling. / by Patrick Clark Eschenfeldt. / Ph. D.
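A quick simulation of the supermarket model's embedded jump chain, a hedged sketch with arbitrary parameters rather than the thesis's asymptotic analysis: each arrival samples d queues at random and joins the shortest, and even d = 2 keeps the longest queue short.

```python
# Simulate the embedded jump chain of the supermarket model CTMC:
# arrivals occur at total rate n*lam, each busy server completes work at
# rate 1, so the next event is an arrival with probability
# n*lam / (n*lam + busy).  Parameters are arbitrary illustrative choices.
import random

def simulate(n=50, lam=0.9, d=2, events=50_000, seed=1):
    random.seed(seed)
    queues = [0] * n
    for _ in range(events):
        busy = sum(1 for q in queues if q > 0)
        if random.random() < n * lam / (n * lam + busy):   # arrival
            best = min(random.sample(range(n), d), key=queues.__getitem__)
            queues[best] += 1
        elif busy:                                         # departure
            j = random.choice([i for i, q in enumerate(queues) if q > 0])
            queues[j] -= 1
    return queues

qs = simulate()
print(max(qs))   # with d = 2 the queue-length tail collapses
```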
119

Advances in robust and adaptive optimization : algorithms, software, and insights / Advances in RO and AO : algorithms, software, and insights

Dunning, Iain Robert January 2016 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2016. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 215-220). / Optimization in the presence of uncertainty is at the heart of operations research. There are many approaches to modeling the nature of this uncertainty, but this thesis focuses on developing new algorithms, software, and insights for an approach that has risen in popularity over the last 15 years: robust optimization (RO), and its extension to decision making across time, adaptive optimization (AO). In the first chapter, we perform a computational study of two approaches for solving RO problems: "reformulation" and "cutting planes". Our results provide useful evidence for what types of problems each method excels in. In the second chapter, we present and analyze a new algorithm for multistage AO problems with both integer and continuous recourse decisions. The algorithm operates by iteratively partitioning the problem's uncertainty set, using the approximate solution at each iteration. We show that it quickly produces high-quality solutions. In the third chapter, we propose an AO approach to a general version of the process flexibility design problem, whereby we must decide which factories produce which products. We demonstrate significant savings for the price of flexibility versus simple but popular designs in the literature. In the fourth chapter, we describe computationally practical methods for solving problems with "relative" RO objective functions. We use combinations of absolute and relative worst-case objective functions to find "Pareto-efficient" solutions that combine aspects of both. 
We demonstrate through three in-depth case studies that these solutions are intuitive and perform well in simulation. In the fifth chapter, we describe JuMPeR, a software package for modeling RO and AO problems that builds on the JuMP modeling language. It supports many features including automatic reformulation, cutting plane generation, linear decision rules, and general data-driven uncertainty sets. / by Iain Robert Dunning. / Ph. D.
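The "cutting planes" approach compared in the first chapter can be sketched on a hypothetical toy instance (box uncertainty around a single constraint row; all numbers are invented):

```python
# Cutting-plane sketch for a robust LP: maximize c @ x subject to
# a @ x <= b for every a in a box around a_nom.  Start from the nominal
# constraint, then repeatedly add the constraint for the worst-case a at
# the current solution until nothing is violated.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 1.0])
a_nom = np.array([1.0, 1.0])
delta = np.array([0.25, 0.5])      # box half-widths around a_nom (invented)
b = 1.0

rows = [a_nom.copy()]
for _ in range(10):
    res = linprog(-c, A_ub=np.array(rows), b_ub=[b] * len(rows),
                  bounds=[(0, None)] * 2)
    x = res.x
    a_worst = a_nom + delta * np.sign(x)   # maximizes a @ x over the box
    if a_worst @ x <= b + 1e-9:
        break                              # robust feasible, so optimal
    rows.append(a_worst)
print(x)
```

The loop terminates at the same solution the one-shot "reformulation" (a_nom + delta) @ x <= b would give; the chapter's computational study is about when each route is faster.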
120

Integer optimization methods for machine learning

Chang, Allison An January 2012 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2012. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (p. 129-137). / In this thesis, we propose new mixed integer optimization (MIO) methods to address problems in machine learning. The first part develops methods for supervised bipartite ranking, which arises in prioritization tasks in diverse domains such as information retrieval, recommender systems, natural language processing, bioinformatics, and preventative maintenance. The primary advantage of using MIO for ranking is that it allows for direct optimization of ranking quality measures, as opposed to current state-of-the-art algorithms that use heuristic loss functions. We demonstrate using a number of datasets that our approach can outperform other ranking methods. The second part of the thesis focuses on reverse-engineering ranking models. This is an application of a more general ranking problem than the bipartite case. Quality rankings affect business for many organizations, and knowing the ranking models would allow these organizations to better understand the standards by which their products are judged and help them to create higher quality products. We introduce an MIO method for reverse-engineering such models and demonstrate its performance in a case-study with real data from a major ratings company. We also devise an approach to find the most cost-effective way to increase the rank of a certain product. In the final part of the thesis, we develop MIO methods to first generate association rules and then use the rules to build an interpretable classifier in the form of a decision list, which is an ordered list of rules.
These are both combinatorially challenging problems because even a small dataset may yield a large number of rules and a small set of rules may correspond to many different orderings. We show how to use MIO to mine useful rules, as well as to construct a classifier from them. We present results in terms of both classification accuracy and interpretability for a variety of datasets. / by Allison An Chang. / Ph.D.
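To make "decision list" concrete: the sketch below mines single-item rules from an invented toy dataset and orders them by confidence, a common heuristic baseline rather than the thesis's MIO-based rule selection and ordering.

```python
# Toy decision list: mine single-item rules "item -> label" with a
# confidence threshold, order them by confidence, and classify by the
# first rule that fires, falling back to the majority class.
rows = [({"a", "b"}, 1), ({"a"}, 1), ({"b", "c"}, 0), ({"c"}, 0),
        ({"a", "c"}, 1), ({"b"}, 0), ({"b", "c"}, 0)]   # invented data

rules = []
for item in {"a", "b", "c"}:
    covered = [lab for feats, lab in rows if item in feats]
    for label in (0, 1):
        conf = covered.count(label) / len(covered)
        if conf >= 0.6:
            rules.append((conf, item, label))

decision_list = sorted(rules, reverse=True)      # most confident rule first
labels = [lab for _, lab in rows]
default = max(set(labels), key=labels.count)     # majority class fallback

def classify(feats):
    for conf, item, label in decision_list:
        if item in feats:
            return label
    return default

print(decision_list)
print(classify({"a", "c"}), classify({"c"}))
```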
