  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
191

Delay characterization and prediction in major U.S. airline networks

Hanley, Zebulon James January 2015 (has links)
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2015. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 101-102). / This thesis expands on models that predict delays within the National Airspace System (NAS) in the United States. We propose a new method to predict the expected behavior of the NAS throughout the course of an entire day after only a few flying hours have elapsed. We do so by using k-means clustering to classify the daily NAS behavior into a small set of most commonly seen snapshots. We then use random forests to map the delay behavior experienced early in a day to the most similar NAS snapshot, from which we make our type-of-day prediction for the NAS. By noon EST, we are able to predict the NAS type-of-day with 85% accuracy. We then incorporate these NAS type-of-day predictions into previously proposed models to predict the delay on specific origin-destination (OD) pairs within the U.S. at a certain number of hours into the future. The predictions use local delay variables, such as the current delay on specific OD pairs and airports, as well as network-level variables such as the NAS type-of-day. These OD pair delay prediction models use random forests to make classification and regression predictions. The effects of changes in classification threshold, prediction horizon, NAS type-of-day inclusion, and using wheel off/on, actual, and scheduled gate departure and arrival times are studied. Lastly, we explore how the delay behavior of the NAS has changed over the last ten years and how well the models perform on new data. / by Zebulon James Hanley. / S.M.
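The two-stage idea in this abstract — cluster historical daily delay profiles into "type-of-day" archetypes, then assign a partially observed day to the most similar archetype — can be sketched as follows. This is an illustrative reconstruction, not the thesis's code: the toy data are invented, and nearest-centroid assignment stands in for the random-forest mapping the thesis actually uses.

```python
# Sketch: k-means over daily delay profiles, then type-of-day assignment
# for a partial (morning) profile. Data and feature layout are invented.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(group):
    n = len(group)
    return tuple(sum(p[i] for p in group) / n for i in range(len(group[0])))

def kmeans(points, k, iters=25):
    """Plain k-means with a naive first-k initialization."""
    centroids = list(points[:k])
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda j: dist2(p, centroids[j]))].append(p)
        centroids = [mean(g) if g else centroids[j]
                     for j, g in enumerate(groups)]
    return centroids

def type_of_day(partial_profile, centroids):
    """Assign a morning delay profile to the nearest archetype index."""
    return min(range(len(centroids)),
               key=lambda j: dist2(partial_profile, centroids[j]))
```

With features like (average delay at 9am, average delay at noon), low-delay and high-delay days fall into separate archetypes, and a new morning profile maps to one of them.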
192

Sparse learning : statistical and optimization perspectives

Dedieu, Antoine January 2018 (has links)
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 101-109). / In this thesis, we study the computational and statistical aspects of several sparse models when the number of samples and/or features is large. We propose new statistical estimators and build new computational algorithms - borrowing tools and techniques from areas of convex and discrete optimization. First, we explore an Lq-regularized version of the Best Subset selection procedure which mitigates the poor statistical performance of the best-subsets estimator in the low SNR regimes. The statistical and empirical properties of the estimator are explored, especially when compared to best-subsets selection, Lasso and Ridge. Second, we propose new computational algorithms for a family of penalized linear Support Vector Machine (SVM) problems with a hinge loss function and sparsity-inducing regularizations. Our methods bring together techniques from Column (and Constraint) Generation and modern First Order methods for non-smooth convex optimization. These two components complement each other's strengths, leading to improvements of 2 orders of magnitude when compared to commercial LP solvers. Third, we present a novel framework inspired by Hierarchical Bayesian modeling to predict user session-length on online streaming services. The time spent by a user on a platform depends upon user-specific latent variables which are learned via hierarchical shrinkage. Our framework incorporates flexible parametric/nonparametric models on the covariates and outperforms state-of-the-art estimators in terms of efficiency and predictive performance on real world datasets from the internet radio company Pandora Media Inc. / by Antoine Dedieu. / S.M.
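The sparse-SVM objective mentioned in this abstract — hinge loss plus a sparsity-inducing penalty — can be written down concretely. The thesis combines column generation with modern first-order methods; the sketch below is only a baseline subgradient method on the L1-penalized hinge objective, with invented data, to make the objective tangible.

```python
# Baseline subgradient descent (not the thesis's algorithm) for
#   min_w (1/n) * sum_i max(0, 1 - y_i <w, x_i>) + lam * ||w||_1

def subgradient_step(w, X, y, lam, lr):
    n, d = len(X), len(w)
    g = [0.0] * d
    for xi, yi in zip(X, y):
        margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
        if margin < 1:                      # hinge term is active
            for j in range(d):
                g[j] -= yi * xi[j] / n
    for j in range(d):                      # subgradient of lam * |w_j|
        g[j] += lam * (1 if w[j] > 0 else -1 if w[j] < 0 else 0)
    return [wj - lr * gj for wj, gj in zip(w, g)]

def train_svm(X, y, lam=0.01, lr=0.1, steps=300):
    w = [0.0] * len(X[0])
    for _ in range(steps):
        w = subgradient_step(w, X, y, lam, lr)
    return w
```

On linearly separable toy data the learned hyperplane separates the two classes; the thesis's contribution is solving such problems far faster and at much larger scale than generic LP solvers.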
193

Essays in financial engineering

Haugh, Martin B. (Martin Brendan), 1971- January 2001 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2001. / Includes bibliographical references (p. 109-115). / This thesis consists of three essays that apply techniques of operations research to problems in financial engineering. In particular, we study problems in portfolio optimization and options pricing. The first essay is motivated by the fact that derivative securities are equivalent to specific dynamic trading strategies in complete markets. This suggests the possibility of constructing buy-and-hold portfolios of options that mimic certain dynamic investment policies, e.g., asset-allocation rules. We explore this possibility by solving the following problem: given an optimal dynamic investment policy, find a set of options at the start of the investment horizon which will come closest to the optimal dynamic investment policy. We solve this problem for several combinations of preferences, return dynamics, and optimality criteria, and show that under certain conditions, a portfolio consisting of just a few European options is an excellent substitute for considerably more complex dynamic investment policies. In the second essay, we develop a method for pricing and exercising high-dimensional American options. The approach is based on approximate dynamic programming using nonlinear regression to approximate the value function. Using the approximate dynamic programming solutions, we construct upper and lower bounds on the option prices. These bounds can be evaluated by Monte Carlo simulation, and they are general enough to be used in conjunction with other approximate methods for pricing American options. / We characterize the theoretical worst-case performance of the pricing bounds and examine how they may be used for hedging and exercising the option. 
We also discuss the implications for the design of the approximate pricing algorithm and illustrate its performance on a set of sample problems where we price call options on the maximum and the geometric mean of a collection of stocks. The third essay explores the possibility of solving high-dimensional portfolio optimization problems using approximate dynamic programming. In particular, we employ approximate value iteration where the portfolio strategy at each time period is obtained using quadratic approximations to the approximate value function. We then compare the resulting solution to the best heuristic strategies available. Though the approximate dynamic programming solutions are often competitive, they are sometimes dominated by the best heuristic strategy. On such occasions we conclude that inaccuracies in the quadratic approximations are responsible for the poor performance. Finally, we compare our results to other recent work in this area and suggest possible methods for improving these algorithms. / by Martin B. Haugh. / Ph.D.
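The lower bounds in the second essay rest on a simple principle: any exercise policy, evaluated by Monte Carlo, gives a lower bound on the true American option value. A minimal sketch of that evaluation step, under assumed geometric Brownian motion dynamics and an invented intrinsic-value threshold rule (the essay derives its policies from approximate dynamic programming, not from a fixed threshold):

```python
# Monte Carlo lower bound for an American-style put: simulate GBM paths
# and exercise whenever intrinsic value reaches an assumed threshold.
import math
import random

def lower_bound_put(s0, strike, r, sigma, T, steps, n_paths, threshold, seed=0):
    rng = random.Random(seed)
    dt = T / steps
    total = 0.0
    for _ in range(n_paths):
        s = s0
        payoff = 0.0
        for k in range(1, steps + 1):
            z = rng.gauss(0.0, 1.0)
            s *= math.exp((r - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
            intrinsic = max(strike - s, 0.0)
            if intrinsic >= threshold and k < steps:
                payoff = math.exp(-r * k * dt) * intrinsic   # early exercise
                break
        else:
            payoff = math.exp(-r * T) * max(strike - s, 0.0)  # hold to expiry
        total += payoff
    return total / n_paths
```

A better policy raises this estimate toward the true price; the essay's upper bounds then bracket the price from the other side.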
194

Tractability through approximation : a study of two discrete optimization problems

Farahat, Amr, 1973- January 2004 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2004. / Includes bibliographical references. / This dissertation consists of two parts. In the first part, we address a class of weakly-coupled multi-commodity network design problems characterized by restrictions on path flows and 'soft' demand requirements. In the second part, we address the abstract problem of maximizing non-decreasing submodular functions over independence systems, which arises in a variety of applications such as combinatorial auctions and facility location. Our objective is to develop approximate solution procedures suitable for large-scale instances that provide a continuum of trade-offs between accuracy and tractability. In Part I, we review the application of Dantzig-Wolfe decomposition to mixed-integer programs. We then define a class of multi-commodity network design problems that are weakly-coupled in the flow variables. We show that this problem is NP-complete, and proceed to develop an approximation/reformulation solution approach based on Dantzig-Wolfe decomposition. We apply the ideas developed to the specific problem of airline fleet assignment with the goal of creating models that incorporate more realistic revenue functions. This yields a new formulation of the problem with a provably stronger linear programming relaxation, and we provide some empirical evidence that it performs better than other models proposed in the literature. In Part II, we investigate the performance of a family of greedy-type algorithms for the problem of maximizing submodular functions over independence systems. Building on pioneering work by Conforti, Cornuéjols, Fisher, Jenkyns, Nemhauser, Wolsey and others, we analyze a greedy algorithm that incrementally augments the current solution by adding subsets of arbitrary variable cardinality. This generalizes the standard best-in greedy algorithm, at one extreme, and complete enumeration, at the other extreme. We derive worst-case approximation guarantees on the solution produced by such an algorithm for matroids. We then define a continuous relaxation of the original problem and show that some of the derived bounds apply with respect to the relaxed problem. We also report on a new bound for independence systems. These bounds extend, and in some cases strengthen, previously known results for standard best-in greedy. / by Amr Farahat. / Ph.D.
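The "standard best-in greedy" baseline that Part II generalizes is easy to state: repeatedly add the single element with the largest marginal gain, subject to the independence system. A sketch with illustrative choices (a coverage function for f, a cardinality constraint as the independence system); the dissertation's contribution is augmenting by subsets of variable cardinality, not one element at a time.

```python
# Best-in greedy for maximizing a monotone submodular f over an
# independence system given by an `independent` membership test.

def greedy_max(ground, f, independent, start=frozenset()):
    S = set(start)
    while True:
        gains = [(f(S | {e}) - f(S), e) for e in ground - S
                 if independent(S | {e})]
        if not gains:
            return S
        best_gain, best_e = max(gains)
        if best_gain <= 0:
            return S
        S.add(best_e)
```

For coverage, the marginal gain of a set is the number of new elements it covers, which is monotone and submodular; with a cardinality constraint the classic (1 - 1/e)-style guarantees apply.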
195

Robust model selection and outlier detection in linear regressions

McCann, Lauren, Ph. D. Massachusetts Institute of Technology January 2006 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2006. / Includes bibliographical references (p. 191-196). / In this thesis, we study the problems of robust model selection and outlier detection in linear regression. The results of data analysis based on linear regressions are highly sensitive to model choice and the existence of outliers in the data. This thesis aims to help researchers choose the correct model when their data could be contaminated with outliers, to detect possible outliers in their data, and to study the impact that such outliers have on their analysis. First, we discuss the problem of robust model selection. Many methods for performing model selection were designed with the standard error model ... and least squares estimation in mind. These methods often perform poorly on real world data, which can include outliers. Robust model selection methods aim to protect us from outliers and capture the model that represents the bulk of the data. We review the currently available model selection algorithms (both non-robust and robust) and present five new algorithms. Our algorithms aim to improve upon the currently available algorithms, both in terms of accuracy and computational feasibility. We demonstrate the improved accuracy of our algorithms via a simulation study and a study on a real world data set. / Finally, we discuss the problem of outlier detection. In addition to model selection, outliers can adversely influence many other outcomes of regression-based data analysis. We describe a new outlier diagnostic tool, which we call diagnostic data traces. This tool can be used to detect outliers and study their influence on a variety of regression statistics. We demonstrate our tool on several data sets, which are considered benchmarks in the field of outlier detection. / by Lauren McCann. / Ph.D.
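The "diagnostic data traces" tool is specific to this thesis, but the kind of robust residual diagnostic it builds on can be illustrated with a standard median/MAD flag: score each residual by its deviation from the median in robust-scale units, so that outliers cannot mask themselves by inflating the ordinary standard deviation. The cutoff of 3 and the Gaussian-consistency factor 1.4826 are conventional choices, not the thesis's.

```python
# Median/MAD outlier flagging for regression residuals (a generic
# robust diagnostic, shown here as background for the thesis's tool).

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def mad_flags(residuals, cutoff=3.0):
    m = median(residuals)
    mad = 1.4826 * median([abs(r - m) for r in residuals])
    return [abs(r - m) > cutoff * mad for r in residuals]
```

Because both the center and the scale are medians, a single gross outlier barely moves them, so it stands out clearly in the flags.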
196

Predicting mortality for patients in critical care : a univariate flagging approach

Sheth, Mallory January 2015 (has links)
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2015. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 87-89). / Predicting outcomes for critically ill patients is a topic of considerable interest. The most widely used models utilize data from early in a patient's stay to predict risk of death. While research has shown that use of daily information, including trends in key variables, can improve predictions of patient prognosis, this problem is challenging as the number of variables that must be considered is large and increasingly complex modeling techniques are required. The objective of this thesis is to build a mortality prediction system that improves upon current approaches. We aim to do this in two ways: 1. By incorporating a wider range of variables, including time-dependent features 2. By exploring different predictive modeling techniques beyond standard regression We identify three promising approaches: a random forest model, a best subset regression containing just five variables, and a novel approach called the Univariate Flagging Algorithm (UFA). In this thesis, we show that all three methods significantly outperform a widely-used mortality prediction approach, the Sequential Organ Failure Assessment (SOFA) score. However, we assert that UFA in particular is well-suited for predicting mortality in critical care. It can detect optimal cut-points in data, easily scales to a large number of variables, is easy to interpret, is capable of predicting rare events, and is robust to noise and missing data. As such, we believe it is a valuable step toward individual patient survival estimates. / by Mallory Sheth. / S.M.
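The abstract describes the Univariate Flagging Algorithm only at a high level: find an optimal cut-point per variable, then combine flags across variables. A loose sketch of that spirit, where the cut-point search maximizes classification accuracy and a patient's score is simply the count of raised flags; both of these rules are simplifying guesses, not the thesis's exact procedure, and the variable names are invented.

```python
# Per-variable cut-point search plus flag counting, in the spirit of
# (but not identical to) the Univariate Flagging Algorithm.

def best_cutpoint(values, labels):
    """Cut maximizing accuracy of the rule 'value >= cut predicts label 1'."""
    best = (0, None)
    for cut in sorted(set(values)):
        acc = sum((v >= cut) == bool(y) for v, y in zip(values, labels))
        best = max(best, (acc, cut))
    return best[1]

def flag_score(patient, cutpoints):
    """Number of variables on which the patient exceeds its cut-point."""
    return sum(patient[k] >= c for k, c in cutpoints.items())
```

Because each variable is handled independently, the approach scales to many variables and tolerates missing ones (a missing variable simply contributes no flag), which matches the robustness claims in the abstract.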
197

An analytics approach to designing clinical trials for cancer

Relyea, Stephen L. (Stephen Lawrence) January 2013 (has links)
Thesis (S.M. in Operations Research)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2013. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 67-71). / Since chemotherapy began as a treatment for cancer in the 1940s, cancer drug development has become a multi-billion dollar industry. Combination chemotherapy remains the leading treatment for advanced cancers, and cancer drug research and clinical trials are enormous expenses for pharmaceutical companies and the government. We propose an analytics approach for the analysis and design of clinical trials that can discover drug combinations with significant improvements in survival and toxicity. We first build a comprehensive database of clinical trials. We then use this database to develop statistical models from earlier trials that are capable of predicting the survival and toxicity of new combinations of drugs. Then, using these statistical models, we develop optimization models that select novel treatment regimens that could be tested in clinical trials, based on the totality of data available on existing combinations. We present evidence for advanced gastric and gastroesophageal cancers that the proposed analytics approach a) leads to accurate predictions of survival and toxicity outcomes of clinical trials as long as the drugs used have been seen before in different combinations, b) suggests novel treatment regimens that balance survival and toxicity and take into account the uncertainty in our predictions, and c) outperforms the trials run by the average oncologist to give survival improvements of several months. Ultimately, our analytics approach offers promise for improving life expectancy and quality of life for cancer patients at low cost. / by Stephen L. Relyea. / S.M. in Operations Research
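The predict-then-optimize pipeline in this abstract can be reduced to a toy: score candidate regimens with a predictive model, then search for the combination that maximizes predicted survival under a toxicity cap. Here hypothetical additive per-drug scores stand in for the thesis's statistical models, and brute-force enumeration stands in for its optimization models; every name and number is illustrative.

```python
# Toy regimen selection: maximize predicted survival subject to a
# toxicity budget, by exhaustive search over small combinations.
from itertools import combinations

def choose_regimen(drugs, survival, toxicity, max_drugs, tox_cap):
    """Pick the combo maximizing summed survival score under a toxicity cap."""
    best, best_score = None, float('-inf')
    for k in range(1, max_drugs + 1):
        for combo in combinations(drugs, k):
            tox = sum(toxicity[d] for d in combo)
            surv = sum(survival[d] for d in combo)
            if tox <= tox_cap and surv > best_score:
                best, best_score = combo, surv
    return best
```

The thesis's actual models are fit to trial outcomes and account for prediction uncertainty, which a fixed additive score cannot capture.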
198

Emission regulations in the electricity market : an analysis from consumers, producers and central planner perspectives

Figueroa Rodriguez, Cristian Ricardo January 2013 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2013. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 119-122). / In the first part of this thesis, the objective is to identify optimal bidding strategies in the wholesale electricity market. We consider asymmetric producers submitting bids to a system operator. The system operator allocates demand via a single clearing price auction. The highest accepted bid sets the per unit market price paid by consumers. We find a pure Nash equilibrium in the bidding strategies of asymmetric producers that is unattainable in a symmetric model. Our results show that producers with relatively large capacities are able to exercise market power. However, the market may seem competitive due to the large number of producers serving demand. The objective of the second part of the thesis is to compare two regulation policies: a fixed transfer price, such as tax regulation, and a permit system, such as cap-and-trade. For this purpose, we analyze an economy where risk neutral manufacturers satisfy price sensitive demand. The objective of the regulation established by the central planner is to achieve an external objective, e.g. reduce pollution or limit consumption of a scarce resource. When demand is uncertain, designing these regulations to achieve the same expected level of the external objective results in the same expected consumer price but very different manufacturers' expected profit and central planner revenue. For instance, our results show that when the firms are price takers, the manufacturers with the worst technology always prefer a tax policy. 
Interestingly, we identify conditions under which the manufacturers with the cleanest technology benefit from higher expected profit as tax rate increases. In the third part of the thesis, we investigate the impact labeling decisions have on the supply chain. We consider a two stage supply chain consisting of a supplier and a retailer. Demand is considered stochastic, decreasing in price and increasing in a quality parameter, e.g. carbon emissions. The unit production cost for the supplier is increasing in the quality level chosen. We identify two different contracts that maximize the efficiency of the supply chain while allowing the different parties to achieve their objectives individually. / by Cristian Ricardo Figueroa Rodriguez. / Ph.D.
199

Modeling and design of material recovery facilities : genetic algorithm approach / Material recovery facilities : genetic algorithm approach

Testa, Mariapaola January 2015 (has links)
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2015. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 187-193). / In the Organisation for Economic Co-operation and Development (OECD) area, the production of municipal solid waste (MSW) increased by 32% between 1990 and 2011, exceeding 660 million tonnes in 2011; the world-wide production of waste is estimated to grow further due to increasing GDP in developing economies. Given this scenario, effective treatment and recovery of wastes becomes a priority. In developed countries, MSW is usually sent to materials recovery facilities (MRFs), which use mechanical and manual sorting units to extract valuable components. In this work, we define a network flow model to represent a MRF that sorts wastes using multi-output units with recirculating streams. For each material in the system, we define a matrix to describe the sorting process. We then formulate a genetic algorithm (GA) that generates alternative configurations of a MRF having a given set of sorting units with known separation parameters and selects those with highest profit and efficiency. The GA incorporates a heuristic for personnel allocation to manual units. We code the algorithm in Java and apply it to an existing MRF. The results show a 33.4% improvement in profit and a 1.7% improvement in efficiency with respect to the current configuration without hand sorting; and a 6.7% improvement in profit and a 3.9% improvement in efficiency with respect to the current configuration with hand sorting. / by Mariapaola Testa. / S.M.
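The genetic-algorithm machinery this abstract relies on (the thesis implements it in Java, over MRF configurations) has a standard shape: a population of encoded configurations evolved by selection, crossover, and mutation against a fitness function. A minimal Python sketch of those mechanics only, over bit-string "configurations" with a stand-in fitness; the encoding, operators, and parameters here are generic choices, not the thesis's.

```python
# Minimal GA: tournament selection, one-point crossover, bit-flip
# mutation. `fitness` stands in for the profit/efficiency objective.
import random

def ga(fitness, n_bits, pop_size=30, gens=60, p_mut=0.05, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            a, b = rng.sample(pop, 2)          # tournament of two
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

On the classic "onemax" toy fitness (count of ones), the population converges toward the all-ones string; in the thesis, fitness evaluation instead runs the network-flow model of the facility.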
200

Buyout prices in online auctions

Gupta, Shobhit January 2006 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2006. / Includes bibliographical references (p. 149-154). / Buyout options allow bidders to instantly purchase at a specified price an item listed for sale through an online auction. A temporary buyout option disappears once a regular bid above the reserve price is made, while a permanent option remains available until it is exercised or the auction ends. Buyout options are widely used in online auctions and have significant economic importance: nearly half of the auctions today are listed with a buyout price and the option is exercised in nearly one fourth of them. We formulate a game-theoretic model featuring time-sensitive bidders with independent private valuations and Poisson arrivals but endogenous bidding times in order to answer the following questions: How should buyout prices be set in order to maximize the seller's discounted revenue? What are the relative benefits of using each type of buyout option? While all existing buyout options we are aware of currently rely on a static buyout price (i.e. with a constant value), what is the potential benefit associated with using instead a dynamic buyout price that varies as the auction progresses? / For all buyout option types we exhibit a Nash equilibrium in bidder strategies, argue that this equilibrium constitutes a plausible outcome prediction, and study the problem of maximizing the corresponding seller revenue. In particular, the equilibrium strategy in all cases is such that a bidder exercises the buyout option provided it is still available and his valuation is above a time-dependent threshold. Our numerical experiments suggest that a seller may significantly increase his utility by introducing a buyout option when any of the participants are time-sensitive. 
Furthermore, while permanent buyout options yield higher predicted revenue than temporary options, they also provide additional incentives for late bidding and may therefore not always be more desirable. The numerical results also imply that the increase in seller's utility (over a fixed buyout price auction) enabled by a dynamic buyout price is small and does not seem to justify the corresponding increase in complexity. / by Shobhit Gupta. / Ph.D.
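The equilibrium structure in this abstract — a bidder exercises the buyout option when it is still available and his valuation exceeds a threshold — lends itself to a stylized revenue simulation. The sketch below models a temporary option with Poisson arrivals, uniform valuations, a constant (rather than time-dependent) threshold, and the highest standing valuation as shorthand for the ensuing auction's revenue; all of these simplifications are ours, not the thesis's model.

```python
# Stylized revenue simulation for a temporary buyout option: the first
# arriving bidder with valuation >= threshold buys out; otherwise the
# option vanishes at the first regular bid and the auction runs on.
import random

def simulate_revenue(buyout_price, threshold, rate, horizon, n_runs, seed=7):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        t, vals, option_live, revenue = 0.0, [], True, 0.0
        while True:
            t += rng.expovariate(rate)          # next bidder arrival
            if t > horizon:
                revenue = max(vals) if vals else 0.0   # auction closes
                break
            v = rng.random()                    # valuation ~ U[0, 1]
            if option_live and v >= threshold:
                revenue = buyout_price          # buyout exercised
                break
            option_live = False                 # temporary option vanishes
            vals.append(v)                      # regular bid placed
        total += revenue
    return total / n_runs
```

Sweeping `buyout_price` and `threshold` in such a simulation is a crude stand-in for the seller's revenue-maximization problem that the thesis solves analytically.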
