211.
Distributed averaging in dynamic networks. Rajagopalan, Shreevatsa. January 2010.
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (p. 39-40). / The question of computing the average of numbers present at nodes in a network in a distributed manner using gossip or message-passing algorithms has been of great recent interest across disciplines -- algorithms, control and robotics, estimation, social networks, etc. It has served as a non-trivial, representative model for an important class of questions arising in these disciplines and has thus guided intellectual progress over the past few decades. In most of these applications, there is inherent dynamics present, such as changes in the network topology in terms of communication links, changes in the values of numbers present at nodes, and nodes joining or leaving. The effect of dynamics in terms of communication links on the design and analysis of averaging algorithms is reasonably well understood, e.g. [14][2][8][4]. However, little is known about the effect of other forms of dynamics. In this thesis, we study the effect of such dynamics in the context of maintaining the average in the network. Specifically, we design a dynamics-aware message-passing or gossip algorithm that maintains a good estimate of the average in the presence of continuous change in the numbers at nodes. Clearly, in the presence of such dynamics, the best one can hope for is a tradeoff between the accuracy of each node's estimate of the average at each time instant and the rate of dynamics. For our algorithm, we characterize this tradeoff and establish that it is near-optimal. The dependence of the accuracy of the algorithm on the rate of dynamics as well as on the underlying graph structure is quantified. / by Shreevatsa Rajagopalan. / S.M.
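The gossip primitive behind such algorithms can be sketched in a few lines. The toy below assumes a fixed five-node ring (not the dynamic setting the thesis studies): each randomly chosen pair of neighbors replaces its two values with their mean, which preserves the network sum while shrinking the spread, so every node's estimate converges to the true average.

```python
import random

def gossip_average(values, edges, rounds=2000, seed=0):
    """Pairwise gossip: repeatedly pick a random edge (i, j) and replace
    both endpoint values with their average.  The sum over nodes is
    invariant, so all estimates converge to the network-wide mean."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i, j = rng.choice(edges)
        m = (x[i] + x[j]) / 2.0
        x[i] = x[j] = m
    return x

# Ring network on 5 nodes holding values 0..4; the true average is 2.0.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
estimates = gossip_average([0.0, 1.0, 2.0, 3.0, 4.0], edges)
```

After a couple of thousand exchanges every node sits essentially at 2.0; the dynamics-aware version in the thesis must additionally track values that keep changing while the gossip runs.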
212.
Delay characterization and prediction in major U.S. airline networks. Hanley, Zebulon James. January 2015.
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2015. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 101-102). / This thesis expands on models that predict delays within the National Airspace System (NAS) in the United States. We propose a new method to predict the expected behavior of the NAS throughout the course of an entire day after only a few flying hours have elapsed. We do so by using k-means clustering to classify daily NAS behavior into a small set of most commonly seen snapshots. We then use random forests to map the delay behavior experienced early in a day to the most similar NAS snapshot, from which we make our type-of-day prediction for the NAS. By noon EST, we are able to predict the NAS type-of-day with 85% accuracy. We then incorporate these NAS type-of-day predictions into previously proposed models to predict the delay on specific origin-destination (OD) pairs within the U.S. a certain number of hours into the future. The predictions use local delay variables, such as the current delay on specific OD pairs and airports, as well as network-level variables such as the NAS type-of-day. These OD-pair delay prediction models use random forests to make classification and regression predictions. The effects of changes in classification threshold, prediction horizon, NAS type-of-day inclusion, and using wheel off/on, actual, and scheduled gate departure and arrival times are studied. Lastly, we explore how the delay behavior of the NAS has changed over the last ten years and how well the models perform on new data. / by Zebulon James Hanley. / S.M.
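The type-of-day step rests on k-means clustering of daily delay profiles. A minimal sketch of Lloyd's algorithm follows, run on invented two-feature "day profiles" (the thesis's actual features and number of clusters are not reproduced here):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # leave an empty cluster's centroid where it is
                centroids[j] = tuple(sum(d) / len(cl) for d in zip(*cl))
    return centroids, clusters

# Two well-separated hypothetical regimes: low-delay vs. high-delay days.
days = [(1.0, 2.0), (1.5, 1.8), (0.9, 2.2), (9.0, 10.0), (9.5, 9.8), (8.8, 10.2)]
centroids, clusters = kmeans(days, 2)
```

With the regimes this far apart, the algorithm recovers the two groups regardless of initialization; a new day would then be labeled by its nearest centroid, which is the role the NAS snapshots play in the thesis.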
213.
Space, body and power/play: a case study of Hong Kong Cultural Center. January 2004.
Chow Pui-ha. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2004. / Includes bibliographical references (leaves 213-219). / Abstracts in English and Chinese. / Prologue --- p.1-3 / Chapter Ch. 1 --- "Theorizing Space, Body, Power and Play" --- p.4 / Chapter I. --- The Interrelations of Space and Body --- p.5-13 / Chapter II. --- The Power of Play and Leisure --- p.13-19 / Chapter III. --- Leisure in Modernity --- p.19-21 / Chapter IV. --- Research Problematic: Leisure and Power Play --- p.21-22 / Chapter Ch. 2 --- Operationalisation --- p.23 / Chapter I. --- The Site of Study: Hong Kong Cultural Centre --- p.24-30 / Chapter II. --- Institutional Power at Cultural Centre --- p.30-33 / Chapter III. --- Theoretical Framework --- p.34-38 / Chapter IV. --- Research Questions --- p.39-41 / Chapter V. --- Methodology --- p.41-46 / Chapter VI. --- Research Implications --- p.46-48 / Chapter Ch. 3 --- Phallocratic Bodyspace --- p.49 / Chapter I. --- The Representational Spaces: A Public Toilet or a Cultural Palace? --- p.50-61 / Chapter II. --- Spatial Practices: Performing Arts/Performative Culture --- p.62-70 / Chapter III. --- Performative Leisure and Disenchanted Body --- p.71-78 / Chapter IV. --- Political Economy of the Body and Political Economy of Music --- p.78-80 / Chapter V. --- Enchanted Body in the Center? --- p.81-83 / Chapter VI. --- The Representation of Space: Elitist Discourse --- p.84-87 / Chapter VII. --- The Phallocratic Bodyspace of the Elitist Cultural Palace --- p.87-91 / Chapter Ch. 4 --- Embryonic Bodyspace --- p.92 / Chapter I. --- Representational Space: the Garden and the Open Theatre --- p.93-97 / Chapter II. --- Spatial Practices: Compositional Performances --- p.98-109 / Chapter III. --- Re-created Body and Leisure Societies --- p.109-121 / Chapter IV. --- Embryonic Bodyspace --- p.122-126 / Chapter V. --- The Representation of Space: the Repression of Revolution --- p.126-129 / Chapter VI. --- Institutionalised Embryonic Bodyspace --- p.129-136 / Chapter Ch. 5 --- Contested Bodyspace --- p.137 / Chapter I. --- Sectioned Lifeworlds --- p.138-141 / Chapter II. --- Civility as the Logic of Centrality --- p.141-149 / Chapter III. --- Dialectic of Order and Disorder and the Logic of Civility --- p.149-153 / Chapter IV. --- Power at Play --- p.153-155 / Chapter Ch. 6 --- Body-City and City Spectacle --- p.156 / Chapter I. --- The Logic of Civility and City Imaginary --- p.157-159 / Chapter II. --- Spectacularization of City --- p.159-162 / Chapter III. --- Event Capital and Hong Kong Identity --- p.163-173 / Chapter IV. --- Mainland Tourists as the City Spectacle of Hong Kong --- p.173-181 / Chapter V. --- "City Spectacle, Spatial Order and Power Negotiation" --- p.181-184 / Chapter Ch. 7 --- Conclusion: Politics of Play on Body-City --- p.185 / Chapter I. --- Play as Tactic --- p.186-187 / Chapter II. --- Phallocratic and Embryonic Bodyspaces --- p.188-190 / Chapter III. --- Leisure Relations --- p.190-191 / Chapter IV. --- Creative City and City Citizenship --- p.191-200 / Chapter VI. --- Play as Life Politics --- p.200-207 / Chapter VII. --- Conclusion --- p.207-208 / Appendix 1: Renowned artists and groups presented in HK Cultural Centre --- p.209-210 / Appendix 2: The Meaning of the Eighteen Buildings Presented in a Symphony of Lights --- p.211-212 / Bibliography --- p.213-219
214.
Sparse learning : statistical and optimization perspectives. Dedieu, Antoine. January 2018.
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 101-109). / In this thesis, we study the computational and statistical aspects of several sparse models when the number of samples and/or features is large. We propose new statistical estimators and build new computational algorithms, borrowing tools and techniques from areas of convex and discrete optimization. First, we explore an Lq-regularized version of the best subset selection procedure which mitigates the poor statistical performance of the best-subsets estimator in low-SNR regimes. The statistical and empirical properties of the estimator are explored, especially in comparison with best-subsets selection, Lasso, and Ridge. Second, we propose new computational algorithms for a family of penalized linear Support Vector Machine (SVM) problems with a hinge loss function and sparsity-inducing regularizations. Our methods bring together techniques from Column (and Constraint) Generation and modern first-order methods for non-smooth convex optimization. These two components complement each other's strengths, leading to improvements of two orders of magnitude when compared to commercial LP solvers. Third, we present a novel framework inspired by hierarchical Bayesian modeling to predict user session length on online streaming services. The time spent by a user on a platform depends upon user-specific latent variables which are learned via hierarchical shrinkage. Our framework incorporates flexible parametric/nonparametric models on the covariates and outperforms state-of-the-art estimators in terms of efficiency and predictive performance on real-world datasets from the internet radio company Pandora Media Inc. / by Antoine Dedieu. / S.M.
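The flavor of an L2-regularized best-subset estimator can be shown by brute force on a small problem. The sketch below (exhaustive support enumeration with a ridge fit per support; an illustration of the general idea, not the thesis's estimator or algorithm) recovers a planted two-variable support:

```python
import itertools
import numpy as np

def best_subset_ridge(X, y, k, lam=0.1):
    """Enumerate all supports of size <= k, solve ridge on each support,
    and keep the support with the smallest penalized residual sum of
    squares.  Exponential in the feature count, so small p only."""
    best = (np.inf, None, None)
    for size in range(1, k + 1):
        for S in itertools.combinations(range(X.shape[1]), size):
            Xs = X[:, S]
            beta = np.linalg.solve(Xs.T @ Xs + lam * np.eye(size), Xs.T @ y)
            loss = np.sum((y - Xs @ beta) ** 2) + lam * np.sum(beta ** 2)
            if loss < best[0]:
                best = (loss, S, beta)
    return best[1], best[2]

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 6))
y = 3.0 * X[:, 1] - 2.0 * X[:, 4] + 0.1 * rng.standard_normal(100)
support, beta = best_subset_ridge(X, y, k=2)
```

With a strong signal and mild noise, the enumerated estimator finds the true support {1, 4} and near-true coefficients; the thesis's contribution is making this kind of estimator behave well statistically in low-SNR regimes, where plain best subsets degrades.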
215.
Essays in financial engineering. Haugh, Martin B. (Martin Brendan), 1971-. January 2001.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2001. / Includes bibliographical references (p. 109-115). / This thesis consists of three essays that apply techniques of operations research to problems in financial engineering. In particular, we study problems in portfolio optimization and options pricing. The first essay is motivated by the fact that derivative securities are equivalent to specific dynamic trading strategies in complete markets. This suggests the possibility of constructing buy-and-hold portfolios of options that mimic certain dynamic investment policies, e.g., asset-allocation rules. We explore this possibility by solving the following problem: given an optimal dynamic investment policy, find a set of options at the start of the investment horizon which will come closest to the optimal dynamic investment policy. We solve this problem for several combinations of preferences, return dynamics, and optimality criteria, and show that under certain conditions, a portfolio consisting of just a few European options is an excellent substitute for considerably more complex dynamic investment policies. In the second essay, we develop a method for pricing and exercising high-dimensional American options. The approach is based on approximate dynamic programming using nonlinear regression to approximate the value function. Using the approximate dynamic programming solutions, we construct upper and lower bounds on the option prices. These bounds can be evaluated by Monte Carlo simulation, and they are general enough to be used in conjunction with other approximate methods for pricing American options. We characterize the theoretical worst-case performance of the pricing bounds and examine how they may be used for hedging and exercising the option. We also discuss the implications for the design of the approximate pricing algorithm and illustrate its performance on a set of sample problems where we price call options on the maximum and the geometric mean of a collection of stocks. The third essay explores the possibility of solving high-dimensional portfolio optimization problems using approximate dynamic programming. In particular, we employ approximate value iteration where the portfolio strategy at each time period is obtained using quadratic approximations to the approximate value function. We then compare the resulting solution to the best heuristic strategies available. Though the approximate dynamic programming solutions are often competitive, they are sometimes dominated by the best heuristic strategy. On such occasions we conclude that inaccuracies in the quadratic approximations are responsible for the poor performance. Finally, we compare our results to other recent work in this area and suggest possible methods for improving these algorithms. / by Martin B. Haugh. / Ph.D.
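Regression-based approximate dynamic programming for American options is commonly implemented as least-squares Monte Carlo. The sketch below uses a standard Longstaff-Schwartz-style backward induction with a quadratic basis on a single-asset put; the contract parameters are an assumed textbook benchmark, not an example from the thesis, and this sketch produces a point estimate rather than the thesis's upper and lower bounds.

```python
import numpy as np

def american_put_lsm(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0,
                     steps=50, paths=20000, seed=0):
    """Least-squares Monte Carlo: simulate price paths, then step backward,
    regressing discounted continuation values on polynomial features of the
    spot price to decide where early exercise beats continuation."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    z = rng.standard_normal((paths, steps))
    # Geometric Brownian motion paths (prices after steps 1..steps).
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    payoff = np.maximum(K - S[:, -1], 0.0)       # exercise value at maturity
    for t in range(steps - 2, -1, -1):
        payoff *= np.exp(-r * dt)                # discount back one step
        itm = K - S[:, t] > 0.0                  # regress on in-the-money paths
        if itm.sum() > 10:
            x = S[itm, t]
            A = np.vander(x, 3)                  # quadratic polynomial basis
            coef, *_ = np.linalg.lstsq(A, payoff[itm], rcond=None)
            cont = A @ coef                      # estimated continuation value
            ex = (K - x) > cont                  # exercise beats continuation
            idx = np.where(itm)[0][ex]
            payoff[idx] = K - S[idx, t]
    return float(np.exp(-r * dt) * payoff.mean())

price = american_put_lsm()
```

The estimate lands near the well-known value of roughly 4.5 for this benchmark; the bounds developed in the essay wrap estimates like this one from both sides.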
216.
Tractability through approximation : a study of two discrete optimization problems. Farahat, Amr, 1973-. January 2004.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2004. / Includes bibliographical references. / This dissertation consists of two parts. In the first part, we address a class of weakly-coupled multi-commodity network design problems characterized by restrictions on path flows and 'soft' demand requirements. In the second part, we address the abstract problem of maximizing non-decreasing submodular functions over independence systems, which arises in a variety of applications such as combinatorial auctions and facility location. Our objective is to develop approximate solution procedures suitable for large-scale instances that provide a continuum of trade-offs between accuracy and tractability. In Part I, we review the application of Dantzig-Wolfe decomposition to mixed-integer programs. We then define a class of multi-commodity network design problems that are weakly-coupled in the flow variables. We show that this problem is NP-complete, and proceed to develop an approximation/reformulation solution approach based on Dantzig-Wolfe decomposition. We apply these ideas to the specific problem of airline fleet assignment with the goal of creating models that incorporate more realistic revenue functions. This yields a new formulation of the problem with a provably stronger linear programming relaxation, and we provide some empirical evidence that it performs better than other models proposed in the literature. In Part II, we investigate the performance of a family of greedy-type algorithms for the problem of maximizing submodular functions over independence systems. Building on pioneering work by Conforti, Cornuéjols, Fisher, Jenkyns, Nemhauser, Wolsey and others, we analyze a greedy algorithm that incrementally augments the current solution by adding subsets of arbitrary variable cardinality. This generalizes the standard best-in greedy algorithm, at one extreme, and complete enumeration, at the other extreme. We derive worst-case approximation guarantees on the solution produced by such an algorithm for matroids. We then define a continuous relaxation of the original problem and show that some of the derived bounds apply with respect to the relaxed problem. We also report on a new bound for independence systems. These bounds extend, and in some cases strengthen, previously known results for the standard best-in greedy. / by Amr Farahat. / Ph.D.
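The standard best-in greedy that the dissertation generalizes is easy to state for a monotone submodular objective. Below it is instantiated for set coverage under a cardinality constraint (a uniform matroid); the example sets are invented. The classical guarantee in this setting is a (1 - 1/e) fraction of the optimum.

```python
def greedy_max_coverage(sets, k):
    """Best-in greedy for max coverage: repeatedly add the set with the
    largest marginal gain in covered elements, up to k sets."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(sets, key=lambda name: len(sets[name] - covered))
        if not sets[best] - covered:
            break                      # no set adds anything new
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = {
    "a": {1, 2, 3, 4},
    "b": {3, 4, 5},
    "c": {5, 6, 7},
    "d": {1, 7},
}
chosen, covered = greedy_max_coverage(sets, k=2)
```

Greedy first takes "a" (gain 4), then "c" (gain 3), covering all seven elements; the dissertation's variant would instead consider augmenting by whole subsets of sets at each step, interpolating between this algorithm and complete enumeration.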
217.
Robust model selection and outlier detection in linear regressions. McCann, Lauren, Ph. D. Massachusetts Institute of Technology. January 2006.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2006. / Includes bibliographical references (p. 191-196). / In this thesis, we study the problems of robust model selection and outlier detection in linear regression. The results of data analysis based on linear regressions are highly sensitive to model choice and the existence of outliers in the data. This thesis aims to help researchers choose the correct model when their data could be contaminated with outliers, to detect possible outliers in their data, and to study the impact that such outliers have on their analysis. First, we discuss the problem of robust model selection. Many methods for performing model selection were designed with the standard error model ... and least squares estimation in mind. These methods often perform poorly on real-world data, which can include outliers. Robust model selection methods aim to protect us from outliers and capture the model that represents the bulk of the data. We review the currently available model selection algorithms (both non-robust and robust) and present five new algorithms. Our algorithms aim to improve upon the currently available algorithms, both in terms of accuracy and computational feasibility. We demonstrate the improved accuracy of our algorithms via a simulation study and a study on a real-world data set. Finally, we discuss the problem of outlier detection. In addition to model selection, outliers can adversely influence many other outcomes of regression-based data analysis. We describe a new outlier diagnostic tool, which we call diagnostic data traces. This tool can be used to detect outliers and study their influence on a variety of regression statistics. We demonstrate our tool on several data sets, which are considered benchmarks in the field of outlier detection. / by Lauren McCann. / Ph.D.
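A classical building block for influence diagnostics of this kind is the leave-one-out statistic. The sketch below computes Cook's distance (a textbook diagnostic, not the thesis's diagnostic data traces) on synthetic data and flags an injected outlier:

```python
import numpy as np

def cooks_distance(X, y):
    """Cook's distance for OLS: D_i = e_i^2 h_i / (p s^2 (1 - h_i)^2),
    combining the residual e_i and the leverage h_i from the hat matrix.
    Large values flag points that move the fitted model."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat matrix
    h = np.diag(H)
    resid = y - H @ y
    s2 = resid @ resid / (n - p)
    return resid**2 / (s2 * p) * h / (1.0 - h) ** 2

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 30)
y = 2.0 * x + rng.normal(0, 0.5, 30)
y[7] += 15.0                                   # inject a gross outlier
X = np.column_stack([np.ones_like(x), x])
d = cooks_distance(X, y)
```

The injected point dominates the distances by a wide margin; diagnostics like this, tracked across candidate models and subsets, are the raw material the thesis's tool builds on.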
218.
Predicting mortality for patients in critical care : a univariate flagging approach. Sheth, Mallory. January 2015.
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2015. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 87-89). / Predicting outcomes for critically ill patients is a topic of considerable interest. The most widely used models utilize data from early in a patient's stay to predict risk of death. While research has shown that use of daily information, including trends in key variables, can improve predictions of patient prognosis, this problem is challenging: the number of variables that must be considered is large, and increasingly complex modeling techniques are required. The objective of this thesis is to build a mortality prediction system that improves upon current approaches. We aim to do this in two ways: (1) by incorporating a wider range of variables, including time-dependent features, and (2) by exploring predictive modeling techniques beyond standard regression. We identify three promising approaches: a random forest model, a best subset regression containing just five variables, and a novel approach called the Univariate Flagging Algorithm (UFA). In this thesis, we show that all three methods significantly outperform a widely used mortality prediction approach, the Sequential Organ Failure Assessment (SOFA) score. However, we assert that UFA in particular is well-suited to predicting mortality in critical care. It can detect optimal cut-points in data, easily scales to a large number of variables, is easy to interpret, is capable of predicting rare events, and is robust to noise and missing data. As such, we believe it is a valuable step toward individual patient survival estimates. / by Mallory Sheth. / S.M.
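The flavor of univariate flagging can be conveyed with a simplified sketch: choose one cut-point per variable, then score each patient by how many cut-points they exceed. The separation criterion and scoring rule below are assumptions for illustration only, not the thesis's UFA specification.

```python
import numpy as np

def univariate_cutpoints(X, y):
    """For each variable independently, scan the sample values as candidate
    thresholds and keep the cut whose exceedance best separates outcomes
    (scored here by the gap in mean outcome above vs. below the cut)."""
    cuts = []
    for j in range(X.shape[1]):
        best_gap, best_cut = -1.0, None
        for c in np.unique(X[:, j])[:-1]:   # drop the max so both sides stay non-empty
            above = X[:, j] > c
            gap = abs(y[above].mean() - y[~above].mean())
            if gap > best_gap:
                best_gap, best_cut = gap, c
        cuts.append(best_cut)
    return cuts

def flag_counts(X, cuts):
    """A patient's score is simply the number of variables exceeding their cuts."""
    return sum((X[:, j] > c).astype(int) for j, c in enumerate(cuts))

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (200, 3))
y = (X[:, 0] > 1.0).astype(float)   # synthetic outcome driven by variable 0 only
cuts = univariate_cutpoints(X, y)
scores = flag_counts(X, cuts)
```

Because each variable is handled on its own, the approach scales linearly in the number of variables and tolerates missing values per variable, which matches the advantages the abstract claims for UFA.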
219.
An analytics approach to designing clinical trials for cancer. Relyea, Stephen L. (Stephen Lawrence). January 2013.
Thesis (S.M. in Operations Research)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2013. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 67-71). / Since chemotherapy began as a treatment for cancer in the 1940s, cancer drug development has become a multi-billion dollar industry. Combination chemotherapy remains the leading treatment for advanced cancers, and cancer drug research and clinical trials are enormous expenses for pharmaceutical companies and the government. We propose an analytics approach for the analysis and design of clinical trials that can discover drug combinations with significant improvements in survival and toxicity. We first build a comprehensive database of clinical trials. We then use this database to develop statistical models from earlier trials that are capable of predicting the survival and toxicity of new combinations of drugs. Then, using these statistical models, we develop optimization models that select novel treatment regimens that could be tested in clinical trials, based on the totality of data available on existing combinations. We present evidence for advanced gastric and gastroesophageal cancers that the proposed analytics approach a) leads to accurate predictions of survival and toxicity outcomes of clinical trials as long as the drugs used have been seen before in different combinations, b) suggests novel treatment regimens that balance survival and toxicity and take into account the uncertainty in our predictions, and c) outperforms the trials run by the average oncologist to give survival improvements of several months. Ultimately, our analytics approach offers promise for improving life expectancy and quality of life for cancer patients at low cost. / by Stephen L. Relyea. / S.M. in Operations Research
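The two-stage approach (statistical models fit on past trial arms, then optimization over unseen regimens) can be sketched on invented data. Everything below is hypothetical: the drug names, the arm outcomes, the additive outcome model, and the 0.5 toxicity cap are illustration-only assumptions, not figures from the thesis.

```python
import itertools
import numpy as np

# Hypothetical past trial arms: rows are arms, columns are drug-inclusion
# indicators, with invented survival (months) and toxicity-rate outcomes.
drugs = ["cisplatin", "fluorouracil", "epirubicin", "docetaxel"]
arms = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 0],
    [1, 1, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
])
survival = np.array([7.0, 6.5, 7.5, 8.5, 9.0, 8.0])
toxicity = np.array([0.35, 0.50, 0.45, 0.60, 0.40, 0.45])

# Stage 1: fit additive outcome models on the past arms.
X = np.column_stack([np.ones(len(arms)), arms])
s_coef, *_ = np.linalg.lstsq(X, survival, rcond=None)
t_coef, *_ = np.linalg.lstsq(X, toxicity, rcond=None)

# Stage 2: enumerate unseen 2- and 3-drug regimens, maximize predicted
# survival subject to a toxicity cap.
best, best_surv = None, -np.inf
for r in (2, 3):
    for combo in itertools.combinations(range(len(drugs)), r):
        x = np.zeros(len(drugs) + 1)
        x[0] = 1.0
        x[[j + 1 for j in combo]] = 1.0
        s, t = x @ s_coef, x @ t_coef
        if t <= 0.5 and s > best_surv:
            best, best_surv = combo, s

regimen = [drugs[j] for j in best]
```

On this toy data the optimizer proposes a regimen never tested as such in the "past trials," which is exactly the mechanism the abstract describes: predictions for unseen combinations of seen drugs drive the trial design.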
220.
Emission regulations in the electricity market : an analysis from consumers, producers and central planner perspectives. Figueroa Rodriguez, Cristian Ricardo. January 2013.
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2013. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 119-122). / In the first part of this thesis, the objective is to identify optimal bidding strategies in the wholesale electricity market. We consider asymmetric producers submitting bids to a system operator. The system operator allocates demand via a single clearing price auction. The highest accepted bid sets the per-unit market price paid by consumers. We find a pure Nash equilibrium for the bidding strategies of asymmetric producers that is unattainable in a symmetric model. Our results show that producers with relatively large capacities are able to exercise market power. However, the market may seem competitive due to the large number of producers serving demand. The objective of the second part of the thesis is to compare two regulation policies: a fixed transfer price, such as tax regulation, and a permit system, such as cap-and-trade. For this purpose, we analyze an economy where risk-neutral manufacturers satisfy price-sensitive demand. The objective of the regulation established by the central planner is to achieve an external objective, e.g. reduce pollution or limit consumption of a scarce resource. When demand is uncertain, designing these regulations to achieve the same expected level of the external objective results in the same expected consumer price but very different manufacturers' expected profits and central planner revenue. For instance, our results show that when the firms are price takers, the manufacturers with the worst technology always prefer a tax policy.
Interestingly, we identify conditions under which the manufacturers with the cleanest technology benefit from higher expected profit as the tax rate increases. In the third part of the thesis, we investigate the impact labeling decisions have on the supply chain. We consider a two-stage supply chain consisting of a supplier and a retailer. Demand is stochastic, decreasing in price and increasing in a quality parameter, e.g. carbon emissions. The unit production cost for the supplier is increasing in the quality level chosen. We identify two different contracts that maximize the efficiency of the supply chain while allowing the different parties to achieve their objectives individually. / by Cristian Ricardo Figueroa Rodriguez. / Ph.D.
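The single clearing price auction described in the first part is easy to state in code. A minimal sketch, with invented producer names, offer prices, and capacities:

```python
def clear_market(bids, demand):
    """Uniform-price clearing: sort producers' offers by price, allocate
    demand from cheapest upward; the highest accepted offer sets the single
    per-unit price paid by consumers for all cleared quantity."""
    allocation, remaining, price = {}, demand, 0.0
    for name, offer, capacity in sorted(bids, key=lambda b: b[1]):
        if remaining <= 0:
            break
        q = min(capacity, remaining)
        allocation[name] = q
        remaining -= q
        price = offer          # last accepted offer becomes the clearing price
    return allocation, price

bids = [("gen_a", 20.0, 50.0), ("gen_b", 35.0, 40.0), ("gen_c", 50.0, 60.0)]
allocation, price = clear_market(bids, demand=70.0)
```

Note the incentive structure this creates: the cheap producer gen_a is paid 35.0 per unit even though it offered 20.0, which is why large producers can profitably shade their bids upward, the market-power effect the thesis analyzes.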