171

Analysis of employee stock options and guaranteed withdrawal benefits for life

Shah, Premal (Premal Y.) January 2008 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2008. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Includes bibliographical references (p. 215-224). / In this thesis we study three problems related to financial modeling. First, we study the problem of pricing Employee Stock Options (ESOs) from the point of view of the issuing company. Since an employee cannot trade or effectively hedge ESOs, she exercises them to maximize a subjective criterion of value. Modeling this exercise behavior is key to pricing ESOs. We argue that ESO exercises should not be modeled on a one-by-one basis, as is commonly done, but at a portfolio level, because the exercises of the different ESOs that an employee holds are coupled. Using utility-based models we also show that such coupled exercise behavior leads to lower average ESO costs for commonly used utility functions such as power and exponential utilities. Unfortunately, utility-based models do not lead to tractable solutions for finding costs associated with ESOs. We propose a new risk-management-based approach to modeling exercise behavior, built on mean-variance portfolio maximization. The resulting exercise behavior is intuitive and leads to a computationally tractable model for finding ESO exercises and pricing ESOs as a portfolio. We also study a special variant of this risk-management-based exercise model, which leads to a decoupling of the ESO exercises, and obtain analytical bounds on the implied cost of an ESO for the employer in this case. Next, we study Guaranteed Withdrawal Benefits (GWB) for life, a recent and popular product that many insurance companies have offered for retirement planning. The GWB feature promises the investor increasing withdrawals over her lifetime and is an exotic option that bears financial and mortality-related risks for the insurance company. / (cont.) We first analyze a continuous-time version of this product in a Black-Scholes economy with simplifying assumptions on population mortality and obtain an analytical solution for the product value. This simple analysis reveals the product's high sensitivity to several risk factors. We then further investigate the pricing of GWB in a more realistic setting using different asset pricing models, including those that allow the interest rates and the volatility of returns to be stochastic. Our analysis reveals that 1) GWB has insufficient price discrimination and is susceptible to adverse selection, and 2) valuations can vary substantially depending on which class of models is used for accounting. We believe that the ambiguity in value and the presence of significant risks, which can be challenging to hedge, should be of concern to GWB underwriters, their clients, and regulators. Finally, many problems in finance are Sequential Decision Problems (SDPs) under uncertainty. We find that SDP formulations using commonly used financial metrics or acceptability criteria can lead to dynamically inconsistent strategies. We study the link between objective functions used in SDPs, dynamic consistency, and dynamic programming. We then propose ways to create dynamically consistent formulations. / by Premal Shah.
/ Ph.D.
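Editor's sketch: the abstract above describes valuing a lifetime Guaranteed Withdrawal Benefit in a Black-Scholes economy with simplified mortality. The snippet below is a minimal Monte Carlo illustration of that setup, not the thesis's analytical solution; every parameter (withdrawal rate, fee, hazard rate, horizon) is an assumption chosen for illustration.

```python
import numpy as np

# Minimal Monte Carlo sketch of a lifetime Guaranteed Withdrawal Benefit (GWB).
# All parameters below are illustrative assumptions, not values from the thesis.
np.random.seed(0)

premium  = 100.0        # initial account value
g        = 0.05         # guaranteed annual withdrawal rate (5% of premium, for life)
fee      = 0.01         # annual fee on the account, retained by the insurer
r, sigma = 0.04, 0.20   # risk-free rate and volatility (Black-Scholes economy)
max_age  = 55           # years of projection after retirement
hazard   = 0.02         # constant force of mortality (simplifying assumption)
n_paths  = 100_000

withdrawal = g * premium
survival = np.exp(-hazard * np.arange(1, max_age + 1))   # survival probabilities

account = np.full(n_paths, premium)
liability = np.zeros(n_paths)   # PV of withdrawals the insurer must fund itself
fees = np.zeros(n_paths)        # PV of fees collected while the account is alive

for t in range(1, max_age + 1):
    z = np.random.standard_normal(n_paths)
    account *= np.exp((r - 0.5 * sigma**2) + sigma * z)   # one-year GBM step
    fees += np.exp(-r * t) * survival[t - 1] * fee * account
    account *= (1 - fee)
    shortfall = np.maximum(withdrawal - account, 0.0)     # guarantee kicks in
    liability += np.exp(-r * t) * survival[t - 1] * shortfall
    account = np.maximum(account - withdrawal, 0.0)

print("PV of guarantee payouts:", liability.mean())
print("PV of fees collected:   ", fees.mean())
```

Comparing the two printed quantities gives a crude sense of whether the fee covers the guarantee under these assumed dynamics, which is the kind of sensitivity the thesis examines far more carefully.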
172

Effectiveness and design of sparse process flexibilities

Wei, Yehua, Ph. D. Massachusetts Institute of Technology January 2013 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2013. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 119-121). / The long chain has been an important concept in the design of flexible processes. This design concept, along with other sparse flexibility structures, has been applied by the automotive and other industries as a way to increase flexibility in order to better match available capacities with variable demands. Numerous empirical studies have validated the effectiveness of these structures. However, there is little theory that explains the effectiveness of the long chain, except when the system size is large, i.e., under an asymptotic analysis. Our aim in this thesis is to develop a theory that explains the effectiveness of the long chain and other sparse flexibility structures for finite-size systems. We study the sales of sparse flexibility structures under both stochastic and worst-case demands. From our analysis, we not only provide a rigorous explanation of the effectiveness of the long chain, but also refine guidelines for designing other sparse flexibility structures. Under stochastic demand, we first develop two deterministic properties, supermodularity and decomposition of the long chain, that serve as important building blocks in our analysis. Applying the supermodularity property, we show that the marginal benefit, i.e., the increase in expected sales, increases as the long chain is constructed, and the largest benefit is always achieved when the chain is closed by adding the last arc to the system. Then, applying the decomposition property, we develop four important results for the long chain under IID demands: (i) an effective algorithm to compute the performance of the long chain using only matrix multiplications; (ii) a proof of the optimality of the long chain among all 2-flexibility structures; (iii) a result that the gap between the fill rate of full flexibility and that of the long chain increases with system size, thus implying that the effectiveness of the long chain relative to full flexibility increases as the number of products decreases; (iv) a risk-pooling result implying that the fill rate of a long chain increases with the number of products, but this increase converges to zero exponentially fast. Under worst-case demand, we propose the plant cover index, an index defined by a constrained bipartite vertex cover problem associated with a given flexibility structure. We show that the plant cover index allows for a comparison between the worst-case performances of two flexibility structures based only on their structures, independent of the choice of the uncertainty set or the performance measure. More precisely, we show that if all of the plant cover indices of one structure are greater than or equal to those of another, then the first structure is more robust than the second, i.e., it performs better in the worst case under any symmetric uncertainty set and a large class of performance measures. Applying this relation, we demonstrate the effectiveness of the long chain in worst-case performance, and derive a general heuristic that generates sparse flexibility structures which prove effective under both stochastic and worst-case demands.
Finally, to understand the effect of process flexibility in reducing logistics cost, we study a model where the manufacturer is required to satisfy deterministic product demand at different distribution centers. Under this model, we prove that if the cost of satisfying product demands at distribution centers is independent of production plants or distribution centers, then there always exists a long chain that is optimal among 2-flexibility structures. Moreover, when all plants and distribution centers are located on a line, we provide a characterization for the optimal long chain that minimizes the total transportation cost. The characterization gives rise to a heuristic that finds effective sparse flexibility structures when plants and distribution centers are located on a 2-dimensional plane. / by Yehua Wei. / Ph.D.
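Editor's sketch: the core object in this abstract is the expected sales of a flexibility structure under random demand. The following small simulation is one way to make that concrete: for each sampled demand vector it solves a transportation LP and averages the maximal sales of a dedicated structure, the long chain, and full flexibility. System size, capacities, and the demand distribution are illustrative assumptions, not the thesis's experiments.

```python
import numpy as np
from scipy.optimize import linprog

# Estimate expected sales of three flexibility structures (dedicated, long
# chain, full flexibility) by solving a small transportation LP per demand sample.
rng = np.random.default_rng(1)
n = 6                         # number of plants = number of products
capacity = np.full(n, 1.0)    # plant capacities

def structures(n):
    dedicated = {(i, i) for i in range(n)}
    long_chain = dedicated | {(i, (i + 1) % n) for i in range(n)}
    full = {(i, j) for i in range(n) for j in range(n)}
    return {"dedicated": dedicated, "long chain": long_chain, "full": full}

def max_sales(arcs, demand):
    arcs = sorted(arcs)
    c = -np.ones(len(arcs))                       # maximize total flow
    A, b = [], []
    for i in range(n):                            # plant capacity constraints
        A.append([1.0 if a[0] == i else 0.0 for a in arcs]); b.append(capacity[i])
    for j in range(n):                            # demand constraints
        A.append([1.0 if a[1] == j else 0.0 for a in arcs]); b.append(demand[j])
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=(0, None))
    return -res.fun

samples = [rng.uniform(0.2, 1.8, size=n) for _ in range(500)]  # i.i.d. demands
for name, arcs in structures(n).items():
    avg = np.mean([max_sales(arcs, d) for d in samples])
    print(f"{name:>10s}: expected sales = {avg:.3f}")
```

On instances like this, the long chain typically recovers most of the gap between the dedicated and fully flexible designs, which is the empirical phenomenon the thesis explains theoretically.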
173

Selfish versus coordinated routing in network games

Stier Moses, Nicolás E January 2004 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2004. / Includes bibliographical references (p. 159-170) and index. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / A common assumption in network optimization models is that a central authority controls the whole system. However, in some applications there are independent users, and assuming that they will follow directions given by an authority is not realistic. Individuals will only accept directives if they are in their own interest or if there are incentives that encourage them to do so. Actually, it would be much easier to let users make their own decisions hoping that the outcome will be close to the authority's goals. Our main contribution is to show that, in static networks subject to congestion, users' selfish decisions drive the system close to optimality with respect to various common objectives. This connection to individual decision making proves fruitful; not only does it provide us with insights and additional understanding of network problems, but it also allows us to design approximation algorithms for computationally difficult problems. More specifically, the conflicting objectives of the users prompt the definition of a network game in which they minimize their own latencies. We show that the so-called price of anarchy is small in a quite general setting. Namely, for networks with side constraints and non-convex, non-differentiable, and even discontinuous latency functions, we show that although an arbitrary equilibrium need not be efficient, the total latency of the best equilibrium is close to that of an optimal solution. In addition, when the measure of the solution quality is the maximum latency, equilibria in networks without constraints are also near-optimal. We provide the first analysis of the problem of minimizing that objective in static networks with congestion. / (cont.) As this problem is NP-hard, computing an equilibrium represents a constant-factor approximation algorithm. In some situations, the network authority might still want to do better than in equilibrium. We propose to use a solution that minimizes the total latency, subject to constraints designed to improve the solution's fairness. For several real-world instances, we compute traffic assignments of notably smaller total latency than an equilibrium, yet of similar fairness. Furthermore, we provide theoretical results that explain the conclusions derived from the computational study. / by Nicolás E. Stier-Moses. / Ph.D.
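Editor's note: the "price of anarchy" result summarized above is easiest to see on the classic Pigou network, a worked example not taken from the thesis itself. One unit of traffic chooses between a link with constant latency 1 and a link with latency equal to its own flow; the selfish equilibrium has total latency 1, while the coordinated optimum achieves 3/4, giving the 4/3 bound for linear latencies.

```python
from scipy.optimize import minimize_scalar

# Pigou network: one unit of traffic, two parallel links.
# Link 1 latency: l1(x) = 1 (constant); link 2 latency: l2(x) = x.
def total_latency(x2):          # x2 = flow on link 2, 1 - x2 on link 1
    x1 = 1.0 - x2
    return x1 * 1.0 + x2 * x2

# Wardrop equilibrium: all traffic takes link 2 (its latency never exceeds 1).
eq_cost = total_latency(1.0)                                   # = 1.0

# System optimum: minimize total latency over the split.
opt = minimize_scalar(total_latency, bounds=(0.0, 1.0), method="bounded")
opt_cost = opt.fun                                             # = 3/4 at x2 = 1/2

print("equilibrium total latency:", eq_cost)
print("optimal total latency:    ", round(opt_cost, 4))
print("price of anarchy:         ", round(eq_cost / opt_cost, 4))   # 4/3
```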
174

A Lagrangian decomposition approach to weakly coupled dynamic optimization problems and its applications

Hawkins, Jeffrey Thomas, 1977- January 2003 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2003. / Includes bibliographical references (p. 187-192). / We present a Lagrangian based approach to decoupling weakly coupled dynamic optimization problems for both finite and infinite horizon problems. The main contributions of this dissertation are: (i) We develop methods for obtaining bounds on the optimal cost based on solving low dimensional dynamic programs; (ii) We utilize the resulting low dimensional dynamic programs and combine them using integer programming methods to find feasible policies for the overall problem; (iii) To illustrate the power of our methods we apply them to a large collection of dynamic optimization problems: multiarmed bandits, restless bandits, queueing networks, serial supply chains, linear control problems and on-line auctions, all with promising results. In particular, the resulting policies appear to be near optimal. (iv) We provide an indepth analysis of several aspects of on-line auctions, both from a buyer's and a seller's perspective. Specifically, for buyers we construct a model of on-line auctions using publicly available data and develop an algorithm for optimally bidding in multiple simultaneous auctions. For sellers we construct a model of on-line auctions using publicly available data and demonstrate how a seller can increase the final selling price using dynamic programming. / by Jeffrey Thomas Hawkins. / Ph.D.
175

Robust, risk-sensitive, and data-driven control of Markov Decision Processes

Le Tallec, Yann January 2007 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2007. / Includes bibliographical references (p. 201-211). / Markov Decision Processes (MDPs) model problems of sequential decision-making under uncertainty. They have been studied and applied extensively. Nonetheless, two major barriers still hinder the applicability of MDPs to many more practical decision-making problems: * The decision maker often lacks a reliable MDP model. Since the results obtained by dynamic programming are sensitive to the assumed MDP model, their relevance is challenged by model uncertainty. * The structural and computational results of dynamic programming (which deals with expected performance) have been extended with only limited success to accommodate risk-sensitive decision makers. In this thesis, we investigate two ways of dealing with uncertain MDPs and we develop a new connection between robust control of uncertain MDPs and risk-sensitive control of dynamical systems. The first approach assumes a model of model uncertainty and formulates the control of uncertain MDPs as a problem of decision-making under (model) uncertainty. We establish that most formulations are at least NP-hard and thus suffer from the "curse of uncertainty." The worst-case control of MDPs with rectangular uncertainty sets is equivalent to a zero-sum game between the controller and nature. / (cont.) The structural and computational results for such games make this formulation appealing. By adding a penalty for unlikely parameters, we extend the formulation of worst-case control of uncertain MDPs and mitigate its conservativeness. We show a duality between the penalized worst-case control of uncertain MDPs with rectangular uncertainty and the minimization of a Markovian dynamically consistent convex risk measure of the sample cost. This notion of risk has desirable properties for multi-period decision making, including a new Markovian property that we introduce and motivate. This Markovian property is critical in establishing the equivalence between minimizing some risk measure of the sample cost and solving a certain zero-sum Markov game between the decision maker and nature, and in tackling infinite-horizon problems. An alternative approach to dealing with uncertain MDPs, which avoids the curse of uncertainty, is to exploit observational data directly. Specifically, we estimate the expected performance of any given policy (and its gradient with respect to certain policy parameters) from a training set comprising observed trajectories sampled under a known policy. / (cont.) We propose new value (and value gradient) estimators that are unbiased and have low variance across training sets. We expect our approach to outperform competing approaches when there are few system observations compared to the underlying MDP size, as indicated by numerical experiments. / by Yann Le Tallec. / Ph.D.
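Editor's sketch: the "zero-sum game between the controller and nature" under rectangular uncertainty can be computed by robust value iteration, where nature picks the worst transition law for each state-action pair at every Bellman backup. The toy instance below, with a finite candidate set of transition vectors per state-action pair, is an illustrative assumption rather than anything from the thesis.

```python
import numpy as np

# Robust value iteration under (s,a)-rectangular uncertainty: nature picks the
# worst transition vector from a finite candidate set for each (s, a).
rng = np.random.default_rng(3)
S, A, K = 4, 2, 3          # states, actions, candidate models per (s, a)
gamma = 0.9

reward = rng.uniform(0, 1, size=(S, A))
# models[s, a] is a (K, S) array of candidate transition distributions.
models = rng.dirichlet(np.ones(S), size=(S, A, K))

V = np.zeros(S)
for _ in range(1000):
    Q = np.empty((S, A))
    for s in range(S):
        for a in range(A):
            worst = np.min(models[s, a] @ V)      # nature's best response
            Q[s, a] = reward[s, a] + gamma * worst
    V_new = Q.max(axis=1)                          # controller's best response
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print("robust values:", np.round(V, 3))
print("robust policy:", Q.argmax(axis=1))
```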
176

An analytics approach to hypertension treatment

Epstein, Christina (Christina Lynn) January 2014 (has links)
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2014. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 67-68). / Hypertension is a major public health issue worldwide, affecting more than a third of the adult population and increasing the risk of myocardial infarction, heart failure, stroke, and kidney disease. Current clinical guidelines have yet to achieve consensus and continue to rely on expert opinion for recommendations lacking a sufficient evidence base. In practice, trial and error is typically required to discover a medication combination and dosage that controls a given patient's blood pressure. We propose an analytics approach to hypertension treatment: applying visualization, predictive analytics methods, and optimization to existing electronic health record data to (1) find conjectures parallel and potentially orthogonal to guidelines, (2) hasten response time to therapy, and/or (3) optimize therapy selection. This thesis presents work toward these goals, including data preprocessing and exploration, feature creation, the discovery of clinically relevant clusters based on select blood pressure features, and three development spirals of predictive models and results. / by Christina Epstein. / S.M.
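Editor's sketch: the clustering step mentioned above can be illustrated with a generic k-means workflow on blood-pressure features. The feature set and data below are synthetic placeholders, not the electronic-health-record data or the clustering choices actually used in the thesis.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Discover patient clusters from blood-pressure features (synthetic data).
rng = np.random.default_rng(4)
n_patients = 500
features = np.column_stack([
    rng.normal(140, 20, n_patients),   # mean systolic BP (mmHg)
    rng.normal(85, 12, n_patients),    # mean diastolic BP (mmHg)
    rng.normal(0, 8, n_patients),      # BP change since previous visit
    rng.integers(1, 4, n_patients),    # number of antihypertensive classes
])

X = StandardScaler().fit_transform(features)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

for label in range(4):
    members = features[kmeans.labels_ == label]
    print(f"cluster {label}: {len(members)} patients, "
          f"mean systolic {members[:, 0].mean():.1f} mmHg")
```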
177

A data mining approach for acoustic diagnosis of cardiopulmonary disease

Flietstra, Bryan C January 2008 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2008. / Includes bibliographical references (p. 107-111). / Variations in training and in individual doctors' listening skills make diagnosing a patient via stethoscope-based auscultation problematic. Doctors have therefore turned to more advanced devices such as x-rays and computed tomography (CT) scans to make diagnoses. However, recent advances in lung sound analysis techniques allow auscultation to be performed with an array of microphones, which send the lung sounds to a computer for processing. The computer automatically identifies adventitious sounds using time-expanded waveform analysis and allows for a more precise auscultation. We investigate three data mining techniques for diagnosing a patient based solely on the sounds heard within the chest by a "smart" stethoscope. We achieve excellent recognition performance by using k nearest neighbors, neural networks, and support vector machines to make classifications in pair-wise comparisons. We also extend the research to a multi-class scenario and are able to separate patients with interstitial pulmonary fibrosis with 80% accuracy. Adding clinical data further improves recognition performance. Our results show that computerized lung auscultation offers a low-cost, non-invasive diagnostic procedure with better clinical utility for doctors, especially in situations where x-rays and CT scans are not available. / by Bryan C. Flietstra. / S.M.
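Editor's sketch: the pairwise-classification setup named above (k nearest neighbors, neural networks, support vector machines) maps directly onto a standard cross-validated comparison. The acoustic features here are synthetic stand-ins, not the lung-sound data analyzed in the thesis.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Pairwise classification of synthetic acoustic features for two patient groups.
rng = np.random.default_rng(5)
n = 200
healthy = rng.normal(0.0, 1.0, size=(n, 10))
fibrosis = rng.normal(0.7, 1.0, size=(n, 10))   # shifted feature distribution
X = np.vstack([healthy, fibrosis])
y = np.array([0] * n + [1] * n)

classifiers = {
    "k nearest neighbors": KNeighborsClassifier(n_neighbors=5),
    "support vector machine": SVC(kernel="rbf", C=1.0),
    "neural network": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name:>24s}: {scores.mean():.2%} accuracy")
```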
178

From data to decisions in healthcare : an optimization perspective

Weinstein, Alexander Michael January 2017 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2017. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 107-110). / The past few decades have seen many methodological and technological advances in optimization, statistics, and machine learning. What is still not well understood is how to combine these tools to take data as inputs and give decisions as outputs. The healthcare arena offers fertile ground for improvement in data-driven decision-making. Every day, medical researchers develop and test novel therapies via randomized clinical trials, which, when designed efficiently, can provide evidence for efficacy and harm. Over the last two decades, electronic medical record systems have become increasingly prevalent in hospitals and other care settings. The growth of these and other data sources, combined with the aforementioned advances in the field of operations research, enables new modes of study and analysis in healthcare. In this thesis, we take a data-driven approach to decision-making in healthcare through the lenses of optimization, statistics, and machine learning. In Parts I and II of the thesis, we apply mixed-integer optimization to enhance the design and analysis of clinical trials, a critical step in the approval process for innovative medical treatments. In Part I, we present a robust mixed-integer optimization algorithm for allocating subjects to treatment groups in sequential experiments. By improving covariate balance across treatment groups, the proposed method yields statistical power at least as high as, and sometimes significantly higher than, state-of-the-art covariate-adaptive randomization approaches. In Part II, we present a mixed-integer optimization approach for identifying exceptional responders in randomized trial data. In practice, this approach can be used to extract added value from costly clinical trials that may have failed to identify a positive treatment effect for the general study population, but could be beneficial to a subgroup of the population. In Part III, we present a personalized approach to diabetes management using electronic medical records. The approach is based on a k-nearest neighbors algorithm. By harnessing the power of optimization and machine learning, we can improve patient outcomes and move from the one-size-fits-all approach that dominates the medical landscape today to a personalized, patient-centered approach. / by Alexander Michael Weinstein. / Ph. D.
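Editor's sketch: the k-nearest-neighbors idea in Part III can be illustrated generically: to recommend a therapy for a new patient, compare the outcomes of the k most similar past patients under each candidate therapy. Patients, therapies, and outcomes below are synthetic placeholders, not the medical-record data or the exact algorithm of the thesis.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# kNN-style personalization on synthetic data: pick the therapy whose k most
# similar historical patients had the best (lowest) outcomes.
rng = np.random.default_rng(6)
n, k = 1000, 25
X = rng.normal(size=(n, 5))                  # patient features (age, HbA1c, ...)
therapy = rng.integers(0, 3, size=n)         # 3 candidate therapies
outcome = rng.normal(size=n) - 0.3 * therapy * X[:, 1]   # lower is better

new_patient = rng.normal(size=(1, 5))
for t in range(3):
    idx = np.where(therapy == t)[0]
    nn = NearestNeighbors(n_neighbors=k).fit(X[idx])
    _, neighbors = nn.kneighbors(new_patient)
    avg = outcome[idx[neighbors[0]]].mean()
    print(f"therapy {t}: mean outcome among {k} nearest matches = {avg:.3f}")

# The therapy with the best neighbor outcome would be the recommendation.
```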
179

The case for coordination : equity, efficiency and passenger impacts in air traffic flow management

Fearing, Douglas (Douglas Stephen) January 2010 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (p. 121-123). / In this thesis, we develop multi-resource integer optimization formulations for coordinating Traffic Flow Management (TFM) programs with equity considerations. Our multi-resource approaches ignore aircraft connectivity between flights, but allow a single flight to utilize multiple capacity-controlled resources. For example, when both Ground Delay Programs (GDPs) and Airspace Flow Programs (AFPs) are simultaneously in effect, a single flight may be impacted by a GDP and one or more AFPs. We show that due to the similarity with current practice, our models can be applied directly in the current Collaborative Decision-Making (CDM) environment. In the first part of the thesis, we develop these formulations as extensions of a well-studied, existing nationwide TFM formulation and compare them to approaches utilized in practice. In order to make these comparisons, we first develop a metric, Time-Order Deviation, for evaluating schedule fairness in the multi-resource setting. We use this metric to compare approaches in terms of both schedule fairness and allocated flight delays. Using historical scenarios derived from 2007 data, we show that, even with limited interaction between TFM programs, our Ration-by-Schedule Exponential Penalty model can improve the utilization of air transportation system resources. Skipping ahead, in the last part of the thesis, we develop a three-stage sequential evaluation procedure in order to analyze the TFM allocation process in the context of a dynamic CDM environment. To perform this evaluation we develop an optimization-based airline disruption response model, which utilizes passenger itinerary data to approximate the underlying airline objective, resulting in estimated flight cancellations and aircraft swaps between flight legs. Using this three-stage sequential evaluation procedure, we show that the benefits of an optimization-based allocation are likely overstated based on a simple flight-level analysis. The difference between these results and those in the first part of the thesis suggests the importance of the multi-stage evaluation procedure. Our results also suggest that there may be significant benefits to incorporating aircraft flow balance considerations into the Federal Aviation Administration's (FAA's) TFM allocation procedures. The passenger itinerary data required for the airline disruption response model in the last part of the thesis are not publicly available, thus in the second part of the thesis, we develop a method for modeling passenger travel and delays. In our approach for estimating historical passenger travel, we develop a discrete choice model trained on one quarter of proprietary booking data to disaggregate publicly available passenger demand. Additionally, we extend a network-based heuristic for calculating passenger delays to estimate historical passenger delays for 2007. To demonstrate the value in this approach, we investigate how passenger delays are affected by various features of the itinerary, such as carrier and time of travel. 
Beyond its applications in this thesis, we believe the estimated passenger itinerary data will have broad applicability, allowing a passenger-centric focus to be incorporated in many facets of air transportation research. To facilitate these endeavors, we have publicly shared our estimated passenger itinerary data for 2007. / by Douglas Fearing. / Ph.D.
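Editor's sketch: the contrast between ration-by-schedule and an optimization-based allocation can be seen on a single capacity-reduced resource with passenger weights. The flights, times, and passenger counts below are illustrative assumptions, and this single-resource comparison is only loosely inspired by the multi-resource formulations and the Time-Order Deviation analysis in the thesis.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# One ground-delay program: compare ration-by-schedule (first scheduled, first
# served) with a passenger-delay-minimizing slot assignment.
sched = np.array([0, 5, 10, 12, 15, 20, 22, 30], dtype=float)   # scheduled times (min)
pax = np.array([180, 45, 120, 60, 200, 30, 150, 90])            # passengers per flight
slots = np.arange(len(sched)) * 10.0                            # reduced capacity: 1 slot / 10 min

# Ration-by-schedule: flights take slots in scheduled order.
rbs_delay = np.maximum(slots - sched, 0.0)

# Optimization-based allocation: minimize passenger-minutes of delay, never
# assigning a flight to a slot earlier than its scheduled departure.
delay = slots[None, :] - sched[:, None]
cost = np.where(delay >= 0, pax[:, None] * delay, 1e9)
rows, cols = linear_sum_assignment(cost)

print("RBS passenger delay:      ", int((pax * rbs_delay).sum()), "pax-min")
print("Optimized passenger delay:", int(cost[rows, cols].sum()), "pax-min")
```

The gap between the two totals is the kind of flight-level benefit that, as the abstract notes, can be overstated once airline responses and equity are accounted for.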
180

Regression under a modern optimization lens

King, Angela, Ph. D. Massachusetts Institute of Technology January 2015 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2015. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 131-139). / In the last twenty-five years (1990-2014), algorithmic advances in integer optimization combined with hardware improvements have resulted in an astonishing 200 billion factor speedup in solving mixed integer optimization (MIO) problems. The common mindset of MIO as theoretically elegant but practically irrelevant is no longer justified. In this thesis, we propose a methodology for regression modeling that is based on optimization techniques and centered around MIO. In Part I we propose a method to select a subset of variables to include in a linear regression model using continuous and integer optimization. Despite the natural formulation of subset selection as an optimization problem with an ℓ0-norm constraint, current methods for subset selection do not attempt to use integer optimization to select the best subset. We show that, although this problem is non-convex and NP-hard, it can be practically solved for large-scale problems. We numerically demonstrate that our approach outperforms other sparse learning procedures. In Part II of the thesis, we build on Part I to modify the objective function and include constraints that produce linear regression models with other desirable properties, in addition to sparsity. We develop a unified framework based on MIO which aims to algorithmize the process of building a high-quality linear regression model. This is the only methodology we are aware of that constructs models imposing statistical properties simultaneously rather than sequentially. Finally, we turn our attention to logistic regression modeling. The goal of Part III of the thesis is to efficiently solve the mixed integer convex optimization problem of logistic regression with cardinality constraints to provable optimality. We develop a tailored algorithm to solve this challenging problem and demonstrate its speed and performance. We then show how this method can be used within the framework of Part II, thereby also creating an algorithmic approach to fitting high-quality logistic regression models. In each part of the thesis, we illustrate the effectiveness of our proposed approach on both real and synthetic datasets. / by Angela King. / Ph. D.
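Editor's sketch: the ℓ0-constrained ("best subset") problem attacked with MIO in Part I can be made concrete on a small synthetic instance by brute-force enumeration and compared with the lasso. The enumeration below is only an illustration of the objective; the MIO machinery that makes the problem tractable at scale is not reproduced here.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import Lasso

# Best subset of size k by enumeration on a small synthetic regression problem.
rng = np.random.default_rng(7)
n, p, k = 100, 10, 3
beta_true = np.zeros(p); beta_true[:k] = [3.0, -2.0, 1.5]
X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(scale=0.5, size=n)

best_rss, best_subset = np.inf, None
for subset in combinations(range(p), k):
    Xs = X[:, subset]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = ((y - Xs @ beta) ** 2).sum()
    if rss < best_rss:
        best_rss, best_subset = rss, subset

lasso = Lasso(alpha=0.1).fit(X, y)
print("best subset of size 3:", best_subset)
print("lasso support:        ", tuple(np.flatnonzero(lasso.coef_)))
```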
