401
Competitive multi-period pricing for perishable products / Sood, Anshul, 1978- / January 2004
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2004. / Includes bibliographical references (p. 161-165). / Pricing of a perishable product over a multi-period time horizon is a challenging problem under an oligopolistic market framework. We propose and study a model for multi-period pricing in an oligopolistic market for a single perishable product. Each participating seller in the market has a fixed inventory of the product at the beginning of the time horizon, and additional production is not an available option. Any unsold inventory at the end of the horizon is worthless. The sellers do not have the option of periodically reviewing and replenishing their inventory. Such a model is appropriate for modelling competition in situations where inventory replenishment decisions are made over a longer time horizon and can be considered exogenous to the pricing decision process. This kind of setup can be used to model pricing of air fares, hotel reservations, bandwidth in communication networks, etc. In this thesis, we study two issues related to multi-period pricing of a perishable product. First, we study the competitive aspect of the problem. Second, we study the setup where the demand function for each seller has some associated uncertainty. We assume that the sellers would like to adopt a policy that is robust to adverse uncertain circumstances. We discuss the challenges associated with the analysis for this model. / (cont.) We study non-cooperative Nash equilibrium policies for the sellers. We discuss why known results from the literature do not extend to this model. We introduce an optimization approach using results from variational inequality theory and robust optimization to establish existence of the pricing equilibrium policy and comment on its uniqueness. We also introduce an iterative learning algorithm for computing the equilibrium policy and analyze its convergence. We study how much efficiency (in terms of total system profit) is lost due to competition. Finally, we illustrate our results with some numerical examples and discuss some insights. / by Anshul Sood. / Ph.D.
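As a rough illustration of the kind of equilibrium computation described above, the sketch below runs best-response (diagonal) iteration for a toy two-seller, finite-horizon pricing game with fixed inventories. The linear demand function, the price grid, and all parameter values are assumptions made for this example; the thesis's actual model, its variational-inequality analysis, and its learning algorithm are more general.

```python
import itertools
import numpy as np

# Illustrative two-seller, multi-period pricing game with fixed inventories.
# The linear demand model, price grid, and parameters are assumptions for the sketch.
T = 3                               # number of periods
inventory = [30.0, 30.0]            # fixed starting inventory for each seller
grid = np.linspace(1.0, 5.0, 9)     # candidate prices

def demand(p_own, p_other):
    """Assumed linear demand: decreasing in own price, increasing in rival's."""
    return max(0.0, 12.0 - 3.0 * p_own + 1.5 * p_other)

def revenue(path_own, path_other, stock):
    """Simulate sales over the horizon, capped by remaining inventory."""
    total, remaining = 0.0, stock
    for p_own, p_other in zip(path_own, path_other):
        q = min(demand(p_own, p_other), remaining)
        total += p_own * q
        remaining -= q
    return total

def best_response(path_other, stock):
    """Exhaustive search over own price paths (fine for tiny T and grid)."""
    return max(itertools.product(grid, repeat=T),
               key=lambda path: revenue(path, path_other, stock))

# Best-response (diagonal) iteration: a crude stand-in for an equilibrium-learning scheme.
paths = [tuple([3.0] * T), tuple([3.0] * T)]
for _ in range(50):
    new_paths = [best_response(paths[1], inventory[0]),
                 best_response(paths[0], inventory[1])]
    if new_paths == paths:
        break
    paths = new_paths

print("approximate equilibrium price paths:", paths)
```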
402
The cross section of expected stock returns revisited / Sursock, Jean-Paul, 1974- / January 2000
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2000. / Also available online at the DSpace at MIT website. / Includes bibliographical references (leaves 60-61). / We review and extend two important empirical financial studies: Fama and MacBeth [1973] and Fama and French [1992]. Fama and MacBeth [1973] sort stocks on the New York Stock Exchange into 20 portfolios based on their market [beta]. They test for, and conclude that, [beta] does in fact explain the cross-sectional variation in average stock returns for the 1926-1968 period. After we replicate the results in their study, we extend their work to the most current data. The coefficients and t-statistics for five-year sub-periods exhibit roughly the same properties during the last half of the century as they did during the period originally studied. Fama and MacBeth report statistically significant results for their overall period (1935-1968) as well. When we run the same test on all the data currently available (1935-1998), we find that the t-statistics are lower, instead of higher, than they were for the 1935-1968 period. We run several variations on the Fama and MacBeth [1973] paper. For example, we vary the exchanges (NYSE, AMEX, and/or NASDAQ) and indexes (value-weighted or equally-weighted) employed. We also study the effect of using robust (least absolute deviation) regressions instead of ordinary least squares. In all cases, the results are similar to those described above. Fama and French [1992] show that, when size is controlled for, market [beta] does not explain the cross-sectional variation in returns for the 1963-1990 period. They find that two other variables, size (market equity) and book-to-market equity, combine to capture the cross-sectional variation in average stock returns during the same period. After replicating their results, we update the study to the most current data. We find that the t-statistics for size and book-to-market equity are more significant during the 1963-1998 period than they were for the 1963-1990 period. We also confirm that [beta] is statistically insignificant during the 1963-1998 period. / by Jean-Paul Sursock. / S.M.
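The two-pass procedure this abstract replicates is concrete enough to sketch. The toy example below simulates a return panel (standing in for the CRSP data used in the thesis), estimates betas in a first-pass time-series regression, and then runs the Fama-MacBeth month-by-month cross-sectional regressions, reporting the average slope on beta and its t-statistic. All simulated quantities and parameter values are assumptions for illustration.

```python
import numpy as np

# Toy illustration (simulated data, not CRSP) of the Fama-MacBeth two-pass procedure.
rng = np.random.default_rng(0)
n_stocks, n_months = 200, 240
market = rng.normal(0.01, 0.04, n_months)                 # market excess returns
true_beta = rng.uniform(0.5, 1.5, n_stocks)
returns = (true_beta[:, None] * market[None, :]
           + rng.normal(0, 0.06, (n_stocks, n_months)))   # stock excess returns

# Pass 1: time-series regression of each stock on the market to estimate beta.
X = np.column_stack([np.ones(n_months), market])
beta_hat = np.linalg.lstsq(X, returns.T, rcond=None)[0][1]

# Pass 2: each month, a cross-sectional regression of returns on estimated betas.
Z = np.column_stack([np.ones(n_stocks), beta_hat])
gammas = []
for t in range(n_months):
    coef = np.linalg.lstsq(Z, returns[:, t], rcond=None)[0]
    gammas.append(coef[1])          # monthly slope on beta ("price of beta risk")
gammas = np.array(gammas)

# Fama-MacBeth estimate and t-statistic: the time-series mean of the monthly slopes.
t_stat = gammas.mean() / (gammas.std(ddof=1) / np.sqrt(n_months))
print(f"avg risk premium on beta: {gammas.mean():.4f}, t-stat: {t_stat:.2f}")
```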
403
Analysis of employee stock options and guaranteed withdrawal benefits for life / Shah, Premal (Premal Y.) / January 2008
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2008. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Includes bibliographical references (p. 215-224). / In this thesis we study three problems related to financial modeling. First, we study the problem of pricing Employee Stock Options (ESOs) from the point of view of the issuing company. Since an employee cannot trade or effectively hedge ESOs, she exercises them to maximize a subjective criterion of value. Modeling this exercise behavior is key to pricing ESOs. We argue that ESO exercises should not be modeled on a one-by-one basis, as is commonly done, but at a portfolio level, because exercises related to different ESOs that an employee holds would be coupled. Using utility-based models, we also show that such coupled exercise behavior leads to lower average ESO costs for commonly used utility functions such as power and exponential utilities. Unfortunately, utility-based models do not lead to tractable solutions for finding costs associated with ESOs. We propose a new risk-management-based approach to model exercise behavior based on mean-variance portfolio maximization. The resulting exercise behavior is both intuitive and leads to a computationally tractable model for finding ESO exercises and pricing ESOs as a portfolio. We also study a special variant of this risk-management-based exercise model, which leads to a decoupling of the ESO exercises, and then obtain analytical bounds on the implied cost of an ESO for the employer in this case. Next, we study Guaranteed Withdrawal Benefits (GWB) for life, a recent and popular product that many insurance companies have offered for retirement planning. The GWB feature promises the investor increasing withdrawals over her lifetime and is an exotic option that bears financial and mortality-related risks for the insurance company. / (cont.) We first analyze a continuous-time version of this product in a Black-Scholes economy with simplifying assumptions on population mortality and obtain an analytical solution for the product value. This simple analysis reveals the high sensitivity the product bears to several risk factors. We then further investigate the pricing of GWB in a more realistic setting using different asset pricing models, including those that allow the interest rates and the volatility of returns to be stochastic. Our analysis reveals that 1) GWB has insufficient price discrimination and is susceptible to adverse selection and 2) valuations can vary substantially depending on which class of models is used for accounting. We believe that the ambiguity in value and the presence of significant risks, which can be challenging to hedge, should create concerns for the GWB underwriters, their clients, as well as the regulators. Finally, many problems in finance are Sequential Decision Problems (SDPs) under uncertainty. We find that SDP formulations using commonly used financial metrics or acceptability criteria can lead to dynamically inconsistent strategies. We study the link between objective functions used in SDPs, dynamic consistency, and dynamic programming. We then propose ways to create dynamically consistent formulations. / by Premal Shah. / Ph.D.
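A rough Monte Carlo sketch of the simplified Black-Scholes GWB valuation described above: the account follows geometric Brownian motion net of an annual fee, the policyholder withdraws a fixed guaranteed amount each year while alive, and the insurer pays the shortfall once the account is exhausted. The fee, withdrawal rate, mortality curve, and market parameters are all assumptions; the thesis's analytical solution and its stochastic-rate and stochastic-volatility extensions are not reproduced here.

```python
import numpy as np

# Monte Carlo sketch of a lifetime Guaranteed Withdrawal Benefit in a Black-Scholes
# economy with a simple deterministic mortality curve; all parameters are assumptions.
rng = np.random.default_rng(1)
n_paths, start_age, max_age = 20000, 65, 110
r, sigma, fee = 0.03, 0.18, 0.01          # risk-free rate, volatility, annual fee
premium, g = 100.0, 0.05                  # initial account, guaranteed withdrawal rate
withdrawal = g * premium                  # fixed lifetime withdrawal amount

years = max_age - start_age
# Assumed survival curve: probability of being alive t years after issue.
survival = np.exp(-0.0002 * np.exp(0.09 * (start_age + np.arange(1, years + 1))))

account = np.full(n_paths, premium)
pv_claims = np.zeros(n_paths)
pv_fees = np.zeros(n_paths)
for t in range(1, years + 1):
    z = rng.standard_normal(n_paths)
    account *= np.exp((r - fee - 0.5 * sigma**2) + sigma * z)   # one-year GBM step
    discount = np.exp(-r * t) * survival[t - 1]                 # discount x survival
    pv_fees += discount * fee * account                         # insurer's fee income
    shortfall = np.maximum(withdrawal - account, 0.0)           # insurer covers the gap
    pv_claims += discount * shortfall
    account = np.maximum(account - withdrawal, 0.0)             # policyholder withdraws

print(f"PV guarantee claims: {pv_claims.mean():.2f}, PV fees: {pv_fees.mean():.2f}")
```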
404
Effectiveness and design of sparse process flexibilities / Wei, Yehua, Ph. D. Massachusetts Institute of Technology / January 2013
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2013. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 119-121). / The long chain has been an important concept in the design of flexible processes. This design concept, as well as other sparse flexibility structures, has been applied by the automotive and other industries as a way to increase flexibility in order to better match available capacities with variable demands. Numerous empirical studies have validated the effectiveness of these structures. However, there is little theory that explains the effectiveness of the long chain, except when the system size is large, i.e., by applying an asymptotic analysis. Our attempt in this thesis is to develop a theory that explains the effectiveness of the long chain and other sparse flexibility structures for finite-size systems. We study the sales of sparse flexibility structures under both stochastic and worst-case demands. From our analysis, we not only provide a rigorous explanation of the effectiveness of the long chain, but also refine guidelines for designing other sparse flexibility structures. Under stochastic demand, we first develop two deterministic properties, supermodularity and decomposition of the long chain, that serve as important building blocks in our analysis. Applying the supermodularity property, we show that the marginal benefit, i.e., the increase in expected sales, increases as the long chain is constructed, and the largest benefit is always achieved when the chain is closed by adding the last arc to the system. Then, applying the decomposition property, we develop four important results for the long chain under IID demands: (i) an effective algorithm to compute the performance of the long chain using only matrix multiplications; (ii) a proof of the optimality of the long chain among all 2-flexibility structures; (iii) a result that the gap between the fill rate of full flexibility and that of the long chain increases with system size, thus implying that the effectiveness of the long chain relative to full flexibility increases as the number of products decreases; (iv) a risk-pooling result implying that the fill rate of a long chain increases with the number of products, but this increase converges to zero exponentially fast. Under worst-case demand, we propose the plant cover index, an index defined by a constrained bipartite vertex cover problem associated with a given flexibility structure. We show that the plant cover index allows for a comparison between the worst-case performances of two flexibility structures based only on their structures and is independent of the choice of the uncertainty set or the choice of the performance measure. More precisely, we show that if all of the plant cover indices of one structure are greater than or equal to the plant cover indices of the other structure, then the first structure is more robust than the second one, i.e., it performs better in the worst case under any symmetric uncertainty set and a large class of performance measures. Applying this relation, we demonstrate the effectiveness of the long chain in worst-case performance, and derive a general heuristic that generates sparse flexibility structures which are tested to be effective under both stochastic and worst-case demands.
Finally, to understand the effect of process flexibility in reducing logistics cost, we study a model where the manufacturer is required to satisfy deterministic product demand at different distribution centers. Under this model, we prove that if the cost of satisfying product demands at distribution centers is independent of production plants or distribution centers, then there always exists a long chain that is optimal among 2-flexibility structures. Moreover, when all plants and distribution centers are located on a line, we provide a characterization for the optimal long chain that minimizes the total transportation cost. The characterization gives rise to a heuristic that finds effective sparse flexibility structures when plants and distribution centers are located on a 2-dimensional plane. / by Yehua Wei. / Ph.D.
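To make the comparison between flexibility designs concrete, the sketch below estimates the expected sales of a dedicated design, the long chain, and full flexibility by sampling demands and solving a small linear program (a max-flow on the bipartite structure) for each sample. The system size, capacities, and the IID normal demand model are assumptions for illustration; the thesis's exact matrix-multiplication algorithm for the long chain is not shown.

```python
import numpy as np
from scipy.optimize import linprog

# Monte Carlo sketch comparing expected sales of a dedicated design, the long chain,
# and full flexibility; sizes, capacities, and the demand law are assumptions.
rng = np.random.default_rng(2)
n = 6                                   # plants = products
cap = np.full(n, 100.0)                 # plant capacities

def expected_sales(arcs, n_samples=1000):
    """Expected maximum sales of a bipartite structure given as (plant, product) arcs."""
    arcs = list(arcs)
    total = 0.0
    for _ in range(n_samples):
        demand = np.maximum(rng.normal(100.0, 30.0, n), 0.0)   # assumed IID demand
        # LP: maximize total shipped flow subject to capacity and demand limits.
        A = np.zeros((2 * n, len(arcs)))
        for k, (i, j) in enumerate(arcs):
            A[i, k] = 1.0        # plant i capacity row
            A[n + j, k] = 1.0    # product j demand row
        b = np.concatenate([cap, demand])
        res = linprog(c=-np.ones(len(arcs)), A_ub=A, b_ub=b, method="highs")
        total += -res.fun
    return total / n_samples

dedicated = [(i, i) for i in range(n)]
long_chain = dedicated + [(i, (i + 1) % n) for i in range(n)]
full_flex = [(i, j) for i in range(n) for j in range(n)]

for name, arcs in [("dedicated", dedicated), ("long chain", long_chain), ("full", full_flex)]:
    print(f"{name:10s} expected sales: {expected_sales(arcs):.1f}")
```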
405
Selfish versus coordinated routing in network games / Stier Moses, Nicolás E / January 2004
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2004. / Includes bibliographical references (p. 159-170) and index. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / A common assumption in network optimization models is that a central authority controls the whole system. However, in some applications there are independent users, and assuming that they will follow directions given by an authority is not realistic. Individuals will only accept directives if they are in their own interest or if there are incentives that encourage them to do so. Actually, it would be much easier to let users make their own decisions hoping that the outcome will be close to the authority's goals. Our main contribution is to show that, in static networks subject to congestion, users' selfish decisions drive the system close to optimality with respect to various common objectives. This connection to individual decision making proves fruitful; not only does it provide us with insights and additional understanding of network problems, but it also allows us to design approximation algorithms for computationally difficult problems. More specifically, the conflicting objectives of the users prompt the definition of a network game in which they minimize their own latencies. We show that the so-called price of anarchy is small in a quite general setting. Namely, for networks with side constraints and non-convex, non-differentiable, and even discontinuous latency functions, we show that although an arbitrary equilibrium need not be efficient, the total latency of the best equilibrium is close to that of an optimal solution. In addition, when the measure of the solution quality is the maximum latency, equilibria in networks without constraints are also near-optimal. We provide the first analysis of the problem of minimizing that objective in static networks with congestion. / (cont.) As this problem is NP-hard, computing an equilibrium represents a constant-factor approximation algorithm. In some situations, the network authority might still want to do better than in equilibrium. We propose to use a solution that minimizes the total latency, subject to constraints designed to improve the solution's fairness. For several real-world instances, we compute traffic assignments of notably smaller total latency than an equilibrium, yet of similar fairness. Furthermore, we provide theoretical results that explain the conclusions derived from the computational study. / by Nicolás E. Stier-Moses. / Ph.D.
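The price-of-anarchy comparison between selfish and coordinated routing can be seen in Pigou's classical two-link example, a standard textbook illustration that is not taken from the thesis. The sketch below computes the user-equilibrium cost and the system-optimal cost for one unit of traffic and reports their ratio (4/3 here, matching the well-known bound for linear latencies).

```python
from scipy.optimize import minimize_scalar

# Pigou's two-link example: one unit of traffic routed over two parallel links.
# Link 1 latency grows with its flow; link 2 has constant latency 1.
l1 = lambda x: x          # latency of link 1 as a function of its flow
l2 = lambda x: 1.0        # latency of link 2

def total_latency(x):
    """System cost when a fraction x of the traffic uses link 1."""
    return x * l1(x) + (1.0 - x) * l2(1.0 - x)

# User equilibrium: traffic shifts to link 1 as long as it is not slower than link 2,
# so all flow ends up on link 1 (l1(1) = 1 = l2).
equilibrium_cost = total_latency(1.0)

# System optimum: minimize total latency over the split x in [0, 1].
opt = minimize_scalar(total_latency, bounds=(0.0, 1.0), method="bounded")
optimal_cost = opt.fun

print(f"equilibrium cost: {equilibrium_cost:.3f}")
print(f"optimal cost:     {optimal_cost:.3f}")
print(f"price of anarchy: {equilibrium_cost / optimal_cost:.3f}")   # 4/3 here
```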
406
A Lagrangian decomposition approach to weakly coupled dynamic optimization problems and its applications / Hawkins, Jeffrey Thomas, 1977- / January 2003
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2003. / Includes bibliographical references (p. 187-192). / We present a Lagrangian-based approach to decoupling weakly coupled dynamic optimization problems for both finite and infinite horizon problems. The main contributions of this dissertation are: (i) We develop methods for obtaining bounds on the optimal cost based on solving low-dimensional dynamic programs; (ii) We utilize the resulting low-dimensional dynamic programs and combine them using integer programming methods to find feasible policies for the overall problem; (iii) To illustrate the power of our methods, we apply them to a large collection of dynamic optimization problems: multiarmed bandits, restless bandits, queueing networks, serial supply chains, linear control problems, and on-line auctions, all with promising results. In particular, the resulting policies appear to be near optimal. (iv) We provide an in-depth analysis of several aspects of on-line auctions, both from a buyer's and a seller's perspective. Specifically, for buyers we construct a model of on-line auctions using publicly available data and develop an algorithm for optimally bidding in multiple simultaneous auctions. For sellers we construct a model of on-line auctions using publicly available data and demonstrate how a seller can increase the final selling price using dynamic programming. / by Jeffrey Thomas Hawkins. / Ph.D.
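The decoupling idea can be sketched on a toy weakly coupled problem: N identical projects of which at most M may be activated per period. Dualizing the coupling constraint with a multiplier decomposes the problem into single-project dynamic programs, and minimizing the resulting dual function over the multiplier gives an upper bound on the optimal value, in the spirit of contribution (i) above. The project dynamics, rewards, and parameters below are assumptions for illustration.

```python
import numpy as np

# Lagrangian bound for a toy weakly coupled problem: N identical projects, at most M
# activated per period.  Dualizing the coupling constraint with multiplier lam
# decouples the problem into single-project dynamic programs.
S, gamma = 5, 0.95                 # per-project states 0..S-1, discount factor
N, M = 10, 2                       # number of projects, activation budget per period

def single_project_value(lam, iters=2000):
    """Value iteration for one project with activation price lam; returns V(0)."""
    V = np.zeros(S)
    for _ in range(iters):
        V_new = np.empty(S)
        for s in range(S):
            s_up = min(s + 1, S - 1)
            # passive: no reward, condition drifts upward with probability 0.6
            passive = gamma * (0.6 * V[s_up] + 0.4 * V[s])
            # active: collect reward s, pay the multiplier, state resets to 0
            active = s - lam + gamma * V[0]
            V_new[s] = max(passive, active)
        V = V_new
    return V[0]

# Dual bound: L(lam) = N * V_lam(start state) + lam * M / (1 - gamma) upper-bounds the
# optimal coupled value; minimize it over a grid of multipliers.
lams = np.linspace(0.0, 5.0, 26)
bounds = [N * single_project_value(lam) + lam * M / (1 - gamma) for lam in lams]
best = int(np.argmin(bounds))
print(f"best multiplier: {lams[best]:.2f}, Lagrangian upper bound: {bounds[best]:.1f}")
```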
407
Robust, risk-sensitive, and data-driven control of Markov Decision Processes / Le Tallec, Yann / January 2007
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2007. / Includes bibliographical references (p. 201-211). / Markov Decision Processes (MDPs) model problems of sequential decision-making under uncertainty. They have been studied and applied extensively. Nonetheless, there are two major barriers that still hinder the applicability of MDPs to many more practical decision making problems: * The decision maker is often lacking a reliable MDP model. Since the results obtained by dynamic programming are sensitive to the assumed MDP model, their relevance is challenged by model uncertainty. * The structural and computational results of dynamic programming (which deals with expected performance) have been extended with only limited success to accommodate risk-sensitive decision makers. In this thesis, we investigate two ways of dealing with uncertain MDPs and we develop a new connection between robust control of uncertain MDPs and risk-sensitive control of dynamical systems. The first approach assumes a model of model uncertainty and formulates the control of uncertain MDPs as a problem of decision-making under (model) uncertainty. We establish that most formulations are at least NP-hard and thus suffer from the "curse of uncertainty." The worst-case control of MDPs with rectangular uncertainty sets is equivalent to a zero-sum game between the controller and nature. / (cont.) The structural and computational results for such games make this formulation appealing. By adding a penalty for unlikely parameters, we extend the formulation of worst-case control of uncertain MDPs and mitigate its conservativeness. We show a duality between the penalized worst-case control of uncertain MDPs with rectangular uncertainty and the minimization of a Markovian dynamically consistent convex risk measure of the sample cost. This notion of risk has desirable properties for multi-period decision making, including a new Markovian property that we introduce and motivate. This Markovian property is critical in establishing the equivalence between minimizing some risk measure of the sample cost and solving a certain zero-sum Markov game between the decision maker and nature, and to tackling infinite-horizon problems. An alternative approach to dealing with uncertain MDPs, which avoids the curse of uncertainty, is to exploit observational data directly. Specifically, we estimate the expected performance of any given policy (and its gradient with respect to certain policy parameters) from a training set comprising observed trajectories sampled under a known policy. / (cont.) We propose new value (and value gradient) estimators that are unbiased and have low training-set-to-training-set variance. We expect our approach to outperform competing approaches when there are few system observations compared to the underlying MDP size, as indicated by numerical experiments. / by Yann Le Tallec. / Ph.D.
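The zero-sum-game view of worst-case control with rectangular uncertainty sets mentioned above can be sketched with robust value iteration on a tiny MDP: for each state-action pair, nature independently picks the transition model in the uncertainty set that minimizes the continuation value. The randomly generated toy MDP and its finite candidate sets below are assumptions for illustration.

```python
import numpy as np

# Robust value iteration for an MDP with (s, a)-rectangular uncertainty: nature picks,
# independently for each state-action pair, the worst transition model from a small
# candidate set.  The toy MDP and its candidate models are assumptions.
n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(3)

rewards = rng.uniform(0.0, 1.0, (n_states, n_actions))

def random_dist():
    p = rng.random(n_states)
    return p / p.sum()

# Uncertainty set: 3 candidate transition distributions per (state, action) pair.
P_set = [[[random_dist() for _ in range(3)]
          for _ in range(n_actions)]
         for _ in range(n_states)]

V = np.zeros(n_states)
for _ in range(500):
    V_new = np.empty(n_states)
    for s in range(n_states):
        q_values = []
        for a in range(n_actions):
            # Robust backup: nature chooses the candidate model minimizing the value.
            worst = min(p @ V for p in P_set[s][a])
            q_values.append(rewards[s, a] + gamma * worst)
        V_new[s] = max(q_values)   # controller maximizes against worst-case nature
    V = V_new

print("robust (worst-case) values:", np.round(V, 3))
```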
408
An analytics approach to hypertension treatment / Epstein, Christina (Christina Lynn) / January 2014
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2014. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 67-68). / Hypertension is a major public health issue worldwide, affecting more than a third of the adult population and increasing the risk of myocardial infarction, heart failure, stroke, and kidney disease. Current clinical guidelines have yet to achieve consensus and continue to rely on expert opinion for recommendations lacking a sufficient evidence base. In practice, trial and error is typically required to discover a medication combination and dosage that works to control blood pressure for a given patient. We propose an analytics approach to hypertension treatment: applying visualization, predictive analytics methods, and optimization to existing electronic health record data to (1) find conjectures parallel and potentially orthogonal to guidelines, (2) hasten response time to therapy, and/or (3) optimize therapy selection. This thesis presents work toward these goals including data preprocessing and exploration, feature creation, the discovery of clinically-relevant clusters based on select blood pressure features, and three development spirals of predictive models and results. / by Christina Epstein. / S.M.
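As an illustration of the clustering step mentioned above (discovering groups of patients from blood pressure features), the sketch below standardizes a few synthetic features and applies k-means. The synthetic data, the chosen features, and the number of clusters are assumptions; they stand in for the electronic health record data and the feature engineering done in the thesis.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Illustrative clustering of patients by blood pressure features; the synthetic data,
# feature definitions, and choice of three clusters are assumptions for the sketch.
rng = np.random.default_rng(4)
n_patients = 500
features = np.column_stack([
    rng.normal(135, 18, n_patients),   # mean systolic BP (mmHg)
    rng.normal(85, 10, n_patients),    # mean diastolic BP (mmHg)
    rng.normal(12, 5, n_patients),     # visit-to-visit systolic variability
])

X = StandardScaler().fit_transform(features)          # put features on a common scale
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for c in range(3):
    member = features[clusters == c]
    print(f"cluster {c}: {len(member):3d} patients, "
          f"mean SBP {member[:, 0].mean():.1f}, mean DBP {member[:, 1].mean():.1f}")
```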
409
A data mining approach for acoustic diagnosis of cardiopulmonary disease / Flietstra, Bryan C / January 2008
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2008. / Includes bibliographical references (p. 107-111). / Variations in training and individual doctors' listening skills make diagnosing a patient via stethoscope-based auscultation problematic. Doctors have now turned to more advanced devices such as x-rays and computed tomography (CT) scans to make diagnoses. However, recent advances in lung sound analysis techniques allow for the auscultation to be performed with an array of microphones, which send the lung sounds to a computer for processing. The computer automatically identifies adventitious sounds using time-expanded waveform analysis and allows for a more precise auscultation. We investigate three data mining techniques in order to diagnose a patient based solely on the sounds heard within the chest by a "smart" stethoscope. We achieve excellent recognition performance by using k-nearest neighbors, neural networks, and support vector machines to make classifications in pair-wise comparisons. We also extend the research to a multi-class scenario and are able to separate patients with interstitial pulmonary fibrosis with 80% accuracy. Adding clinical data also improves recognition performance. Our results show that performing computerized lung auscultation offers a low-cost, non-invasive diagnostic procedure that gives doctors better clinical utility, especially in situations when x-rays and CT scans are not available. / by Bryan C. Flietstra. / S.M.
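A small sketch of the pair-wise classification setup described above, using two of the three methods the abstract lists (k-nearest neighbors and a support vector machine) with cross-validated accuracy. The synthetic acoustic feature vectors for two diagnostic classes are assumptions standing in for the real lung-sound features.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Pair-wise (two-class) classification on synthetic acoustic features; the simulated
# data and the model hyperparameters are assumptions for the sketch.
rng = np.random.default_rng(5)
n_per_class, n_features = 60, 10
healthy = rng.normal(0.0, 1.0, (n_per_class, n_features))
fibrosis = rng.normal(0.8, 1.0, (n_per_class, n_features))   # shifted feature means
X = np.vstack([healthy, fibrosis])
y = np.array([0] * n_per_class + [1] * n_per_class)

models = {
    "k-NN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "SVM":  make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name}: accuracy {scores.mean():.2f} +/- {scores.std():.2f}")
```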
410
From data to decisions in healthcare : an optimization perspective / Weinstein, Alexander Michael / January 2017
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2017. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 107-110). / The past few decades have seen many methodological and technological advances in optimization, statistics, and machine learning. What is still not well understood is how to combine these tools to take data as inputs and give decisions as outputs. The healthcare arena offers fertile ground for improvement in data-driven decision-making. Every day, medical researchers develop and test novel therapies via randomized clinical trials, which, when designed efficiently, can provide evidence for efficacy and harm. Over the last two decades, electronic medical record systems have become increasingly prevalent in hospitals and other care settings. The growth of these and other data sources, combined with the aforementioned advances in the field of operations research, enables new modes of study and analysis in healthcare. In this thesis, we take a data-driven approach to decision-making in healthcare through the lenses of optimization, statistics, and machine learning. In Parts I and II of the thesis, we apply mixed-integer optimization to enhance the design and analysis of clinical trials, a critical step in the approval process for innovative medical treatments. In Part I, we present a robust mixed-integer optimization algorithm for allocating subjects to treatment groups in sequential experiments. By improving covariate balance across treatment groups, the proposed method yields statistical power at least as high as, and sometimes significantly higher than, state-of-the-art covariate-adaptive randomization approaches. In Part II, we present a mixed-integer optimization approach for identifying exceptional responders in randomized trial data. In practice, this approach can be used to extract added value from costly clinical trials that may have failed to identify a positive treatment effect for the general study population, but could be beneficial to a subgroup of the population. In Part III, we present a personalized approach to diabetes management using electronic medical records. The approach is based on a k-nearest neighbors algorithm. By harnessing the power of optimization and machine learning, we can improve patient outcomes and move from the one-size-fits-all approach that dominates the medical landscape today to a personalized, patient-centered approach. / by Alexander Michael Weinstein. / Ph. D.
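The k-nearest-neighbors approach of Part III can be sketched as a simple prescription rule: for a new patient, compare the outcomes of the k most similar historical patients under each candidate therapy and recommend the therapy with the best observed outcome. The synthetic patient records, the outcome convention (lower is better), and the choice of k below are assumptions for illustration, not the thesis's actual model.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# k-NN prescription sketch: recommend the therapy whose k nearest historical recipients
# had the best (lowest) observed outcome.  All data and parameters are assumptions.
rng = np.random.default_rng(6)
n_patients, k = 1000, 20
X = rng.normal(size=(n_patients, 5))               # patient features (e.g., age, BMI, ...)
treatment = rng.integers(0, 3, n_patients)         # three candidate therapies
outcome = (X[:, 0] * 0.3
           + (treatment == (X[:, 1] > 0)) * -0.5   # toy heterogeneous treatment effect
           + rng.normal(0, 0.2, n_patients))

def recommend(x_new):
    """Return the therapy whose k nearest historical recipients did best."""
    best_t, best_mean = None, np.inf
    for t in range(3):
        idx = np.where(treatment == t)[0]
        nn = NearestNeighbors(n_neighbors=k).fit(X[idx])
        _, neighbors = nn.kneighbors(x_new.reshape(1, -1))
        mean_outcome = outcome[idx[neighbors[0]]].mean()
        if mean_outcome < best_mean:               # lower outcome = better control
            best_t, best_mean = t, mean_outcome
    return best_t, best_mean

new_patient = rng.normal(size=5)
therapy, predicted = recommend(new_patient)
print(f"recommended therapy: {therapy}, predicted outcome: {predicted:.3f}")
```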