341

Capacity expansion in contemporary telecommunication networks

Sivaraman, Raghavendran January 2007
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2007. / Includes bibliographical references (p. 155-160). / We study three capacity expansion problems in contemporary long distance telecommunication networks. The first two problems, motivated by a major long distance provider, address capacity expansion in national hybrid long distance telecommunication networks that use both the traditional TDM technology and the more recent VoIP technology to transport voice calls. While network capacity expansion in general is known to be hard to approximate, we exploit the unique requirements associated with hybrid networks to develop compact models and algorithms with strong performance guarantees for these problems. For a single period single facility capacity expansion problem in a hybrid network, we use a decomposition approach to design a (2 + ε)-factor approximation algorithm. Generalizing this idea, we propose a Decentralized Routing Scheme that can be used to design approximation algorithms for many variations of hybrid network capacity expansion. For the Survivable Capacity Expansion Problem in hybrid networks, in which we must install enough capacity to support all demands even if a single link fails, we propose a compact integer program model. We show that this problem is APX-hard, and present two heuristics with constant worst-case performance guarantees. Finally, we consider the capacity planning problem when peak demands occurring at different times can share network capacity. We propose a general model for capturing time variation of demand, and establish a necessary and sufficient condition for a capacity plan to be feasible. Using a cutting plane approach, we develop a heuristic procedure. Computational experiments on real and random instances show that the proposed procedure performs well, producing solutions within 12% of optimality on average for the instances tested. The tests also highlight the significant savings potential of capacity planning with time sharing. / by Raghavendran Sivaraman. / Ph.D.
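
As a rough illustration of the time-sharing effect described in the last paragraph, the sketch below compares capacity sized to the sum of individual demand peaks against capacity sized to the peak of the total load. The demand profiles are synthetic assumptions, not the thesis's data or model.

```python
# Sketch: savings from letting time-varying peak demands share capacity.
# Synthetic Gaussian-shaped demand profiles, one per origin-destination pair;
# all numbers are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(6)
hours, pairs = 24, 5
peak_hours = rng.integers(0, hours, pairs)   # each pair peaks at its own hour
profiles = np.array([np.exp(-0.5 * ((np.arange(hours) - p) / 3.0) ** 2)
                     for p in peak_hours])

no_sharing = profiles.max(axis=1).sum()      # capacity per pair at its own peak
with_sharing = profiles.sum(axis=0).max()    # shared capacity at the peak load
print(f"savings from time sharing: {1 - with_sharing / no_sharing:.1%}")
```
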
342

A robust optimization approach to online problems / RO approach to online problems

Korolko, Nikita (Nikita E.) January 2017
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2017. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 149-155). / In this thesis, we consider online optimization problems that are characterized by incrementally revealed input data and sequential irrevocable decisions that must be made without complete knowledge of the future. We employ a combination of mixed integer optimization (MIO) and robust optimization (RO) methodologies in order to design new efficient online algorithms that outperform state-of-the-art methods for many important practical applications. We empirically demonstrate that RO-based algorithms are computationally tractable for instances of practical size, generate more cost-effective decisions, and can simultaneously model a large class of similar online problems due to the exceptional modeling power of MIO. In Part I, we consider the well-known K-server problem from the perspective of robust adaptive optimization. We propose a new tractable mixed integer linear formulation of the K-server problem that incorporates both information from the past and uncertainty about the future. By combining ideas from classical online algorithms developed in the computer science literature with robust and adaptive optimization developed in the operations research literature, we propose a new method that (a) is computationally tractable, (b) almost always outperforms all other methods in numerical experiments, and (c) is stable with respect to potential errors in the assumptions about the future. In Part II, we consider several extensions of the asset-based weapon-to-target assignment problem, whose objective is to protect ships in a fleet from incoming threats. We demonstrate that the new highly nonlinear MIO formulation (a) can be combined with lazy constraints techniques, allowing the system designer to find optimal solutions in real time, (b) can be extended to the multi-period setting, and (c) admits a decentralized solution with limited loss of optimality. In Part III, we present a novel covariate-adaptive optimization algorithm for online allocation in clinical trials. The new approach, leveraging MIO and RO techniques, (a) guarantees a better between-group covariate balance in comparison with state-of-the-art methods, (b) yields statistical power at least as high as, and sometimes significantly higher than, that of randomization-based algorithms, and (c) is well protected against selection, investigator, and accidental bias. / by Nikita Korolko. / Ph. D.
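
To make the K-server setting concrete, here is a minimal greedy baseline (serve each request with the nearest server). This is a classical heuristic for illustration only; it is not the robust MIO method the thesis develops.

```python
# Sketch: a classical greedy baseline for the K-server problem on the line,
# serving each request with the nearest server. Illustration of the online
# setting only, not the robust MIO method of the thesis.
import numpy as np

rng = np.random.default_rng(5)
K, T = 3, 50
servers = rng.uniform(0, 1, K)            # initial server positions
requests = rng.uniform(0, 1, T)           # requests revealed one at a time

cost = 0.0
for x in requests:
    i = np.argmin(np.abs(servers - x))    # irrevocable online decision
    cost += abs(servers[i] - x)
    servers[i] = x                        # move the chosen server
print(f"greedy total movement cost: {cost:.3f}")
```
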
343

Competitive multi-period pricing for perishable products

Sood, Anshul, 1978- January 2004
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2004. / Includes bibliographical references (p. 161-165). / Pricing of a perishable product over a multi-period time horizon is a challenging problem under an oligopolistic market framework. We propose and study a model for multi-period pricing in an oligopolistic market for a single perishable product. Each participating seller in the market has a fixed inventory of the product at the beginning of the time horizon, and additional production is not an available option. Any unsold inventory at the end of the horizon is worthless. The sellers do not have the option of periodically reviewing and replenishing their inventory. Such a model is appropriate for modelling competition in situations where inventory replenishment decisions are made over a longer time horizon and can be considered exogenous to the pricing decision process. This kind of setup can be used to model pricing of air fares, hotel reservations, bandwidth in communication networks, etc. In this thesis, we study two issues related to multi-period pricing of a perishable product. First, we study the competitive aspect of the problem. Second, we study the setting where the demand function for each seller has some associated uncertainty. We assume that the sellers would like to adopt a policy that is robust to adverse uncertain circumstances. We discuss the challenges associated with the analysis of this model. / (cont.) We study non-cooperative Nash equilibrium policies for the sellers. We discuss why known results from the literature do not extend to this model. We introduce an optimization approach using results from variational inequality theory and robust optimization to establish existence of the pricing equilibrium policy, and comment on its uniqueness. We also introduce an iterative learning algorithm for computing the equilibrium policy and analyze its convergence. We study how much efficiency (in terms of total system profit) is lost due to competition. Finally, we illustrate our results with some numerical examples and discuss some insights. / by Anshul Sood. / Ph.D.
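
As a toy illustration of computing a pricing equilibrium by iterated best responses, the sketch below solves a single-period, two-seller game with linear demand. This is a drastic simplification of the multi-period robust game studied in the thesis; all parameter values are assumptions.

```python
# Sketch: iterated best responses in a single-period, two-seller pricing game
# with linear demand d_i = a - b*p_i + c*p_j. A toy stand-in for the iterative
# learning algorithm analyzed in the thesis; all parameters are assumptions.
import numpy as np

a, b, c = 10.0, 2.0, 0.5
p = np.array([1.0, 1.0])                  # initial prices
for _ in range(100):
    # Maximizing p_i * (a - b*p_i + c*p_j) in p_i gives the closed-form
    # best response p_i = (a + c*p_j) / (2b).
    p = (a + c * p[::-1]) / (2 * b)
print("equilibrium prices:", np.round(p, 4))   # fixed point of best responses
```
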
344

The cross section of expected stock returns revisited / Cross-section of expected stock returns revisited

Sursock, Jean-Paul, 1974- January 2000
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2000. / Also available online at the DSpace at MIT website. / Includes bibliographical references (leaves 60-61). / We review and extend two important empirical financial studies: Fama and MacBeth [1973] and Fama and French [1992]. Fama and MacBeth [1973] sort stocks on the New York Stock Exchange into 20 portfolios based on their market β. They test for, and conclude that, β does in fact explain the cross-sectional variation in average stock returns for the 1926-1968 period. After we replicate the results in their study, we extend their work to the most current data. The coefficients and t-statistics for five-year sub-periods exhibit roughly the same properties during the last half of the century as they did during the period originally studied. Fama and MacBeth report statistically significant results for their overall period (1935-1968) as well. When we run the same test on all the data currently available (1935-1998), we find that the t-statistics are lower, not higher, than they were for the 1935-1968 period. We run several variations on the Fama and MacBeth [1973] paper. For example, we vary the exchanges (NYSE, AMEX, and/or NASDAQ) and indexes (value-weighted or equally-weighted) employed. We also study the effect of using robust (least absolute deviation) regressions instead of ordinary least squares. In all cases, the results are similar to those described above. Fama and French [1992] show that, when size is controlled for, market β does not explain the cross-sectional variation in returns for the 1963-1990 period. They find that two other variables, size (market equity) and book-to-market equity, combine to capture the cross-sectional variation in average stock returns during the same period. After replicating their results, we update the study to the most current data. We find that the t-statistics for size and book-to-market equity are more significant during the 1963-1998 period than they were for the 1963-1990 period. We also confirm that β is statistically insignificant during the 1963-1998 period. / by Jean-Paul Sursock. / S.M.
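
For readers unfamiliar with the two-pass procedure being replicated, here is a minimal Fama-MacBeth sketch on synthetic data (illustrative only; not the thesis's code or data).

```python
# Sketch: the Fama-MacBeth [1973] two-pass procedure on synthetic data.
# Pass 1 estimates portfolio betas; pass 2 runs a cross-sectional regression
# of returns on betas each month and t-tests the average premium.
import numpy as np

rng = np.random.default_rng(0)
T, N = 360, 20                            # months, portfolios
market = rng.normal(0.01, 0.04, T)        # synthetic market excess returns
true_beta = np.linspace(0.5, 1.5, N)
returns = true_beta * market[:, None] + rng.normal(0, 0.05, (T, N))

X = np.column_stack([np.ones(T), market])
betas = np.linalg.lstsq(X, returns, rcond=None)[0][1]   # slope per portfolio

Z = np.column_stack([np.ones(N), betas])
gammas = np.array([np.linalg.lstsq(Z, returns[t], rcond=None)[0][1]
                   for t in range(T)])    # monthly cross-sectional slopes

t_stat = gammas.mean() / (gammas.std(ddof=1) / np.sqrt(T))
print(f"avg risk premium {gammas.mean():.4f}, t-stat {t_stat:.2f}")
```
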
345

Analysis of employee stock options and guaranteed withdrawal benefits for life

Shah, Premal (Premal Y.) January 2008
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2008. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Includes bibliographical references (p. 215-224). / In this thesis we study three problems related to financial modeling. First, we study the problem of pricing Employee Stock Options (ESOs) from the point of view of the issuing company. Since an employee cannot trade or effectively hedge ESOs, she exercises them to maximize a subjective criterion of value. Modeling this exercise behavior is key to pricing ESOs. We argue that ESO exercises should not be modeled on a one-by-one basis, as is commonly done, but at a portfolio level, because exercises of the different ESOs that an employee holds are coupled. Using utility-based models, we also show that such coupled exercise behavior leads to lower average ESO costs for commonly used utility functions such as power and exponential utilities. Unfortunately, utility-based models do not lead to tractable solutions for finding costs associated with ESOs. We propose a new risk-management-based approach to modeling exercise behavior based on mean-variance portfolio maximization. The resulting exercise behavior is both intuitive and leads to a computationally tractable model for finding ESO exercises and pricing ESOs as a portfolio. We also study a special variant of this risk-management-based exercise model, which leads to a decoupling of the ESO exercises, and obtain analytical bounds on the implied cost of an ESO for the employer in this case. Next, we study Guaranteed Withdrawal Benefits (GWB) for life, a recent and popular product that many insurance companies have offered for retirement planning. / (cont.) The GWB feature promises the investor increasing withdrawals over her lifetime and is an exotic option that bears financial and mortality-related risks for the insurance company. We first analyze a continuous-time version of this product in a Black-Scholes economy with simplifying assumptions on population mortality and obtain an analytical solution for the product value. This simple analysis reveals the product's high sensitivity to several risk factors. We then investigate the pricing of GWB in a more realistic setting using different asset pricing models, including those that allow the interest rates and the volatility of returns to be stochastic. Our analysis reveals that 1) GWB has insufficient price discrimination and is susceptible to adverse selection, and 2) valuations can vary substantially depending on which class of models is used for accounting. We believe that the ambiguity in value and the presence of significant risks, which can be challenging to hedge, should concern the GWB underwriters, their clients, and the regulators. Finally, many problems in finance are Sequential Decision Problems (SDPs) under uncertainty. We find that SDP formulations using commonly used financial metrics or acceptability criteria can lead to dynamically inconsistent strategies. We study the link between objective functions used in SDPs, dynamic consistency, and dynamic programming. We then propose ways to create dynamically consistent formulations. / by Premal Shah. / Ph.D.
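
A back-of-the-envelope Monte Carlo can convey why the GWB guarantee is an option on the account value. The sketch below assumes Black-Scholes dynamics, a flat mortality hazard, and made-up parameters; it is not the thesis's model.

```python
# Sketch: back-of-the-envelope Monte Carlo cost of a lifetime guaranteed
# withdrawal benefit under Black-Scholes dynamics and a flat mortality hazard.
# Every parameter here is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(4)
r, sigma, dt, years = 0.03, 0.2, 1 / 12, 40
w, hazard = 0.05, 0.02          # withdrawal rate on premium; force of mortality
paths = 20000
premium = 1.0

account = np.full(paths, premium)
alive = np.ones(paths, dtype=bool)
cost = np.zeros(paths)
for t in range(int(years / dt)):
    alive &= rng.random(paths) > hazard * dt              # survival this step
    z = rng.normal(size=paths)
    account *= np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    withdrawal = w * premium * dt
    shortfall = np.maximum(withdrawal - account, 0.0)     # insurer pays the gap
    account = np.maximum(account - withdrawal, 0.0)
    cost += alive * np.exp(-r * (t + 1) * dt) * shortfall

print(f"risk-neutral GWB cost per unit premium: {cost.mean():.4f}")
```
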
346

Effectiveness and design of sparse process flexibilities

Wei, Yehua, Ph. D. Massachusetts Institute of Technology January 2013
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2013. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 119-121). / The long chain has been an important concept in the design of flexible processes. This design concept, as well as other sparse flexibility structures, has been applied by the automotive and other industries as a way to increase flexibility in order to better match available capacities with variable demands. Numerous empirical studies have validated the effectiveness of these structures. However, there is little theory that explains the effectiveness of the long chain, except when the system size is large, i.e., by applying an asymptotic analysis. Our goal in this thesis is to develop a theory that explains the effectiveness of the long chain and other sparse flexibility structures for finite-size systems. We study the sales of sparse flexibility structures under both stochastic and worst-case demands. From our analysis, we not only provide a rigorous explanation of the effectiveness of the long chain, but also refine guidelines for designing other sparse flexibility structures. Under stochastic demand, we first develop two deterministic properties, supermodularity and decomposition of the long chain, that serve as important building blocks in our analysis. Applying the supermodularity property, we show that the marginal benefit, i.e., the increase in expected sales, increases as the long chain is constructed, and the largest benefit is always achieved when the chain is closed by adding the last arc to the system. Then, applying the decomposition property, we develop four important results for the long chain under IID demands: (i) an effective algorithm to compute the performance of the long chain using only matrix multiplications; (ii) a proof of the optimality of the long chain among all 2-flexibility structures; (iii) a result that the gap between the fill rate of full flexibility and that of the long chain increases with system size, implying that the effectiveness of the long chain relative to full flexibility increases as the number of products decreases; (iv) a risk-pooling result implying that the fill rate of a long chain increases with the number of products, but this increase converges to zero exponentially fast. Under worst-case demand, we propose the plant cover index, an index defined by a constrained bipartite vertex cover problem associated with a given flexibility structure. We show that the plant cover index allows for a comparison between the worst-case performances of two flexibility structures based only on their structures, independent of the choice of the uncertainty set or the performance measure. More precisely, we show that if all of the plant cover indices of one structure are greater than or equal to the plant cover indices of the other structure, then the first structure is more robust than the second one, i.e., performs better in the worst case under any symmetric uncertainty set and a large class of performance measures. Applying this relation, we demonstrate the effectiveness of the long chain in worst-case performance, and derive a general heuristic that generates sparse flexibility structures which prove effective under both stochastic and worst-case demands.
Finally, to understand the effect of process flexibility in reducing logistics cost, we study a model where the manufacturer is required to satisfy deterministic product demand at different distribution centers. Under this model, we prove that if the cost of satisfying product demands at distribution centers is independent of production plants or distribution centers, then there always exists a long chain that is optimal among 2-flexibility structures. Moreover, when all plants and distribution centers are located on a line, we provide a characterization of the optimal long chain that minimizes the total transportation cost. This characterization gives rise to a heuristic that finds effective sparse flexibility structures when plants and distribution centers are located on a 2-dimensional plane. / by Yehua Wei. / Ph.D.
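
The notion of "sales" of a flexibility structure can be made concrete with a small Monte Carlo comparison of dedicated capacity, the long chain, and full flexibility; the sketch below estimates expected sales by solving a max-sales LP per demand draw. All parameters are assumptions, not the thesis's setup.

```python
# Sketch: expected sales of dedicated capacity, the long chain, and full
# flexibility, estimated by Monte Carlo with a max-sales LP per demand draw.
# Plant capacities and the demand distribution are assumptions.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, cap, sims = 6, 1.0, 500

def expected_sales(arcs):
    total = 0.0
    for _ in range(sims):
        d = rng.normal(1.0, 0.3, n).clip(min=0)    # IID product demands
        c = [-1.0] * len(arcs)                     # linprog minimizes -sales
        A, b = [], []
        for i in range(n):                         # plant capacity rows
            A.append([1.0 if a[0] == i else 0.0 for a in arcs]); b.append(cap)
        for j in range(n):                         # product demand rows
            A.append([1.0 if a[1] == j else 0.0 for a in arcs]); b.append(d[j])
        total -= linprog(c, A_ub=A, b_ub=b, bounds=(0, None)).fun
    return total / sims

dedicated = [(i, i) for i in range(n)]
long_chain = dedicated + [(i, (i + 1) % n) for i in range(n)]  # close the chain
full = [(i, j) for i in range(n) for j in range(n)]
for name, arcs in [("dedicated", dedicated), ("long chain", long_chain),
                   ("full flexibility", full)]:
    print(f"{name}: {expected_sales(arcs):.3f}")
```
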
347

Selfish versus coordinated routing in network games

Stier Moses, Nicolás E January 2004
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2004. / Includes bibliographical references (p. 159-170) and index. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / A common assumption in network optimization models is that a central authority controls the whole system. However, in some applications there are independent users, and assuming that they will follow directions given by an authority is not realistic. Individuals will only accept directives if they are in their own interest or if there are incentives that encourage them to do so. Actually, it would be much easier to let users make their own decisions hoping that the outcome will be close to the authority's goals. Our main contribution is to show that, in static networks subject to congestion, users' selfish decisions drive the system close to optimality with respect to various common objectives. This connection to individual decision making proves fruitful; not only does it provide us with insights and additional understanding of network problems, but it also allows us to design approximation algorithms for computationally difficult problems. More specifically, the conflicting objectives of the users prompt the definition of a network game in which they minimize their own latencies. We show that the so-called price of anarchy is small in a quite general setting. Namely, for networks with side constraints and non-convex, non-differentiable, and even discontinuous latency functions, we show that although an arbitrary equilibrium need not be efficient, the total latency of the best equilibrium is close to that of an optimal solution. In addition, when the measure of the solution quality is the maximum latency, equilibria in networks without constraints are also near-optimal. We provide the first analysis of the problem of minimizing that objective in static networks with congestion. / (cont.) As this problem is NP-hard, computing an equilibrium represents a constant-factor approximation algorithm. In some situations, the network authority might still want to do better than in equilibrium. We propose to use a solution that minimizes the total latency, subject to constraints designed to improve the solution's fairness. For several real-world instances, we compute traffic assignments of notably smaller total latency than an equilibrium, yet of similar fairness. Furthermore, we provide theoretical results that explain the conclusions derived from the computational study. / by Nicolás E. Stier-Moses. / Ph.D.
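
The classical Pigou network gives a two-line worked example of the price of anarchy that the thesis bounds in far greater generality; the sketch below is illustrative only.

```python
# Sketch: the classical Pigou example of the price of anarchy. One unit of
# traffic, two parallel links with latencies 1 and x; selfish users all take
# the x link, while the optimum splits the flow.
from scipy.optimize import minimize_scalar

def total_latency(x):                 # x = flow on the variable-latency link
    return (1 - x) * 1.0 + x * x

equilibrium = total_latency(1.0)      # all flow on the x link (latency 1)
optimum = minimize_scalar(total_latency, bounds=(0, 1), method="bounded").fun

print(f"price of anarchy = {equilibrium / optimum:.3f}")   # 4/3 here
```
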
348

A Lagrangian decomposition approach to weakly coupled dynamic optimization problems and its applications

Hawkins, Jeffrey Thomas, 1977- January 2003
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2003. / Includes bibliographical references (p. 187-192). / We present a Lagrangian-based approach to decoupling weakly coupled dynamic optimization problems, for both finite and infinite horizon problems. The main contributions of this dissertation are: (i) we develop methods for obtaining bounds on the optimal cost based on solving low-dimensional dynamic programs; (ii) we utilize the resulting low-dimensional dynamic programs and combine them using integer programming methods to find feasible policies for the overall problem; (iii) to illustrate the power of our methods, we apply them to a large collection of dynamic optimization problems: multiarmed bandits, restless bandits, queueing networks, serial supply chains, linear control problems, and on-line auctions, all with promising results. In particular, the resulting policies appear to be near-optimal. (iv) We provide an in-depth analysis of several aspects of on-line auctions, from both a buyer's and a seller's perspective. Specifically, for buyers we construct a model of on-line auctions using publicly available data and develop an algorithm for optimally bidding in multiple simultaneous auctions. For sellers we construct a model of on-line auctions using publicly available data and demonstrate how a seller can increase the final selling price using dynamic programming. / by Jeffrey Thomas Hawkins. / Ph.D.
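
The flavor of the Lagrangian decomposition can be seen on a small restless-bandit instance: price the coupling constraint with a multiplier, solve each arm's low-dimensional DP separately, and minimize over the multiplier to get an upper bound. The data below are synthetic, and the sketch is not the dissertation's implementation.

```python
# Sketch: Lagrangian bound for a weakly coupled problem, a restless bandit in
# which at most M of the K arms may be activated each period. Pricing the
# coupling constraint with a multiplier decouples the problem into per-arm
# DPs. All transition and reward data are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(2)
K, S, T, M = 4, 3, 10, 1                 # arms, states per arm, horizon, budget
reward = [rng.uniform(0, 1, (2, S)) for _ in range(K)]        # [action, state]
P = [rng.dirichlet(np.ones(S), (2, S)) for _ in range(K)]     # transitions

def arm_value(a, lam):
    """Finite-horizon DP for one arm with activation priced at lam."""
    V = np.zeros(S)
    for _ in range(T):
        passive = reward[a][0] + P[a][0] @ V
        active = reward[a][1] - lam + P[a][1] @ V
        V = np.maximum(passive, active)
    return V[0]                          # every arm starts in state 0

def lagrangian_bound(lam):
    return sum(arm_value(a, lam) for a in range(K)) + lam * M * T

# Minimizing the bound over lam >= 0 upper-bounds the coupled optimum.
best = min(lagrangian_bound(l) for l in np.linspace(0, 1, 51))
print(f"Lagrangian upper bound on total expected reward: {best:.3f}")
```
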
349

Robust, risk-sensitive, and data-driven control of Markov Decision Processes

Le Tallec, Yann January 2007
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2007. / Includes bibliographical references (p. 201-211). / Markov Decision Processes (MDPs) model problems of sequential decision-making under uncertainty. They have been studied and applied extensively. Nonetheless, two major barriers still hinder the applicability of MDPs to many more practical decision-making problems: * The decision maker often lacks a reliable MDP model. Since the results obtained by dynamic programming are sensitive to the assumed MDP model, their relevance is challenged by model uncertainty. * The structural and computational results of dynamic programming (which deals with expected performance) have been extended with only limited success to accommodate risk-sensitive decision makers. In this thesis, we investigate two ways of dealing with uncertain MDPs and develop a new connection between robust control of uncertain MDPs and risk-sensitive control of dynamical systems. The first approach assumes a model of model uncertainty and formulates the control of uncertain MDPs as a problem of decision-making under (model) uncertainty. We establish that most formulations are at least NP-hard and thus suffer from the "curse of uncertainty." The worst-case control of MDPs with rectangular uncertainty sets is equivalent to a zero-sum game between the controller and nature. / (cont.) The structural and computational results for such games make this formulation appealing. By adding a penalty for unlikely parameters, we extend the formulation of worst-case control of uncertain MDPs and mitigate its conservativeness. We show a duality between the penalized worst-case control of uncertain MDPs with rectangular uncertainty and the minimization of a Markovian, dynamically consistent convex risk measure of the sample cost. This notion of risk has desirable properties for multi-period decision making, including a new Markovian property that we introduce and motivate. This Markovian property is critical in establishing the equivalence between minimizing some risk measure of the sample cost and solving a certain zero-sum Markov game between the decision maker and nature, and in tackling infinite-horizon problems. An alternative approach to dealing with uncertain MDPs, which avoids the curse of uncertainty, is to directly exploit observational data. Specifically, we estimate the expected performance of any given policy (and its gradient with respect to certain policy parameters) from a training set comprising observed trajectories sampled under a known policy. / (cont.) We propose new value (and value gradient) estimators that are unbiased and have low training-set-to-training-set variance. We expect our approach to outperform competing approaches when there are few system observations compared to the underlying MDP size, as indicated by numerical experiments. / by Yann Le Tallec. / Ph.D.
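
For the worst-case (rectangular-uncertainty) formulation, the inner problem nature solves is simple enough to sketch: with interval bounds on each transition row, nature greedily shifts probability mass toward low-value states. The following robust value iteration uses synthetic data and illustrates the zero-sum-game view; it is not the thesis's code.

```python
# Sketch: robust value iteration for an uncertain MDP with (s,a)-rectangular
# interval uncertainty on transition rows. Nature minimizes, the controller
# maximizes, i.e., a zero-sum game. Synthetic data, illustration only.
import numpy as np

rng = np.random.default_rng(3)
S, A, gamma = 4, 2, 0.9
P_nom = rng.dirichlet(np.ones(S), (S, A))        # nominal transitions
r = rng.uniform(0, 1, (S, A))
lo = np.clip(P_nom - 0.1, 0, 1)                  # interval uncertainty set
hi = np.clip(P_nom + 0.1, 0, 1)

def worst_case_expectation(V, lo_sa, hi_sa):
    """Nature minimizes p @ V over {lo <= p <= hi, sum(p) = 1}, greedily."""
    p = lo_sa.copy()
    budget = 1.0 - p.sum()
    for s in np.argsort(V):                      # pour mass onto low-V states
        add = min(hi_sa[s] - p[s], budget)
        p[s] += add
        budget -= add
    return p @ V

V = np.zeros(S)
for _ in range(200):                             # robust Bellman iteration
    Q = np.array([[r[s, a] + gamma * worst_case_expectation(V, lo[s, a], hi[s, a])
                   for a in range(A)] for s in range(S)])
    V = Q.max(axis=1)
print("robust state values:", np.round(V, 3))
```
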
350

An analytics approach to hypertension treatment

Epstein, Christina (Christina Lynn) January 2014
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2014. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 67-68). / Hypertension is a major public health issue worldwide, affecting more than a third of the adult population and increasing the risk of myocardial infarction, heart failure, stroke, and kidney disease. Current clinical guidelines have yet to achieve consensus and continue to rely on expert opinion for recommendations lacking a sufficient evidence base. In practice, trial and error is typically required to discover a medication combination and dosage that works to control blood pressure for a given patient. We propose an analytics approach to hypertension treatment: applying visualization, predictive analytics methods, and optimization to existing electronic health record data to (1) find conjectures parallel and potentially orthogonal to guidelines, (2) hasten response time to therapy, and/or (3) optimize therapy selection. This thesis presents work toward these goals, including data preprocessing and exploration, feature creation, the discovery of clinically relevant clusters based on select blood pressure features, and three development spirals of predictive models and results. / by Christina Epstein. / S.M.
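
As a hedged sketch of the clustering step mentioned above, the code below runs k-means on synthetic blood-pressure features. The feature choices and cluster count are assumptions, not the thesis's.

```python
# Sketch: k-means clustering on synthetic blood-pressure features. Feature
# definitions and the number of clusters are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 300
features = np.column_stack([
    rng.normal(140, 15, n),    # mean systolic BP (mmHg)
    rng.gamma(2.0, 4.0, n),    # within-patient BP variability
    rng.poisson(6, n),         # visits per year
])
X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for k in range(3):
    mask = labels == k
    print(f"cluster {k}: n={mask.sum()}, mean SBP={features[mask, 0].mean():.1f}")
```
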
