  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Dynamic pricing and inventory control with no backorders under uncertainty and competition

Adida, Elodie January 2006 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2006. / Includes bibliographical references (p. 271-284). / Recently, revenue management has become popular in many industries, such as the airline, supply chain, and transportation industries. Decision makers realize that even small improvements in their operations can have a significant impact on their profits. Nevertheless, determining optimal pricing and inventory policies in more realistic settings may not be a tractable task. Ignoring the potential inaccuracy of parameters may lead to a solution that actually performs poorly, or even violates some constraints. Finally, competitors impact a supplier's best strategy by influencing her demand, revenues, and field of possible actions. Taking a game-theoretic approach and determining the equilibrium of the system can help understand its state in the long run. This thesis presents a continuous-time optimal control model for studying a dynamic pricing and inventory control problem in a make-to-stock manufacturing system. We consider a multi-product, capacitated, dynamic setting. We introduce a demand-based model with convex costs. A key part of the model is that no backorders are allowed, as this introduces a constraint on the state variables. We first study the deterministic version of this problem. / (cont.) We introduce and study a solution method that enables us to compute the optimal solution on a finite time horizon in a monopoly setting. Our results illustrate the role of capacity and the effects of the dynamic nature of demand. We then introduce an additive model of demand uncertainty. We use a robust optimization approach to protect the solution against data uncertainty in a tractable manner, and without imposing stringent assumptions on available information. We show that the robust formulation is of the same order of complexity as the deterministic problem and demonstrate how to adapt the solution method. Finally, we consider a duopoly setting and use a more general model of additive and multiplicative demand uncertainty. We formulate the robust problem as a coupled constraint differential game. Using a quasi-variational inequality reformulation, we prove the existence of Nash equilibria in continuous time and study issues of uniqueness. Finally, we introduce a relaxation-type algorithm and prove its convergence to a particular Nash equilibrium (normalized Nash equilibrium) in discrete time. / by Elodie Adida. / Ph.D.
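
The deterministic, single-product version of such a fluid pricing-and-inventory model can be sketched as a small discretized convex program. The sketch below is a minimal illustration with made-up linear-demand and cost parameters (not taken from the thesis); it maximizes revenue minus production and holding costs subject to a production-capacity limit and the no-backorder constraint (inventory stays non-negative).

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative parameters (not from the thesis): linear demand d(p) = a - b*p.
T, a, b = 12, 100.0, 2.0                 # periods, demand intercept, price sensitivity
c, h, cap, I0 = 5.0, 1.0, 40.0, 10.0     # production cost, holding cost, capacity, initial inventory

def inventory(x):
    """Inventory path implied by prices p and production u; must stay >= 0 (no backorders)."""
    p, u = x[:T], x[T:]
    d = a - b * p
    return I0 + np.cumsum(u - d)

def neg_profit(x):
    p, u = x[:T], x[T:]
    d = a - b * p
    return -(np.sum(p * d) - c * np.sum(u) - h * np.sum(inventory(x)))

x0 = np.concatenate([np.full(T, a / (2 * b)), np.full(T, cap / 2)])
bounds = [(0, a / b)] * T + [(0, cap)] * T               # price and production-rate bounds
cons = [{"type": "ineq", "fun": inventory}]              # I_t >= 0 in every period
res = minimize(neg_profit, x0, bounds=bounds, constraints=cons, method="SLSQP")
print("profit:", round(-res.fun, 1))
print("prices:", np.round(res.x[:T], 2))
```

This only mirrors the flavor of the deterministic monopoly problem; the thesis works in continuous time and develops a dedicated solution method rather than a generic nonlinear solver.
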
212

Velocity-based storage and stowage decisions in a semi-automated fulfillment system

Yuan, Rong, Ph. D. Massachusetts Institute of Technology January 2016 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2016. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 153-156). / The supply chain management for an online retailing business is centered around the operations of its fulfillment centers. A fulfillment center receives and holds inventory from vendors, and then uses this inventory to fill customer orders. Our research focuses on a new operating architecture of an order fulfillment system, enabled by new technology. We refer to it as the Semi-automated Fulfillment System. Different from the person-to-goods model in traditional warehouses, the semi-automated fulfillment system adopts a goods-to-person model for stowing and picking items from a storage field. In a semi-automated fulfillment system, the inventory is stored on mobile storage pods; those mobile pods are then carried by robotic drives to static stations at which the operators conduct pick or stow operations. In the first chapter, we describe and identify three key operational decisions in the semi-automated fulfillment system, namely from which pods to pick the inventory needed (picking decision), where to return the pod to the storage field upon the completion of a pick or stow operation (storage decision), and on which pods to stow the received inventory (stowage decision). We present a high-level capacity planning model for determining the number of robotic drives needed to achieve a given throughput level. This model highlights how the operational efficiency in this system depends on two key parameters, namely the travel time for an entire drive trip and the number of unit picks or stows per pod trip. In the second chapter, we focus on the storage decisions. The storage decision is to decide to which storage location to return a pod upon the completion of a pick or stow operation. We extend the academic results on the benefits of adopting velocity-based and class-based storage policies to the context of the semi-automated fulfillment system. We associate with each storage pod a velocity measure that represents an expectation of the number of picks from that pod in the near future. We then show that by assigning the high-velocity pods to the most desirable storage locations, we can significantly reduce the drive travel time, compared to the random storage policy that returns the pod to a randomly-chosen storage location. We show that class-based storage policies with two or three classes can achieve most of the benefits of the idealized velocity-based policy. Furthermore, we characterize how the performance of the velocity-based and class-based storage policies depends on the velocity variability across the storage pods; in particular, we model how the benefits from velocity-based storage policies increase with increased variation in the pod velocities. In the third chapter, we build a discrete-time simulator to validate the theoretical models in the second chapter with real industry data. We observe a 6% to 11% reduction in the travel distance with a 2-class or 3-class system, depending on the parameter settings. From a sensitivity analysis we establish the robustness of the class-based storage policies, as they continue to perform well under a broad range of warehouse settings, including different zoning strategies, resource utilization levels, and space utilization levels. In the fourth chapter, we examine two stowage decisions, one at the zone level and the other at the pod level. The zone-level decision is how to allocate the received inventory to multiple storage zones. The objective is to assure that the resulting picking workload for each zone is within its capacity. We show by simulation that a chaining-based allocation can be effective in balancing the picking workload across different storage zones. The pod-level stowage decision is to decide on which pods to stow the inventory. We formulate a mixed-integer program (MIP) to find the optimal stowage profile that maximizes the number of unit picks per pod trip. We solve the MIP for a set of test cases to gain insight into the structure of the optimal stowage policy. Motivated by these insights, we further propose a class-based stowage process that induces variability across the pod velocities. / by Rong Yuan. / Ph. D.
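
A rough, self-contained illustration of the storage-policy comparison (synthetic velocities and distances, not the thesis's simulator or industry data): assign the highest-velocity pods to the closest storage locations and compare the expected travel distance per pick against a random-return policy and a 2-class policy.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pods = 500

# Hypothetical inputs: pod velocities (expected near-term picks) and the travel
# distance from each storage location to the pick station (sorted: closest first).
velocity = rng.lognormal(mean=0.0, sigma=1.0, size=n_pods)
distance = np.sort(rng.uniform(10, 100, size=n_pods))

def expected_travel(assignment):
    """Expected round-trip drive distance per pick under a pod-to-location assignment."""
    pick_share = velocity / velocity.sum()
    return float(np.sum(pick_share * 2 * distance[assignment]))

# Velocity-based policy: highest-velocity pods get the most desirable (closest) locations.
velocity_based = np.empty(n_pods, dtype=int)
velocity_based[np.argsort(-velocity)] = np.arange(n_pods)

# Random policy: pods are returned to arbitrary open locations.
random_policy = rng.permutation(n_pods)

# 2-class policy: the top 20% of pods by velocity share the closest 20% of locations.
order, k = np.argsort(-velocity), int(0.2 * n_pods)
two_class = np.empty(n_pods, dtype=int)
two_class[order[:k]] = rng.permutation(k)
two_class[order[k:]] = k + rng.permutation(n_pods - k)

for name, assign in [("random", random_policy), ("2-class", two_class),
                     ("velocity-based", velocity_based)]:
    print(f"{name:>14}: expected travel per pick = {expected_travel(assign):6.1f}")
```

The skewed (lognormal) velocity distribution is what makes the class-based policies pay off here, echoing the chapter's point that the benefit grows with velocity variability across pods.
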
213

Analytics for financing drug development

Fagnan, David Erik January 2015 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2015. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 133-139). / Financing drug development has a particular set of challenges, including long development times, a high chance of failure, significant market valuation uncertainty, and high costs of development. The earliest stages of translational research pose the greatest risks, which have been termed the "valley of death" as a result of a lack of funding. This thesis focuses on an exploration of financial engineering techniques aimed at addressing these concerns. Despite the recent financial crisis, many suggest that securitization is an appropriate tool for financing such large social challenges. Although securitization has been demonstrated effectively at later stages of drug development for royalties of approved drugs, it has yet to be utilized at earlier stages. This thesis starts by extending the model of drug development proposed by Fernandez et al. (2012). These extensions significantly influence the resulting performance and optimal securitization structures. Budget-constrained venture firms targeting high financial returns are incentivized to fund only the best projects, thereby potentially stranding less-attractive projects. Instead, such projects have the potential to be combined in larger portfolios through techniques such as securitization, which reduce the cost of capital. In addition to modeling extensions, we provide examples of a model calibrated to orphan drugs, which we argue are particularly suited to financial engineering techniques. Using this model, we highlight the impact of our extensions on financial performance and compare with previously published results. We then illustrate the impact of incorporating a credit enhancement or guarantee, which allows for added flexibility of the capital structure and therefore greater access to lower-cost capital. As an alternative to securitization, we provide some examples of a structured equity approach, which may allow for increased access to or efficiency of capital by matching investor objectives. Finally, we provide examples of optimizing the Sortino ratio through constrained Bayesian optimization. / by David Erik Fagnan. / Ph. D.
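
A toy Monte Carlo of the securitization idea (all parameters are illustrative assumptions, not the thesis's orphan-drug calibration): simulate a portfolio of independent drug projects, fund part of it with senior debt, and compare the senior tranche's default frequency with and without a third-party guarantee.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims, n_projects = 100_000, 20

# Hypothetical economics: each project costs 1, succeeds with probability 0.15,
# and pays 10 on success. These numbers are placeholders, not calibrated values.
cost_per_project, p_success, payoff = 1.0, 0.15, 10.0
fund_size = n_projects * cost_per_project
debt = 0.4 * fund_size            # senior tranche (zero-coupon for simplicity)
guarantee = 2.0                   # external credit enhancement available to senior holders

successes = rng.binomial(n_projects, p_success, size=n_sims)
portfolio_value = successes * payoff

senior_default = portfolio_value < debt
senior_default_guaranteed = (portfolio_value + guarantee) < debt
equity_return = (np.maximum(portfolio_value - debt, 0.0) - (fund_size - debt)) / (fund_size - debt)

print(f"P(senior default), no guarantee  : {senior_default.mean():.4f}")
print(f"P(senior default), with guarantee: {senior_default_guaranteed.mean():.4f}")
print(f"mean equity return               : {equity_return.mean():.1%}")
```

Pooling many projects is what drives the senior default probability down; the guarantee then lets the structure carry more low-cost debt, which is the credit-enhancement effect discussed above.
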
214

New algorithms in machine learning with applications in personalized medicine

Zhuo, Ying Daisy January 2018 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 165-173). / Recent advances in machine learning and optimization hold much promise for influencing real-world decision making, especially in areas such as health care where abundant data are increasingly being collected. However, imperfections in the data pose a major challenge to realizing their full potential: missing values, noisy observations, and unobserved counterfactuals all impact the performance of data-driven methods. In this thesis, with a fresh perspective from optimization, I revisit some of the well-known problems in statistics and machine learning, and develop new methods for prescriptive analytics. I show examples of how common machine learning tasks, such as missing data imputation in Chapter 2 and classification in Chapter 3, can benefit from the added edge of rigorous optimization formulations and solution techniques. In particular, the proposed opt.impute algorithm improves imputation quality by 13.7% over state-of-the-art methods, as averaged over 95 real data sets, which leads to further performance gains in downstream tasks. The power of prescriptive analytics is shown in Chapter 4 by our approach to personalized diabetes management, which identifies response patterns using machine learning and individualizes treatments via optimization. These newly developed machine learning algorithms not only demonstrate improved performance in large-scale experiments, but are also applied to solve the problems in health care that motivated them. Our simulated trial for diabetic patients in Chapter 4 demonstrates a clinically relevant reduction in average hemoglobin A1c levels compared to current practice. Finally, when predicting mortality for cancer patients in Chapter 5, applying opt.impute on missing data along with the cutting-edge algorithm Optimal Classification Tree on a rich data set prepared from electronic medical records, we are able to accurately risk-stratify patients, providing physicians with interpretable insights and valuable risk estimates at the time of treatment decisions and end-of-life planning. / by Ying Daisy Zhuo. / Ph. D.
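
The opt.impute formulation is not reproduced here; as a generic stand-in, the sketch below shows the shape of the experiment described above -- mask values, impute them (here with an off-the-shelf K-nearest-neighbors imputer rather than opt.impute), and measure the effect on a downstream classifier.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.impute import KNNImputer, SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)

# Mask 30% of the entries completely at random to mimic missing data.
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.30] = np.nan

X_tr, X_te, y_tr, y_te = train_test_split(X_missing, y, test_size=0.3, random_state=0)

for name, imputer in [("mean impute", SimpleImputer(strategy="mean")),
                      ("KNN impute ", KNNImputer(n_neighbors=5))]:
    model = make_pipeline(imputer, StandardScaler(), LogisticRegression(max_iter=5000))
    model.fit(X_tr, y_tr)
    print(f"{name}: downstream accuracy = {model.score(X_te, y_te):.3f}")
```

The thesis's point is that a stronger, optimization-based imputation in the first pipeline stage propagates into better downstream performance; the stand-in imputers above only illustrate the evaluation setup.
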
215

The dynamics of global financial crises

Amonlirdviman, Kevin, 1975- January 2002 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2002. / Includes bibliographical references (p. 57-58). / This thesis presents a Markov chain model of the transmission of financial crises. Using bilateral trade data and a measure of exchange market pressure, it develops a method to determine a set of transition probabilities that describe the crisis transmission dynamics. The dynamics are characterized by one-month conditional crisis probabilities and the probability of a crisis occurring within one year. Calculations of the transition probabilities for a three-country example suggest that minor trading partners can increase the likelihood of a crisis in the home country through their effect on major trading partners. / by Kevin Amonlirdviman. / S.M.
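
A minimal three-country worked example of the kind of calculation described (the monthly transition probabilities below are invented for illustration, not estimated from trade data or exchange market pressure): build the joint crisis/no-crisis transition matrix, then obtain the probability of a home-country crisis within one year from the 12-step matrix with home-crisis states made absorbing.

```python
import numpy as np
from itertools import product

# Three countries: home (index 0) and two trading partners. A state is a tuple of
# crisis indicators. All monthly probabilities here are illustrative, not estimated.
countries = 3
base_prob = np.array([0.02, 0.05, 0.05])        # baseline monthly crisis probability
spillover = np.array([[0.00, 0.10, 0.02],       # spillover[i, j]: extra probability for i
                      [0.05, 0.00, 0.05],       # when j is currently in crisis
                      [0.01, 0.05, 0.00]])
persistence = 0.6                                # probability an ongoing crisis continues

states = list(product([0, 1], repeat=countries))
index = {s: k for k, s in enumerate(states)}

def next_prob(state, country):
    """Probability that `country` is in crisis next month given the current joint state."""
    if state[country] == 1:
        return persistence
    p = base_prob[country] + sum(spillover[country, j] for j in range(countries) if state[j])
    return min(p, 1.0)

P = np.zeros((len(states), len(states)))
for s in states:
    probs = [next_prob(s, i) for i in range(countries)]
    for t in states:
        P[index[s], index[t]] = np.prod([p if t[i] else 1 - p for i, p in enumerate(probs)])

# One-month conditional crisis probabilities are read directly off P. For the one-year
# probability, make every "home in crisis" state absorbing and take the 12-step matrix.
A = P.copy()
for s in states:
    if s[0] == 1:
        A[index[s], :] = 0.0
        A[index[s], index[s]] = 1.0
start = index[(0, 0, 0)]
within_year = sum(np.linalg.matrix_power(A, 12)[start, index[s]] for s in states if s[0] == 1)
print(f"P(home-country crisis within one year) = {within_year:.3f}")
```

Raising the partners' spillover entries in this toy matrix raises the home country's one-year crisis probability even when its direct exposure is small, mirroring the indirect-transmission effect noted in the abstract.
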
216

Robust estimation, regression and ranking with applications in portfolio optimization

Nguyen, Tri-Dung, Ph. D. Massachusetts Institute of Technology January 2009 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2009. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (p. 108-112). / Classical methods of maximum likelihood and least squares rely a great deal on the correctness of the model assumptions. Since these assumptions are only approximations of reality, many robust statistical methods have been developed to produce estimators that are robust against deviations from the model assumptions. Unfortunately, these techniques have very high computational complexity that prevents their application to large-scale problems. We present computationally efficient methods for robust mean-covariance estimation and robust linear regression using special mathematical programming models and semi-definite programming (SDP). In the robust covariance estimation problem, we design an optimization model with a loss function on the weighted Mahalanobis distances and show that the problem is equivalent to a system of equations and can be solved using the Newton-Raphson method. The problem can also be transformed into an SDP problem, from which we can flexibly incorporate prior beliefs into the estimators without much increase in the computational complexity. The robust regression problem is often formulated as the least trimmed squares (LTS) regression problem, where we want to find the best subset of observations with the smallest sum of squared residuals. We show the LTS problem is equivalent to a concave minimization problem, which is very hard to solve. We resolve this difficulty by introducing the "maximum trimmed squares" problem that finds the worst subset of observations. This problem can be transformed into an SDP problem that can be solved efficiently while still guaranteeing that we can identify outliers. / (cont.) In addition, we model the robust ranking problem as a mixed integer minimax problem where the ranking is in a discrete uncertainty set. We use mixed integer programming methods, specifically column generation and network flows, to solve the robust ranking problem. To illustrate the power of these robust methods, we apply them to the mean-variance portfolio optimization problem in order to incorporate estimation errors into the model. / by Tri-Dung Nguyen. / Ph.D.
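
The SDP reformulations do not fit in a short sketch, but the least trimmed squares (LTS) objective itself can be illustrated with the classical concentration-step heuristic (random elemental starts plus repeated refitting on the h best-fitting points). This is the generic FAST-LTS-style heuristic, not the maximum-trimmed-squares SDP developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, n_outliers = 200, 3, 30

# Synthetic regression data with a 15% cluster of gross outliers in the response.
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta_true = np.array([1.0, 2.0, -1.0, 0.5])
y = X @ beta_true + 0.5 * rng.normal(size=n)
y[:n_outliers] += 15.0

h = int(0.75 * n)                            # size of the trimmed subset

def lts_cstep(X, y, h, n_starts=50, n_iter=20):
    """Least trimmed squares via random elemental starts + concentration steps (heuristic)."""
    best_beta, best_obj = None, np.inf
    for _ in range(n_starts):
        subset = rng.choice(len(y), size=X.shape[1], replace=False)
        for _ in range(n_iter):
            beta, *_ = np.linalg.lstsq(X[subset], y[subset], rcond=None)
            resid2 = (y - X @ beta) ** 2
            subset = np.argsort(resid2)[:h]                   # keep the h best-fitting points
        obj = np.sort(resid2)[:h].sum()
        if obj < best_obj:
            best_obj, best_beta = obj, beta
    return best_beta

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_lts = lts_cstep(X, y, h)
print("true:", beta_true)
print("OLS :", np.round(beta_ols, 2))        # pulled toward the contaminated responses
print("LTS :", np.round(beta_lts, 2))        # close to the true coefficients
```

OLS gets dragged toward the contaminated observations, while the trimmed fit recovers coefficients near the truth -- the behavior the robust estimators above are designed to achieve tractably at large scale.
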
217

Data-driven approach to health care : applications using claims data

Bjarnadóttir, Margrét Vilborg January 2008 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2008. / Includes bibliographical references (p. 123-130). / Large population health insurance claims databases, together with operations research and data mining methods, have the potential to significantly impact health care management. In this thesis we research how claims data can be utilized in three important areas of health care and medicine and apply our methods to a real claims database containing information on over two million health plan members. First, we develop forecasting models for health care costs that outperform previous results. Second, through examples we demonstrate how large-scale databases and advanced clustering algorithms can lead to the discovery of medical knowledge. Lastly, we build a mathematical framework for a real-time drug surveillance system, and demonstrate with real data that side effects can be discovered faster than with the current post-marketing surveillance system. / by Margrét Vilborg Bjarnadóttir. / Ph.D.
218

Planning combat outposts to maximize population security / Planning COPs to maximize population security / Combat outpost planning to maximize population security

Seidel, Scott B January 2010 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 123-124). / Combat outposts (COPs) are small, well-protected bases in which soldiers reside and from which they conduct operations. Used extensively during the "Surge" in Iraq, COPs are usually established in populated areas and were prevalent in counterinsurgency operations in Afghanistan in 2010. This research models population security to determine combat outpost locations in a battalion area of operation. Population security is measured by the level of violence, the level of insurgent activity, and the effectiveness of host nation security forces. The area of operation is represented as a graphical network of nodes and arcs. Operational inputs include pertinent information about each node. The model allows the commander to set various weights that reflect his understanding of the situation, mission, and local people. Based on trade-offs in patrolling and self-protection, the deterministic model recommends the size and locations for emplacing combat outposts and conducting patrols. We use piecewise linear approximation to solve the problem as a mixed-integer linear program. Results are based on two representative scenarios and show the impact of an area of operation's characteristics and the commander's weights on COP size and locations. / by Scott B. Seidel. / S.M.
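
A stripped-down illustration of the flavor of such a model (toy network, weights, and patrol range; not the thesis's formulation, which trades off patrolling against self-protection with commander-set weights): choose at most K outpost locations on a node network to maximize population-weighted coverage, written as a mixed-integer linear program and solved with SciPy's milp (requires SciPy >= 1.9).

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(0)
n, K, patrol_range = 12, 3, 30.0

# Toy area of operation: node coordinates and population/importance weights (made up).
coords = rng.uniform(0, 100, size=(n, 2))
weight = rng.integers(1, 10, size=n).astype(float)
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
covers = dist <= patrol_range                      # covers[i, j]: a COP at j can patrol node i

# Decision variables: x_j (open a COP at node j), y_i (node i is covered).
c = np.concatenate([np.zeros(n), -weight])         # milp minimizes, so negate coverage value

# y_i <= sum_{j in range of i} x_j   and   sum_j x_j <= K
A_cover = np.hstack([-covers.astype(float), np.eye(n)])
A_budget = np.concatenate([np.ones(n), np.zeros(n)])[None, :]
constraints = [LinearConstraint(A_cover, -np.inf, 0.0),
               LinearConstraint(A_budget, -np.inf, K)]
integrality = np.concatenate([np.ones(n), np.zeros(n)])   # x binary, y continuous in [0, 1]
bounds = Bounds(np.zeros(2 * n), np.ones(2 * n))

res = milp(c=c, constraints=constraints, integrality=integrality, bounds=bounds)
x = np.round(res.x[:n]).astype(int)
print("open COPs at nodes:", np.flatnonzero(x))
print("covered population weight:", -res.fun)
```

In the thesis the objective additionally reflects violence levels, insurgent activity, and host nation force effectiveness, and a piecewise linear approximation keeps that richer model a mixed-integer linear program.
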
219

Combinatorial structures in online and convex optimization

Gupta, Swati, Ph. D. Massachusetts Institute of Technology January 2017 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2017. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 157-163). / Motivated by bottlenecks in algorithms across online and convex optimization, we consider three fundamental questions over combinatorial polytopes. First, we study the minimization of separable strictly convex functions over polyhedra. This problem is motivated by first-order optimization methods whose bottleneck relies on the minimization of an (often) separable, convex metric, known as the Bregman divergence. We provide a conceptually simple algorithm, Inc-Fix, in the case of submodular base polyhedra. For cardinality-based submodular polytopes, we show that Inc-Fix can be sped up to be the state-of-the-art method for minimizing uniform divergences. We show that the running time of Inc-Fix is independent of the convexity parameters of the objective function. The second question is concerned with the complexity of the parametric line search problem in the extended submodular polytope P: starting from a point inside P, how far can one move along a given direction while maintaining feasibility? This problem arises as a bottleneck in many algorithmic applications, like the above-mentioned Inc-Fix algorithm and variants of the Frank-Wolfe method. One of the most natural approaches is to use the discrete Newton's method; however, no upper bound on the number of iterations for this method was known. We show a quadratic bound, resulting in a factor of n^6 reduction in the worst-case running time from the previous state-of-the-art. The analysis leads to interesting extremal questions on set systems and submodular functions. Next, we develop a general framework to simulate the well-known multiplicative weights update algorithm for online linear optimization over combinatorial strategies U in time polynomial in log |U|, using efficient approximate general counting oracles. We further show that efficient counting over the vertex set of any 0/1 polytope P implies efficient convex minimization over P. As a byproduct of this result, we can approximately decompose any point in a 0/1 polytope into a product distribution over its vertices. Finally, we compare the applicability and limitations of the above results in the context of finding Nash equilibria in combinatorial two-player zero-sum games with bilinear loss functions. We prove structural results that can be used to find certain Nash equilibria with a single separable convex minimization. / by Swati Gupta. / Ph. D.
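
For context, here is the textbook multiplicative weights (Hedge) update for online linear optimization over an explicitly listed strategy set. The thesis's contribution is to simulate exactly this update implicitly, via approximate counting oracles, when the strategy set is an exponentially large combinatorial family -- which this explicit sketch deliberately does not attempt.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 2000, 10                          # rounds, number of explicitly listed strategies
eta = np.sqrt(np.log(n) / T)             # standard learning-rate choice

weights = np.ones(n)
total_loss, cumulative = 0.0, np.zeros(n)

for t in range(T):
    probs = weights / weights.sum()          # play the mixed strategy given by the weights
    losses = rng.uniform(0, 1, size=n)       # arbitrary per-strategy losses in [0, 1]
    total_loss += probs @ losses
    cumulative += losses
    weights *= np.exp(-eta * losses)         # multiplicative weights update

print(f"algorithm avg loss: {total_loss / T:.4f}")
print(f"best fixed strategy avg loss: {cumulative.min() / T:.4f}")
```

With losses in [0, 1] and this learning rate, the average regret against the best fixed strategy decays on the order of sqrt(log n / T); replacing n explicit strategies with an exponentially large combinatorial set is exactly where the counting-oracle framework comes in.
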
220

Effective contracts in supply chains

Shum, Wanhang January 2007 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2007. / Includes bibliographical references (p. 115-121). / In the past decade, we have seen a significant increase in the level of outsourcing in many industries. This increase in the level of outsourcing increases the importance of implementing effective contracts in supply chains. In this thesis, we study several issues in supply chain contracts. In the first part of the thesis, we study the impact of effort in a supply chain with multiple retailers. The costly effort exerted by a retailer may increase or decrease the demands of other retailers. However, effort is usually not verifiable and hence not contractible. Based on the impact of a retailer's effort on its own and other retailers' revenue, we classify each retailer into different categories. According to the corresponding categories of all retailers, we identify coordinating contracts and general classes of contracts that cannot coordinate. Second, we study the stability of coordinating contracts in supply chains. We illustrate that, due to competition, not all coordinating contracts are achievable. Thus, we introduce the notion of rational contracts, which reflects the agents' bargaining power. We propose a general framework for coordinating and rational contracts. Using this framework, we analyze two supply chains: a supply chain with multiple suppliers and a single retailer, and a supply chain with a single supplier and price-competing retailers. / (cont.) We identify coordinating contracts for each case and characterize the bounds on profit shares for the agents in any rational contract. Finally, we study the robustness of coordinating contracts to renegotiation. Applying the concept of contract equilibrium, we show that many coordinating contracts are not robust to bilateral renegotiation if the relationship between the supplier and the retailers is a one-shot game. If the supplier and retailers engage in a long-term relationship, then many coordinating contracts are robust to bilateral renegotiation. We also extend the concept of contract equilibrium to the concept of strong contract equilibrium to study the robustness of contracts to multilateral renegotiation. We show that, in a repeated-game setting, the concept of strong contract equilibrium is related to the concept of rational contracts. / by Wanhang (Stephen) Shum. / Ph.D.
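
A standard single-supplier, single-retailer newsvendor example of what "coordination" means (generic textbook numbers, not a case from the thesis): under a wholesale-price contract the retailer orders less than the integrated chain would, while a revenue-sharing contract with wholesale price equal to the retailer's share of production cost restores the integrated order quantity.

```python
from scipy.stats import norm

# Generic newsvendor parameters (illustrative only, not from the thesis).
price, cost = 10.0, 3.0              # retail price, supplier's production cost
wholesale = 7.0                      # wholesale-price contract
phi_share = 0.45                     # retailer's revenue share under revenue sharing
mu, sigma = 100.0, 30.0              # normally distributed demand

def expected_sales(q):
    z = (q - mu) / sigma
    lost = sigma * (norm.pdf(z) - z * (1 - norm.cdf(z)))   # E[(D - q)^+]
    return mu - lost                                        # E[min(D, q)]

def newsvendor_q(underage, overage):
    """Optimal order quantity: critical fractile of the demand distribution."""
    return norm.ppf(underage / (underage + overage), loc=mu, scale=sigma)

# Integrated (centralized) chain.
q_int = newsvendor_q(price - cost, cost)
profit_int = price * expected_sales(q_int) - cost * q_int

# Wholesale-price contract: the retailer orders less and total profit drops (double marginalization).
q_w = newsvendor_q(price - wholesale, wholesale)
profit_w = price * expected_sales(q_w) - cost * q_w

# Revenue sharing with wholesale price phi_share * cost: the retailer's critical fractile
# matches the integrated one, so the chain is coordinated.
q_rs = newsvendor_q(phi_share * (price - cost), phi_share * cost)

print(f"integrated order {q_int:.1f}, chain profit {profit_int:.1f}")
print(f"wholesale-price order {q_w:.1f}, chain profit {profit_w:.1f}")
print(f"revenue-sharing order {q_rs:.1f} (equals the integrated order)")
```

The retailer's share phi_share then determines how the coordinated profit is split, which is where the thesis's notions of rational contracts and bargaining power enter.
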
