201

Regression under a modern optimization lens

King, Angela, Ph. D. Massachusetts Institute of Technology January 2015 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2015. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 131-139). / In the last twenty-five years (1990-2014), algorithmic advances in integer optimization combined with hardware improvements have resulted in an astonishing 200 billion factor speedup in solving mixed integer optimization (MIO) problems. The common mindset of MIO as theoretically elegant but practically irrelevant is no longer justified. In this thesis, we propose a methodology for regression modeling that is based on optimization techniques and centered around MIO. In Part I we propose a method to select a subset of variables to include in a linear regression model using continuous and integer optimization. Despite the natural formulation of subset selection as an optimization problem with an ℓ0-norm constraint, current methods for subset selection do not attempt to use integer optimization to select the best subset. We show that, although this problem is non-convex and NP-hard, it can be practically solved for large-scale problems. We numerically demonstrate that our approach outperforms other sparse learning procedures. In Part II of the thesis, we build on Part I to modify the objective function and include constraints that will produce linear regression models with other desirable properties, in addition to sparsity. We develop a unified framework based on MIO which aims to algorithmize the process of building a high-quality linear regression model. This is the only methodology we are aware of that constructs models imposing statistical properties simultaneously rather than sequentially. Finally, we turn our attention to logistic regression modeling. It is the goal of Part III of the thesis to efficiently solve the mixed integer convex optimization problem of logistic regression with cardinality constraints to provable optimality. We develop a tailored algorithm to solve this challenging problem and demonstrate its speed and performance. We then show how this method can be used within the framework of Part II, thereby also creating an algorithmic approach to fitting high-quality logistic regression models. In each part of the thesis, we illustrate the effectiveness of our proposed approach on both real and synthetic datasets. / by Angela King. / Ph. D.
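[Editor's note: for readers unfamiliar with the formulation mentioned above, best subset selection is usually written as cardinality-constrained least squares; a standard big-M MIO formulation (an illustrative sketch, not necessarily the thesis's exact formulation) is:]

```latex
\min_{\beta \in \mathbb{R}^p,\; z \in \{0,1\}^p} \;
\tfrac{1}{2}\,\lVert y - X\beta \rVert_2^2
\quad \text{s.t.} \quad
-M z_i \le \beta_i \le M z_i \;\; (i = 1, \dots, p),
\qquad \sum_{i=1}^{p} z_i \le k,
```

where $z_i = 0$ forces $\beta_i = 0$, so at most $k$ variables enter the model, and $M$ is a bound on coefficient magnitudes.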
202

Constructing learning models from data : the dynamic catalog mailing problem

Sun, Peng, 1974- January 2003 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2003. / Includes bibliographical references (p. 105-107). / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / The catalog industry is a large and important part of the US economy. One of the most important and challenging business decisions in the industry is to decide who should receive catalogs, due to the significant mailing cost and the low response rate. The problem is a dynamic one - when a customer is ready to purchase, s/he may order from a previous catalog if s/he does not have the most recent one. In this sense, customers' purchasing behavior depends not only on the firm's most recent mailing decision, but also on prior mailing decisions. From the firm's perspective, in order to maximize its long-term profit it should make a series of optimal mailing decisions to each customer over time. Contrary to the traditional myopic catalog mailing decision process that is generally implemented in the catalog industry, we propose a model that allows firms to design optimal dynamic mailing policies using their own business data. We constructed the model from a large data set provided by a catalog mailing company. The computational results from the historical data show great potential profit improvement. This application differs from many other applications of (approximate) dynamic programming in that an underlying Markov model is not a priori available, nor can it be derived in a principled manner. Instead, it has to be estimated or "learned" from available data. The thesis furthers the discussion on issues related to constructing learning models from data. More specifically, we discuss the so-called "endogeneity problem" and the effects of inaccuracy in model parameter estimation. The fact that the model parameter estimation depends on data collected according to a specific policy introduces an endogeneity problem. As a result, the derived optimal policy depends on the original policy used to collect the data. / (cont.) In the thesis we discuss a specific endogeneity problem, "attribution error." We also investigate whether online learning can solve this problem. More specifically, we discuss the existence of fixed point policies for potential online learning algorithms. Imprecision in model parameter estimation also creates the potential for bias. We illustrate this problem and offer a method for detecting it. Finally, we report preliminary results from a large scale field test that tests the effectiveness of the proposed approach in a real business decision setting. / by Peng Sun. / Ph.D.
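[Editor's note: a minimal sketch of the kind of approach the abstract describes - estimate a Markov model from mailing data, then compute a dynamic mailing policy by value iteration. The state space, transition matrices, and profits below are made up for illustration; they are not the thesis's model.]

```python
import numpy as np

# Illustrative: a customer occupies one of S recency/frequency states; each
# period the firm decides mail (1) or don't mail (0). Transition matrices P[a]
# and expected one-period profits r[a] are assumed estimated from history.
rng = np.random.default_rng(0)
S, gamma = 5, 0.95
P = {a: rng.dirichlet(np.ones(S), size=S) for a in (0, 1)}        # row-stochastic
r = {0: rng.normal(0.0, 1.0, S),
     1: rng.normal(0.5, 1.0, S) - 0.6}                            # profit net of mailing cost

V = np.zeros(S)
for _ in range(1000):                       # value iteration on the estimated model
    Q = np.stack([r[a] + gamma * P[a] @ V for a in (0, 1)])
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=0)                   # mail only where it pays off long-term
print(policy)
```

The endogeneity issue the abstract raises shows up here too: the estimates P[a] inherit whatever mailing policy generated the data.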
203

Large scale queueing systems : asymptotics and insights

Goldberg, David Alan, Ph. D. Massachusetts Institute of Technology January 2011 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2011. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 195-203). / Parallel server queues are a family of stochastic models useful in a variety of applications, including service systems and telecommunication networks. A particular application that has received considerable attention in recent years is the analysis of call centers. A feature common to these models is the notion of the 'trade-off' between quality and efficiency. It is known that if the underlying system parameters scale together according to a certain 'square-root scaling law', then this trade-off can be precisely quantified, in which case the queue is said to be in the Halfin-Whitt regime. A common approach to understanding this trade-off involves restricting one's models to have exponentially distributed call lengths, and restricting one's analysis to the steady-state behavior of the system. Both restrictions are considered shortcomings of much work in the area. Although several recent works have moved beyond these assumptions, many open questions remain, especially w.r.t. the interplay between the transient and steady-state properties of the relevant models. These questions are the primary focus of this thesis. In the first part of this thesis, we prove several results about the rate of convergence to steady-state for the M/M/n queue, i.e. the n-server queue with exponentially distributed inter-arrival and processing times, in the Halfin-Whitt regime. We identify the limiting rate of convergence to steady-state, discover an asymptotic phase transition that occurs w.r.t. this rate, and prove explicit bounds on the distance to stationarity. The results of the first part of this thesis represent an important step towards understanding how to incorporate transient effects into the analysis of parallel server queues. In the second part of this thesis, we prove several results regarding the steady-state G/G/n queue, i.e. the n-server queue with generally distributed inter-arrival and processing times, in the Halfin-Whitt regime. We first prove that under minor technical conditions, the steady-state number of jobs waiting in queue scales like the square root of the number of servers. We then establish bounds for the large deviations behavior of this model, partially resolving a conjecture made by Gamarnik and Momcilovic in [43]. We also derive bounds for a related process studied by Reed in [91]. We then derive the first qualitative insights into the steady-state probability that an arriving job must wait for service in the Halfin-Whitt regime, for generally distributed processing times. We partially characterize the behavior of this probability when a certain excess parameter B approaches either 0 or ∞. We conclude by studying the large deviations of the number of idle servers, proving that this random variable has a Gaussian-like tail. We prove our main results by combining tools from the theory of stochastic comparison [99] with the theory of heavy-traffic approximations [113]. We compare the system of interest to a 'modified' queue, in which all servers are kept busy at all times by adding artificial arrivals whenever a server would otherwise go idle, and certain servers can permanently break down. We then analyze the modified system using heavy-traffic approximations.
The proven bounds hold for all n, have representations as the suprema of certain natural processes, and may prove useful in a variety of settings. The results of the second part of this thesis enhance our understanding of how parallel server queues behave in heavy traffic, when processing times are generally distributed. / by David Alan Goldberg. / Ph.D.
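[Editor's note: for reference, the 'square-root scaling law' that defines the Halfin-Whitt regime can be stated as follows. With arrival rate $\lambda$ and per-server service rate $\mu$, the offered load is $R = \lambda/\mu$, and the number of servers is scaled as]

```latex
n = R + \beta \sqrt{R}, \qquad R = \frac{\lambda}{\mu}, \quad \beta > 0 \text{ fixed},
```

under which the steady-state probability that an arriving job must wait converges (for the M/M/n queue) to a limit strictly between 0 and 1 as $R \to \infty$, quantifying the quality-efficiency trade-off mentioned above.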
204

Competition and loss of efficiency : from electricity markets to pollution control

Kluberg, Lionel J. (Lionel Jonathan) January 2011 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2011. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 221-230). / The thesis investigates the costs and benefits of free competition as opposed to central regulation to coordinate the incentives of various participants in a market. The overarching goal of the thesis is to decide whether deregulated competition is beneficial for society, or more precisely, in which context and under what market structure and what conditions deregulation is beneficial. We consider oligopolistic markets in which a few suppliers with significant market power compete to supply differentiated substitute goods. In such markets, competition is modeled through the game theoretic concept of Nash equilibrium. The thesis compares the Nash equilibrium competitive outcome of these markets with the regulated situation in which a central authority coordinates the decision of the market participants to optimize social welfare. The thesis analyzes the impact of deregulation for producers, for consumers and for society as a whole. The thesis begins with a general quantity (Cournot) oligopolistic market where each producer faces independent production constraints. We then study how a company with multiple subsidiaries can reduce its global energy consumption in a decentralized manner while ensuring that the subsidiaries adopt a globally optimal behavior. We finally propose a new model of supply function competition for electricity markets and show how the number of competing generators and the electrical network constraints affect the performance of deregulation. / by Lionel J. Kluberg. / Ph.D.
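[Editor's note: the efficiency loss of Cournot competition relative to central coordination can be made concrete in a stylized symmetric linear-demand example (my illustration, not the thesis's constrained model):]

```python
import numpy as np

# N symmetric Cournot firms face linear inverse demand p(Q) = a - b*Q with
# constant marginal cost c; compare Nash equilibrium to the welfare optimum.
a, b, c, N = 100.0, 1.0, 20.0, 3

q_nash = (a - c) / (b * (N + 1))     # symmetric Cournot-Nash quantity per firm
Q_nash = N * q_nash
Q_opt = (a - c) / b                  # welfare-maximizing total quantity (price = c)

def welfare(Q):
    # consumer surplus + producer surplus under linear demand
    return (a - c) * Q - 0.5 * b * Q**2

print(f"efficiency of competition: {welfare(Q_nash) / welfare(Q_opt):.2%}")
# Here the ratio is 1 - 1/(N+1)^2 = 93.75% for N = 3, approaching 1 as N grows.
```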
205

New statistical techniques for designing future generation retirement and insurance solutions

Zhu, Zhe January 2014 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2014. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 103-106). / This thesis presents new statistical techniques for designing future generation retirement and insurance solutions. It addresses two major challenges for retirement and insurance products: asset allocation and policyholder behavior. In the first part of the thesis, we focus on estimating the covariance matrix for multidimensional data, and it is used in the application of asset allocation. Classical sample mean and covariance estimates are very sensitive to outliers, and therefore their robust counterparts are considered to overcome the problem. We propose a new robust covariance estimator using the regular vine dependence structure and pairwise robust partial correlation estimators. The resulting robust covariance estimator delivers high performance for identifying outliers under the Barrow Wheel Benchmark for large high dimensional datasets. Finally, we demonstrate a financial application of active asset allocation using the proposed robust covariance estimator. In the second part of the thesis, we expand the regular vine robust estimation technique proposed in the first part, and provide a theory and algorithm for selecting the optimal vine structure. Only two special cases of the regular vine structure were discussed in the previous part, but there are many regular vines of neither of those two types. In many applications, restricting our selection to just those two special types is not appropriate, and therefore we propose a vine selection theory based on optimizing the entropy function, as well as an approximation heuristic using the maximum spanning tree to find an appropriate vine structure. Finally, we demonstrate the idea with two financial applications. In the third part of the thesis, we focus on the policyholder behavior modeling for insurance and retirement products. In particular, we choose the variable annuity product, which has many desirable features for retirement saving purposes, such as stock-linked growth potential and protection against losses in the investment. Policyholder behavior is one of the most important profit or loss factors for the variable annuity product, and insurance companies generally do not have sophisticated models at the current time. We discuss a few new approaches using modern statistical learning techniques to model policyholder withdrawal behavior, and the result is promising. / by Zhe Zhu. / Ph. D.
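[Editor's note: a minimal sketch of the general idea of assembling a covariance estimate from robust pairwise pieces - robust scales via the MAD and robust correlations via Spearman's rho. This illustrates the principle only; it is not the thesis's vine-based estimator.]

```python
import numpy as np
from scipy import stats

def robust_cov(X):
    """Robust covariance from MAD scales and pairwise Spearman correlations."""
    X = np.asarray(X, dtype=float)
    scale = stats.median_abs_deviation(X, axis=0, scale="normal")  # robust std
    rho, _ = stats.spearmanr(X)                 # pairwise rank correlations
    r = 2.0 * np.sin(np.pi * rho / 6.0)         # consistency correction (Gaussian)
    np.fill_diagonal(r, 1.0)
    # Note: unlike a vine-based construction, this plug-in matrix is not
    # guaranteed positive semidefinite.
    return r * np.outer(scale, scale)

rng = np.random.default_rng(1)
true_cov = [[1, .8, .3], [.8, 1, .5], [.3, .5, 1]]
X = rng.multivariate_normal([0, 0, 0], true_cov, 500)
X[:10] *= 25.0                                  # inject gross outliers
print(np.round(robust_cov(X), 2))               # stays close to the true covariance
```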
206

An Operations Research approach to aviation security

Martonosi, Susan Elizabeth January 2005 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2005. / Includes bibliographical references (p. 151-163). / Since the terrorist attacks of September 11, 2001, aviation security policy has remained a focus of national attention. We develop mathematical models to address some prominent problems in aviation security. We first explore whether securing aviation deserves priority over other potential targets. We compare the historical risk of aviation terrorism to that posed by other forms of terrorism and conclude that the focus on aviation might be warranted. Secondly, we address the usefulness of passenger pre-screening systems to select potentially high-risk passengers for additional scrutiny. We model the probability that a terrorist boards an aircraft with weapons, incorporating deterrence effects and potential loopholes. We find that despite the emphasis on the pre-screening system, of greater importance is the effectiveness of the underlying screening process. Moreover, the existence of certain loopholes could occasionally decrease the overall chance of a successful terrorist attack. Next, we discuss whether proposed explosives detection policies for cargo, airmail and checked luggage carried on passenger aircraft are cost-effective. / (cont.) We define a threshold time such that if an attempted attack is likely to occur before this time, it is cost-effective to implement the policy, otherwise not. We find that although these three policies protect against similar types of attacks, their cost-effectiveness varies considerably. Lastly, we explore whether dynamically assigning security screeners at various airport security checkpoints can yield major gains in efficiency. We use approximate dynamic programming methods to determine when security screeners should be switched between checkpoints in an airport to accommodate stochastic queue imbalances. We compare the performance of such dynamic allocations to that of pre-scheduled allocations. We find that unless the stochasticity in the system is significant, dynamically reallocating servers might only marginally reduce the average waiting time. Without knowing certain parameter values or understanding terrorist behavior, it can be difficult to draw concrete conclusions about aviation security policies. / (cont.) Nevertheless, these mathematical models can guide policy-makers in adopting security measures, by helping to identify parameters most crucial to the effectiveness of aviation security policies, and helping to analyze how varying key parameters or assumptions can affect strategic planning. / by Susan Elizabeth Martonosi. / Ph.D.
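[Editor's note: a stylized version of the threshold-time logic, as my own illustration - the thesis's calibrated model is richer. If a detection policy costs $c$ per unit time and would avert a loss $L$ with probability $p$, then maintaining it until an attack at time $T$ is cost-effective only if $cT \le pL$, i.e. only if the attack is likely to occur before the break-even threshold]

```latex
T^{*} = \frac{pL}{c}.
```

Policies with similar protective effect $pL$ can thus have very different thresholds $T^{*}$ when their running costs $c$ differ, which is one way to read the abstract's finding that cost-effectiveness varies considerably across the three policies.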
207

The aircraft sequencing problem with arrivals and departures

Muharremoğlu, Alp, 1975- January 2000 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, Operations Research Center, 2000. / Includes bibliographical references (leaves 57-58). / This thesis investigates the Aircraft Sequencing Problem (ASP) with Arrivals and Departures. The ASP is the problem of sequencing the arriving and departing aircraft on a single runway to minimize certain performance criteria. We focus on minimizing the total weighted delay. Both the theoretical aspects of the problem and some practical issues are discussed. The static version of the problem is basically a scheduling problem with sequence-dependent processing times and ready times, with the objective of minimizing total weighted delay. Exact algorithms for this problem are not fast enough for practical implementation. We give several algorithms that can be used both for the static and the dynamic versions of the problem. These algorithms are not exact, but they are much faster than an exact algorithm and address some very important practical issues related to the ASP. Computational results from these algorithms are given. The computational results demonstrate that the potential benefits of using optimization in the sequencing of arrivals and departures in the Terminal Area are fairly significant. For example, the algorithm HWTW with MPS = (0,0) reduces delays by 40% compared to FCFS. Certain fairness and safety issues are addressed as well. Acknowledgments: I would like to thank my advisor, Prof. Amedeo R. Odoni, for his support during the past two years. This research was partially supported by the Federal Aviation Administration (FAA) under the project "Advanced Concepts for Collaborative Decision Making (CDM)," award number SA1603JB and by the Charles Stark Draper Laboratory Inc., under Contract Number DLH-505328. / by Alp Muharremoğlu. / S.M.
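[Editor's note: a toy instance of the static ASP described above - minimize total weighted delay on one runway with sequence-dependent separations. The flights, weights, and separation rule below are made up for illustration.]

```python
import itertools

ready = {"A1": 0, "A2": 1, "D1": 2, "D2": 3}    # earliest operation times (min)
weight = {"A1": 3, "A2": 1, "D1": 2, "D2": 1}   # delay cost per minute

def sep(prev, cur):
    # longer separation when an arrival follows a departure or vice versa
    return 2 if prev[0] != cur[0] else 1

def total_weighted_delay(seq):
    t, prev, cost = 0, None, 0
    for f in seq:
        t = max(t + (sep(prev, f) if prev else 0), ready[f])
        cost += weight[f] * (t - ready[f])
        prev = f
    return cost

fcfs = sorted(ready, key=ready.get)                       # first-come-first-served
best = min(itertools.permutations(ready), key=total_weighted_delay)
print("FCFS:", total_weighted_delay(fcfs),
      "optimal:", total_weighted_delay(best))
```

Brute-force enumeration is only viable for tiny instances; the thesis's point is precisely that exact methods are too slow in practice, motivating fast heuristics such as the HWTW algorithm cited above.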
208

Dynamic pricing with demand learning under competition

Simon, Carine (Carine Anne Marie) January 2007 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2007. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Includes bibliographical references (p. 199-204). / In this thesis, we focus on oligopolistic markets for a single perishable product, where firms compete by setting prices (Bertrand competition) or by allocating quantities (Cournot competition) dynamically over a finite selling horizon. The price-demand relationship is modeled as a parametric function, whose parameters are unknown, but learned through a data driven approach. The market can be either in disequilibrium or in equilibrium. In disequilibrium, we consider simultaneously two forms of learning for the firm: (i) learning of its optimal pricing (resp. allocation) strategy, given its belief regarding its competitors' strategy; (ii) learning the parameters in the price-demand relationship. In equilibrium, each firm seeks to learn the parameters in the price-demand relationship for itself and its competitors, given that prices (resp. quantities) are in equilibrium. In this thesis, we first study the dynamic pricing (resp. allocation) problem when the parameters in the price-demand relationship are known. We then address the dynamic pricing (resp. allocation) problem with learning of the parameters in the price-demand relationship. We show that the problem can be formulated as a bilevel program in disequilibrium and as a Mathematical Program with Equilibrium Constraints (MPEC) in equilibrium. Using results from variational inequalities, bilevel programming and MPECs, we prove that learning the optimal strategies as well as the parameters, is achieved. Furthermore, we design a solution method for efficiently solving the problem. We prove convergence of this method analytically and discuss various insights through a computational study. / (cont.) Finally, we consider closed-loop strategies in a duopoly market when demand is stochastic. Unlike open-loop policies, which are computed once and for all at the beginning of the time horizon, closed-loop policies are computed at each time period, so the firm can take advantage of having observed past random disturbances in the market. In a closed-loop setting, subgame perfect equilibrium is the relevant notion of equilibrium. We investigate the existence and uniqueness of a subgame perfect equilibrium strategy, as well as approximations of the problem in order to be able to compute such policies more efficiently. / by Carine Simon. / Ph.D.
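[Editor's note: a single-firm sketch of the parameter-learning loop the abstract describes - a linear price-demand model re-estimated each period, with myopic pricing on the current estimate. This is an illustration only; the thesis treats the much harder competitive (bilevel/MPEC) case.]

```python
import numpy as np

# True (unknown) demand model: d = a - b*p + noise.
rng = np.random.default_rng(2)
a_true, b_true = 50.0, 2.0
prices = [5.0, 15.0]                       # two initial exploratory prices
demands = [a_true - b_true * p + rng.normal(0, 1) for p in prices]

for t in range(20):
    # Re-estimate (a, b) by least squares on all observations so far.
    A = np.column_stack([np.ones(len(prices)), -np.array(prices)])
    a_hat, b_hat = np.linalg.lstsq(A, np.array(demands), rcond=None)[0]
    p = a_hat / (2.0 * b_hat)              # maximizes revenue p*(a_hat - b_hat*p)
    prices.append(p)
    demands.append(a_true - b_true * p + rng.normal(0, 1))

print(f"estimated (a, b) = ({a_hat:.1f}, {b_hat:.1f}); last price {p:.2f} "
      f"(true optimum {a_true / (2 * b_true):.2f})")
```

Even in this single-firm toy, pricing only at the current myopic optimum can starve the estimator of price variation - one reason the joint learning of strategies and parameters studied in the thesis is delicate.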
209

Finding optimal strategies for influencing social networks in two player games

Howard, Nicholas J. (Nicholas Jacob) January 2011 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, June 2011. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 141). / This thesis considers the problem of optimally influencing social networks in Afghanistan as part of ongoing counterinsurgency efforts. The social network is analyzed using a discrete time agent based model. Each agent has a belief in [-0.5, 0.5] and interacts stochastically, pairwise, with its neighbors. The network converges to a set of equilibrium beliefs in expectation. A 2-player game is formulated in which the players control a set of stubborn agents whose beliefs never change, and who wield significant influence in the network. Each player chooses how to connect their stubborn agents to maximally influence the network. Two different payoff functions are defined, and the pure Nash equilibrium strategy profiles are found in a series of test networks. Finding equilibrium strategy profiles can be difficult for large networks due to the exponential growth of the strategy space, but a simulated annealing heuristic is used to rapidly find equilibria using best response dynamics. We demonstrate through experimentation that the games formulated admit pure Nash equilibrium strategy profiles and that best response dynamics can be used to find them. We also test a scenario based on the author's experience in Afghanistan to show how nonsymmetric equilibria can naturally emerge if each player weights the value of agents in the network differently. / by Nicholas J. Howard. / S.M.
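[Editor's note: a sketch of the expected equilibrium beliefs in a network with stubborn agents, assuming the standard linear (voter-style) update consistent with the abstract's description; the network, weights, and node ordering below are made up. Free agents average their neighbors in expectation, stubborn agents never move, and the equilibrium solves a linear system.]

```python
import numpy as np

# Row-stochastic interaction weights: 4 free agents followed by 2 stubborn agents.
W = np.array([
    [0.0, 0.5, 0.0, 0.0, 0.5, 0.0],
    [0.3, 0.0, 0.3, 0.0, 0.4, 0.0],
    [0.0, 0.3, 0.0, 0.3, 0.0, 0.4],
    [0.0, 0.0, 0.5, 0.0, 0.0, 0.5],
])
x_stub = np.array([0.5, -0.5])            # player 1's and player 2's stubborn agents

# In expectation, x_free = W_ff @ x_free + W_fs @ x_stub, so:
W_ff, W_fs = W[:, :4], W[:, 4:]
x_free = np.linalg.solve(np.eye(4) - W_ff, W_fs @ x_stub)
print(np.round(x_free, 3))                # beliefs pulled toward the nearer stubborn agent
```

A player's payoff can then be any function of x_free (e.g., its weighted sum), and the game reduces to each player choosing where to attach their stubborn agents' edges.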
210

Coordinated dynamic planning for air and space operations / CDASOCS

Wroten, Matthew Christian January 2005 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2005. / Includes bibliographical references (p. 148-150). / Planners of military air and space operations in a battlefield environment seek to allocate resources against targets in a way that best achieves the objectives of the commander. In future conflicts, the presence of new types of assets, such as tactical space-based sensors and Operationally Responsive Spacelift (ORS) assets, will add complexity to air and space operations decisions. In order to best achieve objectives, planners of different types of assets will likely need to work collaboratively when formulating tasking for their resources. The purpose of this research is to investigate the challenges of air and space collaboration and to quantify its potential benefit. We model a future threat scenario involving a rogue nation with Weapons of Mass Destruction (WMD) capability and a significant air defense force. We consider three separately-controlled resource groups - aircraft, satellites, and ORS assets - to combat the target threat. In addition, we formulate a top-level coordination controller, whose job it is to effect collaborative decision-making among resource groups. Using a combination of pre-existing software and new algorithms, we develop the Coordinated Dynamic Air and Space Operations Control System (CDASOCS), which simulates controller-generated plans in a battlefield environment recurring over multiple planning periods. New algorithms are presented for both the top-level coordination controller and the ORS controller. The benefits of resource coordination in CDASOCS are demonstrated in three main experiments along with several parameter variation tests. / by Matthew Christian Wroten. / S.M.
