851

Modeling preferences for innovative modes and services : a case study in Lisbon

Yang, Lang, S.M. Massachusetts Institute of Technology January 2010 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering; and, (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (p. 150-156). / Increases in car ownership and usage have resulted in serious traffic congestion problems in many large cities worldwide. Innovative travel modes and services can play an important role in improving the efficiency and sustainability of transportation systems. In this study, we evaluate preferences for several new modes and services (one-way car rental, shared taxi, express minibus, school bus service for park and ride, and congestion pricing) in the context of Lisbon, Portugal, using stated preferences (SP) techniques. The survey design is challenging in several respects. First, the large number of existing and innovative modes poses a challenge for the SP design. To simplify the choice experiments, sequential approaches are used to divide the large choice set into car-based, public transport, and multimodal groups. Second, there is a large set of candidate variables that are likely to affect mode choices; the findings of focus group discussions are analyzed to identify the key variables. Third, the innovative modes and services are likely to affect not only mode choices but also the choices of departure time and occupancy (in the case of private modes). A multidimensional choice set of travel mode, departure time, and occupancy is therefore considered. Two types of models are used to investigate the preferences and acceptability of innovative modes and services: nested logit models and mixed logit models.
The main attributes in the systematic utilities include the natural logarithm of travel time and cost, schedule delay, size variables for unequal departure time intervals, and inertia toward the revealed preferences (RP) choices of travel mode, departure time, and occupancy. The values of willingness to pay (WTP) are found to depend on trip purpose, market segment, and the magnitudes of travel cost and time. Mixed logit models address the complex correlation and heterogeneity in the SP data better than nested logit models, and based on the estimation results they are found to be more efficient and reliable. They can provide important information for transportation planners and policy makers working to achieve sustainable transportation systems in Portugal as well as in other countries. / by Lang Yang. / S.M.
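As a hedged illustration of the logit machinery this abstract describes (not the thesis's estimated model: the coefficients and mode attributes below are hypothetical), a minimal sketch of multinomial logit choice probabilities with log-transformed time and cost shows why WTP depends on the magnitudes of travel time and cost:

```python
import math

def logit_probs(utilities):
    """Multinomial logit choice probabilities from systematic utilities."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def wtp(beta_time, beta_cost, time, cost):
    """Value of travel time savings when utility is
    V = beta_time*ln(time) + beta_cost*ln(cost):
    WTP = (dV/dtime)/(dV/dcost) = (beta_time/time) / (beta_cost/cost)."""
    return (beta_time / time) / (beta_cost / cost)

# Hypothetical coefficients (both negative: time and cost are disutilities).
bt, bc = -1.2, -0.9
# Hypothetical attributes: car 30 min at 4.0 EUR, shared taxi 40 min at 2.5 EUR.
v_car = bt * math.log(30) + bc * math.log(4.0)
v_taxi = bt * math.log(40) + bc * math.log(2.5)
p_car, p_taxi = logit_probs([v_car, v_taxi])
```

Because the utilities are logarithmic in time and cost, the marginal rate of substitution scales with cost/time, so the implied WTP varies across trips, consistent with the abstract's finding.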
852

Survivable paths in multilayer networks

Parandehgheibi, Marzieh January 2012 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (p. 75-77). / We consider the problem of protection in multilayer networks. In single-layer networks, a pair of disjoint paths can be used to provide protection for a source-destination pair. However, this approach cannot be directly applied to layered networks, where disjoint paths may not always exist. In this thesis, we take a new approach based on finding a set of paths that may not be disjoint but together will survive any single physical link failure. First, we consider the problem of finding the minimum number of survivable paths. In particular, we focus on two versions of this problem: one where the length of a path is restricted, and the other where the number of paths sharing a fiber is restricted. We prove that in general, finding the minimum survivable path set is NP-hard, whereas both of the restricted versions of the problem can be solved in polynomial time. We formulate the problem as Integer Linear Programs (ILPs), and use these formulations to develop heuristics and approximation algorithms. Next, we consider the problem of finding a set of survivable paths that uses the minimum number of fibers. We show that this problem is NP-hard in general, and develop heuristics and approximation algorithms with provable approximation bounds. We also model the dependency of communication networks on the power grid as a layered network, and investigate the survivability of communication networks in this layered setting. Finally, we present simulation results comparing the different algorithms.
/ by Marzieh Parandehgheibi. / S.M.
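The thesis's core idea, that paths which are not pairwise disjoint can still jointly survive any single physical link failure, can be sketched with a toy feasibility check (the fiber names and paths below are hypothetical, not from the thesis):

```python
def survives_all_single_failures(paths, fibers):
    """A set of logical paths (each given as the set of physical fibers it
    uses) survives any single physical link failure iff, for every fiber,
    at least one path avoids that fiber."""
    return all(any(f not in path for path in paths) for f in fibers)

# Toy multilayer example with three hypothetical fibers f1..f3.
fibers = {"f1", "f2", "f3"}
# No pair of these logical paths is fiber-disjoint, yet together the three
# paths survive any single fiber cut.
paths = [{"f1", "f2"}, {"f2", "f3"}, {"f1", "f3"}]
```

Here no two paths are fiber-disjoint, so single-layer disjoint-path protection is impossible, yet the set as a whole is survivable. This is exactly the situation that motivates survivable path sets over disjoint paths.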
853

Performance of Dynamic Programming methods in airline Revenue Management / Performance of DP methods in airline RM

Diwan, Sarvee January 2010 (has links)
Thesis (S.M. in Transportation)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering; and, (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 159-163). / This thesis evaluates the performance of Dynamic Programming (DP) models as applied to airline Revenue Management (RM), compared to traditional RM models such as EMSRb; DP models offer a theoretically attractive alternative to the traditional models. In the first part of this thesis, we develop a simplified simulator to evaluate the effects of changing demand variance on the performance of standard DP on a single flight leg. This simulator excludes the effects of forecast quality and competitive effects such as passenger sell-up and inter-airline spill. In the next part of the thesis, we introduce two network-based DP methods that incorporate network displacement costs in the standard DP-based optimizer, and perform simulation experiments in a larger competitive network using the Passenger Origin Destination Simulator (PODS) to study the performance of DP methods in airline RM systems. The results of the single-flight-leg experiments from the simplified simulator show that DP methods do not consistently outperform EMSRb, and the sensitivity analysis shows that the performance of DP relative to EMSRb depends on demand variability, demand factor, fare ratios, and the passenger arrival pattern. The results from the PODS competitive network simulations show that DP methods, despite not showing significant benefits in the simplified simulator, can outperform EMSRb when used in a competitive environment: DP's aggressive seat protection policy helps it generate more revenue than EMSRb through competitive feedback effects such as inter-airline passenger spill-in and passenger sell-up within the airline. / by Sarvee Diwan. / S.M.
/ S.M.in Transportation
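A minimal sketch of the standard single-leg DP mentioned above (a textbook simplification, not the thesis's simulator: at most one request per period, with hypothetical fares and arrival probabilities):

```python
def optimal_expected_revenue(T, capacity, fares, arrival_probs):
    """Standard single-leg RM dynamic program: V[t][c] is the optimal
    expected revenue with t periods remaining and c seats left.  At most
    one request arrives per period; fare class j arrives with probability
    arrival_probs[j]."""
    p0 = 1.0 - sum(arrival_probs)            # probability of no arrival
    V = [[0.0] * (capacity + 1) for _ in range(T + 1)]
    for t in range(1, T + 1):
        for c in range(1, capacity + 1):
            v = p0 * V[t - 1][c]
            for f, p in zip(fares, arrival_probs):
                # Accept a request only if its fare beats the expected
                # displacement cost of giving up a seat.
                v += p * max(f + V[t - 1][c - 1], V[t - 1][c])
            V[t][c] = v
    return V[T][capacity]
```

The max(...) comparison is where the DP's seat protection arises: a request is accepted only when its fare exceeds the displacement cost V[t-1][c] - V[t-1][c-1], which is the "aggressive seat protection" behavior the abstract attributes to DP methods.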
854

Essays on Infrastructure Design and Planning for Clean Energy Systems

Kocaman, Ayse January 2014 (has links)
The International Energy Agency estimates that nearly 1.3 billion people do not have access to electricity, and a billion more have only unreliable and intermittent supply. Moreover, current electricity generation relies mostly on fossil fuels, which are finite and one of the greatest threats to the environment. Rising population growth rates, depleting fuel sources, environmental issues, and economic development have increased the need for mathematical optimization to provide a formal framework for systematic and clear decision-making in energy operations. Through its methodologies and algorithms, this thesis provides tools for energy generation, transmission, and distribution system design, and helps policy makers make rapid and accurate cost assessments in energy infrastructure planning. In Chapter 2, we focus on local-level power distribution system planning for rural electrification, using techniques from combinatorial optimization. We describe a heuristic algorithm that provides a quick solution for the partial electrification problem, where the distribution network can only connect a pre-specified number of households with low-voltage lines. The algorithm demonstrates the effect of household settlement patterns on the electrification cost. We also describe the first heuristic algorithm that selects the locations and service areas of transformers without requiring candidate solutions, and simultaneously builds a two-level grid network in a green-field setting. The algorithms are applied to real-world rural settings in Africa, where household locations digitized from satellite imagery are prescribed. In Chapters 3 and 4, we focus on power generation and transmission using clean energy sources. Here, we imagine a future country in which hydro and solar are the dominant sources and fossil fuels are available only in minimal form.
We discuss the problem of modeling hydro and solar energy production and allocation, including long-term investments and storage, capturing the stochastic nature of hourly supply and demand data. We mathematically model two hybrid energy generation and allocation systems in which the time variability of energy sources and demand is balanced using the water stored in reservoirs. In Chapter 3, we use conventional hydro power stations (incoming stream flows are stored in large dams and water release is deferred until it is needed), and in Chapter 4, we use pumped hydro stations (water is pumped from a lower reservoir to an upper reservoir during periods of low demand, to be released for generation when demand is high). The aim of the models is to determine the optimal sizing of the infrastructure needed to match demand and supply in the most reliable and cost-effective way. An innovative contribution of this work is a new perspective on energy modeling that includes fine-grained sources of uncertainty, such as stream flow and solar radiation, at an hourly level, as well as the spatial location of supply, demand, and the transmission network at a national level. In addition, we compare the conventional and pumped hydro power systems in terms of reliability and cost efficiency, and quantitatively show the improvement provided by including pumped hydro storage. The model is presented with a case study of India, and helps answer whether solar energy, in addition to the hydro power potential of the Himalaya Mountains, would be enough to meet growing electricity demand if fossil fuels were almost completely phased out of electricity generation.
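A hedged toy version of the conventional-hydro balancing logic from Chapter 3 (a deterministic hour-by-hour water balance with made-up numbers, not the thesis's stochastic sizing model):

```python
def simulate(inflows, solar, demand, capacity, turbine_cap):
    """Hour-by-hour balance for a conventional hydro reservoir backing up
    solar: solar serves demand first, and hydro release covers the residual
    up to the turbine capacity and available storage.  Returns total unmet
    demand and final storage."""
    storage, unmet = 0.0, 0.0
    for inflow, s, d in zip(inflows, solar, demand):
        storage = min(storage + inflow, capacity)   # spill above capacity
        residual = max(d - s, 0.0)                  # demand left after solar
        release = min(residual, turbine_cap, storage)
        storage -= release
        unmet += residual - release
    return unmet, storage
```

In this toy run, water stored during the sunny hours is released in the later hours, illustrating how reservoir storage absorbs the time mismatch between supply and demand that the sizing models optimize over.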
855

Asymptotic Analysis of Service Systems with Congestion-Sensitive Customers

Yao, John Jia-Hao January 2016 (has links)
Many systems in services, manufacturing, and technology feature users or customers sharing a limited number of resources, and suffer some form of congestion when the number of users exceeds the number of resources. In such settings, queueing models are a common tool for describing the dynamics of the system and quantifying the congestion that results from the aggregated effects of individuals joining and leaving the system. Additionally, the customers themselves may be sensitive to congestion and react to the performance of the system, creating feedback between individual customer behavior and aggregate system dynamics. This dissertation focuses on the modeling and performance of service systems with congestion-sensitive customers, using large-scale asymptotic analyses of queueing models. This work extends the theoretical literature on congestion-sensitive customers in queues in the settings of service differentiation and of observational learning and abandonment. Chapter 2 considers the problem of a service provider facing a heterogeneous market of customers who differ in their value for service and delay sensitivity. The service provider seeks the revenue-maximizing level of service differentiation (offering different price-delay combinations). We show that the optimal policy places the system in heavy traffic, but at substantially different levels of congestion depending on the degree of service differentiation. Moreover, in a differentiated offering, the level of congestion varies substantially between service classes. Chapter 3 presents a new model of customer abandonment in which congestion-sensitive customers observe the queue length but do not know the service rate. Instead, they join the queue and observe their progress in order to estimate their wait times and make abandonment decisions.
We show that an overloaded queue with observational learning and abandonment stabilizes at a queue length whose scale depends on the tail of the service time distribution. Methodologically, our asymptotic approach leverages stochastic limit theory to provide simple and intuitive results for optimizing or characterizing system performance. In particular, we use the analysis of deterministic fluid-type queues to provide a first-order characterization of the stochastic system dynamics, which is demonstrated by the convergence of the stochastic system to the fluid model. This also allows us to crisply illustrate and quantify the relative contributions of system or customer characteristics to overall system performance.
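The fluid-model approach described above can be illustrated with a minimal sketch. Note this uses Markovian abandonment at rate theta per waiting customer, a much simpler patience model than the observational-learning mechanism in Chapter 3, so it only conveys the qualitative stabilization effect:

```python
def fluid_queue(lam, mu, theta, q0=0.0, horizon=200.0, dt=0.01):
    """Euler integration of the fluid ODE q'(t) = lam - mu - theta*q(t)
    for an overloaded queue (lam > mu) with per-customer abandonment rate
    theta; the fluid equilibrium is q* = (lam - mu)/theta."""
    q = q0
    for _ in range(int(horizon / dt)):
        q = max(q + dt * (lam - mu - theta * q), 0.0)
    return q
```

For lam > mu the overloaded fluid queue stabilizes at q* = (lam - mu)/theta rather than growing without bound, the same first-order stabilization-by-abandonment behavior that the dissertation's fluid analysis makes precise for its richer model.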
856

Approximation Algorithms for Demand-Response Contract Execution and Coflow Scheduling

Qiu, Zhen January 2016 (has links)
Solving operations research problems with approximation algorithms has been an important topic, since approximation algorithms can provide near-optimal solutions to NP-hard problems while achieving computational efficiency. In this thesis, we consider two different problems, in the fields of optimal control and scheduling theory respectively, and develop efficient approximation algorithms with performance guarantees for each. Chapter 2 presents approximation algorithms for solving the optimal execution problem for demand-response contracts in electricity markets. Demand-side participation is essential for achieving real-time energy balance in today's electricity grid. Demand-response contracts, where an electric utility company buys options from consumers to reduce their load in the future, are an important tool to increase demand-side participation. In this chapter, we consider the operational problem of optimally exercising the available contracts over the planning horizon such that the total cost to satisfy the demand is minimized. In particular, we consider the objective of minimizing the sum of the expected ℓ_β-norm of the load deviations from given thresholds and the contract execution costs over the planning horizon. For β=∞, this reduces to minimizing the expected peak load. The peak load provides a good proxy for the total cost of the utility, as spikes in electricity prices are observed only in peak-load periods. We present a data-driven, near-optimal algorithm for the contract execution problem. Our algorithm is a sample average approximation (SAA) based dynamic program over a multi-period planning horizon. We provide a sample complexity bound on the number of demand samples required to compute a (1+ε)-approximate policy for any ε>0. Our SAA algorithm is general: we show that it can be adapted to a broad class of demand models, including Markovian demands, and of objective functions.
For the special case where the demand in each period is i.i.d., we show that a static solution is optimal for the dynamic problem. We also conduct a numerical study of the performance of our SAA-based DP algorithm. Our numerical experiments show that we can achieve a (1+ε)-approximation with significantly fewer samples than the theoretical bounds imply. Moreover, the structure of the approximate policy shows that it can be well approximated by a simple affine function of the state. In Chapter 3, we study the NP-hard coflow scheduling problem and develop a polynomial-time approximation algorithm for the problem with constant approximation ratio. Communications in datacenter jobs (such as the shuffle operations in MapReduce applications) often involve many parallel flows, which may be processed simultaneously. This highly parallel structure presents new scheduling challenges in optimizing job-level performance objectives in data centers. Chowdhury and Stoica [13] introduced the coflow abstraction to capture these communication patterns, and recently Chowdhury et al. [15] developed effective heuristics to schedule coflows. In this chapter, we consider the problem of efficiently scheduling coflows so as to minimize the total weighted completion time, which has been shown to be strongly NP-hard [15]. Our main result is the first polynomial-time deterministic approximation algorithm for this problem, with an approximation ratio of 64/3, and a randomized version of the algorithm, with a ratio of (8 + 16√2)/3. Our results use techniques from both combinatorial scheduling and matching theory, and rely on a clever grouping of coflows. In Chapter 4, we carry out a comprehensive experimental analysis on a Facebook trace and extensive simulated instances to evaluate the practical performance of several algorithms for coflow scheduling, including the approximation algorithms developed in Chapter 3.
Our experiments suggest that simple algorithms provide effective approximations of the optimal, and that the performance of the approximation algorithm of Chapter 3 is relatively robust, near optimal, and always among the best compared with the other algorithms, in both the offline and online settings.
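The coflow objective generalizes classical total weighted completion time scheduling. As a hedged single-machine analogue (not the thesis's coflow algorithm), Smith's rule, which orders jobs by processing-time-to-weight ratio and is optimal on one machine, shows the objective being minimized; the job data below is made up:

```python
def smith_rule_schedule(jobs):
    """Single-machine analogue of the coflow objective: order jobs by
    processing-time/weight ratio (Smith's rule), which minimizes total
    weighted completion time sum_j w_j * C_j on one machine."""
    order = sorted(jobs, key=lambda j: j["p"] / j["w"])
    t, total = 0.0, 0.0
    for j in order:
        t += j["p"]            # completion time C_j of job j
        total += j["w"] * t    # accumulate w_j * C_j
    return order, total

jobs = [{"name": "a", "p": 3, "w": 1},
        {"name": "b", "p": 1, "w": 2},
        {"name": "c", "p": 2, "w": 2}]
```

In the coflow setting, each "job" is a whole set of parallel flows contending for many ports at once, which is why a simple exchange argument like Smith's rule no longer suffices and the thesis's grouping-based algorithm is needed.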
857

Optimization in Strategic Environments

Feigenbaum, Itai Izhak January 2016 (has links)
This work considers the problem faced by a decision maker (planner) trying to optimize over incomplete data. The missing data is privately held by agents whose objectives are different from the planner's, and who can falsely report it in order to advance their objectives. The goal is to design optimization mechanisms (algorithms) that achieve "good" results when agents' reports follow a game-theoretic equilibrium. In the first part of this work, the goal is to design mechanisms that provide a small worst-case approximation ratio (guarantee a large fraction of the optimal value in all instances) at equilibrium. The emphasis is on strategyproof mechanisms (where truthfulness is a dominant-strategy equilibrium) and on the approximation ratio at that equilibrium. Two problems are considered: variants of the knapsack and facility location problems. In the knapsack problem, items are privately owned by agents, who can hide items or report fake ones; each agent's utility equals the total value of their own items included in the knapsack, while the planner wishes to choose the items that maximize the sum of utilities. In the facility location problem, agents have private linear single-sinked/single-peaked preferences regarding the location of a facility on an interval, while the planner wishes to locate the facility in a way that maximizes one of several objectives. A variety of mechanisms and lower bounds are provided for these problems. The second part of this work explores the problem of reassigning students to schools. Students have privately known preferences over the schools. After an initial assignment is made, the students' preferences change, are reported again, and a reassignment must be obtained. The goal is to design a reassignment mechanism that incentivizes truthfulness, provides high student welfare, transfers relatively few students from their initial assignment, and respects student priorities at schools.
The class of mechanisms considered is permuted lottery deferred acceptance (PLDA) mechanisms, which is a natural class of mechanisms based on permuting the lottery numbers students initially draw to decide the initial assignment. Both theoretical and experimental evidence is provided to support the use of a PLDA mechanism called reversed lottery deferred acceptance (RLDA). The evidence suggests that under some conditions, all PLDA mechanisms generate roughly equal welfare, and that RLDA minimizes transfers among PLDA mechanisms.
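A minimal sketch of the deferred acceptance core on which PLDA mechanisms operate (PLDA re-permutes the lottery numbers before re-running DA; the students, schools, priorities, and lottery numbers below are all hypothetical):

```python
def deferred_acceptance(prefs, capacities, priority, lottery):
    """Student-proposing deferred acceptance; ties within a school's
    priority classes are broken by lottery numbers (lower is better)."""
    next_choice = {s: 0 for s in prefs}
    held = {c: [] for c in capacities}       # tentative accepts per school
    unmatched = list(prefs)
    while unmatched:
        s = unmatched.pop()
        if next_choice[s] >= len(prefs[s]):
            continue                         # student has exhausted their list
        c = prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        # Rank applicants by (priority class, lottery number); reject the worst
        # applicants beyond capacity, who then propose to their next choice.
        held[c].sort(key=lambda x: (priority[c][x], lottery[x]))
        while len(held[c]) > capacities[c]:
            unmatched.append(held[c].pop())
    return {c: sorted(ss) for c, ss in held.items()}

prefs = {"s1": ["A", "B"], "s2": ["A", "B"], "s3": ["B", "A"]}
capacities = {"A": 1, "B": 1}
priority = {c: {s: 0 for s in prefs} for c in capacities}  # one priority class
lottery = {"s1": 2, "s2": 1, "s3": 3}
assignment = deferred_acceptance(prefs, capacities, priority, lottery)
```

With these hypothetical numbers, s2's better lottery draw wins the contested seat at A; permuting the lottery (e.g. reversing it, as in RLDA) would change which students are displaced, which is precisely the lever the PLDA class studies.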
858

Clearinghouse Default Resources: Theory and Empirical Analysis

Cheng, Wan-Schwin Allen January 2017 (has links)
Clearinghouses insure trades. Acting as a central counterparty (CCP), clearinghouses consolidate financial exposures across multiple institutions, aiding the efficient management of counterparty credit risk. In this thesis, we study the decision problem faced by for-profit clearinghouses, focusing on primary economic incentives driving their determination of layers of loss-absorbing capital. The clearinghouse's loss-allocation mechanism, referred to as the default waterfall, governs the allocation and management of counterparty risk. This stock of loss-absorbing capital typically consists of initial margins, default funds, and the clearinghouse's contributed equity. We separate the overall decision problem into two distinct subproblems and study them individually. The first is the clearinghouse's choice of initial margin and clearing fee requirements, and the second involves its choice of resources further down the waterfall, namely the default funds and clearinghouse equity. We solve for the clearinghouse's equilibrium choices in both cases explicitly, and address the different economic roles they play in the clearinghouse's profit-maximization process. The models presented in this thesis show, without exception, that clearinghouse choices should depend not only on the riskiness of the cleared position but also on market and participants' characteristics such as default probabilities, fundamental value, and funding opportunity cost. Our results have important policy implications. For instance, we predict a counteracting force that dampens monetary easing enacted via low interest rate policies. When funding opportunity costs are low, our research shows that clearinghouses employ highly conservative margin and default funds, which tie up capital and credit. This is supported by the low interest rate environment following the financial crisis of 2007–08.
In addition to low productivity growth and return on capital, major banks have chosen to accumulate large cash piles on their balance sheets rather than increase lending. In terms of systemic risk, our empirical work, joint with the U.S. Commodity Futures Trading Commission (CFTC), points to the possibility of destabilizing loss and margin spirals: in the terminology of Brunnermeier and Pedersen (2009), we argue that a major clearinghouse's behavior is consistent with that of an uninformed financier and that common shocks to credit quality can lead to tightening margin constraints.
859

Non-Bayesian Inference and Prediction

Xiao, Di January 2017 (has links)
In this thesis, we first propose a coherent inference model that is obtained by distorting the prior density in Bayes' rule and replacing the likelihood with a so-called pseudo-likelihood. This model includes the existing non-Bayesian inference models as special cases and implies new models of base-rate neglect and conservatism. We prove a necessary and sufficient condition under which the coherent inference model is processing consistent, i.e., implies the same posterior density regardless of how the samples are grouped and processed retrospectively. We show that processing consistency does not imply Bayes' rule by proving a necessary and sufficient condition under which the coherent inference model can be obtained by applying Bayes' rule to a false stochastic model. We then propose a prediction model that combines a stochastic model with certain parameters and a processing-consistent, coherent inference model. We show that this prediction model is processing consistent (meaning the prediction of samples does not depend on how they are grouped and processed prospectively) if and only if it is Bayesian. Finally, we apply the new model of conservatism to a car selection problem, a consumption-based asset pricing model, and a regime-switching asset pricing model.
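A hedged sketch of one simple family of distorted-Bayes updates consistent with the abstract's description (power-distorting the prior and likelihood over a finite hypothesis space; the thesis's coherent inference model is more general than this):

```python
def distorted_posterior(prior, likelihood, a=1.0, b=1.0):
    """Posterior proportional to prior**a * likelihood**b over a finite
    hypothesis space.  a < 1 distorts the prior toward uniform (base-rate
    neglect); b < 1 under-weights the data (conservatism); a = b = 1
    recovers Bayes' rule."""
    weights = [p ** a * l ** b for p, l in zip(prior, likelihood)]
    z = sum(weights)
    return [w / z for w in weights]

prior = [0.9, 0.1]             # hypothetical base rates for two hypotheses
likelihood = [0.2, 0.8]        # likelihood of the observed sample under each
bayes = distorted_posterior(prior, likelihood)             # standard Bayes
neglect = distorted_posterior(prior, likelihood, a=0.0)    # full base-rate neglect
```

With a = 0 the base rates are ignored entirely and the posterior simply tracks the likelihood, the hallmark of base-rate neglect; whether such an update is processing consistent under sequential samples is exactly the question the thesis's condition answers.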
860

Dynamic Trading Strategies in the Presence of Market Frictions

Saglam, Mehmet January 2012 (has links)
This thesis studies the impact of various fundamental frictions in the microstructure of financial markets. Specific market frictions we consider are latency in high-frequency trading, transaction costs arising from price impact or commissions, unhedgeable inventory risks due to stochastic volatility, and time-varying liquidity costs. We explore the implications of each of these frictions in rigorous theoretical models from an investor's point of view and derive analytical expressions or efficient computational procedures for dynamic strategies. Specific methodologies in computing these policies include stochastic control theory, dynamic programming, and tools from applied probability and stochastic processes. In the first chapter, we describe a theoretical model for the quantitative valuation of latency and its impact on the optimal dynamic trading strategy. Our model measures the trading frictions created by the presence of latency by considering the optimal execution problem of a representative investor. Via a dynamic programming analysis, our model provides a closed-form expression for the cost of latency in terms of well-known parameters of the underlying asset. We implement our model by estimating the latency cost incurred by trading on a human time scale. Examining NYSE common stocks from 1995 to 2005 shows that the median latency cost across our sample more than tripled during this time period. In the second chapter, we provide a highly tractable dynamic trading policy for portfolio choice problems with return predictability and transaction costs. Our rebalancing rule is a linear function of the return-predicting factors and can be utilized in a wide spectrum of portfolio choice models with minimal assumptions. Linear rebalancing rules make it possible to compute exact and efficient formulations of portfolio choice models with linear constraints, proportional and nonlinear transaction costs, and a quadratic utility function on the terminal wealth.
We illustrate the implementation of the best linear rebalancing rule in the context of portfolio execution with positivity constraints in the presence of short-term predictability. We show that there exists a considerable performance gain in using linear rebalancing rules compared to static policies with shrinking horizon or a dynamic policy implied by the solution of the dynamic program without the constraints. Finally, in the last chapter, we propose a factor-based model that incorporates common factor shocks for the security returns. Under these realistic factor dynamics, we solve for the dynamic trading policy in the class of linear policies analytically. Our model can accommodate stochastic volatility and liquidity costs as a function of factor exposures. Calibrating our model with empirical data, we show that our trading policy achieves superior performance in the presence of common factor shocks.
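A minimal sketch of the linear rebalancing idea from the second chapter, where each period's trade is an affine function of the current return-predicting factors (the intercept and factor loadings below are hypothetical placeholders, not estimated values):

```python
def linear_rebalance_path(x0, factors, intercept, loadings):
    """Linear rebalancing rule: the trade at each period is an affine
    function of the return-predicting factors f_t,
    u_t = intercept + sum_k loadings[k] * f_t[k],
    and the position evolves as x_{t+1} = x_t + u_t."""
    x, path = x0, [x0]
    for f in factors:
        trade = intercept + sum(b * fk for b, fk in zip(loadings, f))
        x += trade
        path.append(x)
    return path
```

Because the policy is affine in the factors, expected utility and constraints become tractable functions of the intercept and loadings, which is what lets the chapter optimize over this class exactly under linear constraints and transaction costs.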
