71

On the Kidney Exchange Problem and Online Minimum Energy Scheduling

Herrera Humphries, Tulia January 2014 (has links)
The allocation and management of scarce resources are of central importance in the design of policies to improve social well-being. This dissertation consists of three essays; the first two deal with the problem of allocating kidneys, and the third with power management in computing devices. Kidney exchange programs are an attractive alternative for patients who need a kidney transplant and who have a willing, but medically incompatible, donor. A registry that keeps track of such patient-donor pairs can find matches through exchanges amongst such pairs. This results in a quicker transplant for the patients involved and, equally importantly, keeps such patients off the long wait list of patients without an intended donor. As of March 2014, there were at least 99,000 candidates waiting for a kidney transplant in the U.S. However, in 2013 only 16,893 transplants were conducted. This imbalance between supply and demand, among other factors, has driven the development of multiple kidney exchange programs in the U.S. and the subsequent development of matching mechanisms to run the programs. In the first essay we consider a matching problem arising in kidney exchanges between hospitals. Focusing on the case of two hospitals, we construct a strategy-proof matching mechanism that is guaranteed to return a matching that is at least 3/4 the size of a maximum cardinality matching. It is known that no better performance is possible if one focuses on mechanisms that return a maximal matching, so our mechanism is best possible within this natural class of mechanisms. For path-cycle graphs we construct a mechanism that returns a matching that is at least 4/5 the size of a maximum cardinality matching. This mechanism does not necessarily return a maximal matching. Finally, we construct a mechanism that is universally truthful on path-cycle graphs and whose performance is within 2/3 of optimal. Again, it is known that no better ratio is possible. In most of the existing literature, mechanisms are typically evaluated by their overall performance on a large exchange pool, based on which conclusions and recommendations are drawn. In our second essay, we consider a dynamic framework to evaluate extensively used kidney exchange mechanisms. We conduct a simulation-based study of a dynamically evolving exchange pool over nine years. Our results suggest that some of the features that are critical for a mechanism in the static setting have only a minor impact on its long-run performance when viewed in the dynamic setting. More importantly, features that are generally underestimated in the static setting, such as the pairs' arrival rates, turn out to be relevant when we look at a dynamically evolving exchange pool. In particular, we provide insights into the effect on waiting times and on the probability of receiving an offer of controllable features, such as the frequency at which matchings are run and the structures through which pairs can be matched (cycles or chains), as well as of inherent features such as the pairs' ABO-PRA characteristics, the availability of altruistic donors, and whether or not compatible pairs join the exchange. We evaluate the odds of receiving an offer and the expected time until an offer is received for each ABO-PRA type of pair in the model. Power management in computing devices aims to minimize the energy consumed to perform tasks while keeping performance at acceptable levels.
A widely used power management strategy is to transition devices and/or components to lower power-consumption states during inactivity periods. Transitions between power states consume energy; thus, depending on such costs, it may be advantageous to stay in the high power state during some inactivity periods. In our third essay we consider the problem of minimizing the total energy consumed by a two-power-state device processing jobs that are sent over time by a constrained adversary. Jobs can be preempted, but deadlines need to be met. In this problem, an algorithm must decide when to schedule the jobs, as well as a sequence of power states and the discrete time thresholds at which these states will be reached. We provide an online algorithm that minimizes energy consumption when the cost of a transition to the low power state is small enough; in this case, minimizing the energy consumption is equivalent to minimizing the total number of inactivity periods. We also provide an algorithm for the case in which it may be advantageous to stay in the high power state during some inactivity periods. In both cases we provide upper bounds on the competitive ratio of our algorithms, and lower bounds on the competitive ratio of all online algorithms.
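To illustrate the trade-off the third essay studies, here is a minimal sketch of the classic break-even heuristic for a two-power-state device: during an idle gap, stay in the high power state until the extra energy spent equals the transition cost, then power down. This is a standard textbook rule, not the dissertation's online scheduling algorithm, and the parameter names and values are assumptions of the example.

```python
# Hedged sketch: the classic break-even rule for a two-state device.
# Not the dissertation's algorithm; parameters are illustrative assumptions.

def energy_break_even(idle_gaps, p_high, p_low, transition_cost):
    """During each idle gap, stay in the high power state until the extra
    energy spent equals the transition cost, then switch to the low state."""
    threshold = transition_cost / (p_high - p_low)   # break-even idle time
    total = 0.0
    for gap in idle_gaps:
        if gap <= threshold:
            total += p_high * gap                    # never power down
        else:
            total += p_high * threshold              # wait until break-even...
            total += p_low * (gap - threshold)       # ...then stay in low power
            total += transition_cost                 # pay the transition cost
    return total

# Example: idle gaps in seconds, 2 W high state, 0.1 W low state, 3 J transition.
print(energy_break_even([0.5, 4.0, 10.0], p_high=2.0, p_low=0.1, transition_cost=3.0))
```

For a single idle gap this rule spends at most twice the energy of the offline optimum, which is the flavor of competitive-ratio guarantee the essay establishes for its richer scheduling setting.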
72

New Quantitative Approaches to Asset Selection and Portfolio Construction

Song, Irene January 2014 (has links)
Since the publication of Markowitz's landmark paper "Portfolio Selection" in 1952, portfolio construction has evolved into a disciplined and personalized process. In this process, security selection and portfolio optimization constitute key steps for making investment decisions across a collection of assets. The use of quantitative algorithms and models in these steps has become a widely accepted investment practice among modern investors. This dissertation is devoted to exploring and developing such quantitative algorithms and models. In the first part of the dissertation, we present two efficiency-based approaches to security selection: (i) a quantitative stock selection strategy based on operational efficiency and (ii) a quantitative currency selection strategy based on macroeconomic efficiency. In developing the efficiency-based stock selection strategy, we exploit a potential positive link between a firm's operational efficiency and its stock performance. By means of data envelopment analysis (DEA), a non-parametric approach to productive efficiency analysis, we quantify a firm's operational efficiency as a single score representing a consolidated measure of financial ratios. The financial ratios integrated into an efficiency score are selected on the basis of their predictive power for the firm's future operating performance, using a LASSO (least absolute shrinkage and selection operator)-based variable selection method. The computed efficiency scores are used directly to identify stocks worthy of investment. The basic idea behind the proposed stock selection strategy is that, as efficient firms are presumed to be more profitable than inefficient firms, higher returns are expected from their stocks. This idea is tested in a contextual and empirical setting provided by the U.S. Information Technology (IT) sector. Our empirical findings confirm that there is a strong positive relationship between a firm's operational efficiency and its stock performance, and further establish that a firm's operational efficiency has significant explanatory power in describing the cross-sectional variation of stock returns. We moreover offer an economic argument that posits operational efficiency as a systematic risk factor and the most likely source of the excess returns from investing in efficient firms. The efficiency-based currency selection strategy is developed in a similar way; i.e., currencies are selected based on a certain efficiency metric. The exchange rate has long been regarded as a reliable barometer of the state of an economy and a measure of the international competitiveness of countries. While strong and appreciating currencies correspond to productive and efficient economies, weak and depreciating currencies correspond to slowing and less efficient economies. This study hence develops a currency selection strategy that utilizes the macroeconomic efficiency of countries, measured using a widely accepted relationship between exchange rates and macroeconomic variables. To quantify the macroeconomic efficiency of countries, we first establish a multilateral framework using effective exchange rates and trade-weighted macroeconomic variables. This framework is used to transform three representative bilateral structural exchange rate models (the flexible price monetary model, the sticky price monetary model, and the sticky price asset model) into their multilateral counterparts.
We then translate these multilateral models into DEA models, which yield an efficiency score representing an aggregate measure of macroeconomic variables. Consistent with the stock selection strategy, the resulting efficiency scores are used to identify currencies worthy of investment. We evaluate our currency selection strategy against appropriate market and strategic benchmarks using historical data. Our empirical results confirm that currencies of efficient countries perform more strongly than those of inefficient countries, and further suggest that, compared to exchange rate models based on standard regression analysis, our DEA-based models improve the predictability of the future performance of currencies. In the first part of the dissertation, we also develop a data-driven variable selection method for DEA based on the group LASSO. This method extends the LASSO-based variable selection method used to specify the DEA model for estimating a firm's operational efficiency. In our proposed method, we derive a special constrained version of the group LASSO with a loss function suited to variable selection in DEA models, and solve it with a new tailored algorithm based on the alternating direction method of multipliers (ADMM). We conduct a thorough evaluation of the proposed method against two variable selection methods widely used in the DEA literature, the efficiency contribution measure (ECM) method and the regression-based (RB) test, using Monte Carlo simulations. The simulation results show that our method performs favorably compared with these benchmarks. In the second part of the dissertation, we propose a generalized risk budgeting (GRB) approach to portfolio construction. In a GRB portfolio, assets are grouped into possibly overlapping subsets, and each subset is allocated a risk budget that has been pre-specified by the investor. Minimum variance, risk parity and risk budgeting portfolios are all special instances of a GRB portfolio. The GRB portfolio optimization problem is to find a GRB portfolio with an optimal risk-return profile, where risk is measured using any positively homogeneous risk measure. When the subsets form a partition, the assets all have identical returns, and we restrict ourselves to long-only portfolios, the GRB problem can in fact be solved as a convex optimization problem. In general, however, the GRB problem is a constrained non-convex problem, for which we propose two solution approaches. The first approach uses a semidefinite programming (SDP) relaxation to obtain an (upper) bound on the optimal objective function value. In the second approach we develop a numerical algorithm that integrates augmented Lagrangian and Markov chain Monte Carlo (MCMC) methods in order to find a point in the vicinity of a very good local optimum. This point is then supplied to a standard non-linear optimization routine with the goal of finding this local optimum. It should be emphasized that the merit of this second approach is its generic nature: in particular, it provides a starting-point strategy for any non-linear optimization algorithm.
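As a concrete illustration of the DEA building block used throughout the first part, the sketch below computes input-oriented CCR efficiency scores by solving one linear program per firm. It is a minimal textbook formulation, not the dissertation's LASSO- or group-LASSO-augmented DEA models, and the inputs and outputs in the toy example are made up.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_scores(X, Y):
    """Input-oriented CCR efficiency scores (multiplier form).
    X: (n_firms, n_inputs), Y: (n_firms, n_outputs). Returns scores in (0, 1]."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.concatenate([-Y[o], np.zeros(m)])                    # maximize u.y_o
        A_ub = np.hstack([Y, -X])                                   # u.y_j - v.x_j <= 0
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)   # v.x_o = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (s + m))
        scores.append(-res.fun)
    return np.array(scores)

# Toy example: 4 firms, 2 inputs (e.g., assets, expenses), 1 output (e.g., revenue).
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
print(dea_ccr_scores(X, Y))
```

Each firm's score is the largest ratio of weighted outputs to weighted inputs it can claim while keeping every firm's ratio at or below one; a score of 1 marks a firm on the efficient frontier.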
73

Graph Structure and Coloring

Plumettaz, Matthieu January 2014 (has links)
We denote by G=(V,E) a graph with vertex set V and edge set E. A graph G is claw-free if no vertex of G has three pairwise nonadjacent neighbours. Claw-free graphs are a natural generalization of line graphs. This thesis answers several questions about claw-free graphs and line graphs. In 1988, Chvátal and Sbihi proved a decomposition theorem for claw-free perfect graphs. They showed that claw-free perfect graphs either have a clique-cutset or come from two basic classes of graphs called elementary and peculiar graphs. In 1999, Maffray and Reed successfully described how elementary graphs can be built using line graphs of bipartite graphs and local augmentation. However, gluing two claw-free perfect graphs along a clique does not necessarily produce a claw-free graph. The first result of this thesis is a complete structural description of claw-free perfect graphs. We also give a construction for all perfect circular interval graphs. This is joint work with Chudnovsky. Erdős and Lovász conjectured in 1968 that for every graph G and all integers s,t ≥ 2 such that s+t-1 = χ(G) > ω(G), there exists a partition (S,T) of the vertex set of G such that ω(G|S) ≥ s and χ(G|T) ≥ t. This conjecture is known in the graph theory community as the Erdős-Lovász Tihany Conjecture. For general graphs, the only settled cases of the conjecture are when s and t are small. Recently, the conjecture was proved for a few special classes of graphs: graphs with stability number 2, line graphs and quasi-line graphs. The second part of this thesis considers the conjecture for claw-free graphs and presents some progress on it. This is joint work with Chudnovsky and Fradkin. Reed's ω, Δ, χ conjecture proposes that every graph satisfies χ ≤ ⌈(Δ+1+ω)/2⌉; it is known to hold for all claw-free graphs. The third part of this thesis considers a local strengthening of this conjecture. We prove the local strengthening for line graphs, then note that previous results immediately tell us that the local strengthening holds for all quasi-line graphs. Our proofs lead to polynomial-time algorithms for constructing colorings that achieve our bounds: the complexities are O(n²) for line graphs and O(n³m²) for quasi-line graphs. For line graphs, this is faster than the best known algorithm for constructing a coloring that achieves the bound of Reed's original conjecture. This is joint work with Chudnovsky, King and Seymour.
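For intuition about the quantity in Reed's conjecture, the sketch below computes the exact chromatic number (by brute force, so only for very small graphs), the clique number, and the maximum degree, and checks χ ≤ ⌈(Δ+1+ω)/2⌉ on a few small random graphs. This is only an illustration of the inequality; the graph sizes, edge probability and use of networkx are assumptions of the example, not anything from the thesis.

```python
import itertools
import networkx as nx

def chromatic_number(G):
    """Exact chromatic number by brute force (only for very small graphs)."""
    nodes = list(G.nodes)
    for k in range(1, len(nodes) + 1):
        for colors in itertools.product(range(k), repeat=len(nodes)):
            assign = dict(zip(nodes, colors))
            if all(assign[u] != assign[v] for u, v in G.edges):
                return k

def reeds_bound(G):
    """Reed's conjectured bound ceil((Delta + 1 + omega) / 2)."""
    delta = max(d for _, d in G.degree)
    omega = max(len(c) for c in nx.find_cliques(G))   # clique number
    return -(-(delta + 1 + omega) // 2)               # ceiling division

# Check the bound on a few small random graphs (it is a theorem for line graphs
# and claw-free graphs, and conjectured in general).
for seed in range(3):
    G = nx.erdos_renyi_graph(7, 0.5, seed=seed)
    print(chromatic_number(G), "<=", reeds_bound(G))
```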
74

Data-driven Decisions in Service Systems

Kim, Song-Hee January 2014 (has links)
This thesis makes contributions to providing data-driven (or evidence-based) decision support for service systems, especially hospitals. Three selected topics are presented. First, we discuss how Little's Law, which relates averages over a time interval to expected values of stationary distributions, can be applied to service-system data collected over a finite time interval. To make inferences based on the indirect estimator of the average waiting time, we propose methods for estimating confidence intervals and for adjusting estimates to reduce bias. We show that our new methods are effective using simulations and data from a US bank call center. Second, we address important issues that need to be taken into account when testing whether real arrival data can be modeled by nonhomogeneous Poisson processes (NHPPs). We apply our method to data from a US bank call center and a hospital emergency department and demonstrate that their arrivals come from NHPPs. Lastly, we discuss an approach to standardizing the Intensive Care Unit admission process, which currently lacks well-defined criteria. Using data from nearly 200,000 hospitalizations, we discuss how to quantify the impact of Intensive Care Unit admission on an individual patient's clinical outcomes. We then use this quantified impact and a stylized model to discuss optimal admission policies. We use simulation to compare the performance of our proposed optimal policies to the current admission policy, and show that the gain can be significant.
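The first topic rests on the indirect estimator that Little's Law suggests for finite data: estimate the arrival rate and the time-average number in system over the observation window, and take their ratio as the average time each customer spends in the system. The sketch below is a minimal version of that estimator on made-up timestamps; it ignores the edge effects whose bias corrections and confidence intervals are the essay's actual contribution.

```python
import numpy as np

def littles_law_indirect_wait(arrivals, departures, t_start, t_end):
    """Indirect Little's-Law estimator over [t_start, t_end):
    W_hat = L_bar / lambda_hat, where L_bar is the time-average number in
    system and lambda_hat the observed arrival rate. Edge effects are ignored."""
    arrivals = np.asarray(arrivals)
    departures = np.asarray(departures)
    horizon = t_end - t_start
    lam_hat = np.sum((arrivals >= t_start) & (arrivals < t_end)) / horizon
    # Time-average number in system, approximated on a fine time grid.
    grid = np.linspace(t_start, t_end, 10_000)
    in_system = ((arrivals[None, :] <= grid[:, None]) &
                 (departures[None, :] > grid[:, None])).sum(axis=1)
    L_bar = in_system.mean()
    return L_bar / lam_hat

# Toy data: customer i arrives at arrivals[i] and leaves at departures[i].
arrivals = np.array([0.5, 1.0, 2.2, 3.1, 4.0, 5.5, 6.3, 7.8])
departures = arrivals + np.array([1.2, 0.8, 1.5, 0.9, 2.0, 1.1, 0.7, 1.4])
print(littles_law_indirect_wait(arrivals, departures, 0.0, 10.0))
```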
75

Network Resource Allocation Under Fairness Constraints

Chandramouli, Shyam Sundar January 2014 (has links)
This work considers the basic problem of allocating resources among a group of agents in a network when the agents are equipped with single-peaked preferences over their assignments. This generalizes the classical claims problem, which concerns the division of an estate's liquidation value when the total claim on it exceeds this value. The claims problem also models the problem of rationing a single commodity, the problem of dividing the cost of a public project among the people it serves, and the problem of apportioning taxes. A key consideration in this classical literature is equity: the good (or the "bad," in the case of apportioning taxes or costs) should be distributed as fairly as possible. The main contribution of this dissertation is a comprehensive treatment of a generalization of this classical rationing problem to a network setting. Bochet et al. recently introduced a generalization of the classical rationing problem to the network setting. For this problem they designed an allocation mechanism, the egalitarian mechanism, that is Pareto optimal, envy-free and strategyproof. In chapter 2, it is shown that the egalitarian mechanism is in fact group strategyproof, implying that no coalition of agents can collectively misreport their information to obtain a (weakly) better allocation for themselves. Further, a complete characterization of the set of all group strategyproof mechanisms is obtained. The egalitarian mechanism satisfies many attractive properties, but fails consistency, an important property in the literature on rationing problems. It is shown in chapter 3 that no Pareto optimal mechanism can be both envy-free and consistent. Chapter 3 is devoted to the edge-fair mechanism, which is Pareto optimal, group strategyproof, and consistent. In a related model where the agents are located on the edges of the graph rather than the nodes, the edge-fair rule is shown to be envy-free, group strategyproof, and consistent. Chapter 4 extends the egalitarian mechanism to the problem of finding an optimal exchange in non-bipartite networks. The results vary depending on whether the commodity being exchanged is divisible or indivisible. For the latter case, it is shown that no efficient mechanism can be strategyproof, and that the egalitarian mechanism is Pareto optimal and envy-free. Chapter 5 generalizes recent work on finding stable and balanced allocations in graphs with unit capacities and unit weights to more general networks. The existence of a stable and balanced allocation is established by a transformation to an equivalent unit-capacity network.
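In the simplest single-resource case, the egalitarian idea reduces to the classical uniform rule for rationing one divisible good among agents with single-peaked preferences, which the network mechanisms above generalize subject to capacity constraints. The sketch below is a minimal implementation of that classical rule only; the bisection approach and the toy numbers are assumptions of the example, not the dissertation's network algorithms.

```python
import numpy as np

def uniform_rule(peaks, amount):
    """Classical uniform rule: each agent receives min(peak, lam) under excess
    demand (or max(peak, lam) under excess supply), with the common level lam
    chosen so that the allocations sum to the available amount."""
    peaks = np.asarray(peaks, dtype=float)
    lo, hi = 0.0, max(peaks.max(), amount)
    excess_demand = peaks.sum() >= amount
    for _ in range(100):                      # bisection on the common cap/floor
        lam = 0.5 * (lo + hi)
        total = np.minimum(peaks, lam).sum() if excess_demand \
            else np.maximum(peaks, lam).sum()
        if total < amount:
            lo = lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    return np.minimum(peaks, lam) if excess_demand else np.maximum(peaks, lam)

# Excess demand: peaks sum to 12 but only 9 units are available.
print(uniform_rule([2.0, 4.0, 6.0], 9.0))   # -> approximately [2.0, 3.5, 3.5]
```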
76

Essays in Financial Engineering

Ahn, Andrew January 2014 (has links)
This thesis consists of three essays in financial engineering. In particular, we study problems in option pricing, stochastic control and risk management. In the first essay, we develop an accurate and efficient pricing approach for options on leveraged ETFs (LETFs). Our approach allows us to price these options quickly and in a manner that is consistent with the underlying ETF price dynamics. The numerical results also demonstrate that LETF option prices exhibit model dependency, particularly in high-volatility environments. In the second essay, we extend a linear programming (LP) technique for approximately solving high-dimensional control problems in a diffusion setting. The original LP technique applies to finite-horizon problems with an exponentially distributed horizon T; we extend the approach to fixed-horizon problems. We then apply these techniques to dynamic portfolio optimization problems and evaluate their performance using convex duality methods. The numerical results suggest that the LP approach is a very promising one for tackling high-dimensional control problems. In the final essay, we propose a factor model-based approach for performing scenario analysis in a risk management context. We argue that our approach addresses some important drawbacks of standard scenario analysis and, in a preliminary numerical investigation with option portfolios, we show that it produces superior results as well.
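To see why LETF options are sensitive to the path of the underlying, the sketch below prices a call on a daily-rebalanced 2x leveraged ETF by plain Monte Carlo under geometric Brownian motion for the underlying, ignoring fees and borrowing costs. This brute-force illustration of the product is not the essay's pricing approach, and every parameter value is an assumption of the example.

```python
import numpy as np

def letf_call_mc(S0=100.0, K=100.0, beta=2.0, sigma=0.25, r=0.02,
                 T=0.5, n_days=126, n_paths=100_000, seed=0):
    """Monte Carlo price of a call on a daily-rebalanced leveraged ETF whose
    underlying follows geometric Brownian motion. The LETF compounds beta
    times the underlying's daily return (fees and financing costs ignored)."""
    rng = np.random.default_rng(seed)
    dt = T / n_days
    z = rng.standard_normal((n_paths, n_days))
    daily_ret = np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z) - 1.0
    letf = S0 * np.prod(1.0 + beta * daily_ret, axis=1)   # daily rebalancing
    payoff = np.maximum(letf - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

print(letf_call_mc())
```

Because the LETF compounds beta times the daily return, its terminal distribution depends on realized volatility, which is the model-dependency channel the essay highlights.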
77

Stochastic Approximation Algorithms in the Estimation of Quasi-Stationary Distribution of Finite and General State Space Markov Chains

Zheng, Shuheng January 2014 (has links)
This thesis studies stochastic approximation algorithms for estimating the quasi-stationary distribution of Markov chains. Existing numerical linear algebra methods and probabilistic methods can be computationally demanding and intractable in large state spaces. We take our motivation from a heuristic described in the physics literature and use the stochastic approximation framework to analyze and extend it. The thesis begins by looking at the finite-dimensional setting. The finite-dimensional quasi-stationary estimation algorithm was proposed in the physics literature by [#latestoliveira, #oliveiradickman1, #dickman]; however, no proof was given there, and it was not recognized as a stochastic approximation algorithm. This and related schemes were analyzed in the context of urn problems, and the consistency of the estimator is shown there [#aldous1988two, #pemantle, #athreya]. The rate of convergence is studied by [#athreya] in special cases only. The first chapter provides a different proof of the algorithm's consistency and establishes a rate of convergence in more generality than [#athreya]. It is discovered that the rate of convergence is fast only when a certain restrictive eigenvalue condition is satisfied. Using the tool of iterate averaging, the algorithm can be modified so that the eigenvalue condition is eliminated. The thesis then moves on to the general state space discrete-time Markov chain setting. In this setting, the stochastic approximation framework does not have a strong theory in the current literature, so several of the convergence results have to be adapted because the iterates of our algorithm are measure-valued. The chapter formulates the quasi-stationary estimation algorithm in this setting. Then, we extend the ODE method of [#kushner2003stochastic] and prove the consistency of the algorithm. Through the proof, several non-restrictive conditions required for convergence of the algorithm are identified. Finally, the thesis tests the algorithm by running numerical experiments. The examples are designed to test the algorithm in various edge cases. The algorithm is also empirically compared against the Fleming-Viot method.
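For a finite chain, the physics heuristic the thesis analyzes can be stated in a few lines: run the chain, and whenever it hits the absorbing state, restart it from a state drawn from the empirical occupation measure accumulated so far. The sketch below is a crude, unanalyzed version of that scheme on a three-state toy chain; the thesis's contribution, the stochastic approximation analysis (consistency, rates, iterate averaging, general state spaces), is not reflected here.

```python
import numpy as np

def qsd_estimate(P, absorbing, n_steps=200_000, seed=0):
    """Crude estimate of the quasi-stationary distribution of a finite
    absorbing Markov chain: run the chain and, upon absorption, restart from
    a state drawn from the current empirical occupation measure."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    transient = [i for i in range(n) if i != absorbing]
    counts = np.ones(n)            # small uniform prior over transient states
    counts[absorbing] = 0.0
    x = transient[0]
    for _ in range(n_steps):
        x = rng.choice(n, p=P[x])
        if x == absorbing:         # restart from the empirical occupation measure
            x = rng.choice(n, p=counts / counts.sum())
        counts[x] += 1.0
    return counts / counts.sum()

# Three-state example: state 2 is absorbing.
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.6, 0.1],
              [0.0, 0.0, 1.0]])
print(qsd_estimate(P, absorbing=2))
```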
78

The Theory of Systemic Risk

Chen, Chen January 2014 (has links)
Systemic risk is an issue of great concern in modern financial markets as well as, more broadly, in the management of complex business and engineering systems. It refers to the risk of collapse of an entire complex system as a result of the actions taken by the individual component entities or agents that comprise the system. We investigate the topic of systemic risk from the perspectives of measurement, structural sources, and risk factors. In particular, we propose an axiomatic framework for the measurement and management of systemic risk based on the simultaneous analysis of outcomes across agents in the system and over scenarios of nature. Our framework defines a broad class of systemic risk measures that accommodate a rich set of regulatory preferences. This general class of systemic risk measures captures many recently proposed specific measures of systemic risk as special cases, and highlights their implicit assumptions. Moreover, the systemic risk measures that satisfy our conditions yield decentralized decompositions, i.e., the systemic risk can be decomposed into risk due to individual agents. Furthermore, one can associate with each agent a shadow price for systemic risk that correctly accounts for the externalities of the agent's individual decision-making on the entire system. We also provide a structural model for a financial network consisting of a set of firms holding common assets. In the model, endogenous asset prices are captured by the market clearing condition when the economy is in equilibrium. The key ingredients of the financial market captured in this model include the general portfolio choice flexibility of firms given posted asset prices and economic states, and the mark-to-market wealth of firms. Price sensitivity can then be analyzed, and we characterize the key features of financial holding networks that minimize systemic risk as a function of overall leverage. Finally, we propose a framework to estimate risk measures based on risk factors. By introducing a form of factor-separable risk measures, the acceptance set of the original risk measure is connected to the acceptance sets of the factor-separable risk measures. We demonstrate that tight bounds for factor-separable coherent risk measures can be explicitly constructed.
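One simple member of the "aggregate outcomes across agents, then apply a risk measure over scenarios" family is a CVaR of total losses, with an Euler-style attribution of the total back to agents. The sketch below, on simulated losses, is only meant to make the decentralized-decomposition idea concrete; the aggregation function, confidence level, and loss model are assumptions of the example rather than the thesis's specification.

```python
import numpy as np

def systemic_cvar(losses, alpha=0.95):
    """Sum losses across agents in each scenario, take CVaR at level alpha,
    and attribute the total to agents by averaging their losses over the tail
    scenarios (the per-agent pieces sum back to the systemic figure).
    losses: (n_scenarios, n_agents) array, positive = loss."""
    total = losses.sum(axis=1)                       # aggregate across agents
    var = np.quantile(total, alpha)
    tail = total >= var                              # worst (1 - alpha) scenarios
    systemic = total[tail].mean()                    # CVaR of aggregate loss
    per_agent = losses[tail].mean(axis=0)            # decomposition by agent
    return systemic, per_agent

rng = np.random.default_rng(1)
losses = rng.normal(size=(10_000, 3)) @ np.diag([1.0, 2.0, 0.5])  # 3 agents
sys_risk, contrib = systemic_cvar(losses)
print(sys_risk, contrib, contrib.sum())   # contributions sum to the systemic risk
```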
79

Essays on Inventory Management and Conjoint Analysis

Chen, Yupeng January 2015 (has links)
With recent theoretical and algorithmic advancements, modern optimization methodologies have seen a substantial expansion of modeling power and are being applied to solve challenging problems in impressively diverse areas. This dissertation aims to extend the modeling frontier of optimization methodologies in two exciting fields: inventory management and conjoint analysis. Although the three essays concern distinct applications using different optimization methodologies, they share a unifying theme, which is to develop intuitive models using advanced optimization techniques to solve problems of practical relevance. The first essay (Chapter 2) applies robust optimization to solve a single-installation inventory model with non-stationary uncertain demand. A classical problem in operations research, this inventory management model becomes very challenging to analyze when lost-sales dynamics, non-zero fixed ordering costs, and positive lead times are introduced. In this essay, we propose a robust cycle-based control policy, built on an innovative decomposition idea, to solve a family of variants of this model. The policy is simple, flexible and easily implementable, and numerical experiments suggest that it has very promising empirical performance. The policy can be used both when excess demand is backlogged and when it is lost, with non-zero fixed ordering cost, and also when the lead time is non-zero. The policy decisions are computed by solving a collection of linear programs even when there is a positive fixed ordering cost. The policy also extends in a very simple manner to the joint pricing and inventory control problem. The second essay (Chapter 3) applies sparse machine learning to model multimodal continuous heterogeneity in conjoint analysis. Consumers' heterogeneous preferences can often be represented using a multimodal continuous heterogeneity (MCH) distribution. One interpretation of MCH is that the consumer population consists of a few distinct segments, each of which contains a heterogeneous sub-population. Modeling MCH raises considerable challenges, as both across-segment and within-segment heterogeneity need to be accounted for. In this essay, we propose an innovative sparse learning approach for modeling MCH and apply it to conjoint analysis, where adequate modeling of consumer heterogeneity is critical. The sparse learning approach models MCH via a two-stage divide-and-conquer framework, in which we first decompose the consumer population by recovering a set of candidate segmentations using structured sparsity modeling, and then use each candidate segmentation to develop a set of individual-level representations of MCH. We select the optimal individual-level representation of MCH and the corresponding optimal candidate segmentation using cross-validation. Two notable features of our approach are that it accommodates both across-segment and within-segment heterogeneity and that it endogenously imposes an adequate amount of shrinkage to recover the individual-level partworths. We empirically validate the performance of the sparse learning approach using extensive simulation experiments and two empirical conjoint data sets. The third essay (Chapter 4) applies dynamic discrete choice models to investigate the impact of return policies on consumers' product purchase and return behavior. Return policies are ubiquitous in the marketplace, allowing consumers to use and evaluate a product before fully committing to the purchase.
Despite the clear practical relevance of return policies, few studies have provided empirical assessments of how consumers' purchase and return decisions respond to the return policies facing them. In this essay, we propose to model consumers' purchase and return decisions using a dynamic discrete choice model with forward-looking consumers and Bayesian learning. More specifically, we postulate that consumers' purchase and return decisions are optimal solutions to an underlying dynamic expected utility maximization problem in which consumers learn their true valuations of products through usage in a Bayesian manner and make purchase and return decisions to maximize their expected present value of utility. Return policies affect consumers' purchase and return decisions by entering this dynamic expected utility maximization problem as constraints. Our proposed model provides a behaviorally plausible approach to examining the impact of return policies on consumers' purchase and return behavior.
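To make the first essay's inventory setting concrete, the sketch below simulates the single-installation system it studies (lost sales, a fixed ordering cost, positive lead time) under a plain (s, S) policy. The (s, S) rule and all cost parameters are assumptions of the example used only as a baseline; the essay's robust cycle-based policy, computed from linear programs, is not implemented here.

```python
import numpy as np

def simulate_sS_lost_sales(demand, s=5, S=20, lead_time=2, K=10.0, h=1.0, p=4.0):
    """Simulate a lost-sales inventory system with fixed ordering cost K,
    holding cost h, lost-sales penalty p, and lead_time >= 1 periods,
    controlled by an (s, S) order-up-to policy."""
    on_hand, cost = S, 0.0
    pipeline = [0] * lead_time            # orders placed but not yet delivered
    for d in demand:
        on_hand += pipeline.pop(0)        # receive the order placed lead_time ago
        sales = min(on_hand, d)
        lost = d - sales
        on_hand -= sales
        cost += h * on_hand + p * lost    # holding cost plus lost-sales penalty
        inv_position = on_hand + sum(pipeline)
        if inv_position <= s:             # order up to S and pay the fixed cost
            pipeline.append(S - inv_position)
            cost += K
        else:
            pipeline.append(0)
    return cost

rng = np.random.default_rng(0)
print(simulate_sS_lost_sales(rng.poisson(4, size=365)))
```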
80

Perfect Simulation and Deployment Strategies for Detection

Wallwater, Aya January 2015 (has links)
This dissertation contains two parts. The first part provides the first algorithm that, under minimal assumptions, allows one to simulate the stationary waiting-time sequence of a single-server queue backwards in time, jointly with the input processes of the queue (inter-arrival and service times). The single-server queue is useful in applications of DCFTP (Dominated Coupling From The Past), which is a well-known protocol for unbiased simulation from steady-state distributions. Our algorithm terminates in finite time assuming only finite means of the inter-arrival and service times. In order to simulate the single-server queue in stationarity until the first idle period with finite expected termination time, we require the existence of finite variances. This requirement is also necessary for such an idle time (which is a natural coalescence time in DCFTP applications) to have finite mean. Thus, in this sense, our algorithm is applicable under minimal assumptions. The second part studies the behavior of diffusion processes in a random environment. We consider an adversary that moves in a given domain, and our goal is to produce an optimal strategy to detect and neutralize it by a given deadline. We assume that the target's dynamics follow a diffusion process whose parameters are informed by available intelligence information. We dedicate one chapter to the rigorous formulation of the detection problem, an introduction of several frameworks that can be considered when applying our methods, and a discussion of the challenges of finding the analytical optimal solution. Then, in the following chapter, we present our main result, a large deviations characterization of the adversary's survival probability under a given strategy. This result later gives rise to asymptotically efficient Monte Carlo algorithms.
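The object the first part simulates is the stationary waiting-time sequence of the single-server FIFO queue, which forward in time obeys the Lindley recursion W_{n+1} = max(0, W_n + S_n - A_{n+1}). The sketch below runs that forward recursion on made-up exponential inputs purely as a point of reference; the dissertation's contribution is simulating the stationary sequence backwards in time without bias, which this sketch does not do.

```python
import numpy as np

def lindley_waiting_times(interarrival, service):
    """Forward Lindley recursion for the single-server FIFO queue:
    W[n+1] = max(0, W[n] + service[n] - interarrival[n+1])."""
    W = np.zeros(len(service))
    for n in range(len(service) - 1):
        W[n + 1] = max(0.0, W[n] + service[n] - interarrival[n + 1])
    return W

# Illustrative M/M/1 inputs: arrival rate 1, service rate 1.25 (utilization 0.8).
rng = np.random.default_rng(0)
A = rng.exponential(1.0, size=200_000)    # inter-arrival times
S = rng.exponential(0.8, size=200_000)    # service times
W = lindley_waiting_times(A, S)
print(W.mean())   # should be near the M/M/1 mean wait rho/(mu - lambda) = 3.2
```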
