Distributionally Robust Performance Analysis: Data, Dependence and Extremes

He, Fei January 2018 (has links)
This dissertation focuses on distributionally robust performance analysis, which is an area of applied probability whose aim is to quantify the impact of model errors. Stochastic models are built to describe phenomena of interest with the intent of gaining insights or making informed decisions. Typically, however, the fidelity of these models (i.e. how closely they describe the underlying reality) may be compromised due to either the lack of information available or tractability considerations. The goal of distributionally robust performance analysis is then to quantify, and potentially mitigate, the impact of errors or model misspecifications. As such, distributionally robust performance analysis affects virtually any area in which stochastic modelling is used for analysis or decision making. This dissertation studies various aspects of distributionally robust performance analysis. For example, we are concerned with quantifying the impact of model error in tail estimation using extreme value theory. We are also concerned with the impact of the dependence structure in risk analysis when the marginal distributions of risk factors are known. In addition, we are interested in recently discovered connections to machine learning and other statistical estimators which are based on distributionally robust optimization. The first problem that we consider consists of studying the impact of model specification in the context of extreme quantiles and tail probabilities. There is a rich statistical theory that allows one to extrapolate tail behavior based on limited information. This body of theory is known as extreme value theory and it has been successfully applied to a wide range of settings, including building physical infrastructure to withstand extreme environmental events and guiding the capital requirements of insurance companies to ensure their financial solvency. Not surprisingly, attempting to extrapolate out into the tail of a distribution from limited observations requires imposing assumptions which are impossible to verify. The assumptions imposed in extreme value theory imply that a parametric family of models (known as generalized extreme value distributions) can be used to perform tail estimation. Because such assumptions are so difficult (or impossible) to verify, we use distributionally robust optimization to enhance extreme value statistical analysis. Our approach results in a procedure which can be easily applied in conjunction with standard extreme value analysis, and we show that our estimators enjoy correct coverage even in settings in which the assumptions imposed by extreme value theory fail to hold. In addition to extreme value estimation, which is associated with risk analysis via extreme events, another feature which often plays a role in risk analysis is the dependence structure among risk factors. In the second chapter we study the question of evaluating the worst-case expected cost involving two sources of uncertainty, each of them with a specific marginal probability distribution. The worst-case expectation is optimized over all joint probability distributions which are consistent with the marginal distributions specified for each source of uncertainty. So, our formulation allows us to capture the impact of the dependence structure of the risk factors. This formulation is equivalent to the so-called Monge-Kantorovich problem studied in optimal transport theory, whose theoretical properties have been studied extensively in the literature.
However, rates of convergence of computational algorithms for this problem have been studied only recently. We show that if one of the random variables takes finitely many values, a direct Monte Carlo approach allows us to evaluate such a worst-case expectation with $O(n^{-1/2})$ convergence rate as the number of Monte Carlo samples, $n$, increases to infinity. Next, we continue our investigation of worst-case expectations in the context of multiple risk factors, not only two of them, assuming that their marginal probability distributions are fixed. This problem does not fit the mold of standard optimal transport (or Monge-Kantorovich) problems. We consider, however, cost functions which are separable in the sense of being a sum of functions which depend on adjacent pairs of risk factors (think of the factors as indexed by time). In this setting, we are able to reduce the problem to the study of several separate Monge-Kantorovich problems. Moreover, we explain how we can even include martingale constraints, which are often natural to consider in settings such as financial applications. While in the previous chapters we focused on the impact of tail modeling or dependence, in the later parts of the dissertation we take a broader view by studying decisions which are made based on empirical observations. So, we focus on so-called distributionally robust optimization formulations. We use optimal transport theory to model the degree of distributional uncertainty or model misspecification. Distributionally robust optimization based on optimal transport has been a very active research topic in recent years; our contribution consists of studying how to specify the optimal transport metric in a data-driven way. We explain our procedure in the context of classification, which is of substantial importance in machine learning applications.
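As a concrete illustration of the worst-case expectation over couplings discussed in this abstract, the following sketch handles the case where one marginal has finite support by solving a small transportation linear program, with Monte Carlo samples standing in for the continuous marginal. The distributions, the cost function, and the sample size below are hypothetical choices for illustration, not taken from the dissertation.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical setup: X is a continuous risk factor approximated by Monte Carlo samples,
# Y takes three values with known probabilities; the cost function is illustrative.
n = 100
x = rng.standard_normal(n)                 # empirical marginal of X, each sample gets weight 1/n
y_vals = np.array([-1.0, 0.0, 2.0])        # finite support of Y
q = np.array([0.3, 0.5, 0.2])              # marginal probabilities of Y

C = (x[:, None] - y_vals[None, :]) ** 2    # n x 3 matrix of costs c(x_i, y_j)
m = len(y_vals)

# Coupling variables pi_{ij} >= 0 with row sums 1/n and column sums q (a transportation LP).
A_eq = np.zeros((n + m, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0       # each Monte Carlo sample of X carries mass 1/n
for j in range(m):
    A_eq[n + j, j::m] = 1.0                # each value of Y carries mass q_j
b_eq = np.concatenate([np.full(n, 1.0 / n), q])

# linprog minimizes, so negate the cost to obtain the worst-case (largest) expectation.
res = linprog(-C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print("estimated worst-case E[c(X, Y)] over couplings:", -res.fun)
```

Repeating this with increasing n would illustrate the $O(n^{-1/2})$ convergence behavior described above.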

Efficient Simulation and Performance Stabilization for Time-Varying Single-Server Queues

Ma, Ni January 2019 (has links)
This thesis develops techniques to evaluate and to improve the performance of single-server service systems with time-varying arrivals. The performance measures considered are the time-varying expected length of the queue and the expected customer waiting time. Time-varying arrival rates are considered because they often occur in service systems. For example, arrival rates often vary significantly over the hours of each day and over the days of each week. Textbook stochastic methods do not apply to models with time-varying arrival rates. Hence new techniques are needed to provide high quality of service when stationary steady-state analysis is not appropriate. In contrast to the extensive recent literature on many-server queues with time-varying arrival rates, we focus on single-server queues with time-varying arrival rates. Single-server queues arise in real applications where there is no flexibility in the number of service facilities (servers). Different analysis techniques are required for single-server queues, because the two kinds of models exhibit very different performance. Many-server models are more tractable because methods for highly tractable infinite-server models can be applied. In contrast, single-server models are more complicated because it takes a long time to respond to a buildup of workload when there is only one server. The thesis is divided into two parts: simulation algorithms for performance evaluation and service-rate controls for performance stabilization. The first part of the thesis develops algorithms to efficiently simulate the single-server time-varying queue. For the generality considered, no explicit mathematical formulas are available for calculating performance measures, so simulation experiments are needed to calculate and evaluate system performance. Efficient algorithms for both standard simulation and rare-event simulation are developed. The second part of the thesis develops service-rate controls to stabilize performance in the time-varying single-server queue. The performance stabilization problem aims to minimize fluctuations in mean waiting times for customers arriving at different times even though the arrival rate is time-varying. A new service-rate control is developed, where the service rate at each time is a function of the arrival-rate function. We show that a specific service-rate control can be found to stabilize performance. In turn, that service-rate control can be used to provide guidance for real applications on optimal changes in staffing, processing speed or machine power status over time. Both the simulation experiments to evaluate the performance of alternative service-rate controls and the simulation search algorithm to find the best parameters for a damped time-lag service-rate control are based on the efficient performance-evaluation algorithms in the first part of the thesis. In Chapter Two, we present an efficient algorithm to simulate a general non-Poisson non-stationary point process. The general point process can be represented as a time transformation of a rate-one base process; by exploiting a table of the inverse cumulative arrival-rate function constructed outside the simulation, we can efficiently convert the simulated rate-one process into the simulated general point process. The simulation experiments can be conducted in linear time subject to small error bounds.
Then we apply this efficient algorithm to generate the arrival process and the service process, and thus to calculate performance measures for G_t/G_t/1 queues, which are single-server queues with time-varying arrival rates and service rates. Service models are constructed for this purpose in which time-varying service rates are specified separately from the rate-one service-requirement process, and service times are determined by equating service requirements with integrals of service rates over a time period equal to the service time. In Chapter Three, we develop rare-event simulation algorithms for periodic GI_t/GI/1 queues, and further for GI_t/GI_t/1 queues, to estimate probabilities of rare but important events as a sanity check on the system, for example, the probability that the waiting time is very long. Importance sampling, specifically exponential tilting, is required to estimate rare-event probabilities because in standard simulation the number of experiments needed to achieve a targeted relative error may blow up, and each experiment may take a very long time to determine that the rare event does not happen. To extend the rare-event simulation algorithm to periodic queues, we derive a convenient expression for the periodic steady-state virtual waiting time. We apply this expression to establish bounds between the periodic workload and the steady-state workload in stationary queues, so that we can prove that the exponential tilting algorithm with the same parameter that is efficient in stationary queues is also efficient in the periodic setting, with a bounded relative error. We apply this algorithm to compute the periodic steady-state distribution of reflected periodic Brownian motion, with the support of a heavy-traffic limit theorem, and to calculate the periodic steady-state distribution and moments of the virtual waiting time. The advantage of this algorithm in calculating these distributions and moments is that it can estimate them directly at a specific position in the cycle without simulating the whole queueing process until steady state is reached over the whole cycle. In Chapter Four, we conduct simulation experiments to evaluate the performance of four service-rate controls: the rate-matching control, which is directly proportional to the arrival rate; two square-root controls related to the square-root staffing formula; and a square-root control based on the mean stationary waiting time. Simulations show that the rate-matching control stabilizes the queue-length distribution but not the virtual waiting time. This is consistent with established theoretical results, which follow from the observation that with the rate-matching control, the queueing process becomes a time transformation of the stationary queueing process with constant arrival and service rates. Simulation results also show that the two square-root controls analogous to the square-root staffing formula are not effective in stabilizing performance. On the other hand, the alternative square-root service-rate control based on the mean stationary waiting time approximately stabilizes the virtual waiting time when the cycle is long, so that the arrival rate changes slowly enough. In Chapter Five, since we are mostly interested in stabilizing waiting times in more common scenarios, when the traffic intensity is not close to one or when the arrival rate does not change slowly, we develop a damped time-lag service-rate control that performs fairly well for this purpose.
This control is a modification of the rate-matching control involving a time lag and a damping factor. To find the best parameters for this control, we search over reasonable intervals for the most time-stable performance measures, which are computed by the extended rare-event simulation algorithm for the GI_t/GI_t/1 queue. We conduct simulation experiments to validate that this control is effective for stabilizing the expected steady-state virtual waiting time (and, to a large extent, its distribution). We also establish a heavy-traffic limit with periodicity in the fluid scale to provide theoretical support for this control. Finally, we show that there is a time-varying Little's law in heavy traffic, which implies that this control cannot stabilize the queue length and the waiting time at the same time.
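The Chapter Two algorithm summarized above rests on representing a non-stationary point process as a time transformation of a rate-one base process through the inverse cumulative arrival-rate function. A minimal sketch of that inversion idea follows; the sinusoidal arrival-rate function, horizon, and grid size are assumed for illustration and are not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed sinusoidal arrival-rate function (illustrative only).
lam = lambda t: 1.0 + 0.5 * np.sin(2 * np.pi * t / 10.0)

# Tabulate the cumulative rate Lambda(t) on a grid so its inverse can be read off by interpolation.
T = 100.0
grid = np.linspace(0.0, T, 10001)
Lam = np.concatenate([[0.0],
                      np.cumsum(0.5 * (lam(grid[1:]) + lam(grid[:-1])) * np.diff(grid))])

# Rate-one base process (here Poisson; a general rate-one renewal process works the same way).
base_times = np.cumsum(rng.exponential(1.0, size=int(2 * Lam[-1])))
base_times = base_times[base_times < Lam[-1]]

# Time transformation: arrival times are Lambda^{-1} applied to the base-process points.
arrivals = np.interp(base_times, Lam, grid)
print(f"{len(arrivals)} arrivals generated on [0, {T}]")
```

The same table-lookup idea applies to a time-varying service-rate function, which is how the service process for the G_t/G_t/1 model described above could be generated.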

Structured Tensor Recovery and Decomposition

Mu, Cun January 2017 (has links)
Tensors, a.k.a. multi-dimensional arrays, arise naturally when modeling higher-order objects and relations. Across ubiquitous applications including image processing, collaborative filtering, demand forecasting and higher-order statistics, two themes recur: tensor recovery and tensor decomposition. The first aims to recover the underlying tensor from incomplete information; the second studies a variety of tensor decompositions in order to represent the array more concisely and, moreover, to capture the salient characteristics of the underlying data. Both topics are addressed in this thesis. Chapter 2 and Chapter 3 focus on low-rank tensor recovery (LRTR) from both theoretical and algorithmic perspectives. In Chapter 2, we first provide a negative result for the sum of nuclear norms (SNN) model---an existing convex model widely used for LRTR; we then propose a novel convex model and prove that this new model is better than the SNN model in terms of the number of measurements required to recover the underlying low-rank tensor. In Chapter 3, we first build a connection between robust low-rank tensor recovery and compressive principal component pursuit (CPCP), a convex model for robust low-rank matrix recovery. We then focus on developing convergent and scalable optimization methods to solve the CPCP problem. Specifically, our convergent method, obtained by combining classical ideas from Frank-Wolfe and proximal methods, achieves scalability with linear per-iteration cost. Chapter 4 generalizes the successive rank-one approximation (SROA) scheme for matrix eigen-decomposition to a special class of tensors called symmetric and orthogonally decomposable (SOD) tensors. We prove that the SROA scheme can robustly recover the symmetric canonical decomposition of the underlying SOD tensor even in the presence of noise. Perturbation bounds, which can be regarded as a higher-order generalization of the Davis-Kahan theorem, are provided in terms of the noise magnitude.
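A successive rank-one approximation of the kind Chapter 4 studies can be sketched with the standard tensor power method plus deflation. The sketch below is an illustration under assumed dimensions, weights, noise level, and iteration counts; it is not the thesis code or its specific procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def sym_outer3(v):
    # symmetric rank-one order-3 tensor built from v
    return np.einsum('i,j,k->ijk', v, v, v)

def sroa(T, rank, n_iter=200):
    # Successive rank-one approximations via the tensor power method with deflation.
    d = T.shape[0]
    lams = []
    for _ in range(rank):
        v = rng.standard_normal(d)
        v /= np.linalg.norm(v)
        for _ in range(n_iter):
            w = np.einsum('ijk,j,k->i', T, v, v)   # T(I, v, v)
            v = w / np.linalg.norm(w)
        lam = np.einsum('ijk,i,j,k->', T, v, v, v)
        lams.append(lam)
        T = T - lam * sym_outer3(v)                # deflate the recovered component
    return np.array(lams)

# Build a noisy SOD tensor from orthonormal components and recover the weights.
d, r = 6, 3
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
true_lams = [3.0, 2.0, 1.0]
T = sum(l * sym_outer3(Q[:, i]) for i, l in enumerate(true_lams))
E = rng.standard_normal((d, d, d))
E = (E + E.transpose(0, 2, 1) + E.transpose(1, 0, 2) +
     E.transpose(1, 2, 0) + E.transpose(2, 0, 1) + E.transpose(2, 1, 0)) / 6.0
T = T + 0.01 * E                                   # small symmetric perturbation
print("recovered weights:", np.round(np.sort(sroa(T, r))[::-1], 3))  # close to 3, 2, 1
```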

Optimal Dynamic Strategies for Index Tracking and Algorithmic Trading

Ward, Brian Michael January 2017 (has links)
In this thesis we study dynamic strategies for index tracking and algorithmic trading. Tracking problems have become ever more important in financial engineering as investors seek to precisely control their portfolio risks and exposures over different time horizons. This thesis analyzes various tracking problems and elucidates the tracking errors and the strategies one can employ to minimize those errors and maximize profit. In Chapters 2 and 3, we study the empirical tracking properties of exchange-traded funds (ETFs), leveraged ETFs (LETFs), and futures products related to spot gold and the Chicago Board Options Exchange (CBOE) Volatility Index (VIX), respectively. These two markets provide interesting and differing examples for understanding index tracking. We find that static strategies work well in the nonleveraged case for gold, but fail to track well in the corresponding leveraged case. For the VIX, tracking succeeds via neither ETFs nor futures portfolios, even in the nonleveraged case. This motivates the need for dynamic strategies, some of which we construct in these two chapters and further expand on in Chapter 4. There, we analyze a framework for index tracking and risk exposure control through financial derivatives. We derive a tracking condition that restricts our exposure choices and also define a slippage process that characterizes the deviations from the index over longer horizons. The framework is applied to a number of models, for example, the Black-Scholes and Heston models for equity index tracking, as well as the Square Root (SQR) model and the Concatenated Square Root (CSQR) model for VIX tracking. By specifying how each of these models falls into our framework, we are able to understand the tracking errors in each of them. Finally, Chapter 5 analyzes a tracking problem of a different kind that arises in algorithmic trading: schedule following for optimal execution. We formulate and solve a stochastic control problem to obtain the optimal trading rates using both market and limit orders. There is a quadratic terminal penalty to ensure complete liquidation, as well as a trade speed limiter and trade director to provide better control over the trading rates. The latter two penalties allow the trader to tailor the magnitude and sign (respectively) of the optimal trading rates. We demonstrate the applicability of the model to following a benchmark schedule. In addition, we identify conditions on the model parameters that ensure optimality of the controls and finiteness of the associated value functions. Throughout the chapter, numerical simulations are provided to demonstrate the properties of the optimal trading rates.
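One quick way to see the kind of slippage that daily-rebalanced leveraged funds exhibit, and hence why static leveraged tracking can fail, is a short simulation. The GBM-style return dynamics and all parameters below are assumptions chosen for illustration, not results or models from the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed daily index returns under GBM-like dynamics; mu, sigma, beta are illustrative.
mu, sigma, beta = 0.05, 0.30, 2.0
days, dt = 252, 1.0 / 252
ret = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(days)   # daily index returns

index_growth = np.prod(1 + ret)          # one-year buy-and-hold index growth
letf_growth = np.prod(1 + beta * ret)    # daily-rebalanced beta-leveraged fund
naive_target = index_growth ** beta      # what "beta times the index" naively suggests

print(f"index: {index_growth:.3f}   LETF: {letf_growth:.3f}   index^beta: {naive_target:.3f}")
# The gap between the last two numbers is a volatility-induced slippage effect of the
# general kind that a dynamic tracking strategy would aim to control.
```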

Optimal supply chain configuration for the additive manufacturing of biomedical implants

Emelogu, Adindu Ahurueze 11 January 2017 (has links)
In this dissertation, we study two important problems related to additive manufacturing (AM). In the first part, we investigate the economic feasibility of using AM to fabricate biomedical implants at the sites of hospitals versus using traditional manufacturing (TM). We propose a cost model to quantify the supply-chain-level costs associated with the production of biomedical implants using AM technology, and formulate the problem as a two-stage stochastic programming model, which determines the number of AM facilities to be established and the volume of product flow between manufacturing facilities and hospitals at minimum cost. We use the sample average approximation (SAA) approach to obtain solutions to the problem for a real-world case study of hospitals in the state of Mississippi. We find that the ratio between the unit production costs of AM and TM (ATR), demand, and product lead time are key cost parameters that determine the economic feasibility of AM. In the second part, we investigate AM facility deployment approaches, which affect both the supply chain network cost and the extent of benefits derived from AM. We formulate the supply chain network cost as a continuous approximation model and use optimization algorithms to determine how centralized or distributed the AM facilities should be and how much raw material these facilities should order so that the total network cost is minimized. We apply the cost model to a real-world case study of hospitals in 12 states of the southeastern USA. We find that the demand for biomedical implants in the region, the fixed investment cost of AM machines, the personnel cost of operating the machines and the transportation cost are the major factors that determine the optimal AM facility deployment configuration. In the last part, we propose an enhanced sample average approximation (eSAA) technique that improves the basic SAA method. The eSAA technique uses clustering and statistical techniques to overcome the sample-size issue inherent in basic SAA. Our results from extensive numerical experiments indicate that eSAA can perform up to 699% faster than basic SAA, making it a competitive solution approach for large-scale stochastic optimization problems.
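To illustrate the sample average approximation idea referenced above in the simplest possible setting, the sketch below replaces the facility-location model with a toy single-capacity problem solved by grid search over the first-stage decision. Every cost figure and the demand distribution are hypothetical, and the model is a stand-in rather than the dissertation's formulation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed first- and second-stage unit costs (illustrative only).
c_build = 4.0          # cost per unit of installed AM capacity (first stage)
c_short = 15.0         # penalty per unit of unmet implant demand (second stage)
c_hold = 1.0           # cost per unit of idle capacity (second stage)

demand_samples = rng.lognormal(mean=3.0, sigma=0.4, size=1000)   # sampled demand scenarios

def saa_objective(x):
    # Sample average of first-stage cost plus expected second-stage recourse cost.
    shortage = np.maximum(demand_samples - x, 0.0)
    idle = np.maximum(x - demand_samples, 0.0)
    return c_build * x + np.mean(c_short * shortage + c_hold * idle)

candidates = np.linspace(0.0, 60.0, 601)
best = candidates[np.argmin([saa_objective(x) for x in candidates])]
print(f"SAA-optimal capacity (toy model): {best:.1f}")
```

Re-solving with independent batches of scenarios and comparing the resulting decisions is the usual way to gauge SAA sampling error, which is the issue the eSAA enhancement described above targets.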

Implementing reusable solvers : an object-oriented framework for operations research algorithms

Ruark, John Douglas, 1971- January 1998 (has links)
Thesis (Ph.D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 1998. / Includes bibliographical references (p. 325-338) and indexes. / by John Douglas Ruark. / Ph.D.

Drug Supply Chain Optimization for Adaptive Clinical Trials

Wei-An Chen (7474730) 17 October 2019 (has links)
As adaptive clinical trials (ACTs) have received growing attention and exhibited promising performance in practical trials during the last decade, they also present challenges to drug supply chain management. As indicated by Burnham et al. (2015), the challenges include the uncertainty of the maximum drug supply needed, the shifting of supply requirements, and the rapid availability of new supply at decision points. To facilitate drug supply decision making and the development of mathematical analysis tools, we propose two trial supply chain optimization problems that represent different mindsets in response to trial adaptations. In the first problem, we treat the impacts of ACTs as exogenous uncertainties and study important aspects of trial supply, including drug wastage, resupply policy, trial length, and cost minimization, via a two-stage stochastic program. In the second problem, we incorporate the adaptation rules of ACTs into supply chain management and numerically study the impact of joint optimization on trial and drug supply planning through a mixed-integer nonlinear program (MINLP). As solution approaches to these problems, we use the progressive hedging algorithm (PHA) and particle swarm optimization (PSO), respectively, and take advantage of the problem structures to enhance solution efficiency. With case studies, we see that the proposed models capture the features of ACT drug supply and the mechanisms of trial conduct well. The solutions not only reflect the impact of trial adaptations but also provide managerial suggestions, e.g., the prediction of the needed production amount, storage capacity at clinical sites, and resupply schemes. The joint optimization also suggests a new angle and a research extension in the field of ACT design and supply.
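Particle swarm optimization, one of the solution heuristics named above, can be sketched generically as follows. The objective function here is a smooth stand-in, not the trial supply model, and the swarm size, inertia, and acceleration coefficients are conventional default assumptions rather than values from this work.

```python
import numpy as np

rng = np.random.default_rng(5)

def objective(x):
    # Hypothetical smooth surrogate standing in for the MINLP objective.
    return np.sum((x - 3.0) ** 2, axis=-1)

n_particles, dim, iters = 30, 4, 100
w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration coefficients
pos = rng.uniform(-10.0, 10.0, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), objective(pos)   # personal bests
gbest = pbest[np.argmin(pbest_val)]             # global best

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = objective(pos)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("PSO solution (toy objective):", np.round(gbest, 3))
```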

Pricing Analytics for Reusable Resources

Sun, Yunjie January 2019 (has links)
First, we consider a fundamental pricing model for a single type of reusable resource in which a fixed number of units are used to serve stochastically arriving customers. Customers choose whether to purchase the resource based on their willingness-to-pay and the current price. If a purchase is made, the customer occupies one unit of the reusable resource for a random amount of time. The firm seeks to maximize a weighted combination of profit, market share, and service level. We establish a series of theoretical results that characterize the strong universal performance of static pricing in such an environment. Second, we describe a comprehensive approach to pricing analytics for reusable resources in the context of rotable spare parts, developed with an industrial partner. We discuss the process of instilling a new pricing culture and developing a scalable new pricing methodology at a major aircraft manufacturer. We develop a novel pricing-analytics approach for all rotable spare parts. The new approach tackles the challenges of limited data availability, minimal demand information, and complex inventory dynamics. We also present a successful large-scale implementation of our approach, which led to significant profit gains. Third, we extend the pricing model for reusable resources to the setting of multiple customer classes. We describe two types of heuristics for this class of problems, with accompanying numerical experiments. In addition, we provide a universal performance guarantee for a special case. We also discuss the role of substitution effects between different classes of customers.
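A rough way to see the trade-offs a static price induces for a reusable resource is to treat the system as an Erlang loss queue: a higher price thins demand but also reduces blocking. The willingness-to-pay model, rates, and capacity below are illustrative assumptions, not the model, data, or results from the dissertation or its industrial case.

```python
import numpy as np

# Assumed system parameters (illustrative only).
C = 10                      # units of the reusable resource
lam, mu = 8.0, 1.0          # arrival rate and usage-completion rate

def erlang_b(load, servers):
    # Standard Erlang-B recursion for the blocking probability.
    b = 1.0
    for k in range(1, servers + 1):
        b = load * b / (k + load * b)
    return b

def metrics(price, wtp_scale=5.0):
    lam_eff = lam * np.exp(-price / wtp_scale)     # assumed exponential willingness-to-pay
    block = erlang_b(lam_eff / mu, C)
    profit_rate = price * lam_eff * (1 - block)    # revenue rate from admitted purchases
    return profit_rate, 1 - block                  # profit rate and service level

for p in [2.0, 4.0, 6.0, 8.0]:
    prof, svc = metrics(p)
    print(f"price {p:.1f}: profit/time {prof:5.2f}, service level {svc:.3f}")
```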

Complexity of scheduling problems with constraints

January 1992 (has links)
In deterministic scheduling theory, scheduling problems have been formulated with the common assumptions that: (i) timing constraints are absolute and independent; (ii) performance measures are regular; and (iii) a job can execute on at most one processor at any time. In this dissertation, we investigate the complexity of scheduling problems with constraints that do not conform to these assumptions. Firstly, we consider a relative timing constraint called the temporal distance constraint. We study the problem of scheduling a set of unit-execution-time jobs on one processor subject to both deadline and temporal distance constraints. We show that the problem is NP-hard in the strong sense even when temporal distance constraints take the structure of arbitrary chains and arbitrary values. When the values of the temporal distance constraints are the same constant, we extend Han and Lin's work to accommodate problem instances subject to temporal distance constraints of directed trees in which the root has at most two immediate children. We present an $O(n)$ algorithm for the distinct-deadline case and an $O(n^2)$ algorithm for the non-distinct-deadline case. Secondly, we consider a non-regular performance measure that imposes penalties for jobs completed early or late. We study the problem of finding a nonpreemptive schedule with minimum maximal weighted earliness and tardiness about a common due date d on one processor. For jobs with arbitrary execution times and arbitrary weights, we prove that this problem is NP-hard in the ordinary sense for any given relation between d and the total execution time. For jobs with equal execution times, we present an $O(n^2)$ algorithm to find an optimal nonpreemptive schedule. Thirdly, we propose a new scheduling model, called the Generalized Task System, to provide the flexibility of both sequential and simultaneous parallel execution of subtasks of a single job on multiple processors. Based on this new model, we study the problem of scheduling a set of independent jobs on two identical processors with the objective of minimizing the total waiting time. For both the preemptive and nonpreemptive cases, we show that the problem is NP-hard in the ordinary sense when jobs are allowed to have two parallelizable subtasks.

Wavelet Methods in Quality Engineering: Statistical Process Monitoring and Experimentation for Profile Responses

Unknown Date (has links)
Advances in measurement technology have led to an interest in methods for analyzing functional response data, also known as profiles. Profiles are response variables that, rather than taking on a single value, can be considered a function of one or more independent variables. In quality engineering, profiles present challenges for both statistical process monitoring and experimentation because they tend to be high dimensional. High-dimensional responses can result in low-power test statistics and may preclude the use of conventional multivariate statistics. Moreover, profile responses can differ at any combination of locations along the independent variable axes, compared to a simple increase or decrease for a single-valued response. This leads to potentially ambiguous interpretation of results and may induce a disparity in the ability to detect differences that occur at only a few points (a local difference) compared to a systematic difference that impacts the entire length of the profile (a global difference). Wavelet-based methods show strong potential for addressing these challenges. This dissertation presents an overview of wavelets, emphasizing the potential advantages of wavelets for statistical process monitoring applications. Next, the performances of wavelet-based, parametric, and residual control chart methods for quickly detecting a range of local and global within-profile change types are compared and contrasted. Finally, four methods are proposed for testing hypotheses about profile differences between treatments. The performance of these methods is compared and an extension to one-way ANOVA is introduced. We conclude that for both profile monitoring and hypothesis testing applications, wavelet-based methods can outperform other approaches. In addition, wavelet-based statistical methods tend to be more robust than competing approaches when the local or global nature of process changes or profile differences is not known a priori. / A Dissertation submitted to the Department of Industrial and Manufacturing Engineering in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Degree Awarded: Summer Semester, 2008. / Date of Defense: June 30, 2008. / Functional Responses, Statistical Process Control, Discrete Wavelet Transform / Includes bibliographical references. / Joseph J. Pignatiello, Jr., Professor Co-Directing Dissertation; James R. Simpson, Professor Co-Directing Dissertation; Eric Chicken, Outside Committee Member; Timothy J. Robinson, Committee Member.
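As a small illustration of a wavelet-based monitoring statistic of the general kind studied here (not the specific charts compared in the dissertation), one could compare the wavelet coefficients of an observed profile against a baseline profile. This sketch assumes the PyWavelets package, a made-up sinusoidal baseline, and an arbitrary local disturbance.

```python
import numpy as np
import pywt

rng = np.random.default_rng(6)

t = np.linspace(0.0, 1.0, 256)
baseline = np.sin(2 * np.pi * t)                     # hypothetical reference profile

def monitoring_stat(profile, reference, wavelet="db4", level=4):
    # Sum of squared differences of wavelet coefficients across all scales.
    d_prof = pywt.wavedec(profile, wavelet, level=level)
    d_ref = pywt.wavedec(reference, wavelet, level=level)
    return sum(np.sum((a - b) ** 2) for a, b in zip(d_prof, d_ref))

in_control = baseline + 0.05 * rng.standard_normal(t.size)
local_shift = in_control.copy()
local_shift[100:110] += 0.5                          # a small local disturbance

print("in-control statistic :", round(monitoring_stat(in_control, baseline), 3))
print("local-shift statistic:", round(monitoring_stat(local_shift, baseline), 3))
```

Because the local disturbance concentrates in a few fine-scale coefficients, a statistic of this general form can flag it even though it barely moves an overall mean-difference measure.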
