1

Acceleration of Iterative Methods for Markov Decision Processes

Shlakhter, Oleksandr 21 April 2010 (has links)
This research focuses on Markov decision processes (MDPs), one of the most important and challenging areas of Operations Research. Every day people make many decisions: today's decisions impact tomorrow's, and tomorrow's will impact the ones made the day after. Problems in Engineering, Science, and Business often pose similar challenges: a large number of options and uncertainty about the future. MDPs are one of the most powerful tools for solving such problems. There are several standard methods for finding optimal or approximately optimal policies for MDPs; value iteration and policy iteration are the most widely employed. Although simple to implement, these approaches are nevertheless limited in the size of problems they can solve, because of the excessive computation required to find close-to-optimal solutions. My thesis proposes new value iteration and modified policy iteration methods for classes of expected discounted MDPs and average-cost MDPs. We establish a class of operators that can be integrated into value iteration and modified policy iteration algorithms for Markov decision processes so as to speed up the convergence of the iterative search. Applying these operators requires little additional computation per iteration but reduces the number of iterations significantly. The development of the acceleration operators relies on two key properties of the Markov operator, namely the contraction mapping property and monotonicity in a restricted region. Since the Markov operators of the classical value iteration and modified policy iteration methods for average-cost MDPs do not possess the contraction mapping property, for these models we restrict our study to average-cost problems that can be formulated as stochastic shortest path problems. The performance improvement is significant, while incorporating the operators into value iteration is trivial. Numerical studies show that the accelerated methods can be hundreds of times more efficient at solving MDP problems than other known approaches. The computational savings are especially significant when the discount factor approaches 1 and the transition probability matrix becomes dense, in which case standard iterative algorithms suffer from slow convergence.
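
As a rough illustration of the baseline method this thesis accelerates (not the author's operators), the following Python sketch runs value iteration for a finite discounted MDP, with an optional in-place Gauss-Seidel sweep as one well-known acceleration; the transition matrices, rewards, and discount factor are hypothetical.

    import numpy as np

    def value_iteration(P, R, gamma, tol=1e-8, gauss_seidel=True):
        """Value iteration for a finite discounted MDP.

        P: (A, S, S) transition probabilities per action.
        R: (A, S) expected immediate rewards per action and state.
        gamma: discount factor in (0, 1).
        gauss_seidel: sweep states in place (a classic acceleration);
                      this is NOT the thesis's operator, only an illustration.
        """
        A, S, _ = P.shape
        V = np.zeros(S)
        while True:
            V_old = V.copy()
            if gauss_seidel:
                for s in range(S):
                    V[s] = max(R[a, s] + gamma * P[a, s] @ V for a in range(A))
            else:
                V = np.max(R + gamma * (P @ V), axis=0)
            # Standard stopping rule based on the contraction property.
            if np.max(np.abs(V - V_old)) < tol * (1 - gamma) / (2 * gamma):
                break
        policy = np.argmax(R + gamma * (P @ V), axis=0)
        return V, policy

    # Hypothetical two-state, two-action example.
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.5, 0.5], [0.6, 0.4]]])
    R = np.array([[1.0, 0.0], [0.5, 2.0]])
    V, pi = value_iteration(P, R, gamma=0.95)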
2

A minimum cost and risk mitigation approach for blood collection

Zeng, Chenxi 27 May 2016 (has links)
Due to the limited supply and perishable nature of blood products, effective management of blood collection is critical for high-quality healthcare delivery. Whole blood is typically collected over a 6- to 8-hour collection window from volunteer donors at sites (e.g., schools, universities, churches, companies) that are a significant distance from the blood products processing facility, and is then transported from the collection site to the processing facility by a bloodmobile. The time between collecting whole blood and processing it into cryoprecipitate ("cryo"), a critical blood product for controlling massive hemorrhaging, cannot exceed 8 hours (the 8-hour collection-to-completion constraint), while the collection-to-completion constraint for other blood products is 24 hours. In order to meet the collection-to-completion constraint for cryo, it is often necessary to have a "mid-drive collection"; i.e., a vehicle other than the bloodmobile picks up and transports, at extra cost, whole blood units collected early in the collection window to the processing facility. In this dissertation, we develop analytical models to: (1) analyze which collection sites should be designated as cryo collection sites to minimize total collection costs while satisfying the collection-to-completion constraint and meeting the weekly production target (the non-split case), (2) analyze the impact of changing the current process to allow collection windows to be split into two intervals and then determine which intervals should be designated as cryo collection intervals (the split case), and (3) ensure that the weekly production target is met with high probability. These problems lead to MDP models with large state and action spaces and constraints to guarantee that the weekly production target is met with high probability. These models are computationally intractable for problems with state and action spaces of realistic cardinality. We consider two approaches to guarantee that the weekly production target is met with high probability: (1) a penalty function approach and (2) a chance constraint approach. For the MDP with the penalty function approach, we first relax a constraint, which significantly reduces the cardinality of the state space and provides a lower bound on the optimal expected weekly cost of collecting whole blood for cryo while satisfying the collection-to-completion constraint. We then present an action elimination procedure that, coupled with the constraint relaxation, leads to a computationally tractable lower bound. We also develop several heuristics that generate sub-optimal policies and provide an analytical description of the difference between the upper and lower bounds in order to assess the quality of the heuristics. For the multiple-decision-epoch MDP model with the chance constraint approach, we first note by example that a straightforward application of dynamic programming can lead to a sub-optimal policy. We therefore restrict the model to a single decision epoch and use a computationally tractable rolling-horizon procedure for policy determination. We also present a simple greedy heuristic (another rolling-horizon decision-making procedure) based on ranking the collection intervals by mid-drive pickup cost per unit of expected cryo collected, which yields a competitive sub-optimal solution and leads to the development of a practical decision support tool (DST).
Using real data from the American Red Cross (ARC), we estimate that this DST reduces total cost by about 30% for the non-split case and 70% for the split case, compared to the current practice. Initial implementation of the DST at the ARC Southern regional manufacturing and service center supports our estimates and indicates the potential for significant improvement in current practice.
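
The greedy ranking idea behind the heuristic mentioned above lends itself to a short sketch. The Python fragment below ranks candidate collection intervals by mid-drive pickup cost per expected cryo unit and selects intervals until a weekly target is covered; the field names and numbers are hypothetical, and the actual DST additionally handles the uncertainty and chance constraint that this sketch omits.

    from dataclasses import dataclass

    @dataclass
    class Interval:
        site: str
        pickup_cost: float          # extra cost of a mid-drive pickup for this interval
        expected_cryo_units: float  # expected cryo units collectable in this interval

    def greedy_cryo_selection(intervals, weekly_target):
        """Rank intervals by pickup cost per expected cryo unit and select
        intervals until the weekly production target is covered."""
        ranked = sorted(intervals,
                        key=lambda iv: iv.pickup_cost / iv.expected_cryo_units)
        chosen, covered = [], 0.0
        for iv in ranked:
            if covered >= weekly_target:
                break
            chosen.append(iv)
            covered += iv.expected_cryo_units
        return chosen, covered

    # Hypothetical example.
    intervals = [Interval("high school A", 120.0, 8.0),
                 Interval("university B", 200.0, 20.0),
                 Interval("company C", 90.0, 5.0)]
    selected, units = greedy_cryo_selection(intervals, weekly_target=25.0)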
3

Reinforcement learning by incremental patching

Kim, Min Sub, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
This thesis investigates how an autonomous reinforcement learning agent can improve on an approximate solution by augmenting it with a small patch, which overrides the approximate solution at certain states of the problem. In reinforcement learning, many approximate solutions are smaller and easier to produce than "flat" solutions that maintain distinct parameters for each fully enumerated state, but the best solution within the constraints of the approximation may fall well short of global optimality. This thesis proposes that the remaining gap to global optimality can be efficiently minimised by learning a small patch over the approximate solution. In order to improve the agent's behaviour, algorithms are presented for learning the overriding patch. The patch is grown around particular regions of the problem where the approximate solution is found to be deficient. Two heuristic strategies are proposed for concentrating resources to those areas where inaccuracies in the approximate solution are most costly, drawing a compromise between solution quality and storage requirements. Patching also handles problems with continuous state variables, by two alternative methods: Kuhn triangulation over a fixed discretisation and nearest neighbour interpolation with a variable discretisation. As well as improving the agent's behaviour, patching is also applied to the agent's model of the environment. Inaccuracies in the agent's model of the world are detected by statistical testing, using a selective sampling strategy to limit storage requirements for collecting data. The patching algorithms are demonstrated in several problem domains, illustrating the effectiveness of patching under a wide range of conditions. A scenario drawn from a real-time strategy game demonstrates the ability of patching to handle large complex tasks. These contributions combine to form a general framework for patching over approximate solutions in reinforcement learning. Complex problems cannot be solved by brute force alone, and some form of approximation is necessary to handle large problems. However, this does not mean that the limitations of approximate solutions must be accepted without question. Patching demonstrates one way in which an agent can leverage approximation techniques without losing the ability to handle fine yet important details.
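
As an illustrative sketch of the patching idea (not the thesis's learning algorithms), the fragment below wraps an approximate policy with a small dictionary of per-state overrides that take precedence wherever a patch entry exists; the state representation and policies are hypothetical.

    from typing import Callable, Dict, Hashable

    State = Hashable
    Action = int

    class PatchedPolicy:
        """Approximate policy overridden by a small patch at selected states."""

        def __init__(self, approximate_policy: Callable[[State], Action]):
            self.approximate_policy = approximate_policy
            self.patch: Dict[State, Action] = {}

        def act(self, state: State) -> Action:
            # The patch overrides the approximate solution wherever it is defined.
            return self.patch.get(state, self.approximate_policy(state))

        def add_override(self, state: State, action: Action) -> None:
            # Grow the patch around states where the approximation is deficient.
            self.patch[state] = action

    # Hypothetical usage: override two states of a coarse policy.
    coarse = lambda s: 0             # approximate solution: always action 0
    policy = PatchedPolicy(coarse)
    policy.add_override(("room", 3), 1)
    policy.add_override(("room", 7), 2)
    assert policy.act(("room", 3)) == 1
    assert policy.act(("hall", 1)) == 0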
4

Systems Medicine: An Integrated Approach with Decision Making Perspective

Faryabi, Babak 14 January 2010 (has links)
Two models are proposed to describe interactions among genes, transcription factors, and signaling cascades involved in regulating a cellular sub-system. These models fall within the class of Markovian regulatory networks and can accommodate different biological time scales. These regulatory networks are used to study pathological cellular dynamics and discover treatments that beneficially alter those dynamics. The salient translational goal is to design effective therapeutic actions that desirably modify a pathological cellular behavior via external treatments that vary the expressions of targeted genes. The objective of therapeutic actions is to reduce the likelihood of the pathological phenotypes related to a disease. The task of finding effective treatments is formulated as sequential decision-making processes that discriminate between gene-expression profiles with high pathological competence and those with low pathological competence. Thereby, the proposed computational frameworks provide tools that facilitate the discovery of effective drug targets and the design of potent therapeutic actions on them. Each of the proposed system-based therapeutic methods in this dissertation is motivated by practical and analytical considerations. First, it is determined how asynchronous regulatory models can be used as a tool to search for effective therapeutic interventions. Then, a constrained intervention method is introduced to incorporate the side effects of treatments while searching for a sequence of potent therapeutic actions. Lastly, to bypass the impediment of model inference and to mitigate the numerical challenges of exhaustive search algorithms, a heuristic method is proposed for designing system-based therapies. The presentation of the key ideas in each method is facilitated with the help of several case studies.
5

Optimizing Trading Decisions for Hydro Storage Systems using Approximate Dual Dynamic Programming

Löhndorf, Nils, Wozabal, David, Minner, Stefan 22 August 2013 (has links) (PDF)
We propose a new approach to optimize the operations of hydro storage systems with multiple connected reservoirs whose operators participate in wholesale electricity markets. Our formulation integrates short-term intraday decisions with long-term interday decisions. The intraday problem considers bidding decisions as well as storage operation during the day and is formulated as a stochastic program. The interday problem is modeled as a Markov decision process of managing storage operation over time, for which we propose integrating stochastic dual dynamic programming with approximate dynamic programming. We show that the approximate solution converges towards an upper bound of the optimal solution. To demonstrate the efficiency of the solution approach, we fit an econometric model to actual price and inflow data and apply the approach to a case study of an existing hydro storage system. Our results indicate that the approach is tractable for a real-world application and that the gap between the theoretical upper bound and a simulated lower bound decreases sufficiently fast. (authors' abstract)
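
In stochastic-dual-dynamic-programming-type methods such as the one described here, the interday value-to-go function is commonly represented by a collection of linear cuts. The short sketch below shows only that generic data structure, not the authors' implementation, and the cut coefficients are hypothetical.

    import numpy as np

    class CutApproximation:
        """Piecewise-linear outer approximation of a concave value-to-go
        function, the core object shared by SDDP-style methods."""

        def __init__(self):
            self.intercepts = []   # one intercept per cut
            self.slopes = []       # one slope vector (per reservoir) per cut

        def add_cut(self, intercept, slope):
            self.intercepts.append(float(intercept))
            self.slopes.append(np.asarray(slope, dtype=float))

        def value(self, storage):
            # For a concave maximization problem, the outer approximation is
            # the pointwise minimum over the collected cuts.
            storage = np.asarray(storage, dtype=float)
            return min(a + b @ storage
                       for a, b in zip(self.intercepts, self.slopes))

    approx = CutApproximation()
    approx.add_cut(100.0, [0.8, 0.5])   # hypothetical cut coefficients
    approx.add_cut(140.0, [0.3, 0.2])
    print(approx.value([50.0, 20.0]))   # value-to-go estimate at a storage state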
6

The Essential Dynamics Algorithm: Essential Results

Martin, Martin C. 01 May 2003 (has links)
This paper presents a novel algorithm for learning in a class of stochastic Markov decision processes (MDPs) with continuous state and action spaces that trades speed for accuracy. A transform of the stochastic MDP into a deterministic one is presented which captures the essence of the original dynamics, in a sense made precise. In this transformed MDP, the calculation of values is greatly simplified. The online algorithm estimates the model of the transformed MDP and simultaneously performs policy search against it. Bounds on the error of this approximation are proven, and experimental results in a bicycle-riding domain are presented. The algorithm learns near-optimal policies in orders of magnitude fewer interactions with the stochastic MDP, using less domain knowledge. All code used in the experiments is available on the project's web site.
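
To illustrate the general flavour of replacing stochastic dynamics with a deterministic surrogate, the sketch below applies a certainty-equivalent transform that maps each state-action pair to its expected next state; this is a generic illustration under that assumption, not necessarily the paper's exact transform.

    import numpy as np

    def mean_dynamics(P, states):
        """Return a deterministic surrogate f(s, a) -> expected next state value.

        P: (A, S, S) transition probabilities; states: (S,) numeric state values.
        """
        states = np.asarray(states, dtype=float)

        def f(s_index, a_index):
            return float(P[a_index, s_index] @ states)

        return f

    # Hypothetical 3-state chain: action 0 drifts upward, action 1 stays put.
    P = np.array([
        [[0.2, 0.8, 0.0], [0.0, 0.2, 0.8], [0.0, 0.0, 1.0]],
        [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]],
    ])
    f = mean_dynamics(P, states=[0.0, 1.0, 2.0])
    print(f(0, 0))  # expected next state value from state 0 under action 0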
7

A Stochastic Vendor Managed Inventory Problem and Its Variations

Balun, Pairote 14 May 2004 (has links)
We analyze the problem of distributing units of a product, by a capacitated vehicle, from one storage location (depot) to multiple retailers. The demand processes at the retailers are stochastic and time-dependent. Based on current inventory information, the decision maker decides how many units of the product to deposit at the current retailer, or to pick up at the depot, and which location to visit next. We refer to this problem as the stochastic vendor managed inventory (SVMI) problem. In the Markov decision process model of the SVMI problem, we show how a retailer continues to be the vehicle's optimal destination as the inventory levels of the retailers vary. Furthermore, an optimal inventory action is shown to have monotone relations with the inventory levels. The multi-period SVMI problem and the infinite-horizon (periodic) SVMI problem are analyzed. Additionally, we develop three suboptimal solution procedures, complete a numerical study, and present a case study involving a distribution problem at Coca-Cola Enterprises, Inc. We consider four variations of the SVMI problem, which differ in the available state information and/or the vehicle routing procedure. Analytically, we compare the optimal expected total rewards for the SVMI problem and its variations. Our computational experience suggests a complementary relationship between the quality of state information and the size of the set of retailers that the vehicle can visit.
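
To make the Markov decision process formulation concrete, here is a minimal sketch of one possible encoding of the SVMI state and action; the field names are hypothetical and chosen only for illustration, not taken from the thesis.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class SVMIState:
        retailer_inventory: Tuple[int, ...]  # current inventory at each retailer
        vehicle_location: int                # index of the depot (0) or a retailer (1..N)
        vehicle_load: int                    # units currently on the vehicle
        period: int                          # decision epoch within the horizon

    @dataclass(frozen=True)
    class SVMIAction:
        quantity: int        # units to deposit here (or pick up, if at the depot)
        next_location: int   # where the vehicle travels next

    # Hypothetical instance: vehicle at retailer 2 with 5 units on board.
    state = SVMIState(retailer_inventory=(3, 0, 7), vehicle_location=2,
                      vehicle_load=5, period=4)
    action = SVMIAction(quantity=4, next_location=0)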
8

Analytical and empirical models of online auctions

Ødegaard, Fredrik 11 1900 (has links)
This thesis provides a discussion of some analytical and empirical models of online auctions. The objective is to provide an alternative framework for analyzing online auctions and to characterize the distribution of intermediate prices. Chapter 1 provides a mathematical formulation of the eBay auction format and background on the data used in the empirical analysis. Chapter 2 analyzes policies for optimally disposing of inventory using online auctions. It is assumed that a seller has a fixed number of items to sell using a sequence of, possibly overlapping, single-item auctions. The decision the seller must make is when to start each auction. The decision involves a trade-off between a holding cost for each period an item remains unsold and a cannibalization effect among competing auctions. Consequently, the seller must trade off the expected marginal gain from the ongoing auctions against the expected marginal cost of further deferring the release of the unreleased items. The problem is formulated as a discrete-time Markov decision problem. Conditions are derived to ensure that the optimal release policy is a control limit policy in the current price of the ongoing auctions. Chapter 2 focuses on the two-item case, which has sufficient complexity to raise challenging questions. An underlying assumption in Chapter 2 is that the auction dynamics can be captured by a set of transition probabilities. Chapter 3 shows, for two fixed bidding strategies, how the transition probabilities can be derived for a given auction format and bidder arrival process. The two specific bidding strategies analyzed are when bidders bid (1) a minimal increment and (2) their true valuation. Chapters 4 and 5 provide empirical analyses of 4,000 eBay auctions conducted by Dell. Chapter 4 presents a statistical model in which, over discrete time periods, prices of online auctions follow a zero-inflated gamma distribution. Chapter 5 provides an analysis of the 44,000 bids placed in the auctions, based on bids following a gamma distribution. Both models presented in Chapters 4 and 5 are based on conditional probabilities given the price and elapsed time of an auction and certain parameters of the competing auctions. Chapter 6 concludes the thesis with a discussion of the main results and possible extensions.
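
The control-limit structure referred to in Chapter 2 can be illustrated with a short sketch: release the next item's auction as soon as the ongoing auction's current price crosses a threshold. In practice the threshold would come from solving the MDP; the threshold and price trace below are hypothetical.

    def control_limit_release(current_price: float, threshold: float) -> bool:
        """Control-limit policy sketch: release the next item's auction once
        the ongoing auction's current price reaches the threshold."""
        return current_price >= threshold

    # Hypothetical trace of the ongoing auction's price, checked each period.
    prices = [0.0, 12.5, 31.0, 54.0]
    release_period = next((t for t, p in enumerate(prices)
                           if control_limit_release(p, threshold=50.0)), None)
    print(release_period)  # 3: first period at which the price crosses the limit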
