41 |
Managing populations in the face of uncertainty: adaptive management, partial observability and the dynamic value of information. Moore, Alana L. January 2008
The work presented in this thesis falls naturally into two parts. The first part (Chapter 2) is concerned with the benefit of perturbing a population into an immediately undesirable state in order to improve estimates of a static probability, which may improve long-term management. We consider finding the optimal harvest policy for a theoretical harvested population when a key parameter is unknown. We employ an adaptive management framework to study when it is worth sacrificing short-term rewards in order to increase long-term profits. / Active adaptive management has been increasingly advocated in natural resource management and conservation biology as a methodology for resolving key uncertainties about population dynamics and responses to management. However, when comparing management policies it is traditional to weigh future rewards geometrically (at a constant discount rate), which results in far-distant rewards making a negligible contribution to the total benefit. Under such a discounting scheme, active adaptive management is rarely of much benefit, especially if learning is slow. In Chapter 2, we consider two proposed alternative forms of discounting for evaluating optimal policies for long-term decisions which have a social component. / We demonstrate that discount functions which weigh future rewards more heavily result in more conservative harvesting strategies, but do not necessarily encourage active learning. Furthermore, the optimal management strategy is not equivalent to employing geometric discounting at a lower rate. If alternative discount functions are made mandatory in calculating optimal management policies for environmental management, then this will affect the structure of optimal management regimes and change when and how much we are willing to invest in learning. / The second part of this thesis is concerned with how to account for partial observability when calculating optimal management policies. We consider the problem of controlling an invasive pest species when only partial observations are available at each time step. In the model considered, the monitoring data available are binomial observations of a probability which is an index of the population size. We are again concerned with estimating a probability; however, in this model the probability changes over time. / Before including partial observability explicitly, we consider a model in which perfect observations of the population are available at each time step (Chapter 3). It is intuitive that monitoring will be beneficial only if the management decision depends on the outcome. Hence, a necessary condition for monitoring to be worthwhile is that control policies which are specified in terms of the system state out-perform simpler time-based control policies. Consequently, in addition to providing a benchmark against which we can compare the optimal management policy in the case of partial observations, analysing the perfect observation case also provides insight into when monitoring is likely to be most valuable. / In Chapters 4 and 5 we include partial observability by modelling the control problem as a partially observable Markov decision process (POMDP). We outline several tests, which stem from a property of conservation of expected utility under monitoring, that aid in validating the model. We discuss the optimal management policy prescribed by the POMDP for a range of model scenarios, and use simulation to compare the POMDP management policy to several alternative policies, including controlling with perfect observations and with no observations. / In Chapter 6 we propose an alternative model, developed in the spirit of a POMDP, that does not strictly satisfy the definition of a POMDP. We find that although the second model has some conceptually appealing attributes, it makes an undesirable implicit assumption about the underlying population dynamics.
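As a purely illustrative aside (this is not the harvest model analysed in the thesis), the short Python sketch below shows why the choice of discount function matters: under geometric discounting the weight placed on far-distant rewards vanishes, whereas a hyperbolic discount function of the kind considered as an alternative keeps a substantial share of the total weight in the tail. The rate, horizon and hyperbolic parameter are arbitrary.

```python
# Minimal sketch: how much of the total reward weight lies in the far future
# under geometric vs. hyperbolic discounting. All parameters are illustrative.

def geometric_weights(rate, horizon):
    """Weight of a reward received t steps in the future: (1 - rate)**t."""
    return [(1.0 - rate) ** t for t in range(horizon)]

def hyperbolic_weights(k, horizon):
    """Hyperbolic weighting 1 / (1 + k*t) decays far more slowly in the tail."""
    return [1.0 / (1.0 + k * t) for t in range(horizon)]

horizon = 200
schemes = {
    "geometric": geometric_weights(rate=0.05, horizon=horizon),
    "hyperbolic": hyperbolic_weights(k=0.05, horizon=horizon),
}

# Share of the total weight contributed by rewards more than 50 steps away:
# negligible under geometric discounting, substantial under hyperbolic.
for name, w in schemes.items():
    tail = sum(w[50:]) / sum(w)
    print(f"{name}: fraction of total weight beyond t=50 is {tail:.2%}")
```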
|
42 |
Design and Analysis of Ambulance Diversion Policies. January 2011
Overcrowding of Emergency Departments (EDs) puts the safety of patients at risk. Decision makers implement Ambulance Diversion (AD) as a way to relieve congestion and ensure timely treatment delivery. However, ineffective design of AD policies reduces access to emergency care, and adverse events may arise. The objective of this dissertation is to propose methods to design and analyze effective AD policies that consider performance measures related to patient safety. First, a simulation-based methodology is proposed to evaluate the mean performance and variability of single-factor AD policies in a single hospital environment, considering the trade-off between average waiting time and percentage of time spent on diversion. Regression equations are proposed to obtain parameters of AD policies that yield a desired performance level. The results suggest that policies based on the total number of patients waiting are more consistent and provide high precision in predicting policy performance. Then, a Markov Decision Process model is proposed to obtain the optimal AD policy, assuming that the information needed to start treatment in a neighboring hospital is available. The model is designed to minimize the average tardiness per patient in the long run. Tardiness is defined as the time that patients have to wait beyond a safety time threshold to start receiving treatment. Theoretical and computational analyses show that there exists an optimal policy that is of threshold type, and diversion can be a good alternative to decrease tardiness when ambulance patients cause excessive congestion in the ED. Furthermore, implementation of AD policies in a simulation model that accounts for several relaxations of the assumptions suggests that the model provides consistent policies under multiple scenarios. Finally, a genetic algorithm is combined with simulation to design effective policies for multiple hospitals simultaneously. The model has the objective of minimizing the time that patients spend in non-value-added activities, including transportation, waiting and boarding in the ED. Moreover, the AD policies are combined with simple ambulance destination policies to create ambulance flow control mechanisms. Results show that effective ambulance management can significantly reduce the time that patients have to wait to receive an appropriate level of care. / Dissertation/Thesis / Ph.D. Industrial Engineering 2011
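To make the threshold idea concrete, the following toy Python simulation (a drastic simplification, not the dissertation's simulation or MDP model) diverts ambulance arrivals whenever the number of waiting patients exceeds a threshold and reports the resulting average tardiness beyond an assumed safety window together with the diversion rate; all arrival, service and safety parameters are invented for illustration.

```python
import random

def simulate(threshold, steps=50_000, p_walkin=0.30, p_ambulance=0.15,
             p_service=0.40, safety_wait=5, seed=0):
    rng = random.Random(seed)
    waits = []       # realised waiting times of patients who started treatment
    queue = []       # current waiting time of each patient still in the queue
    diverted = 0
    for _ in range(steps):
        if rng.random() < p_walkin:              # walk-in arrival, never diverted
            queue.append(0)
        if rng.random() < p_ambulance:           # ambulance arrival
            if len(queue) > threshold:
                diverted += 1                    # sent to a neighbouring hospital
            else:
                queue.append(0)
        if queue and rng.random() < p_service:   # single server finishes a patient
            waits.append(queue.pop(0))
        queue = [w + 1 for w in queue]           # everyone else waits one more period
    tardiness = [max(0, w - safety_wait) for w in waits]
    return sum(tardiness) / len(tardiness), diverted / steps

for threshold in (2, 5, 10):
    avg_tardy, divert_rate = simulate(threshold)
    print(f"threshold={threshold:2d}  avg tardiness={avg_tardy:6.2f}  "
          f"diversion rate={divert_rate:.3f}")
```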
|
43 |
Elicitation and planning in Markov decision processes with unknown rewards. Alizadeh, Pegah. 9 December 2016
Markov decision processes (MDPs) are models for solving sequential decision problems in which a user interacts with the environment and adapts her policy by taking numerical reward signals into account. Solving an MDP amounts to formulating the user's behaviour in the environment as a policy function that specifies which action to choose in each situation. In many real-world decision problems, users have various preferences, and therefore the gains of actions on states differ and should be re-decoded for each user. In this dissertation, we are interested in solving MDPs for users with different preferences. / We use a model named Vector-valued MDP (VMDP) with vector rewards. We propose a propagation-search algorithm that assigns a vector-value function to each policy and identifies each user with a preference vector on the existing set of preferences, where the preference vector satisfies the user's priorities. Since the user preference vector is not known, we present several methods for solving VMDPs while approximating the user's preference vector. We introduce two algorithms that reduce the number of queries needed to find the optimal policy of a user: 1) a propagation-search algorithm, where we propagate a set of possible optimal policies for the given MDP without knowing the user's preferences; 2) an interactive value iteration (IVI) algorithm on VMDPs, namely the Advantage-based Value Iteration (ABVI) algorithm, which uses clustering and regrouping of advantages. We also demonstrate how the ABVI algorithm works properly for two different types of users: confident and uncertain. / We finally work on a minimax regret approximation method for finding the optimal policy with respect to limited information about the user's preferences. All possible objectives in the system are simply bounded between upper and lower bounds, while the system is not aware of the user's preferences among them. We propose a heuristic minimax regret approximation method for solving MDPs with unknown rewards that is faster and less complex than the existing methods in the literature.
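As an illustration of the vector-valued MDP idea (not the ABVI or minimax-regret algorithms of the thesis), the sketch below runs plain value iteration on a tiny hand-made VMDP after scalarising the vector rewards with a user's preference vector; different preference vectors can lead to different optimal policies, which is what makes preference elicitation necessary. All transition probabilities and reward vectors are invented.

```python
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
# P[a, s, s']: transition probabilities; R[s, a]: two-objective reward vectors
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],   # action 0
    [[0.2, 0.8, 0.0], [0.0, 0.2, 0.8], [0.1, 0.1, 0.8]],   # action 1
])
R = np.array([
    [[1.0, 0.0], [0.0, 1.0]],
    [[0.5, 0.5], [0.2, 0.9]],
    [[0.0, 0.2], [1.0, 0.0]],
])                                          # shape (state, action, objective)

def solve(w, iters=500):
    """Scalarise the vector rewards with preference weights w, then run value iteration."""
    scalar_R = R @ w                                         # shape (state, action)
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = scalar_R + gamma * np.einsum("ast,t->sa", P, V)  # expected next-state values
        V = Q.max(axis=1)
    return Q.argmax(axis=1), V

for w in (np.array([1.0, 0.0]), np.array([0.3, 0.7])):
    policy, V = solve(w)
    print("preferences", w, "-> policy", policy, "values", np.round(V, 2))
```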
|
44 |
Data-Driven and Game-Theoretic Approaches for Privacy. January 2018
In the past few decades, there has been a remarkable shift in the boundary between public and private information. The application of information technology and electronic communications allows service providers (businesses) to collect a large amount of data. However, this "data collection" process can put the privacy of users at risk and also lead to user reluctance in accepting services or sharing data. This dissertation first investigates privacy-sensitive consumer-retailer/service-provider interactions under different scenarios, and then focuses on a unified framework for various information-theoretic privacy and privacy mechanisms that can be learned directly from data.
Existing approaches such as differential privacy or information-theoretic privacy try to quantify privacy risk but do not capture the subjective experience and heterogeneous expression of privacy-sensitivity. The first part of this dissertation introduces models to study consumer-retailer interaction problems and to better understand how retailers/service providers can balance their revenue objectives while being sensitive to user privacy concerns. This dissertation considers the following three scenarios: (i) consumer-retailer interaction via personalized advertisements; (ii) incentive mechanisms that electrical utility providers need to offer for privacy-sensitive consumers with alternative energy sources; (iii) the market viability of offering privacy-guaranteed free online services. We use game-theoretic models to capture the behaviors of both consumers and retailers, and provide insights for retailers to maximize their profits when interacting with privacy-sensitive consumers.
Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. In the second part, a novel context-aware privacy framework called generative adversarial privacy (GAP) is introduced. Inspired by recent advancements in generative adversarial networks, GAP allows the data holder to learn the privatization mechanism directly from the data. Under GAP, finding the optimal privacy mechanism is formulated as a constrained minimax game between a privatizer and an adversary. For appropriately chosen adversarial loss functions, GAP provides privacy guarantees against strong information-theoretic adversaries. Both synthetic and real-world datasets are used to show that GAP can greatly reduce the adversary's capability of inferring private information at a small cost of distorting the data. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2018
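As a toy illustration of the constrained minimax structure behind GAP (and not the learned, data-driven GAP mechanism itself, which trains a privatizer and an adversary as neural networks), the sketch below considers a privatizer that flips a private bit with probability p, an adversary that plays its Bayes-optimal best response, and a search for the least-distorting p that still caps the adversary's inference accuracy; all numbers are illustrative.

```python
def adversary_accuracy(flip_prob):
    """Best-response accuracy of an adversary guessing a uniform private bit that
    the privatizer flips with probability flip_prob before release."""
    return max(flip_prob, 1.0 - flip_prob)

def distortion(flip_prob):
    """Expected Hamming distortion introduced by the privatizer."""
    return flip_prob

TARGET_ACCURACY = 0.6                           # privacy constraint on the adversary
best_p = None
for p in [i / 100 for i in range(51)]:          # candidate privatizer policies
    if adversary_accuracy(p) <= TARGET_ACCURACY:
        if best_p is None or distortion(p) < distortion(best_p):
            best_p = p
print("least-distorting flip probability meeting the privacy constraint:", best_p)
```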
|
45 |
Decision-making algorithms for autonomous robots. Hofer, Ludovic. 27 November 2017
The autonomy of robots relies heavily on their ability to make decisions based on the information provided by their sensors. In this dissertation, decision-making in robotics is modelled as a Markov decision process with continuous state and action spaces. This choice makes it possible to represent the uncertainty in the outcomes of the actions chosen by the robot. The new learning algorithms proposed in this thesis focus on producing policies that can be used online at a low computational cost. They are applied to real-world problems in the context of the RoboCup, an international robotics competition held annually. In those problems, humanoid robots have to choose either the direction and power of a kick, in order to maximize the probability of scoring a goal, or the parameters of a walk engine to move towards a kickable position.
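As a simplified illustration of decision-making under execution noise (not the algorithms developed in the thesis), the sketch below Monte-Carlo-estimates the probability of scoring for a grid of candidate kick directions and powers and picks the best; the pitch geometry and noise levels are invented.

```python
import math, random

GOAL_X, GOAL_HALF_WIDTH = 4.5, 1.3        # goal line at x = 4.5 m (robot at the origin)
DIR_NOISE, POWER_NOISE = 0.15, 0.5        # std-dev of execution noise (rad, m)

def kick_scores(direction, power, rng):
    d = rng.gauss(direction, DIR_NOISE)           # realised direction
    p = max(0.0, rng.gauss(power, POWER_NOISE))   # realised kick length
    x, y = p * math.cos(d), p * math.sin(d)
    return x >= GOAL_X and abs(y) <= GOAL_HALF_WIDTH

def success_probability(direction, power, n=2000, seed=0):
    rng = random.Random(seed)
    return sum(kick_scores(direction, power, rng) for _ in range(n)) / n

candidates = [(d / 10, p) for d in range(-3, 4) for p in (4.5, 5.5, 6.5)]
best = max(candidates, key=lambda a: success_probability(*a))
print("best (direction, power):", best,
      "with estimated P(score) =", round(success_probability(*best), 3))
```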
|
46 |
On Hierarchical Goal Based Reinforcement Learning. Denis, Nicholas. 27 August 2019
Discrete time sequential decision processes require that an agent select an action at each time step. As humans, we plan over long time horizons and use temporal abstraction by selecting temporally extended actions such as “make lunch” or “get a master's degree”, whereby each is comprised of more granular actions. This thesis concerns itself with such hierarchical temporal abstractions in the form of macro actions and options, as they apply to goal-based Markov Decision Processes. A novel algorithm for discovering hierarchical macro actions in goal-based MDPs, as well as a novel algorithm utilizing landmark options for transfer learning in multi-task goal-based reinforcement learning settings, are introduced. Theoretical properties regarding the life-long regret of an agent executing the latter algorithm are also discussed.
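As a minimal illustration of the macro-action idea (not the discovery or landmark-option algorithms introduced in the thesis), the sketch below executes a fixed sequence of primitive moves as a single temporally extended action in a toy corridor gridworld and returns exactly the quantities an SMDP-style Q-update would consume: the resulting state, the cumulative discounted reward and the duration. The environment and macro are made up.

```python
GOAL, GAMMA = 6, 0.95                    # corridor cells 0..6, goal at the right end

def step(state, action):                 # primitive actions: -1 (left), +1 (right)
    nxt = min(max(state + action, 0), GOAL)
    reward = 1.0 if nxt == GOAL else -0.01
    return nxt, reward, nxt == GOAL

def run_macro(state, primitive_actions):
    """Execute a macro action; return (next_state, discounted_return, duration)."""
    total, discount = 0.0, 1.0
    for k, a in enumerate(primitive_actions):
        state, r, done = step(state, a)
        total += discount * r
        discount *= GAMMA
        if done:
            return state, total, k + 1
    return state, total, len(primitive_actions)

# "move three cells to the right" treated as one temporally extended action
s_next, g, tau = run_macro(state=0, primitive_actions=[+1, +1, +1])
print(f"macro ended in state {s_next} with return {g:.3f} after {tau} steps")
# An SMDP-style update would then use:
#   Q[s, macro] += alpha * (g + GAMMA**tau * max_a Q[s_next, a] - Q[s, macro])
```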
|
47 |
Optimal Mammography Schedule Estimates Under Varying Disease Burden, Infrastructure Availability, and Other Cause Mortality: A Comparative Analyses of Six Low- and Middle-Income Countries. Shifali, Shifali. 18 December 2020
Low- and middle-income countries (LMICs) have a higher mortality-to-incidence ratio for breast cancer compared to high-income countries (HICs) because of late-stage diagnosis. Mammography screening is recommended for early diagnosis; however, current screening guidelines are only generalized by economic disparities and are based on extrapolation of data from randomized controlled trials in HICs, which have different disease burdens and all-cause mortality compared to LMICs. Moreover, the infrastructure capacity in LMICs is far below that needed for adopting current screening guidelines. This study analyzes the impact of disease burden, infrastructure availability, and other-cause mortality on optimal mammography screening schedules for LMICs. Further, these key features are analyzed in the context of overdiagnosis, epidemiologic/clinical uncertainty in pathways of the initial stage of cancer, and variability in technological availability for diagnosis and treatment. It uses a Markov decision process (MDP) model to estimate optimal schedules under varying assumptions of resource availability, applying it to six LMICs. Results suggest that screening schedules should change with disease burden and life expectancy. For countries with similar life expectancy but different disease burden, the model suggests screening age groups with higher incidence rates. For countries with similar incidence rates and different life expectancy, the model suggests screening younger age groups in countries with lower life expectancy. Overdiagnosis and differences in screening technology had minimal impact on optimal schedules. The optimality of screening schedules was sensitive to epidemiologic/clinical uncertainty. Results from this study suggest that, instead of generalized screening schedules, schedules tailored to disease burden and infrastructure capacity could help optimize resources. Results from this study can help inform current screening guidelines and future health investment plans.
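As a stylised illustration of how such schedules can be computed (this is not the study's MDP model, and all numbers are invented), the sketch below uses backward induction to place a limited number of screens across age groups so as to maximise an assumed expected benefit that combines incidence with other-cause survival, mimicking the interplay of disease burden, life expectancy and infrastructure capacity.

```python
from functools import lru_cache

ages       = [40, 45, 50, 55, 60, 65]                 # age groups eligible for screening
incidence  = [0.8, 1.2, 1.9, 2.3, 2.1, 1.6]           # assumed cases per 1000 per group
survive_to = [0.97, 0.95, 0.92, 0.88, 0.82, 0.74]     # assumed other-cause survival to each age
BENEFIT, SCREENS_AVAILABLE = 5.0, 3                   # gain per early detection; capacity limit

@lru_cache(maxsize=None)
def best(i, screens_left):
    """Maximum expected benefit obtainable from age group i onwards."""
    if i == len(ages) or screens_left == 0:
        return 0.0, ()
    skip_value, skip_plan = best(i + 1, screens_left)            # do not screen this group
    screen_value, screen_plan = best(i + 1, screens_left - 1)    # screen this group
    screen_value += survive_to[i] * incidence[i] * BENEFIT
    if screen_value > skip_value:
        return screen_value, (ages[i],) + screen_plan
    return skip_value, skip_plan

value, schedule = best(0, SCREENS_AVAILABLE)
print("screen at ages:", schedule, "with expected benefit", round(value, 2))
```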
|
48 |
Increasing the Value of Information During Planning in Uncertain Environments. Pokharel, Gaurab. 30 July 2021
No description available.
|
49 |
Robot path planning by means of dynamic programming. Stárek, Ivo. January 2009
This work addresses robot path planning using the principles of dynamic programming in a discrete state space. The theoretical part surveys the current state of the field and the principle of applying a Markov decision process to path planning. The practical part covers the implementation of two algorithms based on MDP principles.
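As a small illustration of the dynamic-programming principle involved (not the two algorithms implemented in the thesis), the sketch below runs value iteration on a deterministic grid with obstacles and extracts a greedy path-planning policy; the grid layout and costs are arbitrary.

```python
ROWS, COLS = 4, 5
OBSTACLES, GOAL = {(1, 2), (2, 2)}, (0, 4)
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]            # up, down, left, right
STEP_COST, GAMMA = -1.0, 1.0

states = [(r, c) for r in range(ROWS) for c in range(COLS) if (r, c) not in OBSTACLES]
V = {s: 0.0 for s in states}

def next_state(s, move):
    r, c = s[0] + move[0], s[1] + move[1]
    return (r, c) if (r, c) in V else s               # hitting a wall or obstacle: stay put

for _ in range(50):                                   # enough sweeps for this small grid
    for s in states:
        if s == GOAL:
            continue                                  # absorbing goal keeps value 0
        V[s] = max(STEP_COST + GAMMA * V[next_state(s, m)] for m in MOVES)

policy = {s: max(MOVES, key=lambda m: V[next_state(s, m)]) for s in states if s != GOAL}
print("value at start (3, 0):", V[(3, 0)])
print("first move from (3, 0):", policy[(3, 0)])
```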
|
50 |
Meta-Learning as a Markov Decision Process. Sun-Hosoya, Lisheng. 19 December 2019
Machine Learning (ML) has enjoyed huge successes in recent years, and an ever-growing number of real-world applications rely on it. However, designing promising algorithms for a specific problem still requires huge human effort. Automated Machine Learning (AutoML) aims at taking the human out of the loop and developing machines that generate / recommend good algorithms for a given ML task. AutoML is usually treated as an algorithm / hyper-parameter selection problem; existing approaches include Bayesian optimization, evolutionary algorithms and reinforcement learning. Among them, auto-sklearn, which incorporates meta-learning techniques in its search initialization, ranks consistently well in AutoML challenges. This observation oriented my research to the meta-learning domain and led me to develop a novel framework based on Markov Decision Processes (MDPs) and reinforcement learning (RL). / After a general introduction (Chapter 1), my thesis work starts with an in-depth analysis of the results of the AutoML challenge (Chapter 2). This analysis oriented my work towards meta-learning, leading me first to propose a formulation of AutoML as a recommendation problem, and ultimately to formulate a novel conceptualisation of the problem as an MDP (Chapter 3). In the MDP setting, the problem is brought back to filling up, as quickly and efficiently as possible, a meta-learning matrix S, in which lines correspond to ML tasks and columns to ML algorithms. A matrix element S(i, j) is the performance of algorithm j applied to task i. Searching efficiently for the best values in S allows us to quickly identify the algorithms best suited to given tasks. In Chapter 4 the classical hyper-parameter optimization framework (HyperOpt) is first reviewed. In Chapter 5 a first meta-learning approach is introduced, along the lines of our paper ActivMetaL, that combines active learning and collaborative filtering techniques to predict the missing values in S. Our latest research applies RL to the MDP problem we defined, to learn an efficient policy to explore S. We call this approach REVEAL and propose an analogy with a series of toy games to help visualize agents' strategies to reveal information progressively, e.g. masked areas of images to be classified, or ship positions in a battleship game. This line of research is developed in Chapter 6. / The main results of my PhD project are: 1) HP / model selection: I have explored the Freeze-Thaw method and optimized the algorithm to enter the first AutoML challenge, achieving 3rd place in the final round (Chapter 3). 2) ActivMetaL: I have designed a new algorithm for active meta-learning (ActivMetaL) and compared it with other baseline methods on real-world and artificial data. This study demonstrated that ActivMetaL is generally able to discover the best algorithm faster than baseline methods. 3) REVEAL: I developed a new conceptualization of meta-learning as a Markov Decision Process and put it into the more general framework of REVEAL games. With a master's student intern, I developed agents that learn (with reinforcement learning) to predict the next best algorithm to be tried. To develop these agents, we used surrogate toy tasks of REVEAL games and then applied our methods to AutoML problems. / The work presented in my thesis is empirical in nature. Several real-world meta-datasets were used in this research, as well as artificial and semi-artificial meta-datasets. The results indicate that RL is a viable approach to this problem, although much work remains to be done to optimize algorithms and make them scale to larger meta-learning problems.
|