  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Modelagem e soluções para redes de anúncios. / Model and solutions for Ad Networks.

Flávio Sales Truzzi 07 May 2014 (has links)
Ad Networks promote the distribution of ads on the internet so as to maximize the revenue generated by displaying ads on websites. These networks typically operate under the CPC (Cost per Click) business model, in which the advertiser pays only when a user clicks on its advertisement. How the network distributes ads to websites is of utmost importance, since the rate of clicks on ads is extremely low. Currently, the allocation of ads is computed as an approximation of the optimal allocation defined with data from an earlier period, obtained through linear programming combined with heuristics. However, this problem is clearly a sequential decision process to which multiple constraints apply, such as the advertisers' budgets, minimum limits on the number of displays for each campaign, and ad categories. In this dissertation we argue that MDPs (Markov Decision Processes) provide a better model for the problem, since they take the dynamics of the system into account: for example, an ad with little chance of being clicked can still be allocated efficiently with respect to long-term return, even when other ads would provide a higher profit in the short term. However, due to the large number of states, solving the MDP exactly is impractical. We therefore analyze the relative performance of the linear programming and MDP approaches, deriving guarantees that the approximate solution based on linear programming is not far from the MDP optimal solution, and that in large problems (similar to those found in practice) the difference can be disregarded. Finally, we propose a model based on reinforcement learning, using two approaches: one that disregards contextual information and one that uses it. We argue that reinforcement learning is better suited to the ad-allocation problem, since it can adapt its allocation policy to changes that occur, e.g., in the user profile.
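
To make the reinforcement-learning idea above concrete, here is a minimal context-free sketch in the spirit of the dissertation's first approach: an epsilon-greedy learner that estimates each ad's expected revenue per display under CPC. The ad names, CPC values and click-through rates are invented for illustration; the thesis's actual model additionally handles budgets, display minima and categories.

```python
import random

# Sketch of context-free RL ad allocation under the CPC model. All names
# and numbers below are illustrative assumptions, not from the thesis.
ads = {"ad_a": 0.50, "ad_b": 0.80, "ad_c": 0.30}          # CPC paid on a click
true_ctr = {"ad_a": 0.020, "ad_b": 0.010, "ad_c": 0.040}  # hidden from learner

q = {ad: 0.0 for ad in ads}      # estimated expected revenue per display
n = {ad: 0 for ad in ads}        # number of times each ad was displayed
epsilon = 0.1                    # exploration rate

for impression in range(100_000):
    if random.random() < epsilon:
        choice = random.choice(list(ads))          # explore
    else:
        choice = max(q, key=q.get)                 # exploit current estimate
    clicked = random.random() < true_ctr[choice]   # simulated user behaviour
    reward = ads[choice] if clicked else 0.0       # CPC: paid only on a click
    n[choice] += 1
    q[choice] += (reward - q[choice]) / n[choice]  # incremental mean update

print({ad: round(v, 4) for ad, v in q.items()})    # ~ CPC * CTR for each ad
```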
63

Performance Modeling, Analysis and Control of Capacitated Re-entrant Lines

Choi, Jin Young 09 July 2004 (has links)
This thesis considers the problem of performance modeling, analysis and control of capacitated re-entrant lines. Specifically, the first part of the thesis develops an analytical framework for the modeling, analysis and control of capacitated re-entrant lines, based on the Generalized Stochastic Petri net (GSPN) framework. The corresponding scheduling problem is systematically formulated, and the structure of the optimal policy is characterized and compared to that identified for "traditional" re-entrant lines. The second part of the thesis addresses the problem of developing a systematic and computationally effective method for computing the optimal scheduling policy for any given configuration of a capacitated re-entrant line. Specifically, the underlying scheduling problem is transformed into a Markov Decision Process (MDP) problem, and an algorithm that systematically generates the MDP formulation for any given fab configuration is developed. The third part of the thesis develops an effective approximation scheme based on Neuro-Dynamic Programming (NDP) theory. In its general definition, the NDP method seeks to approximate the optimal relative value function of the underlying MDP formulation by a parameterized function. Hence, an approximating structure for the considered problem is proposed and the quality of the generated approximations is systematically assessed. More specifically, this part of the thesis develops a set of "feature" functions and the mathematical apparatus necessary to evaluate the considered approximation scheme through a numerical experiment. The obtained results indicate that good-quality approximations can be achieved by considering a set of features that characterize the distribution of the running process instances across the various processing stages, together with their lower-order interactions. The last part of the thesis exploits the performance models developed in its earlier parts to provide an analytical characterization of the optimality of various deadlock resolution strategies for Markovian resource allocation systems under the objective of maximizing throughput.
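
As a flavor of the NDP approximation scheme described above, the sketch below fits a parameterized value function that is linear in a set of features (stage occupancies and their pairwise products, mirroring the "lower-order interactions" the abstract mentions). The feature set, states and sampled values are illustrative assumptions, not the thesis's experimental setup.

```python
import numpy as np

# Approximate the optimal relative value function h(s) ~ phi(s) . theta.
def phi(state):
    counts = np.asarray(state, dtype=float)            # instances per stage
    pairs = [counts[i] * counts[j]                     # second-order terms
             for i in range(len(counts)) for j in range(i + 1, len(counts))]
    return np.concatenate(([1.0], counts, pairs))      # bias + features

# Hypothetical training data: stage occupancies with value samples that a
# simulation-based policy-evaluation step would produce in practice.
states = [(0, 1, 2), (1, 1, 1), (2, 0, 1), (0, 0, 3),
          (1, 2, 0), (2, 2, 2), (3, 1, 1)]
values = np.array([4.0, 3.1, 3.5, 5.2, 2.8, 6.0, 5.5])

Phi = np.vstack([phi(s) for s in states])
theta, *_ = np.linalg.lstsq(Phi, values, rcond=None)   # least-squares fit

print(float(phi((1, 0, 2)) @ theta))                   # predicted value
```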
64

Learning average reward irreducible stochastic games [electronic resource] : analysis and applications / by Jun Li.

Li, Jun, 1974- January 2003 (has links)
Thesis (Ph.D.)--University of South Florida, 2003. Includes bibliographical references. / ABSTRACT: A large class of sequential decision-making problems under uncertainty, with multiple competing decision makers/agents, can be modeled as stochastic games. Stochastic games having Markov properties are called Markov games or competitive Markov decision processes. This dissertation presents an approach to solving noncooperative stochastic games, in which each decision maker makes her/his own decision independently and each has an individual payoff function. In stochastic games, the environment is nonstationary and each agent's payoff is affected by the joint decisions of all agents, which results in conflicts of interest among the decision makers. In this research, the theory of Markov decision processes (MDPs) is combined with game theory to analyze the structure of Nash equilibria for stochastic games. In particular, the Laurent series expansion technique is used to extend the results of discounted-reward stochastic games to average-reward stochastic games. As a result, auxiliary matrix games are developed that have equilibrium points and values equivalent to those of a class of stochastic games that are irreducible and have an average-reward performance metric. R-learning is a well-known machine learning algorithm that deals with average-reward MDPs. The R-learning algorithm is extended to develop a Nash-R reinforcement learning algorithm for obtaining the equivalent auxiliary matrices. A convergence analysis of the Nash-R algorithm is developed from the study of the asymptotic behavior of its two-time-scale stochastic approximation scheme and the stability of the associated ordinary differential equations (ODEs). The Nash-R learning algorithm is tested and then benchmarked against MDP-based learning methods using a well-known grid game. Subsequently, a real-life application of stochastic games in a deregulated power market is explored. According to the current literature, Cournot, Bertrand, and Supply Function Equilibria (SFEs) are the three primary equilibrium models used to evaluate power market designs. SFE is more realistic for pool-type power markets. However, for a complicated power system, the convexity assumption for optimization problems is violated in most cases, which makes the problems more difficult to solve. The SFE concept is adopted in this research, and the generators' behaviors are modeled as a stochastic game instead of a one-shot game. The power market is considered to have features such as multi-settlement (bilateral, day-ahead market, spot markets and transmission congestion contracts) and demand elasticity. Such a market, consisting of multiple competing suppliers (generators), is modeled as a competitive Markov decision process and studied using the Nash-R algorithm.
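
For context, the tabular R-learning update that Nash-R builds on looks roughly as follows. This is the standard average-reward (Schwartz-style) formulation, not code from the dissertation; the step sizes and the example call are placeholders, and the thesis extends this to joint actions and auxiliary matrix games.

```python
from collections import defaultdict

Q = defaultdict(float)   # Q[(s, a)]: average-adjusted action value
rho = 0.0                # running estimate of the average reward per step
alpha, beta = 0.1, 0.01  # learning rates for Q and rho

def r_learning_step(s, a, r, s_next, actions, greedy):
    global rho
    best_next = max(Q[(s_next, b)] for b in actions)
    best_here = max(Q[(s, b)] for b in actions)
    # Average-adjusted temporal-difference update for Q.
    Q[(s, a)] += alpha * (r - rho + best_next - Q[(s, a)])
    if greedy:  # rho is updated only on non-exploratory steps
        rho += beta * (r - rho + best_next - best_here)

r_learning_step(s=0, a="up", r=1.0, s_next=1,
                actions=["up", "down"], greedy=True)
print(Q[(0, "up")], rho)   # 0.1 and 0.01 after one unit-reward step
```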
65

Bidirectional LAO* Algorithm (A Faster Approach to Solve Goal-directed MDPs)

Bhuma, Venkata Deepti Kiran 01 January 2004 (has links)
Uncertainty is a feature of many AI applications. While there are polynomial-time algorithms for planning in stochastic systems, planning is still slow, in part because most algorithms plan for all eventualities. Algorithms such as LAO* are able to find good or optimal policies more quickly when the starting state of the system is known. In this thesis we present an extension of LAO*, called BLAO*, which extends the LAO* algorithm to bidirectional search. We show that BLAO* finds optimal or ε-optimal solutions for goal-directed MDPs without necessarily evaluating the entire state space. BLAO* converges much faster than LAO* or RTDP on our benchmarks.
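
The sketch below illustrates the start-state-focused family that LAO* and BLAO* belong to, in its simplest (RTDP-like) trial-based form on an invented chain MDP; it is not the BLAO* algorithm itself. The point is that only states actually visited from the known start state are ever backed up.

```python
import random

# Toy goal-directed MDP: states 0..5 on a chain, cost 1 per step, goal
# absorbing. The MDP and actions are invented for illustration.
S, goal = list(range(6)), 5
A = ["wait", "move"]

def transitions(s, a):            # list of (probability, next_state)
    if s == goal:
        return [(1.0, s)]
    return [(0.9, s + 1), (0.1, s)] if a == "move" else [(1.0, s)]

V = {s: 0.0 for s in S}           # admissible (zero) heuristic

def backup(s):
    best = min(A, key=lambda a: 1 + sum(p * V[t] for p, t in transitions(s, a)))
    return best, 1 + sum(p * V[t] for p, t in transitions(s, best))

for trial in range(200):          # every trial starts at the known start state
    s = 0
    while s != goal:
        a, V[s] = backup(s)       # greedy action and Bellman backup
        r = random.random()       # sample the next state
        for p, t in transitions(s, a):
            if r < p:
                s = t
                break
            r -= p

print([round(V[s], 2) for s in S])  # ~ expected steps to goal from each state
```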
66

Performance improvements through flexible workforce

Kirkizlar, Huseyin Eser 25 August 2008 (has links)
This thesis focuses on increasing the efficiency of systems with a cross-trained workforce and finite storage spaces. Our objective is to maximize throughput and minimize setup costs (if they exist). More specifically, we are interested in determining effective cross-training strategies and dynamic server assignment policies for flexible servers in production lines with finite buffers. In the first part of this thesis, we study non-Markovian systems and support the conjecture that effective server assignment policies are robust to service time distributions. Next, we consider understaffed tandem lines with partially or fully flexible servers, determine optimal and heuristic server assignment policies, and show that most of the benefits of full flexibility can be achieved with limited flexibility. Finally, we incorporate setups into our model, determine the optimal server assignment policy for some systems, and show how the effective assignment of servers depends on the magnitude of the setup costs.
67

Matching Supply And Demand Using Dynamic Quotation Strategies

January 2012 (has links)
Today's competitive markets force companies to constantly engage in the complex task of managing their demand. In make-to-order manufacturing or service systems, the demand for a product is shaped by price and lead times: high price and lead time quotes ensure profitability for the supplier but discourage customers from placing orders, while low prices and lead times generally result in high demand but do not necessarily ensure profitability. The price and lead time quotation problem considers the trade-off between offering high and low prices and lead times. Recent practice in make-to-order manufacturing companies reveals the importance of dynamic quotation strategies, under which the price and lead time quotes change flexibly depending on the status of the system. In this dissertation, the objective is to model a make-to-order manufacturing system and explore various aspects of dynamic quotation strategies, such as the behavior of optimal price and lead time decisions, the impact of customer preferences on optimal decisions, the benefits of employing dynamic quotation in comparison to simpler quotation strategies, and the benefits of coordinating price and lead time decisions. I first consider a manufacturer that receives demand from spot purchasers (who are quoted dynamic prices and lead times), as well as from contract customers who have agreements with the manufacturer with fixed price and lead time terms. I analyze how customer preferences affect the optimal price and lead time decisions, the benefits of dynamic quotation, and the optimal mix of spot purchasers and contract customers. These analyses necessitate the computation of the expected tardiness of customer orders at the moment a customer enters the system. Hence, in the second part of the dissertation, I develop methodologies to compute the expected tardiness in multi-class priority queues. For the trivial single-class case, a closed formulation is obtained; for the more complex multi-class case, numerical inverse Laplace transformation algorithms are developed. In the last part of the dissertation, I model a decentralized system with two components: the marketing department determines the price quotes with the objective of maximizing revenue, and the manufacturing department determines the lead time quotes to minimize lateness costs. I discuss the benefits of coordinating price and lead time decisions, and develop an incentivization scheme to reduce the negative impacts of the lack of coordination. / Dissertation/Thesis / Ph.D. Industrial Engineering 2012
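
The closed formulation derived in the thesis is not reproduced here, but the following sketch gives the flavor of such tardiness computations under a textbook assumption that is not taken from the dissertation: in a stable M/M/1 FCFS queue, the sojourn time S of an arriving order is exponential with rate mu - lambda, so the expected tardiness against a quoted lead time d is E[(S - d)+] = exp(-(mu - lambda) d) / (mu - lambda).

```python
import math

# Expected tardiness of an arriving order against a quoted lead time d,
# under the ASSUMED M/M/1 FCFS model (not the thesis's multi-class model):
# S ~ Exp(mu - lam), so E[(S - d)+] = P(S > d) * E[S] by memorylessness.
def expected_tardiness(lam, mu, d):
    assert lam < mu, "the queue must be stable"
    nu = mu - lam                    # rate of the sojourn-time distribution
    return math.exp(-nu * d) / nu

print(expected_tardiness(lam=4.0, mu=5.0, d=2.0))  # ~0.135 time units
```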
68

Soluções eficientes para processos de decisão markovianos baseadas em alcançabilidade e bissimulações estocásticas / Efficient solutions to Markov decision processes based on reachability and stochastic bisimulations

Felipe Martins dos Santos 09 December 2013 (has links)
Planning in artificial intelligence is the task of finding actions that achieve a given goal. In planning under uncertainty, actions can have probabilistic effects. These problems are modeled as Markov Decision Processes (MDPs), which enable the computation of optimal solutions by considering the expected value of each action in each state. However, solving large probabilistic planning problems, i.e., those with a large number of states and actions, is still a challenge. Large MDPs can be reduced by computing stochastic bisimulations, i.e., equivalence relations over the original MDP's states. From the stochastic bisimulations, which can be exact or approximate, it is possible to obtain a reduced abstract model that can be easier to solve than the original MDP. However, for problems in some domains, computing the stochastic bisimulation over the whole state space is infeasible. The algorithms proposed in this work extend the algorithms used to compute stochastic bisimulations for MDPs so that they are computed over the set of states reachable from a given initial state, which can be much smaller than the complete state set. The empirical results show that it is possible to solve large probabilistic planning problems with better performance than the known stochastic bisimulation techniques.
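
The reachability restriction described above can be sketched as a simple forward search over the transitions with nonzero probability; the toy transition function below is invented for illustration.

```python
from collections import deque

# Restrict the MDP to states reachable from the initial state before
# computing the stochastic bisimulation. transitions(s, a) returns the
# successors that have nonzero probability under action a.
def reachable_states(s0, actions, transitions):
    seen, frontier = {s0}, deque([s0])
    while frontier:
        s = frontier.popleft()
        for a in actions:
            for s_next in transitions(s, a):
                if s_next not in seen:
                    seen.add(s_next)
                    frontier.append(s_next)
    return seen

# Toy example: states 0..9 on a line, but only the even states 0-8 are
# reachable from state 0, so the bisimulation works on half the space.
actions = ["step"]
transitions = lambda s, a: [min(s + 2, 8)]
print(sorted(reachable_states(0, actions, transitions)))  # [0, 2, 4, 6, 8]
```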
69

Programação dinâmica em tempo real para processos de decisão markovianos com probabilidades imprecisas / Real-time dynamic programming for Markov Decision Processes with Imprecise Probabilities

Daniel Baptista Dias 28 November 2014 (has links)
In sequential decision-making problems modeled as Markov Decision Processes (MDPs), an exact measure of the state transition probabilities may not be available. To address this, the framework of Markov Decision Processes with Imprecise Transition Probabilities (MDP-IPs) was introduced. However, while MDP-IPs are a robust framework for real-world planning applications, their solutions are time-consuming in practice. Previous work proposed efficient synchronous dynamic programming algorithms to solve MDP-IPs with factored representations of the probabilistic transition function and reward function, called factored MDP-IPs. Yet when the initial state of a Stochastic Shortest Path problem (SSP MDP) is given, those solutions do not exploit this information. In this work we introduce the Stochastic Shortest Path MDP-IP (SSP MDP-IP), in both enumerative and factored form. An efficient asynchronous dynamic programming solution for enumerative SSP MDP-IPs was proposed by Buffet and Aberdeen (2005), but it is restricted to interval-based imprecision. In general, however, a problem is given in factored form, i.e., in terms of state variables; in that case, even assuming interval-based imprecision over the variables, that solution is no longer applicable, since the joint transition probabilities become multilinear in the parameters. We thus show that factored SSP MDP-IPs are more expressive than enumerative ones, and that the change from the enumerative SSP MDP-IP to the general case of factored SSP MDP-IPs switches the objective function in the Bellman backup from linear to nonlinear. We also propose asynchronous dynamic programming algorithms: the enumerative RTDP-IP (Real-time Dynamic Programming with Imprecise Transition Probabilities), LRTDP-IP (Labeled Real-time Dynamic Programming with Imprecise Transition Probabilities), SSiPP-IP (Short-Sighted Probabilistic Planner with Imprecise Transition Probabilities) and LSSiPP-IP (Labeled Short-Sighted Probabilistic Planner with Imprecise Transition Probabilities), and the factored algorithms factRTDP-IP (factored RTDP-IP) and factLRTDP-IP (factored LRTDP-IP). These algorithms are evaluated against the previously proposed synchronous dynamic programming algorithms in terms of convergence time and scalability.
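
At the core of interval-based algorithms such as RTDP-IP is a robust (maximin) Bellman backup in which nature chooses the worst probability vector inside the intervals and the planner picks the best action against it. A self-contained sketch follows; the encoding of actions as (cost, successors, lower, upper) tuples is an illustrative assumption, not the thesis's representation.

```python
def worst_case_expectation(values, lower, upper):
    # Max of sum(p * values) over lower <= p <= upper with sum(p) = 1:
    # start every successor at its lower bound, then pour the remaining
    # probability mass into the highest-value successors first.
    assert sum(lower) <= 1.0 + 1e-9 and sum(upper) >= 1.0 - 1e-9
    p, slack = list(lower), 1.0 - sum(lower)
    for i in sorted(range(len(values)), key=lambda i: -values[i]):
        bump = min(upper[i] - lower[i], slack)
        p[i] += bump
        slack -= bump
    return sum(pi * vi for pi, vi in zip(p, values))

def robust_backup(s, actions, V):
    # actions[s] -> list of (cost, successors, lower_bounds, upper_bounds)
    return min(cost + worst_case_expectation([V[t] for t in succ], lo, hi)
               for cost, succ, lo, hi in actions[s])

# Toy check: one action, successors worth 10 or 0; the probability of the
# bad successor is only known to lie in [0.2, 0.5], and the backup charges
# the worst case, 0.5.
V = {"bad": 10.0, "good": 0.0}
actions = {"s": [(1.0, ["bad", "good"], [0.2, 0.5], [0.5, 0.8])]}
print(robust_backup("s", actions, V))  # 1 + 0.5*10 + 0.5*0 = 6.0
```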
70

Processos de decisão Markovianos fatorados com probabilidades imprecisas / Factored Markov decision processes with Imprecise Transition Probabilities

Karina Valdivia Delgado 19 January 2010 (has links)
When modeling real-world decision-theoretic planning problems with the framework of Markov Decision Processes (MDPs), it is often impossible to obtain a completely accurate estimate of the transition probabilities. For example, uncertainty arises in the specification of transitions due to the elicitation of MDP transition models from an expert or from data, or from non-stationary transition distributions arising from insufficient state knowledge. In the interest of obtaining the most robust policy under transition uncertainty, Markov Decision Processes with Imprecise Transition Probabilities (MDP-IPs) have been introduced. Unfortunately, while various solutions exist for MDP-IPs, they often require external calls to optimization routines and can thus be extremely time-consuming in practice. To address this deficiency, we introduce the factored MDP-IP and propose efficient mathematical programming and dynamic programming methods that exploit its structure. First, we derive efficient approximate solutions for factored MDP-IPs based on mathematical programming, resulting in a multilinear formulation for robust maximin linear-value approximations. By exploiting factored structure in MDP-IPs, we demonstrate orders-of-magnitude reductions in solution time over standard exact non-factored approaches. Second, noting that the key computational bottleneck in the dynamic programming solution of factored MDP-IPs is the need to repeatedly solve nonlinear constrained optimization problems, we show how to represent the value function compactly using a new data structure called Parameterized Algebraic Decision Diagrams, and how to apply approximation techniques to drastically reduce the computational overhead of the nonlinear solver while producing bounded, approximately optimal solutions. Our results show up to two orders of magnitude speedup over traditional flat dynamic programming approaches and up to an order of magnitude speedup over the extension of factored MDP approximate value iteration techniques to MDP-IPs, while producing the lowest error among all approximation algorithms evaluated.
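
The multilinearity mentioned above can be seen in a two-variable example: when each state variable's transition probability is an independent imprecise parameter, the joint transition probabilities are products of parameters, so the expected value in the Bellman backup contains cross terms. The sketch below (with invented values) makes this explicit.

```python
import sympy as sp

# Two binary state variables with independent (imprecise) transition
# parameters p1 and p2; successor values are invented for illustration.
p1, p2 = sp.symbols("p1 p2")
v = {(1, 1): 4.0, (1, 0): 2.0, (0, 1): 1.0, (0, 0): 0.0}

# The joint probability of each successor (x1', x2') factors as a product.
joint = {(1, 1): p1 * p2, (1, 0): p1 * (1 - p2),
         (0, 1): (1 - p1) * p2, (0, 0): (1 - p1) * (1 - p2)}

expected = sp.expand(sum(joint[s] * v[s] for s in v))
print(expected)   # 1.0*p1*p2 + 2.0*p1 + 1.0*p2 -- linear in each parameter
                  # alone, but the p1*p2 cross term makes it multilinear.
```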
