31

Solving Games and All That

Saffidine, Abdallah 08 July 2013 (has links) (PDF)
Efficient best-first search algorithms have been developed for deterministic two-player games with two outcomes. We present a formal framework to represent such best-first search algorithms. The framework is general enough to express popular algorithms such as Proof Number Search, Monte Carlo Tree Search, and the Product Propagation algorithm. We then show how a similar framework can be devised for two more general settings: two-player games with multiple outcomes, and the model-checking problem in modal logic K. This gives rise to new Proof Number and Monte Carlo inspired search algorithms for these settings. Similarly, the alpha-beta pruning technique is known to be very important in games with sequential actions. We propose an extension of this technique for stacked-matrix games, a generalization of zero-sum perfect-information two-player games that allows simultaneous moves.
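Proof Number Search, one of the best-first algorithms covered by such a framework, maintains a proof number and a disproof number per node of an AND/OR tree. The sketch below only recomputes these numbers over an already expanded tree; a full implementation would repeatedly descend to the most-proving node, expand it, and back the numbers up. The `Node` layout is an illustrative assumption, not the thesis's formalism.

```python
import math

class Node:
    """AND/OR tree node for a two-outcome game (win/loss for the OR player)."""
    def __init__(self, is_or_node, children=None, terminal_value=None):
        self.is_or = is_or_node               # True: OR (max) node, False: AND (min) node
        self.children = children or []
        self.terminal_value = terminal_value  # True = proven win, False = proven loss, None = unknown
        self.proof = 1                        # effort still needed to prove a win
        self.disproof = 1                     # effort still needed to prove a loss

def update_numbers(node):
    """Recompute proof and disproof numbers bottom-up over an expanded tree."""
    if node.terminal_value is not None:
        node.proof = 0 if node.terminal_value else math.inf
        node.disproof = math.inf if node.terminal_value else 0
        return
    if not node.children:                     # unexpanded frontier node keeps (1, 1)
        return
    for child in node.children:
        update_numbers(child)
    if node.is_or:    # OR node: prove any child, disprove all children
        node.proof = min(c.proof for c in node.children)
        node.disproof = sum(c.disproof for c in node.children)
    else:             # AND node: prove all children, disprove any child
        node.proof = sum(c.proof for c in node.children)
        node.disproof = min(c.disproof for c in node.children)
```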
32

Contributions to Simulation-based High-dimensional Sequential Decision Making

Hoock, Jean-Baptiste 10 April 2013 (has links) (PDF)
My thesis is entitled "Contributions to Simulation-based High-dimensional Sequential Decision Making". The context of the thesis is games, planning and Markov Decision Processes. An agent interacts with its environment by successively making decisions, starting from an initial state and ending in a final state in which it can no longer act. At each timestep, the agent receives an observation of the state of the environment. From this observation and its knowledge, the agent makes a decision which modifies the state of the environment. The agent then receives a reward and a new observation. The goal is to maximize the sum of rewards obtained during a simulation from an initial state to a final state. The policy of the agent is the function which, from the history of observations, returns a decision. We work in a context where (i) the number of states is huge, (ii) rewards carry little information, (iii) the probability of quickly reaching a good final state is low and (iv) prior knowledge is either nonexistent or hard to exploit. Both applications described in this thesis present these constraints: the game of Go and a 3D simulator from the European project MASH (Massive Sets of Heuristics). In order to make satisfying decisions in this context, several solutions are proposed: (1) simulating with an exploration/exploitation compromise (MCTS); (2) reducing complexity by solving local subproblems (GoldenEye); (3) building a policy which improves itself (RBGP); and (4) learning prior knowledge (CluVo+GMCTS). Monte-Carlo Tree Search (MCTS) is the state of the art for the game of Go. From a model of the environment, MCTS incrementally and asymmetrically builds a tree of possible futures by performing Monte-Carlo simulations, starting from the agent's current observation. The agent alternates between exploring the model and exploiting decisions which statistically yield a good cumulative reward. We discuss two ways of improving MCTS: parallelization and the addition of prior knowledge. Parallelization does not solve some weaknesses of MCTS; in particular, some local problems remain open challenges. We propose an algorithm (GoldenEye) composed of two parts: detection of a local problem, followed by its resolution. The resolution algorithm reuses concepts from MCTS and solves difficult problems from a classical test database. Adding prior knowledge by hand is laborious and tedious. We propose a method called Racing-based Genetic Programming (RBGP) to add prior knowledge automatically. Its strong point is that it rigorously validates each piece of added knowledge, and RBGP can be used to build a policy rather than only to optimize an algorithm. In some applications, such as MASH, simulations are too expensive in time and there is neither prior knowledge nor a model of the environment, so Monte-Carlo Tree Search cannot be used directly. To make MCTS usable in this context, we propose a method for learning prior knowledge (CluVo). We then use this knowledge to speed up the agent's learning and to build a model, from which we run an adapted version of Monte-Carlo Tree Search (GMCTS). This method solves difficult MASH problems and gives good results in an application to a word game.
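The exploration/exploitation compromise at the heart of MCTS is typically realized with the UCB1 (UCT) selection rule. Below is a minimal, generic sketch of one MCTS iteration, with the reward treated from a single player's perspective for simplicity; the `game` interface (`legal_moves`, `play`, `is_terminal`, `result`) is a hypothetical placeholder, not code from the thesis.

```python
import math, random

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.visits, self.value = [], 0, 0.0

def uct_select(node, c=1.4):
    # Pick the child maximizing the UCB1 score: average reward + exploration bonus.
    return max(node.children,
               key=lambda ch: ch.value / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts_iteration(root, game):
    node = root
    # 1. Selection: descend while the current node is fully expanded.
    while node.children and len(node.children) == len(game.legal_moves(node.state)):
        node = uct_select(node)
    # 2. Expansion: add one untried child if the state is not terminal.
    if not game.is_terminal(node.state):
        tried = {ch.move for ch in node.children}
        move = random.choice([m for m in game.legal_moves(node.state) if m not in tried])
        child = Node(game.play(node.state, move), parent=node, move=move)
        node.children.append(child)
        node = child
    # 3. Simulation: random rollout to a terminal state.
    state = node.state
    while not game.is_terminal(state):
        state = game.play(state, random.choice(game.legal_moves(state)))
    reward = game.result(state)
    # 4. Backpropagation: update statistics up to the root.
    while node is not None:
        node.visits += 1
        node.value += reward
        node = node.parent
```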
33

Hra Sokoban a umělá inteligence / Sokoban game and artificial intelligence

Žlebek, Petr January 2021 (has links)
The thesis is focused on solving the Sokoban game using artificial intelligence algorithms. The first part of the thesis describes the Sokoban game, the state space and selected state-space search methods. In the second part, the selected methods were implemented and a graphical user interface was created in the Python environment. Comparative experiments were carried out in the final part.
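To illustrate the kind of state-space search such a solver compares, here is a minimal breadth-first search over Sokoban-like states, written in Python since that is the thesis's implementation environment. The state encoding and the `successors` generator are assumptions made for the example; a practical solver also needs deadlock detection and typically an informed search such as A*.

```python
from collections import deque

def solve_sokoban(start_state, is_goal, successors):
    """Generic BFS over Sokoban states.

    start_state: hashable encoding, e.g. (player_pos, frozenset(box_positions))
    is_goal:     state -> bool, true when every box sits on a goal square
    successors:  state -> iterable of (move, next_state) pairs for legal moves
    """
    frontier = deque([start_state])
    parents = {start_state: (None, None)}     # state -> (previous state, move taken)
    while frontier:
        state = frontier.popleft()
        if is_goal(state):
            # Reconstruct the move sequence by walking back to the start state.
            plan = []
            while parents[state][0] is not None:
                state, move = parents[state][0], parents[state][1]
                plan.append(move)
            return list(reversed(plan))
        for move, nxt in successors(state):
            if nxt not in parents:             # visited check keeps the search finite
                parents[nxt] = (state, move)
                frontier.append(nxt)
    return None                                 # no solution reachable
```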
34

Agentní systém pro hraní her / Agent Based Gameplaying System

Trutman, Michal January 2015 (has links)
This thesis deals with general game playing agent systems. Unlike common agents, which are designed for one specified task or game, general game playing agents must be able to play essentially any game described in a formal declarative language. The biggest challenge is that the game rules are not known beforehand, which makes it impossible to rely on game-specific optimizations or a hand-crafted heuristic function. The thesis consists of a theoretical and a practical part. The first part introduces the field of general game playing, defines the Game Description Language and covers the construction of heuristic evaluation functions and their integration within the Monte Carlo tree search algorithm. In the practical part, a general method for creating a new heuristic function is presented; it is then integrated into an agent, which is compared with other systems.
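One standard way to integrate a heuristic evaluation into Monte Carlo tree search, as described above, is a progressive-bias term added to the UCT selection score, so the heuristic guides early search and accumulated statistics take over later. The formula below is a generic illustration; the weighting constant and `heuristic_value` are assumptions, not the evaluation function actually built in the thesis.

```python
import math

def uct_with_progressive_bias(child, parent_visits, heuristic_value, c=1.4, w=1.0):
    """UCT child score extended with a progressive-bias heuristic term.

    child:           node with `value` (total reward) and `visits` counters
    parent_visits:   visit count of the parent node
    heuristic_value: estimate of the move's quality in [0, 1]
    """
    exploitation = child.value / child.visits
    exploration = c * math.sqrt(math.log(parent_visits) / child.visits)
    bias = w * heuristic_value / (child.visits + 1)   # decays as statistics accumulate
    return exploitation + exploration + bias
```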
35

Relevance-based Online Planning in Complex POMDPs

Saborío Morales, Juan Carlos 17 July 2020 (has links)
Planning under uncertainty is a central topic at the intersection of disciplines such as artificial intelligence, cognitive science and robotics, and its aim is to enable artificial agents to solve challenging problems through a systematic approach to decision-making. Some of these challenges include generating expectations about different outcomes governed by a probability distribution and estimating the utility of actions based only on partial information. In addition, an agent must incorporate observations or information from the environment into its deliberation process and produce the next best action to execute, based on an updated understanding of the world. This process is commonly modeled as a POMDP, a discrete stochastic system that becomes intractable very quickly. Many real-world problems, however, can be simplified following cues derived from contextual information about the relative expected value of actions. Based on an intuitive approach to problem solving, and relying on ideas related to attention and relevance estimation, we propose a new approach to planning supported by our two main contributions: PGS grants an agent the ability to generate internal preferences and biases to guide action selection, and IRE allows the agent to reduce the dimensionality of complex problems while planning online. Unlike existing work that improves the performance of planning on POMDPs, PGS and IRE do not rely on detailed heuristics or domain knowledge, explicit action hierarchies or manually designed dependencies for state factoring. Our results show that this level of autonomy is important to solve increasingly more challenging problems, where manually designed simplifications scale poorly.
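For reference, online POMDP planners of this kind need only a generative model of the problem, that is, a way to sample a next state, observation and reward, plus a belief maintained as a set of particles. The sketch below shows such an interface and a rejection-sampling belief update; the class names and the fixed retry bound are illustrative assumptions, not the PGS/IRE machinery of the thesis.

```python
import random

class GenerativePOMDP:
    """Minimal generative-model view of a POMDP: sample (s', o, r) from (s, a)."""
    def step(self, state, action):
        raise NotImplementedError   # returns (next_state, observation, reward)

def update_belief(particles, action, observation, model, n=1000, max_tries=100000):
    """Particle-filter belief update by rejection sampling.

    particles: list of states approximating the current belief
    Returns a new particle set consistent with the received observation;
    a real planner also guards against particle deprivation (empty results).
    """
    new_particles, tries = [], 0
    while len(new_particles) < n and tries < max_tries:
        tries += 1
        s = random.choice(particles)        # sample a state from the belief
        s2, o, _ = model.step(s, action)    # simulate one step of the model
        if o == observation:                # keep only matching observations
            new_particles.append(s2)
    return new_particles
```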
36

Playing the Fox Game With Tree Search: MCTS vs. Alpha-Beta

Ye, David, Trossing, Jacob January 2022 (has links)
The forefront of game-playing Artificial Intelligence (AI) has, for the better part of the 21st century, relied on an algorithm called Alpha-Beta pruning (Alpha-Beta). In 2017, DeepMind launched a new AI, based on the Monte Carlo Tree Search (MCTS) algorithm, which defeated Stockfish, the former Alpha-Beta based chess AI champion. This accomplishment fueled further excitement and interest in using MCTS to develop more complex and better-performing game-playing AI. This paper aims to compare the strengths of MCTS and Alpha-Beta by letting them play against each other in a classic game with no available robust AI: the Fox Game. The results showed a clear victory for the Alpha-Beta AI. Alpha-Beta is therefore the better-suited algorithm for developing a simple AI for the Fox Game. Further optimizations would enhance the performance of both algorithms, but it is unclear which of them would benefit the most. / Bachelor's thesis in electrical engineering 2022, KTH, Stockholm
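For comparison with MCTS, a minimal negamax formulation of Alpha-Beta pruning is sketched below. The `game` object with `legal_moves`, `play`, `is_terminal` and `evaluate` (a score from the point of view of the player to move) is a hypothetical interface, not the paper's Fox Game implementation.

```python
import math

def alphabeta(state, depth, alpha, beta, game):
    """Negamax alpha-beta: value of `state` for the player to move."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)            # heuristic or exact terminal value
    value = -math.inf
    for move in game.legal_moves(state):
        child = game.play(state, move)
        # Negate and swap the window: the child's value is from the opponent's view.
        value = max(value, -alphabeta(child, depth - 1, -beta, -alpha, game))
        alpha = max(alpha, value)
        if alpha >= beta:                      # cutoff: the opponent avoids this line
            break
    return value
```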
37

Contributions to Simulation-based High-dimensional Sequential Decision Making / Contributions sur la prise de décision séquentielle basée sur des simulations dans des environnements complexes de grande dimension

Hoock, Jean-Baptiste 10 April 2013 (has links)
My thesis is entitled "Contributions to Simulation-based High-dimensional Sequential Decision Making". The context of the thesis is games, planning and Markov Decision Processes. An agent interacts with its environment by successively making decisions, starting from an initial state and ending in a final state in which it can no longer act. At each timestep, the agent receives an observation of the state of the environment. From this observation and its knowledge, the agent makes a decision which modifies the state of the environment. The agent then receives a reward and a new observation. The goal is to maximize the sum of rewards obtained during a simulation from an initial state to a final state. The policy of the agent is the function which, from the history of observations, returns a decision. We work in a context where (i) the number of states is huge, (ii) rewards carry little information, (iii) the probability of quickly reaching a good final state is low and (iv) prior knowledge is either nonexistent or hard to exploit. Both applications described in this thesis present these constraints: the game of Go and a 3D simulator from the European project MASH (Massive Sets of Heuristics). In order to make satisfying decisions in this context, several solutions are proposed: (1) simulating with an exploration/exploitation compromise (MCTS); (2) reducing complexity by solving local subproblems (GoldenEye); (3) building a policy which improves itself (RBGP); and (4) learning prior knowledge (CluVo+GMCTS). Monte-Carlo Tree Search (MCTS) is the state of the art for the game of Go. From a model of the environment, MCTS incrementally and asymmetrically builds a tree of possible futures by performing Monte-Carlo simulations, starting from the agent's current observation. The agent alternates between exploring the model and exploiting decisions which statistically yield a good cumulative reward. We discuss two ways of improving MCTS: parallelization and the addition of prior knowledge. Parallelization does not solve some weaknesses of MCTS; in particular, some local problems remain open challenges. We propose an algorithm (GoldenEye) composed of two parts: detection of a local problem, followed by its resolution. The resolution algorithm reuses concepts from MCTS and solves difficult problems from a classical test database. Adding prior knowledge by hand is laborious and tedious. We propose a method called Racing-based Genetic Programming (RBGP) to add prior knowledge automatically. Its strong point is that it rigorously validates each piece of added knowledge, and RBGP can be used to build a policy rather than only to optimize an algorithm. In some applications, such as MASH, simulations are too expensive in time and there is neither prior knowledge nor a model of the environment, so Monte-Carlo Tree Search cannot be used directly. To make MCTS usable in this context, we propose a method for learning prior knowledge (CluVo). We then use this knowledge to speed up the agent's learning and to build a model, from which we run an adapted version of Monte-Carlo Tree Search (GMCTS). This method solves difficult MASH problems and gives good results in an application to a word game.
38

Anytime discovery of a diverse set of patterns with Monte Carlo tree search / Découverte d'un ensemble diversifié de motifs avec la recherche arborescente de Monte Carlo

Bosc, Guillaume 11 September 2017 (has links)
The discovery of patterns that strongly distinguish one class label from another is still a challenging data-mining task. Subgroup Discovery (SD) is a formal pattern-mining framework that enables the construction of intelligible classifiers and, most importantly, the elicitation of interesting hypotheses from the data. However, SD still faces two major issues: (i) how to define appropriate quality measures to characterize the interestingness of a pattern; (ii) how to select an accurate heuristic search technique when exhaustive enumeration of the pattern space is unfeasible. The first issue has been tackled by Exceptional Model Mining (EMM), which discovers patterns covering tuples that locally induce a model substantially different from the model of the whole dataset. The second issue has been studied in SD and EMM mainly with beam-search strategies and genetic algorithms for discovering a pattern set that is non-redundant, diverse and of high quality. In this thesis, we argue that the greedy nature of most such previous approaches produces pattern sets that lack diversity. Consequently, we formally define pattern mining as a game and solve it with Monte Carlo Tree Search (MCTS), a recent technique mainly used for games and planning problems in artificial intelligence. Contrary to traditional sampling methods, MCTS leads to an anytime pattern-mining approach without assumptions on either the quality measure or the data. It converges to an exhaustive search if given enough time and memory. The exploration/exploitation trade-off allows the diversity of the result set to be improved considerably compared to existing heuristics. We show that MCTS quickly finds a diverse pattern set of high quality, through experiments on benchmark datasets and on a real-world olfaction dataset from our application in neurosciences. We also propose and validate a new quality measure especially tuned for imbalanced multi-label data.
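One way to read "pattern mining as a game": each MCTS node is a pattern, each move appends a condition, and the reward of a playout is the quality of the resulting pattern, for instance weighted relative accuracy (WRAcc), a standard subgroup-discovery measure. The binary-target formulation below is an illustrative assumption; the thesis's measure for imbalanced multi-label data is a different, specially tuned one.

```python
def wracc(dataset, pattern, target):
    """Weighted relative accuracy of a pattern for a binary target.

    dataset: list of (record, label) pairs
    pattern: record -> bool, true when the record is covered by the pattern
    WRAcc = coverage * (positive rate in the subgroup - overall positive rate)
    """
    n = len(dataset)
    covered = [(r, y) for r, y in dataset if pattern(r)]
    if not covered:
        return 0.0
    coverage = len(covered) / n
    p_subgroup = sum(1 for _, y in covered if y == target) / len(covered)
    p_overall = sum(1 for _, y in dataset if y == target) / n
    return coverage * (p_subgroup - p_overall)
```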
39

Multi-objective sequential decision making

Wang, Weijia 11 July 2014 (has links) (PDF)
This thesis is concerned with multi-objective sequential decision making (MOSDM). The motivation is twofold. On the one hand, many decision problems in domains such as robotics, scheduling or games involve the optimization of sequences of decisions. On the other hand, many real-world applications are most naturally formulated in terms of multi-objective optimization (MOO). The proposed approach extends the well-known Monte-Carlo tree search (MCTS) framework to the MOO setting, with the goal of discovering several optimal sequences of decisions through growing a single search tree. The main challenge is to propose a new reward able to guide the exploration of the tree even though the MOO setting does not enforce a total order among solutions. The main contribution of the thesis is to propose and experimentally study two such rewards, inspired by the MOO literature and assessing a solution with respect to the archive of previous solutions (Pareto archive): the hypervolume indicator and the Pareto-dominance reward. The study shows the complementarity of these two criteria. The hypervolume indicator suffers from its known computational complexity; however, the proposed extension thereof provides fine-grained information about the quality of solutions with respect to the current archive. By contrast, the Pareto-dominance reward can be computed in linear time, but the information it provides becomes increasingly rare. Proofs of principle are given on artificial problems and challenge benchmarks, and confirm the merits of the approach. In particular, MOMCTS is able to discover policies lying in non-convex regions of the Pareto front, contrasting with the state of the art: existing multi-objective reinforcement learning algorithms are based on linear scalarization and thus fail to sample such non-convex regions. Finally, MOMCTS competes honorably with the state of the art on the 2013 MOPTSP competition.
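The hypervolume reward mentioned above measures how much objective space a solution adds on top of the current Pareto archive. A minimal two-objective version (both objectives maximized, with a fixed reference point) can be sketched as follows; the reference point and the archive representation are assumptions for the example, not the thesis's exact indicator extension.

```python
def hypervolume_2d(points, reference=(0.0, 0.0)):
    """Hypervolume dominated by a set of 2-objective points (both maximized).

    points:    list of (f1, f2) solutions from the Pareto archive
    reference: a point dominated by every solution (e.g. the origin)
    """
    volume, best_f2 = 0.0, reference[1]
    # Sweep from the largest f1 to the smallest; dominated points add nothing.
    for f1, f2 in sorted(points, key=lambda p: p[0], reverse=True):
        if f2 > best_f2:
            volume += (f1 - reference[0]) * (f2 - best_f2)
            best_f2 = f2
    return volume

def hypervolume_reward(archive, candidate, reference=(0.0, 0.0)):
    """Reward of a new solution: the hypervolume it adds on top of the archive."""
    return (hypervolume_2d(archive + [candidate], reference)
            - hypervolume_2d(archive, reference))
```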
40

Um agente jogador de GO com busca em árvore Monte-Carlo aprimorada por memória esparsamente distribuída / A Go-playing agent with Monte Carlo tree search enhanced by sparse distributed memory

Aguiar, Matheus Araújo 04 November 2013 (has links)
The game of Go is very ancient, with more than 4000 years of history, and it is still popular nowadays, representing a big challenge for Artificial Intelligence. Despite its simple rules, the techniques which were successful in other games like chess and draughts cannot handle satisfactorily the complex patterns and behaviours that emerge during a match of Go. The present work implements SDM-Go, a competitive agent for Go that seeks to reduce the use of supervision in the search for the best move. SDM-Go uses the sparse distributed memory model as an additional resource to the Monte-Carlo tree search employed by many of the best automatic Go players today. Based upon the open-source player Fuego, SDM-Go's use of the sparse distributed memory is intended as an alternative to the strongly supervised process used by Fuego. The Monte-Carlo tree search executed by the agent Fuego uses a set of heuristics coded by human professionals to guide the simulations and also to evaluate new nodes found in the tree. In contrast, SDM-Go implements a non-supervised, domain-independent approach, in which the history of values of board states previously visited during the search is used to evaluate new boards (nodes of the search tree). In this way, SDM-Go reduces the supervision in Fuego, replacing its heuristics with the sparse distributed memory, which works as a repository for the history of visited board states. Thus, the contributions of SDM-Go are: (1) the use of a sparse distributed memory to replace Fuego's supervised approach to evaluating new nodes found in the search tree; (2) the implementation of a board-state representation based on bit vectors, so that the boards stored in memory do not compromise the performance of the system; (3) the extension of the Monte-Carlo simulation results to update the values of the board states stored in memory. Unlike many other existing agents, the use of the sparse distributed memory represents a domain-independent approach. The results obtained in tournaments against the well-known open-source agent Fuego show that SDM-Go can successfully provide a non-supervised, domain-independent approach to evaluating new nodes found in the search tree. Despite the longer runtime required by the use of the sparse distributed memory, which is central to the agent's performance, SDM-Go maintains a competitive level of play, especially on the 9x9 board. / Master's degree in Computer Science
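A sparse distributed memory in the spirit described above can be sketched with bit-vector addresses and Hamming-distance activation: a board value is written into, and read back from, all hard locations within a radius of its bit-vector encoding. This is a scalar-valued Kanerva-style variant; the sizes, radius and averaging scheme below are illustrative assumptions, not SDM-Go's actual parameters.

```python
import random

class SparseDistributedMemory:
    """Minimal Kanerva-style SDM keyed by bit-vector board encodings."""
    def __init__(self, n_locations=1000, address_bits=256, radius=100):
        self.radius = radius
        # Hard locations: random binary addresses with an accumulated value and counter.
        self.addresses = [[random.getrandbits(1) for _ in range(address_bits)]
                          for _ in range(n_locations)]
        self.sums = [0.0] * n_locations
        self.counts = [0] * n_locations

    def _active(self, key):
        # A location is activated when its Hamming distance to the key is small enough.
        return [i for i, addr in enumerate(self.addresses)
                if sum(a != b for a, b in zip(addr, key)) <= self.radius]

    def write(self, key, value):
        """Store a board value (e.g. a Monte-Carlo result) in all activated locations."""
        for i in self._active(key):
            self.sums[i] += value
            self.counts[i] += 1

    def read(self, key):
        """Estimate the value of an unseen board from nearby stored boards."""
        active = [i for i in self._active(key) if self.counts[i] > 0]
        if not active:
            return None                     # no information: caller falls back to a default
        return sum(self.sums[i] / self.counts[i] for i in active) / len(active)
```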
