11

Zpětnovazební učení v multiagentním makroekonomickém modelu / Reinforcement learning in Agent-based macroeconomic model

Vlk, Bořivoj January 2018 (has links)
Utilizing game theory, learning automata, and reinforcement learning concepts, this thesis presents a computational model (simulation) based on general equilibrium theory and the classical monetary model. The model is based on interacting Constructively Rational agents. Constructive Rationality has been introduced in the current literature as a machine-learning-based concept that allows relaxing assumptions about modeled economic agents' information and expectations. The model experiences periodic endogenous crises (a fall in both production and consumption accompanied by a rise in the unemployment rate). Crises are caused by firms and households adapting to a change in price and wage levels. Price and wage level adjustments are necessary for the goods and labor markets to clear in the presence of technological growth. Finally, the model has a solid theoretical background and large potential for further development. In addition, the general properties of games of learning entities are examined, with special focus on sudden changes (shocks) in the game and in the behavior of the game's players, during recovery from which rigidities can emerge. JEL Classification: D80, D83, C63, E32, C73. Keywords: Learning, Information and Knowledge, Agent-based, Reinforcement learning, Business cycle, Stochastic and Dynamic Games, Simulation, Modeling. Author's e-mail...
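The learning-automata machinery this abstract invokes can be made concrete with a minimal sketch. Below is a linear reward-inaction (L_R-I) automaton, a standard scheme from the learning-automata literature; this is not code from the thesis, and the two-action payoff environment and learning rate are invented for illustration.

```python
import random

def lri_update(probs, chosen, rewarded, lr=0.1):
    # Linear reward-inaction (L_R-I): on a rewarded play, shift
    # probability mass toward the chosen action; on failure, do nothing.
    if rewarded:
        return [p + lr * (1.0 - p) if i == chosen else p * (1.0 - lr)
                for i, p in enumerate(probs)]
    return probs

# Toy stationary environment: action 0 is rewarded 80% of the time,
# action 1 only 40%; the automaton should converge on action 0.
payoff = [0.8, 0.4]
probs = [0.5, 0.5]
random.seed(0)
for _ in range(2000):
    action = random.choices(range(len(probs)), weights=probs)[0]
    probs = lri_update(probs, action, random.random() < payoff[action])
print(probs)
```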
12

Finding High Ground: Simulating an Evacuation in a Lahar Risk Zone

Bard, Joseph 27 October 2016 (has links)
Large lahars threaten communities living near volcanoes all over the world. Evacuations are a critical strategy for reducing vulnerability and mitigating a disaster. Hazard perceptions, transportation infrastructure, and transportation mode choice are all important factors in determining the effectiveness of an evacuation. This research explores the effects of population size, travel mode (driving versus walking), response time, and exit closures on an evacuation in a community threatened by a large lahar originating on Mount Rainier, Washington. An agent-based model employing a co-evolutionary learning algorithm is used to simulate a vehicular evacuation. Clearance times increase when the population is larger and when exits are blocked. Clearance times are reduced when a larger proportion of agents opt out of driving, and as the model learns. Results indicate evacuation times vary greatly due to spatial differences in the transportation network, the initial population distribution, and individual behaviors during the evacuation.
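The abstract does not spell out the co-evolutionary learning loop, so the following is only a loose illustration of the iterated route-choice idea: agents repeatedly pick an exit, experience congestion-dependent clearance, and a fraction imitate whichever exit cleared fastest in the previous round. Exit names, capacities, and the switching rate are all invented.

```python
import random
from collections import Counter

# Hypothetical setup: two exits; clearance time for an exit grows with
# the number of agents choosing it (a simple linear congestion cost).
CAPACITY = {"north": 120, "south": 60}

def clearance(exit_name, load):
    return load / CAPACITY[exit_name]

random.seed(0)
agents = [random.choice(list(CAPACITY)) for _ in range(300)]
for generation in range(50):
    load = Counter(agents)
    times = {e: clearance(e, load[e]) for e in CAPACITY}
    # Learning step: each agent, with small probability, imitates the
    # exit that cleared fastest last generation.
    best = min(times, key=times.get)
    agents = [best if random.random() < 0.1 else a for a in agents]
print(Counter(agents), times)  # loads drift toward equal clearance times
```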
13

Agent-based Modeling for Recovery Planning after Hurricane Sandy

Hajhashemi, Elham 13 September 2018 (has links)
Hurricane Sandy hit New York City on October 29, 2012 and greatly disrupted transportation systems, power systems, work, and schools. This research used survey data from 397 respondents in the NYC Metropolitan Area to develop an agent-based model for capturing commuter behavior and adaptation after the disruption. Six different recovery scenarios were tested to find which systems are more critical to restore first to promote a faster return to productivity. The important factors in the restoration timeline depend on the normal commuting pattern of people in the area. In the NYC Metropolitan Area, transit is one of the most common modes of transportation; therefore, subway/rail system recovery was found to be the top factor in returning to productivity. When the subway/rail system recovers earlier (with the associated power), more people are able to travel to work and be productive. The second most important factor is school and daycare closure (with the associated power and water systems). Parents cannot travel unless they can find a caregiver for their children, even if the transportation system is functional. Therefore, policy makers should consider daycare and school conditions as one of the important factors in recovery planning. The next most effective scenario is power restoration. Telework is a good substitute for the physical movement of people to work: by teleworking, people are productive while avoiding the disrupted transportation system, but they need power and communication systems. Therefore, accelerating power restoration and encouraging companies to let their employees telework can promote a faster return to productivity. Finally, the restoration of major crossings like bridges and tunnels is effective in the recovery process. / Master of Science / Natural and man-made disasters cause massive destruction of property annually and disrupt the normal economic productivity of an area. Although the occurrence of these disasters cannot be controlled, society can minimize their effects with post-disaster recovery strategies. Hurricane Sandy hit New York City on October 29, 2012 and greatly disrupted transportation systems, power systems, work, and schools. In this research, commuter behavior and adaptation after the hurricane were captured using survey data collected from people living in the NYC Metropolitan Area about their commuting behavior before and after Hurricane Sandy. An agent-based model was developed and six different recovery strategies were tested to find the most effective factors in returning people to a normal, productive life faster. In the NYC Metropolitan Area, transit is one of the most common modes of transportation; therefore, subway/rail system recovery was found to be the top factor in returning to productivity. The next important factor is school and daycare closure: parents are responsible for their children and may not travel to work when schools and daycares are closed. The third important factor is power restoration; to telework, people need power and communication systems, and by teleworking they are productive while avoiding the disrupted transportation system. The final important factor is the restoration of major crossings like bridges and tunnels.
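A hedged sketch of the kind of per-commuter decision rule the findings imply (childcare gates travel, telework needs power, transit commuters need the subway/rail). This is a distillation for illustration, not the thesis's actual model; the Commuter fields and scenario flags are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Commuter:           # hypothetical agent attributes
    uses_transit: bool
    has_children: bool
    can_telework: bool

def is_productive(c, subway_up, schools_open, power_up):
    # Childcare gates everything: no caregiver means no commute at all.
    if c.has_children and not schools_open:
        return False
    # Telework substitutes for travel, but needs power/communications.
    if c.can_telework and power_up:
        return True
    # Transit commuters need the subway/rail system back.
    if c.uses_transit:
        return subway_up
    return True  # drivers assumed able to travel in this sketch

pop = [Commuter(True, True, False), Commuter(True, False, True),
       Commuter(False, True, False)]
scenario = dict(subway_up=False, schools_open=True, power_up=True)
print(sum(is_productive(c, **scenario) for c in pop), "of", len(pop))
```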
14

Techniques for mathematical analysis and optimization of agent-based models

Oremland, Matthew Scott 23 January 2014 (has links)
Agent-based models are computer simulations in which entities (agents) interact with each other and their environment according to local update rules. Local interactions give rise to global dynamics. These models can be thought of as in silico laboratories that can be used to investigate the system being modeled. Optimization problems for agent-based models are problems concerning the optimal way of steering a particular model to a desired state. Given that agent-based models have no rigorous mathematical formulation, standard analysis is difficult, and traditional mathematical approaches are often intractable. This work presents techniques for the analysis of agent-based models and for solving optimization problems with such models. Techniques include model reduction, simulation optimization, conversion to systems of discrete difference equations, and a variety of heuristic methods. The proposed strategies are novel in their application; results show that for a large class of models, these strategies are more effective than existing methods. / Ph. D.
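Of the techniques listed, simulation optimization is the easiest to illustrate compactly. The sketch below treats a stochastic model run as a black box and hill-climbs a single control parameter, averaging replicates with fixed seeds (common random numbers) to tame noise; run_abm is an invented stand-in, not any model from the dissertation.

```python
import random

def run_abm(control, seed):
    # Stand-in for one stochastic agent-based model run: returns a noisy
    # objective value whose true optimum is at control = 0.3.
    rng = random.Random(seed)
    return -(control - 0.3) ** 2 + rng.gauss(0, 0.05)

def simulation_optimize(steps=200, replicates=20):
    # Average several replicates per candidate; reusing the same seeds
    # for every candidate (common random numbers) makes comparisons fairer.
    def score(x):
        return sum(run_abm(x, s) for s in range(replicates)) / replicates
    best = random.random()
    for _ in range(steps):
        candidate = min(max(best + random.gauss(0, 0.1), 0.0), 1.0)
        if score(candidate) > score(best):
            best = candidate
    return best

print(simulation_optimize())  # should land near 0.3
```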
15

Alternative Methodology To Household Activity Matching In TRANSIMS

Paradkar, Rajan 04 February 2002 (has links)
TRANSIMS (Transportation Analysis and Simulation System), developed at the Los Alamos National Laboratory, is an integrated system of travel forecasting models designed to give transportation planners accurate and complete information on traffic impacts, congestion, and pollution. TRANSIMS is a micro-simulation model that uses census data to generate a synthetic population and assigns activities, drawn from activity survey data, to each person of every household in the synthetic population. The synthetic households generated from the census data are matched with the survey households based on their demographic characteristics, and the activities of the survey household members are then assigned to the members of the matched synthetic households. The CART algorithm is used to match the households: a classification tree is built for the activity survey households from dependent and independent variables in the demographic data. The TRANSIMS model uses activity times as the dependent variables for building the classification tree. This research compares the TRANSIMS approach of using times spent executing activities as dependent variables with the alternative of using travel times for trips between activities as dependent variables, i.e., using the travel time pattern instead of the activity time pattern to match the persons in the survey households with the synthetic households. The underlying assumption is that people with similar demographic characteristics tend to have similar travel time patterns, so the survey households can be matched to the synthetic population on that basis. The algorithm of the Activity Generator module, along with the original set of dependent variables, was first used to generate a base-case scenario. Further tests were carried out using an alternative set of dependent variables in the algorithm. A sensitivity analysis was also carried out to test the effect of different sets of dependent variables on generating activities with the Activity Generator algorithm. The thesis also includes detailed documentation of the results from all the tests. / Master of Science
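A rough sketch of CART-style household matching, using scikit-learn's DecisionTreeRegressor in place of the actual TRANSIMS implementation: grow a regression tree on survey demographics with a travel-time dependent variable, then let each synthetic household draw a donor survey household from the same leaf. All data and variable choices here are fabricated for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# Invented data: demographics (household size, income decile, workers)
# and a dependent variable (total daily travel time, minutes).
survey_demo = rng.integers(1, 6, size=(500, 3)).astype(float)
survey_travel = survey_demo @ np.array([15.0, 5.0, 20.0]) + rng.normal(0, 10, 500)
synthetic_demo = rng.integers(1, 6, size=(2000, 3)).astype(float)

# Grow a CART tree that partitions survey households by demographics
# so that travel-time patterns are homogeneous within each leaf.
tree = DecisionTreeRegressor(min_samples_leaf=25, random_state=0)
tree.fit(survey_demo, survey_travel)

# Match: every synthetic household draws a donor survey household from
# the same leaf and would inherit the donor's activity schedule.
survey_leaf = tree.apply(survey_demo)
synth_leaf = tree.apply(synthetic_demo)
donor_of = np.empty(len(synthetic_demo), dtype=int)
for leaf in np.unique(synth_leaf):
    donors = np.where(survey_leaf == leaf)[0]
    members = np.where(synth_leaf == leaf)[0]
    donor_of[members] = rng.choice(donors, size=len(members))
print(donor_of[:10])  # survey household matched to each synthetic one
```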
16

Effective and efficient algorithms for simulating sexually transmitted diseases

Tolentino, Sean Lucio 01 December 2014 (has links)
Sexually transmitted diseases affect millions of lives every year. In order to use prevention resources most effectively, epidemiologists deploy models to understand how a disease spreads through the population and which intervention methods will be most effective at reducing disease perpetuation. Increasingly, agent-based models are being used to simulate population heterogeneity and fine-grained sociological effects that are difficult to capture with traditional compartmental and statistical models. A key challenge is using a sufficiently large number of agents to produce robust and reliable results while also running in a reasonable amount of time. In this thesis we show the effectiveness of agent-based modeling in planning coordinated responses to a sexually transmitted disease epidemic and present efficient algorithms for running these models in parallel and in a distributed setting. The model accounts for population heterogeneity such as age preference, concurrent partnerships, and coital dilution, and the implementation scales well to large population sizes, producing robust results in a reasonable amount of time. The work helps epidemiologists and public health officials plan a targeted and well-informed response to a variety of epidemic scenarios.
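A toy discrete-time partnership-network simulation in the spirit described, though far simpler than the thesis's model and with none of its parallelism: partnerships dissolve and form each day, and infection passes within sero-discordant pairs. All rates and sizes are invented.

```python
import random

random.seed(1)
N, DAYS = 2000, 365
BETA = 0.01        # per-day transmission probability within a pair
DISSOLVE = 0.02    # per-day probability a partnership dissolves
FORM = 40          # new partnerships formed per day

infected = set(random.sample(range(N), 20))
pairs = set()
for day in range(DAYS):
    # Partnership turnover: a crude stand-in for the age-preference,
    # concurrency, and coital-dilution structure the thesis models.
    pairs = {p for p in pairs if random.random() > DISSOLVE}
    for _ in range(FORM):
        a, b = random.sample(range(N), 2)
        pairs.add(frozenset((a, b)))
    # Transmission within sero-discordant partnerships.
    for p in pairs:
        a, b = tuple(p)
        if (a in infected) != (b in infected) and random.random() < BETA:
            infected |= {a, b}
print(f"infected after {DAYS} days: {len(infected)}")
```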
17

A Genetic Programming Approach to Solving Optimization Problems on Agent-Based Models

Garuccio, Anthony 17 May 2016 (has links)
In this thesis, we present a novel approach to solving optimization problems that are defined on agent-based models (ABM). The approach utilizes concepts in genetic programming (GP) and is demonstrated here using an optimization problem on the Sugarscape ABM, a prototype ABM that includes spatial heterogeneity, accumulation of agent resources, and agents with different attributes. The optimization problem seeks a strategy for taxation of agent resources which maximizes total taxes collected while minimizing impact on the agents over a finite time. We demonstrate how our GP approach yields better taxation policies when compared to simple flat taxes and provide reasons why GP-generated taxes perform well. We also look at ways to improve the performance of the GP optimization method. / McAnulty College and Graduate School of Liberal Arts / Computational Mathematics / MS / Thesis
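A compact, mutation-only illustration of evolving a tax policy as an expression tree that maps agent wealth to a tax rate. The fitness function is a toy stand-in for an actual Sugarscape run, and full GP systems (presumably including the thesis's) also use crossover; every name and constant here is an assumption for illustration.

```python
import random

random.seed(2)
OPS = {"add": lambda a, b: a + b, "sub": lambda a, b: a - b,
       "mul": lambda a, b: a * b}

def random_tree(depth=3):
    # Leaves are the agent's wealth or a random constant; internal
    # nodes are (op, left, right) tuples.
    if depth == 0 or random.random() < 0.3:
        return random.choice(["wealth", round(random.uniform(0.0, 1.0), 2)])
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, wealth):
    if tree == "wealth":
        return wealth
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, wealth), evaluate(right, wealth))

def tax_rate(tree, wealth):
    return min(max(evaluate(tree, wealth), 0.0), 0.5)  # clamp to [0%, 50%]

def fitness(tree, agents):
    # Toy objective standing in for a Sugarscape run: taxes collected,
    # penalized for pushing agents below a subsistence level of 1.0.
    taxes = sum(w * tax_rate(tree, w) for w in agents)
    harmed = sum(1 for w in agents if w * (1 - tax_rate(tree, w)) < 1.0)
    return taxes - 5.0 * harmed

def mutate(tree, p=0.2):
    # Subtree mutation: occasionally replace a node with a fresh subtree.
    if random.random() < p:
        return random_tree(2)
    if isinstance(tree, tuple):
        op, left, right = tree
        return (op, mutate(left, p), mutate(right, p))
    return tree

agents = [random.expovariate(0.2) for _ in range(500)]  # wealth, mean 5
pop = [random_tree() for _ in range(60)]
for gen in range(30):  # truncation selection plus subtree mutation
    pop.sort(key=lambda t: fitness(t, agents), reverse=True)
    survivors = pop[:20]
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(40)]
print(fitness(pop[0], agents))
```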
18

Designing a realistic virtual bumblebee

Marsden, Timothy 09 February 2016 (has links)
Optimal Foraging Theory is a set of mathematical models used in the field of behavioral ecology to predict how animals should weigh foraging costs and benefits in order to maximize their food intake. One popular model, referred to as the Optimal Diet Model (ODM), focuses on how individuals should respond to variation in food quality in order to optimize food selection. The main prediction of the ODM is that low quality food items should only be accepted when higher quality items are encountered below a predicted threshold. Yet, many empirical studies have found that animals still include low quality items in their diet above such thresholds, indicating a sub-optimal foraging strategy. Here, we test the hypothesis that such ‘partial preferences’ are produced as a consequence of incomplete information on prey distributions resulting from memory limitations. To test this hypothesis, we used agent-based modeling in NetLogo to create a model of flower choice behavior in a virtual bumblebee forager (SimBee). We program virtual bee foragers with an adaptive decision-making algorithm based on the classic ODM, which we have modified to include memory. Our results show that the probability of correctly rejecting a low quality food item increases with memory size, suggesting that memory limitations play a significant role in driving partial preferences. We discuss the implications of this finding and further applications of our SimBee model in research and educational contexts.
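The ODM's zero-one rule and the memory effect reported here are easy to sketch: accept the low-quality item only when the rate from specializing on high-quality items falls below the low item's profitability e/h, with the encounter rate estimated from a sliding memory window. This is a Python stand-in for the NetLogo SimBee model with invented parameters; the values below make rejection optimal, so any acceptance counts as a partial-preference error.

```python
import random
from collections import deque

random.seed(3)
E = {"high": 10.0, "low": 3.0}   # energy gained per item (invented)
H = {"high": 2.0, "low": 2.0}    # handling time per item (invented)
RATE_HIGH = 0.3                  # true per-step encounter rate, high items

def accept_low(est_rate_high):
    # Zero-one rule: take the low-quality item only when specializing
    # on high items yields a lower long-term rate than the low item's
    # profitability e/h.
    specialist = est_rate_high * E["high"] / (1 + est_rate_high * H["high"])
    return E["low"] / H["low"] > specialist

def error_rate(memory_size, steps=5000):
    memory = deque(maxlen=memory_size)   # 1 = encountered a high item
    errors = 0
    for _ in range(steps):
        memory.append(random.random() < RATE_HIGH)
        est = sum(memory) / len(memory)
        # With the true rate (0.3) the optimal forager rejects low items,
        # so acceptance here is a "partial preference" error.
        if accept_low(est):
            errors += 1
    return errors / steps

for m in (5, 20, 100):
    print(m, error_rate(m))   # error rate should fall as memory grows
```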
20

Development of Agent-based Models for Economic Simulation / Vývoj agentních modelů pro ekonomickou simulaci

Šalamon, Tomáš January 2005 (has links)
This thesis concerns the development of agent-based models, a method of simulating economic processes and environments using multi-agent systems. Agent-based modeling appears to be an underappreciated approach with the potential for much wider application than it currently sees. The purpose of this work is to evaluate the reasons for this situation and to offer solutions. Among the reasons identified for the low utilization of the method are: a wide gap between theory and practice in the field, doubtful reliability of the method, low confidence in its results, complexity, missing methodologies, problems with suitable development frameworks, limitations of computational performance, a lack of awareness among the public, and certain other problems. Agentology (i.e., a methodology for the development of agent-based models) is proposed in this thesis to address the issues regarding the development of agent-based models. The methodology defines six roles for project participants: expert, analyst, modeler, platform specialist, programmer, and tester. The design and development process consists of four phases and nine steps, beginning with task formulation, through conceptual modeling and platform-specific modeling, to the development of the system. For the design phases, an agent modeling language for agent-based models was derived.
