51. The complex problem of food safety: Applying agent-based modeling to the policy process (2014)
Many problems facing policymakers are complex and cannot be understood by reducing them to their component parts. However, many of the policy responses to complex problems continue to be based on simple, reductionist methods. Agent-based modeling (ABM) is one alternative method for informing policy that is well-suited to analyzing complex problems.
ABM has practical implications for different stages of the policy process, such as testing alternatives, assisting with evaluation by setting up a counterfactual, and agenda setting. The objective of the research presented in this dissertation is to explore the opportunity for using ABM to examine complex problems of relevance for policy. To do so, three separate models were developed to investigate different aspects of food safety inspection systems. Complex problems involve interrelated feedback loops, many actors, exponential growth, asymmetric information, and uncertainty in outcomes and data. Food safety exhibits all of these traits, providing an interesting case study for the use of ABM.
The first model explores three inspection scenarios incorporating access to information. The main finding was that the number of sick consumers is greatly reduced by giving consumers and inspectors more information about whether a retail outlet is contaminated, even if that information may be uncertain. The second model incorporated theories on risk and the role of transparency in encouraging consumer trust by giving consumers access to inspection scores. Overall, the findings were more nuanced: having access to restaurant inspection scores results in a slightly higher mean number of sick consumers, but less variation overall in the number of sick consumers. As well, a greater number of compliant restaurants results in fewer sick consumers. Rather than investigating the structure of the inspection system, the third model examines the potential for mobile technology to crowdsource information about suspected foodborne illness. This model illustrates the potential for health-oriented mobile technologies to improve the surveillance system for foodborne illness.
Overall, the findings from the three models support using stylized ABMs to study various aspects of food safety inspection systems, and show that these models can be used to generate insight for policy choices and evidence-based decision making in this area.
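As a sketch of what such a stylized inspection model might look like in code (the dissertation's own implementation is not reproduced here; every name and parameter below is a hypothetical placeholder), a minimal version of the first model's information scenario could be:

```python
import random

# Hypothetical parameters; the dissertation's actual values are not published here.
N_OUTLETS = 50          # retail outlets
N_CONSUMERS = 1000      # consumers per day
P_CONTAMINATED = 0.1    # chance an outlet is contaminated on a given day
P_SIGNAL_CORRECT = 0.8  # accuracy of the (uncertain) contamination signal

def run_day(consumers_informed: bool) -> int:
    """Simulate one day; return the number of consumers made sick."""
    contaminated = [random.random() < P_CONTAMINATED for _ in range(N_OUTLETS)]
    sick = 0
    for _ in range(N_CONSUMERS):
        outlet = random.randrange(N_OUTLETS)
        if consumers_informed:
            # Each consumer sees a noisy signal about the outlet's status.
            correct = random.random() < P_SIGNAL_CORRECT
            signal = contaminated[outlet] if correct else not contaminated[outlet]
            if signal:  # consumer avoids an outlet flagged as contaminated
                continue
        if contaminated[outlet]:
            sick += 1
    return sick

random.seed(1)
for informed in (False, True):
    total = sum(run_day(informed) for _ in range(100))
    print(f"informed={informed}: {total} sick consumers over 100 days")
```

Run as-is, the informed scenario produces far fewer sick consumers, mirroring the abstract's main finding that even uncertain information sharply reduces exposure.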
52. Freight Market Interactions Simulation (FREMIS): An Agent-based Modelling Framework. Cavalcante, Rinaldo, 19 March 2013
Freight transport is the output of an economic market, which converts commodity flows into vehicle flows. Interactions in this market influence vehicle flows, and since freight market characteristics (product differentiation and economies of scale/scope) violate perfect competition conditions, the output of this market cannot be predicted directly unless these interactions are represented in the forecasting models. Traditional freight modelling frameworks do not consider these interactions and consequently may provide inaccurate freight flow forecasts. In this dissertation, a freight modelling framework is proposed that simulates freight agent interactions in the economic market to forecast freight flows. The framework is named FREMIS (FREight Market Interactions Simulation). The FREMIS framework consists of two demand models representing shipper decisions in the market (bundling of shipments and carrier selection) and functions based on profit-maximizing behaviour that simulate carrier proposals for contracts. In addition, learning models are proposed to simulate agent learning processes based on their interactions. The framework was developed to create a realistic representation of freight markets using feasible data collection methods. To illustrate the feasibility of the data collection, a customized web survey was implemented with shippers and carriers in a freight market, and two sets of probabilistic models were developed from the data. First, a shipment bundling model was proposed that combines a probabilistic model with a vehicle routing algorithm; the results of the probabilistic model are presented in this dissertation, showing that the locations of shipments (origin and destination) influence the probability of bundling them. Second, three carrier selection models were developed to analyse nonresponse bias and the non-attendance problem in the survey. All of these models assumed heteroskedasticity (different scale or variance) in shipper behaviour, and in all models the hypothesis of agents' heteroskedasticity could not be rejected. Both nonresponse bias and the non-attendance problem were identified in the survey. In conclusion, the models obtained from the survey were consistent with their behavioural assumptions and can therefore be adopted during FREMIS implementation.
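The two sides of such a framework could be sketched roughly as follows: carriers propose contract prices from a markup over cost, and shippers select a carrier probabilistically via a logit model. This is an assumed toy rendering only; the actual FREMIS models, variables, and estimated coefficients are not reproduced here.

```python
import math
import random

random.seed(42)

class Carrier:
    def __init__(self, name: str, unit_cost: float, markup: float):
        self.name = name
        self.unit_cost = unit_cost
        self.markup = markup  # in FREMIS this would be adjusted by learning

    def propose(self, shipment_size: float) -> float:
        """Propose a contract price: cost plus a profit-seeking markup."""
        return shipment_size * self.unit_cost * (1.0 + self.markup)

def shipper_choice(proposals: dict[str, float], scale: float = 1.0) -> str:
    """Logit carrier selection: cheaper proposals are chosen more often.
    The scale parameter plays the role of the heteroskedastic variance
    the dissertation finds differs across shippers."""
    utilities = {c: -price / scale for c, price in proposals.items()}
    denom = sum(math.exp(u) for u in utilities.values())
    r, cum = random.random(), 0.0
    for c, u in utilities.items():
        cum += math.exp(u) / denom
        if r < cum:
            return c
    return c

carriers = [Carrier("A", 1.0, 0.15), Carrier("B", 1.1, 0.05)]
proposals = {c.name: c.propose(shipment_size=10.0) for c in carriers}
print(proposals, "->", shipper_choice(proposals))
```

The heteroskedasticity finding maps onto the scale parameter: letting it vary across shippers yields the different choice variances the survey models estimate.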
53. Development of agent-based models for healthcare: applications and critique. Demianyk, Bryan C.P., January 2010
Agent-based modeling (ABM) is a modeling and simulation paradigm well-suited to social systems where agents interact and have some degree of autonomy. In their most basic sense, ABMs consist of agents (generally, individuals) interacting in an environment according to a set of behavioural rules. The foundational premise of ABM, and the source of its conceptual depth, is that simple rules of individual behaviour aggregate to illuminate complex and/or emergent group-level phenomena that are not specifically encoded by the modeler and that cannot be predicted or explained from the agent-level rules alone. In essence, ABM has the potential to reveal a whole that is greater than the sum of its parts. In this thesis, ABMs have been utilized as a modeling framework for three specific healthcare applications:
• the development of an ABM of an emergency department within a hospital allowing the modeling of contact-based infectious diseases such as influenza, and simulating various mitigation strategies;
• the development of an ABM to model the effectiveness of a real-time location system (RTLS) using radio frequency identification (RFID) in an emergency department, used for patient tracking as one measure of hospital efficiency; and,
• the development of an ABM to test strategies for disaster preparedness (high volume, high risk patients) using a fictitious case of zombies in an emergency department.
Although each ABM was purposeful and meaningful for its custom application, each ABM also represented an iteration toward the development of a generic ABM framework. Finally, a thorough critique of ABMs and the modifications required to create a more robust framework are provided.
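As a rough illustration of the first application, contact-based spread in an emergency department reduces to a simple contact rule. The SIR-style sketch below uses invented parameters and is not the thesis's model:

```python
import random

random.seed(7)

# Hypothetical agents: each person in the ED is susceptible ('S'),
# infectious ('I'), or recovered ('R').
N_AGENTS = 200
CONTACTS_PER_TICK = 3   # random contacts per infectious agent per time step
P_TRANSMIT = 0.05       # per-contact transmission probability
TICKS_TO_RECOVER = 48   # time steps an agent stays infectious

state = ['S'] * N_AGENTS
clock = [0] * N_AGENTS
state[0] = 'I'  # index case arrives in the waiting room

for tick in range(500):
    infectious = [i for i, s in enumerate(state) if s == 'I']
    for i in infectious:
        for _ in range(CONTACTS_PER_TICK):
            j = random.randrange(N_AGENTS)
            if state[j] == 'S' and random.random() < P_TRANSMIT:
                state[j] = 'I'
        clock[i] += 1
        if clock[i] >= TICKS_TO_RECOVER:
            state[i] = 'R'

print({s: state.count(s) for s in 'SIR'})
```

Mitigation strategies such as masking or isolation would enter as reductions to P_TRANSMIT or CONTACTS_PER_TICK for some subset of agents.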
54. Spread Rate Estimation and the Role of Spatial Configuration and Human Behavior (January 2012)
The spread of invasive species may be greatly affected by human responses to prior species spread, but models and estimation methods seldom explicitly consider human responses. I investigate the effects of management responses on estimates of invasive species spread rates. To do this, I create an agent-based simulation model of an insect invasion across a county-level citrus landscape. My model provides an approximation of a complex spatial environment while allowing the "truth" to be known. The modeled environment consists of citrus orchards with insect pests dispersing among them. Insects move across the simulation environment infesting orchards, while orchard managers respond by administering insecticide according to analyst-selected behavior profiles; management responses may depend on prior invasion states. Dispersal data are generated in each simulation and used to calculate spread rate via a set of estimators selected for their predominance in the empirical literature. Spread rate is a mechanistic, emergent phenomenon measured at the population level, caused by a suite of latent biological, environmental, and anthropogenic factors. I test the effectiveness of orchard behavior profiles on invasion suppression and evaluate the robustness of the estimators given orchard responses. I find that allowing growers to use future expectations of spread in management decisions leads to reduced spread rates. By acting preventatively and applying insecticide before insects are actually present, orchards are able to lower spread rates more than by reactive behavior alone. Spread rates are highly sensitive to spatial configuration, which is hardly a random process: it reflects many latent factors often not accounted for in spread rate estimation, and not considering these factors may lead to omitted-variables bias and skewed estimation results. The ability of spread rate estimators to predict future spread varies considerably between estimators, and with spatial configuration, invader biological parameters, and orchard behavior profile. The model suggests that understanding the latent factors inherent to dispersal is important for selecting phenomenological models of spread and interpreting estimation results, indicating a need for caution when evaluating spread. Although standard practice, current empirical estimators may both over- and underestimate spread rate in the simulation.
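One of the simplest estimators in the family the thesis evaluates is a regression of maximum invasion distance on time since introduction, with the slope as the spread rate. A hedged sketch (the estimator form is standard in the invasion literature, but the data and variables here are invented):

```python
# Distance-regression spread rate estimator: regress the maximum observed
# dispersal distance from the introduction point on time; the OLS slope
# estimates the radial spread rate (e.g., km per year).
def spread_rate(times: list[float], max_distances: list[float]) -> float:
    n = len(times)
    mean_t = sum(times) / n
    mean_d = sum(max_distances) / n
    cov = sum((t - mean_t) * (d - mean_d) for t, d in zip(times, max_distances))
    var = sum((t - mean_t) ** 2 for t in times)
    return cov / var  # OLS slope

# Hypothetical yearly observations of the farthest infested orchard (km).
years = [1, 2, 3, 4, 5]
front = [2.1, 4.8, 6.5, 9.9, 11.4]
print(f"estimated spread rate: {spread_rate(years, front):.2f} km/year")
```

Because the observed front reflects management behavior and spatial configuration as well as biology, this slope can over- or understate the underlying spread, which is exactly the bias the simulations probe.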
55. Competition and collaboration in supply chains: an agent-based modelling approach. Arvitrida, Niniet I., January 2017
Competition has been considered an effective means to improve business and economic competitiveness. However, competition in supply chain management (SCM) can also be viewed as a source of uncertainty. Most collaboration strategies recommended in the SCM literature tend to avoid the emergence of competition inside the supply chain, but in reality these strategies do not lead all supply chains to success. In addition, from a strategic management perspective, such collaboration strategies are not believed to encourage firms to improve their performance. Both competition and collaboration are critical to achieving business success, but the effect of the two factors on the market has not been explored concurrently in the literature. The complexity of this issue calls for a comprehensive perspective, and it is hard to investigate empirically.
56. Minimal requirements for the cultural evolution of language. Spike, Matthew John, January 2017
Human language is both a cognitive and a cultural phenomenon. Any evolutionary account of language, then, must address both biological and cultural evolution. In this thesis, I give a mainly cultural evolutionary answer to two main questions: firstly, how do working systems of learned communication arise in populations in the absence of external or internal guidance? Secondly, how do those communication systems take on the fundamental structural properties found in human languages, i.e. systematicity at both a meaningless and a meaningful level? A large, multi-disciplinary literature exists for each question, full of apparently conflicting results and analyses. My aim in this thesis is to survey this work so as to find any commonalities, and to bring these together in order to provide a minimal account of the cultural evolution of language. The first chapter of this thesis examines a number of well-established models of the emergence of signalling systems, taken from several different fields: evolutionary linguistics, evolutionary game theory, philosophy, artificial life, and cognitive science. By using a common framework to directly compare these models, I show that three underlying commonalities determine the ability of any population of agents to reliably develop optimal signalling. The three requirements are that i) agents can create and transfer referential information, ii) there is a systemic bias against ambiguity, and iii) some mechanism leading to information loss exists. Following this, I extend the model to determine the effects of including referential uncertainty, and show that, for the group of models to which this applies, it places certain extra restrictions on the three requirements stated above. In the next chapter, I use an information-theoretic framework to construct a novel analysis of signalling games in general, and rephrase the three requirements in more formal terms. I then show that these three criteria can be used as a diagnostic for determining whether any given signalling game will lead to optimal signalling, without the need for repeated simulations. In the final, much longer, chapter, I address the topic of duality of patterning. This involves a lengthy review of the literature on duality of patterning, combinatoriality, and compositionality. I then argue that both levels of systematicity can be seen as a functional adaptation that maintains communicative accuracy in the face of noisy processes at different levels of analysis. I support this with results from a new, minimally specified model, which also clarifies and informs a number of long-running debates within the field.
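The three requirements can be made concrete in a minimal Lewis-style signalling game with reinforcement learning. The sketch below is my own rendering, not the thesis's model: successful meaning-signal pairs are reinforced (requirement i), rival meanings for a used signal are inhibited (ii), and gradual forgetting supplies information loss (iii). Sender and receiver share one association matrix for brevity.

```python
import random

random.seed(3)
N = 3  # number of meanings and signals
# weights[m][s]: propensity to associate signal s with meaning m
weights = [[1.0] * N for _ in range(N)]

def choose(row):
    """Roulette-wheel choice proportional to the propensities in row."""
    r = random.uniform(0, sum(row))
    for s, w in enumerate(row):
        r -= w
        if r <= 0:
            return s
    return N - 1

for _ in range(5000):
    m = random.randrange(N)                              # nature picks a meaning
    s = choose(weights[m])                               # sender signals
    guess = choose([weights[mm][s] for mm in range(N)])  # receiver decodes
    if guess == m:
        weights[m][s] += 1.0          # (i) reinforce referential success
        for mm in range(N):           # (ii) bias against ambiguity:
            if mm != m:               #      inhibit rival meanings for s
                weights[mm][s] = max(0.01, weights[mm][s] - 0.1)
    for row in weights:               # (iii) information loss: forgetting
        for s2 in range(N):
            row[s2] *= 0.999

# Report the dominant signal for each meaning; with all three mechanisms
# active this typically converges to a one-to-one (optimal) mapping.
print([max(range(N), key=lambda s: weights[m][s]) for m in range(N)])
```

Removing any one of the three mechanisms, e.g. setting the inhibition or decay to zero, tends to leave the population stuck in partial or ambiguous mappings, which is the diagnostic role the thesis assigns them.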
57. The Organization and Evolution of the Hohokam Economy: Agent-Based Modeling of Exchange in the Phoenix Basin, Arizona, AD 200-1450 (January 2013)
The Hohokam of central Arizona left behind evidence of a culture markedly different from and more complex than the small communities of O'odham farmers first encountered by Europeans in the sixteenth and seventeenth centuries A.D. Archaeologists have worked for well over a century to document Hohokam culture history, but much about Pre-Columbian life in the Sonoran Desert remains poorly understood. In particular, the organization of the Hohokam economy in the Phoenix Basin has been an elusive and complicated subject, despite having been the focus of much previous research. This dissertation provides an assessment of several working hypotheses regarding the organization and evolution of the pottery distribution sector of the Hohokam economy. This was accomplished using an agent-based modeling methodology known as pattern-oriented modeling. The objective of the research was to first identify a variety of economic models that may explain patterns of artifact distribution in the archaeological record. Those models were abstract representations of the real-world system theoretically drawn from different sources, including microeconomics, mathematics (network/graph theory), and economic anthropology. Next, the effort was turned toward implementing those hypotheses as agent-based models, and finally assessing whether or not any of the models were consistent with Hohokam ceramic datasets. The project's pattern-oriented modeling methodology led to the discard of several hypotheses, narrowing the range of plausible models of the organization of the Hohokam economy. The results suggest that for much of the Hohokam sequence a market-based system, perhaps structured around workshop procurement and shopkeeper merchandise, provided the means of distributing pottery from specialist producers to widely distributed consumers. Perhaps unsurprisingly, the results of this project are broadly consistent with earlier researchers' interpretations that the structure of the Hohokam economy evolved through time, growing more complex throughout the Preclassic, and undergoing a major reorganization resulting in a less complicated system at the transition to the Classic Period.
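Pattern-oriented modeling can be summarized schematically: each candidate exchange model is simulated, its output statistics are compared with the archaeological pattern, and models whose error exceeds a tolerance are discarded. The sketch below is purely illustrative; the ceramic statistics, candidate names, and threshold are invented, not taken from the dissertation.

```python
def pattern_error(simulated: list[float], observed: list[float]) -> float:
    """Mean absolute deviation between simulated and observed pattern
    statistics, e.g. ceramic-type frequencies across sites."""
    return sum(abs(s - o) for s, o in zip(simulated, observed)) / len(observed)

# Hypothetical summary statistics produced by each candidate exchange model.
observed = [0.42, 0.31, 0.18, 0.09]
candidates = {
    "reciprocity":          [0.25, 0.25, 0.25, 0.25],
    "workshop procurement": [0.40, 0.33, 0.17, 0.10],
    "redistribution":       [0.60, 0.20, 0.15, 0.05],
}

THRESHOLD = 0.05  # assumed tolerance for retaining a candidate model
for name, sim in candidates.items():
    err = pattern_error(sim, observed)
    verdict = "retain" if err <= THRESHOLD else "discard"
    print(f"{name}: error={err:.3f} -> {verdict}")
```

The filtering step is what narrows the hypothesis space: only candidates whose simulated distributions stay within tolerance of the ceramic data survive.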
58. Performance Evaluation of Boids on the GPU and CPU. Lindqvist, Sebastian, January 2018
Context. Agent-based models are used to simulate complex systems by using multiple agents that follow a set of rules. One such model is the boid model, which is used to simulate the movements of synchronized groups of animals. Executing agent-based models partially or fully on the GPU has previously been shown to increase performance, opening up the possibility for larger simulations. However, few articles have compared a full GPU implementation of the boid model with a multi-threaded CPU implementation. Objectives. The objectives of this thesis are to find out how parallel execution of the boid model performs when executed on the CPU and GPU respectively, based on the variables frames per second and average boid computation time per frame. Methods. A performance benchmark experiment is set up in which three implementations of the boid model are implemented and tested. Results. The collected data are summarized in tables and graphs, showing the results of the experiment for frames per second and average boid computation time per frame. Additionally, the average results are summarized in two tables. Conclusions. For the largest flock size the GPGPU implementation performs best, with an average FPS 42 times that of the single-core implementation, while the multi-core implementation achieves an average FPS 6 times that of the single-core implementation. For the smallest flock size the single-core implementation is most efficient: the GPGPU implementation has a 1.6 times slower average update time, and the multi-core implementation an average update time 11 times slower than the single-core implementation.
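For reference, the boid model follows Reynolds's three steering rules: separation, alignment, and cohesion. A minimal single-core sketch (the weights, radius, and world size are assumptions, not the thesis's benchmark configuration):

```python
import random

random.seed(0)
N, RADIUS = 100, 10.0
W_SEP, W_ALI, W_COH = 1.5, 1.0, 1.0  # assumed rule weights

boids = [{'p': [random.uniform(0, 100), random.uniform(0, 100)],
          'v': [random.uniform(-1, 1), random.uniform(-1, 1)]} for _ in range(N)]

def step():
    # For each boid, find neighbours within RADIUS and steer by three rules.
    for b in boids:
        neigh = [o for o in boids if o is not b and
                 (o['p'][0]-b['p'][0])**2 + (o['p'][1]-b['p'][1])**2 < RADIUS**2]
        if neigh:
            n = len(neigh)
            for d in range(2):
                coh = sum(o['p'][d] for o in neigh) / n - b['p'][d]  # cohesion
                ali = sum(o['v'][d] for o in neigh) / n - b['v'][d]  # alignment
                sep = sum(b['p'][d] - o['p'][d] for o in neigh) / n  # separation
                b['v'][d] += 0.01 * (W_COH*coh + W_ALI*ali + W_SEP*sep)
    for b in boids:
        for d in range(2):
            b['p'][d] = (b['p'][d] + b['v'][d]) % 100.0  # wrap-around world

for _ in range(100):
    step()
print(boids[0]['p'])
```

The O(N^2) neighbour search inside step is precisely the per-boid work a GPGPU implementation parallelizes, which is why the performance gap widens with flock size.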
59. Agent-based modelling of cell-cell interactions for in vitro vascular formation and cancer cell growth. Li, Ye, January 2015
A complex system is a collection of parts, identical or different, that interact with each other and with their environment and exhibit emergent behaviour. Here, I consider the formation of vascular structures in the body as a complex system in which a pattern emerges from interacting endothelial cells. A cancer tumour is a different but related complex system that contains various types of cells, some of which have cancer-inducing mutations. To understand the formation of a vascular structure or a cancer tumour, it is important to understand both the single cells and the cell-cell interactions. To study the physical interaction among cells in vascular formation and cancer cell growth, in this thesis an agent-based model is built on the physical properties of cells, including their size, shape, direction, and position. In this way the mathematical equations in the model can capture the physical variation among modelled cells. The 3-dimensional shape of cells is modelled, so while I start with cell interactions in a petri dish, the model can easily be extended in the future to describe the motility of cells in a 3-dimensional system. The physical model is implemented and simulated with in silico experiments, and the spatial distribution of cells in in vitro experiments is analysed and used to calibrate the model. In vitro experiments with and without a drug, in normal and hypoxic conditions, are carried out, and the patterns formed by cells under different treatments are analysed to produce different parameter combinations in the model. This physical model is shown to be able to predict vessel formation and can be reused to predict the spatial distribution of cancer cells in in vitro growth experiments. With biological data such as cell size and cell shape, the model is able to predict the behaviours of various cell types, and it can also be used to predict more complex phenomena, such as mixed types of cancer cells growing in 3 dimensions with vascular structures.
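The physical core of such a model can be sketched as circular agents that repel on overlap and occasionally divide. Everything below (2-D instead of 3-D, the force law, the rates) is an assumed simplification for illustration, not the thesis's calibrated model:

```python
import math
import random

random.seed(5)

CELL_RADIUS = 1.0
REPULSION = 0.1   # assumed spring-like repulsion strength
P_DIVIDE = 0.01   # assumed per-step division probability

cells = [[0.0, 0.0], [1.5, 0.0]]  # 2-D positions; the thesis models 3-D shapes

def step():
    # Pairwise repulsion when cells overlap (centre distance < 2 radii).
    for i in range(len(cells)):
        for j in range(i + 1, len(cells)):
            dx = cells[j][0] - cells[i][0]
            dy = cells[j][1] - cells[i][1]
            dist = math.hypot(dx, dy) or 1e-9
            overlap = 2 * CELL_RADIUS - dist
            if overlap > 0:
                push = REPULSION * overlap / dist
                cells[i][0] -= push * dx; cells[i][1] -= push * dy
                cells[j][0] += push * dx; cells[j][1] += push * dy
    # Division: a daughter cell appears next to its mother.
    for pos in list(cells):
        if random.random() < P_DIVIDE:
            angle = random.uniform(0, 2 * math.pi)
            cells.append([pos[0] + CELL_RADIUS * math.cos(angle),
                          pos[1] + CELL_RADIUS * math.sin(angle)])

for _ in range(500):
    step()
print(f"{len(cells)} cells after 500 steps")
```

Calibration, as described in the abstract, would consist of tuning parameters like REPULSION and P_DIVIDE until simulated spatial distributions match those observed in vitro under each treatment.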
60. Deep Impact: Geo-Simulations as a Policy Toolkit for Natural Disasters. Naqvi, Asjad, January 2017
Adverse post-natural-disaster outcomes in low-income regions, such as elevated internal migration and low consumption levels, are the result of market failures, poor mechanisms for stabilizing income, and missing insurance markets, which force the affected population to respond and adapt to the shock they face. In a spatial environment with multiple locations and independent but interconnected markets, these transitions quickly become complex and highly non-linear due to the feedback loops between micro individual-level decisions and meso location-wise market decisions. To capture these continuously evolving micro-meso interactions, this paper presents a spatially explicit, bottom-up agent-based model to analyze natural-disaster-like shocks to low-income regions. The aim of the model is to track temporally and spatially how population distributions, income, and consumption levels evolve, in order to identify low-income workers who are "food insecure". The model is applied to the 2005 earthquake in northern Pakistan, which suffered catastrophic losses and high levels of displacement in a short time span and, with the accompanying market disruptions, high levels of food insecurity. The model is calibrated to pre-crisis trends and shocked using distance-based output and labor loss functions to replicate the earthquake impact. Model results show how various factors, such as existing income and saving levels, distance from the fault line, and connectivity to other locations, give insight into the spatial and temporal emergence of vulnerabilities. The simulation framework presented here leaps beyond existing modeling efforts, which usually deal with macro long-term loss estimates, and allows policymakers to formulate informed short-term policies in an environment where data are non-existent, policy response is time dependent, and resources are limited.
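The distance-based shock the paper describes can be illustrated schematically; the exponential form and the parameters below are assumptions for illustration, not the paper's estimated loss functions:

```python
import math

def earthquake_shock(distance_km: float, max_loss: float = 0.8,
                     decay: float = 0.05) -> float:
    """Assumed distance-based loss function: the fraction of local output
    (or labor) destroyed decays exponentially with distance from the fault."""
    return max_loss * math.exp(-decay * distance_km)

# Apply the shock to hypothetical locations in the simulated region.
locations = {"A": 5.0, "B": 25.0, "C": 80.0}  # distance to fault line (km)
for name, d in locations.items():
    loss = earthquake_shock(d)
    print(f"location {name}: {loss:.0%} of pre-crisis output lost")
```

In the full model, each location's shocked output and labor then feed back into its market prices, migration decisions, and consumption, producing the micro-meso dynamics the paper tracks.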