
Pure and Mixed Strategies in Cyclic Competition: Extinction, Coexistence, and Patterns

Intoy, Ben Frederick Martir 04 May 2015 (has links)
We study game theoretic ecological models with cyclic competition in the case where the strategies can be mixed or pure. For both projects, reported in [49] and [50], we employ Monte Carlo simulations to study finite systems. In chapter 3 the results of a previously published paper [49] are presented and expanded upon, where we study the extinction time of four cyclically competing species on different lattice structures using Lotka-Volterra dynamics. We find that the extinction time of a well mixed system grows linearly with system size and that the probability distribution approximately takes the shape of a shifted exponential. However, this is not true when spatial structure is added to the model. In that case we find that the probability distribution instead takes on a non-trivial shape with two characteristic slopes, and that the mean grows as a power law with an exponent greater than one. This is attributed to neutral species pairs, species that do not interact, forming domains and coarsening. In chapter 4 the results of [50] are reported and expanded, where we allow agents to choose cyclically competing strategies out of a distribution. We first study the case of three strategies and find, through both simulation and mean field equations, that the probability distributions of the agents synchronize and oscillate with time in the limit where the agents' probability distributions can be approximated as continuous. However, when we simulate the system on a one-dimensional lattice and the probability distributions are small and discretized, we find a drastic transition in stability, where the average extinction time of a strategy goes from being a power law in system size to an exponential. This transition can also be observed in space-time images with the emergence of tile patterns.
We also look into the case of four cyclically competing strategies and find results similar to those of [49], such as the coarsening of neutral domains. However, the transition from power law to exponential in the average extinction time seen for three strategies is not observed; instead we find a transition from one power law to another with a different slope. This work was supported by the United States National Science Foundation through grants DMR-0904999 and DMR-1205309. / Ph. D.
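The cyclic dominance at the heart of this model family fits in a few lines of Monte Carlo. The sketch below is an illustration under assumed rules (species s consumes species (s+1) mod 4 at randomly chosen neighboring sites on a ring; pairs at cyclic distance two are neutral), not the code behind [49] or [50]:

```python
import random

def simulate_extinction(n_sites, n_species=4, seed=0, max_steps=10**6):
    """Toy Monte Carlo of cyclic competition on a 1-D ring.

    Species s preys on species (s+1) % n_species; pairs at cyclic
    distance two are neutral and do not interact. Returns the step at
    which the first species goes extinct (or max_steps if none does).
    """
    rng = random.Random(seed)
    lattice = [rng.randrange(n_species) for _ in range(n_sites)]
    counts = [lattice.count(s) for s in range(n_species)]
    for step in range(max_steps):
        i = rng.randrange(n_sites)
        j = (i + 1) % n_sites          # right neighbor on the ring
        a, b = lattice[i], lattice[j]
        if (a + 1) % n_species == b:   # a preys on b
            counts[b] -= 1; counts[a] += 1
            lattice[j] = a
        elif (b + 1) % n_species == a: # b preys on a
            counts[a] -= 1; counts[b] += 1
            lattice[i] = b
        # neutral and same-species pairs leave the lattice unchanged
        if 0 in counts:
            return step
    return max_steps
```

Averaging the returned extinction times over many seeds and lattice sizes is how one would probe the linear-versus-power-law scaling the abstract describes.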

Analysis of Blockchain-based Smart Contracts for Peer-to-Peer Solar Electricity Transactive Markets

Lin, Jason 08 February 2019 (has links)
The emergence of blockchain technology and increasing penetration of distributed energy resources (DERs) have created a new opportunity for peer-to-peer (P2P) energy trading. However, challenges arise in such transactive markets to ensure individual rationality, incentive compatibility, budget balance, and economic efficiency during the trading process. This thesis creates an hour-ahead P2P energy trading network based on the Hyperledger Fabric blockchain and presents a comparative analysis of different auction mechanisms that form the basis of smart contracts. The auction mechanisms considered are discriminatory and uniform k-double auctions (k-DA) with different k values. This thesis also investigates the effects of four consumer and prosumer bidding strategies: random, preference factor, price-only game-theoretic approach, and supply-demand game-theoretic approach. A custom simulation framework that models the behavior of the transactive market is developed. Case studies of a 100-home microgrid at various photovoltaic (PV) penetration levels are presented using typical residential load and PV generation profiles in the metropolitan Washington, D.C. area. Results indicate that regardless of PV penetration levels and employed bidding strategies, discriminatory k-DA can outperform uniform k-DA. Even so, discriminatory k-DA is more sensitive to market conditions than uniform k-DA. Additionally, results show that the price-only game-theoretic bidding strategy leads to near-ideal economic efficiencies regardless of auction mechanisms and PV penetration levels. / MS
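A uniform k-double auction of the kind compared here can be sketched briefly. The clearing rule below (match sorted bids against sorted asks, price all trades at a convex combination of the marginal matched pair) is the textbook form of a uniform k-DA and an assumption about the thesis's exact implementation:

```python
def k_double_auction(bids, asks, k=0.5):
    """Uniform-price k-double auction sketch.

    Sort buyer bids descending and seller asks ascending, match while
    bid >= ask, then clear every matched trade at a single price
    p = k*b_m + (1-k)*a_m, where (b_m, a_m) is the last (marginal)
    matched bid/ask pair. Returns (matched pairs, clearing price).
    """
    bids = sorted(bids, reverse=True)
    asks = sorted(asks)
    matched = [(b, a) for b, a in zip(bids, asks) if b >= a]
    if not matched:
        return [], None          # no feasible trades
    b_m, a_m = matched[-1]
    price = k * b_m + (1 - k) * a_m
    return matched, price
```

A discriminatory k-DA would differ only in the last step: each matched pair (b_i, a_i) trades at its own price k*b_i + (1-k)*a_i rather than a single uniform price.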

Peace or War in the Taiwan Strait: A Game Theoretical Analysis of the Taiwan Issue

Wu, Chengqiu 20 October 2003 (has links)
I define the Taiwan issue as the tense relationship between mainland China and Taiwan since 1949. The tension used to arise from the belligerency between the Kuomintang and the Chinese Communist Party. In the past decade, Taiwan increasingly sought to define its own national identity and international status, but faced diplomatic and military pressures from mainland China, which has insisted that Taiwan is part of China. The relationship between mainland China and Taiwan has been one of the most important issues regarding peace and security in the Asia-Pacific region. In order to explore the Taiwan issue, this research examines the interactions among the United States, Taiwan, and mainland China from the realist perspective of international relations. The main research questions are: What determines the costs and benefits of the security decisions of the United States, Taiwan, and mainland China regarding the Taiwan issue? What decisions should the players make based on their costs and benefits? How do these decisions form various scenarios leading to different outcomes? How have the relations among the United States, Taiwan, and mainland China evolved since 1949? This thesis is organized as follows. First, an examination of the interactions among the three players---the United States, Taiwan, and mainland China---in a game theoretical model explores the costs and benefits of their security decisions and the formation of various security scenarios in the Taiwan Strait. Second, the evolution of security in the Taiwan Strait is reviewed and analyzed by applying the game theoretical model to the history of the Taiwan issue. Third, based on the game theoretical model, I make some speculations and predictions about the future relations between mainland China and Taiwan. / Master of Arts

Playing the Writing Game: Gaming the Writing Play

Beale, Matthew Carson 07 July 2006 (has links)
My studies consider the application of digital game theory to the instruction of writing in the first year composition classroom. I frame my argument through a dialectic of representation and simulation and the cultural shift now in progress from the latter to the former. I first address the history of multimodal composition in the writing classroom, specifically noting the movement from analysis to design. In the third chapter, I examine several primary tenets of video game theory in relation to traditional academic writing, such as the concept of authorship and the importance of a rule system. My final chapter combines multimodal composition and digital game theory to create what I term "digital game composition pedagogy." The last chapter offers new ways to discuss writing and composing through the theories of video games, and shows how video games extend the theories associated with writing to discussions that coincide with an interest that many of our students have outside of the classroom. / Master of Arts

On rational expectations and dynamic games

McGlone, James M. January 1985 (has links)
We consider the problem of uniting dynamic game theory and the rational expectations hypothesis. In doing so we examine the current trend in macroeconomic literature towards the use of dominant player games and offer an alternative game solution that seems more compatible with the rational expectations hypothesis. Our analysis is undertaken in the context of a simple deterministic macroeconomy. Wage setters are the agents in the economy and are playing a non-cooperative game with the Fed. The game is played with the wage setters selecting a nominal wage based on their expectation of the money supply, and the Fed selecting the money supply based on its expectation of the nominal wage. We find it is incorrect to use the rational expectations hypothesis in conjunction with the assumption that wage setters take the Fed's choices as an exogenous uncontrollable forcing process. We then postulate the use of a Nash equilibrium in which players have rational expectations. This results in an equilibrium that has Stackelberg properties. The nature of the solution is driven by the fact that the wage setter's reaction function is a level maximal set that covers all possible choices of the Fed. One of the largest problems we encountered in applying rational expectations to a dynamic game is the interdependence of the players' expectations. This problem raises two interesting but as yet unresolved questions regarding the expectations structures of agents: whether an endogenous expectations structure will yield rational expectations; and whether endogenous expectations can be completely modelled. In addition to the questions mentioned above we also show that the time inconsistency problem comes from either misspecifying the constraints on the policy maker or an inconsistency in interpreting those constraints. We also show that the Lucas critique holds in a game setting and how the critique relates to the reaction functions of players. / Ph. D.
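The Nash equilibrium concept this analysis builds on can be illustrated concretely with a brute-force pure-strategy search on a finite bimatrix game; this is a generic sketch of the solution concept, not the thesis's macroeconomic model:

```python
def pure_nash(payoff_a, payoff_b):
    """Enumerate pure-strategy Nash equilibria of a bimatrix game.

    payoff_a[i][j] and payoff_b[i][j] are the row and column players'
    payoffs for the action profile (i, j). A profile is an equilibrium
    when neither player can gain by deviating unilaterally.
    """
    n, m = len(payoff_a), len(payoff_a[0])
    equilibria = []
    for i in range(n):
        for j in range(m):
            row_best = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(n))
            col_best = all(payoff_b[i][j] >= payoff_b[i][l] for l in range(m))
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria
```

A Stackelberg solution would differ in structure: the leader would pick the row maximizing its payoff given the follower's best response, rather than both players best-responding simultaneously.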

Developing and Testing a Novel De-centralized Cycle-free Game Theoretic Traffic Signal Controller: A Traffic Efficiency and Environmental Perspective

Abdelghaffar, Hossam Mohamed Abdelwahed 30 April 2018 (has links)
Traffic congestion negatively affects traveler mobility and air quality. Stop and go vehicular movements associated with traffic jams typically result in higher fuel consumption levels compared to cruising at a constant speed. The first objective in the dissertation is to investigate the spatial relationship between air quality and traffic flow patterns. We developed and applied a recursive Bayesian estimation algorithm to estimate the source location (associated with traffic jams) of an airborne contaminant (aerosol) in a simulation environment. This algorithm was compared to the gradient descent algorithm and an extended Kalman filter algorithm. Results suggest that Bayesian estimation is less sensitive to the choice of the initial state and to the plume dispersion model. Consequently, Bayesian estimation was implemented to identify the location (correlated with traffic flows) of the aerosol (soot) that can be attributed to traffic in the vicinity of the Old Dominion University campus, using data collected from a remote sensing system. Results show that the source location of soot pollution is at congested intersections, which demonstrates that air quality is correlated with traffic flows and congestion caused by signalized intersections. Sustainable mobility can help reduce traffic congestion and vehicle emissions, and thus, optimizing the performance of available infrastructure via advanced traffic signal controllers has become increasingly appealing. The second objective in the dissertation is to develop a novel de-centralized traffic signal controller, achieved using a Nash bargaining game-theoretic framework, that operates a flexible phasing sequence and free cycle length to adapt to dynamic changes in traffic demand levels. The developed controller was implemented and tested in the INTEGRATION microscopic traffic assignment and simulation software.
The proposed controller was compared to the operation of an optimum fixed-time coordinated plan, an actuated controller, a centralized adaptive phase split controller, a decentralized phase split and cycle length controller, and a fully coordinated adaptive phase split, cycle length, and offset optimization controller to evaluate its performance. Testing was initially conducted on an isolated intersection, showing a 77% reduction in queue length, a 17% reduction in vehicle emission levels, and a 64% reduction in total delay. In addition, the developed controller was tested on an arterial network producing statistically significant reductions in total delay ranging between 36% and 67% and vehicle emissions reductions ranging between 6% and 13%. Analysis of variance, Tukey, and pairwise comparison tests were conducted to establish the significance of the proposed controller. Moreover, the controller was tested on a network of 38 intersections producing significant reduction in the travel time by 23.6%, a reduction in the queue length by 37.6%, and a reduction in CO2 emissions by 10.4%. Finally, the controller was tested on the Los Angeles downtown network composed of 457 signalized intersections, producing a 35% reduction in travel time, a 54.7% reduction in queue length, and a 10% reduction in the CO2 emissions. The results demonstrate that the proposed decentralized controller produces major improvements over other state-of-the-art centralized and de-centralized controllers. The proposed controller is capable of alleviating congestion as well as reducing emissions and enhancing air quality. / PHD / Traffic congestion affects traveler mobility and also has an impact on air quality, and consequently, on public health. Stop-and-go driving, which is typically associated with traffic jams, results in increased fuel consumption when compared to cruising at a constant speed. 
This in turn contributes to the amount of vehicle emissions that create air pollution, which contributes to global warming. Consequently, studying the spatial relationships between air quality and traffic flow patterns is directly related to enhancing air quality, as improving these patterns can reduce traffic congestion. The first objective in this dissertation is to investigate the spatial relationship between air quality and traffic flow patterns. We developed and applied a recursive Bayesian estimation algorithm to estimate the source location of an airborne contaminant (aerosol) in a simulation environment. This algorithm was compared to the gradient descent algorithm and the extended Kalman filter. Results suggest that Bayesian estimation is less sensitive to the choice of the initial state and to the plume dispersion model when compared to the other two approaches. Consequently, an experimental investigation using Bayesian estimation was conducted to identify the location (correlated with traffic flows) of the aerosol (soot) that can be attributed to traffic in the vicinity of the Old Dominion University campus, using data collected from a remote sensing system (a compact light detection and ranging [LiDAR] system). The results show that the soot pollution in the study area is located at congested intersections, which demonstrates that air quality is correlated with traffic flows and congestion caused by signalized intersections. Sustainable mobility could enhance air quality and alleviate congestion. Accordingly, optimizing the utilization of the available infrastructure using advanced traffic signal controllers has become necessary to mitigate traffic congestion in a world with growing pressure on financial and physical resources. The second objective in the dissertation is to develop a novel de-centralized traffic signal controller that is achieved using a Nash bargaining game-theoretic framework.
This framework has a flexible phasing sequence and free cycle length, and thus can adapt to dynamic changes in traffic demand. The controller was implemented and evaluated using the INTEGRATION microscopic traffic assignment and simulation software. The proposed controller was tested and compared to state-of-the-art isolated and coordinated traffic signal controllers. The proposed controller was tested on an isolated intersection, producing a reduction in the queue length ranging from 58% to 77%, and a reduction in vehicle emission levels ranging from 6% to 17%. In the case of the arterial testing, the controller was compared to an optimum fixed-time coordinated plan, an actuated controller, a centralized adaptive phase split controller, a decentralized phase split and cycle length controller, and a fully coordinated adaptive phase split, cycle length, and offset optimization controller to evaluate its performance. On the arterial network, the proposed controller produced reductions in the total delay ranging from 36% to 67%, and a reduction in vehicle emissions ranging from 6% to 13%. Statistical tests show that the proposed controller produces major improvements over other state-of-the-art centralized and de-centralized controllers. In the domain of large scale networks, simulations were conducted on the town of Blacksburg, Virginia, composed of 38 signalized intersections. The results show significant reductions on the intersection approaches, with travel time savings of 23.6%, a reduction in the average queue length of 37.6%, a reduction in the average number of vehicle stops of 23.6%, a reduction in CO₂ emissions of 10.4%, a reduction in the fuel consumption of 9.8%, and a reduction in NOₓ emissions of 5.4%. 
In addition, the proposed controller was tested on downtown Los Angeles, California, including the most congested downtown area, which has 457 signalized intersections, and compared to the performance of a decentralized phase split and cycle length controller. The results show significant reductions on the intersection links: a reduction in the average travel time of 35.1%, a reduction in the average queue length of 54.7%, a reduction in the average number of stops of 44%, a reduction in CO₂ emissions of 10%, a reduction in the fuel consumption of 10%, and a reduction in NOₓ emissions of 11.7%. Furthermore, simulations were conducted at lower traffic flow levels and showed significant improvements in network performance, producing a reduction in vehicle average total delay of 36.7%, a reduction in the stopped delay of 90.2%, and a reduction in the average number of stops of 35%, over a decentralized phase split and cycle length controller. The results demonstrate that the proposed decentralized controller reduces traffic congestion, fuel consumption, and vehicle emission levels, and produces major improvements over other state-of-the-art centralized and de-centralized controllers.
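The recursive Bayesian estimation step described above can be sketched on a discrete grid of candidate source locations. The 1/(1+d) concentration-decay likelihood below is a hypothetical stand-in for the dissertation's plume dispersion model, chosen only to make the update concrete:

```python
import math

def bayes_source_update(prior, grid, sensor_pos, reading, noise_sd=1.0):
    """One recursive Bayesian update for source localization on a grid.

    prior[i] is the current probability that the source sits at grid[i].
    Assume the expected sensor reading decays as 1/(1 + d) with distance
    d from the source, corrupted by Gaussian noise; multiply the prior
    by the Gaussian likelihood of the observed reading and renormalize.
    """
    posterior = []
    for p, (x, y) in zip(prior, grid):
        d = math.hypot(x - sensor_pos[0], y - sensor_pos[1])
        expected = 1.0 / (1.0 + d)
        lik = math.exp(-0.5 * ((reading - expected) / noise_sd) ** 2)
        posterior.append(p * lik)
    z = sum(posterior)
    return [p / z for p in posterior]
```

Feeding successive sensor readings through this update, each posterior becoming the next prior, is the recursion; the posterior mass concentrates on the grid cell best explaining the measurements, which is why the method is robust to the initial state.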

A computational game-theoretic study of reputation

Yan, Chang January 2014 (has links)
As societies become increasingly connected thanks to advancing technologies and the Internet in particular, individuals and organizations (i.e. agents hereafter) engage in innumerable interactions and constantly face the possibilities thereof. Such unprecedented connectivity offers opportunities through which social and economic benefits are realised and disseminated. Nonetheless, risky and damaging interactions abound. To promote beneficial relationships and to deter adverse outcomes, agents adopt different means and resources. This thesis focuses on reputation as a crucial mechanism for promoting positive interaction, and examines the topic from a game-theoretic perspective using computational methods. First, we investigate the design of reputation systems by incorporating economic incentives into algorithm design. Focusing on ubiquitous user-generated ratings on the Internet, we propose a truthful reputation mechanism that not only enforces honest reporting from individual raters but also takes into account their personal preferences. The mechanism is constructed using a blend of Bayesian Truth Serum and SimRank algorithms, both specifically adapted for our use case of online ratings. We show that the resulting mechanism is Bayesian incentive compatible and is computable in polynomial time. In addition, the mechanism is shown to be resistant to common manipulations on the Internet such as uniform fake ratings and targeted collusions. Lastly, we discuss detailed considerations for implementing the mechanism in practice. Second, we investigate experimentally the relative importance of reputational and social knowledge in sustaining cooperation in dynamic networks. In our experiments, U.S.-based subjects play a repeated game where, in each round, an endogenous network is formed among a group of 13 players and each player chooses a cooperative or non-cooperative action that applies to all her connections.
We vary the availability of reputational and social knowledge to subjects in 4 treatments. At the aggregate level, we find that reputational knowledge is of first-order importance for supporting cooperation, while social knowledge plays a complementary role only when reputational knowledge is available. Further community-level analysis reveals that reputational knowledge leads to the emergence of highly cooperative hubs and a dense and clustered network, while social knowledge enhances cooperation by forming a large, dense and clustered community of cooperators who exclude outsiders through link removals and link refusals. At the individual level, reputational knowledge proves essential for the emergence of network structural characteristics that are associated with cooperative actions. In contrast, in treatments without reputational information, none of the network metrics is predictive of subjects' choices of action. Furthermore, we present UbiquityLab, a pioneering online platform for conducting real-time interactive experiments for game-theoretic studies. UbiquityLab supports both synchronous and asynchronous game models, and allows for complex and customisable interaction between subjects. It offers both back-end and front-end infrastructure with a modularised design to enable rapid development and streamlined operation. For instance, in synchronous mode, all per-stage and inter-stage logic is fully encapsulated by a thin server-side module, while a suite of client-side components eases the creation of game interfaces. The platform features a robust messaging protocol, such that player connection and game states are restored automatically upon networking errors and dropped-out subjects are seamlessly substituted by customisable program players. Online experiments enjoy clear advantages over lab equivalents as they benefit from low operation cost, efficient execution, large and diverse subject pools, etc.
UbiquityLab aims to promote online experiments as an emerging research methodology in experimental economics by bringing its benefits to other researchers.
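Of the two algorithmic ingredients named above, SimRank is the more mechanical. A plain Jeh-Widom SimRank iteration (without the thesis's adaptations for online ratings, which are not specified here) looks like this:

```python
def simrank(in_nbrs, c=0.8, iters=20):
    """Plain SimRank iteration over a directed graph.

    in_nbrs maps each node to the list of its in-neighbors. Two nodes
    are similar to the extent that their in-neighbors are similar:
    s(a, a) = 1, and s(a, b) = c * mean of s(x, y) over in-neighbor
    pairs (x, y) of a and b; nodes with no in-neighbors score 0.
    """
    nodes = list(in_nbrs)
    sim = {(a, b): 1.0 if a == b else 0.0 for a in nodes for b in nodes}
    for _ in range(iters):
        new = {}
        for a in nodes:
            for b in nodes:
                if a == b:
                    new[(a, b)] = 1.0
                elif in_nbrs[a] and in_nbrs[b]:
                    s = sum(sim[(x, y)]
                            for x in in_nbrs[a] for y in in_nbrs[b])
                    new[(a, b)] = c * s / (len(in_nbrs[a]) * len(in_nbrs[b]))
                else:
                    new[(a, b)] = 0.0
        sim = new
    return sim
```

In a ratings setting one would build the graph from rater-item edges, so that raters pointed to by similar sources end up with high pairwise similarity scores.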

Stochastic stability and equilibrium selection in games

Matros, Alexander January 2001 (has links)
This thesis consists of five papers, presented as separate chapters within three parts: Industrial Organization, Evolutionary Game Theory and Game Theory. The common basis of these parts is research in the field of game theory and more specifically, equilibrium selection in different frameworks. The first part, Industrial Organization, consists of one paper co-authored with Prajit Dutta and Jörgen Weibull. Forward-looking consumers are analysed in a Bertrand framework. It is assumed that if firms can anticipate a price war and act accordingly, so can consumers. The second part, Evolutionary Game Theory, contains three chapters. All models in these papers are based on Young's (1993, 1998) approach. In Chapter 2, Sáez-Martí and Weibull's (1999) model is generalized from the Nash Demand Game to generic two-player games. In Chapter 3, co-authored with Jens Josephson, a special set of stochastically stable states is introduced, the minimal construction, which is the long-run prediction under imitation behavior in normal form games. In Chapter 4, best reply and imitation rules are considered on extensive form games with perfect information. / Diss. Stockholm : Handelshögsk., 2001

Procurement Network Formation : A Cooperative Game Theoretic Approach

Chandrashekar, T S 11 1900 (has links)
Complex economic activity often involves inter-relationships at several levels of production, often referred to as supply chains or procurement networks. In this thesis we address the problem of forming procurement networks for items with value adding stages that are linearly arranged. Formation of such procurement networks involves a bottom-up assembly of complex production, assembly, and exchange relationships through supplier selection and contracting decisions. Recent research in supply chain management has emphasized that such decisions need to take into account the fact that suppliers and buyers are intelligent and rational agents who act strategically. Game theory has therefore emerged as a crucial tool for supply chain researchers to model, analyze, and design supply chains that are both efficient and stable. In this thesis, we explore cooperative game theory as a framework to model and analyze the formation of efficient and stable procurement networks. We view the problem of Procurement Network Formation (PNF) for multiple units of a single item as a cooperative game where agents cooperate to form a surplus maximizing procurement network and then share the surplus in a fair manner. We address this problem in three different informational settings: (a) Complete information environments, (b) Incomplete but non-exclusive information environments and (c) Incomplete information environments. In the complete information case, we first investigate the use of the core as a solution concept. We show the structural conditions under which the core is non-empty. We then provide an extensive form game that implements the core in sub-game perfect Nash equilibrium whenever the core is non-empty. Secondly, we examine the implications of using the Shapley value as a solution concept for the game when the buyer is also included as a game theoretic agent. 
Analogous to the mechanism that implements the core, we adapt and construct an extensive form game to implement the Shapley value of the game. In the incomplete but non-exclusive information case, we focus on the incentive compatible coarse core as an appropriate solution concept and show its non-emptiness for the PNF game. In the incomplete information case, we focus on the incentive compatible fine core as an appropriate solution concept and show its non-emptiness for the PNF game. We believe the thesis establishes cooperative game theory as an extremely effective tool to model and solve the procurement network formation problem.
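The Shapley value used as a solution concept here can be computed by brute force for small coalition games; this sketch (exponential in the number of players, so illustrative only) averages each player's marginal contribution over all orderings:

```python
from itertools import permutations

def shapley_value(players, v):
    """Shapley value of a cooperative game by direct enumeration.

    players is a list of player labels; v maps a frozenset of players
    (a coalition) to its worth, with v(frozenset()) == 0 expected.
    Each player's value is their marginal contribution averaged over
    all orderings in which the grand coalition can be assembled.
    """
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: total / len(perms) for p, total in phi.items()}
```

For a procurement network game, v(S) would be the maximum surplus coalition S can generate on its own; fairness properties (symmetry, efficiency) then follow from the averaging.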

On the parameterized complexity of finding short winning strategies in combinatorial games

Scott, Allan Edward Jolicoeur 29 April 2010 (has links)
A combinatorial game is a game in which all players have perfect information and there is no element of chance; some well-known examples include Othello, checkers, and chess. When people play combinatorial games they develop strategies, which can be viewed as a function that takes as input a game position and returns a move to make from that position. A strategy is winning if it guarantees the player victory despite whatever legal moves any opponent may make in response. The classical complexity of deciding whether a winning strategy exists for a given position in some combinatorial game has been well-studied both in general and for many specific combinatorial games. The vast majority of these problems are, depending on the specific properties of the game or class of games being studied, complete for either PSPACE or EXP. In the parameterized complexity setting, Downey and Fellows initiated a study of "short" (or k-move) winning strategy problems. This can be seen as a generalization of "mate-in-k" chess problems, in which the goal is to find a strategy which checkmates your opponent within k moves regardless of how he responds. In their monograph on parameterized complexity, Downey and Fellows suggested that AW[*] was the "natural home" of short winning strategy problems, but there has been little work in this field since then. In this thesis, we study the parameterized complexity of finding short winning strategies in combinatorial games. We consider both the general and several specific cases. In the general case we show that many short games are as hard classically as their original variants, and that finding a short winning strategy is hard for AW[P] when the rules are implemented as succinct circuits. For specific short games, we show that endgame problems for checkers and Othello are in FPT, that alternating hitting set, Hex, and the non-endgame problem for Othello are in AW[*], and that short chess is AW[*]-complete.
We also consider pursuit-evasion parameterized by the number of cops. We show that two variants of pursuit-evasion are AW[*]-hard, and that the short versions of these problems are AW[*]-complete.
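The "short winning strategy" question can be made concrete with a toy game. The sketch below checks whether the player to move in Nim (last player to take an object wins) can force a win within k of their own moves, regardless of the opponent's replies; it is an illustration of the k-move problem shape, not any construction from the thesis:

```python
def replies(heaps):
    """All positions reachable in one legal Nim move."""
    out = []
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):
            nxt = list(heaps)
            nxt[i] -= take
            out.append(tuple(nxt))
    return out

def wins_within(heaps, k):
    """Can the player to move force a win within k of their own moves?

    True if some move either takes the last object immediately, or
    leads to a position from which, after EVERY opponent reply, the
    player can still win within the remaining k-1 moves.
    """
    if k == 0:
        return False
    for nxt in replies(heaps):
        if all(x == 0 for x in nxt):
            return True  # this move wins the game outright
        if all(wins_within(reply, k - 1) for reply in replies(nxt)):
            return True
    return False
```

Parameterized by k, the recursion explores a game tree of depth roughly 2k, which is the structure that places such problems in classes like AW[*] rather than making them tractable outright.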
