  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Three Essays on the Search for Economic Efficiency

Delaney, Jason J 15 December 2010
The chapters of this dissertation examine efficiency failures in three areas of applied microeconomics: experimental economics, public finance, and game theory. In each case, we look at ways to resolve these failures to promote the public good. The first chapter, “An Experimental Test of the Pigovian Hypothesis,” looks at two different policies designed to reduce congestion in a common-pool resource (CPR). We present an experiment with training and a simplified decision task and find that subject behavior converges to the Nash prediction over a number of periods. A Pigovian subsidy effectively moves subject behavior to the pre-subsidy social optimum. Finally, we find a significant but non-persistent effect of information provision in moving subjects toward the social optimum. The second chapter, “Apples to Apples to Oranges,” looks at efficiency and equity failures across states resulting from public expenditure. This chapter introduces an extension of the Representative Expenditure System that uses regression methods and both state and metropolitan statistical area (MSA) level data, allowing for comparability of input costs, service requirements, and levels of need. The regression-based results are robust across state- and MSA-level formulations, although state-level approaches overestimate need for larger, less populous states. All regression-based results diverge from previous workload-based approaches. The third chapter, “Evading Nash Traps in Two-Player Simultaneous Games,” looks at efficiency failures in two-player simultaneous games. This chapter presents two new concepts: “détente” and “no-initiative,” in which players consider their own strategies and other-best-responses. We discuss their efficiency and descriptive properties across a set of simultaneous games.
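The Pigovian logic of the first chapter can be illustrated with a toy symmetric common-pool resource game (all payoff parameters below are invented for illustration, not taken from the experiment): each player's private best response overuses the resource, and a tax equal to the marginal congestion externality moves the Nash profile to the social optimum.

```python
N, GRID = 3, range(0, 9)  # 3 players, integer effort levels 0..8 (toy values)

def payoff(x, others_sum, tax=0.0):
    # linear congestion: private return minus private cost and any Pigovian tax
    total = x + others_sum
    return x * (14.0 - total) - 2.0 * x - tax * x

def best_response(others_sum, tax=0.0):
    return max(GRID, key=lambda x: payoff(x, others_sum, tax))

def symmetric_nash(tax=0.0):
    # a symmetric profile e is an equilibrium if e is a best response to itself
    return [e for e in GRID if best_response((N - 1) * e, tax) == e]

def social_optimum():
    # effort level maximizing total group payoff
    return max(GRID, key=lambda e: N * payoff(e, (N - 1) * e))

print(symmetric_nash())         # over-extraction at the Nash profile
print(social_optimum())         # lower, welfare-maximizing effort
# tax = marginal externality at the optimum: (N-1) * optimal effort = 4
print(symmetric_nash(tax=4.0))  # Pigovian tax moves Nash to the optimum
```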

Game theoretic and machine learning techniques for balancing games

Long, Jeffrey Richard 29 August 2006
Game balance is the problem of determining the fairness of actions or sets of actions in competitive, multiplayer games. This problem primarily arises in the context of designing board and video games. Traditionally, balance has been achieved through large amounts of play-testing and trial-and-error on the part of the designers. In this thesis, it is our intent to lay down the beginnings of a framework for a formal and analytical solution to this problem, combining techniques from game theory and machine learning. We first develop a set of game-theoretic definitions for different forms of balance, and then introduce the concept of a strategic abstraction. We show how machine classification techniques can be used to identify high-level player strategy in games, using the two principal methods of sequence alignment and Naive Bayes classification. Bioinformatics sequence alignment, when combined with a 3-nearest neighbor classification approach, can, with only 3 exemplars of each strategy, correctly identify the strategy used in 55% of cases using all data, and 77% of cases on data that experts indicated actually had a strategic class. Naive Bayes classification achieves similar results, with 65% accuracy on all data and 75% accuracy on data rated to have an actual class. We then show how these game-theoretic and machine learning techniques can be combined to automatically build matrices that can be used to analyze game balance properties.
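The classification pipeline described above can be sketched roughly as follows, using Python's `difflib` similarity as a stand-in for the bioinformatics sequence alignment used in the thesis, with 3-nearest-neighbor majority voting over a handful of labelled exemplars (the action sequences and strategy labels are invented for illustration):

```python
from collections import Counter
from difflib import SequenceMatcher

def similarity(a, b):
    # stand-in for an alignment score: the thesis aligns action sequences with
    # bioinformatics methods; difflib yields a comparable ratio in [0, 1]
    return SequenceMatcher(None, a, b).ratio()

def classify(sequence, exemplars, k=3):
    # exemplars: list of (action_sequence, strategy_label) pairs
    nearest = sorted(exemplars, key=lambda ex: similarity(sequence, ex[0]),
                     reverse=True)[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# invented exemplars: 3 per strategy, mirroring the "3 exemplars" setup
exemplars = [("AAAB", "rush"), ("AABA", "rush"), ("ABAA", "rush"),
             ("DDDC", "turtle"), ("DDCD", "turtle"), ("DCDD", "turtle")]
print(classify("AABB", exemplars))  # → rush
```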

A Fuzzy-Kalman filtering strategy for state estimation

Han, Lee-Ryeok 22 September 2004
This thesis considers the combination of fuzzy logic and Kalman filtering, which have traditionally been considered radically different: the former heuristic, the latter statistical. A philosophical justification for their combination is presented. Kalman filtering is revised to enable the incorporation of fuzzy logic in its formulation; this formulation is subsequently referred to as the Revised Kalman Filter. Heuristic membership functions are then used in the Revised Kalman Filter as substitutes for the system and measurement covariance matrices, forming a fuzzy rendition of the Kalman Filter. The Fuzzy Kalman Filter formulation is further revised according to a concept referred to as Parallel Distributed Compensation, which allows further heuristic adjustment of the corrective gain; this formulation is referred to as the Parallel Distributed Compensated Fuzzy Kalman Filter.

Simulated implementations of the above filters reveal that a tuned Kalman Filter provides the best performance. However, if conditions change, the Kalman Filter's performance degrades and better performance is obtained from the two versions of the Fuzzy Kalman Filter.
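The flavour of the approach can be sketched with a scalar Kalman filter whose measurement-noise term is rescaled by a heuristic membership function of the innovation. This is a loose illustration of substituting fuzzy memberships for covariance terms, not the thesis's actual formulation, and all numbers are invented:

```python
def membership(innovation, width=1.0):
    # triangular membership: trust the measurement less when the innovation is large
    return max(0.1, 1.0 - abs(innovation) / width)

def fuzzy_kalman_step(x, P, z, Q=0.01, R=0.1):
    # predict (random-walk state model)
    x_pred, P_pred = x, P + Q
    innovation = z - x_pred
    R_eff = R / membership(innovation)   # heuristic inflation of measurement noise
    K = P_pred / (P_pred + R_eff)        # corrective gain
    return x_pred + K * innovation, (1.0 - K) * P_pred

# track a constant true state of 1.0 from noisy measurements (fixed toy data)
x, P = 0.0, 1.0
for z in [1.2, 0.8, 1.1, 0.9, 1.05, 0.95]:
    x, P = fuzzy_kalman_step(x, P, z)
print(round(x, 2))  # estimate settles near 1.0
```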

Some Professionals Play Minimax: A Reexamination of the Minimax Theory in Major League Baseball

Park, Jeffrey 01 January 2010
This paper explores the behavior of Major League Baseball pitchers. We analyze pitching data from 2007-2010 to determine whether pitchers' actions follow minimax play. We also examine what the OPS statistic tells us about a pitcher's value.
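Minimax play in the pitcher-batter matchup can be illustrated with a 2x2 zero-sum toy game (the success probabilities below are invented, not from the paper's data): the pitcher mixes pitch types so that the batter's expected success is the same whichever pitch he sits on.

```python
def pitcher_minimax(M):
    # M[i][j]: batter success probability when pitch i meets batter guess j
    (a, b), (c, d) = M
    q = (d - c) / (a - b - c + d)  # equilibrium weight on pitch 0
    value = q * a + (1 - q) * c    # batter's success rate against either guess
    return q, value

# toy matrix: rows = {fastball, curveball}, cols = {sit fastball, sit curveball}
M = [[0.30, 0.20], [0.15, 0.35]]
q, v = pitcher_minimax(M)
print(q, v)
# indifference check: guessing curveball yields the same success rate v
assert abs((q * M[0][1] + (1 - q) * M[1][1]) - v) < 1e-9
```

Under minimax play, no deviation by the batter changes his expected success, which is the testable restriction papers in this literature take to the pitch-level data.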

Theories of the Fantastic: Postmodernism, Game Theory, and Modern Physics

Pike, Karen 05 December 2012
Doctor of Philosophy, Centre for Comparative Literature, University of Toronto. This dissertation examines the fantastic mode of narrative as it appears in postmodern texts in a variety of media, including literature, television, and film. By analyzing the kinds of changes which the fantastic mode has undergone in order to accommodate postmodern concerns, this project attempts to answer both how and why the fantastic has maintained its popularity and effectiveness. The first chapter seeks to define the fantastic mode by tracing the history of its definition from the early twentieth century up until the present. In doing so, it revisits the contributions of such analysts as Vax, Caillois, Todorov, and Freud. The second chapter discusses the changes to conventions demanded by postmodern discursive strategies, many of which include a back-and-forth movement between equally valid interpretations of the text. A discussion of Armin Ayren's "Der Brandstifter," a comparison of a recurring X-Files sub-plot to Bram Stoker's Dracula, and an analysis of an intentionally self-reflexive episode of The X-Files demonstrate these changes. The third chapter introduces game theory as a way of understanding the back-and-forth movement typical of the fantastic mode. Hanns Heinz Ewers's "Die Spinne" is used to illustrate the psychoanalytical aspect of this movement. The next chapter compares and contrasts three vampire films, The Addiction, Lair of the White Worm, and Nadja, in order to demonstrate how the degree to which this back-and-forth movement is present is an indicator of how successfully the fantastic effect emerges. The fifth chapter introduces modern physics as another mode for understanding the presence of the fantastic mode in the postmodern era.
The analysis of House of Leaves in the final chapter illustrates how postmodern theory, game theory, and physics all work together to explain the fantastic’s effectiveness. This dissertation’s aim is to explain how and why a mode once defined as a specific nineteenth-century phenomenon keeps reinventing itself and re-emerging to continue to frighten and entertain us.

Power control and capacity analysis in cognitive radio networks

Zhou, Pan 16 May 2011
The objective of this research is to investigate the power-control problem and analyze network capacity in cognitive radio (CR) networks. For CR users, or secondary users (SUs), two spectrum-access schemes exist: spectrum underlay and spectrum overlay. Spectrum overlay improves spectrum utilization by granting SUs the authority to sense and explore the unused spectrum bands provided by primary users (PUs); in this scheme, designing effective spectrum-sensing techniques in the PHY layer is the major concern. Spectrum underlay permits SUs to share the same spectrum bands with PUs at the same time and location; in this scheme, designing robust power-control algorithms that guarantee the QoS of both primary and secondary transmissions is the main task. In this thesis, we first investigate power-control problems in CR networks. Specifically, we conduct two research works on power control for CDMA and OFDMA CR networks. Because SUs compete for spectrum access, non-cooperative game theory is the standard mathematical tool for studying the power-control problem. Game-theoretic approaches provide distributed solutions, which fits the needs of CR networks. However, they require channel state information (CSI) exchange among all SUs, which causes large overheads in large network deployments. To gain better network scalability and design a power-control algorithm robust to hostile radio-access environments, we propose a reinforcement-learning-based repeated power-control game that solves the problem for the first time. The remainder of the dissertation studies the throughput capacity scaling of the newly arising cognitive radio ad hoc networks (CRAHNs). Stimulated by the seminal work of Gupta and Kumar, the fundamental throughput scaling law for large-scale wireless ad hoc networks has become an active research topic of great theoretical value. Our research studies this scaling in the scenario of CRAHNs under the impact of PU activity, a typical and important network scenario that has not been studied before. We believe this research has unique value and will have an impact on the research community.
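The thesis's own algorithms are game-theoretic and learning-based; as a minimal distributed stand-in, the classic target-SINR power-control iteration (Foschini-Miljanic style) shows how each user can update its transmit power using only its own measured SINR. The channel gains, noise level, and SINR targets below are invented:

```python
def sinr(p, G, noise, i):
    # signal-to-interference-plus-noise ratio at receiver i
    interference = noise + sum(G[i][j] * p[j] for j in range(len(p)) if j != i)
    return G[i][i] * p[i] / interference

def power_control(G, noise, targets, iters=100):
    p = [0.01] * len(targets)
    for _ in range(iters):
        # each user scales its power toward its own target SINR (fully distributed)
        p = [p[i] * targets[i] / sinr(p, G, noise, i) for i in range(len(p))]
    return p

G = [[1.0, 0.1], [0.2, 1.0]]   # G[i][j]: gain from transmitter j to receiver i
noise, targets = 0.1, [2.0, 2.5]
p = power_control(G, noise, targets)
print([round(sinr(p, G, noise, i), 3) for i in range(2)])  # converges to targets
```

When the target set is feasible, this iteration converges to the minimal power vector meeting all targets, which is why it is a standard baseline for game-theoretic power control.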

On Peer Networks and Group Formation

Ballester Pla, Coralio 23 June 2005
The aim of this thesis is to contribute to the analysis of the interaction of agents in social networks and groups. In the chapter "NP-completeness in Hedonic Games", we identify some significant limitations in standard models of cooperation in games: it is often impossible to achieve a stable organization of a society in a reasonable amount of time. The main implications of these results are the following. First, from a positive point of view, societies are bound to evolve permanently, rather than reach a steady-state configuration rapidly. Second, from a normative perspective, a planner should take into account practical time limitations in order to implement a stable social order. To obtain our results, we use the notion of NP-completeness, a well-established model of time complexity in Computer Science. In particular, we concentrate on group stability and individual stability in hedonic games. Hedonic games are a simple class of cooperative games in which each individual's utility is entirely determined by her group. Our complexity results, phrased in terms of NP-completeness, cover a wide spectrum of preference domains, including strict preferences, indifference in preferences, and undemanding preferences over sizes of groups.
They also hold if we restrict the maximum size of groups to be very small (two or three players). The last two chapters deal with the interaction of agents in a social setting, focusing on games played by agents who interact with one another. The actions of each player generate consequences that spread to all other players through a complex pattern of bilateral influences. In "Who is Who in Networks. Wanted: The Key Player" (joint with Antoni Calvó-Armengol and Yves Zenou), we analyze a model of peer effects where agents interact in a game of bilateral influences. Finite-population non-cooperative games with linear-quadratic utilities, where each player decides how much action she exerts, can be interpreted as network games with local payoff complementarities, together with a globally uniform payoff-substitutability component and an own-concavity effect. For these games, the Nash equilibrium action of each player is proportional to her Bonacich centrality in the network of local complementarities, thus establishing a bridge with the sociology literature on social networks. This Bonacich-Nash linkage implies that aggregate equilibrium activity increases with network size and density. We then analyze a policy that consists in targeting the key player, that is, the player who, once removed, leads to the optimal change in aggregate activity. We provide a geometric characterization of the key player, identified with an inter-centrality measure that takes into account both a player's centrality and her contribution to the centrality of the others. Finally, in the last chapter, "Optimal Targets in Peer Networks" (joint with Antoni Calvó-Armengol and Yves Zenou), we analyze the previous model in depth and study the properties and applicability of network-design policies. In particular, the key group is the optimal choice for a planner who wishes to maximally reduce aggregate activity. We show that this problem is computationally hard and that a simple greedy algorithm used for maximizing submodular set functions can be used to find an approximation. We also endogenize participation in the game and describe some properties of the key group. The use of greedy heuristics can be extended to other related problems, like the removal or addition of links in the network.
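A minimal sketch of the key-player calculation (toy star network, invented decay parameter): compute Bonacich centralities by fixed-point iteration, then pick the node whose removal most reduces aggregate activity, the greedy step that the chapter's submodularity result justifies extending to key groups.

```python
def bonacich(G, decay=0.2, iters=500):
    # solve b = 1 + decay * G * b by fixed-point iteration (valid when decay
    # is below the reciprocal of the spectral radius of G)
    n = len(G)
    b = [1.0] * n
    for _ in range(iters):
        b = [1.0 + decay * sum(G[i][j] * b[j] for j in range(n)) for i in range(n)]
    return b

def aggregate_without(G, removed, decay=0.2):
    # aggregate Bonacich centrality of the network with `removed` nodes deleted
    keep = [i for i in range(len(G)) if i not in removed]
    sub = [[G[i][j] for j in keep] for i in keep]
    return sum(bonacich(sub, decay))

def key_player(G, decay=0.2):
    # the key player minimizes remaining aggregate activity once removed
    return min(range(len(G)), key=lambda i: aggregate_without(G, {i}, decay))

# toy star network: node 0 is the hub
G = [[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]]
print(key_player(G))  # → 0 (removing the hub destroys the most complementarities)
```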

Pricing in a Multiple ISP Environment with Delay Bounds and Varying Traffic Loads

Gabrail, Sameh January 2008
In this thesis, we study different Internet pricing schemes and how they can be applied to a multiple-ISP environment. We first look at the current Internet architecture and the different classes that make up the Internet hierarchy. We also examine peering among Internet Service Providers (ISPs) and when it is a good idea for an ISP to consider peering; advantages and disadvantages of peering are discussed, along with speculations on the evolution of the Internet peering ecosystem. We then consider different pricing schemes that have been proposed and study the factors that make up a good pricing plan. Finally, we apply game-theoretic concepts to discuss how different ISPs could interact. We choose a pricing model based on a Stackelberg game that takes into account the effect of traffic variation among different customers in a multiple-ISP environment. It allows customers to specify their desired QoS in terms of a maximum allowable end-to-end delay, and customers only pay for the portion of traffic that meets this delay bound. Moreover, we show the effectiveness of adopting this model through a comparison with a model that does not take traffic variation into account, and we develop a naïve case to compare against our more sophisticated approach.
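A stripped-down Stackelberg pricing sketch (linear demand with invented parameters; the thesis's model with delay bounds and traffic variation is richer): the leader ISP anticipates the follower's best-response price and maximizes its own profit by backward induction over a price grid.

```python
PRICES = [round(0.05 * k, 2) for k in range(201)]  # price grid 0.00 .. 10.00

def demand(p_own, p_other, a=10.0, b=2.0, c=1.0):
    # toy linear demand: falls in own price, rises in the rival's price
    return max(0.0, a - b * p_own + c * p_other)

def follower_best_price(p_leader):
    # follower observes the leader's price and maximizes its own revenue
    return max(PRICES, key=lambda p: p * demand(p, p_leader))

def leader_best_price():
    # backward induction: leader evaluates profit given the follower's reaction
    return max(PRICES, key=lambda p: p * demand(p, follower_best_price(p)))

p1 = leader_best_price()
p2 = follower_best_price(p1)
print(p1, p2)  # the leader commits first and exploits the follower's reaction
```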

Guidance Under Uncertainty: Employing a Mediator Framework in Bilateral Incomplete-Information Negotiations

Shew, James January 2008
Bilateral incomplete-information negotiations over multiple issues present a difficult yet common negotiation problem that is complicated to solve from a mechanism design perspective. Unlike multilateral situations, where the individual aspirations of multiple agents can potentially be played against one another to achieve socially desirable outcomes, bilateral negotiations involve only two agents; this makes the negotiations appear to be a zero-sum game pitting agent against agent. While it is essentially true that the gain of one agent is the loss of the other, with multiple issues it is not unusual for issues to be valued asymmetrically, so that agents can gain on issues important to them while suffering losses on issues of less importance. Being able to make trade-offs amongst the issues to take advantage of this asymmetry allows both agents to experience overall benefit. The major complication is negotiating under the uncertainty of incomplete information, where agents do not know each other's preferences, and neither agent wants to be taken advantage of by revealing its private information to the other agent or by being too generous in its negotiating. This leaves agents stumbling in the dark trying to find appropriate trade-offs amongst issues. In this work, we introduce the Bilateral Automated Mediation (BAM) framework, aimed at helping agents alleviate the difficulties of negotiating under uncertainty by formulating a negotiation environment suitable for creating agreements that benefit both agents jointly. Our mediator is a composition of many different negotiation ideas and methods put together in a novel third-party framework that guides agents through the agreement space of the negotiation; instead of arbitrating a final agreement, it allows the agents themselves to ratify the final agreement.
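The single-text mediation idea behind such frameworks can be sketched as follows (the BAM framework itself is far richer; the issue weights and acceptance rule here are invented): the mediator repeatedly proposes a small perturbation of the current package and adopts it only if neither agent objects, so valuations stay private and neither agent is ever made worse off.

```python
import random

def mediate(u1, u2, start, rounds=200, seed=0):
    rng = random.Random(seed)
    current = list(start)
    for _ in range(rounds):
        candidate = list(current)
        i = rng.randrange(len(candidate))
        candidate[i] = 1 - candidate[i]  # flip one binary issue
        # each agent only answers accept/reject; utilities stay private
        if u1(candidate) >= u1(current) and u2(candidate) >= u2(current):
            current = candidate
    return current

# invented asymmetric valuations over four binary issues
u1 = lambda x: 5 * x[0] + 1 * x[1] + 4 * x[2] + 1 * x[3]
u2 = lambda x: 1 * x[0] + 5 * x[1] + 1 * x[2] + 4 * x[3]
start = [0, 0, 0, 0]
deal = mediate(u1, u2, start)
print(deal, u1(deal), u2(deal))  # both agents weakly gain on every accepted step
```

The acceptance rule guarantees each agent's utility is non-decreasing over the mediation, a property the abstract identifies as essential when neither agent trusts the other with its private information.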

Quantum Strategies and Local Operations

Gutoski, Gustav January 2009
This thesis is divided into two parts. In Part I we introduce a new formalism for quantum strategies, which specify the actions of one party in any multi-party interaction involving the exchange of multiple quantum messages among the parties. This formalism associates with each strategy a single positive semidefinite operator acting only upon the tensor product of the input and output message spaces for the strategy. We establish three fundamental properties of this new representation for quantum strategies and we list several applications, including a quantum version of von Neumann's celebrated 1928 Min-Max Theorem for zero-sum games and an efficient algorithm for computing the value of such a game. In Part II we establish several properties of a class of quantum operations that can be implemented locally with shared quantum entanglement or classical randomness. In particular, we establish the existence of a ball of local operations with shared randomness lying within the space spanned by the no-signaling operations and centred at the completely noisy channel. The existence of this ball is employed to prove that the weak membership problem for local operations with shared entanglement is strongly NP-hard. We also provide characterizations of local operations in terms of linear functionals that are positive and "completely" positive on a certain cone of Hermitian operators, under a natural notion of complete positivity appropriate to that cone. We end the thesis with a discussion of the properties of no-signaling quantum operations.
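The classical statement behind the quantum generalization, von Neumann's 1928 Min-Max Theorem, guarantees that every zero-sum matrix game has a value. For ordinary (non-quantum) games, a simple way to approximate that value is fictitious play, in which each player best-responds to the opponent's empirical mixture; the matrix below is matching pennies, whose value is 0:

```python
def fictitious_play(A, rounds=5000):
    # A[i][j]: payoff to the row player; the column player receives -A[i][j]
    m, n = len(A), len(A[0])
    row_counts, col_counts = [0] * m, [0] * n
    row_counts[0] += 1  # both players open with their first action
    col_counts[0] += 1
    for _ in range(rounds):
        # best responses to the opponent's empirical frequencies so far
        i = max(range(m), key=lambda r: sum(A[r][c] * col_counts[c] for c in range(n)))
        j = min(range(n), key=lambda c: sum(A[r][c] * row_counts[r] for r in range(m)))
        row_counts[i] += 1
        col_counts[j] += 1
    t = rounds + 1
    x = [r / t for r in row_counts]
    y = [c / t for c in col_counts]
    value = sum(A[i][j] * x[i] * y[j] for i in range(m) for j in range(n))
    return x, y, value

A = [[1, -1], [-1, 1]]  # matching pennies: value 0, equilibrium (1/2, 1/2)
x, y, v = fictitious_play(A)
print(round(v, 2))  # close to the game value 0
```

Fictitious play converges for all zero-sum games (Robinson's theorem); the thesis's contribution is an efficient algorithm for the far more general setting where the "strategies" are quantum operations.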
