  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Essays in Microeconomic Theory and Behavioral Economics

Mezhvinsky, Dimitry 20 May 2015 (has links)
No description available.
242

Internationalism, sex role and amount of information as variables in a two-person, non-zero sum game /

Lutzker, Daniel Robert January 1959 (has links)
No description available.
243

Nash strategies with adaptation and their application in the deregulated electricity market

Tan, Xiaohuan 28 November 2006 (has links)
No description available.
244

Essays on Financing Decisions of Not-for-Profit Organisations

Jiang, Han 03 October 2022 (has links)
Chapter 1 is the first to examine the interaction between private donors and not-for-profit organisations (NPOs) when NPOs can invest endowment funds in a two-asset risky portfolio and donors can contribute to both the endowment fund and the annual campaign. I study a three-stage non-cooperative game with two types of economic agents: a cohort of heterogeneous donors and one representative NPO. In equilibrium, donors always contribute to the endowment fund; however, they may not contribute to the annual campaign. The proportion of the NPO's endowment fund invested in the risky asset is a discontinuous function of the endowment; donors contribute less to an aggressive NPO and more to a cautious one. When the NPO can solicit donors only once, the expected equilibrium contribution rises, but donors' expected utility may not. Chapter 2 presents a dynamic model of charitable giving. In each period, donors contribute to an NPO's endowment; the NPO provides a charitable good and invests in the financial market, splitting its investments between a risky asset and a risk-free asset. I introduce two types of shocks to account for uncertainty: donors' income shocks and financial market fluctuations. I show that the optimal share of the disposable endowment invested in the risky asset is constant. Donors' strategy, whether to contribute or free-ride on the NPO's investments, depends on their shadow prices: donors contribute when the NPO's endowment is relatively low. Large contributions encourage the NPO to participate in the capital market at the expense of providing the charitable good. I show that the NPO prefers an environment with a lower rate of return on risk-free assets. The NPO's risk exposure to the financial market affects both the NPO's and donors' decisions, whereas risk exposure on the donors' side does not. A regulation analysis suggests that both a portfolio ceiling and a provision floor are achievable.
Chapter 3 links two data sources: National Center for Charitable Statistics (NCCS) data for 1987-2014 and U.S. presidential election data. I develop a dynamic model to examine how the political incumbent at the national level shapes NPOs' risky portfolio selection, adjusting for a set of NPOs' intrinsic characteristics and the real interest rate. I find that Republican administrations act as a rein on NPOs' risky investments: a Republican administration is associated with a reduction in NPOs' holdings of corporate stock and a 16.28% lower equity share relative to a Democratic administration. I attribute this to Republican administrations easing NPOs' access to borrowing more than Democratic ones. I argue that NPOs behave as backward-looking investors, or are reluctant to adjust their portfolios because of significant adjustment costs, using past performance as an indicator for their current risky investment decisions. Heckman two-step estimation indicates that NPOs' decision to invest reflects endogenous sample selection rather than random choice. I show that NPOs with more severe agency costs hold a smaller equity share, and that foundation size plays a different role in the decision whether to invest in risky assets than it does among NPOs that already invest. Moreover, for investing NPOs, the equity share is expected to fall by 12.0% for a 1% increase in the real interest rate, yet NPOs are more inclined to enter risky assets when the real interest rate rises, consistent with riding a rational bubble.
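The constant risky share found in Chapter 2 echoes the classic Merton portfolio rule. As a hedged point of reference only (the textbook benchmark, not the thesis's own model or derivation), a one-line sketch with invented parameter values:

```python
# Hedged reference point: the classic Merton rule is the standard
# benchmark for a constant risky share; it is NOT the thesis's model.
# All parameter values below are invented for illustration.

def merton_share(mu, r, gamma, sigma):
    """Optimal constant fraction of wealth held in the risky asset for a
    CRRA investor: (mu - r) / (gamma * sigma**2)."""
    return (mu - r) / (gamma * sigma ** 2)

# Risky drift 7%, risk-free rate 2%, risk aversion 3, volatility 18%:
share = merton_share(mu=0.07, r=0.02, gamma=3.0, sigma=0.18)
```

Note how the share depends on the risk-free rate, consistent with the abstract's observation that the NPO prefers a lower risk-free return.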
245

In Search of Lost Deterrence – Two essays on deterrence and the models employed to study the phenomenon

Sörenson, Karl January 2019 (has links)
Deterrence is central to strategic thinking. Some of the more astute observations regarding the dynamics of deterrence were made during the Cold War by game theorists, and this set the stage for how deterrence has come to be studied. Such a strong methodological reliance on game theory requires examination in order to understand what sort of knowledge it actually yields. What sort of knowledge does one acquire when deterrence is viewed through game-theoretic models? How do they inform us about the phenomenon of deterrence? Understanding a phenomenon through models requires idealization, which in turn presupposes assumptions. This licentiate thesis investigates the type of knowledge we attain when approaching deterrence from a game-theoretic perspective. The two articles presented address two separate but related issues. The first article reviews a debate regarding which deterrence model best captures the phenomenon of deterrence, i.e. how models can be compared to one another. It presents a framework for comparing models and then appraises how these different deterrence models inform us about deterrence. The second article uses one of the more central deterrence models to evaluate how, and to what extent, the naval operation Atalanta managed to deter Somali piracy.
246

A mathematical programming-based analysis of a two stage model of interacting producers

Leleno, Joanna M. January 1987 (has links)
This dissertation is concerned with the characterization, existence and computation of equilibrium solutions in a two-stage model of interacting producers. The model represents an industry involved with two major stages of production. On the production side there exist some (upstream) firms which perform the first stage of production and manufacture a semi-finished product, and there exist some other (downstream) firms which perform the second stage of production and convert this semi-finished product to a final commodity. There also exist some (vertically integrated) firms which handle the entire production process themselves. In this research, the final commodity market is an oligopoly which may exhibit one of two possible behavioral patterns: follower-follower or multiple leader-follower. In both cases, the downstream firms are assumed to be price takers in purchasing the intermediate product. For the upstream stage, we consider two situations: a Cournot oligopoly or a perfectly competitive market. An equilibrium analysis of the model is conducted with output quantities as decision variables. The defined equilibrium solutions employ an inverse derived demand function for the semi-finished product. This function is derived and characterized through the use of mathematical programming problems which represent the equilibrating process in the final commodity market. Based on this analysis, we provide sufficient conditions for the existence (and uniqueness) of an equilibrium solution, under various market assumptions. These conditions are formulated in terms of properties of the cost functions and the final product demand function. Next, we propose some computational techniques for determining an equilibrium solution. The algorithms presented herein are based on structural properties of the inverse derived demand function and its local approximation. Both convex and nonconvex cases are considered.
We also investigate in detail the effects of various integrations among the producers on firms' profits, and on industry outputs and prices at equilibrium. This sensitivity analysis provides industrial analysts and policy makers with rich information and insights into how the foregoing quantities are affected by mergers and collusion, by the entry or exit of various types of firms, and by differences in market behavior. / Ph. D.
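The follower-follower equilibrating process in the final commodity market can be illustrated in miniature. A minimal sketch, not the dissertation's model: a symmetric two-firm Cournot downstream market with linear inverse demand and a constant unit cost for the semi-finished input, solved by best-response iteration. All parameter values are invented.

```python
def cournot_quantity(n, a, b, c, iters=200):
    """Symmetric Cournot equilibrium output per firm via best-response
    iteration. Firm i maximizes (a - b*(q_i + Q_other) - c) * q_i, so
    its best response is q_i = (a - c - b*Q_other) / (2*b)."""
    q = 0.0
    for _ in range(iters):
        q = (a - c - b * (n - 1) * q) / (2 * b)
    return q

# Two downstream firms, inverse demand P(Q) = 100 - Q, input cost 10:
q = cournot_quantity(n=2, a=100.0, b=1.0, c=10.0)
# Closed form for comparison: (a - c) / (b * (n + 1)) = 30 per firm.
```

Best-response iteration is a contraction here because n = 2; for larger oligopolies the naive iteration can cycle and needs damping, which is one reason the dissertation works with mathematical programming formulations instead.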
247

Spectrum Sharing in Decentralized Self-Configuring Networks: A Game-Theoretic Approach

Perlaza, Samir 08 July 2011 (has links) (PDF)
The work in this thesis falls within the theme of signal processing for distributed communication networks, where the network is distributed in the sense of decision-making. In this setting, the generic and important problem we studied is the following: how should a terminal with access to several communication channels split its transmit power across those channels, autonomously, and adapt it over time as communication conditions vary? This is the problem of adaptive, distributed resource allocation. We developed four lines of work, each of which led to original answers to this problem; the strong connections between them are explained in the thesis manuscript. The first line of work was opportunistic interference alignment. A reference scenario is one where two transmitter-receiver pairs interfere while communicating (on the same band, at the same time, in the same place, ...), all four terminals are equipped with multiple antennas, and one transmitter is constrained to cause no (or little) interference to the other (the so-called MIMO interference channel). We designed a multi-antenna transmission technique that exploits the following key observation, never exploited before: even when a transmitter is selfish with respect to its individual performance, it leaves spatial resources (in the right signal space, which we identified) vacant for the other transmitter. The throughput gains over the best existing algorithms were quantified using random matrix theory and Monte Carlo simulations. These results are particularly important for cognitive radio in dense environments. In a second line of work, we assumed that all transmitters in a network are free to use their resources selfishly.
Here the resources are the frequency channels and the individual performance metric is throughput. This problem can be modeled as a game whose players are the transmitters. One of our contributions was to show that this game is a potential game, which is fundamental for the convergence of distributed algorithms and for the existence of Nash equilibria. We also demonstrated a Braess paradox: enlarging a player's optimization space can reduce both individual and global performance. This conclusion has an immediate practical consequence: it can be beneficial to restrict the number of frequency channels usable in a distributed interference network. In this game we observed that distributed resource allocation algorithms (typically reinforcement learning algorithms) require a large number of iterations to converge to a stable state such as a Nash equilibrium. We therefore proposed a new solution concept, the satisfaction equilibrium: players do not change their action, even if it does not maximize their payoff, as long as a minimum performance level is attained. We then developed a methodology for studying this solution (existence, uniqueness, convergence, ...). Another of our contributions was a set of learning algorithms that converge to this solution in finite (and generically short) time. Extensive numerical results in scenarios specified by Orange confirmed the relevance of this new approach. The fourth line of work was the design of new learning algorithms that converge to solutions such as the logit equilibrium, epsilon-equilibrium, or Nash equilibrium.
Our contribution was to show how to modify existing algorithms so that they avoid cycling and converge to an equilibrium preselected at the start of the dynamics. A key idea was to couple a learning dynamic for the performance metric with the main dynamic that governs the evolution of the probability distribution over a player's possible actions. The framework of this work is entirely realistic with respect to the information available at the terminals in practice. It points to a possible way of improving the efficiency of the convergence points, which remains an open problem in this field.
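The potential-game result can be illustrated with a toy example. A hedged sketch, far simpler than the thesis's power allocation game: a tiny channel-selection congestion game, which is an exact potential game, so sequential best responses are guaranteed to converge to a Nash equilibrium. All numbers are invented.

```python
# Toy channel-selection congestion game: 3 transmitters, 2 channels.
# Congestion games admit an exact potential function, so best-response
# dynamics cannot cycle and must reach a pure Nash equilibrium.

N_PLAYERS, N_CHANNELS = 3, 2

def rate(channel, profile, player):
    """A player's rate shrinks with the number of rivals on its channel."""
    rivals = sum(1 for p, c in enumerate(profile) if c == channel and p != player)
    return 1.0 / (1 + rivals)

def best_response_dynamics(profile):
    changed = True
    while changed:
        changed = False
        for p in range(N_PLAYERS):
            best = max(range(N_CHANNELS), key=lambda c: rate(c, profile, p))
            if rate(best, profile, p) > rate(profile[p], profile, p):
                profile[p] = best
                changed = True
    return profile

eq = best_response_dynamics([0, 0, 0])  # everyone starts on channel 0
# Nash check: no player gains by unilaterally switching channels.
is_nash = all(rate(c, eq, p) <= rate(eq[p], eq, p)
              for p in range(N_PLAYERS) for c in range(N_CHANNELS))
```

Starting from the fully congested profile, one player peels off to the empty channel and the dynamics stop: the players spread out 2-1 across the channels, and the Nash check confirms no profitable deviation remains.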
248

Algorithmic Aspects of the Internet

Saberi, Amin 12 July 2004 (has links)
The goal of this thesis is to use and advance the techniques developed in the field of exact and approximation algorithms for many of the problems arising in the context of the Internet. We formalize the method of dual fitting and the idea of the factor-revealing LP, and use this combination to design and analyze two greedy algorithms for the metric uncapacitated facility location problem, with approximation factors of 1.861 and 1.61 respectively. We also provide the first polynomial-time algorithm for the linear version of a market equilibrium model defined by Irving Fisher in 1891. Our algorithm is modeled after Kuhn's primal-dual algorithm for bipartite matching. We also study the connectivity properties of the Internet graph and their impact on its structure. In particular, we consider the growth-with-preferential-attachment model of the Internet graph and prove that, under some reasonable assumptions, this graph has constant conductance.
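The linear Fisher market mentioned above is easy to compute on a small instance. As a hedged sketch, the code below uses proportional response dynamics, a different known method for linear Fisher markets, not the thesis's primal-dual algorithm, on an invented two-buyer, two-good instance with unit supplies.

```python
# Linear Fisher market via proportional response dynamics: each buyer
# repeatedly re-bids its budget across goods in proportion to the
# utility each good delivered last round. Bids converge to market
# equilibrium for linear utilities. Instance values are invented.

def fisher_equilibrium(valuations, budgets, rounds=500):
    """valuations[i][j]: buyer i's value per unit of good j (unit supply)."""
    n, m = len(valuations), len(valuations[0])
    # Start with each buyer splitting its budget evenly across goods.
    bids = [[budgets[i] / m for _ in range(m)] for i in range(n)]
    for _ in range(rounds):
        prices = [sum(bids[i][j] for i in range(n)) for j in range(m)]
        alloc = [[bids[i][j] / prices[j] for j in range(m)] for i in range(n)]
        utils = [sum(valuations[i][j] * alloc[i][j] for j in range(m))
                 for i in range(n)]
        bids = [[budgets[i] * valuations[i][j] * alloc[i][j] / utils[i]
                 for j in range(m)] for i in range(n)]
    return [sum(bids[i][j] for i in range(n)) for j in range(m)]

# Two buyers with mirrored values and equal budgets of 100:
prices = fisher_equilibrium([[2.0, 1.0], [1.0, 2.0]], [100.0, 100.0])
```

In this symmetric instance each buyer ends up spending its whole budget on its preferred good, and both goods settle at a price of 100, exactly exhausting the total money supply.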
249

Value methods for efficiently solving stochastic games of complete and incomplete information

Mac Dermed, Liam Charles 13 January 2014 (has links)
Multi-agent reinforcement learning (MARL) poses the same planning problem as traditional reinforcement learning (RL): What actions over time should an agent take in order to maximize its rewards? MARL tackles a challenging set of problems that can be better understood by modeling them as having a relatively simple environment but with complex dynamics attributed to the presence of other agents who are also attempting to maximize their rewards. A great wealth of research has developed around specific subsets of this problem, most notably when the rewards for each agent are either the same or directly opposite each other. However, there has been relatively little progress made for the general problem. This thesis addresses that gap. Our goal is to tackle the most general, least restrictive class of MARL problems. These are general-sum, non-deterministic, infinite horizon, multi-agent sequential decision problems of complete and incomplete information. Towards this goal, we engage in two complementary endeavors: the creation of tractable models and the construction of efficient algorithms to solve these models. We tackle three well known models: stochastic games, decentralized partially observable Markov decision problems, and partially observable stochastic games. We also present a new fourth model, Markov games of incomplete information, to help solve the partially observable models. For stochastic games and decentralized partially observable Markov decision problems, we develop novel and efficient value iteration algorithms to solve for game theoretic solutions. We empirically evaluate these algorithms on a range of problems, including well known benchmarks, and show that our value iteration algorithms perform better than current policy iteration algorithms. Finally, we argue that our approach is easily extendable to new models and solution concepts, thus providing a foundation for a new class of multi-agent value iteration algorithms.
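The value iteration algorithms above generalize the single-agent procedure. As a baseline sketch only (standard textbook value iteration, not the thesis's multi-agent algorithm), on a toy two-state MDP with invented transitions:

```python
# Standard single-agent value iteration: repeatedly apply the Bellman
# optimality backup until the value function stops changing. The thesis
# extends this backup to multi-agent, game-theoretic settings.

GAMMA = 0.9
# transitions[s][a] = [(probability, next_state, reward), ...]  (invented)
transitions = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 2.0)]},
}

def value_iteration(eps=1e-10):
    v = {s: 0.0 for s in transitions}
    while True:
        new_v = {
            s: max(sum(p * (r + GAMMA * v[s2]) for p, s2, r in outcomes)
                   for outcomes in actions.values())
            for s, actions in transitions.items()
        }
        if max(abs(new_v[s] - v[s]) for s in v) < eps:
            return new_v
        v = new_v

v = value_iteration()  # v[1] solves v1 = 2 + 0.9*v1, i.e. v1 = 20
```

The multi-agent versions replace the `max` over actions with an equilibrium computation over the stage game at each state, which is what makes the general-sum case so much harder.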
250

A game-based decision support methodology for competitive systems design

Briceño, Simón Ignacio 17 November 2008 (has links)
This dissertation describes the development of a game-based methodology that facilitates the exploration and selection of research and development (R&D) projects under uncertain competitive scenarios. The proposed method provides an approach that analyzes competitor positioning and formulates response strategies to forecast the impact of technical design choices on a project's market performance. A critical decision in the conceptual design phase of propulsion systems is the selection of the best architecture, centerline, core size, and technology portfolio. A key objective of this research is to examine how firm characteristics such as their relative differences in completing R&D projects, differences in the degree of substitutability between different project types, and first/second-mover advantages affect their product development strategies. Several quantitative methods are investigated that analyze business and engineering strategies concurrently. In particular, formulations based on the well-established mathematical field of game theory are introduced to obtain insights into the project selection problem. The use of game theory is explored in this research as a method to assist the selection process of R&D projects in the presence of imperfect market information. The proposed methodology focuses on two influential factors: the schedule uncertainty of project completion times and the uncertainty associated with competitive reactions. A normal-form matrix is created to enumerate players, their moves and payoffs, and to formulate a process by which an optimal decision can be achieved. The non-cooperative model is tested using the concept of a Nash equilibrium to identify potential strategies that are robust to uncertain market fluctuations (e.g., uncertainty in airline demand, airframe requirements, and competitor positioning). A first/second-mover advantage parameter is used as a scenario dial to adjust market rewards and firms' payoffs.
The methodology is applied to a commercial aircraft engine selection study where engine firms must select an optimal engine project for development. An engine modeling and simulation framework is developed to generate a broad engine project portfolio. The proposed study demonstrates that within a technical design environment, a rational and analytical means of modeling project development strategies is beneficial in high market risk situations.
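The normal-form construction described above can be sketched in a few lines. A toy version in which two firms each pick one of two hypothetical engine projects; the payoffs and the first-mover "dial" are invented, since the study's real payoffs come from its engine simulation framework.

```python
# Enumerate pure-strategy Nash equilibria of a two-firm normal-form game.

def pure_nash(payoffs):
    """Return action profiles where neither firm gains by a unilateral
    deviation. payoffs[(a1, a2)] = (u1, u2)."""
    actions = sorted({a for profile in payoffs for a in profile})
    eqs = []
    for (a1, a2), (u1, u2) in payoffs.items():
        if all(payoffs[(d, a2)][0] <= u1 for d in actions) and \
           all(payoffs[(a1, d)][1] <= u2 for d in actions):
            eqs.append((a1, a2))
    return eqs

mover_bonus = 2.0  # scenario dial adjusting the first-mover reward (invented)
payoffs = {
    ("big", "big"): (1.0, 1.0),                  # head-on competition
    ("big", "small"): (4.0 + mover_bonus, 2.0),
    ("small", "big"): (2.0, 4.0 + mover_bonus),
    ("small", "small"): (3.0, 3.0),
}
eqs = pure_nash(payoffs)
```

With these invented numbers the game has two asymmetric equilibria in which the firms differentiate their engine projects; turning the `mover_bonus` dial is how a scenario analysis would shift which equilibria survive.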
