  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Simultaneous Move Games in General Game Playing

Shafiei Khadem, Mohammad 06 1900 (has links)
General Game Playing (GGP) deals with the design of players that are able to play any discrete, deterministic, complete-information game. For a specific game like chess, designers develop a player using a specially designed algorithm and tune all of its features to play that game as well as possible. A general game player, however, knows nothing in advance about the game that is about to be played: when the game begins, the players receive the game description and must analyze it and decide how best to play. In this thesis, we focus on two-player constant-sum simultaneous move games in GGP and how this class of games can be handled. Rock-paper-scissors is a typical example of a simultaneous move game. We introduce the CFR (counterfactual regret minimization) algorithm to the GGP community for the first time and show its effectiveness in playing simultaneous move games; this is the first implementation of CFR outside the poker domain. We also improve the UCT algorithm, the current state of the art in GGP, to be more robust in simultaneous move games. In addition, we analyze how UCT performs in simultaneous move games and argue that it does not converge to a Nash equilibrium. We also compare the use of UCT and CFR in this class of games. Finally, we discuss the importance of opponent modeling and how a model of the opponent can be exploited using CFR.
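As a toy illustration of the regret-minimization idea behind CFR (not the thesis's GGP implementation), the following sketch runs regret matching in self-play on rock-paper-scissors; the *average* strategy of each player converges to the uniform Nash equilibrium. All names and parameters here are illustrative.

```python
import random

# Payoff matrix for player 0 (row = player 0's action, column = player 1's).
# Actions: 0 = rock, 1 = paper, 2 = scissors. The game is zero-sum, so
# player 1's payoff is the negation.
PAYOFF = [[ 0, -1,  1],
          [ 1,  0, -1],
          [-1,  1,  0]]

def payoff(p, a_me, a_opp):
    """Payoff to player p when p plays a_me and the opponent plays a_opp."""
    return PAYOFF[a_me][a_opp] if p == 0 else -PAYOFF[a_opp][a_me]

def regret_matching(regrets):
    """Mix over actions in proportion to positive cumulative regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [x / total for x in pos] if total > 0 else [1.0 / 3.0] * 3

def train(iterations=50_000, seed=1):
    rng = random.Random(seed)
    # Small random initial regrets break the (trivial) symmetric start.
    regrets = [[rng.random() for _ in range(3)] for _ in (0, 1)]
    strat_sums = [[0.0] * 3 for _ in (0, 1)]
    for _ in range(iterations):
        strats = [regret_matching(r) for r in regrets]
        for p in (0, 1):
            opp = strats[1 - p]
            # Expected payoff of each pure action against the opponent's mix.
            util = [sum(opp[b] * payoff(p, a, b) for b in range(3))
                    for a in range(3)]
            node_value = sum(strats[p][a] * util[a] for a in range(3))
            for a in range(3):
                regrets[p][a] += util[a] - node_value
                strat_sums[p][a] += strats[p][a]
    # The average strategy, not the last iterate, converges to equilibrium.
    return [[s / sum(sums) for s in sums] for sums in strat_sums]
```

Regret matching is the per-information-set building block of CFR; full CFR additionally weights regrets by counterfactual reach probabilities across a game tree.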
152

Demand Effects in Productivity and Efficiency Analysis

Lee, Chia-Yen 2012 May 1900 (has links)
Demand fluctuations bias the measurement of productivity and efficiency. This dissertation describes three ways to characterize the effect of demand fluctuations. First, a two-dimensional efficiency decomposition (2DED) of profitability is proposed for manufacturing, service, or hybrid production systems to account for the demand effect. The first dimension identifies four components of efficiency: capacity design, demand generation, operations, and demand consumption, using Network Data Envelopment Analysis (Network DEA). The second dimension decomposes the efficiency measures and integrates them into a profitability efficiency framework, so that each component's profitability change can be analyzed in terms of technical efficiency change, scale efficiency change, and allocative efficiency change. Second, this study proposes a proactive DEA model to account for demand fluctuations and proposes input or output adjustments to maximize effective production. Demand fluctuations lead to variations in output levels that affect measures of technical efficiency. In the short run, firms can adjust their variable resources to address demand fluctuations and perform more efficiently. Proactive DEA is a short-run capacity planning method that provides decision support to a firm interested in improving the effectiveness of a production system under demand uncertainty, using a stochastic programming DEA (SPDEA) approach. The method improves decision making related to short-run capacity expansion and estimates the expected value of effectiveness given demand. In the third part of the dissertation, a Nash-Cournot equilibrium is identified for an oligopolistic market. The standard assumption in the efficiency literature that firms desire to produce on the production frontier may not hold in an oligopolistic market, where the production decisions of all firms jointly determine the market price: an increase in one firm's output level leads to a lower market-clearing price and potentially lower profits. Models of both the production possibility set and the inverse demand function are used to identify a Nash-Cournot equilibrium and improvement targets that may not lie on the strongly efficient production frontier. This behavior is referred to as rational inefficiency, because the firm reduces its productivity level in order to increase profits.
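The rational-inefficiency argument rests on standard Cournot logic. A minimal sketch, assuming a linear inverse demand P = a − bQ and constant marginal costs (illustrative parameters, not the dissertation's model), finds the Nash-Cournot equilibrium by best-response iteration:

```python
def best_response(a, b, own_cost, rival_q):
    """Profit-maximizing quantity against a fixed rival output, for inverse
    demand P = a - b*Q and constant marginal cost:
    argmax_q (a - b*(q + rival_q) - own_cost) * q."""
    return max((a - own_cost - b * rival_q) / (2 * b), 0.0)

def cournot_equilibrium(a=100.0, b=1.0, c1=10.0, c2=20.0, tol=1e-10):
    """Find the Nash-Cournot equilibrium by Gauss-Seidel best-response
    iteration; for a duopoly with linear demand this map is a contraction."""
    q1 = q2 = 0.0
    for _ in range(10_000):
        new_q1 = best_response(a, b, c1, q2)
        new_q2 = best_response(a, b, c2, new_q1)
        if abs(new_q1 - q1) < tol and abs(new_q2 - q2) < tol:
            break
        q1, q2 = new_q1, new_q2
    return new_q1, new_q2
```

With these parameters the equilibrium matches the closed form q_i* = (a − 2c_i + c_j)/(3b), i.e. q1 = 100/3 and q2 = 70/3. Producing beyond these levels would depress the market-clearing price and lower profit, which is the behavior the abstract calls rational inefficiency.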
153

Welfare measurement, externalities and Pigouvian taxation in dynamic economies

Backlund, Kenneth January 2000 (has links)
This thesis consists of five papers. Paper [1] analyzes one possible way of replacing dynamic Pigouvian taxes by a static approximation of such taxes from the point of view of social accounting. The idea is to approximate a Pigouvian emission tax by using the instantaneous marginal willingness to pay to reduce the stock of pollution. If this approximation is close enough to the correct Pigouvian tax it will be useful for at least two reasons: (i) it brings the economy close to the socially optimal solution; and (ii) it provides information relevant for social accounting by closely approximating the value of additions to the stock of pollution. Paper [2] analyzes the welfare effects of an agreement between countries to slightly increase their emission taxes. The results indicate that such an agreement need not necessarily increase the global welfare level, even if each individual country has set its prereform emission tax to be lower than the marginal social cost of pollution. Paper [3] provides an economic framework for analyzing the global warming problem, emphasizing the use of forests as a means of carbon sequestration. We explore the difference between the decentralized economy and the socially optimal resource allocation, and discuss the appropriate tax system required to implement the first best optimum. Paper [4] incorporates the uncertainty involved in the production of nuclear energy into a dynamic general equilibrium growth model. We compare the resource allocation in the decentralized economy with the socially optimal resource allocation and design the dynamic Pigouvian taxes that make the decentralized economy reproduce the socially optimal resource allocation. Paper [5] treats externalities from nuclear power in a dynamic differential game framework involving two countries, which differ with regard to their nuclear technology. The model is solved numerically, where one country is considered relatively safe and the other relatively less safe. 
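The Pigouvian principle underlying Paper [1] can be illustrated in a static one-period toy model (the thesis's taxes are dynamic; the quadratic cost and constant marginal damage below are purely illustrative assumptions): taxing emissions at marginal damage makes the firm's choice coincide with the planner's.

```python
def private_output(price, tax=0.0):
    """Competitive firm: max (price - tax)*q - q**2/2, so the chosen output
    is q = price - tax (marginal cost equals q)."""
    return max(price - tax, 0.0)

def social_output(price, marginal_damage):
    """Planner: max price*q - q**2/2 - marginal_damage*q, so the socially
    optimal output is q = price - marginal_damage."""
    return max(price - marginal_damage, 0.0)

# The Pigouvian rule: set the emission tax equal to marginal damage, and the
# decentralized choice reproduces the social optimum.
```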
The thesis comprises five constituent papers.
154

The Complexity of angel-daemons and game isomorphism

García Chacón, Alina 07 May 2012 (has links)
The analysis of the computational aspects of strategic situations is a basic field in computer science. Two main topics related to strategic games are developed: first, the introduction and analysis of a class of games (so-called angel/daemon games) designed to assess web applications; second, the problem of isomorphism between strategic games. The two parts are considered separately.

Angel-Daemon Games. A service is a computational method that is made available for general use through a wide-area network. The performance of web services may fluctuate; at times of stress the performance of some services may be degraded, in extreme cases to the point of failure. In this thesis, uncertainty profiles and Angel-Daemon games are used to analyse service-based behaviours in situations where probabilistic reasoning may not be appropriate. In such a game, an angel player acts on a bounded number of "angelic" services in a beneficial way, while a daemon player acts on a bounded number of "daemonic" services in a negative way. Examples illustrate how game theory can be used to analyse service-based scenarios in a realistic way that lies between over-optimism and over-pessimism. The resilience of an orchestration to service failure is analysed: here angels and daemons model services that can fail when placed under stress, and the Nash equilibria of a corresponding Angel-Daemon game may be used to assign a "robustness" value to an orchestration. Finally, the complexity of equilibrium problems for Angel-Daemon games is analysed. Angel-Daemon games are, to the best of our knowledge, the first natural example of zero-sum succinct games. It is proven that deciding the existence of a pure Nash equilibrium, or of a dominant strategy for a given player, is Σ₂ᵖ-complete, and that computing the value of an Angel-Daemon game is EXP-complete, matching the known complexity results for the corresponding problems on generic families of succinctly represented games with an exponential number of actions.

Game Isomorphism. The question of whether two multi-player strategic games are equivalent, and the computational complexity of deciding such a property, is addressed. Three notions of isomorphism are considered: strong, weak, and local, each preserving a different structure of the game. Strong isomorphism preserves the utility functions and hence Nash equilibria; weak isomorphism preserves only the players' preference relations and thus pure Nash equilibria; local isomorphism preserves preferences defined only on a "close" neighbourhood of strategy profiles. The computational complexity of game isomorphism is shown to depend on the level of succinctness of the description of the input games, but to be independent of which of these isomorphisms is considered. Utilities can be given succinctly by Turing machines, Boolean circuits, or Boolean formulas, or explicitly by tables; actions can likewise be given explicitly or succinctly. For games in general form, an explicit description of actions and a succinct description of utilities are assumed. It is established that the game isomorphism problem for general-form games is equivalent to circuit isomorphism when utilities are described by Turing machines, and to the Boolean formula isomorphism problem when utilities are described by formulas. When the game is given in explicit form, it is proven that the game isomorphism problem is equivalent to the graph isomorphism problem. Finally, equivalence classes of small games and their graphical representation are examined.
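For explicit-form games, strong isomorphism can be decided by brute force over player and action relabellings, which gives a feel for its kinship with graph isomorphism. A toy sketch (illustrative, not the thesis's construction), checking that some relabelling makes the utility tables coincide exactly:

```python
from itertools import permutations

def strongly_isomorphic(g, h):
    """Brute-force test of strong isomorphism for two-player explicit-form
    games: some relabelling of players and actions makes the utility tables
    coincide exactly. g and h map (a0, a1) -> (u0, u1) with actions 0..n-1.
    Exponential in the number of actions, so only viable for small games."""
    if len(g) != len(h):
        return False
    n0 = 1 + max(a for a, _ in g)
    n1 = 1 + max(b for _, b in g)
    for swap in (False, True):                    # optionally swap players
        sizes = (n1, n0) if swap else (n0, n1)
        for p0 in permutations(range(sizes[0])):  # relabel h's player-0 actions
            for p1 in permutations(range(sizes[1])):
                ok = True
                for (a, b), (u0, u1) in g.items():
                    if swap:
                        ok = h.get((p0[b], p1[a])) == (u1, u0)
                    else:
                        ok = h.get((p0[a], p1[b])) == (u0, u1)
                    if not ok:
                        break
                if ok:
                    return True
    return False
```

For example, matching pennies with one player's actions relabelled is strongly isomorphic to the original, while scaling all utilities is not (strong isomorphism requires exact utility equality, unlike weak isomorphism, which only preserves preferences).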
155

The Audit Pricing Decisions for Accounting Firms in China : A Case Study from RSM China

Ming Hui, Yang, Lei, Zhang January 2011 (has links)
No description available.
156

none

FanChiang, Chin-Lien 27 June 2000 (has links)
none
157

Endogenous firm asymmetry vs symmetry in oligopoly models /

Tesoriere, Antonio. January 2007 (has links) (PDF)
Univ., Diss.--Louvain-la-Neuve, 2007.
158

Forced vibrations via Nash-Moser iterations

Fokam, Jean-Marcel 11 April 2014 (has links)
In this thesis, we prove the existence of large-frequency periodic solutions for the nonlinear wave equation

u_tt − u_xx − v(x)u = u³ + f(Ωt, x)   (1)

with Dirichlet boundary conditions, where Ω represents the frequency of the solution. The method we use to find the periodic solutions u(Ω) for large Ω originates in the work of Craig and Wayne [10], where they constructed solutions for free vibrations, i.e., for f = 0. Here we construct smooth solutions for forced vibrations (f ≠ 0). Given an x-dependent analytic potential v(x), previous works on (1) either assume a smallness condition on f or yield only a weak solution. The study of equations like (1) goes back at least to Rabinowitz in the sixties [25]. The main difficulty in finding periodic solutions of an equation like (1) is the appearance of small denominators in the linearized operator stemming from the left-hand side. To overcome this difficulty, we use a Nash-Moser scheme introduced by Craig and Wayne in [10].
159

Model and Analysis of Provider-User Games

Soterwood, Jeanine Michelle January 2005 (has links)
This dissertation studies the competitive dynamics between two non-identical providers competing for customers seeking low-cost and quick service. Providers have generic delay functions where, as the demand received by each provider grows, so does the delay in processing customers' requests. Given a pricing or capacity decision by each provider, customers determine the proportion of demand to send to each provider by minimizing generalized cost (monetary cost plus delay cost). The problem is formulated as a bilevel optimization, with providers competing at the upper level subject to the customers' decisions at the lower level, and the occurrence of Nash equilibria between the providers is studied. First, the providers' problem of making capacity decisions while competing for a single customer is studied. Conditions are derived for one provider to claim the entire market share, and for the occurrence of an equilibrium in which both providers receive positive demand; a numerical example in which no equilibrium exists is also presented. Both the inelastic and elastic demand cases are studied for this scenario. In a second model, providers make pricing decisions with capacity fixed. Under some assumptions, it is shown that a Nash equilibrium between providers always exists, and a numerical example is presented. These models are then combined: providers make capacity decisions while prices equilibrate according to the results of the second model. Two competing customers with demand for a homogeneous product are then introduced, with providers choosing prices as they compete for customers. This model is extended to an application along a highway corridor with a high-occupancy/toll (HOT) lane in parallel with a free road and a transit line. A government agency chooses the transit service frequency while a private toll operator competes by choosing a toll to charge single-occupancy vehicles that wish to use the HOT lane. This scenario is also modeled as a bilevel program. For the lower level, a new dynamic equilibration process is developed in which homogeneous users make mode-choice decisions based on the previous generalized costs of using a particular mode. Two numerical examples are presented: one showing a unique Nash equilibrium between the providers, and one in which multiple equilibria exist.
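The fixed-capacity pricing game can be caricatured as follows: customers split demand so that generalized costs (price plus a delay term) equalize, and each provider best-responds in price. The linear delay q/cap, the parameter values, and the grid search below are all illustrative assumptions, not the dissertation's generic delay functions:

```python
def split_demand(p1, p2, cap1, cap2, demand):
    """Lower level: customers split demand so generalized costs equalize,
    with generalized cost = price + q/cap (a linear delay term); corner
    solutions are clipped to [0, demand]."""
    q1 = (p2 - p1 + demand / cap2) / (1.0 / cap1 + 1.0 / cap2)
    q1 = min(max(q1, 0.0), demand)
    return q1, demand - q1

def best_price(i, rival_price, cap1, cap2, demand, grid):
    """Upper level: provider i's revenue-maximizing price over a grid,
    anticipating the customers' split."""
    best_p, best_rev = grid[0], -1.0
    for p in grid:
        if i == 0:
            q, _ = split_demand(p, rival_price, cap1, cap2, demand)
        else:
            _, q = split_demand(rival_price, p, cap1, cap2, demand)
        if p * q > best_rev:
            best_p, best_rev = p, p * q
    return best_p

def price_equilibrium(cap1=2.0, cap2=1.0, demand=1.0, steps=200, rounds=100):
    """Iterate best responses; if the dynamics settle, the result is an
    (approximate) Nash equilibrium of the pricing game."""
    grid = [i * 2.0 / steps for i in range(steps + 1)]  # prices in [0, 2]
    p1 = p2 = 1.0
    for _ in range(rounds):
        new_p1 = best_price(0, p2, cap1, cap2, demand, grid)
        new_p2 = best_price(1, new_p1, cap1, cap2, demand, grid)
        if (new_p1, new_p2) == (p1, p2):
            break
        p1, p2 = new_p1, new_p2
    return p1, p2
```

With these toy parameters the interior equilibrium can be checked by hand: the first-order conditions give p1 = D/(3·cap1) + 2D/(3·cap2) = 5/6 and p2 = (p1 + D/cap1)/2 = 2/3, so the larger-capacity (faster) provider sustains the higher price.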
160

Implementing Lindahl Allocation - Incorporating Experimental Observations into Mechanism Design Theory

Van Essen, Matthew J. January 2010 (has links)
Mechanism design theory has given economists a set of tools for designing institutions to achieve socially desirable outcomes. Unfortunately, the behavioral assumptions on which these theories often rest are somewhat unrealistic. Testing these institutions in a laboratory setting gives us insight into which assumptions or properties of institutions make them behaviorally successful, and these insights allow us to create new theories that offer, in principle, better actual performance. The interplay between experimental economics and economic theory therefore seems vital in mechanism design for ensuring successful institutions. It is in this spirit that this dissertation proceeds, focusing entirely on mechanisms designed to achieve the Lindahl allocation in a public goods environment. The first chapter experimentally examines three such mechanisms in a laboratory setting. It finds that the mechanism that gets closest to the Lindahl allocation is the one that induces a game with very strong stability-of-equilibrium properties. Unfortunately, this mechanism also has some clear disadvantages: first, it is very complicated; second, payoffs to consumers while learning to play the equilibrium are very low; and last, the mechanism becomes more complicated as more people participate. The second chapter uses the insights from the first experiment to create a new institution that avoids some of these concerns while maintaining the strong stability-of-equilibrium property. The third chapter contributes a missing stability result to the literature. The final chapter experimentally compares the new mechanism introduced in chapter 2 with the most successful mechanism from the first experiment; the treatments in this experiment are designed to stress the trouble areas observed above.
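The target of all these mechanisms, the Lindahl allocation, has a simple closed form in a quasi-linear economy. A sketch under the illustrative assumption u_i = a_i·ln(G) + x_i with a public good produced at constant marginal cost c (these functional forms are not from the dissertation):

```python
def lindahl_allocation(a, c):
    """Lindahl allocation for quasi-linear consumers u_i = a_i*ln(G) + x_i
    and a public good produced at constant marginal cost c. Returns the
    efficient level G* and the personalized (Lindahl) prices."""
    G = sum(a) / c                 # Samuelson condition: sum_i a_i / G = c
    prices = [ai / G for ai in a]  # p_i = consumer i's marginal benefit at G*
    return G, prices

def consumer_demand(ai, pi):
    """Facing personalized price pi per unit of the public good, consumer i
    wants argmax_G ai*ln(G) - pi*G, i.e. G = ai / pi. At Lindahl prices,
    every consumer demands the same G* (unanimity), and the prices sum to c,
    so the budget balances."""
    return ai / pi
```

Unanimity at personalized prices is exactly the property the mechanisms above try to implement as a Nash equilibrium, since consumers have no incentive to report their marginal benefits truthfully when asked directly.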
