451. On the Economics of Happiness and Climate Change. Sekulova, Filka. 28 June 2013
The present study bridges the field of happiness economics with the economics of climate change, around two research questions. The first concerns the effects of (extreme) climate events on individual happiness and their measurement. The empirical method used to analyze this relation involves identifying proxies of extreme climate events and studying their relationship with the well-being of the affected population. Here floods, and to some extent forest fires, are taken as an approximation of extreme climate events. The second research question concerns the way happiness studies can inform climate policy and how stringent climate policy would affect well-being. Assuming that effective climate abatement implies a reduction in the rate of economic (income) growth and in carbon-intensive consumption, I look at how income decline influences subjective well-being in the context of the economic crisis in Spain. To explore the happiness effect of a wider range of climate change mitigation strategies, including ones that are not solely policy-oriented, the sharing of goods is also taken as a case of a community-based initiative resulting in a reduction of greenhouse gas emissions.
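For illustration only, the following sketch shows the kind of subjective well-being regression such a design implies: a life-satisfaction score regressed on a flood-exposure indicator plus basic controls. The variable names, the synthetic data and the specification are assumptions made for this example, not the study's actual model or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic survey data (purely illustrative).
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "flooded": rng.integers(0, 2, n),      # 1 = household affected by a flood
    "log_income": rng.normal(10, 0.5, n),
    "age": rng.integers(18, 85, n),
})
# Assumed data-generating process: flood exposure lowers reported satisfaction.
df["life_sat"] = (2 + 0.8 * df["log_income"] - 0.02 * df["age"]
                  - 0.6 * df["flooded"] + rng.normal(0, 1, n))

# OLS of life satisfaction on flood exposure and controls, robust standard errors.
model = smf.ols("life_sat ~ flooded + log_income + age", data=df).fit(cov_type="HC1")
print(model.params["flooded"])  # estimated well-being effect of flood exposure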
452. Ecological and economic impacts of distant water fishing: three empirical studies. Freiherr von Gagern, Cyrill Antonius. 17 July 2014
In the second half of the 20th century, the industrialization of fishing vessels led to an over-exploitation of marine resources in the near-shore areas of traditional fishing nations. As a result, industrialized fishing nations started to explore distant waters, largely unhindered by legal boundaries, to fuel the growing demand for fish and seafood products. While the entry into force of the 1982 United Nations Convention on the Law of the Sea (UNCLOS) and the 1995 United Nations Fish Stocks Agreement dramatically restructured the rights and responsibilities of marine capture fisheries, they left much room for economically inefficient and ecologically unsustainable exploitation of fisheries resources. In three essays, this thesis sheds light on the interplay between industrialized distant water fleets and the often vulnerable regions where they fish.
The first essay critically reviews the development of distant water fishing in the tropical world over the past 50 years and provides a quantitative analysis of the relationship between distant water fleets and tropical host countries. It concludes that there is a clear shift in power from traditional fishing countries to mainly Asian newcomers, and that small and economically weak countries are most vulnerable to exploitative relationships with distant water fishing nations. The second essay addresses the question of whether, from an economic point of view, Pacific Island Countries (PICs) should continue granting access to distant water fishing nations or whether they should attempt to develop a domestic fishing industry of their own. To this end, a newly developed multispecies, multiplayer bioeconomic model is analyzed. It provides the insight that PICs would maximize their profits by phasing out access agreements with distant water fishing nations and replacing them with domestic fishing effort. The alternative is to raise access fees considerably, although this may of course have different long-term consequences. In the third essay, an empirical model is constructed to derive live catch weight for Eastern Atlantic and Mediterranean bluefin tuna (EBFT) from monthly trade data for all major countries involved in its trade between 2005 and 2011. Based on the estimated total catch, we conclude that EBFT was persistently overfished throughout the entire period.
In conclusion, this thesis contributes to the literature on the impact of distant water fishing on fish stock health in the high seas and in tropical Exclusive Economic Zones, and on the welfare of resource-rich developing countries.
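To make the access-fees-versus-domestic-fishing trade-off of the second essay concrete, here is a minimal single-species Gordon-Schaefer sketch. It is not the multispecies, multiplayer bioeconomic model developed in the thesis, and every parameter value below is an illustrative assumption.
import numpy as np

# Illustrative Gordon-Schaefer parameters (all values assumed).
r, K, q = 0.4, 1.0e6, 5.0e-6   # intrinsic growth, carrying capacity (t), catchability
price, cost = 1500.0, 350.0    # USD per tonne and USD per unit of fishing effort
fee_share = 0.06               # access fee as a share of catch value (assumed)

effort = np.linspace(0.0, r / q, 200)
sustainable_yield = q * effort * K * (1.0 - q * effort / r)  # equilibrium harvest

# Option A: license a distant water fleet and collect access fees on its catch.
fee_revenue = fee_share * price * sustainable_yield
# Option B: fish domestically and keep the resource rent (revenue minus effort cost).
domestic_profit = price * sustainable_yield - cost * effort

print("max annual fee revenue   :", round(fee_revenue.max()))
print("max annual domestic rent :", round(domestic_profit.max()))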
453. Essays on Social Groups: Inequality, Influence and the Structure of Interactions. Cuhadaroglu, Tugce. 30 September 2013
One of the main questions of economics has always been to understand and formalize the dynamic relation between what is individual and what is social. This dissertation takes two complementary perspectives to explore this major question.
In the first approach, which corresponds to the first chapter, we investigate how to evaluate the degree to which differences in individual characteristics result in differences in social outcomes; so to speak, we chase the 'individual' in the 'social'. We focus on non-income inequalities between social groups, such as inequalities in educational attainment, occupational status, health or subjective well-being. We propose a new methodology, the Domination Index, to evaluate those inequalities. Taking an axiomatic approach, we show that a set of desirable properties for a group inequality measure, when the variable of interest is not cardinal but ordinal, characterizes the Domination Index. Moreover, building on our analysis, we explore the close relation between segregation and group inequalities.
The remaining two chapters of the thesis can be seen as a chase for the 'social' in the 'individual'. We consider an individual as a social agent and investigate the role of social interactions in individual decision making. In the second chapter, we focus on the problem of separately identifying social influence and homophily. We suggest a methodology that exploits individual decision outcomes in order to assess the level of homophily and the influence received through social interaction.
The subject matter of the third chapter, on the other hand, is the structure of social interactions. We suggest uncovering the underlying structure of a social network by analyzing individual behavior patterns. Overall, we characterize four different possible interaction structures by which individuals may be connected in a social network.
454. Essays on Monetary and Fiscal Policy. Blas Pérez, Beatriz de. 20 September 2002
This dissertation analyzes monetary and fiscal policy issues in macroeconomies with financial frictions. Chapter 1 analyzes numerically the performance of monetary policy rules in economies with and without financial imperfections. Endogenously driven monetary policy is compared to a constant money growth rule in a limited participation framework. The imperfections arise due to asymmetric information in the production of capital. The model economy fits US data reasonably well. The setup with financial imperfections is able to account for some stylized facts of the business cycle, such as the negative correlation between output and the risk premium, which are absent in the standard frictionless case. The use of interest rate rules in a limited participation model has the opposite stabilization effects compared with new Keynesian models. More concretely, in a limited participation model, using interest rate rules helps stabilize both output and inflation in the face of technology shocks, whereas there is a trade-off between stabilizing output and inflation if the shock is to money demand. Finally, the effects of a Taylor rule are stronger (either more strongly stabilizing or more strongly destabilizing) when there are financial frictions in the economy.
In Chapter 2, postwar US data are employed to analyze whether financial frictions may have contributed to reducing the variability of output and inflation since the 1980s. Data on output, inflation, the interest rate, and the risk premium indicate a structural break at 1981:2, after which these variables become less volatile. The model economy of Chapter 1 is used to calibrate an interest rate rule for each subsample. Without financial frictions, the results confirm the widely recognized change in the conduct of monetary policy by reporting substantially different rules before and after 1981:2. However, in contrast with the empirical literature, the calibration fails to assign more weight to inflation stabilization after 1981:2. Interestingly, when a positive level of monitoring costs is introduced, the procedure yields two calibrated rules that are much less different than those found in the absence of frictions. Furthermore, the calibrated rules do assign a stronger weight to inflation stabilization and less to output stabilization after 1981:2, in contrast to the zero monitoring costs case. When the rule, monitoring costs, and shocks are allowed to change across subsamples, the calibration reports two interest rate rules that assign more weight to inflation and less to output stabilization after 1981:2. Also, the degree of financial frictions is 10% lower after 1981:2.
Chapter 3 studies the growth and welfare consequences of imposing debt limits on the government budget constraint. The model economy displays endogenous growth and allows public spending to play two different roles, either as a productive input or as services in the utility function (in this case private capital drives growth). Introducing debt limits is decisive for the growth effects of different fiscal policies.
In the long run, without debt limits, the growth effects of raising taxes on labor income are negative regardless of the role of government spending. Interestingly, with debt limits, higher labor tax rates affect growth positively if government spending is productive. The chapter also analyzes the dynamic effects of imposing a more restrictive fiscal policy in order to attain a debt limit with a lower debt-to-output ratio, for the case of productive government spending. Raising taxes to lower debt leads to a new balanced growth path with higher growth and lower taxes, because of the productive role of government spending. For the same reason, a fiscal policy that reduces the ratio of government spending to output has the opposite effect, reducing growth and output. Finally, raising labor income taxes implies a lower welfare cost of reducing debt than cutting spending does.
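For reference, the interest rate rules calibrated in Chapters 1 and 2 are of the Taylor type; a generic version is sketched below. The functional form is the standard Taylor (1993) rule and the coefficients are common illustrative values, not the ones calibrated in the thesis.
def taylor_rate(inflation, output_gap, r_star=2.0, pi_star=2.0, phi_pi=1.5, phi_y=0.5):
    """Generic Taylor-type rule (annual percent); coefficients are illustrative only."""
    return r_star + inflation + phi_pi * (inflation - pi_star) + phi_y * output_gap

# Example: 3% inflation and a -1% output gap imply a 6% policy rate.
print(taylor_rate(inflation=3.0, output_gap=-1.0))  # 6.0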
455. Essays on Networks in Economics: Pairwise Influences and Communication Processes. Martí Beltran, Joan de. 28 May 2007
This thesis addresses two different questions: first, the economic consequences of pairwise influences, understood as externalities whose intensity and sign depend on the pair of agents considered; and second, the effects of different communication processes in small groups and their consequences for the optimal inner network structure of informal organizations.
In the first chapter, "Pairwise Influences and Bargaining Among the Many", we analyze how pairwise influences affect the nature and solution of distributional conflict. The pattern of pairwise influences takes the form of a weighted and directed network. When agents have to divide some available resource among themselves, pairwise influences affect the nature of this distributional conflict. We analyze how the solution to this conflict maps the heterogeneous pattern of pairwise influences into the shares and utilities obtained. We use the Nash bargaining solution as the solution to the distributional conflict. Our results rely on network centrality indexes that measure each agent's prominence due to his position in the networked influence structure.
The chapter "On Pairwise Influence Models: Networks and Efficiency" provides a complete analysis of the mapping from pairwise influences to network externalities, which accounts for all levels of indirect effects generated by the pattern of influences, and a complete characterization of the set of Pareto-efficient allocations for almost every economy with pairwise influences in terms of prestige measures derived from the literature on social network analysis.
The chapter "Spatial Spillovers and Local Public Goods" analyzes the effects of spatial spillovers on urban structure and the provision of local public goods in the light of pairwise influence models. We analyze a local public good provision game with spatial spillovers. Neighbourhoods choose how much they want to contribute to the provision of public services that are later assigned to them through the Nash bargaining solution. We analyze the role of the wealth distribution and the pattern of spatial spillovers in the levels of underprovision.
In the chapter "Communication Processes: Knowledge and Decisions", joint work with my advisor Antoni Calvó-Armengol, we introduce a model of communication in informal organizations. We analyze how different communication processes for private information affect the optimal actions of each member. Each decentralized information-sharing scheme determines the way in which each member constructs his beliefs about the task to be performed. The analysis introduces a new concept, the knowledge index, which sums up in an idiosyncratic value these higher-order beliefs for each possible communication process. In most cases the game has a unique Bayesian equilibrium. The equilibrium action of each agent is linear in the communication report the agent obtains, and this report is, precisely, weighted by the knowledge index.
The uniqueness and linearity properties of the Bayesian equilibrium allow us to derive clear welfare implications and to obtain various comparative statics results. The last chapter, "On Optimal Communication Networks", also joint work with professor Antoni Calvó-Armengol, builds on the model and results developed in the previous chapter to study a family of networked communication processes. We obtain a partial order on the set of possible networks, and our analysis shows that when there is a single round of communication and a fixed supply of possible links, the optimal geometric arrangement of these links maximizes a network span index, a measure of network irregularity. Instead, when the number of possible communication rounds increases, the optimal network is regular. Hence, for a wide set of parameters, we obtain a polarization result in terms of the number of available rounds of communication.
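The network centrality indexes behind these results are in the spirit of Katz-Bonacich centrality, which discounts influence walks of every length. A minimal computation is sketched below; the example adjacency matrix and decay factor are assumptions, not objects taken from the thesis.
import numpy as np

# Directed, weighted influence network among four agents (illustrative values).
G = np.array([[0.0, 0.5, 0.0, 0.2],
              [0.3, 0.0, 0.4, 0.0],
              [0.0, 0.6, 0.0, 0.1],
              [0.2, 0.0, 0.3, 0.0]])
delta = 0.5  # decay factor; requires delta times the spectral radius of G below 1

# Katz-Bonacich centrality b = (I - delta * G)^{-1} * 1: the discounted number of
# walks of all lengths leaving each node.
n = G.shape[0]
b = np.linalg.solve(np.eye(n) - delta * G, np.ones(n))
print(b)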
456. Information, Behavior, and the Design of Institutions. Louis, Philippos. 06 July 2012
The object of study in this dissertation is the interaction among the design of institutions, the behavior of economic agents, and information.
We start in chapter 2 by looking at an old question: what is the value of information? We do so from the point of view of groups. We set up a fairly general model of collective decision making through voting. Attention is restricted to groups in which members share a common objective. We examine whether it is possible to compare information structures in terms of the expected aggregate value they offer the group. Our first result shows that such a comparison is possible in some cases, for any such like-minded group and any possible voting rule. Still, we show that the instances where such comparisons are possible are very limited. The set of information structures that can be compared is extended if one imposes restrictions on the profile of group members' preferences or on the voting rule. We apply some of our results to a model in which information is endogenous.
In chapter 3 we turn our attention to a different institution: markets. Specific features of markets that a priori seem unrelated to information can have significant effects on how agents shape their beliefs about the available goods. Beliefs determine demand, which may be affected in unexpected ways. We look at a model in which agents can invest in a project with a limited number of available slots. Agents have incomplete information about the project's expected payoffs. Based on that, they must decide whether to invest in the risky project or take a safe outside option. Slots are assigned following an exogenous priority order. Low-priority agents may face a winner's curse: if they choose to invest and obtain a slot in the project, it must be that agents with higher priority chose not to do so. In equilibrium, only high-priority agents choose to invest when their private information indicates they should. Low-priority agents take the outside option independently of their private information. This feature of the equilibrium is maintained when we look at variations of the model with priorities assigned by lottery or determined by a Bernoulli process. We perform relevant comparative statics and compare equilibrium outcomes of our simultaneous-action model with those from a social learning model. Our analysis highlights unexplored links between market design features and the performance of such markets. In particular, agents' knowledge of the priority order affects both demand and efficiency. Furthermore, herding behavior occurs even in the absence of social learning.
In the last chapter of the dissertation we take a step back. Instead of looking at a particular institution, we switch the focus to individuals and the way they behave across different strategic situations. The order and observability of actions in a game determine the informational inferences players can make. Intuition suggests that such inferences require a higher level of sophistication when they concern actions that are not directly observed, as in simultaneous-action games, compared to sequential games, where a player can observe others' actions before making decisions. This intuition contrasts with the assumption of full sophistication embodied in Bayes-Nash equilibrium concepts. Informational cascades and the winner's curse may depend on, respectively, the ability or the inability to make such inferences. We use a novel experimental design in which subjects play, both simultaneously and sequentially, a game in which either of these phenomena can occur. We find that, in accordance with our intuition, some subjects participate in informational cascades in the sequential game and suffer a winner's curse in the simultaneous game. "Level-k" thinking and "cursed equilibrium" are theories that have been proposed to explain why an individual may suffer from the winner's curse in common value auctions and other environments. Nevertheless, according to these theories the same individual could not participate in an informational cascade. Therefore, our results contradict the predictions of both classical and behavioral theories.
457. Essays on Empirical Asset Pricing. Zhang, Xiang. 30 September 2013
This thesis consists of three essays on empirical asset pricing organized around three themes: evaluating linear factor asset pricing models by comparing their misspecification measures, understanding long-run risk in consumption and leisure and investigating its pricing performance for the cross-section of returns, and evaluating conditional asset pricing models using the methodology of dynamic cross-sectional regressions.
The first chapter is "Comparing Asset Pricing Models: What Does the Hansen-Jagannathan Distance Tell Us?". It compares the relative performance of some important linear asset pricing models in terms of the Hansen-Jagannathan (HJ) distance, using U.S. market data over the long sample period 1952-2011. The main results are as follows. First, among return-based linear models, the Fama-French (1993) five-factor model performs best in terms of normalized pricing errors, compared with the other candidates. On the other hand, the five-factor macro model of Chen, Roll, and Ross (1986) is not able to explain industry portfolios: its performance is even worse than that of the classical CAPM. Second, the Yogo (2006) non-durable and durable consumption model is the least misspecified among consumption-based asset pricing models in capturing the spread in industry and size portfolios. Third, the Lettau and Ludvigson (2002) scaled consumption-based CAPM (C-CAPM) obtains the smallest normalized pricing errors when pricing gross and excess returns on size portfolios, respectively, while the Santos and Veronesi (2006) scaled C-CAPM does better in explaining the return spread on portfolios of U.S. government bonds.
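The Hansen-Jagannathan distance used for this ranking weights a candidate stochastic discount factor's pricing errors by the inverse second-moment matrix of the test returns. The sketch below computes it for a fixed SDF series on simulated data; in practice the SDF parameters are chosen to minimize this distance, and the returns and SDF here are placeholders rather than the thesis's data.
import numpy as np

def hj_distance(m, R):
    """HJ distance for gross returns R (T x N) and an SDF series m (length T):
    sqrt(g' G^{-1} g) with g = E[m * R] - 1 and G = E[R R']."""
    T = R.shape[0]
    g = (m[:, None] * R).mean(axis=0) - 1.0  # pricing errors on gross returns
    G = R.T @ R / T                          # second-moment matrix of returns
    return float(np.sqrt(g @ np.linalg.solve(G, g)))

# Placeholder data: 600 months, 10 test portfolios, a noisy candidate SDF.
rng = np.random.default_rng(1)
R = 1.0 + rng.normal(0.006, 0.05, size=(600, 10))
m = 1.0 / (1.0 + rng.normal(0.004, 0.02, size=600))
print(hj_distance(m, R))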
The second chapter ("Leisure, Consumption and Long Run Risk: An Empirical Evaluation") uses a long-run risk model with non-separable leisure and consumption, and studies its ability to price equity returns on a variety of portfolios of U.S. stocks using data from 1948-2011. It builds on early work by Eichenbaum et al. (1988) that explores the empirical properties of intertemporal asset pricing models in which the representative agent has utility over consumption and leisure. Here we use the framework of Uhlig (2007), which allows for a stochastic discount factor with news about long-run growth in consumption and leisure. To evaluate our long-run model, we assess its performance relative to standard asset pricing models in explaining the cross-section of returns across size, industry and value-growth portfolios. We find that the long-run consumption-leisure model cannot be rejected by the J-statistic and that it does better than the standard C-CAPM, the Yogo durable consumption model and the Fama-French three-factor model. We also rank the normalized pricing errors using the HJ distance: our model has a smaller HJ distance than the other candidate models. Our paper is, as far as we are aware, the first to use leisure data with adjusted working hours as a measure of leisure, defined as the difference between a fixed time endowment and the observable hours spent on working, home production, schooling, communication, and personal care (Yang (2010)).
The third essay, "Empirical Evaluation of Conditional Asset Pricing Models: An Economic Perspective", uses dynamic Fama-MacBeth cross-sectional regressions to test the performance of several important conditional asset pricing models when the price of risk is allowed to vary over time. It compares the performance of conditional asset pricing models in terms of their ability to explain the cross-section of returns across momentum, industry, value-growth and government bond portfolios, using the new methodology introduced by Adrian et al. (2012). Our main results are as follows. First, we find that the Lettau and Ludvigson (2001) conditional model does better than the other models in explaining the cross-section of momentum and value-growth portfolios. Second, we find that the Piazzesi et al. (2007) consumption model does better than the others in pricing the cross-section of industry portfolios. Finally, we find that for the cross-section of risk premia on U.S. government bond portfolios, the conditional model of Santos and Veronesi (2006) outperforms the other candidate models. Overall, however, the Lettau and Ludvigson (2001) model does better than the other candidates. Our main contribution here is the use of a recently developed method of dynamic Fama-MacBeth regressions to evaluate the performance of leading conditional C-CAPM models on a common set of test assets over the period 1951-2012.
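For comparison with the dynamic cross-sectional regressions of Adrian et al. (2012) used in this essay, the classical static two-pass Fama-MacBeth procedure is sketched below on simulated data; the factors, betas and returns are placeholders, not the chapter's test assets.
import numpy as np

rng = np.random.default_rng(2)
T, N, K = 480, 25, 3                      # months, portfolios, factors (illustrative)
f = rng.normal(0.005, 0.02, size=(T, K))  # factor realizations
beta_true = rng.normal(1.0, 0.3, size=(N, K))
R = f @ beta_true.T + rng.normal(0.0, 0.03, size=(T, N))  # excess returns

# Pass 1: time-series regression of each portfolio on the factors to estimate betas.
X = np.column_stack([np.ones(T), f])
B = np.linalg.lstsq(X, R, rcond=None)[0]  # (K+1) x N; first row holds intercepts
betas = B[1:].T                           # N x K

# Pass 2: period-by-period cross-sectional regressions of returns on the betas.
Z = np.column_stack([np.ones(N), betas])
lambdas = np.array([np.linalg.lstsq(Z, R[t], rcond=None)[0] for t in range(T)])

lam_mean = lambdas.mean(axis=0)  # average intercept and factor risk prices
lam_tstat = lam_mean / (lambdas.std(axis=0, ddof=1) / np.sqrt(T))
print("risk prices:", lam_mean[1:], "t-stats:", lam_tstat[1:])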
458. Volatility forecasting with latent information and exogenous variables. Kesamoon, Chainarong. 08 May 2015
This thesis has presented insights into volatility forecasting, covering basic theory, simulation studies, practical implementations, and the newly proposed models. We have analyzed the data with alternative tools that can reveal factors different from those observed with more common methods. We aim to bring new light rather than to replace established models.
The normal inverse Gaussian (NIG) distribution has been employed for the exploratory data analysis, and we have concluded that it is capable of describing the marginal distribution of returns during the crisis of 2008 and that it can be estimated easily, either by the method of moments or by maximum likelihood estimation. The results show that the NIG distribution has attractive potential for modeling financial data. It provides an alternative way of modeling financial data that GARCH-type models, the popular models of the financial literature, do not supply.
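A minimal version of this exploratory step, fitting an NIG distribution to daily returns by maximum likelihood, can be written with SciPy's norminvgauss distribution. The simulated heavy-tailed return series below is a placeholder for the crisis-period data analyzed in the thesis.
import numpy as np
from scipy.stats import norminvgauss

# Placeholder heavy-tailed "daily returns" standing in for the real series.
rng = np.random.default_rng(3)
returns = 0.012 * rng.standard_t(df=4, size=2500)

# Maximum likelihood fit; SciPy's parameterization uses shape a, skewness b,
# plus location and scale.
a, b, loc, scale = norminvgauss.fit(returns)
print("NIG fit:", a, b, loc, scale)
print("log-likelihood:", norminvgauss.logpdf(returns, a, b, loc=loc, scale=scale).sum())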
The NIG distribution has desirable properties that have been used to construct the NIG stochastic volatility (NIG-SV) model. We introduce hierarchical generalized linear model (HGLM) estimation methods that can be applied to the NIG-SV model. The empirical results show that the HGLM method is as accurate as the maximum likelihood method, but it does not require the complicated integration needed for the marginal distribution. Moreover, the h-likelihood associated with the HGLM method provides estimates of the random effects, which constitute the latent information. Consequently, we apply the random effects to estimate and forecast volatility. The NIG-SV forecasting models outperform the standard forecasting models on some occasions.
Apart from the return-based models, we also investigate the properties of exogenous variables such as open, close, high and low prices as volatility estimators. The range, calculated as the difference between the daily high and low prices, is of particular interest. Several range-based volatility estimators are studied, and the bias generated by discretization is corrected. We test by simulation whether these estimators remain relevant in different scenarios when the theoretical conditions do not hold perfectly. It turns out that the Garman-Klass estimator performs impressively, and the other estimators also provide proper estimates of volatility. We conclude that the range contains substantial information and that it is worth incorporating into the new model.
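The Garman-Klass estimator mentioned here combines the daily high-low range with the open-to-close move; its standard form is sketched below, with the OHLC inputs as assumed example values.
import numpy as np

def garman_klass_variance(open_, high, low, close):
    """Garman-Klass daily variance estimate from OHLC prices (equal-length arrays)."""
    hl = np.log(high / low)
    co = np.log(close / open_)
    return 0.5 * hl**2 - (2.0 * np.log(2.0) - 1.0) * co**2

# Example day: open 100, high 103, low 98, close 101.
var_hat = garman_klass_variance(np.array([100.0]), np.array([103.0]),
                                np.array([98.0]), np.array([101.0]))
print(np.sqrt(var_hat))  # daily volatility estimate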
In the end, all the information obtained from the previous studies is incorporated into the new model. The dynamic normal inverse Gaussian stochastic volatility (DNIG) model is proposed. It is an extension of the NIG-SV model in which the dynamics of the volatility are driven by the range. The DNIG model can be extended to higher autoregressive orders and can be estimated simply by the HGLM method. The relevant information in the autoregressive process of order two has not previously been exploited in the standard SV model by other researchers. However, we show in this thesis that in most cases the order-two DNIG models fit the series better than the order-one DNIG models and the NIG-SV models. It is also remarkable that the coefficient of the range in the DNIG models appears to be a good indicator of the onset of the crisis.
Estimating the DNIG model with the HGLM method yields estimates of the random effects and, consequently, estimates of daily volatility. We show that the returns standardized by the volatilities estimated from the DNIG models do not exhibit heavy tails. These results indicate that the DNIG models are capable of capturing the relevant information in the returns. The DNIG models with the HGLM method also allow us to forecast volatility with ease. The DNIG forecasting models are tested on real data in comparison with the standard models, and they perform well. In many cases, the DNIG forecasting models are more accurate than the popular GARCH(1,1) models.
459. Expectativas y preferencias en la utilización de servicios en atención primaria. Alfranca Pardillos, Rebeca. 03 June 2014
The main aim of this thesis is to study the expectations and preferences of the Catalan population regarding the public health system, and to assess the influence of factors such as age, gender, geographical origin and socioeconomic status. Secondarily, the effect of the economic crisis on patient satisfaction with the health service is evaluated. The results show the high importance patients attach to competency and to the relationship with health professionals. Patient expectations were met with respect to competency and respectful treatment, but not with respect to the closeness of the relationship established with the healthcare professional. Of the factors studied, the most influential was age, followed by socioeconomic status. The thesis reflects a greater impact of the economic crisis and of health cuts on elderly patients, who in turn report greater dissatisfaction and more unmet expectations.
460. The level of adoption of analytical tools. Barahona Torres, Igor. 26 July 2013
This PhD thesis focuses on identifying features that might increase the use of analytical tools for better decision making. The theoretical part of the research is developed in two phases. First, an exhaustive literature review was conducted with the purpose of identifying the main features of companies that positively affect the adoption of new analytical tools. This review drew our attention to four key drivers, which formed the foundation of the theoretical model: management support for data analysis, data-based competitive advantage, systemic thinking, and communication outside the company. Second, a scale was proposed for classifying companies according to how developed their analytical capabilities are.
The theoretical model and scale then had to be validated with real-world data. Four constructs derived from the model were operationalized in 17 items. An extensive statistical analysis of the agreement, convergence, test-retest reliability and factor structure of the dimensions was conducted. The results allowed us to ascertain that the instrument is reliable and valid. The questionnaire was then sent to companies located in the Barcelona area.
The central part of the research analyzes the data obtained from the companies. First, statistical engineering, interpreted as the link between statistical thinking (strategic management) and statistical methods (the day-to-day operations), was adopted as a guideline. A set of seven statistical tools was assembled in a sequential order and relevant conclusions were obtained. It was then necessary to validate these preliminary conclusions with additional research and make them more robust. A second approach was used for this purpose: evidential reasoning, a multi-criteria decision analysis method, was implemented. The two different approaches led us to similar results.
At this stage of the thesis, unstructured and soft features of analytical practice were still missing. A complementary approach was needed to include aspects such as personal values, beliefs and motivations, and to identify how they influence the analytical practices of companies. The laddering methodology was used for this purpose. It is a type of in-depth interview applied to understand how individuals transform the attributes of a given concept into meaningful associations with respect to themselves. Consider this analogy: the questionnaire data gave us the picture of the forest, while the in-depth interviews yielded the picture of the tree.
The last part of the thesis provides guidelines for companies interested in increasing their analytical capabilities. A road map composed of five stages is offered. The proposed sequence is as follows: a company receives its diagnosis and is assigned a stage on the road map, and guidelines are then provided to move the company up the scale. The diagnosis-guidelines-diagnosis sequence should be repeated until the company reaches the highest level of the scale: analytics as a competitive advantage.
The thesis concludes by presenting two sets of values and attributes found to be decisive for increasing the adoption of analytical tools. Three values (honesty, serving society, and leadership) act on statistical thinking (the strategic level) in the company, whereas three attributes (goal setting, creativity, and information from outside) act on statistical methods (the operational level). Statistical engineering (the tactical level) establishes the link between the strategic and operational levels.
All the tools and methods developed in this thesis, including the questionnaire, the scale for ranking companies, the script for the in-depth interviews, and the road map (with its related guidelines) for moving up to higher levels of the scale, represent an original and helpful toolkit for improving the analytical capabilities of companies.