141

Virtue Ethics and right action

Moula, Payam January 2010 (has links)
<p>This paper evaluates some arguments made against conceptions of right action within virtue ethics. I argue that the different accounts of right action can meet the objections raised against them. Michael Slote's agent-based and Rosalind Hursthouse's agent-focused accounts of right action give different judgments of right action, but there seems to be no real disagreement between the two accounts. I also argue that the concept of right action often has two important parts, relating to action guidance and moral appraisal, respectively, and that virtue ethics can deal with both without a concept of right action.</p>
142

Essays on credit markets and banking

Holmberg, Ulf January 2012 (has links)
This thesis consists of four self-contained papers related to banking, credit markets and financial stability.

Paper [I] presents a credit market model and finds, using an agent-based modeling approach, that credit crunches have a tendency to occur even when credit markets are almost entirely transparent and no external shocks are present. We find evidence supporting the asset deterioration hypothesis and results that emphasize the importance of accurate firm quality estimates. In addition, we find that an increase in the debt's time to maturity, homogeneous expected default rates and a conservative lending approach reduce the probability of a credit crunch. Thus, our results point to some hitherto partially overlooked components contributing to the financial stability of an economy.

Paper [II] derives an econometric disequilibrium model for time series data by error-correcting the supply of some good. The model distinguishes between a continuously clearing market and a market that clears only in the long run, which yields a novel test of market clearing. We apply the model to the Swedish market for short-term business loans and find that this market is characterized by a long-run non-market-clearing equilibrium.

Paper [III] studies the risk-return profile of centralized and decentralized banks. We address the conditions that favor a particular lending regime while acknowledging the effects on lending and returns caused by the course of the business cycle. To analyze these issues, we develop a model that incorporates two stylized facts: (i) banks in which lending decisions are decentralized tend to have lower costs of screening potential borrowers, and (ii) decentralized decision-making may generate inefficient outcomes because of a lack of coordination. Simulations are used to compare the two banking regimes. Among the results, it is found that even though a bank group in which decisions are decentralized may end up with a portfolio of loans that is (relatively) poorly diversified across regions, the ability to screen potential borrowers effectively may nevertheless give a decentralized bank lower overall risk in the lending portfolio than centralized decision-making would.

In Paper [IV], we argue that the valuation practice applied to a portfolio of assets matters for the calculation of Value at Risk. In particular, a seller seeking to liquidate a large portfolio may not face horizontal demand curves. We propose a partially new approach for incorporating this fact into the Value at Risk and Expected Shortfall measures and, in an empirical illustration, compare it to a competing approach. We find substantial differences.
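Paper [IV]'s core point, that a sloped demand curve raises the Value at Risk of a large position, can be sketched in a few lines of Python. This is an illustrative toy, not the paper's model: the linear price-impact coefficient and all numbers are hypothetical.

```python
import math
import random

def simulated_var(position, price, impact, alpha=0.95, n=10_000, seed=1):
    """Monte-Carlo Value at Risk of a single-asset position.

    `impact` is a hypothetical linear price-impact coefficient: liquidating
    `position` units moves the realised price down by impact * position.
    impact = 0 recovers the usual frictionless VaR.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(n):
        shocked = price * math.exp(rng.gauss(-0.0005, 0.02))  # one-day log return
        exec_price = shocked - impact * position              # sloped demand curve
        losses.append(position * (price - exec_price))
    losses.sort()
    return losses[int(alpha * n)]  # alpha-quantile of the loss distribution

frictionless = simulated_var(10_000, 100.0, impact=0.0)
with_impact = simulated_var(10_000, 100.0, impact=1e-4)
print(frictionless, with_impact)
```

With the same random draws, the sloped demand curve shifts every simulated loss up by impact times position, so the liquidation-adjusted VaR strictly exceeds the frictionless one.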
143

Information horizons in a complex world

Rosvall, Martin January 2006 (has links)
The whole in a complex system is the sum of its parts, plus the interactions between the parts. Understanding social, biological, and economic systems therefore often depends on understanding their patterns of interactions---their networks. In this thesis, the approach is to understand complex systems by making simple network models with nodes and links. It is first of all an attempt to investigate how communication over the network affects the network structure and, vice versa, how the network structure affects the conditions for communication. To explore the local mechanisms behind network organization, we used simplified social systems and modeled the response to communication. Low communication levels resulted in random networks, whereas higher communication levels led to structured networks with most nodes having very few links and a few nodes having very many links. We also explored various models where nodes merge into bigger units to reduce communication costs, and showed that these merging models give rise to the same kind of structured networks. In addition to this modeling of communication networks, we developed new ways to measure and characterize real-world networks. For example, we found that they in general favor communication over short distances, two to three steps away in the network, within what we call the information horizon. / The whole of a complex system is more than the sum of its parts, since it also includes the interactions between them. Studying social, biological, and economic systems therefore often becomes a matter of understanding their interaction patterns, that is, their networks of nodes and links. Starting from simple network models, the thesis mainly investigates how communication in networks affects network structure and, vice versa, how network structure affects the conditions for communication. We explored the mechanisms behind how networks are organized by modeling the effect of communication in simplified social systems. A low level of communication turned out to give rise to chaotic networks in which, in principle, no node had more links than any other. A high level of communication, by contrast, resulted in structured networks with a few central nodes with many links, while most nodes were peripheral with only a few links. It also turned out that all actors in the network benefited from communication, even when it was unevenly distributed. The quality of the communication, that is, the validity of the information, was also decisive for which positions in a network were favored, which we showed by studying actors who spread false information. Since efficient communication is an important part of many networks, we regard their evolution as an optimization process. Every act of communication between nodes takes time, and by merging into larger units the nodes limit these costs and make the network more efficient. These so-called merging models gave rise to the same kind of structured networks as above. By developing various ways of measuring network structure, we showed, among other things, that many real systems favor communication over short distances, two to three steps away in the network, within what we call the information horizon. We also estimated the amount of information required to orient oneself in cities, and found that it is easier to find one's way in modern, planned cities than in older cities that have evolved over a long time.
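The "information horizon" idea above, that real networks favor communication two to three steps away, can be illustrated with a toy reachability measure. This is not one of the thesis's models; the hub-and-spoke network and the horizon values are invented for illustration.

```python
from collections import deque

def distances_from(adj, src):
    """BFS shortest-path distances from src in an undirected graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def within_horizon(adj, horizon):
    """Fraction of ordered node pairs no more than `horizon` steps apart."""
    n = len(adj)
    close = sum(
        1
        for u in adj
        for v, d in distances_from(adj, u).items()
        if v != u and d <= horizon
    )
    return close / (n * (n - 1))

# A small hub-and-spoke network: node 0 links to everyone, spokes only to the hub.
adj = {0: [1, 2, 3, 4, 5]}
for v in range(1, 6):
    adj[v] = [0]

print(within_horizon(adj, 2))  # every pair is within two steps of the hub -> 1.0
```

In such a centralized structure, a horizon of two steps already covers the whole network, which is one way of seeing why a few highly connected hubs keep communication distances short.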
144

Exurban land cover and land market evolution: Analysis, review and computational experimentation of spatial and agent heterogeneity from the bottom up

Huang, Qingxu 22 January 2013 (has links)
This dissertation investigates selected empirical and theoretical aspects of land-use and land-cover change (LUCC) in exurban areas. Two challenges, the observation and monitoring of LUCC and spatially explicit modeling, are addressed using three main approaches: measuring, reviewing and agent-based modeling (ABM). All of these approaches focus on LUCC at the individual household level, investigating how micro-scale elements interact to influence macro-scale functional patterns from the bottom up. First, the temporal change in the quantity and pattern of land-cover types within exurban residential parcels in three townships in southeastern Michigan is examined using landscape metrics and local indicators of spatial association at the parcel and parcel-neighborhood levels, respectively. The results demonstrate that the number and area of exurban residential parcels increased steadily from 1960 to 2000, and that different land-cover types followed distinctive temporal trajectories. The results also indicate a convergence process at the neighborhood level through which the quantity and pattern of land cover in parcels come to conform to the appearance of the neighborhood. Second, 51 agent-based urban residential choice models are reviewed. The review divides these models into three categories: models based on classical theories; models focusing on different stages of the urbanization process; and integrated ABM and microsimulation models. It also compares how these models represent three essential features enabled by ABM: agent heterogeneity, the land market and output measurement. Challenges in incorporating these features, such as the trade-off between the simplicity and abstraction of a model and the complexity of the urban residential system, interactions among multiple features, and demands for individual-level data, are also discussed. Third, the effects of agent heterogeneity on spatial and socioeconomic outcomes under different levels of land-market representation are explored through three experiments using a stylized agent-based land-market model. The results reveal that budget heterogeneity has prominent effects on socioeconomic outcomes, while preference heterogeneity is highly pertinent to spatial outcomes. The relationship between agent heterogeneity and macro-measures becomes more complex as more land-market mechanisms are represented. The results also imply that land-market representation (e.g., competitive bidding) is indispensable for reproducing the results of classical urban land market models (e.g., the monocentric city model) in a spatial ABM when agents are heterogeneous.
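A minimal sketch of what competitive bidding with heterogeneous budgets might look like in a stylized land market. The bid rule (half of budget, scaled by amenity) and the numbers are invented for illustration; this is not the dissertation's model.

```python
def allocate_cells(budgets, amenities):
    """Toy competitive-bidding land market.

    Cells are auctioned in order of decreasing amenity.  Each still-unhoused
    agent bids half its budget times the cell's amenity; the highest bidder
    wins the cell and pays its bid.  Returns {cell: (agent, price)}.
    """
    housed = set()
    allocation = {}
    for cell in sorted(range(len(amenities)), key=lambda c: -amenities[c]):
        bids = [(budgets[a] * 0.5 * amenities[cell], a)
                for a in range(len(budgets)) if a not in housed]
        if not bids:
            break
        price, winner = max(bids)
        allocation[cell] = (winner, price)
        housed.add(winner)
    return allocation

# Three agents with heterogeneous budgets compete for two cells.
alloc = allocate_cells(budgets=[100, 60, 30], amenities=[0.9, 0.5])
print(alloc)  # the richest agent takes the high-amenity cell
```

Even in this toy, budget heterogeneity alone sorts agents across space: the richest agent captures the best location and the poorest is priced out entirely, which hints at why bidding mechanisms matter for reproducing monocentric-style outcomes.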
145

Economic Analysis of World's Carbon Markets

Bhatia, Tajinder Pal Singh 26 March 2012 (has links)
Forestry activities play a crucial role in climate change mitigation. To make carbon credits generated from such activities a tradable commodity, it is important to analyze the price dynamics of carbon markets. This dissertation contains three essays that examine various issues confronting the world's carbon markets. The first essay investigates cointegration of carbon markets using the Johansen maximum likelihood procedure. Not all of the world's carbon markets are integrated: North American carbon markets show integration, as do the CDM markets. Going forward, the possibilities for arbitrage across the world's markets are expected to be limited, and carbon trading in these markets will be globally inefficient. There is a strong need for a global agreement that allows carbon trade, so that climate change can be prevented at least cost. The second essay evaluates various econometric models for predicting price volatility in the carbon markets. The voluntary carbon market of Chicago is relatively volatile and, like other financial markets, its volatility is forecasted best by a complex non-linear GARCH model. The compliance market of Europe, on the other hand, is less volatile; its volatility is forecasted best by simple econometric models such as historical averages and plain GARCH, and it hence differs from other markets. These findings could be useful for investment decision-making and for choosing among policy instruments. The last essay focuses on agent-based models that incorporate interactions of heterogeneous entities. Artificial carbon markets obtained from such models exhibit statistical properties, namely lack of autocorrelation, volatility clustering, heavy tails, conditional heavy tails and non-Gaussianity, that are similar to those of actual carbon markets. These models possess considerably higher forecasting capabilities than the traditional econometric models. Forecast accuracy improves considerably further through experimentation, when agent characteristics such as wealth distribution, the proportion of allowances and the number of agents are set close to real market conditions.
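As a rough illustration of the GARCH(1,1) machinery the second essay relies on, the conditional-variance recursion can be written in a few lines. The parameter values below are invented for the sketch, not estimates for any carbon market.

```python
import math
import random

def garch_variance(returns, omega, alpha, beta):
    """Conditional-variance path of a GARCH(1,1):
    h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}."""
    h = [omega / (1.0 - alpha - beta)]  # start at the unconditional variance
    for r in returns[:-1]:
        h.append(omega + alpha * r * r + beta * h[-1])
    return h

# Simulate returns from the same GARCH process, then filter them back.
rng = random.Random(42)
omega, alpha, beta = 0.05, 0.1, 0.85
h_true, rets = omega / (1 - alpha - beta), []
for _ in range(1000):
    r = math.sqrt(h_true) * rng.gauss(0.0, 1.0)
    rets.append(r)
    h_true = omega + alpha * r * r + beta * h_true

h = garch_variance(rets, omega, alpha, beta)
print(sum(h) / len(h))  # hovers near the unconditional variance 0.05 / (1 - 0.1 - 0.85) = 1.0
```

The recursion captures volatility clustering, one of the stylized facts the last essay lists: a large squared return today raises tomorrow's conditional variance, and a high beta makes that elevation persist.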
146

Extremal dependency: The GARCH(1,1) model and an agent-based model

Aghababa, Somayeh January 2013 (has links)
This thesis focuses on stochastic processes, investigating the properties needed to define two tools, the extremal index and the extremogram, both of which measure extremal dependency within random time series. Two different models are introduced and their relevant properties discussed. The probability function of the agent-based model is derived explicitly and strong stationarity is proven. Data sets for both processes are simulated, and clustering of the data is investigated with two different methods. Finally, an estimate of the extremogram is used to interpret the dependency of extremes within the data.
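One standard way to estimate the extremal index mentioned above is the blocks estimator. Below is a hedged sketch: the threshold, block length and the two toy processes (iid Gaussian noise versus a moving maximum, whose true extremal index is 0.5) are illustrative choices, not the thesis's models.

```python
import random

def blocks_extremal_index(x, threshold, block_len):
    """Blocks estimator of the extremal index:
    (# blocks containing an exceedance) / (total # exceedances)."""
    exceed = [v > threshold for v in x]
    n_exc = sum(exceed)
    if n_exc == 0:
        return float('nan')
    k = sum(1 for b in range(0, len(x), block_len)
            if any(exceed[b:b + block_len]))
    return k / n_exc

rng = random.Random(7)

# Independent data: extremes do not cluster, so the index should be near 1.
iid = [rng.gauss(0, 1) for _ in range(20_000)]
u = sorted(iid)[int(0.98 * len(iid))]          # 98th-percentile threshold
theta_iid = blocks_extremal_index(iid, u, 10)

# Moving maximum: each large shock produces a pair of exceedances (theta = 0.5).
z = [rng.gauss(0, 1) for _ in range(20_001)]
clustered = [max(z[i], z[i + 1]) for i in range(20_000)]
u2 = sorted(clustered)[int(0.98 * len(clustered))]
theta_cl = blocks_extremal_index(clustered, u2, 10)

print(theta_iid, theta_cl)  # independent extremes barely cluster; the moving maximum does
```

A smaller estimate signals stronger clustering of extremes, which is exactly the kind of extremal dependency the extremal index and extremogram are designed to quantify.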
148

From agent-based models to artificial economies

Teglio, Andrea 03 October 2011 (has links)
The aim of this thesis is to propose and illustrate an alternative approach to economic modeling and policy design that is grounded in the innovative field of agent-based computational economics (ACE). The recent crisis pointed out the fundamental role played by macroeconomic policy design in preserving social welfare, and the consequent necessity of understanding the effects of coordinated policy measures on the economic system. Classic approaches to macroeconomic modeling, mainly represented by dynamic stochastic general equilibrium models, have recently been criticized for their difficulties in explaining many economic phenomena. The absence of interaction among heterogeneous agents, along with the strong rationality imputed to them, are two of the main criticisms that have emerged. In fact, decentralized market economies consist of large numbers of economic agents involved in local interactions, and aggregate macroeconomic trends should be considered the result of these local interactions. The approach of agent-based computational economics consists in designing economic models able to reproduce the complicated dynamics of recurrent chains connecting agent behaviors and interaction networks, and to explain the global outcomes that emerge from the bottom up. The work presented in this thesis tries to understand the feedback between the microstructure of the economic model and the macrostructure of policy design, investigating the effects of different policy measures on agents' behaviors and interactions. In particular, attention is focused on modeling the relation between the financial and real sides of the economy, linking the financial markets and the credit sector to the markets for goods and labor. Model complexity increases through the chapters: the agent-based models presented in the first part evolve into a more complex object in the second part, becoming a sort of complete "artificial economy". The problems tackled in the thesis are various, ranging from the investigation of the equity premium puzzle to the study of the effects of classic monetary policy rules (such as the Taylor rule) and of the macroeconomic implications of banks' capital requirements and quantitative easing.
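For reference, the classic Taylor rule mentioned above maps inflation and the output gap to a nominal policy rate. A minimal sketch with Taylor's original 0.5 coefficients (the inputs below are invented numbers):

```python
def taylor_rate(inflation, output_gap, r_star=2.0, pi_star=2.0,
                phi_pi=0.5, phi_y=0.5):
    """Taylor (1993) rule: i = r* + pi + 0.5*(pi - pi*) + 0.5*gap,
    with r* the equilibrium real rate and pi* the inflation target (in %)."""
    return r_star + inflation + phi_pi * (inflation - pi_star) + phi_y * output_gap

print(taylor_rate(2.0, 0.0))   # at target inflation and zero gap: the neutral 4.0
print(taylor_rate(4.0, -1.0))  # above-target inflation dominates a mild recession
```

Because phi_pi > 0, the rule raises the nominal rate more than one-for-one with inflation, the stabilizing property that makes it a natural benchmark policy to feed into an agent-based economy.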
149

Analytic and agent-based approaches: mitigating grain handling risks

2013 March 1900 (has links)
Agriculture is undergoing extreme change. The introduction of new generation agricultural products has generated an increased need for efficient and accurate product segregation across a number of Canadian agricultural sectors. In particular, monitoring, controlling and preventing commingling of various wheat grades is critical to continued agri-food safety and quality assurance in the Canadian grain handling system. The Canadian grain handling industry is a vast regional supply chain with many participants. Grading of grain for blending had historically been accomplished by the method of Kernel Visual Distinguishability (KVD). KVD allowed a trained grain grader to distinguish the class of a registered variety of wheat solely by visual inspection. While KVD enabled rapid, dependable, and low-cost segregation of wheat into functionally different classes or quality types, it also put constraints on the development of novel traits in wheat. To facilitate the introduction of new classes of wheat to enable additional export sales in new markets, the federal government announced that KVD was to be eliminated from all primary classes of wheat as of August 1, 2008. As an alternative, the Canadian Grain Commission has implemented a system called Variety Eligibility Declaration (VED) to replace KVD. As a system based on self-declaration, the VED system may create moral hazard for misrepresentation. This system is problematic in that incentives exist for farmers to misrepresent their grain. Similarly, primary elevators have an incentive to commingle wheat classes in a profitable manner. Clearly, the VED system will only work as desired for the grain industry when supported by a credible monitoring system. That is, to ensure the security of the wheat supply chain, sampling and testing at some specific critical points along the supply chain is needed. 
While the current technology allows the identification of visually indistinguishable grain varieties with enough precision for most modern segregation requirements, this technology is relatively slow and expensive. With the potential costs of monitoring VED through the current wheat supply chain, there is a fundamental tradeoff confronting grain handlers, and effective handling strategies will be needed to maintain historical wheat uniformity and consistency while keeping monitoring costs down. There are important operational issues in efficiently testing grain within the supply chain, including the choice of the optimal location to test and how intensively to test. The testing protocols for grain deliveries as well as maintaining effective responsiveness to information feedback among farmers will certainly become a strategic emphasis for wheat handlers in the future. In light of this, my research attempts to identify the risks, incentives and costs associated with a functional declaration system. This research tests a series of incentives designed to generate truthful behavior within the new policy environment. In this manner, I examine potential, easy-to-implement testing strategies designed to maintain integrity and efficiency in this agricultural supply chain. This study is developed in the first instance by using an analytic model to explore the economic incentives for motivating farmers' risk control efforts and handlers' optimal handling strategies with respect to testing cost, penalty level, contamination risks and risk control efforts. We solve for optimal behavior in the supply chain assuming cost minimization among the participants, under several simplifying assumptions. In reality, the Canadian grain supply chain is composed of heterogeneous, boundedly rational and dynamically interacting individuals, and none of these characteristics fit the standard optimization framework used to solve these problems.
Given this complex agent behavior, the grain supply chain is characterized by a set of non-linear relationships between individual participants, coupled with out of equilibrium dynamics, meaning that analytic solutions will not always identify or validate the set of optimized strategies that would evolve in the real world. To account for this inherent complexity, I develop an agent-based (farmers and elevators) model to simulate behaviour in a more realistic but virtual grain supply chain. After characterizing the basic analytics of the problem, the grain supply chain participants are represented as autonomous economic agents with a certain level of programmed behavioral heterogeneity. The agents interact via a set of heuristics governing their actions and decisions. The operation of a major portion of the Canadian grain handling system is simulated in this manner, moving from the individual farm up through to the country elevator level. My simulation results suggest testing strategies to alleviate misrepresentation (moral hazard) in this supply chain are more efficient for society when they are flexible and can be easily adjusted to react to situational change within the supply chain. While the idea of using software agents for modeling and understanding the dynamics of the supply chain under consideration is somewhat novel, I consider this exercise a first step to a broader modeling representation of modern agricultural supply chains. The agent-based simulation methodology developed in my dissertation can be extended to other economic systems or chains in order to examine risk management and control costs. These include food safety and quality assurance network systems as well as natural-resource management systems. Furthermore, to my knowledge there are no existing studies that develop and compare both analytic and agent-based simulation approaches for this type of complex economic situation. 
In the dissertation, I conduct explicit comparisons between the analytic and agent-based simulation solutions where applicable. While the two approaches generated somewhat different solutions, in many respects they led to similar overall conclusions regarding this particular agricultural policy issue.
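The basic deterrence logic of the analytic model, that a risk-neutral farmer misdeclares only when the expected fine falls short of the gain, can be sketched as follows. This is a toy simplification with invented numbers, not the dissertation's full model.

```python
def deterrent_testing_rate(gain, penalty):
    """Minimum probability of testing a delivery that makes truthful
    declaration at least as profitable as misrepresentation for a
    risk-neutral farmer: deter when p * penalty >= gain."""
    if penalty <= 0:
        raise ValueError("a positive penalty is required to deter")
    return min(1.0, gain / penalty)

def farmer_misrepresents(gain, penalty, p_test):
    """Risk-neutral farmer misdeclares iff the expected fine is below the gain."""
    return gain > p_test * penalty

p = deterrent_testing_rate(gain=5.0, penalty=50.0)
print(p)                                   # 0.1: test one delivery in ten
print(farmer_misrepresents(5.0, 50.0, p))  # False: the incentive is removed
```

The tradeoff the dissertation explores follows directly: a higher penalty lowers the testing rate (and hence the monitoring cost) needed to keep declarations truthful, while a small penalty can force the handler to test every load.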
150

The Economics of Malaria Vector Control

Brown, Zachary Steven January 2011 (has links)
<p>In recent years, government aid agencies and international organizations have increased their financial commitments to controlling and eliminating malaria from the planet. This renewed emphasis on elimination is reminiscent of a previous worldwide campaign to eradicate malaria in the 1960s, a campaign which ultimately failed. To avoid a repeat of the past, mechanisms must be developed to sustain effective malaria control programs.</p><p>A number of sociobehavioral, economic, and biophysical challenges exist for sustainable malaria control, particularly in high-burden areas such as sub-Saharan Africa. Sociobehavioral challenges include maintaining high long-term levels of support for and participation in malaria control programs, at all levels of society. Reasons for the failure of the previous eradication campaign included a decline in donor, governmental, community, and household-level support for control programs, as malaria prevalence ebbed due in part to early successes of these programs.</p><p>Biophysical challenges for the sustainability of national malaria control programs (NMCPs) encompass evolutionary challenges in controlling the protozoan parasite and the mosquito vector, as well as volatile transmission dynamics which can lead to epidemics. Evolutionary challenges are particularly daunting due to the rapid generational turnover of both the parasites and the vectors: The reliance on a handful of insecticides and antimalarial drugs in NMCPs has placed significant selection pressures on vectors and parasites respectively, leading to a high prevalence of genetic mutations conferring resistance to these biocides.</p><p>The renewed global financing of malaria control makes research into how to effectively surmount these challenges arguably more salient now than ever. Economics has proven useful for addressing the sociobehavioral and biophysical challenges for malaria control. 
A necessary next step is the careful, detailed, and timely integration of economics with the natural sciences to maximize and sustain the impact of this financing.</p><p>In this dissertation, I focus on 4 of the challenges identified above: In the first chapter, I use optimal control and dynamic programming techniques to focus on the problem of insecticide resistance in malaria control, and to understand how different models of mosquito evolution can affect our policy prescriptions for dealing with the problem of insecticide resistance. I identify specific details of the biological model--the mechanisms for so-called "fitness costs" in insecticide-resistant mosquitoes--that affect the qualitative properties of the optimal control path. These qualitative differences carry over to large impacts on the economic costs of a given control plan.</p><p>In the 2nd chapter, I consider the interaction of parasite resistance to drugs and mosquito resistance to insecticides, and analyze cost-effective malaria control portfolios that balance these 2 dynamics. I construct a mathematical model of malaria transmission and evolutionary dynamics, and calibrate the model to baseline data from a rural Tanzanian district. Four interventions are jointly considered in the model: insecticide spraying, insecticide-treated net (ITN) distribution, and the distribution of 2 antimalarial drugs--sulfadoxine-pyrimethamine (SP) and artemisinin-based combination therapies (ACTs). Strategies that coordinate vector controls and treatment protocols should provide significant gains, in part due to the issues of insecticide and drug resistance. In particular, conventional vector control and ACT use should be highly complementary, economically and in terms of disease reductions.
The ongoing debate concerning the cost-effectiveness of ACTs should thus consider prevailing (and future) levels of conventional vector control methods, such as ITN and IRS: If the cost-effectiveness of widespread ACT distribution is called into question in a given locale, scaling up IRS and/or ITNs probably tilts the scale in favor of distributing ACTs. </p><p>In the 3rd chapter, I analyze results from a survey of northern Ugandan households I oversaw in November 2009. The aim of this survey was to assess respondents' perceptions about malaria risks, and mass indoor residual spraying (IRS) of insecticides that had been done there by government-sponsored health workers. Using stated preference methods--specifically, a discrete choice experiment (DCE)--I evaluate: (a) the elasticity of household participation levels in IRS programs with respect to malaria risk, and (b) households' perceived value of programs aimed at reducing malaria risk, such as IRS. Econometric results imply that the average respondent in the survey would be willing to forego a $10 increase in her assets for a permanent 1% reduction in malaria risk. Participation in previous IRS significantly increased the stated willingness to participate in future IRS programs. However, I also find that at least 20% of households in the region perceive significant transactions costs from IRS. One implication of this finding is that compensation for these transactions costs may be necessary to correct theorized public good aspects of malaria prevention via vector control.</p><p>In the 4th chapter, I further study these public goods aspects. To do so, I estimate a welfare-maximizing system of cash incentives. Using the econometric models estimated in the 3rd chapter, in conjunction with a modified version of the malaria transmission models developed and utilized in the first 2 chapters, I calculate village-specific incentives aimed at correcting under-provision of a public good--namely, malaria prevention. 
This under-provision arises from incentives for individual malaria prevention behavior--in this case the decision whether or not to participate in a given IRS round. The magnitude of this inefficiency is determined by the epidemiological model, which dictates the extent to which households' prevention decisions have spillover effects on neighbors. </p><p>I therefore compute the efficient incentives in a number of epidemiological contexts. I find that non-negligible monetary incentives for participating in IRS programs are warranted in situations where policymakers are confident that IRS can effectively reduce the incidence of malaria cases, and not just exposure rates. In these cases, I conclude that the use of economic incentives could reduce the incidence of malaria episodes by 5%--10%. Depending on the costs of implementing a system of incentives for IRS participation, such a system could provide an additional tool in the arsenal of malaria controls.</p> / Dissertation
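The public-good spillover discussed in the 4th chapter can be illustrated with a toy steady-state calculation in which community-wide IRS coverage scales down transmission for everyone, so one household's participation lowers its neighbors' risk too. All parameters below are invented for the sketch, not the dissertation's calibration.

```python
def equilibrium_prevalence(coverage, r0=4.0, efficacy=0.75):
    """Long-run human prevalence in a toy SIS-style malaria model.

    IRS coverage (fraction of households sprayed) scales the basic
    reproduction number: R_eff = r0 * (1 - efficacy * coverage), and the
    endemic prevalence is max(0, 1 - 1/R_eff).  Coverage benefits
    everyone, sprayed or not -- the spillover behind the incentive argument.
    """
    r_eff = r0 * (1.0 - efficacy * coverage)
    return max(0.0, 1.0 - 1.0 / r_eff) if r_eff > 0 else 0.0

for c in (0.0, 0.5, 0.9):
    print(c, round(equilibrium_prevalence(c), 3))
```

Because each increment of coverage lowers everyone's prevalence, individual households capture only part of the benefit of participating, which is the under-provision that village-level incentives are meant to correct.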
