  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
471

Economic analysis for policy formulation in the National Health Service

Rickard, John Hellyar January 1976 (has links)
This thesis explores the role economic analysis can play in policy formulation in the British National Health Service. The historical allocation process in the NHS is based on the idea of meeting fully the 'need' for health care of each individual. However, 'need' has never been comprehensively defined nor universally applied but has been interpreted through numerous, isolated judgements, often by doctors and others at the point of delivery of the service. These decisions have been strongly influenced by the local availability of resources and, since these have been distributed unevenly, the result has been a wide variation in standards of service, e.g. hospital cases per head of population and costs per case. While the perceived objective was to give everyone all the health care 'needed', it was not necessary to consider problems of distribution, between individuals and geographical areas, or problems of effectiveness, that is, the relative efficiency of different forms of intervention. However, in reality the NHS budget is constrained, and a rational decision-making mechanism must incorporate a rationing device which takes account of distribution and effectiveness; these two aspects comprise the two parts of the thesis.

PART A: THE ALLOCATION OF RESOURCES BETWEEN AREAS

It has been the declared objective of the NHS since its creation that health resources should be equitably distributed. However, there has always been considerable variation in the per capita expenditure of the 14 regions, because the system of incremental budgeting meant that the existing level of services was always financed. Even in 1971, when a formula was introduced for hospital expenditure, 50% of revenue funds was still distributed according to factors influenced by historical provision. Moreover, no consideration was given to the allocation process below regional level, though it was suspected that sub-regional variations were greater than those between regions.
The objective of this study is to explore the concept of an 'equitable distribution' at the disaggregated level of Area Health Authorities, to see if it can be defined and quantified in operational terms suitable for policy recommendations. 'Equal treatment opportunity for patients of similar risk' is taken as the initial definition. The re-organisation of the NHS in 1974 made it feasible to relate services to geographical Areas. Data for the Oxford region in 1971-2 were analysed and applied to the post-1974 structure. Dividing general hospital expenditure by the population of each Area gave a coefficient of variation of 20%. Tests were then applied to see if there were factors, consistent with an 'equitable distribution', which explained part of the variation. The most important factor was found to be the flow of patients across boundaries. These flows were used to derive notional catchment populations, and when these were divided into expenditure, the coefficient of variation fell to 10%. An attempt was made to see if the variation was explained by differences in the morbidity characteristics of the populations. Various indicators were considered, but the age-sex structure was found to be the only discriminator for which it was possible to obtain data on differential health service use. While nationally old people make greater use of services, there was no evidence that in the Oxford region more resources had been made available to the Areas with a greater proportion of elderly. Other factors considered include the cost of regional clinical specialties, the cost of teaching hospitals, psychiatric hospital provision and community health care. (These costing exercises have themselves been a useful spin-off.) At each stage of the analysis an attempt was made to relate the expenditure differences between Areas to indicators of the quantity and quality of services.
It was found, for example, that the Area with the least expenditure had the lowest acute-hospital provision but not the lowest provision for the chronic sick. It was concluded that while the variation was not as great as originally suggested, nevertheless, taking all the factors into account, a re-allocation of 3% of the region's expenditure would be necessary to bring about 'equality of opportunity'. However, even at this level some differences would remain. For example, if cross-boundary flows were perpetuated, some people would travel further than others to receive care, and while the special funding of teaching hospitals and regional specialties may be justified on efficiency grounds, this conveys special benefits on local residents. Despite these reservations, in operational terms much can be done to reduce 'basic' inequalities. No work had previously been undertaken on sub-regional variations and, already, at the request of the Minister of State for Health, this analysis has been extended to all 90 Areas in England; the results have added impetus to the formation of the Resource Allocation Working Party, on which the author has served as technical adviser.

PART B: COST-EFFECTIVENESS ANALYSIS OF THE COMMUNITY HOSPITAL PROGRAMME

The objective of this part is to examine how far analysis can help the choice of efficient methods of delivering health care. It comprises an appraisal of the Oxford region's Community Hospital (CH) Programme, a system of peripheral acute hospitals surrounding a District General Hospital (DGH). Originally a full cost-benefit study was planned, but the practical problems of quantifying, valuing and aggregating benefits could not be overcome. Preliminary results suggested that differences in benefits were not measurably significant, and so a cost-effectiveness approach was used. Firstly, a regression analysis of the costs of 525 existing small hospitals showed that average costs vary with size, the curve being a tilted 'L' shape.
The minimum-cost size was 25 beds, though when the sample was later disaggregated to remove hospitals which treat more complex cases this rose to 35 beds. This result was important for the subsequent analysis, since the two experimental CHs are in the higher cost range. Secondly, the capital costs of CHs were compared with hypothetical DGH ward equivalents, arguing that with a rising population, building a CH is an alternative to building an additional DGH ward of the same size. CH construction costs were lower because less space per bed was used and certain features were simplified. Land costs per bed, on the other hand, were higher because a low plot ratio was used, which more than offset the relatively cheaper land on peripheral sites. The cheapest way, however, of providing CH services is to convert existing hospitals which have no alternative use and hence low opportunity cost. The main part of the cost-effectiveness exercise entailed a comparison of each component service, considered in terms of the type of patient, with the service which would have been provided without the CH. For example, surgical patients transferred after their operations would otherwise have been retained in the DGH. Some medical patients would have been treated in the DGH, others at home. No evidence was found to suggest that length of stay in the CH differed from the DGH, though this could not be proved and the conclusions were contingent on this. The CH had higher resource costs than the DGH alternative because of high nursing levels, and this was partly the result of the small scale. Also, for the surgical patients, the transfer by ambulance increased the cost. No account was taken of possible differences in benefits. It was concluded that the CH in-patient services could be cost-effective only if nursing staff were reduced or if converted hospital buildings with a low opportunity cost were used.
A comparison of the CH service with domiciliary care entailed a detailed study of the cost of domiciliary services.
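The coefficient-of-variation comparison at the heart of Part A is simple enough to sketch. The figures below are invented for illustration (they are not the Oxford data); the point is only the mechanics: dividing Area expenditure by notional catchment populations, rather than resident populations, can substantially reduce the measured variation, mirroring the thesis's fall from 20% to 10%.

```python
# Illustrative sketch (hypothetical figures, not the thesis data):
# per-capita expenditure variation across Areas, before and after
# adjusting populations for cross-boundary patient flows.
import statistics

def coeff_of_variation(values):
    """Coefficient of variation: population standard deviation / mean."""
    return statistics.pstdev(values) / statistics.mean(values)

# Hypothetical Area expenditure (in £000s) and populations (thousands).
expenditure = [5200, 4100, 6300, 3800]
resident_pop = [100, 95, 90, 105]
# Notional catchment populations after netting out patient flows.
catchment_pop = [115, 88, 112, 92]

raw = [e / p for e, p in zip(expenditure, resident_pop)]
adjusted = [e / p for e, p in zip(expenditure, catchment_pop)]

print(f"CV (resident populations):  {coeff_of_variation(raw):.2%}")
print(f"CV (catchment populations): {coeff_of_variation(adjusted):.2%}")
```

With these invented numbers the catchment adjustment roughly halves the coefficient of variation, the same qualitative effect the study found.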
472

Fertility and the economic value of children : evidence from Nepal

Frost, Melanie Dawn January 2011 (has links)
Economic theories of fertility transition were the dominant paradigm during the second half of the twentieth century, but in more recent years their relevance has been questioned and sociological or cultural explanations have become more popular in the demographic literature. In many cases theoretical perspectives have been abandoned altogether in favour of an empirical approach, leaving economists and demographers isolated from each other. Using data collected in Nepal as part of the World Bank's Living Standards Measurement Study, which includes large amounts of economic information at the household and individual level, the feasibility of the economic approach to fertility transition is tested in the context of rural Nepal. In order to do this it was necessary to check the quality of the Nepali fertility data. This was done, and it was concluded that higher-parity births tend to be under-reported, while childlessness tends to be over-reported. It was also found that the quality of urban fertility data is suspect; rural fertility is focussed on throughout, since it relates to economic variables in a substantively different way to urban fertility. The relationships between fertility and the main components of income in rural Nepal – agriculture and remittances – are studied. It is hypothesised that fertility and landholding are related through the land-security hypothesis and the land-labour hypothesis. The land-security hypothesis holds that owned landholding and children are substitutes because they are both forms of security, while the land-labour hypothesis holds that cultivated landholding and fertility are complements, since children can assist in tilling the land. Remittances are purported to affect fertility by increasing son preference, because remittances provide security and sons send remittances. Support is found for all the hypothesised relationships.
This implies that the people of rural Nepal value children for the economic benefits they can bring. The economic value of sons vastly outweighs that of daughters, and the findings of this thesis indicate that increasing remittances and high levels of functionally landless households mean that son preference is unlikely to disappear soon. Overall, this research highlights that economic theories of fertility transition have been unjustly neglected and are important for our understanding of fertility determinants; they are therefore highly relevant for both demographers and policy makers.
473

Evaluating reinforcement learning for game theory application: learning to price airline seats under competition

Collins, Andrew January 2009 (has links)
Applied Game Theory has been criticised for not being able to model real decision-making situations. A game's sensitive nature and the difficulty of determining the utility payoff functions make it hard for a decision maker to rely upon any game-theoretic results. Therefore the models tend to be simple, due to the complexity of solving them (i.e. finding the equilibrium). In recent years, due to increases in computing power, different computer modelling techniques have been applied in Game Theory. Major examples are Artificial Intelligence methods, e.g. Genetic Algorithms, Neural Networks and Reinforcement Learning (RL). These techniques allow the modeller to incorporate Game Theory within their models (or simulations) without necessarily knowing the optimal solution. After a warm-up period of repeated episodes is run, the model learns to play the game well (though not necessarily optimally). This is a form of simulation-optimization. The objective of the research is to investigate the practical usage of RL within a simple sequential stochastic airline seat-pricing game. Different forms of RL are considered and compared to the optimal policy, which is found using standard dynamic programming techniques. The airline game and RL methods display various interesting phenomena, which are also discussed. For completeness, convergence proofs for the RL algorithms were constructed.
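As a rough illustration of the simulation-optimization idea described above, the sketch below runs tabular Q-learning on a toy single-seller seat-pricing problem. This is not the thesis's competitive game: the price points, demand curve, horizon and learning parameters are all invented for illustration, and only the simplest one-step Q-learning update is used.

```python
# Hedged sketch: tabular Q-learning for a toy seat-pricing problem.
# State = (period, seats remaining); action = posted price.
import random

random.seed(0)

T, SEATS = 10, 5                 # selling periods, initial inventory
PRICES = [100, 150, 200]         # candidate price points (illustrative)

def sale_prob(price):
    # Simple downward-sloping demand: higher price, lower sale chance.
    return max(0.0, 1.0 - price / 250.0)

Q = {}  # Q[(t, seats, price)] -> estimated revenue-to-go

def q(t, seats, p):
    return Q.get((t, seats, p), 0.0)

def greedy_price(t, seats):
    return max(PRICES, key=lambda p: q(t, seats, p))

alpha, eps, episodes = 0.1, 0.2, 20000   # learning rate, exploration, warm-up
for _ in range(episodes):
    seats = SEATS
    for t in range(T):
        if seats == 0:
            break
        # Epsilon-greedy action selection.
        p = random.choice(PRICES) if random.random() < eps else greedy_price(t, seats)
        sold = random.random() < sale_prob(p)
        reward = p if sold else 0.0
        next_seats = seats - (1 if sold else 0)
        # One-step Q-learning target: reward + best value in next state.
        future = 0.0
        if t + 1 < T and next_seats > 0:
            future = max(q(t + 1, next_seats, a) for a in PRICES)
        Q[(t, seats, p)] = q(t, seats, p) + alpha * (reward + future - q(t, seats, p))
        seats = next_seats

print("Learned opening price:", greedy_price(0, SEATS))
```

In the actual research the learned policy would be benchmarked against the dynamic-programming optimum; here the warm-up loop only demonstrates that the agent learns a plausible policy without ever being given the demand model in closed form.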
474

Firm performance and institutional context : a theoretical exploration with evidence from the Italian cooperative sector

Gagliardi, F. January 2010 (has links)
This thesis examines the relationship between institutional context and firm performance, from both a theoretical and an empirical perspective. The aim is to engage with the debate seeking to explain the observed diversity in the forms of economic organisation prevailing in socio-economic systems. The focus of the empirical work is on investigating the effects of the structure and behaviour of banking institutions on firm performance, in the Italian context. The analysis is comparative in the sense that it compares cooperative and capitalist business structures. The analytical framework is institutionalist in emphasising the institutionally embedded nature of economic performance, and the historical and cultural dimensions of economic behaviour. The institutional complementarity approach is used to investigate the hypothesis that the relative performance of different firm structures is context dependent. The main conclusions are that the economic performance of cooperative firms is strongly conditioned, in a sense of institutional complementarity, by the degree of development and competition characterising the financial domain. The pessimistic prediction of conventional accounts, that democratic firms are unequivocally unviable, is rejected. Instead, there are relations of context dependency, of institutional complementarity, that influence the viability of firm types. The overall conclusion is that the dynamics governing the evolution of socio-economic systems are much more complex than mainstream economics suggests; productive organisations may assume a multiplicity of forms. The theoretical claims of a universalistic history, in which all production systems must follow the same line of development, must be abandoned. This brings about major policy implications at the regional, national and international levels.
475

Three essays on the economic theory of mating and parental choice

Antrup, Andreas Hermann January 2012 (has links)
Chapter 1: Relative Concerns and the Choice of Fertility

Empirical research has shown that people exhibit relative concerns: they value social status. If they value their children's status as well, what effect will that have on their decisions as parents? This paper argues that parents and potential parents are in competition for status and rank in the generation of their children; as a consequence, richer agents may cut back on the number of children they have and invest more in each child, to prevent children of lower-income agents from mimicking their own children. This effect need not be uniform, so that equilibrium fertility may, for example, be a U-shaped function of income, even when agents would privately like to increase fertility when they receive greater income. These findings have wide ramifications: they may contribute to our understanding of the working of the demographic transition; they also suggest that the low-fertility traps seen in some developed countries are rather strongly entrenched phenomena; and they offer a new explanation for voluntary childlessness.

Chapter 2: Relative Concerns and Primogeniture

While pervasive in the past, differential treatment of children, i.e. different levels of attention and parental investment in children of the same parent, has become rare in modern societies. This paper offers an explanation based on technological change, which has rendered the success of a child more uncertain for a parent deciding how much to invest in each of his children. Within a framework of concerns for social status (or relative concerns), agents decide how many children to have and how much to invest in each child.
When their altruism towards each child is decreasing in the total number of children, it is shown that they may solve the trade-off between low-investment, high-marginal-return children (who come in large numbers and hence hurt parental altruism) and high-investment, low-marginal-return children (who come in low numbers) by demanding both types, and hence practise differential treatment. Uncertainty over the status or rank outcomes of children reduces the range of equilibrium investment levels in children, so that the difference in the numbers they come in is reduced. Eventually the concern for return dominates and differential treatment disappears.

Chapter 3: Co-Evolution of Institutions and Preferences: the case of the (human) mating market

This paper explores the institutions that may emerge in response to mating preferences being constrained in their complexity, in that they can only be conditioned on gender and not on other characteristics of the carrier of the preferences. When the cognitive capacity of the species allows a sophisticated institutional setup, of one gender proposing and the other accepting or rejecting, to be adopted, this setup is shown to be able to structure the mating allocation process such that preferences evolve to forms that, conditional on the setup, are optimal despite the constraint on complexity. Nature can be thought of as delegating information processing to the institutional setup. In an application to humans, it is shown that the mechanism of the model can help explain why men and women may exhibit opposed preferences in traits such as looks and cleverness. The anecdotal fact that women do not marry down while men do can be interpreted as a maladaptation of female preferences to modern marriage markets.
476

Essays on long memory time series and fractional cointegration

Algarhi, Amr Saber Ibrahim January 2013 (has links)
The dissertation considers an indirect approach to the estimation of the cointegrating parameters, in the sense that the estimators are constructed jointly with estimates of other nuisance parameters. This approach was proposed by Robinson (2008), where a bivariate local Whittle estimator was developed to jointly estimate a cointegrating parameter along with the memory parameters and the phase parameters (discussed in chapter 2). The main contribution of this dissertation is to establish, as in Robinson (2008), joint estimation of the memory, cointegrating and phase parameters in stationary and non-stationary fractionally cointegrated models in a multivariate framework. In order to accomplish this task, a general shape of the spectral density matrix, first noted in Davidson and Hashimzade (2008), is utilised to cover multivariate, jointly dependent stationary long memory time series, allowing more than one cointegrating relation (discussed in chapter 3). Consequently, the notion of the extended discrete Fourier transform is adopted, based on the work of Phillips (1999), to allow the multivariate estimation to cover the non-stationary region (explained in chapter 4). Overall, the estimation methods adopted in this dissertation follow the semiparametric approach, in that the spectral density is only specified in a neighbourhood of zero frequency. The dissertation is organised in four self-contained chapters that are connected to each other, in addition to this introductory chapter:
• Chapter 1 discusses univariate long memory time series analysis, covering different definitions, models and estimation methods. Parametric and semiparametric estimation methods were applied to a univariate series of daily Egyptian stock returns to examine the presence of long memory properties. The results show strong and significant evidence of long memory in the Egyptian stock market, which refutes the hypothesis of market efficiency.
• Chapter 2 expands the analysis of the first chapter using the bivariate framework first introduced by Robinson (2008) for long memory time series in a stationary system. The bivariate model presents four unknown parameters, including two memory parameters, a phase parameter and a cointegration parameter, which are jointly estimated. The estimation analysis is applied to a bivariate framework comprising the US and Canada inflation rates, where a linear combination of the two rates with less long memory than either individual series is detected.
• Chapter 3 introduces a semiparametric local Whittle (LW) estimator for general multivariate stationary fractional cointegration, using the general shape of the spectral density matrix first introduced by Davidson and Hashimzade (2008). The proposed estimator is used to jointly estimate the memory parameters along with the cointegrating and phase parameters. The consistency and asymptotic normality of the proposed estimator are proved. In addition, a Monte Carlo study is conducted to examine the performance of the proposed estimator for different sample sizes. The multivariate local Whittle estimation analysis is applied to three relevant examples to examine the presence of fractional cointegration relationships.
• In the first three chapters, the estimation procedures focus on the stationary case, where the memory parameter lies between zero and one half. The analysis in chapter 4, which is a natural progression from that in chapter 3, adjusts the estimation procedures to cover non-stationary values of the memory parameters. Chapter 4 expands the analysis of chapter 3, using the extended discrete Fourier transform and periodogram to extend the local Whittle estimation to non-stationary multivariate systems. As a result, the new extended local Whittle (XLW) estimator can be applied throughout the stationary and non-stationary zones.
In the stationary region, the XLW estimator is identical to the LW estimator introduced in chapter 3. The estimator is applied to a trivariate series of US money aggregates.
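A minimal univariate version of the semiparametric local Whittle idea can be sketched as follows: the spectral density is used only near frequency zero, and the memory parameter d is chosen to minimise the local Whittle objective over the first m Fourier frequencies. The bandwidth rule and grid search below are illustrative choices, not those of the dissertation, and the multivariate and cointegrated extensions it develops are substantially more involved.

```python
# Hedged sketch: univariate semiparametric local Whittle estimation
# of the memory parameter d over the stationary range (-0.5, 0.5).
import math
import random

def local_whittle_d(x, m=None):
    n = len(x)
    if m is None:
        m = int(n ** 0.65)            # a common bandwidth rule of thumb
    # Periodogram at the first m Fourier frequencies lambda_j = 2*pi*j/n.
    lam, I = [], []
    for j in range(1, m + 1):
        w = 2 * math.pi * j / n
        re = sum(v * math.cos(w * t) for t, v in enumerate(x))
        im = sum(v * math.sin(w * t) for t, v in enumerate(x))
        lam.append(w)
        I.append((re * re + im * im) / (2 * math.pi * n))

    def objective(d):
        # Local Whittle objective: log of the averaged rescaled
        # periodogram minus the bias term in d.
        g = sum((l ** (2 * d)) * i for l, i in zip(lam, I)) / m
        return math.log(g) - 2 * d * sum(math.log(l) for l in lam) / m

    # Simple grid search over the stationary range.
    grid = [k / 200 - 0.5 for k in range(1, 200)]
    return min(grid, key=objective)

random.seed(1)
white_noise = [random.gauss(0, 1) for _ in range(512)]
d_hat = local_whittle_d(white_noise)
print(f"estimated d for white noise: {d_hat:.3f}")
```

For white noise the true memory parameter is zero, so the estimate should land near zero up to sampling error; series with genuine long memory would push the estimate towards 0.5.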
477

Economic issues of informal care : values and determinants

Mentzakis, Emmanouil January 2008 (has links)
More than 6 million people are currently involved in the provision of informal care in the UK, with demand expected to rise exponentially over the next decades. The aim of this thesis is twofold: first, to estimate and compare informal care time valuations using a number of methods, and second, to estimate econometrically the determinants of informal care provision. The first aim is addressed using both new and existing secondary data, with comparison of values across the opportunity cost, market replacement cost, compensating income variation, contingent valuation and discrete choice experiment methods. The second aim is addressed using existing secondary data, where methods of dynamic panel estimation are employed to assess the influence of various socio-economic and demographic characteristics on the decision to provide care and on the level of provision. This is the first study to estimate and compare informal care time valuations using preference-based techniques for the UK, and it is also the first time that the determinants of informal care have been assessed using dynamic two-part data models. Findings suggest per-hour valuations from £8.50 to £14 for most of the methods, with the exception of the choice experiment (with values less than £1). A great deal of heterogeneity can be found in the valuations, especially according to the type of care provided. Informal care is found to be a complement and/or a substitute for formal care, depending on the task in question, while at the same time it competes with other time-intensive activities for the allocation of time.
478

Three essays on dynamic general equilibrium models

Fujiwara, Ippei January 2009 (has links)
This thesis aims at contributing to the existing literature on dynamic stochastic general equilibrium models, particularly new Keynesian models, on three aspects. It consists of three chapters. Chapter 2 is on "Dynamic New Keynesian Life-Cycle Model". Chapter 3 is on "Re-thinking Price Stability in an Economy with Endogenous Firm Entry: Real Imperfections under Product Variety". Chapter 4 is on "Growth Expectation". Abstracts of each chapter are as follows. In Chapter 2, we first construct a dynamic new Keynesian model that incorporates life-cycle behavior à la Gertler (1999), in order to study whether structural shocks to the economy have asymmetric effects on heterogeneous agents, namely workers and retirees. We also examine whether considerations of life-cycle and demographic structure alter the dynamic properties of the monetary business cycle model, specifically the degree of amplification in impulse responses. According to our simulation results, shocks indeed have asymmetric impacts on different households, and the demographic structure does alter the size of the responses to shocks by changing the trade-off between substitution and income effects. In Chapter 3, we re-think price stability in an economy with endogenous firm entry under possible distortions. We first demonstrate that endogenous entry causes real imperfections. Reflecting fluctuations in the number of varieties, the gap between the natural and the efficient level of output is no longer constant and varies with shocks. As a result, the central bank faces a trade-off between stabilizing inflation and the welfare-relevant output gap. Then, we show that this results in a non-zero optimal rate of inflation. We further check whether welfare can be enhanced by targeting welfare-based inflation instead of cross-sectional average inflation, contrary to previous findings.
Simulations even with such distortions as an unknown natural interest rate or the absence of a fiscal remedy for efficient non-stochastic steady states, however, support cross-sectional average inflation targeting, although there may exist some small gains from referring also to welfare-based inflation rates. Incomplete stabilization may enhance welfare in an economy when agents cannot internalize the externality arising from the love for variety. Chapter 4 is about the difficulty of producing reasonable business cycles from an expectation shock about higher future technology. For a long time, changes in expectations about the future have been thought to be significant sources of economic fluctuations, as argued by Pigou (1926). Although creating such an expectation-driven cycle (the Pigou cycle) in equilibrium business cycle models was considered a difficult challenge, as pointed out by Barro and King (1984), several researchers have recently succeeded in producing the Pigou cycle by balancing the tension between the wealth effect and the substitution effect stemming from higher expected future productivity. Seminal research by Christiano et al. (2007a) explains the "stock market boom-bust cycles", characterized by increases in consumption, labor inputs, investment and stock prices relating to high expected future technology levels, by introducing investment growth adjustment costs, habit formation in consumption, sticky prices and an inflation-targeting central bank. We, however, show that such a cycle is difficult to generate from "growth expectation", which reflects expectations of higher productivity growth rates. Thus, Barro and King's (1984) prediction still applies.
479

Essays in panel data and financial econometrics

Pakel, Cavit January 2012 (has links)
This thesis is concerned with volatility estimation using financial panels and bias-reduction in non-linear dynamic panels in the presence of dependence. Traditional GARCH-type volatility models require large time-series for accurate estimation. This makes it impossible to analyse some interesting datasets which do not have a large enough history of observations. This study contributes to the literature by introducing the GARCH Panel model, which exploits both time-series and cross-section information, in order to make up for this lack of time-series variation. It is shown that this approach leads to gains both in- and out-of-sample, but suffers from the well-known incidental parameter issue and therefore, cannot deal with short data either. As a response, a bias-correction approach valid for a general variety of models beyond GARCH is proposed. This extends the analytical bias-reduction literature to cross-section dependence and is a theoretical contribution to the panel data literature. In the final chapter, these two contributions are combined in order to develop a new approach to volatility estimation in short panels. Simulation analysis reveals that this approach is capable of removing a substantial portion of the bias even when only 150-200 observations are available. This is in stark contrast with the standard methods which require 1,000-1,500 observations for accurate estimation. This approach is used to model monthly hedge fund volatility, which is another novel contribution, as it has hitherto been impossible to analyse hedge fund volatility, due to their typically short histories. The analysis reveals that hedge funds exhibit variation in their volatility characteristics both across and within investment strategies. Moreover, the sample distributions of fund volatilities are asymmetric, have large right tails and react to major economic events such as the recent credit crunch episode.
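For orientation, the sketch below shows the plain GARCH(1,1) variance recursion applied fund-by-fund to a short simulated panel, the setting in which the abstract says standard estimation breaks down. The parameter values are assumed for illustration rather than estimated, and the panel-pooling and bias-correction machinery that constitutes the thesis's contribution is not implemented here.

```python
# Hedged sketch: GARCH(1,1) conditional-variance filtering on a short
# panel of simulated return series (parameters illustrative).
import random

def garch_variance_path(returns, omega=0.05, alpha=0.08, beta=0.90):
    """Filter h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}."""
    # Initialise at the unconditional variance omega / (1 - alpha - beta).
    h = [omega / (1 - alpha - beta)]
    for r in returns[:-1]:
        h.append(omega + alpha * r * r + beta * h[-1])
    return h

random.seed(2)
# A short panel: a few "funds", each with only 150 monthly observations --
# far fewer than the 1,000-1,500 that plain time-series GARCH needs.
panel = [[random.gauss(0, 1) for _ in range(150)] for _ in range(3)]
for i, fund in enumerate(panel):
    h = garch_variance_path(fund)
    print(f"fund {i}: mean conditional variance {sum(h) / len(h):.3f}")
```

The GARCH Panel idea is, roughly, to let the cross-section of funds compensate for the short time dimension when estimating omega, alpha and beta; here those parameters are simply fixed.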
480

A Bayesian approach to financial model calibration, uncertainty measures and optimal hedging

Gupta, Alok January 2010 (has links)
In this thesis we address problems associated with financial modelling from a Bayesian point of view. Specifically, we look at the problem of calibrating financial models, measuring the model uncertainty of a claim and choosing an optimal hedging strategy. Throughout the study, the local volatility model is used as a working example to clarify the proposed methods. This thesis assumes a prior probability density for the unknown parameter in a model we try to calibrate. The prior probability density regularises the ill-posedness of the calibration problem. Further observations of market prices are used to update this prior, using Bayes' law, and give a posterior probability density for the unknown model parameter. The resulting Bayes estimators are shown to be consistent for finite-dimensional model parameters. The posterior density is then used to compute the Bayesian model-average price. In tests on local volatility models it is shown that this price is closer than the prices of comparable calibration methods to the price given by the true model. The second part of the thesis focuses on quantifying model uncertainty. Using the framework of market risk measures, we propose axioms for new classes of model uncertainty measures. Similar to the market risk case, we prove representation theorems for coherent and convex model uncertainty measures. Example measures from the latter class are provided using the Bayesian posterior. These are used to value the model uncertainty for a range of financial contracts priced in the local volatility model. In the final part of the thesis we propose a method for selecting the model, from a set of candidate models, that optimises the hedging of a specified financial contract. In particular, we choose the model whose corresponding price and hedge optimise some hedging performance indicator.
The selection problem is solved using Bayesian loss functions to encapsulate the loss from using one model to price and hedge when the true model is a different model. Linkages are made with convex model uncertainty measures and traditional utility functions. Numerical experiments on a stochastic volatility model and the local volatility model show that the Bayesian strategy can outperform traditional strategies, especially for exotic options.
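The Bayesian averaging step described above can be illustrated with a deliberately simple one-parameter toy model (not the local volatility model): place a discretised prior on the parameter, update it with observed market prices via Bayes' law under an assumed Gaussian pricing error, and average the model prices under the posterior. All numbers below are invented for illustration.

```python
# Hedged sketch: grid-based Bayesian calibration and model-average
# pricing for a toy one-parameter pricing model.
import math

def model_price(theta):
    # Toy pricing rule: price is a smooth function of the parameter.
    return 10.0 + 5.0 * theta

thetas = [k / 100 for k in range(101)]        # parameter grid over [0, 1]
prior = [1.0 / len(thetas)] * len(thetas)     # flat prior regularises

observed_prices = [12.1, 11.9, 12.3]          # hypothetical market quotes
noise_sd = 0.2                                # assumed pricing-error scale

# Bayes update: posterior proportional to prior times likelihood.
posterior = []
for th, pr in zip(thetas, prior):
    loglik = sum(-0.5 * ((p - model_price(th)) / noise_sd) ** 2
                 for p in observed_prices)
    posterior.append(pr * math.exp(loglik))
z = sum(posterior)
posterior = [w / z for w in posterior]

# Bayesian model-average price: posterior-weighted model prices.
bma_price = sum(w * model_price(th) for w, th in zip(posterior, thetas))
print(f"Bayesian model-average price: {bma_price:.3f}")
```

In the thesis the parameter is infinite-dimensional (a local volatility surface) and the update is correspondingly harder; the grid sketch only shows the mechanics of regularising an ill-posed calibration with a prior and averaging over the posterior.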
