1 |
Testing the unit root hypothesis in nonlinear time series and panel models. Sandberg, Rickard. January 2004.
The thesis contains four chapters: Testing parameter constancy in unit root autoregressive models against continuous change; Dickey-Fuller type tests against nonlinear dynamic models; Inference for unit roots in a panel smooth transition autoregressive model where the time dimension is fixed; and Testing unit roots in nonlinear dynamic heterogeneous panels.

In Chapter 1 we derive tests for parameter constancy when the data generating process is non-stationary, against the hypothesis that the parameters of the model change smoothly over time. To obtain the asymptotic distributions of the tests, we generalize many existing theoretical results in the area of unit roots and introduce new ones. The results are derived under the assumption that the error term is a strong mixing process. Small-sample properties of the tests are investigated and, in particular, their power performance is satisfactory.

In Chapter 2 we introduce several test statistics for testing the null hypothesis of a random walk (with or without drift) against models that accommodate a smooth nonlinear shift in the level, the dynamic structure, and the trend. We derive analytical limiting distributions for all tests. Finite-sample properties are examined. The performance of the tests is compared to that of the classical unit root tests of Dickey and Fuller and of Phillips and Perron, and is found to be superior in terms of power.

In Chapter 3 we derive a unit root test against a Panel Logistic Smooth Transition Autoregressive (PLSTAR) model. The analysis concentrates on the case where the time dimension is fixed and the cross-section dimension tends to infinity. Under the null hypothesis of a unit root, we show that the LSDV estimator of the autoregressive parameter in the linear component of the model is inconsistent due to the inclusion of fixed effects. The test statistic, adjusted for this inconsistency, has an asymptotic normal distribution whose first two moments are calculated analytically. To complete the analysis, finite-sample properties of the test are examined. We highlight scenarios under which the traditional panel unit root tests of Harris and Tzavalis have inferior or reasonable power compared to our test.

In Chapter 4 we present a unit root test against a non-linear dynamic heterogeneous panel in which each country is modelled as an LSTAR model. All parameters are viewed as country specific. We allow for serially correlated residuals over time and heterogeneous variance among countries. The test is derived under three special cases: (i) the number of countries and observations over time are fixed, (ii) observations over time are fixed and the number of countries tends to infinity, and (iii) first letting the number of observations over time tend to infinity and thereafter the number of countries. Small-sample properties of the test show modest size distortions and satisfactory power, superior to the Im, Pesaran and Shin t-type test. We also show clear improvements in power compared to a univariate unit root test allowing for non-linearities under the alternative hypothesis. / Diss. Stockholm : Handelshögskolan, 2004
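As a point of reference for the testing problem described above (not the thesis's own statistics), the sketch below simulates a random walk and a stationary series with a smooth logistic level shift and applies the standard augmented Dickey-Fuller test from statsmodels; the series, parameter values, and test settings are illustrative assumptions.

```python
# Minimal sketch: a standard ADF unit root test applied to a simulated random walk
# and to a stationary series with a smooth logistic (LSTAR-type) level shift.
# Illustrates the testing problem only; it is not the thesis's own test statistic.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
T = 500

# Null: pure random walk (unit root)
random_walk = np.cumsum(rng.normal(size=T))

# Alternative: stationary AR(1) whose level shifts smoothly via a logistic transition
t = np.arange(T)
transition = 1.0 / (1.0 + np.exp(-0.05 * (t - T / 2)))   # smooth shift in the level
lstar = np.empty(T)
lstar[0] = 0.0
for i in range(1, T):
    lstar[i] = 2.0 * transition[i] + 0.5 * lstar[i - 1] + rng.normal()

for name, series in [("random walk", random_walk), ("LSTAR-type", lstar)]:
    stat, pvalue, *_ = adfuller(series, regression="c")
    print(f"{name:12s}  ADF stat = {stat:6.2f}, p-value = {pvalue:.3f}")
```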
|
2 |
Second-order least squares estimation in dynamic regression models. AbdelAziz Salamh, Mustafa. 16 April 2014.
In this dissertation we proposed two generalizations of the Second-Order Least Squares (SLS) approach for two popular dynamic econometric models. The first is the regression model with a time-varying nonlinear mean function and autoregressive conditionally heteroskedastic (ARCH) disturbances. The second is a linear dynamic panel data model.
We used a semiparametric framework in both models where the SLS approach is based only on the first two conditional moments of the response variable given the explanatory variables. There is no need to specify the distribution of the error components in either model. For the ARCH model, under the assumption of a strong-mixing process with finite moments of some order, we established the strong consistency and asymptotic normality of the SLS estimator.
It is shown that the optimal SLS estimator, which makes use of the additional information inherent in the conditional skewness and kurtosis of the process, is superior to the commonly used quasi-MLE, and the efficiency gain is significant when the underlying distribution is asymmetric. Moreover, our large-scale simulation studies showed that the optimal SLSE behaves better than the corresponding estimating function estimator in finite samples. The practical usefulness of the optimal SLSE was tested by an empirical example on U.K. inflation. For the linear dynamic panel data model, we showed that the SLS estimator is consistent and asymptotically normal for large N and finite T under fairly general regularity conditions. Moreover, we showed that the optimal SLS estimator reaches a semiparametric efficiency bound. A specification test was developed, for the first time, to be used whenever the SLS is applied to real data. Our Monte Carlo simulations showed that the optimal SLS estimator performs satisfactorily in finite samples compared to the first-differenced GMM and the random effects pseudo-ML estimators. The results apply under stationary and nonstationary processes and with or without exogenous regressors. The performance of the optimal SLS is robust in the near-unit root case. Finally, the practical usefulness of the optimal SLSE was examined by an empirical study on U.S. airfares.
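To illustrate the second-order least squares idea of fitting the first two conditional moments jointly, the sketch below applies a simplified SLS criterion with an identity weighting matrix to a toy nonlinear regression with constant error variance; the model, data, and weighting are illustrative assumptions, and the ARCH and dynamic panel versions studied in the dissertation involve additional structure.

```python
# Minimal SLS sketch (identity weights): jointly fit the conditional mean and
# variance by matching both y_i and y_i^2 to their model-implied moments.
# Illustrative toy model: y = exp(a + b*x) + e with Var(e|x) = s2 constant.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 400
x = rng.uniform(0.0, 2.0, n)
y = np.exp(0.3 + 0.7 * x) + rng.normal(scale=0.5, size=n)

def sls_objective(params):
    a, b, log_s2 = params
    s2 = np.exp(log_s2)                      # keep the variance positive
    mean = np.exp(a + b * x)                 # E[y | x]
    second = mean ** 2 + s2                  # E[y^2 | x]
    r1 = y - mean
    r2 = y ** 2 - second
    return np.sum(r1 ** 2 + r2 ** 2)         # identity weighting matrix

fit = minimize(sls_objective, x0=np.array([0.0, 0.5, 0.0]), method="Nelder-Mead")
a_hat, b_hat, s2_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(a_hat, b_hat, s2_hat)
```

The optimal SLS estimator described in the abstract replaces the identity weights with a data-dependent weighting matrix built from higher conditional moments; the sketch only shows the basic moment-matching criterion.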
|
3 |
Bootstrap for panel data models with an application to the evaluation of public policies. Hounkannounon, Bertrand G. B. 08 1900.
The aim of this thesis is to extend bootstrap theory to panel data models. Panel data are obtained by observing several statistical units over several time periods. Their double dimension, individual and temporal, makes it possible to control for unobservable heterogeneity across individuals and across time periods, and therefore to carry out richer studies than with time series or cross-sectional data alone. The advantage of the bootstrap is that it can deliver inference that is more accurate than inference based on classical asymptotic theory, or inference that would otherwise be infeasible in the presence of nuisance parameters. The method consists in drawing random samples that resemble the original sample as closely as possible; the statistical object of interest is estimated on each of these random samples, and the set of estimated values is used for inference. The literature contains some applications of the bootstrap to panel data without rigorous theoretical justification or under strong assumptions. This thesis proposes a bootstrap method better suited to panel data. The three chapters analyse its validity and its application.

The first chapter posits a simple model with a single parameter and studies the theoretical properties of the estimator of the mean. We show that the double resampling we propose, which accounts for both the individual and the time dimension, is valid in these models. Resampling only in the individual dimension is not valid in the presence of temporal heterogeneity; resampling only in the time dimension is not valid in the presence of individual heterogeneity.

The second chapter extends the first to the panel linear regression model. Three types of regressors are considered: individual characteristics, temporal characteristics, and regressors that vary both over time and across individuals. Using a two-way error component model, the ordinary least squares estimator and the residual bootstrap, we show that resampling in the individual dimension alone is valid for inference on the coefficients associated with regressors that vary only across individuals. Resampling in the time dimension is valid only for the subvector of parameters associated with regressors that vary only over time. Double resampling, by contrast, is valid for inference on the whole parameter vector.

The third chapter re-examines the difference-in-differences exercise of Bertrand, Duflo and Mullainathan (2004). This estimator is commonly used in the literature to evaluate the impact of public policies. The empirical exercise uses panel data from the Current Population Survey on women's wages in the 50 states of the United States from 1979 to 1999. Placebo state-level policy interventions are generated, and the tests are expected to conclude that these placebo policies have no effect on women's wages. Bertrand, Duflo and Mullainathan (2004) show that ignoring heterogeneity and temporal dependence leads to large size distortions when evaluating the impact of public policies with panel data. One of the recommended solutions is to use the bootstrap. The double resampling method developed in this thesis corrects the size problem and therefore makes it possible to evaluate the impact of public policies correctly. / The purpose of this thesis is to develop bootstrap methods for panel data models and to prove their validity. Panel data refers to data sets where observations on individual units (such as households, firms or countries) are available over several time periods. The availability of two dimensions (cross-section and time series) allows for the identification of effects that could not be accounted for otherwise. In this thesis, we explore the use of the bootstrap to obtain estimates of the distribution of statistics that are more accurate than the usual asymptotic theory. The method consists in drawing many random samples that resemble the sample as much as possible and estimating the distribution of the object of interest over these random samples. It has been shown, both theoretically and in simulations, that in many instances this approach improves on asymptotic approximations. In other words, the resulting tests have a rejection rate close to the nominal size under the null hypothesis, and the resulting confidence intervals have a probability of including the true value of the parameter that is close to the desired level.

In the literature, there are many applications of the bootstrap with panel data, but these methods are carried out without rigorous theoretical justification. This thesis suggests a bootstrap method that is suited to panel data (which we call double resampling), analyzes its validity, and implements it in the analysis of treatment effects. The aim is to provide a method that will deliver reliable inference without having to make strong assumptions on the underlying data-generating process.

The first chapter considers a model with a single parameter (the overall expectation) with the sample mean as estimator. We show that our double resampling is valid for panel data models with some cross-sectional and/or temporal heterogeneity. The assumptions made include one-way and two-way error component models as well as factor models that have become popular with large panels. On the other hand, alternative methods such as bootstrapping cross-sections or blocks in the time dimension are only valid under some of these models.

The second chapter extends the previous one to the panel linear regression model. Three kinds of regressors are considered: individual characteristics, temporal characteristics and regressors varying across periods and cross-sectional units. We show that our double resampling is valid for inference about all the coefficients in the model estimated by ordinary least squares under general types of time-series and cross-sectional dependence. Again, we show that other bootstrap methods are only valid under more restrictive conditions.

Finally, the third chapter re-examines the analysis of differences-in-differences estimators by Bertrand, Duflo and Mullainathan (2004). Their empirical application uses panel data from the Current Population Survey on wages of women in the 50 states. Placebo laws are generated at the state level, and the authors measure their impact on wages. By construction, no impact should be found. Bertrand, Duflo and Mullainathan (2004) show that neglected heterogeneity and temporal correlation lead to spurious findings of an effect of the placebo laws. The double resampling method developed in this thesis corrects these size distortions very well and gives a more reliable evaluation of public policies.
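The double resampling idea can be illustrated in the simple mean-estimation setting of the first chapter: draw individual indices and time indices with replacement and keep the crossed sub-panel. The sketch below is one plausible reading of that description, applied to simulated two-way error component data; it is not the thesis's exact algorithm.

```python
# Minimal sketch of double resampling for a panel mean: draw individual indices
# and time indices independently with replacement and keep the crossed sub-panel.
# This is an illustrative reading of "double resampling", not the exact algorithm.
import numpy as np

rng = np.random.default_rng(2)
N, T, B = 50, 20, 999

# Two-way error component DGP: y_it = mu + alpha_i + gamma_t + eps_it
mu = 1.0
alpha = rng.normal(scale=1.0, size=(N, 1))
gamma = rng.normal(scale=1.0, size=(1, T))
y = mu + alpha + gamma + rng.normal(scale=1.0, size=(N, T))

theta_hat = y.mean()
boot = np.empty(B)
for b in range(B):
    i_star = rng.integers(0, N, size=N)      # resample individuals
    t_star = rng.integers(0, T, size=T)      # resample time periods
    boot[b] = y[np.ix_(i_star, t_star)].mean()

lo, hi = np.percentile(boot - theta_hat, [2.5, 97.5])
print(f"mean = {theta_hat:.3f}, 95% basic bootstrap CI = [{theta_hat - hi:.3f}, {theta_hat - lo:.3f}]")
```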
|
4 |
Hodnotící tabulka jako nástroj pro měření makroekonomických nerovnováh / Scoreboard Indicators as a Measure of Macroeconomic Imbalances. Toušková, Daniela. January 2013.
This thesis examined the ability of the scoreboard indicators created by the European Commission to capture macroeconomic imbalances, expressed as changes in GDP. We conducted an empirical analysis on panel data for 27 EU countries over the 1997-2011 period. We adopted three different dynamic panel data models based on three estimators: the Arellano-Bond, the Arellano-Bover and the corrected LSDV estimator. Despite some weaknesses of our dataset, the results suggest that some of the indicators, such as the 3-year average of the current account balance or the percentage change in export market shares, seem inadequate for measuring the imbalances. Moreover, the indicators proved unable to predict the occurrence of imbalances.
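As a hedged illustration of dynamic panel estimation of the kind the Arellano-Bond estimator performs, the sketch below implements the simpler Anderson-Hsiao instrumental variables estimator on simulated data: the lagged dependent variable is first-differenced to remove fixed effects and instrumented with a twice-lagged level. It is not the thesis's estimator or dataset; all values are simulated.

```python
# Minimal dynamic panel sketch in the Arellano-Bond spirit: the Anderson-Hsiao
# IV estimator, which first-differences away the fixed effect and instruments the
# lagged differenced regressor with a deeper lagged level. Simulated data only.
import numpy as np

rng = np.random.default_rng(3)
N, T, rho = 200, 10, 0.5

alpha = rng.normal(size=N)                       # country fixed effects
y = np.zeros((N, T))
y[:, 0] = alpha + rng.normal(size=N)
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + alpha + rng.normal(size=N)

# Δy_it = rho * Δy_{i,t-1} + Δe_it, instrumenting Δy_{i,t-1} with the level y_{i,t-2}
dy = y[:, 2:] - y[:, 1:-1]        # Δy_it for t = 2..T-1
dy_lag = y[:, 1:-1] - y[:, :-2]   # Δy_{i,t-1}
z = y[:, :-2]                     # instrument: y_{i,t-2}

dy, dy_lag, z = dy.ravel(), dy_lag.ravel(), z.ravel()
rho_iv = (z @ dy) / (z @ dy_lag)  # just-identified IV estimate
print(f"true rho = {rho}, Anderson-Hsiao IV estimate = {rho_iv:.3f}")
```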
|
6 |
Essays in international finance and banking. Nahhas, Abdulkader. January 2016.
This thesis considers international financial movements in terms of foreign direct investment (FDI) and, relatedly, international banking. In Chapter 2, FDI is analysed for the major G7 economies. Chapter 3 extends the analysis to bilateral FDI (BFDI) data for a broader group of economies, with the gravity model as the main mode of analysis. Gravity models are then used in Chapter 4 to analyse bilateral cross-border lending in a similar way. Throughout, the exchange rate effect is handled in terms of volatility, measured using models of conditional variance. The analysis of the bilateral data pays attention to the breakdown of crises across the whole period, with further consideration of the euro zone in the study of BFDI and cross-border lending.

The initial study looks at the determinants of the inflow and outflow of stocks of FDI in the G7 economies for the period 1980-2011. A number of factors, such as research and development (R&D), openness and relative costs, are shown to be important, but the main focus is on the impact of real and nominal effective exchange rate volatility, where volatility is measured with a generalised autoregressive conditional heteroscedasticity (GARCH) model of the variance. Although the impact of volatility is theoretically ambiguous, inflows are generally negatively affected by increased volatility, whilst there is some evidence that outflows increase when volatility rises.

In Chapter 3, the effect of bilateral exchange rate volatility is analysed using BFDI stocks from 14 high-income countries to all the OECD countries over the period 1995-2012. This is done using annual panel data with a gravity model. The empirical analysis applies the generalised method of moments (GMM) estimator to a gravity model of BFDI stocks. The findings imply that exports, GDP and distance are key variables, as the gravity model predicts. The study also considers whether the East Asian crisis, the global financial crisis and systemic banking crises have exerted an impact on BFDI. These effects vary by the type and origin of the crisis but are generally negative. A high degree of exchange rate volatility discourages BFDI.

Chapter 4 considers the determinants of cross-border banking activity from 19 advanced countries to the European Union (EU) over the period 1999-2014. Bilateral country-level stock data on cross-border lending are examined. The data allow us to analyse the effect of financial crises, differentiated by type (systemic banking crises, the global financial crisis, the euro debt crisis and the Lehman Brothers crisis), on the geography of cross-border lending. The problem is analysed using quarterly panel data with a gravity model. The empirical gravity model, conditioned on distance and on size measured by GDP, is a benchmark for explaining the volume of cross-border banking activity. In addition to the impact of crises, the impact of European integration on cross-border banking activities between member states is investigated. These results are robust to various econometric methodologies, samples, and institutional characteristics.
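The volatility-measurement step described above can be sketched as follows: fit a GARCH(1,1) model to exchange rate returns and use the conditional standard deviation as the volatility regressor. The sketch uses simulated monthly returns and the `arch` Python package; the data, the GARCH(1,1) order, and the annual aggregation are illustrative assumptions rather than the thesis's exact specification.

```python
# Minimal sketch: measure exchange rate volatility as the conditional standard
# deviation from a GARCH(1,1) model fitted with the `arch` package.
# Simulated monthly returns; illustrative only, not the thesis's specification.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(4)
returns = 2.0 * rng.standard_t(df=6, size=384)   # stand-in monthly exchange rate returns (%)

model = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1)
res = model.fit(disp="off")
cond_vol = res.conditional_volatility                 # one conditional std. dev. per month
annual_vol = cond_vol.reshape(-1, 12).mean(axis=1)    # aggregate 384 months to 32 annual averages
print(res.params)
print(annual_vol[:5])
```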
|
7 |
Rozpočty obcí v ČR – ekonometrická analýza s využitím panelových dát / Municipal budgets in the Czech Republic – econometric panel data analysis. Zvariková, Alexandra. January 2017.
This paper analyses panel data on 198 Czech municipalities for the period 2003-2015. The aim is to identify the determinants of municipalities' tax revenue budgeting errors using static panel data models with fixed and random effects. Czech municipalities have a tendency to underestimate both total and tax revenues: on average, budgeted tax revenues are about 7 % lower than collected revenues over the period under examination. Such behaviour could entail less transparency in the budgeting process. Results indicate that the structure of tax revenues also plays a role in explaining forecast errors. Further, the analysis shows the impact of the electoral cycle and of macroeconomic variables on budget deviations.
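A minimal sketch of a static fixed-effects panel regression of the kind described above is given below, using the linearmodels package on simulated data; the variable names (an election-year dummy and GDP growth), the data-generating process, and the clustered standard errors are illustrative assumptions, not the paper's actual specification.

```python
# Minimal sketch of a static fixed-effects panel regression of budgeting errors
# on an election-year dummy and GDP growth, using linearmodels' PanelOLS.
# Simulated data and illustrative variable names; not the paper's dataset.
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(5)
munis, years = range(198), range(2003, 2016)
idx = pd.MultiIndex.from_product([munis, years], names=["municipality", "year"])
df = pd.DataFrame(index=idx)
df["election_year"] = (df.index.get_level_values("year") % 4 == 2).astype(float)
df["gdp_growth"] = rng.normal(2.0, 2.0, len(df))
df["forecast_error"] = (-5.0 + 1.5 * df["election_year"] - 0.8 * df["gdp_growth"]
                        + rng.normal(scale=3.0, size=len(df)))   # % under-budgeting

model = PanelOLS(df["forecast_error"], df[["election_year", "gdp_growth"]],
                 entity_effects=True)
res = model.fit(cov_type="clustered", cluster_entity=True)
print(res.params)
```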
|
8 |
Unleashing Profitability: Unraveling the Labor-R&D Nexus in SaaS Tech Firms: An Analysis of the Profitability Dynamics in SaaS Tech Firms through Stochastic Frontier. Atla, Prashant; Salman, Noräs. January 2023.
Background: High-tech's rapid growth and prioritization of expansion over profitability can leave firms vulnerable in economic downturns. The SaaS market, part of the high-tech industry, offers affordable and flexible software solutions but is also susceptible to market volatility. To succeed, SaaS startups must strike a balance between growth and profitability. Stochastic frontier analysis can measure technical efficiency and productivity in the SaaS market, offering insights into resource and labor utilization. We present an empirical study that explores factors that influence a firm's profitability, aiming to inform decision-making for SaaS companies.

Purpose: Our work is centered on gaining a comprehensive understanding of the Software-as-a-Service (SaaS) market and the role of labor and research and development expenses in a firm's profitability. The study addresses this gap in knowledge through an empirical analysis of the distribution of technical efficiency among SaaS firms, providing insights into resource and labor utilization and their effect on profitability. The research questions focus on the relationship of technical efficiency, labor utilization, and the production function with profitability.

Methodology: We utilized Model I, a Cobb-Douglas panel data regression with fixed effects; Model II, a Cobb-Douglas panel data stochastic frontier analysis following Kumbhakar and Lovell (1990); and Model III, a transcendental logarithmic (translog) panel data stochastic frontier analysis following Kumbhakar and Lovell (1990). These models allowed us to measure the technical efficiency of SaaS firms and examine the interplay between variables such as employee count and R&D expenses, with liabilities and assets as control variables.

Results and analysis: The three models revealed that labor, assets, and R&D expenses positively and significantly affect profitability in SaaS firms. Two of the models indicate decreasing returns to scale in the SaaS industry, suggesting that increasing all inputs proportionally leads to a less-than-proportional increase in output, while the third model exhibits increasing returns to scale. Top performers in technical efficiency also tend to have higher marginal product of labor (MPL) values than bottom performers.

Conclusions: Technical efficiency is positively correlated with profitability, indicating that more efficient SaaS firms achieve higher profitability levels. The relationship between technical efficiency and profitability is stronger under the translog model than under the Cobb-Douglas model. The factors contributing most to profitability in SaaS firms are the number of employees and assets, followed by research and development expenses.

Recommendations for future research: Further studies could explore the extent to which factors such as workforce quality, technology, and business processes affect MPL and technical efficiency in SaaS firms. Future research could also investigate the effects of market competition, firm size, and industry regulation on profitability in the SaaS industry.
Finally, research could investigate the potential benefits of diversifying investment portfolios to include SaaS stocks, given the significant impact of labor, assets, and R&D expenses on profitability.
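To illustrate the stochastic frontier idea behind Models II and III, the sketch below fits a simple cross-sectional Cobb-Douglas frontier with a normal/half-normal composed error (in the spirit of Aigner, Lovell and Schmidt) by maximum likelihood on simulated data; it is a simplified stand-in for the panel Kumbhakar and Lovell (1990) specifications used in the thesis, and all variables and parameter values are illustrative.

```python
# Minimal Cobb-Douglas stochastic production frontier with a normal/half-normal
# composed error, fitted by maximum likelihood on simulated data. A simplified
# cross-sectional cousin of the panel models named above; illustration only.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(6)
n = 1000
log_labor = rng.normal(4.0, 1.0, n)
log_rnd = rng.normal(2.0, 1.0, n)
v = rng.normal(scale=0.3, size=n)                  # two-sided noise
u = np.abs(rng.normal(scale=0.4, size=n))          # one-sided inefficiency (half-normal)
log_output = 1.0 + 0.6 * log_labor + 0.3 * log_rnd + v - u

X = np.column_stack([np.ones(n), log_labor, log_rnd])

def neg_loglik(params):
    beta, log_sv, log_su = params[:3], params[3], params[4]
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma = np.sqrt(sv**2 + su**2)
    lam = su / sv
    eps = log_output - X @ beta
    ll = (np.log(2.0) - np.log(sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -np.sum(ll)

start = np.array([0.0, 0.5, 0.5, np.log(0.5), np.log(0.5)])
fit = minimize(neg_loglik, start, method="BFGS")
print("frontier coefficients:", fit.x[:3])
print("sigma_v, sigma_u:", np.exp(fit.x[3]), np.exp(fit.x[4]))
```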
|
9 |
Economic policy in health care: Sickness absence and pharmaceutical costs. Granlund, David. January 2007.
This thesis consists of a summary and four papers. The first two concern health care and sickness absence, and the last two pharmaceutical costs and prices.

Paper [I] presents an economic federation model which resembles the situation in, for example, Sweden. In the model the state governments provide health care, the federal government provides a sickness benefit and both levels tax labor income. The results show that the states can have either an incentive to under- or over-provide health care. The federal government can, by introducing an intergovernmental transfer, induce the state governments to provide the socially optimal amount of health care.

In Paper [II] the effect of aggregated public health care expenditure on absence from work due to sickness or disability was estimated. The analysis was based on data from a panel of the Swedish municipalities for the period 1993-2004. Public health care expenditure was found to have no statistically significant effect on absence, and the standard errors were small enough to rule out all but a minimal effect. The result held when separate estimations were conducted for women and men, and for absence due to sickness and disability.

The purpose of Paper [III] was to study the effects of the introduction of fixed pharmaceutical budgets for two health centers in Västerbotten, Sweden. Estimation results using propensity score matching methods show that there are no systematic differences in either price or quantity per prescription between health centers using fixed and open-ended budgets. The analysis was based on individual prescription data from the two health centers and a control group, both before and after the introduction of fixed budgets.

In Paper [IV] the introduction of the Swedish substitution reform in October 2002 was used as a natural experiment to examine the effects of increased consumer information on pharmaceutical prices. Using monthly data on individual pharmaceutical prices, the average reduction of prices due to the reform was estimated at four percent for both brand-name and generic pharmaceuticals during the first four years after the reform. The results also show that the price adjustment was not instant.
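The propensity score matching step used in Paper [III] can be illustrated with a minimal sketch: estimate propensity scores by logistic regression, match each treated observation to its nearest-neighbour control on the score, and compare matched means. The data, covariates, and one-to-one matching rule below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal propensity-score-matching sketch: logistic propensity scores,
# one-to-one nearest-neighbour matching, and a matched mean comparison.
# Simulated prescription-level data; illustrative only, not the paper's design.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)
n = 2000
age = rng.normal(50, 15, n)
chronic = rng.binomial(1, 0.3, n)
# "Treatment": the prescription comes from a fixed-budget health centre
p_treat = 1 / (1 + np.exp(-(-1.0 + 0.02 * (age - 50) + 0.5 * chronic)))
treated = rng.binomial(1, p_treat)
price = 100 + 0.5 * age + 20 * chronic + rng.normal(scale=10, size=n)  # no true effect

X = np.column_stack([age, chronic])
pscore = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

nn = NearestNeighbors(n_neighbors=1).fit(pscore[treated == 0].reshape(-1, 1))
_, idx = nn.kneighbors(pscore[treated == 1].reshape(-1, 1))
matched_controls = price[treated == 0][idx.ravel()]

att = price[treated == 1].mean() - matched_controls.mean()
print(f"matched difference in price per prescription: {att:.2f}")
```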
|