31 |
VARIATION I VALDELTAGANDE: En statistisk undersökning av moderniseringens subnationella effekter på det svenska valdeltagandet / Variation in voter turnout: A statistical study of the subnational effects of modernization on Swedish voter turnout. Elfving, Johan; Rosén, Elin. January 2017 (has links)
This essay sets out to bring further knowledge to the field of political participation at an aggregated level in Sweden. The theoretical point of departure is the modernization theory of Lipset (1959), which is examined to see whether its variables have an effect on Swedish voter turnout. Furthermore, the study investigates whether the state of the economy conditions the effect of modernization. The main questions of the essay are: (1) What effects does modernization have on voter turnout in Swedish municipalities? (2) How do economic cycles influence the effects of modernization on voter turnout? The method is a quantitative analysis in the form of bivariate and multiple regression analyses. The empirical material consists of statistics from several public agencies, covering three election years: 1994, 2006 and 2014. The empirical study shows that socioeconomic preconditions, such as average income level, and urbanization have a strong positive effect on Swedish voter turnout at an aggregated level. Modernization theory is thus not relevant in full; rather, certain parts of the original theory remain relevant today. The effect of modernization on voter turnout is also considerably stronger when the economic context is a boom; when the economy declines, the effect declines with it.
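As a rough illustration of the regression design this abstract describes, the sketch below fits a bivariate and a multiple OLS model on invented municipal-level data; the variable names, units and data-generating process are assumptions for illustration only, not the authors' data.

```python
# Sketch of a bivariate and a multiple OLS regression on municipal data,
# in the spirit of the essay's method. All data here are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 290  # roughly the number of Swedish municipalities
df = pd.DataFrame({
    "avg_income": rng.normal(280, 40, n),      # hypothetical, thousand SEK
    "urbanization": rng.uniform(0.2, 1.0, n),  # hypothetical urban share
})
# Invented data-generating process in which both predictors raise turnout.
df["turnout"] = (70 + 0.02 * df["avg_income"] + 8 * df["urbanization"]
                 + rng.normal(0, 2, n))

# Bivariate regression: turnout on average income alone.
bivariate = sm.OLS(df["turnout"], sm.add_constant(df[["avg_income"]])).fit()

# Multiple regression: both modernization indicators jointly.
multiple = sm.OLS(df["turnout"],
                  sm.add_constant(df[["avg_income", "urbanization"]])).fit()
print(bivariate.params)
print(multiple.summary())
```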
|
32 |
Empirical Essays in Development Economics. Dadzie, Nicholas Nyamekeh. January 2013 (has links)
No description available.
|
33 |
Development of simplified power grid models in EU project Spine. Alharbi, Mohammad. January 2020 (has links)
The electric power system is among the largest and most complex man-made physical networks in the world. The growth of electricity demand, the integration of ICT technologies for the modernization of the electric grid, and the introduction of intermittent renewable generation have further increased the complexity of operating and planning the grid optimally. For this reason, the analysis of large-scale power systems considering all state variables is a very complicated procedure, and it is necessary to explore methods that represent the original network with smaller equivalent networks in order to simplify power system studies. The equivalent network should provide an accurate and efficient estimate of the behavior of the original power system without requiring a full analytical model of the grid infrastructure. This thesis investigates partitioning and reduction methodologies in order to develop a proper reduction model. The K-means and K-medoids clustering algorithms are employed to partition the network into clusters of buses. The Radial, Equivalent, and Independent (REI) method is then further developed, implemented, and evaluated to obtain a reduced equivalent circuit of each cluster of the original power system. The basic idea of the REI method is to aggregate the power injections of the eliminated buses into two fictitious buses through a zero power balance network. The method is implemented in the Julia language using the PowerModels.jl package. The reduction methodology is evaluated on the IEEE 5-bus, 30-bus, and 118-bus systems by comparing a series of accuracy and performance indices. Factors examined in the comparison include the chosen number of clusters, different assumptions for the slack bus, and the effect of the voltage limits imposed on the fictitious REI buses.
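The partitioning step the abstract mentions can be illustrated as follows. The thesis implements the method in Julia with PowerModels.jl; the Python sketch below only shows the bus-clustering idea with scikit-learn's K-means, on an invented feature embedding of the buses.

```python
# Sketch of partitioning network buses into clusters before REI reduction.
# The bus "features" (an electrical-distance embedding) are invented here.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_buses = 30  # e.g. the IEEE 30-bus system
bus_features = rng.normal(size=(n_buses, 2))  # hypothetical 2-D embedding

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(bus_features)
for k in range(4):
    cluster_buses = np.where(kmeans.labels_ == k)[0]
    print(f"cluster {k}: buses {cluster_buses.tolist()}")
    # Each cluster would then be replaced by a reduced REI equivalent, with
    # the eliminated buses' power injections aggregated into two fictitious
    # buses (generation and load) through a zero power balance network.
```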
|
34 |
Characteristics and prediction of risky gambling behaviour study: A study protocol. Czernecka, Robert; Wirkus, Theresa; Bühringer, Gerhard; Kräplin, Anja. 27 November 2024 (has links)
Objective: This study protocol describes the RIGAB study, a prospective case-control study assessing online sports betting behaviour and underlying risk factors for the development of gambling disorder (GD). It has two aims: (1) to characterise sports bettors with respect to putative risk factors and their gambling behaviour, and (2) to predict the development of GD from these factors.
Methods: At baseline, online sports bettors took part in an online survey comprising a GD screening (DSM-5) and questions on gambling behaviour and on the putative risk factors emotion regulation, impulsivity, comorbidities, stress, and substance use. Participants were invited again for a 1-year follow-up online survey. In a nested design, a subsample was invited in person to take part in a cognitive-behavioural task battery and a clinical interview.
Results: Of the 6568 online sports bettors initially invited, 607 participated at baseline (response rate: 9.2%), 325 took part in the 1-year follow-up, and 54 participated in the nested in-person assessment.
Conclusion: The RIGAB study combines different strands of GD research: player tracking data and putative risk factors from self-report and behavioural tasks. The results will support the development of preventive measures for online gamblers based on combined findings from previously rather distinct research fields.
|
35 |
A COMPARATIVE STUDY OF METHODOLOGIES FOR MODELLING COMPLEX SURVEY DATA: AN APPLICATION TO SAEB 99. MARCEL DE TOLEDO VIEIRA. 23 July 2001 (has links)
It is very important to consider the sample design in the analysis and modelling of complex survey data; only then can the results be genuinely useful and reliable for public policy makers. The main objective of this dissertation is to draw attention to the importance of using techniques appropriate to complex survey data, and to discuss the consequences of not adopting them. The methodologies for complex survey data analysis can be divided into two approaches. The first, called the aggregated approach, is based on incorporating weights and design effects into the fitting of standard statistical models such as contingency tables and regression. The other, called the disaggregated approach, modifies the model itself to incorporate the complex population structure and/or design effects, for example through hierarchical (or multilevel) linear models.
The data analysed in this dissertation were collected in 1999 by the Brazilian National System of Basic Education Assessment (SAEB), a survey that applies an exam and collects socio-economic-demographic information on more than 200,000 students, their schools, teachers and principals. The SAEB 99 sample was selected by a complex, multi-stage design with stratification and cluster sampling. Point estimation of descriptive statistics, such as means, correlations and regression coefficients, presents no great difficulty provided the sample weights are used correctly to expand the data. An example illustrates the importance of the sample weights in estimation, showing that ignoring them in the calculation of a mean can, in the situation considered, produce overestimated results. The dissertation presents the theoretical aspects of the techniques (appropriate to complex survey data) for point estimation of regression model parameters and their variances, and discusses the design effect, confidence intervals, hypothesis tests, and the SUDAAN package. The results of applying these techniques are reported alongside a study of the determinants of student proficiency. The consequences of ignoring the sample design when estimating model parameters and their variances are also presented and analysed for the SAEB 99 data, and the results are given an educational interpretation.
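The weighting point made in the abstract can be shown with a minimal numerical example; the strata, scores and expansion weights below are invented, but they reproduce the qualitative effect described (an overestimated unweighted mean when high-scoring units are oversampled).

```python
# Toy illustration of why design weights matter: a stratum that is
# oversampled relative to the population biases the unweighted mean.
import numpy as np

# Two strata: high scorers oversampled (small expansion weight),
# low scorers undersampled (large expansion weight). Numbers invented.
scores  = np.array([280.0] * 60 + [180.0] * 40)   # proficiency scores
weights = np.array([50.0] * 60 + [200.0] * 40)    # expansion weights

unweighted = scores.mean()
weighted = np.average(scores, weights=weights)
print(f"unweighted mean: {unweighted:.1f}")  # 240.0 -- overestimated
print(f"weighted mean:   {weighted:.1f}")    # ~207.3 -- design-consistent
```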
|
36 |
Programação linear com controle de risco para o planejamento da operação do SIN / Linear programming with risk control for the operation planning of SINRui Bertho Junior 08 March 2013 (has links)
The energy operation planning of the Brazilian National Interconnected System (SIN) is carried out by a chain of computational models for optimization and simulation of system operation. The deficit risk, an important energy security indicator for the electric sector, is however treated only as an output variable of these computational models. Medium-term planning uses the NEWAVE software, which represents the system through aggregated equivalent subsystems. This work proposes a linear optimization model for medium-term operation planning that is able to consider the deficit risk directly in its formulation. To control the deficit risk, the risk metric known as CVaR (Conditional Value at Risk) is used, since it is a coherent risk metric and can be implemented through a set of linear constraints.
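A minimal sketch of the CVaR linearization the abstract refers to (the Rockafellar-Uryasev formulation, expressible as linear constraints) is given below. The one-variable capacity problem, scenario demands and cost are invented for illustration; the thesis embeds the same device in a hydrothermal operation planning model.

```python
# Rockafellar-Uryasev linearization of a CVaR constraint inside an LP.
# Decision: capacity x; deficit in scenario s is d_s - x. Data invented.
import numpy as np
from scipy.optimize import linprog

d = np.array([90.0, 100.0, 110.0, 140.0])  # demand scenarios (hypothetical)
S = len(d)
alpha, cvar_limit, unit_cost = 0.75, 10.0, 1.0

# Decision vector: [x, eta, u_1..u_S], eta = VaR level, u_s = excess losses.
c = np.r_[unit_cost, 0.0, np.zeros(S)]

A_ub, b_ub = [], []
# u_s >= d_s - x - eta   <=>   -x - eta - u_s <= -d_s
for s in range(S):
    row = np.zeros(2 + S)
    row[0], row[1], row[2 + s] = -1.0, -1.0, -1.0
    A_ub.append(row)
    b_ub.append(-d[s])
# CVaR constraint: eta + (1 / ((1 - alpha) * S)) * sum_s u_s <= cvar_limit
row = np.zeros(2 + S)
row[1] = 1.0
row[2:] = 1.0 / ((1.0 - alpha) * S)
A_ub.append(row)
b_ub.append(cvar_limit)

bounds = [(0, None), (None, None)] + [(0, None)] * S  # x >= 0, eta free
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
print(f"optimal capacity x = {res.x[0]:.1f}")  # 130.0 with these data
```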
|
38 |
Inference for Discrete Time Stochastic Processes using Aggregated Survey Data. Davis, Brett Andrew (Brett.Davis@abs.gov.au). January 2003 (has links)
We consider a longitudinal system in which transitions between states are governed by a discrete time, finite state space stochastic process X. Our aim, using aggregated sample survey data of the form typically collected by official statistical agencies, is to undertake model-based inference for the underlying process X. We develop inferential techniques for continuing sample surveys of two distinct types: first, longitudinal surveys, in which the same individuals are sampled in each cycle of the survey; second, cross-sectional surveys, which sample the same population in successive cycles but make no attempt to track particular individuals from one cycle to the next. Some of the basic results have appeared in Davis et al (2001) and Davis et al (2002).

Longitudinal surveys provide data in the form of transition frequencies between the states of X. In Chapter Two we develop a method for modelling and estimating the one-step transition probabilities in the case where X is a non-homogeneous Markov chain and transition frequencies are observed at unit time intervals. However, due to their expense, longitudinal surveys are typically conducted at widely, and sometimes irregularly, spaced time points; that is, the observable frequencies pertain to multi-step transitions. Continuing to assume the Markov property for X, in Chapter Three we show that these multi-step transition frequencies can be stochastically interpolated to provide accurate estimates of the one-step transition probabilities of the underlying process. These unit-increment estimates can be used to calculate estimates of expected future occupation time in the different states of X, conditional on an individual's state at the initial point of observation.

For reasons of cost, most statistical collections run by official agencies are cross-sectional sample surveys. The data observed from an ongoing survey of this type are marginal frequencies in the states of X at a sequence of time points. In Chapter Four we develop a model-based technique for estimating the marginal probabilities of X using data of this form. Note that, in contrast to the longitudinal case, the Markov assumption does not simplify inference based on marginal frequencies. The marginal probability estimates enable estimation of future occupation times (in each of the states of X) for an individual of unspecified initial state; in the applications discussed (see Sections 4.4 and 4.5), the estimated occupation times are conditional on both gender and initial age.

The longitudinal data envisaged in Chapter Two come from surveillance of the same sample in each cycle of an ongoing survey. In practice, to preserve data quality it is necessary to control respondent burden using sample rotation, usually achieved through a mechanism known as rotation group sampling. In Chapter Five we consider the particular form of rotation group sampling used by the Australian Bureau of Statistics in its Monthly Labour Force Survey (from which official estimates of labour force participation rates are produced). We show that our approach to estimating the one-step transition probabilities of X from transition frequencies observed at incremental time intervals, developed in Chapter Two, can be modified to deal with data collected under this rotation scheme. Furthermore, we show that valid inference is possible even when the Markov property does not hold for the underlying process.
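For the simplest longitudinal case described above (Chapter Two's setting with unit-interval observations), the sketch below shows a homogeneous-chain simplification: the maximum-likelihood estimate of one-step transition probabilities from transition counts, and derived expected occupation times. The three-state labour-force example and the counts are invented.

```python
# MLE of one-step transition probabilities from transition frequencies
# (homogeneous-chain simplification), then expected occupation times.
import numpy as np

states = ["employed", "unemployed", "not in labour force"]
# counts[i, j] = number of sampled individuals observed moving i -> j
counts = np.array([[900.0, 40.0, 60.0],
                   [120.0, 150.0, 30.0],
                   [80.0, 20.0, 600.0]])

P_hat = counts / counts.sum(axis=1, keepdims=True)  # row-normalised MLE

# Expected time spent in each state over the next T periods, conditional
# on the initial state i: sum_{t=1..T} (P_hat^t)[i, :]
T = 12
occupation = sum(np.linalg.matrix_power(P_hat, t) for t in range(1, T + 1))
for i, s in enumerate(states):
    cells = ", ".join(f"{v:.1f}" for v in occupation[i])
    print(f"start {s}: expected periods per state over T={T}: {cells}")
```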
|
39 |
Entwicklung eines aggregierten Modells zur Simulation der Gewässergüte in Talsperren als Baustein eines Flussgebietsmodells / Development of an aggregated model for simulating water quality in reservoirs as a component of a river basin model. Siemens, Katja. 20 January 2010 (has links)
The large-scale extraction of lignite in Lusatia severely affected the water balance of the Spree river catchment in the past. With the start of remediation and flooding of the opencast mines, existing surface waters will be used more intensively over the long term, and the resulting artificial lakes have to be integrated into the river network. Coupling water management models with water quality models makes it possible to consider both the availability and distribution of the limited water resources in the catchment and the water quality resulting from their management. This corresponds to the principles of the EU Water Framework Directive (2000) for integrated river basin management, which calls for a basin-wide consideration of the available resources, taking into account all influencing and influenced criteria. Coupling models that describe systems of different sensitivity and complexity requires the adaptation of data structures and time scales.
The main focus of this work was the development of simple, robust simulation tools for predicting water quality in the Bautzen and Quitzdorf reservoirs, based on the complex lake water quality model SALMO. The model was first extended with simple algorithms so that it produced plausible results despite a heavily reduced data basis. Stochastically generated management scenarios, together with the water quality results simulated by the complex model, were then used as training data for an Artificial Neural Network (ANN). The ANNs trained for both reservoirs are effective black-box modules that reproduce the complex system behaviour of the deterministic model SALMO. By coupling the developed ANNs with the management model WBalMo, it is possible to evaluate management alternatives with respect to their consequences for water quality.
ANNs are system-specific models that cannot be transferred to other water systems. However, the methodology developed here represents a well-founded approach that can be applied to the development of further aggregated water quality modules within integrated management models.
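The surrogate-modelling idea in this abstract (train an ANN on scenario-to-quality pairs produced by the complex model, then use it as a cheap black-box module) can be sketched as follows. The scenario features, the stand-in formula for the complex model's output and all numbers are invented; the thesis trains on SALMO simulations instead.

```python
# Sketch of training a neural-network surrogate of a complex water quality
# model on stochastically generated management scenarios. Data invented.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 2000
# Hypothetical scenario features: inflow, water level, phosphorus load.
X = rng.uniform([1.0, 165.0, 10.0], [30.0, 168.0, 120.0], size=(n, 3))
# Invented stand-in for the complex model's simulated chlorophyll response;
# in the thesis these targets would come from SALMO runs, not a formula.
y = (0.4 * X[:, 2] - 1.5 * X[:, 0]
     + 0.1 * (168.5 - X[:, 1]) ** 2 + rng.normal(0, 1.0, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
)
surrogate.fit(X_tr, y_tr)
print(f"surrogate R^2 on held-out scenarios: {surrogate.score(X_te, y_te):.3f}")
```

Once trained, such a module can be evaluated in microseconds inside a management model, which is what makes the coupling with WBalMo practical.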
|
40 |
The determinants of UK Equity Risk Premium. Chandorkar, Pankaj Avinash. January 2016 (has links)
Equity Risk Premium (ERP) is a cornerstone of financial economics: it is a basic input to stock valuation, portfolio performance evaluation and asset allocation. Over the last decades, several studies have attempted to investigate the relationship between the ERP and its macroeconomic drivers. In this work I empirically investigate the macroeconomic determinants of the UK ERP, parsimoniously covering the large body of literature stemming from the ERP puzzle and motivating the empirical investigation through three mutually exclusive theoretical lenses. The thesis is organised in journal paper format.

In the first paper I review the literature on the ERP over the past twenty-eight years. The aim of the paper is threefold: first, to review the methods and techniques proposed in the literature for estimating the ERP; second, to review the literature that attempts to resolve the ERP puzzle, first identified by Mehra and Prescott (1985), by exploring five different types of modification to the standard utility framework; and third, to review the literature that investigates the relationship between the ERP and various macroeconomic and market factors in domestic and international contexts. I find that the ERP puzzle is still a puzzle within the universe of the standard power utility framework and the Consumption Capital Asset Pricing Model, a conclusion in line with Kocherlakota (1996) and Mehra (2003).

In the second paper I investigate the impact of structural monetary policy shocks on the ex-post ERP. More specifically, I ask whether the response of the UK ERP to structural monetary policy shocks differed before and after the implementation of Quantitative Easing (QE) in the UK. I find that monetary policy shocks negatively affect the ERP at the aggregate level, while at the sectoral level the magnitude of the response is heterogeneous. Further, monetary policy shocks have a significant negative (positive) impact on the ERP before (after) the implementation of QE. This evidence sheds light on the equity market's asymmetric response to the Bank of England's monetary policy before and after the monetary stimulus.

In the third paper I examine the impact of aggregate and disaggregate consumption shocks on the ex-post ERP of various FTSE indices and of 25 Fama-French style value-weighted portfolios constructed on size and book-to-market characteristics. I extract consumption shocks using a Structural Vector Autoregression (SVAR) and investigate their time-series and cross-sectional implications for the ERP in the UK. These structural consumption shocks represent the deviation of agents' actual consumption path from its theoretically expected path. Aggregate consumption shocks appear to explain significant time variation in the ERP. At the disaggregated level, the ERP rises when actual consumption is below expectation, and durable and semi-durable consumption shocks have a greater impact on the ERP than non-durable consumption shocks.

In the fourth and final paper I investigate the impact of short- and long-term market implied volatility on the UK ERP, and examine the pricing implications of innovations to short- and long-term implied market volatility in the cross-section of stock returns. Both short- and long-term implied volatility have a significant negative impact on the aggregate ERP, while at the sectoral level the impact is heterogeneous. Both short- and long-term volatility are priced negatively, indicating that (i) investors care about both short- and long-term market implied volatility and (ii) they are ready to pay for insurance against these risks.
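As an illustration of the shock-extraction step in the third paper, the sketch below fits a VAR to invented macro series, identifies orthogonal shocks by a recursive (Cholesky) scheme, which is one common SVAR identification and not necessarily the thesis's exact restrictions, and regresses a simulated excess-return series on the identified shocks.

```python
# Sketch: VAR residuals -> orthogonal "structural" shocks via Cholesky,
# then an OLS of excess returns on the shocks. All series are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(7)
n = 300
# Invented stationary macro series standing in for the thesis's UK data.
macro = pd.DataFrame({
    "consumption_growth": rng.normal(0.5, 1.0, n),
    "policy_rate_change": rng.normal(0.0, 0.25, n),
})
var_res = VAR(macro).fit(maxlags=2)

# Recursive identification: the Cholesky factor of the residual covariance
# maps reduced-form residuals to orthogonal shocks.
chol = np.linalg.cholesky(np.asarray(var_res.sigma_u))
shocks = var_res.resid.values @ np.linalg.inv(chol).T

# Relate a (simulated) excess-return series to the identified shocks.
erp = rng.normal(0.4, 4.0, len(shocks))  # stand-in for the realised ERP
print(sm.OLS(erp, sm.add_constant(shocks)).fit().params)
```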
|