1 |
Two essays on the predictability of asset prices: "Benchmarking problems and long horizon abnormal returns" and "Low R-square in the cross section of expected returns"Sanchez, Benito 18 May 2007 (has links)
This dissertation consists of two essays on the predictability of asset prices: "Benchmarking problems and long horizon abnormal returns" and "Low R-square in the cross section of expected returns". Long-run abnormal returns following Initial Public Offerings (IPOs), Seasoned Equity Offerings (SEOs), and other firm-level events are well documented in the finance literature. These findings are difficult to reconcile with an efficient-markets world. I examine the seriousness of potential benchmarking errors in the measurement of abnormal returns. I find that the simpler, more parsimonious models perform better in practice, and that excess performance is not predictable regardless of the asset pricing model (APM). Thus, the long-run underperformance following SEOs found in the literature is consistent with market efficiency, because excess performance itself is not predictable. In the second essay, "Low R-square in the cross section of expected returns", I examine the "low R-square" phenomenon observed in the literature. The CAPM predicts an exact linear relationship between returns and betas (the security market line). This means that estimated time-series betas for firms should be related to the firms' future returns. However, the estimated betas have almost no relationship with future returns: the cross-sectional R-squares are surprisingly low (3% on average) while the time-series R-squares are higher (around 30% on average). I develop a simple asset pricing model that explains this phenomenon. Even in a perfect world with no errors in benchmark measurement or in the estimation of the price of market risk, the difference in R-squares can be quite large because of the difference in variance between the "market" and average returns. I document that market variance has exceeded the variance of average returns, with few exceptions, over the last 74 years.
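The gap between time-series and cross-sectional R-squares described in the abstract can be reproduced in a small simulation. The sketch below uses assumed return moments (not the dissertation's data): each firm's simulated returns are regressed on a simulated market (time-series fit), and then average firm returns are regressed on betas (the cross-sectional SML fit).

```python
import random
import statistics

random.seed(42)

N, T = 200, 60                       # firms, months (illustrative sizes)
MKT_MU, MKT_SD = 0.01, 0.05          # assumed market-return moments
IDIO_SD = 0.08                       # assumed idiosyncratic volatility

market = [random.gauss(MKT_MU, MKT_SD) for _ in range(T)]
betas = [random.uniform(0.5, 1.5) for _ in range(N)]
returns = [[b * m + random.gauss(0, IDIO_SD) for m in market] for b in betas]

def ols_r2(x, y):
    """R-squared of a simple OLS regression of y on x (with intercept)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    resid = [b - (my + slope * (a - mx)) for a, b in zip(x, y)]
    ss_res = sum(e ** 2 for e in resid)
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1 - ss_res / ss_tot

# Time-series R^2: each firm's return series regressed on the market.
ts_r2 = statistics.mean(ols_r2(market, r) for r in returns)

# Cross-sectional R^2: average firm returns regressed on betas (SML fit).
avg_ret = [statistics.mean(r) for r in returns]
cs_r2 = ols_r2(betas, avg_ret)

print(f"mean time-series R^2: {ts_r2:.2f}")
print(f"cross-sectional R^2:  {cs_r2:.2f}")
```

Because the market's variance dwarfs the cross-sectional dispersion of average returns, the cross-sectional fit comes out far below the time-series fit even though the data were generated by an exact one-factor model.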
|
2 |
Empirical analysis in South African agricultural economics and the R-Square diseaseMoldenhauer, Walter Heinrich 24 January 2008 (has links)
The South African agricultural sector underwent significant institutional and structural change during the past two decades, especially in the aftermath of South Africa’s first democratic elections in 1994 and the deregulation of the agricultural marketing environment in 1996/97. South African agricultural economics scholars had to adapt to these changes. The increased demand for more quantified output in agricultural economic research has led these scholars to “borrow” econometric models from their fellow scholars abroad and apply them to South African research problems. However, the development of econometrics has over the years given rise to a disenchantment with the way in which econometrics has been applied in economic research. Consequently, it is believed that a large body of literature has entered the public domain without being properly reviewed, because South African agricultural economic scholars do not have the necessary insight into, and knowledge of, the problems believed to be at the root of this disenchantment. The general objective of this dissertation is to investigate the disenchantment with the manner in which econometrics has been applied in economic and agricultural economic scholarship, to identify the main drivers of this disenchantment, and to use the knowledge gained to evaluate the application of econometrics in South African agricultural economic scholarship as portrayed in Agrekon, one of South Africa’s peer-reviewed agricultural economics journals. The study is conducted by means of a review of the literature on the history of econometrics, the development of econometric methodologies, and the disenchantment with econometrics in economics and agricultural economics. Applied econometrics as portrayed in Agrekon is evaluated by means of a survey of papers published in this journal.
The main findings of this study revealed that the key drivers of the disenchantment can mainly be ascribed to the following: <ul> <li>The misuse of statistical significance tests in applied studies.</li> <li>Problems with the data underlying econometric analyses.</li> <li>The problems associated with replication.</li> <li>Data mining.</li> <li>The “black box ideology” in applied econometrics.</li> <li>Scholasticism and associated preference falsification.</li> </ul> A survey of 65 papers published in Agrekon, selected by means of stratified random sampling, revealed that the elements behind the disenchantment with econometrics are present in South African agricultural economic scholarship. It was also found that the data underlying econometric analyses are a major point of concern in South African agricultural economics, and it seems as if South African agricultural economics scholars have adopted a lackadaisical attitude towards data. The study concludes with recommendations for future studies into the application of econometrics in South African agricultural economics. / Dissertation (MCom(Agricultural Economics))--University of Pretoria, 2008. / Agricultural Economics, Extension and Rural Development / MCom / unrestricted
|
3 |
Model selectionHildebrand, Annelize 11 1900 (has links)
In developing an understanding of real-world problems,
researchers develop mathematical and statistical models. Various
model selection methods exist which can be used to obtain a
mathematical model that best describes the real-world situation
in some or other sense. These methods aim to assess the merits
of competing models by concentrating on a particular criterion.
Each selection method is associated with its own criterion and
is named accordingly. The better known ones include Akaike's
Information Criterion, Mallows' Cp and cross-validation, to name
a few. The value of the criterion is calculated for each model
and the model corresponding to the minimum value of the criterion
is then selected as the "best" model. / Mathematical Sciences / M. Sc. (Statistics)
|
5 |
The relationship between volatility of price multiples and volatility of stock prices : A study of the Swedish market from 2003 to 2012Yang, Yue, Gonta, Viorica January 2013 (has links)
The purpose of our study was to examine the relationship between the volatility of price multiples and the volatility of stock prices in the Swedish market from 2003 to 2012, with a focus on the price-to-earnings (P/E) ratio and the price-to-book (P/B) ratio. Some previous studies showed a link between price multiples and the volatility of stock prices, which led us to ask whether there is also a link between the volatility of the price multiples and the volatility of the stock prices. The importance of this subject is accentuated by the financial crisis, as we provide investors with information regarding the movements of price multiples and stock prices. Moreover, we test whether the volatility of the price multiples can be used to build a prediction model for the volatility of stock prices; in doing so we also fill a gap, as there is no previous literature on this topic. We conducted a quantitative study using statistical tests, namely a correlation test and a linear regression test. For our data sample we chose the Sweden Datastream index. We first estimated volatility using the GARCH model and then proceeded with the statistical tests. The results show that there is a relationship between the volatility of the price multiples and the volatility of the stock prices in the Swedish market over the past ten years. The correlation coefficients vary across industries and over time in both strength and direction. The second part of our tests concerns the linear regressions, mainly the coefficient of determination. Our results show that the volatility of the price multiples does explain changes in the volatility of stock prices; thus, the volatility of the P/E ratio and the volatility of the P/B ratio can be used to build a prediction model for the volatility of stock prices. Nevertheless, we also find that this model is best suited to periods when the economic situation is unstable (i.e. crisis, bad economic outlook), as both the correlation coefficient and the coefficient of determination had their highest values in the last five years, peaking in 2008.
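The volatility step of such a study can be sketched with the GARCH(1,1) conditional-variance recursion. Everything below is an illustrative assumption: the two return series are simulated stand-ins for stock returns and price-multiple changes, and the GARCH parameters are fixed rather than estimated (a real study would fit them, e.g. by maximum likelihood).

```python
import math
import random

random.seed(7)

# Simulated daily series; in the study these would be stock returns and
# period-to-period changes in a price multiple (assumed setup).
T = 500
stock_r = [random.gauss(0, 0.02) for _ in range(T)]
multiple_r = [r + random.gauss(0, 0.005) for r in stock_r]  # closely related series

def garch11_vol(returns, omega=1e-6, alpha=0.08, beta=0.9):
    """Conditional volatility from a GARCH(1,1) recursion with fixed
    illustrative parameters: var_t = omega + alpha*r_{t-1}^2 + beta*var_{t-1}."""
    var = sum(r * r for r in returns) / len(returns)  # start at sample variance
    vols = []
    for r in returns:
        var = omega + alpha * r * r + beta * var
        vols.append(math.sqrt(var))
    return vols

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

vol_stock = garch11_vol(stock_r)
vol_multiple = garch11_vol(multiple_r)
corr = pearson(vol_stock, vol_multiple)
print(f"correlation of GARCH volatilities: {corr:.2f}")
```

The correlation test and the subsequent regression in the study operate on exactly such pairs of filtered volatility series.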
|
6 |
Har Carharts fyrfaktormodell en högre förklaringsgrad än Fama-Frenchs trefaktormodell? : En kvantitativ studie som utvärderar Carharts fyrfaktormodell och Fama-Frenchs trefaktormodell på den svenska aktiemarknaden.Zeray, Marsa Teklay January 2022 (has links)
Syfte: Syftet med studien är att analysera och utvärdera Carharts fyrfaktormodells och Fama- Frenchs trefaktormodells prestanda vid portföljavkastning på den svenska aktiemarknaden, under perioden 2011–2020. Teori: Denna studie grundar sig i den effektiva marknadshypotesen, Fama och Frenchs trefaktormodell samt Carharts fyrfaktormodell. Metod: En kvantitativ studie med ett deduktivt förhållningssätt. Undersökningen utför tester på den svenska aktiemarknaden under perioden 2011–2020 genom en regressionsanalys. Upptäckter: Carharts fyrfaktormodell har en högre justerad förklaringsgrad än trefaktormodellen, vilket drivs av modellens förmåga att förklara avkastning på portföljer sorterade efter storlek och momentum. Originalitet: Studien särskiljer sig på grund av avsaknaden av forskning på den svenska aktiemarknaden. Vidare bidrar studien till ett forskningsområde för små öppna ekonomier, där den svenska aktiemarknaden ingår. / Purpose: The purpose of the study is to analyze and evaluate the performance of Carhart's four-factor model and Fama-French's three-factor model for portfolio returns on the Swedish stock market during the period 2011–2020. Theory: This study is based on the efficient market hypothesis, Fama and French's three-factor model, and Carhart's four-factor model. Method: A quantitative study with a deductive approach. The study performs tests on the Swedish stock market during the period 2011–2020 through a regression analysis. Findings: Carhart's four-factor model has a higher adjusted coefficient of determination (R-square) than the three-factor model, which is driven by the model's ability to explain returns on portfolios sorted by size and momentum. Originality: The study stands out because of the lack of prior research on the Swedish stock market. Furthermore, the study contributes to a research area for small open economies, of which the Swedish stock market is part.
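The adjusted-R-square comparison at the heart of this study can be illustrated with the standard formula, which penalizes each added regressor. The R-square values and sample size below are assumed numbers for illustration, not the study's results.

```python
def adjusted_r2(r2, n, k):
    """Adjusted R^2 for a regression with n observations and k regressors:
    1 - (1 - R^2) * (n - 1) / (n - k - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

n = 120  # monthly observations over a ten-year window (assumed)

# Hypothetical in-sample fits (illustrative values only):
r2_three = 0.85  # three-factor model (market, SMB, HML)
r2_four = 0.88   # four-factor model (adds momentum, WML)

adj_three = adjusted_r2(r2_three, n, k=3)
adj_four = adjusted_r2(r2_four, n, k=4)
print(f"adjusted R^2, three-factor: {adj_three:.4f}")
print(f"adjusted R^2, four-factor:  {adj_four:.4f}")
```

The point of the adjustment is that the fourth factor must raise the raw R-square by more than its degrees-of-freedom cost before the adjusted value improves, which is the comparison the study's conclusion rests on.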
|
7 |
Evoluční algoritmy pro ultrazvukovou perfúzní analýzu / Evolutionary algorithms for ultrasound perfusion analysisHemzalová, Zuzana January 2021 (has links)
This thesis deals with the principles of ultrasonic perfusion analysis and with methods for determining perfusion parameters. It examines evolutionary algorithms and their ability to optimize the approximation of dilution curves obtained from ultrasound tissue scanning, and it compares the optimization performance of three evolutionary algorithms: the continuous genetic algorithm (GA), the self-organizing migrating algorithm (SOMA), and particle swarm optimization (PSO). The methods are evaluated on simulated and clinical data.
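One of the three optimizers compared, PSO, can be sketched as a minimal curve-fitting loop. The gamma-variate dilution model, the synthetic data, and all parameter values below are assumptions for illustration, not the thesis's actual setup.

```python
import math
import random

random.seed(1)

def gamma_variate(t, A, alpha, beta):
    """A common parametric form for indicator-dilution curves (assumed model)."""
    return A * (t ** alpha) * math.exp(-t / beta) if t > 0 else 0.0

# Synthetic "measured" curve with known parameters plus noise.
TRUE = (3.0, 2.0, 1.5)
ts = [0.1 * i for i in range(1, 100)]
data = [gamma_variate(t, *TRUE) + random.gauss(0, 0.05) for t in ts]

def sse(params):
    """Sum of squared errors between the model and the measured curve."""
    A, alpha, beta = params
    return sum((gamma_variate(t, A, alpha, beta) - d) ** 2 for t, d in zip(ts, data))

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over box bounds."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]  # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

fit, fit_sse = pso(sse, bounds=[(0.1, 10), (0.1, 5), (0.1, 5)])
print("fitted (A, alpha, beta):", [round(p, 2) for p in fit])
```

GA and SOMA would plug into the same `cost` function; the comparison in the thesis is over how quickly and reliably each algorithm drives this sum of squared errors down.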
|
8 |
Assessing And Modeling Quality Measures for Healthcare SystemsLi, Nien-Chen 06 November 2021 (has links)
Background:
Shifting the healthcare payment system from a volume-based to a value-based model has been a significant effort to improve the quality of care and reduce healthcare costs in the US. In 2018, Massachusetts Medicaid launched Accountable Care Organizations (ACOs) as part of the effort. Constructing, assessing, and risk-adjusting quality measures are integral parts of the reform process.
Methods:
Using data from the MassHealth Data Warehouse (2016-2019), we assessed the loss of community tenure (CTloss) as a potential quality measure for patients with bipolar disorder, schizophrenia, or other psychotic disorders (BSP). We evaluated various statistical models for predicting CTloss using deviance, the Akaike information criterion, the Vuong test, squared correlation, and observed vs. expected (O/E) ratios. We also used logistic regression to investigate risk factors that impacted medication nonadherence, another quality measure, for patients with bipolar disorders (BD).
Results:
Mean CTloss was 12.1 (±31.0 SD) days in the study population and varied greatly across ACOs. For risk-adjustment modeling, we recommend the zero-inflated Poisson or the doubly augmented beta model. The O/E ratio ranged from 0.4 to 1.2, suggesting variation in quality after adjusting for differences in the characteristics of the patients each ACO served, as reflected in E. Almost half (47.7%) of BD patients were nonadherent to second-generation antipsychotics. Patient demographics, medical and mental comorbidities, receipt of institutional services such as those from the Department of Mental Health, homelessness, and neighborhood socioeconomic stress all influenced medication nonadherence.
Conclusions:
Valid quality measures are essential to value-based payment. Heterogeneity across ACOs implies the need for risk adjustment. The choice of model type is driven by the non-standard distribution of CTloss.
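The zero-inflated Poisson recommended in the results handles a count outcome where most patients lose zero days. A sketch of its probability mass function (with assumed parameter values, not the study's estimates) shows how it produces both excess zeros and a plausible mean, which a plain Poisson cannot do simultaneously.

```python
import math

def poisson_pmf(k, lam):
    """Poisson probability mass function."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson: a structural zero with probability pi,
    otherwise a draw from Poisson(lam)."""
    base = (1 - pi) * poisson_pmf(k, lam)
    return pi + base if k == 0 else base

# Illustrative values (assumed): mean 25 lost days among the affected,
# with 60% of patients losing no community tenure at all.
lam, pi = 25.0, 0.6
p0_poisson = poisson_pmf(0, lam)
p0_zip = zip_pmf(0, lam, pi)
print(f"P(CTloss = 0) under Poisson(25): {p0_poisson:.2e}")
print(f"P(CTloss = 0) under ZIP:         {p0_zip:.3f}")
```

The ZIP mean is (1 - pi) * lam = 10 days here, in the neighborhood of the reported 12.1-day mean, while assigning the bulk of the probability to zero; a Poisson with the same mean would make zero counts essentially impossible.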
|