251

Using Twitter Attribute Information to Predict Stock Prices

Karlemstrand, Roderick, Leckström, Ebba January 2021 (has links)
Being able to predict stock prices might be the unspoken wish of stock investors. Although stock prices are complicated to predict, there are many theories about what affects their movements, including interest rates, news and social media. With the help of Machine Learning, complex patterns in data can be identified beyond the human intellect. In this thesis, a Machine Learning model for time series forecasting is created and tested to predict stock prices. The model is based on a neural network with several layers of Long Short-Term Memory (LSTM) and fully connected layers. It is trained with historical stock values, technical indicators and Twitter attribute information retrieved, extracted and calculated from posts on the social media platform Twitter. These attributes are sentiment score, favourites, followers, retweets and whether an account is verified. To collect data from Twitter, Twitter’s API is used. Sentiment analysis is conducted with Valence Aware Dictionary and sEntiment Reasoner (VADER). The results show that adding more Twitter attributes improves the Mean Squared Error (MSE) between the predicted and actual prices by 3%. With technical analysis taken into account, MSE decreases from 0.1617 to 0.1437, an improvement of around 11%. The study is restricted to stocks that are publicly listed on the stock market and popular both on Twitter and among individual investors. In addition, the stock market’s opening hours differ from those of Twitter, which is available around the clock; this may introduce noise into the model. / Att kunna förutspå aktiekurser kan sägas vara aktiespararnas outtalade önskan. Även om aktievärden är komplicerade att förutspå finns det många teorier om vad som påverkar dess rörelser, bland annat räntor, nyheter och sociala medier. Med hjälp av maskininlärning kan mönster i data identifieras bortom människans intellekt. I detta examensarbete skapas och testas en modell inom maskininlärning i syfte att beräkna framtida aktiepriser. Modellen baseras på ett neuralt nätverk med flera lager av LSTM och fullt kopplade lager. Den tränas med historiska aktievärden, tekniska indikatorer och Twitter-attributinformation. De är hämtad, extraherad och beräknad från inlägg på den sociala plattformen Twitter. Dessa attribut är sentiment-värde, antal favorit-markeringar, följare, retweets och om kontot är verifierat. För att samla in data från Twitter används Twitters API och sentimentanalys genomförs genom VADER. Resultatet visar att genom att lägga till fler Twitter attribut förbättrade MSE mellan de förutspådda värdena och de faktiska värdena med 3%. Genom att ta teknisk analys i beaktande minskar MSE från 0,1617 till 0,1437, vilket är en förbättring på 11%. Begränsningar i denna studie innefattar bland annat att den utvalda aktien ska vara publikt listad på börsen och populär på Twitter och bland småspararna. Dessutom skiljer sig aktiemarknadens öppettider från Twitter då den är ständigt tillgänglig. Detta kan då introducera brus i modellen.
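The pipeline this abstract describes — VADER sentiment scores plus price features fed into a stacked LSTM trained on MSE — can be sketched briefly. The following is an illustrative sketch only, not the authors' code: the tweet texts, feature layout, window length and layer sizes are assumptions, and it relies on the vaderSentiment and TensorFlow/Keras packages.

```python
# Minimal sketch (assumed setup, not the thesis implementation): score tweets with
# VADER, then train a stacked LSTM on windows of daily features to predict price.
import numpy as np
import tensorflow as tf
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
tweets = ["Great earnings, buying more!", "Terrible guidance, selling."]  # illustrative tweets
daily_sentiment = np.mean([analyzer.polarity_scores(t)["compound"] for t in tweets])

# Illustrative daily feature matrix: [close, sentiment, retweets, followers, verified share].
n_days, n_features, window = 200, 5, 30
features = np.random.rand(n_days, n_features).astype("float32")  # stand-in for real data
target = features[:, 0]                                          # next-day (scaled) close

# Sliding windows of `window` days predict the following day's price.
X = np.stack([features[i:i + window] for i in range(n_days - window)])
y = target[window:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True, input_shape=(window, n_features)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),                      # predicted (scaled) closing price
])
model.compile(optimizer="adam", loss="mse")        # MSE is the metric reported in the abstract
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```

Comparing the fitted MSE with and without the Twitter-derived columns mirrors the abstract's experiment of adding attributes one at a time.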
252

ARL - anledningen till nästa börskrasch? : En kvantitativ studie om ARL:s påverkan på den svenska aktiemarknaden / ARL - the reason for the next stock market crash? : A quantitative study about ARL's impact on the Swedish stock market

Dagerhem, Einar, Strömberg, Simon January 2020 (has links)
Tidsperioden mellan räkenskapsårets slut och datumet för påskriven revisionsberättelse benämns audit report lag (ARL). Anledningarna till att ARL uppstår har studerats i stor utsträckning, men de konkreta effekterna som uppstår till följd av ARL är mindre studerade. En tidigare studie om ARL:s samband med ökad risk för aktieprisfall på den kinesiska aktiemarknaden visade på ett positivt samband. På grund av detta samband finns ett intresse att studera om ett liknande samband existerar på den svenska aktiemarknaden. Syftet med studien är att förklara ett eventuellt samband mellan lång ARL och ökad risk för aktieprisfall på den svenska aktiemarknaden. Studien använder sig av en deduktiv ansats och en longitudinell forskningsdesign bestående av kvantitativ data för att försöka förklara ett eventuellt samband mellan lång ARL och en ökad risk för aktieprisfall. Datamaterialet bestod av sekundärdata. Studien finner inget samband mellan lång ARL och ökad risk för aktieprisfall på den svenska aktiemarknaden. Däremot visas svaga indikationer på att kort ARL leder till ökad risk för aktieprisfall på den svenska aktiemarknaden. Studien bidrar med utökad kunskap om sambanden mellan ARL och ökad risk för aktieprisfall. Vidare bidrar studien med kunskap för revisorer, bolagsledningar och investerare om vilka konsekvenser ARL kan ha på börsnoterade bolags aktiekurs. / The time period between the fiscal year end and the audit report date is termed audit report lag (ARL). The determinants of ARL have been studied frequently; however, the practical consequences of ARL have not been studied to the same extent. A previous study of ARL's association with stock price crash risk on the Chinese stock market showed a positive association, which makes it interesting to study whether a similar association exists on the Swedish stock market. The purpose of this study is to explain a possible association between long ARL and an increased stock price crash risk on the Swedish stock market. The study uses a deductive approach and a longitudinal research design consisting of quantitative data to explain a possible association between long ARL and an increased stock price crash risk. The data set consisted of secondary data. The study finds no association between long ARL and an increased stock price crash risk on the Swedish stock market. However, it does find weak indications that short ARL leads to an increased stock price crash risk on the Swedish stock market. The study contributes increased knowledge of the associations between ARL and stock price crash risk. Furthermore, it offers auditors, company management and investors knowledge of the consequences ARL can have on listed companies' stock price.
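To make the variables concrete, the sketch below shows how audit report lag (days between fiscal year end and audit report date) and one standard crash-risk proxy from this literature — the negative coefficient of skewness (NCSKEW) of weekly returns — might be computed and related with OLS. All data, the sample size and the choice of NCSKEW are assumptions for illustration; the thesis's exact crash-risk measure is not reproduced here.

```python
# Illustrative sketch: compute ARL in days and an NCSKEW-style crash-risk proxy,
# then regress crash risk on ARL across firms (synthetic stand-in data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_firms = 50
arl_days = rng.integers(30, 120, size=n_firms)             # days from year end to audit report
weekly_returns = rng.normal(0, 0.03, size=(n_firms, 52))   # stand-in firm-level weekly returns

def ncskew(r: np.ndarray) -> float:
    """Negative coefficient of skewness: higher values indicate higher crash risk."""
    n = len(r)
    d = r - r.mean()
    num = -(n * (n - 1) ** 1.5) * np.sum(d ** 3)
    den = (n - 1) * (n - 2) * np.sum(d ** 2) ** 1.5
    return num / den

crash_risk = np.array([ncskew(r) for r in weekly_returns])

X = sm.add_constant(arl_days.astype(float))                 # crash_risk ~ const + ARL
print(sm.OLS(crash_risk, X).fit().summary())
```

A positive, significant ARL coefficient would correspond to the association the cited Chinese-market study found; the thesis reports no such association for long ARL on the Swedish market.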
253

Påverkar bedömningar från kreditvärderingsinstitut aktiekursen? : En studie utifrån de svenska storbankerna kring finanskrisen 2008 / Do assessments from credit rating agencies affect the stock price? : A study of the major Swedish banks around the financial crisis of 2008

Löfgren, Jesper, Ellmén Millberg, Daniel January 2020 (has links)
Bakgrund: Kreditvärderingsinstituten har genom åren fått en del kritik. Under finanskrisen kring 2008 var en bidragande orsak till att kraschen blev så allvarlig på grund av felaktiga kreditvärderingar. Detta var dock endast möjligt på grund av att banker i stor utsträckning ignorerade riskerna med de felaktiga kreditbetygen, som de med hög sannolikhet var medvetna om. Med bakgrund som denna anser författarna att det är av intresse och nytta att granska huruvida kreditbedömningar på banker påverkar aktiekursen och på så sätt bolagsvärdet. Syfte: Syftet med denna studie är att undersöka om kreditbetygsförändringar på de svenska storbankerna; Handelsbanken, Nordea, SEB och Swedbank påverkar respektive banks aktiekurs. Ett delsyfte är att studera eventuell omfattning av denna påverkan på aktiekursen. Ett vidare delsyfte är att undersöka om det finns en skillnad i hur kreditbetygsförändringar påverkar aktiepriset hos de svenska storbankerna i hög- respektive lågkonjunktur. Metod: Denna kvantitativa studie grundas i en deduktiv ansats och hypoteser har utformats med hjälp av författarnas utvalda teorier: Effektiva marknadshypotesen (EMH), Agentteori och Signalteori. Studien har sedan genomförts i form av en eventstudie och det har uppmätts om det finns signifikanta avvikelser i aktiekursen vid publicerandet av en kreditbetygsförändring. Resultat: Resultatet i studien visar på att det finns signifikant påverkan på aktiekursen vid kreditbetygsnedgraderingar på eventdagen. Det påvisades även att lågkonjunktur var en bidragande faktor till aktieutvecklingen. Slutsats: Denna studie finner att kreditbetygsförändringar utgör en effekt på aktiekursen hos de svenska storbankerna. Det kan dock inte fastställas om det finns någon skillnad mellan upp- och nedgraderingar i denna studie. Resultatet visar istället på att lågkonjunktur är den bakomliggande orsaken till att aktiekursen påverkas signifikant. Resultatet tyder även på att aktiekursen har anpassat sig snabbare än i tidigare studier, vilket kan vara en följd av en mer digitaliserad marknad. / Background: Credit rating agencies have received a great deal of criticism over the years. During the financial crisis around 2008, incorrect credit ratings were a contributing cause of how serious the crash became. This was only possible because banks largely ignored the risk in the incorrect credit ratings, of which they were in all likelihood aware. Against this background, the authors believe it is of interest and benefit to examine whether credit assessments of banks affect the stock price and thus company value. Purpose: The purpose of this study is to investigate whether credit rating changes for the major Swedish banks (Handelsbanken, Nordea, SEB and Swedbank) affect each bank's stock price. One part of the purpose is to study the extent of this impact on the stock price. A further part is to study whether credit rating changes affect the major Swedish banks' stock prices differently in an economic expansion than in a recession. Methodology: This quantitative study is based on a deductive approach, and hypotheses have been designed using the authors' selected theories: the Efficient Market Hypothesis (EMH), agency theory and signalling theory. The study was then carried out as an event study, testing whether there are significant deviations in the stock price connected to the credit rating changes. Results: The results of the study indicate that there is a significant effect on the stock price on the event day when a credit rating downgrade is announced. The results also show that recession is a contributing factor to the significant effect on the stock price. Conclusions: This study finds that changes in credit ratings have an effect on the stock price of the major Swedish banks. It cannot, however, be established whether there is a difference between up- and downgrades. Instead, the results indicate that recession is the underlying reason why the stock price is significantly affected. The results also indicate that the stock price adjusted faster than in earlier studies, which may be an effect of a more digitised market.
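The event-study machinery referenced here (market-model abnormal returns around an announcement day) follows a standard pattern. Below is a compact illustration with synthetic data; the window lengths, the simple CAR t-test and the return series are assumptions, not the thesis's exact design.

```python
# Illustrative market-model event study: abnormal returns around a rating change.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
market = rng.normal(0.0003, 0.01, 280)                      # daily market returns
stock = 0.0002 + 1.1 * market + rng.normal(0, 0.008, 280)   # synthetic bank returns

est_win = slice(0, 250)       # estimation window
event_win = slice(250, 261)   # event window: days -5 .. +5 around the announcement

# Market model: regress stock returns on market returns over the estimation window.
beta, alpha = np.polyfit(market[est_win], stock[est_win], 1)
resid_sd = np.std(stock[est_win] - (alpha + beta * market[est_win]), ddof=2)

abnormal = stock[event_win] - (alpha + beta * market[event_win])   # AR_t
car = abnormal.sum()                                               # cumulative abnormal return
t_stat = car / (resid_sd * np.sqrt(len(abnormal)))                 # simple CAR t-test
p_value = 2 * (1 - stats.norm.cdf(abs(t_stat)))
print(f"CAR = {car:.4f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```

A significantly negative CAR on the event day for downgrades would correspond to the result the abstract reports.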
254

En studie av hur aktiekursprediktioner för läkemedelsbolag påverkas av patentgodkännande : En kvantitativ analys genom ARIMA och ARIMAX / A study of how stock price predictions for pharmaceutical companies are affected by patent approvals : A quantitative analysis using ARIMA and ARIMAX

Hill Anderberg, Camilla, Gustafson, Alice January 2021 (has links)
In this thesis we investigate whether the inclusion of an exogenous variable in the form of patent approval can improve the ARIMA model's predictions for the pharmaceutical company Astrazeneca. A point of departure for the study is the questioning of the efficient market hypothesis. When comparing data on patent approval dates with stock exchange data for three pharmaceutical companies, it could be observed that share prices increased on the date of approval in 65 percent of the cases. This observed correlation, combined with the fact that several papers have established that the stock market may not be efficient, makes it interesting to study whether the value of a patent has been included in the stock price prior to the approval date. To investigate this, an ARIMA and an ARIMAX model were estimated. The exogenous variable, which controls for patent approvals, was created by retrieving data from the EPO's database PATSTAT. The retrieved data was then formatted into a dummy variable. The purpose of including an exogenous variable is to investigate whether the market reacts to patent information. If the addition of the exogenous variable proves significant, the result is in conflict with the efficient market hypothesis. During model selection, it was found that an ARIMA(4,1,2) was the superior model. The model was then compared with the corresponding ARIMAX model. When comparing the models, it was found that the predictions of the ARIMAX model follow the observed data somewhat better, but a t-test concluded that the improvement was not statistically significant. This implies that the value of the patent has already been included in stock prices prior to patent approval and indicates that the price increase is random. These results thus lend support to the efficient market hypothesis. To investigate this further, the stock market data was compared with a random walk, and a t-test concluded that it was not possible to reject the hypothesis that share prices follow a random walk; this further supports the efficient market hypothesis.
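The ARIMA-versus-ARIMAX comparison described above can be expressed directly with statsmodels, using the ARIMA(4,1,2) order the abstract names and a 0/1 patent-approval dummy as the exogenous regressor. The price series and approval dates below are synthetic placeholders, not the PATSTAT/Astrazeneca data.

```python
# Illustrative ARIMA(4,1,2) vs ARIMAX(4,1,2): the exogenous regressor is a dummy
# equal to 1 on (hypothetical) patent-approval dates.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
dates = pd.bdate_range("2018-01-01", periods=500)
price = pd.Series(100 + np.cumsum(rng.normal(0, 1, 500)), index=dates)

patent_dummy = pd.Series(0.0, index=dates, name="patent")
patent_dummy.loc[dates[[100, 250, 400]]] = 1.0       # hypothetical approval dates

arima = ARIMA(price, order=(4, 1, 2)).fit()
arimax = ARIMA(price, exog=patent_dummy, order=(4, 1, 2)).fit()

print("ARIMA  AIC:", round(arima.aic, 1))
print("ARIMAX AIC:", round(arimax.aic, 1))
print(arimax.params)                                  # inspect the dummy's coefficient
```

If the dummy's coefficient is insignificant and the ARIMAX forecasts do not beat the ARIMA forecasts, the conclusion mirrors the abstract's: the patent's value is already priced in before approval.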
255

Volatility estimates of ARCH models.

January 2001 (has links)
Chung Kwong-leung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 80-84). / Abstracts in English and Chinese. / ACKNOWLEDGMENTS --- p.iii / LIST OF TABLES --- p.iv / LIST OF ILLUSTRATIONS --- p.vi / CHAPTER / Chapter ONE --- INTRODUCTION --- p.1 / Chapter TWO --- LITERATURE REVIEW --- p.5 / Volatility / ARCH Models / The Accuracy of ARCH Volatility Estimates / Chapter THREE --- METHODOLOGY --- p.11 / Testing and Estimation / Simulation / Chapter FOUR --- DATA DESCRIPTION AND EMPIRICAL RESULTS --- p.29 / Data Description / Testing and Estimation Results / Simulation Results / Chapter FIVE --- CONCLUSION --- p.45 / TABLES --- p.49 / ILLUSTRATIONS --- p.58 / APPENDICES --- p.77 / BIBLIOGRAPHY --- p.80
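Since this record only exposes a table of contents, a brief illustration of what "ARCH volatility estimates" means in practice may help: fitting an ARCH(1) model and reading off the conditional volatility series. The sketch uses the third-party arch package and simulated returns; it is not drawn from the thesis itself.

```python
# Illustrative ARCH(1) fit: the conditional volatility series is the kind of
# in-sample volatility estimate whose accuracy such studies assess by simulation.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(3)
returns = 100 * rng.normal(0, 0.01, 1000)       # percentage returns (stand-in data)

model = arch_model(returns, mean="Constant", vol="ARCH", p=1)
result = model.fit(disp="off")

print(result.summary())
cond_vol = result.conditional_volatility        # fitted sigma_t for each observation
print("mean conditional volatility:", float(cond_vol.mean()))
```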
256

A comparison of the Philips price earnings multiple model and the actual future price earnings multiple of selected companies listed on the Johannesburg Stock Exchange

Coetzee, G. J 12 1900 (has links)
Thesis (MBA)--Stellenbosch University, 2000. / ENGLISH ABSTRACT: The price earnings multiple is a ratio of valuation and is published widely in the media as a comparative instrument of investment decisions. It is used to compare company valuation levels and their future growth/franchise opportunities. There have been numerous research studies done on the price earnings multiple, but no study has been able to design or derive a model to successfully predict the future price earnings multiple where the current stock price and the following year-end earnings per share are used. The most widely accepted method of share valuation is to discount the future cash flows by an appropriate discount rate. Popular and widely used stock valuation models are the Dividend Discount Model and the Gordon Model. Both these models assume that future dividends are cash flows to the shareholder. Thomas K. Philips, the chief investment officer at Paradigm Asset Management in New York, constructed a valuation model at the end of 1999, which he published in The Journal of Portfolio Management. The model (Philips price earnings multiple model) was derived from the Dividend Discount Model and calculates an implied future price earnings multiple. The Philips price earnings multiple model includes the following independent variables: the cost of equity, the return on equity and the dividend payout ratio. Each variable in the Philips price earnings multiple model is a calculated present year-end point value, which was used to calculate the implied future price earnings multiple (present year stock price divided by following year-end earnings per share). This study used five years of historical (1995-2000) year-end data to calculate the implied and actual future price earnings multiples. Of the 225 Johannesburg Stock Exchange listed companies studied, only 36 met the criteria of the Philips price earnings multiple model. Correlation and population mean tests were conducted on the implied and constructed data sets. These showed that the Philips price earnings multiple model was unsuccessful in predicting the future price earnings multiple, at a 0.20 level of significance. The Philips price earnings multiple model is substantially more complex than the Dividend Discount Model and includes greater restrictions and more assumptions. The Philips price earnings multiple model is a theoretical instrument which can be used to analyse hypothetical companies (with all model assumptions and restrictions having been met). The Philips price earnings multiple model thus has little to no applicability in the practical valuation of stock prices of Johannesburg Stock Exchange listed companies. / AFRIKAANSE OPSOMMING: Die prysverdienste verhouding is 'n waarde bepalingsverhouding en word geredelik gepubliseer in die media. Hierdie verhouding is 'n maatstaf om maatskappye se waarde vlakke te vergelyk en om toekomstige groei geleenthede te evalueer. Daar was al verskeie navorsingstudies gewy aan die prysverdiensteverhouding, maar nog geen model is ontwikkel wat die toekomstige prysverdiensteverhouding (die teenswoordige aandeelprys en toekomstige jaareind verdienste per aandeel) suksesvol kon modelleer nie. Die mees aanvaarbare metode vir waardebepaling van aandele is om toekomstige kontantvloeie te verdiskonteer teen 'n toepaslike verdiskonteringskoers. Van die vernaamste en mees gebruikte waardeberamings modelle is die Dividend Groei Model en die Gordon Model.
Beide modelle gebruik die toekomstige dividendstroom as die toekomstige kontantvloeie wat uitbetaal word aan die aandeelhouers. Thomas K. Philips, die hoof beleggingsbeampte by Paradigm Asset Management in New York, het 'n waardeberamingsmodel ontwerp in 1999. Die model (Philips prysverdienste verhoudingsmodei) was afgelei vanaf die Dividend Groei Model en word gebruik om 'n geïmpliseerde toekomstige prysverdiensteverhouding te bereken. Die Philips prysverdienste verhoudingsmodel sluit die volgende onafhanklike veranderlikes in: die koste van kapitaal, die opbrengs op aandeelhouding en die uitbetalingsverhouding. Elke veranderlike in hierdie model is 'n berekende teenswoordige jaareinde puntwaarde, wat gebruik was om die toekomstige geïmpliseerde prysverdiensteverhouding (teenswoordige jaar aandeelprys gedeel deur die toekomstige verdienste per aandeel) te bereken. In hierdie studie word vyf jaar historiese jaareind besonderhede gebruik om die geïmpliseerde en werklike toekomstige prysverdiensteverhouding te bereken. Van die 225 Johannesburg Effektebeurs genoteerde maatskappye, is slegs 36 gebruik wat aan die vereistes voldoen om die Philips prysverdienste verhoudingsmodel te toets. Korrelasie en populasie gemiddelde statistiese toetse is op die berekende en geïmpliseerde data stelle uitgevoer en gevind dat die Philips prysverdienste verhoudingsmodel, teen 'n statistiese 0,20 vlak van beduidenheid, onsuksesvol was om die toekomstige prysverdiensteverhouding vooruit te skat. Die Philips prysverdienste verhoudingsmodel is meer kompleks as die Dividend Groei Model met meer aannames en beperkings. Die Philips prysverdienste verhoudingsmodel is 'n teoretiese instrument wat gebruik kan word om hipotetiese (alle model aannames en voorwaardes is nagekom) maatskappye te ontleed. Dus het die Philips prysverdienste verhoudingsmodel min tot geen praktiese toepassingsvermoë in die werkilke waardasie van aandele nie.
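The abstract names the Philips model's inputs (cost of equity, return on equity, dividend payout ratio) but not its exact equation. As a stand-in, the sketch below computes the closely related Gordon-growth implied forward P/E from those same three inputs; it illustrates the mechanics of an implied multiple, and should not be read as Philips' exact specification. The numbers are hypothetical.

```python
# Sketch of a DDM-based implied forward P/E from the same inputs the Philips model
# uses. This is the constant-growth (Gordon) form, shown only for illustration;
# it is not necessarily Philips' exact equation.
def implied_forward_pe(cost_of_equity: float, roe: float, payout: float) -> float:
    """P0 / E1 implied by a constant-growth dividend discount model."""
    growth = roe * (1.0 - payout)              # sustainable growth: g = ROE * retention
    if cost_of_equity <= growth:
        raise ValueError("cost of equity must exceed growth for the DDM to converge")
    return payout / (cost_of_equity - growth)  # P0/E1 = payout / (r - g)

# Hypothetical company: 14% cost of equity, 18% ROE, 40% payout.
implied = implied_forward_pe(0.14, 0.18, 0.40)
actual = 9.5                                   # observed price / next-year EPS (illustrative)
print(f"implied forward P/E: {implied:.2f}, actual: {actual:.2f}")
```

Comparing such implied multiples against realised price-to-next-year-earnings ratios across a sample of listed companies is the kind of test the study performs.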
257

Non-parametric volatility measurements and volatility forecasting models

Du Toit, Cornel 03 1900 (has links)
Assignment (MComm)--Stellenbosch University, 2005. / ENGLISH ABSTRACT: Volatility was originally seen as constant and deterministic, but it was later realised that return series are non-stationary. Owing to this non-stationary nature of returns, there were no reliable ex-post volatility measurements. Subsequently, researchers focused on ex-ante volatility models. It was only then realised that before good volatility models can be created, reliable ex-post volatility measurements need to be defined. In this study we examine non-parametric ex-post volatility measurements in order to obtain approximations of the variances of non-stationary return series. A detailed mathematical derivation and discussion of the already developed volatility measurements, in particular the realised volatility and DST measurements, is given. In theory, the higher the sampling frequency of returns, the more accurate the measurements are. The volatility measurements referred to above, however, all have shortcomings: realised volatility fails if the sampling frequency becomes too high, owing to microstructure effects, while the DST measurement cannot handle changing instantaneous volatility. In this study we introduce a new volatility measurement, termed microstructure realised volatility, that overcomes these shortcomings. This measurement, like realised volatility, is based on quadratic variation theory, but the underlying return model is more realistic. / AFRIKAANSE OPSOMMING: Volatiliteit is oorspronklik as konstant en deterministies beskou, dit was eers later dat besef is dat opbrengste nie-stasionêr is. Betroubare volatiliteits metings was nie beskikbaar nie weens die nie-stasionêre aard van opbrengste. Daarom het navorsers gefokus op vooruitskattingvolatiliteits modelle. Dit was eers op hierdie stadium dat navorsers besef het dat die definieering van betroubare volatiliteit metings 'n voorvereiste is vir die skepping van goeie vooruitskattings modelle. Nie-parametriese volatiliteit metings word in hierdie studie ondersoek om sodoende benaderings van die variansies van die nie-stasionêre opbrengste reeks te beraam. 'n Gedetaileerde wiskundige afleiding en bespreking van bestaande volatiliteits metings, spesifiek gerealiseerde volatiliteit en DST- metings, word gegee. In teorie sal opbrengste wat meer dikwels waargeneem word tot beter akkuraatheid lei. Bogenoemde volatilitieits metings het egter tekortkominge aangesien gerealiseerde volatiliteit faal wanneer dit te hoog raak, weens mikrostruktuur effekte. Aan die ander kant kan die DST meting nie veranderlike oombliklike volatilitiet hanteer nie. Ons stel in hierdie studie 'n nuwe volatilitieits meting bekend, naamlik mikro-struktuur gerealiseerde volatiliteit, wat nie hierdie tekortkominge het nie. Net soos met gerealiseerde volatiliteit sal hierdie meting gebaseer wees op kwadratiese variasie teorie, maar die onderliggende opbrengste model is meer realisties.
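The failure mode the abstract describes — realised volatility becoming biased at very high sampling frequencies because of microstructure noise — is easy to demonstrate. The simulation below is an illustrative sketch of the standard realised-volatility estimator under an assumed noise model; it is not the thesis's microstructure realised volatility estimator.

```python
# Sketch: realised variance as the sum of squared intraday returns, and the upward
# bias that additive microstructure noise causes at very high sampling frequencies.
import numpy as np

rng = np.random.default_rng(4)
true_daily_vol = 0.01
n_ticks = 23400                                    # one observation per second, 6.5 h day
efficient_price = np.cumsum(rng.normal(0, true_daily_vol / np.sqrt(n_ticks), n_ticks))
noisy_price = efficient_price + rng.normal(0, 0.0005, n_ticks)   # bid-ask / noise layer

def realised_variance(log_price: np.ndarray, step: int) -> float:
    """Sum of squared returns sampled every `step` ticks."""
    sampled = log_price[::step]
    return float(np.sum(np.diff(sampled) ** 2))

for step in (1, 60, 300):                          # 1 s, 1 min, 5 min sampling
    rv = realised_variance(noisy_price, step)
    print(f"sampling every {step:>3} ticks: RV = {rv:.6f} (true variance = {true_daily_vol**2:.6f})")
```

At the highest frequency the noise term dominates the estimate, which is exactly why sparser sampling or noise-robust estimators are needed.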
258

Improving the accuracy of prediction using singular spectrum analysis by incorporating internet activity

Badenhorst, Dirk Jakobus Pretorius 03 1900 (has links)
Thesis (MComm)--Stellenbosch University, 2013. / ENGLISH ABSTRACT: Researchers and investors have been attempting to predict stock market activity for years. The possible financial gain that accurate predictions would offer lit a flame of greed and drive that would inspire all kinds of researchers. However, after many of these researchers had failed, they started to hypothesize that a goal such as this is not only improbable, but impossible. Previous predictions were based on historical data of the stock market activity itself and would often incorporate different types of auxiliary data. This auxiliary data ranged as far as imagination allowed in an attempt to find some correlation and some insight into the future that could in turn lead to the figurative pot of gold. More often than not, the auxiliary data would not prove helpful. However, with the birth of the internet, endless amounts of new sources of auxiliary data presented themselves. In this thesis I propose that the near-infinite amount of data available on the internet could provide us with information that would improve stock market predictions. With this goal in mind, the different sources of information available on the internet are considered. Previous studies on similar topics presented possible ways in which we can measure internet activity, which might relate to stock market activity. These studies also gave some insights into the advantages and disadvantages of using some of these sources. These considerations are investigated in this thesis. Since a lot of this work is therefore based on the prediction of a time series, it was necessary to choose a prediction algorithm. Previously used linear methods seemed too simple for prediction of stock market activity, and a new non-linear method, called Singular Spectrum Analysis, is therefore considered. A detailed study of this algorithm is done to ensure that it is an appropriate prediction methodology to use. Furthermore, since we will be including auxiliary information, multivariate extensions of this algorithm are considered as well. Some of the inaccuracies and inadequacies of these current multivariate extensions are studied and an alternative multivariate technique is proposed and tested. This alternative approach addresses the inadequacies of existing methods. With the appropriate methodology and the appropriate sources of auxiliary information chosen, a concluding chapter examines whether predictions that include auxiliary information (obtained from the internet) improve on baseline predictions that are simply based on historical stock market data. / AFRIKAANSE OPSOMMING: Navorsers en beleggers is vir jare al opsoek na maniere om aandeelpryse meer akkuraat te voorspel. Die moontlike finansiële implikasies wat akkurate vooruitskattings kan inhou het 'n vlam van geldgierigheid en dryf wakker gemaak binne navorsers regoor die wêreld. Nadat baie van hierdie navorsers onsuksesvol was, het hulle begin vermoed dat so 'n doel nie net onwaarskynlik is nie, maar onmoontlik. Vorige vooruitskattings was bloot gebaseer op historiese aandeelprys data en sou soms verskillende tipes bykomende data inkorporeer. Die tipes data wat gebruik was het gestrek so ver soos wat die verbeelding toegelaat het, in 'n poging om korrelasie en inligting oor die toekoms te kry wat na die guurlike pot goud sou lei.
Navorsers het gereeld gevind dat hierdie verskillende tipes bykomende inligting nie van veel hulp was nie, maar met die geboorte van die internet het 'n oneindige hoeveelheid nuwe bronne van bykomende inligting bekombaar geraak. In hierdie tesis stel ek dus voor dat die data beskikbaar op die internet dalk vir ons kan inligting gee wat verwant is aan toekomstige aandeelpryse. Met hierdie doel in die oog, is die verskillende bronne van inligting op die internet gebestudeer. Vorige studies op verwante werk het sekere spesifieke maniere voorgestel waarop ons internet aktiwiteit kan meet. Hierdie studies het ook insig gegee oor die voordele en die nadele wat sommige bronne inhou. Hierdie oorwegings word ook in hierdie tesis bespreek. Aangesien 'n groot gedeelte van hierdie tesis dus gebasseer word op die vooruitskatting van 'n tydreeks, is dit nodig om 'n toepaslike vooruitskattings algoritme te kies. Baie navorsers het verkies om eenvoudige lineêre metodes te gebruik. Hierdie metodes het egter te eenvoudig voorgekom en 'n relatiewe nuwe nie-lineêre metode (met die naam "Singular Spectrum Analysis") is oorweeg. 'n Deeglike studie van hierdie algoritme is gedoen om te verseker dat die metode van toepassing is op aandeelprys data. Verder, aangesien ons gebruik wou maak van bykomende inligting, is daar ook 'n studie gedoen op huidige multivariaat uitbreidings van hierdie algoritme en die probleme wat dit inhou. 'n Alternatiewe multivariaat metode is toe voorgestel en getoets wat hierdie probleme aanspreek. Met 'n gekose vooruitskattingsmetode en gekose bronne van bykomende data is 'n gevolgtrekkende hoofstuk geskryf oor of vooruitskattings, wat die bykomende internet data inkorporeer, werklik in staat is om te verbeter op die eenvoudige vooruitskattings, wat slegs gebaseer is op die historiese aandeelprys data.
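For readers unfamiliar with Singular Spectrum Analysis, the univariate core of the method can be sketched in a few lines: embed the series in a trajectory (Hankel) matrix, take its SVD, keep the leading components, and reconstruct by diagonal averaging. The window length, component count and data below are illustrative assumptions; the thesis's multivariate extension with internet-activity series is not reproduced here.

```python
# Minimal univariate SSA sketch: embed, decompose, reconstruct from leading components.
import numpy as np

def ssa_reconstruct(series: np.ndarray, window: int, n_components: int) -> np.ndarray:
    n = len(series)
    k = n - window + 1
    # Trajectory (Hankel) matrix: columns are lagged windows of the series.
    X = np.column_stack([series[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Keep the leading singular triplets and rebuild the trajectory matrix.
    X_hat = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    # Diagonal averaging (Hankelisation) turns X_hat back into a series.
    recon = np.zeros(n)
    counts = np.zeros(n)
    for i in range(window):
        for j in range(k):
            recon[i + j] += X_hat[i, j]
            counts[i + j] += 1
    return recon / counts

rng = np.random.default_rng(5)
t = np.arange(300)
prices = 100 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 40) + rng.normal(0, 0.5, 300)
trend_plus_cycle = ssa_reconstruct(prices, window=60, n_components=3)
print("last 5 reconstructed values:", np.round(trend_plus_cycle[-5:], 2))
```

Forecasting then extends the reconstructed components forward; the multivariate variant stacks auxiliary series (such as internet-activity measures) into the trajectory matrix alongside the price series.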
259

Evidence of volatility clustering on the FTSE/JSE Top 40 Index

Louw, Jan Paul 12 1900 (has links)
Thesis (MBA (Business Management))--Stellenbosch University, 2008. / ENGLISH ABSTRACT: This research report investigated whether evidence of volatility clustering exists on the FTSE/JSE Top 40 Index. The presence of volatility clustering has practical implications relating to market decisions as well as the accurate measurement and reliable forecasting of volatility. This research report was conducted as an in-depth analysis of volatility, measured over five different return interval sizes covering the sample in non-overlapping periods. The volatility of each return interval size was analysed to reveal its distributional characteristics and whether it violated the normality assumption. The volatility was also analysed to identify in what way, if any, subsequent periods are correlated. For each of the interval sizes, one-step-ahead volatility forecasting was conducted using Linear Regression, Exponential Smoothing, GARCH(1,1) and EGARCH(1,1) models. The results were analysed using appropriate criteria to determine which of the forecasting models were more powerful. The forecasting models range from very simple to very complex; the rationale was to determine whether more complex models outperform simpler ones. The analysis showed that there was sufficient evidence to conclude that there was volatility clustering on the FTSE/JSE Top 40 Index. It further showed that more complex models such as the GARCH(1,1) and EGARCH(1,1) only marginally outperformed less complex models, and do not offer any real benefit over simpler models such as Linear Regression. This can be ascribed to the mean reversion effect of volatility and gives further insight into the volatility structure over the sample period. / AFRIKAANSE OPSOMMING: Die navorsingsverslag ondersoek die FTSE/JSE Top 40 Indeks om te bepaal of daar genoegsame bewyse is dat volatiliteitsbondeling teenwoordig is. Die teenwoordigheid van volatiliteitsbondeling het praktiese implikasies vir besluite in finansiele markte en akkurate en betroubare volatiliteitsvooruitskattings. Die verslag doen 'n diepgaande ontleding van volatiliteit, gemeet oor vyf verskillende opbrengs interval groottes wat die die steekproef dek in nie-oorvleuelende periodes. Elk van die opbrengs interval groottes se volatiliteitsverdelings word ontleed om te bepaal of dit verskil van die normaalverdeling. Die volatiliteit van die intervalle word ook ondersoek om te bepaal tot watter mate, indien enige, opeenvolgende waarnemings gekorreleer is. Vir elk van die interval groottes word 'n een-stap-vooruit vooruitskatting gedoen van volatiliteit. Dit word gedoen deur middel van Lineêre Regressie, Eksponensiële Gladstryking, GARCH(1,1) en die EGARCH(1,1) modelle. Die resultate word ontleed deur middel van erkende kriteria om te bepaal watter model die beste vooruitskattings lewer. Die modelle strek van baie eenvoudig tot baie kompleks, die rasionaal is om te bepaal of meer komplekse modelle beter resultate lewer as eenvoudiger modelle. Die ontleding toon dat daar genoegsame bewyse is om tot die gevolgtrekking te kom dat daar volatiliteitsbondeling is op die FTSE/JSE Top 40 Indeks. Dit toon verder dat meer komplekse vooruitskattingsmodelle soos die GARCH(1,1) en die EGARCH(1,1) slegs marginaal beter presteer het as die eenvoudiger vooruitskattingsmodelle en nie enige werklike voordeel soos Lineêre Regressie bied nie. Dit kan toegeskryf word aan die neiging van volatiliteit am terug te keer tot die gemiddelde, wat verdere insig lewer oor volatiliteit gedurende die steekproef.
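The two steps described — detecting volatility clustering and producing one-step-ahead GARCH(1,1)/EGARCH(1,1) forecasts — can be sketched with the arch and statsmodels packages. The returns are simulated placeholders, and the Ljung-Box test on squared returns is one common clustering diagnostic, not necessarily the thesis's exact procedure.

```python
# Sketch: test for volatility clustering (Ljung-Box on squared returns), then
# produce one-step-ahead variance forecasts from GARCH(1,1) and EGARCH(1,1).
import numpy as np
from arch import arch_model
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(6)
returns = 100 * rng.normal(0, 0.01, 1500)        # stand-in for index log-returns (%)

# Significant autocorrelation in squared returns is the usual clustering signal.
print(acorr_ljungbox(returns ** 2, lags=[10]))

for vol in ("GARCH", "EGARCH"):
    res = arch_model(returns, vol=vol, p=1, q=1).fit(disp="off")
    forecast = res.forecast(horizon=1)
    print(vol, "one-step-ahead variance:", float(forecast.variance.iloc[-1, 0]))
```

Comparing these forecasts against simpler benchmarks (for example a regression on lagged realised volatility) reproduces the kind of model comparison the abstract reports.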
260

An econophysical investigation : using the Boltzmann distribution to determine market temperature as applied to the JSE All Share Index

Brand, Rene 03 1900 (has links)
Thesis (MBA (Business Management))--University of Stellenbosch, 2009. / ENGLISH ABSTRACT: Econophysics is a relatively new branch of physics. It entails the use of models from physics applied to economics. The distributions of financial time series are the aspect most intensely studied by physicists. This study is based on a study by Kleinert and Chen, who applied the Boltzmann distribution to stock exchange data to define a market temperature that may be used by investors to indicate an impending stock market crash. Most econophysicists analysed the tail regions of the distributions, as the tails represent risk in financial data. This study's focus of analysis, on the other hand, is the characterisation of the central portion of the probability distribution. The Boltzmann distribution, a cornerstone of statistical physics, yields an exponential distribution. The objective of this study is to investigate the suitability of using a market volatility forecasting method from econophysics, namely the Boltzmann/market temperature method. As an econometric benchmark the ARCH/GARCH method is used. Stock market indices are known to be non-normally (non-Gaussian) distributed. The distribution pattern of a stock market index of reasonably high sampling frequency (typically interday or intraday) is leptokurtic with heavy tails. Mesoscopic (interday) distributions of financial time series have been found to be exponential distributions. If the empirical exponential distribution is therefore interpreted as a Boltzmann distribution, then a market temperature can be calculated from the exponential distribution. Empirical data for this study are the daily closing values of the Johannesburg Stock Exchange (JSE) All Share Index (ALSI) and the Standard & Poor 500 (S & P 500) index for the period 1995 through to 2008. The Kleinert and Chen study made use of intraday data obtained from established markets. This study differs from the Kleinert and Chen study in that interday data obtained from an emerging market, namely the South African stock market, are used. Neither of the aforementioned two differences had a significant influence on the results of this study. The JSE ALSI log-return data display non-Gaussian properties and the Laplace (double exponential) distribution fits the data well. A plot of the market temperature provided a clear indication of when stock market crashes occurred. Results of the econophysical (Boltzmann/market temperature) method compared well to results of the econometric (ARCH/GARCH) method and, subject to certain improvements, the method can be utilised successfully. A leptokurtic, non-Gaussian nature was established for daily log-returns of the JSE ALSI and the S & P 500 index. The Laplace (double exponential) distribution fits the annual log-returns of the JSE ALSI and S & P 500 index well. As a result of the good Laplace fit, annual market temperatures could be calculated for the JSE ALSI and the S & P 500 index. The market temperature method was effective in identifying market crashes for both indices, but a limitation of the method is that only annual market temperatures can be determined. The availability of intraday stock index data should improve the interval for which market temperature can be determined. / AFRIKAANSE OPSOMMING: Ekonofisika is 'n relatiewe nuwe studieveld. Dit behels die toepassing van fisiese modelle op finansiële data. Die waarskynlikheidsversdelings van finansiële tydreekse is die aspek wat meeste deur fisisie bestudeer word.
Hierdie studie is gebaseer op ‘n studie deur Kleinert en Chen. Hulle het die Boltzmann-verspreiding op ‘n aandele-indeks toegepas en ‘n mark-temperatuur bepaal. Hierdie mark-temperatuur kan deur ontleders gebruik word as waarskuwingsmeganisme teen moontlike aandelebeurs ineenstortings. Die meeste fisisie het die uiterste areas van die verspreidingskurwes geanaliseer omdat hierdie uiterste area risiko in finansiële data verteenwoordig. Die analitiese fokus van hierdie studie, aan die ander kant, is die karakterisering van die die sentrale areas van die waarskeinlikheidsverdeling. Die Boltzmann verspreiding, die hoeksteen van Statistiese Fisika lewer ‘n eksponensiële waarskynlikheidsverdeling. Die doel van hierdie studie is om ‘n ondersoek te doen na die geskiktheid van die gebruik van ‘n ekonofisiese, vooruitskattingsmetode, naamlik die Boltzmann/mark-temperatuur model. As ekonometriese verwysing is die “ARCH/GARCH” metode toegepas. Aandelemark indekse is bekend vir die nie-Gaussiese verspreiding daarvan. Die verspreidingspatroon van ‘n aandelemark indeks met‘n redelike hoë steekproef frekwensie (in die orde van ‘n dag of minder) is leptokurties met breë stert-dele. Mesoskopiese (interdag) verspreidings van finansiële tydreekse is getipeer as eksponensieël. Indien die empiriese eksponensiële-verspreiding as ‘n Boltzmann-verspreiding geinterpreteer word, kan ‘n mark-temperatuur daarvoor bereken word. Empiriese data vir die gebruik in hierdie studie is in die vorm van daaglikse sluitingswaardes van die Johannesburgse Effektebeurs (JSE) se Alle Aandele Indeks (ALSI) en die Standard en Poor 500 (S & P 500) indeks vir die periode 1995 tot en met 2008. Die Kleinert en Chen studie het van intradag data vanuit ‘n ontwikkelde mark gebruik gemaak. Hierdie studie verskil egter van die Kleinert en Chen studie deurdat van interdag data vanuit ‘n opkomende mark, naamlik die Suid-Afrikaanse aandelemark, gebruik is. Nie een van die twee voorafgaande verskille het ‘n beduidende invloed op die resultate van hierdie studie gehad nie. Die JSE ALSI se logaritmiese opbrengs data vertoon nie-Gaussiese eienskappe en die Laplace (dubbeleksponensiële) verspreiding beskryf die data goed. ‘n Grafiek van die mark-temperatuur vertoon duidelik wanneer aandelemarkineenstortings plaasgevind het. Resultate van die ekonofisiese (Boltzmann/mark-temperatuur) metode vergelyk goed met resultate van die ekonometriese (“ARCH/GARCH”) metode en onderhewig aan sekere verbeteringe kan dit met sukses toegepas word. ‘n Leptokurtiese, nie-Gaussiese aard is vir daaglike opbrengswaardes vir die JSE ALSI en die S & P 500 indeks vasgestel. ‘n Laplace (dubbel-eksponensiële) verspreiding kan goed op die jaarlikse logaritmiese opbrengste van die JSE ALSI en die S & P 500 indeks toegepas word. As gevolg van die goeie aanwending van die Laplace-verspreiding kan ‘n jaarlikse mark-temperatuur vir die JSE ALSI en die S & P 500 indeks bereken word. Die mark-temperatuur metode is effektief in die identifisering van aandelemarkineenstorings vir beide indekse, hoewel daar ‘n beperking is op die aantal mark-temperature wat bereken kan word. Die beskikbaarheid van intradag aandele indekswaardes behoort die interval waarvoor mark-temperature bereken kan word te verbeter.
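The mechanics described in this abstract — fit a Laplace (double-exponential) distribution to each year's daily log-returns and read its scale parameter as a market-temperature proxy — can be sketched as follows. The return series is simulated, and the scale parameter is used directly as the temperature; the exact normalisation Kleinert and Chen apply may differ, so this is an illustration of the idea rather than their method.

```python
# Sketch: fit a Laplace distribution to each year's daily log-returns and treat its
# scale parameter as a market-temperature proxy; a sharp rise flags turbulent years.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(7)
dates = pd.bdate_range("1995-01-01", "2008-12-31")
# Stand-in index returns: calmer early years, a more volatile final year.
scale = np.where(dates.year < 2008, 0.008, 0.025)
log_returns = pd.Series(rng.laplace(0, scale), index=dates)

temperature = {}
for year, r in log_returns.groupby(log_returns.index.year):
    loc, b = stats.laplace.fit(r)          # b is the exponential decay ("temperature") scale
    temperature[year] = b

median_temp = np.median(list(temperature.values()))
for year, temp in temperature.items():
    flag = "  <- elevated" if temp > 2 * median_temp else ""
    print(f"{year}: temperature = {temp:.4f}{flag}")
```

Applied to real JSE ALSI or S & P 500 closing values, years containing crashes would show up as sharply elevated temperatures, which is the signal the study uses.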
