41

Machine Learning for Market Prediction : Soft Margin Classifiers for Predicting the Sign of Return on Financial Assets

Abo Al Ahad, George, Salami, Abbas January 2018
Forecasting procedures have found applications in a wide variety of areas within finance, yet forecasting remains one of the most challenging problems in the field. Faced with an immense variety of economic data, stakeholders aim to understand the current and future state of the market. Since it is hard for a human to make sense of such large amounts of data, different modeling techniques have been applied to extract useful information from financial databases, machine learning being among the most recent of these techniques. Binary classifiers such as Support Vector Machines (SVMs) have to some extent been used for this purpose, and extensions of the algorithm have been developed with increased prediction performance as the main goal. The objective of this study has been to develop a process for improving performance when predicting the sign of returns of financial time series with soft margin classifiers. An analysis of the algorithms is presented, followed by a description of the methodology used. The developed process, which combines several of the presented soft margin classifiers with other aspects of kernel methods such as Multiple Kernel Learning, has shown promising results over the long term: the capability of capturing different market conditions improves when several models and kernels are incorporated instead of only a single one. The results are, however, mostly congruent with earlier studies in this field. Furthermore, two research questions are answered, concerning the complexity of the kernel functions used by the SVM and the robustness of the process as a whole. Complexity here refers to achieving more complex feature maps by combining kernels, either adding, multiplying or functionally transforming them. It is not concluded that increased complexity leads to a consistent improvement; however, the combined kernel function is superior to the individual models during some periods of the time series used in this thesis. The robustness has been investigated for different signal-to-noise ratios, where it has been observed that windows with previously poor performance are more exposed to the impact of noise.
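As a rough illustration of the kind of technique this abstract describes (not the authors' actual process), the sketch below uses scikit-learn's soft margin SVC with a custom kernel that adds an RBF and a polynomial kernel, and predicts the sign of the next-day return. The data, kernel parameters and split are placeholder assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

# Toy setup (assumption): 5 standardized lagged returns as features,
# label = sign of the next day's return.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))
y = np.sign(X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=600))
y[y == 0] = 1.0

def combined_kernel(A, B):
    # One way to build a "more complex" feature map: add an RBF and a
    # polynomial kernel (sums and products of valid kernels are valid kernels).
    return rbf_kernel(A, B, gamma=0.5) + polynomial_kernel(A, B, degree=2)

split = 450  # respect the temporal order: train on the past, test on the future
clf = SVC(kernel=combined_kernel, C=1.0)  # C controls the soft margin
clf.fit(X[:split], y[:split])
print("out-of-sample directional accuracy:", clf.score(X[split:], y[split:]))
```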
42

Modelování a predikce volatility finančních časových řad směnných kurzů / Modeling and Forecasting Volatility of Financial Time Series of Exchange Rates

Žižka, David January 2008
The thesis focuses on modelling and forecasting the volatility of exchange rate time series. The basic approach used for conditional variance modelling is the class of (G)ARCH models and their variants, while the conditional mean is modelled with autoregressive (AR) models. Because one of the basic assumptions of these models (the normality assumption) is violated, an important part of the work is a detailed analysis of the unconditional distribution of returns, which enables the selection of a suitable distributional assumption for the error terms of the (G)ARCH models. Using a leptokurtic distributional assumption leads to a major improvement in volatility forecasting compared to the normal distribution. Accordingly, the frequently applied GED and Student's t distributions represent the keystones of this work. In addition, less common distributions are applied, e.g. the Johnson SU and the normal inverse Gaussian distributions. A large number of linear and non-linear models have been tested for modelling volatility. The linear models comprise ARCH, GARCH, GARCH-in-mean, integrated GARCH, fractionally integrated GARCH and HYGARCH. Where a leverage effect is present, the non-linear EGARCH, GJR-GARCH, APARCH and FIEGARCH models are applied. Using the models selected according to the chosen criteria, volatility forecasts are made over both short-term and long-term horizons. The outcomes of traditional approaches based on parametric (G)ARCH models are compared with semi-parametric, neural-network-based concepts that are widely applicable in clustering as well as in time series prediction. In conclusion, the common and differing properties of the analyzed exchange rate time series are described. The author further summarizes the models that provide the best forecasts of the volatility of the selected time series, including recommendations for their modelling. Such models can be further used to measure market risk with the Value at Risk method, or in future price estimation, where future volatility is an essential prerequisite for interval forecasts.
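For readers who want to try this class of models, here is a minimal sketch using the third-party `arch` package (an assumption; the thesis does not name its software): an AR(1) conditional mean with a GARCH(1,1) conditional variance and Student's t errors, fitted to simulated returns rather than the thesis's exchange-rate data.

```python
import numpy as np
import pandas as pd
from arch import arch_model

# Simulated fat-tailed daily log returns in percent (the arch package
# is more numerically stable on the percent scale).
rng = np.random.default_rng(1)
returns = pd.Series(rng.standard_t(df=6, size=1500) * 0.6)

# AR(1) conditional mean, GARCH(1,1) conditional variance, Student's t errors.
# Swapping dist="t" for "ged" or "skewt" changes the error distribution.
am = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, q=1, dist="t")
res = am.fit(disp="off")
print(res.summary())

# Short- and longer-horizon conditional variance forecasts (h = 1..10).
fc = res.forecast(horizon=10)
print(fc.variance.iloc[-1])
```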
43

Využití prostředků umělé inteligence pro podporu na kapitálových trzích / The Use of Means of Artificial Intelligence for the Decision Making Support on Stock Market

Jasanský, Michal January 2013
This diploma thesis deals with the prediction of financial time series on capital markets using artificial intelligence methods. Several dynamic artificial neural network architectures are created, trained and subsequently used to predict future share price movements. Based on the results, an assessment and recommendations for working with artificial neural networks are provided.
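Purely as an illustration of the general idea (the thesis does not specify its architectures), a minimal scikit-learn sketch that trains a small feedforward network to classify the direction of the next daily return from the previous five returns; the data and network size are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder daily returns; rows of X hold the 5 most recent returns,
# the label is whether the following day's return is positive.
rng = np.random.default_rng(2)
r = rng.normal(0.0, 0.01, size=1000)
X = np.column_stack([r[i:len(r) - 5 + i] for i in range(5)])
y = (r[5:] > 0).astype(int)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0),
)
split = 800  # train on the earlier part, test on the later part
model.fit(X[:split], y[:split])
print("out-of-sample hit rate:", model.score(X[split:], y[split:]))
```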
44

Multivariate Financial Time Series and Volatility Models with Applications to Tactical Asset Allocation / Multivariata finansiella tidsserier och volatilitetsmodeller med tillämpningar för taktisk tillgångsallokering

Andersson, Markus January 2015
Financial markets have a complex structure, and modelling techniques have recently become more and more complicated. For a portfolio manager it is therefore very important to find better and more sophisticated modelling techniques, especially after the 2007-2008 banking crisis. The idea of this thesis is to find the connections between components of the macroeconomic environment and portfolios consisting of assets from OMX Stockholm 30, and to use these relationships to perform Tactical Asset Allocation (TAA). The more specific aim of the project is to show that dynamic modelling techniques outperform static models in portfolio theory.
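To make the dynamic-versus-static contrast concrete, here is a small sketch with invented factor names and simulated data (not the thesis's dataset or models): a single full-sample regression of portfolio returns on macro factors is compared with 60-month rolling regressions whose exposures are allowed to drift over time.

```python
import numpy as np
import pandas as pd

# Simulated monthly portfolio returns driven by two hypothetical macro factors.
rng = np.random.default_rng(3)
n = 240
macro = pd.DataFrame({
    "ip_growth": rng.normal(size=n),      # hypothetical industrial-production factor
    "rate_change": rng.normal(size=n),    # hypothetical interest-rate factor
})
port = 0.3 * macro["ip_growth"] - 0.2 * macro["rate_change"] + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), macro.to_numpy()])

# "Static" view: one set of exposures estimated over the whole sample.
beta_static, *_ = np.linalg.lstsq(X, port.to_numpy(), rcond=None)

# "Dynamic" view: 60-month rolling regressions, so exposures can vary over time.
window = 60
rows = []
for t in range(window, n):
    b, *_ = np.linalg.lstsq(X[t - window:t], port.to_numpy()[t - window:t], rcond=None)
    rows.append(b)
rolling_betas = pd.DataFrame(rows, columns=["alpha", "ip_growth", "rate_change"])

print("static exposures:", np.round(beta_static, 3))
print(rolling_betas.tail())
```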
45

Imputation and Generation of Multidimensional Market Data

Wall, Tobias, Titus, Jacob January 2021
Market risk is one of the most prevalent risks to which financial institutions are exposed, and the most popular approach to quantifying it is Value at Risk. Organisations and regulators often require a long historical horizon of the relevant financial variables to estimate risk exposures. A long horizon stresses the completeness of the available data, something risk applications need to handle. The goal of this thesis is to evaluate and propose methods to impute financial time series. The performance of the methods is measured with respect to both price and risk metric replication. Two different use cases are evaluated: values missing at random within the time series, and consecutive missing values at the end of a time series. In total, five models are applied to each use case. For the first use case, the results show that all models perform better than the naive approach; the Lasso model lowered the price replication error by 35% compared to the naive model. The results for the second use case are ambiguous, but all models still performed better than the naive model in terms of risk metric replication. In general, all models systematically underestimated the downstream risk metrics, implying that they failed to replicate the fat-tailed property of the price movements.
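As one way to reproduce the flavour of the Lasso-based imputation mentioned above (a sketch on simulated data, not the thesis's pipeline), scikit-learn's IterativeImputer can be combined with a Lasso regressor so that each series with gaps is regressed on the remaining series.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates IterativeImputer)
from sklearn.impute import IterativeImputer
from sklearn.linear_model import Lasso

# Simulated panel of correlated return series (in percent) with 5% of values removed at random.
rng = np.random.default_rng(4)
common = rng.normal(0.0, 1.0, size=(1000, 1))
data = common + rng.normal(0.0, 0.5, size=(1000, 5))
df = pd.DataFrame(data, columns=[f"asset_{i}" for i in range(5)])
mask = rng.random(df.shape) < 0.05
df_missing = df.mask(mask)

# Each column with gaps is iteratively regressed on the other columns using a Lasso.
imputer = IterativeImputer(estimator=Lasso(alpha=0.001), max_iter=10, random_state=0)
imputed = imputer.fit_transform(df_missing)

rmse = np.sqrt(np.mean((imputed - df.to_numpy())[mask] ** 2))
print("RMSE on the imputed entries:", rmse)
```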
46

Online Non-linear Prediction of Financial Time Series Patterns

da Costa, Joel 11 September 2020
We consider a mechanistic non-linear machine learning approach to learning signals in financial time series data. A modularised and decoupled algorithmic framework is established and demonstrated on daily sampled closing-price time series for JSE equity markets. The input patterns are based on data windows preprocessed into sequences of daily, weekly and monthly or quarterly sampled feature changes (log feature fluctuations). The data processing is split into a batch step, in which features are learnt with a Stacked AutoEncoder (SAE) via unsupervised learning, followed by both batch and online supervised learning carried out on Feedforward Neural Networks (FNNs) using these features. The FNN output is a point prediction of measured time-series feature fluctuations (log-differenced data) in the future (ex-post). Weight initialization for these networks is implemented with restricted Boltzmann machine pretraining and variance-based initialization. The validity of the FNN backtest results is established under a rigorous assessment of backtest overfitting using both Combinatorially Symmetric Cross-Validation and Probabilistic and Deflated Sharpe Ratios. The results are further used to develop a view on the phenomenology of financial markets and the value of complex historical data under unstable dynamics.
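A compressed sketch of the two-stage idea (unsupervised autoencoder features, then a supervised feedforward classifier), written with tf.keras on placeholder data; the real framework is modular and uses stacked autoencoders, RBM pretraining and online updates, none of which is reproduced here.

```python
import numpy as np
from tensorflow.keras import Model, layers

# Placeholder feature windows (e.g. log feature fluctuations) and up/down labels.
rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 30)).astype("float32")
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=1000) > 0).astype("float32")

# Stage 1: unsupervised feature learning with a single autoencoder
# (a stacked autoencoder would repeat this layer by layer).
inp = layers.Input(shape=(30,))
code = layers.Dense(10, activation="relu")(inp)
recon = layers.Dense(30)(code)
autoencoder = Model(inp, recon)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=10, batch_size=32, verbose=0)

# Stage 2: supervised learning on the encoded features with a small FNN.
encoder = Model(inp, code)
features = encoder.predict(X, verbose=0)
clf_in = layers.Input(shape=(10,))
hidden = layers.Dense(8, activation="relu")(clf_in)
out = layers.Dense(1, activation="sigmoid")(hidden)
fnn = Model(clf_in, out)
fnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
fnn.fit(features, y, epochs=10, batch_size=32, verbose=0)
print(fnn.evaluate(features, y, verbose=0))
```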
47

Multifraktalita a prediktabilita finančních časových řad / On multifractality and predictability of financial time series

Heller, Michael January 2021
The aim of this thesis is to examine the empirical relationship between the multifractality of financial time series and their returns. We approach the multifractality of a given time series as a measure of its complexity. Multifractal financial time series exhibit repeating self-similar patterns. Multifractality could be a good predictor of stock returns or a factor that can be used in asset pricing. We expected that, by capturing the complexity of a given time series in a model, a positive or a negative risk premium for investing in "more multifractal" assets could be found. Daily prices of 31 stock indices and daily returns of 10-year US government bonds were downloaded. All the data were recorded between 2012 and 2021. After estimating the multifractal spectra of all stock indices with the MF-DFA method, we ordered the indices from the least to the most multifractal. We then constructed a "multifractal portfolio" holding a long position in the 7 most multifractal and a short position in the 7 least multifractal stock indices. A Fama-MacBeth regression with the market risk premium and the multifractal variable as independent variables was applied. Multifractality was found in all examined financial time series. We also found a very low negative risk premium for holding "a multifractal...
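For concreteness, a compact and deliberately simplified NumPy implementation of MF-DFA that estimates the generalized Hurst exponents h(q); a full implementation would also scan the profile backwards and use more scales, so treat this as a sketch rather than the procedure used in the thesis.

```python
import numpy as np

def mfdfa(x, scales, q_values, order=1):
    """Simplified MF-DFA: generalized Hurst exponents h(q) of a series x."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())                 # integrated "profile" of the series
    hurst = []
    for q in q_values:
        fq = []
        for s in scales:
            n_seg = len(profile) // s
            segments = profile[:n_seg * s].reshape(n_seg, s)
            t = np.arange(s)
            # Detrend each segment with a polynomial fit, keep residual variances.
            f2 = np.array([
                np.mean((seg - np.polyval(np.polyfit(t, seg, order), t)) ** 2)
                for seg in segments
            ])
            if abs(q) < 1e-12:                        # q = 0: logarithmic averaging
                fq.append(np.exp(0.5 * np.mean(np.log(f2))))
            else:
                fq.append(np.mean(f2 ** (q / 2.0)) ** (1.0 / q))
        # h(q) is the slope of log F_q(s) versus log s.
        hurst.append(np.polyfit(np.log(scales), np.log(fq), 1)[0])
    return np.array(hurst)

# White noise is (mono)fractal with h(q) close to 0.5 for every q; a clearly
# varying h(q) across q would indicate multifractality.
rng = np.random.default_rng(6)
print(mfdfa(rng.normal(size=4000), scales=[16, 32, 64, 128, 256], q_values=[-3, -1, 0, 1, 3]))
```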
48

LSTM-based Directional Stock Price Forecasting for Intraday Quantitative Trading / LSTM-baserad aktieprisprediktion för intradagshandel

Mustén Ross, Isabella January 2023
Deep learning techniques have exhibited remarkable capabilities in capturing nonlinear patterns and dependencies in time series data. This study therefore investigates the application of the Long Short-Term Memory (LSTM) algorithm to stock price prediction for intraday quantitative trading, using Swedish stocks in the OMXS30 index from February 28, 2013, to March 1, 2023. Contrary to previous research [12, 32] suggesting that past movements or trends in stock prices cannot predict future movements, our analysis finds limited evidence supporting this claim during periods of high volatility. We find that incorporating stock-specific technical indicators does not significantly enhance the predictive capacity of the model. Instead, we observe a trade-off: by removing the seasonal component and leveraging feature engineering and hyperparameter tuning, the LSTM model becomes proficient at predicting stock price movements. Consequently, the model consistently demonstrates high accuracy in determining price direction due to consistent seasonality. Additionally, training the model on predicted return differences, rather than on price levels, further improves accuracy. By incorporating a novel long-only and long-short trading strategy based on the one-day-ahead predicted price, our model effectively captures stock price movements and exploits market inefficiencies, ultimately maximizing portfolio returns. Consistent with prior research [14, 15, 31, 32], our LSTM model outperforms the ARIMA model in accurately predicting one-day-ahead stock prices. Portfolio returns consistently outperform the stock market index, generating profits over the entire time period. The optimal portfolio achieves an average daily return of 1.2%, surpassing the 0.1% average daily return of the OMXS30 index. The algorithmic trading model demonstrates exceptional precision, with a 0.996 accuracy rate when executing trades based on predicted directional stock movements. This performance leads to cumulative and annualized excess returns that surpass the index return for the same period by a factor of 800.
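To show the shape of such a model (placeholder data and an illustrative architecture, not the network tuned in the thesis), the following tf.keras sketch maps the last 20 daily returns to the probability of an up-move the following day.

```python
import numpy as np
from tensorflow.keras import Model, layers

# Placeholder daily returns; in the thesis these would be OMXS30 constituents.
rng = np.random.default_rng(7)
returns = rng.normal(0.0, 0.01, size=2000).astype("float32")

# Rolling windows of the last 20 returns; label = direction of the next day's return.
window = 20
X = np.stack([returns[i:i + window] for i in range(len(returns) - window)])[..., np.newaxis]
y = (returns[window:] > 0).astype("float32")

inp = layers.Input(shape=(window, 1))
hidden = layers.LSTM(32)(inp)
prob_up = layers.Dense(1, activation="sigmoid")(hidden)   # P(next-day return > 0)
model = Model(inp, prob_up)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

split = 1600  # respect the temporal order when splitting
model.fit(X[:split], y[:split], epochs=3, batch_size=64, verbose=0)
print("hold-out directional accuracy:", model.evaluate(X[split:], y[split:], verbose=0)[1])
```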
49

The Performance of Market Risk Models for Value at Risk and Expected Shortfall Backtesting : In the Light of the Fundamental Review of the Trading Book / Bakåttest av VaR och ES i marknadsriskmodeller

Dalne, Katja January 2017
The global financial crisis that took off in 2007 gave rise to several adjustments of the risk regulation for banks. An extensive adjustment, to be implemented in 2019, is the Fundamental Review of the Trading Book (FRTB). It proposes to use Expected Shortfall (ES) as the risk measure instead of the currently used Value at Risk (VaR), as well as applying varying liquidity horizons based on the risk levels of the assets involved. A major difficulty in implementing the FRTB lies in the backtesting of ES. Righi and Ceretta propose a robust ES backtest based on Monte Carlo simulation. It is flexible since it does not assume any probability distribution and can be performed without waiting for an entire backtesting period. Implementing some commonly used VaR backtests as well as the ES backtest by Righi and Ceretta gives an indication of which risk models are the most accurate from both a VaR and an ES backtesting perspective. It can be concluded that a model that is satisfactory from a VaR backtesting perspective is not necessarily so from an ES backtesting perspective, and vice versa. Overall, the models that are satisfactory from a VaR backtesting perspective turn out to be probably too conservative from an ES backtesting perspective. Considering the confidence levels proposed by the FRTB, from a VaR backtesting perspective a risk measure model with a normal copula and a hybrid distribution, with the generalized Pareto distribution in the tails and the empirical distribution in the centre, together with GARCH filtration, is the most accurate one, while from an ES backtesting perspective a risk measure model with a univariate Student's t distribution with ν ≈ 7, together with GARCH filtration, is the most accurate one for implementation. Thus, when implementing the FRTB, a bank will need to compromise between obtaining a good VaR model, potentially resulting in conservative ES estimates, and obtaining a less satisfactory VaR model, possibly resulting in more accurate ES estimates. The thesis was performed at SAS Institute, an American IT company that, among other things, develops software for risk management. Targeted customers are banks and other financial institutions. Investigating the FRTB serves as a potential advantage for the company when approaching customers that are to implement the regulatory framework in the near future. / Risk management, financial time series, Value at Risk, Expected Shortfall, Monte Carlo simulation, GARCH modelling, copulas, hybrid distributions, generalized Pareto distribution, extreme value theory, backtesting, liquidity horizons, Basel regulatory framework
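The Righi and Ceretta ES backtest itself is beyond a short snippet, but the following sketch shows the surrounding machinery on simulated data: a rolling historical-simulation VaR and ES at the FRTB's 97.5% level, plus the standard Kupiec unconditional-coverage test of the VaR violations (a common VaR backtest, not the Righi and Ceretta procedure). All data and window choices are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
returns = rng.standard_t(df=5, size=1250) * 0.01     # simulated fat-tailed daily P&L

alpha, window = 0.975, 250                           # FRTB-style 97.5% level, 1-year window
violations, es_gap = [], []
for t in range(window, len(returns)):
    hist = returns[t - window:t]
    var = np.quantile(hist, 1 - alpha)               # historical-simulation VaR (a loss quantile)
    es = hist[hist <= var].mean()                    # historical ES: mean loss beyond VaR
    violations.append(returns[t] < var)
    if returns[t] < var:
        es_gap.append(returns[t] - es)               # realised return minus ES forecast on a violation day

# Kupiec unconditional-coverage test of the VaR violation frequency.
x, T, p = int(np.sum(violations)), len(violations), 1 - alpha
pi_hat = x / T

def loglik(prob):
    return (T - x) * np.log(1 - prob) + x * np.log(prob)

lr_uc = -2.0 * (loglik(p) - loglik(pi_hat))
p_value = 1.0 - stats.chi2.cdf(lr_uc, df=1)
print(f"violations {x}/{T} (expected about {p * T:.1f}), Kupiec p-value {p_value:.3f}")
print("mean realised return minus ES forecast on violation days:", np.mean(es_gap))
```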
50

Applications of Advanced Time Series Models to Analyze the Time-varying Relationship between Macroeconomics, Fundamentals and Pan-European Industry Portfolios / Anwendungen moderner Zeitreihenverfahren zur Analyse zeitvariabler Zusammenhänge zwischen gesamtwirtschaftlichen Entwicklungen, Fundamentaldaten und europäischen Branchenportfolios

Mergner, Sascha 04 March 2008
No description available.
