1

A study of forecasts in Financial Time Series using Machine Learning methods

Asokan, Mowniesh January 2022
Forecasting financial time series is one of the most challenging problems in economics and business: markets are highly complex, driven by non-linear factors and uncertainty, and prices move up and down without any obvious pattern. Based on historical univariate close prices from the S&P 500, SSE, and FTSE 100 indexes, this thesis forecasts future values using two different approaches: a classical one, comprising a seasonal ARIMA (SARIMA) model and a hybrid ARIMA-GARCH model, and an LSTM neural network. Each method is evaluated at several forecast horizons. Experimental results show that the LSTM and the hybrid ARIMA-GARCH model perform better than the SARIMA model. Model performance is measured with the Root Mean Squared Error (RMSE) and the Mean Absolute Error (MAE).
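As a rough illustration of the classical baseline described in this abstract, the sketch below fits a SARIMA model to a univariate close-price series and scores a held-out horizon with RMSE and MAE. The orders (1, 1, 1) x (1, 0, 1, 5) are placeholder choices, not the thesis's fitted values.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import mean_absolute_error, mean_squared_error
from statsmodels.tsa.statespace.sarimax import SARIMAX

def sarima_forecast(close: pd.Series, horizon: int = 30):
    """Fit a seasonal ARIMA on univariate close prices, forecast `horizon` steps."""
    train, test = close[:-horizon], close[-horizon:]
    fitted = SARIMAX(train, order=(1, 1, 1), seasonal_order=(1, 0, 1, 5)).fit(disp=False)
    pred = fitted.forecast(steps=horizon)
    rmse = float(np.sqrt(mean_squared_error(test, pred)))   # RMSE, as in the thesis
    mae = float(mean_absolute_error(test, pred))            # MAE, as in the thesis
    return pred, rmse, mae
```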
2

Application of computational intelligence algorithms and techniques to the prediction of Greek stock market quantities

Μαυρόγιαννη, Παναγιώτα 03 May 2010
The aim of this research is to create a computational framework for analyzing the Greek stock market. Specifically, the work deals with the possibility of prediction during the investment process on the Athens Stock Exchange, using Computational Intelligence techniques and algorithms. We use real daily stock data from the Greek market which, with suitable processing, can lead to the construction of a prediction model. The famous economist Alan Greenspan asks "Is the economy stupid?", while to this day the mainstream economic theories (for example, the Efficient Market Hypothesis) are unable to explain many of the properties and behaviors of economic and stock markets. The term used for these characteristics in the international literature is "stylized facts", referring to phenomena such as fat tails and clustered volatility. Our interest is concentrated on stock markets, but the research can easily be extended to other levels, for example to the interaction of banks in inter-bank transactions. We would obviously prefer to create a fully integrated artificial stock market that completely simulates the current situation, but this is all but impossible: financial markets are in general so complex that even fully modeling a single facet of them is considered theoretically infeasible. This work is an attempt to apply Computational Intelligence in practice; following its methodology, our effort is focused on the appropriate selection and subsequent processing of the data, which contain the information we wish to extract using either supervised or unsupervised learning. A reliable and effective model requires an evaluation process, which shows whether the model functions correctly, that is, in line with the accuracy established during its construction. To make the study as complete as possible, we first compute for each stock a daily price derived from several representative stock-market indicators, and then examine how closely this price approaches the stock's actual closing price, which provides important information about the stock-market system. In developing this simulation model it would be of great interest to use alternative approaches, both from the field of market psychology and from areas of game theory.
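To make the evaluation loop above concrete, here is a hypothetical sketch: a regression model estimates each day's price from a few indicator columns and is scored against the actual close. The indicator names and the random-forest stand-in are illustrative assumptions; the thesis's own computational-intelligence models are not reproduced here.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

def evaluate_indicator_model(df: pd.DataFrame) -> float:
    """df: one row per trading day, indicator columns plus the actual 'close'."""
    features = df[["sma_10", "rsi_14", "volume"]]   # hypothetical indicator columns
    target = df["close"]
    split = int(len(df) * 0.8)                      # chronological train/test split
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(features.iloc[:split], target.iloc[:split])
    estimated = model.predict(features.iloc[split:])
    # How closely the estimated daily price approaches the real closing price
    return mean_absolute_error(target.iloc[split:], estimated)
```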
3

A New Method and Python Toolkit for General Access to Spatiotemporal N-Dimensional Raster Data

Hales, Riley Chad 29 March 2021
Scientific datasets from global-scale scientific models and remote sensing instruments are becoming available at greater spatial and temporal resolutions with shorter lag times. These data are frequently gridded measurements spanning two or three spatial dimensions, the time dimension, and often several data dimensions that vary by dataset. They are useful in many modeling and analysis applications across the geosciences. Unlike vector spatial datasets, raster spatial datasets lack widely adopted conventions in file formats, data organization, and dissemination mechanisms. Raster datasets are often saved using the Network Common Data Format (NetCDF), Gridded Binary (GRIB), Hierarchical Data Format (HDF), or Geographic Tagged Image File Format (GeoTIFF) file formats. Several of these are entirely or partially incompatible with common GIS software, which introduces additional complexity when extracting values from these datasets. We present a method and companion Python package as a general-purpose tool for extracting time series subsets from these files using various spatial geometries. This method and tool enable efficient access to multidimensional data regardless of the underlying file format. This research builds on existing file formats and software rather than suggesting new alternatives. We also present an analysis of optimizations and performance.
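As a minimal sketch of the core operation such a toolkit generalizes (extracting a time series for one location from a multidimensional file), the snippet below uses xarray as a stand-in; the thesis's own package and its API are not reproduced here, and the lat/lon dimension names are assumptions about the file.

```python
import xarray as xr

def point_time_series(path: str, var: str, lat: float, lon: float):
    """Extract the time series of `var` at the grid cell nearest (lat, lon)."""
    with xr.open_dataset(path) as ds:      # engine chosen from the file format
        series = ds[var].sel(lat=lat, lon=lon, method="nearest")
        return series.to_dataframe()

# e.g. point_time_series("gldas_tair.nc", "Tair_f_inst", 40.25, -111.65)
```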
4

Predicting Sweden's Aid: A Comparison of Support Vector Regression and ARIMA

Wågberg, Max January 2019
In recent years, the use of machine learning has increased significantly. Its applications range from making everyday life easier with voice-controlled smart devices to image recognition and stock-market prediction. Predicting economic values has long been possible with methods other than machine learning, such as statistical algorithms. Both these algorithms and machine learning models use time series, a sequence of data points observed at regular intervals over a given period, to predict data points beyond the original series. But which of these methods gives the best results? The overall purpose of this project is to predict Sweden's aid curve using the machine learning model Support Vector Regression (SVR) and the classic statistical algorithm autoregressive integrated moving average (ARIMA). The time series used for prediction consists of annual totals of Sweden's aid to the world from openaid.se, from 1998 up to 2019. SVR and ARIMA are implemented in Python with the scikit-learn and Statsmodels libraries. The predictions from SVR and ARIMA are compared against the original values, and accuracy is measured as the Root Mean Squared Error (RMSE) and presented in the results chapter. The results show that SVR with the RBF kernel is the algorithm that provides the best results for this data series. All predictions beyond the time series are then presented visually on an openaid prototype page built with D3.js.
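A minimal sketch of the comparison, assuming the annual totals sit in a NumPy array in chronological order; the lag window and ARIMA order are illustrative, not the project's tuned settings.

```python
import numpy as np
from sklearn.svm import SVR
from statsmodels.tsa.arima.model import ARIMA

def compare_last_year(values: np.ndarray, window: int = 3) -> dict:
    """values: annual aid totals in chronological order; hold out the final year."""
    # Lag-window design matrix: predict year t from the previous `window` years
    X = np.array([values[i: i + window] for i in range(len(values) - window)])
    y = values[window:]
    svr_pred = SVR(kernel="rbf").fit(X[:-1], y[:-1]).predict(X[-1:])[0]
    arima_pred = ARIMA(values[:-1], order=(1, 1, 1)).fit().forecast(1)[0]
    truth = values[-1]
    # With a single held-out year, RMSE reduces to the absolute error
    return {"svr_error": abs(svr_pred - truth), "arima_error": abs(arima_pred - truth)}
```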
5

Structured clustering representations and methods

Heilbut, Adrian Mark 21 June 2016
Rather than designing focused experiments to test individual hypotheses, scientists now commonly acquire measurements using massively parallel techniques, for post hoc interrogation. The resulting data is both high-dimensional and structured, in that observed variables are grouped and ordered into related subspaces, reflecting both natural physical organization and factorial experimental designs. Such structure encodes critical constraints and clues to interpretation, but typical unsupervised learning methods assume exchangeability and fail to account adequately for the structure of data in a flexible and interpretable way. In this thesis, I develop computational methods for exploratory analysis of structured high-dimensional data, and apply them to study gene expression regulation in Parkinson’s (PD) and Huntington’s diseases (HD). BOMBASTIC (Block-Organized, Model-Based, Tree-Indexed Clustering) is a methodology to cluster and visualize data organized in pre-specified subspaces, by combining independent clusterings of blocks into hierarchies. BOMBASTIC provides a formal specification of the block-clustering problem and a modular implementation that facilitates integration, visualization, and comparison of diverse datasets and rapid exploration of alternative analyses. These tools, along with standard methods, were applied to study gene expression in mouse models of neurodegenerative diseases, in collaboration with Dr. Myriam Heiman and Dr. Robert Fenster. In PD, I analyzed cell-type-specific expression following levodopa treatment to study mechanisms underlying levodopa-induced dyskinesia (LID). I identified likely regulators of the transcriptional changes leading to LID and implicated signaling pathways amenable to pharmacological modulation (Heiman, Heilbut et al, 2014). In HD, I analyzed multiple mouse models (Kuhn, 2007), cell-type specific profiles of medium spiny neurons (Fenster, 2011), and an RNA-Seq dataset profiling multiple tissue types over time and across an mHTT allelic series (CHDI, 2015). I found evidence suggesting that altered activity of the PRC2 complex significantly contributes to the transcriptional dysregulation observed in striatal neurons in HD.
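The block-clustering idea can be sketched roughly as follows: cluster each pre-specified block of variables independently, yielding one hierarchy and labeling per block. This is a deliberately simplified sketch, not the BOMBASTIC implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_by_block(data: np.ndarray, blocks: dict, k: int = 4) -> dict:
    """data: genes x samples; blocks maps a block name to its column indices."""
    labels = {}
    for name, cols in blocks.items():
        # One independent hierarchy per pre-specified block of samples
        tree = linkage(data[:, cols], method="average", metric="correlation")
        labels[name] = fcluster(tree, t=k, criterion="maxclust")
    return labels

# e.g. cluster_by_block(expr, {"striatum": [0, 1, 2], "cortex": [3, 4, 5]})
```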
6

Automatic determination and analytical verification of cross-entropy parameters of cardiovascular time series

Škorić Tamara 05 October 2017
Cross-approximate entropy (XApEn) quantifies the mutual orderliness of two simultaneously recorded time series. Although derived from the firmly established solitary entropies that assess the orderliness of a single time series, it has never reached their reputation and deployment. The aim of this thesis is to identify the problems that preclude wider XApEn implementation and to develop a set of solutions. Results were validated on cardiovascular time series, systolic blood pressure and pulse interval, recorded from laboratory rats and from healthy human volunteers.
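For reference, a compact sketch of XApEn as characterized above: for each m-point template of one series, count the fraction of the other series' templates within a tolerance r, then take the difference of the average log-counts at dimensions m and m + 1. The conventional parameters m = 2 and r = 0.2 SD are assumptions, not the thesis's choices.

```python
import numpy as np

def xapen(u, v, m: int = 2, r: float = 0.2) -> float:
    """Cross-approximate entropy of equal-length series u (templates) and v (matches)."""
    u = (np.asarray(u, float) - np.mean(u)) / np.std(u)   # standardize so r is in SD units
    v = (np.asarray(v, float) - np.mean(v)) / np.std(v)

    def phi(m: int) -> float:
        n = len(u) - m + 1
        xu = np.array([u[i: i + m] for i in range(n)])    # m-point templates of u
        xv = np.array([v[i: i + m] for i in range(n)])    # m-point templates of v
        # Chebyshev distance between every u-template and every v-template
        d = np.max(np.abs(xu[:, None, :] - xv[None, :, :]), axis=2)
        c = np.maximum((d <= r).mean(axis=1), 1e-12)      # guard against log(0)
        return float(np.log(c).mean())

    return phi(m) - phi(m + 1)
```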
7

Analyses Of Atmospheric And Marine Observations Along The Turkish Coast

Tutsak, Ersin 01 September 2012
Time series and spectral analyses are applied to meteorological data (wind velocity, air temperature, barometric pressure) and sea-level measurements from a total of 13 monitoring stations along the Turkish coast. Analyses of four-year time series identify the main time scales of transport and motion while establishing seasonal characteristics, distinguishing, for instance, between winter storms and the summer sea-breeze system. Marine flow data acquired by acoustic Doppler current profilers (ADCP) are also analyzed to better understand the response of the Turkish Strait System dynamics to short-term climatic variability. The cumulative results of these analyses determine the temporal and spatial scales of coastal atmospheric and marine fluxes as affected by the regional climate system.
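A minimal sketch of the spectral step, assuming an hourly wind-speed record: a Welch power spectral density in which a peak near one cycle per day marks the sea-breeze band, while lower-frequency energy reflects synoptic storms. The sampling rate and variable choice are assumptions, not the study's configuration.

```python
import numpy as np
from scipy.signal import welch

def wind_spectrum(wind_speed: np.ndarray, samples_per_day: int = 24):
    """Welch PSD of an hourly series, with frequencies in cycles per day."""
    freqs, psd = welch(wind_speed, fs=samples_per_day, nperseg=256)
    return freqs, psd  # a peak near 1 cycle/day marks the diurnal sea-breeze band
```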
8

Signal analysis for computing the fractal dimension, in combination with NARMAX models

Παγανιά, Δήμητρα-Δέσποινα 01 October 2012
In this thesis we implemented, in Matlab, a programming environment that allows us to analyze signals (time series) with two techniques: (I) embedding the signal in high-dimensional phase spaces to calculate the fractal dimension of the attractor generated in phase space, using Takens' embedding theorem and the Grassberger-Procaccia method, and (II) modeling the signal with the NARMAX method, incorporating Extended Kalman Filters and the Lainiotis partition theorem to find the order (degree of complexity) of the NARMAX models. The aim of the thesis is to compare the corresponding results for various categories of signals, in order to determine whether the fractal dimension of a signal is related to the order of its NARMAX models.
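A hedged sketch of technique (I): delay-embed the signal per Takens and estimate the Grassberger-Procaccia correlation dimension from the slope of log C(r) against log r. The embedding dimension and delay below are illustrative; in practice the slope is fit only over the linear scaling region.

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(x: np.ndarray, m: int = 5, tau: int = 1) -> float:
    """Grassberger-Procaccia estimate on a Takens delay embedding of x."""
    n = len(x) - (m - 1) * tau
    emb = np.array([x[i: i + (m - 1) * tau + 1: tau] for i in range(n)])  # m-dim embedding
    dists = pdist(emb)                                   # all pairwise distances
    radii = np.logspace(np.log10(dists.min() + 1e-12), np.log10(dists.max()), 20)
    c = np.array([np.mean(dists < r) for r in radii])    # correlation integral C(r)
    good = c > 0
    # In practice one fits only the linear scaling region of log C(r) vs log r
    return float(np.polyfit(np.log(radii[good]), np.log(c[good]), 1)[0])
```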
9

The Effects of Ketamine on the Brain’s Spontaneous Activity as Measured by Temporal Variability and Scale-Free Properties. A Resting-State fMRI Study in Healthy Adults.

Ayad, Omar January 2016
Converging evidence from a variety of fields, including psychiatry, suggests that the temporal correlates of the brain's resting state could serve as essential markers of a healthy and efficient brain. We used ketamine to induce schizophrenia-like states in 32 healthy individuals and examined the brain's resting state with fMRI. We found a global reduction in temporal variability, quantified by the time series' standard deviation, and an increase in scale-free properties, quantified by the Hurst exponent, which represents the signal's self-affinity over time. We also found network-specific and frequency-specific effects of ketamine on these temporal measures. Our results confirm prior studies in aging, sleep, anesthesia, and psychiatry suggesting that increased self-affinity and decreased temporal variability of the brain's resting state could indicate a compromised and inefficient brain state. They expand our systemic view of the temporal structure of the brain and shed light on promising biomarkers in psychiatry.
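Of the two temporal measures named above, the standard deviation is immediate; the Hurst exponent needs an estimator. Below is a sketch using rescaled-range (R/S) analysis, one common choice; the study's actual estimator is not specified here.

```python
import numpy as np

def hurst_rs(ts: np.ndarray, min_chunk: int = 8) -> float:
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis."""
    ts = np.asarray(ts, dtype=float)
    sizes = np.unique(np.logspace(np.log10(min_chunk),
                                  np.log10(len(ts) // 2), 10).astype(int))
    rs = []
    for s in sizes:
        chunks = ts[: len(ts) // s * s].reshape(-1, s)       # non-overlapping windows
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        rng = dev.max(axis=1) - dev.min(axis=1)              # range of cumulative deviations
        sd = chunks.std(axis=1)
        rs.append(np.mean(rng[sd > 0] / sd[sd > 0]))         # mean rescaled range at scale s
    # Slope of log(R/S) versus log(scale) approximates the Hurst exponent
    return float(np.polyfit(np.log(sizes), np.log(rs), 1)[0])
```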
10

Efficiently Discovering Multipoles by Leveraging Geometric Properties

Dang, Anh The January 2022
No description available.
