11. Essays in forecasting financial markets with predictive analytics techniques. Alroomi, Azzam J. M. A. H. (January 2018)
This PhD dissertation comprises four essays on forecasting financial markets with unsupervised predictive analytics techniques, most notably time series extrapolation methods and artificial neural networks. Key objectives of the research were reproducibility and replicability, which are fundamental principles in management science; accordingly, the implementation of all of the suggested algorithms has been fully automated and runs completely unsupervised in R. As with any predictive analytics exercise, computational intensiveness is a significant challenge and a criterion of performance, so forecasting accuracy and uncertainty, as well as computational times, are reported in all essays. Multiple horizons, multiple methods and benchmarks, and multiple metrics are employed, as dictated by good practice in empirical forecasting exercises. The essays build on one another, each testing one further condition, as follows: which method wins overall in a very extensive evaluation over five frequencies (yearly, quarterly, monthly, weekly and daily data) of 18 time series covering the largest-capitalisation stocks of the FTSE 100 over the last 20 years (first essay); the impact of the forecast horizon on this exercise, and how it promotes different winners for different horizons (second essay); the impact of incorporating uncertainty in the form of maximum and minimum values per period, while still forecasting the mean expected value for the next period (third essay); and the introduction of a second variable capturing other behavioural aspects of the financial environment, the trading volume, and whether it improves forecasting performance (fourth essay).
The whole endeavour required a significant amount of time on High Performance Computing Wales (HPC Wales), incurring computational costs that ultimately paid off in increased forecasting accuracy for the AI approaches; the exercise for a single series can nonetheless be repeated on a fast laptop (an i7 with 16 GB of memory). Overall, the principle of (forecasting) horses for (data) courses was once again shown to hold: no single method can win under all conditions. The introduction of uncertainty (a range for every period), as well as of volume as a second variable capturing environmental aspects, was beneficial for forecasting accuracy, and overall the research provided empirical evidence that predictive analytics approaches have a future in this forecasting context. Given that this was a predictive analytics exercise, the focus was on forecasting levels (monetary values) rather than log-returns, and out-of-sample forecasting accuracy, rather than causality, was the primary objective; multiple regression models were therefore not considered as benchmarks. As in any empirical predictive analytics exercise, more time series, more artificial intelligence methods, more metrics and more data could be employed to allow full generalisation of the results, as long as all of these can be fully automated and run unsupervised in a freeware environment, which in this thesis is R.
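As an illustrative sketch only (in Python rather than the dissertation's R, with an invented price series and a naive benchmark standing in for the actual methods), a multi-horizon evaluation reporting several accuracy metrics might look like this:

```python
import math

def naive_forecast(series, horizon):
    """Naive benchmark: repeat the last observed value over the horizon."""
    return [series[-1]] * horizon

def accuracy_metrics(actual, forecast):
    """Point-forecast error metrics of the kind reported across horizons."""
    errors = [a - f for a, f in zip(actual, forecast)]
    n = len(errors)
    return {
        "MAE": sum(abs(e) for e in errors) / n,
        "RMSE": math.sqrt(sum(e * e for e in errors) / n),
        "MAPE": 100.0 * sum(abs(e / a) for e, a in zip(errors, actual)) / n,
    }

# Invented price series; hold out the last h observations for each horizon.
prices = [100.0, 102.0, 101.0, 105.0, 107.0, 106.0, 110.0, 108.0]
results = {}
for h in (1, 2, 3):
    train, test = prices[:-h], prices[-h:]
    results[h] = accuracy_metrics(test, naive_forecast(train, h))
```

In the dissertation this loop would run over many methods, benchmarks and frequencies, with the same metrics computed per horizon so that different winners can emerge at different horizons.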
12. Search for commutative fusion schemes in noncommutative association schemes. Ma, Jianmin (2004)
Thesis (Ph.D.), Colorado State University, 2004. Includes bibliographical references.
13. Audit online visibility. Mitysková, Klára (2013)
No description available.
14. Data analytics on Yelp data set. Tata, Maitreyi (1900)
Master of Science, Department of Computing and Information Sciences, William H. Hsu. In this report, I describe a query-driven system that helps in deciding which restaurant to invest in, or which area of a given city is promising for opening a new restaurant. The analysis is performed on already existing businesses in every state, based on factors such as the average star rating, the total number of reviews associated with a specific restaurant, and the restaurant's price range.
The results give a picture of the successful restaurants in a city, which helps in deciding where to invest and what to keep in mind when starting a new business.
The main scope of the project is analytics and data visualization.
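A minimal sketch of the kind of query-driven aggregation described above; the records and field names here are invented for illustration (the actual Yelp data set schema may differ):

```python
from collections import defaultdict

# Invented records mimicking Yelp business entries; field names are assumed.
businesses = [
    {"city": "Phoenix",   "stars": 4.5, "review_count": 120, "price_range": 2},
    {"city": "Phoenix",   "stars": 3.0, "review_count": 40,  "price_range": 1},
    {"city": "Las Vegas", "stars": 4.0, "review_count": 200, "price_range": 3},
    {"city": "Las Vegas", "stars": 4.5, "review_count": 350, "price_range": 2},
]

def city_summary(records):
    """Aggregate average star rating and total review volume per city,
    the kind of query used to compare candidate locations."""
    acc = defaultdict(lambda: {"stars": 0.0, "reviews": 0, "n": 0})
    for rec in records:
        entry = acc[rec["city"]]
        entry["stars"] += rec["stars"]
        entry["reviews"] += rec["review_count"]
        entry["n"] += 1
    return {
        city: {"avg_stars": v["stars"] / v["n"], "total_reviews": v["reviews"]}
        for city, v in acc.items()
    }

summary = city_summary(businesses)
```

Summaries like this, rendered as maps or charts, are what would guide the investment decision.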
15. Learning und Academic Analytics in Lernmanagementsystemen (LMS). Gaaw, Stephanie; Stützer, Cathleen M. (27 March 2018)
The use of digital media has a long tradition in German higher education teaching. Learning management systems (LMS), e-learning, blended learning and the like are buzzwords of everyday university life. The question arises, however, of what LMS and blended learning can, or should, deliver in the age of digital networking and for the generation of "digital natives" that has grown up with it. The spread of new technologies associated with new teaching and learning concepts such as OER, MOOCs, etc. also makes the development of analytics instruments necessary. This is of great interest in the national discourse as well, and opens up new fields of action for universities. Yet the question remains why learning analytics (LA) and academic analytics (AA) have so far been deployed successfully at German universities only to a limited extent, and why their use, particularly within LMS such as OPAL, does not appear readily feasible. To this end, factors that inhibit the implementation of LA and AA instruments are identified and discussed. Building on this, initial fields of action are presented whose consideration should make a stronger embedding of LA and AA instruments in LMS possible.
16. Criação de valor estratégico através de digital analytics (Strategic value creation through digital analytics). Claudio Luis Cruz de Oliveira (20 December 2012)
The Internet has changed competition, reshaping products, supply chains and even markets. Its democratization gives more weight to consumers, a change that could be considered a threat to corporations. However, the emergent knowledge derived from digital analytics brings many benefits: delivering personalized services, fostering innovation and promoting a real-time dialogue with the consumer. The concept of digital analytics includes the measurement, collection, analysis and reporting of digital data for the purposes of understanding and optimizing business performance. This study aims to understand why and how Brazilian companies implement digital analytics in order to achieve business goals and thus support competitive advantage. An exploratory survey and multiple case studies make up the empirical research of this thesis.
17. Species distribution modelling of Aloidendron dichotomum (quiver tree). Dube, Qobo (18 February 2019)
A variety of species distribution models (SDMs) were fit to data collected in a 15,000 km roadside visual survey of Aloidendron dichotomum populations in the Northern Cape region of South Africa and in Namibia. We fit traditional presence/absence SDMs, as well as SDMs of how individuals are distributed proportionally across three stage classes (juvenile, adult, dead). Using five candidate machine learning methods and an ensemble model, we compared a number of approaches, including the role of balanced (presence/absence) class datasets in species distribution modelling. A secondary question was whether adding species absences generated where the species is known not to exist has an impact on the findings. The goal of the analysis was to map the distribution of Aloidendron dichotomum under different scenarios. Precipitation-based variables were generally the most deterministic of species presence or absence. Visual interpretation of the estimated Aloidendron dichotomum population under current climate conditions suggested a reasonably well-fitting model, with a large overlap with the sampled area. Some conditions outside the sampled range were, however, estimated to be suitable for species incidence even though Aloidendron dichotomum is not known to occur there. Habitat suitability for juvenile individuals largely decreased in concentration towards Windhoek. The largest proportion of dead individuals was estimated to be on the northern edge of the Riemvasmaak Conservancy, along the South African/Namibian border, reaching up to 60% of the population. The adult stage class maintained overall proportional dominance. Under future climate scenarios, despite the species retaining the bulk of currently habitable conditions, a noticeable negative shift in habitat suitability was observed. A temporal analysis of Aloidendron dichotomum's latitudinal and longitudinal range revealed a potential south-easterly shift in suitable conditions.
These results carry some uncertainty, however, as the SDMs were found to be extrapolating across a substantial portion of the study area. We found that balancing response class frequencies within the data was not an effective error-reduction technique overall, having no considerable impact on species detection accuracy. Balancing the classes did, however, improve accuracy on the presence class, at the cost of accuracy on the observed absence class. Furthermore, overall model accuracy increased as more absences from outside the study area were added, but only because these generated absences were themselves predicted well. The resulting models had lower estimated suitability outside the survey area and noticeably different suitability distributions within it, making the addition of generated absences undesirable. The results highlight the potential vulnerability of Aloidendron dichotomum under pessimistic, yet likely, future climate scenarios.
18. Estimating stochastic volatility models with Student-t distributed errors. Rama, Vishal (12 November 2020)
This dissertation aims to extend the idea of Bollerslev (1987), who estimated ARCH models with Student-t distributed errors, to estimating stochastic volatility (SV) models with Student-t distributed errors. It is unclear whether Gaussian distributed errors sufficiently account for the observed leptokurtosis in financial time series, hence the extension to Student-t distributed errors for these models. The quasi-maximum likelihood estimation approach introduced by Harvey (1989) and the conventional Kalman filter technique are described so that both the SV model with Gaussian distributed errors and the SV model with Student-t distributed errors can be estimated. Estimation of GARCH(1,1) models using the method of maximum likelihood is also described. The empirical study estimated four models on four share return series and one index return series, namely Anglo American, BHP, FirstRand, Standard Bank Group and the JSE Top 40 index. The GARCH and SV models with Student-t distributed errors both performed best on the series examined in this dissertation, with the Akaike information criterion (AIC) used as the metric to determine the best-performing model.
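A hedged sketch of the core computation behind such models follows: the log-likelihood of a GARCH(1,1) with standardised Student-t errors, in the spirit of Bollerslev (1987). The parameter names, variance initialisation and simulated returns are assumptions of this example, not the dissertation's code:

```python
import math
import random

def garch_t_loglik(returns, omega, alpha, beta, nu):
    """Log-likelihood of a GARCH(1,1) model with standardised Student-t
    errors (unit-variance parameterisation, which requires nu > 2)."""
    n = len(returns)
    mean = sum(returns) / n
    h = sum((r - mean) ** 2 for r in returns) / n  # initialise at sample variance
    # Normalising constant of the unit-variance Student-t density.
    c = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
         - 0.5 * math.log(math.pi * (nu - 2)))
    ll = 0.0
    for t, r in enumerate(returns):
        if t > 0:  # GARCH(1,1) variance recursion
            h = omega + alpha * returns[t - 1] ** 2 + beta * h
        ll += (c - 0.5 * math.log(h)
               - (nu + 1) / 2 * math.log1p(r * r / ((nu - 2) * h)))
    return ll

# Invented return series for illustration.
random.seed(0)
r = [random.gauss(0.0, 0.01) for _ in range(300)]
ll = garch_t_loglik(r, omega=1e-6, alpha=0.05, beta=0.90, nu=8.0)
```

Maximising this function over (omega, alpha, beta, nu) and computing AIC = 2k - 2*ll (here k = 4 parameters) would reproduce the kind of model comparison the dissertation reports; the SV variants replace the deterministic variance recursion with a latent volatility process estimated via the Kalman filter.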
19. Robust portfolio construction: using resampled efficiency in combination with covariance shrinkage. Combrink, James (2017)
The thesis considers the general area of robust portfolio construction. In particular, it considers two techniques that aim to improve portfolio construction and, consequently, portfolio performance. The first technique focusses on estimation error in the sample covariance matrix (one of the portfolio optimisation inputs): shrinkage techniques applied to the sample covariance matrix are considered and their merits assessed. The second technique focusses on the portfolio construction/optimisation process itself. Here the thesis adopts the 'resampled efficiency' proposal of Michaud (1989), which uses Monte Carlo simulation from the sampled distribution to generate a range of resampled efficient frontiers. The thesis then assesses the merits of combining these two techniques in the portfolio construction process. Portfolios are constructed using a quadratic programming algorithm requiring two inputs: (i) expected returns; and (ii) cross-sectional behaviour and individual risk (the covariance matrix). The output is a set of 'optimal' investment weights, one for each share whose returns were fed into the algorithm. The thesis looks at identifying and removing avoidable risk through a statistical robustification of the algorithms, attempting to improve upon the 'optimal' weights they provide. Performance is assessed by comparing out-of-period results with those of standard optimisation, which is highly sensitive to sampling error and prone to extreme weightings. The methodology applies various shrinkage techniques to the historical covariance matrix and then takes a resampled portfolio optimisation approach using the shrunken matrix.
We use Monte Carlo simulation techniques to replicate sets of statistically equivalent portfolios, find optimal weightings for each and then, by aggregating these, reduce the sensitivity to anomalies in the historical time series. We also consider the trade-off between sampling error and specification error of the models.
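The combination described above can be sketched as follows. This is an illustration under stated assumptions, not the thesis's method: the shrinkage target (a scaled identity), the shrinkage intensity, the minimum-variance objective standing in for the full quadratic program, and the simulated return history are all inventions of this example:

```python
import numpy as np

def shrink_covariance(sample_cov, delta):
    """Linear shrinkage toward a scaled-identity target; a simple stand-in
    for the more refined shrinkage estimators considered in the thesis."""
    n = sample_cov.shape[0]
    target = (np.trace(sample_cov) / n) * np.eye(n)
    return (1.0 - delta) * sample_cov + delta * target

def min_variance_weights(cov):
    """Closed-form global minimum-variance weights: w = C^-1 1 / (1' C^-1 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

def resampled_weights(returns, delta=0.2, n_draws=200, seed=0):
    """Michaud-style resampling: re-estimate the weights on many simulated,
    statistically equivalent histories, then average the weight vectors."""
    rng = np.random.default_rng(seed)
    mu = returns.mean(axis=0)
    cov = shrink_covariance(np.cov(returns, rowvar=False), delta)
    n_obs = returns.shape[0]
    draws = [
        min_variance_weights(
            shrink_covariance(
                np.cov(rng.multivariate_normal(mu, cov, size=n_obs),
                       rowvar=False),
                delta))
        for _ in range(n_draws)
    ]
    return np.mean(draws, axis=0)

# Invented return history for three assets.
rng = np.random.default_rng(1)
hist = rng.multivariate_normal(
    [0.001, 0.0005, 0.0008],
    np.diag([0.0004, 0.0001, 0.0002]),
    size=250)
weights = resampled_weights(hist)
```

Each simulated history yields its own 'optimal' weights; averaging them is the resampling step that dampens the sensitivity of the final weights to sampling error in any single history.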
20. Identifying predictors of evolutionary dispersion with phylogeographic generalised linear models. Wolff-Piggott, Timothy (2017)
Discrete phylogeographic models enable the inference of the geographic history of biological organisms along phylogenetic trees. Frequently applied in the context of epidemiological modelling, phylogeographic generalised linear models were developed to allow for the evaluation of multiple predictors of spatial diffusion. The standard phylogeographic generalised linear model formulation, however, assumes that rates of spatial diffusion are a noiseless deterministic function of the set of covariates, admitting no other unobserved sources of variation. Under a variety of simulation scenarios, we demonstrate that the lack of a term modelling stochastic noise results in high false positive rates for predictors of spatial diffusion. We further show that the false positive rate can be controlled by including a random effect term, thus allowing unobserved sources of rate variation. Finally, we apply this random effects model to three recently published datasets and contrast the results of analysing these datasets with those obtained using the standard model. Our study demonstrates the prevalence of false positive results for predictors under the standard phylogeographic model in multiple simulation scenarios and, using empirical data from the literature, highlights the importance of a model accounting for random variation.