1

Forecasting exchange rates using machine learning models with time-varying volatility

Garg, Ankita January 2012 (has links)
This thesis investigates the predictability of exchange rate returns at monthly and daily frequencies using models that have mostly been developed in the machine learning field. The forecasting performance of these models is compared to the Random Walk, which is the benchmark model for financial returns, and to the popular autoregressive process. The machine learning models used are Regression Trees, Random Forests, Support Vector Regression (SVR), the Least Absolute Shrinkage and Selection Operator (LASSO) and Bayesian Additive Regression Trees (BART). A characteristic feature of financial returns data is volatility clustering, i.e. the tendency towards persistent periods of low or high variance in the time series. This conflicts with the machine learning models, which implicitly assume a constant variance. We therefore extend these models with the most widely used model for volatility clustering, the Generalized Autoregressive Conditional Heteroscedasticity (GARCH) process. This allows us to jointly estimate the time-varying variance and the parameters of the machine learning models using an iterative procedure. These GARCH-extended machine learning models are then applied to make one-step-ahead predictions by recursive estimation, so that the estimated parameters are updated as new information arrives. To predict returns, economic variables and lagged returns are used as predictors. The study is repeated on three different exchange rate returns (EUR/SEK, EUR/USD and USD/SEK) in order to obtain robust results. Our results show that machine learning models are capable of forecasting exchange rate returns at both daily and monthly frequencies, although the results are mixed. Overall, the GARCH-extended SVR shows the greatest potential for improving the predictive performance of exchange rate return forecasts.
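The iterative joint estimation described above can be illustrated with a minimal sketch: alternate between fitting the mean model and fitting a GARCH(1,1) process to its residuals, then reweight the observations by the estimated conditional variance before refitting. The sketch below uses synthetic data and assumes the `arch` and `scikit-learn` packages; it illustrates the general idea rather than the thesis's exact procedure.

```python
import numpy as np
from sklearn.svm import SVR
from arch import arch_model

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))             # stand-ins for lagged returns / economic predictors
y = 0.3 * X[:, 0] + rng.normal(size=n)  # synthetic "returns"

svr = SVR(kernel="rbf", C=1.0)
weights = np.ones(n)

# Iterate: fit the mean model, fit GARCH(1,1) to its residuals,
# then down-weight observations from high-volatility periods and refit.
for _ in range(5):
    svr.fit(X, y, sample_weight=weights)
    resid = y - svr.predict(X)
    garch = arch_model(resid, vol="GARCH", p=1, q=1, mean="Zero")
    fitted = garch.fit(disp="off")
    sigma = fitted.conditional_volatility
    weights = 1.0 / sigma**2             # heteroscedasticity-aware weights

print("last conditional volatility:", sigma[-1])
```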
2

Effect of cognitive biases on human understanding of rule-based machine learning models

Kliegr, Tomas January 2017 (has links)
This thesis investigates to what extent cognitive biases affect human understanding of interpretable machine learning models, in particular of rules discovered from data. Twenty cognitive biases (illusions, effects) are analysed in detail, including identification of possibly effective debiasing techniques that can be adopted by designers of machine learning algorithms and software. This qualitative research is complemented by multiple experiments aimed at verifying whether, and to what extent, selected cognitive biases influence human understanding of actual rule learning results. Two experiments were performed: the first focused on eliciting plausibility judgments for pairs of inductively learned rules, and the second involved a crowdsourced replication of the Linda experiment and two of its modifications. Altogether nearly 3,000 human judgments were collected. We obtained empirical evidence for the insensitivity to sample size effect. There is also limited evidence for the disjunction fallacy, the misunderstanding of "and", the weak evidence effect and the availability heuristic. While there seems to be no universal approach for eliminating all the identified cognitive biases, it follows from our analysis that the effect of many biases can be ameliorated by making rule-based models more concise. To this end, in the second part of the thesis we propose a novel machine learning framework which postprocesses the rules output by the seminal association rule classification algorithm CBA [Liu et al., 1998]. The framework uses the original undiscretized numerical attributes to optimize the discovered association rules, refining the boundaries of literals in the antecedents of the rules produced by CBA. Some rules, as well as literals within rules, can consequently be removed, which makes the resulting classifier smaller. A benchmark of our approach on 22 UCI datasets shows an average 53% decrease in the total size of the model as measured by the total number of conditions in all rules. Model accuracy remains on the same level as for CBA.
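The post-processing step described in the second part, refining the numeric boundaries of a rule literal against the original undiscretized attribute, can be sketched as a simple search over candidate cut points. The example below is a hypothetical, simplified illustration (a single interval literal, optimized only for training accuracy), not the thesis's actual framework or the CBA algorithm itself.

```python
import numpy as np

def refine_interval(x, y, lo, hi, target):
    """Refine the boundaries [lo, hi] of the rule 'lo <= x <= hi -> target'
    on the raw attribute x so that its training accuracy is maximal."""
    candidates = np.unique(x)
    best, best_acc = (lo, hi), -1.0
    for new_lo in candidates[candidates <= hi]:
        for new_hi in candidates[candidates >= new_lo]:
            covered = (x >= new_lo) & (x <= new_hi)
            if covered.sum() == 0:
                continue
            acc = np.mean(y[covered] == target)
            if acc > best_acc:
                best, best_acc = (float(new_lo), float(new_hi)), acc
    return best, best_acc

# Toy data: the discretized rule "1.0 <= x <= 3.0 -> 1" can be tightened.
x = np.array([0.5, 1.2, 1.8, 2.1, 2.9, 3.4])
y = np.array([0,   1,   1,   1,   0,   0])
print(refine_interval(x, y, 1.0, 3.0, target=1))
```

A realistic post-processing step would also weigh rule support and interaction with the other rules in the classifier; this sketch only shows the boundary-refinement idea on one literal.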
3

PREDICTING TRADED VOLUMES OF RENEWABLE ENERGY CERTIFICATES : A comparison of different time series forecasting methods / ATT FÖRUTSPÅ OMSATTA VOLYMER AV CERTIFIKAT FÖR FÖRNYELSEBAR ENERGI : En jämförelse mellan olika metoder för tidsserieprediktion

Magnusson, Stina, Sköld, Ebba January 2022 (has links)
Predicting sales is an important step in many business processes. Forecasting methods have been applied to countless different problems, but no prior research was found in the area of renewable energy certificates. This study therefore examines the possibility of modelling traded certificate volumes, where a comparison between simpler and more complex models reflects the general increased interest in machine learning models. Five different models are tested on monthly sales data: the statistical model Seasonal Autoregressive Integrated Moving Average (SARIMA), the machine learning models Support Vector Regression and Extreme Gradient Boosting, and the neural networks Long Short-Term Memory and Bidirectional Long Short-Term Memory. Extensive data preparation is performed, accounting for seasonality and trends through data transformations and feature engineering. To evaluate the models, non-aggregated monthly forecasts as well as aggregated predictions over two and three months are examined. The results show that it is feasible to model the sales volumes of renewable energy certificates. As expected, the models generally perform better when evaluated on aggregated monthly predictions. Considering both evaluation strategies, SARIMA, Support Vector Regression and Extreme Gradient Boosting are the only models that outperform a baseline model. The proposed solution for enabling smarter and more efficient trading decisions today is a combination of the aggregated two-month and quarterly predictions from the SARIMA and Support Vector Regression models. Given an expected expansion of relevant available data for the company, the recommendation for the future is to further develop the machine learning models in particular, with an anticipation of improved performance and valuable feature importance insights.
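As an illustration of the non-aggregated versus aggregated evaluation idea, the following sketch fits a seasonal ARIMA model to a synthetic monthly series and compares monthly errors with errors on three-month (quarterly) aggregates. It assumes `pandas` and `statsmodels`; the data, model orders and error metric are placeholders, not those used in the study.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic monthly sales with yearly seasonality (stand-in for certificate volumes).
rng = np.random.default_rng(1)
idx = pd.date_range("2015-01-01", periods=84, freq="MS")
y = pd.Series(100 + 10 * np.sin(2 * np.pi * idx.month / 12) + rng.normal(0, 3, 84), index=idx)

train, test = y[:-12], y[-12:]
model = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
forecast = model.forecast(steps=12)

# Non-aggregated (monthly) error versus aggregated three-month (quarterly) error.
monthly_mae = (forecast - test).abs().mean()
quarterly_mae = (forecast.resample("QS").sum() - test.resample("QS").sum()).abs().mean()
print(f"monthly MAE: {monthly_mae:.2f}, aggregated quarterly MAE: {quarterly_mae:.2f}")
```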
4

Purchase Probability Prediction : Predicting likelihood of a new customer returning for a second purchase using machine learning methods

Alstermark, Olivia, Stolt, Evangelina January 2021 (has links)
When a company evaluates a customer as a potential prospect, one of the key questions to answer is whether the customer will generate profit in the long run. A possible step towards answering this question is to predict the likelihood of the customer returning to the company after the initial purchase. The aim of this master thesis is to investigate the possibility of using machine learning techniques to predict the likelihood of a new customer returning for a second purchase within a certain time frame. To investigate to what degree machine learning techniques can be used to predict the probability of return, a number of different model setups of Logistic Lasso, Support Vector Machine and Extreme Gradient Boosting are tested. Model development is performed to ensure well-calibrated probability predictions and to possibly overcome the difficulty arising from an imbalanced ratio of returning and non-returning customers. Throughout the thesis work, a number of actions are taken to account for data protection. One such action is to add noise to the response feature, ensuring that the true fraction of returning and non-returning customers cannot be derived. To further guarantee data protection, axis values of evaluation plots are removed and evaluation metrics are scaled. Nevertheless, it is perfectly possible to select the superior model out of all investigated models. The results show that the best performing model is a Platt-calibrated Extreme Gradient Boosting model, which has much higher performance than the other models with regard to the considered evaluation metrics, while also providing predicted probabilities of high quality. Further, the results indicate that the setups investigated to account for imbalanced data do not improve model performance. The main conclusion is that it is possible to obtain high-quality probability predictions for new customers returning to a company for a second purchase within a certain time frame using machine learning techniques. This provides a powerful tool for a company when evaluating potential prospects.
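A minimal sketch of what a Platt-calibrated Extreme Gradient Boosting classifier can look like in code is given below, using scikit-learn's sigmoid calibration wrapper around `xgboost` on synthetic, imbalanced data. It illustrates the calibration idea under these assumptions, not the thesis's actual model setup or data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import brier_score_loss
from xgboost import XGBClassifier

# Synthetic, imbalanced stand-in for "returns for a second purchase" labels.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.85], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

base = XGBClassifier(n_estimators=200, max_depth=4)
# Platt scaling: fit a sigmoid on held-out folds to map raw scores to calibrated probabilities.
model = CalibratedClassifierCV(base, method="sigmoid", cv=3)
model.fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]
print("Brier score (lower means better calibrated):", brier_score_loss(y_te, proba))
```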
5

Churn Prediction : Predicting User Churn for a Subscription-based Service using Statistical Analysis and Machine Learning Models

Flöjs, Amanda, Hägg, Alexandra January 2020 (has links)
Subscription-based services are becoming more popular in today's society. Therefore, any company that engages in the subscription-based business needs to understand user behavior and minimize the number of users canceling their subscription, i.e. minimize churn. According to marketing metrics, the probability of selling to an existing user is markedly higher than selling to a brand new user. Consequently, it is of great importance that more focus is directed towards preventing users from leaving the service, in other words preventing user churn. To be able to prevent user churn, the company needs to identify the users in the risk zone of churning. Therefore, this thesis project treats this as a classification problem. The objective of the thesis project was to develop a statistical model to predict churn for a subscription-based service. Various statistical methods were used to identify patterns in user behavior using activity and engagement data, including variables describing recency, frequency and volume. The best performance for predicting churn was achieved by the Random Forest algorithm. The selected model is able to separate the two classes, churning users and non-churning users, with 73% probability and has a fairly low misclassification rate of 35%. The results show that it is possible to predict user churn using statistical models, although there are indications that it is difficult for the model to generalize a specific behavioral pattern for user churn. This is understandable since human behavior is hard to predict. The results show that variables describing how frequently the user interacts with the service explain the most about whether a user is likely to churn or not. / (Swedish abstract, translated) Subscription services are becoming increasingly popular in today's society. It is therefore important for a company with a subscription-based business to have a good understanding of its users' behavioral patterns on the service, and to reduce the number of users who cancel their subscription. According to marketing statistics, the probability of selling to an already existing user is considerably higher than selling to a completely new one. For that reason, it is important to place a strong focus on preventing users from leaving the service. To prevent users from leaving, the company must identify which users are at risk of leaving. This thesis has therefore been treated as a classification problem. The purpose of the work was to develop a statistical model to predict which users are likely to leave the subscription service within the next month. Different statistical methods were tested to identify users' behavioral patterns in activity and engagement data, data that includes variables describing most recent interaction, frequency and volume. The best performance in predicting whether a user will leave the service was given by the Random Forest algorithm. The selected model can separate the two classes, users who leave the service and users who stay, with 73% probability and has a relatively low misclassification rate of 35%. The results show that it is possible to predict which users are at risk of leaving the service using statistical models, even though it is difficult for the model to generalize a specific behavioral pattern for the different groups. This is understandable, however, since it is human behavior that the model is trying to predict. The results indicate that variables describing the frequency of use of the service say more about whether a user is about to leave the service than variables describing the user's activity in volume.
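A minimal sketch of the recency/frequency/volume idea is shown below: a random forest is trained on three synthetic activity features, and its feature importances indicate which feature group matters most for the churn prediction. The feature names and data are hypothetical and assume scikit-learn; this is not the thesis's model or data.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic activity features of the recency/frequency/volume kind.
rng = np.random.default_rng(7)
n = 4000
df = pd.DataFrame({
    "days_since_last_login": rng.exponential(10, n),   # recency
    "sessions_last_30d": rng.poisson(12, n),            # frequency
    "minutes_active_30d": rng.gamma(2.0, 60.0, n),      # volume
})
# Synthetic label: inactive, infrequent users churn more often.
p_churn = 1 / (1 + np.exp(-(0.05 * df["days_since_last_login"] - 0.15 * df["sessions_last_30d"])))
churn = rng.binomial(1, p_churn)

X_tr, X_te, y_tr, y_te = train_test_split(df, churn, stratify=churn, random_state=0)
rf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
rf.fit(X_tr, y_tr)

print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
print(dict(zip(df.columns, rf.feature_importances_)))   # which feature group matters most
```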
6

A Machine Learning Approach to Artificial Floorplan Generation

Goodman, Genghis 01 January 2019 (has links)
The process of designing a floorplan is highly iterative and requires extensive human labor. There are currently a number of computer programs that aid humans in floorplan design. These programs, however, are limited in that they cannot fully automate the creative process. Such automation would allow a professional to quickly generate many possible floorplan solutions, greatly expediting the process. However, automating this creative process is very difficult because of the many implicit and explicit rules a model must learn in order to create viable floorplans. In this paper, we propose a method of floorplan generation using two machine learning models: a sequential model that generates rooms within the floorplan, and a graph-based model that finds adjacencies between generated rooms. Each of these models can be altered so that it is capable of producing a floorplan independently; however, we find that the combination of the two models outperforms each of its components, as well as a statistics-based approach.
7

On the use of $\alpha$-stable random variables in Bayesian bridge regression, neural networks and kernel processes

Jorge E Loria (18423207) 23 April 2024 (has links)
<p dir="ltr">The first chapter considers the l_α regularized linear regression, also termed Bridge regression. For α ∈ (0, 1), Bridge regression enjoys several statistical properties of interest such</p><p dir="ltr">as sparsity and near-unbiasedness of the estimates (Fan & Li, 2001). However, the main difficulty lies in the non-convex nature of the penalty for these values of α, which makes an</p><p dir="ltr">optimization procedure challenging and usually it is only possible to find a local optimum. To address this issue, Polson et al. (2013) took a sampling based fully Bayesian approach to this problem, using the correspondence between the Bridge penalty and a power exponential prior on the regression coefficients. However, their sampling procedure relies on Markov chain Monte Carlo (MCMC) techniques, which are inherently sequential and not scalable to large problem dimensions. Cross validation approaches are similarly computation-intensive. To this end, our contribution is a novel non-iterative method to fit a Bridge regression model. The main contribution lies in an explicit formula for Stein’s unbiased risk estimate for the out of sample prediction risk of Bridge regression, which can then be optimized to select the desired tuning parameters, allowing us to completely bypass MCMC as well as computation-intensive cross validation approaches. Our procedure yields results in a fraction of computational times compared to iterative schemes, without any appreciable loss in statistical performance.</p><p><br></p><p dir="ltr">Next, we build upon the classical and influential works of Neal (1996), who proved that the infinite width scaling limit of a Bayesian neural network with one hidden layer is a Gaussian process, when the network weights have bounded prior variance. Neal’s result has been extended to networks with multiple hidden layers and to convolutional neural networks, also with Gaussian process scaling limits. The tractable properties of Gaussian processes then allow straightforward posterior inference and uncertainty quantification, considerably simplifying the study of the limit process compared to a network of finite width. Neural network weights with unbounded variance, however, pose unique challenges. In this case, the classical central limit theorem breaks down and it is well known that the scaling limit is an α-stable process under suitable conditions. However, current literature is primarily limited to forward simulations under these processes and the problem of posterior inference under such a scaling limit remains largely unaddressed, unlike in the Gaussian process case. To this end, our contribution is an interpretable and computationally efficient procedure for posterior inference, using a conditionally Gaussian representation, that then allows full use of the Gaussian process machinery for tractable posterior inference and uncertainty quantification in the non-Gaussian regime.</p><p><br></p><p dir="ltr">Finally, we extend on the previous chapter, by considering a natural extension to deep neural networks through kernel processes. Kernel processes (Aitchison et al., 2021) generalize to deeper networks the notion proved by Neal (1996) by describing the non-linear transformation in each layer as a covariance matrix (kernel) of a Gaussian process. In this way, each succesive layer transforms the covariance matrix in the previous layer by a covariance function. 
However, the covariance obtained by this process loses any possibility of representation learning since the covariance matrix is deterministic. To address this, Aitchison et al. (2021) proposed deep kernel processes using Wishart and inverse Wishart matrices for each layer in deep neural networks. Nevertheless, the approach they propose requires using a process that does not emerge from the limit of a classic neural network structure. We introduce α-stable kernel processes (α-KP) for learning posterior stochastic covariances in each layer. Our results show that our method is much better than the approach proposed by Aitchison et al. (2021) in both simulated data and the benchmark Boston dataset.</p>
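For reference, the Bridge estimator discussed in the first chapter solves an $\ell_\alpha$-penalized least-squares problem, and the Bayesian formulation of Polson et al. (2013) corresponds to placing a power exponential prior on the coefficients:

```latex
\hat{\beta}_{\text{Bridge}}
  = \arg\min_{\beta \in \mathbb{R}^{p}}
    \; \|y - X\beta\|_2^2 + \lambda \sum_{j=1}^{p} |\beta_j|^{\alpha},
  \qquad 0 < \alpha < 1,
\qquad\text{with prior}\qquad
\pi(\beta_j) \propto \exp\!\left(-\lambda |\beta_j|^{\alpha}\right).
```

The penalized estimate is the posterior mode under this prior; for α = 1 the penalty reduces to the LASSO and for α = 2 to ridge regression.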
8

HIGH PERFORMANCE AND ENERGY EFFICIENT DEEP LEARNING MODELS

Bing Han (12872594) 16 June 2022 (has links)
Spiking Neural Networks (SNNs) have recently attracted significant research interest as the third generation of artificial neural networks that can enable low-power event-driven data analytics. We propose ANN-SNN conversion using a “soft reset” spiking neuron model, referred to as the Residual Membrane Potential (RMP) spiking neuron, which retains the “residual” membrane potential above threshold at the firing instants. In addition, we propose a time-based coding scheme, named Temporal-Switch-Coding (TSC), and a corresponding TSC spiking neuron model. Each input image pixel is presented using two spikes with opposite polarity, and the timing between the two spiking instants is proportional to the pixel intensity. We demonstrate near loss-less ANN-SNN conversion using RMP neurons for VGG-16, ResNet-20, and ResNet-34 SNNs on challenging datasets including CIFAR-10, CIFAR-100, and ImageNet. With the help of TSC coding, this achieves 7-14.5× lower inference latency, and 30-60× fewer addition operations and memory accesses per inference across datasets compared to state-of-the-art (SOTA) SNN models. In the second part of the thesis, we propose a new type of recurrent neural network (RNN) architecture, named Oscillatory Fourier Neural Network (O-FNN). We demonstrate that O-FNN is mathematically equivalent to a simplified form of the Discrete Fourier Transform applied onto periodical activation. In particular, the computationally intensive back-propagation through time in training is eliminated, leading to faster training while achieving SOTA inference accuracy in a diverse group of sequential tasks. For instance, applying the proposed model to sentiment analysis on the IMDB review dataset reaches 89.4% test accuracy within 5 epochs, accompanied by over 35× reduction in model size compared to Long Short-Term Memory (LSTM). The proposed novel RNN architecture is well poised for intelligent sequential processing in resource-constrained hardware.
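The “soft reset” behavior of the RMP neuron can be illustrated with a short sketch: instead of resetting the membrane potential to zero after a spike, the threshold is subtracted so the residual potential is carried forward. This is a simplified plain-NumPy illustration of the idea, not the thesis's implementation.

```python
import numpy as np

def rmp_neuron(inputs, threshold=1.0):
    """Integrate-and-fire with 'soft reset': on a spike, subtract the threshold
    so the residual membrane potential above threshold is retained."""
    v = 0.0
    spikes = []
    for x in inputs:
        v += x                   # integrate the weighted input current
        if v >= threshold:
            spikes.append(1)
            v -= threshold       # soft reset: keep the residual potential
        else:
            spikes.append(0)
    return np.array(spikes)

# With a constant input of 0.4 the firing rate approaches 0.4 spikes per step,
# so no activation is lost to hard resets during ANN-to-SNN conversion.
print(rmp_neuron([0.4] * 10, threshold=1.0))
```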
9

Applications of Formal Explanations in ML

Smyrnioudis, Nikolaos January 2023 (has links)
The most performant Machine Learning (ML) classifiers have been labeled black boxes due to the complexity of their decision process. eXplainable Artificial Intelligence (XAI) methods aim to alleviate this issue by crafting an interpretable explanation for a model's prediction. A limitation of most XAI methods is that they are heuristic, with drawbacks such as non-determinism and locality. Formal Explanations (FE) have been proposed as a way to explain the decisions of classifiers by extracting a set of features that guarantee the prediction. In this thesis we explore these guarantees for different use cases: speeding up the inference of tree-based Machine Learning classifiers, curriculum learning using said classifiers, and reducing training data. We find that under the right circumstances we can achieve up to 6x speedup by partially compiling the model to a set of rules that are extracted using formal explainability methods. / (Swedish abstract, translated) The most performant machine learning classifiers have been labeled black boxes because of the complexity of their decision process. Explainable artificial intelligence (XAI) methods aim to alleviate this problem by creating an interpretable explanation for the model's predictions. A drawback of most XAI methods is that they are heuristic and have certain drawbacks such as non-determinism and locality. Formal explanations (FE) have been proposed as a way to explain classifiers' decisions by extracting a set of features that guarantee the prediction. In this thesis we explore these guarantees for different use cases: increasing the inference speed of tree-based machine learning classifiers, curriculum learning using these classifiers, and also reducing the training data. We find that under the right circumstances we can achieve up to 6 times faster performance by partially compiling the model to a set of rules extracted using formal explanation methods.
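The “partial compilation” idea, answering easy inputs with a small set of extracted rules and falling back to the full model otherwise, can be sketched as below. The two rules are hand-written stand-ins for what a formal-explanation extractor might produce on the Iris data; the actual extraction requires a dedicated reasoning procedure and is not shown here.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Hypothetical rules of the kind a formal-explanation extractor might produce:
# each is (feature index, threshold, direction, predicted class).
rules = [
    (2, 2.0, "<=", 0),   # petal length <= 2.0 -> setosa
    (3, 1.7, ">",  2),   # petal width  >  1.7 -> virginica
]

def predict(x):
    """Try the compiled rules first; fall back to the full ensemble otherwise."""
    for feat, thr, op, cls in rules:
        if (op == "<=" and x[feat] <= thr) or (op == ">" and x[feat] > thr):
            return cls                        # fast path: one comparison per rule
    return int(forest.predict([x])[0])        # slow path: full forest inference

hits = sum(any((x[f] <= t if op == "<=" else x[f] > t) for f, t, op, _ in rules) for x in X)
print(f"{hits}/{len(X)} samples answered by the rule fast path")
```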
10

Assessing Viability of Open-Source Battery Cycling Data for Use in Data-Driven Battery Degradation Models

Ritesh Gautam (17582694) 08 December 2023 (has links)
<p dir="ltr">Lithium-ion batteries are being used increasingly more often to provide power for systems that range all the way from common cell-phones and laptops to advanced electric automotive and aircraft vehicles. However, as is the case for all battery types, lithium-ion batteries are prone to naturally occurring degradation phenomenon that limit their effective use in these systems to a finite amount of time. This degradation is caused by a plethora of variables and conditions including things like environmental conditions, physical stress/strain on the body of the battery cell, and charge/discharge parameters and cycling. Accurately and reliably being able to predict this degradation behavior in battery systems is crucial for any party looking to implement and use battery powered systems. However, due to the complicated non-linear multivariable processes that affect battery degradation, this can be difficult to achieve. Compared to traditional methods of battery degradation prediction and modeling like equivalent circuit models and physics-based electrochemical models, data-driven machine learning tools have been shown to be able to handle predicting and classifying the complex nature of battery degradation without requiring any prior knowledge of the physical systems they are describing.</p><p dir="ltr">One of the most critical steps in developing these data-driven neural network algorithms is data procurement and preprocessing. Without large amounts of high-quality data, no matter how advanced and accurate the architecture is designed, the neural network prediction tool will not be as effective as one trained on high quality, vast quantities of data. This work aims to gather battery degradation data from a wide variety of sources and studies, examine how the data was produced, test the effectiveness of the data in the Interfacial Multiphysics Laboratory’s autoencoder based neural network tool CD-Net, and analyze the results to determine factors that make battery degradation datasets perform better for use in machine learning/deep learning tools. This work also aims to relate this work to other data-driven models by comparing the CD-Net model’s performance with the publicly available BEEP’s (Battery Evaluation and Early Prediction) ElasticNet model. The reported accuracy and prediction models from the CD-Net and ElasticNet tools demonstrate that larger datasets with actively selected training/testing designations and less errors in the data produce much higher quality neural networks that are much more reliable in estimating the state-of-health of lithium-ion battery systems. The results also demonstrate that data-driven models are much less effective when trained using data from multiple different cell chemistries, form factors, and cycling conditions compared to more congruent datasets when attempting to create a generalized prediction model applicable to multiple forms of battery cells and applications.</p>
