21. Mining a large shopping database to predict where, when, and what consumers will buy next (Halam, Bantu, 11 September 2020)
Retailers with electronic point-of-sale systems continuously amass detailed data about the items each consumer buys (what item, how often, its package size, how many were bought, whether the item was on special, and so on). Where the retailer can also associate purchases with a particular individual, for example when an account or loyalty card is issued, the buying behaviour of the consumer can be tracked over time, providing the retailer with valuable information about a customer's changing preferences. This project mines a large database, containing the purchase histories of some 300 000 customers of a retailer, for insights into the behaviour of those customers. Specifically, the aim is to build three predictive models, each forming a chapter of the dissertation: forecasting the number of daily customers to visit a store, detecting changes in consumers' inter-purchase times, and predicting repeat customers after a special offer.

Having too many goods and not enough customers implies a loss for a business; having too few goods implies a lost opportunity to turn a profit. The ideal is to stock the appropriate number of goods for the number of customers arriving, minimising loss and maximising profit. To address this problem, in the first chapter we forecast the number of customers that will visit a store each day to buy any product (store daily visits). In the process we also carry out a comparison of time-series forecasting methods, with the main aim of comparing machine learning methods to classical statistical methods. The models are fitted to univariate time-series data, and the best model for this particular dataset is selected using three accuracy measures. The results showed little difference between the methods, though some classical methods performed slightly better than the machine learning algorithms, consistent with the outcomes obtained by Makridakis et al. (2018) on similar comparisons.

It is also vital for retailers to know when there has been a change in their consumers' purchase behaviour, whether in the time between purchases, in brand selection or in market share. It is critical for such changes to be detected as early as possible, as speedy detection can help managers act before incurring losses. In the second chapter, we use change-point models to detect changes in consumers' inter-purchase times. Change-point models offer a flexible, general-purpose solution to the problem of detecting changes in a customer's historic behaviour. The multiple change-point model assumes that there is a sequence of underlying parameters, partitioned into contiguous blocks such that the parameter values are equal within and different between blocks; the beginning of a block is considered a change point. This change-point model is fitted to consumers' inter-purchase times (i.e. we model the time between purchases) to see whether there were any significant changes in consumers' buying behaviour over a one-year purchase period. The results showed that, depending on the length of the sequences, only a minority of customers experience changes in their purchasing behaviour, with longer sequences having more changes than shorter ones.
The results seemed to differ from those obtained by Clark and Durbach (2014), but analysing a portion of sequences of the same lengths as those analysed in Clark and Durbach (2014) led to similar results.

Increasing sales growth is also vital for retailers, and there are various ways this can be achieved. Two common strategies are up-selling (persuading a customer to make an additional purchase of the same product, or to purchase a more expensive version of it) and cross-selling (selling a different product or service to an existing customer). These involve campaigns that promote certain products to customers, sometimes with incentives, with the aim of exposing customers to these products in the hope that they will become repeat customers. In Chapter 3 we build a model to predict which customers are likely to become repeat customers after being given a special offer. This model is fitted to customers' times between purchases, so the input is time-series data and sequential in nature. We therefore build models designed for sequential inputs (convolutional neural networks and recurrent neural networks) and compare them to models that do not take the order of the data into account (feedforward neural networks and decision trees). The results showed, first, that inter-purchase times are only useful when they concern the same product: the models did no better than random when inter-purchase times came from a different product in the same department. Secondly, it is useful to take the order of the sequence into account, as models that do so outperform those that do not, with the latter doing no better than a null model. Lastly, while none of the models performed well overall, the deep learning models outperformed the standard classification models and produced substantial lift.
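To make the change-point approach concrete, below is a minimal sketch of multiple change-point detection on simulated inter-purchase times, using binary segmentation with a Gaussian mean-shift cost. This illustrates the general technique, not the dissertation's exact model; the function names, penalty value and simulated data are all assumptions.

```python
# Illustrative sketch: binary segmentation for multiple change points in a
# customer's inter-purchase times, with a Gaussian mean-shift (SSE) cost.
import numpy as np

def segment_cost(x):
    # Cost of modelling a segment with a single mean: sum of squared errors.
    return np.sum((x - x.mean()) ** 2)

def binary_segmentation(x, min_size=5, penalty=25.0):
    """Recursively split where a split reduces cost by more than `penalty`;
    returns sorted change-point indices."""
    def split(lo, hi):
        if hi - lo < 2 * min_size:
            return []
        full = segment_cost(x[lo:hi])
        best_gain, best_k = 0.0, None
        for k in range(lo + min_size, hi - min_size):
            gain = full - segment_cost(x[lo:k]) - segment_cost(x[k:hi])
            if gain > best_gain:
                best_gain, best_k = gain, k
        if best_k is None or best_gain <= penalty:
            return []
        return split(lo, best_k) + [best_k] + split(best_k, hi)
    return split(0, len(x))

rng = np.random.default_rng(1)
# Simulated inter-purchase times (days): mean shifts from 7 to 14 after obs 40.
times = np.concatenate([rng.normal(7, 1.5, 40), rng.normal(14, 1.5, 30)])
print(binary_segmentation(times))  # expect a change point near index 40
```

The penalty plays the role of the prior on block boundaries in the model described above: a larger value demands stronger evidence before declaring a change point.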
22. Interval AR(1) modelling of South African stock market prices (Biyana, Mahlubandile Dugmore, January 2005)
Includes bibliographical references (leaves 124-126).
23. Multivariate multi-level non-linear mixed-effect models and their application to the modeling of drug-concentration time curves (Mauff, Katya, January 2011)
This thesis discusses the techniques involved in fitting nonlinear mixed-effect (NLME) models. In particular, it applies these techniques to the analysis of concentration-time data for antimalarial compounds, and details the extensions to the basic modelling process required to accommodate multiple responses and multiple observation phases (pregnant and postpartum).
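As an illustration of the kind of nonlinear function at the heart of such models, the sketch below simulates concentration-time curves from a generic one-compartment model with first-order absorption and log-normal subject-level random effects. This is a hypothetical stand-in, not the thesis's fitted model; all parameter values are invented for illustration.

```python
# Generic one-compartment PK curve with subject-level random effects.
import numpy as np

def conc(t, ka, ke, V, dose=100.0):
    # C(t) = dose*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t))
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

rng = np.random.default_rng(0)
t = np.linspace(0.5, 24, 12)               # sampling times (hours)
pop = {"ka": 1.2, "ke": 0.25, "V": 30.0}   # population (fixed) effects
for subject in range(3):
    # Multiplicative random effects: theta_i = theta * exp(eta_i).
    eta = rng.normal(0.0, 0.2, size=3)
    ka = pop["ka"] * np.exp(eta[0])
    ke = pop["ke"] * np.exp(eta[1])
    V = pop["V"] * np.exp(eta[2])
    y = conc(t, ka, ke, V) * np.exp(rng.normal(0, 0.1, t.size))  # residual error
    print(f"subject {subject}: peak concentration {y.max():.2f}")
```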
24. Model selection: regression and time series applications (Clark, Allan Ernest, January 2003)
In any statistical analysis the researcher is often faced with the challenging task of gleaning relevant information from a sample data set in order to answer questions about the area under investigation. Often the exact data generating process that governs a data set is unknown, so it has to be estimated using statistical methods. Regression analysis and time series analysis are two statistical techniques that can be used to undertake such an analysis. In practice a researcher will propose one model, or a group of competing models, that attempts to explain the data being investigated; this process is known as model selection. Model selection techniques have been developed to aid researchers in finding a suitable approximation to the true data generating process, and methods exist that attempt to distinguish between competing models. Many of these techniques use an information criterion that estimates the "closeness" of a fitted model to the unknown data generating process. This study investigates the properties of Bozdogan's information complexity measure (ICOMP) in time series and regression analysis. Model selection techniques have been developed for both settings. The regression techniques, however, often provide unsatisfactory results under poor experimental designs: poor design can induce collinearities, causing parameter estimates to become unstable with large standard errors. Time series analysis uses lagged autocorrelation and partial autocorrelation coefficients to specify the lag structure of the model, but in certain data sets this process is uninformative for determining the order of an ARIMA model. ICOMP guards against collinearity by considering the interaction between the parameters estimated in a model. This study investigates the properties of ICOMP in regression and time series analysis by means of a simulation study. Bibliography: pages 250-263.
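A sketch of how ICOMP can be computed for an ordinary least squares regression follows, under one common formulation: ICOMP = -2 log L + 2 C1(Cov(beta_hat)), where C1 penalises interaction (e.g. collinearity) among the parameter estimates. Formulations differ across the literature (for instance, on whether the error variance enters the covariance block), so treat the details as illustrative.

```python
# ICOMP for OLS regression under one common formulation; lower is better.
import numpy as np

def icomp_ols(X, y):
    n, p = X.shape
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ beta
    sigma2 = resid @ resid / n                 # MLE of the error variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    cov = sigma2 * np.linalg.inv(X.T @ X)      # Cov(beta_hat)
    s = cov.shape[0]
    eig = np.linalg.eigvalsh(cov)
    # C1 = (s/2) log(mean eigenvalue) - (1/2) sum log eigenvalues
    c1 = 0.5 * s * np.log(eig.mean()) - 0.5 * np.log(eig).sum()
    return -2 * loglik + 2 * c1

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
y = X @ np.array([1.0, 2.0, 0.0]) + rng.normal(size=100)
# Compare the full model against a submodel that drops the null predictor.
print(icomp_ols(X, y), icomp_ols(X[:, :2], y))
```

Because C1 grows as the eigenvalues of the covariance matrix spread apart, near-collinear designs are penalised even when the fit improves, which is the property the study investigates.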
25. Functional linear regression on Namibian and South African data (Mzimela, Nosipho, January 2016)
Indigenous to Southern Africa, Aloe dichotoma, commonly known as the quiver tree, is a species of Aloe found mostly in the southern parts of Namibia and the Northern Cape Province of South Africa. Researchers noticed that quiver trees assume very different shapes depending on their geographical location. This project aims to model the observed differences in the structural form of the trees between geographically separate populations with functional regression analysis, using climate variables at each location. A number of statistical challenges present themselves, such as the multivariate nature of the data. Functional data analysis was used in this project to display the data so as to highlight various characteristics, while allowing us to study important sources of pattern and variation among the data. Functional data analysis can be summarised as approximating discrete data with a function, by assuming the existence of an underlying function giving rise to the observed data. The underlying function is assumed to be smooth, so that adjacent data values are linked together and unlikely to be too different from each other. There are a number of smoothing methods for fitting a function to discrete data. In this project we use roughness penalty smoothing methods, which are based on optimising a fitting criterion that defines what a smooth of the data is trying to achieve; the meaning of "smooth" is expressed explicitly at the level of the criterion being optimised, rather than implicitly in terms of the number of basis functions used. Once continuous functions for the climate variables have been fitted, these are used as predictors in a functional regression model with the structural variables as responses, allowing the estimation of regression coefficients that describe the effect of the climate variables on each structural variable. The functional models suggest that maximum temperature has an effect on the structural form of Aloe dichotoma, and that its structural form does differ between geographically separate locations. Trees in the warmer northern regions are likely to be taller. The results did not necessarily support the hypothesis that trees in the north have fewer branches than those in the south, but northern trees are more likely to show dichotomous branching, which may translate to more branches.
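The sketch below illustrates roughness-penalty smoothing in the P-spline style, with a discrete second-difference penalty standing in for the integrated squared second derivative that such criteria typically use. It is a generic illustration, not the project's actual code; the simulated climate-like data and the smoothing parameter are assumptions.

```python
# Roughness-penalty smoothing: minimise ||y - B c||^2 + lam * ||D c||^2.
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, knots, degree=3):
    # Evaluate each basis function by giving it a lone unit coefficient.
    n_basis = len(knots) - degree - 1
    return np.column_stack([BSpline(knots, np.eye(n_basis)[i], degree)(x)
                            for i in range(n_basis)])

def penalized_smooth(x, y, n_interior=20, lam=1.0, degree=3):
    knots = np.concatenate([[x.min()] * degree,
                            np.linspace(x.min(), x.max(), n_interior),
                            [x.max()] * degree])
    B = bspline_basis(x, knots, degree)
    D = np.diff(np.eye(B.shape[1]), n=2, axis=0)  # second-difference penalty
    coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
    return BSpline(knots, coef, degree)

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 12, 120))    # e.g. month of observation
y = 20 + 8 * np.sin(2 * np.pi * x / 12) + rng.normal(0, 1.5, x.size)  # max temp
fit = penalized_smooth(x, y, lam=5.0)
print(fit(np.array([1.0, 6.0, 11.0])))  # smoothed values at chosen points
```

The smoothing parameter lam makes the trade-off explicit at the level of the criterion, exactly as the abstract describes: large values force the fitted function towards smoothness, small values towards interpolation.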
26. A mathematical modelling approach for the elimination of malaria in Mpumalanga, South Africa (Silal, Sheetal Prakash, January 2014)
Mpumalanga in South Africa is committed to eliminating malaria by 2018, and efforts are increasing beyond those necessary for malaria control. The eastern border of Mpumalanga is most affected by malaria, with imported cases overtaking local cases in recent years. Mathematical modelling may be used to study the incidence and spread of disease, with the important benefit that exogenous change can be enacted on the system to predict its impact without committing any real resources. Three models are developed to simulate malaria transmission: (1) a deterministic non-linear ordinary differential equation model, (2) a stochastic non-linear metapopulation differential equation model, and (3) a stochastic hybrid metapopulation differential equation and individual-based model. These models are fitted to weekly case data from Mpumalanga from 2002 to 2008, and validated with data from 2009 to 2012. Interventions such as scaled-up vector control, mass drug administration, a focal screen-and-treat campaign at the Mpumalanga-Maputo border-control point, and source reduction are applied to the models to assess their potential impact on transmission, and whether they may be used alone or in combination to achieve malaria elimination. The models predicted that scaling up vector control substantially decreases local infections, though with little impact on imported infections. Mass drug administration is a high-impact but short-lived intervention, with transmission reverting to pre-intervention levels within three years. Focal screen-and-treat campaigns are predicted to decrease local infections substantially, though success depends on the ability to detect low-parasitaemic infections. Large decreases in local infections are also predicted through foreign source reduction. The impact of imported infections is such that malaria elimination is only predicted if all imported infections are treated before entry into Mpumalanga, or are themselves eliminated at their source. Thus a regionally focused strategy may stand a better chance of achieving elimination in Mpumalanga and South Africa than a nationally focused one. In this manner, mathematical models may form an integral part of the research, planning and evaluation of elimination-focused strategies, so that malaria elimination is possible in the foreseeable future.
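As a toy illustration of the deterministic model (1), the sketch below implements a deliberately simplified SIS-type compartmental model with a constant inflow of imported infections. It is not Silal's actual system of equations; all rates and initial values are invented for illustration.

```python
# Toy compartmental model: local SIS dynamics plus imported infections.
import numpy as np
from scipy.integrate import odeint

def model(y, t, beta, gamma, delta):
    S, I, M = y                        # local susceptible/infected, imported infected
    force = beta * (I + M) / (S + I)   # imported cases add to the force of infection
    dS = -force * S + gamma * I        # locals recover back to susceptible
    dI = force * S - gamma * I
    dM = delta - gamma * M             # imported infections arrive at rate delta
    return [dS, dI, dM]

t = np.linspace(0, 365, 366)           # one year, daily steps
base = odeint(model, [10_000, 50, 20], t, args=(0.06, 0.05, 2.0))
# Halving beta mimics scaled-up vector control; setting delta = 0 mimics
# eliminating imported infections at their source.
vc = odeint(model, [10_000, 50, 20], t, args=(0.03, 0.05, 2.0))
no_imports = odeint(model, [10_000, 50, 20], t, args=(0.06, 0.05, 0.0))
print(base[-1, 1], vc[-1, 1], no_imports[-1, 1])  # local infections after a year
```

Even in this toy version the abstract's central finding is visible: reducing beta lowers local infections but leaves the imported compartment untouched, while setting delta to zero removes the reservoir that sustains transmission.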
27. An object-oriented approach to structuring multicriteria decision support in natural resource management problems (Liu, Dingfei, January 2001)
Includes bibliographical references.

The undertaking of MCDM (Multicriteria Decision Making) and the development of DSSs (Decision Support Systems) tend to be complex and inefficient, leading to low productivity in decision analysis and DSSs. To address this, this study has developed an object-oriented approach to MCDM and DSS modelling, with an emphasis on natural resource management. The object-oriented approach provides a philosophy for modelling decision analysis and DSSs in a uniform way, as shown by the diagrams presented in this study. The solving of natural resource management decision problems, the MCDM decision-making procedure and decision-making activities are modelled in an object-oriented way: the macro decision analysis system, its DSS, the decision problem, the decision context, and the entities in the decision-making procedure are represented as "objects". The object-oriented representation of decision analysis also constitutes the basis for the analysis of DSSs.
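The sketch below gives a generic flavour of representing MCDM entities as objects: the decision problem, its criteria and its alternatives, with a simple weighted-sum evaluation. The class design is hypothetical and stands in for, rather than reproduces, the study's diagrams.

```python
# Generic object-oriented MCDM structure with weighted-sum scoring.
from dataclasses import dataclass, field

@dataclass
class Criterion:
    name: str
    weight: float
    maximise: bool = True          # e.g. timber yield (max) vs erosion risk (min)

@dataclass
class Alternative:
    name: str
    scores: dict                   # criterion name -> normalised score in [0, 1]

@dataclass
class DecisionProblem:
    criteria: list
    alternatives: list = field(default_factory=list)

    def evaluate(self, alt):
        total = 0.0
        for c in self.criteria:
            s = alt.scores[c.name]
            total += c.weight * (s if c.maximise else 1.0 - s)
        return total

    def rank(self):
        return sorted(self.alternatives, key=self.evaluate, reverse=True)

problem = DecisionProblem(
    criteria=[Criterion("timber_yield", 0.5),
              Criterion("erosion_risk", 0.5, maximise=False)],
    alternatives=[Alternative("clear_fell", {"timber_yield": 0.9, "erosion_risk": 0.8}),
                  Alternative("selective_harvest", {"timber_yield": 0.6, "erosion_risk": 0.3})])
print([a.name for a in problem.rank()])
```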
28. Insurance recommendation engine using a combined collaborative filtering and neural network approach (Pillay, Prinavan, 15 September 2021)
A recommendation engine for insurance products was designed, implemented and tested using a neural network and collaborative filtering approach. The engine aims to suggest suitable insurance products for new or existing customers, based on their features or selection history. The collaborative filtering approach used matrix factorization on an existing user base to provide recommendation scores for new products to existing users. The content-based method used a neural network architecture that uses customer features to provide product recommendations for new users. Both methods were deployed using the TensorFlow machine learning framework. The hybrid approach helps solve the cold-start problem, where users have no interaction history. The collaborative filtering achieved a root mean square error of 0.13 on implicit feedback ratings of 0-1, and an overall Top-3 classification accuracy (the ability to predict one of a customer's top three choices) of 83.8%. The neural network achieved a Top-3 classification accuracy of 77.2%. The system thus achieved good training performance and, given further modifications, could be used in a production environment.
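To illustrate the matrix-factorization component, here is a minimal NumPy sketch of stochastic gradient descent on implicit 0-1 feedback. The production system used TensorFlow; the dimensions, hyperparameters and data here are illustrative assumptions.

```python
# Matrix factorization on implicit 0-1 feedback: R ~ P @ Q.T, trained by SGD.
import numpy as np

rng = np.random.default_rng(4)
R = (rng.random((50, 8)) < 0.3).astype(float)  # 50 users x 8 products, 1 = held
P = rng.normal(0, 0.1, (50, 4))                # user latent factors
Q = rng.normal(0, 0.1, (8, 4))                 # product latent factors
lr, reg = 0.05, 0.01

for epoch in range(200):
    for u, i in np.ndindex(*R.shape):          # implicit feedback: every cell is a target
        err = R[u, i] - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

pred = P @ Q.T
print(f"RMSE: {np.sqrt(np.mean((R - pred) ** 2)):.3f}")
print("top-3 products for user 0:", np.argsort(-pred[0])[:3])
```

The Top-3 metric reported in the abstract corresponds to checking whether a customer's actual choice appears in the three highest-scoring products of a row of `pred`.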
29. Triplet entropy loss: improving the generalisation of short speech language identification systems (Van Der Merwe, Ruan Henry, 16 September 2021)
Spoken language identification systems form an integral part of many speech recognition tools today. Over the years many techniques have been used to identify the language spoken given just the audio input, but in recent years the trend has been towards end-to-end deep learning systems. Most of these techniques convert the audio signal into a spectrogram, which is fed into a convolutional neural network that predicts the spoken language. This performs very well when the data fed to the model originates from the same domain as the training examples, but as soon as the input comes from a different domain these systems tend to perform poorly; an example is a system trained on WhatsApp recordings that is put into production receiving recordings from a phone line. The research presented investigates several methods to improve the generalisation of language identification systems to new speakers and new domains. These methods involve spectral augmentation, where spectrograms are masked in frequency or time bands during training, and CNN architectures pre-trained on the ImageNet dataset. The research also introduces the novel Triplet Entropy Loss training method, in which a network is trained simultaneously using cross entropy and triplet loss. Several tests were run with three different CNN architectures to investigate the effect of all three methods on the generalisation of an LID system. The tests were done in a South African context on six languages: Afrikaans, English, Sepedi, Setswana, Xhosa and Zulu. The two domains tested were data from the NCHLT speech corpus, used as the training domain, with the Lwazi speech corpus as the unseen domain. All three methods improved the generalisation of the models, though not significantly. Even though the models trained using Triplet Entropy Loss showed a better understanding of the languages and higher accuracies, it appears that the models still memorise word patterns present in the spectrograms rather than learning the finer nuances of a language. The research shows that Triplet Entropy Loss has great potential and should be investigated further, not only in language identification but in any classification task.
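A minimal sketch of the Triplet Entropy Loss idea follows: one network with a classification head and an embedding head, trained on cross entropy plus triplet loss simultaneously. The tiny architecture, the equal loss weighting and the assumption that the loader yields ready-made triplets are all illustrative, not necessarily the thesis's setup.

```python
# Combined cross-entropy + triplet objective on spectrogram inputs.
import torch
import torch.nn as nn

class TinyLID(nn.Module):
    def __init__(self, n_langs=6, emb_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(     # stand-in for a spectrogram CNN
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, emb_dim))
        self.classifier = nn.Linear(emb_dim, n_langs)

    def forward(self, x):
        emb = self.backbone(x)
        return self.classifier(emb), emb   # logits for CE, embedding for triplet

model = TinyLID()
ce = nn.CrossEntropyLoss()
triplet = nn.TripletMarginLoss(margin=1.0)

# One illustrative step: assume the loader yields (anchor, positive, negative)
# spectrogram triplets plus the anchor's language label.
anchor, positive, negative = (torch.randn(8, 1, 64, 100) for _ in range(3))
labels = torch.randint(0, 6, (8,))

logits, emb_a = model(anchor)
_, emb_p = model(positive)
_, emb_n = model(negative)
loss = ce(logits, labels) + triplet(emb_a, emb_p, emb_n)
loss.backward()
print(float(loss))
```

The triplet term pulls same-language embeddings together and pushes different-language embeddings apart, which is intended to encourage domain-robust representations beyond what the cross-entropy term alone learns.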
30. A tabu search algorithm for the vehicle routing problem with time windows (Parker, Saarah, January 2015)
Includes bibliographic references.