51

COMPUTATIONAL METHODS FOR RANDOM DIFFERENTIAL EQUATIONS: THEORY AND APPLICATIONS

Navarro Quiles, Ana 01 March 2018 (has links)
From the early contributions of Isaac Newton, Gottfried Wilhelm Leibniz, and Jacob and Johann Bernoulli in the seventeenth century to the present day, difference and differential equations have repeatedly demonstrated their capability to model complex problems of great interest in Engineering, Physics, Chemistry, Epidemiology, Economics, and other fields. From a practical standpoint, however, applying difference or differential equations requires setting their inputs (coefficients, source term, initial and boundary conditions) from sampled data, which carry uncertainty stemming from measurement errors. In addition, random external factors can affect the system under study. It is therefore more appropriate to treat the input data as random variables or stochastic processes rather than as deterministic constants or functions, respectively. Under this consideration, random difference and differential equations arise.
This thesis solves, from a probabilistic point of view, different types of random difference and differential equations, relying mainly on the Random Variable Transformation method. This technique is a useful tool for obtaining the probability density function of a random vector that results from mapping another random vector whose probability density function is known. In short, the goal of this dissertation is the computation of the first probability density function of the solution stochastic process in various problems based on random difference or differential equations. The interest in determining the first probability density function is justified because this deterministic function characterizes the one-dimensional probabilistic information, such as the mean, variance, skewness, and kurtosis, of the solution of the corresponding random difference or differential equation. It also allows the probability of any event of interest involving the solution to be determined. In addition, in some cases the theoretical study is complemented by applications to modelling problems with real data, where the estimation of parametric statistical distributions for the inputs is addressed in the context of random difference and differential equations. / Navarro Quiles, A. (2018). COMPUTATIONAL METHODS FOR RANDOM DIFFERENTIAL EQUATIONS: THEORY AND APPLICATIONS [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/98703
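The Random Variable Transformation technique summarised above can be illustrated on a simple random initial value problem. The sketch below (the model, parameter values, and function name are illustrative assumptions, not taken from the thesis) applies the change-of-variables formula f_X(x; t) = f_a(g^{-1}(x)) |dg^{-1}/dx| to the solution X(t) = x0·exp(a·t) of x'(t) = a·x(t) with a Gaussian random growth rate a:

```python
# Hedged sketch of the Random Variable Transformation (RVT) idea for the random IVP
#   x'(t) = a*x(t), x(0) = x0,   with a ~ N(mu, sigma^2) and deterministic x0 > 0.
# The solution is X(t) = x0*exp(a*t); for fixed t > 0 the inverse map is
# a = ln(x/x0)/t with Jacobian |da/dx| = 1/(x*t), so
#   f_X(x; t) = f_a(ln(x/x0)/t) / (x*t),  x > 0.
import numpy as np
from scipy.stats import norm

def first_pdf_exponential_growth(x, t, x0=1.0, mu=0.5, sigma=0.1):
    """First probability density function f_X(x; t) of X(t) = x0*exp(a*t)."""
    a = np.log(x / x0) / t          # inverse transformation
    jacobian = 1.0 / (x * t)        # |da/dx|
    return norm.pdf(a, loc=mu, scale=sigma) * jacobian

# Example: density of the solution at t = 2 on a grid of x values.
x_grid = np.linspace(1.5, 5.0, 200)
pdf_values = first_pdf_exponential_growth(x_grid, t=2.0)
print(pdf_values.max())
```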
52

Analysis of diagnostic climate model cloud parameterisations using large-eddy simulations

Rosch, Jan, Heus, Thijs, Salzmann, Marc, Mülmenstädt, Johannes, Schlemmer, Linda, Quaas, Johannes 28 April 2016 (has links) (PDF)
Current climate models often predict fractional cloud cover on the basis of a diagnostic probability density function (PDF) describing the subgrid-scale variability of the total water specific humidity, qt, favouring schemes with limited complexity. Standard shapes are uniform or triangular PDFs whose width is assumed to scale with the grid-box mean qt or the grid-box mean saturation specific humidity, qs. In this study, the qt variability is analysed from large-eddy simulations for two stratocumulus cases, two shallow cumulus cases, and one deep convective case. We find that in most cases, triangles are a better approximation to the simulated PDFs than uniform distributions. In two of the 24 slices examined, the actual distributions were so strongly skewed that the simple symmetric shapes could not capture the PDF at all. The distribution width for either shape scales acceptably well with both the mean value of qt and qs, the former being a slightly better choice. The qt variance is underestimated by the fitted PDFs, but overestimated by the existing parameterisations. While the cloud fraction is in general relatively well diagnosed from fitted or parameterised uniform or triangular PDFs, the diagnosis fails to capture cases with small partial cloudiness and, in 10–30% of the cases, misdiagnoses clouds in clear skies or vice versa. The results suggest choosing a parameterisation with a triangular shape, where the distribution width scales with the grid-box mean qt using a scaling factor of 0.076. This, however, is subject to the caveat that the reference simulations examined here were partly for rather small domains and driven by idealised boundary conditions.
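As a rough illustration of how such a diagnostic scheme turns a triangular qt PDF into a cloud fraction, the following sketch integrates a symmetric triangular distribution above saturation. Interpreting the scaled width 0.076·qt as the half-width of the triangle is an assumption made here for illustration; the paper's exact definition may differ.

```python
# Hedged sketch: cloud fraction diagnosed from a symmetric triangular subgrid PDF
# of total water specific humidity qt. The half-width w = alpha*qt_mean with
# alpha = 0.076 is an illustrative reading of the scaling discussed above.
from scipy.stats import triang

def cloud_fraction_triangular(qt_mean, qs, alpha=0.076):
    """Fraction of the grid box where qt exceeds the saturation value qs."""
    w = alpha * qt_mean                                     # assumed half-width
    dist = triang(c=0.5, loc=qt_mean - w, scale=2.0 * w)    # symmetric triangle
    return dist.sf(qs)                                      # P(qt > qs)

# Example: a grid box whose mean humidity sits just below saturation (kg/kg).
print(cloud_fraction_triangular(qt_mean=9.0e-3, qs=9.2e-3))
```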
53

Uncertainty and sensitivity analysis of a materials test reactor / Mogomotsi Ignatius Modukanele

Modukanele, Mogomotsi Ignatius January 2013 (has links)
This study was based on the uncertainty and sensitivity analysis of a generic 10 MW Materials Test Reactor (MTR). In this study an uncertainty and sensitivity analysis methodology called Code Scaling, Applicability and Uncertainty (CSAU) was implemented. Although this methodology comprises 14 steps, only the following were carried out: scenario specification, nuclear power plant (NPP) selection, phenomena identification and ranking table (PIRT), selection of frozen code, provision of code documentation, determination of code applicability, determination of code and experiment accuracy, NPP sensitivity analysis calculations, combination of biases and uncertainties, and total uncertainty to calculate the specific scenario in a specific NPP. The thermal-hydraulic code Flownex® was used to model only the reactor core, to investigate the effects of the input parameters on the selected output parameters of the hot channel in the core. These output parameters were mass flow rate, temperature of the coolant, outlet pressure, centreline temperature of the fuel, and surface temperature of the cladding. The PIRT process was used in conjunction with the sensitivity analysis results in order to select the relevant input parameters that significantly influenced the selected output parameters. The input parameters that had the largest effect on the selected output parameters were found to be the coolant flow channel width between the plates in the hot channel, the width of the fuel plates themselves in the hot channel, the heat generation in the fuel plate of the hot channel, the global mass flow rate, the global coolant inlet temperature, the coolant flow channel width between the plates in the cold channel, and the width of the fuel plates in the cold channel. The uncertainty in the input parameters was then propagated in Flownex using its Monte Carlo-based uncertainty analysis function. From these results, the corresponding probability density function (PDF) of each selected output parameter was constructed. These functions were found to follow a normal distribution. / MIng (Nuclear Engineering), North-West University, Potchefstroom Campus, 2014
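The Monte Carlo propagation step described above can be sketched generically as follows; the surrogate function, input distributions, and parameter values are placeholders standing in for the Flownex core model, not its actual interface.

```python
# Illustrative sketch (not Flownex itself) of Monte Carlo uncertainty propagation:
# sample the uncertain inputs, evaluate a surrogate model for each sample, and
# build the empirical PDF of an output such as cladding surface temperature.
import numpy as np

rng = np.random.default_rng(0)

def cladding_surface_temp(channel_width, plate_width, heat_gen, inlet_temp):
    """Placeholder surrogate: a smooth function standing in for the real code."""
    return inlet_temp + heat_gen / (1.0e6 * channel_width) + 2.0e3 * plate_width

n = 10_000
samples = {
    "channel_width": rng.normal(2.2e-3, 0.05e-3, n),   # m, assumed uncertainty
    "plate_width":   rng.normal(0.51e-3, 0.01e-3, n),  # m, assumed
    "heat_gen":      rng.normal(1.0e5, 5.0e3, n),      # W, assumed
    "inlet_temp":    rng.normal(311.0, 1.0, n),        # K, assumed
}
temps = cladding_surface_temp(**samples)
print(temps.mean(), temps.std())   # summary of the propagated output PDF
```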
54

Maximum-likelihood kernel density estimation in high-dimensional feature spaces / C.M. van der Walt

Van der Walt, Christiaan Maarten January 2014 (has links)
With the advent of the internet and advances in computing power, the collection of very large high-dimensional datasets has become feasible; understanding and modelling high-dimensional data has thus become a crucial activity, especially in the field of pattern recognition. Since non-parametric density estimators are data-driven and do not require or impose a pre-defined probability density function on data, they are very powerful tools for probabilistic data modelling and analysis. Conventional non-parametric density estimation methods, however, originated from the field of statistics and were not originally intended to perform density estimation in high-dimensional feature spaces, as is often encountered in real-world pattern recognition tasks. We therefore address the fundamental problem of non-parametric density estimation in high-dimensional feature spaces in this study. Recent advances in maximum-likelihood (ML) kernel density estimation have shown that kernel density estimators hold much promise for estimating non-parametric probability density functions in high-dimensional feature spaces. We therefore derive two new iterative kernel bandwidth estimators from the maximum-likelihood (ML) leave-one-out objective function and also introduce a new non-iterative kernel bandwidth estimator (based on the theoretical bounds of the ML bandwidths) for the purpose of bandwidth initialisation. We name the iterative kernel bandwidth estimators the minimum leave-one-out entropy (MLE) and global MLE estimators, and name the non-iterative kernel bandwidth estimator the MLE rule-of-thumb estimator. We compare the performance of the MLE rule-of-thumb estimator and conventional kernel density estimators on artificial data with data properties that are varied in a controlled fashion and on a number of representative real-world pattern recognition tasks, to gain a better understanding of the behaviour of these estimators in high-dimensional spaces and to determine whether these estimators are suitable for initialising the bandwidths of iterative ML bandwidth estimators in high dimensions. We find that there are several regularities in the relative performance of conventional kernel density estimators across different tasks and dimensionalities and that the Silverman rule-of-thumb bandwidth estimator performs reliably across most tasks and dimensionalities of the pattern recognition datasets considered, even in high-dimensional feature spaces. Based on this empirical evidence and the intuitive theoretical motivation that the Silverman estimator optimises the asymptotic mean integrated squared error (assuming a Gaussian reference distribution), we select this estimator to initialise the bandwidths of the iterative ML kernel bandwidth estimators compared in our simulation studies. We then perform a comparative simulation study of the newly introduced iterative MLE estimators and other state-of-the-art iterative ML estimators on a number of artificial and real-world high-dimensional pattern recognition tasks. We illustrate with artificial data (guided by theoretical motivations) under what conditions certain estimators should be preferred and we empirically confirm on real-world data that no estimator performs optimally on all tasks and that the optimal estimator depends on the properties of the underlying density function being estimated.
We also observe an interesting case of the bias-variance trade-off where ML estimators with fewer parameters than the MLE estimator perform exceptionally well on a wide variety of tasks; however, for the cases where these estimators do not perform well, the MLE estimator generally performs well. The newly introduced MLE kernel bandwidth estimators prove to be a useful contribution to the field of pattern recognition, since they perform optimally on a number of real-world pattern recognition tasks investigated and provide researchers and practitioners with two alternative estimators to employ for the task of kernel density estimation. / PhD (Information Technology), North-West University, Vaal Triangle Campus, 2014
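For readers unfamiliar with the two bandwidth-selection ideas compared above, the following 1-D sketch shows a Silverman rule-of-thumb initialisation followed by a grid search that maximises the leave-one-out log-likelihood; it is a generic illustration, not the thesis's MLE estimators.

```python
# Hedged sketch: Silverman rule-of-thumb bandwidth for initialisation, and a
# leave-one-out maximum-likelihood (ML) bandwidth chosen by maximising the
# leave-one-out log-likelihood of a 1-D Gaussian kernel density estimate.
import numpy as np

def silverman_bandwidth(x):
    """Silverman rule of thumb for a 1-D Gaussian kernel."""
    n = len(x)
    return 1.06 * np.std(x, ddof=1) * n ** (-1.0 / 5.0)

def loo_log_likelihood(x, h):
    """Leave-one-out log-likelihood of a Gaussian kernel density with bandwidth h."""
    n = len(x)
    d = (x[:, None] - x[None, :]) / h                   # pairwise scaled distances
    k = np.exp(-0.5 * d ** 2) / (h * np.sqrt(2 * np.pi))
    np.fill_diagonal(k, 0.0)                            # leave each point out
    loo_density = k.sum(axis=1) / (n - 1)
    return np.log(loo_density).sum()

rng = np.random.default_rng(1)
x = rng.normal(size=500)
h0 = silverman_bandwidth(x)                             # initialisation
grid = h0 * np.logspace(-0.5, 0.5, 41)                  # search around it
h_ml = grid[np.argmax([loo_log_likelihood(x, h) for h in grid])]
print(h0, h_ml)
```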
55

Derivation of Probability Density Functions for the Relative Differences in the Standard and Poor's 100 Stock Index Over Various Intervals of Time

Bunger, R. C. (Robert Charles) 08 1900 (has links)
In this study a two-part mixed probability density function was derived which described the relative changes in the Standard and Poor's 100 Stock Index over various intervals of time. The density function is a mixture of two different halves of normal distributions. Optimal values for the standard deviations for the two halves and the mean are given. Also, a general form of the function is given which uses linear regression models to estimate the standard deviations and the means. The density functions allow stock market participants trading index options and futures contracts on the S & P 100 Stock Index to determine probabilities of success or failure of trades involving price movements of certain magnitudes in given lengths of time.
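A two-piece ("split") normal density of the kind described above can be written down directly; the sketch below uses illustrative parameter values rather than the optimal values reported in the thesis.

```python
# Hedged sketch of a two-piece normal PDF: two half-normals sharing a mode mu but
# with different standard deviations on each side, normalised so the halves join
# continuously and the total area is one.
import numpy as np

def split_normal_pdf(x, mu, sigma_left, sigma_right):
    """Two-piece normal PDF: left half uses sigma_left, right half sigma_right."""
    x = np.asarray(x, dtype=float)
    norm_const = np.sqrt(2.0 / np.pi) / (sigma_left + sigma_right)
    sigma = np.where(x < mu, sigma_left, sigma_right)
    return norm_const * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Example: daily relative change in an index, skewed toward losses (illustrative).
x = np.linspace(-0.05, 0.05, 501)
pdf = split_normal_pdf(x, mu=0.0005, sigma_left=0.012, sigma_right=0.009)
print(np.trapz(pdf, x))   # should be close to 1
```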
56

Multi-regime Turbulent Combustion Modeling using Large Eddy Simulation/Probability Density Function

Shashank Satyanarayana Kashyap (6945575) 14 August 2019 (has links)
Combustion research is at the forefront of the development of clean and efficient IC engines, gas turbines, rocket propulsion systems, etc. With the advent of faster computers and parallel programming, computational studies of turbulent combustion are increasing rapidly. Many turbulent combustion models have been developed based on certain underlying assumptions. One of the major assumptions of these models is the regime they can be used for: either premixed or non-premixed combustion. In reality, however, combustion systems are multi-regime in nature, i.e., premixed and non-premixed modes co-exist. Thus, there is a need for multi-regime combustion models that closely follow the physics of the combustion phenomena. Much of the previous modeling effort for multi-regime combustion used flamelet-type models. As a first attempt of its kind, the current study uses the highly robust transported Probability Density Function (PDF) method coupled with Large Eddy Simulation (LES) to develop a multi-regime model. The model performance is tested for Sydney Flame L, a piloted methane-air turbulent flame. The concept of flame index is used to detect the extent of the premixed and non-premixed combustion modes. The drawbacks of using the traditional flame index definition in the context of the PDF method are identified. Necessary refinements to this definition, based on the species gradient magnitudes, are proposed for the multi-regime model development. This results in a new model parameter, beta, which defines a gradient threshold for the calculation of the flame index. A parametric study is performed to determine a suitable value for beta, using which the multi-regime model performance is assessed for Flame L by comparing it against the widely used non-premixed PDF model for three mixing models: Modified Curl (MCurl), Interaction by Exchange with the Mean (IEM) and Euclidean Minimum Spanning Trees (EMST). The multi-regime model shows a significant improvement in the prediction of mean scalar quantities compared to the non-premixed PDF model when the MCurl mixing model is used. Similar improvements are observed in the multi-regime model when the IEM and EMST mixing models are used. The results provide a promising foundation for further multi-regime model development using the PDF method.
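The flame-index idea referred to above is commonly computed as the normalised alignment of the fuel and oxidiser mass-fraction gradients; the sketch below adds a gradient-magnitude threshold beta in the spirit of the refinement described, though the exact formulation used in the thesis may differ.

```python
# Hedged sketch of a flame-index calculation: the normalised alignment of fuel and
# oxidiser mass-fraction gradients, evaluated only where both gradient magnitudes
# exceed a threshold beta. Positive values indicate premixed-like burning,
# negative values non-premixed-like burning.
import numpy as np

def flame_index(Y_fuel, Y_ox, dx, beta=1e-3):
    """Return flame index in [-1, 1] where gradients are significant, NaN elsewhere."""
    gfx, gfy = np.gradient(Y_fuel, dx)
    gox, goy = np.gradient(Y_ox, dx)
    mag_f = np.hypot(gfx, gfy)
    mag_o = np.hypot(gox, goy)
    fi = np.full_like(Y_fuel, np.nan)
    mask = (mag_f > beta) & (mag_o > beta)          # gradient-magnitude threshold
    fi[mask] = (gfx * gox + gfy * goy)[mask] / (mag_f * mag_o)[mask]
    return fi

# Example on a toy 2-D field: counter-gradient fuel and oxidiser (non-premixed-like).
x = np.linspace(0.0, 1.0, 64)
X, Y = np.meshgrid(x, x)
fi = flame_index(Y_fuel=np.exp(-10 * X), Y_ox=np.exp(-10 * (1 - X)), dx=x[1] - x[0])
print(np.nanmean(fi))
```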
57

Estimation of Emission Strength and Air Pollutant Concentrations by Lagrangian Particle Modeling

Manomaiphiboon, Kasemsan 30 March 2004 (has links)
A Lagrangian particle model was applied to estimating emission strength and air pollutant concentrations, specifically for the short-range dispersion of an air pollutant in the atmospheric boundary layer. The model performance was evaluated with experimental data. The model was then used as the platform for a parametric uncertainty analysis, in which the effects of uncertainties in five parameters of the model (Monin-Obukhov length, friction velocity, roughness height, mixing height, and the universal constant of the random component) on mean ground-level concentrations were examined under slightly and moderately stable conditions. The analysis was performed in a probabilistic framework using Monte Carlo simulations with Latin hypercube sampling and linear regression modeling. In addition, four studies related to Lagrangian particle modeling were included: an alternative technique for formulating joint probability density functions of velocity in atmospheric turbulence based on the Koehler-Symanowski technique; an analysis of local increments in a multidimensional single-particle Lagrangian particle model using the algebra of Ito integrals and the Wagner-Platen formula; an analogy between the diffusion limit of Lagrangian particle models and the classical theory of turbulent diffusion; and an evaluation of some proposed forms of the Lagrangian velocity autocorrelation of turbulence.
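The core of such a Lagrangian particle model is a Langevin-type stochastic equation for particle velocity; the following sketch integrates a 1-D version for homogeneous turbulence with Euler-Maruyama, using illustrative parameter values rather than the study's configuration.

```python
# Hedged sketch of a 1-D Lagrangian stochastic (Langevin) dispersion model:
#   dw = -(w / T_L) dt + sqrt(2 sigma_w^2 / T_L) dW,
# integrated with Euler-Maruyama for particles released from a point source.
import numpy as np

def disperse(n_particles=10_000, n_steps=600, dt=1.0, sigma_w=0.5, T_L=100.0,
             z_source=50.0, seed=0):
    """Return final particle heights (m) for a point source at z_source."""
    rng = np.random.default_rng(seed)
    z = np.full(n_particles, z_source)
    w = rng.normal(0.0, sigma_w, n_particles)   # initial velocities from turbulence PDF
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_particles)
        w += -(w / T_L) * dt + np.sqrt(2.0 * sigma_w**2 / T_L) * dW
        z += w * dt
        z = np.abs(z)                           # perfect reflection at the ground
    return z

# Concentration profile ~ histogram of particle positions.
z_final = disperse()
counts, edges = np.histogram(z_final, bins=50)
print(edges[np.argmax(counts)])
```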
58

Estimation of the probability density function of parameters derived from acoustic emission source signals

Γρενζελιάς, Αναστάσιος 25 June 2009 (has links)
This diploma thesis addresses the estimation of the probability density function of parameters derived from acoustic emission source signals. In the theoretical part, the topics of greatest interest are Non-Destructive Testing and Acoustic Emission, together with their applications. The data processed fall into two categories: data that were provided ready-made and data that were acquired through laboratory measurements. The expectation-maximization algorithm, studied theoretically and used to process the experimental data, was the basis for extracting the parameters of each signal. Having obtained the parameters, the signals were classified into categories following the theory of pattern recognition. The thesis closes with an appendix of detailed results and the bibliography used.
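The expectation-maximization step used to extract the signal parameters can be sketched for a simple two-component 1-D Gaussian mixture as follows; the synthetic feature values and component count are assumptions made for illustration.

```python
# Hedged sketch of the expectation-maximization (EM) algorithm fitting a
# two-component 1-D Gaussian mixture to a feature extracted from acoustic
# emission signals. The feature values below are synthetic placeholders.
import numpy as np
from scipy.stats import norm

def em_gmm_1d(x, n_iter=100):
    """Fit a 2-component Gaussian mixture to 1-D data with plain EM."""
    w = np.array([0.5, 0.5])                          # crude initialisation
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point
        dens = w * norm.pdf(x[:, None], mu, sigma)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and standard deviations
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sigma

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(40.0, 3.0, 300), rng.normal(70.0, 5.0, 200)])
print(em_gmm_1d(x))
```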
59

Synthetic Aperture Sonar Micronavigation Using An Active Acoustic Beacon.

Pilbrow, Edward Neil January 2007 (has links)
Synthetic aperture sonar (SAS) technology has progressed rapidly over the past few years, with a number of commercial systems emerging. Such systems are typically based on an autonomous underwater vehicle platform containing multiple along-track receivers and an integrated inertial navigation system (INS) with Doppler velocity log aiding. While these systems produce excellent images, blurring due to INS integration errors and medium fluctuations continues to limit long-range, long-run image quality. This is particularly relevant in mine hunting, the main application for SAS, where it is critical to survey the greatest possible area in the shortest possible time, regardless of sea conditions. This thesis presents the simulation, design, construction, and sea trial results for a prototype "active beacon" and remote controller unit, to investigate the potential of such a device for estimating SAS platform motion and medium fluctuations. The beacon is deployed by hand in the area of interest and acts as an active point source, with real-time data uploading and control performed by radio link. Its operation is tightly integrated with that of the Acoustics Research Group KiwiSAS towed SAS, producing one-way and two-way time of flight (TOF) data for every ping by detecting the sonar chirps, time-stamping their arrival using a GPS receiver, and replying back at a different acoustic frequency after a fixed time delay. The high SNR of this reply signal, combined with the knowledge that it is produced by a single point source, provides advantages over passive point-like targets for SAS image processing. Stationary accuracies of < 2 mm RMS have been measured at ranges of up to 36 m. This high accuracy allowed the beacon to be used in a separate study to characterise the medium fluctuation statistics in Lyttelton Harbour, New Zealand, using an indoor dive pool as a control. Probability density functions were fitted to the data and then incorporated in SAS simulations to observe their effect on image quality. Results from recent sea trials in Lyttelton Harbour show that the beacon TOF data, when used in a narrowband motion compensation (MOCOMP) process, improved the quality of SAS images centred on frequencies of 30 kHz and 100 kHz. This prototype uses simple matched-filtering algorithms for detection; while these perform well under stationary conditions, the fluctuations caused by the narrow sonar transmit beam pattern (BP) and the changing superposition of seabed multipath often cause dropouts and inaccurate detections during sea trials. An analysis of the BP effects and of how the accuracy and robustness of the detection algorithms can be improved is presented. Overcoming these problems reliably is difficult without dedicated large-scale testing facilities that allow conditions to be reproduced consistently.
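The matched-filter detection that the beacon relies on can be sketched as a cross-correlation of the received signal with a replica of the transmitted chirp; the sample rate, chirp band, and noise level below are illustrative assumptions, not the KiwiSAS parameters.

```python
# Hedged sketch of matched-filter detection of a sonar chirp: cross-correlate the
# received signal with a replica of the transmitted linear chirp and take the
# correlation peak as the time of flight.
import numpy as np
from scipy.signal import chirp, correlate

fs = 192_000                       # sample rate (Hz), assumed
t = np.arange(0, 0.02, 1 / fs)     # 20 ms pulse
replica = chirp(t, f0=25_000, f1=35_000, t1=t[-1], method="linear")

# Simulated reception: replica delayed by 24 ms in noise.
rng = np.random.default_rng(3)
rx = rng.normal(0.0, 0.5, int(0.1 * fs))
delay_samples = int(0.024 * fs)
rx[delay_samples:delay_samples + len(replica)] += replica

# Matched filter = cross-correlation with the replica; the peak gives the delay.
corr = correlate(rx, replica, mode="valid")
tof = np.argmax(np.abs(corr)) / fs
print(f"estimated time of flight: {tof * 1e3:.2f} ms")
```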
