41 |
On the Fréchet means in simplex shape spaces. Kume, Alfred. January 2001 (has links)
No description available.
|
42 |
Complex ray theory. Lawry, James Milson Hassall. January 1996 (has links)
No description available.
|
43 |
Algebraic Integers. Black, Alvin M. 08 1900 (has links)
The primary purpose of this thesis is to give a substantial generalization of the set of integers Z, with particular emphasis on number-theoretic questions such as unique factorization. The thesis originated in the study of a special case of generalized integers called the Gaussian integers, namely the set of all complex numbers of the form n + mi, for m, n in Z. The main generalization involves what are called algebraic integers.
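As a brief illustration of the Gaussian integers mentioned above (not code from the thesis), the norm N(n + mi) = n² + m² is multiplicative, which is the key fact behind factorization questions in Z[i]; a minimal Python sketch:

```python
# Sketch: the norm on the Gaussian integers Z[i], N(a + bi) = a^2 + b^2.
# The norm is multiplicative, which underpins factorization arguments in Z[i].

def norm(z: complex) -> int:
    """Norm of a Gaussian integer a + bi (real and imaginary parts assumed integral)."""
    a, b = int(z.real), int(z.imag)
    return a * a + b * b

# Multiplicativity: N(zw) = N(z) * N(w)
z, w = complex(2, 3), complex(1, -1)
assert norm(z * w) == norm(z) * norm(w)

# 5 is not prime in Z[i]: 5 = (2 + i)(2 - i)
assert complex(2, 1) * complex(2, -1) == complex(5, 0)
```

For example, an ordinary prime p with p = a² + b² factors in Z[i] as (a + bi)(a - bi), which is why unique factorization must be re-examined in this larger ring.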
|
44 |
Effective implementation of Gaussian process regression for machine learning. Davies, Alexander James. January 2015 (has links)
No description available.
|
45 |
Application and computation of likelihood methods for regression with measurement error. Higdon, Roger. 23 September 1998 (has links)
This thesis advocates the use of maximum likelihood analysis for generalized regression models with measurement error in a single explanatory variable. It does this first by presenting a computational algorithm, and the numerical details for carrying it out, on a wide variety of models. The computational methods are based on the EM algorithm in conjunction with Gauss-Hermite quadrature to approximate the integrals in the E-step. Second, the thesis demonstrates the relative superiority of likelihood-ratio tests and confidence intervals over those based on asymptotic normality of estimates and standard errors, and shows that likelihood methods may be more robust in these situations than previously thought. The ability to carry out likelihood analysis under a wide range of distributional assumptions, together with the advantages of likelihood-ratio inference and the encouraging robustness results, makes likelihood analysis a practical option worth considering in regression problems with explanatory-variable measurement error. / Graduation date: 1999
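The Gauss-Hermite step described in the abstract can be sketched generically: an expectation under a normal distribution is approximated by a weighted sum at transformed quadrature nodes. This is an illustration of the technique, not Higdon's implementation; the function name and node count are assumptions.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def gh_expectation(g, mu, sigma, n=20):
    """Approximate E[g(X)] for X ~ N(mu, sigma^2) with n-point Gauss-Hermite quadrature."""
    nodes, weights = hermgauss(n)            # nodes/weights for weight function exp(-x^2)
    x = mu + np.sqrt(2.0) * sigma * nodes    # change of variables to N(mu, sigma^2)
    return float(np.sum(weights * g(x)) / np.sqrt(np.pi))

# Sanity checks against the known moments of N(1, 2^2)
assert abs(gh_expectation(lambda x: x, 1.0, 2.0) - 1.0) < 1e-10      # E[X] = mu
assert abs(gh_expectation(lambda x: x**2, 1.0, 2.0) - 5.0) < 1e-10   # E[X^2] = mu^2 + sigma^2
```

In an EM setting the same rule would approximate E-step integrals over the unobserved true covariate, with `g` the complete-data log-likelihood contribution.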
|
46 |
Effects of transcription errors on supervised learning in speech recognition. Sundaram, Ramasubramanian H. January 2003 (has links)
Thesis (M.S.)--Mississippi State University. Department of Electrical and Computer Engineering. / Title from title screen. Includes bibliographical references.
|
47 |
Modelling the Concentration Distribution of Non-Buoyant Aerosols Released from Transient Point Sources into the Atmosphere. Cao, Xiaoying. 23 October 2007 (has links)
Neural network models were developed to model the short-term concentration distribution of aerosols released from point sources. These models were based on data from a wide range of field experiments (November 2002; March, May and August 2003). The study focused on relative dispersion from the puff centroid. The influence of puff/cloud meandering and large-scale gusts was not considered; the modelling was limited to studying the dispersion caused by small-scale turbulence. The data collected were based on short-range, short-time dispersion, usually shorter than 150 s. The ANN (Artificial Neural Network) models explicitly considered a number of meteorological and turbulence parameters, as opposed to Gaussian models, which use a single fitting parameter, the dispersion coefficient. The developed ANN models were compared with predictions generated by COMBIC (Combined Obscuration Model for Battlefield Induced Contaminants), a sophisticated model based on Gaussian distributions, and by a traditional Gaussian puff model using Slade's dispersion coefficients. Neural network predictions were found to agree better with concentration measurements than either of the two Gaussian puff models. All models underestimate the maximum concentration, but ANN predictions are much closer to observations. Simulations of concentration distributions under different stability conditions were also checked using the developed ANN model; these showed that, over short times, Gaussian distributions are a good fit for puff dispersion in the downwind, crosswind and vertical directions.
For Gaussian puff models, the key issue is to determine appropriate dispersion coefficients (standard deviations). ANN models for puff dispersion coefficients were trained, and their average predictions were compared with the measurements. Very good agreement was observed, with a high correlation coefficient (>0.99). The ANN models for dispersion coefficients were used to analyze which input variables were most significant for puff expansion. Dispersion time, particle position relative to the centroid, turbulent kinetic energy and insolation showed the most significant influence on puff dispersion. The Gaussian puff model with ANN-derived dispersion coefficients was compared with COMBIC and with a Gaussian puff model using Slade's dispersion coefficients. Generally speaking, predictions from the Gaussian puff model with ANN-derived dispersion coefficients agreed better with concentration measurements than the other two Gaussian puff models, giving a much higher fraction of predictions within a factor of two of the measurements and lower normalised mean square errors. / Thesis (Ph.D, Chemical Engineering) -- Queen's University, 2007-10-17 12:13:42.923 / NSERC, DGNS
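For context, the Gaussian puff form that the thesis's ANN models are compared against can be sketched as follows. This is the generic textbook puff formula relative to the centroid, not COMBIC or the thesis's models; the function and symbol names are assumptions.

```python
import math

def gaussian_puff(x, y, z, q, sx, sy, sz):
    """Concentration at (x, y, z) relative to the puff centroid, for total
    released mass q and dispersion coefficients sx, sy, sz (standard
    deviations in the downwind, crosswind and vertical directions)."""
    coeff = q / ((2.0 * math.pi) ** 1.5 * sx * sy * sz)
    return coeff * math.exp(-0.5 * ((x / sx) ** 2 + (y / sy) ** 2 + (z / sz) ** 2))

# The concentration peaks at the centroid and falls off as a Gaussian;
# the three dispersion coefficients are the single-parameter-per-axis
# fit that the ANN models replace with multi-variable predictions.
assert gaussian_puff(0, 0, 0, 1.0, 1.0, 1.0, 1.0) > gaussian_puff(1, 0, 0, 1.0, 1.0, 1.0, 1.0)
```

Schemes such as Slade's tables make sx, sy, sz functions of travel time and stability class; the thesis's ANN approach instead predicts them from meteorological and turbulence inputs.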
|
48 |
Deblurring Gaussian blur: continuous and discrete approaches. Kimia, Behjoo. January 1986 (has links)
No description available.
|
49 |
Indirect adaptive control with quadratic cost functions. Salcudean, Septimiu. January 1981 (has links)
No description available.
|
50 |
Predictive classification using mixtures of normal distributions. Salazar, Rafael Perera. January 1998 (has links)
Classification using mixture distributions to model each class has not received much attention in the literature. The most important attempts use normal distributions as components in these mixtures. Recently developed methods have allowed the use of such models as a flexible approach to density estimation. Most of the methods developed so far use plug-in estimates for the parameters and assume that the number of components in the mixture is known. We obtain a predictive classifier for the classes by using Markov chain Monte Carlo techniques, which allow us to obtain a sampling chain for the parameters. This fully Bayesian approach to classification has the advantage that the number of components for each class is treated as another variable parameter and integrated out of the classification. To achieve this we use a birth-and-death/Gibbs sampler algorithm developed by Stephens (1997). We use five datasets: two simulated ones to test the methods on a single class, and three real datasets to test the methods for classification. We compare different models to determine which gives better flexibility in the modelling and an overall better classification. We consider different types of priors for the means and dispersion matrices of the components: joint conjugate priors and independent conjugate priors. We use a model with a common dispersion matrix for all the components, and another with a reparametrisation of these dispersion matrices into size, shape and orientation (Banfield and Raftery (1993)); there we allow the sizes to differ while keeping a common shape and orientation for the dispersion matrices of the components in a class.
We found that modelling with independent conjugate priors for the means and dispersion matrices, while allowing the sizes of the dispersions to vary, gave the best results for classification purposes, as it allowed great flexibility and separation between the components of the classes.
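The basic idea of class-conditional mixture classification can be sketched with plug-in parameters; this illustrates the classification rule only, not the thesis's fully Bayesian treatment with variable component counts (all names and the toy parameters below are assumptions).

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def mixture_density(x, weights, mus, sigmas):
    """Density of a univariate normal mixture at x."""
    return sum(w * normal_pdf(x, m, s) for w, m, s in zip(weights, mus, sigmas))

def classify(x, class_priors, class_mixtures):
    """Assign x to the class maximising the posterior p(c) * p(x | c)."""
    scores = {c: class_priors[c] * mixture_density(x, *mix)
              for c, mix in class_mixtures.items()}
    return max(scores, key=scores.get)

# Two toy classes: "a" is bimodal, "b" is a single broad component.
mixtures = {
    "a": ([0.5, 0.5], [-2.0, 2.0], [0.5, 0.5]),
    "b": ([1.0], [0.0], [1.0]),
}
priors = {"a": 0.5, "b": 0.5}
assert classify(2.0, priors, mixtures) == "a"
assert classify(0.0, priors, mixtures) == "b"
```

In the fully Bayesian version described in the abstract, the plug-in density p(x | c) is replaced by an average of mixture densities over the MCMC draws of the parameters and the component count.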
|