11

Low regularity solutions of nonlinear wave equations

Gibbeson, Dominic January 2004 (has links)
We investigate solutions of the coupled Dirac-Klein-Gordon equations in one and three space dimensions. Through analysis of the Fourier representations of the solutions to these equations, we introduce the ‘null structure’ developed by Klainerman and Machedon. This structure allows us to prove the estimates, both fixed-time and bilinear space-time, needed to show existence of solutions of these equations with initial data of lower regularity than previously required. We also study global existence for a two-dimensional wave equation with a critical non-linearity.
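
For orientation, the system and the null forms in question can be written out in standard notation (an assumption on my part, not quoted from the thesis):

```latex
% The coupled Dirac-Klein-Gordon system: a spinor field \psi of mass M
% coupled to a real scalar field \phi of mass m.
\[
  -i\gamma^{\mu}\partial_{\mu}\psi + M\psi = \phi\,\psi, \qquad
  \Box\phi + m^{2}\phi = \bar{\psi}\psi .
\]
% The Klainerman-Machedon null forms: bilinear expressions whose internal
% cancellation is what yields the improved bilinear space-time estimates.
\[
  Q_{0}(u,v) = \partial_{t}u\,\partial_{t}v - \nabla u\cdot\nabla v, \qquad
  Q_{\alpha\beta}(u,v) = \partial_{\alpha}u\,\partial_{\beta}v - \partial_{\beta}u\,\partial_{\alpha}v .
\]
```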
12

Spectral analysis of spatial processes

Mugglestone, Moira A. January 1990 (has links)
This thesis is concerned with the development of two-dimensional spectral analysis as a technique for investigating spatial pattern, and the stochastic processes which generate pattern. The technique is discussed for two types of data: first, quantitative measurements associated with a rectangular grid, or lattice; secondly, spatial point patterns, analysed using the coordinates which describe the locations of events. Spectral analysis of lattice data is applied to two examples of remotely sensed digital imagery. The first example consists of digitised aerial photographs of glaciated terrain in Canada. Spectral analysis is used to detect geological lineations which are visible in the photographs, and to study the structure of the land surface beneath the lineations. The second example is meteorological satellite imagery. Spectral analysis is used to develop a system for discrimination between different cloud types. Point spectral analysis is used as the basis of formal tests for randomness, against alternatives such as clustering or inhibition. Spectral theory for univariate spatial point patterns is extended to cross-spectral analysis of bivariate point patterns. In particular, we show how cross-spectral functions indicate the type of interaction between the events of two patterns. A test for independent components is proposed, and its application is demonstrated using a variety of real and artificial patterns. A further extension, to bispectral analysis of third-order properties of spatial point patterns, is also discussed. This type of analysis is used to distinguish between processes which have the same first- and second-order properties, but different third-order properties. Finally, we show how Greig-Smith analysis of quadrat count data can be interpreted as a type of two-dimensional spectral analysis based on a set of orthogonal square waves known as Walsh functions. This representation indicates why the Greig-Smith method is entirely dependent on the starting point of the grid.
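
As an illustration of the lattice-data side of this programme, the sketch below (not the thesis code; it assumes only numpy and invented test data) computes the two-dimensional periodogram of a gridded image, the basic spectral tool for detecting periodic features such as lineations:

```python
# A minimal sketch: the two-dimensional periodogram of lattice data.
import numpy as np

def periodogram_2d(z):
    """Two-dimensional periodogram of a real-valued lattice of measurements."""
    z = z - z.mean()                       # remove the mean (zero-frequency) term
    n_rows, n_cols = z.shape
    f = np.fft.fft2(z)
    return (np.abs(f) ** 2) / (n_rows * n_cols)

# Hypothetical test image: a sinusoidal ridge pattern plus noise.
rng = np.random.default_rng(0)
x = np.tile(np.arange(64)[:, None], (1, 64))   # row index at every pixel
image = np.sin(2 * np.pi * 8 * x / 64) + 0.5 * rng.standard_normal((64, 64))

spec = periodogram_2d(image)
peak = np.unravel_index(np.argmax(spec), spec.shape)
print(peak)   # expect (8, 0): energy concentrated at the ridge frequency
```

Peaks in the periodogram locate dominant spatial frequencies and orientations; for a pattern with no periodic structure the spectrum is roughly flat.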
13

REML and the analysis of series of variety trials

Nabugoomu, Fabian January 1994 (has links)
In the UK, official series of trials are grown annually at several centres with the objective of predicting future variety performance under the growing conditions sampled by the trials. For this purpose, the centres are chosen to be representative of the growing conditions in the region to which results will be applied. Analysis involves combining trial results over centres and years. The analysis for individual years is also important, as it predicts performance under the conditions of a particular year and is required for monitoring the trials. Varieties x environments tables are inevitably incomplete, and the use of interactions as error makes the REML algorithm suitable for analysis. The models for analysis are determined solely by the objectives of the analysis and the data structure. To predict variety performance for a range of conditions sampled by the trials, only variety effects should contribute to the systematic part of the model; all other effects and interactions are error. In this thesis we use REML to analyse the varieties x centres x years table, the varieties x years/centres table and the varieties x regions/centres x years table. Simple methods based on least-squares analysis of two-way tables have been used to provide a combined analysis. We show that these methods give the same means as a full analysis if the within-years tables are complete. Moreover, if centres are nested within years, use of REML in a two-stage analysis also gives correct standard errors. If some or all within-years tables are incomplete, the simple methods can be inefficient.
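
The thesis itself uses REML; purely as a hedged illustration of the underlying problem of combining an incomplete two-way table, the sketch below fits the additive fixed-effects model to invented varieties x years data by ordinary least squares (numpy only; this is not the REML analysis):

```python
# A minimal sketch: least-squares variety means from an incomplete
# varieties x years table, under the model  yield = variety + year + error.
import numpy as np

# Hypothetical (variety, year, yield) records; variety 2 is missing in
# year 0, so the two-way table is incomplete.
records = [(0, 0, 5.1), (1, 0, 6.0),
           (0, 1, 4.7), (1, 1, 5.6), (2, 1, 6.4),
           (0, 2, 5.3), (1, 2, 6.2), (2, 2, 7.0)]

n_var, n_year = 3, 3
X = np.zeros((len(records), n_var + n_year - 1))   # year 0 is the baseline
y = np.zeros(len(records))
for i, (v, t, obs) in enumerate(records):
    X[i, v] = 1.0                   # variety dummy variables
    if t > 0:
        X[i, n_var + t - 1] = 1.0   # year dummy variables
    y[i] = obs

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("adjusted variety means (year-0 scale):", np.round(beta[:n_var], 2))
```

Because the design is connected, the mean for variety 2 is still estimable in year-0 terms even though it was never grown in that year; the REML approach of the thesis goes further by treating year and interaction effects as error.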
14

Optimum quadrature formulae and natural splines

Kershaw, D. January 1972 (has links)
No description available.
15

Some geometric approaches to parameter estimation

Hitchcock, David January 1992 (has links)
The major part of this thesis is concerned with some geometric aspects of a parametric statistical problem. In chapter 1 we show how to assign structure to the parameter space by turning it into a Riemannian manifold. This is achieved by obtaining a metric from the model and the observations in a natural way. We also show how the standard idea of interest and nuisance parameters fits into this context. In chapter 2 the gradient log-likelihood vector field is introduced and a natural diffusion process is put on the parameter space with this vector field as drift. Some properties of this diffusion are investigated, including its relationship with the original statistical problem. A method for creating a diffusion on the interest parameter space is exhibited. Chapter 3 considers the case when the nuisance parameters are incidental, i.e. their number increases with the number of observations. In cases where an optimum method exists for such problems, the method of chapter 2 is equivalent to it for the right choice of metric. The method is also applied to more general cases and some of the problems that arise are explored. Chapter 4 consists mainly of examples, of which the mixture model is probably the most interesting. Chapter 5 is somewhat disconnected from chapters 1-4 and considers observing a (continuous-time) parameter-dependent stochastic process at discrete time points. The likelihood function crucial to the analysis of chapters 1-4 cannot be calculated explicitly, so an alternative approach based on martingale techniques is presented. Chapter 6 is self-contained and presents a theorem on the detection of a signal corrupted by white noise. The likelihood approach used does not appear in previous papers on the subject and leads to a sharper result.
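
As a toy version of the chapter 2 construction (my reading, not the thesis code), the sketch below runs an Euler-Maruyama discretisation of a diffusion on the parameter space whose drift is the gradient log-likelihood vector field, for the simple model of N(theta, 1) observations:

```python
# A minimal sketch: a diffusion on parameter space with gradient
# log-likelihood drift, simulated by the Euler-Maruyama scheme.
import numpy as np

def grad_log_lik(theta, data):
    """Gradient in theta of the log-likelihood of N(theta, 1) data."""
    return np.sum(data - theta)

def langevin_path(data, theta0=0.0, eps=1e-3, n_steps=5000, seed=1):
    rng = np.random.default_rng(seed)
    theta, path = theta0, np.empty(n_steps)
    for k in range(n_steps):
        # d(theta) = grad log L(theta) dt + dW_t, discretised with step eps
        theta += eps * grad_log_lik(theta, data) + np.sqrt(eps) * rng.standard_normal()
        path[k] = theta
    return path

data = np.random.default_rng(0).normal(2.0, 1.0, size=50)
path = langevin_path(data)
print(path[-1000:].mean())   # settles near the MLE, data.mean()
```

The stationary behaviour of such a diffusion concentrates near the maximum likelihood estimate, which is one sense in which the diffusion reflects the original statistical problem.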
16

Intertwining operators

Power, Stephen Charles January 1976 (has links)
In this thesis we study operators and spaces of operators on a Hilbert space defined by intertwining relations. The classical Hankel operators are those operators which intertwine the unilateral shift and its adjoint. We consider generalised Hankel operators relative to shifts and relative to families of shifts, and give generalisations of the classical theorems of Nehari and Hartman. In contrast to the classical approach, our proofs are mainly geometric and rest on the Sz.-Nagy-Foias lifting theorem. We show that the closed linear span of the positive Hankel operators is a proper subspace of the Hankel operators and contains all the compact Hankel operators. Part of this result is also obtained, via Douglas's localization theory for Toeplitz operators, from the fact that there exist Hankel operators which do not lie in the C*-algebra generated by the Toeplitz operators. In chapter 7 we see that certain sums of spaces of intertwining operators are closed and yield C*-algebras. In fact it is the algebraic properties of these spaces that ensure the automatic closure of their sum. As a consequence we obtain odd/even decompositions for C*-algebras and von Neumann algebras, and related double commutant theorems.
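
For orientation, the classical situation can be stated in one line (standard definitions, not quoted from the thesis): with S the unilateral shift on l^2 and (a_n) a fixed sequence, a Hankel operator H satisfies

```latex
\[
  S e_{n} = e_{n+1}, \qquad
  \langle H e_{j}, e_{k} \rangle = a_{j+k}, \qquad
  H S = S^{*} H ,
\]
```

so H carries S to its adjoint; the generalised Hankel operators of the thesis replace S by other shifts or families of shifts.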
17

Reinforcing connectionism : learning the statistical way

Dayan, Peter Samuel January 1991 (has links)
Connectionism's main contribution to cognitive science will prove to be the renewed impetus it has imparted to learning. Learning can be integrated into the existing theoretical foundations of the subject, and the combination, statistical computational theories, provides a framework within which many connectionist mathematical mechanisms naturally fit. Examples from supervised and reinforcement learning demonstrate this. Statistical computational theories already exist for certain associative matrix memories. This work is extended, allowing real-valued synapses and arbitrarily biased inputs. It shows that a covariance learning rule optimises the signal/noise ratio, a measure of the potential quality of the memory, and quantifies the performance penalty incurred by other rules. In particular, two rules that have been suggested as occurring naturally are shown to be asymptotically optimal in the limit of sparse coding. The mathematical model is justified in comparison with other treatments whose results differ. Reinforcement comparison is a way of hastening the learning of reinforcement learning systems in statistical environments. Previous theoretical analysis has not distinguished between different comparison terms, even though, empirically, a covariance rule has been shown to be better than just a constant one. The workings of reinforcement comparison are investigated by a second-order analysis of the expected statistical performance of learning, and an alternative rule is proposed and empirically justified. The existing proof that temporal difference prediction learning converges in the mean is extended from a special case involving adjacent time steps to the general case involving arbitrary ones. The interaction between the statistical mechanism of temporal difference and the linear representation is particularly stark. The performance of the method given a linearly dependent representation is also analysed.
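
As a concrete instance of the temporal difference prediction analysed in the final paragraph, here is a hedged TD(0) sketch on a small, invented Markov reward chain with a tabular (hence linearly independent) representation:

```python
# A minimal sketch: TD(0) prediction on a uniform-transition Markov chain.
import numpy as np

n_states, gamma, alpha = 5, 0.9, 0.05
P = np.full((n_states, n_states), 1.0 / n_states)   # transition matrix
r = np.arange(n_states, dtype=float)                # reward for leaving each state

v = np.zeros(n_states)                              # value estimates
rng = np.random.default_rng(0)
s = 0
for _ in range(200_000):
    s_next = rng.choice(n_states, p=P[s])
    # TD(0): move v[s] toward the bootstrapped target r[s] + gamma * v[s_next]
    v[s] += alpha * (r[s] + gamma * v[s_next] - v[s])
    s = s_next

v_exact = np.linalg.solve(np.eye(n_states) - gamma * P, r)
print(np.round(v, 2), np.round(v_exact, 2))         # the two should be close
```

Convergence in the mean to v_exact is the kind of result the extended proof addresses, with the tabular case replaced by an arbitrary linear representation and adjacent time steps by arbitrary ones.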
18

Aspects of harmonic analysis over finite fields

Stones, Brendan January 2005 (has links)
In this thesis we study three topics in harmonic analysis in the finite field setting. The methods used are purely combinatorial in nature. We prove a sharp result for the maximal operator associated to dilations of quadric surfaces. We use Christ's method ([Christ, Convolution, Curvature and Combinatorics. A case study, International Math. Research Notices 19 (1998)]), developed for L^p → L^q estimates for convolution with the twisted n-bic curve in the Euclidean setting, to give L^p → L^q estimates for convolution with k-dimensional surfaces in the finite field setting. We give a solution to the k-plane Radon transform problem and embark on a study of a generalisation of this problem.
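
To make the finite field setting concrete, the sketch below (an illustration of the objects involved, not the thesis code) computes the 1-plane, i.e. line, Radon transform over F_p^2 by summing a function over each affine line:

```python
# A minimal sketch: the line Radon transform over the finite plane F_p^2.
import numpy as np

p = 7
f = np.arange(p * p, dtype=float).reshape(p, p)    # a test function on F_p^2

radon = {}
for a in range(p):                 # lines y = a*x + b
    for b in range(p):
        radon[("slope", a, b)] = sum(f[x, (a * x + b) % p] for x in range(p))
for c in range(p):                 # vertical lines x = c
    radon[("vert", c)] = f[c, :].sum()

# Each point of F_p^2 lies on exactly p + 1 lines, so:
assert sum(radon.values()) == (p + 1) * f.sum()
print(len(radon), "lines")         # p^2 + p = 56 lines for p = 7
```

The k-plane problem asks for the sharp mapping properties of this transform, and of its higher-dimensional analogues, between functions on points and functions on k-planes.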
19

Bayesian methods for Poisson models

Streftaris, George January 2000 (has links)
To account for overdispersion in count data, that is, variation in excess of that justified by the assumed model, one may consider an additional source of variation by assuming that each observation, Y_i, i = 1, ..., m, arises from a conditionally independent Poisson distribution, given its respective mean q_i, i = 1, ..., m. We review various frequentist methods for the estimation of the Poisson parameters q_i, i = 1, ..., m, which are based on the inadmissibility of the usual unbiased maximum likelihood estimator, in terms of the associated risk, in dimensions greater than two. The so-called shrinkage estimators adjust the maximum likelihood estimates towards a fixed or data-determined point, abandoning unbiasedness in favour of lower risk. Inferences for the parameters of interest can also be drawn employing Bayesian methods. Conjugate models are often adopted to facilitate the computational procedure. In this thesis we assume a nonconjugate log-normal prior distribution, which allows for more dispersion in the Poisson means and can also accommodate a correlation structure. We derive two empirical Bayes estimators which approximate the posterior mean. The first is based on a linear shrinkage rule, while the second employs a non-iterative importance sampling technique. The frequency properties of the two estimators in terms of average risk are assessed and compared to other estimating approaches proposed in the literature. A full hierarchical Bayes analysis is also considered, assuming both informative and non-informative prior distributions at the lower stage of the hierarchy. Some analytical posterior inferences, based on simple approximations, are obtained. We then employ stochastic simulation techniques, suggesting two Markov chain Monte Carlo methods which involve the Gibbs sampler and a hybrid strategy. They rely on a log-normal/gamma mixture approximation to the full conditional posterior distribution of the parameters q_i, i = 1, ..., m. The shrinkage behaviour of the hierarchical Bayes estimator is explored, and its average risk is examined through frequency simulations. Examples and applications of the considered methods are given throughout the thesis.
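
To give the flavour of a linear shrinkage rule for Poisson means, here is a generic method-of-moments sketch (the thesis estimators are derived from the log-normal prior and will differ):

```python
# A minimal sketch: linear shrinkage of Poisson MLEs toward the grand mean.
import numpy as np

rng = np.random.default_rng(0)
true_means = rng.lognormal(mean=1.0, sigma=0.5, size=20)   # log-normal q_i
y = rng.poisson(true_means)                                # one count per mean

# For Y|q ~ Poisson(q): Var(Y) = E[q] + Var(q), so the excess of the
# sample variance over the sample mean estimates the prior variance.
ybar = y.mean()
var_q = max(y.var(ddof=1) - ybar, 0.0)
b = var_q / (var_q + ybar)            # credibility weight on the raw count
q_hat = b * y + (1 - b) * ybar        # shrink each MLE toward the grand mean

print("MLE risk       ", np.mean((y - true_means) ** 2))
print("shrinkage risk ", np.mean((q_hat - true_means) ** 2))
```

On repeated sampling the shrinkage estimator typically achieves lower average risk than the raw counts, which is the trade of unbiasedness for risk described above.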
20

The statistical mechanics of Bayesian model selection

Marion, Glenn January 1996 (has links)
In this thesis we examine the question of model selection in systems which learn input-output mappings from a data set of examples. The models we consider are inspired by feed-forward architectures used within the artificial neural networks community. The approach taken here is to elucidate the properties of various model selection criteria by calculation of relevant quantities derived in a Bayesian framework. These calculations assume that examples are generated from some underlying rule, or teacher, by randomly sampling the input space, and are performed using techniques borrowed from statistical mechanics. Such an approach allows for the comparison of different approaches on the basis of the resultant ability of the system to generalize to novel examples. Broadly stated, the model selection problem is the following: given only a limited set of examples, which model, or student, should one choose from a set of candidates in order to achieve the highest level of generalization? We consider four model selection criteria: a penalty-based method utilising a quantity derived from Bayesian statistics termed the evidence; two methods based on estimates of the generalization performance, namely the test error and the cross-validation error; and a fourth, less widely used method based on the noise sensitivity of the models. In a simple scenario we demonstrate that model selection based on the evidence is susceptible to misspecification of the student. Our analysis is conducted in the thermodynamic limit, where the system size is taken to be arbitrarily large. In particular we examine the evidence procedure's assignments of the hyperparameters which control the learning algorithm. We find that, where the student is not sufficiently powerful to fully model the teacher, this procedure, despite being sub-optimal, is remarkably robust towards such misspecifications. In a scenario in which the student is more than able to represent the teacher, we find the evidence procedure is optimal.
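
For a sense of what the evidence is in the simplest tractable case, the sketch below computes the log marginal likelihood of a Bayesian linear model with Gaussian prior and Gaussian noise (the standard closed-form expression; an illustration, not the thesis calculation):

```python
# A minimal sketch: the Bayesian evidence of y = X w + noise, with prior
# w ~ N(0, I/alpha) and noise precision beta. Maximising this quantity
# over (alpha, beta) is the "evidence procedure" for the hyperparameters.
import numpy as np

def log_evidence(X, y, alpha, beta):
    n, d = X.shape
    A = alpha * np.eye(d) + beta * X.T @ X     # posterior precision of w
    m = beta * np.linalg.solve(A, X.T @ y)     # posterior mean of w
    _, logdet_A = np.linalg.slogdet(A)
    return (0.5 * d * np.log(alpha) + 0.5 * n * np.log(beta)
            - 0.5 * beta * np.sum((y - X @ m) ** 2) - 0.5 * alpha * m @ m
            - 0.5 * logdet_A - 0.5 * n * np.log(2 * np.pi))

# Invented data: the evidence prefers the noise level that generated the data.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(50)
print(log_evidence(X, y, alpha=1.0, beta=100.0))   # well-matched beta: higher
print(log_evidence(X, y, alpha=1.0, beta=1.0))     # mis-matched beta: lower
```

A misspecified student corresponds to a design X that cannot represent the teacher's rule; the thesis examines how the evidence-maximising hyperparameters behave in that regime.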
