511. Contributions to statistical quality control
Hapuarachchi, Karunarathnage Piyasena, January 1988
No description available.
512. Rank statistics of forecast ensembles
Siegert, Stefan, 08 March 2013
Ensembles are today routinely applied to estimate uncertainty in numerical predictions of complex systems such as the weather. Instead of initializing a single numerical forecast, using only the best guess of the present state as initial conditions, a collection (an ensemble) of forecasts whose members start from slightly different initial conditions is calculated. By varying the initial conditions within their error bars, the sensitivity of the resulting forecasts to these measurement errors can be accounted for. The ensemble approach can also be applied to estimate forecast errors that are due to insufficiently known model parameters by varying these parameters between ensemble members.
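As a toy illustration of this idea, with the Lorenz-63 system standing in for a weather model (a minimal sketch on assumed parameters; nothing here is from the thesis):

```python
import numpy as np

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8/3):
    """One forward-Euler step of the Lorenz-63 system (toy 'weather model')."""
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

rng = np.random.default_rng(0)
best_guess = np.array([1.0, 1.0, 20.0])   # analysed state (hypothetical)
K = 10                                    # ensemble size

# Ensemble: perturb the initial state within its assumed error bars.
members = best_guess + 0.05 * rng.normal(size=(K, 3))

for _ in range(1500):                     # integrate every member forward
    members = np.array([lorenz_step(s) for s in members])

print(members.std(axis=0))                # member spread estimates forecast uncertainty
```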
An important (and difficult) question in ensemble weather forecasting is how well an ensemble of forecasts reproduces the actual forecast uncertainty. A widely used criterion to assess the quality of forecast ensembles is statistical consistency, which demands that the ensemble members and the corresponding measurement (the "verification") behave like random independent draws from the same underlying probability distribution. Since this forecast distribution is generally unknown, such an analysis is nontrivial. An established criterion to assess statistical consistency of a historical archive of scalar ensembles and verifications is uniformity of the verification rank: if the verification falls between the (k-1)-st and k-th largest ensemble member, it is said to have rank k. Statistical consistency implies that the average frequency of occurrence should be the same for each rank.
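To make the rank-histogram construction concrete, here is a minimal sketch on synthetic Gaussian data (one common rank convention; not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
K, T = 10, 5000                       # ensemble size, number of forecast instances

# Statistically consistent toy archive: members and verification are
# independent draws from the same forecast distribution.
ens = rng.normal(size=(T, K))
obs = rng.normal(size=T)

# Verification rank: one plus the number of members below the verification,
# giving a value between 1 and K+1.
ranks = 1 + (ens < obs[:, None]).sum(axis=1)

freq = np.bincount(ranks, minlength=K + 2)[1:] / T
print(freq)   # approximately flat at 1/(K+1) ≈ 0.0909 for a consistent ensemble
```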
A central result of the present thesis is that, in a statistically consistent K-member ensemble, the (K+1)-dimensional vector of rank probabilities is a random vector that is uniformly distributed on the K-dimensional probability simplex. This behavior is universal for all possible forecast distributions. It thus provides a means of describing forecast ensembles nonparametrically, without making any assumptions about the statistical behavior of the ensemble data. The physical details of the forecast model are eliminated, and the notion of statistical consistency is captured in elementary terms. Two applications of this result to ensemble analysis are presented.
Ensemble stratification, the partitioning of an archive of ensemble forecasts into subsets using a discriminating criterion, is considered in the light of the above result. It is shown that certain stratification criteria can make the individual subsets of ensembles appear statistically inconsistent, even though the unstratified ensemble is statistically consistent. This effect is explained by considering statistical fluctuations of rank probabilities. A new hypothesis test is developed to assess statistical consistency of stratified ensembles while taking these potentially misleading stratification effects into account.
The distribution of rank probabilities is further used to study the predictability of outliers, defined as events where the verification falls outside the range of the ensemble, being either smaller than the smallest or larger than the largest ensemble member. It is shown that these events can be predicted better than by a naive benchmark, which unconditionally issues the average outlier frequency of 2/(K+1) as its forecast. Predictability of outlier events, quantified in terms of probabilistic skill scores and receiver operating characteristics (ROC), is shown to be universal in a hypothetical forecast ensemble. An empirical study shows that outliers are likewise predictable in an operational temperature forecast ensemble, and that the corresponding predictability measures agree with the analytically calculated ones.
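The unconditional base rate 2/(K+1) follows from rank uniformity, since an outlier occupies one of the two extreme ranks. A self-contained check on a consistent toy archive (a sketch, not the thesis's data):

```python
import numpy as np

rng = np.random.default_rng(1)
K, T = 10, 50_000
ens = rng.normal(size=(T, K))
obs = rng.normal(size=T)

# Outlier: verification outside the ensemble range (rank 1 or rank K+1).
outlier = (obs < ens.min(axis=1)) | (obs > ens.max(axis=1))
print(outlier.mean(), 2 / (K + 1))    # both near 0.1818 for K = 10
```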
513. Statistical inference in radio astronomy
Junklewitz, Henrik, 10 April 2014
This thesis unifies several studies, all dedicated to statistical data analysis in radio astronomy and radio astrophysics.

Radio astronomy, like astronomy as a whole, has undergone a remarkable development over the past twenty years through the introduction of new instruments and technologies. New telescopes such as the upgraded VLA, LOFAR, and the SKA with its pathfinder missions offer unprecedented sensitivities, previously uncharted frequency domains and unmatched survey capabilities. Many of these have the potential to significantly advance radio astrophysics and cosmology on all scales, from solar and stellar physics, Galactic astrophysics and cosmic magnetic fields, to galaxy-cluster astrophysics and signals from the epoch of reionization. In step with these instrumental advances, radio data analysis, calibration and imaging techniques have entered a similar phase of development, pushing the boundaries and adapting the field to the new instruments and scientific opportunities. This thesis contributes to these developments in two specific subjects: radio interferometric imaging and cosmic magnetic field statistics.

Throughout this study, different data analysis techniques are presented and employed in various settings, but all can be summarized under the broad term of statistical inference. This subject encompasses a huge variety of techniques developed to solve problems in which deductions must be made from incomplete knowledge, data or measurements. The study focuses especially on Bayesian inference methods, which use a subjective definition of probability that allows statistical knowledge to be expressed prior to an actual measurement.
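For reference, the relation underlying all such methods is Bayes' theorem (standard textbook material, not a result of this thesis), which updates prior knowledge P(s) about a signal s with the likelihood of the measured data d:

```latex
P(s \mid d) = \frac{P(d \mid s)\, P(s)}{P(d)}
```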
The thesis contains two different sets of applications for such techniques. The first comprises situations in which a complicated and generally ill-posed measurement problem can be approached by assuming a statistical signal prior in order to infer the desired variable. Such problems typically arise when the measurement device takes less data than is needed to constrain all degrees of freedom of the problem. The principal case investigated in this thesis is the measurement problem of a radio interferometer, which takes incomplete samples of the Fourier-transformed intensity of the radio emission on the sky, so that it is impossible to recover the signal exactly. The new imaging algorithm RESOLVE, optimized for extended radio sources, is presented, and a first showcase demonstrates its performance on real data. Further, a new Bayesian approach to multi-frequency radio interferometric imaging is presented and integrated into RESOLVE.
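To make the measurement problem concrete, here is a minimal sketch (synthetic sky and a random sampling mask; not RESOLVE itself) of how incomplete Fourier sampling degrades a naive reconstruction:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
sky = np.zeros((n, n))
sky[40:60, 50:90] = 1.0                  # toy extended source

vis = np.fft.fft2(sky)                   # ideal visibilities: Fourier modes of the sky
mask = rng.random((n, n)) < 0.1          # interferometer measures only ~10% of the modes

dirty = np.fft.ifft2(np.where(mask, vis, 0)).real   # naive inverse transform

# The unmeasured 90% of modes leave `dirty` riddled with sidelobe artifacts;
# exact recovery of `sky` is impossible, so Bayesian imagers such as RESOLVE
# supply the missing information through a statistical signal prior.
print(np.abs(dirty - sky).max())
```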
The second field of application is astrophysical problems in which the inherently stochastic nature of a physical process demands a description where properties of physical quantities can only be estimated statistically. Astrophysical plasmas, for instance, are very often in a turbulent state and thus governed by statistical hydrodynamical laws. Two studies are presented that show how properties of turbulent plasma magnetic fields can be inferred from radio observations.
514. Bayesian perspectives on statistical modelling
Polson, Nicholas G., January 1988
This thesis explores the representation of probability measures in a coherent Bayesian modelling framework, together with the ensuing characterisation properties of posterior functionals. First, a decision theoretic approach is adopted to provide a unified modelling criterion applicable to assessing prior-likelihood combinations, design matrices, model dimensionality and choice of sample size. The utility structure and associated Bayes risk induces a distance measure, introducing concepts from differential geometry to aid in the interpretation of modelling characteristics. Secondly, analytical and approximate computations for the implementation of the Bayesian paradigm, based on the properties of the class of transformation models, are discussed. Finally, relationships between distance measures (in the form of either a derivative of a Bayes mapping or an induced distance) are explored, with particular reference to the construction of sensitivity measures.
515. The statistical analysis of animal populations
Alston, Robert David, January 1996
No description available.
516. Statistical mechanics of neural networks
Whyte, William John, January 1995
We investigate five different problems in the field of the statistical mechanics of neural networks. The first three problems involve attractor neural networks that optimise particular cost functions for storage of static memories as attractors of the neural dynamics. We study the effects of replica symmetry breaking (RSB) and attempt to find algorithms that will produce the optimal network if error-free storage is impossible. For the Gardner-Derrida network we show that full RSB is necessary for an exact solution everywhere above saturation. We also show that, no matter what cost function is optimised, if the distribution of stabilities has a gap then the Parisi replica ansatz that has been made is unstable. For the noise-optimal network we find a continuous transition to replica symmetry breaking at the AT line, in line with previous studies of RSB for different networks. The change to one-step RSB (RSB1) significantly improves the agreement between "experimental" and theoretical calculations of the local stability distribution ρ(λ); the effect on observables is smaller.

We show that if the network is presented with a training set which has been generated from a set of prototypes by some noisy rule, but neither the noise level nor the prototypes are known, then the perceptron algorithm is the best initial choice to produce a network that will generalise well. If additional information is available, more sophisticated algorithms will be faster and give a smaller generalisation error.

The remaining problems deal with attractor neural networks with separable interaction matrices, which can be used (under parallel dynamics) to store sequences of patterns without the need for time delays. We look at the effects of correlations on a single-sequence network, and numerically investigate the storage capacity of a network storing an extensive number of patterns in such sequences. When correlations are implemented along with a term in the interaction matrix designed to suppress some of their effects, the competition between the two produces a rich range of behaviour. Contrary to expectations, increasing the correlations and the operating temperature proves capable of improving the sequence-processing behaviour of the network. Finally, we demonstrate that a network storing a large number of sequences of patterns using a Hebb-like rule can store approximately twice as many patterns as a network trained with the Hebb rule to store individual patterns.
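As intuition for the sequence-storage mechanism such separable matrices provide, here is a minimal sketch (arbitrary sizes and random patterns, not the model actually analysed in the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 500, 5                              # neurons, patterns in the cycle
xi = rng.choice([-1, 1], size=(P, N)).astype(float)

# Separable, Hebb-like sequence rule: J maps each pattern onto its successor,
# so synchronous (parallel) updates step through the cycle without time delays.
J = sum(np.outer(xi[(mu + 1) % P], xi[mu]) for mu in range(P)) / N

S = xi[0].copy()
for t in range(2 * P):
    m = xi @ S / N                         # overlaps with the stored patterns
    print(t, int(np.argmax(m)), round(float(m.max()), 2))
    S = np.sign(J @ S)                     # parallel dynamics
```

Because the coupling matrix maps each pattern onto its successor, a synchronous sweep advances the state one pattern per time step, with no delayed synapses required.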
517. Statistical mechanics of fluids
Severin, E. S., January 1981
The statistical mechanics of the interfacial region is studied using the Monte Carlo and molecular dynamics simulation techniques. The penetrable-sphere model of the liquid/vapour interface is simulated using the Monte Carlo method. The pressure equation of state is calculated in the one-phase region and compared to analytic virial expansions of the system. Density profiles of the gas/liquid surface in the two-phase region are calculated and compared to profiles solved in the mean-field approximation. The effect of a hard wall on the profile is investigated, and as a consequence the theory is modified to account for a hard wall. The theory agrees well with the computer result. This is a simple model for adsorption of a gas at a solid surface.

A model for methane adsorbed on graphite is proposed. A number of simplifying assumptions are made: the surface is assumed to be perfectly smooth and rigid, and quantum effects are neglected. An effective site-site pair potential for the methane-graphite interaction is adjusted to fit the rotational barriers at 0K. The isosteric enthalpy at zero coverage is predicted in the range 0K to 200K by averaging the configurational energy during a molecular dynamics simulation of one methane molecule. The surface second virial coefficients are calculated in the range 225K to 300K and agree with the experimental measurements. The effective pairwise potential predicts the height of the monolayer above the surface and the vibrational frequency against the surface. The translational and rotational behaviour of a single methane molecule is examined.

Solid √3 × √3 epitaxial methane is studied at a constant coverage of θ = 0.87 by molecular dynamics simulation. The specific heat and configurational energy are monitored. A slow phase transition occurs between 0K and 30K, and a sharp transition is observed at 90K. Calculation of the centre-centre distribution functions and order parameters indicates the first transition is due to a slow rotational phase change. At 90K some molecules evaporate from the surface and the remaining bound molecules relax into a 2-d liquid. Between 10K and 25K the adsorbed methane floats across the surface, and the question remains open whether this phenomenon is an artifact of the model system or does occur in nature.

The dynamical behaviour of adsorbed methane is compared to incoherent inelastic neutron scattering. The principal peaks in the self part of the incoherent structure factor S_s(Q, ω) should correspond to the peaks in the Fourier transforms of the velocity and angular-velocity auto-correlation functions. The peaks calculated from the Fourier transforms of the auto-correlation functions agree with all the assignments in the experiments. The reorientational motion in the monolayer is monitored, and the reorientational auto-correlation functions characterize the slow phase transition from 0K to 30K. Three methane molecules are scattered on top of the θ = 0.87 monolayer at 30K. Reorientational correlation functions are compared for the single adsorbed molecule, the monolayer, and a few particles in the bilayer. Rotation is less hindered in the monolayer than for a single adsorbed molecule, and least hindered in the second layer.

Adsorbed methane is studied at coverages of θ < 0.87 over a wide range of temperature in order to unravel various conflicting solid and liquid phases predicted by experiment. By careful monitoring of the structure via changes in the specific heat, the distribution functions and order parameters, a liquid/gas coexistence is not observed in the region 56K to 75K. This result is confirmed by calculating the self-diffusion coefficients over two isotherms, at 65K and 95K. The diffusion coefficients decrease with increasing coverage over both isotherms; if liquid and gas coexisted, the diffusion coefficient should not change with increasing coverage.

The statistical mechanical expression for the spreading pressure of an adsorbed fluid is derived and reported over a wide range of temperature and coverage. Experimental techniques are not as yet sufficiently highly developed to measure this quantity directly. An expression for the coherent neutron scattering structure factor for a model of liquid benzene adsorbed on graphite is derived. This expression is a function of the 2-dimensional centre-centre distribution function, and we solve the Ornstein-Zernike equation in the Percus-Yevick approximation to obtain the 2-d distribution functions for hard discs. Agreement with present experimental results is reasonable, but a more highly orientated substrate needs to be used in experiment before a more exact comparison can be made.
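The comparison with neutron scattering hinges on spectra of auto-correlation functions. A minimal sketch of that analysis step, run on placeholder velocity data rather than the thesis's trajectories, might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=(4096, 64, 3))   # placeholder (steps, atoms, xyz) velocities

def vacf(v):
    """Normalised velocity auto-correlation function via the Wiener-Khinchin theorem."""
    n = v.shape[0]
    f = np.fft.rfft(v, n=2 * n, axis=0)             # zero-padded FFT per atom/component
    acf = np.fft.irfft(f * f.conj(), axis=0)[:n].sum(axis=(1, 2))
    acf /= np.arange(n, 0, -1)                      # unbiased: divide by samples per lag
    return acf / acf[0]

c = vacf(v)
spectrum = np.abs(np.fft.rfft(c))    # peaks mark the vibrational frequencies
```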
518. Statistical mechanics of surfaces
Hemingway, S. J., January 1982
The equilibrium properties of a spherical drop are investigated using the penetrable-sphere model of a fluid. To estimate the surface tension, a new statistical mechanical formula, the extension of the Triezenberg-Zwanzig result for a planar surface, is derived. The density profiles for use in this formula are obtained from an integral equation expressing the constancy of the chemical potential through the interface. Numerical solutions can be obtained, and from these, numerical estimates of the surface tension. These are in good agreement with estimates from an independent thermodynamic route. These routes, as well as a further exact analytic route at zero temperature, show that the surface tension of this model increases with decreasing drop size. The planar surface of the model is also briefly investigated using a well-known integro-differential equation. Two approximations are made for the direct correlation function, one a systematic improvement on the other. They yield solutions for the density profile over a limited range of temperatures below the critical point. When the direct correlation function of a Lennard-Jones fluid is approximated, the resulting equation for the profile resists numerical solution.
519. Statistical analyses of galaxy catalogues
Shanks, Thomas, January 1979
Galaxy catalogues, complete over a wide range of limiting magnitude, are statistically analysed to test theories of the formation of galaxies and clusters of galaxies. Statistical measures used in the past to investigate the galaxy distribution in these catalogues are reviewed and the results therefrom summarised. Applying statistical techniques new to extragalactic astronomy to shallower catalogues shows that the evidence supporting the hierarchical distribution of galaxies at small scale lengths is not as strong as previously believed. The results are more consistent with a model for the galaxy distribution in which galaxies are found in clusters with power-law profiles. The implications of this result for theories of galaxy formation are discussed.

New, extensive catalogues, complete in J and R to faint limits, are obtained from machine measurements of U.K. Schmidt photographs at the South Galactic Pole. The techniques required to produce these catalogues are described. Number-magnitude counts and colour-magnitude diagrams for these samples are presented and found to be consistent with the work of other authors. These results are used to investigate the selection effects operating in the samples. Tentative evidence for galaxy luminosity evolution is discussed.

The strength of galaxy clustering in these deep catalogues is statistically measured and compared with the results from shallower surveys. From this comparison good evidence is found for the homogeneity of the galaxy distribution over the large scales 50-700 h^-1 Mpc. None of the discrepancies of previous studies are found. The possibility of testing theoretically predicted clustering growth rates with such data is discussed. Statistical analysis of galaxies in the deep samples also shows evidence for a feature in the 2-point galaxy correlation function like that found in the analysis of shallower catalogues. However, the position of the feature corresponds to a spatial separation of 3 h^-1 Mpc instead of 9 h^-1 Mpc as found locally. The reasons for the discrepancy are discussed and the implications for galaxy formation theory described.

Finally, the faint stellar catalogues produced alongside the deep galaxy catalogues are statistically analysed. Evidence is found for two distinct populations of stellar types at the South Galactic Pole. The spatial distribution of stars within these populations is investigated.
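The clustering measurements described here rest on pair counting. A minimal sketch of the natural estimator xi(r) = (N_R/N_D)^2 DD/RR - 1, run on an unclustered 2-D mock rather than the thesis's catalogues, could read:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
data = rng.uniform(0, 100, size=(2000, 2))    # mock galaxy positions
rand = rng.uniform(0, 100, size=(20000, 2))   # random comparison catalogue

edges = np.logspace(-0.5, 1.5, 15)            # separation bin edges

def shell_counts(a, b, edges):
    # Ordered pair counts per separation shell; dropping the innermost bin
    # also discards self-pairs at zero distance.
    return cKDTree(a).count_neighbors(cKDTree(b), edges, cumulative=False)[1:]

dd = shell_counts(data, data, edges)
rr = shell_counts(rand, rand, edges)
xi = (len(rand) / len(data)) ** 2 * dd / rr - 1   # ~0 for this unclustered mock
print(xi.round(3))
```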
520. Statistics as an aid to the comptroller
Leis, S. Frank, January 1959
No description available.