521 | Applications of differential geometry to statistics. Marriott, Paul. January 1990
Chapters 1 and 2 are both surveys of the current work in applying geometry to statistics. Chapter 1 is a broad outline of all the work done so far, while Chapter 2 studies, in particular, the work of Amari and that of Lauritzen. In Chapters 3 and 4 we study some open problems which have been raised by Lauritzen's work. In particular we look in detail at some of the differential geometric theory behind Lauritzen's definition of a Statistical manifold. The following chapters follow a different line of research. We look at a new non-symmetric differential geometric structure which we call a preferred point manifold. We show how this structure encompasses the work of Amari and Lauritzen, and how it points the way to many generalizations of their results. In Chapter 5 we define this new structure and compare it to Statistical manifold theory. Chapter 6 develops some examples of the new geometry in a statistical context. Chapter 7 starts the development of the pure theory of these preferred point manifolds. In Chapter 8 we outline possible paths of research in which the new geometry may be applied to statistical theory. We include, in an appendix, a copy of a joint paper which looks at some direct applications of differential geometry to a statistical problem: the behaviour of the Wald test with nonlinear restriction functions.
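For context, the standard objects behind Amari's expected geometry are the Fisher information metric and the one-parameter family of α-connections on a parametric family p(x; θ). These are textbook definitions, not the thesis's own notation; the final remark is only a rough gloss of what a preferred point structure adds.

```latex
% Textbook definitions behind Amari's expected geometry,
% with \ell = \log p(x;\theta); not the thesis's own notation.
g_{ij}(\theta) = \mathbb{E}_{\theta}\!\left[\partial_i \ell \,\partial_j \ell\right],
\qquad
\Gamma^{(\alpha)}_{ij,k}(\theta)
  = \mathbb{E}_{\theta}\!\left[\left(\partial_i \partial_j \ell
      + \tfrac{1-\alpha}{2}\,\partial_i \ell \,\partial_j \ell\right)
      \partial_k \ell\right].
```

A preferred point structure, roughly speaking, allows such expectations to be taken at a distinguished parameter value different from the point at which the geometry is evaluated, which is what breaks the symmetry.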
522 | Statistical language learning. Onnis, Luca. January 2003
Theoretical arguments based on the "poverty of the stimulus" have denied a priori the possibility that abstract linguistic representations can be learned inductively from exposure to the environment, given that the linguistic input available to the child is both underdetermined and degenerate. I reassess such learnability arguments by exploring a) the type and amount of statistical information implicitly available in the input in the form of distributional and phonological cues; b) psychologically plausible inductive mechanisms for constraining the search space; c) the nature of linguistic representations, algebraic or statistical. To do so I use three methodologies: experimental procedures, linguistic analyses based on large corpora of naturally occurring speech and text, and computational models implemented in computer simulations. In Chapters 1, 2, and 5, I argue that long-distance structural dependencies - traditionally hard to explain with simple distributional analyses based on n-gram statistics - can indeed be learned associatively provided the amount of intervening material is either highly variable or invariant (the Variability effect). In Chapter 3, I show that simple associative mechanisms instantiated in Simple Recurrent Networks can replicate the experimental findings under the same conditions of variability. Chapter 4 presents successes and limits of such results across perceptual modalities (visual vs. auditory) and perceptual presentation (temporal vs. sequential), as well as the impact of long and short training procedures. In Chapter 5, I show that generalisation to abstract categories from stimuli framed in non-adjacent dependencies is also modulated by the Variability effect. In Chapter 6, I show that the putative separation of algebraic and statistical styles of computation, based on successful speech segmentation versus unsuccessful generalisation experiments (as published in a recent Science paper), is premature and is the effect of a preference for phonological properties of the input. In Chapter 7, computer simulations of learning irregular constructions suggest that it is possible to learn from positive evidence alone, despite Gold's celebrated arguments on the unlearnability of natural languages. Evolutionary simulations in Chapter 8 show that irregularities in natural languages can emerge from full regularity and remain stable across generations of simulated agents. In Chapter 9, I conclude that the brain may be endowed with a powerful statistical device for detecting structure, generalising, segmenting speech, and recovering from overgeneralisations. The experimental and computational evidence gathered here suggests that statistical language learning is more powerful than the current literature has acknowledged.
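As a hedged illustration of the distributional statistic at issue (the tokens and the three-element frame below are hypothetical, in the style of typical non-adjacent dependency stimuli, not the thesis's materials): the dependency between the outer elements of a_X_b strings can be tracked while marginalising over the intervening element.

```python
# A minimal sketch, not the thesis's code: transitional probability
# P(b | a) between the outer tokens of three-token strings a_X_b,
# marginalising over the intervening token X. Tokens are hypothetical.
from collections import Counter

def nonadjacent_tp(strings):
    pair_counts = Counter()   # counts of outer pairs (a, b)
    first_counts = Counter()  # counts of the first token a
    for a, _x, b in strings:
        pair_counts[(a, b)] += 1
        first_counts[a] += 1
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# With a highly variable middle element, the outer dependency stays
# perfectly predictive while any adjacent a->X statistic is diluted:
corpus = [("pel", f"x{i}", "rud") for i in range(24)]
print(nonadjacent_tp(corpus))  # {('pel', 'rud'): 1.0}
```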
523 | Statistical aspects of fetal screening. Donovan, Christine M. January 1995
This thesis discusses the current screening algorithm that is used to detect fetal Down's syndrome. The algorithm combines a model for predicting age-related risks with a model for appropriately transformed serum concentrations to produce estimates of risk. A discriminant analysis is used to classify pregnancies as either unaffected or Down's syndrome. The serum concentrations vary with gestational age, and the relationship between serum concentrations and gestational age is modelled using regression. These models are discussed and alternative models for these relationships are offered. Concentration values are generally expressed in terms of multiples of the medians for unaffected pregnancies, or MoM values, which involves grouping the concentrations into weekly bins. Transformations of the MoM values are used in the model for predicting risks; the transformed values are equivalent to the residuals of the fitted regression models. This thesis models the residuals directly rather than converting the data to MoM values, an approach that avoids the need to group gestational dates into completed weeks. The performance of the algorithm is assessed in terms of its detection rates and false positive rates. These rates are prone to considerable sampling error. Simulation methods are used to calculate standard errors for reported detection rates, and the bias in the rates is investigated using bootstrapping techniques. The algorithm often fails to recognize abnormalities other than Down's syndrome and frequently associates them with low risks. A solution to this problem is offered that assigns an index of atypicality to each pregnancy, to identify those pregnancies that are atypical of unaffected pregnancies but are also unlike Down's syndrome pregnancies. Nonparametric techniques for estimating the class-conditional densities of transformed serum values are used as an alternative to the conventional parametric techniques of estimation. High-quality density estimates are illustrated, and these are used to compute nonparametric likelihood ratios that can be used in the probability model to predict risks. The effect of errors in the methods of recording gestational dates on the parameter estimates used in the discriminant analysis is also considered.
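As a rough sketch of the kind of risk calculation described, assuming Gaussian class-conditional densities for a log-transformed serum marker and an age-related prior; every numeric value below is an illustrative placeholder, not an estimate from the thesis.

```python
# A minimal sketch, assuming Gaussian class-conditional densities for a
# log-transformed marker; all parameter values are placeholders.
from scipy.stats import norm

def downs_risk(prior_odds, log_marker, mu_aff, sd_aff, mu_unaff, sd_unaff):
    """Posterior risk = age-related prior odds x likelihood ratio,
    converted back to a probability."""
    lr = norm.pdf(log_marker, mu_aff, sd_aff) / norm.pdf(log_marker, mu_unaff, sd_unaff)
    posterior_odds = prior_odds * lr
    return posterior_odds / (1 + posterior_odds)

# e.g. a 1-in-400 age-related risk and an illustrative marker value
print(downs_risk(1 / 399, 0.3, 0.4, 0.25, 0.0, 0.2))
```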
524 | Coherent radar clutter statistics. Jahangir, Mohammed. January 2000
No description available.
525 | Language identification by statistical analysis. Rau, Morton David. 09 1900
Approved for public release; distribution is unlimited. An analysis was conducted of English and Spanish text. The statistical analysis determined the independent probability of letters and the joint probability of various letter combinations for large samples of each language. Various methods were tested in an attempt to utilize these characteristics to identify the language of a short sample text. By use of the joint probability of various vowel-consonant relationships and the Kolmogorov-Smirnov goodness-of-fit test, an identification system was defined that provided a significance level of .0077 for a sample of 107 letters (approximately 21 words). Investigation also showed that the space rate, or the interword structure, in each language contains a measure of intelligence and was useful in identification.
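A hedged reconstruction of the general approach, not the thesis's procedure: build a cumulative letter-frequency profile per language and compare a short sample against each profile with a Kolmogorov-Smirnov-style distance. All frequencies below are invented for illustration.

```python
# A sketch of the general method: compare the cumulative letter
# distribution of a short sample against per-language reference profiles.
import string

def letter_cdf(freqs):
    """Cumulative distribution over a fixed a-z alphabet ordering."""
    total, cum, out = sum(freqs.values()), 0.0, []
    for ch in string.ascii_lowercase:
        cum += freqs.get(ch, 0) / total
        out.append(cum)
    return out

def ks_distance(sample_freqs, ref_freqs):
    """Kolmogorov-Smirnov-style statistic: max gap between the two CDFs."""
    return max(abs(a - b) for a, b in
               zip(letter_cdf(sample_freqs), letter_cdf(ref_freqs)))

def identify(sample_freqs, profiles):
    """Pick the reference language whose profile is closest to the sample."""
    return min(profiles, key=lambda lang: ks_distance(sample_freqs, profiles[lang]))

profiles = {"english": {"e": 13, "t": 9, "a": 8, "o": 8},
            "spanish": {"e": 14, "a": 13, "o": 9, "s": 8}}
print(identify({"a": 6, "e": 7, "o": 4, "s": 4}, profiles))  # 'spanish'
```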
526 | Active statistical process control. Ibrahim, Kamarul Asri. January 1989
Most Statistical Process Control (SPC) research has focused on the development of charting techniques for process monitoring. Unfortunately, little attention has been paid to the importance of bringing the process into control automatically via these charting techniques. This thesis shows that by drawing upon concepts from Automatic Process Control (APC), it is possible to devise schemes whereby the process is monitored and automatically controlled via SPC procedures. It is shown that Partial Correlation Analysis (PCorrA) or Principal Component Analysis (PCA) can be used to determine the variables that have to be monitored and manipulated, as well as the corresponding control laws. We call this proposed procedure Active SPC, and the capabilities of the various strategies that arise are demonstrated by application to a simulated reaction process. Reactor product concentration was controlled using different manipulated input configurations, e.g. manipulating all input variables, manipulating only two input variables, and manipulating only a single input variable. The last two schemes cover the case where all input variables can be measured on-line but not all can be manipulated on-line. Different types of control chart are also tested with the new Active SPC method: the Shewhart chart with action limits; the Shewhart chart with action and warning limits for individual observations; and the Exponentially Weighted Moving Average (EWMA) control chart. The effects of calculating control limits on-line, to accommodate possible changes in process characteristics, were also studied. The results indicate that the EWMA control chart, with limits calculated using partial correlations, showed the best promise for further development. It is also shown that this particular combination could provide better performance than the common Proportional-Integral (PI) controller when manipulations incur costs.
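As a sketch of the chart that performed best, here is the standard textbook EWMA recursion with its time-varying limits; the lam and L values are conventional defaults, not the thesis's tuning, and this is a monitoring chart only, not the full Active SPC scheme.

```python
# A minimal sketch of a textbook EWMA control chart.
import math

def ewma_chart(samples, mu, sigma, lam=0.2, L=3.0):
    """Yield (ewma, lower_limit, upper_limit, out_of_control) per sample."""
    z = mu  # start the EWMA at the in-control mean
    for t, x in enumerate(samples, start=1):
        z = lam * x + (1 - lam) * z
        half = L * sigma * math.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        yield z, mu - half, mu + half, abs(z - mu) > half

for z, lo, hi, alarm in ewma_chart([0.1, -0.2, 0.4, 1.9, 2.1], mu=0.0, sigma=0.5):
    print(f"{z:+.3f} in ({lo:+.3f}, {hi:+.3f}) alarm={alarm}")
```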
527 | The statistical lensing of QSOs. Myers, Adam David. January 2003
We use the 2dF QSO Redshift Survey to investigate whether QSOs are detectably gravitationally lensed. Lensing could magnify and distort light from QSOs, influencing QSO numbers near galaxies, which trace structure in our Universe. Following Boyle, Fong & Shanks (1988), we find a 3σ anti-correlation between QSOs and galaxy groups of strength w_gg(< 10') = -0.049. We limit absorption by dust in groups to A_B < 0.04 mag. To explain the anti-correlation by dust would need A_B ≈ 0.2 mag. We demonstrate that if the dearth of QSOs around groups is due to statistical lensing, more mass would be required in groups than Ω_m = 0.3 models suggest. We use a mock catalogue to test how many of our "2D" galaxy groups, which are detected using angular information, are associated in redshift-space. We then utilise 2dF Galaxy Redshift Survey groups, which are selected to trace dark matter haloes, to test the hypothesis that there is more mass in groups than Ω_m = 0.3 models suggest, finding we cannot discount a lensing mass of 2dFGRS groups that is consistent with ΛCDM. We find QSOs and galaxies are also anti-correlated at the 3σ level, with strength w(< 10') = -0.007, and use stars as a control sample to rule out observational systematics as a cause. By measuring QSO colours as a function of QSO-galaxy separation, we argue that obscuration by dust in galaxies could explain at most 30-40 per cent of the anti-correlation. We show that if the anti-correlation is due to lensing, galaxies would be anti-biased (b ≈ 0.05) on small scales. We discuss two surveys carried out to count faint QSOs, which newly identify 160 QSOs. We calculate that the faint-end QSO number-counts have a slope of 0.29 ± 0.03. Finally, we use our faint QSO data to estimate that ~85 (75) per cent of g < 21.15 (≥ 21.15) candidates targeted by the 2dF-SDSS survey will be QSOs.
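As a generic illustration of the simplest angular cross-correlation estimator used in such analyses, w(theta) = DD/DR - 1 over QSO-galaxy versus QSO-random pairs; this is a sketch of the technique, not the survey's pipeline, and it assumes a random catalogue laid over the same mask as the galaxies.

```python
# A generic angular cross-correlation estimator, w(theta) = DD/DR - 1.
# Positions are (ra, dec) arrays in radians; a small-angle flat-sky
# approximation is used for pair separations.
import numpy as np

def angular_sep(p1, p2):
    """Approximate pairwise angular separations between two catalogues."""
    dra = (p1[:, None, 0] - p2[None, :, 0]) * np.cos(p1[:, None, 1])
    ddec = p1[:, None, 1] - p2[None, :, 1]
    return np.hypot(dra, ddec)

def w_cross(qso, gal, rand, bins):
    """Cross-correlate QSOs with galaxies, normalising QSO-galaxy pair
    counts by QSO-random pair counts in the same angular bins."""
    dd, _ = np.histogram(angular_sep(qso, gal), bins=bins)
    dr, _ = np.histogram(angular_sep(qso, rand), bins=bins)
    return dd / (dr * len(gal) / len(rand)) - 1.0
```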
528 | The teaching and learning of statistics in psychology. O'Donohue, Michael G. January 1996
No description available.
529 | A statistical treatment of annihilation reactions. Weiss, Nathan S. January 1976
No description available.
530 | The Statistical Learning Of Musical Expectancy. Vuvan, Dominique. 07 January 2013
This project investigated the statistical learning of musical expectancy. As a secondary goal, the effects of two perceptual properties, tone-set familiarity (Western vs. Bohlen-Pierce) and textural complexity (melody vs. harmony), on the robustness of that learning process were assessed. A series of five experiments was conducted, varying in terms of these perceptual properties, the grammatical structure used to generate musical sequences, and the methods used to measure musical expectancy. Results indicated that expectancies can indeed be developed through statistical learning, particularly for materials composed from familiar tone sets. Moreover, some expectancy effects were observed in the absence of the ability to successfully discriminate between grammatical and ungrammatical items. The implications of these results for our current understanding of expectancy formation are discussed, as is the appropriateness of the behavioural methods used in this research.