531 |
The Tritone Paradox: An Experimental and Statistical Analysis / The Tritone Paradox: A Statistical Analysis / Gerhardt, Kris 04 1900 (has links)
When tones composed of six octave-related harmonics are presented in pairs, where the two tones are separated by a tritone, some subjects perceive the direction of pitch change from the first to the second tone as ascending, while others perceive it as descending. This is the basis for what is currently labeled the Tritone Paradox. The Tritone Paradox was investigated in a set of four experiments that used different experimental procedures: a 0-750 ms silent interval between tones in a trial pair, different methods of stimulus presentation (i.e., open-air or earphone), and 125-500 ms stimulus durations. Major emphasis was placed on implementing a standardized experimental procedure and a standardized method of analyzing results in a Tritone Paradox experiment. To this end, an analysis method was designed using circular statistics, yielding an objective method of analyzing and classifying subjects that makes use of all subject data. Analyses indicated that peak-pitch class was highly correlated with the angle of a mean vector (AMV) and that the depth of a profile was correlated with the length of a mean vector (LMV). The AMV and LMV may be combined to produce a single summary measure of a subject's performance. Two modes of responding employed by subjects were identified. Profiles generated using a spectral-envelope-controlled mode of responding are characterized by judgments of tones under one envelope being close to 180° out of phase with judgments made under a spectral envelope centered a half octave away from the first. Profiles generated using a pitch-class-controlled mode of responding are
characterized by judgments of tones under one spectral envelope closely resembling judgments of tones under a spectral envelope centered a tritone away from the first. Angular-Separation analysis quantifies the difference between the AMVs of two separate profiles generated by a subject. This technique is a fast, reliable method for identifying individual differences in mode of responding between subjects. Angular-Separation analyses were used to verify the presence of 'spectral-envelope-controlled' subjects, who were first described in detail by Repp (1994). These subjects appeared in significant proportions in all conditions. Based on these analyses, the traditional practice of using a single averaged profile to describe a subject is questioned. Such a profile does not adequately describe the performance of a subject using a spectral-envelope-controlled mode of responding: it masks the differences between these subjects and those using a 'pitch-class-controlled' mode of responding, the subjects typically described in the literature. Angular-Separation results across Experiments 1-4 showed similar proportions of subjects using a pitch-class-controlled or a spectral-envelope-controlled mode of responding. The largest proportion of subjects using a pitch-class-controlled mode of responding was observed in a condition that used 125-ms stimuli with no silent interval between tones in a pair, while the largest proportion using a spectral-envelope-controlled mode was observed in a condition that employed 500-ms stimuli with a 500-ms silent interval between tones in a trial pair. It therefore appears that the duration of tones and the
duration of the silent interval between tones in a trial pair influence the mode of responding adopted by a subject in a Tritone Paradox experiment. The pure-tone pre-test, described in the literature as a subject-selection tool, was also investigated; results indicate that performance on such a pre-test does not predict how consistent a subject's performance will be in a Tritone Paradox task. Although mean LMV decreased as pre-test error scores increased, no significant correlation was found between individual subjects' error scores and LMV values. The pre-test does, however, have some value in predicting some subjects' mode of responding: the vast majority of subjects with three or more errors on a pure-tone pre-test produced profiles classified as spectral-envelope controlled. These results suggest that the bulk of subjects eliminated in past Tritone Paradox experiments that employed a pure-tone pre-test may have been subjects far more likely to use a spectral-envelope-controlled mode of responding. / Thesis / Doctor of Philosophy (PhD)
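The AMV/LMV summary described in the abstract is the standard mean resultant vector of circular statistics. A minimal sketch, not taken from the thesis (the function name `mean_vector` is illustrative), of how a set of judgment angles reduces to these two measures:

```python
import math

def mean_vector(angles_deg):
    """Angle (AMV) and length (LMV) of the mean resultant vector for a
    list of angles in degrees -- the standard circular-statistics summary."""
    xs = [math.cos(math.radians(a)) for a in angles_deg]
    ys = [math.sin(math.radians(a)) for a in angles_deg]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    amv = math.degrees(math.atan2(my, mx)) % 360.0  # mean direction
    lmv = math.hypot(mx, my)  # 0 = scattered, 1 = perfectly concentrated
    return amv, lmv
```

Tightly clustered judgments (e.g. 10°, 20°, 30°) give an LMV near 1, while judgments spread evenly around the circle give an LMV near 0, which matches the abstract's use of LMV as a measure of profile depth and AMV as a measure of peak-pitch class.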
|
532 |
The effect of additional information on mineral deposit geostatistical grade estimates / Milioris, George J. (George Joseph) January 1983 (has links)
No description available.
|
533 |
The analysis of two-way cross-classified unbalanced data / Bartlett, Sheryl Anne. January 1980 (has links)
No description available.
|
534 |
Stagewise and stepwise regression techniques in meteorological forecasting / Hess, H. Allen January 1978 (has links)
No description available.
|
535 |
Empirical Bayes estimation of small area proportions / Farrell, Patrick John January 1991 (has links)
No description available.
|
536 |
A GPS-IPW Based Methodology for Forecasting Heavy Rain Events / Gorugantula, Srikanth V. L. 03 January 2003 (has links)
Mountainous western Virginia is the source of the headwater streams of the New, the Roanoke, and the James rivers. The region is prone to flash flooding, typically the result of localized precipitation. Fortunately, the region has an efficient system of instruments for real-time data gathering: IFLOWS (Integrated Flood Observing and Warning System) gages, WSR-88D Doppler radar, and a high-precision GPS (Global Positioning System) receiver. The focus of this research is to combine the measurements from these various sensors in an algorithmic framework to determine flash-flood magnitude. It has been found that the trend in the GPS signals serves as a precursor of rain events with a lead time of 30 minutes to 2 hours. The methodology proposed herein uses this lead time as the trigger to initiate alert-related calculations. It is shown here that the rain rate equals the sum of the rates of change of the total cloud water and water vapor contents and of the logarithmic profiles of the partial pressure of dry air and of temperature in an atmospheric column. The total water content is measurable as integrated precipitable water (IPW) from the GPS and vertically integrated liquid (VIL) from the radar (representing different phases of atmospheric water), and the pressure and temperature profiles are available. An example problem is presented illustrating the calculations involved. / Master of Science
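As a rough illustration of the water-budget idea only: the vapor and liquid terms can be approximated with finite differences of the IPW and VIL time series. This is not the thesis's actual algorithm (which also includes the pressure- and temperature-profile terms), and `rain_rate_estimate` is a hypothetical name:

```python
def rain_rate_estimate(ipw, vil, dt_minutes):
    """Crude column-water-budget sketch: water leaving the column as rain
    roughly balances the drop in stored water.  ipw and vil are time series
    in mm, sampled every dt_minutes; returns rain-rate estimates in mm/min.
    The profile terms of the full method are deliberately omitted."""
    rates = []
    for i in range(1, len(ipw)):
        d_ipw = (ipw[i] - ipw[i - 1]) / dt_minutes  # vapor-term rate of change
        d_vil = (vil[i] - vil[i - 1]) / dt_minutes  # liquid-term rate of change
        rates.append(max(0.0, -(d_ipw + d_vil)))    # only a net loss counts as rain
    return rates
```

For example, a column losing 1 mm of IPW and 0.5 mm of VIL over a 15-minute interval would yield an estimate of 0.1 mm/min for that interval.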
|
537 |
An application of machine learning to statistical physics: from the phases of quantum control to satisfiability problems / Day, Alexandre G.R. 27 February 2019 (has links)
This dissertation presents a study of machine learning methods with a focus on applications to statistical and condensed matter physics, in particular the problems of quantum state preparation, spin glasses, and constraint satisfiability. We start by introducing core principles of machine learning such as overfitting, the bias-variance tradeoff, and the paradigms of supervised, unsupervised, and reinforcement learning. This discussion is set in the context of recent applications of machine learning to statistical physics and condensed matter physics. We then present the problem of quantum state preparation and show how reinforcement learning, along with stochastic optimization methods, can be applied to identify and define phases of quantum control. Reminiscent of condensed matter physics, the underlying phases of quantum control are identified via a set of order parameters and further detailed in terms of their universal implications for optimal quantum control. In particular, casting the optimal quantum control problem as an optimization problem, we show that it exhibits a generic glassy phase and establish a connection with the fields of spin-glass physics and constraint satisfiability. We then demonstrate how unsupervised learning methods can be used to obtain important information about the complexity of the phases described. We end by presenting a novel clustering framework, termed HAL (hierarchical agglomerative learning), which exploits out-of-sample accuracy estimates of machine learning classifiers to perform robust clustering of high-dimensional data. We show applications of HAL to various clustering problems.
|
538 |
Inferring the time-varying transmission rate and effective reproduction number by fitting semi-mechanistic compartmental models to incidence data / Forkutza, Gregory January 2024 (has links)
This thesis presents a novel approach to ecological dynamic modeling using non-stochastic compartmental models. Estimating the transmission rate (\(\beta\)) and the effective reproduction number (\(R_t\)) is essential for understanding disease spread and guiding public health interventions. We extend this approach to infectious disease models in which the transmission rate varies dynamically due to external factors. Using Simon Wood's partially specified modeling framework, we introduce penalized smoothing to estimate time-varying latent variables within the `R` package `macpan2`. This integration provides an accessible tool for complex estimation problems. The efficacy of our approach is first validated via a simulation study and then demonstrated with real-world datasets on Scarlet Fever, COVID-19, and Measles. We infer the effective reproduction number (\(R_t\)) from the estimated \(\beta\) values, providing further insight into the dynamics of disease transmission. Model fit is compared using the Akaike Information Criterion (AIC), and we evaluate the performance of different smoothing bases derived using the `mgcv` package. Our findings indicate that this methodology can be extended to various ecological and epidemiological contexts, offering a versatile and robust approach to parameter estimation in dynamic models. / Thesis / Master of Science (MSc) / This thesis explores a new way to model how diseases spread using a deterministic mathematical framework. We focus on estimating the changing transmission rate and the effective reproduction number, key factors in understanding and controlling disease outbreaks. Our method, incorporated into the `macpan2` software, uses advanced techniques to estimate these changing rates over time. We first demonstrate the effectiveness of our approach with simulations and then apply it to real data from Scarlet Fever, COVID-19, and Measles. We also compare model performance.
Our results show that this flexible and user-friendly approach is a valuable tool for modelers working on disease dynamics.
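The quantity being inferred can be illustrated with a generic deterministic SIR sketch (a textbook model, not the `macpan2` implementation): given a time-varying transmission rate beta(t) and recovery rate gamma, the effective reproduction number is R_t = beta(t)/gamma * S(t)/N.

```python
import math

def sir_rt(beta_fn, gamma, s0, i0, n, days, dt=0.1):
    """Simulate a deterministic SIR model with time-varying transmission
    rate beta(t) using simple Euler steps, and return the effective
    reproduction number R_t = beta(t)/gamma * S(t)/N at each step."""
    s, i = float(s0), float(i0)
    rt_series = []
    for k in range(int(days / dt)):
        beta = beta_fn(k * dt)
        rt_series.append(beta / gamma * s / n)
        new_inf = beta * s * i / n * dt   # new infections this step
        new_rec = gamma * i * dt          # new recoveries this step
        s -= new_inf
        i += new_inf - new_rec
    return rt_series

# example: a transmission rate that decays as interventions take hold
rt = sir_rt(lambda t: 0.5 * math.exp(-0.05 * t),
            gamma=0.2, s0=9990, i0=10, n=10000, days=60)
```

In this sketch the decaying beta(t) drives R_t from well above 1 toward values below 1, the threshold at which an outbreak stops growing; the thesis estimates beta(t) from incidence data rather than prescribing it.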
|
539 |
Empirical Analyses of a Spatial Model of Voter Preferences / Matje, Thorsten 06 December 2016 (has links)
To properly analyze the advantages and disadvantages of voting rules, and how well the outcomes they yield reflect voters' preferences, one needs very large data sets, since paradoxes that occur very rarely may still have large impacts. Since such amounts of election data are currently unavailable, it is important to be able to use random procedures to generate data that have the same statistical characteristics as real election data. The purpose of this work is to identify a statistical characterization of voting data, to empower researchers to use random procedures to generate data that are statistically indistinguishable from real voting data. / Ph. D. / Democracies use various rules to determine the winners of elections. The plurality rule, under which each voter votes for one candidate and the candidate with the most votes wins, is one example. One can add the specification that when no candidate receives a majority of the votes there is a run-off, which sometimes changes the outcome. There are many possible voting rules; all have their benefits and limitations. Some rules can yield unsatisfying anomalies, possibly with very small probability. Since such anomalies might occur very rarely, to estimate their frequency one needs data from a substantial number of elections, more elections than are available from historical experience. Thus, to undertake research on voting rules, one needs a procedure for generating data that have the same statistical characteristics as real election data. The purpose of this work is to identify enough of the statistical properties of realistic voting data (from surveys) to permit researchers to generate an unlimited amount of simulated election data, so that they can analyze the frequency of various anomalies under different voting rules.
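For contrast with the realistic characterization the thesis seeks, the simplest random procedure is the "impartial culture" baseline, in which every strict ranking is equally likely. A sketch (illustrative only, not the thesis's spatial model) that counts how often plurality and plurality-with-runoff disagree:

```python
import random
from collections import Counter
from itertools import permutations

def election(num_voters, candidates, rng):
    """One random election under the impartial-culture assumption:
    each voter's strict ranking is drawn uniformly at random."""
    orderings = list(permutations(candidates))
    rankings = [rng.choice(orderings) for _ in range(num_voters)]
    first = Counter(r[0] for r in rankings)
    plurality = max(first, key=first.get)
    if len(first) < 2:          # degenerate case: everyone ranks the same candidate first
        return plurality, plurality
    a, b = [c for c, _ in first.most_common(2)]
    # run-off between the two candidates with the most first-place votes
    a_votes = sum(1 for r in rankings if r.index(a) < r.index(b))
    runoff = a if 2 * a_votes > num_voters else b
    return plurality, runoff

rng = random.Random(0)
disagreements = sum(p != q for p, q in
                    (election(25, "ABC", rng) for _ in range(2000)))
```

With an odd number of voters the head-to-head run-off cannot tie, and `disagreements / 2000` estimates the frequency of the run-off "anomaly" under this baseline, the kind of frequency estimate the abstract argues requires many simulated elections.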
|
540 |
EMPIRICAL LIKELIHOOD AND DIFFERENTIABLE FUNCTIONALS / Shen, Zhiyuan 01 January 2016 (has links)
Empirical likelihood (EL) is a recently developed nonparametric method of statistical inference. It has been shown by Owen (1988, 1990) and many others that the empirical likelihood ratio (ELR) method can be used to produce reliable confidence intervals or regions. Owen (1988) shows that -2 log ELR converges to a chi-square distribution with one degree of freedom under a linear statistical functional constraint in terms of distribution functions. However, generalizing Owen's result to the right-censored data setting is difficult, since no explicit maximization can be obtained under a constraint in terms of distribution functions. Pan and Zhou (2002) instead study EL with right-censored data using a linear statistical functional constraint in terms of cumulative hazard functions. In this dissertation, we extend Owen's (1988) and Pan and Zhou's (2002) results to non-linear but Hadamard-differentiable statistical functional constraints. To this end, a study of functionals differentiable with respect to hazard functions is carried out. We also generalize our results to two-sample problems. Stochastic process and martingale theories are applied to prove the theorems. Confidence intervals based on the EL method are compared with other available methods. Real data analysis and simulations are used to illustrate our proposed theorem, with an application to Gini's absolute mean difference.
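In the uncensored mean case that Owen (1988) treats, the explicit maximization the abstract mentions reduces to a one-dimensional root-find for a Lagrange multiplier. A minimal sketch of that classical computation (illustrative only, not the dissertation's hazard-based extension):

```python
import math

def neg2_log_elr(x, mu, tol=1e-12):
    """-2 log of Owen's empirical likelihood ratio for the mean:
    maximize prod(n*w_i) subject to sum(w_i) = 1, sum(w_i*(x_i - mu)) = 0.
    The optimal weights are w_i = 1/(n*(1 + lam*(x_i - mu))), where the
    multiplier lam solves sum((x_i - mu)/(1 + lam*(x_i - mu))) = 0.
    Requires mu strictly inside the range of the data."""
    z = [xi - mu for xi in x]

    def score(lam):  # monotone decreasing in lam
        return sum(zi / (1.0 + lam * zi) for zi in z)

    # lam must keep every weight positive: 1 + lam*z_i > 0 for all i
    lo = max(-1.0 / zi for zi in z if zi > 0) + 1e-10
    hi = min(-1.0 / zi for zi in z if zi < 0) - 1e-10
    while hi - lo > tol:  # bisection on the score equation
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(math.log(1.0 + lam * zi) for zi in z)
```

At mu equal to the sample mean the statistic is 0; as mu moves away it grows, and comparing it to a chi-square(1) quantile yields the EL confidence interval, the construction the dissertation generalizes to Hadamard-differentiable functionals of the hazard.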
|