51 |
Duality relationships for a nonlinear version of the generalized Neyman-Pearson problem /Meeks, Howard David January 1970 (has links)
No description available.
|
52 |
The exact non-null distribution of the likelihood ratio criterion for testing sphericity in a multinormal population /Suissa, Samy January 1977 (has links)
No description available.
|
53 |
Estimability and testability in linear models /Alalouf, Serge January 1975 (has links)
No description available.
|
54 |
Database alignment: fundamental limits and multiple databases setting /K, Zeynep 13 September 2024 (has links)
In modern data analysis, privacy is a critical concern when dealing with user-related databases. Ensuring user anonymity while extracting meaningful correlations from the data poses a significant challenge, especially when side information can potentially enable de-anonymization. This dissertation explores the standard information-theoretic problems in the correlated databases model. We define a "database" as a simple probabilistic model that contains a random feature vector for each user, with user labels shuffled to ensure anonymity.
We first investigate correlation detection between two databases, formulating it as a composite binary hypothesis testing problem. Under the alternative hypothesis, an unknown permutation aligns users in the first database with those in the second, matching correlated entries; the null hypothesis assumes the databases are independent, with no such alignment. For the special case of Gaussian feature vectors, we derive upper and lower bounds on the correlation level at which detection succeeds or fails. Our results are tight up to a constant factor when the feature length exceeds the number of users. For the achievability boundary, we draw connections to the user label recovery problem, highlighting significant parallels and insights. Additionally, for the two-database model, we examine potential gaps in the statistical analysis conducted thus far for the large-number-of-users regime by drawing parallels with similar problems in the literature. Motivated by these comparisons, we propose a novel approach to the detection problem that focuses on the hidden permutation structure and the intricate dependencies it induces.
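The two-database detection setup can be illustrated with a small numerical sketch. This is a hypothetical toy construction, not the dissertation's actual test: it generates a pair of Gaussian databases whose rows are correlated up to an unknown user permutation, and uses a brute-force best-alignment statistic to separate the two hypotheses.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)

def correlated_databases(n_users, dim, rho, rng):
    # Rows of x and y are jointly Gaussian with correlation rho;
    # the second database's user labels are then shuffled.
    x = rng.standard_normal((n_users, dim))
    y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal((n_users, dim))
    return x, y[rng.permutation(n_users)]

def max_alignment_score(x, y):
    # Statistic: largest total inner product over all user alignments.
    # Brute force over permutations -- feasible only for tiny n_users.
    n = x.shape[0]
    return max(sum(x[i] @ y[p[i]] for i in range(n))
               for p in permutations(range(n)))

n_users, dim, rho = 5, 50, 0.6
x1, y1 = correlated_databases(n_users, dim, rho, rng)  # alternative: correlated
x0 = rng.standard_normal((n_users, dim))               # null: independent
y0 = rng.standard_normal((n_users, dim))

s_corr, s_indep = max_alignment_score(x1, y1), max_alignment_score(x0, y0)
print(s_corr, s_indep)
```

With feature length well above the number of users (here 50 versus 5), the aligned score under the alternative concentrates near rho times the feature length per user and dominates the null score, echoing the regime in which the dissertation's bounds are tight.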
Building on our research, we present a comprehensive model for handling multiple correlated databases. In this multiple-databases setting, we address another fundamental information-theoretic problem: user label recovery. We evaluate the performance of the typicality matching estimator in relation to the asymptotic behavior of feature length, demonstrating an impossibility result that holds up to a multiplicative constant factor. This exploration into multiple databases not only broadens the scope of our study but also underscores the complexity and richness of correlation detection in a more generalized framework.
In conclusion, we summarize the statistical gaps identified in our findings, exploring their possible origins. We also discuss the limitations of our simple probabilistic model and propose strategies to address them. Finally, we outline potential future research directions, including the information-theoretic problem of change detection, which remains an open area of significant interest.
|
55 |
A Monte Carlo Study of the Robustness and Power Associated with Selected Tests of Variance Equality when Distributions are Non-Normal and Dissimilar in Form /Hardy, James C. (James Clifford) 08 1900 (has links)
When selecting a method for testing variance equality, a researcher should select a method which is robust to distribution non-normality and dissimilarity. The method should also possess sufficient power to ascertain departures from the equal variance hypothesis. This Monte Carlo study examined the robustness and power of five tests of variance equality under specific conditions. The tests examined included one procedure proposed by O'Brien (1978), two by O'Brien (1979), and two by Conover, Johnson, and Johnson (1981). Specific conditions included assorted combinations of the following factors: k=2 and k=3 groups, normal and non-normal distributional forms, similar and dissimilar distributional forms, and equal and unequal sample sizes. Under the k=2 group condition, a total of 180 combinations were examined. A total of 54 combinations were examined under the k=3 group condition. The Type I error rates and statistical power estimates were based upon 1000 replications in each combination examined. Results of this study suggest that when sample sizes are relatively large, all five procedures are robust to distribution non-normality and dissimilarity, as well as being sufficiently powerful.
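A Monte Carlo robustness check of this kind can be sketched in a few lines. The Brown-Forsythe (median-centered Levene) statistic below is a stand-in assumption, not one of the O'Brien or Conover procedures studied in the thesis, and the permutation p-value is likewise a simplification of this sketch; the structure of the experiment (same skewed distribution in both groups, so every rejection is a Type I error) is what matters.

```python
import numpy as np

rng = np.random.default_rng(42)

def brown_forsythe_stat(g1, g2):
    # One-way ANOVA F statistic on absolute deviations from each
    # group's median: a robust test of variance equality.
    z1 = np.abs(g1 - np.median(g1))
    z2 = np.abs(g2 - np.median(g2))
    n1, n2 = len(z1), len(z2)
    grand = np.concatenate([z1, z2]).mean()
    between = n1 * (z1.mean() - grand) ** 2 + n2 * (z2.mean() - grand) ** 2
    within = ((z1 - z1.mean()) ** 2).sum() + ((z2 - z2.mean()) ** 2).sum()
    return (between / 1) / (within / (n1 + n2 - 2))

def perm_pvalue(g1, g2, n_perm=200):
    # Permutation p-value: reshuffle group labels under the null.
    obs = brown_forsythe_stat(g1, g2)
    pooled = np.concatenate([g1, g2])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        count += brown_forsythe_stat(pooled[:len(g1)], pooled[len(g1):]) >= obs
    return (count + 1) / (n_perm + 1)

def type1_error_rate(n_reps=200, n=25, alpha=0.05):
    # Both groups come from the same skewed (chi-square) distribution,
    # so any rejection at level alpha is a Type I error.
    rejections = sum(
        perm_pvalue(rng.chisquare(3, n), rng.chisquare(3, n)) < alpha
        for _ in range(n_reps)
    )
    return rejections / n_reps

rate = type1_error_rate()
print(f"Empirical Type I error rate: {rate:.3f}")
```

A robust procedure keeps this empirical rate near the nominal 0.05 despite the skewed parent distribution; estimated power would be obtained the same way after giving the two groups unequal variances.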
|
56 |
Splitting Frames Based on Hypothesis Testing for Patient Motion Compensation in SPECT /MA, LINNA 30 August 2006 (has links)
"Patient motion is a significant cause of artifacts in SPECT imaging. It is important to be able to detect when a patient undergoing SPECT imaging is stationary, and when significant motion has occurred, in order to selectively apply motion compensation. In our system, optical cameras observe reflective markers on the patient. Subsequent image processing determines the marker positions relative to the SPECT system, from which patient motion is calculated. We use this information to decide how to aggregate detected gamma rays (events) into projection images (frames) for tomographic reconstruction. For the most part, patients are stationary, and all events acquired at a single detector angle are treated as a single frame. When a patient moves, it becomes necessary to split a frame into subframes during each of which the patient is stationary. This thesis presents a method for splitting frames based on hypothesis testing. Two competing hypotheses and a probability model are designed, and the decision to split a frame is based on a Bayesian recursive estimation of the likelihood function. The estimation procedure lends itself to an efficient iterative implementation. We show that the frame-splitting algorithm performs well at a representative SNR, and we present several simulated motion cases to verify its performance. This work is expected to improve the accuracy of motion compensation in clinical diagnoses."
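The recursive hypothesis-testing idea can be sketched with a toy sequential log-likelihood ratio on one marker coordinate. This is a simplified Gaussian stand-in, not the thesis's actual probability model: H0 assumes the marker stays at a known rest position, H1 assumes it has shifted by a known offset, and the LLR updates one observation at a time.

```python
import numpy as np

rng = np.random.default_rng(7)

def sequential_llr(positions, mu0=0.0, sigma=0.5, shift=3.0):
    # Running log-likelihood ratio for a marker coordinate stream:
    # H0 "stationary at rest position mu0" vs. H1 "shifted by `shift`",
    # both with Gaussian measurement noise of scale sigma.
    # cumsum gives the recursive (observation-by-observation) update.
    return np.cumsum(((positions - mu0) ** 2
                      - (positions - mu0 - shift) ** 2) / (2 * sigma ** 2))

still = rng.normal(0.0, 0.5, 50)                    # patient never moves
moved = np.concatenate([rng.normal(0.0, 0.5, 25),   # moves halfway through
                        rng.normal(3.0, 0.5, 25)])

print(sequential_llr(still)[-1], sequential_llr(moved)[-1])
```

In a frame-splitting rule of this flavor, the frame would be split into subframes at the time index where the running LLR first crosses a positive decision threshold.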
|
57 |
Classification of image pixels based on minimum distance and hypothesis testing /Ghimire, Santosh January 1900 (has links)
Master of Science / Department of Statistics / Haiyan Wang / We introduce a new classification method that is applicable to classifying image pixels. This work was motivated by the test-based classification (TBC) introduced by Liao and Akritas (2007). We found that direct application of TBC to image pixel classification can lead to a high misclassification rate. We propose a method that combines minimum distance and evidence from hypothesis testing to classify image pixels. The method is implemented in the R programming language. Our method eliminates the drawback of Liao and Akritas (2007). Extensive experiments show that our modified method works better for the classification of image pixels than some standard classification methods, namely Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Classification Tree (CT), Polyclass classification, and TBC. We demonstrate that our method works well for both grayscale and color images.
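The minimum-distance component of such a method can be sketched as a nearest-centroid classifier on synthetic RGB pixels (a sketch only: the hypothesis-testing evidence the thesis combines with it, and its R implementation, are omitted here).

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_centroids(pixels, labels):
    # Mean feature vector (e.g. RGB triple) per class.
    classes = np.unique(labels)
    return classes, np.array([pixels[labels == c].mean(axis=0)
                              for c in classes])

def min_distance_classify(pixels, classes, centroids):
    # Assign each pixel to the class whose centroid is nearest
    # in Euclidean distance in feature space.
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# Synthetic two-class "image": dark vs. bright RGB pixels.
train = np.vstack([rng.normal(50, 10, (200, 3)),
                   rng.normal(200, 10, (200, 3))])
labels = np.array([0] * 200 + [1] * 200)

classes, centroids = fit_centroids(train, labels)
test = np.vstack([rng.normal(50, 10, (50, 3)),
                  rng.normal(200, 10, (50, 3))])
pred = min_distance_classify(test, classes, centroids)
acc = (pred == np.array([0] * 50 + [1] * 50)).mean()
print(acc)
```

On such well-separated synthetic classes the minimum-distance rule alone is enough; the thesis's contribution lies in supplementing it with test-based evidence when classes overlap.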
|
58 |
Robustness of Parametric and Nonparametric Tests When Distances between Points Change on an Ordinal Measurement Scale /Chen, Andrew H. (Andrew Hwa-Fen) 08 1900 (has links)
The purpose of this research was to evaluate the effect on parametric and nonparametric tests of ordinal data when the distances between points on the measurement scale change. The research examined the Type I and Type II error rates of selected parametric and nonparametric tests.
|
59 |
The design and analysis of benchmark experimentsHothorn, Torsten, Leisch, Friedrich, Zeileis, Achim, Hornik, Kurt January 2003 (has links) (PDF)
The assessment of the performance of learners by means of benchmark experiments is an established exercise. In practice, benchmark studies are a tool to compare the performance of several competing algorithms on a certain learning problem. Cross-validation or resampling techniques are commonly used to derive point estimates of the performances, which are compared to identify algorithms with good properties. For several benchmarking problems, test procedures taking the variability of those point estimates into account have been suggested. Most of the recently proposed inference procedures are based on special variance estimators for the cross-validated performance. We introduce a theoretical framework for inference problems in benchmark experiments and show that standard statistical test procedures can be used to test for differences in the performances. The theory is based on well-defined distributions of performance measures which can be compared with established tests. To demonstrate the usefulness in practice, the theoretical results are applied to benchmark studies in a supervised learning situation based on artificial and real-world data. / Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
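A paired comparison of two learners over cross-validation folds can be sketched with a standard distribution-free test. The sign-flip procedure and the per-fold accuracies below are assumptions of this sketch, not the paper's specific framework; the point is that once fold performances are treated as draws from well-defined distributions, an off-the-shelf test applies.

```python
import numpy as np

rng = np.random.default_rng(3)

def paired_sign_flip_test(perf_a, perf_b, n_flips=2000):
    # H0: equal expected performance. Randomly flipping the signs of the
    # fold-wise differences generates a reference distribution for the
    # mean difference without any special variance estimator.
    d = perf_a - perf_b
    obs = abs(d.mean())
    signs = rng.choice([-1.0, 1.0], size=(n_flips, d.size))
    null = np.abs((signs * d).mean(axis=1))
    return (1 + (null >= obs).sum()) / (1 + n_flips)

# Hypothetical per-fold accuracies of two learners over 10 CV folds.
alg_a = np.array([.81, .84, .79, .83, .82, .85, .80, .83, .84, .82])
alg_b = np.array([.78, .80, .77, .79, .78, .81, .76, .80, .79, .78])

p = paired_sign_flip_test(alg_a, alg_b)
print(f"p-value: {p:.4f}")
```

Because algorithm A beats algorithm B on every fold in this made-up example, the test rejects the hypothesis of equal performance at conventional levels.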
|
60 |
Personality and perceptions of situations from the thematic apperception test: quantifying alpha and beta press / Unknown Date (has links)
Theoretical models posit that the perception of situations consists of two components: an objective component attributable to the situation being perceived and a subjective component attributable to the person doing the perceiving (Murray, 1938; Rauthmann, 2012; Sherman, Nave & Funder, 2013; Wagerman & Funder, 2009). In this study, participants (N = 186) viewed three pictures from the Thematic Apperception Test (TAT; Murray, 1938) and rated the situations contained therein using a new measure of situations, the Riverside Situational Q-Sort (RSQ; Wagerman & Funder, 2009). The RSQ was used to calculate the overall agreement among ratings of situations and to examine the objective and subjective properties of the pictures. These results support a two-component theory of situation perception: both the objective situation and the person perceiving that situation contributed to overall perception. Further, distinctive perceptions of situations were consistent across pictures and were associated with the Big Five personality traits in a theoretically meaningful manner. For instance, individuals high in Openness indicated that these pictures contained comparatively more humor (r = .26) and intellectual stimuli (r = .20) and raised more moral or ethical issues (r = .19) than individuals low on this trait. / Includes bibliography. / Thesis (M.A.)--Florida Atlantic University, 2013.
|