1 |
Resident Student Perceptions of On-Campus Living and Study Environments at the University of Namibia and their Relation to Academic Performance. Neema, Isak. 29 April 2003 (has links)
This study measures resident student perceptions of on-campus living and study environments at the University of Namibia campus residences and relates them to student academic performance. Data were obtained from a stratified random sample of resident students, with hostels (individual dormitories) as strata. Academic performance was measured by grade point average (GPA) obtained from the university registrar; perceptions of living and study environments were obtained from a survey. Inferences were made from the sample to the population concerning student perceptions of the adequacy of the library and of campus safety, and differences in perceptions between students living in old-style and new-style hostels. To relate perceptions to performance, a model regressing GPA on the student perception variables was constructed. The principal findings of the analyses were that (1) student perceptions do not differ between old and new hostels; (2) associations exist between type of room and each of time spent in the hostel, ability to study in the room during the day, and ability to study in the room at night; between time spent in the hostel and the number of times a student changed blocks; between ability to study in the room at night and the availability of a study desk and of a study lamp; and between the perceived effectiveness of UNAM security personnel and both safety when studying in classrooms at night and the perception that campus security should remain unchanged; (3) mean GPA differs with respect to type of room, ability to study in the room during the day, time spent in the hostel, number of block changes, current year of study, time spent on study, self-catering status, sufficiency of the water supply in blocks, enrolment in the Law and B.Commerce fields of study, and receipt of financial support in the form of loans; (4) the variables found to be significant in the regression model were the Law field of study, double rooms, inability to study in the room during the day, and self-catering.
|
2 |
Exact Approaches for Bias Detection and Avoidance with Small, Sparse, or Correlated Categorical Data. Schwartz, Sarah E. 01 December 2017 (has links)
Every day, traditional statistical methods are used worldwide to study a variety of topics and provide insight into countless subjects. Each technique is based on a distinct set of assumptions that must hold for its results to be valid. Additionally, many statistical approaches rely on large-sample behavior and may collapse or degenerate in the presence of small, sparse, or correlated data. This dissertation details several advancements to detect these conditions, avoid their consequences, and analyze data in ways that yield trustworthy results.
One of the most commonly used modeling techniques for outcomes with only two possible categorical values (e.g., live/die, pass/fail, better/worse, etc.) is logistic regression. While some potential complications with this approach are widely known, many investigators are unaware that their particular data do not meet the foundational assumptions, since these are not easy to verify. We have developed a routine for determining whether a researcher should be concerned about potential bias in logistic regression results, so that they can take steps to mitigate the bias or use a different procedure altogether to model the data.
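The dissertation's own diagnostic routine is not reproduced here, but one well-known source of the bias it targets is (quasi-)complete separation, which can make maximum-likelihood estimates diverge. A minimal, hypothetical sketch of a separation screen for a binary predictor, using only the standard library (the data and the function name are illustrative, not the author's):

```python
from collections import Counter

def separation_warning(x, y):
    """Flag quasi-complete separation for a binary predictor x and a
    binary outcome y: an empty cell in the 2x2 cross-table means the
    logistic-regression maximum-likelihood estimate may not exist."""
    cells = Counter(zip(x, y))
    empty = [(a, b) for a in (0, 1) for b in (0, 1) if cells[(a, b)] == 0]
    return len(empty) > 0, empty

# Hypothetical data: every exposed subject (x=1) had the outcome (y=1)
x = [0, 0, 0, 0, 1, 1, 1]
y = [0, 0, 1, 0, 1, 1, 1]
flag, empty_cells = separation_warning(x, y)
print(flag, empty_cells)  # True [(1, 0)]
```

When such a flag is raised, penalized approaches such as Firth's bias-reduced logistic regression are a common remedy.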
Correlated data may arise from common situations such as multi-site medical studies, research on family units, or investigations of student achievement within classrooms. In these circumstances, the associations between cluster members must be accounted for in any statistical analysis testing the hypothesis of a connection between two variables in order for results to be valid.
Previously, investigators had to choose between methods intended for small or sparse data that assume independence between observations and methods that allow for correlation between observations but require large samples to be reliable. We present a new method that allows small, clustered samples to be assessed for a relationship between a two-level predictor (e.g., treatment/control) and a categorical outcome (e.g., low/medium/high).
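The dissertation's method itself is not shown here; as a loosely related illustration of how clustering can be respected in a small-sample test, the sketch below permutes a two-level treatment label at the cluster level rather than the observation level, which preserves within-cluster correlation under the null. All data and names are hypothetical:

```python
import itertools
import statistics

def cluster_permutation_test(clusters, labels):
    """Exact permutation test for a difference in cluster means:
    treatment labels are reassigned to whole clusters, never to
    individual observations within a cluster."""
    means = [statistics.mean(c) for c in clusters]
    k = sum(labels)
    observed = abs(statistics.mean([m for m, l in zip(means, labels) if l])
                   - statistics.mean([m for m, l in zip(means, labels) if not l]))
    count = total = 0
    for treated in itertools.combinations(range(len(means)), k):
        g1 = [means[i] for i in treated]
        g0 = [means[i] for i in range(len(means)) if i not in treated]
        diff = abs(statistics.mean(g1) - statistics.mean(g0))
        count += diff >= observed - 1e-12  # count splits at least as extreme
        total += 1
    return count / total  # exact p-value over all label reassignments

# Six hypothetical clusters of observations, three per treatment arm
clusters = [[3, 4], [5, 5], [4, 6], [1, 2], [2, 1], [2, 3]]
labels = [1, 1, 1, 0, 0, 0]
print(cluster_permutation_test(clusters, labels))  # 0.1
```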
|
3 |
The Box-Cox Transformation: A Review. Zeng, Neng-Fang (曾能芳). Unknown Date (has links)
The use of a transformation can often simplify the analysis of data, especially when the original observations deviate from the underlying assumptions of the linear model. The Box-Cox transformation has received more attention than others. In this dissertation, we review the theory of estimation and hypothesis testing for the transformation parameter, and the sensitivity of the linear model parameters under the Box-Cox transformation. Monte Carlo simulation is used to study the performance of the transformations. We also examine whether the Box-Cox transformation actually makes the transformed observations satisfy the assumptions of the linear model.
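For reference, the Box-Cox family is y^(λ) = (y^λ − 1)/λ for λ ≠ 0 and log y for λ = 0, with λ typically estimated by maximizing the profile log-likelihood. A minimal stdlib-only sketch for the simplest no-covariate case, using a grid search (the data are hypothetical and the grid resolution is arbitrary):

```python
import math

def boxcox(y, lam):
    """Box-Cox transform: (y^lam - 1)/lam, or log(y) when lam == 0."""
    if lam == 0:
        return [math.log(v) for v in y]
    return [(v ** lam - 1) / lam for v in y]

def profile_loglik(y, lam):
    """Profile log-likelihood for the location model (no covariates):
    -n/2 * log(sigma_hat^2(lam)) + (lam - 1) * sum(log y)."""
    z = boxcox(y, lam)
    n = len(z)
    mean = sum(z) / n
    var = sum((v - mean) ** 2 for v in z) / n
    return -n / 2 * math.log(var) + (lam - 1) * sum(math.log(v) for v in y)

# Grid search over lambda for some hypothetical positive observations
y = [1.2, 0.8, 3.5, 2.1, 5.0, 1.7, 4.2, 0.9]
grid = [i / 100 for i in range(-200, 201)]
lam_hat = max(grid, key=lambda lam: profile_loglik(y, lam))
print(lam_hat)
```

In a regression setting the same profile likelihood is used, with the residual variance from the fitted linear model in place of the sample variance.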
|
4 |
Using Three Different Categorical Data Analysis Techniques to Detect Differential Item Functioning. Stephens-Bonty, Torie Amelia. 16 May 2008 (has links)
Diversity in the population, along with the diversity of testing usage, has resulted in smaller identified groups of test takers. In addition, computer adaptive testing sometimes results in a relatively small number of items being used for a particular assessment. Statistical techniques that can effectively detect differential item functioning (DIF) when the population is small or the assessment is short are therefore needed. Identification of empirically biased items is a crucial step in creating equitable and construct-valid assessments. Parshall and Miller (1995) compared the conventional asymptotic Mantel-Haenszel (MH) with the exact test (ET) for the detection of DIF with small sample sizes. Several studies have since compared the performance of MH to logistic regression (LR) under a variety of conditions. Both Swaminathan and Rogers (1990) and Hidalgo and López-Pina (2004) demonstrated that MH and LR were comparable in their detection of items with DIF. This study follows by comparing the performance of MH, the ET, and LR when the sample size is small and the test length is short. The purpose of this Monte Carlo simulation study was to expand on the research done by Parshall and Miller (1995) by examining power, and power with effect size measures, for each of the three DIF detection procedures. The following variables were manipulated in this study: focal group sample size, percent of items with DIF, and magnitude of DIF. For each condition, a small reference group size of 200 was used, as well as a short, 10-item test. The results demonstrated that in general, LR was slightly more powerful in detecting items with DIF. In most conditions, however, power was well below the acceptable rate of 80%. As the size of the focal group and the magnitude of DIF increased, the three procedures were more likely to reach acceptable power. All three procedures also demonstrated the highest power for the most discriminating item.
Collectively, the results from this research inform the choice of DIF detection procedure when sample sizes are small.
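The Mantel-Haenszel statistic referenced above compares item performance of reference and focal groups within matched ability strata. A stdlib-only sketch of the continuity-corrected MH chi-square, with entirely hypothetical stratum counts (this is the standard formula, not the study's simulation code):

```python
def mh_chisq(strata):
    """Mantel-Haenszel chi-square with continuity correction.
    strata: list of 2x2 tables [[a, b], [c, d]] where rows are
    reference/focal group and columns are correct/incorrect."""
    sum_a = sum_e = sum_v = 0.0
    for (a, b), (c, d) in strata:
        n = a + b + c + d
        row1, row2 = a + b, c + d
        col1, col2 = a + c, b + d
        sum_a += a
        sum_e += row1 * col1 / n                      # E[a] under the null
        sum_v += row1 * row2 * col1 * col2 / (n * n * (n - 1))
    return (abs(sum_a - sum_e) - 0.5) ** 2 / sum_v

# Hypothetical: three ability strata for one item
strata = [
    [[20, 10], [15, 15]],
    [[25, 5], [20, 10]],
    [[28, 2], [22, 8]],
]
stat = mh_chisq(strata)
print(round(stat, 3), stat > 3.841)  # compare with chi-square(1) 5% critical value
```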
|
5 |
Analyzing the Combination of Polymorphisms Associating with Antidepressant Response by Exact Conditional Test. Ma, Baofu. 08 August 2005 (has links)
Genetic factors have been shown to be involved in the etiology of a poor response to antidepressant treatment of sufficient dosage and duration. Our goal was to identify the role of polymorphisms in the poor response to treatment. To this end, five functional polymorphisms in 109 patients diagnosed with unipolar major depressive disorder were analyzed. Because of the small sample size, exact conditional tests were used to analyze the contingency table. The data analysis involves: (1) an exact test for conditional independence in a high-dimensional contingency table; (2) a marginal independence test; (3) an exact test for three-way interactions. Program efficiency always limits the application of exact tests, and appropriate methods for enumerating the exact tables are the key to improving it. The algorithm for enumerating the exact tables is also introduced.
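The thesis's enumeration algorithm for high-dimensional tables is not reproduced here, but the idea can be shown in the simplest case: for a 2x2 table with fixed margins, enumerating all admissible tables and summing hypergeometric probabilities yields Fisher's exact p-value. A stdlib-only sketch with hypothetical counts:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test by enumerating every 2x2 table
    with the same margins and summing the probabilities of tables
    no more likely than the observed one."""
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def prob(x):  # hypergeometric probability that cell (1,1) equals x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    lo = max(0, col1 - row2)   # smallest feasible cell (1,1) value
    hi = min(row1, col1)       # largest feasible cell (1,1) value
    p_obs = prob(a)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Hypothetical 2x2: polymorphism carrier status vs. treatment response
print(round(fisher_exact_2x2(1, 9, 11, 3), 4))  # 0.0028
```

Exact conditional tests for larger tables follow the same logic, but the set of tables consistent with the observed margins grows combinatorially, which is why the enumeration strategy dominates running time.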
|
6 |
The Impact of Midbrain Cauterize Size on Auditory and Visual Responses' Distribution. Zhang, Yan. 20 April 2009 (has links)
This thesis presents several statistical analyses from a cooperative project with Dr. Pallas and Yuting Mao of the Biology Department of Georgia State University. The research assesses the impact of midbrain cauterization size on auditory and visual responses in the brain. Besides commonly used statistical methods such as MANOVA and frequency tests, a combination of the permutation test, the Kolmogorov-Smirnov test, and the Wilcoxon rank-sum test is applied to our non-parametric data. Simulation results show that the permutation test we used has very good power and fits the needs of this study. The results statistically confirm part of the Biology Department's hypothesis and contribute to a more complete understanding of the experiments and of their potential for helping patients with Acquired Brain Injury.
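As an illustration of one of the techniques named above (not the project's actual analysis), the two-sample Kolmogorov-Smirnov statistic is simply the largest gap between the two empirical distribution functions. A stdlib-only sketch with hypothetical response measurements:

```python
def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical distribution functions,
    evaluated at every observed value."""
    grid = sorted(set(x) | set(y))

    def ecdf(sample, t):
        return sum(v <= t for v in sample) / len(sample)

    return max(abs(ecdf(x, t) - ecdf(y, t)) for t in grid)

# Hypothetical response latencies for small vs. large cauterization groups
small = [12.1, 11.4, 10.9, 12.6]
large = [14.0, 13.2, 15.1, 13.8]
print(ks_statistic(small, large))  # 1.0 (the two samples do not overlap)
```

Because the statistic is distribution-free, its null distribution can also be obtained by permutation, which is attractive for the small samples discussed here.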
|
7 |
Driving and inhibiting factors in the adoption of open source software in organisations. Greenley, Neil. January 2015 (has links)
The aim of this research is to investigate the extent to which Open Source Software (OSS) adoption behaviour can empirically be shown to be governed by a set of self-reported (driving and inhibiting) salient beliefs of key informants in a sample of organisations. Traditional IS adoption/usage theory, methodology and practice are drawn on, then augmented with theoretical constructs derived from IT governance and organisational diagnostics, to propose an artefact that aids the understanding of organisational OSS adoption behaviour, stimulates debate and aids operational management interventions. The research combined a quantitative method (Fisher's exact test) with a complementary qualitative method (content analysis), applied to data gathered via self-selection sampling. In addition, a combination of data and methods was used to establish a set of mixed-methods results (or meta-inferences). From a dataset of 32 completed questionnaires in the pilot study and 45 in the main study, a relatively parsimonious set of statistically significant driving and inhibiting factors was established (at confidence levels ranging from 95% to 99.5%) for a variety of organisational OSS adoption behaviours (i.e. by year, by software category and by stage of adoption). In terms of mixed methods, the combined quantitative and qualitative data yielded a number of factors for a relatively small set of organisational OSS adoption behaviours. The findings are that a relatively small set of driving and inhibiting salient beliefs (e.g. Security, Perpetuity, Unsustainable Business Model, Second Best Perception, Colleagues in IT Dept., Ease of Implementation and Organisation is an Active User) proved very accurate in predicting certain organisational OSS adoption behaviours (e.g. self-reported Intention to Adopt OSS in 2014) via binomial logistic regression analysis.
|
8 |
Real Estate Forecasting – An evaluation of forecasts / Prognoser på fastighetsmarknaden – Utvärdering av träffsäkerheten hos prognoser. Horttana, Jonas. January 2013 (has links)
This degree project explores the subject of forecasting, an ongoing and very much alive debate within economics and finance. The available research on forecasting is vast, and even when restricted to real estate, the main focus of this paper, the material is comprehensive. A large fraction of published research on real estate forecasting consists of post-mortem studies, with econometric models trying to replicate historical trends with the help of available micro and macro data. This branch of the field seems to advance and progress with the help of refined econometric models. This paper, by contrast, examines the fundamentals behind forecasting and why forecasting can be a difficult task in general. This is shown through an examination of the accuracy of 160 unique forecasts within the field of real estate. To evaluate accuracy and predictability from different perspectives, we state three main null hypotheses: 1. Correct forecasts and the direction of the predictions are independent variables. 2. Correct forecasts and the examined consultants are independent variables. 3. Correct forecasts and the examined cities are independent variables. The observed frequencies for Hypothesis 1 indicate that upward movements seem to be easier to predict than downward movements; this is, however, not supported by the statistical tests. The observed frequencies for Hypothesis 2 clearly indicate that one consultant is a superior forecaster compared to the other consultants; the statistical tests confirm this. The observed frequencies for Hypothesis 3 indicate no signs of dependence between the variables; the statistical tests confirm this. / Detta examensarbete ämnar att utforska ämnesområdet kring prognoser och prognosmakande, vilket är en högst levande debatt inom ekonomi och finans. Inom detta område är tillgänglig forskning mycket omfattande och även om materialet begränsas till fastighetsmarknaden, som är huvudspåret i denna uppsats, är mängden information ansenlig. En stor andel av publicerad forskning som berör prognoser av fastighetsmarknaden består ofta av studier av typen "post mortem", där man med ekonometriska modeller försöker efterlikna tidigare historiska trender med hjälp av tillgänglig mikro- eller makrodata. Denna gren av forskningen tycks vinna mark och fortsätter att utvecklas med hjälp av allt mer avancerade ekonometriska modeller. Denna studie fokuserar däremot snarare på de fundamentala elementen av prognosmakande och varför detta ibland kan vara en problematisk uppgift. Detta visas med hjälp av en undersökning gällande utfallet och träffsäkerheten av 160 unika prognoser på fastighetsmarknaden. För att utvärdera träffsäkerheten hos prognoserna sätts tre olika nollhypoteser upp: 1. Korrekt prognos och riktning av prognos är oberoende variabler. 2. Korrekt prognos och konsult är oberoende variabler. 3. Korrekt prognos och undersökta städer är oberoende variabler. De observerade frekvenserna för Hypotes 1 indikerar att uppåtgående prognoser är enklare att förutspå än övriga prognoser. Detta kan dock inte stödjas av de statistiska testerna. De observerade frekvenserna för Hypotes 2 indikerar tydligt att en konsult är en överlägsen prognosmakare jämfört med övriga konsulter. Detta stöds av de statistiska testerna. De observerade frekvenserna för Hypotes 3 indikerar inget samband av beroende mellan variablerna. Detta kan dock inte stödjas av de statistiska testerna.
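Independence hypotheses like those above are commonly tested with a chi-square (or, for sparse tables, Fisher's exact) test of independence. A stdlib-only sketch of the Pearson statistic, with hypothetical counts of correct vs. incorrect forecasts per consultant (not the thesis's data):

```python
def chisq_independence(table):
    """Pearson chi-square statistic for an r x c contingency table,
    with degrees of freedom (r-1)(c-1)."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    stat = sum((table[i][j] - rows[i] * cols[j] / n) ** 2
               / (rows[i] * cols[j] / n)
               for i in range(len(rows)) for j in range(len(cols)))
    df = (len(rows) - 1) * (len(cols) - 1)
    return stat, df

# Hypothetical: correct / incorrect forecast counts for three consultants
table = [[30, 10], [18, 22], [20, 20]]
stat, df = chisq_independence(table)
print(round(stat, 3), df, stat > 5.991)  # 5.991 = chi-square(2) 5% critical value
```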
|
9 |
Residue Associations In Protein Family Alignments. Ozer, Hatice Gulcin. 24 June 2008 (has links)
No description available.
|
10 |
The Box-Cox 依變數轉換之技巧 (Techniques of Dependent-Variable Transformation) / The Box-Cox Transformation: A Review. Chan, Lan Fun (曾能芳). Unknown Date (has links)
The use of a transformation can often simplify the analysis of data, especially when the original observations deviate from the underlying assumptions of the linear model. The Box-Cox transformation has received more attention than others. In this dissertation, we review the theory of estimation and hypothesis testing for the transformation parameter, and the sensitivity of the linear model parameters. Monte Carlo simulation is used to study the performance of the transformation. We also examine whether the Box-Cox transformation actually makes the transformed observations satisfy the assumptions of the linear model.
|