About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
11

ADMIXTURE MAPPING AND SUBSEQUENT FINE-MAPPING SUGGEST NOVEL LOCI FOR TYPE 2 DIABETES IN AFRICAN AMERICANS

Jeff, Janina Maria 21 December 2012 (has links)
Type 2 diabetes (T2D) is a complex metabolic disease that disproportionately affects African Americans. Obesity is a major risk factor for T2D, and chronic inflammation, possibly stemming from adipose tissue macrophages and T cells, is postulated to play a key role. Genome-wide association studies (GWAS) have identified over 20 disease loci that contribute to T2D in European Americans, but few studies have been performed in admixed populations. We first performed a GWAS of 1,563 African Americans from the Vanderbilt Genome-Electronic Records Project and the Northwestern University NUgene Project as part of the electronic Medical Records and Genomics (eMERGE) network. We successfully replicated a previously reported GWAS association in TCF7L2 in our African American dataset, but were unable to identify novel associations at p < 5.0×10⁻⁸. Admixture mapping in recently admixed populations is a powerful method for identifying disease loci in African Americans, and we used it to search the genome for novel loci associated with T2D. Our admixture scan revealed multiple candidate genes, including TCIRG1, a T-cell immune regulator expressed in the pancreas and liver that had not previously been implicated in T2D. We then performed a fine-mapping analysis to further assess the association between TCIRG1 and T2D in more than 5,000 African Americans, and identified 13 independent associations in the TCIRG1, CHKA, and ALDH3B1 genes on chromosome 11. Our results suggest a novel region on chromosome 11, identified by admixture mapping, that is associated with T2D in African Americans and warrants additional replication and validation.
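To make the genome-wide scan concrete, here is a minimal, hedged sketch of the per-variant test that underlies a GWAS of a binary trait: logistic regression of case/control status on allele dosage with an ancestry covariate. All data, variable names, and effect sizes are simulated illustrations, not the eMERGE pipeline or its results.

```python
# A minimal sketch of one per-variant GWAS test: logistic regression of T2D
# status on allele dosage, adjusted for ancestry. Simulated data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
dosage = rng.binomial(2, 0.3, n)               # genotype coded 0/1/2
ancestry = rng.uniform(0, 1, n)                # e.g., global ancestry fraction
logit_p = -1.0 + 0.25 * dosage + 0.5 * ancestry
status = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))  # case/control outcome

X = sm.add_constant(np.column_stack([dosage, ancestry]))
fit = sm.Logit(status, X).fit(disp=0)
print(f"per-allele OR = {np.exp(fit.params[1]):.2f}, p = {fit.pvalues[1]:.2e}")
# In a full scan, a variant is declared genome-wide significant at p < 5.0e-8.
```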
12

Household Preferences for Financing Hurricane Risk Mitigation: A Survey-Based Empirical Analysis

Fitzgerald, Damon 29 October 2014 (has links)
After a series of major storms over the last 20 years, financing for U.S. natural disaster insurance has undergone substantial disruptions, leaving many federal and state-backed programs covering residential property damage severely underfunded. To regain actuarial soundness, policy makers have proposed a shift to a system that reflects risk-based pricing for property insurance. We examine survey responses from 1,394 single-family homeowners in the state of Florida regarding support for several natural disaster mitigation policy reforms. Using a partial proportional odds model, we test for effects of location, risk perception, and socio-economic and housing characteristics on support for policy reforms. Our findings suggest that residents across the state, not just risk-prone homeowners, support the current subsidized model. We also examine several other policy questions from the survey to verify our initial results. Finally, we discuss the implications of our findings as inputs to policymakers.
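For readers unfamiliar with the model named above, the following is the partial proportional odds model as commonly written (after Peterson and Harrell, 1990); which covariates were allowed threshold-varying effects in this survey analysis is not stated in the abstract.

```latex
% Partial proportional odds model for an ordered response with J categories:
% covariates x share a single effect beta (the proportional odds part), while
% covariates z get threshold-specific effects gamma_j.
\[
  \log \frac{P(Y > j \mid x, z)}{P(Y \le j \mid x, z)}
  = \alpha_j + x^{\top}\beta + z^{\top}\gamma_j ,
  \qquad j = 1, \dots, J - 1 .
\]
% Setting gamma_j = 0 for all j recovers the ordinary proportional odds model.
```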
13

The Effects of Agriculture on Canada's Major Watersheds

Ramunno, Daniel 10 1900 (has links)
Water contamination is a major environmental issue that degrades the water quality of watersheds. It harms drinking water and aquatic wildlife, which in turn can affect everyone's health. Several institutions collected water samples from four of Canada's major watersheds and counted the bacteria in each sample. The data used in this paper were taken from one of these institutions and analysed to investigate whether agricultural waste affects the water quality of these four watersheds. We found that agricultural waste from nearby farms significantly affects the water quality of three of the four watersheds. Principal component analysis was also performed on these data, and it was found that the data can be expressed in terms of a single variable without losing much information. The bootstrap distributions of the principal component analysis parameters were estimated, and the sampling distributions of these parameters were found to be stable. There was also evidence that the variables in the data are not normally distributed and that not all of them are independent. / Master of Science (MSc)
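A minimal sketch of the two analyses just described, on simulated data: a principal component analysis checking whether one component captures most of the variation, followed by a bootstrap of the first component's explained-variance share. The variable names are hypothetical stand-ins for the actual water-quality measurements.

```python
# PCA on standardized simulated water-quality data, then a bootstrap of the
# first component's explained-variance share. Illustrative variables only.
import numpy as np

rng = np.random.default_rng(1)
n = 200
farm_runoff = rng.gamma(2.0, 1.0, n)
bacteria = 50 * farm_runoff + rng.normal(0, 10, n)   # tracks runoff
turbidity = 5 * farm_runoff + rng.normal(0, 2, n)
X = np.column_stack([farm_runoff, bacteria, turbidity])

def first_pc_share(data):
    Z = (data - data.mean(0)) / data.std(0, ddof=1)  # standardize columns
    var = np.linalg.svd(Z, compute_uv=False) ** 2    # PC variances
    return var[0] / var.sum()

print(f"PC1 explains {first_pc_share(X):.1%} of the variance")
boot = [first_pc_share(X[rng.integers(0, n, n)]) for _ in range(2000)]
print(f"bootstrap 95% CI: {np.percentile(boot, [2.5, 97.5]).round(3)}")
```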
14

Empirical Assessment of Performance of Tests of Equal Variance in the Presence of Within and Between Dependence and Under Small Sample Size: Application to Craniofacial Variability Index in Smith-Lemli-Opitz Syndrome (SLOS)

Reano-Alvarez, German 04 1900 (has links)
The craniofacial variability index (CVI), estimated by the standard deviation of z-scores obtained from craniofacial measurements of one or more individuals, is considered in the medical literature a useful and relatively simple quantitative measure of the degree of dysmorphogenesis in the head and face.

CVI obtained from patients diagnosed with syndromes such as Smith-Lemli-Opitz Syndrome (SLOS) is often compared with CVI for healthy individuals, and CVI is commonly used to compare the degree of dysmorphogenesis among individuals and groups characterized by the presence or absence of certain syndromes and abnormalities. However, such comparisons are often subjective, with no statistical test of the values to account for sample-to-sample variability.

We performed a simulation study to compare the performance of tests of equal variance in the presence of within-individual and between-individual dependence, comparing empirical level and power obtained from 10,000 simulations. We considered four tests: the F test, Levene's test, the Fligner-Killeen test, and a permutation F test. We also provide a detailed analysis of a real data set to illustrate our results. Overall, our simulation indicates that the F test and the permutation F test perform better than the other methods; however, for all tests considered, power for detecting small differences in variance is very low when the sample size is small. An interesting finding is that within-individual (or within-group) dependence actually enhanced performance, with power increasing as correlation increased. In contrast, between-individual dependence, and combined within- and between-individual dependence, lowered power relative to the independence and within-dependence scenarios, with higher correlation giving lower power. Notably, in the group comparison the combined within- and between-dependence scenario showed the opposite pattern to the individual comparison: higher correlation was associated with both higher level and higher power.

Finally, the analysis of the Smith-Lemli-Opitz Syndrome (SLOS) dataset showed that comparing pattern profiles between individuals is a useful tool for identifying influential craniofacial z-scores that affect CVI and the subsequent results of testing equal variances with the classical F test versus the Levene-median and Fligner-Killeen tests. / Master of Science (MSc)
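A minimal sketch of the simulation design under independence: empirical rejection rates at the 5% level for the four tests, with small samples and a modest variance ratio. The within- and between-dependence scenarios studied in the thesis are omitted here, and scipy's `levene` defaults to the median-centered (Brown-Forsythe) variant.

```python
# Empirical power of four equal-variance tests under independence.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def f_test_p(x, y):
    # Classical two-sided F test for equality of two variances
    f = np.var(x, ddof=1) / np.var(y, ddof=1)
    dfx, dfy = len(x) - 1, len(y) - 1
    return 2 * min(stats.f.cdf(f, dfx, dfy), stats.f.sf(f, dfx, dfy))

def perm_f_p(x, y, n_perm=199):
    # Permutation version: reshuffle group labels, recompute symmetrized F ratio
    sym = lambda f: max(f, 1 / f)
    obs = sym(np.var(x, ddof=1) / np.var(y, ddof=1))
    pooled, nx = np.concatenate([x, y]), len(x)
    exceed = sum(
        sym(np.var(z[:nx], ddof=1) / np.var(z[nx:], ddof=1)) >= obs
        for z in (rng.permutation(pooled) for _ in range(n_perm)))
    return (1 + exceed) / (n_perm + 1)

n, sd_ratio, reps = 15, 1.5, 500        # small samples, modest variance difference
hits = dict(F=0, Levene=0, Fligner=0, PermF=0)
for _ in range(reps):
    x, y = rng.normal(0, 1, n), rng.normal(0, sd_ratio, n)
    hits["F"] += f_test_p(x, y) < 0.05
    hits["Levene"] += stats.levene(x, y).pvalue < 0.05   # median-centered default
    hits["Fligner"] += stats.fligner(x, y).pvalue < 0.05
    hits["PermF"] += perm_f_p(x, y) < 0.05
print({k: round(v / reps, 3) for k, v in hits.items()})  # empirical power at 5% level
```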
15

A Systems Approach to the Evaluation of Sugar Research and Development Activities

Henderson, T. M. Unknown Date (has links)
No description available.
16

Comparison of Bootstrap with Other Tests for Several Distributions

Wong, Yu-Yu 01 May 1988 (has links)
This paper discusses the results of a computer simulation investigating several tests when sampling from several distributions. The hypothesis H0: μ = 0 was tested against H1: μ ≠ 0 using the usual t-test, the trimmed t-test, the Jackknife, the Bootstrap, and the signed-rank test. The p-values and empirical power show that the Bootstrap is as good as the t-test. The Jackknife procedure is too liberal, consistently producing small p-values. The signed-rank test performs fairly well when the data follow the Cauchy distribution.
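A minimal sketch of two of the five tests on Cauchy data: the usual t-test and one common bootstrap construction, resampling the mean-centered sample to impose H0 and comparing t statistics. The thesis's exact bootstrap variant is not specified in the abstract, so this construction is an assumption.

```python
# t-test versus a centered-resample bootstrap test of H0: mu = 0, Cauchy data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def boot_p(x, n_boot=2000):
    t_obs = stats.ttest_1samp(x, 0.0).statistic
    centered = x - x.mean()                 # impose H0: mu = 0
    t_star = np.array([
        stats.ttest_1samp(centered[rng.integers(0, len(x), len(x))], 0.0).statistic
        for _ in range(n_boot)])
    return np.mean(np.abs(t_star) >= abs(t_obs))

x = stats.cauchy.rvs(loc=0.5, size=30, random_state=rng)
print(f"t-test p = {stats.ttest_1samp(x, 0.0).pvalue:.3f}, "
      f"bootstrap p = {boot_p(x):.3f}")
```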
17

Modeling Subset Behavior: Prescriptive Analytics for Professional Basketball Data

Bynum, Lucius 01 January 2018 (has links)
Sports analytics problems have become increasingly prominent in the past decade. Modern image processing capabilities allow coaching staff to easily capture detailed game-time statistics on their players, opponents, team configurations, and plays. The challenge is to turn that data into meaningful insights for team managers and coaches. This project uses descriptive and predictive techniques on publicly available NBA basketball data to identify powerful combinations of players and predict how they will perform against other teams.
18

Interpretation of Principal Components

Dabdoub, Marwan A. 01 May 1978 (has links)
Principal component analysis can be carried out in two ways: the R-mode, where R = K'K, and the Q-mode, where Q = K K', with K a data matrix centered by column or by row. The R-mode is the most commonly used. It has been suggested that principal components computed from the R-mode and the Q-mode may have the same interpretation. If true, this would put the interpretation of principal components on a much more intuitive level in many applications, whenever one type of principal component is more intuitively related to the physical or natural system being studied than the other. The relationship between the principal components of the R-mode and the Q-mode has been investigated, with the result that they are perfectly correlated. The conclusion that the principal components of the R-mode and the Q-mode have the same interpretation is established, and an example is given to illustrate this work. The resulting interpretation is found to be the same as that obtained by Donald L. Phillips (1977) using different methods.
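A small numeric check of the claimed equivalence. By the singular value decomposition K = U S V', the R-mode scores KV equal US column for column, so each R-mode component score vector is perfectly correlated (up to sign) with the corresponding Q-mode eigenvector. Column centering is shown here; the row-centered case is analogous.

```python
# Verify that R-mode PC scores and Q-mode eigenvectors are perfectly correlated.
import numpy as np

rng = np.random.default_rng(4)
K = rng.normal(size=(20, 5))
K -= K.mean(axis=0)                       # data matrix centered by column

_, V = np.linalg.eigh(K.T @ K)            # R-mode: eigenvectors of R = K'K
_, U = np.linalg.eigh(K @ K.T)            # Q-mode: eigenvectors of Q = KK'
V, U = V[:, ::-1], U[:, ::-1]             # reorder to decreasing eigenvalue

scores = K @ V                            # R-mode principal component scores
for j in range(5):
    r = np.corrcoef(scores[:, j], U[:, j])[0, 1]
    print(f"component {j + 1}: |correlation| = {abs(r):.6f}")  # prints 1.000000
```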
19

Estimation of Floods When Runoff Originates from Nonhomogeneous Sources

Olson, David Ray 01 May 1979 (has links)
Extreme value theory is used as the basis for deriving a distribution function for flood frequency analysis when runoff originates from nonhomogeneous sources. A modified least squares technique is used to estimate the parameters of the distribution function for eleven rivers. Goodness-of-fit statistics are computed, and the distribution function is found to fit the data very well. The derived distribution function is recommended as a base method for flood frequency analysis for rivers exhibiting nonhomogeneous sources of runoff, provided further investigation also proves positive.
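The derived distribution itself is not reproduced in this abstract, so the following is only a hedged illustration of the general approach: a two-component (mixed-population) Gumbel distribution fitted by ordinary least squares to Weibull plotting positions. The thesis's distribution function and its modified least squares scheme may differ.

```python
# Least-squares fit of a two-component Gumbel CDF to empirical plotting
# positions; simulated annual peaks from two runoff populations.
import numpy as np
from scipy.optimize import curve_fit

def mixed_gumbel_cdf(x, p, mu1, b1, mu2, b2):
    g = lambda m, b: np.exp(-np.exp(-(x - m) / b))
    return p * g(mu1, b1) + (1 - p) * g(mu2, b2)

rng = np.random.default_rng(5)
# e.g., snowmelt-driven versus storm-driven floods (illustrative parameters)
peaks = np.sort(np.concatenate([rng.gumbel(100, 15, 40), rng.gumbel(180, 30, 20)]))
pp = np.arange(1, len(peaks) + 1) / (len(peaks) + 1)   # Weibull plotting positions

popt, _ = curve_fit(mixed_gumbel_cdf, peaks, pp,
                    p0=[0.6, 100, 15, 180, 30],
                    bounds=([0, 0, 1, 0, 1], [1, 500, 100, 500, 100]))
rmse = np.sqrt(np.mean((mixed_gumbel_cdf(peaks, *popt) - pp) ** 2))
print(f"mixing weight p = {popt[0]:.2f}, fit RMSE = {rmse:.4f}")
```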
20

A New Method for Testing Normality Based upon a Characterization of the Normal Distribution

Melbourne, Davayne A 21 March 2014 (has links)
The purposes of this thesis were to review some of the existing methods for testing normality and to investigate the use of generated data, combined with observed data, to test for normality. This approach contrasts with existing methods, which are derived from observed data only. The proposed test follows a characterization theorem of Bernstein (1941) and uses a test statistic D*, the average of Hoeffding's D-statistic between linear combinations of the observed and generated data. Overall, the proposed method showed considerable potential and achieved adequate power for many of the alternative distributions investigated. The simulation results revealed that the power of the test is comparable to some of the most commonly used methods of testing for normality. The test is performed with a computer-based statistical package and in general takes longer to run than some of the existing methods.
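A hedged sketch of the characterization idea: by Bernstein's theorem, if X and an independent, equal-variance Y are both normal, then X + Y and X - Y are independent, so dependence between those combinations signals non-normality. Hoeffding's D (implemented below for tie-free data, following the standard formula) measures that dependence, and D* averages it over several generated normal samples. The exact linear combinations and averaging scheme used in the thesis are assumptions here.

```python
# D*-style normality check: average Hoeffding's D between x + y and x - y,
# where y is generated normal data matched to the sample mean and SD.
import numpy as np

def hoeffding_d(x, y):
    # Hoeffding's D for continuous data (no ties assumed)
    n = len(x)
    rx = np.argsort(np.argsort(x)) + 1          # ranks of x
    ry = np.argsort(np.argsort(y)) + 1          # ranks of y
    q = 1 + np.array([np.sum((x < x[i]) & (y < y[i])) for i in range(n)])
    d1 = np.sum((q - 1) * (q - 2))
    d2 = np.sum((rx - 1) * (rx - 2) * (ry - 1) * (ry - 2))
    d3 = np.sum((rx - 2) * (ry - 2) * (q - 1))
    return 30.0 * ((n - 2) * (n - 3) * d1 + d2 - 2 * (n - 2) * d3) / (
        n * (n - 1) * (n - 2) * (n - 3) * (n - 4))

def d_star(x, rng, n_rep=20):
    ds = []
    for _ in range(n_rep):
        y = rng.normal(x.mean(), x.std(ddof=1), len(x))  # generated normal data
        ds.append(hoeffding_d(x + y, x - y))
    return np.mean(ds)

rng = np.random.default_rng(6)
print(f"normal sample:      D* = {d_star(rng.normal(0, 1, 100), rng):.4f}")   # typically near 0
print(f"exponential sample: D* = {d_star(rng.exponential(1, 100), rng):.4f}") # typically larger
```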
