  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Cox Model Analysis with the Dependently Left Truncated Data

Li, Ji 07 August 2010 (has links)
A truncated sample consists of realizations of a pair of random variables (L, T) subject to the constraint that L ≤ T. The primary interest with a truncated sample is to find the marginal distributions of L and T. Many studies have been done under the assumption that L and T are independent. We introduce a new way to specify a Cox model for a truncated sample, assuming that the truncation time is a predictor of T, which induces dependence between L and T. We develop an algorithm to obtain the adjusted risk sets and use the Kaplan-Meier estimator to estimate the marginal distribution of L. We further extend our method to a more practical situation, in which the Cox model includes other covariates associated with T. Simulation studies are conducted to investigate the performance of the Cox model and the new estimators.
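The independence-case estimator that the thesis adjusts can be sketched as follows. This is a minimal product-limit (Lynden-Bell-type) estimate of the truncation-time distribution, assuming independent L and T; the thesis's adjusted risk sets for the dependent case are not reproduced here.

```python
def product_limit_L(pairs):
    """Product-limit (Lynden-Bell-type) estimate of F(l) = P(L <= l)
    from left-truncated pairs (L, T) with L <= T, assuming L and T
    are independent. Returns {l: F_hat(l)} at each distinct L value."""
    ls = sorted({l for l, _ in pairs})
    est = {}
    for l in ls:
        f = 1.0
        for li in ls:
            if li > l:
                d = sum(1 for lj, _ in pairs if lj == li)         # "events" at li
                r = sum(1 for lj, tj in pairs if lj <= li <= tj)  # risk-set size
                f *= 1.0 - d / r                                  # reverse-time product limit
        est[l] = f
    return est
```

Each pair (L_i, T_i) enters the risk set at l whenever L_i ≤ l ≤ T_i; the thesis's contribution is to replace these risk sets with adjusted ones when L predicts T.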
242

Advanced Statistical Methodologies in Determining the Observation Time to Discriminate Viruses Using FTIR

Luo, Shan 13 July 2009 (has links)
Fourier transform infrared (FTIR) spectroscopy, a method that uses electromagnetic radiation to detect specific molecular structures in cells, can be used to discriminate different types of cells. The objective is to find the minimum observation time (a choice among 2, 4, and 6 hours) at which to record FTIR readings such that different viruses can be discriminated. A new method is adopted for the datasets. Briefly, inner differences are created as the control group, and the Wilcoxon signed-rank test is used as the first variable-selection procedure to prepare for the discrimination stage. In the second stage we propose either the partial least squares (PLS) method or simply taking significant differences as the discriminator. Finally, k-fold cross-validation is used to estimate the shrinkage of the goodness measures, such as sensitivity, specificity, and area under the ROC curve (AUC). We conclude that 6 hours is sufficient for discriminating mock from HSV-1 and Coxsackie viruses; Adeno virus is an exception.
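The screening step can be illustrated with a plain signed-rank statistic; this is a generic sketch, and the thesis's inner-difference construction and selection thresholds are not reproduced from the data.

```python
def _avg_ranks(values):
    """Ranks (1-based) with average ranks assigned to ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def signed_rank_stat(diffs):
    """Wilcoxon signed-rank statistic W+ for paired differences."""
    d = [x for x in diffs if x != 0]   # zero differences dropped by convention
    ranks = _avg_ranks([abs(x) for x in d])
    return sum(r for x, r in zip(d, ranks) if x > 0)
```

A spectral variable would then pass to the second (PLS) stage when W+ is far from its null mean n(n+1)/4.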
243

Looking in the Crystal Ball: Determinants of Excess Return

Akolly, Kokou S 18 August 2010 (has links)
This paper investigates the determinants of excess returns using dividend yields as a proxy in a cross-sectional setting. First, we find that industry type and the current business cycle are determining factors of returns. Second, our results suggest that dividend yield serves as a signaling mechanism indicating the “healthiness” of a firm to prospective investors. Third, we see that there is a positive relationship between dividend yield and risk, especially in the utility and financial sectors. Finally, using actual excess returns instead of dividend yield in our model shows that all predictors of dividend yield were also significant predictors of excess returns. This connection between dividend yield and excess returns supports our use of dividend yield as a proxy for excess returns.
244

Racial Disparities Study in Diabetes-Related Complication Using National Health Survey Data

Yan, Fengxia 15 December 2010 (has links)
The main aim of this study is to compare the prevalence of diabetes-related complications in whites to the prevalence in other racial and ethnic groups in the United States using the 2009 Behavioral Risk Factor Surveillance System (BRFSS). By constructing logistic regression models, odds ratios (OR) were calculated to compare the prevalence of diabetes complications in whites and the other groups. Compared to whites, the prevalence of hypertension and stroke in African Americans was higher, while the prevalence of heart attack and coronary heart disease was lower. Asian Americans or Pacific Islanders, African Americans, and Hispanics were more likely to develop retinopathy than whites. The prevalence of hypertension, hypercholesterolemia, heart attack, coronary heart disease, and stroke in Native Americans and the “other” group was not significantly different from the prevalence in whites. Asians or Pacific Islanders were less likely to experience stroke.
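For a single binary race indicator, the unadjusted odds ratio behind such a comparison reduces to a 2×2 table. A minimal sketch with a Woolf-type confidence interval follows; note the thesis's logistic models adjust for covariates and BRFSS analyses normally use survey weights, neither of which is shown here.

```python
import math

def odds_ratio(cases1, noncases1, cases2, noncases2):
    """Unadjusted odds ratio of group 1 vs. group 2 with a 95%
    Woolf (log-scale normal) confidence interval."""
    or_ = (cases1 * noncases2) / (noncases1 * cases2)
    se = math.sqrt(1/cases1 + 1/noncases1 + 1/cases2 + 1/noncases2)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)
```

An OR above 1 with a confidence interval excluding 1 corresponds to a significantly higher prevalence of the complication in group 1.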
245

On the Lebesgue Integral

Kastine, Jeremiah D 18 March 2011 (has links)
We look from a new point of view at the definition and basic properties of the Lebesgue measure and integral on Euclidean spaces, on abstract spaces, and on locally compact Hausdorff spaces. We use mini sums to give all of them a unified treatment that is more efficient than the standard ones. We also give Fubini's theorem a proof that is nicer and uses much lighter technical baggage than the usual treatments.
246

Estimation of Hazard Function for Right Truncated Data

Jiang, Yong 27 April 2011 (has links)
This thesis centers on nonparametric inference for the cumulative hazard function of a right-truncated variable. We present three variance estimators for the Nelson-Aalen estimator of the cumulative hazard function and conduct a simulation study to investigate their performance. A close match between the sampling standard deviation and the estimated standard error is observed when the estimated survival probability is not close to 1. However, poor tail performance persists due to the limitations of the proposed variance estimators. We further analyze an AIDS blood-transfusion sample for which the disease latent time is right truncated. We compute the three variance estimators, yielding three sets of confidence intervals. This work provides insights for future research on two-sample tests for right-truncated data.
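For reference, the Nelson-Aalen estimator with its simplest (Poisson-type) variance can be sketched as below. This is the standard form for fully observed event times; it does not include the reverse-time adjustment for right truncation or the three variance estimators compared in the thesis.

```python
def nelson_aalen(times):
    """Nelson-Aalen cumulative hazard H(t) and a Poisson-type
    variance estimate, for a sample of fully observed event times."""
    pts = sorted(set(times))
    H, V = {}, {}
    h = v = 0.0
    for t in pts:
        d = sum(1 for ti in times if ti == t)    # events at t
        r = sum(1 for ti in times if ti >= t)    # at risk just before t
        h += d / r
        v += d / r ** 2
        H[t], V[t] = h, v
    return H, V
```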
247

A New Jackknife Empirical Likelihood Method for U-Statistics

Ma, Zhengbo 25 April 2011 (has links)
U-statistics generalize the concept of the mean of independent identically distributed (i.i.d.) random variables and are widely utilized in many estimation and testing problems. The standard empirical likelihood (EL) for U-statistics is computationally expensive because of its nonlinear constraint. The jackknife empirical likelihood method largely relieves the computational burden by circumventing the construction of the nonlinear constraint. In this thesis, we adopt a new jackknife empirical likelihood method to make inference for the general volume under the ROC surface (VUS), a typical U-statistic. Monte Carlo simulations show that the EL confidence intervals perform well in terms of coverage probability and average length for various sample sizes.
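The jackknife step can be illustrated on a simple U-statistic; Gini's mean difference stands in here for the VUS, which needs multi-class data. Jackknife EL then treats the pseudo-values as approximately i.i.d. and applies ordinary EL to their mean (the EL optimization itself is omitted from this sketch).

```python
from itertools import combinations

def gini_mean_diff(x):
    """Degree-2 U-statistic with kernel |a - b|."""
    n = len(x)
    return sum(abs(a - b) for a, b in combinations(x, 2)) / (n * (n - 1) / 2)

def jackknife_pseudo_values(x, ustat):
    """Pseudo-values V_i = n*U_n - (n-1)*U_{n-1}^(-i). For a
    U-statistic their average equals U_n exactly."""
    n = len(x)
    full = ustat(x)
    return [n * full - (n - 1) * ustat(x[:i] + x[i+1:]) for i in range(n)]
```

The exact identity mean(V_i) = U_n is what lets the pseudo-values replace the nonlinear U-statistic constraint with a linear mean constraint.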
248

Stability Selection of the Number of Clusters

Reizer, Gabriella v 18 April 2011 (has links)
Selecting the number of clusters is one of the greatest challenges in clustering analysis. In this thesis, we propose a variety of stability selection criteria based on cross validation for determining the number of clusters. Clustering stability measures the agreement of clusterings obtained by applying the same clustering algorithm on multiple independent and identically distributed samples. We propose to measure the clustering stability by the correlation between two clustering functions. These criteria are motivated by the concept of clustering instability proposed by Wang (2010), which is based on a form of clustering distance. In addition, the effectiveness and robustness of the proposed methods are numerically demonstrated on a variety of simulated and real world samples.
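The correlation-based stability measure can be sketched on pairs of label vectors. Co-membership indicators are one concrete choice of clustering function, an assumption in this sketch; Wang's (2010) instability uses a clustering distance instead.

```python
from itertools import combinations

def comembership(labels):
    """0/1 vector over point pairs: 1 if the pair shares a cluster."""
    return [1.0 if labels[i] == labels[j] else 0.0
            for i, j in combinations(range(len(labels)), 2)]

def stability(labels1, labels2):
    """Pearson correlation between the co-membership vectors of two
    clusterings of the same points (assumes each has >1 cluster)."""
    a, b = comembership(labels1), comembership(labels2)
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / (va * vb) ** 0.5
```

Because only co-membership matters, the measure is invariant to relabeling the clusters, which is essential when comparing clusterings fit on independent samples.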
249

A Review of Cross Validation and Adaptive Model Selection

Syed, Ali R 27 April 2011 (has links)
We perform a review of model selection procedures, in particular various cross validation procedures and adaptive model selection. We cover important results for these procedures and explore the connections between different procedures and information criteria.
250

Analysis of Faculty Evaluation by Students as a Reliable Measure of Faculty Teaching Performance

Twagirumukiza, Etienne 11 August 2011 (has links)
Most American universities and colleges require students to provide faculty evaluations at the end of each academic term as a way of measuring faculty teaching performance. Although some analysts think that this kind of evaluation does not necessarily provide a good measurement of teaching effectiveness, there is growing agreement in the academic world about its reliability. This study attempts to find strong statistical evidence supporting faculty evaluation by students as a measure of faculty teaching effectiveness. Emphasis is on analyzing relationships between instructor ratings by students and the corresponding students’ grades. Various statistical methods are applied to analyze a sample of real data and derive conclusions. Methods considered include multivariate statistical analysis, principal component analysis, Pearson's correlation coefficient, Spearman's and Kendall’s rank correlation coefficients, and linear and logistic regression analysis.
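Two of the listed correlation coefficients can be sketched in a few lines; the rating and grade vectors below are hypothetical stand-ins for the study's sample, which is not available here.

```python
from itertools import combinations

def pearson(x, y):
    """Pearson product-moment correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs."""
    conc = disc = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    n = len(x)
    return (conc - disc) / (n * (n - 1) / 2)
```

Spearman's coefficient is simply `pearson` applied to the within-vector ranks, so it needs no separate routine.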
