61

Sensitivity Analyses of the Effect of Atomoxetine and Behavioral Therapy in a Randomized Control Trial

Nwosu, Ann 06 September 2017 (has links)
No description available.
62

Statistical Analysis of Species Level Phylogenetic Trees

Ferguson, Meg Elizabeth 14 November 2017 (has links)
No description available.
63

Judgment Post-Stratification with Machine Learning Techniques: Adjusting for Missing Data in Surveys and Data Mining

Chen, Tian 02 October 2013 (has links)
No description available.
64

Inference on cross correlation with repeated measures data

Tang, Yuxiao 17 March 2004 (has links)
No description available.
65

Ranked Set Sampling: A Look at Allocation Issues and Missing Data Complications

Kohlschmidt, Jessica Kay 31 August 2009 (has links)
No description available.
66

Three Essays on Spatial Econometric Models with Missing Data

Wang, Wei 03 September 2010 (has links)
No description available.
67

Seven methods of handling missing data using samples from a national data base

Witta, Eleanor Lea 06 June 2008 (has links)
The effectiveness of seven methods of handling missing data was investigated in a factorial design using random samples selected from the National Education Longitudinal Study of 1988 (NELS-88). The methods evaluated were listwise deletion, pairwise deletion, mean substitution, Buck's procedure, mean regression, one-iteration regression, and iterative regression. The factors controlled were number of variables (4 and 8), average intercorrelation (0.2 and 0.4), sample size (200 and 2000), and proportion of incomplete cases (10%, 20%, and 40%). The pattern of missing values was determined by the pattern existing in the variables selected from the NELS-88 database. Covariance matrices resulting from each missing data method were compared to the 'true' covariance matrix using multi-sample analysis in LISREL 7, and variable means were compared to the 'true' means using the MANOVA procedure in SPSS/PC+. Statistically significant differences (p ≤ .05) were detected in both comparisons. The most surprising result of this study was the effectiveness (p > .05) of pairwise deletion whenever the sample size was large, supporting the contention that the error term disappears as the sample size approaches infinity (Glasser, 1964). Listwise deletion was also effective (p > .05) whenever there were four variables or the sample size was small. Almost as surprising was the relative ineffectiveness (p < .05) of the regression methods, which is explained by the difference between the proportion of incomplete cases and the proportion of missing values, and by the distribution of the missing values within the incomplete cases. / Ph. D.
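As a hedged illustration of how the simpler of these methods behave, the sketch below contrasts listwise deletion, pairwise deletion, and mean substitution on a toy two-variable dataset; the data, names, and missingness pattern are illustrative and not from the study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200  # the study's small-sample condition
x = rng.normal(size=n)
y = 0.4 * x + rng.normal(scale=np.sqrt(1 - 0.4**2), size=n)  # intercorrelation ~0.4
df = pd.DataFrame({"x": x, "y": y})

# Make 20% of the cases incomplete by deleting y at random.
df.loc[rng.choice(n, size=n // 5, replace=False), "y"] = np.nan

cov_listwise = df.dropna().cov()           # listwise deletion: drop incomplete cases
cov_pairwise = df.cov()                    # pandas uses pairwise-complete observations
cov_meansub = df.fillna(df.mean()).cov()   # mean substitution: fill with column means

print(cov_listwise, cov_pairwise, cov_meansub, sep="\n\n")
# Mean substitution shrinks var(y) and cov(x, y) toward zero, illustrating why
# simple fills can distort the covariance matrix that LISREL-style analyses use.
```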
68

Dynamic Causal Modeling Across Network Topologies

Zaghlool, Shaza B. 03 April 2014 (has links)
Dynamic Causal Modeling (DCM) uses dynamical systems to represent the high-level neural processing strategy for a given cognitive task. The logical network topology of the model is specified by a combination of prior knowledge and statistical analysis of the neuroimaging signals. Parameters of this a priori model are then estimated, and competing models are compared to determine the most likely model given the experimental data. Inter-subject analysis using DCM is complicated by differences in model topology, which can vary across subjects due to errors in the first-level statistical analysis of fMRI data or variations in cognitive processing. This requires considerable judgment on the part of the experimenter to decide on the validity of assumptions used in the modeling and statistical analysis; in particular, dropping subjects with insufficient activity in a region of the model and ignoring activation not included in the model. This manual data filtering is required so that the model's network size is consistent across subjects. This thesis proposes a solution: treat missing regions in the first-level analysis as missing data, and estimate the time course associated with any missing region using one of four candidate methods: zero-filling, average-filling, noise-filling using a fixed stochastic process, or noise-filling using a process estimated with expectation-maximization. The effect of this estimation scheme was analyzed by treating it as a preprocessing step to DCM and observing the resulting effects on model evidence. Simulation studies show that estimation using expectation-maximization yields the highest classification accuracy under a simple loss function and the highest model evidence relative to the other methods; this result held across data set sizes and numbers of candidate models. In real data, application to Go/No-Go and Simon tasks allowed computation of signals from the missing nodes and the consequent computation of model evidence in all subjects, compared to 62 and 48 percent, respectively, when no preprocessing was performed. These results demonstrate the face validity of the preprocessing scheme and open the possibility of using single-subject DCM as an individual cognitive phenotyping tool. / Ph. D.
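A hedged sketch of these four fills follows. Here the subjects' region-by-time matrices are stacked into one array with NaNs wherever a subject's region was missing, so the EM-style variant, approximated with scikit-learn's IterativeImputer (a regression-iteration stand-in, not the thesis's estimator), can borrow structure from subjects where the region was observed; all names and shapes are illustrative.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def fill_missing(ts, method="em", rng=None):
    """Fill NaNs in ts, a stacked (subjects * T, regions) array.

    Assumes every region is observed for at least some subjects,
    so no column is entirely NaN.
    """
    rng = rng or np.random.default_rng(0)
    out, miss = ts.copy(), np.isnan(ts)
    if method == "zero":
        out[miss] = 0.0                                   # zero-filling
    elif method == "average":
        col_means = np.nanmean(out, axis=0)               # per-region mean
        out[miss] = col_means[np.where(miss)[1]]          # average-filling
    elif method == "noise":
        scale = np.nanstd(out)
        out[miss] = rng.normal(0.0, scale, miss.sum())    # fixed stochastic process
    elif method == "em":
        # Iterated regression imputation, an approximation to EM under a
        # multivariate Gaussian model of the region time courses.
        out = IterativeImputer(max_iter=25, random_state=0).fit_transform(out)
    return out
```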
69

The wild bootstrap resampling in regression imputation algorithm with a Gaussian Mixture Model

Mat Jasin, A., Neagu, Daniel, Csenki, Attila 08 July 2018 (has links)
Unsupervised learning of a finite Gaussian mixture model (FGMM) is used to learn the distribution of population data. This paper proposes using the wild bootstrap to create variability in the imputed data in single missing-data imputation. We compare the performance and accuracy of the proposed single-imputation method against multiple imputation (MI) from the R package Amelia II using RMSE, R-squared, MAE and MAPE. The proposed method performs better than MI, which is widely regarded as the gold standard among missing data imputation techniques.
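A minimal sketch of the wild-bootstrap step, under assumptions: missing responses are filled with a regression prediction plus a resampled complete-case residual multiplied by a Rademacher weight, one common choice of wild weight. The FGMM component of the paper is omitted, and all names are illustrative.

```python
import numpy as np

def wild_bootstrap_impute(X, y, rng=None):
    """Impute NaNs in y by regression on X plus a wild-bootstrap residual."""
    rng = rng or np.random.default_rng(0)
    miss = np.isnan(y)
    Xd = np.column_stack([np.ones(len(y)), X])             # add intercept
    beta, *_ = np.linalg.lstsq(Xd[~miss], y[~miss], rcond=None)
    resid = y[~miss] - Xd[~miss] @ beta                    # complete-case residuals
    draws = rng.choice(resid, size=miss.sum())             # resample residuals
    v = rng.choice([-1.0, 1.0], size=miss.sum())           # Rademacher wild weights
    y_imp = y.copy()
    y_imp[miss] = Xd[miss] @ beta + draws * v              # fitted value + wild residual
    return y_imp
```

The wild perturbation restores the residual variability that a deterministic regression fill would remove, which is the variability the single-imputation scheme is designed to recreate.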
70

A Comparison of Multiple Imputation Methods for Missing Covariate Values in Recurrent Event Data

Huo, Zhao January 2015 (has links)
Multiple imputation (MI) is a commonly used approach to imputing missing data. This thesis studies missing covariates in recurrent event data and discusses ways to include the survival outcomes in the imputation model. The MI methods under consideration combine the event indicator D with, respectively, the right-censored event time T, the logarithm of T, and the cumulative baseline hazard H0(T). After imputation, the analysis proceeds on the completed data. The Cox proportional hazards (PH) model and the PWP model are chosen as the analysis models, and the coefficient estimates are of substantive interest. A Monte Carlo simulation study compares the different MI methods, using relative bias and mean squared error as evaluation criteria; an empirical study is also conducted on cardiovascular disease event data containing missing values. Overall, the results show that MI based on the Nelson-Aalen estimate of H0(T) is preferred in most circumstances.
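A hedged sketch of the preferred ingredient: the Nelson-Aalen estimate of the cumulative hazard, H(t) = Σ_{t_i ≤ t} d_i / n_i, evaluated at each subject's own event or censoring time so it can enter the imputation model alongside the event indicator D. The code assumes untied times and is not from the thesis; the data are illustrative.

```python
import numpy as np

def nelson_aalen_at_own_time(times, events):
    """Nelson-Aalen cumulative hazard evaluated at each subject's own time.

    times  : event/censoring times (assumed untied for simplicity)
    events : 0/1 event indicators D
    """
    order = np.argsort(times)
    d_sorted = events[order]
    n_at_risk = len(times) - np.arange(len(times))  # risk-set size at each ordered time
    H_sorted = np.cumsum(d_sorted / n_at_risk)      # sum of d_i / n_i up to each time
    H = np.empty(len(times), dtype=float)
    H[order] = H_sorted                             # map back to the original order
    return H

t = np.array([2.0, 5.0, 3.5, 7.1, 4.2])
d = np.array([1, 0, 1, 1, 0])
print(nelson_aalen_at_own_time(t, d))  # [0.2, 0.45, 0.45, 1.45, 0.45]
```

A full implementation would then draw each imputation of the missing covariate from a regression on D, H0(T), and the observed covariates, repeat this M times, and pool the Cox/PWP estimates with Rubin's rules.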
