21

Sensitivity Analyses of the Effect of Atomoxetine and Behavioral Therapy in a Randomized Control Trial

Nwosu, Ann 06 September 2017 (has links)
No description available.
22

Statistical Analysis of Species Level Phylogenetic Trees

Ferguson, Meg Elizabeth 14 November 2017 (has links)
No description available.
23

Judgment Post-Stratification with Machine Learning Techniques: Adjusting for Missing Data in Surveys and Data Mining

Chen, Tian 02 October 2013 (has links)
No description available.
24

Inference on cross correlation with repeated measures data

Tang, Yuxiao 17 March 2004 (has links)
No description available.
25

Ranked Set Sampling: A Look at Allocation Issues and Missing Data Complications

Kohlschmidt, Jessica Kay 31 August 2009 (has links)
No description available.
26

Three Essays on Spatial Econometric Models with Missing Data

Wang, Wei 03 September 2010 (has links)
No description available.
27

Dynamic Causal Modeling Across Network Topologies

Zaghlool, Shaza B. 03 April 2014 (has links)
Dynamic Causal Modeling (DCM) uses dynamical systems to represent the high-level neural processing strategy for a given cognitive task. The logical network topology of the model is specified by a combination of prior knowledge and statistical analysis of the neuro-imaging signals. Parameters of this a priori model are then estimated, and competing models are compared to determine the most likely model given the experimental data. Inter-subject analysis using DCM is complicated by differences in model topology, which can vary across subjects due to errors in the first-level statistical analysis of fMRI data or variations in cognitive processing. This requires considerable judgment on the part of the experimenter to decide on the validity of assumptions used in the modeling and statistical analysis; in particular, dropping subjects with insufficient activity in a region of the model and ignoring activation not included in the model. This manual data filtering is required so that the fMRI model's network size is consistent across subjects. This thesis proposes a solution to this problem by treating missing regions in the first-level analysis as missing data, and estimating the time course associated with any missing region using one of four candidate methods: zero-filling, average-filling, noise-filling using a fixed stochastic process, or noise-filling using a process estimated by expectation-maximization. The effect of this estimation scheme was analyzed by treating it as a preprocessing step to DCM and observing the resulting effects on model evidence. Simulation studies show that estimation using expectation-maximization yields the highest classification accuracy under a simple loss function and the highest model evidence, relative to the other methods. This result held for various data set sizes and varying numbers of model choices.
In real data, application to Go/No-Go and Simon tasks allowed the signals of the missing nodes to be computed, and consequently model evidence to be computed in all subjects, compared to 62 and 48 percent of subjects, respectively, when no preprocessing was performed. These results demonstrate the face validity of the preprocessing scheme and open the possibility of using single-subject DCM as an individual cognitive phenotyping tool. / Ph. D.
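Three of the four candidate filling methods described in the abstract above could be sketched roughly as follows (an illustrative outline only, not the thesis code: the function name, array layout, and the simplified white-noise model are assumptions, and the expectation-maximization variant is omitted):

```python
import numpy as np

def fill_missing_region(timecourses, missing_idx, method="average", rng=None):
    """Estimate the time course of one missing region so the network
    size matches across subjects. `timecourses` is (n_regions,
    n_timepoints) with the missing region's row set to NaN."""
    filled = timecourses.copy()
    observed = np.delete(filled, missing_idx, axis=0)  # rows with data
    if method == "zero":
        # zero-filling: treat the missing region as silent
        filled[missing_idx] = 0.0
    elif method == "average":
        # average-filling: mean of the observed regions at each time point
        filled[missing_idx] = observed.mean(axis=0)
    elif method == "noise":
        # noise-filling with a fixed stochastic process: white noise
        # matched to the overall spread of the observed regions
        rng = rng or np.random.default_rng(0)
        filled[missing_idx] = rng.normal(0.0, observed.std(),
                                         size=filled[missing_idx].shape)
    return filled
```

After filling, every subject has a complete set of regional time courses, so the same DCM network can be fit to all subjects without manually excluding anyone.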
28

Planned Missing Data Designs in Communication Research

Parsons, Michael M. January 2013 (has links)
No description available.
29

A Simulation Study On The Comparison Of Methods For The Analysis Of Longitudinal Count Data

Inan, Gul 01 July 2009 (has links) (PDF)
The longitudinal nature of the measurements and the counting process of the responses motivate regression models for longitudinal count data (LCD) that take into account phenomena such as within-subject association and overdispersion. One common problem in longitudinal studies is missing data, which adds further difficulty to the analysis. The missingness can be handled with missing data techniques; however, the amount of missingness in the data and the missingness mechanism affect the performance of those techniques. In this thesis, among the regression models for LCD, the Log-Log-Gamma marginalized multilevel model (Log-Log-Gamma MMM) and the random-intercept model are focused on. The performance of the models is compared via a simulation study under three missing data mechanisms (missing completely at random, missing at random conditional on observed data, and missing not at random), two missingness percentages (10% and 20%), and four missing data techniques (complete case analysis and subject, occasion, and conditional mean imputation). The simulation study shows that while the mean absolute error and mean squared error values of the Log-Log-Gamma MMM are larger than those of the random-intercept model, both regression models yield parallel results. The simulation results confirm that the amount of missingness in the data and the missingness mechanism strongly influence the performance of missing data techniques under both regression models. Furthermore, while occasion mean imputation generally displays the worst performance, conditional mean imputation shows a superior performance over occasion and subject mean imputation and gives results parallel to complete case analysis.
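The two simplest imputation techniques named in the abstract above could be sketched as follows (a simplified illustration under assumed names and data layout, not the thesis code; conditional mean imputation, which also conditions on observed covariates, is omitted):

```python
import numpy as np

def impute_longitudinal(y, method="subject"):
    """y: (n_subjects, n_occasions) counts with NaN marking missing
    values. Subject mean imputation replaces a missing value with that
    subject's mean over its observed occasions; occasion mean
    imputation uses the mean of that occasion across subjects."""
    y = np.asarray(y, dtype=float).copy()
    miss = np.isnan(y)
    if method == "subject":
        means = np.nanmean(y, axis=1, keepdims=True)  # one mean per subject
    elif method == "occasion":
        means = np.nanmean(y, axis=0, keepdims=True)  # one mean per occasion
    else:
        raise ValueError("method must be 'subject' or 'occasion'")
    return np.where(miss, np.broadcast_to(means, y.shape), y)
```

Complete case analysis, by contrast, would simply drop every subject (row) containing a NaN before fitting the model.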
30

Real-Time Estimation of Aerodynamic Parameters

Larsson Cahlin, Sofia January 2016 (has links)
Extensive testing is performed when a new aircraft is developed. Flight testing is costly and time consuming but there are aspects of the process that can be made more efficient. A program that estimates aerodynamic parameters during flight could be used as a tool when deciding to continue or abort a flight from a safety or data collecting perspective. The algorithm of such a program must function in real time, which for this application would mean a maximum delay of a couple of seconds, and it must handle telemetric data, which might have missing samples in the data stream. Here, a conceptual program for real-time estimation of aerodynamic parameters is developed. Two estimation methods and four methods for handling of missing data are compared. The comparisons are performed using both simulated data and real flight test data. The first estimation method uses the least squares algorithm in the frequency domain and is based on the chirp z-transform. The second estimation method is created by adding boundary terms in the frequency domain differentiation and instrumental variables to the first method. The added boundary terms result in better estimates at the beginning of the excitation and the instrumental variables result in a smaller bias when the noise levels are high. The second method is therefore chosen in the algorithm of the conceptual program as it is judged to have a better performance than the first. The sequential property of the transform ensures functionality in real-time and the program has a maximum delay of just above one second. The four compared methods for handling missing data are to discard the missing data, hold the previous value, use linear interpolation or regard the missing samples as variations in the sample time. The linear interpolation method performs best on analytical data and is compared to the variable sample time method using simulated data. 
The results of the comparison using simulated data vary depending on the other implementation choices, but neither method is found to give unbiased results. In the conceptual program, the variable sample time method is chosen, as it gives a lower variance and is preferable from an implementation standpoint.
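Two of the four options for handling dropped samples in the telemetric stream, holding the previous value and linear interpolation, could be sketched as follows (an illustrative outline with assumed names, not the thesis implementation; the discard and variable-sample-time options are omitted):

```python
import numpy as np

def handle_missing_samples(t, y, method="interp"):
    """t: sample times; y: telemetric signal with NaN marking dropped
    samples. 'hold' repeats the last received value (zero-order hold);
    'interp' linearly interpolates between the neighbouring samples."""
    y = np.asarray(y, dtype=float).copy()
    miss = np.isnan(y)
    if method == "hold":
        for i in range(1, len(y)):
            if miss[i]:
                y[i] = y[i - 1]  # carry the previous value forward
    elif method == "interp":
        # fill gaps from the surrounding received samples
        y[miss] = np.interp(t[miss], t[~miss], y[~miss])
    return y
```

Note that 'hold' is causal and usable sample by sample in real time, whereas linear interpolation must wait for the next received sample, which adds delay to the stream.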
