About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
231

On Intraclass Correlation Coefficients

Yu, Jianhui 17 July 2009 (has links)
This paper uses the maximum likelihood estimation method to estimate common intraclass correlation coefficients for multivariate datasets. We discuss a graphical tool, the Q-Q plot, for testing the equality of the common intraclass correlation coefficients. The Kolmogorov-Smirnov test and the Cramér-von Mises test are used to check whether the intraclass correlation coefficients are the same across populations. Bootstrap and empirical likelihood methods are applied to construct confidence intervals for the common intraclass correlation coefficients.
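
As a rough illustration of the bootstrap step, here is a minimal sketch (not the author's code) that resamples clusters to form a percentile interval around a one-way ANOVA estimate of the intraclass correlation; the balanced-cluster layout and the ANOVA-based estimator are assumptions made for the example.

```python
# Sketch: percentile-bootstrap CI for a one-way intraclass correlation.
# Illustrative only; assumes balanced clusters and the ANOVA-based estimator.
import numpy as np

def icc_oneway(data):
    """One-way ANOVA ICC for an array of shape (k clusters, n per cluster)."""
    data = np.asarray(data)
    k, n = data.shape
    grand = data.mean()
    msb = n * ((data.mean(axis=1) - grand) ** 2).sum() / (k - 1)      # between
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

def bootstrap_ci(data, n_boot=2000, alpha=0.05, seed=0):
    """Resample whole clusters with replacement; return a percentile interval."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    k = data.shape[0]
    stats = [icc_oneway(data[rng.integers(0, k, size=k)]) for _ in range(n_boot)]
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(1)
effects = rng.normal(0, 1, size=(30, 1))            # shared cluster effects
x = effects + rng.normal(0, 1, size=(30, 5))        # 30 clusters of size 5
print(icc_oneway(x), bootstrap_ci(x))               # true ICC here is 0.5
```
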
232

Empirical Likelihood Inference for the Accelerated Failure Time Model via Kendall Estimating Equation

Lu, Yinghua 17 July 2010 (has links)
In this thesis, we study two methods for inference on the parameters of the accelerated failure time model with right-censored data. One is the Wald-type method, which involves direct parameter estimation. The other is the empirical likelihood method, which is based on the asymptotic distribution of the likelihood ratio. We employ a monotone censored-data version of the Kendall estimating equation and construct confidence intervals with both methods. In simulation studies, we compare the empirical likelihood (EL) and the Wald-type procedure in terms of coverage accuracy and average length of the confidence intervals, and conclude that the empirical likelihood method performs better. We also compare the EL for Kendall's rank regression estimator with the EL for other well-known estimators and find advantages of the EL for the Kendall estimator in small samples. Finally, real clinical trial data are used for illustration.
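
The empirical likelihood machinery behind such intervals profiles a nonparametric likelihood ratio against a chi-square calibration. A minimal sketch for the simplest case, the mean of i.i.d. data rather than the censored AFT setting of the thesis, assuming SciPy is available:

```python
# Sketch: -2 log empirical likelihood ratio for a mean, the basic ingredient
# of EL confidence intervals (plain i.i.d. mean, not the censored AFT case).
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:          # mu outside the convex hull
        return np.inf
    # Lagrange multiplier solves sum z_i / (1 + lam * z_i) = 0
    lo = (-1 + 1e-10) / z.max()
    hi = (-1 + 1e-10) / z.min()
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2 * np.sum(np.log1p(lam * z))

x = np.random.default_rng(0).exponential(2.0, size=50)
# mu with el_log_ratio(x, mu) <= 3.841 (95% chi-square_1) form the interval
grid = np.linspace(x.mean() - 1.0, x.mean() + 1.0, 201)
inside = [m for m in grid if el_log_ratio(x, m) <= 3.841]
print(min(inside), max(inside))
```
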
233

Method for Improving the Efficiency of Image Super-Resolution Algorithms Based on Kalman Filters

Dobson, William Keith 01 December 2009 (has links)
The Kalman Filter has many applications in control and signal processing but may also be used to reconstruct a higher resolution image from a sequence of lower resolution images (or frames). If the sequence of low resolution frames is recorded by a moving camera or sensor, where the motion can be accurately modeled, then the Kalman filter may be used to update pixels within a higher resolution frame to achieve a more detailed result. This thesis outlines current methods of implementing this algorithm on a scene of interest and introduces possible improvements for the speed and efficiency of this method by use of block operations on the low resolution frames. The effects of noise on camera motion and various blur models are examined using experimental data to illustrate the differences between the methods discussed.
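
At its core this is the standard Kalman measurement update applied pixel by pixel; below is a minimal scalar sketch, with the motion and blur modeling the thesis discusses assumed to have been handled upstream and the numbers purely illustrative:

```python
# Sketch: scalar Kalman measurement update for one high-resolution pixel.
def kalman_update(x, P, z, H, R):
    """Fuse observation z = H*x + noise(R) into state x with variance P."""
    S = H * P * H + R              # innovation variance
    K = P * H / S                  # Kalman gain
    return x + K * (z - H * x), (1 - K * H) * P

# refine one pixel from three motion-compensated low-resolution looks
x, P = 0.0, 10.0                   # vague prior
for z in [4.2, 3.8, 4.1]:
    x, P = kalman_update(x, P, z, H=1.0, R=0.5)
print(x, P)                        # estimate tightens toward ~4 as looks arrive
```
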
234

Comparative Studies between Robotic Laparoscopic Myomectomy and Abdominal Myomectomy with Factors Affecting Short-Term Surgical Outcomes

Fomo, Amy N. 01 December 2009 (has links)
The purpose of this study is to compare short-term surgical outcomes of robotic and abdominal myomectomy and to analyze the factors affecting total operative time, estimated blood loss, and length of hospital stay, based on a retrospective study of a consecutive case series of 122 patients with symptomatic leiomyomata. Wilcoxon tests, t tests, and multiple linear and logistic regression analyses were performed. Patients in the abdominal group had a larger number of leiomyomata, larger tumor size, and higher BMI. Operative time was longer in the robotic group and was affected by the size and number of tumors, parity, and the interaction between parity and BMI. Estimated blood loss was lower in the robotic group and was affected by the size and number of tumors. The predicted odds of staying one day or less in the hospital for the robotic group were 193.5 times the odds for the abdominal group and were affected by the size and number of tumors.
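
For illustration, a hedged sketch of how an odds ratio of this kind falls out of a logistic regression; the data and variable names below are synthetic stand-ins, not the study's records (statsmodels assumed available):

```python
# Sketch: odds ratio for a binary outcome (hospital stay <= 1 day) from a
# logistic regression. Synthetic data; coefficients are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 122
robotic = rng.integers(0, 2, n)                   # 1 = robotic, 0 = abdominal
tumor_size = rng.normal(6, 2, n)
logit_p = -1.0 + 2.5 * robotic - 0.2 * tumor_size
short_stay = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([robotic, tumor_size]))
fit = sm.Logit(short_stay, X).fit(disp=0)
print(np.exp(fit.params[1]))      # odds ratio: robotic vs abdominal group
```
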
235

Analysis of the Total Food Folate Intake Data from the National Health and Nutrition Examination Survey (NHANES) Using a Generalized Linear Model

Lee, Kyung Ah 01 December 2009 (has links)
The National Health and Nutrition Examination Survey (NHANES) is a respected nationwide program charged with assessing the health and nutritional status of adults and children in the United States. Recent research has found that folic acid plays an important role in preventing birth defects. In this paper, we use the generalized estimating equation (GEE) method to study a generalized linear model (GLM) with a compound symmetric correlation matrix for the NHANES data and investigate significant factors that influence the intake of food folic acid.
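
A minimal sketch of such a fit, using a GEE with an exchangeable (compound-symmetric) working correlation on synthetic clustered data; the NHANES variables themselves are not reproduced here (statsmodels assumed available):

```python
# Sketch: Gaussian GLM fit by GEE with a compound-symmetric working
# correlation. Synthetic clustered data standing in for the survey design.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_groups, n_per = 100, 4
groups = np.repeat(np.arange(n_groups), n_per)
age = rng.uniform(20, 70, n_groups * n_per)
shared = np.repeat(rng.normal(0, 20, n_groups), n_per)  # within-group correlation
folate = 300.0 + 1.5 * age + shared + rng.normal(0, 30, n_groups * n_per)

X = sm.add_constant(age)
model = sm.GEE(folate, X, groups=groups,
               family=sm.families.Gaussian(),
               cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```
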
236

Analyzing Gene Expression Data in Terms of Gene Sets: Gene Set Enrichment Analysis

Li, Wei 01 December 2009 (has links)
DNA microarray biotechnology simultaneously monitors the expression of thousands of genes and aims to identify genes that are differentially expressed under different conditions. From the statistical point of view, this can be restated as identifying genes strongly associated with the response or covariate of interest. Gene Set Enrichment Analysis (GSEA) is a method that focuses the analysis on functionally related gene sets rather than single genes. It helps biologists interpret DNA microarray data in light of their prior biological knowledge of the genes in a gene set. GSEA has been shown to efficiently identify gene sets containing known disease-related genes in real experiments. Here we evaluate the statistical power of this method by simulation studies. The results show that the power of GSEA is good enough to identify gene sets highly associated with the response or covariate of interest.
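
For concreteness, a simplified sketch of the enrichment score at the heart of GSEA: a Kolmogorov-Smirnov-like running sum over a ranked gene list (unweighted form; the published method also weights hits by their correlation with the phenotype):

```python
# Sketch: unweighted GSEA enrichment score over a ranked gene list.
import numpy as np

def enrichment_score(ranked_genes, gene_set):
    """Signed maximum deviation of the hit/miss running sum."""
    in_set = np.array([g in gene_set for g in ranked_genes])
    n, n_hit = len(ranked_genes), in_set.sum()
    step = np.where(in_set, 1.0 / n_hit, -1.0 / (n - n_hit))
    running = np.cumsum(step)
    return running[np.argmax(np.abs(running))]

ranked = [f"g{i}" for i in range(1000)]      # genes sorted by association
top_set = {f"g{i}" for i in range(50)}       # a set concentrated at the top
print(enrichment_score(ranked, top_set))     # near +1, i.e. strongly enriched
```
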
237

Some Contributions in Statistical Discrimination of Different Pathogens Using Observations through FTIR

Wang, Dongmei 01 December 2009 (has links)
Fourier Transform Infrared (FTIR) spectroscopy has been used to discriminate different pathogens, using signals from cells infected with these pathogens versus normal cells as references. For the statistical analysis, Partial Least Squares Regression (PLSR) was utilized to distinguish any two kinds of virus-infected cells from normal cells. Validation using the bootstrap method and cross-validation was employed to calculate the shrinkage of the Area Under the ROC Curve (AUC) and the specificities corresponding to 80%, 90%, and 95% sensitivities. The results show that our procedure can significantly discriminate these pathogens when infected cells are compared with normal cells. Building on this success, PLSR was applied again to simultaneously compare two kinds of virus-infected cells with normal cells. The shrinkage of the Volume Under the Surface (VUS) was calculated to evaluate the model's diagnostic performance. The high value of VUS demonstrates that our method can effectively differentiate virus-infected cells from normal cells.
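
A hedged sketch of the two-class step, pairing scikit-learn's PLS regression with an AUC score on synthetic "spectra"; this illustrates the idea, not the thesis pipeline:

```python
# Sketch: PLS regression as a two-class discriminator scored by AUC.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, p = 100, 200                          # 100 spectra, 200 wavenumbers
y = np.repeat([0, 1], n // 2)            # normal vs virus-infected cells
signal = rng.normal(0, 1, p)             # fixed class-difference direction
X = rng.normal(0, 1, (n, p)) + 0.5 * y[:, None] * signal

pls = PLSRegression(n_components=5).fit(X, y)
scores = pls.predict(X).ravel()
print(roc_auc_score(y, scores))          # in-sample AUC; cross-validate in practice
```
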
238

Theoretical and Numerical Study of Tikhonov's Regularization and Morozov's Discrepancy Principle

Whitney, MaryGeorge L. 01 December 2009 (has links)
The concept of a well-posed problem was introduced by J. Hadamard in 1923, expressing the idea that every mathematical model should have a unique solution that is stable with respect to noise in the input data. If at least one of these properties is violated, the problem is ill-posed (and unstable). There are numerous examples of ill-posed problems in computational mathematics and its applications. Classical numerical algorithms, when applied to an ill-posed model, turn out to be divergent. Hence one has to develop special regularization techniques, which take advantage of a priori information (normally available), in order to solve an ill-posed problem in a stable fashion. In this thesis, a theoretical and numerical investigation of Tikhonov's (variational) regularization is presented. The regularization parameter is computed by Morozov's discrepancy principle, and a first-kind integral equation is used for the numerical simulations.
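
A minimal numerical sketch of the pair, assuming a discretized first-kind integral operator: solve the Tikhonov normal equations, then choose the parameter at which the residual matches the noise level, as Morozov's principle prescribes:

```python
# Sketch: Tikhonov regularization with alpha chosen by Morozov's discrepancy
# principle, ||A x_alpha - b|| ~ noise level (discretized first-kind equation).
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
n = 100
t = np.linspace(0, 1, n)
A = np.exp(-5 * np.abs(t[:, None] - t[None, :])) / n    # smoothing kernel
x_true = np.sin(2 * np.pi * t)
delta = 1e-3                                            # per-sample noise level
b = A @ x_true + delta * rng.normal(size=n)

def x_alpha(alpha):
    """Minimizer of ||A x - b||^2 + alpha ||x||^2 via the normal equations."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def discrepancy(alpha):
    return np.linalg.norm(A @ x_alpha(alpha) - b) - delta * np.sqrt(n)

alpha = brentq(discrepancy, 1e-14, 1e2)   # residual grows with alpha: one root
print(alpha, np.linalg.norm(x_alpha(alpha) - x_true))
```
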
239

Comparing Cognitive Decision Models of the Iowa Gambling Task in Individuals Following Temporal Lobectomy

Jeyarajah, Jenny Vennukkah 19 November 2009 (has links)
This study examined the theoretical basis for the decision-making behavior of patients with right or left temporal lobectomy and a control group when they participated in the Iowa Gambling Task. Two cognitive decision models, the Expectancy Valence Model and the Strategy Switching Heuristic Choice Model, were compared for best fit. The best-fitting model was then chosen to provide the basis for parameter estimation (sources of decision making, i.e., cognitive, motivational, and response processes) and interpretation. Both models outperformed the baseline model. However, a comparison of the G² means of the two cognitive decision models showed that the Expectancy Valence Model had the higher mean and was thus the better of the two. Decision parameters were then analyzed for the Expectancy Valence Model; the analysis revealed that the parameters did not differ significantly among the three groups. Finally, data were simulated from the baseline model to determine whether the fitted models differ from baseline.
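
For reference, a minimal sketch of the Expectancy Valence Model's two ingredients as commonly formulated: delta-rule valence learning and a softmax choice rule with trial-dependent sensitivity. The parameterization below (w for attention to wins versus losses, a for the learning rate, c for choice consistency) is one common convention, not necessarily the exact one used in the thesis:

```python
# Sketch: Expectancy Valence model, one common parameterization.
import numpy as np

def ev_update(E, deck, win, loss, w, a):
    """Delta rule: valence v = w*win - (1-w)*|loss| updates the chosen deck."""
    v = w * win - (1 - w) * abs(loss)
    E[deck] += a * (v - E[deck])
    return E

def choice_probs(E, trial, c):
    """Softmax over deck expectancies with sensitivity theta = (trial/10)^c."""
    theta = (trial / 10.0) ** c
    z = np.exp(theta * (E - E.max()))      # numerically stable softmax
    return z / z.sum()

E = np.zeros(4)                            # four decks in the Iowa Gambling Task
E = ev_update(E, deck=2, win=100, loss=-50, w=0.6, a=0.3)
print(choice_probs(E, trial=5, c=0.5))
```
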
240

Weak Primary Decomposition of Modules Over a Commutative Ring

Stalvey, Harrison 21 April 2010 (has links)
This paper presents the theory of weak primary decomposition of modules over a commutative ring. A generalization of the classic well-known theory of primary decomposition, weak primary decomposition is a consequence of the notions of weakly associated prime ideals and nearly nilpotent elements, which were introduced by N. Bourbaki. We begin by discussing basic facts about classic primary decomposition. Then we prove the results on weak primary decomposition, which are parallel to the classic case. Lastly, we define and generalize the Compatibility property of primary decomposition.
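
Since the thesis builds on the classical theory, it may help to recall the classical statement alongside the Bourbaki notion being generalized; a sketch in symbols (notation and phrasing are ours, not quoted from the thesis):

```latex
% Classical primary decomposition over a Noetherian ring R: a submodule N
% of a finitely generated R-module M can be written as
\[
  N \;=\; Q_1 \cap Q_2 \cap \cdots \cap Q_n,
  \qquad \operatorname{Ass}(M/Q_i) = \{P_i\},
\]
% with each Q_i being P_i-primary. The weak theory is built instead on the
% weakly associated primes (Bourbaki): P is weakly associated to a module X
% when P is minimal among the primes containing the annihilator of some element:
\[
  \operatorname{Ass}_f(X) \;=\; \{\, P \in \operatorname{Spec} R :
    P \text{ is minimal over } \operatorname{Ann}(x) \text{ for some } x \in X \,\}.
\]
```
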
