121

Bootstrap and Empirical Likelihood-based Semi-parametric Inference for the Difference between Two Partial AUCs

Huang, Xin 17 July 2008 (has links)
With new tests being developed and marketed, the comparison of the diagnostic accuracy of two continuous-scale diagnostic tests is of great importance. Comparing the partial areas under the receiver operating characteristic curves (pAUCs) is an effective way to evaluate the accuracy of two diagnostic tests. In this thesis, we study semi-parametric inference for the difference between two pAUCs. A normal approximation for the distribution of the difference between two pAUCs is derived. The empirical likelihood ratio for the difference between two pAUCs is defined, and its asymptotic distribution is shown to be a scaled chi-square distribution. Bootstrap and empirical likelihood-based inferential methods for the difference are proposed. We construct five confidence intervals for the difference between two pAUCs. Simulation studies are conducted to compare the finite-sample performance of these intervals, and a real example is used as an application of our recommended intervals.
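As a concrete illustration of the bootstrap approach described above, the following sketch estimates each pAUC nonparametrically from the empirical ROC curve and forms a percentile-bootstrap interval for the difference. The false-positive range (0, 0.2), the function names, and the percentile calibration are illustrative assumptions, not details taken from the thesis.

```python
# Hedged sketch: percentile-bootstrap CI for the difference between two pAUCs.
# Inputs are numpy arrays of test scores; all names and the FPR range are assumptions.
import numpy as np

def empirical_roc(diseased, healthy):
    """Empirical (FPR, TPR) points, ordered so that FPR is increasing."""
    cuts = np.unique(np.concatenate((diseased, healthy)))[::-1]
    fpr = np.array([(healthy >= c).mean() for c in cuts])
    tpr = np.array([(diseased >= c).mean() for c in cuts])
    return np.concatenate(([0.0], fpr, [1.0])), np.concatenate(([0.0], tpr, [1.0]))

def pauc(diseased, healthy, fpr_hi=0.2):
    """Partial AUC over the false-positive range (0, fpr_hi), trapezoid rule."""
    fpr, tpr = empirical_roc(diseased, healthy)
    grid = np.linspace(0.0, fpr_hi, 201)
    t = np.interp(grid, fpr, tpr)              # interpolated TPR on the FPR grid
    return np.sum((grid[1:] - grid[:-1]) * (t[1:] + t[:-1]) / 2.0)

def bootstrap_pauc_diff_ci(d1, h1, d2, h2, fpr_hi=0.2, B=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap interval for pAUC(test 1) - pAUC(test 2)."""
    rng = np.random.default_rng(seed)
    diffs = np.empty(B)
    for b in range(B):                         # resample within each group
        diffs[b] = (pauc(rng.choice(d1, d1.size), rng.choice(h1, h1.size), fpr_hi)
                    - pauc(rng.choice(d2, d2.size), rng.choice(h2, h2.size), fpr_hi))
    return tuple(np.quantile(diffs, [alpha / 2, 1 - alpha / 2]))
```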
122

New Non-Parametric Confidence Interval for the Youden Index

Zhou, Haochuan 18 July 2008 (has links)
The Youden index, a main summary index for the receiver operating characteristic (ROC) curve, is a comprehensive measure of the effectiveness of a diagnostic test. For a continuous-scale diagnostic test, the optimal cut-point for declaring disease positivity is the cut-point that maximizes the sum of sensitivity and specificity. Finding the Youden index of the test is therefore equivalent to maximizing the sum of sensitivity and specificity over all possible values of the cut-point. In this thesis, we propose a new non-parametric confidence interval for the Youden index. Extensive simulation studies are conducted to compare the relative performance of the new interval with existing intervals for the index. Our simulation results indicate that the newly developed non-parametric method performs as well as the existing parametric method and has better finite-sample performance than the existing non-parametric methods. The new method is flexible and easy to implement in practice. A real example is also used to illustrate the application of the proposed interval.
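For reference, a minimal sketch of the standard empirical estimate of the Youden index and its optimal cut-point is given below; it shows only the point estimate around which any interval would be built, not the new confidence interval proposed in the thesis.

```python
# Hedged sketch: empirical Youden index J = max_c {sensitivity(c) + specificity(c) - 1},
# searching all observed scores as candidate cut-points (scores >= c are "positive").
import numpy as np

def youden_index(diseased, healthy):
    cuts = np.unique(np.concatenate((diseased, healthy)))
    best_j, best_cut = -np.inf, None
    for c in cuts:
        sens = np.mean(diseased >= c)   # true-positive rate at cut-point c
        spec = np.mean(healthy < c)     # true-negative rate at cut-point c
        if sens + spec - 1.0 > best_j:
            best_j, best_cut = sens + spec - 1.0, c
    return best_j, best_cut
```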
123

Statistical Evaluation of Continuous-Scale Diagnostic Tests with Missing Data

Wang, Binhuan 12 June 2012 (has links)
Receiver operating characteristic (ROC) curve methodology is the statistical methodology for assessing the accuracy of diagnostic tests or biomarkers. The most widely used statistical methods for inference on ROC curves are complete-data-based parametric, semi-parametric or nonparametric methods. However, these methods cannot be used in diagnostic applications with missing data. In practice, missing diagnostic data commonly occur for various reasons, such as medical tests being too expensive, too time-consuming or too invasive. This dissertation aims to develop new nonparametric statistical methods for evaluating the accuracy of diagnostic tests or biomarkers in the presence of missing data. Specifically, novel nonparametric statistical methods are developed under different types of missing data for (i) inference on the area under the ROC curve (AUC, a summary index for the diagnostic accuracy of the test) and (ii) joint inference on the sensitivity and the specificity of a continuous-scale diagnostic test. We provide a general framework that combines empirical likelihood and general estimating equations with nuisance parameters for the joint inference of sensitivity and specificity with missing diagnostic data. The proposed methods have sound theoretical properties. The theoretical development is challenging because the proposed profile log-empirical likelihood ratio statistics are not standard sums of independent random variables. The new methods combine the power of likelihood-based approaches with the jackknife method in ROC studies, and are therefore expected to be more robust, more accurate and less computationally intensive than existing methods in the evaluation of competing diagnostic tests.
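The complete-data AUC estimator such methods build on is the Mann-Whitney statistic; the sketch below shows it together with a naive complete-case treatment of missing scores, which is the kind of baseline the dissertation's missing-data methods are designed to improve on. The names and the NaN convention for missing values are assumptions.

```python
# Hedged sketch: nonparametric (Mann-Whitney) AUC with naive complete-case
# handling of missing scores (encoded as NaN). This is the baseline estimator,
# not the dissertation's missing-data methodology.
import numpy as np

def auc_complete_case(diseased, healthy):
    d = np.asarray(diseased, dtype=float)
    h = np.asarray(healthy, dtype=float)
    d = d[~np.isnan(d)]                 # drop missing diseased scores
    h = h[~np.isnan(h)]                 # drop missing healthy scores
    # P(X > Y) + 0.5 * P(X = Y), estimated over all diseased/healthy pairs
    gt = (d[:, None] > h[None, :]).mean()
    eq = (d[:, None] == h[None, :]).mean()
    return gt + 0.5 * eq
```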
124

Evaluation and implementation of neural brain activity detection methods for fMRI

Breitenmoser, Sabina January 2005 (has links)
Functional magnetic resonance imaging (fMRI) is a neuroimaging technique used to study brain function and enhance our understanding of the brain. The technique is based on MRI, a painless, noninvasive image acquisition method without harmful radiation. Small local changes in blood oxygenation, reflected as small intensity changes in the MR images, are used to locate the active brain areas. Radio-frequency pulses and a strong static magnetic field are used to measure the correlation between physical changes in the brain and mental functioning during the performance of cognitive tasks. This master's thesis presents approaches for the analysis of fMRI data. Constrained canonical correlation analysis (CCA), which is able to exploit the spatio-temporal nature of an active area, is presented and tested on real human fMRI data. Because the actual distribution of active brain voxels is not known for real human data, a modified receiver operating characteristic (modified ROC) analysis that deals with this lack of knowledge is presented for evaluating the detection algorithms. The tests on real human data show better detection efficiency for the constrained CCA algorithm. A second aim of this thesis was to implement the promising constrained CCA technique in the software environment SPM. To integrate the constrained CCA algorithms into the fMRI part of SPM2, a toolbox of Matlab functions has been programmed for further use by neuroscientists. The new SPM functionality for exploiting the spatial extent of active regions with CCA is presented and tested.
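To make the CCA step concrete, here is a small sketch of plain (unconstrained) canonical correlation between a neighbourhood of voxel time series X and a set of task regressors Y; the constrained variant studied in the thesis adds constraints on the spatial weights that are not reproduced here.

```python
# Hedged sketch: largest canonical correlation between voxel time series X
# (time points x voxels) and task regressors Y (time points x regressors).
# This is ordinary CCA; the thesis's *constrained* CCA is not reproduced.
import numpy as np

def max_canonical_correlation(X, Y):
    Xc = X - X.mean(axis=0)             # center both blocks
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)            # orthonormal bases for the column spaces
    Qy, _ = np.linalg.qr(Yc)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return s[0]                         # canonical correlations = singular values

# Voxels whose neighbourhood attains a high canonical correlation with the
# task regressors would be declared active after thresholding.
```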
125

Prediction Performance of Survival Models

Yuan, Yan January 2008 (has links)
Statistical models are often used for the prediction of future random variables. There are two types of prediction: point prediction and probabilistic prediction. Prediction accuracy is quantified by performance measures, which are typically based on loss functions. We study estimators of these performance measures, namely the prediction error for point predictors and performance scores for probabilistic predictors. The focus of this thesis is to assess the prediction performance of survival models that analyze censored survival times. To accommodate censoring, we extend the inverse probability of censoring weighting (IPCW) method so that arbitrary loss functions can be handled. We also develop confidence interval procedures for these performance measures. We compare model-based, apparent-loss-based and cross-validation estimators of prediction error under model misspecification and variable selection, for absolute relative error loss (Chapter 3) and misclassification error loss (Chapter 4). Simulation results indicate that cross-validation procedures typically produce reliable point estimates and confidence intervals, whereas model-based estimates are often sensitive to model misspecification. The methods are illustrated in two medical contexts in Chapter 5. The apparent-loss-based and cross-validation estimators of performance scores for probabilistic predictors are discussed and illustrated with an example in Chapter 6. We also make connections for performance.
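A minimal sketch of the IPCW idea for a point predictor under absolute relative error loss follows, with a hand-rolled Kaplan-Meier estimate of the censoring distribution G; the variable names and the loss are illustrative, and refinements such as evaluating G at the left limit are glossed over.

```python
# Hedged sketch: IPCW estimate of prediction error under right censoring.
# Uncensored subjects are up-weighted by 1/G(T_i), where G is the Kaplan-Meier
# estimate of the censoring survival function (a censoring "event" is event == 0).
import numpy as np

def censoring_survival(time, event):
    order = np.argsort(time)
    t, e = np.asarray(time)[order], np.asarray(event)[order]
    at_risk = len(t) - np.arange(len(t))
    surv = np.cumprod(1.0 - (e == 0) / at_risk)   # product-limit estimate of G
    return t, surv

def ipcw_error(pred, time, event):
    """Absolute relative error loss, IPCW-weighted; censored subjects get weight 0."""
    t_sorted, surv = censoring_survival(time, event)
    idx = np.searchsorted(t_sorted, time, side="right") - 1
    G = np.where(idx < 0, 1.0, surv[np.clip(idx, 0, len(surv) - 1)])
    w = np.asarray(event) / np.maximum(G, 1e-10)
    loss = np.abs(np.asarray(time) - np.asarray(pred)) / np.asarray(time)
    return float(np.sum(w * loss) / len(time))
```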
126

Statistical Geocomputing: Spatial Outlier Detection in Precision Agriculture

Chu Su, Peter 29 September 2011 (has links)
The collection of crop yield data has become much easier with the introduction of technologies such as the Global Positioning System (GPS), ground-based yield sensors, and Geographic Information Systems (GIS). This explosive growth and widespread use of spatial data has challenged our ability to derive useful spatial knowledge. In addition, outlier detection, an important pre-processing step, remains a challenge because both the technique and the definition of the spatial neighbourhood are non-trivial, and quantitative assessment of false positives, false negatives, and the concept of a region outlier remains unexplored. The overall aim of this study is to evaluate different spatial outlier detection techniques in terms of their accuracy and computational efficiency, and to examine the performance of these outlier removal techniques in a site-specific management context. In a simulation study, unconditional sequential Gaussian simulation is performed to generate crop yield as the response variable along with two explanatory variables. Point and region spatial outliers are added to the simulated datasets by randomly selecting observations and adding or subtracting a Gaussian error term. Because the simulated data contain known spatial outliers, the assessment of spatial outlier techniques can be conducted as a binary classification exercise, treating each spatial outlier detection technique as a classifier. Algorithm performance is evaluated with the area and partial area under the ROC curve up to different true-positive and false-positive rates. Outlier effects in on-farm research are assessed in terms of the influence of each spatial outlier technique on coefficient estimates from a spatial regression model that accounts for autocorrelation. Results indicate that for point outliers, spatial outlier techniques that account for spatial autocorrelation tend to outperform standard techniques in terms of higher sensitivity, lower false-positive detection rate, and consistency of performance; they are also more resistant to changes in the neighbourhood definition. For region outliers, standard techniques tend to be better in all performance aspects because they are less affected by masking and swamping effects. In particular, one spatial autocorrelation technique, Averaged Difference, is superior to all other techniques in both the point and region outlier scenarios because of its ability to incorporate spatial autocorrelation while revealing the variation between nearest neighbours. In terms of decision-making, all algorithms led to slightly different coefficient estimates and may therefore result in distinct decisions for site-specific management. The results outlined here allow an improved removal of potentially problematic crop yield data points; based on them, we recommend the Averaged Difference algorithm for cleaning spatial outliers in yield datasets. Identifying the optimal nearest-neighbour parameter for the neighbourhood aggregation function is still non-trivial; our recommendation is to specify a number of nearest neighbours large enough to capture the region size. Lastly, the unbiased coefficient estimates obtained with Averaged Difference suggest it is the better method for pre-processing spatial outliers in crop yield data, underlining its suitability for detecting spatial outliers in the context of on-farm research.
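A sketch of one plausible form of the Averaged Difference score discussed above follows: each yield observation is compared with the mean of its k nearest spatial neighbours, and the standardized differences are thresholded. The exact formulation in the thesis may differ; the names, k, and the threshold are assumptions.

```python
# Hedged sketch: an "Averaged Difference" style spatial outlier score. Each
# observation's yield is compared with the mean yield of its k nearest
# neighbours; large standardized scores flag candidate spatial outliers.
import numpy as np

def averaged_difference_outliers(coords, values, k=8, z_cut=3.0):
    coords, values = np.asarray(coords, float), np.asarray(values, float)
    n = len(values)
    scores = np.empty(n)
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)   # distances to all points
        d[i] = np.inf                                     # exclude the point itself
        nn = np.argpartition(d, k)[:k]                    # k nearest neighbours
        scores[i] = values[i] - values[nn].mean()         # averaged difference
    z = (scores - scores.mean()) / scores.std()
    return np.flatnonzero(np.abs(z) > z_cut)              # indices of flagged points
```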
127

The relations among organizational transformation, employees' commitment and working morale: a study on "the ROC Armed Forces Streamlining Program" of the Ministry of National Defense

Hui, Shi 26 July 2006 (has links)
Whether the execution of "the ROC Armed Forces Streamlining Program" proceeds smoothly depends heavily on members' cognition of the transformation, their organizational commitment, and their working morale. However, few papers discuss the relations among these elements. The objective of this thesis is to study, through a questionnaire and its analysis, the relations among transformation cognition, organizational commitment, and morale in a high-level command undergoing organizational transformation. The conclusions are as follows: 1. Given employee participation, understanding, and guaranteed rights and interests, employees will be willing to stay in the service. 2. The higher the employees' recognition and evaluation of the transformation objective, the greater their concern for the organization's future development, their pursuit of the objective, and their devotion to and valuing of their jobs. 3. Promoting employees' commitment to the organization motivates them to treat their work as the center of their lives and therefore to pursue better achievements. 4. Senior high-ranking officers with higher educational backgrounds and employees with long service years tend to have a higher degree of recognition of the transformation. 5. Employees who are senior, have high educational backgrounds, and have long service years tend to have higher overall organizational commitment and a stronger willingness to stay in their positions. 6. Those who are 25 to 34 years old, hold military appointments, are married, have high educational backgrounds, serve as directors or deputy directors, hold higher ranks, and have longer service years tend to have better recognition of the organization, devotion, and group spirit. Based on these results, four suggestions are addressed: 1. Respect the participation of employees and guarantee their rights and interests. 2. Encourage employees to attend courses or training during off-hours in order to build up multiple specialties. 3. Understand employees' characteristics and specialties in order to adopt a strategy of differentiated management. 4. Enhance employees' recognition of the attainment of transformation benefits and draft a complete set of measures.
128

Statistical Methods In Credit Rating

Sezgin, Ozge 01 September 2006 (has links) (PDF)
Credit risk is one of the major risks banks and financial institutions face. With the New Basel Capital Accord, banks and financial institutions have the opportunity to improve their risk management processes by using the Internal Ratings-Based (IRB) approach. In this thesis, we focus on the internal credit rating process. First, a short overview of credit scoring techniques and validation techniques is given. Using a real data set on manufacturing firms obtained from a Turkish bank, default prediction models are built with logistic regression, probit regression, discriminant analysis, and classification and regression trees. To improve the performance of the models, the optimum sample for logistic regression is selected from the data set and taken as the model construction sample. In addition, we describe how to convert continuous variables to ordered scaled variables to avoid problems of differing scales. After the models are built, their performance on the whole data set, both in-sample and out-of-sample, is evaluated with validation techniques suggested by the Basel Committee. In most cases the classification and regression trees model dominates the other techniques. After the credit scoring models are constructed and evaluated, the cut-off values used to map the probabilities of default obtained from logistic regression to rating classes are determined by dual-objective optimization: the cut-off values that give the maximum area under the ROC curve and the minimum mean squared error of the regression tree are taken as the optimum thresholds after 1000 simulations. Keywords: Credit Rating, Classification and Regression Trees, ROC Curve, Pietra Index
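As a rough illustration of the first modelling stage described here, the sketch below fits a logistic-regression probability-of-default (PD) model and maps predicted PDs to rating classes with simple quantile cut-offs; the dual-objective cut-off optimization and the other classifiers in the thesis are not reproduced, and all names are assumptions.

```python
# Hedged sketch: logistic-regression PD model with a quantile-based mapping of
# PDs to rating classes. The thesis's dual-objective cut-off optimization is
# NOT reproduced; the quantile rule is only an illustrative stand-in.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def fit_pd_model(X, y):
    """X: firm-level financial ratios; y: 1 = default, 0 = non-default."""
    model = LogisticRegression(max_iter=1000).fit(X, y)
    pd_hat = model.predict_proba(X)[:, 1]
    print("in-sample AUC:", roc_auc_score(y, pd_hat))   # one Basel-style check
    return model, pd_hat

def assign_rating_classes(pd_hat, n_classes=7):
    """Map PDs to ordinal rating classes via equal-frequency cut-offs."""
    cuts = np.quantile(pd_hat, np.linspace(0, 1, n_classes + 1)[1:-1])
    return np.digitize(pd_hat, cuts)                    # 0 = safest class
```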
129

Empirical Likelihood Confidence Intervals for ROC Curves Under Right Censorship

Yang, Hanfang 16 September 2010 (has links)
In this thesis, we apply the smoothed empirical likelihood method to construct confidence intervals for the receiver operating characteristic (ROC) curve under right censoring. As a particular means of comparing the distributions of two populations, the ROC curve is constructed by combining the cumulative distribution function and the quantile function. Under mild conditions, the smoothed empirical likelihood ratio converges to a chi-square distribution, in line with the well-known Wilks theorem. The performance of the empirical likelihood method is illustrated by simulation studies in terms of coverage probability and average length of the confidence intervals. Finally, a primary biliary cirrhosis data set is used to illustrate the proposed empirical likelihood procedure.
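The computational core of such procedures is the empirical likelihood ratio itself, sketched below in its simplest one-sample form for a mean; the thesis's smoothed, censored-data version for ROC curves is more involved, and this generic routine is shown only to illustrate the Wilks-type calibration. Names and tolerances are assumptions.

```python
# Hedged sketch: -2 log empirical likelihood ratio for a mean. Under Wilks-type
# results this is compared with a chi-square(1) quantile; the thesis's smoothed,
# censored-data statistic is not reproduced here.
import numpy as np
from scipy.optimize import brentq

def neg2_log_elr(x, mu):
    z = np.asarray(x, float) - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                    # mu outside the convex hull: ratio is 0
    # The Lagrange multiplier lam solves sum(z / (1 + lam * z)) = 0 subject to
    # 1 + lam * z_i > 0 for all i; that constraint gives the bracket below.
    lo = (-1.0 + 1e-10) / z.max()
    hi = (-1.0 + 1e-10) / z.min()
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

# A 95% EL interval for the mean is {mu : neg2_log_elr(x, mu) <= 3.841},
# found in practice by scanning or root-finding over mu.
```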
130

Empirical Likelihood-Based Nonparametric Inference for the Difference between Two Partial AUCs

Yuan, Yan 02 August 2007 (has links)
Comparing the accuracy of two continuous-scale tests is increasingly important when a new test is developed. The traditional approach, comparing the entire areas under two receiver operating characteristic (ROC) curves, is not sensitive when the two ROC curves cross each other. A better approach for comparing the accuracy of two diagnostic tests is to compare the partial areas under the two ROC curves (pAUCs) over the specificity interval of interest. In this thesis, we propose bootstrap and empirical likelihood (EL) approaches for inference on the difference between two partial AUCs. The empirical likelihood ratio for the difference between two partial AUCs is defined and its limiting distribution is shown to be a scaled chi-square distribution. EL-based confidence intervals for the difference between two partial AUCs are obtained. Additionally, we conduct simulation studies to compare the four proposed EL-based and bootstrap-based intervals.
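Given any implementation of the scaled chi-square EL statistic described here, the confidence interval is obtained by inversion; a generic sketch of that last step follows, where el_stat(delta) and the scale factor are placeholders standing in for the thesis's statistic, not reproductions of it.

```python
# Hedged sketch: inverting a (scaled) EL ratio statistic into a confidence
# interval by grid search. el_stat(delta) is a placeholder for a statistic for
# the pAUC difference; scale is its estimated chi-square scaling factor.
import numpy as np
from scipy.stats import chi2

def invert_el_interval(el_stat, grid, scale=1.0, level=0.95):
    crit = chi2.ppf(level, df=1)                      # chi-square(1) critical value
    kept = [d for d in grid if el_stat(d) / scale <= crit]
    return (min(kept), max(kept)) if kept else None   # assumes a connected region
```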
