251

Discrimination of High Risk and Low Risk Populations for the Treatment of STDs

Zhao, Hui 05 August 2011 (has links)
Discriminating truly diseased patients from healthy persons is an important step in clinical practice, and it is desirable to make that discrimination from readily available information such as personal characteristics, lifestyle, and contact with diseased patients. In this study, a score is calculated for each patient from survey responses using a generalized linear model, and disease status is then determined from previous records of sexually transmitted diseases (STDs). The study will help clinics classify patients as likely diseased or likely healthy, which in turn affects how patients are screened: complete screening for probable cases and routine screening for probably healthy persons.
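A minimal sketch of the kind of score construction described above: a logistic generalized linear model fit with statsmodels, scored against a cutoff. The survey variables (age, partners, contact), the simulated data, and the 0.5 cutoff are hypothetical placeholders, not the thesis's actual covariates or threshold.

```python
# Hypothetical sketch: score survey respondents with a logistic GLM and
# flag them as high or low risk at an illustrative cutoff.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
survey = pd.DataFrame({
    "age": rng.integers(18, 60, n),
    "partners": rng.poisson(2, n),        # stand-in lifestyle variable
    "contact": rng.integers(0, 2, n),     # contact with a diseased patient
})
# Simulated "previous STD record" used as the training label.
logit = -3 + 0.02 * survey["age"] + 0.4 * survey["partners"] + 1.2 * survey["contact"]
survey["std_record"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(survey[["age", "partners", "contact"]])
model = sm.GLM(survey["std_record"], X, family=sm.families.Binomial()).fit()

survey["score"] = model.predict(X)                         # fitted risk score in [0, 1]
survey["high_risk"] = (survey["score"] >= 0.5).astype(int)  # cutoff is illustrative
print(model.summary())
```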
252

Testing an Assumption of Non-Differential Misclassification in Case-Control Studies

Hui, Qin 01 August 2011 (has links)
One issue regarding misclassification in case-control studies is whether the misclassification error rates are the same for cases and controls. A common practice is to assume that the rates are the same (the "non-differential" assumption). However, it is questionable whether this assumption holds in many case-control studies. Unfortunately, no test has been available to assess the validity of the non-differential assumption when validation data are not available. We propose the first such method for testing the non-differential assumption in a case-control study with a 2 × 2 contingency table. First, the Exposure Operating Characteristic curve is defined. Next, two non-parametric methods are applied to test the assumption of non-differential misclassification. Three examples from practical applications are used to illustrate the methods, and a comparison is made.
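To illustrate why the assumption matters (this is not the thesis's EOC-based test, just a small simulation under assumed exposure prevalences and error rates), the sketch below shows how the observed odds ratio from a 2 × 2 table is distorted when exposure misclassification is non-differential versus differential.

```python
# Hypothetical simulation: the same true exposure-disease association is
# distorted differently when misclassification rates differ between cases
# and controls. All prevalences and error rates are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_cases, n_controls = 5000, 5000
p_exp_cases, p_exp_controls = 0.40, 0.25   # true exposure prevalences (OR ~ 2)

def observed_odds_ratio(sens_cases, sens_controls, spec=0.95):
    """Odds ratio after misclassifying true exposure with the given sensitivities."""
    true_cases = rng.binomial(1, p_exp_cases, n_cases)
    true_controls = rng.binomial(1, p_exp_controls, n_controls)
    obs_cases = np.where(true_cases == 1,
                         rng.binomial(1, sens_cases, n_cases),
                         rng.binomial(1, 1 - spec, n_cases))
    obs_controls = np.where(true_controls == 1,
                            rng.binomial(1, sens_controls, n_controls),
                            rng.binomial(1, 1 - spec, n_controls))
    a, b = obs_cases.sum(), n_cases - obs_cases.sum()
    c, d = obs_controls.sum(), n_controls - obs_controls.sum()
    return (a * d) / (b * c)

print("non-differential:", observed_odds_ratio(0.8, 0.8))  # biased toward the null
print("differential:    ", observed_odds_ratio(0.9, 0.7))  # bias can go either way
```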
253

Assessment of the Sustained Financial Impact of Risk Engineering Service on Insurance Claims Costs

Parker, Bobby I, Mr. 01 December 2011 (has links)
This research creates a comprehensive statistical model relating the financial impact of risk engineering activity to insurance claims costs. Specifically, the model shows important statistical relationships among six variables: type of risk engineering activity, risk engineering dollar cost, duration of risk engineering service, type of customer by industry classification, dollar premium amounts, and dollar claims costs. We accomplish this using a large data sample of approximately 15,000 customer-years of insurance coverage and risk engineering activity. The data come from an international casualty/property insurance company and cover four years of operations, 2006-2009. The statistical model chosen is the linear mixed model, as implemented in SAS 9.2 software. This method provides essential capabilities, including the flexibility to work with data having missing values and the ability to reveal time-dependent statistical associations.
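The study fits its model in SAS; the sketch below is a rough Python analogue using statsmodels MixedLM, with the customer as the random-effect grouping factor and repeated yearly observations. All column names (customer_id, year, re_cost, industry, premium, claims) and the simulated data are hypothetical placeholders, not the study's variables.

```python
# Rough analogue of a PROC MIXED-style analysis: claims cost modeled on risk
# engineering cost, premium, and industry, with a random intercept per customer.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_customers, n_years = 200, 4
df = pd.DataFrame({
    "customer_id": np.repeat(np.arange(n_customers), n_years),
    "year": np.tile([2006, 2007, 2008, 2009], n_customers),
    "re_cost": rng.gamma(2.0, 5000.0, n_customers * n_years),
    "industry": rng.choice(["mfg", "retail", "transport"], n_customers * n_years),
    "premium": rng.gamma(3.0, 20000.0, n_customers * n_years),
})
customer_effect = np.repeat(rng.normal(0, 5000, n_customers), n_years)
df["claims"] = (10000 - 0.3 * df["re_cost"] + 0.1 * df["premium"]
                + customer_effect + rng.normal(0, 4000, len(df)))

# Customer is the random-effect grouping factor; years are repeated measures.
model = smf.mixedlm("claims ~ re_cost + premium + C(industry) + year",
                    data=df, groups=df["customer_id"]).fit()
print(model.summary())
```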
254

Revisiting the Dimensions of Residential Segregation

Sharp, Harry 01 August 2011 (has links)
The first major work to analyze the dimensions of segregation, done in the late 1980s by Massey and Denton, identified five dimensions that explain the phenomenon of segregation. Since the original work was done in 1988, it seems relevant to revisit the issue with new data. Massey and Denton used factor analysis to identify the latent structure underlying the phenomenon. In this research, their methodology is applied to a more complete data set from the 1980 Census to confirm their results and extend the methodology. Due to problems identified during the analysis, confirmation was not possible. However, a simpler structure comprising only two factors was identified. This structure is replicated when the methodology is applied to the 1990 and 2000 Census data, demonstrating the robustness of the methodology.
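A small illustration of the factor-analytic step described above, using scikit-learn's FactorAnalysis on simulated data. The index names, the number of indices, and the two-factor generating structure are assumptions for the sketch; the actual analysis used the full set of segregation indices computed from Census data.

```python
# Illustrative sketch: fit a two-factor model to a matrix of segregation
# indices (simulated here) and inspect the rotated loadings.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n_metros = 300
# Simulate two latent dimensions and five observed indices loading on them.
latent = rng.normal(size=(n_metros, 2))
loadings = np.array([[0.9, 0.0],
                     [0.8, 0.1],
                     [0.1, 0.9],
                     [0.0, 0.8],
                     [0.5, 0.5]])
indices = latent @ loadings.T + rng.normal(scale=0.3, size=(n_metros, 5))
X = pd.DataFrame(indices, columns=["dissimilarity", "isolation", "delta",
                                   "centralization", "spatial_proximity"])

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
print(pd.DataFrame(fa.components_.T, index=X.columns, columns=["factor1", "factor2"]))
```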
255

The Path from Foster Care to Permanence: Does Proximity Outweigh Stability?

Fost, Michael 01 August 2011 (has links)
This thesis investigates the relationship between foster care placement settings and discharges. Placement settings are where foster children live: foster homes, group homes, and so on, and an individual child may have one or several placements. In the interest of stability, federal funding to states depends in part on low numbers of placement moves. Federal reviews, however, do not consider whether the placement settings resemble permanent family life (foster homes compared to congregate care) or the direction of placement moves. Competing risks regression was used to analyze time-to-discharge data for foster children in Georgia. Discharges (the competing risks) were compared based on the number and the direction of placement moves. Children whose movement patterns favored placements similar to permanent family life were found to have higher probabilities of discharge to safe permanence. This thesis promotes "proximity to permanence" as an important, but often overlooked, consideration in foster care placements.
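A hedged sketch of a competing-risks style analysis of time to discharge. The thesis uses competing risks regression; lifelines has no Fine-Gray implementation, so this stand-in fits cause-specific Cox models, treating the other discharge type as censoring. The columns (months, outcome, n_moves, toward_family) and the simulated data are hypothetical placeholders for the Georgia foster care data.

```python
# Cause-specific Cox models as a simple stand-in for competing risks regression.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 1000
df = pd.DataFrame({
    "months": rng.exponential(18, n),
    # 0 = still in care (censored), 1 = safe permanence, 2 = other discharge
    "outcome": rng.choice([0, 1, 2], size=n, p=[0.3, 0.5, 0.2]),
    "n_moves": rng.poisson(2, n),
    "toward_family": rng.integers(0, 2, n),  # moves toward family-like settings
})

def cause_specific_cox(event_code):
    d = df.copy()
    d["event"] = (d["outcome"] == event_code).astype(int)  # other causes -> censored
    cph = CoxPHFitter()
    cph.fit(d[["months", "event", "n_moves", "toward_family"]],
            duration_col="months", event_col="event")
    return cph

cause_specific_cox(1).print_summary()   # discharge to safe permanence
cause_specific_cox(2).print_summary()   # other discharge types
```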
256

Estimation of the Optimal Threshold Using Kernel Estimate and ROC Curve Approaches

Zhu, Zi 23 May 2011 (has links)
Credit line analysis plays an important role in the housing market, especially given the large number of frozen loans during the current financial crisis. In this thesis, we apply kernel estimation and the Receiver Operating Characteristic (ROC) curve to the credit loan application process to help banks select the optimal threshold for differentiating good customers from bad customers. A better choice of threshold is essential for banks to prevent losses and maximize profit from loans. One of the main advantages of our approach is that it does not require specifying the distribution of the latent risk score. We apply the bootstrap method to construct a confidence interval for the estimate.
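A minimal sketch in the spirit of the approach above: an ROC-based cutoff on a credit score plus a bootstrap percentile interval. The scores and labels are simulated, the Youden criterion is an assumed definition of "optimal", and the kernel-smoothing step of the thesis is only hinted at through gaussian_kde.

```python
# Estimate a score threshold from the ROC curve and bootstrap its uncertainty.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.metrics import roc_curve

rng = np.random.default_rng(5)
good = rng.normal(650, 50, 2000)      # scores of customers who repay
bad = rng.normal(580, 60, 500)        # scores of customers who default
scores = np.concatenate([good, bad])
labels = np.concatenate([np.ones_like(good), np.zeros_like(bad)])  # 1 = good

def optimal_threshold(scores, labels):
    fpr, tpr, thresholds = roc_curve(labels, scores)
    return thresholds[np.argmax(tpr - fpr)]   # maximize Youden's J = TPR - FPR

t_hat = optimal_threshold(scores, labels)

# Kernel density estimates of the two score distributions (smoothing step).
kde_good, kde_bad = gaussian_kde(good), gaussian_kde(bad)
print("threshold:", t_hat)
print("densities at threshold (good, bad):", kde_good(t_hat)[0], kde_bad(t_hat)[0])

# Bootstrap the threshold to get a percentile confidence interval.
boot = []
for _ in range(500):
    idx = rng.integers(0, len(scores), len(scores))
    boot.append(optimal_threshold(scores[idx], labels[idx]))
print("95% bootstrap CI:", np.percentile(boot, [2.5, 97.5]))
```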
257

Prevalence of Chronic Diseases and Risk Factors for Death among Elderly Americans

Han, Guangming 14 July 2011 (has links)
The main aim of this study is to explore the effects of risk factors contributing to death in the elderly American population. To this end, we constructed Cox proportional hazards regression models and logistic regression models using the complex survey dataset from the national Second Longitudinal Study of Aging (LSOA II) to estimate hazard ratios (HR), odds ratios (OR), and confidence intervals (CI) for the risk factors. Our results show that, in addition to chronic disease conditions, many risk factors have significant effects on death in the elderly American population, including demographic factors (gender and age), social factors (interaction with friends or relatives), personal health behaviors (smoking and exercise), and biomedical factors (body mass index and emotional factors). This provides important information to help elderly people prolong their lifespan, whether or not they have chronic diseases.
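A hedged sketch of the two model types mentioned above: a weighted Cox model and a weighted logistic regression. The dataset and column names (time, died, weight, smoker, exercise, bmi, age, sex) are hypothetical, the weighting shown is only a rough stand-in for survey weighting, and proper design-based variance estimation for a complex survey such as LSOA II (strata, clusters) is not attempted here.

```python
# Weighted Cox PH and weighted logistic regression on simulated follow-up data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 2000
df = pd.DataFrame({
    "time": rng.exponential(8, n),          # years of follow-up
    "died": rng.integers(0, 2, n),
    "weight": rng.uniform(0.5, 3.0, n),     # survey sampling weight
    "smoker": rng.integers(0, 2, n),
    "exercise": rng.integers(0, 2, n),
    "bmi": rng.normal(27, 4, n),
    "age": rng.integers(70, 95, n),
    "sex": rng.integers(0, 2, n),
})

# Cox proportional hazards model with sampling weights and robust errors.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="died", weights_col="weight",
        formula="smoker + exercise + bmi + age + sex", robust=True)
cph.print_summary()

# Weighted logistic regression for the binary death indicator.
logit = smf.glm("died ~ smoker + exercise + bmi + age + sex", data=df,
                family=sm.families.Binomial(), freq_weights=df["weight"]).fit()
print(logit.summary())
```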
258

Analysis of Dependently Truncated Sample Using Inverse Probability Weighted Estimator

Liu, Yang 01 August 2011 (has links)
Many statistical methods for truncated data rely on the assumption that the failure and truncation times are independent, which can be unrealistic in applications. Study cohorts obtained from bone marrow transplant (BMT) registry data are commonly recognized as truncated samples in which the time to failure is truncated by the transplant time. There is clinical evidence that a longer transplant waiting time indicates a worse prognosis for survival, so it is reasonable to assume dependence between the transplant and failure times. To better analyze BMT registry data, we utilize a Cox analysis in which the transplant time is both a truncation variable and a predictor of the time to failure. An inverse-probability-weighted (IPW) estimator is proposed to estimate the distribution of the transplant time. The usefulness of the IPW approach is demonstrated through a simulation study and a real application.
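A sketch of the Cox step described above: the transplant waiting time enters both as the left-truncation (entry) time and as a covariate. The data are simulated, and the thesis's inverse-probability-weighted estimator of the truncation-time distribution is not reproduced here.

```python
# Cox model with left truncation where the truncation time is also a predictor.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 800
transplant = rng.exponential(6, n)                               # waiting time to transplant
failure = transplant + rng.exponential(24, n) * np.exp(-0.02 * transplant)
df = pd.DataFrame({
    "transplant_time": transplant,   # truncation variable and predictor
    "failure_time": failure,
    "event": rng.binomial(1, 0.8, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="failure_time", event_col="event",
        entry_col="transplant_time", formula="transplant_time")
cph.print_summary()
```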
259

Minimum Degree Conditions for Tilings in Graphs and Hypergraphs

Lightcap, Andrew 01 August 2011 (has links)
We consider tiling problems for graphs and hypergraphs. For two graphs G and H, an H-tiling of G is a subgraph of G consisting of vertex-disjoint copies of H. Using the absorbing method, we give a short proof that in a balanced tripartite graph G, if every vertex is adjacent to a sufficiently large fraction of the vertices in each of the other vertex partitions, then G has a K_3-tiling. Previously, Magyar and Martin [11] proved the corresponding result by using the Regularity Lemma. In a 3-uniform hypergraph H, the minimum codegree is the minimum, over all pairs of vertices, of the number of edges containing the pair. We show that when the minimum codegree is large enough, there is a tiling that misses only a small number of vertices of H; on the other hand, we show that there exist hypergraphs with large minimum codegree that do not have a perfect tiling. These results extend those of Pikhurko [12] on tilings of 3-uniform hypergraphs.
260

Jackknife Empirical Likelihood for the Accelerated Failure Time Model with Censored Data

Bouadoumou, Maxime K 15 July 2011 (has links)
Kendall and Gehan estimating functions are used to estimate the regression parameter in the accelerated failure time (AFT) model with censored observations. The AFT model is a preferred method of survival analysis because it maintains a consistent association between the covariates and the survival time. The jackknife empirical likelihood method is used because it overcomes computational difficulty by circumventing the construction of nonlinear constraints: it turns the statistic of interest into a sample mean of jackknife pseudo-values. A U-statistic approach is used to construct confidence intervals for the regression parameter. We conduct a simulation study comparing the Wald-type procedure, empirical likelihood, and jackknife empirical likelihood in terms of coverage probability and average confidence interval length. The jackknife empirical likelihood method performs better and overcomes the under-coverage problem of the Wald-type method. A real data set is also used to illustrate the proposed methods.
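A hedged sketch of the jackknife empirical likelihood idea: turn a U-statistic into jackknife pseudo-values, then apply empirical likelihood for a mean to those pseudo-values. The statistic used here (Gini's mean difference on uncensored data) is a simple placeholder, not the Gehan or Kendall estimating function of the thesis.

```python
# Jackknife pseudo-values for a U-statistic plus the empirical likelihood
# ratio for their mean (the core of the JEL construction).
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

rng = np.random.default_rng(8)
x = rng.exponential(2.0, 80)

def gini_mean_difference(v):
    """U-statistic: average absolute difference over all ordered pairs."""
    n = len(v)
    return np.abs(v[:, None] - v[None, :]).sum() / (n * (n - 1))

n = len(x)
theta_full = gini_mean_difference(x)
loo = np.array([gini_mean_difference(np.delete(x, i)) for i in range(n)])
pseudo = n * theta_full - (n - 1) * loo          # jackknife pseudo-values

def neg2_log_el(theta, z):
    """-2 log empirical likelihood ratio that the mean of z equals theta."""
    d = z - theta
    if d.min() >= 0 or d.max() <= 0:             # theta outside the convex hull
        return np.inf
    lo = -1.0 / d.max() + 1e-10                  # keep all weights positive
    hi = -1.0 / d.min() - 1e-10
    lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

print("point estimate (Gini mean difference):", theta_full)
print("-2 log ELR at the jackknife mean:", neg2_log_el(pseudo.mean(), pseudo))
print("chi-square(1) 95% cutoff for inverting into a CI:", chi2.ppf(0.95, df=1))
```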
