  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Group Specific Dynamic Models of Time Varying Exposures on a Time-to-Event Outcome

Tong, Yan 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Time-to-event outcomes are widely utilized in medical research. Assessing the cumulative effects of time-varying exposures on time-to-event outcomes poses challenges in statistical modeling. First, exposure status, intensity, or duration may vary over time. Second, exposure effects may be delayed over a latent period, a situation that is not considered in traditional survival models. Third, exposures that occur within a time window may cumulatively influence an outcome. Fourth, such cumulative exposure effects may be non-linear over the exposure latent period. Lastly, exposure-outcome dynamics may differ among groups defined by individuals' characteristics. These challenges have not been adequately addressed in current statistical models. The objective of this dissertation is to provide a novel approach to modeling group-specific dynamics between cumulative time-varying exposures and a time-to-event outcome. A framework of group-specific dynamic models is introduced utilizing functional time-dependent cumulative exposures within an etiologically relevant time window. Penalized-spline time-dependent Cox models are proposed to evaluate group-specific outcome-exposure dynamics through the associations of a time-to-event outcome with functional cumulative exposures and group-by-exposure interactions. Model parameter estimation is achieved by penalized partial likelihood. Hypothesis testing for comparison of group-specific exposure effects is performed with Wald-type tests. These models are extended to group-specific non-linear exposure intensity-latency-outcome relationships and group-specific interaction effects from multiple exposures. Extensive simulation studies are conducted and demonstrate satisfactory model performance.
The proposed methods are applied to the analyses of group-specific associations between antidepressant use and time to coronary artery disease in a depression-screening cohort using data extracted from electronic medical records.
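The weighted-cumulative-exposure idea at the core of such models can be sketched in a few lines. The linear-decay weight function and the 180-day window below are illustrative assumptions, not the dissertation's penalized-spline estimates:

```python
def weight(u, window=180.0):
    """Hypothetical exposure-weight function: linear decay to zero over an
    etiologic window of `window` days (an illustrative choice, not the
    dissertation's estimated spline weights)."""
    return max(0.0, 1.0 - u / window)

def cumulative_exposure(dose_history, t, window=180.0):
    """Weighted cumulative exposure at time t: the sum over past dosing
    times s of dose(s) * w(t - s), restricted to the etiologic window."""
    return sum(d * weight(t - s, window)
               for s, d in dose_history if 0.0 <= t - s <= window)

# toy dosing history: (time in days, dose)
hist = [(0, 1.0), (30, 1.0), (60, 2.0)]
wce = cumulative_exposure(hist, t=90.0)
```

In a time-dependent Cox model, `wce` would enter the linear predictor as a covariate re-evaluated at each event time.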
22

Factors Related to the Timing of Anterior Cruciate Ligament Reconstruction Failure Among an Active Population

Schroeder, Matthew Jason 27 August 2012 (has links)
No description available.
23

AN ASSOCIATION STUDY BETWEEN ADULT BLOOD PRESSURE AND TIME TO FIRST CARDIOVASCULAR DISEASE

Pu, Yongjia 01 January 2015 (has links)
BACKGROUND: Several studies have demonstrated the association between the time to a hypertension event and multiple baseline measurements in adults, yet other cardiovascular disease (CVD) survival outcomes, such as high cholesterol and heart attack, have received less attention. The Fels Longitudinal Study (FLS) provides an opportunity to connect adult blood pressure (BP) at certain ages to the time to first CVD outcomes. The availability of long-term serial BP measurements from the FLS also allows us to evaluate, through statistical modeling, whether the trend of measured BP biomarkers over time predicts survival outcomes in adulthood. METHODS: When the reference standard is a right-censored time-to-event (survival) outcome, the C index, or concordance C, is commonly used as a summary measure of discrimination between the possibly right-censored survival outcome and a predictive-score variable, say, a measured biomarker or a composite score output by a statistical model that combines multiple biomarkers. When subjects are followed longitudinally, it is of primary interest to assess whether baseline measurements predict the time-to-event outcome. Specifically, in this study, systolic blood pressure, diastolic blood pressure, and their variation over time are considered predictive biomarkers, and we assess their predictive ability for certain time-to-event outcomes in terms of the C index. RESULTS: A few summary C-index differences are statistically significant in predicting and discriminating certain CVD outcomes at certain age stages, though some of these differences are altered in the presence of medication treatment and lifestyle characteristics. The variation of systolic BP measures over time has a significantly different predictive ability, compared with systolic BP measures at a given time point, for predicting certain survival outcomes such as high cholesterol level.
CONCLUSIONS: Adult systolic and diastolic BP measurements may differ significantly in their ability to predict time to first CVD events. The fluctuation of BP measurements over time may be more strongly associated with the time to first CVD events than a BP measurement at a single baseline time point.
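The C index used throughout the METHODS can be computed directly from its definition. A minimal sketch of Harrell's concordance on toy data (not the FLS measurements):

```python
def concordance_index(times, events, scores):
    """Harrell's C: among usable pairs (the earlier time is an observed
    event), the fraction where the higher risk score belongs to the
    subject who failed earlier; score ties count 0.5. Pairs with tied
    times are excluded in this simplified version."""
    conc, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                usable += 1
                if scores[i] > scores[j]:
                    conc += 1.0
                elif scores[i] == scores[j]:
                    conc += 0.5
    return conc / usable

# toy data: higher score = higher predicted risk, so earlier failure
t = [2, 4, 6, 8]
e = [1, 1, 0, 1]   # 1 = event observed, 0 = right-censored
s = [0.9, 0.7, 0.5, 0.1]
c = concordance_index(t, e, s)
```

Perfectly concordant toy data like this yields C = 1.0; a useless score gives about 0.5.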
24

Time to Diagnosis of Second Primary Cancers among Patients with Breast Cancer

Irobi, Edward Okezie 01 January 2016 (has links)
Many breast cancer diagnoses and second cancers are associated with BRCA gene mutations. Early detection of cancer is necessary to improve health outcomes, particularly with second cancers. Little is known about the influence of risk factors on time to diagnosis of second primary cancers after diagnosis with BRCA-related breast cancer. The purpose of this cohort study was to examine the risk of diagnosis of second primary cancers among women diagnosed with breast cancer after adjusting for BRCA status, age, and ethnicity. The study was guided by the empirical evidence supporting the mechanism of action by which BRCA mutation leads to the development of cancer. A composite endpoint was used to define second primary cancer occurrences, and Kaplan-Meier survival curves were used to compare the median time-to-event among comparison groups and BRCA gene mutation statuses. Cox proportional hazards regression was used to examine the relationships between age at diagnosis, ethnicity, BRCA gene mutation status, and diagnosis of a second primary cancer. The overall median time to event for diagnosis of second primary cancers was 14 years. The hazard ratios for BRCA2 = 1.47, 95% CI [1.03, 2.11], White = 1.511, 95% CI [1.18, 1.94], and American Indian/Hawaiian = 1.424, 95% CI [1.12, 1.81] showed significant positive associations between BRCA2 mutation status and risk of diagnosis of second primary colorectal, endometrial, cervical, kidney, thyroid, and bladder cancers. Data on risk factors for development of second cancers would allow for identification of appropriate and timely screening procedures, determining the best course of action for prevention and treatment, and improving quality of life among breast cancer survivors.
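The Kaplan-Meier comparison of median time-to-event described above can be illustrated with a from-scratch estimator (toy data, not the study's cohort):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimator: at each distinct observed event time t,
    S(t) *= (1 - d_t / n_t), where d_t is the number of events at t and
    n_t the number still at risk just before t. Returns (t, S(t)) steps."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, s, i = [], 1.0, 0
    while i < len(data):
        t = data[i][0]
        d = at = 0
        while i < len(data) and data[i][0] == t:
            at += 1          # everyone leaving the risk set at time t
            d += data[i][1]  # ...of whom d had the event (rest censored)
            i += 1
        if d:
            s *= 1.0 - d / n_at_risk
            surv.append((t, s))
        n_at_risk -= at
    return surv

# toy cohort: times with censoring flags (1 = event, 0 = censored)
curve = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
```

The median time-to-event is read off as the first time the curve drops to 0.5 or below (here, t = 3).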
25

A Case-Only Genome-wide Association Study of Gender- and Age-specific Risk Markers for Childhood Leukemia

Singh, Sandeep Kumar 26 March 2015 (has links)
Males and the age group 1 to 5 years show a much higher risk for childhood acute lymphoblastic leukemia (ALL). We performed a case-only genome-wide association study (GWAS), using the Illumina Infinium HumanCoreExome Chip, to unmask gender- and age-specific risk variants in 240 non-Hispanic white children with ALL recruited at Texas Children's Cancer Center, Houston, Texas. Besides the statistically most significant results, we also considered results that yielded the highest effect sizes. Existing experimental data and bioinformatic predictions were used to complement results and to examine the biological significance of statistical results. Our study identified novel risk variants for childhood ALL. The SNP rs4813720 (RASSF2) showed the statistically most significant gender-specific association (P < 2 x 10^-6). Likewise, rs10505918 (SOX5) yielded the lowest P value (P < 1 x 10^-5) for age-specific associations, and also showed the statistically most significant association with age-at-onset (P < 1 x 10^-4). Two SNPs, rs12722042 and rs12722039, from the HLA-DQA1 region yielded the highest effect sizes (odds ratio (OR) = 15.7; P = 0.002) for gender-specific results, and the SNP rs17109582 (OR = 12.5; P = 0.006) showed the highest effect size for age-specific results. Sex chromosome variants did not appear to be involved in gender-specific associations. The HLA-DQA1 SNPs belong to DQA1*01:07 and confirmed a previously reported male-specific association with DQA1*01:07. Twenty-one of the SNPs identified as risk markers for gender- or age-specific associations were located in transcription factor binding sites, and 56 SNPs were non-synonymous variants likely to alter protein function. Although bioinformatic analysis did not implicate a particular mechanism for gender- and age-specific associations, RASSF2 has an estrogen receptor-alpha binding site in its promoter. The absence of known mechanisms may reflect the limited attention previously paid to gender- and age-specificity in associations.
These results provide a foundation for further studies to examine the gender- and age-differential in childhood ALL risk. Following replication and mechanistic studies, risk factors for one gender or age group may have potential to be used as biomarkers for targeted intervention for prevention, and perhaps also for treatment.
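The gender-specific effect sizes reported above are odds ratios from 2x2 tables in a case-only design. As a generic illustration (the counts below are made up, not the study's data), an OR with a Wald 95% confidence interval can be computed as:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table [[a, b], [c, d]],
    e.g. variant carrier status by gender among cases in a case-only
    design. Assumes all four cells are non-zero."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: carriers / non-carriers in male vs. female cases
or_, lo, hi = odds_ratio_ci(20, 100, 5, 115)
```

A CI excluding 1 corresponds to a significant association at the 0.05 level.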
26

Deep Learning Approach for Time-to-Event Modeling of Credit Risk / Djupinlärningsmetod för överlevnadsanalys av kreditriskmodellering

Kazi, Mehnaz, Stanojlovic, Natalija January 2022 (has links)
This thesis explores how survival analysis models perform for default-risk prediction of small-to-medium-sized enterprises (SMEs) and investigates when survival analysis models are preferable. This is examined by comparing the performance of three deep learning models in a survival analysis setting, a traditional survival analysis model (Cox Proportional Hazards), and a traditional credit risk model (logistic regression). Performance is evaluated with three metrics: concordance index, integrated Brier score, and ROC-AUC. The models are trained on financial data from Swedish SMEs comprising profit-and-loss statement and balance sheet results. The dataset is divided into two feature sets, a smaller and a larger one; in addition, binned versions of the features are considered. The results show that DeepHit and Logistic Hazard performed best with the three metrics in mind. In terms of AUC, all three deep learning survival models generally outperform the logistic regression model. Cox Proportional Hazards (Cox PH) performed worse than logistic regression on the non-binned feature sets, while yielding more comparable results when the data were binned. In terms of the concordance index and integrated Brier score, the Cox PH model consistently performed the worst of all survival models. The largest significant performance gain in concordance index and AUC was, however, seen for the Cox PH model when binning was applied to the larger feature set: the concordance index went from 0.65 to 0.75 and the test AUC from 76.56% to 83.91%. The main conclusions are that the neural network models slightly outperformed the traditional models and that binning had a great impact on all models, particularly the Cox PH model.
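The Logistic Hazard approach compared above treats survival as a sequence of per-interval binary outcomes. A common preprocessing step, shown here as a generic sketch with integer-valued times assumed (not the thesis's pipeline), is expanding each subject into person-period rows on which a classifier can be trained:

```python
def person_period(times, events):
    """Expand (time, event) survival data into person-period rows
    (subject_id, interval, y) for a discrete-time hazard model:
    one row per interval the subject enters, with y = 1 only in the
    interval where an observed event occurs."""
    rows = []
    for i, (t, e) in enumerate(zip(times, events)):
        for k in range(1, t + 1):
            y = 1 if (e and k == t) else 0
            rows.append((i, k, y))
    return rows

# subject 0: event at interval 2; subject 1: censored after interval 3
rows = person_period([2, 3], [1, 0])
```

Fitting a logistic model (or a neural network) to `y` given the interval index and covariates then estimates the discrete-time hazard directly.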
27

Computational Modeling for Censored Time to Event Data Using Data Integration in Biomedical Research

Choi, Ickwon 20 June 2011 (has links)
No description available.
28

Modellering av åtgärdsintervall för vägar med tung trafik / Modeling of maintenance intervals for roads with heavy traffic

Brännmark, My, Fors, Ellen January 2019 (has links)
In Sweden, there has been a long-term effort to allow as heavy traffic as possible, provided that the road network can handle it. This is because heavy traffic offers a competitive advantage with socio-economic gains. In July 2018, the Swedish Transport Administration made 12 percent of the Swedish road network available for the new maximum vehicle weight of 74 tonnes, based on a legislative change from 2017. It is known that heavy traffic has a negative effect on the degradation of the road, but opinions are divided on whether 74 tonnes have a greater impact on the degradation rate than the previous maximum gross weight of 64 tonnes. The 74-tonne vehicles have the same allowed axle load, which means more axles per vehicle. Some argue that an increased total load and more axles affect the degradation associated with time-dependent material properties, while others argue that 74 tonnes mean fewer heavy vehicles overall, and thus should have a positive impact on the road's lifespan. The construction company Skanska therefore requested a statistical analysis to nuance the effects that heavy traffic has on the Swedish state road network. Since there is very limited data on the effect of 74-tonne traffic, this Master's thesis instead focuses on modeling heavy traffic in general, in order to draw conclusions on which variables are significant for a road's lifetime. The method used is survival analysis, where the lifetime of the road is defined as the time between two maintenance treatments. The model selected is the semi-parametric Cox Proportional Hazards model. The model is fitted with data from an open database called LTPP (Long Term Pavement Performance), which is provided by the National Road and Transport Research Institute (VTI). The result of the modeling is presented with hazard ratios, the relative risk that a road will require maintenance at the next time stamp compared with a reference category.
The covariates that turned out to be significant for a road's lifetime, and thus are included in the model, are lane width, underground type, speed limit, asphalt layer thickness, bearing layer thickness, and proportion of heavy traffic. Survival curves estimated by the model are also presented. In addition, a sensitivity analysis is made by exploring survival curves estimated for different scenarios, with different combinations of covariate levels. The results are then compared with previous studies on the subject. The most interesting comparison is a case study from Finland, since Finland has allowed 76-tonne vehicles since 2013. In the comparison, the model's significant variables are confirmed, but the significance of precipitation and the number of axles for a road's lifetime is also highlighted.
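The scenario exploration described above rests on the Cox PH relation S(t|x) = S0(t)^exp(beta'x): a covariate profile rescales the whole baseline survival curve through its hazard ratio. A minimal sketch, with made-up baseline survival points and a hypothetical coefficient (not the thesis's fitted values):

```python
import math

def scenario_curve(baseline_surv, coefs, covariates):
    """Cox PH scenario survival: S(t|x) = S0(t) ** exp(beta . x).
    `baseline_surv` is a list of (t, S0(t)) points; the coefficients
    and covariate values used here are hypothetical illustrations."""
    lp = sum(b * x for b, x in zip(coefs, covariates))
    hr = math.exp(lp)  # hazard ratio vs. the baseline covariate profile
    return [(t, s0 ** hr) for t, s0 in baseline_surv]

# made-up baseline survival of a road between maintenance treatments
s0 = [(1, 0.95), (5, 0.80), (10, 0.60)]
# hypothetical: +1 unit of heavy-traffic share with coefficient 0.4
curve = scenario_curve(s0, [0.4], [1.0])
```

Because the hazard ratio here exceeds 1, the scenario curve lies everywhere below the baseline, i.e. a shorter expected time to the next treatment.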
29

Benefits of Pharmacometric Model-Based Design and Analysis of Clinical Trials

Karlsson, Kristin E January 2010 (has links)
Quantitative pharmacokinetic-pharmacodynamic and disease progression models are the core of the science of pharmacometrics, which has been identified as one of the strategies that can make drug development more effective. To adequately develop and utilize these models one needs to carefully consider the nature of the data, the choice of appropriate estimation methods, model evaluation strategies, and, most importantly, the intended use of the model. The general aim of this thesis was to investigate how the use of pharmacometric models can improve the design and analysis of clinical trials within drug development. The development of pharmacometric models for clinical assessment scales in stroke and graded severity events, in this thesis, shows the benefit of describing data as close to its true nature as possible, as it increases predictive ability and allows for mechanistic interpretation of the models. The performance of three estimation methods implemented in the mixed-effects modeling software NONMEM (Laplace, SAEM, and importance sampling), applied when modeling repeated time-to-event data, was investigated. The two latter methods are to be preferred if less than approximately half of the individuals experience events. In addition, the predictive performance of two validation procedures, internal and external validation, was explored, with internal validation being preferred in most cases. Model-based analysis was compared to conventional methods by the use of clinical trial simulations, and the power to detect a drug effect was improved with a pharmacometric design and analysis. Throughout this thesis several examples have shown the possibility of significantly reducing sample sizes in clinical trials with a pharmacometric model-based analysis. This approach will reduce the time and costs spent in the development of new drug therapies, but foremost reduce the number of healthy volunteers and patients exposed to experimental drugs.
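The clinical-trial-simulation idea used to compare analysis methods can be sketched as a Monte-Carlo power calculation. The example below simulates a two-arm trial with exponential event times, administrative censoring, and a Wald test on the log rate ratio; it is a generic illustration of the simulation approach, not the thesis's pharmacometric models:

```python
import math
import random

def simulate_power(n_per_arm, hr, n_sim=500, rate0=0.1, fu=20.0, seed=1):
    """Estimate power to detect hazard ratio `hr` between two arms with
    exponential event times, administrative censoring at follow-up `fu`,
    and a two-sided Wald test (alpha = 0.05) on the log rate ratio."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        stats = []
        for rate in (rate0, rate0 * hr):
            d, tot = 0, 0.0
            for _ in range(n_per_arm):
                t = rng.expovariate(rate)
                tot += min(t, fu)   # exposure time, censored at fu
                d += t <= fu        # observed event before censoring
            stats.append((d, tot))
        (d1, t1), (d2, t2) = stats
        if d1 and d2:
            # Wald statistic for log(rate2 / rate1), SE = sqrt(1/d1 + 1/d2)
            z = (math.log(d2 / t2) - math.log(d1 / t1)) / math.sqrt(1/d1 + 1/d2)
            hits += abs(z) > 1.96
    return hits / n_sim

power = simulate_power(n_per_arm=100, hr=2.0)
```

Re-running this over a grid of `n_per_arm` values is how simulation studies locate the smallest sample size that still reaches a target power.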
30

Pharmacometric Methods and Novel Models for Discrete Data

Plan, Elodie L January 2011 (has links)
Pharmacodynamic processes and disease progression are increasingly characterized with pharmacometric models. However, modelling options for discrete-type responses remain limited, although these response variables are commonly encountered clinical endpoints. The types of data defined as discrete are generally ordinal (e.g. symptom severity), count (i.e. event frequency), and time-to-event (i.e. event occurrence). The underlying assumptions accompanying discrete data models need investigation, and possibly adaptation, in order to expand their use. Moreover, because these models are highly non-linear, estimation with linearization-based maximum likelihood methods may be biased. The aim of this thesis was to explore pharmacometric methods and novel models for discrete data through (i) the investigation of the benefits of treating discrete data with different modelling approaches, (ii) evaluations of the performance of several estimation methods for discrete models, and (iii) the development of novel models for the handling of complex discrete data recorded during (pre-)clinical studies. A simulation study indicated that approaches such as a truncated Poisson model and a logit-transformed continuous model were adequate for treating ordinal data ranked on a 0-10 scale. Features that handle serial correlation and underdispersion were developed for the models to subsequently fit real pain scores. The performance of nine estimation methods was studied for dose-response continuous models. Other types of serially correlated count models were studied for the analysis of overdispersed data represented by the number of epilepsy seizures per day. For these types of models, the commonly used Laplace estimation method presented a bias, whereas the adaptive Gaussian quadrature method did not. Count models were also compared to repeated time-to-event models when the exact time of gastroesophageal symptom occurrence was known.
Two new model structures handling repeated time-to-categorical events, i.e. events with an ordinal severity aspect, were introduced. Laplace and two expectation-maximisation estimation methods were found to perform well for frequent repeated time-to-event models. In conclusion, this thesis presents approaches, estimation methods, and diagnostics adapted for treating discrete data. Novel models and diagnostics were developed where lacking and applied to biological observations.
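A truncated Poisson model of the kind evaluated for 0-10 ordinal scores can be written down directly: the Poisson mass is renormalized onto the support {0, ..., 10}. The sketch below shows the distributional idea only, not the thesis's full mixed-effects implementation:

```python
import math

def truncated_poisson_pmf(k, lam, kmax=10):
    """PMF of a Poisson(lam) right-truncated at kmax: the mass above
    kmax is renormalized away so the support is exactly {0, ..., kmax},
    matching an ordinal score ranked on a 0-kmax scale."""
    if not 0 <= k <= kmax:
        return 0.0
    norm = sum(math.exp(-lam) * lam**j / math.factorial(j)
               for j in range(kmax + 1))
    return (math.exp(-lam) * lam**k / math.factorial(k)) / norm

# full probability vector over the 0-10 scale for a mean-like rate of 3
probs = [truncated_poisson_pmf(k, lam=3.0) for k in range(11)]
```

The likelihood of an observed score sequence is then a product of such terms, which is where serial-correlation features would be layered on.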
