201

Modélisation conjointe de trajectoire socioprofessionnelle individuelle et de la survie globale ou spécifique / Joint modeling of individual socio-professional trajectory and overall or cause-specific survival

Karimi, Maryam 06 June 2016 (has links)
Appartenir à une catégorie socio-économique moins élevée est généralement associé à une mortalité plus élevée pour de nombreuses causes de décès. De précédentes études ont déjà montré l'importance de la prise en compte des différentes dimensions des trajectoires socio-économiques au cours de la vie. L'analyse des trajectoires professionnelles constitue une étape importante pour mieux comprendre ces phénomènes. L'enjeu pour mesurer l'association entre les parcours de vie des trajectoires socio-économiques et la mortalité est de décomposer la part respective de ces facteurs dans l'explication du niveau de survie des individus. La complexité de l'interprétation de cette association réside dans la causalité bidirectionnelle qui la sous-tend : les différentiels de mortalité sont-ils dus à des différentiels d'état de santé initial influençant conjointement la situation professionnelle et la mortalité, ou l'évolution professionnelle influence-t-elle directement l'état de santé puis la mortalité ? Les méthodes usuelles ne tiennent pas compte de l'interdépendance des changements de situation professionnelle ni de la bidirectionnalité de la causalité, ce qui conduit à un biais important dans l'estimation du lien causal entre situation professionnelle et mortalité. Par conséquent, il est nécessaire de proposer des méthodes statistiques qui prennent en compte les mesures répétées (les professions) simultanément avec les variables de survie. Cette étude est motivée par la base de données Cosmop-DADS, qui est un échantillon de la population salariée française. Le premier objectif de cette thèse était d'examiner l'ensemble des trajectoires professionnelles avec une classification professionnelle précise, au lieu d'utiliser un nombre limité d'états dans un parcours professionnel comme cela avait été fait précédemment. A cet effet, nous avons défini des variables dépendantes du temps afin de prendre en compte différentes dimensions des trajectoires professionnelles, à travers des modèles dits de "life-course", à savoir critical period, accumulation model et social mobility model, et nous avons mis en évidence l'association entre les trajectoires professionnelles et la mortalité par cause en utilisant ces variables dans un modèle de Cox. Le deuxième objectif a consisté à intégrer les épisodes professionnels comme un sous-modèle longitudinal dans le cadre des modèles conjoints, pour réduire le biais issu de l'inclusion de covariables dépendantes du temps endogènes dans le modèle de Cox. Nous avons proposé un modèle conjoint pour les données longitudinales nominales et les données de risques concurrents dans une approche basée sur la vraisemblance. En outre, nous avons proposé une approche de type méta-analyse pour résoudre les problèmes de temps de calcul des modèles conjoints appliqués à l'analyse des grandes bases de données. Cette approche consiste à combiner les résultats issus d'analyses effectuées sur des échantillons stratifiés indépendants. Dans la même perspective d'utilisation du modèle conjoint sur les grandes bases de données, nous avons proposé une procédure basée sur l'avantage computationnel de la régression de Poisson. Cette approche consiste à trouver les trajectoires types à travers les méthodes de classification, et à appliquer le modèle conjoint sur ces trajectoires types. / Being in a low socioeconomic position is associated with increased mortality risk from various causes of death. Previous studies have already shown the importance of considering different dimensions of socioeconomic trajectories across the life course. Analyses of professional trajectories constitute a crucial step in better understanding the association between socio-economic position and mortality. The main challenge in measuring this association is to decompose the respective share of these factors in explaining the survival level of individuals. The complexity lies in the bidirectional causality underlying the observed associations: are mortality differentials due to differences in initial health conditions that jointly influence employment status and mortality, or does the professional trajectory directly influence health conditions and then mortality? Standard methods do not consider the interdependence of changes in occupational status or the bidirectional causal effect underlying the observed association, which leads to substantial bias in estimating the causal link between professional trajectory and mortality. Therefore, it is necessary to propose statistical methods that consider repeated measurements (careers) and survival variables simultaneously. This study was motivated by the Cosmop-DADS database, which is a sample of the French salaried population. The first aim of this dissertation was to consider whole professional trajectories with an accurate occupational classification, instead of the limited number of life-course stages and simple occupational classification considered previously. For this purpose, we defined time-dependent variables to capture different life-course dimensions, namely the critical period, accumulation and social mobility models, and we highlighted the association between professional trajectories and cause-specific mortality using the defined variables in a Cox proportional hazards model. The second aim was to incorporate employment episodes as a longitudinal sub-model within the joint model framework, to reduce the bias resulting from the inclusion of internal time-dependent covariates in the Cox model. We proposed a joint model for longitudinal nominal outcomes and competing risks data in a likelihood-based approach. In addition, we proposed an approach mimicking meta-analysis to address the computational burden of joint models on large datasets: independent stratified samples are extracted from the large dataset, the joint model is fitted on each sample, and the results are combined. With the same objective of fitting joint models to large-scale data, we proposed a procedure based on the computational advantage of Poisson regression. This approach consists of finding representative trajectories by means of clustering methods and then applying the joint model to these representative trajectories.
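The combination step of the meta-analysis-style strategy can be illustrated with a short sketch. Assuming a fixed-effect, inverse-variance pooling rule (the abstract does not specify the exact weights), per-sample coefficient estimates from joint models fitted independently on each stratified subsample would be combined roughly as follows; all numbers are hypothetical.

```python
import numpy as np

def pool_estimates(betas, variances):
    """Inverse-variance (fixed-effect meta-analysis) pooling of estimates
    of the same coefficient obtained from independent stratified samples."""
    betas = np.asarray(betas, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # weight = 1 / Var(beta_k)
    beta_pooled = np.sum(w * betas) / np.sum(w)
    var_pooled = 1.0 / np.sum(w)                   # variance of pooled estimate
    return beta_pooled, var_pooled

# Hypothetical coefficient estimates from 5 independent stratified subsamples.
betas = [0.42, 0.39, 0.47, 0.40, 0.44]
variances = [0.004, 0.005, 0.006, 0.004, 0.005]
b, v = pool_estimates(betas, variances)
se = v ** 0.5
print(f"pooled beta = {b:.3f}, 95% CI = ({b - 1.96*se:.3f}, {b + 1.96*se:.3f})")
```

Because the stratified samples are independent, the pooled variance is simply the inverse of the summed weights, which is what makes this route attractive when a single joint-model fit on the full database is computationally infeasible.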
202

Contributions à l'analyse de données fonctionnelles multivariées, application à l'étude de la locomotion du cheval de sport / Contributions to the analysis of multivariate functional data, application to the study of the sport horse's locomotion

Schmutz, Amandine 15 November 2019 (has links)
Avec l'essor des objets connectés pour fournir un suivi systématique, objectif et fiable aux sportifs et à leur entraineur, de plus en plus de paramètres sont collectés pour un même individu. Une alternative aux méthodes d'évaluation en laboratoire est l'utilisation de capteurs inertiels qui permettent de suivre la performance sans l'entraver, sans limite d'espace et sans procédure d'initialisation fastidieuse. Les données collectées par ces capteurs peuvent être vues comme des données fonctionnelles multivariées : ce sont des entités quantitatives évoluant au cours du temps de façon simultanée pour un même individu statistique. Cette thèse a pour objectif de chercher des paramètres d'analyse de la locomotion du cheval athlète à l'aide d'un capteur positionné dans la selle. Cet objet connecté (centrale inertielle, IMU) pour le secteur équestre permet de collecter l'accélération et la vitesse angulaire au cours du temps, dans les trois directions de l'espace et selon une fréquence d'échantillonnage de 100 Hz. Une base de données a ainsi été constituée, rassemblant 3221 foulées de galop, collectées en ligne droite et en courbe et issues de 58 chevaux de saut d'obstacles de niveaux et d'âges variés. Nous avons restreint notre travail à la prédiction de trois paramètres : la vitesse par foulée, la longueur de foulée et la qualité de saut. Pour répondre aux deux premiers objectifs, nous avons développé une méthode de clustering fonctionnel multivarié permettant de diviser notre base de données en sous-groupes plus homogènes du point de vue des signaux collectés. Cette méthode permet de caractériser chaque groupe par son profil moyen, facilitant leur compréhension et leur interprétation. Mais, contre toute attente, ce modèle de clustering n'a pas permis d'améliorer les résultats de prédiction de vitesse, les SVM restant le modèle ayant le pourcentage d'erreur inférieur à 0.6 m/s le plus faible. Il en est de même pour la longueur de foulée, où une précision de 20 cm est atteinte grâce aux Support Vector Machines (SVM). Ces résultats peuvent s'expliquer par le fait que notre base de données est composée uniquement de 58 chevaux, ce qui est un nombre d'individus très faible pour du clustering. Nous avons ensuite étendu cette méthode au co-clustering de courbes fonctionnelles multivariées afin de faciliter la fouille des données collectées pour un même cheval au cours du temps. Cette méthode pourrait permettre de détecter et prévenir d'éventuels troubles locomoteurs, principale source d'arrêt du cheval de saut d'obstacle. Pour finir, nous avons investigué les liens entre qualité du saut et les signaux collectés par l'IMU. Nos premiers résultats montrent que les signaux collectés par la selle seuls ne suffisent pas à différencier finement la qualité du saut d'obstacle. Un apport d'information supplémentaire sera nécessaire, à l'aide d'autres capteurs complémentaires par exemple, ou encore en étoffant la base de données de façon à avoir un panel de chevaux et de profils de sauts plus variés. / With the growth of the smart devices market to provide athletes and trainers with a systematic, objective and reliable follow-up, more and more parameters are monitored for the same individual. An alternative to laboratory evaluation methods is the use of inertial sensors, which allow following the performance without hindering it, without space limits and without tedious initialization procedures. Data collected by those sensors can be classified as multivariate functional data: quantitative entities evolving along time and collected simultaneously for the same statistical individual. The aim of this thesis is to find parameters for analysing the locomotion of athlete horses thanks to a sensor placed in the saddle. This connected device (inertial measurement unit, IMU) for equestrian sports collects acceleration and angular velocity over time, in the three spatial directions and with a sampling frequency of 100 Hz. The database used for model development is made of 3221 canter strides from 58 ridden jumping horses of different ages and competition levels. Two different protocols were used to collect data: one for straight paths and one for curved paths. We restricted our work to the prediction of three parameters: speed per stride, stride length and jump quality. To meet the first two objectives, we developed a multivariate functional clustering method that divides the database into smaller, more homogeneous sub-groups from the point of view of the collected signals. This method characterizes each group by its average profile, which eases data understanding and interpretation. Surprisingly, however, this clustering model did not improve the speed prediction results: the Support Vector Machine (SVM) remained the model with the lowest percentage of errors above 0.6 m/s. The same applies to stride length, where an accuracy of 20 cm is reached with the SVM model. These results can be explained by the fact that our database is built from only 58 horses, which is a very small number of individuals for a clustering method. We then extended this method to the co-clustering of multivariate functional data in order to ease the mining of horses' follow-up databases. This method could allow the detection and prevention of locomotor disturbances, the main cause of career interruption for jumping horses. Lastly, we investigated the links between jump quality and the signals collected by the IMU. Our first results show that the signals collected by the saddle alone are not sufficient to finely differentiate jump quality. Additional information will be needed, for example from complementary sensors or by expanding the database to cover a more diverse range of horses and jump profiles.
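As a rough illustration of the SVM prediction step, the sketch below fits a support vector regression on per-stride summary features and reports the share of strides with errors above the 0.6 m/s threshold mentioned in the abstract. The feature construction, hyperparameters and synthetic data are assumptions, not the thesis's actual pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Placeholder feature matrix: one row per canter stride, columns standing in
# for summary statistics of the 6 IMU channels (3-axis acceleration plus
# 3-axis angular velocity sampled at 100 Hz).
n_strides, n_features = 300, 24
X = rng.normal(size=(n_strides, n_features))
speed = 5.0 + 0.8 * X[:, 0] + rng.normal(scale=0.3, size=n_strides)  # m/s

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:200], speed[:200])           # train on the first 200 strides
err = np.abs(model.predict(X[200:]) - speed[200:])
print(f"share of strides with error > 0.6 m/s: {np.mean(err > 0.6):.1%}")
```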
203

Variable selection and structural discovery in joint models of longitudinal and survival data

He, Zangdong January 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Joint models of longitudinal and survival outcomes have been used with increasing frequency in clinical investigations. Correct specification of fixed and random effects, as well as their functional forms, is essential for practical data analysis. However, no existing methods have been developed to meet this need in a joint model setting. In this dissertation, I describe a penalized likelihood-based method with adaptive least absolute shrinkage and selection operator (ALASSO) penalty functions for model selection. By reparameterizing variance components through a Cholesky decomposition, I introduce a penalty function of group shrinkage; the penalized likelihood is approximated by Gaussian quadrature and optimized by an EM algorithm. The functional forms of the independent effects are determined through a procedure for structural discovery. Specifically, I first construct the model by penalized cubic B-splines and then decompose the B-spline into linear and nonlinear elements by spectral decomposition. The decomposition represents the model in a mixed-effects model format, and I then use the mixed-effects variable selection method to perform structural discovery. Simulation studies show excellent performance. A clinical application is described to illustrate the use of the proposed methods, and the analytical results demonstrate their usefulness.
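The ALASSO penalty can be sketched with the standard reweighting trick: rescale each column by an initial consistent estimate raised to a power gamma, run an ordinary LASSO, and transform the coefficients back. This is a generic linear-model illustration of the penalty, not the dissertation's joint-model implementation; all data and tuning values are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

def adaptive_lasso(X, y, alpha=0.1, gamma=1.0):
    """ALASSO via column reweighting: minimizing a weighted-L1 objective is
    equivalent to running plain LASSO on rescaled columns, then rescaling
    the fitted coefficients back."""
    beta_init = LinearRegression().fit(X, y).coef_   # initial estimate
    w = np.abs(beta_init) ** gamma + 1e-8            # avoid division by zero
    lasso = Lasso(alpha=alpha, max_iter=50_000).fit(X * w, y)
    return lasso.coef_ * w                           # back-transform

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=200)
print(np.round(adaptive_lasso(X, y), 2))   # near-zero entries are deselected
```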
204

AI based prediction of road users' intents and reactions

Gurudath, Akshay January 2022 (has links)
Different road users follow different behaviors and intentions in the trajectories that they traverse. Predicting the intent of these road users at intersections would not only help increase ride comfort in autonomous vehicles, but also help detect potential accidents. In this thesis, the research objective is to build models that predict future positions of road users (pedestrians, cyclists and autonomous shuttles) by capturing behaviors endemic to the different road users. Firstly, a constant velocity state space model is used as a benchmark for intent prediction, with a fresh approach to estimating parameters from the data through the EM algorithm. Then, a neural-network-based LSTM sequence modeling architecture is used to better capture the dynamics of road user movement and its dependence on the spatial area. Inspired by the recent success of transformers and attention in text mining, we then propose a mechanism to capture road users' social behavior amongst their neighbors. To achieve this, past trajectories of different road users are forward propagated through the LSTM network to obtain a representative feature vector for each road user's behavior. These feature vectors are then passed through an attention layer to obtain representations that incorporate information from other road users' feature vectors, which are in turn used to predict future positions for every road user in the frame. The attention-based LSTM model slightly outperforms the plain LSTM model, while both substantially outperform the constant velocity model. A comparative qualitative analysis is performed to assess the behaviors captured or missed by the different models. The thesis concludes with a dissection of the behaviors captured by the attention module.
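A minimal sketch of the attention-over-agents idea follows: a shared LSTM encodes each road user's past track, a multi-head attention layer mixes the per-user feature vectors, and a linear head predicts future position offsets. Dimensions, hyperparameters and the output parameterization are illustrative assumptions, not the thesis's configuration.

```python
import torch
import torch.nn as nn

class SocialAttentionPredictor(nn.Module):
    """Shared LSTM encoder per road user, attention across users in the
    same frame, then a linear head predicting `horizon` future offsets."""
    def __init__(self, hidden=64, horizon=12):
        super().__init__()
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden,
                               batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4,
                                          batch_first=True)
        self.head = nn.Linear(hidden, horizon * 2)
        self.horizon = horizon

    def forward(self, past):                 # past: (n_users, t_obs, 2)
        _, (h, _) = self.encoder(past)       # h: (1, n_users, hidden)
        mixed, _ = self.attn(h, h, h)        # attend across road users
        out = self.head(mixed.squeeze(0))    # (n_users, horizon * 2)
        return out.view(-1, self.horizon, 2)  # future (x, y) offsets

model = SocialAttentionPredictor()
past = torch.randn(5, 8, 2)                  # 5 road users, 8 observed steps
print(model(past).shape)                     # torch.Size([5, 12, 2])
```

Treating the users in a frame as the attention "sequence" is what lets each prediction condition on neighbors' encoded histories, which is the social behavior the abstract describes.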
205

CURE RATE AND DESTRUCTIVE CURE RATE MODELS UNDER PROPORTIONAL ODDS LIFETIME DISTRIBUTIONS

FENG, TIAN January 2019 (has links)
Cure rate models, introduced by Boag (1949), are very commonly used when modelling lifetime data involving long-term survivors. Applications of cure rate models can be seen in biomedical science, industrial reliability, finance, manufacturing, demography and criminology. In this thesis, cure rate models are discussed under a competing cause scenario, with the assumption of proportional odds (PO) lifetime distributions for the susceptibles, and statistical inferential methods are then developed based on right-censored data. In Chapter 2, a flexible cure rate model is discussed by assuming that the number of competing causes for the event of interest follows the Conway-Maxwell-Poisson (COM-Poisson) distribution, with the lifetimes of non-cured or susceptible individuals described by a PO model. This provides a natural extension of the work of Gu et al. (2011), who considered a geometric number of competing causes. Under right censoring, maximum likelihood estimators (MLEs) are obtained by the use of the expectation-maximization (EM) algorithm. An extensive Monte Carlo simulation study is carried out for various scenarios, and model discrimination between some well-known cure models, such as the geometric, Poisson and Bernoulli, is also examined. The goodness-of-fit and diagnostics of the model are also discussed. A cutaneous melanoma dataset is used to illustrate the models as well as the inferential methods. Next, in Chapter 3, the destructive cure rate models, introduced by Rodrigues et al. (2011), are discussed under the PO assumption. Here, the initial number of competing causes is modelled by a weighted Poisson distribution, with special focus on the exponentially weighted Poisson, length-biased Poisson and negative binomial distributions. Then, a damage distribution is introduced for the number of initial causes that do not get destroyed. An EM-type algorithm for computing the MLEs is developed. An extensive simulation study is carried out for various scenarios, and model discrimination between the three weighted Poisson distributions is also examined. All the models and methods of estimation are evaluated through a simulation study, and a cutaneous melanoma dataset is again used for illustration. In Chapter 4, frailty cure rate models are discussed under a gamma frailty, wherein the initial number of competing causes is described by a COM-Poisson distribution and the lifetimes of non-cured individuals by a PO model. The detailed steps of the EM algorithm are then developed for this model, and an extensive simulation study is carried out to evaluate the performance of the proposed model and the estimation method. A cutaneous melanoma dataset as well as simulated data are used for illustrative purposes. Finally, Chapter 5 summarizes the work carried out in the thesis and suggests some problems of further research interest. / Thesis / Doctor of Philosophy (PhD)
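For orientation, the sketch below fits the simplest member of this family, the classical two-component mixture cure model with exponential susceptible lifetimes, by direct maximum likelihood under right censoring. The COM-Poisson competing-cause structure and PO lifetimes of the thesis are considerably richer; this only illustrates how a cure fraction enters the censored-data likelihood.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, delta):
    """Mixture cure model: cure probability p, exponential hazard lam for
    susceptibles. delta = 1 for observed events, 0 for right-censored."""
    logit_p, log_lam = params
    p = 1.0 / (1.0 + np.exp(-logit_p))        # keep p in (0, 1)
    lam = np.exp(log_lam)                     # keep lam > 0
    su = np.exp(-lam * t)                     # susceptible survival S_u(t)
    ll = (delta * (np.log1p(-p) + np.log(lam) - lam * t)
          + (1 - delta) * np.log(p + (1 - p) * su))
    return -np.sum(ll)

rng = np.random.default_rng(2)
n, p_true, lam_true = 500, 0.3, 0.5
cured = rng.random(n) < p_true                         # long-term survivors
t_event = np.where(cured, np.inf, rng.exponential(1 / lam_true, n))
c = rng.exponential(4.0, n)                            # censoring times
t, delta = np.minimum(t_event, c), (t_event <= c).astype(float)

res = minimize(neg_loglik, x0=[0.0, 0.0], args=(t, delta))
p_hat, lam_hat = 1 / (1 + np.exp(-res.x[0])), np.exp(res.x[1])
print(f"p_hat = {p_hat:.2f}, lam_hat = {lam_hat:.2f}")
```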
206

Inference for Gamma Frailty Models based on One-shot Device Data

Yu, Chenxi January 2024 (has links)
A device that is accompanied by an irreversible chemical reaction or physical destruction, and can no longer function properly after performing its intended function, is referred to as a one-shot device. One-shot device test data differ from typical data obtained by measuring lifetimes in standard life-tests. Due to the very nature of one-shot devices, the actual lifetimes of devices under test cannot be observed; they are either left- or right-censored. In addition, a one-shot device often has multiple components that could cause the failure of the device. The components are coupled together in the manufacturing process or assembly, so the failure modes possess latent heterogeneity and dependence. Frailty models enable us to describe the influence of common but unobservable covariates on the hazard function as a random effect in a model, and also provide an easily understandable interpretation. In this thesis, we develop some inferential results for one-shot device testing data under a gamma frailty model. We first develop an efficient expectation-maximization (EM) algorithm for determining the maximum likelihood estimates of the parameters of a gamma frailty model with exponential lifetime distributions for the components, based on one-shot device test data with multiple failure modes, wherein the data are obtained from a constant-stress accelerated life-test. The maximum likelihood estimate of the mean lifetime of k-out-of-M structured one-shot devices under normal operating conditions is also presented. In addition, the asymptotic variance-covariance matrix of the maximum likelihood estimates is derived and then used to construct asymptotic confidence intervals for the model parameters. The performance of the proposed inferential methods is evaluated through Monte Carlo simulations and illustrated with a numerical example. A gamma frailty model with Weibull baseline hazards is considered next for fitting one-shot device testing data. The Weibull baseline hazards enable us to analyze time-varying failure rates more accurately, allowing for a deeper understanding of the dynamic nature of a system's reliability. We develop an EM algorithm for estimating the model parameters utilizing the complete likelihood function. A detailed simulation study compares the performance of the Weibull baseline hazard model with that of the exponential baseline hazard model. The introduction of shape parameters in the components' lifetime distribution within the Weibull baseline hazard model offers enhanced flexibility in model fitting. Finally, Bayesian inference is developed for the gamma frailty model with exponential baseline hazard for one-shot device testing data. We introduce a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique for estimating the model parameters and for developing credible intervals for those parameters. The performance of the proposed method is evaluated in a simulation study. Model comparison between the independence model and the frailty model is made using a Bayesian model selection criterion. / Thesis / Candidate in Philosophy
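A stripped-down version of the first model can be sketched directly: with a mean-one gamma frailty of variance theta and an exponential baseline hazard lam, the marginal survival function is S(t) = (1 + theta * lam * t)^(-1/theta), and each one-shot test contributes a Bernoulli likelihood term at its inspection time (failed means left-censored, working means right-censored). The sketch below assumes a single failure mode and direct optimization rather than the multi-failure-mode EM algorithm developed in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, tau, failed):
    """Marginal likelihood for one-shot tests under a gamma frailty
    (mean 1, variance theta) with exponential baseline hazard lam:
    S(t) = (1 + theta * lam * t) ** (-1 / theta)."""
    log_lam, log_theta = params
    lam, theta = np.exp(log_lam), np.exp(log_theta)
    s = (1.0 + theta * lam * tau) ** (-1.0 / theta)
    ll = failed * np.log1p(-s) + (1 - failed) * np.log(s)
    return -np.sum(ll)

rng = np.random.default_rng(3)
n, lam_true, theta_true = 2000, 0.2, 0.5
tau = rng.uniform(1.0, 10.0, n)               # inspection times
s_true = (1 + theta_true * lam_true * tau) ** (-1 / theta_true)
failed = (rng.random(n) > s_true).astype(float)

res = minimize(neg_loglik, x0=[0.0, 0.0], args=(tau, failed))
print(np.exp(res.x))                          # [lam_hat, theta_hat]
```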
207

Confiabilidade em sistemas coerentes: um modelo bayesiano Weibull. / Reliability in coherent systems: a bayesian weibull model

Bhering, Felipe Lunardi 28 June 2013 (has links)
O principal objetivo desse trabalho é introduzir um modelo geral bayesiano Weibull hierárquico para dados censurados que estima a função de confiabilidade de cada componente para sistemas de confiabilidade coerentes. São introduzidas formas de estimação mais sólidas, sem a inserção de estimativas médias nas funções de confiabilidade (estimador plug-in). Através desse modelo, são expostos e solucionados exemplos na área de confiabilidade como sistemas em série, sistemas em paralelo, sistemas k-de-n, sistemas bridge e um estudo clínico com dados censurados intervalares. As soluções consideram que as componentes têm diferentes distribuições, e nesse caso o sistema bridge ainda não tinha solução na literatura. O modelo construído é geral e pode ser utilizado para qualquer sistema coerente e não apenas para dados da área de confiabilidade, como também na área de sobrevivência, dentre outros. Diversas simulações com componentes com diferentes proporções de censura, distintas médias, três tipos de distribuições e tamanhos de amostra foram feitas em todos os sistemas para avaliar a eficácia do modelo. / The main purpose of this work is to introduce a general Bayesian hierarchical Weibull model for censored data which estimates each component's reliability function in coherent systems. Estimation procedures that do not rely on plug-in estimators are introduced. Examples in the reliability area, such as series systems, parallel systems, k-out-of-n systems, bridge systems and a clinical study with interval-censored data, are presented and solved with this model. The solutions allow the components to have different distributions, a case for which the bridge system had no previous solution in the literature. The model is general and can be used to analyse any kind of coherent system and censored data, not only in reliability but also in survival analysis, among others. Several simulations with components having different censoring proportions, distinct means, three kinds of distributions and various sample sizes were run on all systems to evaluate the model's efficiency.
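The bridge-system case with heterogeneous component distributions can be checked by plain Monte Carlo on the structure function, as sketched below. This is a frequentist simulation, not the Bayesian hierarchical Weibull model of the thesis, and the component distributions and mission time are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, t = 100_000, 2.0                         # simulated systems, mission time

# Heterogeneous component lifetimes, echoing the setting where each
# component may follow a different distribution (values illustrative).
T = np.column_stack([
    4.0 * rng.weibull(1.5, n),              # component 1: Weibull
    5.0 * rng.weibull(0.8, n),              # component 2: Weibull
    rng.exponential(3.0, n),                # component 3: exponential
    rng.lognormal(1.0, 0.5, n),             # component 4: lognormal
    rng.exponential(4.0, n),                # component 5: exponential
])
up = T > t                                  # component alive at time t

# Bridge structure: minimal path sets {1,4}, {2,5}, {1,3,5}, {2,3,4}.
works = ((up[:, 0] & up[:, 3]) | (up[:, 1] & up[:, 4])
         | (up[:, 0] & up[:, 2] & up[:, 4])
         | (up[:, 1] & up[:, 2] & up[:, 3]))
print(f"estimated bridge reliability at t={t}: {works.mean():.3f}")
```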
208

狀態轉換下利率與跳躍風險股票報酬之歐式選擇權評價與實證分析 / Option Pricing and Empirical Analysis for Interest Rate and Stock Index Return with Regime-Switching Model and Dependent Jump Risks

巫柏成, Wu, Po Cheng Unknown Date (has links)
Chen, Chang, Wen and Lin (2013)提出馬可夫調控跳躍過程模型(MMJDM)描述股價指數報酬率，布朗運動項、跳躍項之頻率與市場狀態有關。然而，利率並非常數，本論文以狀態轉換模型配適零息債劵之動態過程，提出狀態轉換下的利率與具跳躍風險的股票報酬之二維模型(MMJDMSI)，並以1999年至2013年的道瓊工業指數與S&P 500指數和同期間之一年期美國國庫劵價格為實證資料，採用EM演算法取得參數估計值。經由概似比檢定結果顯示無論道瓊工業指數還是S&P 500指數，狀態轉換下利率與跳躍風險之股票報酬二維模型更適合描述報酬率。接著，利用Esscher轉換法推導出各模型下的股價指數之歐式買權定價公式，再對MMJDMSI模型進行敏感度分析以評估模型參數發生變動時對於定價公式的影響。最後，以實證資料對各模型進行模型校準及計算隱含波動度，結果顯示MMJDMSI在價內及價外時定價誤差為最小或次小，且此模型亦能呈現出波動度微笑曲線之現象。 / To model asset returns, Chen, Chang, Wen and Lin (2013) proposed the Markov-Modulated Jump Diffusion Model (MMJDM), in which the Brownian motion term and the jump frequency are both related to market states. Since interest rates are not constant, a regime-switching model is taken to fit the dynamics of the zero-coupon bond price, and a bivariate model for interest rate and stock index returns with regime switching and dependent jump risks (MMJDMSI) is proposed. The empirical data are the Dow Jones Industrial Average and the S&P 500 Index from 1999 to 2013, together with the US 1-Year Treasury bond over the same period. Model parameters are estimated by the expectation-maximization (EM) algorithm. Likelihood ratio tests (LRT) comparing the nested models show that, for both the Dow Jones Industrial Average and the S&P 500 Index, MMJDMSI describes the returns better than the alternatives. Then, the European call option pricing formula under each model is derived via the Esscher transform, and a sensitivity analysis is conducted to evaluate how changes in parameter values affect the MMJDMSI pricing formula. Finally, model calibrations are performed and implied volatilities are computed under each model. For in-the-money and out-of-the-money options, MMJDMSI has either the smallest or the second-smallest pricing error, and it also reproduces the volatility smile curve.
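The model's structure can be conveyed by a Monte Carlo pricing sketch under a two-state Markov-modulated jump diffusion with normal jumps in the log-price. The thesis derives closed-form prices via the Esscher transform instead; the discretized regime switching below is a first-order approximation, and every parameter value is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two-state Markov-modulated jump diffusion (all parameters hypothetical).
Q = np.array([[-0.5, 0.5], [1.0, -1.0]])    # regime generator matrix
sigma = np.array([0.15, 0.35])              # diffusion volatility per regime
lam = np.array([0.5, 2.0])                  # jump intensity per regime
mu_j, sig_j = -0.05, 0.10                   # normal jump sizes in log-price
r, s0, k, tmat = 0.02, 100.0, 100.0, 1.0    # rate, spot, strike, maturity

n_paths, n_steps = 20_000, 252
dt = tmat / n_steps
kappa = np.exp(mu_j + 0.5 * sig_j**2) - 1   # E[e^J - 1], jump compensator

logs = np.full(n_paths, np.log(s0))
state = np.zeros(n_paths, dtype=int)
for _ in range(n_steps):
    # first-order approximation of regime switching over one time step
    switch = rng.random(n_paths) < -Q[state, state] * dt
    state = np.where(switch, 1 - state, state)
    sig, lm = sigma[state], lam[state]
    njump = rng.poisson(lm * dt)
    # sum of njump normal jumps; scale 0 collapses to "no jump"
    jumps = rng.normal(mu_j * njump, sig_j * np.sqrt(njump))
    logs += ((r - lm * kappa - 0.5 * sig**2) * dt
             + sig * np.sqrt(dt) * rng.normal(size=n_paths) + jumps)

price = np.exp(-r * tmat) * np.maximum(np.exp(logs) - k, 0.0).mean()
print(f"Monte Carlo European call price: {price:.2f}")
```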
210

Joint models for longitudinal and survival data

Yang, Lili 11 July 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Epidemiologic and clinical studies routinely collect longitudinal measures of multiple outcomes. These longitudinal outcomes can be used to establish the temporal order of relevant biological processes and their association with the onset of clinical symptoms. In the first part of this thesis, we proposed bivariate change point models for two longitudinal outcomes, with a focus on estimating the correlation between the two change points. We adopted a Bayesian approach for parameter estimation and inference. In the second part, we considered the situation when a time-to-event outcome is also collected, along with multiple longitudinal biomarkers measured until the occurrence of the event or censoring. Joint models for longitudinal and time-to-event data can be used to estimate the association between the characteristics of the longitudinal measures over time and survival time. We developed a maximum-likelihood method to jointly model multiple longitudinal biomarkers and a time-to-event outcome. In addition, we focused on predicting conditional survival probabilities and evaluating the predictive accuracy of multiple longitudinal biomarkers within the joint modeling framework. We assessed the performance of the proposed methods in simulation studies and applied the new methods to data sets from two cohort studies. / National Institutes of Health (NIH) Grants R01 AG019181, R24 MH080827, P30 AG10133, R01 AG09956.
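The conditional survival probabilities at the heart of the dynamic-prediction part reduce to a ratio of survival functions, P(T > u | T > t) = S(u) / S(t). A minimal sketch with a stand-in Weibull curve is shown below; in a fitted joint model, S would additionally condition on each subject's biomarker history through the random effects.

```python
import numpy as np

def conditional_survival(surv_fn, t, u):
    """Probability of surviving to horizon u given survival to landmark t:
    P(T > u | T > t) = S(u) / S(t)."""
    return surv_fn(u) / surv_fn(t)

# Illustrative marginal survival curve (hypothetical Weibull parameters).
lam, shape = 0.1, 1.3
weibull_surv = lambda t: np.exp(-(lam * t) ** shape)

for t in [1.0, 3.0, 5.0]:
    p = conditional_survival(weibull_surv, t, t + 2.0)
    print(f"P(T > {t + 2:.0f} | T > {t:.0f}) = {p:.3f}")
```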
