1 |
Evaluating multiple endpoints in heart failure clinical trials. Yang, Yijun. 12 March 2016.
The selection of the best response variables in a clinical trial is often not straightforward: the primary endpoint should be clinically relevant, directly related to the primary objective of the trial, and efficient enough to detect the treatment benefit with a reasonable sample size and trial duration. With recent successes in the management of heart failure, the mortality rate has dropped significantly compared with two decades ago, yet patients with heart failure still have high rates of hospitalization and morbid complications, along with multiple symptoms and severe limitations in daily activities. Although mortality remains important as a measure of clinically relevant benefit and of the safety of an intervention, its low event rate means that larger and longer clinical trials are required to detect the benefit of a new intervention when mortality is the sole primary endpoint. Most heart failure trials therefore use a combined endpoint of death and a second efficacy outcome, such as hospitalization. This combined endpoint is often analyzed with a time-to-first-event survival analysis, which ignores subsequent hospitalizations and treats death and first hospitalization as equal in importance and in the hierarchy of clinical relevance. Accounting for the recurrent events, and for death following the hospitalization(s), provides more detailed information on the disease-control process and the treatment benefit.
In this dissertation we propose as primary endpoint a hierarchical endpoint, with death given higher priority and the number of hospitalization events lower priority, to assess the benefit of an experimental treatment versus a control using a non-parametric generalized Gehan-Wilcoxon test. In addition to the hierarchical endpoint, we evaluate the assessment of treatment benefit on recurrent events with a multi-state model fitted via an extended stratified Cox model, accounting for the states through which patients may transition during the study. We compare the false positive rate and power of these methods with the composite endpoint approach and with recurrent event endpoint approaches analyzed using the Andersen-Gill, WLW, and PWP models in simulation studies. Finally, we apply all evaluated procedures to the Digitalis Investigation Group (DIG) trial.
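To make the hierarchical comparison concrete, the following minimal Python sketch (illustrative only, not the dissertation's code) implements the pairwise logic behind a generalized Gehan-Wilcoxon statistic for this kind of endpoint: each treated-control pair is compared first on death, respecting censoring, and falls back to hospitalization counts only when survival cannot decide the pair. The Patient structure and all values are hypothetical, and the permutation or asymptotic variance needed for an actual test is omitted.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    time: float        # follow-up time (death or censoring), in days
    died: bool         # True if `time` is a death time
    n_hosp: int        # number of hospitalizations during follow-up

def compare_pair(a: Patient, b: Patient) -> int:
    """Return +1 if `a` fared better, -1 if worse, 0 if undecided.

    Level 1 (death): decided only when one patient is known to have
    remained under observation beyond the other's death time.
    Level 2 (hospitalizations): fewer events is better.
    """
    # Level 1: death, respecting censoring.
    if a.died and b.time >= a.time:   # a died while b was still observed
        return -1
    if b.died and a.time >= b.time:
        return +1
    # Level 2: number of hospitalization events.
    if a.n_hosp != b.n_hosp:
        return +1 if a.n_hosp < b.n_hosp else -1
    return 0

def gehan_wilcoxon_score(treated: list[Patient], control: list[Patient]) -> int:
    """Sum of pairwise wins minus losses over all treated-vs-control pairs."""
    return sum(compare_pair(t, c) for t in treated for c in control)

# Toy example: two treated and two control patients.
treated = [Patient(720, False, 1), Patient(365, True, 3)]
control = [Patient(400, True, 2), Patient(720, False, 4)]
print(gehan_wilcoxon_score(treated, control))
```

A positive total score favors the treated arm; pairs tied at both levels contribute zero.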
|
2 |
以重複事件模型分析破產機率 / Recurrent Event Analysis of Bankruptcy Probability. Tseng, Shih Huai (曾士懷). Date unknown.
Bankruptcy prediction has been of great interest to academics in the fields of accounting and finance for decades. The prior literature focuses mostly on investigating the covariates that lead to bankruptcy. In this thesis, however, we extend the question to the possible covariates that cause significant jumps in a company's bankruptcy probability.
We consider the BSM probability measure examined by Hillegeist, Keating, Cram, and Lundstedt (2004) to calculate the variation in bankruptcy probabilities for companies. In addition, recurrent event data analysis is applied to explore these jumps in bankruptcy intensity.
Investigating a sample of 343 S&P 500-listed companies with 17,836 quarterly observations from 1994 to 2007, we find that, in three of our models, all six covariates are negatively related to the recurrence of the event that a company suffers a significant jump in its bankruptcy probability during the next quarter. Macroeconomic covariates have greater explanatory power as factors affecting the probability of these jumps, while company-specific covariates contribute less to these recurrences. For comparison, we conduct another estimation based on observations of slight increases in bankruptcy probability. Contrary to the findings on the prior dataset, the empirical results suggest that the factors evoking these events are less prominent and that their influences on event recurrence are mixed.
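For reference, the BSM probability in question can be written as N(-(ln(V/X) + (mu - delta - sigma^2/2)T) / (sigma sqrt(T))), where V is the market value of assets, X the face value of debt, mu the asset drift, delta the dividend rate, and sigma the asset volatility. The short Python sketch below evaluates this expression for given inputs; it is illustrative only, and in Hillegeist et al. (2004) V, sigma, and mu are first backed out from equity data by solving the option-pricing equations, a step not shown here. All variable names and values are hypothetical.

```python
from math import log, sqrt
from statistics import NormalDist

def bsm_default_probability(asset_value: float,
                            debt_face_value: float,
                            asset_drift: float,
                            dividend_rate: float,
                            asset_vol: float,
                            horizon: float = 1.0) -> float:
    """Probability that assets fall below the face value of debt at `horizon`.

    BSM-Prob = N(-(ln(V/X) + (mu - delta - sigma^2/2) * T) / (sigma * sqrt(T)))
    """
    num = log(asset_value / debt_face_value) \
          + (asset_drift - dividend_rate - 0.5 * asset_vol ** 2) * horizon
    dd = num / (asset_vol * sqrt(horizon))   # distance to default
    return NormalDist().cdf(-dd)

# Toy example: assets twice the face value of debt, 30% asset volatility.
print(bsm_default_probability(asset_value=200.0, debt_face_value=100.0,
                              asset_drift=0.08, dividend_rate=0.02,
                              asset_vol=0.30, horizon=1.0))
```

A quarter-over-quarter jump in this probability is then what the recurrent event analysis treats as an event.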
|
3 |
Analyses of 2002-2013 China's Stock Market Using the Shared Frailty Model. Tang, Chao. 01 August 2014.
This thesis adopts a survival model to analyze China's stock market. The data used are the capitalization-weighted stock market index (CSI 300) and the 300 stocks composing the index. We define the recurrent events using the daily returns of the selected stocks and the index. A shared frailty model, which incorporates random effects, is then used for the analyses, since the survival times of individual stocks are correlated. Maximization of a penalized likelihood is used to estimate the parameters of the model. The covariates are selected using the Akaike information criterion (AIC) and the variance inflation factor (VIF) to avoid multicollinearity. The results of the analyses show that general capital, the total amount of a stock traded in a day, the turnover rate, and the price-to-book ratio are significant in the shared frailty model for the daily stock data.
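The effect of the shared frailty can be illustrated with a short simulation sketch (not from the thesis; all parameter values are invented): stocks in a group share one gamma frailty Z with mean 1 and variance theta that multiplies a common baseline event rate, so a larger theta induces stronger within-group correlation in event counts.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_group(n_stocks: int, baseline_rate: float, theta: float,
                   followup: float) -> np.ndarray:
    """Event counts for one group of stocks sharing a gamma frailty.

    Z ~ Gamma(shape=1/theta, scale=theta) has mean 1 and variance theta;
    given Z, each stock's events follow a Poisson process with rate
    Z * baseline_rate, so counts over `followup` days are Poisson."""
    z = rng.gamma(shape=1.0 / theta, scale=theta)
    return rng.poisson(z * baseline_rate * followup, size=n_stocks)

# 2000 groups of 5 stocks each: weak vs. strong unobserved heterogeneity.
weak = np.array([simulate_group(5, 0.008, 0.05, 250.0) for _ in range(2000)])
strong = np.array([simulate_group(5, 0.008, 2.0, 250.0) for _ in range(2000)])

# Correlation between two stocks in the same group rises with theta.
print(np.corrcoef(weak[:, 0], weak[:, 1])[0, 1])      # small
print(np.corrcoef(strong[:, 0], strong[:, 1])[0, 1])  # clearly positive
```

The penalized likelihood estimation in the thesis recovers theta and the regression coefficients from data of this structure.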
|
4 |
A Comparison of Multiple Imputation Methods for Missing Covariate Values in Recurrent Event Data. Huo, Zhao. January 2015.
Multiple imputation (MI) is a commonly used approach to impute missing data. This thesis studies missing covariates in recurrent event data and discusses ways to include the survival outcomes in the imputation model. The MI methods under consideration combine the event indicator D with, respectively, the right-censored event times T, the logarithm of T, and the cumulative baseline hazard H0(T). After imputation, we proceed to the complete-data analysis. The Cox proportional hazards (PH) model and the PWP model are chosen as the analysis models, and the coefficient estimates are of substantive interest. A Monte Carlo simulation study is conducted to compare the different MI methods, with relative bias and mean squared error used in the evaluation. Furthermore, an empirical study is conducted based on cardiovascular disease event data containing missing values. Overall, the results show that MI based on the Nelson-Aalen estimate of H0(T) is preferred in most circumstances.
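The preferred auxiliary variable has a simple closed form: the Nelson-Aalen estimate H0(t) = sum over event times t_j <= t of d_j / n_j, evaluated at each subject's own follow-up time and then entered, together with D, into the imputation model. The sketch below (illustrative Python with made-up data; the imputation step itself, e.g. chained equations, is not shown) computes this quantity.

```python
import numpy as np

def nelson_aalen_at_own_time(time: np.ndarray, event: np.ndarray) -> np.ndarray:
    """Nelson-Aalen cumulative hazard H0 evaluated at each subject's time.

    H0(t) = sum_{event times t_j <= t} d_j / n_j, where d_j is the number
    of events at t_j and n_j the number still at risk just before t_j.
    """
    order = np.argsort(time)
    t_sorted, e_sorted = time[order], event[order]
    n = len(time)
    h = np.zeros(n)
    cum = 0.0
    i = 0
    while i < n:
        # Group tied times so each distinct time contributes d/n once.
        j = i
        d = 0
        while j < n and t_sorted[j] == t_sorted[i]:
            d += e_sorted[j]
            j += 1
        at_risk = n - i
        cum += d / at_risk
        h[order[i:j]] = cum    # H0 at each subject's own time
        i = j
    return h

time = np.array([5.0, 8.0, 8.0, 12.0, 15.0])
event = np.array([1, 1, 0, 1, 0])
print(nelson_aalen_at_own_time(time, event))  # [0.2 0.45 0.45 0.95 0.95]
```

The resulting column would then serve as a covariate of the imputation model alongside D and the other variables.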
|
5 |
Flexible Joint Hierarchical Gaussian Process Model for Longitudinal and Recurrent Event Data. Su, Weiji. 22 October 2020.
No description available.
|
6 |
以重複事件分析法分析現金增資 / Recurrent event analysis of seasoned equity offerings. Liu, Pei Yun (劉佩芸). Date unknown.
In the field of corporate finance, three mainstream theories address firms' financing decisions: the static trade-off theory, the pecking order theory, and the market timing theory. In this thesis we apply recurrent event analysis, first following the independent variables used in Baker and Wurgler (2002), to examine the factors that affect a firm's hazard rate of offering seasoned equity. The results indicate that higher leverage is positively related to the hazard rate of a firm's seasoned equity offering, evidence that firms' financing decisions follow the pecking order theory to some degree. However, under recurrent event analysis the market timing effect is insignificant when only the independent variables of Baker and Wurgler (2002) are considered. We therefore construct another set of covariates in which a price trend factor is substituted for the market-to-book ratio to represent market timing; with this substitution, the market timing effect turns out to be significant. Rather than drawing a definitive conclusion on the market timing theory, we suggest that, beyond the variables of Baker and Wurgler (2002), including the market price trend directly in the model may be a suitable way to examine the market timing effect.
|
7 |
Modelagem de dados de eventos recorrentes via processo de Poisson com termo de fragilidade / Modelling Recurrent Event Data via Poisson Process with a Frailty Term. Tomazella, Vera Lucia Damasceno. 28 July 2003.
In this thesis we analyse situations where the event of interest may occur more than once for the same individual. Although studies in this area have received considerable attention in recent years, the techniques applicable to these special cases remain little explored, and in such problems it is reasonable to assume that there is dependency among the observations. One way of incorporating this dependency is to introduce a random effect into the modelling of the hazard function, giving rise to frailty models, which in survival analysis describe the unobserved heterogeneity among the units under study. The statistical models presented here are survival models based on counting processes: the problem is represented as a homogeneous or nonhomogeneous Poisson process with a frailty term, under which an individual with a given fixed covariate vector x experiences recurrent events. These models are divided into two classes, multiplicative and additive frailty models, corresponding to different ways of assessing the influence of heterogeneity among individuals on the intensity function of the counting process. Until now most studies have used a gamma distribution for the frailty term, owing to its mathematical convenience; this work shows that the inverse Gaussian distribution has equally simple properties, and the consequences of the different distributions are examined to show that the choice of frailty distribution is important. The aim of this work is to propose statistical methods for the analysis of recurrent events and to verify the effect of introducing the random term into the model through the cost it imposes on the estimation of the other parameters of interest. A bootstrap simulation study is also presented for inference on the parameters of interest. In addition, a Bayesian approach is proposed for the multiplicative and additive frailty models, with simulation methods used to evaluate the posterior quantities of interest. Finally, to illustrate the methodology, we consider a real data set from an experimental animal carcinogenesis study.
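One convenient consequence of the multiplicative gamma frailty is worth spelling out: for a homogeneous Poisson process with mean mu_i = lambda0 * exp(x_i'beta) * tau_i over follow-up tau_i, and a gamma frailty with mean 1 and variance theta, integrating the frailty out gives a negative binomial distribution for the event count. The following Python sketch (illustrative only, not the thesis code) evaluates that marginal log-likelihood; all data values are made up.

```python
import numpy as np
from scipy.special import gammaln

def gamma_frailty_loglik(beta, log_lambda0, theta, X, counts, tau):
    """Marginal log-likelihood of event counts under a multiplicative
    gamma frailty: N_i | Z_i ~ Poisson(Z_i * mu_i), Z_i ~ Gamma(1/theta, theta),
    so marginally N_i is negative binomial with mean mu_i and overdispersion
    theta, where mu_i = lambda0 * exp(x_i' beta) * tau_i."""
    mu = np.exp(log_lambda0 + X @ beta) * tau
    a = 1.0 / theta                      # gamma shape parameter
    n = counts
    return np.sum(gammaln(n + a) - gammaln(a) - gammaln(n + 1.0)
                  + a * np.log(a / (a + mu))
                  + n * np.log(mu / (a + mu)))

# Toy data: one binary covariate, unit follow-up for all four subjects.
X = np.array([[0.0], [1.0], [1.0], [0.0]])
counts = np.array([1, 4, 2, 0])
tau = np.ones(4)
print(gamma_frailty_loglik(np.array([0.5]), np.log(1.2), 0.8, X, counts, tau))
```

The inverse Gaussian frailty studied in the thesis yields a different but similarly tractable marginal, which is precisely why the choice of frailty distribution matters.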
|
8 |
Regresní analýza výskytu opakovaných událostí / Regression analysis of recurrent events. Rusá, Pavla. January 2018.
In this thesis we study methods for the regression analysis of recurrent events, in which one must deal with the dependence of event times within a single subject. The first part of the thesis considers a possible extension of the Cox proportional hazards model, used in the analysis of censored data, to the analysis of recurrent events. The main part of the thesis is devoted to the estimation of parameters in marginal models and their asymptotic properties. We then also address the estimation of parameters in marginal models for multivariate censored data. The suitability of the marginal models is examined through simulations.
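As a concrete illustration of the counting-process formulation underlying these Cox-model extensions, each subject's follow-up can be split into (start, stop] intervals with an event indicator at each stop time, and an Andersen-Gill-style fit run on that layout. The sketch below uses the Python lifelines library, assuming it is installed; the data frame is made up, and this is not the thesis's own code. A full marginal analysis would add a robust sandwich variance to account for within-subject correlation, which this sketch omits.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Counting-process layout: one row per at-risk interval per subject.
# Subject 1 has events at t = 3 and t = 7, then is censored at t = 10;
# subject 2 has one event at t = 5; subject 3 has one event at t = 4.
df = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2, 3, 3],
    "start": [0, 3, 7, 0, 5, 0, 4],
    "stop":  [3, 7, 10, 5, 10, 4, 8],
    "event": [1, 1, 0, 1, 0, 1, 0],
    "treat": [1, 1, 1, 0, 0, 0, 0],
})

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event",
        start_col="start", stop_col="stop")
ctv.print_summary()
```

Gap-time variants of this layout (resetting the clock after each event, as in the PWP model) use the same table with start and stop measured from the previous event.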
|
9 |
Statistical Methods for Life History Analysis Involving Latent Processes. Shen, Hua. January 2014.
Incomplete data often arise in the study of life history processes. Examples include missing responses, missing covariates, and unobservable latent processes in addition to right censoring. This thesis is on the development of statistical models and methods to address these problems as they arise in oncology and chronic disease. Methods of estimation and inference in parametric, weakly parametric and semiparametric settings are investigated.
Studies of chronic diseases routinely sample individuals subject to conditions on an event time of interest. In epidemiology, for example, prevalent cohort studies aiming to evaluate risk factors for survival following onset of dementia require subjects to have survived to the point of screening. In clinical trials designed to assess the effect of experimental cancer treatments on survival, patients are required to survive from the time of cancer diagnosis to recruitment. Such conditions yield samples featuring left-truncated event time distributions. Incomplete covariate data often arise in such settings, but standard methods do not deal with the fact that the covariate distribution is also affected by left truncation. We develop a likelihood and an estimation algorithm for dealing with incomplete covariate data in such settings. An expectation-maximization algorithm deals with the left truncation by using the covariate distribution conditional on the selection criterion. An extension to deal with sub-group analyses in clinical trials is described for the case in which the stratification variable is incompletely observed.
In studies of affective disorder, individuals are often observed to experience recurrent symptomatic exacerbations warranting hospitalization. Interest lies in modeling the occurrence of such exacerbations over time and identifying associated risk factors to better understand the disease process. In some patients, recurrent exacerbations are temporally clustered following disease onset, but cease to occur after a period of time. We develop a dynamic mover-stayer model in which a canonical binary variable associated with each event indicates whether the underlying disease has resolved. An individual whose disease process has not resolved will experience events following a standard point process model governed by a latent intensity. If and when the disease process resolves, the complete data intensity becomes zero and no further events will arise. An expectation-maximization algorithm is developed for parametric and semiparametric model fitting based on a discrete time dynamic mover-stayer model and a latent intensity-based model of the underlying point process. The method is applied to a motivating dataset from a cohort of individuals with affective disorder experiencing recurrent hospitalization for their mental health disorder.
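The E-step logic can be conveyed with a deliberately simplified, static version of the mover-stayer idea (the thesis's model is dynamic, allowing resolution after each event, and includes covariates; this sketch has neither): with probability pi a subject is a stayer whose event count is zero with certainty, otherwise counts are Poisson, and only zero counts have uncertain membership. All values below are invented.

```python
import numpy as np

def mover_stayer_em(counts: np.ndarray, n_iter: int = 200):
    """EM for a static mover-stayer mixture on event counts.

    With probability pi a subject is a 'stayer' (resolved process,
    count is 0 with certainty); otherwise counts are Poisson(mu).
    """
    counts = np.asarray(counts, dtype=float)
    pi, mu = 0.3, counts.mean() + 0.1           # crude starting values
    for _ in range(n_iter):
        # E-step: posterior P(stayer | count); nonzero counts force "mover".
        p_zero_mover = np.exp(-mu)              # Poisson P(N = 0) for movers
        w = np.where(counts == 0,
                     pi / (pi + (1 - pi) * p_zero_mover),
                     0.0)
        # M-step: weighted updates of the mixing weight and the rate.
        pi = w.mean()
        mover_weight = 1.0 - w
        mu = np.sum(mover_weight * counts) / np.sum(mover_weight)
    return pi, mu

counts = np.array([0, 0, 0, 0, 0, 1, 2, 4, 3, 0, 5, 2])
print(mover_stayer_em(counts))
```

In the dynamic version each event updates the resolution probability, but the alternation between posterior membership weights and weighted parameter updates is the same.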
Interval-censored recurrent event data arise when the event of interest is not readily observed but the cumulative event count can be recorded at periodic assessment times. The model fitting techniques for the dynamic mover-stayer model are extended to incorporate interval censoring. The likelihood and estimation algorithm are developed for piecewise constant baseline rate functions and are shown to yield estimators with small empirical bias in simulation studies. Data on the cumulative number of damaged joints in patients with psoriatic arthritis are analysed to provide an illustrative application.
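The building block of the interval-censored extension is the Poisson likelihood of panel counts under a piecewise constant baseline rate: the number of events in an assessment interval (a, b] is Poisson with mean exp(x'beta) * (Lambda0(b) - Lambda0(a)). The sketch below (illustrative values and names, with the mover-stayer structure omitted) evaluates this log-likelihood at fixed parameters; estimation would maximize it over beta and the piece rates.

```python
import numpy as np
from scipy.stats import poisson

# Piecewise constant baseline rate: cut points and a rate on each piece.
cuts = np.array([0.0, 2.0, 5.0, 10.0])   # years
rates = np.array([0.8, 0.4, 0.2])        # events per year on each piece

def cumulative_rate(t: float) -> float:
    """Lambda0(t) implied by the piecewise constant baseline rate."""
    exposure = np.clip(t - cuts[:-1], 0.0, np.diff(cuts))
    return float(np.sum(rates * exposure))

def panel_loglik(beta, x, a, b, counts) -> float:
    """Log-likelihood of interval counts: events in (a_j, b_j] are Poisson
    with mean exp(x'beta) * (Lambda0(b_j) - Lambda0(a_j))."""
    means = np.exp(x @ beta) * np.array(
        [cumulative_rate(bj) - cumulative_rate(aj) for aj, bj in zip(a, b)])
    return float(np.sum(poisson.logpmf(counts, means)))

# One subject assessed at t = 2, 5, 10 with interval counts 2, 1, 0
# and a single covariate.
x = np.array([1.0])
print(panel_loglik(np.array([0.3]), x,
                   a=[0.0, 2.0, 5.0], b=[2.0, 5.0, 10.0],
                   counts=np.array([2, 1, 0])))
```

With damaged-joint data, the assessment times are clinic visits and the counts are increments in the cumulative number of damaged joints between visits.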
|