241 |
Ensaios em cópulas e finanças empíricas / Essays on copulas and empirical finance. Silva, Fernando Augusto Boeira Sabino da, January 2017.
In this thesis we discuss copula-based approaches to describing statistical dependencies among financial instruments and evaluate their performance. Many financial crises have occurred since the late 1990s, including the Asian crisis (1997), the Russian national debt crisis (1998), the dot-com bubble crisis (2000), the crises after 9/11 (2001) and the Iraq war (2003), the subprime mortgage crisis or global financial crisis (2007-08), and the European sovereign debt crisis (2009). All of these crises led to a massive loss of financial wealth and a rise in observed volatility, and they emphasized the importance of a more robust macro-prudential policy. In other words, financial disruptions make economic processes highly nonlinear, leading the major central banks to take countermeasures to contain financial distress. The methods for modeling uncertainty and evaluating market risk have come under greater scrutiny since the global financial crisis. Due to the complex dependence patterns of financial markets, a high-dimensional multivariate approach to tail dependence analysis is surely more insightful than assuming multivariate normal returns. Given their flexibility, copulas can better model the empirically verified regularities normally attributed to multivariate financial returns: (1) asymmetric conditional volatility, with higher volatility for large negative returns and smaller volatility for positive returns (HAFNER, 1998); (2) conditional skewness (AIT-SAHALIA; BRANDT, 2001; CHEN; HONG; STEIN, 2001; PATTON, 2001); (3) excess kurtosis (TAUCHEN, 2001; ANDREOU; PITTIS; SPANOS, 2001); and (4) nonlinear temporal dependence (CONT, 2001; CAMPBELL; LO; MACKINLAY, 1997). The principal contribution of the essays is to assess whether approaches more sophisticated than the distance method and the plain Markowitz model can take advantage of market anomalies and frictions. The essays attempt to provide a proper analysis of these issues using long-term, comprehensive datasets. We show empirically that copula-based approaches are useful in all essays, proving beneficial for modeling dependencies in different scenarios, assessing downside risk measures more adequately, and yielding higher profitability than the benchmarks.
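The copula machinery these essays build on rests on Sklar's theorem; the following is the standard statement (our summary for reference, not text from the thesis):

```latex
% Sklar's theorem: any joint distribution H of (X_1,...,X_d) with
% marginal distributions F_1,...,F_d admits the representation
\[
  H(x_1,\dots,x_d) \;=\; C\!\bigl(F_1(x_1),\dots,F_d(x_d)\bigr),
\]
% where C is a copula, unique when the marginals are continuous.
% Dependence is thus modeled by C separately from the marginal
% behavior of each return series.
```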
|
242 |
Value at Risk no mercado financeiro internacional: avaliação da performance dos modelos nos países desenvolvidos e emergentes / Value at Risk in international finance: evaluation of model performance in developed and emerging countries. Luiz Eduardo Gaio, 01 April 2015.
Given the requirements stipulated by regulatory agencies and international agreements, and in view of the numerous financial crises of recent decades, financial institutions have developed several tools to measure and control the risk inherent in their business. Despite the steady evolution of risk measurement methodologies, Value at Risk (VaR) has become the reference tool for estimating market risk. In recent years, new techniques for calculating VaR have been developed; however, none has been found to fit risks best across different markets and at different times, and the literature offers no single model consistent with the diversity of markets. The general objective of this work is therefore to evaluate the market risk estimates generated by VaR-based models, applied to the indices of the major stock exchanges of developed and emerging countries in both normal and financial crisis periods, in order to identify the models most effective in that role. The study considered unconditional VaR, using the traditional models (Historical Simulation, Delta-Normal, and Student's t) and models based on Extreme Value Theory; conditional VaR, comparing the ARCH-family models and RiskMetrics; and multivariate VaR, with bivariate GARCH models (VECH, BEKK, and CCC), copula functions (Student's t, Clayton, Frank, and Gumbel), and Artificial Neural Networks. The database consists of daily returns of the major stock indices of developed countries (Germany, United States, France, United Kingdom, and Japan) and emerging countries (Brazil, Russia, India, China, and South Africa) from 1995 to 2013, covering the crises of 1997 and 2008. The results were somewhat different from the premises established by the research hypotheses. Across more than a thousand fitted models, the conditional models were superior to the unconditional ones in the majority of cases; in particular the GARCH(1,1) model, a standard in the literature, produced adequate fits in 93% of the cases. For the multivariate analysis it was not possible to single out a best model: the VECH, BEKK, and Clayton copula models performed similarly, with good fits in 100% of the tests. Contrary to expectations, no significant differences emerged between the fits for developed and emerging countries, or between crisis and normal periods. The study contributes the insight that the models used by financial institutions, even those recommended by renowned institutions, are not the ones that best estimate market risk. A deeper analysis of the performance of the risk estimators, using simulations with each financial institution's portfolios, remains warranted.
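As an illustration of the kind of backtesting such a comparison requires (our sketch, not the thesis's code; the function names and the Historical Simulation/Kupiec pairing are our choices), a one-day VaR estimator and a proportion-of-failures test can be written as:

```python
import numpy as np
from scipy import stats

def historical_var(returns, alpha=0.01):
    """One-day VaR by Historical Simulation: the empirical
    alpha-quantile of past returns, reported as a positive loss."""
    return -np.quantile(returns, alpha)

def kupiec_pof(returns, var_forecasts, alpha=0.01):
    """Kupiec proportion-of-failures test: do VaR violations occur
    at the nominal rate alpha? Returns the LR statistic and p-value."""
    violations = returns < -var_forecasts            # days the loss exceeded VaR
    T, x = len(returns), int(violations.sum())
    pi = x / T                                       # observed violation rate
    log_l0 = (T - x) * np.log(1 - alpha) + x * np.log(alpha)
    log_l1 = (T - x) * np.log(1 - pi) + (x * np.log(pi) if x else 0.0)
    lr = -2.0 * (log_l0 - log_l1)
    return lr, stats.chi2.sf(lr, df=1)               # LR ~ chi2(1) under H0

# illustrative usage on simulated heavy-tailed returns
rng = np.random.default_rng(0)
r = rng.standard_t(df=5, size=2000) * 0.01
window = 250                                         # rolling estimation window
var_series = np.array([historical_var(r[t - window:t])
                       for t in range(window, len(r))])
print(kupiec_pof(r[window:], var_series))
```

Under the rolling-window protocol, each day's VaR is estimated from the preceding window and then scored against the realized return, which is how "effectiveness of adjustment" across thousands of model fits is typically tallied.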
|
243 |
Cadre méthodologique et applicatif pour le développement de réseaux de capteurs fiables / The design of reliable sensor networks: methods and applications. Lalem, Farid, 11 September 2017.
Wireless sensor networks are emerging as an innovative technology that can revolutionize and improve the way we live, work, and interact with the physical environment around us. Nevertheless, the use of such technology raises new challenges for the development of reliable and secure systems. These wireless sensor networks are often characterized by dense, large-scale deployment in resource-constrained environments; the constraints imposed are limited processing, storage, and especially energy capacities, since the nodes are generally battery-powered. Our main objective in this thesis is to propose solutions that guarantee a certain level of reliability in a WSN dedicated to sensitive applications. We address three axes:
- The development of methods for detecting failed sensor nodes in a WSN;
- The development of methods for detecting anomalies in the measurements collected by sensor nodes, and subsequently worn-out sensors (those providing false measurements);
- The development of methods ensuring the integrity and authenticity of the data transmitted over a WSN.
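The abstract does not spell out the detection algorithms. Purely as a hypothetical illustration of the second axis (anomaly detection in collected measurements), a robust median/MAD outlier test over a window of readings might look like the sketch below; all names are ours and this is not the thesis's method:

```python
import numpy as np

def mad_outliers(readings, threshold=3.5):
    """Flag readings far from the window median, using the median
    absolute deviation (MAD) as a robust scale estimate."""
    readings = np.asarray(readings, dtype=float)
    med = np.median(readings)
    mad = np.median(np.abs(readings - med)) or 1e-9   # avoid division by zero
    robust_z = 0.6745 * (readings - med) / mad        # 0.6745 ~ Phi^{-1}(0.75)
    return np.abs(robust_z) > threshold

# A node whose readings are persistently flagged relative to its
# neighbours would be a candidate worn-out sensor.
```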
|
244 |
High-dimensional dependence modelling using Bayesian networks for the degradation of civil infrastructures and other applications / Modélisation de dépendance en grandes dimensions par les réseaux Bayésiens pour la détérioration d'infrastructures et autres applications. Kosgodagan, Alex, 26 June 2017.
This thesis explores high-dimensional deterioration-related problems using Bayesian networks (BN). Asset managers are becoming more familiar with reasoning under uncertainty, as traditional physics-based models fail to fully encompass the dynamics of large-scale degradation issues. Probabilistic dependence can achieve this, and the ability to incorporate randomness is enticing. Dependence in BN is mainly expressed in two ways: on the one hand, classic conditional probabilities that lean on the well-known Bayes rule; on the other, a more recent class of BN featuring copulae and rank correlation as dependence metrics. Both theoretical and practical contributions are presented for these two classes of BN, referred to as discrete dynamic and non-parametric BN, respectively. Issues related to the parametrization of each class are addressed. For the discrete dynamic class, we extend the current framework by incorporating an additional dimension; we observed that this dimension allows more control over the deterioration mechanism through the main endogenous variables governing it. For the non-parametric class, we demonstrate its remarkable capacity to handle a high-dimensional crack-growth problem for a steel bridge. We further show that this type of BN can characterize any Markov process.
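For background on the non-parametric class (a standard result, not a claim about this thesis's derivations): such networks are usually parametrized with the normal copula, whose rank correlation and product-moment correlation are linked in closed form, so rank correlations elicited on the arcs of the network convert exactly to copula parameters.

```latex
% For a bivariate normal copula with correlation rho, Spearman's
% rank correlation is
\[
  \rho_s \;=\; \frac{6}{\pi}\,\arcsin\!\left(\frac{\rho}{2}\right),
  \qquad\text{hence}\qquad
  \rho \;=\; 2\,\sin\!\left(\frac{\pi\,\rho_s}{6}\right).
\]
```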
|
245 |
Elaboration d'un score de vieillissement : propositions théoriques / Development of a score of ageing: proposal for a mathematical theory. Sarazin, Marianne, 21 May 2013.
Ageing is now a major public health problem, yet its description remains complex: individual and collective conceptualizations are interlaced, with a strong subjective dimension. Health professionals are increasingly required to integrate ageing and prevention into their thinking and to create adapted protocols and new tools. Ageing characterizes unavoidable changes in the body, usually measured by time-dependent age, called "chronological age". However, chronological age reflects imperfectly the actual wear of the body, which depends on many individual factors. For this reason it has long been proposed to replace it with a composite criterion called "biological age", supposed to better reflect the ageing process, and thereby to build an ageing score; here a new methodology is proposed, suited to general practice. The first phase of this work was a qualitative and quantitative survey conducted among general practitioners in metropolitan France, gathering data on their use of predictive scores in daily practice, the appropriateness of those scores, and the reasons for non-use. The results showed that declared use and the intellectualized conception of scores remain dissociated: predictive scores are useful tools for targeting an often complex systemic approach insofar as they are simple to use (few items, items suited to practice) and their scientific validity is understood by the physician. In addition, the patient's age was cited as a major criterion influencing a general practitioner's choice of score. The results of this first phase were then used to propose a model of biological age, with reflection both on the mathematical model and on its component variables. Variables serving as markers of ageing were selected from a review of the literature, taking into account how readily they can be integrated into the care process in general practice; this selection was consolidated by a mathematical approach using a forward selection procedure on a regression model. A "control" population, assumed to exhibit normal ageing, was then constituted as the comparative basis for computing biological age; it was chosen first from the literature and then refined by classification using the K-means method. A simple linear regression model was then built on data normalized with the Gaussian-copula method (capturing only the pairwise linear correlations), followed by a study of the tails of the marginal distributions (the excess method) to enhance the discriminating power of the model. The results suggest interesting avenues for refining the calculation of a biological age, and of the score derived from it, in general practice; its validation by a morbidity study will constitute the final stage of this work.
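A minimal sketch of the normalization step as we read it (variable names and the synthetic data are invented for illustration): each marker is mapped to Gaussian scores through its empirical ranks, the one-dimensional ingredient of a Gaussian-copula fit, after which a linear model for age can be estimated.

```python
import numpy as np
from scipy import stats

def normal_scores(x):
    """Map a sample to Gaussian scores via its empirical ranks
    (the marginal transform underlying a Gaussian-copula fit)."""
    ranks = stats.rankdata(x)                      # ranks 1..n, ties averaged
    return stats.norm.ppf(ranks / (len(x) + 1))

# illustrative: regress chronological age on copula-normalized markers
rng = np.random.default_rng(1)
age = rng.uniform(40, 90, 300)
markers = np.column_stack([age + rng.normal(0, 8, 300),      # a noisy biomarker
                           np.exp(age / 30) + rng.normal(0, 1, 300)])
Z = np.column_stack([normal_scores(m) for m in markers.T])
X = np.column_stack([np.ones(300), Z])
beta, *_ = np.linalg.lstsq(X, age, rcond=None)
biological_age = X @ beta                          # fitted "biological age"
```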
|
246 |
Copula theory and its applications in computer networks. Dong, Fang, 12 July 2017.
Traffic modeling in computer networks has been researched for decades. A good model should reflect the features of real-world network traffic: with a good model, synthetic traffic data can be generated for experimental studies, network performance can be analysed mathematically, and service provisioning and scheduling can be designed to track traffic changes. An important part of traffic modeling is to capture dependence, either the dependence among different traffic flows or the temporal dependence within the same traffic flow. Nevertheless, the power of dependence models, especially those that capture functional dependence, has not been fully explored in the domain of computer networks. This thesis studies copula theory, a theory for describing dependence between random variables, and applies it to better performance evaluation and network resource provisioning. We apply copulas to model both the contemporaneous dependence between traffic flows and the temporal dependence within the same flow. The resulting dependence models are powerful and capture functional dependence beyond the linear scope. With numerical examples, real-world experiments, and simulations, we show that copula modeling can benefit many applications in computer networks, including tightening performance bounds in statistical network calculus, capturing the full dependence structure in the Markov-Modulated Poisson Process (MMPP), MMPP parameter estimation, and predictive resource provisioning for cloud-based composite services.
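As a hedged illustration of contemporaneous dependence modeling (our sketch; the thesis's own constructions may differ), a Gaussian copula can couple two traffic-rate marginals as follows:

```python
import numpy as np
from scipy import stats

def gaussian_copula_rates(n, rho, mean1, mean2, seed=0):
    """Draw n dependent (rate1, rate2) pairs: a Gaussian copula with
    correlation rho coupling two exponential marginals."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)   # latent normals
    u = stats.norm.cdf(z)                                  # uniforms sharing the copula
    r1 = stats.expon.ppf(u[:, 0], scale=mean1)             # invert marginal CDFs
    r2 = stats.expon.ppf(u[:, 1], scale=mean2)
    return r1, r2

# e.g. two flows whose bursts tend to coincide (rho = 0.8)
flow_a, flow_b = gaussian_copula_rates(10_000, 0.8, mean1=5.0, mean2=2.0)
```

The design point is that the marginals (here exponential) and the dependence structure (here Gaussian) are chosen independently, which is what makes copula models flexible for traffic.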
|
247 |
Sur la dépendance des queues de distributions / On the tail dependence of distributions. Aleiyouka, Mohalilou, 27 September 2018.
Modeling the dependence between several variables can rely either on the correlation between the variables or on other measures that determine the tail dependence of their distributions. In this thesis we are interested in the tail dependence of distributions, presenting some properties and results. First, we obtain the tail dependence coefficient of the generalized hyperbolic law for the different values of the parameters of this law. Then, we exhibit properties and results for the extremal dependence coefficient in the case where the random variables follow a unit Fréchet law. Finally, we present real-time database management systems (RTDBMS); the goal is to propose probabilistic models for the behaviour of real-time transactions, in order to optimize their performance.
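For reference, the tail dependence coefficients studied here are standardly defined as follows (textbook definitions, not quotations from the thesis): for a pair (X_1, X_2) with marginals F_1, F_2,

```latex
\[
  \lambda_L \;=\; \lim_{u \to 0^+}
    \Pr\!\bigl(X_2 \le F_2^{-1}(u) \,\big|\, X_1 \le F_1^{-1}(u)\bigr),
  \qquad
  \lambda_U \;=\; \lim_{u \to 1^-}
    \Pr\!\bigl(X_2 > F_2^{-1}(u) \,\big|\, X_1 > F_1^{-1}(u)\bigr),
\]
% when the limits exist; in copula terms,
% lambda_L = lim_{u->0+} C(u,u)/u  and
% lambda_U = lim_{u->1-} (1 - 2u + C(u,u))/(1 - u).
```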
|
248 |
Survival Instantaneous Log-Odds Ratio From Empirical Functions. Jung, Jung Ah; Drane, J. Wanzer, 01 January 2007.
The objective of this work is to introduce a new method called the Survivorship Instantaneous Log-Odds Ratio (SILOR); to illustrate the creation of SILOR from empirical bivariate survival functions; to derive standard errors of estimation; and to compare results with those derived from logistic regression. Hip fracture, AGE, and BMI from the Third National Health and Nutrition Examination Survey (NHANES III) were used to calculate empirical survival functions for the adverse health outcome (AHO) and non-AHO. A stable copula was used to create a parametric bivariate survival function, which was fitted to the empirical bivariate survival function. The bivariate survival function had SILOR contours that are not constant. The proposed method has advantages over logistic regression for two reasons, concerning (i) the shape of the survival surface S(X1, X2) and (ii) the isobols of the log-odds ratios. With logistic regression the survival surface is either a hyperplane or at most a conic section; our approach preserves the shape of the survival surface in two dimensions, and the isobols are observed in every detail instead of being over-smoothed by a regression with no more than a second-degree polynomial. The present method is straightforward, and it captures all but the random variability of the data.
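As background on the construction (our gloss, not the paper's derivation): a parametric bivariate survival surface can be obtained from the marginal survival functions through a survival copula,

```latex
\[
  S(x_1, x_2) \;=\; \hat{C}\!\bigl(S_1(x_1),\, S_2(x_2)\bigr),
\]
% where S_1, S_2 are the marginal survival functions and \hat{C} is a
% copula; choosing a parametric family for \hat{C} (the paper uses a
% stable copula) yields an explicit surface that can be fitted to the
% empirical bivariate survival function.
```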
|
249 |
Développement de méthodes pour la validation de critères de substitution en survie : méta-analyses de cancer / Development of methods for the validation of time-to-event surrogate endpoints: meta-analysis of cancer. Sofeu, Casimir, 12 December 2019.
Surrogate endpoints can be used instead of the most relevant clinical endpoint to assess the efficacy of a new treatment. In a meta-analysis framework, the classical approach to validating a surrogate endpoint is based on a two-step analysis, which, for failure-time endpoints, often raises estimation issues. We propose a one-step validation approach based on a joint frailty model and a joint frailty-copula model. The models include both trial-level and individual-level random effects, or copula functions. We chose a non-parametric form for the baseline hazard functions using splines, and estimated the parameters and hazard functions by a semi-parametric penalized marginal likelihood method, considering various numerical integration methods. Both individual-level and trial-level surrogacy were evaluated using Kendall's tau and the coefficient of determination. The performance of the estimators was evaluated in simulation studies, and the models were applied to individual patient data meta-analyses of cancer clinical trials to assess potential surrogate endpoints for overall survival. The models were quite robust, reducing the convergence and estimation issues encountered in the two-step approach. We developed a user-friendly R package implementing the models.
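For reference (a standard identity, not taken from the thesis): individual-level surrogacy in copula models is typically summarized by Kendall's tau, which for a copula C is

```latex
\[
  \tau \;=\; 4 \int_0^1\!\!\int_0^1 C(u, v)\, \mathrm{d}C(u, v) \;-\; 1;
\]
% e.g. for the Clayton copula with parameter theta > 0,
% tau = theta / (theta + 2), giving a direct map between the fitted
% copula parameter and the strength of individual-level dependence
% between the surrogate and the final endpoint.
```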
|
250 |
Statistical and Machine Learning Approaches For Visualizing and Analyzing Large-Scale Simulation Data. Hazarika, Subhashis, January 2019.
No description available.
|