111

Essays in empirical macroeconomics with application to monetary policy in a data-rich environment

Ahmadi, Pooyan Amir 05 July 2010 (has links)
This thesis consists of four self-contained chapters. The first chapter provides an introduction and a literature overview. In Chapter 2 we estimate the effects of monetary policy shocks in a Bayesian factor-augmented vector autoregression (BFAVAR). As an identification strategy we propose theoretically founded sign restrictions, imposed on the impulse responses of pertinent variables. The key strength of the factor-based approach is that sign restrictions can be imposed on many variables, so that the impact of monetary policy shocks is pinned down more precisely and an exact identification can be approximated. In Chapter 3 the role of monetary policy during the U.S. interwar Great Depression is analyzed. The prominent role of monetary policy in that episode has been conventional wisdom since Friedman and Schwartz [1963]. The chapter captures the pertinent interwar dynamics with the BFAVAR methodology of the previous chapter and also examines the effects of the systematic component of monetary policy. We find the effects of monetary policy shocks and of the systematic component to have been present but moderate; the results caution against a predominantly monetary interpretation of the Great Depression. Chapter 4 analyzes macroeconomic dynamics within the Euro area. I propose a novel approach that jointly estimates a factor-based DSGE model and a structural dynamic factor model, capturing the rich cross-country interrelations in a parsimonious way while explicitly involving economic theory in the estimation procedure. To identify macroeconomic shocks I employ both sign restrictions derived from the estimated DSGE model and the rotation implied by the DSGE model. I find a high degree of comovement across the member countries, homogeneity in the monetary transmission mechanism, and heterogeneity in the transmission of technology shocks. The suggested approach results in a factor generalization of the DSGE-VAR methodology of Del Negro and Schorfheide [2004].
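As a rough sketch of sign-restriction identification of this kind (toy three-variable covariance and a hypothetical restriction pattern, not the thesis's data or restrictions): candidate impact matrices are obtained by rotating a Cholesky factor with random orthogonal matrices, keeping only draws whose impact responses carry the required signs.

import numpy as np

rng = np.random.default_rng(0)
sigma = np.array([[1.0, 0.3, 0.2],   # reduced-form residual covariance
                  [0.3, 1.0, 0.1],   # (toy numbers, 3 variables)
                  [0.2, 0.1, 1.0]])
chol = np.linalg.cholesky(sigma)

def random_orthogonal(n):
    # QR of a Gaussian matrix yields a uniformly distributed rotation
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))  # sign-fix for uniqueness

accepted = []
for _ in range(10000):
    b0 = chol @ random_orthogonal(3)        # candidate impact matrix
    col = b0[:, 0]
    if col[0] < 0:                          # normalize the shock's sign
        col = -col
    # hypothetical restriction: a contractionary monetary shock raises
    # variable 0 (interest rate) and lowers variables 1-2 on impact
    if col[1] < 0 and col[2] < 0:
        accepted.append(col)

print(len(accepted), "accepted impact vectors out of 10000 draws")
print("pointwise median impact:", np.median(accepted, axis=0))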
112

Estimação e diagnóstico na distribuição exponencial por partes em análise de sobrevivência com fração de cura / Estimation and diagnostics for the piecewise exponential distribution in survival analysis with cure fraction

Sibim, Alessandra Cristiane 31 March 2011 (has links)
The main objective of this work is to develop inference procedures, from a Bayesian perspective, for survival models with (or without) a cure fraction based on the piecewise exponential distribution. The Bayesian methodology relies on Markov chain Monte Carlo (MCMC) methods. To detect influential observations in the models considered, we use Bayesian case-deletion influence diagnostics based on the Kullback-Leibler divergence (Cho et al., 2009). Furthermore, we propose the destructive negative binomial cure rate model. The proposed model is more general than standard cure rate survival models, since it allows estimating the distribution of the number of causes not eliminated by an initial treatment.
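The two building blocks of such models can be written down in a few lines — a piecewise constant hazard and a standard mixture cure survival function (interval grid, rates and cure probability below are illustrative only):

import numpy as np

cuts = np.array([0.0, 1.0, 3.0, 6.0])      # interval endpoints (illustrative)
lam  = np.array([0.4, 0.25, 0.15, 0.1])    # constant hazard on each interval

def cum_hazard(t):
    # H(t) = sum over intervals of lambda_j * (time spent in interval j)
    edges = np.append(cuts, np.inf)
    t = np.atleast_1d(t)
    expo = np.clip(t[:, None] - edges[:-1], 0.0, np.diff(edges))
    return expo @ lam

def surv_cure(t, pi=0.3):
    # mixture cure model: S_pop(t) = pi + (1 - pi) * S(t)
    return pi + (1 - pi) * np.exp(-cum_hazard(t))

print(surv_cure(np.array([0.5, 2.0, 10.0, 100.0])))  # tends to pi as t grows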
113

Optimisation des méthodes algorithmiques en inférence bayésienne. Modélisation dynamique de la transmission d'une infection au sein d'une population hétérogène / Optimization of algorithmic methods for Bayesian inference. Dynamic modeling of infectious disease transmission in a heterogeneous population

Gajda, Dorota 13 October 2011 (has links)
This work consists of two parts, "Repeated estimation in Bayesian modelling" and "Modelling the transmission of infectious diseases in a population; parameter estimation". Techniques developed in the first part are used at the end of the second. The first part deals with the optimization of widely used stochastic algorithms, particularly in the context of Bayesian modelling. Such optimization matters especially in empirical studies of parameter estimators whose properties are evaluated over a large number of simulated data sets. When posterior distributions are not explicit, approximating them with iterative stochastic algorithms (of the Markov chain Monte Carlo family) is computationally expensive, because it has to be done for each data set. In this context, solutions are proposed that avoid an excessively large number of MCMC calls while still giving accurate results; the main technique studied is importance sampling, used in combination with MCMC in Bayesian simulation studies. The second part deals with epidemic models, in particular the compartmental SIS (Susceptible-Infectious-Susceptible) model in its stochastic version. The stochastic approach takes into account the heterogeneity of disease evolution in the population. Markov processes are studied in which the transition probabilities between states are nonlinear, so that the solution of the corresponding differential equation in the probabilities is in general not explicit. The main techniques used in this part are master-equation expansions applied to the SIS model with a constant population size. Properties of the parameter estimators are studied in both frequentist and Bayesian frameworks, applying the algorithmic optimizations of the first part to the Bayesian approach.
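One simple variant of the importance-sampling idea from the first part: instead of running a fresh MCMC chain per simulated data set, draws from a fixed proposal (here the prior of a toy Bernoulli model, not the thesis's epidemic models) are reweighted by each data set's likelihood:

import numpy as np
rng = np.random.default_rng(1)

theta = rng.beta(2, 2, size=50000)           # draws from the prior, reused
for _ in range(3):                           # loop over simulated data sets
    y = rng.binomial(1, 0.7, size=40)        # one simulated data set
    loglik = y.sum()*np.log(theta) + (len(y)-y.sum())*np.log1p(-theta)
    w = np.exp(loglik - loglik.max())        # self-normalized weights
    w /= w.sum()
    post_mean = np.sum(w * theta)
    ess = 1.0 / np.sum(w**2)                 # effective sample size check
    print(f"posterior mean {post_mean:.3f}, ESS {ess:.0f}")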
114

"Métodos de estimação na teoria de resposta ao item" / Estimation methods in item response theory

Azevedo, Caio Lucidius Naberezny 27 February 2003 (has links)
In this work we present the most important estimation methods for some classes of item response models (both dichotomous and polytomous). We discuss some properties of these methods, and we conducted appropriate simulations to compare their performance.
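For the dichotomous case, the two-parameter logistic (2PL) model below illustrates the basic objects the estimation methods operate on (illustrative item parameters; ability estimated here by a simple grid-based posterior mean):

import numpy as np
rng = np.random.default_rng(2)

a = np.array([1.2, 0.8, 1.5, 1.0])      # discrimination (illustrative)
b = np.array([-0.5, 0.0, 0.5, 1.0])     # difficulty

def p_correct(theta, a, b):
    # 2PL item characteristic curve
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# simulate one examinee, then estimate ability by posterior mean on a grid
theta_true = 0.8
y = rng.binomial(1, p_correct(theta_true, a, b))
grid = np.linspace(-4, 4, 401)
P = p_correct(grid[:, None], a, b)                  # grid x items
loglik = (y*np.log(P) + (1-y)*np.log(1-P)).sum(axis=1)
post = np.exp(loglik) * np.exp(-grid**2/2)          # N(0,1) prior on ability
post /= post.sum()
print("EAP ability estimate:", (grid*post).sum())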
115

Bayesian risk management: "Frequency does not make you smarter"

Fucik, Markus January 2010 (has links)
Within our research group Bayesian Risk Solutions we have coined the idea of Bayesian Risk Management (BRM). It calls for (1) more transparent and diligent data analysis as well as (2) open-minded incorporation of human expertise into risk management. In this dissertation we formalize a framework for BRM based on the two pillars of Hardcore-Bayesianism (HCB) and Softcore-Bayesianism (SCB), providing solutions for both claims. For data analysis we favor Bayesian statistics with its Markov chain Monte Carlo (MCMC) simulation algorithms, which give a full picture of data-induced uncertainty beyond classical point estimates. We calibrate twelve different stochastic processes to four years of CO2 price data, calculate derived risk measures (ex-ante/ex-post value-at-risks, capital charges, option prices) and compare them to their classical counterparts. When statistics fails for lack of reliable data, we propose our integrated Bayesian Risk Analysis (iBRA) concept, a basic guideline for an expertise-driven quantification of critical risks. We additionally review elicitation techniques and tools that support experts in expressing their uncertainty. Since Bayesian thinking is often blamed for arbitrariness, we introduce the idea of a Bayesian due diligence that judges expert assessments by their information content and inter-subjectivity. / This thesis deals with Bayesian risk management approaches to risk measurement, concentrating on three central questions: (1) How can risks be quantified transparently when only a limited number of suitable historical observations is available for data analysis? (2) How can they be quantified when a lack of suitable historical observations rules out data analysis altogether? (3) To what extent can arbitrariness in risk quantification be limited? For the first question the thesis proposes Bayesian statistics: unlike classical least-squares or maximum-likelihood point estimators, Bayesian posterior distributions explicitly measure data-induced parameter and model uncertainty. As an application, twelve different stochastic processes are calibrated to CO2 price time series with the efficient Bayesian Markov chain Monte Carlo (MCMC) simulation algorithm; because Bayesian statistics permits computing model probabilities for cardinal model comparison, log-variance processes are identified as by far the best model class. For selected processes, the effect of parameter uncertainty on derived risk measures (ex-ante/ex-post value-at-risks, regulatory capital charges, option prices) is examined: the differences between Bayesian and classical risk measures grow with the complexity of the model assumptions for the CO2 price, and Bayesian value-at-risks and capital charges are more conservative than their classical counterparts (a risk premium for parameter uncertainty). On the second question, the position taken here is that risk quantification without (sufficiently) reliable data can only proceed by incorporating expert knowledge, which requires a structured procedure; the integrated Bayesian Risk Analysis (iBRA) concept is therefore presented, uniting concepts, techniques and tools for the expert-based identification and quantification of risk factors and their dependencies, and offering ways to handle competing expert opinions. Since resource-efficient tools for quantifying expert knowledge are of particular practical interest, the online market PCXtrade and the online survey platform PCXquest were designed and repeatedly tested within this work. Two empirical studies further examined to what extent people can quantify their own uncertainty at all and how they assess experts' self-ratings; the results suggest that people tend to overestimate their forecasting abilities and tend to place high trust in assessments to which the expert himself expressed high confidence, although a considerable share of respondents view very high self-ratings negatively. Because Bayesianism takes probability as a measure of personal uncertainty, it offers no framework for verifying or falsifying assessments; this is sometimes equated with arbitrariness and may be one reason why openly practiced Bayesianism leads a shadow existence in Germany. The thesis therefore puts the concept of Bayesian due diligence up for discussion: a criteria-based evaluation of expert assessments that focuses in particular on their inter-subjectivity and information content.
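A small sketch of the HCB pillar's core point — risk measures that carry parameter uncertainty — under stated assumptions: i.i.d. normal log-returns with a conjugate normal-inverse-chi-square prior (toy data, not the thesis's CO2 processes), comparing a Bayesian posterior-predictive value-at-risk with the classical plug-in figure:

import numpy as np
from scipy import stats
rng = np.random.default_rng(3)

r = rng.normal(0.0002, 0.02, size=250)     # toy daily log-returns (1 year)
n, rbar, s2 = len(r), r.mean(), r.var(ddof=1)

# conjugate normal-inverse-chi-square posterior (weak prior, assumed values)
mu0, k0, nu0, s0 = 0.0, 1.0, 1.0, 0.02**2
kn, nun = k0 + n, nu0 + n
mun = (k0*mu0 + n*rbar) / kn
sn2 = (nu0*s0 + (n-1)*s2 + k0*n*(rbar-mu0)**2/kn) / nun

# posterior predictive draws -> Bayesian VaR with parameter uncertainty
sig2 = nun * sn2 / stats.chi2.rvs(nun, size=100000, random_state=rng)
mu   = rng.normal(mun, np.sqrt(sig2/kn))
r_new = rng.normal(mu, np.sqrt(sig2))
print("Bayesian 99%% VaR:  %.4f" % -np.quantile(r_new, 0.01))
# classical plug-in VaR ignores parameter uncertainty
print("classical 99%% VaR: %.4f" % -(rbar + np.sqrt(s2)*stats.norm.ppf(0.01)))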
116

Modélisation des données d'attractivité hospitalière par les modèles d'utilité / Modeling hospital attractiveness data using utility models

Saley, Issa 29 November 2017 (has links)
Understanding how patients choose hospitals is of major importance both for hospital managers and for policy makers: for the former, managing patient flows and the supply of care; for the latter, implementing healthcare system reforms. In this thesis we propose different ways of modelling patient admission data as a function of distance to a hospital, in order to forecast patient flows and compare a hospital's attractiveness with that of other hospitals. For instance, we use hierarchical Bayesian models for count data with possible spatial dependence, with applications to patient admission data from the Languedoc-Roussillon region. We also use discrete choice models such as RUMs. Given some limitations of these models for our purpose, we relax the utility-maximization assumption in favor of a more flexible one, under which an agent (patient) may choose a product (hospital) as soon as the utility that product provides reaches a certain satisfaction threshold, taking certain aspects into account. This approach is illustrated on 2009 asthma admissions at three hospitals in Hérault, computing the territorial reach of a given hospital.
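The utility-threshold idea can be contrasted with classical utility maximization in a few lines (utilities, noise and threshold below are made up for illustration):

import numpy as np
rng = np.random.default_rng(4)

# rows: patients, columns: hospitals (toy deterministic utilities + noise)
V = np.array([[1.2, 0.9, 0.4],
              [0.3, 1.1, 1.0],
              [0.8, 0.7, 0.9]])
U = V + rng.gumbel(size=V.shape)      # RUM-style random utility

tau = 1.0                             # satisfaction threshold (illustrative)
argmax_choice = U.argmax(axis=1)      # classical RUM: single best hospital
threshold_set = [np.flatnonzero(u >= tau) for u in U]  # all acceptable ones

for i, (c, s) in enumerate(zip(argmax_choice, threshold_set)):
    print(f"patient {i}: RUM choice {c}, acceptable set {s}")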
117

Méthodes bayésiennes pour l'analyse génétique / Bayesian methods for gene expression factor analysis

Bazot, Cécile 27 September 2013 (has links)
In the past few years, genomics has received growing scientific interest, particularly since complete maps of the human genome were published in the early 2000s. Medical teams now face a new challenge: exploiting the signals delivered by DNA microarrays. These signals, often of large size, reveal the expression level of genes in a given tissue at a given time, under specific conditions (phenotype, treatment, ...), for an individual. The aim of this research is to identify temporal gene expression profiles characteristic of a pathology, in order to detect, or even prevent, a disease in a group of observed patients. The solutions developed in this thesis decompose these signals into elementary factors (genetic signatures) following a Bayesian linear mixing model, allowing joint estimation of the factors and of their relative contributions to each sample. Markov chain Monte Carlo methods are particularly suitable for the proposed hierarchical Bayesian models, as they overcome the difficulties related to their computational complexity.
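A minimal sketch of a Bayesian linear mixing decomposition Y = MA + E of the kind described, with Gibbs updates for the factors M and their contributions A (known noise variance and plain normal priors; the thesis's hierarchical model is richer, e.g. with constraints on the contributions):

import numpy as np
rng = np.random.default_rng(5)

G, N, K = 30, 20, 2                    # genes, samples, factors (toy sizes)
M_true = rng.normal(size=(G, K))
A_true = rng.dirichlet(np.ones(K), size=N).T   # columns sum to one
Y = M_true @ A_true + 0.1*rng.normal(size=(G, N))

s2, sm2, sa2 = 0.1**2, 1.0, 1.0        # noise/prior variances (assumed known)
M, A = rng.normal(size=(G, K)), np.abs(rng.normal(size=(K, N)))

for it in range(500):                  # Gibbs sweeps
    # sample A | M, Y: columns are conditionally independent normals
    Sa = np.linalg.inv(M.T @ M / s2 + np.eye(K)/sa2)
    A = Sa @ M.T @ Y / s2 + np.linalg.cholesky(Sa) @ rng.normal(size=(K, N))
    # sample M | A, Y: rows are conditionally independent normals
    Sm = np.linalg.inv(A @ A.T / s2 + np.eye(K)/sm2)
    M = (Sm @ A @ Y.T / s2 + np.linalg.cholesky(Sm) @ rng.normal(size=(K, G))).T

resid = Y - M @ A
print("residual std:", resid.std())    # should approach the noise level 0.1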
118

Modelos HMM com dependência de segunda ordem: aplicação em genética / Second-order HMM models: an application in genetics

Zuanetti, Daiane Aparecida 20 February 2006 (has links)
The growing need for efficient computational and statistical techniques to analyze the profusion of biological data has turned the hidden Markov model (HMM), a particular case of Bayesian (probabilistic) networks, into an attractive alternative for analyzing DNA sequences. One reason for the interest in HMMs is their flexibility in describing heterogeneous segments of a sequence through a single dependence structure among the variables, assumed to be known. In most practical problems, however, the dependence structure is not known and must also be estimated. The most common way of estimating the structure of an HMM is through model selection methods; another solution is to use methodologies for estimating the structure of a probabilistic network. In this work we propose the second-order HMM and its Bayesian estimators, define the Bayes factor and the DIC for selecting the HMM best suited to a specific sequence, assess their performance, together with that of the methodology proposed by Friedman and Koller (2003), on simulated data sets, and apply these methodologies to two DNA sequences: intron 7 of the chimpanzee alpha-fetoprotein gene and the genome of the parasite Bacteriophage lambda, for which the second-order model is more adequate.
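A second-order HMM can be handled with first-order machinery by expanding the state to consecutive pairs; the toy two-state, four-symbol sketch below (made-up transition and emission tables) simulates a DNA-like sequence and evaluates its likelihood with a scaled forward recursion:

import numpy as np
rng = np.random.default_rng(6)

# two hidden states; second-order transitions T2[s_{t-2}, s_{t-1}, s_t]
T2 = np.array([[[0.9, 0.1], [0.5, 0.5]],
               [[0.4, 0.6], [0.1, 0.9]]])
E = np.array([[0.4, 0.3, 0.2, 0.1],      # emissions over {A,C,G,T}
              [0.1, 0.2, 0.3, 0.4]])

# simulate a hidden path and an observed DNA-like sequence
s = [0, 0]
for _ in range(198):
    s.append(rng.choice(2, p=T2[s[-2], s[-1]]))
obs = np.array([rng.choice(4, p=E[k]) for k in s])

# forward algorithm on the expanded pair state (s_{t-1}, s_t)
alpha = 0.25 * np.outer(E[:, obs[0]], E[:, obs[1]])   # uniform start pairs
loglik = np.log(alpha.sum()); alpha /= alpha.sum()
for t in range(2, len(obs)):
    # alpha'(j,k) = sum_i alpha(i,j) * T2[i,j,k] * E[k, obs_t], rescaled
    alpha = np.einsum('ij,ijk->jk', alpha, T2) * E[:, obs[t]][None, :]
    loglik += np.log(alpha.sum()); alpha /= alpha.sum()
print("log-likelihood of the sequence:", loglik)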
119

[en] ENERGY PRICE SIMULATION IN BRAZIL THROUGH DEMAND SIDE BIDDING / [pt] SIMULAÇÃO DOS PREÇOS DE ENERGIA NO LEILÃO DE EFICIÊNCIA ENERGÉTICA NO BRASIL

JAVIER LINKOLK LOPEZ GONZALES 18 May 2016 (has links)
Energy efficiency (EE) can be considered synonymous with environmental preservation, since the energy saved avoids the construction of new generation plants and transmission lines. Demand-side bidding (DSB) could be a very interesting alternative for stimulating and promoting EE practices in Brazil. However, this presupposes confidence in the amount of energy saved, which can only become reality with the implementation and development of a measurement and verification (M&V) system for energy consumption. In this context, the main objective is to simulate the prices of demand-side bidding in the regulated environment, to assess whether it could become viable in Brazil. The simulations used the Monte Carlo method; beforehand, a kernel method was applied to fit a polynomial curve to the data. Once the best-fitting curve was obtained, each scenario (across the different rounds) was analyzed with each sample size (500, 1000, 5000 and 10000) to find the probability of the price falling in the interval between 110 and 140 reais (the optimal prices proposed for the DSB). The results show that this probability is 28.20 percent for the sample of 500 observations, 33.00 percent for 1000, 29.96 percent for 5000 and 32.36 percent for 10000.
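The last step — fitting a density to observed prices and estimating an interval probability by Monte Carlo — can be sketched as follows (synthetic placeholder prices and a Gaussian kernel density rather than the thesis's polynomial fit):

import numpy as np
from scipy.stats import gaussian_kde
rng = np.random.default_rng(7)

# placeholder price data -- the thesis's auction data are not public here
prices = rng.normal(125, 18, size=300)

kde = gaussian_kde(prices)                    # kernel density fit
for n in (500, 1000, 5000, 10000):            # Monte Carlo sample sizes
    sim = kde.resample(n).ravel()
    p = np.mean((sim >= 110) & (sim <= 140))  # P(110 <= price <= 140)
    print(f"n={n:5d}: estimated probability {p:.2%}")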
120

Estimação clássica e Bayesiana em modelos de sobrevida com fração de cura / Classical and Bayesian estimation in survival models with cure fraction

Almeida, Josemir Ramos de 22 March 2013 (has links)
In survival analysis, long-duration models allow estimation of the cure fraction, which represents the portion of the population immune to the event of interest. Here we address classical and Bayesian estimation based on standard mixture models and promotion time models, using different distributions (exponential, Weibull and Pareto) to model failure times. The data set used to illustrate the implementations is described in Kersey et al. (1987) and consists of a group of leukemia patients who underwent a certain type of transplant. The specific implementations used were numerical optimization by BFGS as implemented in R (base::optim), a Laplace approximation (own implementation) and Gibbs sampling as implemented in WinBUGS. We describe the main features of the models used, the estimation methods and the computational aspects, and we discuss how different priors can affect the Bayesian estimates.
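As a rough illustration of the classical route (the thesis uses R's base::optim; here an equivalent sketch with SciPy's BFGS on simulated data, exponential failure times and a standard mixture cure fraction):

import numpy as np
from scipy.optimize import minimize
rng = np.random.default_rng(8)

# simulate a mixture cure model with exponential failure times
n, pi_true, lam_true = 400, 0.3, 0.5
cured = rng.random(n) < pi_true
t_event = rng.exponential(1/lam_true, n)
censor = rng.exponential(1/0.2, n)                 # independent censoring
time = np.where(cured, censor, np.minimum(t_event, censor))
delta = (~cured) & (t_event <= censor)             # event indicator

def negloglik(par):
    pi, lam = 1/(1+np.exp(-par[0])), np.exp(par[1])   # keep params in range
    # event:    (1 - pi) f(t);   censored:  pi + (1 - pi) S(t)
    logf = np.log(1-pi) + np.log(lam) - lam*time
    logS = np.log(pi + (1-pi)*np.exp(-lam*time))
    return -np.sum(np.where(delta, logf, logS))

fit = minimize(negloglik, x0=[0.0, 0.0], method="BFGS")
pi_hat, lam_hat = 1/(1+np.exp(-fit.x[0])), np.exp(fit.x[1])
print(f"pi_hat={pi_hat:.3f} (true 0.3), lambda_hat={lam_hat:.3f} (true 0.5)")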