81 |
Construção de ferramenta computacional para estimação de custos na presença de censura utilizando o método da Ponderação pela Probabilidade Inversa / Construction of a computational tool for cost estimation in the presence of censoring using the Inverse Probability Weighting method
Sientchkovski, Paula Marques January 2016 (has links)
Introduction: The cost data needed in cost-effectiveness analysis (CEA) are often obtained from primary longitudinal studies. In this context, censoring is common: cost data are unavailable from a certain point onward because individuals leave the study before it is completed. Inverse Probability Weighting (IPW) has been studied extensively in the literature on this problem, but the availability of computational tools for this setting is unknown. Objective: To build computational tools in Excel and R for cost estimation by the IPW method, as proposed by Bang and Tsiatis (2000), in order to deal with the problem of censoring in cost data. Methods: Through spreadsheets built in Excel and programs written in R, applied to hypothetical data sets covering a variety of situations, the tools aim to give the researcher a better understanding of the use of this estimator and of the interpretation of its results. Results: By making the application of the IPW method intuitive, the developed tools proved to facilitate cost estimation in the presence of censoring, allowing the ICER to be calculated from more accurate cost data. Conclusion: Besides giving the researcher a practical understanding of the method, the tools allow it to be applied on a larger scale and can be considered a satisfactory alternative for dealing with the difficulties posed by censoring in CEA.
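The Excel and R tools themselves are not reproduced in this record, but the estimator they implement can be illustrated with a minimal Python sketch of the Bang and Tsiatis (2000) "simple weighted" IPW estimator of mean cost under right censoring; the function names, the hand-rolled Kaplan-Meier step, and the simulated data below are illustrative assumptions, not the thesis code.

```python
import numpy as np

def km_survival(time, event):
    """Kaplan-Meier survival curve, evaluated at each subject's own time."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    surv, surv_at = 1.0, {}
    for u in np.unique(time):                  # unique times, in increasing order
        at_risk = np.sum(time >= u)
        d = np.sum((time == u) & (event == 1))
        surv *= 1.0 - d / at_risk
        surv_at[u] = surv                      # right-continuous S(u)
    return np.array([surv_at[t] for t in time])

def ipw_mean_cost(total_cost, followup, observed):
    """Bang-Tsiatis 'simple weighted' estimator: (1/n) * sum_i delta_i * M_i / K_hat(T_i).
    observed == 1 marks complete (uncensored) follow-up; with continuous follow-up
    times the K(T) versus K(T-) ties convention is immaterial."""
    K = km_survival(followup, 1 - observed)    # censoring survival K_hat(T_i)
    K = np.clip(K, 1e-12, None)                # guard against division by zero
    return float(np.sum(observed * total_cost / K) / len(total_cost))

# Toy data: lifetime cost is 100 * survival time, so the true mean cost is about 1000.
rng = np.random.default_rng(1)
n = 500
death = rng.exponential(10.0, size=n)
censor = rng.exponential(15.0, size=n)
followup = np.minimum(death, censor)
observed = (death <= censor).astype(int)
total_cost = 100.0 * death                     # only used where observed == 1
print(round(ipw_mean_cost(total_cost, followup, observed), 1))
```

The key idea is that each complete observation is up-weighted by the inverse of its estimated probability of remaining uncensored, compensating for the complete cases that censoring removed.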
|
83 |
Caractérisation de la diversité d'une population à partir de mesures quantifiées d'un modèle non-linéaire. Application à la plongée hyperbare / Characterisation of population diversity from quantified measures of a nonlinear model. Application to hyperbaric diving
Bennani, Youssef 10 December 2015 (has links)
This thesis proposes a new method for nonparametric density estimation from censored data, where the censoring regions can have arbitrary shapes and are elements of partitions of the parametric domain. The work was motivated by the need to estimate the distribution of the parameters of a biophysical decompression model in order to predict the risk of decompression sickness. In this context, the observations (dive grades) are quantified counts of the number of bubbles circulating in the blood of a set of divers who explored various dive profiles (depth, duration); the biophysical model predicts the volume of gas released for a given dive profile and a diver with known biophysical parameters. First, the limitations of classical density estimation in the sense of the nonparametric maximum likelihood are pointed out. Several methods for computing this estimator are proposed, and it is shown to exhibit several anomalies: in particular, it concentrates the probability mass in only a few regions, which makes it unsuitable for describing a natural population. A new approach is then proposed that relies on the maximum-entropy principle, to ensure suitable regularity of the solution, while involving the maximum-likelihood criterion, to guarantee a strong fit to the data. The idea is to search for the maximum-entropy law whose maximum deviation from the observations (observed grade frequencies) is set so as to maximize the likelihood of the data. Several examples illustrate the superiority of this solution over the classical nonparametric maximum-likelihood estimator, in particular with respect to generalisation performance.
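As a schematic reading of the abstract (the notation below is ours, not necessarily the thesis's), the proposed estimator can be written as a maximum-entropy program whose tolerance on the censored-data frequencies is tuned by maximum likelihood:

```latex
\hat f_{\varepsilon}
  = \arg\max_{f \ge 0,\ \int_{\Theta} f = 1}
    \; -\int_{\Theta} f(\theta)\,\log f(\theta)\,d\theta
  \quad\text{s.t.}\quad
  \left|\,\int_{R_j} f(\theta)\,d\theta - \hat p_j\,\right| \le \varepsilon,
  \qquad j = 1,\dots,J,
```

where the R_j are the censoring regions (here, the dive-grade classes associated with the explored dive profiles) and the \hat p_j the observed frequencies; the tolerance is then chosen as \hat\varepsilon = \arg\max_\varepsilon \ell(\hat f_\varepsilon), the value that maximizes the likelihood of the data.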
|
84 |
Caracterização e extensões da distribuição Burr XII: propriedades e aplicações / Characterization and extensions of the Burr XII distribution: Properties and Applications
Patrícia Ferreira Paranaíba 21 September 2012 (has links)
The Burr XII (BXII) distribution has the normal, log-normal, gamma, logistic, and type I extreme-value distributions, among others, as particular cases, and is therefore considered a flexible distribution for fitting data. The ideas of Eugene, Lee and Famoye (2002) and Cordeiro and Castro (2011) are used to develop two new probability distributions based on the BXII distribution. The first, called beta Burr XII (BBXII), has five parameters, and from it the log-beta Burr XII (LBBXII) regression model is developed. The second, called Kumaraswamy Burr XII (KwBXII), also has five parameters. The advantage of these new models lies in their ability to accommodate various shapes of the hazard function; they also proved useful for model discrimination. For each model, the moments, moment generating function, mean deviations, reliability, and probability density function of the order statistics were obtained. A simulation study was carried out to evaluate the performance of the models. The parameters were estimated by maximum likelihood and Bayesian methods, and, finally, the application of the new distributions is illustrated with several real data sets.
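For orientation, these are the standard parametrizations of the baseline distribution and of the two generators cited in the abstract (the usual notation from the cited papers, not necessarily the thesis's own):

```latex
G_{\mathrm{BXII}}(x) = 1 - \left[1 + (x/s)^{c}\right]^{-k}, \quad x > 0,\ s, c, k > 0,
\qquad
F_{\mathrm{beta\text{-}G}}(x) = \frac{1}{B(a,b)}\int_{0}^{G(x)} t^{a-1}(1-t)^{b-1}\,dt,
\qquad
F_{\mathrm{Kw\text{-}G}}(x) = 1 - \left[1 - G(x)^{a}\right]^{b}.
```

Taking G = G_BXII in the beta-G and Kw-G generators yields the five-parameter BBXII and KwBXII families (parameters a, b, s, c, k) studied in the thesis.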
|
85 |
Reading The Catcher in the Rye in the EFL classroom: A didactic perspective of the reasons and consequences for banning or censoring literature
Gustavsson, Josefin January 2018 (has links)
By discussing the ethical issues raised by banned and censored literature, students can learn how to approach texts written in different contexts. The essay brings to light the triggering instances that led to the banning of The Catcher in the Rye in American schools in the 1950s. A cultural studies approach allows an in-depth investigation of the patterns in these triggering instances and points to possible reasons for the banning and censorship: an unrealistic protagonist, vulgar language, blasphemy, and a pessimistic and depressing point of view. Introducing these instances into a Swedish classroom can hopefully offer insight into another historical time and context and, in turn, a better understanding of the Swedish context, e.g. democratic values and freedom of speech.
|
86 |
Quantitative Microbial Risk Assessment of Water Treatment Process for Reducing Chlorinous Odor / カルキ臭低減型浄水処理プロセスにおける定量的微生物リスク評価
Zhou, Liang 24 November 2015 (has links)
Kyoto University / 0048 / New-system doctoral program / Doctor of Engineering / 甲第19372号 / 工博第4117号 / 新制||工||1635 (University Library) / 32386 / 新制||工||1635 / Kyoto University, Graduate School of Engineering, Department of Urban and Environmental Engineering / Examining committee: Prof. 伊藤 禎彦 (chair), Prof. 田中 宏明, Prof. 米田 稔 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
|
87 |
Gibbs Sampling and Expectation Maximization Methods for Estimation of Censored Values from Correlated Multivariate Distributions
HUNTER, TINA D. 25 August 2008 (has links)
No description available.
|
88 |
Maximum response statistics of MDoF linear structures excited by non-stationary random processes.
Muscolino, G., Palmeri, Alessandro January 2004 (has links)
The paper deals with the problem of predicting the maximum response statistics of Multi-Degree-of-Freedom (MDoF) linear structures subjected to non-stationary non-white noises. The extension of two different censored closures of Gumbel type, originally proposed by the authors for the response of Single-Degree-of-Freedom (SDoF) oscillators, is presented. The improvement gained by introducing into the closure a consistent censorship factor, which accounts for the response bandwidth, is pointed out. Simple and effective step-by-step procedures are formulated and described in detail. Numerical applications on a realistic 25-storey moment-resisting frame, along with comparisons with classical approximations and Monte Carlo simulations, are also included.
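For context only, this is the kind of classical stationary Gaussian approximation such papers benchmark against, not the authors' censored closure (the Davenport-type peak factor and the notation are standard, but they are our addition to this record):

```latex
\mathrm{E}\Bigl[\max_{0 \le t \le T} X(t)\Bigr] \approx \eta\,\sigma_X,
\qquad
\eta = \sqrt{2\ln(\nu_0 T)} + \frac{0.5772}{\sqrt{2\ln(\nu_0 T)}},
\qquad
\nu_0 = \frac{1}{2\pi}\sqrt{\frac{\lambda_2}{\lambda_0}},
```

where σ_X is the standard deviation of the response and λ_0, λ_2 are its spectral moments; the censorship factor mentioned in the abstract corrects Gumbel-type closures of this kind for the bandwidth of the (generally non-stationary) response.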
|
89 |
Application Of The Empirical Likelihood Method In Proportional Hazards Model
He, Bin 01 January 2006 (has links)
In survival analysis, the proportional hazards model is the most commonly used, and the Cox model is the most popular. These models were developed to facilitate the statistical analyses frequently encountered in medical research and reliability studies. In analyzing real data sets, checking the validity of the model assumptions is a key component. However, the presence of complicated types of censoring, such as double censoring and partly interval-censoring, makes model assessment difficult, and the existing goodness-of-fit tests do not extend directly to these types of censored data. In this work, we use the empirical likelihood approach (Owen, 1988) to construct goodness-of-fit tests and provide estimates for the Cox model with various types of censored data. Specifically, the problems under consideration are the two-sample Cox model and the stratified Cox model with right-censored data, doubly censored data, and partly interval-censored data. Related computational issues are discussed, and some simulation results are presented. The procedures developed in this work are applied to several real data sets, with some discussion.
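For readers unfamiliar with the two ingredients being combined, here they are in standard notation (ours, not necessarily the dissertation's): Owen's (1988) empirical likelihood ratio for a parameter defined by estimating equations, and the Cox partial likelihood.

```latex
\mathcal{R}(\theta) = \max\Bigl\{\prod_{i=1}^{n} n p_i \;:\; \sum_{i=1}^{n} p_i\, g(X_i,\theta) = 0,\ p_i \ge 0,\ \sum_{i=1}^{n} p_i = 1\Bigr\},
\qquad
-2\log\mathcal{R}(\theta_0) \xrightarrow{\ d\ } \chi^2,

L(\beta) = \prod_{i:\,\delta_i = 1} \frac{\exp(\beta^{\top} x_i)}{\sum_{j \in R(t_i)} \exp(\beta^{\top} x_j)},
```

where δ_i is the event indicator and R(t_i) the risk set at time t_i; the Wilks-type χ² limit of the empirical likelihood ratio is what makes it usable as a goodness-of-fit statistic under the more complicated censoring schemes listed above.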
|
90 |
我國人壽保險公司經營效率之探討 / A Study of the Operating Efficiency of Taiwan's Life Insurance Companies
羅敏瑞, Luo, Min Rey Unknown Date (has links)
After the full liberalization of Taiwan's insurance market, life insurers have faced fiercer competition and higher operating risk, which can affect their operating performance. This thesis uses Data Envelopment Analysis (DEA) to evaluate the operating efficiency of Taiwan's life insurance companies from 2002 to 2004 and to identify the scope for improvement of relatively inefficient insurers, and then applies a Tobit censored regression model to investigate the factors that may explain differences in operating efficiency across companies. The DEA technical-efficiency results show that: (1) average overall technical efficiency lies between 50.98% and 70.15%, meaning that there is still room for improvement in how input resources are used and allocated; holding output constant, 29.85% to 49.02% of the resources used could be saved on average; (2) average pure technical efficiency exceeds average scale efficiency, indicating that technical inefficiency stems from both wasted resources and inappropriate production scale, but mostly from the former. The regression results show that: (1) the ratio of part-time field staff is positively related to technical efficiency, suggesting that hiring part-time rather than full-time agents to solicit business reduces personnel costs and improves operating performance; (2) the share of foreign investment is positively related to technical efficiency, since the wider choice of foreign investment products lets insurers deploy funds more flexibly; (3) the commission rate is positively related to technical efficiency, since higher commissions and allowances motivate agents to solicit business more actively and raise business performance; (4) the overdue-loan ratio is negatively related to technical efficiency, i.e. the higher the ratio of overdue loans, the poorer the operating efficiency; (5) market share is positively related to technical efficiency, as insurers with larger market shares have better control over the market and enjoy economies of scale in product sales; (6) the ratio of field to office staff is negatively related to technical efficiency, indicating that a high ratio leads insurers to neglect the quality of back-office work such as underwriting, claims handling, and customer service, which harms product innovation, policyholders' subsequent interests, and overall operating performance.
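The thesis code is not included in this record, but the following minimal Python sketch (function names and toy data are ours) shows the input-oriented CCR linear program behind the kind of technical-efficiency scores reported in the abstract; the BCC variant that gives pure technical efficiency only adds the convexity constraint sum(lambda) = 1, and scale efficiency is the CCR/BCC ratio.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR (constant returns to scale) efficiency score of DMU k.

    X is an (m inputs x n DMUs) array, Y an (s outputs x n DMUs) array.
    Solves: min theta  s.t.  X @ lam <= theta * X[:, k],  Y @ lam >= Y[:, k],  lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                                    # decision vector is [theta, lam_1..lam_n]
    A_in = np.hstack([-X[:, [k]], X])             # X @ lam - theta * x_k <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])     # -Y @ lam <= -y_k
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, k]])
    bounds = [(None, None)] + [(0.0, None)] * n   # theta free, lambdas nonnegative
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return float(res.x[0])

# Toy example: 5 insurers, 2 inputs (staff, operating expenses), 1 output (premium income).
X = np.array([[20.0, 35.0, 50.0, 28.0, 40.0],
              [ 5.0, 12.0, 20.0,  8.0, 15.0]])
Y = np.array([[100.0, 150.0, 180.0, 130.0, 140.0]])
scores = [round(ccr_efficiency(X, Y, k), 3) for k in range(X.shape[1])]
print(scores)
# Pure technical efficiency (BCC) is obtained by adding A_eq = [[0, 1, ..., 1]], b_eq = [1]
# to the same linprog call; scale efficiency for each DMU is then CCR score / BCC score.
```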
|