41

Modelo de confiabilidade associando dados de garantia e pós-garantia a três comportamentos de falhas / Reliability model for warranty and post-warranty data presenting three failure behaviours

Santos, Gilberto Tavares dos January 2008
This thesis presents a statistical reliability model for product life data exhibiting three distinct failure modes, associated with early, random and wear-out failures. The occurrence of the three failure modes follows the principles of competing-risk and sectional models. The proposed model combines two Weibull distributions, with two and three parameters, and one exponential distribution. The two-parameter Weibull distribution represents the early failure modes; the three-parameter Weibull distribution captures the wear-out failure modes; the exponential distribution models random failures arising from operational use of the product. Early and wear-out failures are assumed to occur sequentially, while random failures compete with both from the moment the product is put into operation. To quantify the occurrences associated with the three failure modes, data collected during the warranty and post-warranty periods are used. Warranty data are the producer's historical records; post-warranty data are elicited from experts, since field data after the warranty period are heavily censored. Reliability equations and maximum likelihood estimators are derived to define the profile and the parameters of the proposed model. A case study with warranty-claim data on electric-electronic equipment illustrates the application of the model, and a statistical goodness-of-fit test is used to validate it.
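
To make the structure concrete, here is a minimal numerical sketch (Python with NumPy) of a reliability function built from an early-failure Weibull section, a shifted wear-out Weibull section, and a constant competing hazard for random failures. All names and values are illustrative assumptions rather than the thesis's notation, and the continuity conditions of a full sectional model are not enforced.

    import numpy as np

    def system_reliability(t, beta_e, eta_e, beta_w, eta_w, gamma_w, lam_r):
        # Early failures: 2-parameter Weibull hazard, active up to the change point gamma_w.
        # Wear-out: 3-parameter Weibull hazard with location gamma_w, active afterwards.
        # Random failures: constant (exponential) hazard lam_r, competing over the whole life.
        t = np.asarray(t, dtype=float)
        h_early = (np.minimum(t, gamma_w) / eta_e) ** beta_e
        h_wear = (np.clip(t - gamma_w, 0.0, None) / eta_w) ** beta_w
        h_rand = lam_r * t
        return np.exp(-(h_early + h_wear + h_rand))  # R(t) = exp(-total cumulative hazard)

    # illustrative parameter values only
    print(system_reliability([1.0, 5.0, 10.0],
                             beta_e=0.6, eta_e=30.0, beta_w=3.0, eta_w=8.0,
                             gamma_w=5.0, lam_r=0.01))
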
42

Regressão linear com medidas censuradas / Linear regression with censored data

Marcel Frederico de Lima Taga 07 November 2008
We consider a simple linear regression model in which both the response and the explanatory variable are subject to interval censoring. As motivation, we use a study whose goal is to assess whether the results of a behavioral audiometric exam can be predicted from the results of an electrophysiological audiometric exam. We compute prediction intervals for the response variable, examine the behavior of the maximum likelihood estimators obtained under the proposed model, and compare their performance with that of estimators obtained from an ordinary simple linear regression model in which the censoring is ignored.
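
As a rough illustration of the kind of likelihood involved, the sketch below (Python with NumPy/SciPy) fits a Gaussian simple linear regression when only the response is interval-censored; the thesis additionally handles an interval-censored covariate, which this simplified example does not. All names, the 5-unit bracket width, and the simulated data are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def neg_loglik(params, x, y_lower, y_upper):
        # Gaussian regression where y is only known to lie in [y_lower, y_upper]
        a, b, log_sigma = params
        sigma = np.exp(log_sigma)
        mu = a + b * x
        p = norm.cdf(y_upper, mu, sigma) - norm.cdf(y_lower, mu, sigma)
        return -np.sum(np.log(np.clip(p, 1e-300, None)))

    rng = np.random.default_rng(0)
    x = rng.uniform(20, 80, size=100)                    # exactly observed predictor (a simplification)
    y_true = 5.0 + 0.9 * x + rng.normal(0.0, 4.0, 100)   # latent response
    y_lower = np.floor(y_true / 5.0) * 5.0               # response reported only in 5-unit brackets
    y_upper = y_lower + 5.0
    fit = minimize(neg_loglik, x0=[0.0, 1.0, np.log(5.0)],
                   args=(x, y_lower, y_upper), method="Nelder-Mead")
    a_hat, b_hat, sigma_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
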
43

Goodness-of-Fit for Length-Biased Survival Data with Right-Censoring

Younger, Jaime January 2012
Cross-sectional surveys are often used in epidemiological studies to identify subjects with a disease. When estimating the survival function from onset of disease, this sampling mechanism introduces bias, which must be accounted for. If the onset times of the disease are assumed to come from a stationary Poisson process, this bias, which is caused by the sampling of prevalent rather than incident cases, is termed length-bias. A one-sample Kolmogorov-Smirnov type goodness-of-fit test for right-censored length-biased data is proposed and investigated with Weibull, log-normal and log-logistic models. Algorithms detailing how to efficiently generate right-censored length-biased survival data of these parametric forms are given. Simulation is employed to assess the effects of sample size and censoring on the power of the test. Finally, the test is used to evaluate goodness-of-fit using length-biased survival data of patients with dementia from the Canadian Study of Health and Aging.
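
One way to generate such data exactly (a sketch under illustrative assumptions, not necessarily the algorithm given in the thesis): if T follows a Weibull distribution with shape k and scale lambda, its length-biased version T* satisfies (T*/lambda)^k ~ Gamma(1 + 1/k, 1), so prevalent lifetimes can be drawn directly; under stationarity the truncation time is uniform on (0, T*), and right-censoring acts on the residual lifetime after recruitment. The exponential censoring mechanism and all names below are assumptions.

    import numpy as np

    def simulate_length_biased_weibull(n, shape, scale, cens_rate, rng):
        # Prevalent (length-biased) lifetimes: (T*/scale)**shape ~ Gamma(1 + 1/shape, 1)
        g = rng.gamma(1.0 + 1.0 / shape, 1.0, size=n)
        t_star = scale * g ** (1.0 / shape)
        a = rng.uniform(0.0, t_star)                     # backward recurrence (truncation) time
        resid = t_star - a                               # residual lifetime after recruitment
        c = rng.exponential(1.0 / cens_rate, size=n)     # censoring acts on the residual time
        obs = a + np.minimum(resid, c)                   # observed, possibly censored, total time
        delta = (resid <= c).astype(int)                 # 1 = failure observed, 0 = censored
        return obs, a, delta

    rng = np.random.default_rng(1)
    obs, trunc, delta = simulate_length_biased_weibull(500, shape=1.5, scale=10.0,
                                                       cens_rate=0.05, rng=rng)
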
44

Quantifying Information Leakage via Adversarial Loss Functions: Theory and Practice

January 2020
Modern digital applications have significantly increased the leakage of private and sensitive personal data. While worst-case measures of leakage such as Differential Privacy (DP) provide the strongest guarantees, when utility matters, average-case information-theoretic measures can be more relevant. However, most such information-theoretic measures lack clear operational meanings. This dissertation addresses that challenge. The first part introduces a tunable leakage measure called maximal $\alpha$-leakage, which quantifies the maximal gain of an adversary in inferring any function of a data set. The inferential capability of the adversary is modeled by a class of loss functions, namely $\alpha$-loss. The choice of $\alpha$ determines the adversarial action, ranging from refining a belief for $\alpha = 1$ to guessing the most likely value for $\alpha = \infty$; at these two extremes, maximal $\alpha$-leakage reduces to mutual information and maximal leakage, respectively. Maximal $\alpha$-leakage is shown to satisfy a composition property and to be robust to side information. There is a fundamental disconnect between theoretical measures of information leakage and their application in practice. The second part of the dissertation addresses this issue by proposing a data-driven framework for learning Censored and Fair Universal Representations (CFUR) of data. The framework is formulated as a constrained minimax optimization of the expected $\alpha$-loss, where the constraint ensures a measure of the usefulness of the representation. The performance of the CFUR framework with $\alpha = 1$ is evaluated on publicly accessible data sets; it is shown that multiple sensitive features can be effectively censored to achieve group fairness via demographic parity while ensuring accuracy for several a priori unknown downstream tasks. Finally, focusing on worst-case measures, novel information-theoretic tools are used to refine the existing relationship between two such measures, $(\epsilon,\delta)$-DP and Rényi-DP. Applying these tools to the moments accountant framework, one can track the privacy guarantee achieved by adding Gaussian noise to Stochastic Gradient Descent (SGD) algorithms. Relative to the state of the art, for the same privacy budget, this method allows about 100 more SGD rounds for training deep learning models. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2020
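
For reference, one common form of the $\alpha$-loss in this literature (the notation here is an assumption and may differ from the dissertation's) is

$$\ell_\alpha(y, P) = \frac{\alpha}{\alpha - 1}\left[1 - P(y)^{1 - 1/\alpha}\right], \qquad \alpha \in (0,1) \cup (1,\infty),$$

where $P(y)$ is the probability the adversary's soft decision assigns to the realized value $y$. Letting $\alpha \to 1$ recovers the log-loss $-\log P(y)$ (belief refinement, hence the connection to mutual information), while $\alpha \to \infty$ gives $1 - P(y)$, i.e. hard guessing of the most likely value (hence the connection to maximal leakage).
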
45

Semiparametric Regression Under Left-Truncated and Interval-Censored Competing Risks Data and Missing Cause of Failure

Park, Jun
Indiana University-Purdue University Indianapolis (IUPUI) / Observational studies and clinical trials with time-to-event data frequently involve multiple event types, known as competing risks. The cumulative incidence function (CIF) is a particularly useful parameter because it explicitly quantifies clinical prognosis. Common issues in competing risks analysis of the CIF include interval censoring, missing event types, and left truncation. Interval censoring occurs when the event time is not observed exactly but is only known to lie between two observation times, such as clinic visits. Left truncation, also known as delayed entry, occurs when participants enter the study after the onset of the disease under study; individuals who experience the event before their potential study entry time are not included in the analysis, which can induce selection bias. To address unmet needs in appropriate methods and software for competing risks data analysis, this thesis makes the following contributions. First, we develop a convenient and flexible tool, the R package intccr, that performs semiparametric regression analysis on the CIF for interval-censored competing risks data. Second, we adopt the augmented inverse probability weighting method to deal with both interval censoring and missing event types; we show that the resulting estimates are consistent and doubly robust, and we illustrate this method using data from the East-African International Epidemiology Databases to Evaluate AIDS (IeDEA EA), where a significant portion of the event types is missing. Last, we develop an estimation method for semiparametric analysis of the CIF for competing risks data subject to both interval censoring and left truncation, and apply it to the Indianapolis-Ibadan Dementia Project to identify prognostic factors of dementia in older adults. Overall, the methods developed here are incorporated in the R package intccr.
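
For context, the estimand in such a regression is the cumulative incidence function $F_k(t \mid z) = \Pr(T \le t, \text{cause} = k \mid z)$. One common semiparametric specification, shown here purely as an illustration (the Fine-Gray proportional subdistribution hazards form; packages of this kind typically also offer other links, such as proportional odds), is

$$F_k(t \mid z) = 1 - \bigl[1 - F_{k0}(t)\bigr]^{\exp(\beta^{\top} z)},$$

where $F_{k0}$ is a baseline cumulative incidence function and $\beta$ is the vector of regression coefficients.
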
46

Essays on Econometric Methods for Panel and Duration Data Analysis / パネルデータ分析とdurationデータ分析のための計量経済学手法に関する諸研究

Sakaguchi, Shosei 26 March 2018
Kyoto University / 0048 / New degree system, course-based doctorate / Doctor of Economics / 甲第20870号 / 経博第565号 / 新制||経||283 (University Library) / Department of Economics, Graduate School of Economics, Kyoto University / (Chief examiner) Professor 西山 慶彦, Associate Professor 山田 憲, Associate Professor 高野 久紀 / Qualifies under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Economics / Kyoto University / DGAM
47

Non-Parametric and Parametric Estimators of the Survival Function under Dependent Censorship

Qin, Yulin 22 November 2013
No description available.
48

Exact Analysis of Exponential Two-Component System Failure Data

Zhang, Xuan
A survival distribution is developed for exponential two-component systems that can survive as long as at least one of the two components in the system functions. The two components are assumed to be initially independent and non-identical. If one of the two components fails (repair is impossible), the surviving component is subjected to a different failure rate due to the stress caused by the failure of the other.

In this thesis, we consider such an exponential two-component system failure model when the observed failure time data are (1) complete, (2) Type-I censored, (3) Type-I censored with partial information on component failures, (4) Type-II censored, and (5) Type-II censored with partial information on component failures. In these situations, we discuss the maximum likelihood estimates (MLEs) of the parameters, assuming the lifetimes to be exponentially distributed. The exact distributions (whenever possible) of the MLEs of the parameters are then derived using the conditional moment generating function approach. Construction of confidence intervals for the model parameters is discussed using the exact conditional distributions (when available), asymptotic distributions, and two parametric bootstrap methods. The performance of these four confidence intervals, in terms of coverage probabilities, is then assessed through Monte Carlo simulation studies. Finally, some examples are presented to illustrate all the methods of inference developed here.

In the case of Type-I and Type-II censored data, since there are no closed-form expressions for the MLEs, we present an iterative maximum likelihood estimation procedure for determining the MLEs of all the model parameters. We also carry out a Monte Carlo simulation study to examine the bias and variance of the MLEs.

In the case of Type-II censored data, since the exact distributions of the MLEs depend on the data, we discuss exact conditional confidence intervals and asymptotic confidence intervals for the unknown parameters by conditioning on the observed data. / Thesis / Doctor of Philosophy (PhD)
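
A minimal simulation sketch of this dependent two-component model (Python with NumPy; rate names and values are illustrative assumptions, and no censoring scheme is applied here):

    import numpy as np

    def simulate_two_component(n, lam1, lam2, lam1_stressed, lam2_stressed, rng):
        # Components start independent with exponential rates lam1, lam2.
        # After the first failure, the survivor switches to its "stressed" rate.
        t1 = rng.exponential(1.0 / lam1, size=n)
        t2 = rng.exponential(1.0 / lam2, size=n)
        first = np.minimum(t1, t2)                       # first component failure time
        comp1_first = t1 <= t2                           # which component failed first
        stressed_rate = np.where(comp1_first, lam2_stressed, lam1_stressed)
        residual = rng.exponential(1.0, size=n) / stressed_rate
        system_life = first + residual                   # system fails at the second failure
        return first, comp1_first, system_life

    rng = np.random.default_rng(2)
    first, comp1_first, system_life = simulate_two_component(
        1000, lam1=0.02, lam2=0.03, lam1_stressed=0.05, lam2_stressed=0.06, rng=rng)
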
49

LIKELIHOOD INFERENCE FOR LOG-LOGISTIC DISTRIBUTION UNDER PROGRESSIVE TYPE-II RIGHT CENSORING

Alzahrani, Alya
Censoring arises quite often in lifetime data, and its presence may be planned or unplanned. In this project, we consider progressive Type-II right censoring when the underlying distribution is log-logistic. The objective is to discuss inferential methods for the unknown parameters of the distribution based on the maximum likelihood estimation method. The Newton-Raphson method is proposed as a numerical technique to solve the pertinent non-linear equations. In addition, confidence intervals for the unknown parameters are constructed based on (i) the asymptotic normality of the maximum likelihood estimates, and (ii) the percentile bootstrap resampling technique. A Monte Carlo simulation study is conducted to evaluate the performance of the methods of inference developed here. Some illustrative examples are also presented. / Master of Science (MSc)
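
A hedged numerical sketch of the likelihood involved (Python with NumPy/SciPy): the project solves the score equations by Newton-Raphson, whereas this sketch hands the negative log-likelihood to a generic derivative-free optimizer as a stand-in, and the failure times and censoring scheme shown are purely illustrative.

    import numpy as np
    from scipy.optimize import minimize

    def neg_loglik(params, x, r):
        # Progressive Type-II right censoring: at the i-th observed failure x[i],
        # r[i] surviving units are withdrawn, so L = prod f(x_i) * S(x_i)**r_i.
        # Log-logistic with scale alpha and shape beta.
        log_alpha, log_beta = params
        alpha, beta = np.exp(log_alpha), np.exp(log_beta)
        z = (x / alpha) ** beta
        log_f = (np.log(beta) - np.log(alpha)
                 + (beta - 1.0) * (np.log(x) - np.log(alpha)) - 2.0 * np.log1p(z))
        log_s = -np.log1p(z)
        return -np.sum(log_f + r * log_s)

    # illustrative data: 10 observed failures out of n = 20 units, 10 withdrawn progressively
    x = np.array([0.19, 0.78, 0.96, 1.31, 2.78, 3.16, 4.15, 4.67, 4.85, 6.50])
    r = np.array([0, 0, 3, 0, 3, 0, 0, 0, 0, 4])
    fit = minimize(neg_loglik, x0=[np.log(np.median(x)), 0.0], args=(x, r),
                   method="Nelder-Mead")
    alpha_hat, beta_hat = np.exp(fit.x)
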
50

Statistical methods for evaluating treatment effect in the presence of multiple time-to-event outcomes

Lin, Jingyi 12 February 2025
2024 / Contemporary randomized trials frequently assess treatment effects across multiple time-to-event outcomes. In scenarios involving competing risks, prioritized outcomes, or informative censoring, alternatives to conventional methods to estimate and test for treatment effects are needed. For competing risks data, we proposed a doubly robust estimator for the difference in the restricted mean times lost to a specific cause. The estimator relies on non-parametric pseudo-observations of the cumulative incidence function, and therefore does not rely on the proportional hazards assumption. We evaluated the performance of the estimator in different scenarios of model misspecification. We applied the estimator to compare the event-free time lost to disease progression in the POPLAR and OAK studies for non-small-cell lung cancer. For prioritized time-to-event outcomes, we compared the performance of novel tests that prioritize events with higher clinical importance to traditional tests that do not. None of the tests was uniformly best when component-wise treatment effects varied. As these tests differ in how they characterize the treatment effect over the entire disease course, we proposed a generalizable framework to quantify the information used and ignored by each test. Under the Gumbel survival copula model, we also derived analytically the true value of the treatment effect corresponding to each test. We illustrated these methods using a five-component prioritized outcome in the SPRINT randomized trial. For informative censoring, we considered the issue of differential censoring between randomization groups in oncology trials. We assessed the impact of informative censoring on treatment effect estimation, as well as on the performance of generalized log-rank tests under a delayed-effect setting. We showed how to generate informative censoring data from survival copulas with piece-wise exponential marginals. We also derived the relationship between the copula rank correlation and the probability of informative censoring, and showed how to use this relationship to guide the choice of an adequate copula model to analyze informative censoring data.
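
As a sketch of the copula-based data-generation idea (assumptions: a Clayton copula, chosen here because its conditional distribution inverts in closed form, and plain exponential rather than piece-wise exponential marginals; the dissertation itself works with survival copulas such as the Gumbel):

    import numpy as np

    def simulate_informative_censoring(n, lam_event, lam_cens, theta, rng):
        # Draw (U1, U2) from a Clayton copula via its closed-form conditional inverse,
        # then map to exponential event and censoring times.
        # Kendall's tau for the Clayton copula is theta / (theta + 2).
        u1 = rng.uniform(size=n)
        v = rng.uniform(size=n)
        u2 = (u1 ** (-theta) * (v ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
        t = -np.log(u1) / lam_event                      # event time
        c = -np.log(u2) / lam_cens                       # dependent (informative) censoring time
        obs = np.minimum(t, c)
        delta = (t <= c).astype(int)                     # 1 = event observed, 0 = censored
        return obs, delta

    rng = np.random.default_rng(3)
    obs, delta = simulate_informative_censoring(2000, lam_event=0.10, lam_cens=0.05,
                                                theta=2.0, rng=rng)   # tau = 0.5
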
