11

Parameter estimation in proportional hazard model with interval censored data

Chang, Shih-hsun 24 June 2006 (has links)
In this paper, we estimate the parameters $S_0(t)$ and $\beta$ in the Cox proportional hazards model when the data are all interval-censored. To apply this model the data should be either exact or right-censored, so we transform the interval-censored data into exact data by three different methods and then apply the Nelson-Aalen estimator to obtain $S_0(t)$ and $\beta$. The test statistic $\hat{\beta}^2 I(\hat{\beta})$ is not approximately distributed as $\chi^2_{(1)}$, but as $\chi^2_{(1)}$ times a constant $c$.
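The abstract does not name the three transformation methods; midpoint imputation is a common choice. A minimal sketch of that route, assuming midpoint imputation of $(L, R]$ intervals followed by a plain Nelson-Aalen cumulative-hazard estimate (function names and the toy intervals are illustrative, not taken from the thesis):

```python
import numpy as np

def midpoint_impute(left, right):
    """Turn interval-censored observations (L, R] into pseudo-exact times;
    right-censored cases (R = inf) keep their left endpoint and event flag 0."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    event = np.isfinite(right)
    time = np.where(event, (left + right) / 2.0, left)
    return time, event.astype(int)

def nelson_aalen(time, event):
    """Nelson-Aalen estimate of the cumulative hazard at each distinct event time."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    grid = np.unique(time[event == 1])
    increments = [np.sum((time == t) & (event == 1)) / np.sum(time >= t) for t in grid]
    return grid, np.cumsum(increments)

# Hypothetical intervals (2, 5], (1, 3], (6, 9], plus one case right-censored at 4.
t, e = midpoint_impute([2, 1, 6, 4], [5, 3, 9, np.inf])
grid, H = nelson_aalen(t, e)
S0 = np.exp(-H)   # baseline survival via S_0(t) = exp(-H(t))
```

With the imputed pseudo-exact data, the regression coefficient $\beta$ could then be fitted with any standard Cox routine.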
12

A generalization of rank tests based on interval-censored failure time data and its application to AIDS studies.

Kuo, Yu-Yu 11 July 2000 (has links)
In this paper we propose a generalized rank test based on discrete interval-censored failure time data to determine whether two lifetime populations come from the same distribution. It reduces to the log-rank test or the Wilcoxon test when the data are exact or right-censored. Simulations show that the proposed test performs satisfactorily. An example is presented to demonstrate how the proposed test can be applied in an AIDS study.
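For reference, here is the complete-data special case the generalized test reduces to: a minimal two-sample log-rank statistic for exact or right-censored data (a sketch with illustrative data, not the proposed interval-censored test itself):

```python
import numpy as np

def logrank_statistic(time, event, group):
    """Two-sample log-rank chi-square statistic (1 df) for exact or
    right-censored data: sums observed-minus-expected group-1 events over
    the distinct event times, with the usual hypergeometric variance."""
    time, event, group = (np.asarray(a) for a in (time, event, group))
    obs_minus_exp, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        obs_minus_exp += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1.0 - n1 / n) * (n - d) / (n - 1)
    return obs_minus_exp ** 2 / var   # compare with a chi-square(1) quantile

stat = logrank_statistic(np.array([3., 5., 7., 2., 4., 9.]),
                         np.array([1, 1, 0, 1, 1, 1]),
                         np.array([0, 0, 0, 1, 1, 1]))
```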
13

The estimation of the truncation ratio and an algorithm for the parameter estimation in the random interval truncation model.

Zhu, Huang-Xu 01 August 2003 (has links)
For interval-censored and truncated failure time data, the truncation ratio is unknown. In this paper, we propose an algorithm, similar to Turnbull's, to estimate the parameters. The truncation ratio for interval-censored and truncated failure time data can also be estimated from the convergence result of the algorithm. A simulation study is conducted to compare our algorithm with Turnbull (1976); our algorithm appears to give better results.
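The thesis's algorithm additionally handles random interval truncation and recovers the truncation ratio. As background, here is a minimal sketch of the untruncated Turnbull-style self-consistency (EM) iteration it resembles; the support grid and intervals below are illustrative:

```python
import numpy as np

def turnbull_self_consistency(intervals, support, tol=1e-8, max_iter=5000):
    """Self-consistency iterations in the spirit of Turnbull (1976): each
    observation, known only to fall in (L_i, R_i], spreads its unit mass over
    the support points its interval covers, in proportion to the current
    probability masses; the masses are then re-averaged until convergence."""
    support = np.asarray(support, float)
    alpha = np.array([[(l < s <= r) for s in support] for l, r in intervals], float)
    p = np.full(len(support), 1.0 / len(support))
    for _ in range(max_iter):
        w = alpha * p
        w /= w.sum(axis=1, keepdims=True)        # E-step: expected membership
        p_new = w.mean(axis=0)                   # M-step: updated masses
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return support, p

support, masses = turnbull_self_consistency(
    intervals=[(0, 2), (1, 3), (2, 5), (0, 1)], support=[0.5, 1.5, 2.5, 4.0])
```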
14

Nonparametric tests for interval-censored failure time data via multiple imputation

Huang, Jin-long 26 June 2008 (has links)
Interval-censored failure time data often occur in follow-up studies where subjects can only be followed periodically and the failure time can only be known to lie in an interval. In this paper we consider the problem of comparing two or more interval-censored samples. We propose a multiple imputation method for discrete interval-censored data to impute exact failure times from interval-censored observations and then apply existing tests for exact data, such as the log-rank test, to the imputed exact data. The test statistic and covariance matrix are calculated by our proposed multiple imputation technique. The formula of the covariance matrix estimator is similar to that used by Follmann, Proschan and Leifer (2003) for clustered data. Through simulation studies we find that the performance of the proposed log-rank type test is comparable to that of the test proposed by Finkelstein (1986), and is better than that of the two existing log-rank type tests proposed by Sun (2001) and Zhao and Sun (2004), owing to the differences in the method of multiple imputation and the covariance matrix estimation. The proposed method is illustrated by means of an example involving patients with breast cancer. We also investigate applying our method to other two-sample comparison tests for exact data, such as Mantel's test (1967) and the integrated weighted difference test.
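A stylized sketch of the multiple-imputation wrapper described above, assuming (for simplicity) uniform draws within each censoring interval and a plain average of the complete-data statistics; the paper itself draws from an estimated distribution and pools a covariance estimate in the spirit of Follmann, Proschan and Leifer (2003). Here `test_stat` is any complete-data test, such as the log-rank statistic sketched after entry 12:

```python
import numpy as np

def multiple_imputation_test(left, right, group, test_stat, m=10, rng=None):
    """Impute exact failure times from (L, R] censoring intervals m times,
    apply a complete-data two-sample test to each imputed data set, and
    average the resulting statistics. Right-censored cases (R = inf) are
    kept at their left endpoint with event indicator 0."""
    rng = np.random.default_rng() if rng is None else rng
    left, right = np.asarray(left, float), np.asarray(right, float)
    finite = np.isfinite(right)
    stats = []
    for _ in range(m):
        draw = rng.uniform(left, np.where(finite, right, left))  # uniform within the interval
        t = np.where(finite, draw, left)
        stats.append(test_stat(t, finite.astype(int), np.asarray(group)))
    return float(np.mean(stats))

# Usage (hypothetical): multiple_imputation_test(L, R, group, logrank_statistic, m=20)
```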
15

Monetary Valuation of Waterfront Open Space in Coastal Areas of Mississippi and Alabama

Dahal, Ram Prasad 08 December 2017 (has links)
Open space provides a wide range of ecosystem services to communities. In growing communities, open space offers relief from congestion and other negative externalities associated with rapid development. To make effective policy and planning decisions pertaining to open space preservation, it is important to estimate the monetary value of its benefits. In addition, assessing public opinion regarding open space provides information on demand and on how residents value open space. This study estimated the monetary value of open space in Mississippi and Alabama Gulf Coast communities. The study also collected information on coastal residents’ attitudes towards open space and working waterfronts, and their willingness to support waterfront open space preservation monetarily. Two methodological approaches were employed to estimate the monetary value of waterfront open space: the contingent valuation (CVM) and hedonic price (HPM) methods. Data were collected using a mail survey, the Multiple Listing Service (MLS), and publicly available data sources such as the U.S. Census. Data were analyzed using interval regression, ordinary least squares, and geographically weighted regression (GWR) models. Mail survey results indicated that the majority of residents valued open space and were willing to pay from $80.52 to $162.14 per household, as estimated by four different interval-censored econometric models. Respondents’ membership in groups promoting conservation goals, income, age, and residence duration were major factors associated with their willingness to pay. Results from the HPM indicated that proximity to waterfronts, with the exception of bayous, was positively related to home prices, suggesting open space produced positive economic benefits. Findings from the HPM analysis using publicly available data were consistent and comparable with the results from the HPM that used MLS data. This similarity of results indicates that the use of publicly available data is feasible in HPM analysis, which is important for broad applications of the method in city planning. In addition, GWR estimates provided site-specific monetary values of waterfront open space benefits, which will be helpful for policymakers and city planners in developing site-specific conservation and preservation strategies. Findings can help inform future decisions related to alternative development scenarios of coastal areas and conservation efforts to preserve open space.
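A minimal sketch of the kind of interval regression used for the willingness-to-pay estimates, assuming a Gaussian interval-censored likelihood in which each respondent's WTP is only known to lie between a lower and an upper bracket; the covariate design and bracket values below are hypothetical:

```python
import numpy as np
from scipy import optimize, stats

def interval_regression_nll(params, X, lower, upper):
    """Negative log-likelihood of Gaussian interval regression: latent
    WTP_i ~ N(X_i @ beta, sigma^2) is only observed to satisfy
    lower_i < WTP_i <= upper_i (upper_i may be infinite)."""
    k = X.shape[1]
    beta, sigma = params[:k], np.exp(params[k])
    mu = X @ beta
    p = stats.norm.cdf(upper, mu, sigma) - stats.norm.cdf(lower, mu, sigma)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

# Hypothetical payment-card responses: an intercept plus one covariate,
# and the WTP bracket each respondent selected.
X = np.column_stack([np.ones(4), [1.0, 0.0, 1.0, 0.0]])
lower = np.array([0.0, 50.0, 100.0, 25.0])
upper = np.array([50.0, 100.0, np.inf, 50.0])
fit = optimize.minimize(interval_regression_nll,
                        x0=np.array([50.0, 0.0, np.log(50.0)]),
                        args=(X, lower, upper))
beta_hat = fit.x[:X.shape[1]]   # mean WTP at covariates x is x @ beta_hat
```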
16

PARAMETRIC ESTIMATION IN COMPETING RISKS AND MULTI-STATE MODELS

Lin, Yushun 01 January 2011 (has links)
Typical research on Alzheimer's disease involves a series of cognitive states. Multi-state models are often used to describe the history of disease progression. Competing risks models are a sub-category of multi-state models with one starting state and several absorbing states. Analyses of competing risks data in medical papers frequently assume independent risks and evaluate covariate effects on these events by fitting a separate proportional hazards regression model for each event. Jeong and Fine (2007) proposed a parametric proportional sub-distribution hazards (SH) model for cumulative incidence functions (CIF) without assumptions about the dependence among the risks. We modified their model to ensure that the sum of the underlying CIFs never exceeds one, by assuming a proportional SH model for dementia only in the Nun Study. To accommodate left-censored data, we computed the nonparametric MLE of the CIF with an Expectation-Maximization algorithm. Our proposed parametric model was applied to the Nun Study to investigate the effects of genetics and education on the occurrence of dementia. After including left-censored dementia subjects, the incidence rate of dementia becomes larger than that of death for age < 90, education becomes a significant factor for the incidence of dementia, and the standard errors of the estimates are smaller. The multi-state Markov model is often used to analyze the evolution of cognitive states by assuming time-independent transition intensities. We consider both constant and duration-dependent transition intensities in the BRAiNS data, leading to a mixture of Markov and semi-Markov processes. The joint probability of observing a sequence of the same state until transition in a semi-Markov process is expressed as the product of the overall transition probability and a survival probability, which are modeled simultaneously. This modeling leads to different interpretations in the BRAiNS study: family history, APOE4, and the sex-by-head-injury interaction are significant factors for the transition intensities in the traditional Markov model, whereas in our semi-Markov model these factors are significant in predicting the overall transition probabilities, but none of them are significant for the duration time distribution.
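As background for the CIF modeling, here is a minimal sketch of the standard nonparametric cumulative incidence estimator for right-censored competing risks data; the thesis additionally handles left-censored dementia onsets via an EM-based NPMLE, which is not shown here, and the toy data are hypothetical:

```python
import numpy as np

def cumulative_incidence(time, event):
    """Nonparametric CIF for cause 1 under competing risks, with
    event = 0 (censored), 1 (cause of interest), 2 (competing cause):
    CIF_1(t) = sum over event times s <= t of S(s-) * d1(s) / n(s)."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    grid = np.unique(time[event > 0])
    surv, cif, values = 1.0, 0.0, []
    for t in grid:
        n_at_risk = np.sum(time >= t)
        d_any = np.sum((time == t) & (event > 0))
        d1 = np.sum((time == t) & (event == 1))
        cif += surv * d1 / n_at_risk          # mass added to cause 1 at time t
        surv *= 1.0 - d_any / n_at_risk       # overall survival just after t
        values.append(cif)
    return grid, np.array(values)

# Hypothetical data: dementia (1), death without dementia (2), censored (0).
grid, cif1 = cumulative_incidence(np.array([2., 3., 3., 5., 6., 7.]),
                                  np.array([1, 2, 1, 0, 1, 2]))
```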
17

Inferences for the Weibull parameters based on interval-censored data and its application

Huang, Jinn-Long 19 June 2000 (has links)
In this article, we make inferences about the Weibull parameters and propose two test statistics for comparing two Weibull distributions based on interval-censored data. However, the distributions of the two statistics are unknown and not easy to obtain, so a simulation study is necessary. An urn model for simulating interval-censored data was proposed by Lee (1999) to select random intervals. We therefore propose a simulation procedure with the urn model to obtain approximate quantiles of the two statistics. We present an example to illustrate how the tests can be applied to infection time distributions in an AIDS study.
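Likelihood-based inference for the Weibull parameters from interval-censored data can be sketched as follows, with each interval $(L_i, R_i]$ contributing $S(L_i) - S(R_i)$ to the likelihood. This is a minimal illustration with hypothetical intervals, not the thesis's urn-model simulation procedure:

```python
import numpy as np
from scipy import optimize

def weibull_interval_nll(params, left, right):
    """Negative log-likelihood for Weibull(shape k, scale lam) data observed
    only through intervals (L_i, R_i]: each observation contributes
    log{ S(L_i) - S(R_i) }, with S(t) = exp(-(t / lam)^k); S(inf) = 0, so
    right-censored cases are covered by setting R_i = inf."""
    k, lam = np.exp(params)                     # keep both parameters positive
    S = lambda t: np.exp(-np.power(t / lam, k))
    p = S(left) - S(right)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

# Hypothetical inspection intervals; right = inf marks a right-censored case.
left = np.array([1.0, 2.0, 0.5, 3.0, 1.5])
right = np.array([2.0, 4.0, 1.5, np.inf, 3.0])
fit = optimize.minimize(weibull_interval_nll, x0=np.zeros(2), args=(left, right))
shape_hat, scale_hat = np.exp(fit.x)
```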
18

Modelos de dose-resposta com censura intervalar aplicados a dados de germinação de sementes / Dose-response models with interval censoring applied to seed germination data

Azevedo, Iábita Fabiana Sousa 26 August 2016 (has links)
The growth of investment in agricultural biotechnology has been a vital element of global food security. It has led to a reorganization of the world seed industry in the search for more appropriate techniques of cultivation, mechanization, fertilizer and pesticide use, and seed utilization. Brazil, one of the most solid environments in this context, has contributed to the growth and diversification of seed production, bringing Brazilian crops to a new level of productivity. Different statistical methodologies have been used to analyze the germination behavior of a seed population; statistical approaches that analyze germination data as well as possible allow greater reliability of the results and a gain of relevant information. Because a seed germination test studies the time until the event occurs, involves repeated measurements on the same seed lot, and never observes the exact germination time, we propose the use of dose-response models with interval censoring, which allow a biological interpretation of the parameters used to measure the germination process and reflect the experimental design of the data. In this work, two statistical methodologies commonly used to analyze seed germination data were applied, and their results were compared with those of the interval-censored dose-response approach. The Weibull 2 and log-logistic dose-response models were used to describe the germination of Brachiaria and Citrumelo Swingle seeds, respectively, with different observation times. The experiments were carried out in a completely randomized design, and the germination tests followed the Rules for Seed Analysis (RAS). The conclusions obtained from the proposed methodology generally diverged from those obtained with the traditional approaches (nonlinear regression models assuming a normal distribution, and germination indices analyzed with analysis of variance) used to analyze germination data. The dose-response models with interval censoring showed satisfactory fits and therefore provide a more appropriate analysis than the usual approaches.
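A minimal sketch of an interval-censored dose-response (germination-time) fit of the kind described, assuming a two-parameter log-logistic curve and a multinomial likelihood over the inspection intervals; the inspection days and counts below are hypothetical:

```python
import numpy as np
from scipy import optimize

def loglogistic_cdf(t, b, e):
    """Two-parameter log-logistic germination curve F(t) = 1 / (1 + (t/e)^(-b)):
    e is the time to 50% germination, b controls the slope."""
    return 1.0 / (1.0 + np.power(t / e, -b))

def germination_nll(params, inspect_times, counts, n_seeds):
    """Interval-censored (multinomial) likelihood for a germination test:
    counts[j] seeds are first seen germinated at inspection j, so their
    germination time lies in (t_{j-1}, t_j]; seeds never seen germinated
    are right-censored at the last inspection."""
    b, e = np.exp(params)                              # keep parameters positive
    F = loglogistic_cdf(np.asarray(inspect_times, float), b, e)
    interval_p = np.diff(np.concatenate([[0.0], F]))   # P(germinate in each interval)
    tail_p = 1.0 - F[-1]                               # P(still not germinated at the end)
    ll = np.sum(counts * np.log(np.clip(interval_p, 1e-300, None)))
    ll += (n_seeds - np.sum(counts)) * np.log(np.clip(tail_p, 1e-300, None))
    return -ll

# Hypothetical counts of newly germinated seeds at daily inspections of 50 seeds.
days = np.array([1.0, 2.0, 3.0, 5.0, 7.0])
newly_germinated = np.array([2, 10, 18, 9, 4])
fit = optimize.minimize(germination_nll, x0=np.log([2.0, 3.0]),
                        args=(days, newly_germinated, 50))
slope_hat, t50_hat = np.exp(fit.x)
```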
19

Regression models with an interval-censored covariate

Langohr, Klaus 16 June 2004 (has links)
Survival analysis deals with the statistical evaluation of variables that measure the elapsed time until an event of interest. One particularity survival analysis has to account for is censored data, which arise whenever the time of interest cannot be measured exactly but partial information is available. Four types of censoring are distinguished: right-censoring occurs when the unobserved survival time is larger than an observed time, left-censoring when it is smaller; in the case of interval-censoring, the survival time is only known to lie within an observed time interval; and we speak of doubly censored data if the time origin is censored as well.
In Chapter 1 of the thesis, we first give a survey of statistical methods for interval-censored data, including both parametric and nonparametric approaches. In the second part of Chapter 1, we address the important issue of noninformative censoring, which is assumed by all the methods presented. Given the importance of optimization procedures in the later chapters, the final section of Chapter 1 covers optimization theory, including several optimization algorithms and the presentation of optimization tools that have played an important role in the elaboration of this work. We have used the mathematical programming language AMPL to solve the maximization problems that arose; one of its main features is that optimization problems written in AMPL code can be sent to the internet facility 'NEOS: Server for Optimization' and solved by its available solvers.
In Chapter 2, we present the three data sets analyzed for this dissertation. Two come from studies on HIV/AIDS: one on the survival of tuberculosis patients co-infected with HIV in Barcelona, the other on injecting drug users from Badalona and surroundings, most of whom became infected with HIV as a result of their drug addiction and who were admitted to the detoxification unit of Hospital Trias i Pujol. The complex censoring patterns in the variables of interest of the latter study motivated the development of estimation procedures for regression models with interval-censored covariates. The third data set comes from a study on the shelf life of yogurt; we present a new approach to estimating the shelf lives of food products that takes advantage of the existing methodology for interval-censored data.
Chapter 3 deals with the theoretical background of an accelerated failure time model with an interval-censored covariate, with emphasis on the development of the likelihood functions and a parameter estimation procedure based on optimization techniques and tools. Their use in statistics can be an attractive alternative to established methods such as the EM algorithm. In Chapter 4 we present further regression models, such as linear and logistic regression with the same type of covariate, whose parameters are estimated with the same techniques as in Chapter 3. Other possible estimation procedures are described in Chapter 5. These are mainly imputation methods, which consist of two steps: first, the observed intervals of the covariate are replaced by an imputed value, for example the interval midpoint; then, standard procedures are applied to estimate the parameters.
The application of the proposed estimation procedure for the accelerated failure time model with an interval-censored covariate to the data set on injecting drug users is addressed in Chapter 6. Different distributions and covariates are considered, and the corresponding results are presented and discussed. To compare the simultaneous-maximization procedure with the imputation-based methods of Chapter 5, a simulation study is carried out, whose design and results are the content of Chapter 7. Finally, in the closing Chapter 8, the main results are summarized and several aspects that remain unsolved or might be approached differently are addressed.
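The imputation route described in Chapter 5 of this thesis lends itself to a compact illustration: impute the interval-censored covariate (here, by its midpoint) and then fit a standard Weibull accelerated failure time model to the right-censored outcome. This is a hedged sketch of that two-step idea with hypothetical data, not the thesis's simultaneous-maximization procedure in AMPL:

```python
import numpy as np
from scipy import optimize

def impute_midpoint(cov_left, cov_right):
    """Step 1: replace the interval-censored covariate by its interval midpoint."""
    return (np.asarray(cov_left, float) + np.asarray(cov_right, float)) / 2.0

def weibull_aft_nll(params, time, event, x):
    """Step 2: standard Weibull accelerated failure time likelihood for a
    right-censored outcome, log T = b0 + b1 * x + sigma * W with W extreme-value:
    density contribution exp(z - e^z) / (sigma * t), survival exp(-e^z)."""
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    z = (np.log(time) - b0 - b1 * x) / sigma
    log_f = z - np.exp(z) - np.log(sigma) - np.log(time)
    log_S = -np.exp(z)
    return -np.sum(np.where(event == 1, log_f, log_S))

# Hypothetical data: covariate known only up to an interval, outcome right-censored.
x = impute_midpoint([18, 22, 25, 19, 30, 27], [22, 26, 31, 23, 34, 33])
time = np.array([5.0, 2.5, 7.0, 4.0, 1.5, 6.0])
event = np.array([1, 0, 1, 1, 1, 0])
fit = optimize.minimize(weibull_aft_nll, x0=np.array([1.0, 0.0, 0.0]),
                        args=(time, event, x))
b0_hat, b1_hat, sigma_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
```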
20

Essays in direct marketing: understanding response behavior and implementation of targeting strategies

Sinha, Shameek 06 July 2011 (has links)
In direct marketing, understanding the response behavior of consumers to marketing initiatives is a prerequisite for marketers before implementing targeting strategies to reach potential as well as existing consumers in the future. Consumer response can be the incidence or timing of purchases, the category/brand choice of purchases made, or the volume or purchase amount in each category. Direct marketers seek to explore how past consumer response behavior as well as their own targeting actions affect current response patterns. However, considerable heterogeneity is also prevalent in consumer responses, and the possible sources of this heterogeneity need to be investigated. With knowledge of consumer response and the corresponding heterogeneity, direct marketers can devise targeting strategies to attract potential new consumers as well as retain existing consumers. In the first essay of my dissertation (Chapter 2), I model the response behavior of donors in non-profit charity fund-raising in terms of the timing and volume of their donations. I show that past donations (both incidence and volume) and solicitation for alternative causes by non-profits matter in donor responses, and that the heterogeneity in donation behavior can be explained in terms of individual- and community-level donor characteristics. I also provide a heuristic approach to targeting new donors by using a classification scheme for donors in terms of the frequency and amount of donations and then characterizing each donor portfolio with the corresponding donor characteristics. In the second essay (Chapter 3), I propose a more structural approach to the targeting of customers by direct marketers in the context of customized retail couponing. First, I model customer purchases in a retail setting where brand choice decisions in a product category depend on pricing, in-store promotions, coupon targeting, and the face values of those coupons. Then, using a utility function specification for the retailer that implements a trade-off between net revenue (revenue minus coupon face value) and information gain, I propose a Bayesian decision-theoretic approach to determine optimal customized coupon face values. The optimization algorithm is sequential, where past as well as future customer responses affect targeted coupon face values, and the direct marketer tries to determine the trade-off through natural experimentation.
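A stylized sketch of the kind of trade-off described in the second essay: for each candidate face value, score the expected net revenue under posterior draws of a response model plus a weighted information term (here, predictive entropy as a simple stand-in for expected information gain). Everything below, including the logit response form, the weight `lam`, and the toy posterior draws, is a hypothetical illustration rather than the dissertation's model:

```python
import numpy as np

def expected_utility(face_value, price, beta_draws, x, lam=0.1):
    """Retailer's score for one candidate coupon face value: posterior-predictive
    redemption probability times net revenue (price - face value), plus a weight
    lam on the predictive entropy of the response (a simple proxy for how much
    the offer is expected to reveal about the customer)."""
    util = beta_draws @ np.append(x, face_value)      # draws of the response index
    p = np.mean(1.0 / (1.0 + np.exp(-util)))          # posterior-predictive redemption prob.
    entropy = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
    return p * (price - face_value) + lam * entropy

# Pick the face value with the highest score over a small grid (toy posterior draws).
rng = np.random.default_rng(0)
beta_draws = rng.normal(loc=[0.5, -1.0, 2.0], scale=0.3, size=(500, 3))
best_face_value = max([0.0, 0.5, 1.0, 1.5],
                      key=lambda v: expected_utility(v, price=4.0,
                                                     beta_draws=beta_draws,
                                                     x=np.array([1.0, 1.0]),
                                                     lam=0.1))
```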
